• ๋Œ€ํ•œ์ „๊ธฐํ•™ํšŒ
Mobile QR Code QR CODE : The Transactions of the Korean Institute of Electrical Engineers
  • COPE
  • kcse
  • ํ•œ๊ตญ๊ณผํ•™๊ธฐ์ˆ ๋‹จ์ฒด์ด์—ฐํ•ฉํšŒ
  • ํ•œ๊ตญํ•™์ˆ ์ง€์ธ์šฉ์ƒ‰์ธ
  • Scopus
  • crossref
  • orcid

  1. (Dept. of Future Convergence Technology, Soonchunhyang University, Korea.)



Keywords: Deep learning, Distance estimation, Occupancy grid map, Driving environment, Autonomous vehicle

1. ์„œ ๋ก 

์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ(Autonomous vehicle)๋ž€ ์ฃผํ–‰ํ•˜๋Š” ํ™˜๊ฒฝ์„ ์ธ์‹ํ•˜์—ฌ ์šด์ „์„ ๋ณด์กฐํ•˜๊ฑฐ๋‚˜ ์Šค์Šค๋กœ ์ฃผํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์‹œ์Šคํ…œ[1]์œผ๋กœ ๊ตฌ์กฐ๋ฅผ ํฌ๊ฒŒ ์ธ์ง€, ํŒ๋‹จ, ์ œ์–ด์˜ ์„ธ ๊ฐ€์ง€ ๋ฒ”์ฃผ๋กœ ๋ถ„๋ฅ˜ํ•  ์ˆ˜ ์žˆ๋‹ค[2]. ํŠนํžˆ, ์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ์˜ ์ •ํ™•ํ•œ ์ฃผํ–‰ ํ™˜๊ฒฝ ๋ฐ ์œ„์น˜ ์ธ์‹์„ ์œ„ํ•ด Camera, LiDAR, IMU, GPS ๋“ฑ์˜ ์„ผ์„œ ์œตํ•ฉ ๊ธฐ์ˆ ์— ๊ด€ํ•œ ๋งŽ์€ ์—ฐ๊ตฌ๊ฐ€ ์ง„ํ–‰๋˜์—ˆ๋‹ค[3]. ์„ผ์„œ ์œตํ•ฉ ๊ธฐ์ˆ ์€ ๊ฐ ์„ผ์„œ์˜ ๋‹จ์ ์„ ์ƒํ˜ธ ๋ณด์™„ํ•˜์—ฌ ๋‹จ์ผ ์„ผ์„œ๋ฅผ ์‚ฌ์šฉํ•˜์˜€์„ ๋•Œ๋ณด๋‹ค ์ข‹์€ ์„ฑ๋Šฅ์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ์ง€๋งŒ, ์„ผ์„œ ๋™๊ธฐ ๋ฐ ์ƒ๋Œ€์  ์œ„์น˜์— ๋Œ€ํ•œ ๋ณ€ํ™”์— ๋ฏผ๊ฐํ•˜์—ฌ ์˜ค์ฐจ ๋ณด์ •์„ ์ˆ˜์ฐจ๋ก€ ์ง„ํ–‰ํ•ด์•ผ ํ•œ๋‹ค[4]. ๋˜ํ•œ, ์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ์˜ ์ƒ์šฉํ™”์— ์žˆ์–ด ๊ณ ๊ฐ€์˜ ์„ผ์„œ๋ฅผ ๋‹ค์ˆ˜ ์‚ฌ์šฉํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ค์›€์ด ์žˆ๋‹ค[5].

์ด์—, ์ตœ๊ทผ ๋ช‡ ๋…„๊ฐ„ ์„ผ์„œ์˜ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒํ•  ์ˆ˜ ์žˆ๋„๋ก ๋”ฅ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ์„ผ์„œ ๋ฐ์ดํ„ฐ ์ฒ˜๋ฆฌ์— ๊ด€ํ•œ ์—ฐ๊ตฌ[6,7]๊ฐ€ ํ™œ๋ฐœํžˆ ์ง„ํ–‰๋˜๊ณ  ์žˆ๋‹ค. ๋”ฅ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ์˜์ƒ ์ธ์‹ ๊ธฐ์ˆ ์€ ์‚ฌ๋žŒ์˜ ์ธ์‹๋ฅ ๋ณด๋‹ค ๋” ๋†’์€ ์„ฑ๋Šฅ์„ ๋ณด์ด๋ฉฐ, ์ž์œจ ์ฃผํ–‰ ์ž๋™์ฐจ์˜ ์ธ์ง€ ๊ธฐ์ˆ ์—์„œ ๋”ฅ ๋Ÿฌ๋‹์€ ๋”์šฑ ์ค‘์š”๋„๊ฐ€ ์ฆ๊ฐ€ํ•˜๊ณ  ์žˆ๋‹ค[8].

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” Mono vision์„ ์‚ฌ์šฉํ•˜์—ฌ ๋”ฅ ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ์ฃผํ–‰ ํ™˜๊ฒฝ(๊ฐ์ฒด, ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ) ์ธ์ง€๋ฅผ ์ˆ˜ํ–‰ํ•˜๊ณ  ๋”ฅ ๋Ÿฌ๋‹์˜ ๊ฒฐ๊ณผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์‹œ์Šคํ…œ์„ ์ œ์•ˆํ•œ๋‹ค. ๋”ฅ ๋Ÿฌ๋‹ ์•Œ๊ณ ๋ฆฌ์ฆ˜์œผ๋กœ ๊ฐ์ฒด ๊ฒ€์ถœ์€ YOLO(You Only Look Once)v3[9]๋ฅผ, ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์€ Fully Convolutional Networks(FCN)[10]๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ๊ฑฐ๋ฆฌ ์ถ”์ •์€ YOLOv3์™€ FCN ๊ฐ๊ฐ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๋’ค์— Fully Connected Layer๋ฅผ ์ถ”๊ฐ€ํ•œ๋‹ค. ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ์€ ๋”ฅ ๋Ÿฌ๋‹ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ฒฐ๊ณผ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. ์ด๋•Œ, ๊ฐ์ฒด์— ํ•ด๋‹นํ•˜๋Š” ์ •๋ณด์™€ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์— ํ•ด๋‹นํ•˜๋Š” ์ •๋ณด๋ฅผ ์œตํ•ฉํ•˜์—ฌ ์„ฑ๋Šฅ์„ ๋†’์ธ๋‹ค.

๋ณธ ๋…ผ๋ฌธ์˜ ๊ตฌ์„ฑ์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค. 2์žฅ์—์„œ๋Š” ์ œ์•ˆ๋œ ์‹œ์Šคํ…œ์˜ ๊ตฌ์„ฑ ๋ฐ ๊ตฌํ˜„์— ๋Œ€ํ•œ ์„ค๋ช…, 3์žฅ์—์„œ๋Š” ์ œ์•ˆ๋œ ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค์„ ์‹คํ—˜์— ์ ์šฉํ•˜์—ฌ ์–ป์–ด์ง„ ๊ฒฐ๊ณผ๋ฅผ ์ •๋ฆฌํ•œ๋‹ค. 4์žฅ์—์„œ๋Š” ๊ฒฐ๋ก  ๋ฐ ํ–ฅํ›„ ์—ฐ๊ตฌ ๊ณผ์ œ์— ๋Œ€ํ•ด ๋…ผ์˜ํ•œ๋‹ค.

2. ์‹œ์Šคํ…œ ๊ตฌ์„ฑ

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” Mono vision์˜ image ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๊ฐ์ฒด์™€ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์„ ๊ฒ€์ถœํ•จ๊ณผ ๋™์‹œ์— ๊ฑฐ๋ฆฌ ์ถ”์ •์„ ํ•œ๋‹ค. ๊ฐ๊ฐ์˜ ๊ฒ€์ถœ ๊ฒฐ๊ณผ์™€ ๊ฑฐ๋ฆฟ๊ฐ’์€ ์ง€๋„ ์ƒ์„ฑ๊ธฐ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๋ฉฐ, ์ตœ์ข…์ ์œผ๋กœ ์ ์œ  ๊ฒฉ์ž ์ง€๋„(Occupancy grid map)๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ์ด๋•Œ, ์ด์ „ ์‹œ์ ์˜ ์ง€๋„๋ฅผ ์ฐจ๋Ÿ‰ ์ด๋™์— ๋”ฐ๋ผ ๋ณ€ํ˜•(Transformation)ํ•˜๊ธฐ ์œ„ํ•ด ์ฐจ๋Ÿ‰์˜ ์ด๋™ ์ •๋ณด(speed, heading) ๋˜ํ•œ ์ง€๋„ ์ƒ์„ฑ๊ธฐ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•˜๋Š” ์‹œ์Šคํ…œ์˜ ์ „์ฒด์ ์ธ ํ๋ฆ„์€ ๊ทธ๋ฆผ 1๊ณผ ๊ฐ™๋‹ค.

๊ทธ๋ฆผ. 1. ์ œ์•ˆ๋œ ์‹œ์Šคํ…œ์˜ ํ๋ฆ„

Fig. 1. Flow of proposed systems.

../../Resources/kiee/KIEE.2020.69.2.356/fig1.png

2.1 ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ

๋‹จ์•ˆ ์˜์ƒ์นด๋ฉ”๋ผ ์„ผ์„œ ๊ธฐ๋ฐ˜์˜ ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ •์„ ์œ„ํ•ด ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” YOLO(You Only Look Once)v3๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. YOLOv3๋Š” ๋‹ค๋ฅธ Object Detection ๋„คํŠธ์›Œํฌ์™€๋Š” ๋‹ค๋ฅด๊ฒŒ Region proposal ๋‹จ๊ณ„ ์—†์ด, ๋„คํŠธ์›Œํฌ ๋‚ด์—์„œ ํ•™์Šต์„ ํ†ตํ•ด ๊ฐ์ฒด์™€ ๊ฐ์ฒด์˜ ์œ„์น˜๋ฅผ ๊ฒ€์ถœํ•œ๋‹ค. ์ด๋Š” Region proposal์„ ํ™œ์šฉํ•˜๋Š” ๋‹ค๋ฅธ ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ๋น„ํ•ด ๋น ๋ฅธ ์‹คํ–‰์†๋„์™€ ์ •ํ™•ํ•œ ๊ฐ์ฒด ๊ฒ€์ถœ์ด ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ์žฅ์ ์ด ์žˆ๋‹ค.

๊ทธ๋ฆผ 2๋Š” ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•˜๋Š” ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ์˜ ๊ตฌ์กฐ์ด๋‹ค. ๋จผ์ € mono vision์˜ ์˜์ƒ ํ”„๋ ˆ์ž„์„ ์ž…๋ ฅ์œผ๋กœ ํ•˜๋Š” YOLOv3๋ฅผ ํ†ตํ•ด ์˜์ƒ ๋‚ด ๊ฐ์ฒด๋ฅผ ๊ฒ€์ถœํ•˜๊ฒŒ ๋œ๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ ๊ฒฐ๊ณผ๋Š” ๊ฐ์ฒด ์ข…๋ฅ˜, Bounding box ์ขŒํ‘œ์™€ ํฌ๊ธฐ์ธ x, y, w, h์˜ ์ด 5๊ฐœ์˜ ๊ฐ’์œผ๋กœ ์ด๋ฃจ์–ด์ ธ ์žˆ๋‹ค. ์ดํ›„ ํ•ด๋‹น ์ •๋ณด๋Š” Fully Connected Layer(FC Layer)์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๋ฉฐ ๊ฑฐ๋ฆฌ๋ฅผ regression ํ•œ๋‹ค[11]. ์ด๋•Œ ์ถ”์ •๋œ ๊ฑฐ๋ฆฌ ๊ฒฐ๊ณผ๋Š” ํ•ด๋‹น ๊ฐ์ฒด์˜ ํ”ฝ์…€ ์ขŒํ‘œ์— ๋งค์นญ๋˜๋Š” LiDAR ์„ผ์„œ์˜ ์ขŒํ‘œ x, y, z์— ๋Œ€ํ•ด ์˜ˆ์ธก์„ ์ˆ˜ํ–‰ํ•œ๋‹ค.

๊ทธ๋ฆผ. 2. ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ ๊ตฌ์กฐ

Fig. 2. Object detection and distance estimation network structure.

../../Resources/kiee/KIEE.2020.69.2.356/fig2.png

2.2 ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ

์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์€ ์ฐจ๋Ÿ‰์˜ ์ฃผํ–‰ ์ƒํ™ฉ ์‹œ ๋‹ค๋ฅธ ์ฐจ๋Ÿ‰ ๋ฐ ์ฃผํ–‰ ํ™˜๊ฒฝ์ƒ ๊ฐ์ฒด์— ํ”ผํ•ด๋ฅผ ์ฃผ์ง€ ์•Š์œผ๋ฉฐ ์ฃผํ–‰ํ•  ์ˆ˜ ์žˆ๋Š” ์˜์—ญ์ด๋‹ค[12]. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋‹ค์–‘ํ•œ ํ™˜๊ฒฝ์—์„œ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์ด ๊ฒ€์ถœํ•  ์ˆ˜ ์žˆ๋„๋ก Semantic Segmentation ๋„คํŠธ์›Œํฌ์ธ Fully Convolutional Networks(FCN)์„ ์‚ฌ์šฉํ•œ๋‹ค. FCN์€ ๋”ฅ ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ ์ค‘ ๊ฐ์ฒด์˜ ์œค๊ณฝ์„ ์„ ๊ฒฝ๊ณ„๋กœ ์žก์•„ ๊ฐ์ฒด๋ฅผ ํ”ฝ์…€ ๋‹จ์œ„๋กœ ๊ฒ€์ถœํ•˜๋Š” ๊ฒƒ์œผ๋กœ ํ•œ ํ”„๋ ˆ์ž„์˜ ์ „์ฒด ํ”ฝ์…€์—์„œ ๋„๋กœ์™€ ๊ฐ™์€ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์— ํ•ด๋‹นํ•˜๋Š” ํ”ฝ์…€๋งŒ ๊ฒ€์ถœํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์‚ฌ์šฉํ•œ FCN์€ Mono vision์˜ image ๋ฐ์ดํ„ฐ๋ฅผ ์ž…๋ ฅ์œผ๋กœ ํ•˜๊ณ  ๊ฒฐ๊ณผ๋กœ ๊ฐ์ฒด๋ฅผ ๊ตฌ๋ถ„ํ•˜๋Š” class ์ •๋ณด์™€ image ์ƒ์˜ ํ”ฝ์…€ ์ขŒํ‘œ(x, y)๋ฅผ ๊ฐ€์ง„๋‹ค.

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ฐจ๋Ÿ‰ ์ „๋ฐฉ์˜ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์— ํฌํ•จ๋˜๋Š” ํ”ฝ์…€์„ ๊ฒ€์ถœํ•˜๋Š” ๊ฒƒ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ํ•ด๋‹น ํ”ฝ์…€์˜ ์‹ค์ œ ๊ฑฐ๋ฆฌ๋ฅผ ์ถ”์ •ํ•œ๋‹ค[13]. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์˜ ๊ด€์ ์—์„œ ๊ฒ€์ถœ๋˜์–ด์•ผ ํ•˜๋Š” Class๋ฅผ ์œ„์ฃผ๋กœ FCN์˜ ๋„คํŠธ์›Œํฌ ๊ตฌ์กฐ๋ฅผ ์ˆ˜์ •ํ•˜๊ณ  FCN ๋„คํŠธ์›Œํฌ์˜ ๋’ค์— ๊ฑฐ๋ฆฌ ์ •๋ณด๋ฅผ ์ถ”์ •ํ•˜๋Š” Fully Connected Layer(FC Layer)๋ฅผ ์ถ”๊ฐ€ํ•œ๋‹ค. FC Layer๋Š” FCN์˜ ๊ฒฐ๊ณผ์ธ class ์ •๋ณด์™€ ํ”ฝ์…€ ์ขŒํ‘œ(x, y)๋ฅผ ์ž…๋ ฅ์œผ๋กœ ํ•˜๊ณ  ๊ฒฐ๊ณผ๋กœ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์˜ ์ด๋ฏธ์ง€ ์ขŒํ‘œ์— ํ•ด๋‹นํ•˜๋Š” ๊ฑฐ๋ฆฌ ์ •๋ณด๋ฅผ ๊ฐ€์ง„๋‹ค. ์ด๋•Œ ์ถ”์ •๋œ ๊ฑฐ๋ฆฌ ์ •๋ณด๋Š” ๊ฒ€์ถœ๋œ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ํ”ฝ์…€์— ๋งค์นญ๋˜๋Š” LiDAR ์ขŒํ‘œ x, y๋ฅผ ์˜๋ฏธํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์— ์‚ฌ์šฉํ•˜๋Š” ์ „์ฒด์ ์ธ ๋„คํŠธ์›Œํฌ ๊ตฌ์กฐ๋Š” ๊ทธ๋ฆผ 3๊ณผ ๊ฐ™๋‹ค.

๊ทธ๋ฆผ. 3. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ ๊ตฌ์กฐ

Fig. 3. Drivable area detection and distance estimation network structure.

../../Resources/kiee/KIEE.2020.69.2.356/fig3.png

2.3 ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ๊ฐ์ฒด ๊ฒ€์ถœ๊ณผ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๊ฒฐ๊ณผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ๊ฐ๊ฐ์˜ ๊ฒ€์ถœ ๊ฒฐ๊ณผ๋กœ ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ทธ๋ฆฌ๋“œ ์…€ ํ‘œํ˜„์œผ๋กœ ๋ณ€ํ™˜ํ•œ๋‹ค. ์ด์ „ ์‹œ์ ์˜ ์ง€๋„๋Š” ์ฐจ๋Ÿ‰์˜ ์ด๋™ ๋ฐ˜๋Œ€ ๋ฐฉํ–ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•˜๋ฉฐ, ํ˜„์žฌ ์‹œ์ ์˜ ๊ฒ€์ถœ ๊ฒฐ๊ณผ ์ง€๋„์™€ ๋ณ€ํ™˜๋œ ์ด์ „ ์‹œ์  ์ง€๋„๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ์ ์œ  ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ์ „์ฒด์ ์ธ ํ๋ฆ„์€ ๊ทธ๋ฆผ 4์™€ ๊ฐ™๋‹ค.

๊ทธ๋ฆผ. 4. ์ ์œ  ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ์•Œ๊ณ ๋ฆฌ์ฆ˜ ๊ตฌ์กฐ

Fig. 4. Occupancy grid map generation algorithm structure.

../../Resources/kiee/KIEE.2020.69.2.356/fig4.png

2.3.1 ๊ฒ€์ถœ ๊ฒฐ๊ณผ ๋ณ€ํ™˜

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ฃผํ–‰ ํ™˜๊ฒฝ์— ๋Œ€ํ•ด ๊ฒ€์ถœํ•œ ๊ฒฐ๊ณผ๋ฅผ ๊ฒฉ์ž ์ง€๋„๋กœ ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด YOLOv3์™€ FCN์„ ์‚ฌ์šฉํ•œ๋‹ค. ๊ฐ๊ฐ์˜ ๋„คํŠธ์›Œํฌ ๊ฒฐ๊ณผ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ๊ฒ€์ถœ ๊ฒฐ๊ณผ๋ฅผ ๊ทธ๋ฆฌ๋“œ(grid) ์œ„์˜ ๊ฐ ์…€(cell)์— ํ‘œํ˜„ํ•ด์•ผ ํ•œ๋‹ค. ์ฆ‰, ์‚ฌ์ „์— ์ง€์ •ํ•ด๋‘” ์ผ์ • ๊ณต๊ฐ„์— ๋Œ€ํ•ด ๊ทธ๋ฆฌ๋“œ ์…€์„ ๋ถ€์—ฌํ•˜์—ฌ ์…€๋ณ„๋กœ ์žฅ์• ๋ฌผ์ด ์กด์žฌํ•˜๋Š”์ง€์— ๋Œ€ํ•œ ํ™•๋ฅ ์„ ๊ณ„์‚ฐํ•ด์•ผ ํ•œ๋‹ค. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์นธ๋‹น 0.5m*0.5m ๊ทœ๊ฒฉ์˜ 80*120 ํฌ๊ธฐ์˜ ๊ฒฉ์ž๋ฅผ ์ƒ์„ฑํ•˜๊ณ  2.1์ ˆ์˜ ๊ฒฐ๊ณผ๋Š” ์‹ (1), 2.2์ ˆ์˜ ๊ฒฐ๊ณผ๋Š” ์‹ (2)์— ๋”ฐ๋ผ ๊ฐ ์…€์— ๊ฐ’์„ ๋ถ€์—ฌํ•œ๋‹ค.

(1)
$$Object \enspace detection =\begin{cases} Object \enspace probability & if \enspace class= 'car',\: 'truck',\: 'person'\\ 0 & otherwise \end{cases}$$

(2)
$$Drivable \enspace area \enspace detection =\begin{cases} 1&if \enspace class='drivable \enspace area'\\ 0& otherwise \end{cases}$$
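The sketch below illustrates the cell assignment of Eqs. (1) and (2). The grid origin, axis convention, and the placement of the ego vehicle at the center column are illustrative assumptions; the paper specifies only the 0.5 m x 0.5 m resolution and the 80 x 120 grid size.

```python
import numpy as np

CELL = 0.5            # m per cell (from the paper)
ROWS, COLS = 80, 120  # grid size (from the paper)

def to_cell(x_m, y_m):
    """Map a metric (x, y) position to integer grid indices (assumed convention)."""
    row = int(np.floor(x_m / CELL))              # longitudinal direction
    col = int(np.floor(y_m / CELL)) + COLS // 2  # lateral, ego vehicle at the center column
    return row, col

def object_layer(detections):
    """detections: list of (class_name, probability, x_m, y_m) -> Eq. (1) grid."""
    grid = np.zeros((ROWS, COLS))
    for cls, prob, x, y in detections:
        if cls in ("car", "truck", "person"):
            r, c = to_cell(x, y)
            if 0 <= r < ROWS and 0 <= c < COLS:
                grid[r, c] = prob                # object probability, Eq. (1)
    return grid

def drivable_layer(drivable_points):
    """drivable_points: list of (x_m, y_m) for 'drivable area' pixels -> Eq. (2) grid."""
    grid = np.zeros((ROWS, COLS))
    for x, y in drivable_points:
        r, c = to_cell(x, y)
        if 0 <= r < ROWS and 0 <= c < COLS:
            grid[r, c] = 1                       # Eq. (2)
    return grid
```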

2.3.2 ์ •์  ์ง€๋„์™€ ๋™์  ์ง€๋„ ๊ฒฐํ•ฉ

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋™์  ๋Œ€์ƒ์ธ ๊ฐ์ฒด์™€ ์ •์  ๋Œ€์ƒ์ธ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์„ ๊ฒ€์ถœํ•œ ๊ฒฐ๊ณผ๋กœ ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•œ๋‹ค. ๋‘ ๊ฐ€์ง€ ๋Œ€์ƒ์˜ ํŠน์ง•์ด ๋‹ค๋ฅด๋ฏ€๋กœ ๊ฐ๊ฐ ๋™์  ์ง€๋„, ์ •์  ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ํŠน์„ฑ์— ๋งž๊ฒŒ ์—…๋ฐ์ดํŠธ ํ›„ ๋‘˜์„ ๊ฒฐํ•ฉํ•˜๋Š” ๋ฐฉ์‹์„ ์‚ฌ์šฉํ•œ๋‹ค. ๋™์  ์ง€๋„๋Š” ์ž๋™์ฐจ ๊ฐ™์€ ๋™์  ์žฅ์• ๋ฌผ์„ ํ‘œํ˜„ํ•œ ์ง€๋„์ด๋ฏ€๋กœ ์ตœ์‹  ์ •๋ณด๋ฅผ ์šฐ์„ ์‹œํ•˜๊ณ  ๊ณผ๊ฑฐ ์ •๋ณด๋ฅผ ์ ์ฐจ ๊ฐ์†Œ์‹œํ‚จ๋‹ค. ์ •์  ์ง€๋„๋Š” ๋„๋กœ ์˜์—ญ์ฒ˜๋Ÿผ ์ƒํƒœ๊ฐ€ ๊ณ ์ •๋œ ๋Œ€์ƒ์„ ํ‘œํ˜„ํ•œ ์ง€๋„์ด๋ฏ€๋กœ ๊ฐ ์…€์˜ ํ™•๋ฅ ์„ log odd๋กœ ํ‘œํ˜„ํ•˜๋Š” binary bayes filter ์•Œ๊ณ ๋ฆฌ์ฆ˜[14]์— ์˜ํ•ด ๊ณผ๊ฑฐ๋กœ๋ถ€ํ„ฐ์˜ ์ •๋ณด๋ฅผ ๋ˆ„์ ํ•œ๋‹ค. Binary bayes filter ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์‹ (3)๊ณผ ๊ฐ™์ด ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.

(3)
$$l_{t}=l_{t-1}+\log\dfrac{p(x|z_{t})}{1-p(x|z_{t})}-\log\dfrac{p(x)}{1-p(x)}$$
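The static map update of Eq. (3) can be sketched as follows. The measurement probabilities (0.7 for an observed drivable cell, 0.4 otherwise) and the prior of 0.5 are illustrative values chosen for the sketch, not taken from the paper.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_static_map(l_prev, drivable_grid, p_hit=0.7, p_miss=0.4, prior=0.5):
    """Log-odds update of Eq. (3).

    l_prev        : previous log-odds grid l_{t-1}
    drivable_grid : current Eq. (2) result (1 = observed drivable, 0 = not observed)
    """
    p_meas = np.where(drivable_grid == 1, p_hit, p_miss)  # p(x | z_t), assumed values
    return l_prev + log_odds(p_meas) - log_odds(prior)    # Eq. (3)

def to_probability(l):
    """Convert a log-odds grid back to cell probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))
```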

๊ฐ๊ฐ ์—…๋ฐ์ดํŠธ๋œ ์ง€๋„๋Š” ๊ทธ๋ฆผ 5์™€ ๊ฐ™์ด ์•ˆ์ „์— ๋น„๊ต์  ์น˜๋ช…์ ์ธ ๋™์  ์ง€๋„๋ฅผ ์ •์  ์ง€๋„ ์œ„์— ๋ฎ์–ด์“ฐ๋Š” ๋ฐฉ์‹์œผ๋กœ ๊ฒฐํ•ฉํ•œ๋‹ค. ์ด ๋ฐฉ๋ฒ•์€ ๋™์  ์ง€๋„๋ฅผ ํ™•์‹คํ•˜๊ฒŒ ํ‘œ์‹œํ•˜๋ฉด์„œ ๋‚˜๋จธ์ง€ ์˜์—ญ์— ์ •์  ์ง€๋„๋ฅผ ํ‘œ์‹œํ•˜๋Š” ํŠน์ง•์ด ์žˆ๋‹ค.

๊ทธ๋ฆผ. 5. ์ •์  ์ง€๋„์™€ ๋™์  ์ง€๋„์˜ ๊ฒฐํ•ฉ

Fig. 5. Integration of static and dynamic maps.

../../Resources/kiee/KIEE.2020.69.2.356/fig5.png

2.3.3 ์ด์ „ ์‹œ์  ์ง€๋„ ์ขŒํ‘œ๊ณ„ ๋ณ€ํ™˜

์ง€๋„๋ฅผ ์ƒ์„ฑํ•  ๋•Œ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์›€์ง์ด๋Š” ์ง€๋„์˜ ์ขŒํ‘œ๊ณ„๋ฅผ ์ง€๋„์— ๋ฐ˜์˜ํ•ด์•ผ ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ์ฐจ๋Ÿ‰ ์ค‘์‹ฌ์˜ ์ด๋™๋Ÿ‰๊ณผ ๋ฐฉํ–ฅ์„ ๊ณ„์‚ฐํ•˜๊ธฐ ์œ„ํ•ด ์ฐจ๋Ÿ‰์˜ ์ด๋™ ์ •๋ณด(speed, heading)๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, ์ด์ „ ์‹œ์ ์˜ ์ง€๋„๋Š” ๊ทธ๋ฆผ 6๊ณผ ๊ฐ™์ด ์ฐจ๋Ÿ‰ ์ด๋™์˜ ๋ฐ˜๋Œ€ ๋ฐฉํ–ฅ์œผ๋กœ ๋ณ€ํ™˜(transformation)ํ•ด์ค€๋‹ค.

๊ทธ๋ฆผ. 6. ๊ฒฉ์ž ์ง€๋„ ๋ณ€ํ™˜

Fig. 6. Grid map transformation.

../../Resources/kiee/KIEE.2020.69.2.356/fig6.png

์ง€๋„์˜ ๋ณ€ํ˜• ๋•Œ๋ฌธ์— ์…€์˜ ์ขŒํ‘œ๊ฐ€ ์ •์ˆ˜๊ฐ€ ์•„๋‹Œ ์‹ค์ˆ˜๊ฐ€ ๋‚˜์˜ค๊ฒŒ ๋˜๋Š”๋ฐ, ์…€์€ ์‹ค์ˆ˜ ํ˜•ํƒœ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ํ‘œํ˜„ํ•  ์ˆ˜ ์—†๋‹ค. ๋”ฐ๋ผ์„œ ์…€์˜ ๊ฐ’์„ ํ•ด๋‹น ์ขŒํ‘œ์˜ ๊ฐ€์ค‘ ํ‰๊ท ์œผ๋กœ ๊ณ„์‚ฐํ•˜๋Š” ๋ฐฉ๋ฒ•์ธ ์–‘์„ ํ˜• ๋ณด๊ฐ„๋ฒ•(Bilinear Interpolation)[15]์„ ์‚ฌ์šฉํ•˜์—ฌ ์…€์— ๋“ค์–ด๊ฐˆ ๊ฐ’์ด ์ •์ˆ˜๋กœ ํ‘œํ˜„๋  ์ˆ˜ ์žˆ๋„๋ก ํ•œ๋‹ค. ์–‘์„ ํ˜• ๋ณด๊ฐ„๋ฒ•์€ ์‹ (4)์™€ ๊ฐ™์ด ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.

(4)
$$\begin{aligned} \widetilde L_{ij}(x,\:y)=\dfrac{1}{(H+1)(W+1)} & \left[(H+1-x)(W+1-y)\,\hat m(i,\:j)\right.\\ & +(H+1-x)\,y\,\hat m(i,\:j+1)\\ & +x\,(W+1-y)\,\hat m(i+1,\:j)\\ & \left.+x\,y\,\hat m(i+1,\:j+1)\right] \end{aligned}$$
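The sketch below illustrates this transformation step. The translation-only motion model and the axis conventions are illustrative assumptions (the paper uses the speed and heading from the IMU), while the resampling follows the bilinear weighting of Eq. (4).

```python
import numpy as np

CELL = 0.5  # m per cell (from the paper)

def transform_previous_map(prev_map, speed, heading, dt):
    """Shift the previous grid map against the ego motion and resample bilinearly."""
    dx = speed * dt * np.cos(heading) / CELL  # cells moved in the row direction (assumed axes)
    dy = speed * dt * np.sin(heading) / CELL  # cells moved in the column direction
    rows, cols = prev_map.shape
    out = np.zeros_like(prev_map, dtype=float)
    for i in range(rows):
        for j in range(cols):
            x, y = i + dx, j + dy             # real-valued source position in the previous map
            i0, j0 = int(np.floor(x)), int(np.floor(y))
            if 0 <= i0 < rows - 1 and 0 <= j0 < cols - 1:
                fx, fy = x - i0, y - j0       # fractional parts -> bilinear weights, Eq. (4)
                out[i, j] = ((1 - fx) * (1 - fy) * prev_map[i0, j0]
                             + (1 - fx) * fy * prev_map[i0, j0 + 1]
                             + fx * (1 - fy) * prev_map[i0 + 1, j0]
                             + fx * fy * prev_map[i0 + 1, j0 + 1])
    return out
```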

3. ์‹คํ—˜ ๋ฐ ๊ฒฐ๊ณผ

3.1 ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ

๋ณธ ๋…ผ๋ฌธ์˜ ๋„คํŠธ์›Œํฌ ํ•™์Šต ๋ฐ ์‹คํ—˜์˜ Hardware ํ™˜๊ฒฝ์€ ํ‘œ 1๊ณผ ๊ฐ™๋‹ค. Software ํ™˜๊ฒฝ์€ Ubuntu 16.04 LTS์—์„œ Python์„ ์‚ฌ์šฉํ•˜์—ฌ ์ง„ํ–‰ํ•œ๋‹ค. YOLOv3 ๋„คํŠธ์›Œํฌ ์ƒ์„ฑ์„ ์œ„ํ•ด Darknet๋ฅผ ์‚ฌ์šฉํ•˜๋ฉฐ, FCN ๋„คํŠธ์›Œํฌ ์ƒ์„ฑ์„ ์œ„ํ•ด Tensorflow๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. Darknet์€ C์™€ CUDA๋กœ ๋งŒ๋“ค์–ด์ง„ ์‹ ๊ฒฝ ๋„คํŠธ์›Œํฌ(Neural network)๋ฅผ ์œ„ํ•œ ์˜คํ”ˆ์†Œ์Šค ํ”„๋ ˆ์ž„์›Œํฌ์ด๋‹ค[16]. Tensorflow๋Š” ๊ตฌ๊ธ€(Google)์—์„œ C++๋กœ ๋งŒ๋“ , ๋”ฅ๋Ÿฌ๋‹ ํ”„๋กœ๊ทธ๋žจ์„ ์‰ฝ๊ฒŒ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์„ ์ œ๊ณตํ•ด์ฃผ๋Š” ์˜คํ”ˆ์†Œ์Šค ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ์ด๋ฉฐ Python, Java, Go ๋“ฑ ๋‹ค์–‘ํ•œ ์–ธ์–ด๋ฅผ ์ง€์›ํ•œ๋‹ค[17].

ํ‘œ 1. ํ•˜๋“œ์›จ์–ด ์ŠคํŽ™

Table 1. Hardware Spec

Parts

Products

CPU

Intel Xeon Processor

GPU

GTX-1080 Ti

RAM

64 GB

SSD

512 GB

3.2 ์‹คํ—˜ ๋ฐ์ดํ„ฐ์…‹

์‹คํ—˜์— ์ด์šฉํ•œ ๋ฐ์ดํ„ฐ๋Š” KITTI ๋ฐ์ดํ„ฐ์…‹์„ ์‚ฌ์šฉํ•œ๋‹ค. KITTI ๋ฐ์ดํ„ฐ์…‹์€ ์ฐจ๋Ÿ‰์˜ ์ฃผํ–‰ ์‹œ ์ธก์ •๋œ Camera, LiDAR, ๊ณ ์ •๋ฐ€ GPS, IMU ๋“ฑ์˜ ์„ผ์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์ •๋ฆฌํ•˜๊ณ  ์—ฐ๊ตฌ๋ฅผ ์œ„ํ•ด ๊ณต๊ฐœํ•œ ๋ฐ์ดํ„ฐ์…‹์ด๋‹ค[18]. ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” Camera, LiDAR์™€ IMU์˜ ์„ผ์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค. Camera ๋ฐ์ดํ„ฐ๋Š” ๊ฐ ๋”ฅ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ(YOLOv3, FCN)์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜๋ฉฐ, LiDAR ๋ฐ์ดํ„ฐ๋Š” ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ ํ•™์Šต ์‹œ ์ฐธ์กฐ(reference) ๋ฐ์ดํ„ฐ๋กœ ์‚ฌ์šฉํ•œ๋‹ค. IMU ๋ฐ์ดํ„ฐ๋Š” ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ๊ณผ์ • ์ค‘ ์ด์ „ ์‹œ์ ์˜ ์ง€๋„๋ฅผ ์ฐจ๋Ÿ‰ ์ด๋™์˜ ๋ฐ˜๋Œ€ ๋ฐฉํ–ฅ์œผ๋กœ ๋ณ€ํ™˜ํ•  ๋•Œ ์‚ฌ์šฉํ•œ๋‹ค.

3.3 ์‹คํ—˜ ๊ฒฐ๊ณผ

3.3.1 ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ ๊ฒฐ๊ณผ

๊ฐ์ฒด ๊ฒ€์ถœ๋ฅ  ํ…Œ์ŠคํŠธ๋Š” ์‚ฌ์ „์— ImageNet 1000 Class ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต๋œ Pre-trained model์— KITTI ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ์ „์ดํ•™์Šต์„ ์ง„ํ–‰ํ•œ ๋ชจ๋ธ๋กœ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ์˜์ƒ ๋‚ด์—์„œ ์ฐจ๋Ÿ‰์ด ๊ฒ€์ถœ๋œ ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•œ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋ฆผ 7-(a)์™€ ๊ฐ™๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ์—์„œ ๊ฒ€์ถœ๋ฅ ์€ ์ด๋ฏธ์ง€ ๋‚ด์—์„œ ๊ฐ์ฒด๊ฐ€ ๋งž๊ฒŒ ๊ฒ€์ถœ๋œ ๋น„์œจ์„ ์˜๋ฏธํ•˜๋ฉฐ, ๋ณธ ์‹คํ—˜์—์„œ๋Š” 85.89%์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•˜์˜€๋‹ค.

๊ฑฐ๋ฆฌ ์ถ”์ • ์ •ํ™•๋„ ํ…Œ์ŠคํŠธ๋Š” LiDAR ๋ฐ์ดํ„ฐ๋ฅผ ์ฐธ์กฐ(reference) ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต์„ ์ง„ํ–‰ํ•œ ๋ชจ๋ธ๋กœ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•œ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋ฆผ 7-(b)์™€ ๊ฐ™๋‹ค. ๊ฑฐ๋ฆฌ ์ถ”์ •์—์„œ ์ •ํ™•๋„๋Š” ํ‰๊ท  ์ œ๊ณฑ์˜ค์ฐจ(Mean Square Error, MSE)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ํ™•์ธํ•˜๋ฉฐ, MSE๋Š” ์‹ (5)์™€ ๊ฐ™์ด ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋‹ค.

(5)
$$Mean \enspace Square \enspace Error=\dfrac{1}{n}\sum_{i=1}^{n}(p_{i}-y_{i})^{2}$$

๋ณธ ์‹คํ—˜์—์„œ๋Š” MSE๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ x์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 0.9m, y์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 2.68m, z์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 0.25m๋ฅผ ํ™•์ธํ•˜์˜€์œผ๋ฉฐ, ์ „์ฒด์ ์ธ ๊ฑฐ๋ฆฌ์— ๋Œ€ํ•œ ์˜ค์ฐจ๋Š” 2.77m๋กœ ํ™•์ธ๋˜์—ˆ๋‹ค.

๊ทธ๋ฆผ. 7. ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ (์‹œ๊ฐํ™”)

Fig. 7. Result of object detection and distance estimation (visualization)

../../Resources/kiee/KIEE.2020.69.2.356/fig7.png

3.3.2 ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๋„คํŠธ์›Œํฌ ๊ฒฐ๊ณผ

์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ๋ฅ  ํ…Œ์ŠคํŠธ๋Š” ์‚ฌ์ „์— PASCAL VOC ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ํ•™์Šต๋œ Pre-trained model์— KITTI ๋ฐ์ดํ„ฐ์…‹์œผ๋กœ ์ „์ดํ•™์Šต์„ ์ง„ํ–‰ํ•œ ๋ชจ๋ธ๋กœ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ์˜์ƒ ๋‚ด์—์„œ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์ด ๊ฒ€์ถœ๋œ ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•œ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋ฆผ 8-(a)์™€ ๊ฐ™๋‹ค. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์—์„œ ๊ฒ€์ถœ๋ฅ ์€ ์ด๋ฏธ์ง€ ๋‚ด์—์„œ ์ฃผํ–‰๊ฐ€๋Šฅ ์˜์—ญ์ด ๋งž๊ฒŒ ๊ฒ€์ถœ๋œ ๋น„์œจ์„ ์˜๋ฏธํ•˜๋ฉฐ, ๋ณธ ์‹คํ—˜์—์„œ๋Š” 90.27%์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•˜์˜€๋‹ค.

ํ‘œ 2. ๊ฐ์ฒด ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ (์ •ํ™•๋„ ๋ฐ ์˜ค์ฐจ)

Table 2. Result of object detection and distance estimation (Accuracy and Error)

Object detection

Distance estimation

x

y

z

Distance

85.89 %

0.9 m

2.68 m

0.25 m

2.77 m

๊ฑฐ๋ฆฌ ์ถ”์ • ์ •ํ™•๋„ ํ…Œ์ŠคํŠธ๋Š” LiDAR ๋ฐ์ดํ„ฐ๋ฅผ ์ฐธ์กฐ(reference) ๋ฐ์ดํ„ฐ๋กœ ํ•™์Šต์„ ์ง„ํ–‰ํ•œ ๋ชจ๋ธ๋กœ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ๋ฅผ ์‹œ๊ฐํ™”ํ•œ ์ด๋ฏธ์ง€๋Š” ๊ทธ๋ฆผ 8-(b)์™€ ๊ฐ™์œผ๋ฉฐ, ๋…ธ๋ž€์ƒ‰์€ ์ฐธ์กฐ ๋ฐ์ดํ„ฐ๋กœ ์‚ฌ์šฉ๋œ LiDAR์˜ ๋ฐ์ดํ„ฐ๋ฅผ, ๋นจ๊ฐ„์ƒ‰์€ ์˜ˆ์ธกํ•œ LiDAR์˜ x, y ์ขŒํ‘œ๋ฅผ ์˜๋ฏธํ•œ๋‹ค. ๊ฑฐ๋ฆฌ ์ถ”์ •์—์„œ x, y ์ขŒํ‘œ์— ๋Œ€ํ•œ ์ •ํ™•๋„๋Š” ์‹ (6)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ™•์ธํ•œ๋‹ค.

(6)
$$Accuracy=\dfrac{\sum_{i=0}^{k}\sum_{j=0}^{k}\mathbf{1}\left[\,|p_{ij}-y_{ij}|\le 3\:{\rm cm}\,\right]}{\sum_{i=0}^{k}\sum_{j=0}^{k}1}$$

๊ฐ ์ขŒํ‘œ์— ๋Œ€ํ•œ ์ •ํ™•๋„๋Š” x์ถ•์— ๋Œ€ํ•ด 85.80%, y์ถ•์— ๋Œ€ํ•ด 89.37%๋ฅผ ํ™•์ธํ•˜์˜€๋‹ค. ์ „์ฒด์ ์ธ ๊ฑฐ๋ฆฌ์— ๋Œ€ํ•œ ์ •ํ™•๋„๋Š” ์‹ (3)์„ ์‚ฌ์šฉํ•˜์—ฌ ํ™•์ธํ•œ๋‹ค. ์ „์ฒด์ ์ธ ๊ฑฐ๋ฆฌ์— ๋Œ€ํ•œ ์˜ค์ฐจ๋Š” 1.253m๋กœ ํ™•์ธ๋˜์—ˆ๋‹ค.

๊ทธ๋ฆผ. 8. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ (์‹œ๊ฐํ™”)

Fig. 8. Result of driving areas detection and distance estimation (Visualization)

../../Resources/kiee/KIEE.2020.69.2.356/fig8.png

ํ‘œ 3. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋ฐ ๊ฑฐ๋ฆฌ ์ถ”์ • ๊ฒฐ๊ณผ (์ •ํ™•๋„ ๋ฐ ์˜ค์ฐจ)

Table 3. Result of driving areas detection and distance estimation (Accuracy and Error)

Driving areas detection

Distance estimation

x

y

Distance

90.27 %

85.80 %

89.37 %

1.253 m

3.3.3 ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ๊ฒฐ๊ณผ

๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ๊ณผ์ • ๋ฐ ๊ฒฐ๊ณผ๋Š” ๊ทธ๋ฆผ 9์™€ ๊ฐ™๋‹ค. ์ƒ๋‹จ์˜ ์ด๋ฏธ์ง€๋Š” KITTI ๋ฐ์ดํ„ฐ์…‹์˜ Camera ๋ฐ์ดํ„ฐ๋ฅผ ์˜๋ฏธํ•œ๋‹ค. ํ•˜๋‹จ์˜ ์ด๋ฏธ์ง€๋Š” ์™ผ์ชฝ๋ถ€ํ„ฐ ์ฐจ๋ก€๋Œ€๋กœ ๊ฐ์ฒด ๊ฒ€์ถœ ๊ฒฐ๊ณผ, ๊ฐ์ฒด ๊ฒ€์ถœ ๊ฒฐ๊ณผ๋ฅผ ํ™•๋Œ€ํ•œ ๊ฒƒ, ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๊ฒฐ๊ณผ, ํ˜„์žฌ ์‹œ์ ์˜ ๋™์  ์ง€๋„, ๊ณผ๊ฑฐ ์‹œ์ ์˜ ๋™์  ์ง€๋„, ํ˜„์žฌ ์‹œ์ ์˜ ์ •์  ์ง€๋„, ๊ณผ๊ฑฐ ์‹œ์ ์˜ ์ •์  ์ง€๋„, ์ •์  ์ง€๋„์™€ ๋™์  ์ง€๋„ ๊ฒฐํ•ฉ ๊ฒฐ๊ณผ, ์ตœ์ข…์ ์ธ ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์˜๋ฏธํ•œ๋‹ค.

๊ทธ๋ฆผ. 9. ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ๊ฒฐ๊ณผ

Fig. 9. Result of grid map generation

../../Resources/kiee/KIEE.2020.69.2.356/fig9.png

4. ๊ฒฐ ๋ก 

๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์ž์œจ์ฃผํ–‰ ์ž๋™์ฐจ์˜ ํ™˜๊ฒฝ ์ธ์ง€ ์„ผ์„œ ์ค‘ Mono vision๋งŒ์„ ์‚ฌ์šฉํ•˜์—ฌ ๋”ฅ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ ๊ธฐ๋ฐ˜์˜ ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์ง€ ๋ฐ ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜์˜€๋‹ค. ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์ง€ ๋ฐฉ๋ฒ•์œผ๋กœ๋Š” YOLO(You Only Look Once)v3๋ฅผ ์‚ฌ์šฉํ•œ ๊ฐ์ฒด ๊ฒ€์ถœ, FCN(Fully Convolutional Networks)๋ฅผ ์‚ฌ์šฉํ•œ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์„ ์ˆ˜ํ–‰ํ•˜์˜€์œผ๋ฉฐ, ๊ฐ์ฒด ๊ฒ€์ถœ์€ 85.89%, ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์€ 90.27%์˜ ํƒ์ง€์œจ(Accuracy)์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๋˜ํ•œ, ๊ฐ ๋„คํŠธ์›Œํฌ์˜ ๋’ค์— Fully Connected Layer(FC Layer)๋ฅผ ์ถ”๊ฐ€ํ•˜์—ฌ ๊ฒ€์ถœ ๊ฒฐ๊ณผ์— ๋Œ€ํ•œ ๊ฑฐ๋ฆฌ ์ •๋ณด๋ฅผ ์ถ”์ •ํ•˜์˜€๋‹ค. ์ด๋ฅผ ํ†ตํ•ด Mono vision๋งŒ์„ ์‚ฌ์šฉํ•˜๋”๋ผ๋„ ๊ฑฐ๋ฆฌ ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ์—ˆ๋‹ค. ๊ฐ์ฒด ๊ฒ€์ถœ์—์„œ์˜ ๊ฑฐ๋ฆฌ ์ถ”์ •์€ ํ‰๊ท  ์ œ๊ณฑ์˜ค์ฐจ(Mean Square Error, MSE)๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ x์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 0.9m, y์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 2.68m, z์ถ•์— ๋Œ€ํ•œ ์˜ค์ฐจ 0.25m๋ฅผ ํ™•์ธํ•˜์˜€์œผ๋ฉฐ, ์ „์ฒด์ ์ธ ๊ฑฐ๋ฆฌ์— ๋Œ€ํ•œ ์˜ค์ฐจ๋Š” 2.77m๋กœ ํ™•์ธ๋˜์—ˆ๋‹ค. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ์—์„œ์˜ ๊ฑฐ๋ฆฌ ์ถ”์ •์€ ํƒ์ง€์œจ(Accuracy)์„ ์‚ฌ์šฉํ•˜์—ฌ x์ถ•์— ๋Œ€ํ•ด 85.80%, y์ถ•์— ๋Œ€ํ•ด 89.37%์˜ ์„ฑ๋Šฅ์„ ํ™•์ธํ•˜์˜€์œผ๋ฉฐ, ์ „์ฒด์ ์ธ ๊ฑฐ๋ฆฌ์— ๋Œ€ํ•œ ์˜ค์ฐจ๋Š” MSE๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ 1.253m๋ฅผ ํ™•์ธํ•˜์˜€๋‹ค.

๋”ฅ๋Ÿฌ๋‹ ๋„คํŠธ์›Œํฌ๋กœ ๊ฒ€์ถœ๋œ ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์ง€ ๊ฒฐ๊ณผ๋Š” IMU ๋ฐ์ดํ„ฐ์™€ ํ•จ๊ป˜ ๊ฒฉ์ž ์ง€๋„ ์ƒ์„ฑ ์‹œ์Šคํ…œ์˜ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉ๋˜์–ด ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•˜์˜€๋‹ค. ๋™์  ๋Œ€์ƒ์ธ ๊ฐ์ฒด์™€ ์ •์  ๋Œ€์ƒ์ธ ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์˜ ํŠน์„ฑ์„ ๊ณ ๋ คํ•˜์—ฌ ๊ฐ๊ฐ ๋™์  ์ง€๋„, ์ •์  ์ง€๋„๋ฅผ ์ƒ์„ฑ ํ›„ ๊ฒฐํ•ฉํ•˜์˜€์œผ๋ฉฐ, ์‹ค์‹œ๊ฐ„์œผ๋กœ ์›€์ง์ด๋Š” ์ง€๋„์˜ ์ขŒํ‘œ๊ณ„๋ฅผ ์ง€๋„์— ๋ฐ˜์˜ํ•˜๊ธฐ ์œ„ํ•ด ์ฐจ๋Ÿ‰ ์ด๋™์˜ ๋ฐ˜๋Œ€ ๋ฐฉํ–ฅ์œผ๋กœ ๋ณ€ํ™˜(transformation)ํ•˜์˜€๋‹ค. ํ˜„์žฌ ์‹œ์ ์˜ ์ง€๋„์™€ ์ถ• ๋ณ€ํ™˜๋œ ์ด์ „ ์‹œ์ ์˜ ์ง€๋„๋ฅผ ๊ฒฐํ•ฉํ•˜์—ฌ ์ตœ์ข…์ ์œผ๋กœ ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค. ์ด๋Š” ์ดํ›„ ํŒ๋‹จ ๋ถ€๋ถ„์—์„œ ๊ฒฉ์ž์— ์ตœ๋‹จ ๊ฒฝ๋กœ๋ฅผ ์ƒ์„ฑํ•˜๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์‚ฌ์šฉํ•  ๋•Œ ํŽธ๋ฆฌํ•˜๊ฒŒ ์‚ฌ์šฉ๋  ์ˆ˜ ์žˆ๋‹ค.

๋ณธ ๋…ผ๋ฌธ์—์„œ ์ œ์•ˆํ•œ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฐ˜์˜ ์ฃผํ–‰ ํ™˜๊ฒฝ ์ธ์ง€ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ๊ฑฐ๋ฆฌ ์ถ”์ •์€ ๊ฑฐ๋ฆฌ๊ฐ€ ๋ฉ€์–ด์งˆ์ˆ˜๋ก ์˜ค์ฐจ๊ฐ€ ํฌ๊ฒŒ ๋ฐœ์ƒํ•œ๋‹ค. ์˜ค์ฐจ๊ฐ€ ๋ฐœ์ƒํ•˜๋Š” ์›์ธ์œผ๋กœ ๊ฐ์ฒด ๊ฒ€์ถœ ๋„คํŠธ์›Œํฌ์—์„œ๋Š” ์ฐจ๋Ÿ‰์˜ ์ข…๋ฅ˜๋ณ„๋กœ ๋‹ค๋ฅธ Bounding Box์˜ ๋น„์œจ์ด๋ผ๊ณ  ์˜ˆ์ƒํ•˜์˜€๋‹ค. RV, ์Šน์šฉ, ๋ฒ„์Šค, ํŠธ๋Ÿญ ๋“ฑ์˜ ์ฐจ๋Ÿ‰์€ ๊ฐ๊ฐ ๋‹ค๋ฅธ ๋น„์œจ์˜ Bounding Box๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค. ํ•˜์ง€๋งŒ ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๊ฐ์ฒด ๊ฒ€์ถœ ์‹œ ์ฐจ์ข…์˜ ๋ถ„๋ฅ˜ ์—†์ด ๋ชจ๋“  ์ฐจ์ข…์„ ํ•˜๋‚˜์˜ class๋กœ ๊ตฌ๋ถ„ํ•˜์—ฌ ์‹คํ—˜์„ ์ง„ํ–‰ํ•˜์˜€๋‹ค. ์ด์— Bounding Box์˜ ๋น„์œจ์— ๋”ฐ๋ผ ๊ฐ’์ด ๋‹ฌ๋ผ์ง€๋Š” ๊ฑฐ๋ฆฌ ์ถ”์ • ํŠน์„ฑ์ƒ ์„ฑ๋Šฅ์ด ๋–จ์–ด์ง€๋Š” ๊ฒƒ์ด๋ผ ์˜ˆ์ƒํ•˜์˜€๋‹ค. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋„คํŠธ์›Œํฌ์—์„œ์˜ ์›์ธ์€ ๊ฑฐ๋ฆฌ๊ฐ€ ๋ฉ€์ˆ˜๋ก ํ•ด์ƒ๋„(resolution)๊ฐ€ ๋–จ์–ด์ง€๋Š” ๊ฒƒ์ด๋ผ๊ณ  ์˜ˆ์ƒํ•˜์˜€๋‹ค. LiDAR ๋ฐ์ดํ„ฐ๋Š” ๊ฑฐ๋ฆฌ๊ฐ€ ๋ฉ€์ˆ˜๋ก ๋‚ฎ์€ ํ•ด์ƒ๋„๋ฅผ ๊ฐ€์ง€๋Š” ํŠน์„ฑ์ด ์žˆ๋‹ค. ์ด์— ๊ฑฐ๋ฆฌ๊ฐ€ ๋ฉ€์–ด์งˆ์ˆ˜๋ก ํ•™์Šต์— ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๋Š” LiDAR์˜ ์ฐธ์กฐ(reference) ๋ฐ์ดํ„ฐ๊ฐ€ ์ ์–ด์ง€๋ฏ€๋กœ ์„ฑ๋Šฅ์ด ๋–จ์–ด์ง€๋Š” ๊ฒƒ์ด๋ผ ์˜ˆ์ƒํ•˜์˜€๋‹ค.

์ด์— ํ–ฅํ›„ ์—ฐ๊ตฌ์—์„œ๋Š” ํ•™์Šต DB ์ˆ˜๋ฅผ ๋Š˜๋ฆฌ๊ธฐ ์œ„ํ•œ DB ์ถ”๊ฐ€ ๊ตฌ์ถ•์„ ํ•  ์˜ˆ์ •์ด๋‹ค. ๋˜ํ•œ, ๊ฐ์ฒด ๊ฒ€์ถœ ๋„คํŠธ์›Œํฌ๋Š” ์ฐจ๋Ÿ‰ class๋ฅผ ์ฐจ์ข…์— ๋”ฐ๋ผ ์„ธ๋ถ„ํ™”ํ•˜์—ฌ ๋„คํŠธ์›Œํฌ๋ฅผ ์žฌํ•™์Šต ์‹œํ‚ฌ ์˜ˆ์ •์ด๋‹ค. ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ ๊ฒ€์ถœ ๋„คํŠธ์›Œํฌ๋Š” ๋ณด๊ฐ„๋ฒ•(interpolation)๊ณผ ๊ฐ™์€ ๊ธฐ๋ฒ•์„ ์‚ฌ์šฉํ•˜์—ฌ ๋ฉ€๋ฆฌ ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋„ ๊ทผ์ฒ˜ ๋ฐ์ดํ„ฐ์™€ ๋น„์Šทํ•œ ๊ฐ’์„ ๊ฐ€์งˆ ์ˆ˜ ์žˆ๋„๋ก ๊ฑฐ๋ฆฌ ์ถ”์ •์˜ ์ฐธ์กฐ(reference) ๋ฐ์ดํ„ฐ๋ฅผ ์ถ”๊ฐ€๋กœ ์ƒ์„ฑํ•˜๊ณ  ๋„คํŠธ์›Œํฌ๋ฅผ ์žฌํ•™์Šต ์‹œ์ผœ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ์—ฐ๊ตฌ๋ฅผ ์ง„ํ–‰ํ•  ๊ฒƒ์ด๋‹ค.

Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP; Ministry of Science, ICT & Future Planning) (NRF-2017R1C1B5018101).

References

[1] Keonyup Chu, et al., "Development of an Autonomous Vehicle: A1," Transactions of the Korean Society of Automotive Engineers, Vol. 19, No. 4, pp. 146-154, 2011.
[2] Scott Drew Pendleton, et al., "Perception, Planning, Control, and Coordination for Autonomous Vehicles," Machines, Vol. 5, No. 1, February 2017.
[3] Minchae Lee, et al., "Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles," Journal of Korean Institute of Intelligent Systems, Vol. 23, No. 1, pp. 35-45, February 2013.
[4] Si-Jong Kim, et al., "The Development of Sensor System and 3D World Modeling for Autonomous Vehicle," Journal of Institute of Control, Robotics and Systems, Vol. 17, No. 6, pp. 531-538, June 2011.
[5] Bora Jin, "Preference Analysis of Autonomous Vehicle in Korea: Using Mixed Logit Model," Seoul University, August 2014.
[6] Yeongbae Hwang, Myeonghyeon Yoon, "Multi-sensor-based Artificial Intelligence Technology for Autonomous Driving," OSIA Standards & Technology Review, Vol. 30, No. 1, pp. 23-29, March 2018.
[7] Yeongguk Ha, "Deep Learning Technology for Autonomous Driving Vehicle," Journal of Korea Robotics Society, Vol. 15, No. 3, pp. 36-46, July 2018.
[8] Hyun Kim, Hyuk-Jae Lee, "A Trend in Development of AI Platforms for the Performance Improvement of Autonomous Vehicles," Journal of the Korean Society of Automotive Engineers, Vol. 40, No. 6, pp. 37-42, June 2018.
[9] Joseph Redmon, Ali Farhadi, "YOLOv3: An Incremental Improvement," CoRR, Vol. abs/1804.02767, April 2018.
[10] Jonathan Long, Evan Shelhamer, Trevor Darrell, "Fully Convolutional Networks for Semantic Segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
[11] Mingyu Park, et al., "A Study on Vehicle Detection and Distance Classification Using Mono Camera Based on Deep Learning," Journal of Korean Institute of Intelligent Systems, Vol. 29, No. 2, pp. 83-89, April 2019.
[12] Sung soo Hwang, Do Hyun Kim, "Traversable Region Detection Algorithm using Lane Information and Texture Analysis," Journal of Korea Multimedia Society, Vol. 19, No. 6, pp. 979-989, June 2016.
[13] Eungi Cho, Yongbeom Lee, Seongkeun Park, "Creation of Grid Map using a Driving Areas Detection and Distance Estimation based on Deep Learning Network," in Proceedings of KIIS Spring Conference 2019, April 2019.
[14] Hyukdoo Choi, Euntai Kim, Gwang-Woong Yang, "Scan Likelihood Evaluation in FastSLAM using Binary Bayes Filter," The 2013 IEEE Image, Video and Multidimensional Signal Processing (IVMSP), pp. 1-3, September 2013.
[15] Ji-Hye Joung, Jeong-Tae Kim, "Fast Correction of Nonuniform Illumination on Bi-level Images using Block Based Intensity Normalization," The Transactions of the Korean Institute of Electrical Engineers, Vol. 61, No. 12, pp. 1926-1931, November 2012.
[16] Joseph Redmon, "Darknet: Open Source Neural Networks in C," 2013.
[17] Martín Abadi, et al., "TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems," 2015. Available: http://tensorflow.org/
[18] Andreas Geiger, Philip Lenz, Raquel Urtasun, "Are We Ready for Autonomous Driving? The KITTI Vision Benchmark Suite," Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

์ €์ž์†Œ๊ฐœ

์กฐ์€๊ธฐ (Eungi Cho)
../../Resources/kiee/KIEE.2020.69.2.356/au1.png

2018๋…„: ์ˆœ์ฒœํ–ฅ๋Œ€ํ•™๊ต ์ „์ž์ •๋ณด๊ณตํ•™๊ณผ ๊ณตํ•™์‚ฌ

2018๋…„~ํ˜„์žฌ: ์ˆœ์ฒœํ–ฅ๋Œ€ํ•™๊ต ์ผ๋ฐ˜๋Œ€ํ•™์› ๋ฏธ๋ž˜์œตํ•ฉ๊ธฐ์ˆ ํ•™๊ณผ ์„์‚ฌ๊ณผ์ •

๊น€ํ˜„์„ (Hyeonseok Kim)
../../Resources/kiee/KIEE.2020.69.2.356/au2.png

2018๋…„: ์ˆœ์ฒœํ–ฅ๋Œ€ํ•™๊ต ์ „์ž์ •๋ณด๊ณตํ•™๊ณผ ๊ณตํ•™์‚ฌ

2018๋…„~ํ˜„์žฌ: ์ˆœ์ฒœํ–ฅ๋Œ€ํ•™๊ต ์ผ๋ฐ˜๋Œ€ํ•™์› ๋ฏธ๋ž˜์œตํ•ฉ๊ธฐ์ˆ ํ•™๊ณผ ์„์‚ฌ๊ณผ์ •

๋ฐ•์„ฑ๊ทผ (Seongkeun Park)
../../Resources/kiee/KIEE.2020.69.2.356/au3.png

2004๋…„ : ์—ฐ์„ธ๋Œ€ํ•™๊ต ์ „๊ธฐ์ „์ž๊ณตํ•™๋ถ€ ๊ณตํ•™์‚ฌ

2011๋…„ : ์—ฐ์„ธ๋Œ€ํ•™๊ต ์ „๊ธฐ์ „์ž๊ณตํ•™๋ถ€ ๊ณตํ•™๋ฐ•์‚ฌ

2016๋…„~ํ˜„์žฌ : ์ˆœ์ฒœํ–ฅ๋Œ€ํ•™๊ต ์Šค๋งˆํŠธ์ž๋™์ฐจํ•™๊ณผ ์กฐ๊ต์ˆ˜