Ego-Lane Analysis System


Authors: Rodrigo Berriel, Edilson de Aguiar, Alberto F. de Souza, Thiago Oliveira-Santos

Image and Vision Computing, DOI: 10.1016/j.imavis.2017.07.005

Abstract

Elas-graphical-abstract.png

Decreasing costs of vision sensors and advances in embedded hardware boosted lane related research – detection, estimation, tracking, etc. – in the past two decades. The interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road markings detection and classification, and detection of adjacent lanes presence. In this paper, we propose a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW and detecting lane change events. The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in perspective and Inverse Perspective Mapping (IPM) images that are combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines, Kalman filter and Particle filter). Based on the estimated lane, all other events are detected. To validate ELAS and cover the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (in more than 15,000 frames) and considering a variety of scenarios (urban road, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events that are of interest for the research community (i.e. lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks and adjacent lanes). Moreover, the system was also validated quantitatively and qualitatively on other public datasets. ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
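
As a rough illustration of the pipeline described above, the sketch below shows the perspective-to-IPM warp and the Hough-based extraction of straight-line candidates, in Python with OpenCV. The point coordinates and thresholds are placeholder assumptions, not the calibrated values used by ELAS; the full system additionally fuses features from both views and refines the estimate with Kalman and particle filters before fitting the final spline.

 # Minimal sketch of the IPM + Hough stage (assumptions: OpenCV available,
 # hand-picked warp points and thresholds; not the ELAS calibration).
 import cv2
 import numpy as np

 def ipm(frame, src_pts, dst_size=(400, 600)):
     """Warp a perspective road image into a bird's-eye (IPM) view."""
     w, h = dst_size
     dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
     H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
     return cv2.warpPerspective(frame, H, (w, h))

 def lane_line_candidates(bev):
     """Extract straight-line candidates from the IPM image with Hough."""
     gray = cv2.cvtColor(bev, cv2.COLOR_BGR2GRAY)
     edges = cv2.Canny(gray, 50, 150)            # illustrative thresholds
     return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=60, maxLineGap=20)

 frame = cv2.imread("frame.png")                 # hypothetical input frame
 # Four road-plane points (a trapezoid in the perspective image), chosen by
 # hand here; a real system calibrates this mapping per camera.
 src = [(300, 400), (500, 400), (750, 600), (50, 600)]
 lines = lane_line_candidates(ipm(frame, src))

In ELAS, candidates like these seed the filtering stage, and the final ego-lane is the spline fitted to the filtered estimates; all other events (LDW, lane change, LMT and road marking classification) are derived from that estimated lane.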

Demonstration Video

Demonstration video of ELAS, as proposed at the time of submission:

Elas-video1.png

ELAS was loosely integrated into IARA (our autonomous vehicle). The video below shows ELAS running on IARA (without tuning any parameters).

Elas-carmen.png

Source-Code

Available here

Dataset

To request access to the datasets, read the instructions here.

It contains 22 scenes with a total of 17,092 frames. In 98.54% of the frames, at least one of the sides has white lane markings, while in 12.88% at least one side is yellow. In 5.18% of the frames, at least one side has no lane markings. These numbers include frames containing lane marking type transitions, i.e. images with two lane marking types simultaneously on the same side; such transitions are present in 1,854 frames (10.85%). Lane change maneuvers are being performed in 5.42% of the dataset, of which 64.72% are from right to left and 35.28% in the opposite direction. There are intersections in 2.11% of the frames. In 7.28% of all images, there is at least one pavement marking: 33.92% of the pavement markings are crosswalks, 50.81% comprise arrows and stop lines, and 15.27% are annotated as unknown, i.e. they belong to none of the classes of interest. There is at least one adjacent lane in 50.34% and 72.14% of the frames for the right and left side, respectively.
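For reference, per-frame percentages like those above can be aggregated from the annotations with a few counters. The sketch below assumes a hypothetical CSV layout; the file name and field names are invented for illustration, and the actual annotation format is described in the dataset instructions.

 # Hedged sketch: aggregating per-frame statistics from annotations.
 # "annotations.csv" and its field names are assumptions, not the ELAS format.
 import csv

 total = 0
 counts = {"white": 0, "no_marking": 0, "lane_change": 0, "intersection": 0}

 with open("annotations.csv") as f:          # hypothetical annotation table
     for row in csv.DictReader(f):
         total += 1
         sides = (row["left_lmt"], row["right_lmt"])
         if any(s.startswith("white") for s in sides):
             counts["white"] += 1
         if "none" in sides:
             counts["no_marking"] += 1
         if row["lane_change"] == "1":
             counts["lane_change"] += 1
         if row["intersection"] == "1":
             counts["intersection"] += 1

 for name, n in counts.items():
     print(f"{name}: {100.0 * n / total:.2f}% of {total} frames")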

Download a short description of each scene (with images).

Some samples below:

Elas-dataset-samples.png

BibTeX

 @article{berriel2017imavis,
   Author  = {Rodrigo F. Berriel and Edilson de Aguiar and Alberto F. de Souza and Thiago Oliveira-Santos},
   Title   = {{Ego-Lane Analysis System (ELAS): Dataset and Algorithms}},
   Journal = {Image and Vision Computing},
   Volume  = {68},
   Pages   = {64--75},
   Year    = {2017},
   DOI     = {10.1016/j.imavis.2017.07.005},
   ISSN    = {0262-8856},
 }