Deliverable 4.3

Initial version of higher-level perception functions. This deliverable provides an initial design and implementation of the higher-level perception functions covering road surface and obstacle perception, parking spot detection, road user classification, tracking, and signaling perception.

Deliverable 4.2

Initial version of low-level perception functions. This deliverable provides an initial design and implementation of the spatio-temporal and appearance-based low-level representation (STAR), which is the basis for building a virtual super-sensor that perceives the environment as if it had the capabilities of all available sensors mounted on the vehicles.

Deliverable 2.2

First vehicle platform fully operational. This deliverable documents the functionality of the first vehicle platform: it provides a thorough analysis of the sensor setup, presents the high-level processing framework, reports on the communication capabilities, and gives a brief overview of the safety elements and policies.

Super-sensor for 360-degree Environment Perception: Point Cloud Segmentation Using Image Features

R. Varga, A.D. Costea, H. Florea, I. Giosan, S. Nedevschi. Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC 2017), Yokohama, Japan, 16-19 Oct. 2017, pp. 1-8. This paper describes a super-sensor that enables 360-degree environment perception for automated vehicles in urban traffic scenarios. We use four fisheye cameras, four 360-degree LIDARs and a GPS/IMU sensor mounted on an automated vehicle to build a super-sensor that offers an enhanced low-level representation of the environment by harmonizing all the available sensor measurements. Individual sensors cannot provide a robust 360-degree perception due to their limitations: field of view, range,...
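The core idea behind "point cloud segmentation using image features" — attaching image-derived semantics to LIDAR points via projection — can be sketched roughly as follows. This is a minimal pinhole-camera sketch, not the paper's method: the function name, the `T_cam_lidar`/`K` calibration inputs, and the pinhole model itself are illustrative assumptions (the paper works with calibrated fisheye cameras and a harmonized multi-sensor representation).

```python
import numpy as np

def label_points_from_image(points_lidar, T_cam_lidar, K, seg_labels):
    """Assign a semantic class to each LIDAR point by projecting it into a
    camera's per-pixel segmentation map.  Hypothetical pinhole sketch only;
    a fisheye model would replace the K-matrix projection below."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ homog.T).T[:, :3]      # LIDAR frame -> camera frame
    labels = np.full(n, -1, dtype=int)              # -1 = not visible in this camera
    in_front = pts_cam[:, 2] > 0.1                  # keep only points ahead of camera
    uvw = (K @ pts_cam[in_front].T).T               # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)     # pixel coordinates
    h, w = seg_labels.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_labels[uv[valid, 1], uv[valid, 0]]  # sample class at pixel
    return labels
```

Repeating this for each of the four cameras, and merging the per-camera labels, yields a semantically labeled 360-degree point cloud.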

Semantic segmentation-based stereo reconstruction with statistically improved long range accuracy

V.C. Miclea, S. Nedevschi. Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV 17), Los Angeles, CA, USA, 11-14 June 2017, pp. 1795-1802. Lately, stereo matching has become a key aspect of autonomous driving, providing highly accurate solutions at relatively low cost. Top approaches on state-of-the-art benchmarks rely on learning mechanisms such as convolutional neural networks (ConvNets) to boost matching accuracy. We propose a new real-time stereo reconstruction method that uses a ConvNet for semantically segmenting the driving scene. In a "divide and conquer" approach, this segmentation enables us to split the large heterogeneous traffic scene into smaller...
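One place where per-class treatment pays off in stereo is subpixel disparity refinement, where the best interpolation curve depends on scene geometry. The sketch below illustrates the general idea under stated assumptions: the class names and the class-to-curve pairing are illustrative choices, not the paper's exact per-class optimization.

```python
def subpixel_disparity(costs, d, klass):
    """Toy sketch of class-aware subpixel refinement: after the winning
    integer disparity d is found, refine it with an interpolation curve
    chosen by semantic class.  The 'road -> parabola, object -> equiangular'
    mapping here is an illustrative assumption."""
    c_l, c_c, c_r = costs[d - 1], costs[d], costs[d + 1]
    if klass in ('road', 'sidewalk'):
        # parabola fit: smooth curve suits slanted surfaces
        denom = c_l - 2.0 * c_c + c_r
        offset = 0.0 if denom == 0 else 0.5 * (c_l - c_r) / denom
    else:
        # equiangular (linear) fit: suits fronto-parallel objects
        if c_l > c_r:
            offset = 0.5 * (c_l - c_r) / (c_l - c_c)
        else:
            offset = 0.5 * (c_l - c_r) / (c_r - c_c)
    return d + offset
```

Both fits use only the three costs around the winner, so the class-dependent branch adds essentially no runtime cost.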

Semi-Automatic Image Annotation of Street Scenes

Andra Petrovai, Arthur D. Costea and Sergiu Nedevschi. Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV 17), Los Angeles, CA, USA, 11-14 June 2017, pp. 448-455. Scene labeling enables very sophisticated and powerful applications for autonomous driving. Training classifiers for this task would not be possible without large datasets of pixelwise labeled images, and manually annotating a large number of images is an expensive and time-consuming process. In this paper, we propose a new semi-automatic annotation tool for scene labeling tailored to autonomous driving. This tool significantly reduces the effort of the annotator and also the time...

Fast Boosting based Detection using Scale Invariant Multimodal Multiresolution Filtered Features

Arthur Daniel Costea, Robert Varga and Sergiu Nedevschi. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 17), Honolulu, HI, USA, 21-26 July 2017, pp. 993-1002. In this paper we propose a novel boosting-based sliding-window solution for object detection that keeps up with the precision of state-of-the-art deep learning approaches while being 10 to 100 times faster. The solution takes advantage of multisensorial perception and exploits information from color, motion and depth. We introduce multimodal multiresolution filtering of signal intensity, gradient magnitude and orientation channels in order to capture structure at multiple scales...
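The flavor of multiresolution filtered channel features can be conveyed with a toy sketch for a single (grayscale) modality: build a few channels, subsample them at several scales, and run each through a small filter bank. The function names, the two-filter bank, and the channel choices below are illustrative assumptions, not the paper's exact channels or filters.

```python
import numpy as np

def conv2_valid(img, k):
    """Valid-mode 2-D cross-correlation using plain NumPy slicing."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + oh, j:j + ow]
    return out

def filtered_channels(gray, scales=(1, 2)):
    """Toy multiresolution filtered-channel sketch for one modality:
    intensity and gradient-magnitude channels, subsampled per scale,
    each filtered with a small (illustrative) bank."""
    box = np.ones((2, 2)) / 4.0          # local smoothing filter
    diff = np.array([[1.0, -1.0]])       # horizontal difference filter
    gy, gx = np.gradient(gray.astype(float))
    channels = [gray.astype(float), np.hypot(gx, gy)]  # intensity + grad magnitude
    feats = []
    for s in scales:
        for ch in channels:
            sub = ch[::s, ::s]           # cheap per-scale downsampling
            for f in (box, diff):
                feats.append(conv2_valid(sub, f))
    return feats
```

In the multimodal setting, the same channel/filter machinery would be applied to color, motion and depth channels as well, and boosted classifiers would then select discriminative responses from the pooled feature set.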

Traffic Scene Segmentation based on Boosting over Multimodal Low, Intermediate and High Order Multi-range Channel Features

Arthur D. Costea and Sergiu Nedevschi. Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Redondo Beach, CA, USA, June 11-14, 2017, pp. 74-81. In this paper we introduce a novel multimodal boosting-based solution for semantic segmentation of traffic scenarios. Local structure and context are captured from both monocular color and depth modalities in the form of image channels. We define multiple channel types at three different levels: low, intermediate and high order channels. The low order channels are computed using a multimodal multiresolution filtering scheme and capture structure and color information from lower receptive fields. For the intermediate order...