Deliverable 3.2

First development and integration cycle of cloud infrastructure

This deliverable corresponds to Task 3.2: First development and integration cycle of cloud infrastructure. It documents the cloud infrastructure that has been selected and implemented within the project. Building on D3.1, this deliverable focuses on the GitLab source code management system, the Swift object store and the OpenStack cloud compute infrastructure functionality.

Deliverable 2.4

Second vehicle platform fully functional

This deliverable documents the functionality of the second vehicle platform. Since the vehicle is largely a copy of the first vehicle platform, this report thoroughly analyzes only the upgrades and differences in the setup.

Deliverable 2.3

Second vehicle platform available

This deliverable documents the functionality of the second vehicle platform. It details the sensor setup, presents the high-level processing framework, reports on communication capabilities and provides a brief overview of the safety elements and policies.

Deliverable 2.2

First vehicle platform fully operational

This deliverable documents the functionality of the first vehicle platform. It provides a thorough analysis of the sensor setup, presents the high-level processing framework, reports on communication capabilities and provides a brief overview of the safety elements and policies.

Deliverable 2.1

First vehicle platform available

This deliverable documents the availability of the first vehicle platform. It gives an overview of the available sensor data, the drive-by-wire functionality and the low-level functionality of the communication, acquisition and processing framework.

Modular Sensor Fusion for Semantic Segmentation

Hermann Blum, Abel Gawel, Roland Siegwart and Cesar Cadena
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018

Sensor fusion is a fundamental process in robotic systems as it extends the perceptual range and increases robustness in real-world operations. Current multi-sensor deep learning based semantic segmentation approaches do not provide robustness to under-performing classes in one modality, or require a specific architecture with access to the full aligned multi-sensor training data. In this work, we analyze statistical fusion approaches for semantic segmentation that overcome these drawbacks while keeping a competitive performance. The studied approaches are modular by construction, allowing to have different...
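One common form of statistical fusion is a per-pixel Bayesian combination of the class posteriors produced by each modality's segmentation network: multiply and renormalize, so a confident modality dominates an uncertain one. The sketch below illustrates this general idea only (it is not the paper's implementation), and all array names are hypothetical.

```python
import numpy as np

def fuse_probabilities(p_mod_a, p_mod_b, eps=1e-8):
    """Bayesian-style fusion of per-pixel class posteriors from two
    modalities. Inputs have shape (H, W, C); the element-wise product
    is renormalized so each pixel's class scores sum to one."""
    fused = p_mod_a * p_mod_b
    fused /= fused.sum(axis=-1, keepdims=True) + eps
    return fused

# Toy 2x2 image with 3 classes: one modality is uninformative
# (uniform), the other is confident about class 1. The fused
# posterior follows the confident modality.
p_a = np.full((2, 2, 3), 1.0 / 3.0)
p_b = np.zeros((2, 2, 3))
p_b[..., 0] = 0.1
p_b[..., 1] = 0.8
p_b[..., 2] = 0.1
fused = fuse_probabilities(p_a, p_b)
labels = fused.argmax(axis=-1)   # per-pixel class decision
```

Because each modality contributes an independent factor, the scheme is modular: a modality can be dropped or swapped without retraining a joint architecture.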

Fusion Scheme for Semantic and Instance-level Segmentation

Arthur Daniel Costea, Andra Petrovai, Sergiu Nedevschi
Proceedings of the 2018 IEEE 21st International Conference on Intelligent Transportation Systems (ITSC 2018), Maui, Hawaii, USA, 4-7 Nov. 2018, pp. 3469-3475

A powerful scene understanding can be achieved by combining the tasks of semantic segmentation and instance level recognition. Considering that these tasks are complementary, we propose a multi-objective fusion scheme which leverages the capabilities of each task: pixel level semantic segmentation performs well in background classification and delimiting foreground objects from background, while instance level segmentation excels in recognizing and classifying objects as a whole. We use a fully convolutional residual network...
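The complementarity described in the abstract can be illustrated with a minimal fusion rule (a simplified sketch, not the paper's multi-objective scheme): the semantic map supplies the background labeling, and confident instance masks overwrite it for foreground objects. All names and thresholds below are hypothetical.

```python
import numpy as np

def fuse_semantic_instance(semantic, instances, scores, score_thr=0.5):
    """semantic: (H, W) integer class map from the segmentation head.
    instances: list of ((H, W) boolean mask, class_id) detections.
    scores:    per-instance confidence scores.
    Confident instance masks override the semantic labels; lower-scoring
    instances are painted first so higher scores win overlaps."""
    fused = semantic.copy()
    for i in np.argsort(scores):
        mask, cls = instances[i]
        if scores[i] >= score_thr:
            fused[mask] = cls
    return fused

# Toy example: a 4x4 scene labeled "road" (class 0) everywhere,
# with one detected "car" (class 5) instance in the middle.
semantic = np.zeros((4, 4), dtype=int)
car_mask = np.zeros((4, 4), dtype=bool)
car_mask[1:3, 1:3] = True
fused = fuse_semantic_instance(semantic, [(car_mask, 5)], np.array([0.9]))
```

The rule keeps the semantic head's strength (background) while letting the instance head decide object identity, which is the division of labor the abstract describes.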

Super-sensor for 360-degree Environment Perception: Point Cloud Segmentation Using Image Features

R. Varga, A.D. Costea, H. Florea, I. Giosan, S. Nedevschi
Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC 2017), Yokohama, Japan, 16-19 Oct. 2017, pp. 1-8

This paper describes a super-sensor that enables 360-degree environment perception for automated vehicles in urban traffic scenarios. We use four fisheye cameras, four 360-degree LIDARs and a GPS/IMU sensor mounted on an automated vehicle to build a super-sensor that offers an enhanced low-level representation of the environment by harmonizing all the available sensor measurements. Individual sensors cannot provide a robust 360-degree perception due to their limitations: field of view, range,...
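The core operation behind transferring image features onto a point cloud is projecting each 3D point into a camera image and reading the label at the resulting pixel. The sketch below uses a plain pinhole model for clarity; the paper's fisheye cameras require a different projection model, and every name here is hypothetical.

```python
import numpy as np

def label_points(points, label_img, K):
    """Assign each 3D point (camera frame, shape N x 3) the semantic
    label of the pixel it projects to under pinhole intrinsics K (3 x 3).
    Points behind the camera or outside the image get label -1."""
    h, w = label_img.shape
    proj = points @ K.T                    # homogeneous pixel coordinates
    z = proj[:, 2]
    out = np.full(len(points), -1, dtype=int)
    valid = z > 0                          # in front of the camera
    u = np.round(proj[valid, 0] / z[valid]).astype(int)
    v = np.round(proj[valid, 1] / z[valid]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(valid)[inside]
    out[idx] = label_img[v[inside], u[inside]]
    return out

# Toy example: a 100x100 label image entirely of class 7, one point in
# front of the camera and one behind it.
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
labels_img = np.full((100, 100), 7)
pts = np.array([[0.0, 0.0, 1.0],   # projects to the image center
                [0.0, 0.0, -1.0]]) # behind the camera
point_labels = label_points(pts, labels_img, K)
```

Repeating this per camera and merging the per-camera results is one simple way to obtain a 360-degree labeled point cloud from several overlapping views.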

Semantic segmentation-based stereo reconstruction with statistically improved long range accuracy

V.C. Miclea, S. Nedevschi
Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV 17), Los Angeles, CA, USA, 11-14 June 2017, pp. 1795-1802

Lately, stereo matching has become a key aspect in autonomous driving, providing highly accurate solutions at relatively low cost. Top approaches on state-of-the-art benchmarks rely on learning mechanisms such as convolutional neural networks (ConvNets) to boost matching accuracy. We propose a new real-time stereo reconstruction method that uses a ConvNet for semantically segmenting the driving scene. In a "divide and conquer" approach, this segmentation enables us to split the large heterogeneous traffic scene into smaller...