WP2 - Robust Perception, Navigation and Environment Analysis

There is no autonomy without perception. The quality of perception, and its measurement, directly impact the robustness of decisions regarding localisation, navigation and environment understanding. Beyond the technical complexity of constantly evolving sensors, the question of robustness in perception is strongly related to the performance of semantic classification on intrinsically complex data.

Photo credit: @Oleksandr via Adobe Stock

Perception in autonomous systems is generally multimodal and multi-temporal, integrating vision as well as sensors such as lidar, RGB-D cameras, multi- or hyperspectral sensors, and haptic interfaces. Data fusion is a central issue here, encompassing control as well as knowledge of the semantics of the environment. Through the prism of robustness, questions of convergence and error control are essential (e.g. in particle filters). For autonomous systems, these questions also come with limited resources: data available for learning are typically scarce, given the difficulty of collecting them in real experiments. Frugal AI, combining real and simulated data, is preferred in such cases.
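As an illustration of the convergence and error-control questions mentioned above, here is a minimal sketch of a bootstrap particle filter for a 1-D localisation problem. The resampling step, triggered when the effective sample size collapses, is one standard way of keeping the estimate from degenerating. All names and noise parameters are illustrative assumptions, not part of WP2's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.5):
    """One bootstrap particle-filter step for a 1-D robot position.

    particles: (N,) array of state hypotheses
    weights:   (N,) normalised importance weights
    """
    # Predict: propagate each particle through a noisy motion model.
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)

    # Update: reweight by the measurement likelihood (Gaussian sensor model).
    likelihood = np.exp(-0.5 * ((measurement - particles) / meas_noise) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()

    # Error control: resample when the effective sample size collapses,
    # which prevents weight degeneracy over long runs.
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    return particles, weights
```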

Robust 3D reconstruction of the environment relies on the ability of sensors and data-processing algorithms to cope with noise under disturbed conditions. This work package addresses these issues through the development of deep-learning classifier architectures that provide noise-tolerant feature maps from various sources, such as standard color images, depth images or 3D point clouds, to achieve adaptive detection and robust recognition and tracking of moving obstacles. Building 3D models of the scene and of its evolution allows an in-depth comparative analysis of the robustness of supervised networks trained on observation data and of semi-supervised architectures trained on both annotated and unannotated images.
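A minimal sketch of how such a semi-supervised objective can be set up, assuming a PyTorch classifier: annotated images contribute a standard cross-entropy term, while unannotated ones contribute a consistency term between predictions under a perturbation. The function name, the perturbation and the weighting are illustrative assumptions, not the WP2 architecture itself.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, labeled_images, labels, unlabeled_images,
                         consistency_weight=0.5):
    """Supervised cross-entropy on annotated images plus a consistency
    term on unannotated ones (hypothetical setup, names illustrative)."""
    # Supervised term on the annotated batch.
    sup_logits = model(labeled_images)
    sup_loss = F.cross_entropy(sup_logits, labels)

    # Consistency term: predictions should agree under a simple
    # perturbation (additive noise stands in for a stronger augmentation).
    with torch.no_grad():
        target = model(unlabeled_images).softmax(dim=1)
    noisy = unlabeled_images + 0.05 * torch.randn_like(unlabeled_images)
    student = model(noisy).log_softmax(dim=1)
    cons_loss = F.kl_div(student, target, reduction="batchmean")

    return sup_loss + consistency_weight * cons_loss
```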

WP2 applies these techniques to agricultural robotics, e.g. to the navigation of the E-Tract robot (see WP6 on agricultural robots). However, agricultural robots do not use perception only for navigation but also for action. At the level of sensory-motor loops, fine observation of the scene with the aim of carrying out a precision action is also important. The aim is to propose strategies that take advantage of the robot's motion, including control of the lighting if needed, in order to make both the image and its interpretation as robust as possible. Motion allows multiple detections of the same objects from different angles, distances and illumination conditions, providing redundancy and therefore robustness.
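A minimal sketch of how this redundancy from motion can be exploited, assuming an upstream tracker that assigns a track_id to repeated detections of the same object: an object is accepted only if it has been observed from enough viewpoints, and its confidence is averaged across them to damp single-frame outliers. The interface and the threshold are hypothetical.

```python
from collections import defaultdict

def fuse_track_detections(detections, min_views=3):
    """Keep only objects seen consistently across several viewpoints.

    detections: iterable of (track_id, class_label, confidence) tuples,
    one per frame, produced by an upstream tracker (assumed interface).
    """
    per_track = defaultdict(list)
    for track_id, label, conf in detections:
        per_track[(track_id, label)].append(conf)

    fused = {}
    for (track_id, label), confs in per_track.items():
        # Redundancy across frames: require enough independent views,
        # then average confidences to smooth out single-view errors.
        if len(confs) >= min_views:
            fused[track_id] = (label, sum(confs) / len(confs))
    return fused
```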

Teams Involved

LaBRI IS / IMS MOTIVE / IMB OptimAI - ASTRAL / ONERA