Andrea De Maio

I am a PhD student in the Robotics and InteractionS (RIS) group of the Laboratory for Analysis and Architecture of Systems (LAAS), under the supervision of Simon Lacroix.

My research focuses on actively controlling robotic perception processes, with a particular emphasis on ego-motion estimation, using deep neural networks. I actively supported InFuse, a European project developing a data fusion framework for space robotics.


I obtained my Master's degree in Artificial Intelligence and Robotics from La Sapienza University of Rome. I also spent one year as an exchange student at Linköping University.

I wrote my Master's thesis on planning and real-time execution for mobile robots at Space Applications Services, under the joint supervision of Dr. Jeremi Gancet and Prof. Daniele Nardi. I also worked there as a robotics engineer, supporting the field validation of the ICARUS project, the proposal preparation of two awarded European projects (InFuse and LUVMI), and the feasibility study of a lunar polar sample return mission.

I was a trainee at the European Space Operations Centre (ESOC-ESA), where I worked on time-flexible activity planning under the supervision of Dr. Simone Fratini.

Right before starting my PhD, I spent six months working on robot exploration and investigating SLAM systems as a Junior Research Fellow in the Advanced Robotics department of the Italian Institute of Technology (IIT), under the supervision of Dr. Fei Chen.

Research interests

My research aims at making perception processes more adaptive to different contexts. I believe this can be achieved through two complementary approaches: (1) quality assessment and (2) active control. The first deals with assessing the performance of a given process, answering the question "How well am I doing?". The second tackles the problem of actively controlling perception processes, which can be seen either as a parameterization problem when dealing with a single process, or as a more complex dynamic optimization problem when dealing with a suite of algorithms. In both cases, the goal is to answer the question "How can I improve myself in this context?".

To answer the second question, I am working on breaking away from the standard end-to-end approach used in deep learning for computer vision, embedding geometry-based methods in a larger architecture built around convolutional neural networks. The goal is to include classical vision algorithms in the learning pipeline so as to predict the best set of parameters for every context.

In the same context, I recently started working on a system to correct vision-based motion estimates using visual information. The goal is a tool that supports classical estimators by improving the quality of their outputs and predicting their uncertainty.

My other research interests lie in uncertainty propagation and estimation, long-term autonomy, and exploration in GPS-denied environments.



7, Avenue du Colonel Roche
31400 Toulouse, France

Tel. +33 (0)5 61 33 78 44
Room H.110