Year of defence: 2023

Abstract

This thesis is set in the context of the aeronautical industry and was conducted within two projects. The first is Robotics For the Future of Aircraft Manufacturing (ROB4FAM), a joint laboratory between Airbus Operations and the Gepetto team of LAAS-CNRS, which studies the reactive generation of robotic motion for drilling and deburring tasks in the aeronautical industry. The second is the European H2020 project Memory of Motion (Memmo), coordinated by the Gepetto team and in which Airbus Operations is also involved. This project aims to develop methods for generating reactive and complex movements independently of the robot architecture, based on an extended perception of the environment and a preliminary learning of possible robot motions. Historically, the Gepetto team has worked on humanoid robots because they present a scientific challenge that requires the development of new concepts.

The perception of the environment can be broken down into main areas: knowing where our tools are, knowing where we are, and knowing where we are going. This thesis studies the solutions that can be integrated into a humanoid robot in order to address these problems. The work is based on the humanoid robot Talos, which perceives its environment using data from its LiDAR. It first shows that the position of the robot in its environment can be estimated accurately, using LiDAR data and by integrating a system developed by the University of Oxford and provided as part of the Memmo project. Secondly, the localisation of large objects at long distances, as found in the aeronautical industry, is studied. This localisation is addressed using 3D data and geometric descriptors such as the Fast Point Feature Histogram, and the study is extended to neural networks trained with state-of-the-art methods to localise objects carrying little texture information. In addition, a solution to the kidnapped robot problem, which consists of recognising the surrounding environment when the robot's localisation is initialised, is explored using LiDAR information, geometric descriptors and a dictionary mechanism. Finally, a ground plane detection system is integrated to allow the robot to plan its steps online.
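As an illustration of the kind of descriptor-based localisation summarised above, the sketch below shows a coarse 6D pose estimate of a known part in a LiDAR scan using FPFH features and RANSAC with the Open3D library. It is a minimal sketch under assumed settings, not the pipeline used in the thesis; the file names, voxel size and thresholds are placeholders.

    # Minimal sketch (not the thesis pipeline): coarse localisation of a known part
    # in a LiDAR scan using FPFH descriptors and RANSAC-based global registration.
    # File names, voxel size and thresholds are illustrative placeholders.
    import open3d as o3d

    VOXEL = 0.05  # downsampling resolution in metres (assumed value)

    def preprocess(pcd, voxel):
        """Downsample the cloud, estimate normals and compute FPFH descriptors."""
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
        return down, fpfh

    # "model.pcd" (part model cloud) and "scan.pcd" (LiDAR scan) are placeholders.
    model = o3d.io.read_point_cloud("model.pcd")
    scan = o3d.io.read_point_cloud("scan.pcd")
    model_down, model_fpfh = preprocess(model, VOXEL)
    scan_down, scan_fpfh = preprocess(scan, VOXEL)

    # Match FPFH descriptors and fit a rigid transform with RANSAC.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        model_down, scan_down, model_fpfh, scan_fpfh,
        True,          # mutual filter on descriptor correspondences
        1.5 * VOXEL,   # maximum correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,             # points per RANSAC sample
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(1.5 * VOXEL)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    print("Estimated pose of the part in the scan:")
    print(result.transformation)

In practice such a coarse estimate is typically refined, for example with ICP, before being used for manipulation or motion planning.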

Publications