Year of defence: 2019

Manuscript available here

Abstract

Humanoid robots need exteroceptive sensors such as cameras to perceive their environment and fulfill tasks in it. This thesis deals with the integration of visual information for robot control. More specifically, in order to realize a behavior, visual data are needed to drive the robot's whole-body trajectory generator, either on flat ground or in multi-contact.

We first recall how a humanoid robot is controlled for a locomotion task, starting from the reference positions sent to the planner, which computes the sequence of contacts used to generate the centroidal trajectory. This trajectory is injected into a whole-body trajectory generator that provides joint trajectories, sent to the robot through a stabilizer. Depending on the type of data produced by the vision block (considered as an input throughout this thesis), visual loops can be closed at different levels of this pipeline. The objective was to use the outputs of off-the-shelf vision blocks to provide experimental results based on the integration of the blocks above.

We first treated motion capture data as high-level information, feeding it to a Pattern Generator (PG) in charge of computing the robot's steps. One goal was to carry out integration tests for the Koroibot challenge by chaining motions designed to pass obstacles such as stairs or a beam. Results on the robot were not satisfying due to poor motion repeatability, caused by the assumptions linking the model to the real robot and by external phenomena such as mechanical wear and stabilizer effects. To better quantify the repeatability and reliability of the walking algorithms on the HRP-2 robot, we carried out experiments in collaboration with the French national laboratory of metrology and testing (LNE). Our collaborators provided test platforms (a climatic chamber, a slope with adjustable angle, and a horizontally oscillating floor) to measure Key Performance Indicators (KPIs).

Finally, to reach multi-contact motions based on vision output, 2D features projected onto the camera image plane have been expressed in a promising optimal control solver based on Differential Dynamic Programming (DDP). This allows the non-linearities of the feature projection to be taken into account directly in the whole-body trajectory generator. Simulations of multi-contact locomotion using simulated visual features were carried out with the TALOS robot. The main remaining issue lies in the inequality constraints, which are not yet implemented in the core of the DDP solver. In this last part, all the elements of the pipeline exposed previously are used together: from the pose specification to the motion run in simulation, which uses the stabilization module before being sent as actuator commands.
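To fix ideas on the data flow recalled above (reference pose, contact sequence, centroidal trajectory, whole-body trajectories, stabilizer), here is a purely illustrative sketch. None of the names or data structures below come from the software used in the thesis; they are toy placeholders that only mirror the chaining of the blocks.

```python
"""Toy sketch of the locomotion pipeline: reference pose -> contacts ->
centroidal trajectory -> whole-body trajectories -> stabilizer -> robot.
All names are hypothetical placeholders, not the thesis software."""

import numpy as np


def plan_contacts(goal_xy, step_length=0.2):
    """Toy contact planner: straight-line footsteps toward a 2D goal."""
    n_steps = int(np.ceil(np.linalg.norm(goal_xy) / step_length))
    direction = goal_xy / max(np.linalg.norm(goal_xy), 1e-9)
    return [(i * step_length) * direction for i in range(1, n_steps + 1)]


def centroidal_trajectory(contacts, samples_per_step=10):
    """Toy centroidal generator: CoM linearly interpolated between contacts."""
    waypoints = np.vstack([np.zeros(2)] + contacts)
    com = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, samples_per_step, endpoint=False):
            com.append((1 - t) * a + t * b)
    return np.array(com)


def whole_body_trajectory(com_traj):
    """Placeholder whole-body generator: identity on the CoM samples."""
    return com_traj


def stabilize(joint_traj):
    """Placeholder stabilizer: pass-through before actuator commands."""
    return joint_traj


commands = stabilize(whole_body_trajectory(
    centroidal_trajectory(plan_contacts(np.array([1.0, 0.5])))))
print(commands.shape)  # one sample of the command trajectory per time step
```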
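The last contribution expresses 2D visual features, i.e. 3D points projected through a camera model onto the image plane, as residuals inside the running cost of a DDP solver. A minimal sketch of such a residual, its quadratic cost, and the projection Jacobian that a DDP solver needs for its derivatives is given below, assuming a simple pinhole model; the intrinsic values and function names are illustrative, not the thesis code (libraries such as Crocoddyl provide this kind of cost model in practice).

```python
import numpy as np


def project(p_cam, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D point expressed in the camera frame."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])


def projection_jacobian(p_cam, fx=600.0, fy=600.0):
    """Jacobian of the pixel coordinates w.r.t. the 3D point, needed by
    DDP, which relies on derivatives of the costs along the trajectory."""
    x, y, z = p_cam
    return np.array([[fx / z, 0.0, -fx * x / z**2],
                     [0.0, fy / z, -fy * y / z**2]])


def feature_cost(p_cam, target_px, weight=1.0):
    """Quadratic tracking cost on the projected feature. The projection is
    non-linear in the state, which is why expressing the feature directly
    in the solver lets DDP handle its non-linearities, instead of tracking
    a pre-computed Cartesian target."""
    r = project(p_cam) - target_px
    return 0.5 * weight * float(r @ r)


# Example: a feature 2 m in front of the camera, desired at image center.
print(feature_cost(np.array([0.1, -0.05, 2.0]), np.array([320.0, 240.0])))
```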

Publications