Year of defence: 2010

Manuscript available here

Abstract

This work addresses the problem of autonomously constructing the 3D model of an unknown object using a humanoid robot. More specifically, we consider an HRP-2 robot, guided by vision, operating in a known and possibly cluttered environment. Our method takes into account the available visual information, the constraints on the robot body, and the model of the environment in order to generate relevant postures and the necessary motions around the object.

Our two solutions to the Next-Best-View problem are built on a dedicated posture generator, in which a posture is computed by solving an optimization problem. The first solution is a local approach in which an original rendering algorithm is designed specifically so that it can be included directly in the posture generator; this rendering algorithm can display complex 3D shapes while taking self-occlusions into account. The second solution seeks more global solutions by decoupling the problem into two steps: (i) find the best sensor pose while satisfying a reduced set of constraints on the humanoid, and (ii) generate a whole-body posture with the posture generator. The first step relies on global sampling and BOBYQA, a derivative-free optimization method, to converge toward relevant viewpoints in non-convex feasible configuration spaces.

Our approach is tested in real conditions using a coherent architecture that integrates various complex software components tailored to the specificities of the humanoid robot. This experiment brings together ongoing work on motion planning, motion control, and visual processing, paving the way for complete 3D object reconstruction in future work.
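The two-step decomposition of the second solution can be sketched as follows. This is a minimal illustrative sketch, not the thesis implementation: the objective `view_gain` and the feasibility test `feasible` are toy stand-ins (the actual method uses a rendering-based visibility gain and the humanoid's reduced constraint set), and a simple pattern search stands in for BOBYQA as the derivative-free refinement.

```python
import math
import random

def view_gain(pose):
    # Toy stand-in for the expected information gain of a sensor pose
    # (x, y, yaw): peaks when the camera is ~1 m from the object at the
    # origin and is aimed at it.
    x, y, yaw = pose
    dist = math.hypot(x, y)
    aim = math.cos(yaw - math.atan2(-y, -x))  # 1 when facing the object
    return math.exp(-(dist - 1.0) ** 2) * max(aim, 0.0)

def feasible(pose):
    # Toy stand-in for the reduced set of humanoid constraints,
    # e.g. a reachable annulus around the object.
    x, y, _ = pose
    return 0.5 <= math.hypot(x, y) <= 2.0

def next_best_view(n_samples=200, seed=0):
    rng = random.Random(seed)
    # Step (i)a: global sampling of candidate sensor poses.
    candidates = []
    while len(candidates) < n_samples:
        pose = (rng.uniform(-2, 2), rng.uniform(-2, 2),
                rng.uniform(-math.pi, math.pi))
        if feasible(pose):
            candidates.append(pose)
    best = max(candidates, key=view_gain)
    # Step (i)b: derivative-free local refinement of the best sample
    # (pattern search here; the thesis uses BOBYQA).
    step = 0.2
    while step > 1e-3:
        improved = False
        for i in range(3):
            for delta in (step, -step):
                trial = list(best)
                trial[i] += delta
                trial = tuple(trial)
                if feasible(trial) and view_gain(trial) > view_gain(best):
                    best, improved = trial, True
        if not improved:
            step *= 0.5
    return best

# Step (ii), generating a whole-body posture for this sensor pose with the
# posture generator, is outside the scope of this sketch.
pose = next_best_view()
print(pose, view_gain(pose))
```

The point of the decoupling is visible even in this sketch: the global sampling step copes with the non-convex feasible region, while the derivative-free refinement improves the viewpoint locally without requiring gradients of a rendering-based objective.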