Entering the Perception / Decision / Action loop
Perception is the means by which robots gather knowledge about their
environment. But before this knowledge can be structured into models or
exploited, some basic perception functionalities are required. These
pages gather some of our work on such functionalities. They are all related to
vision, as it is the main sensor we currently use within the Eden project:
- Stereo-vision describes how an image of 3D
points is produced from a pair of calibrated images.
- Image segmentation describes how color
and texture attributes can be exploited to segment the perceived
scenes into regions.
- Panoramic vision presents the principle of
panoramic image acquisitions (work that exploits such images to localize
the rover is presented here).
- Interest point matching is an important
basic algorithm, whose results can be exploited by several of these
functionalities.
- Some visual feature tracking algorithms are also described.
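To make the stereo-vision step above concrete, here is a minimal sketch of how a single 3D point is recovered from a matched pixel pair. The function name `triangulate` and its parameters are illustrative, not from the project's code; it assumes rectified images and the pinhole camera model, where the disparity is the horizontal offset of a point between the left and right images.

```python
def triangulate(u, v, disparity, f, cx, cy, baseline):
    """Recover the 3D point, in the left-camera frame, for pixel (u, v)
    of the rectified left image, given its disparity (in pixels),
    the focal length f (pixels), the principal point (cx, cy), and
    the stereo baseline (metres)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = f * baseline / disparity   # depth along the optical axis
    x = (u - cx) * z / f           # lateral offset from the optical axis
    y = (v - cy) * z / f           # vertical offset from the optical axis
    return (x, y, z)

# Example: a 10-pixel disparity with f = 500 px and a 10 cm baseline
# places the point 5 m in front of the camera.
print(triangulate(330, 240, 10, 500, 320, 240, 0.1))
```

Applying this to every matched pixel of the rectified pair yields the image of 3D points mentioned above; in practice the matching itself (finding the disparity for each pixel) is the expensive step.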