Learning Binary Shapes as Compression and its Cellular Implementation


We present a methodology for learning to recognize binary shapes, based on the principle that recognition can be understood as a process of information compression. Our approach, directed towards adaptive target-tracking applications, is intended to suit fine-grained parallel architectures, such as cellular automata machines that can be integrated into artificial retinas. This methodology, fruitfully explained within the framework of mathematical morphology, is then particularized with a view to its actual implementation.
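The abstract names mathematical morphology as the framework and cellular automata as the target architecture. As an illustrative sketch only — not the paper's actual algorithm — the fragment below shows how a basic morphological operator (binary erosion) reduces to a purely local cell-update rule, which is exactly the kind of computation a cellular automata machine can apply to every pixel in parallel; the structuring element chosen here (a 4-connected cross) is an assumption for the example.

```python
def erode(grid, struct=((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))):
    """Binary erosion as a local rule: a cell stays 1 only if every
    neighbour in the structuring element (here a 4-connected cross,
    an assumption for this sketch) is also 1. Cells outside the grid
    count as 0, so the shape shrinks at its border."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = int(all(
                0 <= i + di < h and 0 <= j + dj < w and grid[i + di][j + dj]
                for di, dj in struct))
    return out

# A 3x3 binary square: erosion with the cross leaves only its centre,
# since border cells of the square have at least one 0 neighbour.
shape = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
eroded = erode(shape)
```

Because each output cell depends only on a fixed neighbourhood of input cells, the same rule can be executed simultaneously at every site of a cellular array, which is the property that makes morphological pipelines attractive for artificial-retina hardware.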

In ACCV'95, 2nd Asian Conference on Computer Vision