Neuromorphic Perception
Modern vision systems are powered by standard frame-based cameras and deep learning architectures with enormous capacity and sample complexity.
State-of-the-art perceptual systems can solve relatively static tasks such as face and object recognition, but they are inefficient at processing video and at computing control and planning. They struggle with dynamic tasks such as estimating rapid optical flow, depth, ego-motion, and visual odometry, all of which are essential to embodied navigation. Most importantly, their power consumption is notoriously high, and the information bandwidth between the sensor and throughout the computation pathways is orders of magnitude higher than in animal brains.

We propose a research program on building bio-inspired perceptual systems, starting from event-based cameras and switching from classical deep learning to spiking neural networks. We outline a concrete research procedure using the example of optical flow and propose to extend it to the computation of reactive behaviors as well as perceptual organization and 3D reconstruction.
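To make the optical-flow starting point concrete, the sketch below illustrates one classic event-based approach: fitting a local plane to the timestamps of events (the "surface of active events") and reading normal flow off the gradient of that time surface. This is a minimal illustration under simplifying assumptions, not the proposed system; the synthetic data, function name, and parameters are all hypothetical.

```python
import numpy as np

def plane_fit_flow(events):
    """Estimate local optical flow from a patch of events (x, y, t)
    by least-squares fitting a plane t = a*x + b*y + c to the event
    timestamps. The flow velocity is the inverse gradient of this
    time surface: v = (a, b) / (a^2 + b^2), in pixels per second."""
    xs, ys, ts = events[:, 0], events[:, 1], events[:, 2]
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, _), *_ = np.linalg.lstsq(A, ts, rcond=None)
    g2 = a * a + b * b  # squared gradient of the time surface
    if g2 < 1e-12:
        return 0.0, 0.0  # flat time surface: no measurable motion
    return a / g2, b / g2

# Hypothetical synthetic input: events from a vertical edge
# sweeping rightward at 20 px/s, so t = x / vx plus timing noise.
rng = np.random.default_rng(0)
vx_true = 20.0
xs = rng.uniform(0.0, 10.0, 200)
ys = rng.uniform(0.0, 10.0, 200)
ts = xs / vx_true + rng.normal(0.0, 1e-4, 200)
events = np.column_stack([xs, ys, ts])

vx, vy = plane_fit_flow(events)  # recovers roughly (20, 0)
```

Note the contrast with frame-based flow: there is no image differencing here; motion is recovered directly from the asynchronous event timestamps, which is what makes the event-based formulation a natural fit for low-latency, low-power spiking pipelines.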