Friday, November 17th, 4:00 pm, Tolentine 215
reception to follow
Although the vast majority of synapses in the cerebral cortex convey lateral or top-down feedback, most convolutional neural networks (CNNs), especially if trained by backpropagating a classification error, are based on feed-forward architectures. Even recurrent CNNs (RCNNs), which utilize state-dependent feedback, are often based on a feed-forward backbone. In a similar vein, although cortical receptive fields are richly dynamic and combine temporal with other types of information, most CNNs employ purely static representations in which temporal information is ignored. Input to biological neural systems is also event- or spike-based, which preserves the fine spatiotemporal correlations produced when images and objects slide across the pixel array. Finally, whereas CNNs trained with backprop require large amounts of labeled training data, biological systems learn primarily from raw, unlabeled sensory inputs. Here, I describe how we are investigating the hypothesis that fundamental improvements in the performance of neurally inspired computer algorithms can be achieved by incorporating lateral and top-down feedback, spatiotemporal representations, and event-driven dynamics into hierarchical neural networks, thereby enabling the unsupervised learning of internal representations that model the deep structure of the environment in which they are embedded.