Title: Learning, feature representations, and dynamics of the visual system

Speaker: Dr. Malte Rasch, Research Staff Member, IBM Research AI, TJ Watson Research Lab, New York

Time: 10:00-11:00 open talk, 11:00-11:30 chalk talk, April 24, 2019

Location: Room 1113, Wang Kezhen Building


In recent years, deep learning architectures, such as convolutional networks, have become very popular and powerful for computer vision tasks such as object recognition and image classification. They achieve remarkable performance on large labeled data sets by learning hierarchical feature representations in a supervised manner, and are often compared to the (hierarchical) visual system. In this talk, I will contrast the representational structure of deep learning networks with how information is represented by neural activity in the visual system. In particular, using computational modeling, machine learning techniques, and analysis of electrophysiological recordings, I explore information representation and dynamics in the visual system from four different angles: (1) the temporal neural code of single neurons in the primary visual cortex (V1) during movie viewing, (2) the firing state of V1 during movie viewing, assessed by comparing activity recorded from macaque monkeys with a detailed spiking model simulation of V1, (3) the learning-induced modifications of the V1 population code during visual perceptual training of awake monkeys, and (4) the oscillatory dynamics of distinct neuronal cell types in behaving mice. Together, these results illustrate the rich diversity of feature representations, learning behaviors, and temporal dynamics in the visual system, in contrast to the static view of image recognition embodied by artificial networks.