This project takes recordings from the IT cortex in humans as they viewed a wide array of visual objects and attempts to determine the features of those objects that drive the responses of individual cells.
As the natural world is made up of objects, understanding how objects are encoded in the brain will open the door to a deep mechanistic understanding of sensory perception. Visual perception, like all sensory perception, arises from the electrical activity of individual neurons in the brain; understanding how individual neurons encode visual features is therefore a central problem in systems neuroscience. Prior work in non-human primates, together with functional magnetic resonance imaging (fMRI) in humans, has revealed that the inferotemporal (IT) cortex harbors a high-level code for visual objects. In this project, we recorded from the IT cortex in humans as they viewed a wide array of visual objects and attempted to determine the features of those objects that drive the responses of individual cells. To that end, we reverse engineered the encoding process, using a deep generative neural network model to produce artificial images from the neural responses to the stimulus set. In addition, we are running a closed-loop experiment to test whether neurons in the IT cortex of a patient show the expected responses to the generated images.
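One way to sketch the reverse-engineering step is as a regression from recorded responses into the latent space of a pretrained generative model, whose decoder then renders a synthetic image. The sketch below is a minimal illustration of that idea only: the data are random stand-ins, the shapes and the ridge-regression decoder are assumptions for illustration, and the actual generative model and fitting procedure used in the project are not shown.

```python
import numpy as np

# Hypothetical sketch: map IT firing-rate responses back into the latent
# space of a generative image model. All names, shapes, and the choice of
# ridge regression are illustrative assumptions, not the project's pipeline.
rng = np.random.default_rng(0)

n_stimuli, n_neurons, latent_dim = 200, 50, 16

# Stand-ins for real data: responses to each stimulus image, and the latent
# codes a generative model would assign to those same images.
responses = rng.normal(size=(n_stimuli, n_neurons))
latents = rng.normal(size=(n_stimuli, latent_dim))

# Closed-form ridge regression from neural responses to latent codes.
alpha = 1.0
W = np.linalg.solve(
    responses.T @ responses + alpha * np.eye(n_neurons),
    responses.T @ latents,
)  # shape: (n_neurons, latent_dim)

# Decoding a held-out response pattern yields a latent vector that the
# generator network (not shown here) would render into an artificial image.
new_response = rng.normal(size=(1, n_neurons))
decoded_latent = new_response @ W
print(decoded_latent.shape)  # (1, 16)
```

A closed-loop test would then present the rendered image back to the subject and compare the evoked responses against `new_response`.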