Iain DeWitt, PhD
Ph.D. in Neuroscience, Georgetown University
B.A. in Psychobiology, Earlham College
Dr. DeWitt is currently a post-doctoral fellow at the National Institutes of Health.
My work focuses on auditory word perception. Identifying a sensory input as an instance of a particular word is non-trivial. In the auditory system, the peripheral sensory apparatus, the cochlea and the hair cells of its sensory epithelium, transduces air pressure waves into neural impulses. Prior to neural transduction, the cochlea mechanically decomposes the input signal into a spectral representation, that is, a frequency-specific representation. Neural transduction of a complex tone composed of 400 and 800 Hz components, for instance, occurs through the stimulation of hair cells at two separate regions of the cochlea. The intensity of stimulation at each region corresponds to the respective energies of the 400 and 800 Hz components in the stimulus. Speech acoustics, thus, are initially represented in the nervous system as time-varying spectral energy.
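The cochlea's place-coded decomposition can be illustrated with a toy computation. The sketch below uses a Fourier transform as a rough stand-in for cochlear filtering (the cochlea's mechanical filter bank is not literally an FFT); the 400 and 800 Hz components and their relative intensities follow the example in the text.

```python
import numpy as np

# Toy analogue of cochlear spectral decomposition: a complex tone with
# 400 and 800 Hz components is decomposed into frequency-specific energy,
# much as the cochlea stimulates hair cells at two separate places.
fs = 8000                      # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples
signal = np.sin(2 * np.pi * 400 * t) + 0.5 * np.sin(2 * np.pi * 800 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# The two spectral peaks sit at the component frequencies, and the peak
# magnitudes track the components' relative intensities in the stimulus.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks))  # [400.0, 800.0]
```

Because the 800 Hz component has half the amplitude, its spectral peak is correspondingly smaller, mirroring the stimulation-intensity coding described above.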
In the primate, this spectral-acoustic representation is largely preserved as the signal ascends from the periphery to auditory cortex. As processing proceeds, however, the signal must be mapped onto some form of abstract, generalized representation. Were this not to occur, each instance of a spoken word would exist in the brain only as a unique sensory impression and not concurrently as a unique impression and as an exemplar of a known generic form. Theory developed in the fields of pattern recognition and computational neuroscience suggests that the mapping of spectral-acoustic representations onto abstract representations is performed by a hierarchical feature network. In short, the network performs logical AND-like operations to construct complex combinatorial representations from granular primitive representations. Where two neurons might independently code for the intensity of spectral-acoustic energy at 400 and 800 Hz, for instance, a logical AND-like operation might bind these representations into a single representation coded for by a higher-order neuron. Similarly, the network performs logical OR-like operations to allow for tolerance, that is, invariance in neural response, to the exact form of an input stimulus. A higher-order neuron, for instance, might code for the presence of input in any one of several lower-order combinatorial units, each respectively coding for the co-occurrence of energy at 350 and 700 Hz, 400 and 800 Hz, and 450 and 900 Hz. Through successive iterations of logical AND-like and OR-like operations, invariant representations of complex naturalistic forms emerge in the network. Neurophysiology has shown that the primate brain performs logical AND-like operations in the supragranular layers of primary auditory cortex and the granular layers of secondary auditory cortex. Brain imaging has shown that secondary auditory regions perform logical OR-like operations.
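The AND-like and OR-like operations described above can be sketched in a few lines of code. This is a minimal illustration, not a model of any actual neural circuit: binary units stand in for neurons, and the frequency pairs follow the text's example (350/700, 400/800, 450/900 Hz).

```python
def and_unit(inputs):
    # AND-like operation: fires only when all lower-order inputs are active,
    # binding them into one combinatorial representation.
    return all(inputs)

def or_unit(inputs):
    # OR-like operation: fires when any one lower-order combination is
    # active, giving tolerance to the exact form of the input.
    return any(inputs)

def higher_order_response(active_freqs):
    # Granular primitive units code for energy at single frequencies.
    energy = {f: (f in active_freqs) for f in (350, 400, 450, 700, 800, 900)}
    # AND-like units bind frequency pairs into combinatorial units.
    pairs = [and_unit((energy[a], energy[b]))
             for a, b in ((350, 700), (400, 800), (450, 900))]
    # An OR-like unit pools the combinatorial units, responding invariantly
    # to any of the coded frequency pairings.
    return or_unit(pairs)

print(higher_order_response({400, 800}))  # True  - a coded pairing is present
print(higher_order_response({450, 900}))  # True  - invariance to exact frequencies
print(higher_order_response({400, 900}))  # False - no complete pairing is present
```

Iterating these two operations over many layers is what lets such a network build invariant representations of complex forms from simple spectral primitives.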
A core prediction of hierarchical feature networks is that combinatorial elaboration is strongly influenced by sensory experience. That is, higher-order combinatorial representations should exist only for spectral-acoustic patterns that are encountered with high frequency. A monolingual American English speaker, for instance, should not have full higher-order representations for Hindi words. My work leverages this prediction to investigate the recognition of words by auditory cortex.
DeWitt, I., Rauschecker, J. P. (2012). Phoneme and word recognition in the auditory ventral stream. PNAS, 109, E505–E514.
Binary masks from the main figures in DeWitt & Rauschecker (2012). The masks are in Talairach space, not MNI space.