Bernhard Englitz

Computational neuroscience 

Institute for Neurophysiology, Donders Institute for Neuroscience, Nijmegen, The Netherlands


Interested in the computational principles of neural systems, Dr. Englitz studied Cognitive Science and Mathematics in Osnabrück, Zurich and Leipzig, supported by the German national merit foundation. During a Fulbright fellowship, he worked with Dr. Terry Sejnowski at the Salk Institute in San Diego, before starting his doctoral studies at the Max Planck Institute for Mathematics in the Sciences in Leipzig, in collaboration with the university's biological faculty. Addressing the problem of synaptic transmission and signal representation in the auditory brainstem, Dr. Englitz leveraged computational methods to solve biological problems. In his postdoctoral studies, he worked on decoding methods for cortical representations of ambiguous stimuli, thereby understanding perception in the context of stimulus history. Since 2014, he has led a research group on Computational Neuroscience at the Donders Center for Neuroscience in Nijmegen, the Netherlands. His research focuses on the neural mechanisms of processing under complex stimulus conditions.


Computational principles in auditory processing

The brain is an evolved, complex dynamical system. It integrates internal states with current stimuli to optimize the organism’s behavior and survival. This integration can best be understood as a computation that follows a limited set of principles. In the auditory system, these principles can be exemplified well on multiple levels. From the signal transformations that achieve submillisecond precision for sound localization in the auditory brainstem, to the general filterbank representation in the auditory cortex, to evidence integration for statistical stimuli in the parietal cortex, we identify simple principles that link auditory responses with the underlying computations. In the last part, we will focus on the computational principles that govern the active, dynamical state of auditory (and other) cortices, the asynchronous state. This reduced coactivation between neurons has the profound consequence that dynamics speed up and decoding becomes more reliable on single trials. Identifying these principles paves the way for a complete understanding of auditory processing, and thus enables the design of algorithms and machines that mimic human performance in assistive devices.