One of the brain’s main jobs is using sensory information to control action. We study three aspects of this process:

  • Coordination. How do multiple senses, including vision, touch and proprioception (body-position sense), cooperate to steer multiple body parts?
  • Representations. Information can always be represented in various ways — e.g., an object’s location can be described relative to the eyes, head or trunk — and a well-chosen representation can simplify certain computations. Does the brain use representations that simplify motor control?
  • Learning. When we grow, age or are injured, our brains must adjust or repair our control systems. How does the brain know what adjustments are needed, given the available sensory information?
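
The choice of reference frame mentioned above can be made concrete with a coordinate transformation. As a toy sketch (the function name, the planar geometry and the 90-degree example are ours, purely for illustration): converting an eye-centered target location into head-centered coordinates amounts to rotating by the eye's orientation in the head.

```python
import math

def eye_to_head(target_eye, eye_azimuth_deg):
    """Rotate an eye-centered 2D target location into head-centered
    coordinates, given the eye's horizontal rotation in the head.
    (Toy planar model: rotation only, no translation.)"""
    a = math.radians(eye_azimuth_deg)
    x, y = target_eye
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A target straight ahead of an eye turned 90 degrees ends up
# off to one side in head coordinates.
head_coords = eye_to_head((0.0, 1.0), 90.0)
```

A computation such as steering the eyes toward the target is simplest in eye-centered coordinates, while steering the arm may be simplest in trunk-centered ones; this is the sense in which a well-chosen representation can simplify control.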

We simulate sensorimotor systems on computers to identify the computational problems involved and to reveal the implications of different theories. Then we test competing theories using neuroimaging and behavioural experiments: presenting human subjects with multisensory stimuli and recording their responses — eye, head and limb movements — at high resolution in 3D.
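
A simulation of this kind can be as simple as a proportional feedback controller steering an effector toward a target. The sketch below is a hypothetical minimal example, not the lab's actual model; the gain and step count are made up.

```python
def simulate_reach(target, gain=0.2, steps=50):
    """Proportional feedback control: at each time step, move the
    effector by a fixed fraction of the remaining error.
    (Toy one-dimensional model with hypothetical parameters.)"""
    pos = 0.0
    trajectory = [pos]
    for _ in range(steps):
        pos += gain * (target - pos)
        trajectory.append(pos)
    return trajectory

traj = simulate_reach(10.0)
```

Even a toy like this exposes real issues — e.g. how sensory delays or noisy position estimates destabilize the loop — which is the role simulation plays before the behavioural experiments.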

Recently we have begun studying reaching movements in virtual reality. Subjects can see the shapes they are moving; the hand itself is usually invisible, but in this example we rendered it semi-transparent. In the first two reaches the cube moves with the hand, but after that the reaches must compensate for the cube moving along a different path: