Every second, our senses are flooded with more information than our brains can process. This information is often noisy and ambiguous, yet we still have a good idea of what is happening around us. Psychologists and neuroscientists think this is because our brains use context and prior knowledge to fill in the gaps in what we perceive.
For example, when we listen to a conversation in a noisy room, we often have trouble hearing what is being said. However, if we know the topic under discussion, the conversation becomes much easier to follow. When we go on holiday to a different country and unexpectedly run into our neighbour, we may struggle to recognize their face, yet when we meet them near our home, their face is instantly recognizable. Finally, once we learn that monarch butterflies have brightly colored wings with distinctive black, orange and white markings, we start to notice them everywhere.
All these examples show that knowledge of the context, together with prior knowledge, guides the process of perception. We never see the world exactly as it is; we always perceive it through the lens of what we know and what we are likely to see. Neuroscientists consider this a fundamental property of perception, which is why they find it important to understand how the brain uses prior knowledge and context. The most popular theory explaining this process is called “predictive coding”. It holds that the brain compares what it expects to see with what it actually sees, computing a so-called “prediction error”.
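The core computation the theory proposes is very simple: the prediction error is just the difference between what was expected and what actually arrived at the senses. The following is a minimal illustrative sketch, not the project's model; the numbers and the "dog vs. ball" encoding are invented for illustration.

```python
import numpy as np

# Hypothetical example: the brain's expectation and the sensory
# evidence, each encoded as a small vector of feature activations.
expected = np.array([0.8, 0.1, 0.1])  # prior belief: "probably a dog"
observed = np.array([0.2, 0.7, 0.1])  # sensory input: "looks like a ball"

# Prediction error = what arrived minus what was predicted.
# A large error signals that the prior belief needs revising.
prediction_error = observed - expected
```

Here the large mismatch on the first two features is the signal that would, under predictive coding, drive the brain to abandon the "dog" hypothesis.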
The brain is organized in a hierarchical manner. Lower brain regions, which are close to the senses, are sensitive to simple features, such as line fragments. Higher regions are sensitive to more complex features, such as textures (for example, animal fur). The highest regions may specialize in recognizing types of objects, such as dogs or balls. These different levels are constantly communicating with each other.
Predictive coding is a theory about how this communication works. It assumes that higher regions only communicate their expectations to lower regions, while lower regions only communicate prediction errors to higher regions. For example, if the highest regions expect to see a dog, they may tell the middle regions “you should be seeing fur-like textures”, which would cause the middle regions to tell the lowest regions that they “should be seeing parallel lines” (because this is what fur may look like to the lowest-level parts of the visual brain). But if instead of a dog you see a ball, the lowest regions will respond “I don’t see parallel lines, I see a flat glossy surface”, the middle regions will report back “actually there is no fur, but maybe we’re looking at leather”, and the highest regions will say “OK, there is no dog. Maybe it is a ball.” and send back revised predictions, now also expecting, for example, that the object is round.
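The exchange described above can be sketched as a simple loop: a prediction flows down, a prediction error flows up, and the belief is nudged toward whatever reduces the error. This is only an illustrative toy, not the project's actual model; the one-number "belief", the learning rate, and the step count are all assumptions made for the example.

```python
def update_belief(belief, sensory_input, learning_rate=0.5, steps=20):
    """Repeatedly revise a higher-level belief until its top-down
    prediction matches the bottom-up sensory input."""
    for _ in range(steps):
        prediction = belief                      # top-down: "this is what you should see"
        error = sensory_input - prediction       # bottom-up: the prediction error
        belief = belief + learning_rate * error  # revise the expectation
    return belief

# The higher region starts out expecting "dog" (encoded here as 1.0),
# but the senses keep reporting "ball" (0.0); over iterations the
# belief converges toward what is actually being seen.
final_belief = update_belief(belief=1.0, sensory_input=0.0)
```

The key design point the toy captures is that only two kinds of messages are ever exchanged: predictions going down and errors going up.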
Some researchers think this theory is so important that it might be a “theory of everything” for the brain, like a single equation that explains how the brain works. So far, however, scientists have found no decisive evidence on whether the brain really computes prediction errors. Finding evidence that the brain codes prediction error is the aim of the current project. I will use brain scans (fMRI) to see how the brain responds to different kinds of information and will test whether it really uses prediction errors. In the first study, I will show people words and see whether their brains respond differently when they expected to see a different word. In the second study, I will show more or less expected words on a cluttered background and see whether the brain attends to the background differently depending on what it expects to see in the foreground. In both studies, I will try to decode specific patterns of prediction errors. If I find them, this will provide strong support for the predictive coding theory. If I do not, it will place constraints on the theory and help guide future research.