Although the human brain is skilled at facial recognition and discrimination, new research from Georgetown University Medical Center suggests that the brain may not have developed a specific ability for "understanding faces" but instead uses the same pattern-recognition mechanisms to distinguish between people that it uses to tell apart other groups of objects, such as plants, animals and cars.
The study, published in the April 6 edition of the journal Neuron, adds new evidence to the debate over how the brain understands and interprets faces, an area of neuroscience that has been somewhat controversial. Because the process of facial perception is complicated and involves different and widespread areas of the brain, there is much that remains unknown about how humans perform this task.
"We found that faces aren't special in the way many scientists once thought," says Maximilian Riesenhuber, PhD, assistant professor of neuroscience and senior author of the study. "Rather, they are particular group of objects which the brain has learned to distinguish very well, much as it would for any other similar objects that are critical to human survival and communication."
Riesenhuber hopes that integrative research of this kind will help scientists better understand the neural bases of object recognition deficits in mental disorders such as autism, dyslexia or schizophrenia. People with autism, for example, have difficulty recognizing faces, which might be caused by an impairment at the neural level. Breakthroughs in this kind of research could someday lead to targeted therapies for the millions of people who suffer from these disorders.
"The findings are exciting because we are now going to apply this technique to probe the neural bases of face perception deficits in autism," Riesenhuber said.
Because humans are so talented at recognizing faces, many in the scientific community have argued that the brain has developed unique mechanisms for understanding and distinguishing them. However, Riesenhuber and his team thought that a different model could help explain some of the existing knowledge about facial recognition, including a behavioral phenomenon known as the "inversion effect": turning a picture upside down strongly impairs people's ability to recognize faces, whereas the ability to recognize other objects, such as houses, is affected only slightly.
"We think that this is because we are face 'experts,' having learned over many years to spot fine differences in upright faces, but not in inverted faces. That experience makes faces unique, but there's nothing scientifically special about faces," Riesenhuber says.
The Georgetown scientists hypothesized that facial recognition does not rely on face-specific mechanisms but instead uses the same neural mechanisms for faces that are used to discriminate other objects. Over the years, because of the importance of facial identity and expression for social communication, humans have simply developed a strong talent for recognizing and distinguishing faces. This experience with faces then leads to the learning of a population of neurons finely tuned to different faces, Riesenhuber says.
The researchers tested their theory using a computational model previously developed by Riesenhuber and his team to predict how different neurons would respond during the recognition of non-face objects. They then showed that this simple model, even though it was not developed for face recognition, could quantitatively account for the inversion effect and predict how selective the group of "face neurons" would have to be to explain human performance. This provided further evidence that it is unnecessary to postulate any kind of special processing in the brain for faces.
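To make the intuition concrete, here is a minimal toy sketch (my construction, not the published model, whose details are in the Neuron paper) of how a population of face-tuned neurons can produce an inversion effect. Units with sharp Gaussian tuning stand in for the finely tuned "expert" representation of upright faces, while broadly tuned units stand in for the untrained representation of inverted faces; all tuning widths and numbers are illustrative assumptions.

```python
# Toy population model: 50 hypothetical "face neurons", each preferring a
# point along a morph axis between two faces. Sharply tuned units separate
# two nearby morphs far better than broadly tuned units do.
import numpy as np

prefs = np.linspace(0.0, 1.0, 50)   # each unit's preferred morph position

def responses(morph, sigma):
    """Gaussian tuning: firing falls off with distance from the preferred morph."""
    return np.exp(-(prefs - morph) ** 2 / (2 * sigma ** 2))

def discriminability(a, b, sigma):
    """Distance between the two population activity patterns."""
    return np.linalg.norm(responses(a, sigma) - responses(b, sigma))

face_a, face_b = 0.45, 0.55   # two similar morphs

# Sharp tuning (the learned, upright-face case) yields a much larger
# pattern difference than broad tuning (the untrained, inverted case).
print("sharp tuning (upright):  %.3f" % discriminability(face_a, face_b, sigma=0.05))
print("broad tuning (inverted): %.3f" % discriminability(face_a, face_b, sigma=0.50))
```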
The researchers then tested these predictions against experimental data gathered with functional magnetic resonance imaging (fMRI), a noninvasive technology that measures the brain activity of test subjects, and with behavioral methods. Subjects were shown pairs of similar human faces that had been morphed using computer graphics software, while the researchers observed how brain activation changed for more or less similar pairs of faces.
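The idea behind such stimuli can be sketched simply: in morphable-model graphics, a face is represented as a vector of shape and texture parameters, and a morph is a weighted blend of two such vectors. The vectors and percentages below are purely illustrative, not the study's actual stimuli.

```python
# Toy morphing sketch: blending two hypothetical face parameter vectors
# produces intermediate faces, so pair similarity can be varied in
# controlled steps along the morph line.
import numpy as np

rng = np.random.default_rng(1)
face_a = rng.normal(size=100)   # parameter vector for face A (hypothetical)
face_b = rng.normal(size=100)   # parameter vector for face B (hypothetical)

def morph(alpha):
    """Blend the two faces: alpha=0 gives pure A, alpha=1 gives pure B."""
    return (1 - alpha) * face_a + alpha * face_b

# Pairs taken closer together on the morph line are physically more similar.
pair_near = (morph(0.40), morph(0.60))
pair_far = (morph(0.10), morph(0.90))
print("near-pair distance:", np.linalg.norm(pair_near[0] - pair_near[1]))
print("far-pair distance: ", np.linalg.norm(pair_far[0] - pair_far[1]))
```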
They found that a small group of neurons in the "fusiform face area," a brain region generally thought to be responsible for face recognition, was highly selective for different faces, behaving just as the model predicted.
"We knew that the fusiform face area is highly involved and necessary for us to understand faces, but we did not know what kind of processing was going on inside that 'black box'," he said. "By using a computational model to quantitatively link neuronal processing, brain imaging and behavior, we now have a mechanistic model describing which neurons are involved and how they are behaving when we look at faces."
Other authors on the paper include Xiong Jiang, PhD, Ezra Rosen, John VanMeter, PhD, and Tom Zeffiro, PhD, from Georgetown University Medical Center, and Volker Blanz, from the Max Planck Institute in Germany.
Source: Georgetown University Medical Center