Device Lets Blind People “Hear” Facial Features


Facial recognition is essential to socializing. A new study used a device that translates images into sound signals to let blind people “see” facial features. The results shed new light on the brain areas used to identify and process faces.

The study was published in PLOS ONE.

Sensory substitution

The brain’s plastic ability to reassign functions from one area to another has long fascinated neuroscientists, and blind people are known to use their other senses to compensate for the loss of sight.

In the new study, scientists at Georgetown University Medical Center mapped the neural basis of this compensation between vision and hearing. They recruited a cohort of 6 blind people and 10 sighted people; the blind participants had either been born blind or had lost their sight before the age of two. “We used a technology called ‘sensory substitution’ that transforms visual shapes into sounds by means of a computer chip,” said Dr. Josef P. Rauschecker, a professor of neuroscience and the co-director of the Center for Neuroengineering at Georgetown University, in an email interview with Technology Networks.
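The article does not spell out the encoding algorithm the device uses, but the general idea of sensory substitution can be illustrated with a short sketch. The example below assumes a vOICe-style mapping, a common scheme in this field: the image is scanned column by column from left to right, each row is assigned a sine-tone pitch (rows nearer the top sound higher), and pixel brightness sets the loudness of that tone. Everything here, from the function name image_to_sound to the frequency range and scan timing, is an illustrative assumption, not the specific device used in the study.

```python
# Minimal sketch of a vOICe-style image-to-sound ("sensory substitution")
# encoding. All parameters below are assumptions for illustration; the
# study's actual device and settings are not described in the article.

import numpy as np

SAMPLE_RATE = 44_100                   # audio samples per second
COLUMN_DURATION = 0.02                 # seconds of sound per image column (assumed)
FREQ_LOW, FREQ_HIGH = 500.0, 5_000.0   # pitch range mapped to image rows (assumed)

def image_to_sound(image: np.ndarray) -> np.ndarray:
    """Encode a 2D grayscale image (rows x cols, values in 0..1) as mono audio.

    Columns are scanned left to right over time; each row is assigned a
    sine-tone frequency (top of the image = highest pitch), and pixel
    brightness controls that tone's amplitude.
    """
    n_rows, n_cols = image.shape
    samples_per_col = int(SAMPLE_RATE * COLUMN_DURATION)
    t = np.arange(samples_per_col) / SAMPLE_RATE

    # Top row -> highest frequency, bottom row -> lowest (log-spaced pitches).
    freqs = np.geomspace(FREQ_HIGH, FREQ_LOW, n_rows)

    signal = np.zeros(n_cols * samples_per_col)
    for col in range(n_cols):
        tones = np.sin(2 * np.pi * freqs[:, None] * t)          # (n_rows, samples)
        column_audio = (image[:, col, None] * tones).sum(axis=0)  # mix rows
        signal[col * samples_per_col:(col + 1) * samples_per_col] = column_audio

    # Normalize to [-1, 1] so the result is safe to play back.
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# Example: a simple geometric shape, like those used early in training --
# a hollow square on a 32x32 grid.
img = np.zeros((32, 32))
img[4, 4:28] = img[27, 4:28] = 1.0   # top and bottom edges
img[4:28, 4] = img[4:28, 27] = 1.0   # left and right edges
audio = image_to_sound(img)
print(audio.shape)  # (32 columns * 882 samples per column,) = (28224,)
```

Under a mapping like this, a square and a circle produce recognizably different “soundscapes”, which is why training of the kind described below can begin with simple geometric shapes before progressing to faces.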


The technology was used to transform facial features into sound waves, which were then played to the participants. Learning took time: participants first had to identify the “sound” of simple geometric shapes before these were built up into more complex patterns representing faces. The training was ultimately successful. “All blind subjects were able to master the task and identify the faces more than 85% of the time,” Rauschecker explained.

Brain imaging insights

During this process, the participants had their brains imaged three times using functional magnetic resonance imaging (fMRI). This allowed the authors to see which brain areas were activated as participants listened to the “faces”. Several disparate brain areas have previously been implicated in facial recognition. Scientists have suggested that this process is so important to socialization that the brain wiring enabling it must either be innate or depend on infants being exposed to visual facial information.

Rauschecker’s findings show that this is not the case.

“To our surprise, one of the major nodes in the brain’s face recognition network, the fusiform face area (FFA), lit up, but on the opposite side of the brain from where it would light up in sighted people,” said Rauschecker. The team believes this difference may be explained by different roles for the two sides of the FFA, with one processing connected patterns in facial information and the other processing separate parts.

The findings suggest that exposure to the contours and geometry of facial expression, rather than visual input specifically, is critical to the function of the FFA. In a press release, Dr. Paula Plaza, one of the study’s lead authors, said that the findings suggest the FFA encodes the “concept” of a face.

Recent research has challenged conventional views of the mechanisms behind brain plasticity. A paper published in eLife by researchers from Johns Hopkins University and the University of Cambridge suggests that, rather than rewiring abilities from one brain area to another, plasticity works by switching on abilities that are latent in the new brain region.

Rauschecker acknowledged that while the “rewiring” theory of plasticity is still dominant, his team’s findings can be seen to support both sides. “One could also argue that the neurons in the FFA are tuned to the geometry of facial configurations regardless of sensory modality. In other words, the ability of FFA neurons to respond to sounds and touch is already built in and only needs to be teased out by the right approach,” he explained.

Regardless of how the blind brain teases out visual information, Rauschecker says his team’s approach could help mitigate the impact of blindness in a society that communicates largely through visual information. The tool currently allows users to recognize basic, emoji-like faces, but the team hopes that an upgraded version will eventually let users identify individual human faces. “I am optimistic that new technologies can be developed to take advantage of this plasticity and will ultimately allow us to build devices that enable blind persons to see again,” Rauschecker concluded.

Reference: Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One. 2023;18(11):e0286512. doi: 10.1371/journal.pone.0286512
