"Speaking" with the Brain: A Brain-Computer Interface Lets People Who Have Lost Speech Speak Again
Hu Xiu·2025-07-15 00:26

Core Insights
- A novel brain-computer interface (BCI) technology has been developed that uses AI algorithms to map neural signals to intended sounds, enabling real-time conversion of brain activity into speech and potentially restoring conversational ability for individuals with speech impairments caused by neurological disease [1][4][18]

Group 1: Technology Overview
- The research team, comprising members from UC Davis, Brown University, and Massachusetts General Hospital, published its findings in Nature, marking a significant advance in neuroengineering [1][4]
- The BCI system generates natural speech with intonation, rhythm, and personalized voice characteristics, enabling speech-impaired patients to communicate in their own voice [5][18]
- The system operates through a four-step process: neural recording, neural decoding, speech synthesis, and real-time audio feedback, forming a closed loop that translates intent into audible speech [7][12]

Group 2: Patient Experience
- An ALS patient successfully articulated simple phrases and demonstrated control over tone and emotion, indicating a reconstruction of linguistic identity and personal expression [6][18]
- The system's ability to produce speech closely resembling the patient's original voice was enhanced by incorporating early voice recordings into the training of the personalized neural vocoder [10][18]

Group 3: Technical Challenges and Future Directions
- The research team faced challenges in training the system due to the lack of "real speech" data, which it addressed by developing an innovative algorithm that guides patients to "attempt to speak" while their neural activity is recorded [8][19]
- Future work aims to extend the technology to other speech-impaired populations and to explore integration with non-invasive brain technologies to lower the barrier to use [18][20]
- The current system still relies on external prompts for speech generation, indicating that further advances are needed to achieve fully autonomous communication driven solely by brain activity [19][22]
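The four-step closed loop described above can be sketched in code. This is a minimal illustration, not the authors' implementation: every function name is hypothetical, the "neural recording" is simulated noise, the "decoder" is a stand-in linear map, and the "vocoder" is a toy sine generator whose pitch tracks one decoded feature.

```python
# Toy sketch of the four-step closed-loop BCI pipeline (all names hypothetical).
import numpy as np

def record_neural_window(rng, n_channels=256, n_samples=30):
    """Step 1: neural recording -- simulated here as random firing rates."""
    return rng.normal(size=(n_samples, n_channels))

def decode_features(neural_window, weights):
    """Step 2: neural decoding -- map neural activity to acoustic features
    (e.g., pitch and spectral parameters) with a stand-in linear model."""
    return neural_window @ weights

def synthesize_audio(features, sample_rate=16000):
    """Step 3: speech synthesis -- a placeholder vocoder producing a sine
    wave whose instantaneous pitch follows the first decoded feature."""
    n_out = len(features) * sample_rate // 100          # 10 ms per frame
    t = np.linspace(0.0, len(features) / 100.0, n_out)
    frame_times = np.linspace(0.0, t[-1], len(features))
    pitch = 100.0 + 10.0 * np.interp(t, frame_times, features[:, 0])
    return np.sin(2.0 * np.pi * np.cumsum(pitch) / sample_rate)

def closed_loop_step(rng, weights):
    """Step 4: real-time audio feedback -- one pass through the loop.
    In the real system the audio is played back within tens of milliseconds."""
    neural = record_neural_window(rng)
    features = decode_features(neural, weights)
    return synthesize_audio(features)

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 4)) * 0.01
audio = closed_loop_step(rng, weights)
print(audio.shape)  # 30 frames * 160 samples/frame -> (4800,)
```

The point of the sketch is the data flow, not the models: each stage consumes the previous stage's output, and the loop closes when the synthesized audio is played back to the patient in real time.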
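The training challenge in Group 3 (no ground-truth audio from a patient who cannot speak) can be illustrated with a toy workaround: derive target acoustic features from the prompt text itself, pair them with the neural activity recorded while the patient attempts to speak, and fit a decoder by ridge regression. This is entirely hypothetical and far simpler than the published algorithm; the neural data below is simulated, and `template_features` is a crude stand-in for text-to-speech targets.

```python
# Toy illustration: training a neural decoder without recorded real speech.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_feats = 64, 3

def template_features(prompt, length):
    """Stand-in for TTS targets: deterministic pseudo-features derived
    from the prompt's character codes, stretched to the neural length."""
    codes = np.array([ord(c) for c in prompt], dtype=float)
    x = np.linspace(0, len(codes) - 1, length)
    col = np.interp(x, np.arange(len(codes)), codes) / 128.0
    return np.stack([col, col ** 2, np.cos(col)], axis=1)

# Simulate a session: the patient attempts each prompt while neural
# activity is recorded (here generated from a hidden linear map + noise).
prompts = ["hello world", "how are you"]
true_map = rng.normal(size=(n_channels, n_feats))
X, Y = [], []
for p in prompts:
    T = 200
    feats = template_features(p, T)       # training target, from text only
    neural = feats @ true_map.T + 0.1 * rng.normal(size=(T, n_channels))
    X.append(neural)
    Y.append(feats)
X, Y = np.vstack(X), np.vstack(Y)

# Fit a ridge-regression decoder: neural activity -> acoustic features.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)
pred = X @ W
corr = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]
print(corr > 0.9)  # the decoder recovers the text-derived targets
```

The design point is that the supervision signal never requires the patient's actual voice: as long as the target features can be generated from the prompt and time-aligned with the attempt, a decoder can be trained on "attempted speech" alone.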