Using artificial intelligence and machine learning, a team of researchers from UC San Francisco and UC Berkeley, working with Edinburgh-based Speech Graphics, has developed an innovative communication system. Through a brain-computer interface and a digital avatar under her control, the device allowed a woman who had been paralysed by a stroke to converse freely.
Brain-computer interfaces (BCIs) are devices that monitor the analogue impulses the brain produces and convert them into digital signals that computers can understand, much like the analogue-to-digital converter in a mixing soundboard, but built to fit within the skull.
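To make the analogy concrete, here is a minimal sketch of what that conversion involves: sampling a continuous signal at a fixed rate and quantising each sample into an integer code. The sample rate, bit depth, and signal below are illustrative assumptions, not the implant's actual specifications.

```python
import numpy as np

# Minimal sketch of analogue-to-digital conversion: sample a continuous
# signal at a fixed rate and quantise each sample to an integer code,
# as an ADC in an audio mixer (or a BCI front end) would.
SAMPLE_RATE_HZ = 1_000    # samples per second (hypothetical)
BIT_DEPTH = 12            # converter resolution in bits (hypothetical)
FULL_SCALE_UV = 500.0     # assumed +/- input range, in microvolts

t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE_HZ)
analogue = 200.0 * np.sin(2 * np.pi * 10 * t)  # stand-in "brain" signal

# Map the analogue range onto 2**BIT_DEPTH discrete levels.
levels = 2 ** BIT_DEPTH
digital = np.round(
    (analogue + FULL_SCALE_UV) / (2 * FULL_SCALE_UV) * (levels - 1)
).astype(int)

print(digital[:10])  # integer codes a computer can process
```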

Dr. Edward Chang, director of neurological surgery at UCSF, oversaw the trial, in which a 253-electrode array was implanted over the patient’s speech centre. The electrodes recorded the electrical impulses that would otherwise drive the muscles of her jaw, lips, and tongue. Rather than activating those muscles, the impulses travelled through a cable port in her head to a bank of processors. Within that computational stack, a machine-learning AI learned, after several weeks of training, to recognise more than 1,000 words from the patient’s particular patterns of electrical signals.
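As a loose illustration of that decoding step, the sketch below trains a simple classifier to map windows of multi-channel electrode activity onto word labels. Only the 253-channel figure comes from the article; the window size, vocabulary, and data are synthetic stand-ins, and the study's actual decoder is a far more capable trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch of neural decoding: classify short windows of multi-channel
# electrode activity into word labels. The channel count matches the
# article's 253-electrode array; everything else is synthetic.
N_CHANNELS = 253
WINDOW = 20                       # time samples per window (hypothetical)
VOCAB = ["hello", "water", "thank you", "yes", "no"]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, N_CHANNELS * WINDOW))  # flattened feature windows
y = rng.integers(0, len(VOCAB), size=500)        # word labels for training

decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a new window of activity into a word.
new_window = rng.normal(size=(1, N_CHANNELS * WINDOW))
print(VOCAB[decoder.predict(new_window)[0]])
```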
That is only the first part of the innovation, though. Using the AI interface, the patient can now compose her replies in writing, much like the Synchron system designed for people with locked-in syndrome. She can also “speak”, after a fashion, through a synthesised voice trained on recordings of her real voice from before she was paralysed, a technique similar to the one used to digitally recreate celebrities’ voices.
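For a sense of how a voice can be synthesised from earlier recordings, here is a minimal voice-cloning sketch using the open-source Coqui TTS library. This is an assumed stand-in for illustration, as the researchers' actual synthesiser is not public, and the reference file name is hypothetical.

```python
# Minimal voice-cloning sketch with the open-source Coqui TTS library
# (an illustrative assumption; the study's synthesiser, trained on the
# patient's pre-injury recordings, is not publicly documented).
# pip install TTS
from TTS.api import TTS

# XTTS v2 can condition its output on a short clip of the target voice.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="It is wonderful to speak with you again.",
    speaker_wav="pre_injury_recording.wav",  # hypothetical reference clip
    language="en",
    file_path="synthesised_reply.wav",
)
```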
To create a digital avatar for the patient, the study team worked with Speech Graphics, the firm behind the remarkable facial animation technology in games like Halo Infinite and The Last of Us Part II. Speech Graphics’ software analyses audio input and simulates the complex musculoskeletal motions a human face would make while producing it. That data is fed into a game engine in real time, which renders a smooth animation of the avatar’s facial expressions. Because her brain signals are connected directly to the avatar, the patient can also express emotion and communicate nonverbally.
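The flow from audio analysis to real-time engine animation might look roughly like the hypothetical sketch below, which maps per-frame audio energy to a single mouth-opening weight and streams it to a game engine over UDP. Speech Graphics' production technology models full facial musculature and is far richer than this; every name, port, and parameter here is an assumption.

```python
import json
import socket
import numpy as np

# Hypothetical sketch of audio-driven facial animation: derive a per-frame
# mouth-opening weight from audio energy and stream it to a game engine.
FRAME_RATE = 30                   # animation frames per second (assumption)
SAMPLE_RATE = 16_000              # audio sample rate (assumption)
SAMPLES_PER_FRAME = SAMPLE_RATE // FRAME_RATE

ENGINE_ADDR = ("127.0.0.1", 9000)  # hypothetical engine listener
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Stand-in for one second of synthesised speech audio.
audio = np.random.randn(SAMPLE_RATE)

for i in range(0, len(audio) - SAMPLES_PER_FRAME, SAMPLES_PER_FRAME):
    frame = audio[i:i + SAMPLES_PER_FRAME]
    # Louder audio -> wider mouth opening, clamped to [0, 1].
    jaw_open = float(min(1.0, np.sqrt(np.mean(frame ** 2)) * 2.0))
    packet = {"blendshape": "jawOpen", "weight": jaw_open}
    sock.sendto(json.dumps(packet).encode(), ENGINE_ADDR)
```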
Speaking about the possibilities for AI-driven faces beyond video games, Speech Graphics’ CTO and co-founder Michael Berger said, “Building a digital avatar that can talk, emote, and communicate in real-time, tied directly to the subject’s brain, is a game-changer. Voice restoration is stunning on its own, but facial communication is so fundamental to being human that it restores a sense of embodiment and control to a patient who has lost it.”
BCI technology was invented in the early 1970s and progressed slowly for decades. More recently, the field has seen exponential growth in processing and computing power, and numerous well-funded firms are now vying to be the first to receive FDA regulatory clearance for a device. Notably, Brooklyn-based Synchron made headlines in 2017 as the first business to successfully implant a BCI in a human patient. Elon Musk’s Neuralink began controlled FDA trials earlier this year, following multiple rounds of research involving animal subjects.