This AI technology can understand words that are not even spoken out loud

Researchers at Pohang University of Science and Technology (POSTECH) have developed wearable technology that can convert silent speech into audible speech by learning the movements of the neck muscles. The study, led by Professor Sung-Min Park and Dr. Sunguk Hong, published in Cyborg and Bionic Systems, marks an important step in human-machine communication.
From Muscle Movements to Spoken Words
The innovation is built on a simple but powerful idea: speech is more than sound. When a person speaks – or even mouths words silently – subtle movements occur in the muscles and skin of the neck. These movements form a kind of “abstract map” of the intended speech.
To capture this, the researchers created a wearable device called a multiaxial strain mapping sensor. The system pairs a small camera with flexible silicone embedded with reference markers, allowing it to detect even the smallest skin deformations. Designed for everyday use, the sensor can be worn comfortably around the neck and automatically recalibrates when repositioned.
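A rough intuition for how marker tracking can yield multiaxial strain: if the camera records the positions of the embedded markers before and during speech, an in-plane deformation can be fitted to the displacements and strain read off from it. The sketch below is purely illustrative – the function name, the two-dimensional small-strain model, and the least-squares fit are assumptions, not the POSTECH team's actual method.

```python
import numpy as np

def strain_tensor(baseline, current):
    """Estimate in-plane strain from tracked marker positions.

    baseline, current: (N, 2) arrays of marker coordinates (e.g. pixels)
    at rest and during speech. Fits a linear deformation c ≈ F @ b by
    least squares and returns the 2x2 Green-Lagrange strain tensor.
    """
    # Remove rigid translation by centering both marker clouds
    b = baseline - baseline.mean(axis=0)
    c = current - current.mean(axis=0)
    # Solve b @ X ≈ c in the least-squares sense; F = X.T is the
    # deformation gradient mapping rest positions to current ones
    X, *_ = np.linalg.lstsq(b, c, rcond=None)
    F = X.T
    # Green-Lagrange strain: E = (F^T F - I) / 2
    return 0.5 * (F.T @ F - np.eye(2))
```

For a uniform 10% stretch along the horizontal axis, the fitted tensor's `E[0,0]` component comes out to 0.105 ((1.1² − 1)/2), with the vertical component near zero.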
The collected data is then processed by artificial intelligence, which interprets the complex strain patterns and reconstructs the intended words or phrases. By pairing this with speech synthesis trained on the user’s voice profile, the system can produce speech that closely resembles the user’s natural voice – even when no sound is made.
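To make the pattern-interpretation step concrete, here is a deliberately minimal sketch: a nearest-template classifier that maps an observed strain sequence to the closest known word. The word templates, the fixed sequence shape, and the mean-squared-distance metric are all assumptions for illustration – the actual system uses a trained AI model, not template matching.

```python
import numpy as np

def classify_word(sequence, templates):
    """Return the word whose reference strain sequence is closest
    (by mean squared distance) to the observed sequence.

    sequence: (T, D) array of strain features over time.
    templates: dict mapping word -> reference (T, D) array.
    """
    best_word, best_dist = None, float("inf")
    for word, ref in templates.items():
        d = float(np.mean((sequence - ref) ** 2))
        if d < best_dist:
            best_word, best_dist = word, d
    return best_word
```

A real pipeline would replace the templates with a learned sequence model and feed its output to the voice synthesizer, but the core idea is the same: distinct utterances leave distinct, recognizable strain signatures.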
Practical Leaps Over Existing Systems
Traditional approaches to voice restoration rely on technologies such as electromyography (EMG) or electroencephalography (EEG), which often require bulky equipment and can be uncomfortable to wear for long periods.
The POSTECH team’s approach eliminates these barriers by providing a lightweight, wearable alternative. In testing, the system showed high accuracy in speech reconstruction, even in noisy environments such as industrial settings where conventional microphones struggle.
Real World Impact and Future Opportunities
The implications of this technology are far-reaching. It may offer a new means of communication for patients who have lost their voices to vocal cord injury or laryngeal surgery, allowing them to “speak” in a voice that sounds like their own.

Outside of healthcare, the system can enable silent communication in places where speaking out loud is impractical – such as libraries, meetings, or noisy workplaces. It also opens the door to natural human-AI interactions, where intent can be translated into speech without physical articulation.
Looking Forward
The researchers aim to refine the technology for wider real-world use, improving accuracy and expanding its vocabulary and language coverage. Future iterations may integrate seamlessly with consumer devices, potentially changing the way people communicate in both personal and professional settings.
As AI continues to intersect with wearable technology, innovations like these point to a shift to more intuitive, invisible ways of communicating — where even unspoken words can eventually be heard.
