Imagine putting a device on your head and watching all your thoughts come to life as visualizations on a screen. If this seems like science fiction, you might be shocked to learn that a prototype version of such a device has already been created in Japan.
Reading the mind
The device, developed by researchers at Kyoto University, uses neural networks to read people's minds. Though the technology has existed for some time, it has largely been limited to reconstructing images from simple shapes and pixels. But the new technique, called Deep Image Reconstruction, enables researchers to decode images that have several layers of structure and color. In one study that lasted 10 months, the researchers observed three people viewing images from three categories — artificial geometric shapes, natural phenomena, and letters of the alphabet.
The participants viewed the images for different lengths of time. “The viewers’ brain activity was measured either while they were looking at the images or afterward. To measure brain activity after people had viewed the images, they were simply asked to think about the images they’d been shown. Recorded activity was then fed into a neural network that ‘decoded’ the data and used it to generate its own interpretations of the people’s thoughts,” according to Singularity Hub.
Though the neural network was trained on natural images, it was able to reconstruct artificial shapes. This means the network generated images by actually decoding the measured brain activity rather than by matching the activity to some existing sample. The model had a tougher time when people were retrieving memories of images rather than viewing them live. Though most of the reconstructed images bear only a faint resemblance to the originals, the potential of the technology is undeniably enormous.
“As the accuracy of the technology continues to improve, the potential applications are mind-boggling. The visualization technology would allow you to draw pictures or make art simply by imagining something; your dreams could be visualized by a computer; the hallucinations of psychiatric patients could be visualized aiding in their care; and brain-machine interfaces may one day allow communication with imagery or thoughts,” according to CNBC.
Mind to speech
Last year, a new system developed by Edward Chang of the University of California, San Francisco, showed that it was possible to translate brain signals into speech. Chang and his colleagues placed an array of electrodes on the brains of study participants, positioned over the region that controls movement.
Researchers “worked with five participants who had electrodes on the surface of their motor cortex as a part of their treatment for epilepsy. These people were asked to read 101 sentences aloud — which contained words and phrases that covered all the sounds in English — while the team recorded the signals sent from the motor cortex during speech… the team trained an algorithm to reproduce the sound of a spoken word from the collection of signals sent to the lips, jaw, and tongue,” according to New Scientist.
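The decoding stage described above — learning to predict speech sounds from motor-cortex signals recorded during reading — can be sketched in miniature. This is a hedged illustration, not the UCSF method: the actual system used recurrent neural networks with an intermediate articulatory representation, while the toy below simply fits a least-squares map from simulated electrode signals to acoustic features. Every name and dimension is invented for the example.

```python
import numpy as np

# Hypothetical sketch: map simulated motor-cortex signals to acoustic
# features with least squares. The real UCSF decoder used recurrent
# neural networks; only the signals-to-sound regression idea is shown.

rng = np.random.default_rng(1)
n_frames, n_electrodes, n_acoustic = 500, 32, 8

# Simulated data: acoustic features depend linearly on neural signals,
# plus a small amount of recording noise.
mixing = rng.normal(size=(n_electrodes, n_acoustic))
neural = rng.normal(size=(n_frames, n_electrodes))
acoustic = neural @ mixing + 0.05 * rng.normal(size=(n_frames, n_acoustic))

# Fit the decoder on the first 400 frames, evaluate on the held-out 100.
W, *_ = np.linalg.lstsq(neural[:400], acoustic[:400], rcond=None)
pred = neural[400:] @ W

mse = float(np.mean((pred - acoustic[400:]) ** 2))
print(f"held-out mean squared error: {mse:.4f}")
```

In the study itself, the equivalent of `acoustic` was derived from recordings of the participants reading the 101 sentences aloud, which is what let the algorithm learn the signal-to-sound correspondence.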
Once the technology is fully developed, it will be of great assistance to people with speech disabilities. The technique reproduces not only the words but also the musicality of the voice that gives them emotional depth and personality. People suffering from neurological conditions like Alzheimer’s, epilepsy, and Parkinson’s also stand to benefit from this technology.