Just like something from a sci-fi movie, scientists have been able to use human brain activity to reconstruct movies from the minds of volunteer test subjects, with some amazing results.
UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences using functional Magnetic Resonance Imaging (fMRI) and computational models. The researchers showed test subjects a number of Hollywood trailers and then used their brain activity to reconstruct the images, as shown below and in the video after the jump.
Volunteers had to remain still inside an MRI scanner for hours at a time so the researchers could record their brain activity. Professor Jack Gallant, a UC Berkeley neuroscientist, explains: “This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds.”
Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab, added: “Our natural visual experience is like watching a movie. In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”
How the Technology Works
The process of reconstructing movies from brain activity involves several sophisticated steps. First, the volunteers watch video clips while their brain activity is recorded using fMRI. This imaging technique measures changes in blood flow to different parts of the brain, which correlates with neural activity. The data collected is then fed into a computational model that maps the brain activity to the visual stimuli the subjects were exposed to.
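To give a rough sense of how recorded brain data might be paired with the video a subject was watching, here is a minimal Python sketch with made-up array shapes and random stand-in data (the real pipeline is far more involved). It shifts the stimulus features by a few seconds to account for the sluggish blood-flow (BOLD) response that fMRI actually measures; the delay value and feature sizes are assumptions, not figures from the study.

```python
import numpy as np

# Hypothetical shapes: one fMRI scan per second, and a simple
# per-second feature vector computed from the video frames.
n_seconds = 600    # length of one training run
n_voxels = 2000    # visual-cortex voxels kept after preprocessing (assumed)
n_features = 512   # coarse motion/contrast features per second (assumed)

rng = np.random.default_rng(0)
bold = rng.standard_normal((n_seconds, n_voxels))               # stand-in for real fMRI data
video_features = rng.standard_normal((n_seconds, n_features))   # stand-in for stimulus features

# The BOLD signal lags the stimulus by roughly 4-6 seconds, so the
# stimulus features are shifted before pairing them with the scans.
hemodynamic_delay = 5  # seconds (assumed)
X = bold[hemodynamic_delay:]             # brain activity at time t
Y = video_features[:-hemodynamic_delay]  # what was on screen ~5 s earlier

print(X.shape, Y.shape)  # matched (595, 2000) and (595, 512) training pairs
```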
The computational model is trained using a large dataset of brain activity and corresponding video clips. Once the model is trained, it can predict the visual content that a person is seeing based on their brain activity alone. This prediction is then used to reconstruct the video clips, creating a visual representation of what the person was watching.
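The train-then-predict idea can be illustrated with another short, hedged sketch: fit a regularised linear map from voxel activity to stimulus features, then reconstruct unseen clips by picking the best-matching entries from a library of candidate clips. The Gallant lab’s actual approach used a more sophisticated encoding model and Bayesian decoder; everything below (data, library size, the use of ridge regression) is a simplified stand-in.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical training data: voxel responses (X) paired with the
# feature vectors of the clips the subject was watching (Y).
X_train = rng.standard_normal((595, 2000))
Y_train = rng.standard_normal((595, 512))

# Fit one regularised linear map from brain activity to stimulus features.
decoder = Ridge(alpha=100.0)
decoder.fit(X_train, Y_train)

# At test time, predict the features from new brain activity alone...
X_test = rng.standard_normal((60, 2000))
Y_pred = decoder.predict(X_test)

# ...then reconstruct each second by blending the library clips whose
# features best match the prediction (a crude stand-in for averaging
# over a huge library of candidate video clips).
library_features = rng.standard_normal((10000, 512))    # candidate clip features
similarity = Y_pred @ library_features.T                # (60, 10000) match scores
top_matches = np.argsort(similarity, axis=1)[:, -100:]  # 100 best clips per second
print(top_matches.shape)  # indices of the clips to blend into the reconstruction
```

The blending step is why the published reconstructions look blurry and dreamlike: the output is an average of many roughly matching clips rather than a pixel-perfect recording of what the subject saw.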
Potential Applications and Future Research
The ability to decode and reconstruct visual experiences from brain activity has numerous potential applications. For instance, it could lead to new ways of communicating with individuals who are unable to speak or move, such as those with locked-in syndrome. By interpreting their brain activity, it might be possible to understand their thoughts and intentions.
Additionally, this technology could have significant implications for the field of neuroscience. It provides a powerful tool for studying how the brain processes visual information and how different regions of the brain interact to create our perception of the world. This could lead to new insights into various neurological conditions and disorders.
However, there are still many challenges to overcome before this technology can be widely used. One of the main challenges is improving the accuracy and resolution of the reconstructed images. Currently, the reconstructed videos are relatively low-resolution and lack fine details. Researchers are working on developing more advanced computational models and imaging techniques to address this issue.
Another challenge is the need for extensive training data. The current models require a large amount of data to accurately map brain activity to visual stimuli. This means that each individual would need to undergo extensive fMRI scanning, which is time-consuming and expensive. Future research will focus on developing more efficient models that require less training data.
Despite these challenges, the progress made so far is incredibly promising. As the technology continues to advance, it could revolutionize our understanding of the brain and open up new possibilities for communication and medical treatment.
Source: Gizmodo, UC Berkeley