A team of Japanese scientists has developed "mind captioning", an AI system that converts brain signals into complete sentences. The technique promises to help people who have lost the ability to speak and points toward a future of non-verbal communication.
According to Nature magazine, researchers at NTT Communications Science Laboratories (Japan) have developed a technique called "mind captioning". This technology uses data from functional magnetic resonance imaging (fMRI) to analyze brain activity and convert it into full semantic descriptions.
Unlike invasive brain implants, this method is completely non-invasive, making the user more comfortable during the experiment. The study, published in the journal Science Advances, shows that the system can "read" the image or scene that the participant is observing or imagining, then translate it into a sentence that accurately describes the content.
In the experiment, Tomoyasu Horikawa's team used more than 2,000 videos to train a deep language model. Each video was converted into a "semantic signature", a digital encoding representing its content. They then trained another model to recognize the brain activity patterns that corresponded to each signature as participants watched the videos.
Once the system has learned enough, it can read the fMRI data of a person watching a new video and predict what they are seeing. A text generation model then finds the sentence that best matches that semantic signature. For example, when a participant watches a scene of a person jumping off a waterfall, the system initially describes it as "a rushing stream," then gradually refines it to "a person jumping over a deep waterfall on a mountainside."
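The pipeline described above (semantic signatures from video content, a decoder from brain activity to signatures, and caption selection by similarity) can be sketched in toy form. Everything below is illustrative, not the authors' implementation: the data is synthetic, a linear least-squares decoder stands in for the regression models such studies typically train, and random vectors stand in for language-model embeddings.

```python
import numpy as np

# Toy sketch of the "mind captioning" pipeline; all data is synthetic.
rng = np.random.default_rng(0)
n_videos, n_voxels, emb_dim = 500, 100, 32

# 1) Each training video gets a "semantic signature": an embedding of its
#    content (random vectors standing in for language-model output).
signatures = rng.normal(size=(n_videos, emb_dim))

# 2) Simulate fMRI responses as a noisy linear function of the signature.
true_map = rng.normal(size=(emb_dim, n_voxels))
fmri = signatures @ true_map + 0.1 * rng.normal(size=(n_videos, n_voxels))

# 3) Fit a decoder mapping brain activity back to semantic signatures.
decoder, *_ = np.linalg.lstsq(fmri, signatures, rcond=None)

# 4) Decode a new scan and pick the candidate sentence whose embedding
#    is closest (by cosine similarity) to the decoded signature.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

test_signature = rng.normal(size=emb_dim)
test_scan = test_signature @ true_map + 0.1 * rng.normal(size=n_voxels)
decoded = test_scan @ decoder

candidates = {
    "a rushing stream": rng.normal(size=emb_dim),  # unrelated caption
    # a correct caption's embedding would sit near the true signature:
    "a person jumping over a waterfall": test_signature + 0.05 * rng.normal(size=emb_dim),
}
best = max(candidates, key=lambda s: cosine(decoded, candidates[s]))
print(best)
```

In the actual study the candidate sentences are produced and refined by a text generator rather than chosen from a fixed list, which is what lets the description sharpen from "a rushing stream" to a full scene over successive iterations.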
Remarkably, the system not only worked when participants were watching videos, but also generated accurate descriptions when they simply recalled scenes they had seen. This suggests that the brain stores and recreates images using a mechanism similar to the one it uses to see and imagine them, an important finding that helps scientists better understand how the brain encodes visual memories.
Researchers say this is a major step forward in understanding how the brain interprets the world around it, and opens up new communication possibilities for people who have lost the ability to speak due to stroke, trauma or neurological disease. In the future, this technology could be integrated with brain-computer interfaces (BCIs), allowing users to "speak" with their thoughts without having to speak or move.
But with its immense potential comes concerns. Experts warn of the potential for privacy violations if the technology is misused. Mind reading, whether of images or memories, raises major ethical questions, particularly in areas such as security, law enforcement, and advertising. For now, the system only works with the consent and cooperation of the participant, and cannot yet "read" private or unwanted thoughts.
Some ethicists have called for early legal frameworks for "brain decoding" technologies, to ensure that freedom of thought, one of the most basic human rights, is not violated in the AI era. "We are entering an era where the line between intelligence and data is increasingly blurred. The question is not just 'can we do it', but 'how far should we go'," one expert said.
