A brain–computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), synthetic telepathy interface (STI) or brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions.
The ability to control a computer using only the power of the mind is closer than one might think. Brain-computer interfaces, where computers can read and interpret signals directly from the brain, have already achieved clinical success in allowing quadriplegics, those suffering “locked-in syndrome” or people who have had a stroke to move their own wheelchairs or even drink coffee from a cup by controlling the action of a robotic arm with their brain waves. In addition, direct brain implants have helped restore partial vision to people who have lost their sight.
Recent research has focused on the possibility of using brain-computer interfaces to connect different brains together directly. Researchers at Duke University last year reported successfully connecting the brains of two mice over the Internet (into what was termed a “brain net”), in which mice in different countries were able to cooperate on simple tasks to earn a reward. Also in 2013, scientists at Harvard University reported that they were able to establish a functional link between the brains of a rat and a human with a non-invasive, computer-to-brain interface.
Other research projects have focused on manipulating or directly implanting memories from a computer into the brain. In mid-2013, MIT researchers reported having successfully implanted a false memory into the brain of a mouse. In humans, the ability to directly manipulate memories might have an application in the treatment of post-traumatic stress disorder, while in the longer term, information may be uploaded into human brains in the manner of a computer file. Of course, numerous ethical issues are also clearly raised by this rapidly advancing field.
As the power of modern computers grows alongside our understanding of the human brain, we move ever closer to making some pretty spectacular science fiction into reality. Imagine transmitting signals directly to someone's brain that would allow them to see, hear or feel specific sensory inputs. Consider the potential to manipulate computers or machinery with nothing more than a thought. It isn't just about convenience -- for severely disabled people, development of a brain-computer interface (BCI) could be the most important technological breakthrough in decades. In this article, we'll learn all about how BCIs work, their limitations and where they could be headed in the future.
The Electric Brain
The reason a BCI works at all is because of the way our brains function. Our brains are filled with neurons, individual nerve cells connected to one another by dendrites and axons. Every time we think, move, feel or remember something, our neurons are at work. That work is carried out by small electric signals that zip from neuron to neuron as fast as 250 mph [source: Walker]. The signals are generated by differences in electric potential carried by ions on the membrane of each neuron.
Although the paths the signals take are insulated by something called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean and use them to direct a device of some kind. It can also work the other way around. For example, researchers could figure out what signals are sent to the brain by the optic nerve when someone sees the color red. They could rig a camera that would send those exact signals into someone's brain whenever the camera saw red, allowing a blind person to "see" without eyes.
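The detect-and-interpret loop described above can be sketched in a few lines of code. This is a toy illustration, not a real BCI pipeline: the recorded samples, the window size, and the power threshold are all invented for the example, and a real system would use far more sophisticated signal processing. The idea is simply that an escaped electric signal is measured, a feature is extracted (here, the signal's power over a short window), and that feature is mapped to a device command.

```python
def window_power(samples):
    """Mean squared amplitude of one window of EEG-like samples (microvolts)."""
    return sum(s * s for s in samples) / len(samples)

def decode(signal, window=4, threshold=25.0):
    """Slide a window over the signal and emit a command per window:
    'MOVE' when power crosses the threshold, 'REST' otherwise.
    A stand-in for 'interpret the signal and direct a device'."""
    commands = []
    for i in range(0, len(signal) - window + 1, window):
        power = window_power(signal[i:i + window])
        commands.append("MOVE" if power >= threshold else "REST")
    return commands

# Quiet baseline, then a burst of activity, then quiet again.
signal = [1, -1, 2, -2, 8, -9, 10, -8, 1, 0, -1, 1]
print(decode(signal))  # ['REST', 'MOVE', 'REST']
```

The same mapping runs in reverse for stimulation, as in the camera example: a measured feature of the world (the color red) is translated into the signal pattern the optic nerve would have produced.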
Last week, engineers sniffing around the programming code for Google Glass found hidden examples of ways that people might interact with the wearable computers without having to say a word. Among them, a user could nod to turn the glasses on or off. A single wink might tell the glasses to take a picture.
But don’t expect these gestures to be necessary for long. Soon, we might interact with our smartphones and computers simply by using our minds. In a couple of years, we could be turning on the lights at home just by thinking about it, or sending an e-mail from our smartphone without even pulling the device from our pocket. Farther into the future, your robot assistant will appear by your side with a glass of lemonade simply because it knows you are thirsty.
Researchers in Samsung’s Emerging Technology Lab are testing tablets that can be controlled by your brain, using a cap that resembles a ski hat studded with monitoring electrodes, the MIT Technology Review, the science and technology journal of the Massachusetts Institute of Technology, reported this month.
The technology, often called a brain computer interface, was conceived to enable people with paralysis and other disabilities to interact with computers or control robotic arms, all by simply thinking about such actions. Before long, these technologies could well be in consumer electronics, too.
Some crude brain-reading products already exist, letting people play easy games or move a mouse around a screen. NeuroSky, a company based in San Jose, Calif., recently released a Bluetooth-enabled headset that can monitor slight changes in brain waves and allow people to play concentration-based games on computers and smartphones. These include a zombie-chasing game, archery and a game where you dodge bullets — all these apps use your mind as the joystick. Another company, Emotiv, sells a headset that looks like a large alien hand and can read brain waves associated with thoughts, feelings and expressions. The device can be used to play Tetris-like games or search through Flickr photos by thinking about an emotion the person is feeling — like happy, or excited — rather than searching by keywords. Muse, a lightweight, wireless headband, can engage with an app that “exercises the brain” by forcing people to concentrate on aspects of a screen, almost like taking your mind to the gym.
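Headsets like these typically expose a single scalar, such as a 0-to-100 "attention" score, rather than raw brain activity; the game then treats that scalar as the joystick. A minimal, hypothetical mapping for an archery-style game might look like this (the thresholds and action names are invented for illustration, not any vendor's actual API):

```python
def attention_to_action(attention):
    """Map a 0-100 attention score to a game action.
    Thresholds are illustrative, not from a real headset SDK."""
    if attention >= 70:
        return "fire_arrow"   # strong focus releases the bowstring
    if attention >= 40:
        return "draw_bow"     # moderate focus draws the bow
    return "idle"             # a relaxed mind does nothing

print([attention_to_action(a) for a in (20, 55, 85)])
# ['idle', 'draw_bow', 'fire_arrow']
```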
Car manufacturers are exploring technologies packed into the back of the seat that detect when people fall asleep while driving and rattle the steering wheel to awaken them. But the products commercially available today will soon look archaic. “The current brain technologies are like trying to listen to a conversation in a football stadium from a blimp,” said John Donoghue, a neuroscientist and director of the Brown Institute for Brain Science. “To really be able to understand what is going on with the brain today you need to surgically implant an array of sensors into the brain.” In other words, to gain access to the brain, for now you still need a chip in your head.
Last year, a project called BrainGate, pioneered by Dr. Donoghue, enabled two people with full paralysis to use a robotic arm with a computer responding to their brain activity. One woman, who had not used her arms in 15 years, could grasp a bottle of coffee, serve herself a drink and then return the bottle to a table, all by imagining the robotic arm's movements.
But that chip inside the head could soon vanish as scientists say we are poised to gain a much greater understanding of the brain, and, in turn, technologies that empower brain computer interfaces. An initiative by the Obama administration this year called the Brain Activity Map project, a decade-long research project, aims to build a comprehensive map of the brain.
Miyoung Chun, a molecular biologist and vice president for science programs at the Kavli Foundation, is working on the project. Although she said it would take a decade to completely map the brain, she predicted that companies would be able to build new kinds of brain computer interface products within two years.
“The Brain Activity Map will give hardware companies a lot of new tools that will change how we use smartphones and tablets,” Dr. Chun said. “It will revolutionize everything from robotic implants and neural prosthetics, to remote controls, which could be history in the foreseeable future when you can change your television channel by thinking about it.”
These brain-reading technologies have been the stuff of science fiction for decades.
In the 1982 movie “Firefox,” Clint Eastwood plays a fighter pilot on a mission to the Soviet Union to steal a prototype fighter jet that can be controlled by a brain neurolink. But Mr. Eastwood has to think in Russian for the plane to work, and he almost dies when he cannot get the missiles to fire during a dogfight. (Don’t worry, he survives.)
Although we won’t be flying planes with our minds anytime soon, surfing the Web on our smartphones might be closer.
Dr. Donoghue of Brown said one of the current techniques used to read people's brains is called P300, in which a computer determines which letter of the alphabet someone is thinking about from the characteristic response her brain produces when she sees a screen full of letters. But even as brain-reading technologies advance, there will be new challenges: scientists will have to determine whether the person wants to search the Web for something in particular, or is just thinking about a random topic.
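The selection step in a P300-style speller can be sketched simply. In a typical setup, letters are laid out in a grid, rows and columns flash in turn, and the brain produces a stronger response (the P300 wave) when the flash contains the letter the user is watching; averaging the responses and picking the strongest row and column locates the letter. The grid, response values, and scale below are invented for illustration; a real system averages many noisy EEG epochs per flash.

```python
GRID = [
    ["A", "B", "C"],
    ["D", "E", "F"],
    ["G", "H", "I"],
]

def pick_letter(row_responses, col_responses):
    """row_responses[i] / col_responses[j]: per-flash response amplitudes
    recorded when row i / column j flashed. Returns the letter at the
    intersection of the row and column with the largest average response."""
    def mean(values):
        return sum(values) / len(values)
    best_row = max(range(len(row_responses)), key=lambda i: mean(row_responses[i]))
    best_col = max(range(len(col_responses)), key=lambda j: mean(col_responses[j]))
    return GRID[best_row][best_col]

# Simulated averaged amplitudes: row 1 and column 2 stand out,
# so the decoder selects the letter at their intersection.
rows = [[0.9, 1.1], [3.8, 4.2], [1.0, 0.8]]
cols = [[1.2, 0.9], [0.7, 1.1], [4.1, 3.9]]
print(pick_letter(rows, cols))  # F
```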
“Just because I’m thinking about a steak medium-rare at a restaurant doesn’t mean I actually want that for dinner,” Dr. Donoghue said. “Just like Google glasses, which will have to know if you’re blinking because there is something in your eye or if you actually want to take a picture,” brain computer interfaces will need to know if you’re just thinking about that steak or really want to order it.