The robot, developed at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology, can understand signals coming from a person connected to an electroencephalograph and respond to them with appropriate actions.
MIT researchers demonstrated the system with a robot named Baxter, which sorts objects into two categories, “paint” and “wire”, while receiving instructions from the human brain. As it placed items into the two boxes, the robot changed its decision whenever it received a mental signal from a human observer indicating it was doing something wrong.
The negative mental signal, relayed to the robot through the electroencephalograph, triggered an almost instant reaction.
“When you are watching a robot, all you have to do is mentally agree or disagree with its actions,” says CSAIL director Daniela Rus. “You do not need to learn to think in a certain way; the machine adapts itself to you.”
The researchers hope that this thought-control technology can be applied to a wide range of robotic devices, including industrial robots and autonomous cars. It could also be used to control limb prostheses and find use in communication aids.
“Imagine being able to instantly order a robot to perform an action without typing a command, pressing a button, or even saying a word,” says Rus. “This streamlined approach will significantly improve our ability to manage factory robots, driverless cars, and other devices we have not even invented yet.”
The system also allows the robot to ask a person for confirmation when it is unsure whether its decision is correct. This opens up the possibility of a dialogue between human and machine through constant “yes/no” feedback.
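The yes/no feedback loop described above can be sketched as a simple correction cycle: the robot proposes a bin, a detector watching the human's brain signals flags disagreement, and the robot switches its choice. The sketch below is purely illustrative; the function names, the two-category setup, and the stand-in error detector are assumptions, not the actual CSAIL implementation.

```python
# Hypothetical sketch of the "yes/no" feedback dialogue: the robot
# proposes a bin for each object, a simulated EEG error detector
# signals disagreement, and the robot corrects itself.
# All names and logic here are illustrative assumptions.

CATEGORIES = ["paint", "wire"]

def detect_error_signal(true_category, chosen_category):
    """Stand-in for the EEG classifier: returns True when the
    observer's error-related signal indicates disagreement."""
    return chosen_category != true_category

def sort_object(true_category, initial_guess):
    """The robot picks a bin, then flips its choice if the human's
    negative signal fires -- one round of the yes/no dialogue."""
    choice = initial_guess
    if detect_error_signal(true_category, choice):
        # Negative signal received: switch to the other bin.
        choice = CATEGORIES[1 - CATEGORIES.index(choice)]
    return choice

# With binary categories, one correction always lands the object
# in the right bin, whether the first guess was right or wrong.
print(sort_object("paint", "wire"))   # corrected after disagreement
print(sort_object("wire", "wire"))    # kept after silent agreement
```

With only two categories, a single disagreement signal is enough information to fully correct a mistake, which is what makes this binary feedback channel so efficient compared with typed or spoken commands.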
“This work brings us closer to creating effective tools for brain-controlled robots and prostheses,” says Wolfram Burgard, professor of computer science at the University of Freiburg. “Given the difficulty of translating human language into commands a robot can understand, work in this area could have a truly enormous impact on the future of human-robot cooperation.”