MIT scientists unveil mind-controlled robot (Video)

A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wanted robots to be a more natural extension of our bodies.

In the demonstration, robot arms were given the task of placing either a canister of paint or a ball of wire into an appropriate box.

The human controller, meanwhile, wore an electroencephalography (EEG) cap that monitored their brainwaves for the cognitive signals that arise when they notice the robot is about to make a mistake.

The readings from the EEG cap were parsed by machine-learning algorithms developed by the researchers, which focused on ‘error-related potentials’: signals the brain produces when it detects a mistake. The robot, picking up the output of these algorithms, could then correct its behaviour.

The team’s novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds, according to the computer scientists at CSAIL and Boston University.
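
The article does not publish the underlying code, but the pipeline it describes — a short window of EEG samples in, a binary ‘error / no error’ decision out within tens of milliseconds — can be sketched with off-the-shelf tools. The following is a minimal illustration only; the channel count, window length, classifier choice and all function names are assumptions, not details of the MIT system:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Assumed setup: 48 EEG channels sampled at 200 Hz, classified over a
    # ~300 ms window after the robot indicates its intended action.
    N_CHANNELS = 48
    WINDOW_SAMPLES = 60

    def extract_features(window):
        # Flatten a (channels x samples) window into one feature vector.
        # A real pipeline would band-pass filter and downsample first.
        return np.asarray(window).reshape(-1)

    # Train on labelled windows recorded while the operator watched the
    # robot make correct and incorrect choices (placeholder data here).
    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((200, N_CHANNELS * WINDOW_SAMPLES))
    y_train = rng.integers(0, 2, size=200)  # 1 = error-related potential
    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

    def errp_detected(window):
        # A single linear-classifier evaluation is a few dot products,
        # comfortably inside a 10-30 ms budget on ordinary hardware.
        return bool(clf.predict(extract_features(window)[None, :])[0])

A simple linear model is used in the sketch because latency, not model capacity, is the constraint the researchers emphasise.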

“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” said CSAIL director Daniela Rus. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”

“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing. You don’t have to train yourself to think in a certain way – the machine adapts to you, and not the other way around.”

In the past, robots controlled by human thought via EEG caps required operators to “think” in a prescribed way that computers could recognise. For example, an operator might have to look at one of two bright lights, each of which corresponded to a different task for the robot to execute. That, however, required a high degree of concentration.

Instead, the team focused on error-related potentials or ErrPs, which are generated when the human brain notices what it regards as a mistake. As the robot indicates which choice it plans to make, the system uses ErrPs, read via the machine-learning algorithms, to determine whether the human agrees with the decision.

However, ErrP signals are extremely faint, which means that the system has to be fine-tuned to both classify the signal and incorporate it into the feedback loop for the human operator, according to CSAIL. In addition to monitoring the initial ErrPs, the team also sought to detect ‘secondary errors’ that occur when the system doesn’t notice the human’s original correction.
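
The article does not spell out the control logic, but the loop it outlines — indicate a choice, watch for a primary ErrP, correct, then watch again for a secondary error and escalate to an explicit query if the correction is also rejected — might look roughly like the sketch below. Everything here, from the bin names to the errp_detected stand-in, is a hypothetical illustration:

    import random

    def errp_detected():
        # Placeholder for the EEG classifier; in the real system this would
        # read a fresh window from the cap and classify it.
        return random.random() < 0.3

    def supervise_sort(item):
        # One cycle of the human-in-the-loop scheme described in the text.
        bins = ["paint box", "wire box"]
        choice = random.choice(bins)               # robot indicates its plan
        if errp_detected():                        # primary ErrP: operator objects
            choice = bins[1 - bins.index(choice)]  # switch to the other bin
            if errp_detected():
                # Secondary error: the correction was rejected too, so
                # trigger an explicit query instead of guessing again.
                choice = input(f"Where should '{item}' go? ")
        return choice

    print(supervise_sort("canister of paint"))

Re-checking after the correction is what makes the faint, noisy ErrP signal usable: a misread first signal gets a second chance to be caught before the robot acts.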

“If the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer,” said CSAIL research scientist Stephanie Gil. “These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices.”

Agencies/Canadajournal



