With the rise of autonomous robots and AI, MIT researchers have created a way to control robots more spontaneously, using hand gestures and brainwaves.
The system takes advantage of brain signals called ‘error-related potentials’ (ErrPs), which naturally occur when people notice a mistake. It monitors the brain activity of a person observing robotic work, and if an ErrP occurs (because the robot has made an error), the robot pauses its activity so the user can rectify the mistake.
The correction happens via an interface that measures muscle activity, letting the person make hand gestures to select the correct option for the robot.
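The supervision loop described above can be sketched in a few lines of Python. This is purely illustrative and not MIT's actual code: the `detect_errp` and `read_gesture` functions are hypothetical stand-ins for the real EEG classifier and EMG gesture interface.

```python
def detect_errp(eeg_window):
    """Stub: the real system classifies EEG data to flag an
    error-related potential when the observer notices a mistake."""
    return eeg_window.get("errp", False)

def read_gesture(emg_window):
    """Stub: the muscle-activity interface maps a hand gesture
    to the target the person wants the robot to choose."""
    return emg_window.get("gesture")

def supervise(robot_choice, eeg_window, emg_window):
    """If the observer's brain signals an error, pause and let a
    hand gesture redirect the robot; otherwise keep the robot's
    own choice."""
    if detect_errp(eeg_window):               # observer noticed a mistake
        corrected = read_gesture(emg_window)  # gesture picks the right target
        if corrected is not None:
            return corrected
    return robot_choice

# Robot picks target 1; observer's ErrP fires; gesture redirects to target 2.
print(supervise(1, {"errp": True}, {"gesture": 2}))   # -> 2
# No ErrP detected, so the robot's original choice stands.
print(supervise(0, {"errp": False}, {"gesture": 2}))  # -> 0
```

The key design idea, as in the article, is that the human only intervenes when an error signal appears; the rest of the time the robot acts autonomously.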
In one trial, the team used ‘Baxter’, a robot from Rethink Robotics, to move a power drill to one of three possible targets on the body of a mock plane. With human supervision, Baxter’s chances of choosing the correct target improved considerably, from 70 per cent to more than 97 per cent.
Additionally, the system works with anyone, enabling organizations to deploy it in real-world settings without needing to train it on new users.
According to the project’s lead author, Joseph DelPreto, the invention is particularly significant because, unlike traditional robotic management, users do not need to think in a certain mechanical way.
“The machine adapts to you, and not the other way around,” he said, adding that the system “makes communicating with a robot more like communicating with another person.”
The technology opens up new possibilities for how humans could manage autonomous robots in a more open-ended manner. In the longer term, it could be useful for the elderly, or for workers with language disorders or limited mobility.
“We’d like to move away from a world where people have to adapt to the constraints of machines,” said project supervisor Daniela Rus. “Approaches like this show that it’s very much possible to develop robotic systems that are a more natural and intuitive extension of us.”
This article originally appeared on Engadget.