Engineers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT have developed robots that can correctly sort objects by reading human minds with an electroencephalograph (EEG) monitor. The system is programmed to detect brain signals known as "error-related potentials," which arise when the person wearing the monitor notices the robot about to put an object in the wrong place. Once the robot detects this signal, it places the object in the other box and even changes its facial expression to show embarrassment.
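The closed loop described above can be sketched in a few lines. This is a purely illustrative toy, not CSAIL's code: the function names, the two-bin setup, the detection threshold, and the stand-in "error probability" are all assumptions made for the sake of the example. A real system would run a trained classifier over a short EEG window recorded just after the robot commits to an action.

```python
# Hypothetical sketch of the ErrP feedback loop: the robot picks a bin,
# an EEG classifier watches for an error-related potential, and a
# detection flips the choice. All names and values are illustrative.

def errp_detected(eeg_window, threshold=0.5):
    """Toy ErrP detector. A real system would classify a ~300-500 ms
    EEG window after action onset; here we stand in a precomputed
    'error probability' for that window."""
    return eeg_window["error_probability"] > threshold

def sort_object(initial_bin, eeg_window, bins=("left", "right")):
    """Place the object in initial_bin unless an ErrP is detected,
    in which case switch to the other bin (and look embarrassed)."""
    if errp_detected(eeg_window):
        other = bins[0] if initial_bin == bins[1] else bins[1]
        return other, "embarrassed"  # corrected choice + expression change
    return initial_bin, "neutral"

# The human's brain signals an error, so the robot switches bins.
bin_chosen, expression = sort_object("left", {"error_probability": 0.9})
```

The key design point the article highlights is that the human does nothing deliberate: the error signal is generated automatically by the brain, so correction requires no explicit command.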
For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks.
But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?