Most of us would probably like to have a robot: smart, versatile, and ever-loyal to its owner. I want mine to go to the gym without me and exercise tirelessly while I reap the benefits from afar. At the speed at which technology is advancing, surely that’s on the horizon.
At hospitals, offices, factories, virtually everywhere, it’s becoming more common to see robots working alongside people. Currently, though, robots lack the intelligence to quickly sense human intentions. According to Professor Julie Shah of the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT), that means people and robots end up working independently of each other, which breeds inefficiency and drags down productivity. Her vision is to harness the relative strengths of robots and people, creating a synergy that enhances both efficiency and productivity.
Studies of human-robot interactions have largely focused on programming robots to better recognize human intentions, enabling more effective cooperation. Human-robot interactions, though, go in both directions – human to robot, and robot to human. People will need to learn how to “read” robots.
Teaching people, though, sounds like a far more daunting task than programming robots.
Concepts of human learning
Concepts of human learning are well-established, thanks to decades of research in cognitive science and educational psychology. Now, scientists at Harvard University and MIT are collaborating to apply those principles of learning to shape people’s mental models of robots.
Drawing on previous studies that examined attempts to teach robots new behaviors, the researchers identified points at which cognitive science theories could be used to help people form mental models of robots quickly and accurately. People with more realistic mental models of a robot tend to be more effective collaborators. That is especially important when people and robots work together in risky settings such as health care, where the stakes may be life and death.
“Whether or not we try to help people build conceptual models of robots, they will build them anyway. And those conceptual models could be wrong,” says Serena Booth, a graduate student in MIT’s Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory, in a statement. “This can put people in danger. It is important that we use everything we can to give that person the best mental model they can build.”
Human-robot teaching theories
Researchers analyzed 35 research papers on human-robot teaching through the lens of two key theories. The first, analogical transfer theory, suggests that people learn by analogy: when they encounter new information, they tacitly search their experience for something familiar they can use to make sense of the new concept.
The variation theory of learning suggests that humans learn new concepts through a four-step process: repetition, contrast, generalization, and variation.
Many of the 35 papers incorporated elements of one theory or the other, but the use was not deliberate. Booth notes that if the theories had been applied intentionally, the experiments might have been more useful. Applying variation theory, for example, would mean showing the robot performing a task in a variety of environments, including ones in which it makes mistakes; negative examples teach people what the robot is not, she adds.
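As a rough illustration of that idea, here is a minimal, hypothetical sketch of a demonstration set organized along variation-theory lines; the environments, task, and outcomes below are invented for illustration, not drawn from the studies reviewed. The same task is repeated across contrasting environments, and failures are labeled explicitly as negative examples.

```python
# Hypothetical demonstration set structured along variation-theory lines:
# the same task is repeated across contrasting environments, and explicit
# negative examples show what the robot cannot do.
demonstrations = [
    {"environment": "empty tabletop",      "task": "pick up the mug", "succeeds": True},
    {"environment": "cluttered tabletop",  "task": "pick up the mug", "succeeds": True},
    {"environment": "dim lighting",        "task": "pick up the mug", "succeeds": False},
    {"environment": "mug behind obstacle", "task": "pick up the mug", "succeeds": False},
]

# Contrast: present successes and failures side by side so a viewer can
# generalize what the robot can do and where it breaks down.
for demo in demonstrations:
    outcome = "succeeds" if demo["succeeds"] else "fails"
    print(f'{demo["environment"]}: robot {outcome} at "{demo["task"]}"')
```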
Using cognitive science theories could also improve robot design. Analogical transfer theory suggests that if the movements of a human arm and a robotic arm don’t match, people can have difficulty learning to interact with the robot.
Enhancing explanations
Booth and her collaborators also studied how theories of human concept learning could improve the explanations that help people build trust in unfamiliar robots. Based on that analysis, they make recommendations for improving research on human-robot teaching.
For one, they suggest that researchers incorporate analogical transfer theory by guiding people to make appropriate comparisons when they learn to work with new robots.
They also suggest that including positive and negative examples of robot behavior, and exposing users to how strategic variations of the parameters in a robot’s “policy” affect its behavior, can help people learn more effectively. A robot’s policy is a mathematical function that assigns a probability to each action the robot can take.
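To make that definition concrete, here is a minimal, hypothetical sketch of a discrete policy that turns per-action scores into a probability distribution; the action names, scores, and softmax formulation are illustrative assumptions, not details from the research described here.

```python
import math

# Hypothetical set of actions a simple robot might choose between.
ACTIONS = ["move_left", "move_right", "grasp", "wait"]

def policy(action_scores):
    """Convert raw per-action scores into a probability distribution (softmax)."""
    exps = {a: math.exp(s) for a, s in action_scores.items()}
    total = sum(exps.values())
    return {a: v / total for a, v in exps.items()}

# Invented scores the robot's model might produce in one particular state.
scores = {"move_left": 0.2, "move_right": 1.5, "grasp": 2.0, "wait": -1.0}
probs = policy(scores)

for action in ACTIONS:
    print(f"{action}: {probs[action]:.2f}")  # the probabilities sum to 1
```

Strategically varying one of those scores and showing the user how the resulting distribution shifts is one way to read the recommendation about exposing people to variations in a robot’s policy.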
“We’ve been running user studies for years, but we’ve been shooting from the hip in terms of our own intuition as far as what would or would not be helpful to show the human. The next step would be to be more rigorous about grounding this work in theories of human cognition,” says Elena Glassman, an assistant professor of computer science at Harvard’s John A. Paulson School of Engineering and Applied Sciences, and the primary advisor on the project.
Booth plans to redesign some of the experiments she studied, hoping that deliberate use of the learning theories improves human learning.
Tip of the hat to a literary legend
The Three Laws of Robotics (often known as Asimov’s Laws) are a set of rules devised by science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround” (included in the 1950 collection “I, Robot”), although they had been foreshadowed in some earlier stories. The Three Laws, quoted from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added a fourth rule, the Zeroth Law, which takes precedence over the others: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.