In a new twist on human-robot research, computer scientists at the University of Bristol have developed an intelligent, handheld robot that first predicts a user's plan and then deliberately irritates them by rebelling against it, thereby demonstrating an understanding of human intention.
In an increasingly technological world, cooperation between machines and humans is an essential aspect of automation. This new research uses intentional frustration as part of the process of developing robots that collaborate better with their users.
The team at Bristol has developed smart, handheld robots that complete tasks in collaboration with the user. Unlike traditional power tools, which know nothing about the jobs they perform and are entirely under the user's control, the handheld robot retains knowledge of the task and can help through fine-tuned motion, guidance, and decisions about task sequences.
Although this helps users complete tasks faster and with greater accuracy, they can become annoyed when the robot's decisions conflict with their own plans.
The latest research in this area, by Ph.D. candidate Janis Stolzenwald and Professor Walterio Mayol-Cuevas of the University of Bristol's Department of Computer Science, explores intelligent tools that adapt their decisions in response to the user's intentions.
This research is an innovative and stimulating twist on human-robot research because it aims first to predict what users want and then to act against those plans.
Professor Mayol-Cuevas said: “If you are irritated with a machine that is meant to help you, this is easier to detect and measure than the often vague signals of human-robot cooperation. If the user is frustrated when we tell the robot to go against their plans, we know the robot comprehended what they wanted to do.”
“Just as short-term predictions of each other’s actions are important to successful human teamwork, our research shows incorporating this capability in cooperative robotic systems is vital to successful human-machine cooperation,” said Professor Mayol-Cuevas.
For the experiment, the researchers used a prototype that can track the user’s eye gaze and, through machine learning, make short-term predictions about intended actions. This knowledge then serves as the basis for the robot’s decisions, such as where to move next.
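The article does not describe the prediction model itself. As a minimal illustrative sketch only, assuming a simple scoring scheme rather than the Bristol team's actual method, a short-term intent predictor could rank candidate targets by how close recent gaze fixations fall to each one, weighting newer samples more heavily. All names here (`GazeSample`, `predict_next_target`, the decay weighting) are hypothetical.

```python
from dataclasses import dataclass
from math import hypot, exp

# Hypothetical sketch: predict the user's next target from recent gaze
# fixations. The names and scoring scheme are illustrative assumptions,
# not the model used in the Bristol study.

@dataclass
class GazeSample:
    x: float
    y: float
    t: float  # seconds ago (0.0 = most recent sample)

def predict_next_target(gaze, targets, decay=2.0):
    """Score each candidate target by proximity to recent gaze samples,
    weighting newer samples more heavily, and return the best target."""
    def score(target):
        tx, ty = target
        total = 0.0
        for s in gaze:
            dist = hypot(s.x - tx, s.y - ty)
            recency = exp(-decay * s.t)      # newer samples count more
            total += recency / (1.0 + dist)  # nearer gaze counts more
        return total
    return max(targets, key=score)
```

In this toy setup, a gaze trace that has recently settled near one object would cause that object to be predicted as the next target, which the robot could then either move toward or, in the rebellion condition, avoid.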
The Bristol team trained the robot using a set of over 900 training examples from a pick-and-place task performed by participants.
Central to this research is the evaluation of the intention-prediction model. The researchers tested the robot under two conditions, rebellion and obedience: the robot was programmed either to follow or to defy the user's predicted intention. Knowing the user's goals gave the robot the power to act against their decisions. The difference in frustration responses between the two conditions served as evidence of the accuracy of the robot's predictions, thus validating the intention-prediction model.
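The evaluation logic described above can be sketched in a few lines, purely as an illustration: if users report markedly higher frustration when the robot rebels than when it obeys, the predictions it rebelled against were likely correct. The function name, the rating data, and the margin threshold are all hypothetical, not the study's actual statistics.

```python
# Hypothetical sketch of the study's evaluation logic. The margin and
# frustration ratings are illustrative, not the paper's real analysis.

def mean(xs):
    return sum(xs) / len(xs)

def prediction_supported(frustration_obey, frustration_rebel, margin=1.0):
    """Return True if rebelling raised average self-reported frustration
    by at least `margin` rating points over obeying, taken here as
    evidence that the intention predictions were accurate."""
    return mean(frustration_rebel) - mean(frustration_obey) >= margin
```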
Janis Stolzenwald, a Ph.D. student funded by the German Academic Scholarship Foundation and the UK’s EPSRC, conducted the user experiments and identified new challenges for the future. He said: “We found that the intention model is more effective when the gaze data is combined with task knowledge. This raises a new research question: how can the robot acquire this knowledge? We can imagine learning through demonstration or involving another human in the task.”
Laying the groundwork for this new challenge, the researchers are currently exploring shared control, collaboration, and new applications in their studies of remote collaboration through the handheld robot. In one user experiment involving a maintenance task, the handheld robot's user receives assistance from an expert who remotely controls the robot.
Former Ph.D. student Austin Gregg-Smith designed and built the handheld robot used in the research. It is available as an open-source design through the researchers' site at www.handheldrobotics.org.