Innovation is our regular column that highlights emerging technological ideas and where they may lead.
Where PCs are concerned, faster is invariably better. But things aren't so clear-cut in human society. The next generation of social robots will be better loved if they adopt more human-like behaviour – even if that means losing some of their raw efficiency.
Norihiro Hagita and colleagues at the ATR laboratories in Kyoto, Japan, asked 38 volunteers to click a PC mouse to enlarge an image. The response was programmed to arrive either immediately or after a delay of up to 3 seconds. As expected, an immediate response was most favoured, and participants expressed growing dissatisfaction as the delay lengthened.
But a version of the experiment that involved a humanoid robot threw up a surprising result. The volunteers were asked to tell the robot to take out the rubbish, and the robot verbally acknowledged the request. This time an immediate response – beginning the moment the volunteer finished talking – was considered less welcome than one that was delayed by a second.
Bilge Mutlu at the University of Wisconsin-Madison, who studies social human-robot interactions, finds that result informative. "In the interaction with the PC, what is efficient is that the system does a given command as soon as possible," he says. "In the interaction with the robot, what is efficient is that the robot follows the norms of the conversation, which includes a seemingly inefficient 1 second delay."
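Purely as illustration – none of this code comes from the ATR study – the finding might be captured in a sketch like the following, in which a robot inserts a deliberate one-second pause before acknowledging a spoken request, while a PC-style system responds at once. The function name and the exact pause length are assumptions.

```python
import time

# Hypothetical sketch: the one-second "social" pause suggested by the
# ATR experiment, inserted before a robot acknowledges a spoken request.
SOCIAL_PAUSE_SECONDS = 1.0  # assumption: one value the study found welcome

def acknowledge(request, social=True):
    """Return an acknowledgement, pausing first if behaving socially.

    With social=False the reply is immediate, PC-style; with social=True
    the robot waits briefly, which the study suggests reads as polite
    rather than slow.
    """
    if social:
        time.sleep(SOCIAL_PAUSE_SECONDS)
    return f"OK, I'll {request}."

print(acknowledge("take out the rubbish"))
```

The point is not the pause itself but that the "efficient" behaviour differs by context: for a PC, minimise latency; for a conversational robot, match the rhythm of human turn-taking.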
Don't butt in
Mutlu's work also suggests that robots should modify their behaviour, often in apparently inefficient ways, to be appreciated. He asked local hospital workers for their thoughts on their latest mechanical colleague, a box-like robot called TUG. Those who said they resented the robot objected primarily to its lack of social graces. Where a human trying to deliver a message to a colleague might pause if the other is on the phone, for instance, TUG – seeking efficiency – minimises the time taken to deliver the message by blurting it out.
Aethon, the company that builds TUG – based in Pittsburgh, Pennsylvania – says it's learning from Mutlu's work. The robot can now be programmed to talk in hushed tones at night, for example – but teaching it when to hold its tongue could be more difficult. Although psychologists have explored social norms for decades, roboticists have not needed to. "We often don't know what these socially acceptable norms and rules are," says Mutlu. So expecting their robots to display them is asking a lot.
Don't barge in
Some researchers are, however, trying to do just that. One accusation levelled at TUG was that it "just barrels right on" through a crowd of people rather than moving to one side to let patients pass. Peter Henry and Christian Vollmer's team at the University of Washington, Seattle, think they can help robots learn to move through a crowd as humans do.
Rather than pre-programming fixed instructions, the team thinks it simpler to drop an untrained robot into the real world and equip it with the smarts to study and mimic the behaviour of those around it.
They have developed an algorithm that lets a virtual robot navigate a crowd as a human might. It first monitors how the crowd's properties – density and flow – affect the way individual crowd members move through the throng.
"If a human takes a geometrically longer route avoiding the crowd, our planner would learn to do the same thing," Henry says. That contrasts with the typical approach adopted by robots – taking the shortest and thus most efficient route to a goal, which, as Mutlu's study shows, can lead to resentment.
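To see how a crowd penalty can make a planner prefer a geometrically longer route, here is a minimal, hypothetical sketch – not Henry and Vollmer's actual system, which learns its costs from observing people. It runs Dijkstra's search over a grid whose step cost grows with a hand-set crowd density, so a large enough penalty makes the robot detour around the crowd rather than barrel through it.

```python
import heapq

def plan(density, start, goal, crowd_weight):
    """Dijkstra over a grid; stepping into a cell costs 1 + crowd_weight * density.

    With crowd_weight=0 this is plain shortest-path planning; raising it
    stands in for the learned preference to avoid dense crowds.
    """
    rows, cols = len(density), len(density[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + crowd_weight * density[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal       # walk predecessors back to the start
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# A corridor with a dense crowd blocking the middle row:
density = [
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 5.0, 5.0, 5.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
]
direct = plan(density, (1, 0), (1, 4), crowd_weight=0.0)  # ploughs through
polite = plan(density, (1, 0), (1, 4), crowd_weight=1.0)  # detours around
```

With the penalty switched off, the planner cuts straight through the crowded cells; with it on, the same search takes the longer route around them – the TUG-versus-human contrast in miniature.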
Mutlu thinks Henry's study is a step in the right direction. As robots become increasingly integrated with the everyday world, what constitutes "efficient robotic behaviour" looks set to change, he says. "Even if it is inefficient based on other criteria, 'socially acceptable' behaviour is what is efficient for technology that interacts with people."