Work Well With Robots? How Robots/AI and Humans Interact

Blame it on HAL 9000, Clippy’s relentlessly cheerful interruptions, or any navigation system that sends delivery drivers to a dead end: humans and robots don’t always get along in the workplace.

But as more artificial intelligence systems and robots assist human workers, mutual trust becomes key to getting the job done. A University of Georgia professor is working to bridge that gap with help from the U.S. military.

Aaron Schecter, an assistant professor in the Department of Management Information Systems at the Terry College of Business, has received two grants from the U.S. Army worth nearly $2 million to study how humans and robots work together in teams. While AI in the home can help order dinner, AI on the battlefield operates in far riskier settings, where team cohesion and trust can be a matter of life and death.

“My research is less about the design of the robot or how it works; it’s more psychological. When might we trust something? What is the mechanism that generates trust? How do we get people to cooperate with machines? If the robot is wrong, can you forgive it?” – Aaron Schecter

“In the field, the Army wants robots or AI that operate without human control, performing functions that lighten the load on soldiers,” Schecter said. “Obviously, they don’t want people to react badly to that.”

Although visions of military robots may call the Terminator to mind, Schecter explained that most robots and systems under development are designed to haul heavy payloads or provide advance scouting: a walking platform that carries ammunition and water, for example, so soldiers don’t have to lug 80 pounds of gear themselves.

“Or imagine a drone that isn’t remotely controlled,” he said. “It flies above you like a pet bird, scouting ahead and giving voice feedback, such as ‘I suggest this route.’”

But these robots are only helpful if soldiers trust them rather than shooting at them in frustration, and if they don’t put their human teammates in danger.

“We don’t want people to resent the robot, hate it, or ignore it,” Schecter said. “You have to be willing to trust it in life-and-death situations for it to be effective. So how do we get people to trust robots? How do we get people to trust artificial intelligence?”

Rick Watson, Regents Professor and J. Rex Fuqua Distinguished Chair for Internet Strategy, has co-authored some of Schecter’s research on AI-enabled devices. He believes that as AI development matures, the study of how machines and humans work together will only grow in importance.

Understanding the limitations

“I think we’re going to see many new applications of artificial intelligence, and we need to know when it works well,” Watson said. “We want to avoid situations where it endangers humans, or where it’s hard to justify a decision because a black-box AI system can’t explain how it arrived at its recommendation. We have to understand its limitations.”

Pinpointing when AI systems and robots perform well lets Schecter apply his knowledge of human teams to the dynamics of human-machine teams.

“My research is less about the design of the robot or how it works; it’s more psychological,” Schecter said. “When might we trust something? What is the mechanism that generates trust? How do we get people to cooperate with machines? If the robot is wrong, can you forgive it?”

Schecter first gathered data on when people are most likely to follow a robot’s advice. Then, in a set of projects funded by the Army Research Office, he analyzed how humans take advice from machines and compared it with how they take advice from other people.

Relying on algorithms

In one project, Schecter’s team gave test subjects a planning task, such as drawing the shortest route between two points on a map, and found that people were more likely to trust an algorithm’s recommendation than another person’s. In another study, his team found evidence that humans may also rely on algorithms for less analytical tasks, such as word association or brainstorming.
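To make the planning task concrete, here is a minimal sketch of the kind of route recommendation such an algorithm could produce: a breadth-first search for the shortest path on a small grid map. The grid format, coordinates, and function name are hypothetical illustrations, not details from the study.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search for the shortest route on a grid map.

    grid: list of strings, '.' = open cell, '#' = blocked cell.
    start, goal: (row, col) tuples. Returns a list of cells or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])  # each queue entry is a full path so far
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # first path to reach the goal is a shortest one
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # no route exists

# Example: recommend a route around a wall of blocked cells.
grid = [
    ".#.",
    ".#.",
    "...",
]
print(shortest_route(grid, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```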

“We’re studying how algorithms or AI influence human decision-making,” he said. “We’re testing many different types of tasks to find out when people trust algorithms the most. … It wasn’t surprising that when people are doing something more analytical, they trust computers more. What’s interesting is that the pattern extends to other kinds of activities.”

In another study focusing on how robots interact with humans, Schecter’s team introduced more than 300 subjects to VERO, a fake artificial intelligence assistant shaped like an anthropomorphized spring. “If you remember Clippy (Microsoft’s animated help robot), it’s like Clippy on steroids,” he said.

In the experiment, conducted over Zoom, teams of three performed team-building tasks, such as brainstorming as many uses for a paper clip as possible or listing the items needed to survive on a desert island. Then VERO appeared.

Seeking good collaboration

“It’s an avatar that floats up and down. It has a spiral shape, looks like a spring, and stretches and contracts when it talks,” Schecter said. “It says, ‘Hi, my name is VERO. I can help you with all kinds of different things. I have natural language processing abilities.’”

In reality, VERO was operated by a research assistant using a voice modulator. Sometimes VERO offered useful suggestions, such as additional uses for the paper clip; other times it played cheerleader (“Good job, guys!”) or nudged quieter teammates to contribute ideas.

“People really hated that,” Schecter said, noting that fewer than 10% of participants caught on to the trick. “They were like, ‘Stupid VERO!’ They were so mean to it.”

Schecter’s goal isn’t simply to torture his subjects, though. He said the researchers recorded every conversation, facial expression, gesture, and survey response during the experiments to find “patterns that tell us how to collaborate well.”

An initial paper about AI agents on human teams was published in the Nature journal Scientific Reports in April, and Schecter expects to publish more from the project over the next year.
