Washington: According to new research from data scientists at the University of Georgia, people may be more willing to trust a computer program than their fellow humans, especially when a task becomes too challenging.
The findings of the study were published in the Nature journal ‘Scientific Reports’.
From choosing the next song on your playlist to picking the right size of pants, people are relying more on the advice of algorithms to help make everyday decisions and streamline their lives.
“Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day,” said Eric Bogert, a Ph.D. student in the Terry College of Business Department of Management Information Systems.
Bogert added, “It seems like there’s a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people.”
Bogert worked with management information systems professor Rick Watson and assistant professor Aaron Schecter on the paper, “Humans rely more on algorithms than social influence as a task becomes more difficult.”
Their study, which involved 1,500 individuals evaluating photographs, is part of a larger body of work analyzing how and when people work with algorithms to process information and make decisions.
For this study, the team asked volunteers to count the number of people in a photograph of a crowd and supplied them with suggestions generated by a group of other people alongside suggestions generated by an algorithm.
As the number of people in the photograph grew, counting became more difficult and people were more likely to follow the suggestion generated by an algorithm rather than count themselves or follow the “wisdom of the crowd,” Schecter said.
Schecter explained that counting was an important choice of trial task because the task becomes objectively harder as the number of people in the photo increases. It is also the type of task that laypeople expect computers to be good at.
“This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects,” Schecter said. “One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there — like income and credit score — so people feel like this is a good job for an algorithm. But we know that dependence leads to discriminatory practices in many cases because of social factors that aren’t considered.”
Facial recognition and hiring algorithms have come under scrutiny in recent years as well because their use has revealed cultural biases in the way they were built, which can cause inaccuracies when matching faces to identities or screening for qualified job candidates, Schecter said.
Those biases may not be present in a simple task like counting, but their presence in other trusted algorithms is a reason why it’s important to understand how people rely on algorithms when making decisions, he added.
This study was part of Schecter’s larger research program into human-machine collaboration, which is funded by a USD 300,000 grant from the U.S. Army Research Office.
“The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior,” Schecter said. “Because there’s very little research in that setting, we’re starting with the fundamentals.”
Schecter, Watson and Bogert are currently studying how people rely on algorithms when making creative and moral judgments, such as writing descriptive passages and setting bail for prisoners.