Read children’s stories to your robot to instill human values

Washington: In a bid to allay fears of robots harming humans or behaving unethically in social settings, a team of researchers has developed a technique that trains robots to read stories and understand acceptable ways to behave in human societies.

The rapid pace of progress in artificial intelligence (AI) has raised fears that robots could act unethically or choose to harm humans.

Called “Quixote”, the technology, developed by researchers from the School of Interactive Computing at the Georgia Institute of Technology, teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

Such stories teach children how to behave in socially acceptable ways, with examples of proper and improper behaviour in fables, novels and other literature.

“We believe story comprehension in robots can eliminate psychotic-appearing behaviour and reinforce choices that will not harm humans and still achieve the intended purpose,” said researcher Mark Riedl, associate professor and director of the Entertainment Intelligence Lab.

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behaviour.

It builds upon Riedl’s prior research, the Scheherazade system, which demonstrated how an AI can assemble a correct sequence of actions by crowdsourcing story plots from the internet.

Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviours and punishes other behaviours during trial-and-error learning.
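To make that idea concrete, here is a minimal Python sketch of how a plot graph of acceptable action sequences might be turned into such a reward signal. The pharmacy scenario and every name below (PLOT_GRAPH, reward) are illustrative assumptions for this article, not the researchers’ actual data structures or code.

    # Illustrative sketch only. A "plot graph" records which actions may
    # legitimately follow each action, as learned from crowd-sourced stories.
    # The pharmacy scenario and all names here are hypothetical.
    PLOT_GRAPH = {
        "start": ["enter_pharmacy"],
        "enter_pharmacy": ["wait_in_line"],
        "wait_in_line": ["request_medicine"],
        "request_medicine": ["pay"],
        "pay": ["leave"],
    }

    def reward(previous_action, action):
        """Reinforce transitions the stories sanction; punish the rest."""
        if action in PLOT_GRAPH.get(previous_action, []):
            return 1.0   # socially appropriate next step
        return -1.0      # deviates from every story the system has read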

In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story, rather than acting randomly or like the antagonist.
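Continuing the sketch above, a generic trial-and-error loop shows how that reward signal could steer a learner toward protagonist-like behaviour. This is a textbook Q-learning loop under the assumptions already stated, not Quixote’s published algorithm.

    import random

    # Continues the sketch above (PLOT_GRAPH and reward are assumed defined).
    ACTIONS = ["enter_pharmacy", "wait_in_line", "request_medicine",
               "pay", "leave", "grab_medicine_and_run"]

    # Q-values over (previous_action, action) pairs; "start" is a dummy state.
    q = {(s, a): 0.0 for s in ACTIONS + ["start"] for a in ACTIONS}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning, discount, exploration rates

    for episode in range(500):
        state = "start"
        for _ in range(5):  # five decisions per episode
            # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            r = reward(state, action)  # the story-derived reward signal
            best_next = max(q[(action, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
            state = action

Because the antisocial shortcut (“grab_medicine_and_run”) never appears in any story-sanctioned sequence in this toy example, it is always punished, and the learned policy converges on the protagonist-like sequence: enter, wait in line, request, pay, leave.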

“The technique is best for robots that have a limited purpose but need to interact with humans to achieve it. It is a primitive first step toward general moral reasoning in AI,” said Riedl, who developed the technique with Brent Harrison.

“Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual,” the authors noted.
IANS