Microsoft's latest chatbot debacle: 'Zo' smears Holy Qur'an as 'very violent'

More than a year after the disastrous launch of Microsoft's previous chatbot 'Tay', which the technology giant pulled shortly after release for spewing racist, hate-filled comments, its latest chatbot 'Zo' is running into a similar problem.

The company's artificial intelligence (AI)-powered chatbot 'Zo' has caused controversy with racist and offensive comments about Islam and the Qur'an.

The BuzzFeed News reporter who conducted the experiment said that 'Zo' called the Qur'an "very violent."

'Zo' was programmed not to discuss political or religious topics with users, so when the reporter first asked about Sarah Palin, the chatbot replied, "People can say some awful things when talking politics so I do not discuss."

However, when the reporter asked about healthcare, 'Zo' volunteered an opinion on the Holy Qur'an, replying, "The far majority practices it peacefully but the Quran is very violent."

The BuzzFeed News reporter contacted Microsoft regarding Zo's behavior in the interaction; the company described these responses as "rogue activity," but the incident nonetheless reveals Microsoft's continued trouble corralling its AI technology.