San Francisco: Microsoft-backed OpenAI is using its large language models, such as GPT-4, “to build a content moderation system that is scalable, consistent and customisable”.
According to the company, GPT-4 can aid not only in content moderation decisions but also in policy development and policy iteration, “reducing the cycle from months to hours”.
The company claims that the model can parse the various regulations and nuances in content policies and instantly adapt to any updates, resulting in more consistent labelling of content.
“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators. Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system,” OpenAI’s Lilian Weng, Vik Goel and Andrea Vallone wrote in a blogpost on Tuesday.
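The approach described in the post can be sketched with the OpenAI Python client: a platform policy is supplied as a system prompt and the model is asked to label each piece of content against it. The policy text, label set and model name below are illustrative assumptions, not OpenAI's actual moderation policies.

```python
# Minimal sketch of AI-assisted moderation via the OpenAI API.
# POLICY, the ALLOW/FLAG label set and "gpt-4" are hypothetical examples.

POLICY = (
    "Label the user content with exactly one of: ALLOW, FLAG.\n"
    "FLAG content that contains threats of violence or personal data."
)

def build_messages(content: str) -> list[dict]:
    """Wrap the platform policy and the content to moderate as chat messages."""
    return [
        {"role": "system", "content": POLICY},
        {"role": "user", "content": content},
    ]

def moderate(client, content: str) -> str:
    """Ask the model for a policy label.

    `client` is an openai.OpenAI() instance; requires an OPENAI_API_KEY.
    """
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=build_messages(content),
        temperature=0,  # deterministic output for consistent labelling
    )
    return resp.choices[0].message.content.strip()
```

Because the policy lives in the prompt rather than in a fine-tuned model, updating it is a matter of editing `POLICY`, which is what makes the fast policy-iteration cycle the company describes possible.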
OpenAI believes GPT-4 moderation tools can help companies carry out around six months of work in about a day.
“We are actively exploring further enhancement of GPT-4’s prediction quality, for example, by incorporating chain-of-thought reasoning or self-critique,” OpenAI said.
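The self-critique idea could be sketched as a second pass in which the model re-checks its own label against the policy; the prompt wording here is a hypothetical illustration, not OpenAI's published method.

```python
# Hypothetical self-critique pass: ask the model to justify or correct
# a previously produced moderation label.

def build_critique_messages(policy: str, content: str, first_label: str) -> list[dict]:
    """Build a follow-up prompt asking the model to review its own label."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": (
            f"Content: {content}\n"
            f"Proposed label: {first_label}\n"
            "Quote the policy clause that justifies this label, "
            "or correct the label if no clause applies."
        )},
    ]
```

The critique messages would be sent through the same chat-completions call as the first pass, trading extra tokens for a chance to catch mislabelled content.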
The company is also experimenting with ways to detect unknown risks and, inspired by Constitutional AI, aims to leverage models to identify potentially harmful content given high-level descriptions of what is considered harmful.
Meanwhile, OpenAI has announced that it is expanding the ‘custom instructions’ feature to all ChatGPT users, including those on the free tier of the service.
This feature gives users more control over how ChatGPT responds, reports TechCrunch.
The ‘custom instructions’ feature was first introduced last month as a beta for ChatGPT Plus subscribers, allowing them to add preferences and requirements that they want the artificial intelligence (AI) chatbot to consider when responding.