California: OpenAI, the non-profit AI research company, has built a new text generator that is remarkably convincing. Yet it is not being made available to the masses.
The new natural language model, GPT-2, was trained to predict the next word in a 40-gigabyte sample of internet text. The end result of this training is a system capable of generating text that adapts to the style and content of the conditioning text it is given.
Put simply, it produces text so realistic and convincing that it is scary.
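The training objective behind GPT-2, predicting the next word from the words that came before it, can be illustrated with a toy bigram counter. This is only a minimal sketch of the idea, not OpenAI's actual code, and the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; GPT-2 used 40 GB of internet text instead.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram model, the simplest
# next-word predictor; GPT-2 uses a large neural network).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than any other word
```

Scaled up from counting word pairs to a neural network trained on billions of words, the same objective yields a model that can continue any prompt in a plausible style.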
According to TechCrunch, the researchers believe the system is vulnerable to potential abuse: bots with better dialogue and speech capabilities could be used to flood social media with abusive or spam comments.
Owing to this potential for abuse, OpenAI is releasing only a smaller version of the language model.
The company will revisit the decision to release the tool in full in six months. In the meantime, OpenAI has called on governments to establish guidelines for the diffusion of AI technologies.