OpenAI Releases Fake News Bot It Previously Deemed Too Dangerous

In February of this year, the nonprofit artificial intelligence research lab OpenAI announced its new GPT-2 algorithm, which can generate plausible-sounding text in mere seconds. Rather than release the bot to the world, OpenAI deemed it too dangerous for public consumption. The firm instead spent months releasing pieces of the underlying technology in stages so it could evaluate how they were used. Citing no “strong evidence of misuse,” OpenAI has now released the full model.

OpenAI designed GPT-2 to consume text and produce summaries and translations. However, the researchers became concerned when they fed the algorithm plainly fraudulent statements. GPT-2 could take a kernel of nonsense and build a believable narrative around it, going so far as to invent studies, expert quotes, and even statistics to back up the false information.
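To make that concrete, here is a minimal sketch of prompt-based generation with GPT-2. It assumes the Hugging Face transformers library and its small “gpt2” checkpoint (a popular mirror of the model; OpenAI’s original release shipped as TensorFlow code), and the false-premise prompt is purely illustrative:

```python
# A minimal sketch of prompt-based generation with GPT-2, assuming the
# Hugging Face transformers library rather than OpenAI's original
# TensorFlow release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Recycling is bad for the world because"  # a false premise
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation: GPT-2 builds a narrative around whatever
# premise it is handed, true or not.
output = model.generate(inputs["input_ids"],
                        max_length=80,
                        do_sample=True,
                        top_k=50,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```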

You can see an example of GPT-2’s text generation abilities below.

You can also play around with GPT-2 in an online text generation demo, which has already been updated with the full version of GPT-2. Just add some text, and the AI will continue the story.

The deluge of fake news first drew widespread attention in the wake of the 2016 election, when shady websites run by foreign interests spread misinformation, much of which gained a foothold on Facebook. OpenAI worried that releasing a bot capable of pumping out fake news in large quantities would be dangerous for society. Some AI researchers, however, felt the firm was just looking for attention. This technology, or something like it, would be available eventually, they argued, so why not release the bot so other teams could develop ways to detect its output?

An example of GPT-2 making up facts to support the initial input.

Now, nine months later, here we are, and you can download the full model. OpenAI says it hopes that researchers can better understand how to spot fake news written by the AI. However, it cautions that its research shows GPT-2 can be fine-tuned to take extreme ideological positions, which could make it even more dangerous.
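As a rough illustration of the kind of fine-tuning that concern refers to, here is a minimal sketch that continues GPT-2’s training on a small topical corpus, again assuming the Hugging Face transformers library; the corpus file and hyperparameters are placeholders, not anything OpenAI published:

```python
# A minimal fine-tuning sketch: continuing GPT-2's training on a small
# text corpus so its output drifts toward that corpus's style and views.
# "corpus.txt" and all hyperparameters are placeholders.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# TextDataset chunks a plain-text file into fixed-length training blocks.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="corpus.txt",  # placeholder corpus
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```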

OpenAI also says that its testing shows detecting GPT-2 material can be challenging. Its best in-house methods can identify about 95 percent of GPT-2-generated text, which it believes is not a high enough rate for a completely automated detection process. The worrying thing here is not that GPT-2 can produce fake news, but that it can potentially do so extremely quickly and with a particular bias. It takes people time to write things, even when it’s all made up. If GPT-2 is going to be a problem, we’ll probably find out in the upcoming US election cycle.
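For readers curious what automated detection looks like in practice, here is a minimal sketch built on the RoBERTa-based detector OpenAI published alongside the full model; the Hugging Face model id used here is an assumption, and the printed scores are illustrative:

```python
# A minimal detection sketch. The model id "roberta-base-openai-detector"
# is an assumption about where OpenAI's RoBERTa-based GPT-2 output
# detector is hosted on the Hugging Face hub.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="roberta-base-openai-detector")

sample = "Scientists announced today that recycling secretly harms the planet."
print(detector(sample))  # e.g. [{'label': 'Fake', 'score': 0.98}]
```

A roughly 95 percent hit rate sounds high, but at the scale of social media feeds the remaining 5 percent is exactly why OpenAI doesn’t consider a fully automated filter workable.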
