Researchers at OpenAI say an AI text generator, a tool that could be useful to marketers if designed correctly, is too dangerous to release to the public.
OpenAI is a nonprofit artificial intelligence research company co-founded by Elon Musk to advance AI research and development. Its researchers say that the full version of a language AI tool they created, GPT-2, which was trained on text from millions of webpages and can generate passages of humanlike writing, should not be released publicly.
Researchers say that GPT-2 could power better speech recognition, better translation between languages and AI writing assistants, all tools that could help marketers. But the tool could also be manipulated to automate the production of spam and phishing messages, generate misleading news articles, impersonate people online and flood social media with automated abusive or fake content.
“These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns,” OpenAI researchers write. “The public at large will need to become more skeptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more skepticism about images.”
In a 2018 survey from O’Reilly Media and MemSQL, 61% of marketers said AI is the most important aspect of their data strategy.
As Marketing News found in 2017, marketers have spent years searching for the best mix of AI tools, including tools such as Narrative Science that can write pages of content through AI. But OpenAI’s research underscores a caution marketers must exercise with any new technology, AI or otherwise: Test first to ensure the technology will be a positive not only for business, but also for consumers, the industry and the world.