A mental health chatbot went rogue with harmful advice
- An AI chatbot for an eating disorder site began offering diet advice
- AIs can scrape inaccurate or harmful data and present it to users
- Some leaders are calling for increased regulation of the industry
(NewsNation) — A chatbot meant to help people dealing with eating disorders began offering diet advice after generative artificial intelligence capabilities were added, the latest instance of an AI going off-script in potentially harmful ways.
The Wall Street Journal reported on the instance of Tessa, a bot used on the National Eating Disorder Association’s website. Originally, Tessa was designed as a closed-system bot, only capable of delivering a set of answers determined by developers.
The company that administered the bot later added generative AI capabilities, giving the bot the ability to go off-script and create its own answers based on data. NEDA said it was unaware of the shift, which led the bot to begin offering diet advice in response to questions about eating disorders.
Tessa was taken offline, but it’s one of several instances that highlight the potential drawbacks of using AI, especially in arenas such as health, where sensitivity and accuracy can be critical.
- A YouTube AI that transcribed speech in kids’ videos was found to be inserting profane language where none previously existed, potentially exposing children to inappropriate content.
- Replika, an app that bills itself as an AI friend, began making sexually aggressive advances toward users, to the point that some described the behavior as harassment.
- An AI assistant on Bing began acting aggressively and angrily toward users, even threatening some who interacted with the bot.
- Lawyers who used ChatGPT to write case documents found the AI produced inaccurate information, including citations to cases that didn’t exist.
Artificial intelligence can sound convincingly human, even when dispensing verifiably false information. In some ways, that’s by design: AI is meant to mimic human thought and behavior rather than to strictly identify truthful information.
AIs are also only as good as the data that’s fed into them, and many AI companies have kept quiet about the sources from which they are scraping data. AI works by analyzing large amounts of text, essentially training the tools to predict what words are likely to follow each other.
But where the data comes from can influence the results that come back. With much of the internet available to mine for data, it can be difficult to determine what AIs are learning from. Highly specific prompts can narrow that down: online users who queried an AI about a niche fan-fiction trope were able to determine which sites the tool had been scraping.
When datasets are available, they can contain sites that include inaccurate or biased information. A Washington Post analysis found sites that could contain private information, like voter data, as well as sites that potentially scraped data covered by copyright or intellectual property rights. The analysis also revealed that many personal blogs were being used; these aren’t subject to any kind of fact-checking and could include misleading or biased information.
For those using AI for entertainment, that kind of inaccuracy isn’t necessarily a big deal. But when companies seek to use AI to address shortages in areas like mental health, as Tessa shows, it can become a serious problem.
Tech companies are racing each other to create better, more realistic AI. But many are also calling for companies to hit the brakes, and for governments to take action and regulate the industry to avoid serious consequences when AIs go rogue.