
A mental health chatbot went rogue with harmful advice

  • An AI chatbot for an eating disorder site began offering diet advice
  • AIs can scrape inaccurate or harmful data and present it to users
  • Some leaders are calling for increased regulation of the industry
FILE - Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. European lawmakers have rushed to add language on general artificial intelligence systems like ChatGPT as they put the finishing touches on the Western world's first AI rules. (AP Photo/Richard Drew, File)

(NewsNation) — A chatbot meant to help people dealing with eating disorders began offering diet advice after generative artificial intelligence capabilities were added, the latest instance of an AI going off-script in potentially harmful ways.

The Wall Street Journal reported on the instance of Tessa, a bot used on the National Eating Disorder Association’s website. Originally, Tessa was designed as a closed-system bot, only capable of delivering a set of answers determined by developers.

The company that administered the bot later added generative AI capabilities, giving the bot the ability to go off-script and create its own answers based on data. NEDA said it was unaware of the shift, which led the bot to begin offering diet advice in response to questions about eating disorders.

Tessa was taken offline, but it's one of several instances that highlight the potential drawbacks of using AI, especially in arenas such as health, where sensitivity and accuracy can be critical.

A YouTube AI that transcribed speech in kids’ videos was found to be inserting profane language where none previously existed, potentially exposing children to inappropriate content.

Replika, an app that bills itself as an AI friend, began acting sexually aggressive toward users, to the point that some described the behavior as harassment.

An AI assistant on Bing began acting aggressive and angry toward users, even threatening some who interacted with the bot.

Lawyers who used ChatGPT to write case documents found the AI produced inaccurate information, including citing cases that didn’t exist.

Artificial intelligence can sound convincingly human, even when dispensing verifiably false information. In some ways, that’s by design: AI is meant to mimic human thought and behavior rather than to strictly identify truthful information.

AIs are also only as good as the data that’s fed into them, and many AI companies have kept quiet about the sources from which they are scraping data. AI works by analyzing large amounts of text, essentially training the tools to predict what words are likely to follow each other.
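To make that idea concrete, here is a minimal sketch in Python of next-word prediction by counting which words follow which in a tiny made-up corpus. It is only an illustration of the underlying task; real systems train neural networks on vastly larger datasets, but the goal is the same: guess the likely next word.

```python
from collections import Counter, defaultdict

# Toy corpus (made up for illustration) and a simple bigram tally:
# for each word, count which words follow it.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Return the word most often seen right after `word` in the corpus.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat", the most frequent follower of "the"
```

The prediction reflects whatever text went in: if the corpus is skewed or inaccurate, so is the output, which is the point the article makes about training data.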

But where the data comes from can influence what results get turned back. With much of the internet available to mine for data, it can be difficult to determine what AIs are learning from. Highly specific prompts can narrow that down; online users who queried an AI about a niche fan-fiction trope were able to determine the tool was scraping certain sites.

When training datasets are made public, they can be found to contain sites with inaccurate or biased information. A Washington Post analysis found sites that could contain private information, like voter data, or sites that potentially scraped data covered by copyright or intellectual property rights. The analysis also revealed that many personal blogs were being used, which aren't subject to any kind of fact-checking and could include misleading or biased information.

For those using AI for entertainment, that kind of inaccuracy isn’t necessarily a big deal. But when companies seek to use AI to address shortages in areas like mental health, as Tessa shows, it can become a serious problem.

Tech companies are racing each other to create better, more realistic AI. But many are also calling for companies to hit the brakes, and for governments to take action and regulate the industry to avoid serious consequences when AIs go rogue.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
