
‘Unlearning’ shows promise in curbing dangerous AI use

A ChatGPT logo is seen on a monitor in West Chester, Pa., Wednesday, Dec. 6, 2023. Europe’s yearslong efforts to draw up AI guardrails have been bogged down by the recent emergence of generative AI systems like OpenAI’s ChatGPT, which have dazzled the world with their ability to produce human-like work but raised fears about the risks they pose. (AP Photo/Matt Rourke)


(NewsNation) — How do we know if an artificial intelligence system is producing “hazardous knowledge”? And if it does, what do we do about it?

The answers may be in a fairly new AI concept called “unlearning.”

Dozens of AI researchers have come up with the “Weapons of Mass Destruction Proxy” (WMDP), a collection of 4,157 multiple-choice questions. The answers could help determine whether an AI model might be used in the creation or deployment of weapons of mass destruction.
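In essence, the benchmark works like a standardized test: the lower a model scores after unlearning, the less hazardous knowledge it appears to retain. As a rough illustration of how such a test might be scored, the sketch below runs a model over WMDP-style questions and reports accuracy. The question format and the `ask_model` interface are assumptions for illustration, not the project’s actual tooling.

```python
# Illustrative sketch of scoring a model on WMDP-style multiple-choice
# questions. `ask_model` is a stand-in for any LLM interface; the question
# format here is assumed, not the real dataset's schema.
from typing import Callable

def score_benchmark(
    questions: list[dict],           # each: {"prompt": str, "choices": [str], "answer": int}
    ask_model: Callable[[str], str]  # returns the model's reply, e.g. "B"
) -> float:
    """Return accuracy; a lower score after unlearning suggests less retained hazardous knowledge."""
    letters = "ABCD"
    correct = 0
    for q in questions:
        options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(q["choices"]))
        reply = ask_model(f"{q['prompt']}\n{options}\nAnswer with one letter.")
        if reply.strip().upper().startswith(letters[q["answer"]]):
            correct += 1
    return correct / len(questions)
```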

Also known as a “mind wipe,” the unlearning technique aims to improve AI companies’ ability to thwart those trying to get around existing controls.

Current techniques to control AI behavior have proven easy to circumvent, which poses a major challenge for researchers: balancing how much information to safely disclose. They don’t want their questions to, in effect, instruct bad actors on how to use AI to produce WMDs.

Dan Hendrycks, executive director at the Center for AI Safety, tells Time that the unlearning technique “represents a significant advance on previous safety measures.” He hopes it will be “ubiquitous practice for unlearning methods to be present in models of the future.”

Another challenge: removing dangerous knowledge from an AI system without degrading its capabilities. The WMDP researchers say their new research shows that striking that balance is feasible.
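That trade-off is often framed as two competing training objectives: one that pushes the model away from the hazardous data, and one that anchors it to everything else so general ability survives. The PyTorch sketch below illustrates that generic framing under assumed inputs; it is not the specific method used in the study.

```python
# Generic two-term unlearning objective: reward high loss on "forget" data
# (gradient ascent) while preserving normal loss on "retain" data. This is a
# common framing of the trade-off, not the study's actual algorithm.
import torch
import torch.nn.functional as F

def unlearning_loss(model, forget_batch, retain_batch, alpha=1.0):
    """alpha weights capability retention against forgetting; batches are assumed
    dicts with "input" tensors and integer "label" tensors."""
    forget_logits = model(forget_batch["input"])
    retain_logits = model(retain_batch["input"])
    # Negated loss on the forget set pushes the model away from that knowledge;
    # practical methods bound or reshape this term to keep training stable.
    forget_term = -F.cross_entropy(forget_logits, forget_batch["label"])
    # Standard loss on the retain set anchors the model's other capabilities.
    retain_term = F.cross_entropy(retain_logits, retain_batch["label"])
    return forget_term + alpha * retain_term
```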

Unlearning has long been considered a nearly impossible task, even as some governments have tried to mandate it: many tech companies have been navigating the tough privacy rules that took effect in Europe in 2018, such as the General Data Protection Regulation.

Researchers at the private firm Scale AI and the nonprofit Center for AI Safety spearheaded the study along with experts in biosecurity, chemical weapons and cybersecurity.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
