NewsNation

AI CEO: Regulations will be critical to reduce risks

(NewsNation) — The CEO of the company behind ChatGPT told lawmakers Tuesday that governmental regulations will be critical to reducing the risks presented by rapidly developing artificial intelligence technology.

“If this technology goes wrong, it can go quite wrong and we want to be vocal about that,” Sam Altman, the head of OpenAI, said before a Senate Judiciary subcommittee. “We want to work with the government to prevent that from happening.”

Altman said that oversight should take the form of an agency that has the power to issue licenses and take them away. He also called for a set of safety standards that an AI model would have to pass before being deployed.

But senators, who are trying to keep pace with AI as it reshapes major industries in real time, are still trying to determine what that regulation will look like.

“There are places where the risk of AI is so extreme that we ought to impose restrictions, or even ban their use, especially when it comes to commercial invasions of privacy for profit,” said Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law.

ChatGPT’s release in November accelerated an AI arms race as companies scrambled to leverage the technology.

Teachers feared students could now cheat on essays, but more serious issues soon became clear. AI sometimes “hallucinates,” confidently providing false information, and researchers have warned about bias baked into the models.

Altman acknowledged the significant dangers and said the response will require a combination of “companies doing the right thing, regulation and public education.”

Professor Gary Marcus, a leading voice in AI who also testified Tuesday, agreed with Altman that the U.S. needs a cabinet-level organization to address the challenges posed by the technology.

Both Altman and Marcus suggested an international AI agency could be necessary.

Missouri Sen. Josh Hawley said AI could wind up being the printing press or the atom bomb, calling it one of the most disruptive technologies in human history.

In March, a group of prominent computer scientists and tech industry figures called on all AI labs to “immediately pause” the training of the most powerful AI systems for at least six months in order to better understand the potential risks to “society and humanity.”

European lawmakers have already introduced artificial intelligence legislation that could soon become law. The flagship proposal would effectively ban certain uses, such as remote facial recognition, and would govern any product or service that uses an artificial intelligence system.

This story will continue to be updated.