Senate committee seeks first-ever AI regulations
WASHINGTON (NEXSTAR) – Artificial intelligence is no longer science fiction for Congress.
Lawmakers on a Senate Judiciary subcommittee warned Tuesday that the technology has the potential to direct nuclear attacks, create deadly diseases and even end humanity.
“The word that has been used so repeatedly is scary,” said Sen. Richard Blumenthal, D-CT. “The urgency here demands action.”
Blumenthal and Sen. Josh Hawley, R-MO, brought top AI experts before the subcommittee to help lawmakers figure out how to regulate the technology. Hawley worries that if left unchecked, a handful of Big Tech companies could control the AI world.
“That is the true nightmare,” he said. “And for my money, that is what this body has got to prevent.”
Dario Amodei, the CEO of Anthropic, told lawmakers that within as little as a couple of years, AI could run wild, creating dangers as extreme as large-scale biological attacks.
“We believe this represents a grave threat to national security,” Amodei said.
But the experts all agreed that Congress can address these fears through legislation, such as requiring AI products to pass safety tests and securing the technology’s supply chain.
“Keeping these technologies out of the hands of bad actors,” Amodei said.
Linda Moore, the president and CEO of TechNet, brought AI companies and policymakers to Capitol Hill Tuesday to discuss best practices for the technology. Moore said the industry as a whole supports congressional action.
“We like to say AI is too important not to regulate,” she said.
Moore also wants lawmakers to adopt a national privacy standard for AI since about a dozen states currently have their own laws.
“That does affect the data that is put into the training models for the machine learning to develop the AI,” she said.
The companies want to flip the script on the technology, stressing it also has the potential to help address climate change, cure deadly diseases and even save humanity.
While Congress continues to debate potential industry regulations, the White House announced voluntary commitments from seven tech companies, including Anthropic, last week that aim to make AI safe and secure for users. The companies agreed to publicly report flaws and risks in their technology, protect it from cyber threats and label AI-generated content.
Blumenthal said legislation could look similar.
“We can’t repeat the same mistakes we made on social media,” he said.