NewsNation

John Oliver on new AI tech: ‘Stupid in ways we can’t predict’

In this Nov. 4, 2019 file photo, John Oliver performs at the 13th annual Stand Up For Heroes benefit concert in support of the Bob Woodruff Foundation at the Hulu Theater at Madison Square Garden in New York. Danbury, Conn., Mayor Mark Boughton announced a tongue-in-cheek move posted on his Facebook page on Saturday, Aug. 22, 2020, to rename Danbury's local sewage treatment plant after Oliver following the comedian's expletive-filled rant about the city. (Photo by Greg Allen/Invision/AP, File)

(NewsNation) — Comedian and “Last Week Tonight” host John Oliver unpacked the rise of ChatGPT and other AI programs in recent years and said, “The potential and the peril here are huge.”

The main problem, Oliver said, is that you can’t always tell why an AI program gave the response it did, whether that response is helpful or alarming. Scientists call this the “black box problem.”
“Think of AI like a factory that makes Slim Jims,” Oliver explained. “We know what comes out: red and angry meat twigs. And we know what goes in: barnyard anuses and hot glue. But what happens in between is a bit of a mystery.”

Programs like ChatGPT are designed to teach themselves using information from the internet and produce a response to a user’s prompt. But it is not always clear to engineers what information the program used or how it reached the answer it gave.

“The problem with AI right now isn’t that it’s smart; it’s that it’s stupid in ways that we can’t always predict, which is a real problem,” Oliver said.

According to Oliver, the danger of AI surpassing human intelligence is still far off, since programs like ChatGPT are limited to creating content within a narrow scope.

AI like ChatGPT “is ultimately the result of human-created algorithms that interact to create this content,” said National Security Institute founder Jamil Jaffer on “Morning in America.”

Since OpenAI launched ChatGPT late last year, the chatbot’s popularity has exploded as students, writers, artists and others have toyed with its ability to generate text and other content.

But the technology has raised concerns as misinformation, plagiarism and security issues continue to come up across a range of industries.

Earlier this month, videos of AI-generated journalists went viral for spreading misinformation and pro-China propaganda. College students have begun using AI to complete their homework, raising questions about the future of education. Tesla CEO Elon Musk has come under fire over driver-assist systems marketed in ways that misrepresent the cars as self-driving.

Despite this, Oliver says AI can be a force for good through innovation and progress.

“The fact is there are other much more immediate dangers and opportunities that we really need to start talking about,” Oliver said. “AI is ultimately a mirror, and it will reflect back exactly who we are, from the best of us to the worst of us.”