(NewsNation) — For many parents, sharing photos of their children on social media has become second nature — a way to show pride and connect with family members. But those same pictures may also be putting kids at risk, particularly when it comes to artificial intelligence.
“Existing generative AI tools and emerging ones are remarkably sophisticated at producing realistic images based on photographs of real children,” Leah Plunkett, a faculty member at Harvard Law School and author of the book “Sharenthood,” said on “NewsNation Live” Friday.
So-called “sharenting,” a term for parents publicizing their children’s private lives online, has become popular in recent years, but Plunkett says the images it puts on the internet could be used to exploit children with AI.
“It is time for us as adults to protect the kids in our homes and schools and neighborhoods by limiting, or stopping, putting images of kids that we know and love out on the open internet,” she said.
Earlier this week, top prosecutors in all 50 states wrote a letter urging Congress to study how artificial intelligence can be used to exploit children through pornography.
“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions,” the letter says.
The attorneys general from across the country called on federal lawmakers to establish an expert commission to study how AI can be used “to exploit children specifically.” They also want existing restrictions on child sexual abuse material to be expanded.
“We are engaged in a race against time to protect the children of our country from the dangers of AI,” the prosecutors wrote. “The proverbial walls of the city have already been breached. Now is the time to act.”
During her conversation with NewsNation’s Marni Hughes, Plunkett stressed the urgency of the situation and urged officials not to wait.
She said it’s “absolutely crucial” that state prosecutors and other government officials look at what they can do under existing laws to protect kids from the dangers of AI.
Plunkett thinks tech companies will have to be part of the solution and place guardrails around their own products.
In February, Meta, along with adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down, which lets teens report explicit images and videos of themselves on the internet.
Next week, Congress will hold three hearings on artificial intelligence, including one with Microsoft President Brad Smith, according to Reuters.
The latest hearings come four months after OpenAI CEO Sam Altman told lawmakers that government regulations would be critical to reducing the risks posed by AI.
It remains unclear how long it could take for Congress to craft sweeping AI rules or whether such legislation will happen at all.