NewsNation

‘I have your daughter’: Mom warns of AI dangers after scam

(NewsNation) — In a case spotlighting the dangers of unchecked artificial intelligence, an Arizona mother testified before Congress this week after scammers cloned her teenage daughter’s voice in a fake kidnapping plot.

Jennifer DeStefano received a terrifying call in January that she believed at the time was from her 15-year-old daughter.

She described the harrowing call to members of the Senate Judiciary Committee on Tuesday: “’Mom, these bad men have me. Help me, help me, help me,’ she begged and pleaded as the phone was taken from her. A threatening and vulgar man took the call over: ‘Listen here, I have your daughter. You call anybody, you call the police, I’m going to pop her stomach so full of drugs. I’m going to have my way with her. I’m going to drop her in Mexico and you’ll never see your daughter again.’”

DeStefano said the call captured not just her daughter’s voice, but also her daughter’s cries and sobs.

“It was horrifying. It’s the worst fear any parent could ever have is the sound of your child in harm’s way, begging and pleading for you to come help them and feeling completely helpless and not knowing what to do. It’s haunting,” DeStefano said on “NewsNation Now.”

The caller first demanded a $1 million ransom and later asked for $50,000. The scammer wanted DeStefano to meet with the cash and get inside a van with a bag over her head for the handoff.

Fortunately, DeStefano found out her daughter was safe before meeting the scammer’s demands. DeStefano says the phone call was so realistic it wasn’t until she actually spoke to her daughter that she believed she was OK.

“I was told that she was safe with my husband, but I didn’t believe him. My brain just couldn’t process that I had just spoken to her with the kidnappers, there’s no way that wasn’t who I spoke to. So, I got her on the phone and I asked her multiple times to confirm that it’s really her I’m talking to, she’s really safe. ‘Is it really you? Are you sure you’re really safe?’ And after multiple times of reassurance, I finally believed her,” DeStefano said.

Authorities dismissed her experience as a prank call. That’s why she’s urging lawmakers to take action.

“If we don’t take action, if we don’t create consequences, if we don’t create legislation, then all it’s going to do is enable these criminals to keep doing what they’re doing, and then even take it to a further level,” DeStefano said. “I stayed up all night thinking what happens if they use this to lure a child? They wanted to kidnap me, but who’s next? Next it could be used for human trafficking for a whole bunch of different magnitudes. That’s what scared me the most, and that’s why we had to stop this.”

DeStefano is advocating for lawmakers to consider criminal consequences for AI scams before, she says, they reach that level.

After sharing her story with the Senate Judiciary Committee, DeStefano says she’s hopeful something will be done.

“I see both sides, it is a nonpartisan issue, coming together in unison on their concern on this matter,” DeStefano said. “So that gives me great hope that they’ll be able to take swift action.”

DeStefano believes AI could “really unravel our sense of trust and our sense of what is true,” but that it could also be beneficial in educational settings.

DeStefano’s story was part of the Senate’s first-ever closed-door briefing on the risks and benefits of AI.