New iPhone can ‘learn user’s voice’ in 15 minutes: Apple
- A new iPhone feature, Personal Voice, will replicate users' voices through AI
- It’s designed to help the speech-impaired with calls and in-person conversations
- Critics are nevertheless warning of privacy and security concerns
(NewsNation) — Amid a new wave of artificial intelligence, Apple iPhones will soon be able to speak in their users’ voices, the tech company announced on Tuesday.
The new iPhone feature, Personal Voice, will have users read a series of randomized text prompts aloud to record about 15 minutes of audio, which the phone uses to build a synthetic version of their voice. Another feature, Live Speech, will let users save commonly used phrases for the device to speak aloud during phone calls and in-person conversations.
Apple said it will use machine learning, a type of AI, to create the voice on the device itself rather than on external servers, so the data stays more secure and private.
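Apple did not describe a developer-facing interface in the announcement, but a minimal Swift sketch of how an app might speak a saved phrase in a user's Personal Voice could look like the following. The requestPersonalVoiceAuthorization call and the .isPersonalVoice voice trait are assumptions drawn from the iOS 17 additions to Apple's AVFoundation speech-synthesis framework, not details stated in the announcement itself.

```swift
import AVFoundation

// A minimal sketch, not Apple's implementation: speaking a saved phrase in the
// user's Personal Voice using the iOS 17 speech-synthesis APIs.
final class PhraseSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ phrase: String) {
        // The user must explicitly grant an app access to their Personal Voice.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
            guard status == .authorized else { return }

            // Personal voices appear alongside the system voices, flagged by a trait.
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: phrase)
            // Fall back to a standard system voice if no Personal Voice is available.
            utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")

            // Synthesis runs locally on the device, in line with Apple's stated
            // on-device approach.
            self.synthesizer.speak(utterance)
        }
    }
}
```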
The tech giant notes that these tools will help users who are speech-impaired. For example, a man who was diagnosed with ALS and is losing his ability to speak said, “If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world.”
But critics say the feature could pose security and privacy risks.
“There are a number of privacy concerns,” said Vahid Behzadan, a University of New Haven cybersecurity expert. “What if the voice model is not fully stored on your phone, but is backed up on Apple? What if your voice can be stolen by your phone to be used by others?”
Nevertheless, the company is jumping on the AI bandwagon.
“At Apple, we’ve always believed that the best technology is technology built for everyone,” said Apple CEO Tim Cook.
“There are over 2.5 billion people right now in the world that will need this type of technology,” said Marva Bailer, a tech executive. “So, it’s a really great opportunity to invest in our people.”
Apple said Personal Voice is expected to be released before the end of the year.