NYU professor believes AI could ‘unlock’ health care advances
- AI capabilities have rapidly expanded over the last few years
- Tech leaders are searching for ways to mitigate risks
- Professor Scott Galloway says AI has huge potential benefits
(NewsNation) — Artificial intelligence has been described as both a potential humanity saver and humanity ender.
Scott Galloway believes it’s most likely neither.
“There’s a … pattern here … there’s a lot of catastrophizing every time a new technology comes along,” the tech entrepreneur and NYU marketing professor said Thursday on “CUOMO.” “I mean, you know, wasn’t Bitcoin supposed to replace the dollar by this point?”
As AI has proliferated, so too have concerns about the potential dangers to society, including consumer scams and disinformation campaigns through so-called “deepfake” videos.
This week, attorneys general in all 50 states called on Congress to study how AI can be used to exploit children through pornography and to enact legislation addressing it.
Earlier this year, 42% of tech CEOs said in a survey they fear that artificial intelligence could not only do away with jobs but even wipe out humanity.
Galloway, though, believes the technology holds great promise, especially in the health care industry.
“They’ve increased prices faster than inflation for 40 years, and yet only one in five people are happy with their health care. So, if we can distribute or disperse automated health care out to mobile phones for preventive health care, help people answer their series of questions, determine when something is serious and maybe take health care from a defensive on-your-heels industry to an offensive on-your-toes industry and dramatically lower costs, this could be just an enormous unlock,” Galloway said.
The White House has taken an active role in the conversation and earlier this year invited tech leaders to discuss the technology’s risks. In July, seven major tech companies agreed to follow a set of White House AI safety guidelines.
The companies’ voluntary commitments include conducting external security testing of AI systems before they’re released and sharing information about managing AI risks industry-wide, as well as with governments, academia and the general public.
As AI capabilities rapidly expand, the White House has asked companies to help address “society’s greatest challenges” and research societal risks that AI can pose, including bias, discrimination and privacy concerns.
There are also concerns over AI’s ability to create and spread disinformation. Galloway says that capability could come into play at critical points in history, such as presidential elections.
“We are going to have a misinformation apocalypse Q1, Q2 of next year, as Putin’s shortest blue-line path to victory in Ukraine is a Trump victory, and I think you’re just going to see massive misinformation against Biden,” he said. “What’s a better expenditure: $60 billion in a war of attrition for Russia, or to give the GRU and Albanian troll farms $5 billion and take advantage of what I’ll call an amoral leadership at social media platforms to just spread misinformation that is very, very hard to distinguish from credible fact-checked information? It’s coming.”
NewsNation reporter Katie Smith contributed to this report.