(NewsNation) — Elon Musk’s artificial intelligence (AI) startup xAI launched an AI model called Grok on Saturday, making it available to a select group.
“Grok has real-time access to info via the X platform, which is a massive advantage over other models,” Musk said.
This release comes nearly a year after OpenAI’s ChatGPT caught the attention of both businesses and users around the world. And it signals Musk’s commitment to advancing AI technology despite his own prior calls for a pause on AI development.
On Thursday, Musk met with British Prime Minister Rishi Sunak in London to conclude the first-ever AI safety summit.
During a cozy onstage chat at a business reception in London’s grand Lancaster House, Musk warned that AI was the “most disruptive force in history.” He even predicted that AI would lead to the end of the workforce, BBC reported.
“There will come a point where no job is needed – you can have a job if you want one for personal satisfaction but AI will do everything,” Musk said.
Musk, the billionaire CEO of Tesla and SpaceX, and owner of X (formerly Twitter), unveiled his latest startup in July this year. This new venture is based in the San Francisco Bay Area and has recruited a team of leading AI researchers with previous experience at OpenAI, Google, Microsoft, and Tesla.
Musk was a co-founder and early funder of OpenAI who parted ways with the San Francisco-based research lab several years ago. He’s grown increasingly critical of OpenAI as it’s gained global prominence and commercial success with last year’s release of ChatGPT and solidified its financial ties to Microsoft.
The public unveiling of xAI follows comments Musk made about it in April to then-Fox News host Tucker Carlson.
Musk told Carlson that OpenAI’s popular chatbot had a liberal bias and that he planned an alternative that would be a “maximum truth-seeking AI that tries to understand the nature of the universe.”
Musk said all subscribers to X’s recently launched Premium Plus plan, which costs $16 per month for ad-free access to X, will get access to Grok “once it’s out of early beta.”
The startup reflects Musk’s long-voiced concerns about a future in which AI systems could present an existential risk to humanity. The idea, Musk told Carlson, is that an AI that wants to understand humanity is less likely to destroy it.
Musk was one of the tech leaders who earlier this year called for AI developers to agree to a six-month pause before building systems more powerful than OpenAI’s latest model, GPT-4. Around the same time, he had already been working to start his own AI company, according to Nevada business records.
Since the launch, concerns have emerged about the potential impact of AI on misinformation during the upcoming 2024 presidential election.
In a recent interview on “NewsNation Prime,” Jim Anderson, CEO of Beacon, an AI software company, discussed the implications of Musk’s entry into the AI landscape.
One notable aspect of Musk’s AI initiative is his commitment to making it “without censorship.” However, what that commitment will mean in practice remains uncertain.
Anderson said that censorship in AI is a complex and politically charged topic, and the focus may shift to the vast amount of training data that Musk has access to.
With millions of Tesla vehicles on the road collecting extensive data and Starlink satellites in orbit, Musk has a unique data advantage that could shape the direction of his AI endeavors, according to Anderson.
The Republican National Committee and presidential candidate Ron DeSantis have already used AI to create fake political ads, raising concerns about the potential proliferation of misleading content in the 2024 presidential election.
A recent Associated Press poll revealed that 58% of adults believe that AI will increase the spread of false and misleading information during the upcoming election, with only 6% believing it will decrease it.
To address the concerns surrounding AI and misinformation, the White House issued an executive order aimed at developing standards for watermarking and clearly labeling AI-generated content. While this is a positive step, experts say it is not a foolproof solution, and the devil is in the details when it comes to effective regulation.
The Associated Press contributed to this report.