Why the Pentagon’s ‘killer robots’ are spurring major concerns 

As the Defense Department pushes aggressively to modernize its forces with fully autonomous drones and weapons systems, critics fear the start of a new arms race that could dramatically raise the risk of mass destruction, nuclear war and civilian casualties.

The Pentagon and the military tech industry are going into overdrive in a massive effort to scale up existing technology under what is known as the Replicator initiative. It envisions a future force in which fully autonomous systems are deployed in flying drones, aircraft, water vessels and defense systems, all connected through a computerized mainframe that synchronizes and commands the units.

Arms control advocates fear the worst and worry existing guardrails offer insufficient checks, given the existential risks. Critics call self-operating weapons “killer robots” or “slaughterbots” because they are powered by artificial intelligence (AI) and can technically operate independently to take out targets without human help. 

These types of systems have rarely been seen in action, and how they will affect combat is largely unknown, though their impact on the landscape of warfare has been compared to tanks in World War I.

But there are no international treaties governing the use of these weapons, and human rights groups are uneasy about Washington’s ethical guidelines on AI-powered systems and whether they will offer any protection against an array of humanitarian concerns.

“It’s really a Pandora’s box that we’re starting to see open, and it will be very hard to go back,” said Anna Hehir, who leads autonomous weapons system research for the advocacy organization Future of Life Institute (FLI).

“I would argue for the Pentagon to view the use of AI in military use as on par with the start of the nuclear era,” she added. “So this is a novel technology that we don’t understand. And if we view this in an arms race way, which is what the Pentagon is doing, then we can head to global catastrophe.”

Deputy Secretary of Defense Kathleen Hicks first announced Replicator at a defense conference in Washington, D.C., in late August, calling it a “game-changing” initiative that will counter China’s growing ambitions and larger fleet of military resources. It comes after years of research and testing in both the private and public sectors, which have now reached a point where the U.S. can move to field the technology.

While autonomous systems have been used for decades in some form, such as automatic defensive machine guns, Replicator is designed to produce swarms of AI-powered drones and flying or swimming craft to attack targets.

Hicks said Replicator will use existing funds and personnel to develop thousands of autonomous systems in the next 18-24 months. She has also publicly said that the initiative will remain within ethical guidelines for fully autonomous systems.

Those guidelines come from the Pentagon’s own directives on the use of AI in warfare, updated in January 2023, which focus on ensuring that senior-level commanders and officials properly review and approve new weapons. The policy stipulates there be an “appropriate level of human judgment” before an AI weapon system can use force.

A Congressional Research Service report noted that the phrase was a “flexible term” that does not apply in every situation, and that another phrase, “human judgment over the use of force,” does not mean direct human control but refers to broad decisions about deployment.

Some critics also point out that the policy contains a waiver that appears to allow the requirement for senior-level review of AI weapons to be bypassed before deployment.

Eric Pahon, a spokesperson for the Defense Department, declined to address the questions raised about the policy, including its language and the waiver. The Hill sent the Pentagon detailed questions about the concerns but did not receive a response.

Pahon said in an interview the U.S. was the “world leader in ethical AI standards.”

“We’re always going to have a person responsible for making decisions,” he said. “We’re committed to having a human responsible for decision-making.”

Michael Klare, secretary for the Arms Control Association’s board of directors and a senior visiting fellow researching emerging technologies, said he was “dubious that it will always be possible to retain human control over all of these devices.”

He argued these types of self-operating systems could carry out unintended missions such as attacking nuclear facilities.

“The multiplication of these kinds of autonomous devices will increase the potential for unintended or accidental escalation of conflict across the nuclear threshold [that could] trigger a nuclear war,” Klare said.

“We fear that there’s a lot of potential for that down the road,” Klare added. “And that’s not being given careful consideration by the people who are fielding these weapons.”

Fully autonomous systems have been feared for decades, with activists warning about the dire consequences of diminishing human oversight as the weapons become more widespread.

The leading concerns are: 

  • Making the decision to go to war will be easier the more heavily the world relies on AI weapons
  • Algorithms should not take human life because they cannot comprehend its value and can be biased, for example by targeting groups based on race
  • AI weapons, expected to be cheap and mass-produced, will proliferate and spread to insurgent groups and bad actors
  • The risk of escalation between nuclear powers increases with the use of autonomous systems

The issues are not new, but human rights groups are alarmed about the pace of Replicator and worry its ambitious timeline will not allow for proper testing in the real world.

Hehir, from the FLI, said the timeline announced by Hicks was “very, very fast” for an emerging technology that may prove to be overwhelming. Experts in the field also say the technology is lacking when it comes to humans retaking command over out-of-control autonomous weapons. 

“I don’t think that’s a sufficient amount of time,” she said. “If you’re waging war with such a high number of systems in a particular moment or a particular attack, then it’s impossible to exercise meaningful human control over such a number.”

In 2018, the FLI released a petition calling for a prohibition on any autonomous weapon taking the life of a person without a human operator making the call. To date, the petition has been signed by 274 organizations and nearly 4,000 people.

A 2016 FLI letter, signed by more than 34,000 people including tech billionaire Elon Musk and theoretical physicist Stephen Hawking, called for a complete ban on offensive weapons beyond meaningful human control.

The Western security alliance NATO has established some standards on the use of autonomous weapons, including a requirement that they only attack lawful military targets. But NATO officials concede that it’s unclear “when and where the law requires human presence in the decision cycle” and “how wide a decision cycle is.”

While at least 30 countries have called for the prohibition of lethal autonomous systems, the United Nations (U.N.) has yet to adopt any treaty to govern the weapons, and to date there is no agreed-upon definition for them.

U.N. Secretary-General António Guterres has called for a legally binding agreement by 2026 to restrict the technology and prohibit lethal autonomous weapons from being used without human oversight. The U.N.’s Convention on Certain Conventional Weapons (CCW) has been debating the issue since 2013.

This year, the U.N. is expected to address the issue more head-on; the General Assembly may bring up autonomous systems in October, and the CCW is expected to convene and discuss the topic in November.

Human rights groups and activists largely expect the U.S. to eventually back a treaty, arguing it is in Washington’s best interest to do so, because America could help shape the use of AI weapons as adversaries such as China pursue similar tech.

The Defense Department would not say whether it would support an international treaty governing the use of AI weapons.

But as with other weapons treaties, the greatest challenge may be compliance, which is compounded by the fact that the use of advanced AI technology can be difficult to track and identify.

Peter Asaro, the co-founder of the International Committee for Robot Arms Control and a spokesperson for the Campaign to Stop Killer Robots, said AI use will be hard to prove and easy to deny. 

“You have to have a good understanding of the technological architecture,” he said. “These drones [could] have another drone that looks just like it but doesn’t have that [autonomous] functionality. It’s really hard to tell.”

The Pentagon has now formed a working group to study these types of weapons and ensure they are used ethically and responsibly. It also has a Chief Digital and Artificial Intelligence Office overseeing the development of related tech.

For the U.S. military, the only known combat deployment of fully autonomous craft is Shield AI’s Hivemind, AI pilot software that operates the Nova quadcopter to scan areas and identify targets and movement on battlefields.

Today, there are just one or two instances of the technology being used offensively, meaning it’s unclear what mass deployment of AI weapons will actually look like.

Public records show only one known instance where an offensive AI weapon is thought to have been used, in Libya in March 2020.

At the time, a Turkish-made drone system called the STM Kargu-2 attacked forces aligned with Gen. Khalifa Haftar, who leads one of two factions vying for power in the war-torn country.

A U.N. report said the STM Kargu-2 drones “were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”

It’s not clear how deadly the attack was. But as the technology rolls out, humanitarian organizations worry about an increase in civilian casualties from the mass deployment of AI bots — and possible bias in who is targeted. 

The International Committee of the Red Cross recommends explicitly prohibiting drones that could attack a target without human knowledge and has called for limits on all other autonomous systems, such as restricting their use in urban areas.

“Machines can’t make complex ethical choices,” reads a petition from Amnesty International calling to create international restrictions around the technology. “They lack compassion and understanding, they make decisions based on biased, flawed and oppressive processes.”
