Debate grows on AI weapons making life-or-death decisions

Ukraine's push for automated weapons highlights growing global competition in military tech.


News Desk | October 12, 2024

Silicon Valley is embroiled in a debate over the use of artificial intelligence (AI) in weapons systems, and in particular over whether machines should be given the authority to make life-and-death decisions in combat.

The discussion gained momentum in late September, when Brandon Tseng, co-founder of Shield AI, asserted that US weapons would never be fully autonomous, meaning that an AI algorithm would never have the final say in a lethal action.

He emphasized that neither Congress nor the general public supports such a notion.

Shield AI is an American aerospace and defense technology company based in San Diego, California. It develops AI-powered fighter pilots, drones, and other technology for military operations.

However, just days later, Palmer Luckey, co-founder of Anduril, expressed a different perspective during a talk at Pepperdine University.

Anduril Industries is an American defense technology company that specializes in advanced autonomous systems.

He indicated a willingness to consider the use of autonomous weapons, questioning the moral arguments against them.

Luckey pointed to landmines, which cannot distinguish between civilians and military targets, arguing that ethical objections to AI in warfare should be weighed pragmatically against the weapons already in use.

An Anduril spokesperson later clarified that Luckey's comments did not suggest that robots should decide to kill on their own; rather, he was concerned about the implications of malicious actors deploying harmful AI technologies.

In the past, tech leaders like Trae Stephens of Anduril have advocated for a model where humans remain responsible for critical decisions regarding lethality.

He stressed the importance of having accountable parties involved in such decisions.

While Anduril's spokesperson denied any conflict between Luckey's and Stephens' views, both positions share the underlying sentiment that someone must always be accountable for the decision to use lethal force.

The stance of the US government on this issue is also ambiguous. Currently, the military does not purchase fully autonomous weapons.

Some existing weapons already operate with a degree of independence, but they differ significantly from systems that can autonomously identify, acquire, and engage targets without human intervention.

The US does not ban the development of fully autonomous weapons, nor does it prevent companies from selling them internationally.

Last year, the US introduced new guidelines for AI safety in military applications, which were endorsed by several allies.

These guidelines require top military officials to approve any new autonomous weapon; however, compliance is voluntary, and officials have repeatedly stated that it is not yet the right time to consider a binding ban on autonomous weapons.

Recently, Joe Lonsdale, co-founder of Palantir and investor in Anduril, also expressed a willingness to explore the concept of fully autonomous weapons.

He argued against a binary approach to this debate, suggesting that a more nuanced understanding of autonomy in weaponry is necessary.

Activists and human rights organizations have long attempted to establish international bans on lethal autonomous weapons, but the US has consistently resisted these efforts.

However, the ongoing conflict in Ukraine may shift the dynamics, providing a wealth of data on the use of AI in combat and serving as a testing ground for defense technology companies.

Ukrainian officials have been vocal about their need for increased automation in weapons systems, believing that it will enhance their capabilities against Russian forces.

The overarching concern among US officials and tech leaders is the fear that countries like China and Russia might develop fully autonomous weapons first, compelling the US to follow suit.

A Russian diplomat's remarks at a UN debate on AI arms underscored this fear, suggesting that priorities around human control differ significantly between nations.

In response to this competitive landscape, tech leaders like Lonsdale are advocating for a proactive educational effort aimed at military and governmental officials to better understand AI's potential benefits in national security.

 
