China’s PLA uses Meta’s Llama AI for military purposes, breaching acceptable use policy

Chinese researchers with connections to the PLA have modified the AI for intelligence-gathering applications


News Desk November 02, 2024
An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, on July 6, 2023. PHOTO: REUTERS

In a development that raises questions about the use of open-source technology, China’s People’s Liberation Army (PLA) is reportedly employing an early version of Meta’s open-source AI model, Llama 13B, for military purposes.

According to reporting by Reuters, Chinese researchers with connections to the PLA have modified the AI for applications related to intelligence gathering, decision-making, and training. Despite Meta’s policy prohibiting military use, the model has become a critical component of the PLA’s evolving AI capabilities.

From Open-Source to Military Tool: The Rise of ‘ChatBIT’

In June, six Chinese researchers from institutions linked to the PLA published a paper detailing their work on “ChatBIT,” an adaptation of Llama 13B trained on military data. According to the translated paper, ChatBIT was designed not only for intelligence analysis but also as a future tool for “strategic planning, simulation training, and command decision-making.”

The model was reportedly trained on more than 100,000 military dialogue records, strengthening its ability to process large data sets relevant to defence operations. A separate study revealed that a similar Llama-based large language model (LLM) has already been applied domestically to assist police with data analysis for decision-making.

In another instance, a paper uncovered by Reuters describes researchers at an aviation firm connected to the PLA using Llama 2 to develop “training for airborne electronic warfare interference strategies.”

Meta’s Approach to Open-Source AI: A Double-Edged Sword?

Meta CEO Mark Zuckerberg has been a vocal advocate of open-source AI, citing transparency and accessibility as essential benefits. In a July essay, he pointed to the rise of open-source Linux over closed-source Unix and declared that “open-source AI is the path forward.” By making AI models accessible, he argued, society as a whole could advance more safely and equitably. “I think governments will conclude it’s in their interest to support open source because it will make the world more prosperous and safer,” he wrote.

The Llama model’s open-source framework, however, also makes it available for unrestricted use worldwide, a fact Zuckerberg acknowledged. Responding to concerns about potential misuse, including by geopolitical rivals like China, he asserted that restricting access would ultimately harm the US and its allies more than it would benefit them. “Our adversaries are great at espionage, stealing models that fit on a thumb drive is relatively easy,” Zuckerberg stated. He warned that a closed AI environment would only empower a few tech giants and geopolitical adversaries, limiting opportunities for smaller entities.

Policy Violation: Meta’s Stand on Military Use

Meta’s guidelines for Llama strictly prohibit its use for “military, warfare, nuclear industries, or applications involving violence.” However, the model’s open-source nature means these restrictions cannot be enforced if third parties adapt the model independently. Molly Montgomery, Meta’s director of public policy, reiterated the company’s stance: “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.”

In response to reports about PLA usage, Meta issued a statement to Gizmodo, emphasising the need for the US to maintain open innovation in AI development. A Meta spokesperson warned that hindering open-source AI development would cause the US to “cede its AI lead to China, hurting our economy and potentially putting our national security at risk.”

As the Llama model is already in circulation, Meta has few options for limiting its use. The incident has amplified ongoing concerns about balancing the benefits of open-source AI against the risks posed by adversarial use. While Meta argues that closed AI models would leave small businesses and academic institutions at a disadvantage, the question remains: how should the tech community and policymakers address the potential weaponisation of open-source AI?

With no immediate solution in sight, the widespread availability of Llama exemplifies the complex challenges facing global AI governance.
