
Charting the Global Dialogue on Military AI

Wednesday, March 27, 2024
Carnegie India

Carnegie India organized an online, closed-door discussion titled “Charting the Global Dialogue on Military AI” on March 27, 2024. The discussion brought together officials from governments and intergovernmental organizations, subject matter experts, and industry representatives. Participants focused on the current state of artificial intelligence (AI) worldwide and took stock of its military applications. After considering the various multilateral, bilateral, and national initiatives adopted by different countries, they mapped out the regulatory landscape of military applications of AI.

Discussion Highlights

1. The Current State of Artificial Intelligence in the World

Applications of AI have advanced significantly over the last few years. Generative AI, in particular, exhibits the capability to act autonomously based on user inputs and prompts, and offers several operational advantages. These models rely on compute for their deployment, but now require less of it than they did previously, making their functioning more efficient. Despite these advancements, generative AI is prone to producing misinformation and may not always behave as expected. Issues such as these underscore the need to govern AI and to ensure human oversight for its responsible use. Currently, there is divergence in the global landscape of AI governance: the Global North has taken a risk-averse approach, while the Global South has adopted a pro-innovation approach. India has advocated for an approach that balances innovation with risk mitigation and has actively participated in global processes such as the G20, the Global Partnership on Artificial Intelligence (GPAI), and the United Nations. However, India needs a domestic policy that articulates a coherent AI risk mitigation strategy in order to align with its international positions on AI governance. Further, India’s domestic strategy on AI governance, its quest to build sovereign AI to reduce foreign dependence, and its data policies will all have ramifications for its position in global initiatives that aim to govern military AI.

2. The State of Military Applications of AI Around the World

Currently, AI has various applications in the military: autonomy in weapon systems, the execution of command and control functions, intelligence, surveillance, and reconnaissance (ISR) operations, supply chain and human resource management, and modeling and simulation. AI is also converging with other technologies such as space technology, quantum technology, and biotechnology. Weapon systems have grown in complexity, with many countries shifting toward autonomous weapons, as illustrated by Israel’s Gospel system and Ukraine’s Saker. AI is also increasingly distributed across applications ranging from logistics and human resource management to command and control. This progress is subtle and incremental, and may lead to insidious integrations of AI into military operations. For example, AI-driven reductions in decision-making time can quicken responses and lead to escalation and flash wars. There is also potential for the integration of AI into certain military tasks such as cybersecurity, jamming, simultaneous attacks, swarm coordination, and operations in anti-access/area denial (A2/AD) environments. However, concerns persist regarding the ethical use of autonomous systems in the military, their compliance with International Humanitarian Law (IHL), and the risk of escalation posed by AI-based systems, especially those entangled with nuclear doctrines. Finally, there exists a gap between the theoretical understanding and the practical application of AI, particularly in the context of human-machine interaction and comprehension. To address this gap, there must be more focus on human-machine teaming and training. The functioning of AI involves three main stages: taking sensory inputs, planning a course of action, and acting on that plan. Each of these stages contributes to AI’s adaptability and problem-solving capabilities and poses a distinct set of technical challenges. Legal and ethical dilemmas arise as AI technologies advance, particularly when they become so complex that human comprehension of their operations is limited. Current AI models also struggle with the associative and directed thinking processes inherent in human cognition. Further challenges can arise during development, for instance when vendor lock-in around application programming interfaces (APIs) hinders the data delivery needed for AI development.
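To make the three stages concrete, below is a minimal, purely illustrative Python sketch of a sense-plan-act cycle. The World and Agent classes, their methods, and the proportional planning rule are hypothetical constructs introduced here for illustration; they do not represent any system discussed at the event.

```python
# A minimal, purely illustrative sketch of the sense-plan-act cycle described
# above. The Agent and World classes and the planning rule are hypothetical,
# not drawn from any real military or commercial system.
from dataclasses import dataclass, field
from typing import List


@dataclass
class World:
    """Toy environment: a single value the agent tries to drive to its goal."""
    state: float = 10.0


@dataclass
class Agent:
    goal: float = 0.0
    log: List[str] = field(default_factory=list)

    def sense(self, world: World) -> float:
        # Stage 1: sensory input -- observe the current state of the world.
        return world.state

    def plan(self, observation: float) -> float:
        # Stage 2: plan a course of action -- here, a simple proportional
        # step toward the goal stands in for any decision-making process.
        return 0.5 * (self.goal - observation)

    def act(self, world: World, action: float) -> None:
        # Stage 3: execute the plan, changing the world.
        world.state += action
        self.log.append(f"state={world.state:.2f}")


if __name__ == "__main__":
    world, agent = World(), Agent()
    for _ in range(5):
        observation = agent.sense(world)
        action = agent.plan(observation)
        agent.act(world, action)
    print(agent.log)  # e.g. ['state=5.00', 'state=2.50', ...]
```

Each iteration of the loop maps onto one stage described above, which is also where the distinct technical challenges arise: noisy or adversarial inputs at the sensing stage, opaque decision logic at the planning stage, and unintended effects at the action stage.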

3. Regulatory Landscape of Military AI Around the World

Currently, there are two broad schools of thought on military AI governance. The first is the humanitarian approach, which advocates removing lethal autonomous systems from the military and views AI in military operations as a concern. The second takes a more pragmatic view of the deployment of AI in the military, highlighting the need to balance humanitarian values with military imperatives. To regulate AI in the military, it is important to understand and distinguish between automatic, automated, and autonomous processes. It is also important to understand how regulating military AI differs from regulating civilian AI. For example, non-military AI is viewed in the context of human rights, whereas the risks of military AI are assessed against compliance with IHL. Furthermore, a country’s military doctrines and ethos are also important considerations when regulating AI in the military. Developing an AI regulatory framework requires a multistakeholder approach due to its complex nature. Many governmental bodies lack the technological expertise to regulate AI effectively, making the involvement of the private sector crucial, as private companies often possess valuable resources that can contribute to the regulatory process. Multistakeholder processes such as the U.S.-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, three of its working groups, and the Responsible AI in the Military Domain (REAIM) Summit reflect commitments to impart the practical knowledge needed to develop and use these technologies responsibly.

Takeaways

1. The opportunities and challenges of military AI are manifold. The doctrinal aspects of military AI need to be studied to balance military imperatives with responsible use.

2. There is a narrow window to meaningfully regulate AI in the military before current practices become established norms.

3. Global discussions on military AI need to be coordinated with those on civilian AI.

4. More work is needed on export control measures to regulate dual-use AI technologies and to ensure that they do not reach non-state actors.

5. It is important to build a risk matrix for military AI, starting with the minimum risk threshold.

6. Multistakeholder discussions that focus on capacity building, like the U.S.-led Political Declaration or the REAIM Summit, are needed.

7. It is important to have an inclusive dialogue that takes into account the perspectives of the Global South on military AI.

This event summary was prepared with the help of Nandini Nair and Katyayinee Richhariya, research interns at Carnegie India.

Carnegie India does not take institutional positions on public policy issues; the views represented herein are those of the author(s) and do not necessarily reflect the views of Carnegie India, its staff, or its trustees.