Bellandi Insight
AI in War: Who Claims Responsibility?
Autonomous weapons, drone swarms, and AI-driven targeting are reshaping conflict faster than the law can keep pace. When a machine decides to strike, where does accountability lie?
Recent Developments & Battlefield Examples
- In the Ukraine-Russia conflict, autonomous drones (like Gogol-M and swarm systems) are increasingly deployed, some capable of identifying and striking targets with limited human oversight.
- At the UN, states have resumed negotiations on regulating lethal autonomous weapons systems (LAWS). Despite growing urgency, major powers resist binding restrictions, favoring national guidelines instead.
- In December 2024, the UN General Assembly passed a resolution condemning fully autonomous weapons and proposing a two-tier approach: banning some systems outright and regulating others.
- Ethical and technical risks: unpredictability, black-box behavior, misclassification, and reward hacking all pose serious dangers when lethal systems act without human oversight.
- U.S. DoD Directive 3000.09 requires that autonomous weapon systems allow commanders and operators to exercise “appropriate levels of human judgment over the use of force.”
Sources:
- Stanford — Lethal Autonomous Weapons
- Reuters — UN “Killer Robots” Talks
- War on the Rocks — No Human in the Loop?
- Geneva Academy — Proliferation Risks
- arXiv — Technical Risks of LAWS
War is becoming a laboratory. As technology races ahead, the gap between deployment and regulation widens dangerously.
Ethical & Legal Challenges
- Absence of a human in the loop: Fully autonomous systems may act without direct human approval at the moment of attack.
- Blurred accountability: Who is liable — developer, commander, state, or the algorithm itself?
- Violations of international law: Proportionality, distinction between combatants and civilians, and necessity are principles that are hard to embed in code.
- Arms-race escalation: Cheap drones plus AI lower the barrier to entry. The technology may spread to non-state actors, increasing the risk of conflict.
Your Turn
Question: If machines autonomously decide to kill — even under human-set goals — who should be held responsible? And how do we stop mistakes from becoming disasters?