Man, Machine, and the Battlefield in Modern Warfare
STRATEGY CENTRAL
For And By Practitioners
By Monte Erfourth - December 8, 2024
Introduction
Artificial intelligence is on the verge of revolutionizing modern warfare, driving a speed, scale, and complexity that surpass the capabilities of the human mind. AI promises to help manage that problem by sifting through data, recognizing patterns, and supporting multi-domain decisions in real time. But as the U.S. military increasingly looks to AI to address tactical and strategic challenges, a fundamental question looms: How do we blend AI’s computational power with human moral and professional judgment? It is not enough to build smarter systems; those systems must serve human ethics and law, particularly concerning lethal force.
AI as a Game Changer for Decision Making
Modern combat is no longer confined to a single domain. On any given day, operations may occur simultaneously in the air, on land, at sea, in space, and in cyberspace. This all-domain nature of warfare creates complexity that is almost impossible for human minds to manage effectively in real time. AI can process that complexity efficiently, providing real-time analysis and decision options as threats emerge. This capability is not optional; it is imperative for remaining competitive against peer adversaries.
AI is increasingly seen as an invaluable partner in speeding up the decision-making process while maintaining, if not improving, the accuracy of those decisions. However, there is a delicate balance to strike—AI must not replace humans. Instead, it must work in concert with military professionals who can apply moral and ethical considerations, particularly when lethal force is involved.
The American military is clear: A human must always be in the loop. Humans can weigh options, consider unintended consequences, and make value-based decisions—crucial elements when lives are at stake. This isn't a matter of mere caution; it’s a matter of ethical duty. The ethical implications of allowing a machine to make lethal decisions are enormous, and the potential consequences could be tragic if things go wrong.
The answer lies in integrating the best of both man and machine. However, integrating AI into military decision-making is not as simple as handing rapid decision-making over to a machine and letting a human push a button. It requires a level of understanding and training that allows military personnel to both trust and question the decisions AI suggests. One way the military is preparing for this future is through AI-driven wargaming.
Imagine a wargame where participants must consider tactical or operational solutions in each domain simultaneously and reconcile those solutions with the Law of Armed Conflict (LOAC). In such a scenario, AI offers solutions based on data, statistics, and probabilities—delivering outcomes that maximize military advantage while minimizing risk to friendly forces. However, the participants—human commanders and operators—must evaluate these solutions with a keen awareness of ethical and legal boundaries. They need to determine whether those decisions align with international law, the moral standards expected of the American military, and, ultimately, their conscience.
An AI-driven wargame might present an example like this: A drone strike is proposed on a target that AI has identified as a high-value military asset. The system’s analysis indicates a 95% success rate, with minimal risk to U.S. personnel. However, there is a 5% chance that a civilian structure nearby could be affected. The AI might calculate this as an acceptable level of collateral damage based on the expected strategic advantage. But the human in the loop must ask: Is that risk acceptable? Does it comply with the LOAC, which requires minimizing civilian harm? Could there be unintended consequences—such as negative political or social repercussions—that the AI cannot fully evaluate?
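To make the arithmetic of such a scenario concrete, the sketch below shows, in purely illustrative Python, how a decision-support tool might surface the AI's probability estimates while pushing the acceptability question back to the human. Every name, threshold, and value beyond the 95/5 split described above is an assumption for this article, not a description of any fielded system.

```python
from dataclasses import dataclass

@dataclass
class StrikeRecommendation:
    """Hypothetical AI output for a proposed strike; all fields are illustrative."""
    target_id: str
    p_success: float        # estimated probability the strike achieves its objective
    p_civilian_harm: float  # estimated probability a nearby civilian structure is affected
    military_value: float   # assumed relative value of the target on a 0-1 scale

def requires_human_review(rec: StrikeRecommendation, harm_threshold: float = 0.0) -> bool:
    """Flag any recommendation with non-negligible civilian risk for human judgment.

    The threshold is deliberately a placeholder: under LOAC, proportionality and
    the acceptability of residual risk are judgments the human operator must make,
    not constants the machine can own.
    """
    return rec.p_civilian_harm > harm_threshold

rec = StrikeRecommendation(target_id="HVT-01", p_success=0.95,
                           p_civilian_harm=0.05, military_value=0.8)

if requires_human_review(rec):
    # The machine surfaces the numbers; only the operator decides.
    print(f"{rec.target_id}: {rec.p_civilian_harm:.0%} estimated civilian risk; "
          f"hold for the commander's proportionality and LOAC review.")
```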
Such wargames should also inform the development of the Rules of Engagement (ROE). Programmers and lawyers need time to identify and solve problems well before the time pressures and risks of war demand rapid decisions from the man-machine interface. Studies of risk and risk tolerance, political objectives, and cultural sensitivities should be built into wargaming, so that the man-machine team learns to work through them routinely in peacetime and sharpens its linked decision process.
Such wargames force participants to recognize both where AI’s tactical or strategic solutions are sound and where they fall short. Technology experts, lawyers, psychologists, ethicists, strategists, anthropologists, weapons experts, and other specialists must be part of the production and pre-deployment training package. The improved kill-chain systems and the AI used to develop strategy require the best man-machine integration we can field. AI does not possess moral judgment, nor can it understand the complex political and human consequences that extend beyond the battlefield. Human commanders and operators, by contrast, must bear the weight of these decisions and be prepared to justify their actions legally and morally.
Accelerating Decisions Without Losing Humanity
This discussion highlights both speed and a quality of “correctness” in applying deadly force. Two modes of AI-enabled decision-making are at play here: slower strategic decisions that AI should inform, and kill-chain decisions that demand speed and accuracy in the use of lethal force. Speed in the ethical application of lethal force is the central issue, because it is the primary problem in integrating the full power of AI with the slower but more nuanced human mind. Slow thinking, the deliberate decision-making that strategy development allows, raises plenty of issues of its own. Still, AI-human integration at speed in applying lethal force is the more vexing half of the symbiotic relationship.
The key to integrating AI into the kill chain is forging a symbiotic relationship that enhances decision speed and accuracy without sacrificing our humanity or core democratic values. AI offers rapid data analysis, tactical recommendations, and precision, but it must always serve human oversight rooted in ethical responsibility. The integration must mesh AI’s objective analysis with subjective human judgment to maximize the effect of weapons while preserving and protecting civilian lives, infrastructure, and the environment.
On the future battlefield, decisions must be made in seconds, not minutes. AI can assist by providing swift, data-driven solutions, but the ultimate judgment must come from humans who can evaluate the broader consequences. Human operators must blend tactical advantage with moral and strategic considerations, ensuring adherence to the principles of ethical conduct and the Law of Armed Conflict. The nightmare scenario of AI making lethal decisions autonomously risks removing the essential humanity from warfare, replacing nuanced ethical judgment with cold calculations that could lead to tragic outcomes. This remains true even for slower-paced strategic decisions: AI’s cold rationality must not lead us toward monstrous strategic objectives like genocide, starvation, or enslavement.
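In system-design terms, this constraint is often expressed as a human-authorization gate: the AI may recommend, but no lethal action executes without an explicit, attributable human decision. The sketch below is a minimal illustration of that pattern; the class, names, and logged fields are assumptions made for this article, not a description of any fielded system.

```python
from enum import Enum
from typing import Callable, Optional

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"

class LethalActionGate:
    """Illustrative human-in-the-loop gate: the AI may recommend, but nothing
    executes without an explicit, attributable human decision."""

    def __init__(self, execute: Callable[[str], None]):
        self._execute = execute          # the effector this gate protects
        self.audit_log: list[dict] = []  # every decision is attributable to a named operator

    def submit(self, recommendation: str, operator: str,
               decision: Decision, rationale: Optional[str] = None) -> bool:
        # Log the outcome first so accountability is preserved either way.
        self.audit_log.append({
            "operator": operator,
            "recommendation": recommendation,
            "decision": decision.value,
            "rationale": rationale,
        })
        if decision is Decision.APPROVE:
            self._execute(recommendation)
            return True
        return False

# The AI proposes, a named human disposes, and the log preserves accountability.
gate = LethalActionGate(execute=lambda rec: print(f"Executing: {rec}"))
gate.submit("Strike HVT-01", operator="CPT Doe", decision=Decision.REJECT,
            rationale="Civilian structure within the estimated blast radius; fails proportionality.")
```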
We must also consider that AI may, in time, surpass humanity in ethical and legal reasoning at speed. That would be a truly unprecedented development, in which ethical AI systems might lead and fight wars. AI could become more ethically human than humanity: without greed, the will to power, hatred, anger, fatigue, and other human foibles, AI trained by the best legal and ethical minds could become better suited to fight wars than humans. Synergy would take on a new meaning in a world where AI was our ethical superior. But until that day, man must remain the final decision-maker, and synergy must be pursued through a relentless quest to improve man, machine, and process.
Synergy requires ongoing training that goes beyond technical skills. Military personnel must grasp AI's capabilities and limitations, recognizing when it might lead to unintended outcomes or when human judgment is stretched too thin. By promoting critical thinking and experience, the military can build trust in AI while ensuring that human operators retain ultimate authority and prioritize ethics over speed. This integration allows for quick and precise actions while adhering to the principles of just warfare. The overall process demands that AI enhances operational tempo without compromising human values, preventing indiscriminate harm and upholding moral standards defined by LOAC. Ultimately, AI is a force multiplier that empowers, not replaces, humanity.
Responsibility and Legal Constraints
The American military must adhere to the Law of Armed Conflict (LOAC) and its own ethical code. The use of AI in warfare raises critical questions about how compatible these laws and values are with machine decision-making. A key concern is the principle of distinction, which requires distinguishing combatants from non-combatants. While AI can identify targets accurately, it cannot grasp context the way humans can.
Consider a situation where AI identifies armed individuals in a conflict zone, classifies them as combatants, and recommends a strike. A human operator, however, may know these individuals are part of a local militia not actively fighting, or civilians defending themselves. With that information, the human can accurately assess the context and decide whether a strike is justified. Conversely, the AI may have more current intelligence that supports a more accurate assessment. The human must be trained to make snap decisions about which to trust while complying with LOAC and the ROE.
The issue of accountability is crucial. If a mistake occurs, such as targeting civilians, who is responsible? A human can be held accountable for errors, but what about an AI? This lack of clear accountability is one reason the military insists on keeping a human in the loop: someone must be responsible for decisions made on the battlefield.
A Vision for the Future
Integrating AI into military operations is not a distant possibility—it is happening now. The challenge is to ensure that as this technology becomes more deeply embedded in the decision-making process, it is done in a way that upholds the principles of just war and the ethical standards that define the American military.
This will require a combination of training, wargaming, and cultural adaptation. Commanders must be comfortable using AI, but they must also be comfortable questioning it. They must learn to recognize when AI offers the best solution and when it falls short. This kind of learning can only come from experience—from exercises that force them to confront AI's limitations and consider the broader consequences of their actions.
Ultimately, the goal is not to replace human decision-makers but to enhance them. AI can potentially make the American military faster, smarter, and more effective. But it must do so in a way that respects human judgment and upholds the values that define the military and the nation it serves.
The future battlefield will demand decisions at the speed of events across all domains. AI can provide the answers, but only humans can determine whether those answers are right or wrong. The challenge is to blend the speed and power of AI with the conscience and judgment of the human mind to ensure that as warfare becomes more complex, it does not lose its humanity.