The Rise of Artificial Intelligence in Warfare
The growing use of artificial intelligence in warfare is changing how countries plan for, participate in, and even think about going to war. Unlike traditional weapons operated by human hands, modern artificial intelligence systems are beginning to make decisions on their own. This shift is about control, ethics, and power as much as efficiency and speed. From the battlefields of Ukraine to the laboratories of the Pentagon, artificial intelligence now sits at the center of what many are calling the next revolution in military technology.
From command centers to surveillance systems, cyber operations to drones, artificial intelligence is already operating in many different venues. According to the Center for Strategic and International Studies, more than 60 countries now support military artificial intelligence initiatives. In 2023, the US government set aside more than $1.8 billion for defense research involving artificial intelligence. In Ukraine, Ukrainian and Russian forces use jamming technology and AI-enhanced drones to hunt one another’s equipment. Ukrainian-made drones such as the “Saker Scout” can reportedly identify and strike up to 64 types of Russian military targets without any human help. Military experts have been surprised by the effectiveness of these devices, which are built on basic off-the-shelf electronics such as Raspberry Pi computers.
Even as it offers new capabilities for military operations, artificial intelligence raises significant ethical and safety questions. One of the key issues is the use of autonomous weapons: machines capable of deciding to kill without human involvement. A prime illustration is the Israeli military’s use of a tool called “Lavender” to construct kill lists from AI-generated profiles during the 2023–24 conflict in Gaza. Reports say the system may have flagged as many as 37,000 Palestinians as suspected militants. Israeli officers who used Lavender admitted they gave each target only about a 20-second review before authorizing a strike. Subsequent reporting revealed that the system had a roughly 10% error rate, meaning thousands of unarmed people could have been wrongly targeted.
In a military setting, artificial intelligence could be misused or make errors with serious consequences. Even small biases or inaccuracies in the vast datasets used to train AI models could have disastrous effects. In image recognition, for instance, mistaking a civilian car for a military truck could lead to unnecessary deaths. The internal logic of AI systems is often opaque even to their human operators, which makes such errors hard to detect. When a mistake occurs in combat, it is unclear whether responsibility lies with the soldier, the programmer, or the machine.
The absence of clear international rules or safeguards adds further cause for worry, allowing countries to engage in an artificial intelligence arms race. As of 2024, the United States, Russia, China, and Israel were all developing autonomous military systems. China’s military doctrine of “intelligentised warfare” explicitly addresses the use of artificial intelligence for both physical combat and “cognitive warfare”: the manipulation of enemy soldiers’ and civilians’ minds and decision-making. This includes using artificial intelligence in psychological operations and disinformation campaigns on social media. Such tools could mislead entire societies or endanger democracy.
The question of power is another issue. AI weapons act far faster than people. In 2020, the United States Army’s Project Convergence tested an AI-powered system capable of analyzing sensor data and issuing artillery fire commands in twenty seconds, far quicker than any human operator. Although this could be tactically advantageous, it also suggests that algorithms may soon make decisions with life-or-death consequences. In the event of a communication breakdown or a compromised system, such weapons could unleash catastrophic force.
Experts differ on how to reduce these risks, but almost all agree that the responsible use of artificial intelligence depends on strong ethical norms and oversight. The United States military, for instance, holds that a human being must always be involved in the decision to use lethal force. Some countries follow this standard; others do not, and the temptation to let machines take on more of the work grows with each passing year. More than 270 organizations worldwide, including Human Rights Watch, have called for a blanket ban on fully autonomous weapons, but the world has yet to reach a legally binding agreement.
Notably, many of the most innovative artificial intelligence weapons are being created by startups rather than major defense contractors. American firms such as Anduril have developed autonomous drones like ALTIUS that can fly by themselves and use onboard AI to find their targets. Some weapon systems in Ukraine are being built with parts available in any typical electronics shop, or even a toy store. This low-cost, high-impact approach makes AI warfare more accessible, and also much harder to control.
The potential use of artificial intelligence in war raises difficult questions. Should we trust machines to decide issues of life and death? What would happen if two nations that rely heavily on autonomous systems went to war with each other? If such systems fail or turn against their controllers, who would be held responsible? These problems are not purely technical; they are moral, legal, and fundamentally human.
Managing artificial intelligence demands great vigilance, since mismanagement could turn it into one of the most powerful weapons of modern war. Like nuclear weapons before it, artificial intelligence offers great promise alongside great danger. The world must act now to establish laws, controls, and ethical standards before the technology runs wild, and to prevent future wars from being waged not by human warriors but by incomprehensible, unstoppable machines.