
In the shadows of Silicon Valley, artificial intelligence is quickly reshaping the battlefield, providing a glimpse of a future where wars may be won or lost in milliseconds by algorithms we can barely comprehend. As AI seeps into military strategy, we face the prospect of a new era in warfare — one where the line between human intuition and machine calculation blurs, and a single line of code could spark the next global conflict.
As we witness the disaster that is the “Gaza Pier”, driven by the ongoing “Corporate BS Bingo” that has replaced decades of actual training and planning, it is easy to miss new developments, especially with contentious elections at home and ground-shaking political shifts overseas.
“Artificial Intelligence” (AI) systems are revolutionizing military decision-making through their ability to rapidly process, analyze, and collate vast amounts of data, far faster than even teams of trained and experienced humans can. These developing capabilities have several key implications for military strategy and, by extension, for national security strategy.
The first factor is enhanced situational awareness: AI can integrate data from multiple sources (satellites, drones, ground sensors, etc.) in real time, at speeds well beyond conventional processes. This gives commanders a more comprehensive and up-to-date battlefield picture, helping to identify patterns and anomalies that human analysts might miss.
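To make the idea concrete, here is a minimal sketch, in Python, of what multi-source fusion can look like in principle: timestamped reports from different feeds are grouped by location and time window, and corroborated sightings rise to the top. The feed labels, data layout, and confidence math are illustrative assumptions, not a description of any fielded system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Report:
    source: str        # e.g. "satellite", "drone", "ground_sensor" (illustrative labels)
    timestamp: datetime
    grid: str          # reported location, simplified to a grid reference
    confidence: float  # 0.0 - 1.0, as assessed by the originating system

def fuse(reports: List[Report], window: timedelta = timedelta(minutes=5)) -> List[dict]:
    """Group reports of the same grid square inside a short time window and
    combine their confidences, so corroborated sightings rise to the top."""
    fused: List[dict] = []
    for r in sorted(reports, key=lambda r: r.timestamp):
        match = next((f for f in fused
                      if f["grid"] == r.grid
                      and r.timestamp - f["last_seen"] <= window), None)
        if match:
            match["sources"].add(r.source)
            match["last_seen"] = r.timestamp
            # naive independent-evidence combination; real systems use far richer models
            match["confidence"] = 1 - (1 - match["confidence"]) * (1 - r.confidence)
        else:
            fused.append({"grid": r.grid, "sources": {r.source},
                          "last_seen": r.timestamp, "confidence": r.confidence})
    # multi-source, high-confidence tracks first
    return sorted(fused, key=lambda f: (len(f["sources"]), f["confidence"]), reverse=True)
```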
AI can cycle through predictive analysis at high speed, forecasting enemy movements and likely intentions from historical data and incoming intelligence. This supports proactive strategy development rather than reactive response, helping to anticipate geopolitical events and conflicts before they escalate, at echelons down to the division level of command or even lower.
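A toy illustration of that predictive side, under the assumption that historical incidents have already been reduced to a handful of numeric indicators; the features, data, and model choice are invented purely to show the shape of the technique.

```python
# A toy sketch of predictive analysis: a classifier trained on historical
# incident features estimates the likelihood of activity in the next period.
# The features and data below are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# features: [sightings last 24h, logistics traffic index, comms volume change]
X = rng.normal(size=(400, 3))
# synthetic "ground truth": more sightings plus more traffic means more likely activity
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0.7).astype(int)

model = LogisticRegression().fit(X, y)
today = np.array([[2.1, 1.4, -0.2]])        # hypothetical current indicators
print(f"estimated probability of activity: {model.predict_proba(today)[0, 1]:.2f}")
```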
Artificial intelligence can rapidly evaluate multiple scenarios to determine optimal resource allocation, improving efficiency in troop deployment, equipment distribution, and supply chain management. These points are not insignificant; they form the critical underpinnings of military operations.
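At its core, the resource-allocation piece is classic optimization that AI planning layers can drive at scale. A minimal sketch using linear programming, with made-up depots, units, and costs:

```python
# A toy allocation problem: move supplies from two depots to three units at
# minimum transport cost. All numbers are illustrative; real planning systems
# handle thousands of variables plus time, risk, and priority constraints.
from scipy.optimize import linprog

cost = [4, 6, 9,    # cost per ton: depot A -> unit 1, 2, 3
        5, 3, 7]    # cost per ton: depot B -> unit 1, 2, 3

# Depot capacities: each depot ships at most its stock (inequality constraints).
A_ub = [[1, 1, 1, 0, 0, 0],   # depot A total
        [0, 0, 0, 1, 1, 1]]   # depot B total
b_ub = [120, 80]              # tons available

# Unit demands must be met exactly (equality constraints).
A_eq = [[1, 0, 0, 1, 0, 0],   # unit 1
        [0, 1, 0, 0, 1, 0],   # unit 2
        [0, 0, 1, 0, 0, 1]]   # unit 3
b_eq = [60, 70, 50]

result = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(result.x.reshape(2, 3))   # shipment plan, tons per depot-unit pair
print(result.fun)               # total transport cost
```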
In addition, AI-assisted analysis can significantly reduce the time needed to make strategic decisions, despite the increased potential for errors. Faster, and potentially more accurate, decision cycles would prove crucial in fast-moving, rapidly developing conflict situations.
These advantages are not without risks, however. Over-reliance on AI recommendations without human oversight is a serious ethical issue. This is best demonstrated by the deployment of the STM Kargu, a fully autonomous drone that uses facial recognition technology to identify specific individuals for targeted assassination, without input from a human operator. According to a United Nations report, Turkish-built Kargu drones carried out exactly this type of attack in Libya in 2020.
There is a distinct need for some sort of protocol requiring AI systems to explain the reasoning behind their strategic suggestions. As well, “friendly” AI needs to be trained to recognize deception tactics, especially those coming from “adversarial AI” attempting to manipulate a friendly AI’s decision-making systems and processes.
In that regard, the integration of AI in cyber security and information warfare is transforming both offensive and defensive capabilities, first through enhanced cyber defenses. As in the wider civilian sphere, AI systems can monitor networks in real time, detecting and responding to threats faster than human operators can. Machine learning algorithms can identify new types of cyber attacks by recognizing potential attack patterns. Automated, independent patch management and vulnerability assessment tools, also powered by AI, can enable these systems to aid in their own defense.
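The defensive pattern described above is well established in the civilian world. Here is a minimal sketch of anomaly detection over network-flow features, assuming flows have already been reduced to numbers; the features and thresholds are illustrative, not an operational model.

```python
# A minimal sketch of ML-based network anomaly detection, assuming flow records
# have already been reduced to numeric features (bytes, packets, duration, port).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" traffic: stand-in for a baseline learned from historical flows.
baseline = rng.normal(loc=[500, 20, 1.0, 443], scale=[100, 5, 0.3, 10], size=(5000, 4))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New flows arriving in near real time; the last one is deliberately abnormal.
new_flows = np.array([
    [510, 19, 0.9, 443],        # looks like ordinary traffic
    [90000, 600, 0.1, 4444],    # large, fast transfer to an odd port
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    print(flow, "ALERT" if label == -1 else "ok")
```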
AI-powered cyber attacks are the other aspect of this developing realm. More sophisticated and adaptive malware, designed for deployment by AI, can discover and exploit vulnerabilities in target networks more efficiently than manual searching. This opens the potential for AI to coordinate large-scale, multi-vector attacks on hostile networks.
In the realm of information warfare and disinformation, AI tools already exist for creating and disseminating very convincing fake news and propaganda. Such psychological operations formerly required a massive investment in conventional printing and radio technology, with results that were frequently uneven. The use of natural language processing to analyze and target specific population demographics with tailored disinformation can reshape both civilian and troop viewpoints in near-real time.
Realistic AI-generated video and audio will, as a result, soon prove crucial to military deception operations, creating challenges in verifying the authenticity of intelligence gathered from open sources as well as from recovered intelligence reports. Development of AI tools to detect deepfakes and other manipulated media is a major aspect of ongoing AI combat development.
The reason for this kind of focus, as indicated above, lies in the realm of social media manipulation. AI bots capable of influencing public opinion and sowing discord in target populations can undermine a hostile nation’s national strategy, and potentially its active combat operations, by identifying key social influencers and vulnerable groups for targeted messaging and deepfake video and audio that present a distorted picture to that nation’s population or its supporters.
But AI systems can also be used to detect and counter enemy disinformation campaigns, including those conducted by hostile AIs. The key to these operations lies in the speed of detection and in deploying effective countermeasures as soon as such subtle attacks are identified.
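One of the simpler detection signals is worth showing: many accounts pushing near-identical text in a short window. The sketch below is deliberately naive, and the data layout is hypothetical; real systems combine dozens of such signals.

```python
# A deliberately simple sketch of one signal used in detecting coordinated
# inauthentic behavior: distinct accounts posting near-identical text almost
# simultaneously. Input format and thresholds are illustrative assumptions.
from collections import defaultdict
from datetime import timedelta

def find_coordinated(posts, window=timedelta(minutes=10), min_accounts=5):
    """posts: iterable of (account_id, timestamp, text). Returns clusters of
    distinct accounts pushing the same text inside a short time window."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text.strip().lower()].append((account, ts))

    clusters = []
    for text, items in by_text.items():
        items.sort(key=lambda item: item[1])
        accounts = {account for account, _ in items}
        if len(accounts) >= min_accounts and items[-1][1] - items[0][1] <= window:
            clusters.append({"text": text, "accounts": accounts})
    return clusters
```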
On the more conventional side, quantum computing raises the prospect of quantum-capable AI systems rapidly breaking current encryption methods. This is a serious problem, one of extreme concern, because most sensitive traffic today is protected by public-key schemes that a sufficiently capable quantum computer could break. The “holy grail” of cryptography, the one-time pad (OTP), remains mathematically unbreakable when used correctly and, despite its practical faults, is still the most secure system for securing classified transmissions; but its key-distribution burden means most traffic instead rests on methods that are at risk.
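The one-time pad itself is simple enough to show in a few lines, which is also why its security does not depend on computational difficulty at all: with a truly random, secret, never-reused key, no amount of computing power, quantum or otherwise, recovers the message.

```python
# A minimal sketch of the one-time pad: XOR of the message with a truly random
# key of equal length. The message below is invented for illustration.
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "key must match message length and never be reused"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"MOVE AT DAWN"
key = secrets.token_bytes(len(message))   # truly random, single-use key
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message   # XOR is its own inverse
```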

Related to this is the development of AI-managed, quantum-resistant cryptography to protect sensitive military communications. In signals intelligence (SIGINT), advanced AI systems for intercepting and decrypting enemy communications can use natural language processing for real-time translation and analysis of intercepted messages.
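The unclassified building blocks of that last capability can be sketched with open-source tools. The example below assumes the Hugging Face transformers library and its open Helsinki-NLP/opus-mt-ru-en translation model; the watch list and intercepted message are invented for illustration.

```python
# A minimal sketch of machine translation of intercepted text followed by
# simple keyword flagging. Model choice, watch terms, and message are
# illustrative assumptions, not an operational pipeline.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")
WATCH_TERMS = {"convoy", "bridge", "fuel"}   # hypothetical terms of interest

def triage(messages):
    for msg in messages:
        english = translator(msg)[0]["translation_text"]
        hits = WATCH_TERMS & set(english.lower().split())
        yield english, sorted(hits)

for text, hits in triage(["Колонна выдвигается к мосту на рассвете."]):
    print(text, hits or "no flags")
```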
This list, quite literally, can go on for miles.
The expansion of artificial intelligence into the military sphere is not something to be hand-waved away as a passing fad. Like all developments in military technology, it has design and deployment cycles, as well as countermeasures waiting to be discovered and implemented.
The Chinese have a saying: “May you live in interesting times.”
That is not a positive…not least, because we do, in fact, live in interesting times.
Act accordingly.
ADDITIONAL RESOURCES
- Paul Scharre (2023), Four Battlegrounds: Power in the Age of Artificial Intelligence
- Sam J. Tangredi (USN, Ret.) and George Galdorisi (2021), AI at War
- Denise Garcia (2024), The AI Military Race
- Thomas Ricks (2012), The Generals
- James F. Dunnigan (2003), How To Make War, 4th Edition
- James F. Dunnigan (1991), Shooting Blanks
