NOTE: As we take this article to press – on Friday, July 19, 2024 – a major cyber event is developing, affecting Windows OS machines running the CrowdStrike antivirus software. At press time, it remains unclear whether this is a simple software glitch or a deliberate attack.
One of the most popular terms in the military sphere of late is “information warfare” (IW)…but what is that, really? Simply put, information warfare is the use of information and communication technologies to gain competitive advantages over opponents. In short, it is the use of broad categories of information to gain advantages.
For centuries, competing states have used various forms of propaganda (well before the term took on its modern meaning in the 1920s), but it was not until World War One that Edward Bernays developed the first rudimentary principles of what would become the modern fields of psychological operations (psyops), propaganda, and what I term “directed deep-fake operations”.
With the rise of near-universal connectivity and a vastly enlarged reliance on digital systems, for everything from simple communications to critical financial transaction systems, information warfare is now a critical and growing component of national security. Finding ways to “attrit” such systems, whether via a stealthy, long-term approach of systems infiltration or through a sudden, all-out assault, is now a major focus of top-tier national armed forces.
Like every other area of warfare, modern information warfare has its own unique shapes, spaces, and requirements. Information warfare is now far more than creative fake newspapers, propaganda posters, and leaflets:
- A. Cyberattacks and hacking target critical government and military systems.
- B. Disinformation and propaganda are used to spread false or misleading information, specifically targeted to influence public opinion.
- C. Social media manipulation uses platforms from Facebook and Instagram to TikTok and Minds to amplify directed messages of misinformation and fake news in order to create “echo chambers”, which pigeonhole unwary readers into believing a wholly fictional version of reality.
- D. Critical to these operations is the use of “deep-fakes” and AI-generated content to create convincing fake videos and audio that mislead or discredit. These videos originally began by digitally grafting the faces of various celebrities onto pornographic videos – because Rule 34 is real – before moving on to spoofing major media and political figures…and these tools have only improved in recent years.
There are, of course, many actors involved in making this type of warfare viable. Broadly speaking, they fall into a few basic groups. The first is state-sponsored groups, deployed by governments to run campaigns designed to influence foreign populations by reshaping their views via mainstream and social media spaces; these campaigns also frequently serve to destabilize adversary powers. This is one of the many responsibilities of the Central Intelligence Agency’s “meme division”.
Non-state actors (terrorist groups, “hacktivists”, and other organizations, best lumped together as “anarchists”) use the same information warfare tactics as the state-sponsored groups, but employ them either for strictly criminal, money-making scams or as mercenary supplements to state operations. The latter has happened in recent years, notably in Iran’s response to the STUXNET attack of 2010, which seriously damaged Iran’s nuclear material enrichment facility in the city of Natanz.
The main tools being used to facilitate the various operational avenues of attack in information warfare are “bots” and “troll farms”. These vectors employ automated accounts and organized groups that spread content and engage in online discussions, increasingly driven by ever-improving Artificial Intelligence (AI) algorithms.
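To make the “bot” concept concrete, the sketch below shows, in Python, the kind of simple heuristic a defender might use to spot coordinated amplification: many accounts posting near-identical text within a short time window. Everything here – the account names, the 300-second window, the similarity threshold – is invented for illustration; production detection systems rely on far richer signals.

```python
# Toy heuristic (invented for this article, not any real platform's system)
# illustrating the fingerprint bot networks tend to leave: many accounts
# posting near-identical text at nearly the same time.

from difflib import SequenceMatcher
from typing import List, Set, Tuple

Post = Tuple[str, float, str]  # (account, unix_timestamp, text)


def near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Crude text-similarity check; real detectors use far richer signals."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold


def flag_coordinated_accounts(posts: List[Post], window_s: float = 300.0) -> Set[str]:
    """Flag accounts whose posts closely mirror another account's post
    within a short time window -- the classic amplification pattern."""
    flagged: Set[str] = set()
    for i, (acct_a, t_a, text_a) in enumerate(posts):
        for acct_b, t_b, text_b in posts[i + 1:]:
            if acct_a != acct_b and abs(t_a - t_b) <= window_s and near_duplicate(text_a, text_b):
                flagged.update({acct_a, acct_b})
    return flagged


if __name__ == "__main__":
    sample = [
        ("acct_001", 1000.0, "Nobody is talking about this scandal!"),
        ("acct_002", 1030.0, "nobody is talking about this scandal"),
        ("acct_003", 1060.0, "Nobody is talking about this scandal!!"),
        ("human_01", 9000.0, "Here is my slow-cooker chili recipe."),
    ]
    print(flag_coordinated_accounts(sample))  # expect only the three acct_* names
```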
Aside from the social media manipulation sphere, which is best defined as a “soft attack strategy”, the primary attack modes use viruses and “hostile” AI to target critical infrastructure systems, attempting to disrupt power grids, financial systems, hospital operations, local police and fire response systems, water distribution and treatment systems, and other vital services. This is, in fact, the door that was opened by the STUXNET attacks, because that virus, rather than attacking a system’s core programming directly, specifically targeted the programmable logic controllers (PLCs) that automate electromechanical processes such as those used to directly control machinery and various industrial processes, including the gas centrifuges used to enrich nuclear material, as happened in Iran in 2010.
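To illustrate why targeting the PLC layer is so dangerous, here is a minimal, purely illustrative Python simulation. The class, the rpm figures, and the control logic are invented for this article and are not drawn from STUXNET itself; the point is the split between what the operator console shows and what the hardware is actually doing, a deception that published analyses of STUXNET describe at the controller level.

```python
# Toy simulation (hypothetical, not actual STUXNET code) of why compromising a
# PLC is so dangerous: the controller can drive equipment outside safe limits
# while reporting normal-looking values to the operator console.

SAFE_RPM = 1_000          # nominal centrifuge speed (illustrative number only)
DAMAGE_THRESHOLD = 1_400  # speed at which the (fictional) rotor is damaged


class CentrifugePLC:
    """A grossly simplified programmable logic controller."""

    def __init__(self, compromised: bool = False):
        self.compromised = compromised
        self.actual_rpm = float(SAFE_RPM)

    def control_step(self) -> None:
        # A healthy PLC holds the setpoint; a compromised one quietly overdrives it.
        target = SAFE_RPM if not self.compromised else SAFE_RPM * 1.5
        self.actual_rpm += (target - self.actual_rpm) * 0.2  # crude actuator lag

    def report_to_operator(self) -> float:
        # The compromised controller feeds "normal" telemetry to the console,
        # so the operator sees nothing wrong while the hardware degrades.
        return float(SAFE_RPM) if self.compromised else self.actual_rpm


if __name__ == "__main__":
    plc = CentrifugePLC(compromised=True)
    for step in range(30):
        plc.control_step()
        status = "DAMAGE" if plc.actual_rpm > DAMAGE_THRESHOLD else "ok"
        print(f"step {step:2d}  operator sees {plc.report_to_operator():7.1f} rpm"
              f"  actual {plc.actual_rpm:7.1f} rpm  {status}")
```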
Globally, various hostile vector systems are used to influence national elections, both by attempting to sway voter opinion and by seeking to undermine electoral processes, although the latter requires targetable infrastructure in the target country, that is, electronic systems for casting or counting votes that can be manipulated remotely. Economically, the consequences include manipulation of both local and global markets, theft of crucial intellectual property, and significant disruption of business operations, from street-level enterprises up to major “blue chip” companies.
Socially, a dedicated “soft strike” IW campaign can exacerbate even long-dormant divisions within a country and its societies. The inflaming of existing tensions and/or the creation of new conflicts within populations can have horrifying consequences; Rwanda and the breakup of Yugoslavia, while not directly the result of IW campaigns, come immediately to mind. Information warfare campaigns also often result, intentionally or not, in a serious erosion of trust, with declining confidence in media, government institutions, and information sources.
Counter-measures and defensive strategies, to date, are haphazard, and their effects are difficult to measure accurately. Government initiatives, such as the creation of cybersecurity agencies and information warfare units, are themselves frequently viewed with suspicion by those governments’ own populations, as are the various “media literacy” programs that seek to educate the public on how to identify and resist disinformation. Here, of course, the governmental responses are contending with messaging that is frequently subtle and hard to refute, which limits their effectiveness.
In the private sector, responses such as the development of AI-powered detection tools and enhanced security measures are ongoing. However, the value of these tools remains murky, as the companies deploying them are loath to talk about them in public; that value depends on the tools themselves remaining secret.
International cooperation, through the sharing of intelligence and joint operations to combat threats, is also hard to measure, for the simple reason that its effects remain hazy, at least to the general public: intelligence agencies and armed forces, for reasons similar to the private sector’s, are loath to reveal their operations publicly.
As information warfare continues to adapt to new technologies and societal changes, defensive strategies must constantly evolve, in real time, to meet new threats. Global cooperation is needed for nations and corporations to establish norms and combat information warfare effectively. To get there, these groups will need to find methods to share their defensive strategies…which is a very difficult thing for them to do, even on a good day.
Additional Resources
Edward L. Bernays (1928), Propaganda
James F. Dunnigan (1996), Digital Soldiers