Abstract
We live in an interconnected world. Essential systems providing us with electricity, water, food, healthcare, transportation and even finance now all depend on computers and software. Pretty much every piece of critical infrastructure in the world is controlled by code.
This progress eases our everyday life, but the growing dependence on computers also brings detrimental consequences. Besides all the good sides, we have also created a new space in which criminals, terrorists and state actors can operate almost undetected while causing huge damage. From the very first computers until now, there has never been a 100% secure system or network.
Chasing Vulnerabilities
The cyber security industry sets itself apart from many other industries because it is divided between offense (Red Teams) and defense (Blue Teams). And thus, we have an eternal cat-and-mouse game. As mentioned above, no system is 100% secure; in other words, everything built with software is inherently vulnerable. Many vulnerabilities are unknown, but you can earn your reputation (and collect bounties) by discovering them, or in other words, by "breaking" a system.
If we mapped this to another industry, say medicine, the thought experiment gets quite interesting. Imagine one half of the doctors and students actively searching for the most efficient way to harm a patient, while the other half tries to come up with defenses against those techniques and tactics. This might actually be considered a crime against humanity. But at the same time, might it start a race to create new drugs? And through that, maybe even prevent a bigger outbreak of the "vulnerability"?
Back to bits and bytes; believe it or not, this makes our systems more hardened. No matter the industry, companies rarely take responsibility on their own. It must be forced upon them, either through regulation or public pressure. That's basically how it all began. In the early days, companies didn't really care when a researcher found a vulnerability in their products. But once the headlines, and with them public scrutiny, started, companies began spending more money and resources on fixing those issues.
Interestingly, the Red-/Blue-Team dynamic that defines today’s cybersecurity industry has its roots not in technology, but in the military, where war games are used to simulate attacks and improve defense tactics. There is a fundamental asymmetry: attackers need only one way in, while defenders must secure everything. This makes "winning" cyber conflicts very hard.
This asymmetry also led to a growing recognition that breaches are inevitable, and that the focus must shift from trying to prevent every attack to minimizing the impact when one succeeds. This is the core idea behind cyber resilience (but that's a topic for another blog post).
Cyber Warfare
In the realm of military strategy, the traditional domains of warfare (land, sea and air) have long dominated the discourse; in recent years, they have been joined by space and cyberspace. This expansion reflects the importance of IT systems and the growing concern about cyber conflicts. Richard A. Clarke, former U.S. National Coordinator for Security, Infrastructure Protection, and Counter-terrorism, defined cyber-warfare as "actions by a nation-state to penetrate another nation's computers or networks for the purposes of causing damage or disruption". Cyber conflict has emerged as one of the most pressing challenges for national security and international stability, not least because of its economic impact. It can be carried out by nation-states or non-state actors and also includes activities like cyber espionage and denial-of-service attacks. Cyber-attacks can disrupt critical infrastructure, steal sensitive data and destabilise societies.
The Hidden Danger
Cyber conflict, including cyber warfare (often in the form of hybrid warfare), is a reality. It may be very obvious (an attack on critical infrastructure) or it may take a more hidden form (e.g. disinformation campaigns). This is one of the first blind spots: we expect a "Cyber Pearl Harbor" and keep waiting for a catastrophic, singular cyber event, but real cyber conflicts are persistent, slow-burn campaigns of espionage, influence, and positioning. We are blind to the war happening right now because it doesn't match our mental model of "warfare." Artillery, planes, tanks and missiles are directly visible, while cyber conflicts can take a more nuanced form as long as they have no kinetic effect (and sometimes they even do).
We focus on potential grid shutdowns and dam breaches, but ignore how cyber conflicts enable narrative manipulation, election interference, and social destabilization. In what is known as cognitive warfare, the human mind becomes the battlefield, targeted through the manipulation of our information mediums and through bot farms.
Beyond Code
Cyber conflict is not solely a battle of code. It is also a struggle over information flows, perception, and political will (e.g. regulatory issues). The broader context involves misinformation campaigns and the manipulation of public opinion. While this may seem irrelevant at first, it can be just as destabilising as a technical breach. A well-coordinated disinformation campaign can sow discord within societies, undermine trust in democratic institutions and alter the course of political events, all while leaving little trace.
Cyber conflicts often reside in the "gray zone" of international affairs, a realm where it is unclear whether an act constitutes espionage, sabotage, or an act of war. Ignoring the political dimensions of these issues is not just an academic oversight. Consider how the disruption of a financial institution's network could lead to a loss of confidence in the entire banking system, or how interference in electoral processes might destabilise a nation's democratic framework: the potential for global instability is considerable.
A significant portion of the literature treats cyber conflict as a purely technical problem, a view that is reinforced by the cyber industry's emphasis on quantitative metrics such as the number of breaches, blocked IoCs (Indicators of Compromise), the speed of malware propagation, or the sophistication of, for example, encryption algorithms. These measures are undeniably important; however, they do not capture the complexities of attribution, escalation, or the interplay of human decision-making that truly defines modern cyber conflicts. Decision-making under uncertainty, cognitive biases, and the pressure of public opinion all play a significant role in the outcomes of a conflict.
Kinetic Impact
Besides cognitive warfare, cyber attacks can have real physical impact. Operational Technology (OT) consists of industrial control systems that run energy plants, manufacturing, transportation and many other areas, basically anywhere you have programmable logic controllers.
Now, what if you are able to breach that? How about raising chlorine levels to dangerously high amounts to poison the water supply of a city? Disabling control mechanisms in a nuclear facility (hello, Stuxnet)? Suddenly opening all the gates of a hydroelectric facility and causing floods? What about attacks on the power grid (e.g. Ukraine 2015)? Hospital outages? Manipulating fail-safes in factories to cause explosions or fires? There are more than enough real-world examples.
What makes it more dangerous is that a cyber weapon is not a one-time-use tool. Once malware is deployed, it can be reverse-engineered, modified and redeployed. Many intelligence agencies have their own arsenal (hello, Shadow Brokers leaks). Your code can be repurposed by your enemy. And with that, we come to the next blind spot: attribution.
Nexus of Uncertainty
In traditional warfare, identifying the adversary is typically straightforward; flags, uniforms, equipment and other established military structures provide clear markers of identity. In the cyber domain, however, adversaries can mask their identities, spoof their IP addresses and operate through intermediaries. This makes it very difficult to determine who is behind a cyber attack. In regular war, camouflaging yourself as another state is considered a war crime (Hague Regulations, Art. 23(f)). But in cyberspace this is common practice. Sophisticated state-sponsored actors increasingly deploy “false flag” operations to mimic the tactics of rival nations or hacktivist groups, and it is common practice to use infrastructure linked to unrelated third parties to obscure malicious activities.
Problems with Attribution
In a lot of analyses, attribution relies on assumptions rather than evidence-based facts; one usually draws conclusions from observations, and it is not easy to overcome biases during an analysis. Pre-2016, financial crimes were automatically attributed to Russia, while intellectual property theft pointed to China. This created "target fixation", where analysts only saw one culprit and overlooked other possibilities. In reality, it is very hard to identify who was operating the keyboard during an attack, provided the attacker's OPSEC is well set up. That's actually what investigators count on: at some point, OPSEC usually fails through laziness, distraction, boredom or sloppiness.
The flawed assumptions of attribution
While the following assumptions might seem very obvious at first, they are not always flawless.
"Evidence" about the threat actors:
- The same threat actor produces similar attacks (same TTPs)
- Criminals don't engage in espionage unless state-sponsored
- APT groups work for exactly one government
- Malware is proprietary and not shared
About operations:
- Cyber-espionage is full-time work with regular office hours (08:00 - 17:00)
- Malware source code belongs to the group, not individual developers
- Code similarity indicates the same developer
- A system can be breached by only one threat actor, meaning all malware found belongs to one group
→ Reality check: in the age of Anything-as-a-Service you don't need to write your own malware from the ground up; yes, even criminals rely on third parties. Also, as mentioned, malware that has been deployed once can be reused by other groups, which makes it questionable to always point to the same APT group just because, for example, the same IOCs show up. Different APT groups tend to use the same bulletproof hosting providers (looking at you, Stark Industries). Finding German comments or variable names in the code does not necessarily mean German hackers wrote it; impersonation happens all the time. And one more fact that breaks the office-hours assumption: Russia spans 11 time zones, so which "office-time zone" do you pick?
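As a toy illustration of how little the office-hours assumption actually narrows things down, the sketch below (with invented compile timestamps and a hypothetical 08:00–17:00 workday) checks which whole-hour UTC offsets would place every timestamp inside office hours. Several offsets fit at once, covering many countries:

```python
from datetime import datetime, timedelta, timezone

# Invented compile timestamps (UTC) from a fictional malware sample.
timestamps_utc = [
    datetime(2023, 5, 2, 6, 30, tzinfo=timezone.utc),
    datetime(2023, 5, 2, 11, 45, tzinfo=timezone.utc),
    datetime(2023, 5, 3, 7, 10, tzinfo=timezone.utc),
    datetime(2023, 5, 4, 12, 55, tzinfo=timezone.utc),
]

def offsets_matching_office_hours(stamps, start=8, end=17):
    """Return every whole-hour UTC offset that puts ALL stamps in [start, end)."""
    matches = []
    for offset in range(-12, 15):  # UTC-12 .. UTC+14
        local_hours = [(t + timedelta(hours=offset)).hour for t in stamps]
        if all(start <= h < end for h in local_hours):
            matches.append(offset)
    return matches

print(offsets_matching_office_hours(timestamps_utc))  # → [2, 3, 4]
```

Three adjacent offsets (UTC+2 through UTC+4) all produce a plausible workday here, and that band alone spans large parts of Europe, the Middle East, Africa and western Russia, before even considering attackers who simply shift their schedule on purpose.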
Sometimes cyber security companies face pressure to be the first to attribute an attack to an APT group, which leads to rushed analysis. There are no real consequences for poor analysis. Some analyses are only good for headlines but wouldn't stand up in court. You can tell a poor analysis from a good one when you are expected to trust the source (aka the author) without being provided any evidence. Remember:
- assumptions are not facts
- signs are not evidence
Always verify the assumptions that are made.
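To make the "same IOCs mean the same group" fallacy concrete, here is a deliberately naive overlap-based scorer (all group names, hashes, IPs and domains are invented for the example). As soon as two unrelated groups rent the same bulletproof hosting infrastructure, this kind of matching can no longer tell them apart:

```python
# Deliberately naive attribution: score each known group by the overlap
# between its IOC profile and the IOCs observed in an incident.
# All group names and indicators below are invented.
KNOWN_GROUPS = {
    "APT-Alpha": {"hash:aa11", "ip:203.0.113.5", "domain:update-cdn.example"},
    "APT-Beta":  {"hash:bb22", "ip:203.0.113.5", "domain:update-cdn.example"},
}

def naive_scores(observed, profiles):
    """Jaccard similarity between observed IOCs and each group's IOC profile."""
    return {
        name: len(observed & iocs) / len(observed | iocs)
        for name, iocs in profiles.items()
    }

# An incident that only touched the infrastructure BOTH groups share:
observed = {"ip:203.0.113.5", "domain:update-cdn.example"}
scores = naive_scores(observed, KNOWN_GROUPS)
print(scores)  # both groups get an identical score; the IOCs cannot discriminate
```

Both groups score 2/3 here, so the indicators alone provide zero discriminating power, which is exactly why shared infrastructure and reused malware make pure IOC matching such a weak basis for attribution.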
Legal Paralysis
The ambiguity in attribution (the insurance industry has its own perspective on attribution) creates fertile ground for misinterpretation. When an attack occurs, decision-makers might hastily assign it to a known adversary without sufficient evidence, potentially triggering an overblown or even retaliatory response. Such missteps can escalate tensions in a way that a conventional conflict might not.
One example I like to bring up: in regular warfare you have clear, hard evidence. If a missile explodes on a state's territory, it is easy to reverse-engineer the whole supply chain from the fragments you find, and to reconstruct the flight path from satellite and radar information. In cyberspace this is much harder. What if two nations are on the brink of escalation, and a third state exploits the situation to create yet another proxy war by impersonating one of the involved nations in a cyber attack on critical infrastructure?
There is another layer of complexity for political attribution. States frequently withhold intelligence to protect sources or to avoid escalation. The lack of a neutral, international body to arbitrate attribution claims perpetuates this cycle of distrust.
The attribution blind spot has pretty much paralysed the development of cohesive international legal frameworks for cyberspace. The Tallinn Manual, written by approximately twenty experts at the invitation of NATO, outlines how the existing laws of war apply to cyber operations. But it is a non-binding study and has limited buy-in from China, Russia, North Korea and other states. Without clear attribution mechanisms, victim states struggle to justify retaliatory measures under international law, while aggressors exploit the ambiguity to test red lines.
Rules for Cyber Soldiers
Armed conflicts increasingly feature non-state actors engaging in cyber operations. From hacktivists to cybersecurity experts, whether operating as white hats, black hats, or self-described patriotic hackers, private individuals are launching cyber attacks against perceived adversaries.
This development presents three risks:
- civilian infrastructure suffers direct or indirect damage from these operations
- participants expose themselves and their social networks to potential military responses
- and most significantly, mass civilian involvement undermines the combatant/civilian boundary that traditionally offered protection to non-participants, thereby elevating risks for all civilians.
In 2022, Ukraine called on volunteers to participate in its IT Army. This set a precedent and raises some questions:
- Does this make the participants foreign mercenaries?
- Does this allow them to be targeted for retaliation?
- Under what circumstances can a hacker be targeted for neutralization?
In response, the ICRC created a list of eight rules for "civilian hackers", along with obligations for states to restrain them, which might give at least some clarity.
Interdisciplinarity
Having an interdisciplinary common ground means not only enhancing technical defenses, but also fostering collaboration among technologists, scientists, politicians, sociologists, and military strategists. Developing a shared vocabulary and comprehensive frameworks that encompass both technical and human dimensions is essential. It is therefore important for policymakers, industry leaders, and researchers to acknowledge and address the human and political dimensions of cyber conflicts.
Conclusion
We tend to view cyber conflicts as a purely technical challenge, neglecting the complex human, political and strategic dimensions, which are equally critical. Together with the challenges of attribution, the misinterpretation of motives, and the blurred line between the cyber and physical realms, this poses significant risks for global stability. We need to embrace interdisciplinary perspectives and integrate technical expertise with policy work.
We have to broaden our understanding, improve our attribution methods, and develop policies that account for the full spectrum of cyber conflicts. Failing to address this threatens not only our cyber security posture but also our ability to establish norms, prevent escalation, and build resilience in an increasingly digitised world.