Root Cause


AI + Cyber and the Security Dilemma

October 6th, 2024
Chris Rohlf

The conversation around AI and cybersecurity is often skewed by alarmist narratives that overlook the potential benefits of AI, particularly in defensive cyber applications. Large Language Models (LLMs) are dual use, meaning they can be employed for both beneficial and malicious purposes depending on the actor's intent. For example, their capacity to identify 0-day vulnerabilities could become a powerful tool for improving cyber security, but it could also be used by attackers to identify new vulnerabilities to leverage when compromising their targets. Given the current reality of the "defender's dilemma", and the general imbalance between the high cost of defense and the low cost of offense, I believe the dual-use nature of LLMs offers significantly more uplift for defenders than attackers. Because of this they have the potential to shift the long-standing, attacker-favored asymmetry in cyber operations.

This short essay examines one potential set of complexities that could arise if the AI doomers are wrong, and AI alters the "defender's dilemma" in significant and positive ways. This is of course not a foregone conclusion; even the best cyber defenses are ineffective if they are not adopted or utilized properly, and AI will be no different. However, the potential for AI to raise the cost of cyber attacks, particularly for opportunistic actors like ransomware operators, is an opportunity we cannot ignore. These adversaries would find their low-cost, high-reward operations less attractive as defenses become more automated and effective. While the deployment of these tools is unlikely to be uniform across the internet, even partial adoption could price out less sophisticated attackers. But this returns us to the question of what other complexities this shift may create.

In international relations there is a theory known as the "security dilemma". The security dilemma arises when states invest in improving their capabilities in ways that can appear threatening to other states. This is particularly true when it is difficult to know whether these capabilities are being developed for offensive or defensive purposes. The resulting miscommunication and fear can lead to an arms race or even conflict. The security dilemma is a complex topic with many aspects that scholars have debated for decades. The cyber security dilemma has been explored and previously written about, particularly in Ben Buchanan's book of the same name. What is important to know is that cyber plays a role in competition between states in many ways. Consider the low cost for cyber attackers to target even states with economies and militaries multiples of their own size and maturity. Smaller, less sophisticated states with significantly fewer resources are able to routinely target and successfully compromise entities within the United States. This is why cyber operations have become a staple of modern espionage tradecraft and military operations: they allow these players to punch well above their weight.

As states, and the companies and infrastructure within them, adopt and deploy AI over the coming decades, their cyber defenses could rapidly improve and raise the cost of these attacks by orders of magnitude. The dual-use nature of AI complicates this further, as the same tools that improve cyber security can often be used for offensive purposes. These capabilities are likely to be deployed in an autonomous fashion (for both defensive and offensive purposes) in the coming decades, which has the potential to create further miscommunication. When this happens, other states may perceive these capabilities as threats, or fear losing their low-cost means of access, leading to an arms race and potentially escalation beyond cyber. Given the costs in hardware, compute, and technical know-how required to develop state-of-the-art AI models, it is plausible that whichever state dominates these resources will have a near-permanent lead over its competitors. This dynamic could create an environment where states race to develop, or steal, both offensive and defensive AI technology, increasing miscommunication, instability, and fear.

This outcome is not a guarantee, and given the potential for AI models to produce more attack surface than they secure, we may have some time before this is a reality we need to be concerned about. However, in some ways this may resemble the "going dark" debate over end-to-end encryption. Just as end-to-end encryption made traditional surveillance and signals intelligence harder without eliminating all options for attackers, AI-enabled defenses could make common cybercrime more difficult while pushing sophisticated actors such as APTs to adapt. Either way, the cost and complexity of compromising targets is likely to go up across the spectrum, even if unevenly.

This shift in the dynamics and economics of cyber operations is likely to come with complexities and impacts that extend beyond the technical domain. The key challenge for policymakers will be finding ways to encourage the use of AI for defensive purposes without escalating tensions as described by the security dilemma; this will require international cooperation and possibly new norms for AI use in cyber. It is important for policymakers to consider this possibility now to avoid miscalculation later. The intersection of AI and cyber security is likely to play an increasingly large role in national security in the coming decades. The leaps in technical capability these models will continue to make, and the automation they will enable, will become ever more entangled in competition between states.