February 8th, 2025
Chris Rohlf
AI governance discussions in cyber security often define "defenders" too narrowly by limiting the term to mean those who monitor
for and respond to security incidents in deployed systems. While teams with those responsibilities play a critical role, this
perspective is flawed and does not acknowledge how software is designed, developed, and ultimately used in the real world.
A more comprehensive understanding of cyber security would account for the efforts integrated much earlier in the software
development lifecycle. These efforts include the work of software and security engineers who design and develop code and
integrate tooling to discover security issues long before deployment. Their work is foundational to preventing vulnerabilities
rather than just responding to attacks after they occur. The dual use cyber capabilities found in AI offer these teams
significant advantages in scale and automation, directly preventing insecure and vulnerable systems from ever being deployed.
This lack of nuance in AI governance discussions has had consequences. Some policymakers and industry analysts misjudge what
constitutes a dual use AI cyber capability, failing to distinguish between tools that improve security early in the
development lifecycle and those that enable offensive operations. This misunderstanding can lead to misguided policy positions
that conflate fundamentally different applications of dual use AI capabilities in cyber security. These policy recommendations
often fail to reflect the true dynamics of AI's impact and uplift for cyber defense and offense.
A policy that restricts AI tools based on a narrow understanding of defenders could slow down security teams while doing nothing
to prevent offensive uses. Well-informed governance, by contrast, must acknowledge the full spectrum of defensive efforts, from
the early design and development of code through the deployment and maintenance of software, and the value AI brings to each of
these stages.
A better approach to AI governance would recognize that cyber security is not just about incident response but also about
proactive security engineering. As I have written many times before, I believe that AI holds significantly more potential for
uplift in defense than it does for offense. But in order to convince people of that, we first need to ensure their definition
of defense is holistic and correct.