AI’s national‑security risks are falling through the gaps
10 Sep 2025

While existing regulators are well placed to handle many AI risks, our new report shows that national-security risks are falling through the gaps.

The Productivity Commission has called for a pause on guardrails for high-risk AI until the government completes a ‘gap analysis’ to identify which risks regulators can handle and which remain unmitigated. Our research, which applies the methodology of the United Kingdom’s National Risk Assessment to AI in Australia, finds that general-purpose AI poses national-security risks and that current mitigations are inadequate.

Sixty-four experts in AI and governance assessed five cross-cutting national-scale AI threats:

—Unreliable agent actions: users relying on AI agents that are not competent or trustworthy, and that engage in behaviours such as fabrication or hallucination.

—Unauthorised AI behaviour: users directing an AI agent towards one goal, but the agent autonomously pursuing other goals or exceeding its authority.

—Open‑weight model misuse: open-weight models being modified to remove safety measures or enhance dangerous capabilities.

—Access to dangerous capabilities: AI models giving a wider range of actors easier access to dangerous capabilities, such as the ability to conduct a cyberattack or build chemical, biological, radiological or nuclear (CBRN) weapons.

—Loss of control: AI labs losing control of an AI model through self-replication, recursive self-improvement or the bypassing of containment measures.

Experts assessed the likelihood of each threat causing ‘moderate’ or greater harm—meaning more than nine fatalities or more than $20 million in economic cost—in the next five years, and the potential severity of that harm if it were to occur.

Experts’ top-ranked priority was AI giving people access to dangerous capabilities, such as the ability to build bioweapons or conduct advanced cyberattacks. They assessed this as 40–50 percent likely to occur in the next five years. If it happens, they assess it would cause between 201 and 1000 fatalities or between $2 billion and $20 billion in economic damage annually in Australia.

Losing control of an AI system was also a priority: experts put it at 10–20 percent likely to occur in the next five years—far from a remote chance. This likelihood reflects Silicon Valley’s ‘move fast and break things’ mentality, and the simple fact that we have no idea what will happen if labs succeed at their stated goal of building artificial general intelligence. While this seemed like science fiction only a few years ago, it’s now the subject of hundreds of billions of dollars of annual investment and an explicit race between the United States and China. We shouldn’t be surprised if someone wins.

Australia lacks its own national risk assessment, making comparison with other national-security risks difficult. However, using the UK’s assessment as a benchmark, losing control of AI systems is comparable to pandemics in likelihood and consequence. Other AI risks land in similar ballparks to attacks on critical infrastructure or large-scale CBRN attacks. All the AI risks assessed exceed traditional national-security risks such as terrorist attacks on public transport or fuel-security concerns.

Most experts consider Australia ill-prepared. Depending on the threat, between 78 and 93 percent of them rated current government measures as ‘inadequate’ for mitigating these AI risks.

The Therapeutic Goods Administration is best placed to deal with AI in medical devices. The Civil Aviation Safety Authority is best placed to deal with AI in aviation. But no one is responsible for understanding and addressing the risks of general-purpose AI. Navigating uncertainty around lower-likelihood events with catastrophic consequences is the role of the national security community.

The offensive cyber capability of frontier AI models best demonstrates why we need to get ahead of these risks. Frontier labs are testing whether their systems can conduct sophisticated end-to-end cyber operations, with OpenAI finding that its models are close. Google’s Project Zero has used AI to find novel vulnerabilities in real-world software, and criminals with only basic technical skills have used Anthropic’s Claude to run large-scale extortion operations and fraudulent employment schemes.

Australia isn’t powerless to act.

We can use diplomacy to counsel caution. The US and China are racing towards ‘artificial general intelligence’, each seeking to be first to use the technology to dominate the other economically and militarily. This has striking similarities to the nuclear arms race. As we did in that race, Australia should urge caution and seek to de-escalate.

We can build sovereign evaluation capability. As AI capability races ahead, safety technology falls further behind. An Australian AI security and safety institute would allow us to conduct sovereign evaluations of these risks, accelerate safety research and have a seat at the global table.

We control our own infrastructure laws. Currently, the Security of Critical Infrastructure Act covers AI infrastructure only if it provides services to government or other critical infrastructure. We can expand that act to capture all AI infrastructure as AI becomes critical in its own right.

We can regulate the chokepoints. AI guardrails need to extend to AI developers because they’re the chokepoint for many of these threats. We need legislative tools to require compliance by developers and deployers as standards and best practices emerge.

Our report makes its methodology, threat definitions and stress-test scenarios available to regulators and security agencies, allowing them to replicate the scenario-based threat assessments.