- The Strategist - https://www.aspistrategist.org.au -
AI is already leading. The question is whether we are
Posted By Odette Meli on December 4, 2025 @ 08:52

Artificial intelligence is moving faster than any governance, policy or organisational system built to contain it. For Australia’s national-security community, this is a strategic inflection point. AI is shifting from decision-support to decision-shaping, and the question facing governments is stark: will we shape this future or be shaped by it?
Global analysis shows that AI is now embedded across strategic planning, intelligence cycles, operational design and national-resilience frameworks. Horizon-scanning, scenario design and early-warning analysis—once resource-intensive disciplines—are being transformed by AI’s ability to synthesise vast, complex information streams. Yet the true value of this evolution lies not in automation but in the foresight that AI enables: the ability to anticipate geopolitical, technological and societal shifts across a five- to 10-year horizon and to position capability accordingly.
This point was reinforced at the Homeland Security & Safety Summit in Doha last year, where I argued that leadership and institutional resilience are not keeping pace with technological acceleration. AI is advancing rapidly, but human systems remain slow, siloed and often too narrowly focused on immediate pressures. Without leadership uplift and values-aligned governance, AI will amplify organisational weaknesses rather than strengthen national security. Technology itself does not create foresight. Leaders do.
Australia’s National AI Plan, published on 2 December, marks a significant shift in national policy direction. Rather than introducing a standalone AI act, the government has chosen a pathway grounded in existing technology-neutral law, supported by the establishment of the Australian AI Safety Institute.
This approach emphasises monitoring harms, identifying regulatory gaps and ensuring safe, values-aligned AI adoption across government. It places the responsibility for long-range preparedness inside institutions themselves. National-security agencies must now build internal foresight capability, ethical governance structures and future-aligned leadership to navigate rapidly evolving technologies. External regulation will evolve, but it will not move fast enough to guide capability decisions over the next decade.
This shift reflects emerging global guidance. The World Economic Forum’s AI Value Alignment: Guiding Artificial Intelligence Towards Shared Human Goals highlights the need for values-aligned design and responsible innovation. Similarly, the OECD and World Economic Forum’s AI in Strategic Foresight report, published last month, demonstrates how AI enhances institutional imagination and long-range scenario capability. The OECD’s work on AI and strategic foresight underscores the need for anticipatory governance and futures literacy in high-responsibility sectors.
Together, these sources converge on an important principle: AI’s highest value is the strategic foresight it enables, not the efficiency it delivers.
Policing provides a compelling demonstration of this shift. In Britain, lessons from fragmented systems prompted strengthened oversight, future-aligned governance and transparency in policing. New Zealand Police has taken a governance-first approach, ensuring every AI-related decision is anchored in proportionality, legitimacy and anticipatory risk assessment.
Across Australia and New Zealand, policing organisations are integrating predictive analytics, digital forensics and horizon-scanning into strategic planning. This direction is formalised in the Australia–New Zealand Responsible and Ethical Artificial Intelligence Framework 2025 developed by the Australia New Zealand Policing Advisory Agency.
This framework reinforces the idea that the institutions most capable of leveraging AI are those already practising long-range thinking and values-based governance.
The Australian National University National Security College adds another layer through its podcast AI, Rights and Rules: Who’s Accountable in an Automated World? The college argues that national-security strategy must integrate technological innovation with democratic oversight and public legitimacy. The challenge is not in choosing between security and rights but in building governance systems capable of safeguarding both as AI reshapes decision-making.
AI will reshape national security whether preparation occurs or not. The strategic question is whether Australia will use AI to deliberately shape the decade ahead—or drift into futures defined elsewhere.
URL to article: https://www.aspistrategist.org.au/ai-is-already-leading-the-question-is-whether-we-are/
[1] National AI Plan: https://www.industry.gov.au/publications/national-ai-plan
[2] Australian AI Safety Institute: https://www.industry.gov.au/news/australia-establishes-new-institute-strengthen-ai-safety
[3] AI Value Alignment: Guiding Artificial Intelligence Towards Shared Human Goals: https://www3.weforum.org/docs/WEF_AI_Value_Alignment_2024.pdf
[4] AI in Strategic Foresight report: https://reports.weforum.org/docs/WEF_AI_in_Strategic_Foresight_2025.pdf
[5] work on AI and Strategic Foresight: https://www.oecd.org/en/about/programmes/strategic-foresight.html
[6] Australia–New Zealand Responsible and Ethical Artificial Intelligence Framework 2025: https://www.anzpaa.org.au/products/products/australia-new-zealand-responsible-and-ethical-artificial-intelligence-framework
[7] AI, Rights and Rules: Who’s Accountable in an Automated World?: https://nsc.anu.edu.au/podcast/ai-rights-and-rules-whos-accountable-automated-world