Building a brilliant ADF
20 Apr 2018

In 2017, a US Department of Defense study of the future operating environment described artificial intelligence (AI) as the most disruptive technology of our time.

In this environment, says the report:

big data techniques interrogate massive databases to discover hidden patterns and correlations that form the basis of modern advertising—and are continually leveraged for intelligence and security purposes by nation states and non-state entities alike.

The potential applications of artificial intelligence, and the deep-learning capabilities it brings, may be among the most profound artefacts of the fourth industrial revolution. Informed and experienced experts such as US Secretary of Defense James Mattis and former Deputy Secretary of Defense Bob Work are questioning whether AI will change not just the character, but also the nature, of war. That highlights just how disruptive this technology is likely to be for society and commerce, as well as for human competition and conflict.

In future conflicts, we can expect decision cycles to become faster than human cognition can process. Military command and control, and strategic decision-makers, will need AI that can process information and recommend decision options faster, or of higher quality, than the enemy can.

And as I’ve written previously, military organisations will contain thousands or even tens of thousands of unmanned and robotic systems, all with some type of AI. These swarms will demand AI-assisted command and control, as will the other composite human-automated military formations that are likely to exist in future areas of conflict.

While there’s a need to build capacity within the Defence organisation, the guiding principles apply to the wider national security community. Defence exists in an ecosystem of government organisations working towards national objectives, and it’s imperative to quickly introduce AI to support decision-making in this joined-up environment. Immediate action is therefore required for Defence (the department and the ADF) to rapidly increase its understanding of the applications of AI, and to contribute to a national approach.

Frank Hoffman recently proposed that military organisations may be at the dawn of a seventh revolution in military affairs that he calls the ‘autonomous revolution’. Underpinned by exponential growth in computer performance, improved access to large datasets, continuing advances in machine learning and rapidly increasing commercial investment, the future application of AI and machine learning may change military organisations and, more broadly, how nations prepare for war.

However, as Max Tegmark has written, AI researchers are divided over whether human-level AI is possible, and when it might appear. The most optimistic estimates are ‘in a few decades’; others predict ‘not this century’ or ‘not ever’.

But assisted, augmented and autonomous intelligence capabilities are already in use or can be expected over the coming decade. AI needn’t replicate human intelligence to be a powerful tool. The intellectual preparation of Defence personnel, and of the wider national security community, to use AI effectively must begin now.

The first of four key imperatives is to start educating Defence and other national security personnel about AI. A Belfer Center report finds that it’s vital for non-technologists to be conversant with the basics of AI and machine learning. The aim is to develop baseline AI literacy among more Defence and national security leaders to supplement the expertise of the few technical experts and contractors who design and apply algorithms. Reading lists, residential programs and online courses, as well as academic partnerships and conferences, will help.

Defence education must be adapted to deliver greater technical literacy so that personnel better understand machine learning and AI. This will build wider institutional capacity to exercise quality control and address the risks of misbehaving algorithms. Personnel must also be educated about the ethical issues of using AI for national security purposes. The overarching aim must be to develop a deep institutional reservoir of people who understand the use of AI, and who appreciate how human–AI collaboration can be applied most effectively at each level of command.

The second imperative is for Defence and other agencies to move beyond limited experimentation and thinking in key areas towards broader exploration of how to use AI. This might include finding new ways for the national security community to use AI to work together more effectively. It may also mean using AI to support decision-making in ways that drive changes to the organisation of military and non-military elements of national security.

Some aspects of this exploration will require leaps of faith in assessing how capable AI will be in the future. Between the wars, the German army used fake tanks to develop its combined arms operating systems. So too might we use anticipated future capabilities to build new integrated national security and Defence operating systems using AI.

Potentially, this exploration could even permit consideration of significant changes in strategic decision-making processes and organisations—mostly leftovers from second and third industrial revolution mindsets.

The third imperative is for Defence to deepen collaboration with external institutions working on AI applications. All of the Group of Eight universities in Australia conduct AI research and teach its applications. A number have partnerships with international institutions, including some that do work for foreign military and security agencies. They could help Defence, and other government agencies, explore the use of AI to support operational capabilities, decision-making in directing and running operations, and other strategic functions such as education. Broad collaborative research with our closest allies will permit sharing of best practice and offers small nations like Australia the opportunity to develop bespoke applications that complement, rather than copy, overseas innovations.

Eventually, this could (and probably should) lead to a fourth imperative: the development of an AI equivalent of the Australian Naval Shipbuilding Plan, providing a focus for sovereign capability in AI research and development. Such a national approach is important because it might provide resourcing for further collaboration between universities, government and commercial entities. It may also provide the basis for a larger national industry to support non-military AI functions.

Finally, if robotics is included, it may provide a basis to mobilise national effort if that’s necessary in the coming decades. Australia sits at the end of a long supply line for almost every element of sophisticated weaponry. Our successors may thank us if we have the forethought to develop an indigenous capacity to design and construct (using additive manufacturing) swarms of autonomous systems.

Sitting back and observing foreign developments isn’t an effective strategy. An aggressive national and departmental program of research, experimentation and education for Defence and other national security personnel is required. The knowledge for such a program exists in our universities, and changes in the regional and global security environment provide the strategic drivers for action.

To leverage these potential capabilities, we must start educating our people now, and we must develop a sovereign national capacity to harness AI for national security purposes. In this way we might develop a truly brilliant future ADF.