
Artificial intelligence needs humanity

Posted on July 4, 2022 @ 15:00

Many have heralded artificial intelligence [1] as a force-multiplier for defence and intelligence capabilities.

Do you want armed autonomous vehicles to comply with legal and ethical obligations as set out in the Royal Australian Navy’s robotics, autonomous systems and AI strategy? [2] AI can help. Do you want to more effectively analyse intelligence [3] to predict what an adversary will do next? AI can help. And AI’s proponents are right—it could, and likely will, do all of those things, but not yet.

Its ability to spot patterns, crunch numbers and calculate optimal solutions on an ‘if X happens, then do Y’ basis is now unmatched by any human being. But it has a fundamental limitation: human motivations cannot be measured solely in numbers.

Classical game theory has been trying to measure this since the 1940s, and its practitioners have had so little success that they labelled many such motivations ‘irrational’ and concluded that quantitative modelling is not possible. Yet if we could model non-material payoffs, we could answer questions such as ‘How will Russia change its defensive posture if Vladimir Putin loses face from military setbacks in Ukraine?’ or ‘Why would a rational person volunteer as a suicide bomber?’
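To make the gap concrete, here is a minimal sketch of what quantifying one non-material payoff might look like. Everything in it, the ‘face’ term, its weight and the numbers, is an illustrative assumption, not an established model:

```python
# A hedged sketch: adding a quantified non-material term (loss of face) to a
# material payoff. The functional form and the weight are assumptions.

def utility(material_payoff: float, face_lost: float, face_weight: float = 2.0) -> float:
    """Total payoff = material gain minus a weighted penalty for lost face."""
    return material_payoff - face_weight * face_lost

# Two hypothetical postures after a military setback: de-escalating recovers
# some material position but concedes face; escalating costs materially but
# limits perceived humiliation.
postures = {
    'de-escalate': utility(material_payoff=1.0, face_lost=3.0),   # -5.0
    'escalate':    utility(material_payoff=-2.0, face_lost=0.5),  # -3.0
}
print(max(postures, key=postures.get))  # 'escalate'
```

Once the face term carries enough weight, behaviour that looks ‘irrational’ on material payoffs alone becomes the model’s rational choice.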

Instead, game theory has only proposed high-level conceptual frameworks in an attempt to guide decision-makers. It has looked, for example, at whether we should have modelled the Cold War nuclear arms race as an iterated prisoner’s dilemma [4].
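For readers unfamiliar with the model, a minimal iterated prisoner’s dilemma looks like this in Python. The payoff values are the textbook defaults and the two strategies are standard examples; none of it comes from the frameworks cited above:

```python
# Standard payoffs: (my move, opponent's move) -> my score.
# C = cooperate, D = defect.
PAYOFFS = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): defection wins a short game
```

The model’s interest for the Cold War analogy lies in how repetition changes incentives: a single round rewards defection, but a long horizon can make sustained cooperation pay.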

This is not the fault of the economists and maths-trained game theorists, or their successors, trying earnestly and for good cause to predict the probability of human actions. Many are experts in the art of programming, but not in the intricate detail of why we humans do what we do. We cannot expect a programmer’s life experience to substitute for thousands of years of philosophy, historical precedent and more recent psychological studies. The programmers need back-up, and humanities departments are where it should come from.

The advent of AI has led to consideration of its ethics and to the involvement of humanities specialists, who are often employed to guide programmers with high-level principles-based frameworks [5] or to rule on whether AI testing and wargames are ethical. Both roles are vital to ensuring AI better understands humanity and our expectations of it, but this engagement is insufficient. To borrow an analogy from mathematics, the former supplies an example answer but no formula to apply, and the latter marks the answer but does not check the working-out.

We need humanities specialists involved at the coding level to help programmers assign mathematical functions to the various factors influencing human decision-making. It is not enough to say that love of money, family or duty motivates a person. A fit-for-purpose AI will need to know how strongly each factor motivates them and how those motivations interact, as the sketch below illustrates. In short, we should have a mathematical proof [6] for these factors.
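Here is one hedged sketch of what that could look like in code. Every weight and the interaction term are invented for illustration; they are precisely the numbers a humanities specialist would be asked to justify:

```python
from dataclasses import dataclass

@dataclass
class Motivations:
    money: float   # each factor scored 0..1 for a given person and situation
    family: float
    duty: float

def decision_score(m: Motivations, w_money=0.2, w_family=0.5, w_duty=0.3,
                   family_duty_coupling=0.4) -> float:
    # Linear weights say how much each motive matters on its own; the
    # coupling term captures that family and duty can reinforce each other.
    base = w_money * m.money + w_family * m.family + w_duty * m.duty
    return base + family_duty_coupling * m.family * m.duty

print(decision_score(Motivations(money=0.1, family=0.9, duty=0.8)))  # ~0.998
```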

Those who call such rigour ‘onerous’ are correct. It will be difficult and detailed, and it could be disastrous for our national security community if we do not try. A cursory review of published government AI programs shows just how high the stakes are.

The navy released its AI strategy with particular emphasis on autonomous undersea warfare systems [7], and the Australian Signals Directorate recently announced the REDSPICE [8] investment to boost its AI capabilities; both mark a new era in the incorporation of AI. These developments are also happening within police forces [9] at the federal, state and territory levels. And while the national security community no doubt has more opaque AI operations, they are likely taking heed of the recent ASPI report [3] highlighting noteworthy precedents from the US and UK for improving the use of AI.

The implications of this pervasive emphasis on AI were recently summarised by Michael Shoebridge [10] in another ASPI report:

The national security implications of this for Australia are broad and complicated but, boiled down, mean one thing: if Australia doesn’t partner with and contribute to the US as an AI superpower, it’s likely to be a victim of the Chinese AI superpower and just an AI customer of the US.

Building Australia into an AI superpower will require collaboration, whether with private companies (such as Google [11]) or academia (such as the Hunt Laboratory for Intelligence Research [12]), employing the ‘build on the low side, deploy on the high’ methodology. Alternatively, it could be delivered in-house through agency-specific taskforces, the Office of National Intelligence [13]’s joint capability fund or forums to be created under the new action plans [14] from non-traditional security sectors of government.

Whatever the manner of collaboration, using humanities specialists to develop a common language for human motivations would solve the so-called Tower of Babel problem between qualitative and quantitative analysts. Its development would be comparable to standardising the type of brick and mortar used in the construction industry, or shipping containers used in the freight industry.
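One hedged illustration of what such a common language could look like: a fixed vocabulary of motivation factors that qualitative analysts tag and quantitative models consume. The factor names and fields below are invented for the example:

```python
from dataclasses import dataclass
from enum import Enum

class Factor(Enum):
    # A standardised vocabulary both 'soft' and 'hard' analysts agree on.
    MONEY = 'money'
    FAMILY = 'family'
    DUTY = 'duty'
    PRESTIGE = 'prestige'

@dataclass
class Assessment:
    factor: Factor
    strength: float   # 0..1 on a scale agreed across teams
    rationale: str    # the qualitative analyst's justification, kept with the number

report = [Assessment(Factor.PRESTIGE, 0.8, 'military setbacks risk loss of face')]
```

Like the standard shipping container, the value lies not in any single record but in every team packing and unpacking the same shape.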

Only by harnessing both ‘soft’ and ‘hard’ sciences to code our humanity can we give the national security community the tools needed for Australia to become an AI superpower.




URLs in this post:

[1] heralded artificial intelligence: https://www.dst.defence.gov.au/sites/default/files/publications/documents/DST-Group-TR-3716_0.pdf

[2] strategy?: https://www.navy.gov.au/sites/default/files/documents/RAN_WIN_RASAI_Strategy_2040f2_hi.pdf

[3] more effectively analyse intelligence: https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2021-11/SR%20179%20Collaborative%20and%20agile_0.pdf?VersionId=yzAQe8yLxBPJ1c9VbqGAMD93a_oRPAK_

[4] iterated prisoner’s dilemma: https://www.jstor.org/stable/425197

[5] principles-based frameworks: https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-ethics-framework

[6] mathematical proof: https://en.wikipedia.org/wiki/Mathematical_proof#Colloquial_use_of_%22mathematical_proof%22

[7] autonomous undersea warfare systems: https://www.aspi.org.au/opinion/aukus-requires-rapid-expansion-autonomous-undersea-warfare-systems

[8] REDSPICE: https://www.asd.gov.au/sites/default/files/2022-03/ASD-REDSPICE-Blueprint.pdf

[9] police forces: https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2022-04/AI%20and%20policing%20in%20Australia_0.pdf?VersionId=1jIN6j1gQl1RQYBsuids2QePxz_hjFNP

[10] Michael Shoebridge: https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2022-04/SR183%20AI%20Your%20questions%20answered.pdf?VersionId=xFT7Wh.VIHax8rp4Yu0UHAruP1.GYoqx

[11] Google: https://ai.google/responsibilities/responsible-ai-practices/?category=fairness

[12] Hunt Laboratory for Intelligence Research: https://huntlab.science.unimelb.edu.au/home/about/

[13] Office of National Intelligence: https://www.oni.gov.au/

[14] action plans: https://www.minister.industry.gov.au/ministers/porter/media-releases/action-plan-positions-australia-be-global-leader-artificial-intelligence
