Is artificial intelligence about to be regulated?
9 Jun 2023

Advanced artificial intelligence technologies are being adopted at an unprecedented pace, and their potential to change society for the better is enormous. Since OpenAI first released ChatGPT in November 2022, AI has risen in the international consciousness faster than almost any technology before it.

However, AI also poses significant risks, from data leakage and privacy violations to deliberate disruption and attack. The importance of mitigating these risks and ensuring the responsible development and deployment of AI technology is only increasing.

On 1 June the Australian government, through the office of Minister for Industry and Science Ed Husic, released a discussion paper on the need to regulate AI. This could see AI technologies assigned risk levels, requiring organisations to understand the risk profile of the AI they implement.

This reflects a broader conversation happening around the world, as seen by recent efforts by the US to engage with industry leaders and promote responsible innovation.

On 16 May, OpenAI CEO Sam Altman’s testimony at the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law highlighted the potential harms of AI and the need for tougher regulation. This follows meetings between President Joe Biden and the heads of Alphabet, Anthropic, Microsoft and OpenAI on 4 May to discuss the potential direction of the technology and the options for mitigating risks. The meeting focused on safety, security, human and civil rights, privacy, jobs and democratic values, and what can be done to protect citizens from AI-related risk.

The meeting came against a backdrop of speculation that AI may be at an ‘inflection point’ that will be remembered for decades to come. Over the past six months, incredible technological advances have been achieved under the mantle of AI, surpassing what even experts in the field had thought possible in the near to medium term. These include large language models, advanced visual capabilities, and breakthroughs that some expect will enable autonomous vehicles to replace human freight drivers.

The speed and extent of these advances have led some experts to warn that AI poses an existential threat to humanity. The Future of Life Institute wrote an open letter calling for a pause on AI development. Geoffrey Hinton, often referred to as the ‘godfather of AI’, quit Google, citing concerns about the technology’s dangers. Eliezer Yudkowsky, an expert well known for urging caution on AI, has called for the world to ‘shut it down’. These public appeals come alongside an increasing awareness that what was once an academic challenge is becoming a practical one.

Attacks on AI systems are happening in the real world. One public-interest initiative—AI, Algorithmic, and Automation Incidents and Controversies—has kept a register of AI incidents and attacks ranging from the automated generation of fake news to failures of autonomous systems such as self-driving cars and political propaganda released as deepfakes. Some thinkers in the humanities, including Yuval Noah Harari (author of Sapiens and Homo Deus), even claim AI has the power to ‘hack’ society thanks to its ability to influence human cultures and norms.

Biden’s meeting with industry leaders recognises a new challenge, albeit in the context of an existing trend: when the biggest technological advances are made and controlled by private industry, governments have to be proactive to mitigate risk. This is a challenge for the rest of the world too, since the biggest advances in technology globally tend to occur in a handful of very large US companies. Remaining at the cutting edge of AI requires the significant financial and hardware resources that these companies are willing to invest. Training GPT-3, the large language model that underpinned ChatGPT, for instance, cost millions of dollars. This is of particular concern in the security context: when model details are held as proprietary by these companies, much of the knowledge needed to secure them is ceded to them as well.

Adversarial machine learning in particular poses a significant threat, because it allows malicious actors to manipulate AI systems and potentially harm individuals or organisations. For example, an attacker could manipulate the data used to train an AI system for facial recognition to make it misidentify certain individuals.
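To make that mechanism concrete, the minimal sketch below (in Python, using scikit-learn on a synthetic dataset as a stand-in for real facial-recognition data) shows one simple form of training-data poisoning: flipping a fraction of the training labels for a ‘target’ class so the resulting model becomes unreliable at identifying it. The dataset, the 30% flip rate and the helper function are illustrative assumptions, not a description of any deployed system or known attack.

```python
# Illustrative sketch of a training-data poisoning (label-flipping) attack.
# Synthetic data stands in for a real identification task such as facial recognition.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A toy binary identification problem (class 1 = the 'target individual').
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                     random_state=0)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Baseline: model trained on clean labels.
print("clean accuracy:", train_and_score(y_train))

# Attacker flips the labels of 30% of the target class's training examples,
# so the model learns to misidentify that class.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
target_idx = np.where(poisoned == 1)[0]
flip = rng.choice(target_idx, size=int(0.3 * len(target_idx)), replace=False)
poisoned[flip] = 0

print("poisoned accuracy:", train_and_score(poisoned))
```

Even this crude manipulation measurably degrades accuracy on untouched test data, which is why control over, and visibility into, training pipelines matters for security.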

Many countries are already taking steps to promote responsible innovation and risk mitigation in AI. In the US, initiatives include a blueprint for an AI bill of rights, the AI risk-management framework and a roadmap for standing up a national AI research resource. The EU has established the European AI Alliance, which aims to promote ethical and trustworthy AI, and the AI4People initiative, which aims to develop a set of ethical guidelines for AI. Australia recently created the National AI Centre’s Responsible AI Network. China recently proposed draft measures to regulate generative AI, the form of AI behind deepfakes and text generation. In 2017 the Chinese government released a development plan for new-generation AI, which aims to establish China as a world leader in the technology by 2030.

However, with AI developing faster than even the experts can keep up with, it’s fair to ask what hope policy and regulatory systems have. Already, misinformation, deepfakes and mistrust are clawing away at democratic institutions. There’s even discussion about how AI could be weaponised in interstate conflict. The most pressing question, though, is whether AI might be a risk to humanity at large. If it turns out to be the existential threat some leaders say it may be, the global community must do more to work together to create an AI-positive future.