The need for responsible AI 
19 Mar 2024

‘2023 marks 25 years of Google Search, and a quarter of a century of curiosity,’ said the tech giant in December. At the same time, Google launched its ‘Year in Search’, highlighting how the service has been influenced by what matters to Australians and naming everything from the Optus outage and the war in Gaza to the new royal era of King Charles III and Queen Camilla among the stand-out search trends of 2023.

That, in turn, raises the question of how we’re being influenced, which inevitably brings us to the subject of artificial intelligence.

For example, Google’s autocomplete function uses AI to predict what users are typing and to offer suggestions based on popular queries. While it’s intended to be helpful, reportedly reducing typing time by about 25%, there are concerns about the neutrality of the AI behind it: the feature has been seen to produce offensive, inaccurate or misleading suggestions, such as ‘women should stay at home’.
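
To make the mechanism concrete, here’s a minimal sketch in Python of popularity-ranked autocomplete. Google’s real system is vastly more sophisticated (and proprietary), so the query log, counts and function below are invented purely for illustration; the point is that suggestions ranked only by what people search for most will surface whatever is popular, with no notion of whether it’s appropriate.

```python
from collections import Counter

# Invented query log for illustration; a real system aggregates billions of searches.
query_log = Counter({
    "optus outage update": 80,
    "optus outage compensation": 45,
    "weather sydney": 120,
    "war in gaza news": 70,
})

def autocomplete(prefix: str, limit: int = 3) -> list[str]:
    """Return the most frequently logged queries that start with `prefix`."""
    matches = [(count, query) for query, count in query_log.items()
               if query.startswith(prefix)]
    matches.sort(reverse=True)                 # most popular first
    return [query for _, query in matches[:limit]]

print(autocomplete("optus"))
# ['optus outage update', 'optus outage compensation']
```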

With technology and media so entrenched in our social systems today, the advancement and application of AI is a perpetual issue—particularly where regulation is in its infancy. 

Enter responsible AI. 

The term has become a common phrase in recent years, used to counter fears and concerns about AI. However, the NSW Ombudsman cautions that ‘responsible AI’ can be a form of tech vendor spin, arguing that it risks obscuring the real question: who is actually responsible for AI development?

The European Union has made strides to counter this problem. It has recently finalised the AI Act, the world’s first comprehensive AI law, which will see it enforce binding rules on the development, deployment and use of both high-risk and general-purpose AI.

Because, when AI is used to support, or even replace, everyday processes and decision-making, there’s a lot at stake. 

In 2018, well before ChatGPT made AI mainstream, the technology had already entered the US legal system: a longer prison sentence was handed down based on little more than the output of an algorithm.

Since then, TikTok has been downloaded by 1.677 billion users, and its all-powerful, all-knowing recommendation algorithm has proved dangerous. It doesn’t take long for a user to move from a relatively tame comedy clip to a malicious one—whether it’s a radical view, extremist content or outright propaganda from a particular ideological fringe. 

It’s a prime example of poorly incentivised algorithms, in which companies optimise their systems for their own benefit, in this case keeping users engaged by pushing content that triggers strong emotions, without consideration for social harms such as rampant polarisation.
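
The incentive problem can be shown with a deliberately simplified sketch. TikTok’s actual recommender is proprietary and far more complex, so the videos, numbers and scoring functions below are purely illustrative; what matters is that an objective built only on engagement will happily promote the most inflammatory item, while even a crude penalty for predicted harm changes the outcome.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_time: float  # model's estimate of engagement, in minutes
    predicted_harm: float        # model's estimate of harm risk, 0.0 to 1.0

# Invented candidates for illustration only.
candidates = [
    Video("tame comedy clip", predicted_watch_time=2.0, predicted_harm=0.01),
    Video("outrage-bait rant", predicted_watch_time=6.0, predicted_harm=0.70),
    Video("extremist propaganda", predicted_watch_time=8.0, predicted_harm=0.95),
]

def engagement_only(v: Video) -> float:
    # The poorly incentivised objective: maximise watch time and nothing else.
    return v.predicted_watch_time

def engagement_with_harm_penalty(v: Video, weight: float = 10.0) -> float:
    # The same engagement signal, but with an explicit cost attached to
    # content the platform itself predicts is harmful.
    return v.predicted_watch_time - weight * v.predicted_harm

print(max(candidates, key=engagement_only).title)               # extremist propaganda
print(max(candidates, key=engagement_with_harm_penalty).title)  # tame comedy clip
```

In practice the difficulty is less the arithmetic than the incentive: the harm penalty only exists if the platform is willing to trade away some engagement.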

This is relevant because in recent months Australia has seen a rise in anti-Semitic and Islamophobic content on social media platforms. 

While the federal government has jurisdiction over telecommunications, it does not yet regulate software or how it’s deployed. Communications Minister Michelle Rowland has signalled updates to the Basic Online Safety Expectations, asking tech companies to ensure that their algorithms don’t amplify harmful or extreme content, including racism, sexism and homophobia, but AI regulation remains in the hands of industry.

Where there’s potential for misuse, developers and data scientists are responsible for preventing it through ethical AI design principles: the explicit mitigation of the potential harms the technology can cause.

However, the basics of ethical AI development are often missed. AI development can follow different pathways, depending on developers’ goals and methods. Every day, my work building trustworthy AI involves careful consideration of the pathways designed for safe use: the most obvious use of an AI system should always be the safest.

There are some basic principles that AI developers should follow. First up is bias mitigation: AI models should be designed with care to avoid unfair or discriminatory outcomes. Transparency and explainability should also be considered, to avoid mysterious black-box situations in which an AI’s processes and outputs can’t be explained or defended. Lastly, accountability is crucial, so that where errors are made (and they will be) there are means to correct them.
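
As one small, concrete example of what bias mitigation and accountability can look like in practice, the sketch below computes a demographic-parity gap: the difference in favourable-outcome rates between two groups in a model’s decisions. The records, group labels and threshold are invented for illustration; real audits use a range of fairness metrics and much larger held-out datasets.

```python
# Invented audit records: model decisions (1 = favourable outcome) logged
# alongside a protected attribute. Real audits use large held-out datasets.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

def approval_rate(group: str) -> float:
    decisions = [r["approved"] for r in records if r["group"] == group]
    return sum(decisions) / len(decisions)

# Demographic parity difference: the gap in favourable-outcome rates between
# groups. Zero means parity; a large gap is a signal to investigate.
gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Approval rate, group A: {approval_rate('A'):.2f}")  # 0.75
print(f"Approval rate, group B: {approval_rate('B'):.2f}")  # 0.25
print(f"Demographic parity difference: {gap:.2f}")          # 0.50

if gap > 0.2:  # the threshold is a policy choice, not a technical constant
    print("Flag for human review: outcomes differ substantially between groups.")
```

A check like this doesn’t fix bias on its own, but it makes the problem visible and creates a record that someone can be held accountable for acting on.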

Even with all this in place, truly responsible AI also needs a protective framework against rogue actors going out of their way to use the technology for financial or criminal gain. You can’t outrun an arms race, but you can beat the people involved—you just need to get into the minds of those intentionally using the technology to do harm.

The risk that individuals will go out of their way to use the technology harmfully will always exist, but it’s up to those developing algorithms and training AI models to make sure the pathways to doing so are as controlled as possible.