How to think about AI policy
21 Mar 2024

In Poznań, some 300 kilometers west of Warsaw, a team of tech researchers, engineers, and child caregivers is working on a small revolution. Their joint project, ‘Insension’, uses facial recognition powered by artificial intelligence to help children with profound intellectual and multiple disabilities interact with others and with their surroundings, becoming more connected with the world. It is a testament to the power of this quickly advancing technology. 

Thousands of kilometers away, in the streets of Beijing, AI-powered facial recognition is used by government officials to track citizens’ daily movements and keep the entire population under close surveillance. It is the same technology, but the result is fundamentally different. These two examples encapsulate the broader AI challenge: the underlying technology is neither good nor bad in itself; everything depends on how it is used. 

AI’s essentially dual nature informed how we chose to design the European Artificial Intelligence Act, a regulation focused on the uses of AI, rather than on the technology itself. Our approach boils down to a simple principle: the riskier the AI, the stronger the obligations for those who develop it. 

AI already enables numerous harmless functions that we perform every day—from unlocking our phones to recommending songs based on our preferences. We simply do not need to regulate all these uses. But AI also increasingly plays a role at decisive moments in one’s life. When a bank screens someone to determine if she qualifies for a mortgage, it isn’t just about a loan; it is about putting a roof over her head and allowing her to build wealth and pursue financial security. The same is true when employers use emotion-recognition software as an add-on to their recruitment process, or when AI is used to detect illnesses in brain images. The latter is not just a routine medical check; it is literally a matter of life or death. 

In these kinds of cases, the new regulation imposes significant obligations on AI developers. They must comply with a range of requirements—from running risk assessments to ensuring technical robustness, human oversight, and cybersecurity—before releasing their systems on the market. Moreover, the AI Act bans all uses that clearly go against our most fundamental values. For example, AI may not be used for social scoring or subliminal techniques to manipulate vulnerable populations, such as children. 

Though some will argue that this level of oversight deters innovation, in Europe we see it differently. For starters, rules built to last provide the certainty and confidence that tech innovators need to develop new products. But more to the point, AI will not reach its immense positive potential unless end-users trust it. Here, even more than in many other fields, trust serves as an engine of innovation. As regulators, we can create the conditions for the technology to flourish by upholding our duty to ensure safety and public trust. 

Far from challenging Europe’s risk-based approach, the recent boom of general-purpose AI (GPAI) models like ChatGPT has only made it more relevant. While these tools help scammers around the world produce alarmingly credible phishing emails, the same models could also be used to detect AI-generated content. In the space of just a few months, GPAI models have taken the technology to a new level in terms of the opportunities it offers and the risks it poses. 

Of course, one of the most daunting risks is that we may not always be able to distinguish what is fake from what is real. GPAI-generated deepfakes are already causing scandals and hitting the headlines. In late January, fake pornographic images of the global pop icon Taylor Swift reached 47 million views on X (formerly Twitter) before the platform finally suspended the user who had shared them. 

It is not hard to imagine the damage that such content can do to an individual’s mental health. But if applied on an even broader scale, such as in the context of an election, it could threaten entire populations. The AI Act offers a straightforward response to this problem. AI-generated content will have to be labeled as such, so that everyone knows immediately that it is not real. That means providers will have to design their systems so that synthetic audio, video, text, and images are marked in a machine-readable format and are detectable as artificially generated or manipulated. 
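To make the idea of machine-readable marking concrete, here is a minimal sketch of one possible scheme: a provenance manifest attached to a piece of generated content, with a flag that downstream detectors can read. The field names and functions are hypothetical illustrations, not taken from the AI Act or from any particular standard (real deployments would use established schemes such as content credentials or watermarking).

```python
import json

def label_synthetic(content_id: str, generator: str) -> str:
    """Produce a machine-readable provenance manifest for generated
    content. All field names here are illustrative assumptions."""
    manifest = {
        "content_id": content_id,
        "ai_generated": True,  # the flag a detector would look for
        "generator": generator,
        "disclosure": "This content was artificially generated.",
    }
    return json.dumps(manifest, sort_keys=True)

def is_labeled_synthetic(manifest_json: str) -> bool:
    """Consumer-side check: parse the manifest and read the flag."""
    return json.loads(manifest_json).get("ai_generated", False)

manifest = label_synthetic("img-001", "example-model-v1")
print(is_labeled_synthetic(manifest))  # True
```

The point of the sketch is the division of labor the regulation implies: the provider embeds the label at generation time, and any platform or user can verify it mechanically afterwards.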

Companies will be given a chance to bring their systems into compliance with the regulation. Those that fail will face fines: up to €35 million ($38 million) or 7% of global annual turnover, whichever is higher, for violations involving banned AI applications; up to €15 million or 3% for violations of other obligations; and up to €7.5 million or 1.5% for supplying incorrect information. And fines are not the only sanction: noncompliant AI systems can also be barred from the EU market. 
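The tiered penalties above all follow the same "whichever is higher" rule. The short sketch below is an illustrative arithmetic aid only, not legal advice; the tier names are my own labels, and it assumes the percentage applies to global annual turnover as the text states.

```python
def max_fine(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Return the applicable fine: the flat amount or the share of
    global annual turnover, whichever is higher."""
    return max(flat_cap_eur, pct * turnover_eur)

# Tiers as described in the text (flat amount in euros, share of turnover);
# the dictionary keys are hypothetical labels for illustration.
TIERS = {
    "banned_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

# A firm with 1 billion euros of global annual turnover that violates a ban:
flat, pct = TIERS["banned_practices"]
print(max_fine(1_000_000_000, flat, pct))  # 70000000.0
```

For a large firm the turnover-based figure dominates (7% of €1 billion is €70 million), while for a small firm the flat amount becomes the binding number.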

Europe is the first mover on AI regulation, but our efforts are already helping to mobilize responses elsewhere. As many other countries start to embrace similar frameworks—including the United States, which is collaborating with Europe on ‘a risk-based approach to AI to advance trustworthy and responsible AI technologies’—we feel confident that our overall approach is the right one. Just a few months ago, it inspired G7 leaders to agree on a first-of-its-kind Code of Conduct on Artificial Intelligence. These kinds of international guardrails will help keep users safe until binding legal obligations take effect. 

AI is neither good nor bad, but it will usher in a global era of complexity and ambiguity. In Europe, we have designed a regulation that reflects this. Probably more than any other piece of EU legislation, this one required a careful balancing act—between power and responsibility, between innovation and trust, and between freedom and safety.