Wednesday 3rd May 2023

Have we cracked the regulation of artificial intelligence? As if!

Amy Heckerling’s brilliant coming-of-age film Clueless translates Jane Austen’s Emma to the palm-fronded affluence of contemporary Beverly Hills. In one scene its main character Cher, an archetype of the wealthy American teenager, is asked to present to her school debating class on the subject of media censorship. She argues pithily that attempts to sanitise dramatised violence on television are futile because, were somebody to change the channel and watch the news, they would see just as much violence in the real world.

As a defence of artistic realism it is a nice vignette, but it also encapsulates an anxiety boiling up at the time of the film’s release, namely, what – in a technological sense – constitutes the real world? 27 years on, we are arguably further from an answer to this question, despite being in desperate need of one. As the products of technological progress encroach upon our time and replace analogue customs, they blur the boundaries between the real and the represented. TikTok is a case par excellence: probably the app closest to achieving the original dream of cyberspace as an inhabitable world constructed purely of digital content. This blurring creates fundamental problems for preventing harms and human rights abuses of a kind unfathomable earlier in our history.

Artificial intelligence is the ubiquitous phrase du jour, humanity’s salvation or its bête noire, depending on who we listen to. It is seldom possible to open a newspaper these days without encountering articles extolling its incredible leaps and bounds and its potential, or warning of catastrophic consequences that our species may not survive. Of course, the transformative power of AI is already part of the economy. It played a role in sending you to this webpage, and it has been successfully adopted to accelerate drug discovery, bolster cyber security, and keep us engaged on social media. It is not tomorrow’s technology, and neither are its excesses and potentials for harm.

One area where losing our grip on what constitutes the real world becomes especially problematic is the phenomenon of deepfakes, hyper-realistic (but entirely synthetic) videos created using artificial intelligence. These deceptions demolish and remould truth. They are nothing less than machines of false consciousness in the wrong hands. The litany of harms that can flow from deepfakes is almost endless, but the most prominent are found in pornography, where victims – almost always women – are edited into sexual films, and in democratic politics. Here, AI presents a greater threat to public trust in the sincerity and accuracy of politicians’ statements than any scandal or leak in history.

This has already created very real headaches for governments, especially those, like the UK, pursuing systems-based regulation to tackle online harms. Regimes like the Online Safety Bill are set to be particularly good at preventing and arresting user-to-user harms at the sharp end of social media interactions. This is possible because of a powerful set of duties of care imposed on the companies themselves. Unfortunately, however, these regulatory mechanisms are much less useful at opening up the bonnet and dismantling sophisticated user-generated harms at their source – i.e. intervening in the systemic AI that underpins them.

“AI presents a greater threat to public trust in the sincerity and accuracy of politicians’ statements than any scandal or leak in history”

For this reason and others, the UK government has embarked on its own separate approach to regulating AI, which is under consultation at the moment. It sets out a pro-innovation approach, which will use non-statutory measures in the first instance to corral developers of AI to align with set principles. It is a timely and important step, but it downplays a crucial piece of the jigsaw: awareness. While it contains a plan (of sorts) to encourage future regulators of AI to conduct education campaigns for consumers and users on AI risks, there is scant detail as to what this would mean in practice.

It is my belief that this is deserving of its own strategy, and one directed at users both young and old. The only way we can hope to combat some of the inherent negative effects of AI is through a universal, robust and standardised education that gives every citizen the tools to understand how their data and experiences are being manipulated by technology. This must include a critical approach to truth. It is certainly no magic bullet, but it is a noble start.

Perhaps it is a non sequitur to hope to adequately answer the question of what is real and what is not. But, until we turn our critical attention to the public, we will remain totally clueless.

“The only way we can hope to combat some of the inherent negative effects of AI is through a universal, robust and standardised education”