
AI regulation: it’s about finding the right balance

AI regulation is an issue for governments, industry, regulators, companies and private individuals – and it must be addressed collaboratively

How to regulate AI and how to build AI models responsibly are among the top business concerns for 2023 – and they will surely rank highly going forward. Governments worldwide (and private individuals) are also considering how and when to respond to the march of AI (on top of all the macro crises the world is enduring).

It is fair to say that AI regulation is on all our radars. That is what prompted WPP and BCW to host a forum that explored the future of AI regulation.

WPP is ahead of the curve. It has been investing in AI for the past few years because AI is, as WPP CTO Stephan Pretorius calls it, “a transformation technology when applied to marketing and advertising”. He calls the pace of innovation over the last year “radical”: the impact of AI on client work, business outcomes, the industry and society has become front and centre. And there is no compromise on values and integrity.

WPP’s AI company, Satalia, is at the core of innovation and has enabled WPP to push beyond merely applying AI tools to being at the forefront of discovery. Importantly, WPP’s AI expertise includes building explainability into its use of AI tools.

BCW's own experience in building tools such as Decipher with its partner Limbik, and in supporting clients on policy and external communications through its Navigate team, means that it too is at the forefront of work in this area.

The UK Government is taking a lead

Given the high concentration of AI expertise in the UK, the UK Government is in pole position to be a leader. Alexandra Leonidou, Head of Regulation and Governance at the UK Office for AI, says that the UK Government has recognised the profound effect of AI globally.

She talks of its impact on public services like healthcare and education – not only its impact on marketing and advertising. While she identifies the risks – and they are profound (AI-enabled cyber-attacks, for example) – she outlines that regulation is about finding the right balance.

In March 2023, the UK Government published its AI White Paper – AI regulation: a pro-innovation approach – which had six core characteristics at its heart: pro-innovation, proportionality, trustworthiness, adaptability, clarity and collaboration. It has taken both a principles-led and context-led approach to regulation, and it expects, says Leonidou, to lean on existing regulators to oversee agreed principles.

The UK Government plans to start on its regulatory journey with a non-statutory approach, says Leonidou. The Government has consulted extensively – with 400 businesses and individuals, including WPP – and already established the central risk function in government.

Stealing the headlines has been the AI Safety Summit and the discussion paper launched just before it: Frontier AI: capabilities and risks. Funding for a digital and AI advisory service has also been secured. The objectives of the summit – with its roster of around 150 high-profile attendees from around the world – have been well publicised: to build a shared understanding of frontier risks, to establish a forward process, to agree appropriate measures, to identify measures for AI safety research, and to showcase how safe AI will enable AI for good globally.

Leonidou calls the Bletchley Declaration, published at the end of the summit, “ground-breaking”. A total of 28 countries, including China, signed this declaration, thereby agreeing to seize the opportunity of AI for peace and wellbeing, to affirm that all actors have a role to play, and to consider a proportionate and pro-innovation approach to governing AI. The Emerging Processes document will help ensure conversations and collaboration continue to flow.

What is more, the AI Safety Institute has been launched to carry out AI research and build models. This organisation will support technical standard development in partnership with other jurisdictions – partnerships with Singapore and the US have already been inked.

Where are we at?

There are so many ways to think about AI but, at WPP, Chief AI Officer Daniel Hulme has a very clear train of thought. He points to the six applications of AI, which help us navigate governance one application at a time.

He also points to the AI impact pyramid with disruption at its apex (where AI disrupts the commercial and operating model), production and services occupying the layer beneath that (with their AI embedded tools and processes), and a core productivity base supporting everything (with its AI embedded core productivity tools and back office).

When we think about the risks associated with AI, Hulme distinguishes between three kinds: micro risks, malicious risks and macro risks to society. In the final analysis, he says, there is a series of important questions to consider in relation to how AI is used and the extent to which it should be regulated:

1. Is the intent appropriate? AIs don’t have intent but humans do.

2. Are the algorithms deployed opaque or transparent? There needs to be explainability.

3. What harm could an AI cause?

WPP is proud to have developed a set of principles, guidance and legal advice that underpin our internal generative AI platforms and tools and help our people and clients understand AI responsibility. These include the WPP AI Policy, the WPP Data & AI Ethics Principles and Guidelines and, covering generative AI specifically, the Generative AI Principles.

Broadening the debate

From a wider perspective, what is becoming clear is that there is no real common understanding of the responsible use of AI. Yves Schwarzbart, Industry Relations Manager at Google UK and Co-chair of the ASA’s AI Taskforce, called for collaboration and coordination throughout the advertising and marketing industry.

Jesse Shemen, CEO and co-founder of Papercup – an AI translation company – talked about quality control in the use of AI tools and agreed there is no common understanding of the responsible use of AI (which is why a principles-based approach is the right one). Hulme concurred that there is no commonality of understanding of risk which is why WPP undertakes significant engagement with clients to help them understand training models, biases and risks associated with copyright violation.

In spite of the dearth of common understanding, Leonidou pointed out that we are already seeing emerging initiatives – such as the White House Executive Order – take similar approaches to each other. This is exactly what is needed if we are to avoid barriers to trading across borders.

Perhaps this is the crux of AI regulation: how do you build a common understanding of the challenges business faces, and of the risks, without limiting the scope for businesses to reap the rewards? And how do we make sure companies that adopt AI technologies later are not penalised by regulation compared with their early-mover peers?

Published on 04 December 2023
