Are marketers prepared for the oncoming world of AI regulation?
As businesses look to maximise the possibilities of AI, how can they respond to the rapidly evolving regulatory landscape without limiting innovation or risking their reputations?
AI regulation is currently a complicated picture, with legislators looking to strike a balance between encouraging innovation and protecting citizens’ rights. As the adoption of AI scales across multiple industries, it’s crucial that business leaders keep up with the latest developments.
For those in advertising and marketing this is especially important. There are huge opportunities for AI-powered content creation, audience insights, market research and much more, but these must be weighed against the challenges of maintaining transparency, trust and consumer confidence.
Many are pressing on, establishing new frameworks for self-governance informed by existing industry standards that also reflect brand values. With this ambition in mind, let’s examine what you need to know.
AI regulation: the current state of play in the UK
To support the rapid development of AI technology across the UK, the new government has established an ‘AI Opportunities Unit’ and is expected to publish its ‘AI Opportunities Action Plan’ in Q4 2024[1].
Similarly, the UK’s AI Safety Institute continues to conduct research and build infrastructure to evaluate the impact and safety of frontier AI – defined by the UK government as “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”.
Whereas the previous government’s 2023 AI Regulation white paper clearly favoured a lighter-touch approach, July 2024’s King’s Speech hints at a desire for something more rigorous.
It’s not entirely clear which approach will prevail, although Feryal Clark, Parliamentary Under-Secretary of State for AI and Digital Government, has recently confirmed that any legislation will be: “highly targeted and [will] avoid creating new rules for those using AI, with a focus on the systems of tomorrow.”
In the meantime, the implementation of the EU AI Act – which classifies AI models according to their potential risk – continues[2]. It’s likely that the UK will strive for consistency with this and other international standards – including guardrails on highest risk categories, and a move to make companies’ existing voluntary AI safety commitments legally binding.
From infrastructure to transparency: key challenges for business
Effective AI models depend on a combination of computational power and large amounts of data. As their use and complexity increases, the need to keep investing in core infrastructure will be crucial, across both the private and public sectors.
“This is cutting-edge, physical infrastructure that requires new investment, new skills, and new training to deploy effectively and create maximum benefits for your company,” says Max Beverton-Palmer, Head of Policy UK at NVIDIA, the AI computing company. The challenge is positioning AI at the heart of your business and not seeing it as an add-on. Achieving this means:
- Developing plans and budgets to support the development of AI infrastructure at a scale appropriate to your long-term goals
- Ensuring that data is properly stored, managed and processed, and that solid data privacy and security protocols are in place
- Adopting strategies that make systems more transparent and accountable to preserve consumer confidence and ensure future regulatory compliance
- Investing in the right human resources to develop and oversee AI systems, including training existing staff
- Auditing, measuring and re-assessing your models to ensure they’re fit for purpose and fulfilling business needs
Staying still means falling behind
In addition to meeting these strategic challenges, businesses must also stay ahead of the regulatory curve where they can – especially in sectors like advertising, where maintaining consumer trust is key.
Across the marketing industry, new standards are emerging. Self-regulatory organisations like the ASA are offering useful insights into how to approach regulation[3] and building AI-powered monitoring tools[4], so it’s important to keep up to date with their work in addition to following what governments are doing.
With AI transforming the way consumers are targeted, ensuring that companies’ AI governance “adheres to the foundational principles of being legal, honest, decent and truthful is a good place to start,” according to Vicky Brown, General Counsel, Commercial and Chief Privacy Officer at WPP.
For instance, advertisers can establish clear AI guidelines that outline risks such as algorithmic bias, privacy violations, or the inadvertent inclusion of misleading information, alongside mitigation measures such as using complete, comprehensive and uncorrupted data sets for AI training, or being transparent about AI use.
Self-governance and brand values
For Brown, one of the key challenges (and opportunities) for brands is mapping AI applications onto their core values – and developing policy frameworks based on this.
One brand leading the way is Dove, which has taken a proactive stand by committing never to use AI ‘to create or distort women’s images’ in its content[5] – thereby aligning its AI strategy with its longstanding campaign to promote real beauty.
AI is also creating opportunities for brands previously limited by budget. ITV has recently expanded its commercial creative production service to encourage SMEs to consider TV advertising, with a model that uses generative AI to create ads at far lower costs[6].
As such practices become more common, the need for clear and consistent approaches to the use of AI – and when and how this is communicated to consumers – is critical. According to a 2023 survey by MMTN Research, 78% of consumers polled wanted brands to be fully transparent when they use AI[7].
Ultimately, this comes down to the specifics of a campaign or product. The use of AI-generated backgrounds, for instance, may require labelling if they portray a seemingly real-world setting – like a holiday resort – that might materially mislead the viewer. At the end of the day, it’s all about aligning brand values with how AI is used to produce or enable advertising.
Careful attention to both brand messaging and creative output is crucial but should always be backed up by robust workflows and internal processes. This could include processes governing how training data is assessed, to avoid bias when compiling audience research. Or it could mean establishing clear lines of accountability for AI-driven decisions.
To reinforce this, Brown advocates a cross-organisational approach to implementing AI guidelines. Legal teams can manage regulatory compliance, creative teams can maintain brand authenticity, and policy experts can oversee strategic alignment, working together to build consistency across the supply chain and establish clarity for clients.
Managing reputation in an AI-driven world
It’s a view echoed by Kate Joynes-Burgess, Senior Advisor on Digital Innovation (EMEA) at Burson, who says: “We need multidisciplinary teams to think through how we optimise AI adoption, how we integrate it in effective ways, how we use it responsibly, and how we ask the right questions to inform the right solutions.”
For many, this is key to maintaining brand reputation and consumer confidence in a world increasingly driven by complex technologies, including AI.
Yet AI itself will likely be part of this solution. Just as the use of AI presents new risks to how brands are perceived, it also offers those brands enhanced tools and techniques to monitor and predict audience behaviour and perception in real time.
“AI is changing our workflows, from managing reputation to seeing around corners; ultimately, it’s evolving how we identify opportunities across the plethora of channels we now manage, and address potential challenges before they arise,” says Allison Spray, EMEA Chief Data & Intelligence Officer at Burson.
Spray references tools such as ‘Decipher Index’[8], which uses cognitive AI to evaluate how emerging broader social or political themes can impact brand reputation, allowing users to get ahead of communications challenges before they arise.
Similar AI models can be trained to support enhanced audience segmentation, gather consumer insights, or crunch large amounts of data, freeing up teams to focus on the more creative or strategic areas of their work.
Nevertheless, in all eventualities, human expertise, creativity and critical thinking will remain of paramount importance – especially as we come to terms with the tricky issue of responsible regulation and reputation management.