How you use AI will be a critical part of messaging
Communicating how you use AI – in a way that people who are not data scientists can understand – is, and increasingly will be, vital for companies. That is why WPP’s BCW has launched BCW Navigate, says Harry Stovin-Bradford
“Companies need to think now about how they talk about the ways they are using AI and how they will use it in the near-term,” says Stovin-Bradford. “They literally need to start navigating this topic now.”
Why? People are already asking questions about how their data is being used, how companies are using AI tools to analyse consumer data, whether they can challenge the decisions those tools make, and whether there is any redress for unfavourable decision-making.
“The level of scrutiny and, by extension, transparency that's going to be demanded of companies will be significant,” he says. “Whether that is to do with an algorithm scanning people's CVs or with AI tools grading standardised tests, people will want to know how companies are using AI, and companies will have to respond to these questions.”
A period of regulatory debt
“Our focus is on the next two to five years,” says Stovin-Bradford. “What AI can deliver beyond that point is difficult to predict. Importantly, the question we are asking is: what regulatory debt has already been built up and what will that look like projected forward in this limited time horizon?”
What does he mean by regulatory debt? “This is where the use of a product or service outpaces the ability of government to keep up from a regulatory perspective,” he responds. “You end up with this significant delta between what technology is being used for and how it is being monitored and controlled. That gap will widen and, eventually, there will be some sort of crisis that will cause regulators to act.”
When we reach a crisis point, it will be too late for companies to hone their response. “When this sort of thing happens, governments tend to regulate very quickly and, at that point, you will need a point of view – as a company – on communicating how you use AI and on how it adds value to society,” he says. “If you are either a major user of the technology or a developer of this technology, you are also going to need to have a perspective on what regulation should look like, who should be regulating you, your vision for what society looks like after the introduction of your AI, and why that's better than society without the use of AI.”
What do we even mean by AI?
It is so easy to conflate the different types of technology underpinning AI generally and end up being confused by what AI is. The concept of AI should be unbundled into its constituent parts, says Stovin-Bradford, who is evangelical about using the six definitions of AI promulgated by WPP’s AI company, Satalia. “That provides a really helpful framework for thinking about this,” he says.
Of the six categories – task automation, generative AI, human representation, extracting complex insights/predictions from data, complex (better) decision-making, and extending the abilities of human beings – it is the fourth category (insights and predictions from data) that is likely to make the public most anxious in the near term.
“AI has been around for a long time. It was first referenced in UK legislation that gained Royal Assent in 1990,” says Stovin-Bradford. “If you bought a house as far back as the 1990s, AI has likely been making decisions about your mortgage for decades. The U.S. Department of Defense began using AI in 1991 to optimise logistics and, of course, the foundations of this work were laid by Alan Turing in the 1940s and 1950s.”
A reputational risk for companies
Companies that have invested heavily in this space – and WPP’s Wunderman Thompson found that 77% of businesses in the UK have already adopted some form of artificial intelligence – need to take stock.
“This is a brilliant time for communications. Taking complex ideas and expressing them simply is what is needed here. It is only right that the public is going to want to know more about how their data is being used – amongst other things,” says Stovin-Bradford, “and companies need to be ready.”
But, while messages are being built in the foreground, behind the scenes companies should be evaluating their appetite for risk. And BCW has developed a model for doing just that.
“At BCW, we think about the process for talking about AI in the following way: audit, articulate and then advocate,” says Stovin-Bradford. “The audit stage is understanding all the different ways in which AI touches your business. If you are using AI, there are political, regulatory and reputational risks that you're already carrying as a business. You then need to be able to articulate that risk to all potential audiences, and then you need to think about whether you need to be advocating for the way in which it is regulated.”
And the ‘articulate’ element is not simple. Communications should not just be with consumers; there are also internal communications to consider, as well as the potential for statutory corporate reporting in relation to risk over time.
But there is also risk in not keeping up. “Why wouldn't you be excited about this opportunity and the chance to be in at the beginning of communicating your position on it?” asks Stovin-Bradford. He points out how badly communication around the introduction of the telegraph, the telephone and then the internet was managed: “We've got an opportunity here as business, as civil society, as a culture, to get it right this time.”
He urges companies to make the most of this moment. Ask the questions you need to ask of your business, message the answers well and embrace the opportunity to shape how AI is governed, he concludes.
12 April 2023