How should we think about AI?
WPP’s Chief AI Officer and Satalia CEO, Daniel Hulme, says understanding the intended applications of AI technologies is perhaps the most compelling way of thinking about AI
Rightly or wrongly, AI has become synonymous with different technologies. Cloud, data, algorithms – they all enable us to do interesting things with AI. These technologies don’t necessarily help us to define AI, but they are a way in which we can start to build a framework for understanding AI, its parameters, and its scope for doing good (and bad).
But the struggle to define AI continues. We, at WPP’s Satalia, prefer the definition ‘goal-directed adaptive behaviour’. This is all about making decisions in relation to a specific goal with the ability to figure out whether those decisions were good or bad so that the technology can make better decisions going forward. But let’s be clear, AI as we know it does not really learn on its own – at least it does not do so at present – so even this definition does not really move us forward.
That is why the six categories of technology through which we can – and perhaps should – think about AI are so useful. For now, they are our lens for considering AI, and they will help us define a framework for understanding the evolution of AI as a force for good – and, importantly, its potential for doing harm and therefore requiring intervention.
1. Task automation
The first category of technology is task automation. This includes robotic process automation, natural language processing, machine vision, and so on. New technologies have allowed us to carry out tasks better and faster, and to replace specific tasks with simple algorithms, creating huge value through optimisation and automation. Technology replacing humans in the performance of various tasks always comes to the fore when jobs are threatened. But there is probably a halfway house – between the performance of tasks by humans and by machines – and it is this middle ground that will likely have the most impact over time.
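To make the idea concrete, the sketch below shows the kind of narrow, repeatable task that a simple algorithm can take over, with anything it cannot handle passed back to a person – the 'halfway house' described above. The queue names and keywords are invented for illustration.

```python
# Hypothetical sketch: rule-based automation of a routing task.
# Messages matching a known keyword are handled automatically;
# everything else falls back to human review.

KEYWORD_ROUTES = {
    "invoice": "billing",
    "refund": "billing",
    "password": "account-security",
}

def route_message(text: str) -> str:
    """Assign a message to a queue; unmatched messages go to a human."""
    lowered = text.lower()
    for keyword, queue in KEYWORD_ROUTES.items():
        if keyword in lowered:
            return queue
    return "human-review"  # the middle ground: people handle the rest

print(route_message("Please reset my password"))   # account-security
print(route_message("Where do I send feedback?"))  # human-review
```

Even a toy example like this shows where the value – and the risk – sits: the rules are cheap and fast, but deciding which cases must stay with humans is the design question that matters.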
2. Content generation
Often referred to as ‘generative AI’, this category involves the automatic generation of images, videos, text, music and so on. It is about augmenting the creative process – something that is often referred to as content intelligence. There are valid and useful applications of this type of AI – to drive increased engagement in campaigns, for example. But there will still be safeguards required as this type of technology becomes more commonly adopted and more sophisticated.
3. Human representation
This category is about replacing humans with technology. These interface technologies – like chatbots and avatars – look and feel human, and they raise concerns around ethics and risk. This technology makes us ask ourselves: should a chatbot or another technology that looks and sounds like a human declare that it is not in fact a human before a user interacts with it; and should technology ever be given a human name?
There is also the question of homophily. At Satalia, we keep in mind that people tend to seek out, or be attracted to, those who are like themselves. So, how does this work when humans interact with non-humans, and when it is not clear to humans who in the group is not actually human? What does that mean for trust? Does that create a social bubble? Does it reduce diversity? These are all questions we need to confront.
4. Extracting complex insights/predictions from data
Here we are talking about machine learning and advanced analytics – essentially data science, whereby we extract complex insights from data. We then need to explain those correlations if we are to understand the world in new and better ways and build better systems.
Machine learning to extract new complex correlations is attracting a lot of attention – there is a lot of chatter about this type of technology. Through a marketing lens, it can help us understand new personas and identify new types of human behaviour in ways we have never been able to achieve before – and then leverage that. Perhaps this category of technology will be the best place to start if we are to build a framework for AI only being used as a force for good.
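A minimal sketch of what 'discovering new personas' can mean in practice is clustering: grouping customers by behaviour and letting the groups emerge from the data rather than being defined up front. The example below is a bare-bones k-means on fabricated two-feature data (visits per week, average spend); the features, figures and starting centroids are all invented for illustration.

```python
# Hypothetical sketch: surfacing customer 'personas' by clustering
# behavioural data with a minimal k-means (illustrative data only).

from statistics import mean

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            distances = [(p[0] - c[0])**2 + (p[1] - c[1])**2 for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # move each centroid to the mean of its cluster
        centroids = [
            (mean(p[0] for p in cl), mean(p[1] for p in cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# (visits per week, average spend) -- fabricated illustrative data
customers = [(1, 5), (2, 8), (1, 6), (9, 80), (10, 95), (8, 75)]
centroids, clusters = kmeans(customers, centroids=[(0, 0), (10, 100)])
print(centroids)  # two emergent personas: casual browsers vs. heavy spenders
```

The interesting – and ethically loaded – step comes afterwards: explaining why these clusters exist, which is exactly the explanation requirement the category description calls for.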
5. Complex (better) decision-making
Expert systems, optimisation, decision trees, inference – these are all ways of using AI technologies to make better decisions. But, if these insights are given to humans to make actionable decisions, can the ultimate value of those insights be realised? Should machines be trusted with the whole optimisation process? Where should automation and optimisation begin and end?
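The questions above can be made tangible with a small optimisation sketch: a machine proposes a decision (here, a media-budget split via a simple greedy heuristic), and a human decides whether to act on it. The channel names, return rates and caps are invented for illustration – this is a sketch of the human-in-the-loop pattern, not a real optimiser.

```python
# Hypothetical sketch: decision support, not decision replacement.
# An algorithm proposes a budget allocation; the final call can
# still sit with a human. All figures are illustrative.

def allocate_budget(budget, channels):
    """Greedily fund channels in descending order of estimated return."""
    plan = {}
    for name, return_rate, cap in sorted(
        channels, key=lambda c: c[1], reverse=True
    ):
        spend = min(cap, budget)
        plan[name] = spend
        budget -= spend
        if budget <= 0:
            break
    return plan

channels = [
    ("search", 1.8, 40),   # (name, estimated return per unit, max spend)
    ("social", 2.4, 30),
    ("display", 1.2, 50),
]
proposal = allocate_budget(100, channels)
print(proposal)  # machine proposes; a human can still approve or adjust
```

Where the boundary sits – whether the proposal is executed automatically or reviewed first – is precisely the 'where should automation begin and end' question the category raises.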
6. Extending the abilities of humans
The capability to extend the abilities of humans both in the physical and digital worlds is the final category of AI technologies in our list. Here, we are talking about using exoskeletons that can make control decisions using AI technology, and cybernetics. This category is about using technology as an extension of ourselves in the physical world for the purposes of enhanced performance. And, in the metaverse, it is about having a digital twin – an avatar – make decisions on your behalf.
Across all categories
Across all six categories there are, and should be, different sets of constraints. And, for each of them, there are different security, safety, ethical and governance questions. The questions are endless: is there bias in generative AI; are we optimising for the right KPI; what happens if we have toxic combinations of data; and what happens if we extract insights that we should not be able to access? These questions – and lots more beside – will help us build a framework within which we hope it will be safer to operate.
But it is so much more complex than that. These six categories of technology are largely combinable. They are building blocks for complete systems. But by categorising technologies in this way, we can identify strengths, ethical concerns, weaknesses, frictions, opportunities, and so on.
This way of thinking enables us to unbundle technologies and their applications after decades of piecemeal development that has resulted in a confusing AI landscape. It is by unbundling that we can begin to work on improving technologies, skills and processes to help us solve problems better and, importantly, understand the framework for a safe and ethical future for AI.
06 February 2023