
When marketing meets science fiction

AI offers enormous potential, but what about the ethical challenges?

Oliver Feldwick

The&Partnership

Published on 10 November 2020


Ethics should play a much larger role in marketing than it does. It sits behind everything we do. Operating at the intersection of business, human decision-making and creativity, we should continually be examining our impact on the world. As Rory Sutherland puts it, “Marketing raises enormous ethical questions every day – at least it does if you are doing it right.” But we tend not to delve into them, probably because it’s difficult.

The trolley dilemma

Luckily, millennia of philosophical ponderings have given us some useful tools to explore ethical dilemmas. For example, the Trolley Problem is a common thought experiment to illuminate the ethics of utilitarianism, the philosophy that the ideal ethical action is to maximise “good/utility” or minimise “harm”. In this dilemma, you are standing at a train intersection. A runaway train trolley is coming down the track. If you leave it on the current track and do nothing, it will hit five people. If you flip the switch, it will divert to another track, hitting one person. Is it ethically okay to actively step in and kill one person to prevent a greater loss?

First-year philosophy students’ pub conversations tend to make the dilemma spicier. What if that one person is a member of your family? Or if the five people are all in their eighties, while the one person is a teenager with a whole life ahead of them? Or the one person is a world-leading cancer specialist on the edge of a breakthrough that could save millions? These thought experiments are designed to unpack and illuminate difficult ethical concepts, helping us apply ethics to everyday situations.

One of the challenges with AI is that by the time things get bad, it can often be too late. We should ask serious, probing questions sooner rather than later

Ethical dilemmas: AI raises the stakes

Science fiction is full of similar dilemmas. In 2001: A Space Odyssey, HAL 9000 – the onboard AI computer – holds the crew captive and ultimately tries to kill them for the sake of the mission. To survive, they must “kill” HAL. In Blade Runner, the test for replicants is whether or not they can react with the appropriate levels of empathy; however, as these bioengineered beings increase in sophistication, they become indistinguishable and the viewer is forced to question whether the replicants are moral agents in their own right (and whether humans’ treatment of them is ethical).

These stories are compelling because the evolution of AI forces us to look at previously unexamined parts of human life. A feature of AI is that it doesn’t operate like a human. It doesn’t get tired or distracted; it doesn’t suffer from subjectivity; and it doesn’t approach things with the same thought processes or mindsets. It can solve problems in ways that humans can’t fully comprehend.

This is where ethics comes into it. As nonhuman actors that think fundamentally unlike us are being asked to make decisions and take actions that affect us, there are very valid concerns around the possible negative or unintended impacts of AI. The cautionary tales of our science fiction could become scientific fact.

One of the challenges with such game-changing technologies is that by the time things get bad, it can often be too late. That is why we should be asking serious, probing questions sooner rather than later. Was HAL 9000 immoral in choosing to kill the crew? Or was it acting ethically, trying to follow its interpretation of its original coding? And by proxy, did the programmer of HAL 9000 do something unethical? And, if so, should they be held responsible? Indeed, did the crew break a moral code by killing a sentient computer just because they had differently aligned objectives?

AI and autonomous driving mean that the Trolley Problem is now a very live debate. A self-driving car may need to choose between staying on course and hitting one person, or swerving and potentially hitting others. Who should it choose? What data should it infer from? Should it be able to quickly scan the faces of people, run them against a police database and see their records? Or access their social media profiles and use that information to see if they have dependants? Should it favour drivers or pedestrians? How would you feel if you knew your self-driving car was programmed to kill you to save others? And how about if you could pay extra for a “defensive driving module” that prioritised your safety as the driver?

Tiny algorithmic nudges add up to big ethical challenges

In marketing we aren’t talking about stark life-or-death scenarios, but ethics still matter. AI influences a whole host of marketing activity:

  • The ads our programmatic systems choose to show
  • The creative tweaks our optimisation engine makes
  • Image and video generation and short-form copywriting
  • Voice recognition and chat interfaces
  • “Next Best Action” recommendations

These little nudges and moments quickly add up to shape and influence the world around us in multiple ways. For example:

  • Netflix algorithms can affect our mood based on what is recommended to watch next
  • Facebook algorithms can affect whether we become more extreme or moderate in our views
  • Amazon Fresh algorithms can affect our eating habits
  • Google Search results can affect how we see the world by what content we’re exposed to

Components of ethical decision-making

There are three useful components for looking at ethics in marketing: the “agency/author” of the decision, the “intent” that goes into it, and the “outcomes” it creates.

Each component raises slightly different questions.

  • With the agency/author, we should ask ourselves: who is creating? Who is responsible? Who is at fault?
  • With the intent, we should look at the data and processes that the decision is based upon
  • With outcomes, we should look at whether the use of AI is working as intended and at the net result of the action

In short: who did it? What did they want to do? What actually happened?

Four common pitfalls to be wary of

Looking at intent and outcome gives us a framework for different ethical risks and how we should guard against them. If we map these into four quadrants, we arrive at some of the common concerns and challenges we could face regarding AI ethics.

1. Unfair advantage: can appropriate use of AI in marketing still skew the game in an unethical way?

While it might feel counter-intuitive, even ostensibly successful uses of AI can raise challenges, such as enabling unfair competitive advantages and monopolistic behaviours. The data advantage of AI lock-in can create unassailable market dominance and needs to be guarded against.

But on a broader level, it can also escalate and skew the balance in marketing. Marketing rests on a balanced, implicit contract between media owners’ ability to monetise attention, brands’ ability to buy and use it, and consumers’ right to have their attention respected and rewarded. If AI provides a major advantage to one of those forces, it can distort the balance.

Perhaps we would end up with each brand optimising its campaigns against the others and each consumer running their own ad-blocker AI in defence. It would be unsustainable and undesirable for everyone to need their own AI system simply to defend themselves against everyone else’s.

2. Unconscious bias: how do we ensure that seemingly working systems aren't based on unethical data, bias or premises?

There are several examples of this in action, where unforeseen problems in the dataset skew the algorithm. For example, researchers developed a tool to catch early-onset Alzheimer’s with 90% accuracy. The catch was that it only worked for the specific French-Canadian accent of the area in which it was developed. There are many similar examples in the press, from racial bias in facial recognition and parole-prediction software to gender bias in voice recognition and hiring algorithms.

This unintended bias is only spotted through careful observation and investigation. The risk, of course, is that many other instances go unnoticed.
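
To make that observation concrete, here is a minimal sketch (in Python, with entirely made-up data and group labels) of the kind of subgroup audit that surfaces this sort of bias: rather than trusting the headline accuracy, you break the results down by group.

```python
# Minimal sketch of a subgroup audit: a respectable headline accuracy
# can hide a model that fails badly for one group. Data and column
# names are hypothetical, for illustration only.
import pandas as pd

# Each row holds a prediction alongside the ground truth and a grouping
# attribute (e.g. accent, region, age band).
df = pd.DataFrame({
    "group":     ["A"] * 8 + ["B"] * 2,
    "truth":     [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "predicted": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
})

df["correct"] = df["truth"] == df["predicted"]

overall = df["correct"].mean()
by_group = df.groupby("group")["correct"].mean()

print(f"Overall accuracy: {overall:.0%}")  # 80% looks acceptable in aggregate
print(by_group)                            # but the model fails every group-B case
```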

Even when an algorithm is explicitly designed not to take any form of discriminatory data into consideration, it can use other data points as a proxy for the same discrimination (for example, you tell it not to look at gender, but it then uses height as a proxy for gender). This matters because insidious and potentially unethical targeting and optimisation decisions can happen without us even realising, from systems that accidentally target vulnerable individuals (such as with gambling) to marketing that is simply sub-optimal because of a problem in the underlying data.
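
A minimal sketch of the proxy problem, using synthetic data and illustrative feature names: the model is never shown gender, yet because height correlates with it, the model’s decisions still split along gender lines.

```python
# Synthetic illustration of proxy discrimination: "gender" is never
# given to the model, but "height" correlates with it, so predictions
# end up correlated with gender anyway. Purely illustrative numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

gender = rng.integers(0, 2, n)                  # 0/1, deliberately excluded from training
height = rng.normal(165 + 12 * gender, 7, n)    # correlated proxy feature
noise = rng.normal(0, 1, n)

# Historical outcomes that were themselves biased by gender
outcome = (0.8 * gender + 0.2 * noise > 0.5).astype(int)

X = height.reshape(-1, 1)                       # model only ever sees height
model = LogisticRegression().fit(X, outcome)
pred = model.predict(X)

print("Positive rate, gender 0:", pred[gender == 0].mean())
print("Positive rate, gender 1:", pred[gender == 1].mean())
# The large gap shows the "gender-blind" model still discriminates via height.
```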

3. Unintended consequences: how do we ensure well-intentioned systems don't lead to unintended and undesirable consequences?

Even if the premises and inputs into the system are all working, it can still end up with unintuitive, unexpected and ultimately unintended consequences. This is because the algorithm does not have any innate common sense. It doesn’t understand the broader context or assumptions that we might take for granted.

A common thought experiment for this in AI circles is the Spoon Factory. Imagine a spoon factory whose production you turn over to an AI tasked with optimising output. Given this blunt objective, it could decide to flout health and safety regulations, erode workers’ rights and sacrifice profitability to maximise productivity. Once your AI Spoon Assistant has fired all your workers and run your company into the ground to squeeze out a tiny bit more productivity, it’s too late to point out that this wasn’t exactly what you meant.
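
The same failure can be sketched in a few lines of code. Everything here is invented for illustration: a naive optimiser given only “spoons per hour” as its objective will push safety spend and wages to zero, because nothing in the objective says they matter.

```python
# Toy illustration of a mis-specified objective: the optimiser is only
# told to maximise spoons per hour, so it drives safety spend and wages
# to zero and machine speed to its maximum. All numbers are invented.
from itertools import product

def spoons_per_hour(machine_speed, safety_spend, wages):
    # As the naive objective sees it, money spent on safety or pay is
    # simply a drag on output.
    return machine_speed * 100 - safety_spend * 0.5 - wages * 0.2

speeds = [0.5, 1.0, 1.5, 2.0]         # relative machine speed
spend_levels = [0, 25, 50, 75, 100]   # safety spend / wages, arbitrary units

best = max(
    product(speeds, spend_levels, spend_levels),
    key=lambda args: spoons_per_hour(*args),
)

print("Chosen plan (speed, safety_spend, wages):", best)
# -> (2.0, 0, 0): maximum speed, zero safety, zero pay.
#    Exactly what was asked for, not what was meant.
```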

A similar dynamic can be found in marketing, in relation to fake news and clickbait. By designing systems to optimise for time spent on the platform or engagement rates, we have accidentally reshaped the whole machine to prioritise outrage, prey on insecurity and use deceitful tactics to hit those targets. On one level the system is working as intended; on another, most people would agree that the current opaque digital marketing landscape is not working optimally.

4. Unethical usage: where do we draw the line and how do we defend against downright unethical or reckless use of AI?

The most obvious misuse of AI is where it is used by a malicious individual for a malicious end – be that for profit, mischief or political ends. This is already happening in the realm of bot networks, ad fraud, fake news and spam marketing. There’s an ongoing battle between good and bad actors (Google’s spam filters, for example, have reduced spam email to a minor annoyance rather than a daily onslaught). But beyond criminal activity, there is plenty of scope to optimise unethically: preying on insecurities, spreading disinformation and targeting vulnerable audiences. This is tricky to prevent, beyond simply imploring people not to be evil. But regulation, transparency and industry oversight are critical to help minimise the impact and harm.

Taking steps to avoid the pitfalls

Even unpacking the problems we face in AI and marketing ethics can be complex. A spectrum of potential pitfalls means that this is not a simple task to resolve. However, there are some key principles that can minimise the risks.

Humans and machines working side-by-side

Having humans and machines working together can help to keep things on track. Unless it’s a clearly and safely defined system, having a human on hand to provide common-sense guidance is invaluable. This should be an “oversight and augment” role: if the machine can do 90% of the task, the human can focus on checking the output and making sure the final 10% works. Humans should act as the supermanager of the system, not simply hand control over to the machine.
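
As a rough sketch of that “oversight and augment” pattern (function names, thresholds and banned phrases are all hypothetical), anything the machine is unsure about, or that trips a basic policy check, is routed to a human rather than published automatically:

```python
# Minimal human-in-the-loop sketch: model output is only used
# automatically when it is confident and passes basic policy checks;
# everything else is queued for a human. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model's own confidence, 0.0 - 1.0

BANNED_PHRASES = {"guaranteed win", "risk-free"}

def needs_human_review(draft: Draft, threshold: float = 0.9) -> bool:
    low_confidence = draft.confidence < threshold
    policy_flag = any(phrase in draft.text.lower() for phrase in BANNED_PHRASES)
    return low_confidence or policy_flag

drafts = [
    Draft("Spring sale: 20% off selected lines", 0.97),
    Draft("A guaranteed win for your portfolio", 0.95),
    Draft("New flavours, now in store", 0.62),
]

for d in drafts:
    route = "human review" if needs_human_review(d) else "auto-publish"
    print(f"{route:13} <- {d.text}")
```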

Ensure a diversity of data, teams and thought

The greater the diversity working on the problem, the better. Building a system around a single metric or dataset (such as click-through rate) or building an industry around a monoculture (a problem Silicon Valley is often accused of) can each lead to bias and unintended consequences. A diversity of datasets can help identify or dilute bias, while a diversity of individuals strengthens the human safeguard, both in the conventional sense and in diversity of thought and opinion, ensuring a range of voices in the room who aren’t all drinking the same Kool-Aid.

Build in safeguards, explainability and a “kill-switch”

AI developers and marketeers should be conscious of building in safeguards. This requires working through scenarios and imagining the worst. A “pre-mortem” exercise, in which you imagine all the ways things could go catastrophically wrong, helps outline the pitfalls to guard against when designing these systems and identifies what to look out for.

Engage design ethicists and algorithm design experts

Another key component of technical algorithm design is explainability. This is a requirement of GDPR – and GDPR does a good job of legislating against many of the unconscious and unintended downsides of algorithms and AI in marketing – but it is also good practice. Being able to look inside the black box and check its decisions is critical. Furthermore, providers should build safeguards against people misusing marketing tech, by monitoring usage and building in a kill-switch that disables features if they are being used in problematic ways.
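
What such a kill-switch might look like in its simplest form, with invented thresholds and signal names: a wrapper counts misuse signals and disables the feature once they cross a limit, pending human review.

```python
# Illustrative kill-switch sketch: a feature wrapper counts misuse
# signals (e.g. blocked audience categories, rejected creatives) and
# disables itself once a threshold is crossed. Thresholds and signal
# names are invented for illustration.
class GuardedFeature:
    def __init__(self, max_violations: int = 3):
        self.max_violations = max_violations
        self.violations = 0
        self.enabled = True

    def record_violation(self, reason: str) -> None:
        self.violations += 1
        print(f"violation recorded: {reason} ({self.violations}/{self.max_violations})")
        if self.violations >= self.max_violations:
            self.enabled = False  # the kill-switch
            print("feature disabled pending review")

    def run(self, request: str) -> str:
        if not self.enabled:
            return "feature disabled"
        return f"served: {request}"

feature = GuardedFeature()
print(feature.run("lookalike audience"))
feature.record_violation("targeted a restricted category")
feature.record_violation("creative rejected by policy check")
feature.record_violation("targeted a restricted category")
print(feature.run("lookalike audience"))  # now refused
```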

It's time to take ethics seriously

All of the above can only be achieved consistently if we start taking algorithm design and design ethics seriously.

Each company should have its own clear point of view on the appropriate use of data, machine learning, targeting and dynamic creative in its working practices. This is sensible business practice that minimises potential fall-out, but it is also the right thing to do. We likely have a skills gap: too few people both understand the systems we are building and have the frameworks to see how they can be applied ethically. Taken together, these steps can minimise and prevent some of the worst ethical misuses.

At the end of the day, there will inevitably be continual challenges and concerns, but that shouldn’t be taken as a reason not to develop a new technology.

AI ethics is not a simple topic. While the stakes in marketing are lower than in other sectors, the applications are also more complex due to the nuances of creativity, persuasion and AI.

In the words of HAL 9000, “This mission is too important for me to allow you to jeopardise it.”

We can’t afford to just wait and see. But the good news is that by building the right safeguards, diversity and frameworks into everything that we do, we can shape the landscape for the better.

 

Read more from Atticus Journal Volume 25
This is an excerpt from Do androids dream of electric consumers? Ethical considerations at the intersection of AI, creativity and marketing.

Category

The Atticus Journal Technology & innovation

Related Topics

Artificial intelligence Data privacy Ethical advertising
