Bot Based Ad Fraud
By Andy Roberts, Mindshare
Recently, the issue of ad fraud within the digital ecosystem has been highlighted as a major concern. Our viewpoint is simple: we will do everything humanly possible to ensure that our clients' campaigns are visible to, and are viewed by, real consumers. While this remains our priority, recent articles in the FT highlight the challenges we face in achieving this ambition.
Details and Implications
The FT article references a study, in effect a controlled cyber-attack, undertaken by a group of European academics on a series of UGC-based online video services, including Vimeo, Dailymotion, MyVideo.de and, most notably, YouTube.
In a nutshell, and avoiding the technical detail, the academics created a site and then fired non-human, bot-based traffic at it in order to test whether the video services counted these known fraudulent impressions as real views. The attack on YouTube made the headlines for several reasons. Because third-party tracking is not currently permitted on YouTube, the academics were effectively testing the strength of Google's proprietary monitoring technology. While this compared well with the technology used on other sites, they also revealed that Google had charged for impressions that its own technology had identified as fraudulent. In essence, Google marked its own homework incorrectly.
We have challenged Google to explain the error in its system. In response, Google accepts that there is a flaw, which it will strive to eradicate. However, to put the issue into context: only a minute number of impressions was involved in the test, and the 'overcharge' amounted to just 0.007 cents. Furthermore, Google's system is designed to recognise fraud at much higher volumes, and the test revealed that its monitoring system is superior to most others. Google is not complacent and takes this very seriously, working closely with GroupM to provide even more rigorous protection in the future.
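To illustrate, in purely schematic terms, why a low-volume test can slip past a system tuned to catch fraud at much higher volumes, consider a monitor that only flags a traffic source once its impression count crosses a threshold. Everything below, including the threshold value and the source names, is an invented assumption for illustration, not a description of Google's actual system.

```python
from collections import Counter

# Schematic volume-based fraud flagging: a traffic source is flagged
# only once its impression count exceeds a threshold. A small bot test
# stays under the bar and is never flagged, even though every one of
# its impressions is non-human.
THRESHOLD = 1000  # hypothetical value, for illustration only

def flag_sources(impressions):
    """impressions: iterable of source identifiers; returns the set flagged."""
    counts = Counter(impressions)
    return {src for src, n in counts.items() if n > THRESHOLD}

small_attack = ["bot-1"] * 150    # low-volume bot traffic, like a research test
large_attack = ["bot-2"] * 5000   # high-volume bot traffic

print(flag_sources(small_attack + large_attack))  # {'bot-2'} — the small attack slips through
```

The design trade-off is real even if the mechanism here is simplified: thresholds that are low enough to catch tiny probes also generate false positives on legitimate bursty traffic.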
While keeping this incident in context, it serves to highlight the challenges we all face if even the mighty Google can be flummoxed by an attack. We have to follow a strict strategy to ensure we are doing everything humanly possible to protect our clients' campaigns.
Summary / POV
We have distilled this into a six-point action plan:
- Ensure the best, most up-to-date third-party fraud-detection technology is in place.
- Avoid GroupM-blacklisted sites that are known to run damaging content or harbour fraudulent impressions.
- Build client-specific whitelists of known, safe sites.
- Use open ad exchanges with caution, ensuring they have protection built in.
- Use GroupM Trusted Market Places: programmatic direct deals with known, premium publishers.
- Use direct deals with known, premium publishers.
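The list-based controls in the plan above can be sketched in a few lines of code. This is a minimal illustration of the blacklist/whitelist logic only, not GroupM's actual tooling; the domain names and list contents are hypothetical.

```python
# Minimal sketch of blacklist/whitelist filtering for ad placements.
# The lists and domains below are hypothetical examples, not real GroupM data.

BLACKLIST = {"fraud-site.example", "badcontent.example"}       # known bad sites
WHITELIST = {"premium-news.example", "trusted-video.example"}  # client-approved sites

def placement_allowed(domain: str, require_whitelist: bool = False) -> bool:
    """Return True if an ad may be served on `domain`.

    Blacklisted domains are always rejected. If `require_whitelist`
    is set (the stricter, client-specific policy), the domain must
    also appear on the whitelist.
    """
    if domain in BLACKLIST:
        return False
    if require_whitelist:
        return domain in WHITELIST
    return True

print(placement_allowed("fraud-site.example"))                            # False
print(placement_allowed("premium-news.example", require_whitelist=True))  # True
print(placement_allowed("unknown-blog.example", require_whitelist=True))  # False
```

The `require_whitelist` flag captures the difference between the defensive default (block known bad sites) and the stricter client-specific posture (only serve on known, safe sites).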