Paint a target on yourself: Facebook & Cambridge Analytica revisited

Mark Zuckerberg on stage at Facebook’s F8 conference. Image by Maurizio Pesce — https://www.flickr.com/photos/pestoverde/15051962555

I spent much of Sunday morning scoring entries to the Prolific North marketing awards. The best entries in the categories I was judging had a few things in common. One of them was how well they used data to target their audience.

Then this morning I got a call asking me to speak to TalkRadio about the latest updates on the Facebook/Cambridge Analytica story. And again it was clear that, really, I would be talking about marketing: the type of marketing that most consumer-facing organisations of any scale have been doing for years.

Data intelligence is bad

The brands and agencies entering the awards used a whole range of methods to better understand the beliefs, needs and desires of their audience: surveys, testing, focus groups, analysis of existing data sets. They then used this intelligence to shape the stories they told to maximise their effect.

These stories were told across a variety of media: television, Facebook posts, digital and print ads, PR campaigns. This is where they differ from Cambridge Analytica (CA): in all of the reports I’ve read so far, CA’s data was said to have been used only to target advertising. That seems unlikely.

It appears that CA, and its alleged affiliate, AggregateIQ, fed back to clients about the personality types and hot issues affecting its audience’s decisions. We know that there has been a mass influx of fake news into Facebook and the Web in general: biased and often patently untrue stories designed to discredit people and ideas and reinforce existing — often wrong — beliefs. Given the apparent level of moral reasoning taking place inside CA, and inside the campaigns that it supported, it seems unlikely that its arsenal would have been limited to advertising. Though, as I say, no report I have read offers concrete examples of any materials produced off the back of the data and profiling that CA or AIQ developed.

…or is it?

Your choice in marketing is to shout at people about how great you think your product, service, or candidate is, or to listen to what is important to them and respond to those needs: to understand their worldview and tailor your messages accordingly. Since few of us like being shouted at, and most of us have developed filters to ignore such base marketing, it’s unsurprising that the latter approach is more effective.

For all the horror that this might engender in people, it’s still a relatively unsophisticated process, even in the most advanced campaigns. It doesn’t appear that either CA’s or AIQ’s work falls into that category. Nonetheless, it is effective enough to deliver an incredible return on investment, certainly for the brands whose award entries I’ve been examining. One pound spent on marketing might turn into two, five or ten pounds in revenue.

When I say it’s not sophisticated, what I mean is that the targeting is still far from precise. I’ve lost count of how many people have asked me about (or more often complained to me about) irrelevant advertisements pursuing them around the web. Or completely off-base recommendations for products based on other things they have bought.

But when it works, this targeting is incredibly effective. Why do ads pursue you around the web? Because retargeting (the industry term for this practice) delivers results: somewhere between 40% and 100% more effective than ads seen cold, depending on which study you look at.

Likewise recommendations: brands recommend things they think you might like because it works, boosting the size of your basket at checkout by maybe 20%.

Imagine how it will be when they are actually really good at this. Yes, you might feel like you’re being manipulated. But you will also feel like the brand is working to your agenda. Who doesn’t want a personalised experience when shopping? A site that does the searching for you and finds what you want with minimal clicks?

The answer is: very few people. All the evidence suggests we love brands that personalise our experience and minimise the friction in our shopping process.

As for products, so for politics?

The question is, do we feel the same about politics? The furore around CA, AIQ and Facebook doesn’t seem to be about the data breach (if you can even call it that): the data CA used was collected entirely legally, and the way it was then sold on was, until recently, entirely commonplace, even if it breached both data protection laws and Facebook’s terms and conditions. We see another story about a large-scale data breach every week, and each seems to slide off the public’s back, contributing only to a slightly heightened background level of technological fear.

No, the furore around this story is around the prospect that our decisions on something more vital than our next box of cereal or holiday destination may have been manipulated. Some don’t want to believe that they were manipulated. Some really want to believe that others were, as a way to explain decisions that they find incomprehensible.

Personally, I’m sceptical about the effect either CA or AIQ had on the Trump or Brexit campaigns. Their methodology is suspect and most analyses suggest they weren’t approaching the sophistication of the best brands.

What to do

How do we stop this happening again in the future, should we want to? There are two options.

The first is that we try to legislate against this type of behaviour around elections. But that, for me, is like trying to reseal Pandora’s box. We know there are bad actors with a desire to influence voting. Are they going to abide by those laws? Will the laws we establish be able to adapt to new techniques and technologies? Unlikely.

Instead, I think we have to make the process much more transparent. Everyone needs to know when and how their data is being used, and how they are being targeted.

This can’t be achieved by forcing the likes of Facebook to do a better job of releasing data they hold. Let’s be honest, who has the time to plough through all that? I haven’t even bothered downloading mine. There are no surprises in there for me.

If we want to avoid situations like this in the future, we must change the way our data is held and how we are rewarded for sharing our personal information. If we want to keep track of where it goes and how it is used, then we should be in control of it, and we should place a value on it being shared.

We clearly can’t do this on a case-by-case basis: just think how many times your data (in a very low-level, anonymised way) is accessed each day by brands targeting you with advertising. We need a policy system wrapped around our data that allows it to be accessed by others on demand, according to the policies we select. A level of machine learning would allow it to adapt based on our responses over time.
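To make the idea concrete, here is a minimal sketch of what such a policy wrapper around personal data might look like. Everything in it (the `PolicyEngine` class, the purposes, the fee) is hypothetical and illustrative, not a real system or API: the point is simply that access decisions can be made automatically against user-chosen rules, with every decision logged for transparency.

```python
# Hypothetical sketch: a user sets policies once, and each request to
# access their data is checked against those policies on demand.
from dataclasses import dataclass


@dataclass
class AccessRequest:
    requester: str   # who wants the data, e.g. "acme-ads"
    purpose: str     # why, e.g. "advertising" or "political-profiling"
    fields: tuple    # which data fields, e.g. ("age_band", "postcode_area")


class PolicyEngine:
    """Grants or denies data access according to user-chosen rules,
    charging a fee for granted access and logging every decision."""

    def __init__(self, allowed_purposes, shareable_fields, price_per_access=0.0):
        self.allowed_purposes = set(allowed_purposes)
        self.shareable_fields = set(shareable_fields)
        self.price_per_access = price_per_access
        self.audit_log = []  # transparency: every decision is recorded

    def decide(self, request):
        granted = (request.purpose in self.allowed_purposes
                   and set(request.fields) <= self.shareable_fields)
        self.audit_log.append((request.requester, request.purpose, granted))
        return {"granted": granted,
                "fee": self.price_per_access if granted else 0.0}


# Usage: allow advertisers access to coarse, anonymised fields only.
policy = PolicyEngine(allowed_purposes={"advertising"},
                      shareable_fields={"age_band", "postcode_area"},
                      price_per_access=0.01)

ad_request = policy.decide(
    AccessRequest("acme-ads", "advertising", ("age_band",)))
profiling_request = policy.decide(
    AccessRequest("poll-co", "political-profiling", ("age_band",)))
```

The adaptive layer described above could then sit on top of this: observing which grants the user later objects to and tightening the rules accordingly.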

This won’t prevent us being targeted by campaigns looking to change our behaviour. But at least we will be in control of what we receive, and rewarded for sharing our data with those with commercial interests in our attention. At least it will be transparent: we will know who was targeting us, with what, and when.

Like this? Get more when you subscribe at subscribe.bookofthefuture.co.uk

This post forms part of my Future of Communications series. For more posts on this subject, visit the Future of Communications page.

Tom Cheesewright

https://tomcheesewright.com/futurist-speaker

Futurist speaker Tom Cheesewright is one of the UK's leading commentators on technology and tomorrow. Tom has worked with a huge range of organisations across a variety of markets, to help them to see a clear vision of tomorrow, share that vision and respond with agility. Tom draws on his experience to create original, compelling talks that are keyed to the experience of the audience but which surprise and shock with unexpected facts and examples.
