“But for real, the future of privacy, what will be left of it in 50 years…”
To answer it, we must start with some questions. What do we mean by privacy? What are we keeping private? And from whom?
All or nothing
There are two opposing schools of thought here:
- “None of your business” – One school of thought says that everything should be private unless you explicitly choose to share it. That no-one should have the right to compel you to share it without, at the very least, compelling evidence of wrongdoing.
- “What have you got to hide?” – The other school of thought suggests everything should be out in the open, and that only terrorists and deviants keep things concealed.
Most of us inhabit the grey area in the middle. We’re happy to share some things; others we share grudgingly. Sometimes we make informed choices about trading some privacy for services, such as social networks. Most of the time, though, we make such choices based on very poor information and understanding.
The short answer to “what is the future of privacy?” is more of the same: we will continue to muddle through, making mostly OK choices based on limited information about what we should share. Our privacy will be alternately infringed and reinforced by governments, regulators and private corporations alike.
The bigger answer requires addressing some of the specific technologies, business models, and pressure points that will shape the future of privacy. It may also expose the cracks between different jurisdictions.
Future of privacy: the long answer
Let’s divide the challenge between public and private, asking two questions:
- Will corporations continue to gather data about us in a bid to target us more accurately with advertising and change our buying (or voting) behaviour?
- Will governments make increasing use of technologies like facial recognition, infringing our privacy under the cover of making us safer?
Corporate data gathering
A few years ago, I had a conversation with Rama Ramakrishnan, then chief data scientist of what was Demandware, now Salesforce Commerce Cloud. Rama is now Professor of the Practice, Data Science and Applied Machine Learning at MIT Sloan. What he told me rather exploded my understanding of why social networks and search engines gather big data about us.
“When it comes to understanding shoppers, the key lesson is that you are what you buy. That I am of Indian origin and live in a particular suburb of Boston is not particularly valuable. The fact that I like a certain brand of boots, that is interesting.”
What he means, and what he went on to explain in the paper that I wrote for Demandware on data-driven retail (sadly no longer available), is that much of the information we worry about sharing, and that the social networks appear to have prized collecting, is actually of very little value when it comes to selling advertising or targeting us. Put another way, we give so many explicit signals about what we want that there is little return on investment in spending billions trying to infer what we might want from other data about us.
The value of data
I see lots of companies starting to get to grips with this fact. They are starting to understand that, even before any regulation, or consideration of the security risks it presents, the cost of gathering and processing lots of personal data about us is often not outweighed by its value. Better just to have the 10% of – often at least semi-anonymous – data that gives 90% of the value.
Now, there is still lots of deeply personal and perhaps compromising information in our clickstreams. We are right to be cautious about what happens to that information. Here, regulators have a role to play, looking at what is collected, how it is stored, and how it is used. But the diminution of the business case for large-scale data hoarding gives me hope that this sector can be regulated. If the value of our most deeply personal data proved to be much higher, I would worry that business would lobby and wrangle to minimise oversight.
This is the same reason why I’m not *that* concerned about the data gathered by voice assistants like Alexa. Yes, they are picking up more than our explicit commands. But storing and processing that morass of data probably has very little value. I’ve still chosen to keep voice assistants out of shared areas in the home until my kids are old enough to make a conscious decision about sharing their own voice data. And they can always be hacked, but that’s a different story…
Of course, our data does still have value. “So why don’t we see that value?”, many ask. Many people have proposed putting personal data into the hands of individuals rather than corporate behemoths. There are even proposals to ascribe it some sort of property rights, making it a tradeable commodity. It’s an idea that I have discussed on this blog before. I still believe that in the future we will store most of our personal data inside a firewalled cloud account and release it only on a case-by-case basis, when we see value in return. Sometimes that value might be financial – such as when it is a signal of a willingness to buy. In this case there are already mechanisms to monetise that signal, through cashback services like Quidco. But sometimes that sharing might be more community-minded – such as the sharing of health data for government information or large-scale studies.
Is it worth it?
It’s unlikely that we will make these sharing decisions ourselves, for reasons both of practicality and of value. Practicality, because the number of requests per day would likely be overwhelming: instead, we will give our personal AI assistant a set of broad rules, and it will take decisions based on those rules, feeding us the exceptions and learning from each one. Value, because unless it is a very explicit buying intention, our data just isn’t worth very much!
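To make the idea concrete, the rules-plus-exceptions approach described above can be sketched in a few lines of code. This is purely illustrative – the categories, rule conditions, and reward figures are all hypothetical assumptions, not any real assistant’s API:

```python
from dataclasses import dataclass

@dataclass
class ShareRequest:
    requester: str     # who wants the data (hypothetical identifier)
    category: str      # e.g. "purchase-intent", "health", "location"
    reward_pence: int  # compensation offered, in pence

# Broad rules set once by the owner; the assistant applies them to
# every incoming request and surfaces only the exceptions.
RULES = {
    "purchase-intent": lambda r: r.reward_pence >= 10,  # monetise explicit buying signals
    "health": lambda r: r.requester == "nhs-study",     # share only for public research
    "location": lambda r: False,                        # never share
}

def decide(request: ShareRequest) -> str:
    rule = RULES.get(request.category)
    if rule is None:
        # No rule yet: escalate to the owner and learn from their answer
        return "escalate"
    return "share" if rule(request) else "decline"

# Example: an advertiser offering 25p for an explicit buying signal
print(decide(ShareRequest("ad-network", "purchase-intent", 25)))  # share
```

The point of the sketch is the shape of the system, not the specific rules: the human sets coarse policy once, the assistant handles the volume, and only unclassified requests come back for a decision.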
As Chloe Grutchfield of adtech specialist RedBud Partners pointed out on a panel I was part of in late 2019, most people would stand to make a maximum of 50p per day from their data. That would be based on them signing up for multiple data-driven reward schemes and sharing their data without much thought for their privacy. Now, that might be enough to subsidise the cost of the personal AI making the data-sharing choices for you, but it won’t do much more than that. You can’t make a living from your personal data.
So, will corporations continue to gather data about us in a bid to target us more accurately with advertising and change our buying (or voting) behaviour? Yes. But I think they will be increasingly selective about what they gather, and more conscious about how they store and use it. In the long run, we will likely have more granular control over what is collected and shared. And we will reap some of the (small) reward from it. Here, the future of privacy is surprisingly positive.
Government grabbing data
I am more concerned about the overreach of government when it comes to the future of privacy, particularly in places like the UK and US. Here, there is limited experience of authoritarian states and of the speed at which the tide can turn against different groups. The “what have you got to hide?” lobby is very strong, particularly in the UK, where there is less of a counterculture of people implacably opposed to central government than there is in the US. That doesn’t mean I respect those groups – they rather terrify me – but they are nonetheless something of a counterweight to governmental overreach.
The ruling in the UK on the use of Live Facial Recognition by South Wales Police sets a precedent, albeit one based on a limited version of the technology. These are not the networked cameras scanning the streets and monitoring everyone’s movements that you might see in dystopian sci-fi. Rather, these are mobile units running against a bespoke set of target individuals at each location where they are deployed. That said, the fining of a man who covered his face to avoid the system on trial in Romford sets a worrying precedent of its own.
It only takes an authoritarian home secretary (ahem) and a terror incident to see the scope of such technology expanded. And with us outside the European Union and with challenges to judicial oversight, I am concerned at how fast this might happen.
That concern extends beyond cameras. Our legislature in the UK has been using technology as a scapegoat for all sorts of societal ills for a few years now. It is very happy to discuss draconian measures to lock down access and monitor people’s use. The “what have you got to hide?” lobby might do well to read a history book or two and see just how many groups fall into the government’s sights when things start to slide.
Will governments make increasing use of technologies like facial recognition, infringing our privacy under the cover of making us safer? Yes, in the UK, US and probably places like Australia, they almost certainly will. In the EU, with different attitudes and stronger regulators, the risk feels much more distant.
Future of privacy: What will be left in 50 years?
The answer to the opening question, then, probably comes down to where you live. We won’t have global harmonisation of privacy laws within 50 years – it will likely take far longer. In the meantime, if you live in Europe, I think you will see the boundaries of your privacy expand in both the public and private domains. Good news for you, Franck. Elsewhere, the future of privacy is perhaps not so rosy.