In amongst the majesty of the landscape and the soundtrack, there’s a really geeky plot point in Blade Runner 2049.
Avoiding too many spoilers, throughout the first half of the film I was wondering why the hero wasn’t given away by his home AI. He has a pretty intimate relationship with the AI’s female avatar, sharing with her secrets that the AI’s maker desperately wants.
“Surely this AI runs in the cloud?”, I thought. A cloud owned and operated by the villain, from which he could easily extract the information he wanted. Then the moment comes where it’s clear that this AI has been running entirely locally on the ‘console’ — the computer — in the hero’s flat.
Alexa plus plus plus
This was a surprise. The AI in question is a direct extrapolation from the current Amazon Alexa/Google Assistant experience. Maybe it wasn’t originally, but there are some humorous non-sequiturs in her speech that will be familiar to the owners of such systems.
Alexa and Assistant are great products, but they are run as much for the benefit of their makers as for their owners. They collate huge amounts of data about the activities in your home, which is used to shape their understanding of your future shopping preferences and target you with products or advertising.
In some ways this is no more invasive than the data they gather from your web browsing activity. But the nature of the interface means it is more accessible to more people in the home, with less control.
A conscious bargain
There’s nothing wrong with bringing Amazon or Google into your home. I certainly don’t consider these companies to be ‘evil’ in any way. But as I’ve argued before, homes are places for many people. Not all of them will — or can — consent to the digital footprint they are building up through interaction with such devices.
There’s also the point about independence of thought. Amazon — naturally and rightly — will prefer to sell you Amazon products, or those from preferred third parties from which it makes the most profit. I want my personal assistant to have the clearest loyalties and work exclusively on my behalf. If I have to pay more for that, so be it.
It was both surprising and pleasing to see a fictional future technology presented as working along these lines.
The motivation for this probably had nothing to do with the point of principle involved, and everything to do with the fact that it would have been a very short film otherwise. ‘Villain wants information, villain gets information from database’ is not a very compelling plot.
We know from history that film makers are also often more interested in story than accuracy — not necessarily a criticism, but nonetheless something that occasionally frustrates science geeks like me. Given the incredible possibilities presented by what we know about science and technology, it sometimes seems perverse to operate outside that broad scope.
However deliberate, for me this presentation of technology offered a tiny flicker of optimism against a dystopian backdrop. Let’s hope that in reality, we can do slightly better.
Whitbread continues to open more Costa Coffee shops, predicated on continuing growth in demand. That growth has slowed, but the company believes we are entering a ‘third wave’ of coffee consumption, where we are willing to spend more per cup. Yes, that means you, with your single-estate cold brew.
While consumption patterns are interesting, I’m much more interested in what the continuing expansion of the chain says about our need for third space, a living room beyond our home, or a work space beyond the office. To the naked eye the coffee market looks saturated. And yet more and more continue to appear. Why?
The reality is that we are living in increasingly densely-packed circumstances. This is nothing to do with immigration, which is both culturally positive and economically necessary. Rather, it’s about housing.
Multi-generational homes are now increasingly common. As we stay single later and house prices grow ever more unaffordable, we’re sharing houses with our peers later and later in life. Or renting the smaller spaces that we can afford in cities, where we have little room to relax or socialise. Sometimes, we need to escape.
Developers are building what someone described to me yesterday as ‘student accommodation for grown-ups’: giant blocks of small apartments with high-quality shared spaces to make up for the lack of space to entertain or relax inside the flat itself.
The new pub
The irony is that we had a huge network of shared spaces in this country. Places that were designed to be the ‘home away from home’ for those who couldn’t afford the space, or the heat, in their own home. Places where groups could meet and socialise. Places where at one point in time, a lot of business was done. They’re called pubs, and they’ve been closing at a rate of 27 per week.
Of course we also had a very strong coffee shop culture in the past. Perhaps this is just cyclical. But nonetheless I think this trend is interesting, particularly for the way it counters this idea of us disappearing into our digital devices.
If we really were becoming an antisocial nation of nerds, lost to our laptops, then these physical meeting spaces would have much less value. Yet sat in one of my favourites yesterday (Manchester’s Chapter One bookshop/coffee shop), less than a quarter of the tables were occupied by solo workers. Most people were there to meet, talk, work and socialise. This was pretty typical of the other shops I stuck my head into. Hardly a scientific survey but enough to validate my suspicions: these are social spaces and they will continue to grow.
Human beings are collaborative by nature, fuelled by connection. We need spaces to make those connections, for business or pleasure. The pub fell out of favour for all sorts of reasons: homes well-equipped for entertainment, changing attitudes to alcohol, and a richer array of alternatives. But what’s clear from the continuing — almost baffling — growth of the coffee shop market, is that our need for connection has not gone away.
In one of those weeks of weird coincidences, I’ve had a lot of conversations about robots in the last few days. And it has reminded me that our mental image of these autonomous machines is shaped very much by a small number of sources — mostly fictional.
The MegaBots battle this week won’t do much to change this, though these aren’t robots in any real sense, more like clumsy upright tanks.
Nor will the range of robots on sale this Christmas, of which I’ll be reviewing a few on 5live in the run-up. I’m very much looking forward to having a house full of little bots over the next few weeks. But they’re not the robots we should be thinking about right now.
Return to the robot tax
When I wrote about the realities of a robot tax recently, I don’t think I went far enough in explaining just how unlike the robots of our imagination most robots in the workplace will be.
Imagine a robot inbound call centre, taking calls for customer service, support, or sales. There will be no actual ‘centre’. The whole thing will exist virtually, in a data centre — a giant air-conditioned building full only of computers. In fact, more than likely it will exist across multiple data centres in different countries. Perhaps none of them will be the country being serviced by this call centre, or the country where the business operating it exists. There will be no phones, no phone lines, just a very fast data connection to the rest of the world.
Like human call centres this system will have peak hours and quiet hours. In peak hours it might need a few thousand agents. In quiet hours it might need a few tens of them. Virtual agents will therefore be ‘spun up’ on demand, to meet customers’ needs. These agents may only exist for the span of a single call before being wound down again to minimise the required computing power and costs.
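The ‘spin up on demand’ model described above can be sketched in a few lines. This is a minimal illustration of the scaling logic only; the figure for how many calls one agent can handle is an assumption, not a real benchmark.

```python
import math

def agents_needed(calls_per_minute, calls_per_agent_per_minute=2):
    """How many virtual agents must exist to service the current call volume."""
    return math.ceil(calls_per_minute / calls_per_agent_per_minute)

class VirtualAgentPool:
    """Spin agents up and wind them down to track demand; an 'agent' is just an ID here."""
    def __init__(self):
        self.active = set()
        self._next_id = 0

    def scale_to(self, calls_per_minute):
        target = agents_needed(calls_per_minute)
        while len(self.active) < target:   # spin up new virtual agents
            self.active.add(self._next_id)
            self._next_id += 1
        while len(self.active) > target:   # wind agents down to save compute
            self.active.pop()
        return len(self.active)

pool = VirtualAgentPool()
peak = pool.scale_to(4000)    # peak hours: thousands of agents
quiet = pool.scale_to(40)     # quiet hours: a few tens
```

Real systems would of course schedule containers or functions across data centres rather than IDs in a set, but the shape of the logic, capacity tracking demand minute by minute, is the same.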
Now apply that model to administrative tasks, or augmented reality shop assistants. Many, many of the robots coming into our workplaces might have a lifespan measured in minutes — or even seconds — existing thousands of miles outside the jurisdiction of whatever tax authority has domain in the location they are servicing, their output carried in a common data highway. They will have no corporeal form, nor even an enduring digital presence.
We naturally anthropomorphise. Hell, half the internet is the anthropomorphisation of cats. But we have to stop thinking about robots in these terms.
The physical robots that we do have — probably delivery drones and driverless cars in the near future — will have few human physical traits. And the vast majority of robots we do encounter will have no physical form at all. They will be fleeting presences, instantiated for a single task and destroyed again the minute it is complete.
You and I experience different realities. Or rather, we experience reality differently.
How could we not? Those ‘sounds’ you hear do not have any explicit meaning coded into them. Meaning is created by your ears and brain working together to interpret the vibrations in the air in a particular context. Is it a siren or a baby wailing?
Those colours you ‘see’ are just your brain’s interpretation of different wavelengths of light. What colour is that dress? Those objects all around you? They don’t exist absolutely as you see them. It’s just your brain extrapolating from limited information. They aren’t even really solid.
Our experiences are sufficiently shared for us to interact. But our interpretation is nonetheless unique.
The unconscious translator
This interpretation is largely unconscious. Some heavyweight mental exercises, or some pretty strong drugs, might change it. But for the most part, we have very little control over the interpretation that happens inside us.
What about the outside though? Two classes of product show us a future where increasingly we have conscious control over the reality we experience.
First, earphones that alter your aural experience in real-time. This might be changing the sound mix of the world around you, turning up vocals and down traffic. It might be changing the music you hear, turning off the muzak and replacing it with your favourite tunes. It could be translating other languages into your own. You could even change other people’s voices. Always wanted your partner to sound like George Clooney or Mariella Frostrup? In the future, they can.
Of course we can already shut the world out, with our own music turned up loud, or with noise-cancelling systems. But this goes a step beyond. This isn’t blocking experiences, this is consciously tweaking them — or transforming them completely — to suit our desires.
This technology won’t be limited to sound. Right now, augmented or mixed reality has largely amounted to gimmicks. Temporary apparitions on your phone screen. Blocky renditions of dinosaurs on your dining room table, or ill-fitting overlays of potential purchases on your person.
But the future that mixed reality presents is increasingly clear. Or perhaps, opaque. Because far from the occasional novelty object, mixed reality could mean redesigning the world we see in real time, to meet each of our individual needs and desires.
Imagine walking down the street and every billboard is tailored to you. Every shop front responds to your presence. But only you can see it because you are witnessing the world through a pair of lenses. Everyone around you is seeing their own version of the world.
This is a mild and relatively uncontroversial vision compared to what’s possible.
Tell your own story
Imagine moving your city. You live, for example, under frequently grey skies, in a Northern city. You can’t do anything about the rain making you wet, but you could change the weather you see. It doesn’t matter if it’s overcast, what you see is a sunny Mediterranean afternoon. Complete with sea view and mountains in the distance, if that’s what you want.
Imagine changing the inhabitants of your city. This is where it gets really challenging. Overwrite the appearance of people in the street with orcs and trolls, aliens and automata, or just erase them completely, leaving only an outline for you to avoid bumping into them.
With a camera on your face, lenses in front of your eyes, and buds in your ears, all of this becomes possible with the application of enough bandwidth and processing power.
My concern is what this does to the shared experience. We’ve always seen different realities but they have been sufficiently shared for us to build societies and international bonds. Over the last century, our realities have become increasingly shared. You can debate the relative merits of globalisation but I think it’s hard to argue against the value of a global conversation on common terms over its alternative.
The Brexit referendum and the last US election showed the first signs of a reversal in that increasingly shared reality. The personalised bubbles of our digital media did a great deal to insulate us from opposing views, and fuelled by fake news, presented different tribes with very different realities. Each was clearly sufficiently plausible, if often carrying extreme prejudice and a large amount of fiction.
Imagine how insulated we would be if literally everything we saw was filtered through such a lens.
Caution not criticism
We are some way from this technology being a reality for the masses. But I think it’s less than a decade. I’m not saying we shouldn’t pursue it — far from it. But we should have the discussion now about how we teach people, whether we put limitations in place, and who controls the realities that we all witness. Do you want to live in Amazon-land or Google-ville? Do you want to control your own reality? And if so, what’s to stop you becoming isolated?
Digital technologies have done a huge amount to bring the world together. They risk dividing and fragmenting it again. The technology can’t change that, but we can, through its considered application.
I went in to Hotwire PR yesterday to talk about influence, sharing some of my experience as a person on the telly/radio and as a writer/blogger/podcaster. I also talked about the changing nature of influence, as has been highlighted in some of the work I’ve done.
Though I studied engineering (Mechatronics), my first job was in PR. I spent five years working on behalf of a range of tech firms, large and small. Given my understanding of the tech, a lot of my time was spent acting as a translator for the engineers, turning their words into stories we could sell. But I still spent a good chunk of time trying to sell those ideas in to the intermediaries between our client and their customers.
Initially this meant primarily journalists and analysts. But the founder of the firm I worked for became increasingly interested in other influencers, eventually founding Influencer50 and writing the book, Influencer Marketing.
I got involved in many of the early influencer marketing programmes that were joint projects between the agency (Noiseworks) and Influencer50. Now we were looking at 25 categories of influencer, among them user advocates, resellers, systems integrators, bloggers, conference organisers and frequent speakers. And we were wondering, how can we reach all of these influencers and make them advocates for our client?
We found ways. But it was very different from the linear model we started with. Then we would pitch a story to a journalist, the journalist would (or wouldn’t) write the story, the prospect would read the story, and we would tell the client how many prospects had read it. Our measurement used to be on the last step: really a measure of reach, rather than influence. Now we were being measured on our ability to reach people we had already shown carried influence.
This was complex. But the world of influence is getting more complex still.
The rise of the peers
Three things have happened since 2005 when I left the agency and started out on my own. Firstly, the publishing power at everyone’s fingertips has increased dramatically, giving anyone the power to reach an enormous audience. Secondly, but not unrelated to this, the diversity of media has grown exponentially. Thirdly, trust in the media has fallen to an all-time low.
The result of this is a re-balancing of the influences that drive us, particularly when it comes to purchasing. The chart below shows the UK slice of some research we did with Salesforce Commerce Cloud for the futurereadyretail.com programme.
The exact question asked was: “Which 3 of the following have the strongest influence on which items you end up buying?” The score is a percentage of respondents giving that answer.
If you aggregate the columns for friends, family and peer reviews, what you can see is that peer connections are far and away the most powerful influence on buying decisions — way more than traditional media, television, or celebrities — some of the most commonly targeted forms of influence.
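The aggregation itself is simple arithmetic over the chart’s columns. The numbers below are invented purely to illustrate the calculation; they are not the survey’s actual figures.

```python
# Hypothetical response percentages, for illustration only. Respondents picked
# three options each, so the columns are not expected to sum to 100.
influence = {
    "friends": 30, "family": 26, "peer reviews": 27,
    "traditional media": 16, "television": 19, "celebrities": 7,
}

# Aggregate the peer-connection columns and the broadcast columns.
peer = sum(influence[k] for k in ("friends", "family", "peer reviews"))
broadcast = sum(influence[k] for k in ("traditional media", "television", "celebrities"))
print(peer > broadcast)  # peer connections dominate under these illustrative figures
```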
Influence is messy
The reality is that the process of influence never looked like that neat, linear picture I had in my head as an engineering-minded 21-year-old. Influence is messy and complex. We absorb a huge range of influences and assign them different weights depending on the time, context and decision at hand. But what is clear is that now, more than ever, it is the people around us — physically and digitally — who are the primary arbiters of influence.
There is no better interface for a light bulb than a light switch.
This is not an absolute rule. In some contexts, for some people, a sensor response, a voice command, two claps, or hell, even an app, might be better.
But right now, for the vast majority of people, in the vast majority of contexts, a light switch is unbeatable. It is simple and familiar and most of all, it works. It doesn’t fail when AWS goes down. It doesn’t take five seconds to respond.
If a connected device can’t take those characteristics as a base line, conform to them within a reasonable margin, and improve on them with new features, then you have to ask yourself: does this object have a place in my home?
An increasingly analogue, digital world
It is spectacularly easy to make digital objects these days. Physical devices with internet connections are now a primary school project, with costs measured in the low pounds. Entirely virtual objects can now be created with primary school skills and at a cost measured in pence.
Beyond the primary school, the state of the art is digital objects with analogue interfaces. How else to describe virtual, augmented or mixed reality? These are all interfaces to our digital systems designed to mimic physical interfaces. Physical interfaces that are intuitive to us thanks to millions of years of evolution.
Given this trend, to wrap the digital in the physical, why do we persist in wrapping connected physical devices with digital interfaces?
Analogue outside, digital inside
I’m rebuilding my home automation system at the moment, though since this seems to be a constant state of affairs, it might be more accurate to say it is undergoing continuous development. One of the design principles for this iteration is that every digital action must be clothed in an analogue interface that conforms as far as possible to the standards of the physical item it replaces.
This starts with the light switches.
If I get it right, they will look and function just as they did before. But with the added benefit that they can — if desired — be remotely controlled, and that the system will know their state.
But unless you know this, it will just be a plain old light switch. Because right now, there’s no better way to turn a light bulb on or off.
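That design principle can be expressed as code. This is a hypothetical sketch, not a real device driver: the physical toggle drives the light immediately and locally, while any network reporting is a best-effort extra that can fail without consequence.

```python
class ConnectedSwitch:
    """Analogue-first light switch: the physical toggle always works locally;
    remote control and state reporting are strictly optional extras."""

    def __init__(self, report=None):
        self.on = False
        self.report = report        # optional callback, e.g. a network publish

    def _apply(self, on):
        self.on = on                # drive the relay first: local action never waits
        if self.report:
            try:
                self.report("light", self.on)   # best-effort state report
            except Exception:
                pass                # a network failure must never break the switch

    def toggle(self):               # the physical switch on the wall
        self._apply(not self.on)

    def remote_set(self, on):       # the digital interface, when you want it
        self._apply(on)
```

The design choice is the point: the relay responds before any network call, so the switch keeps working when the cloud does not, which is exactly the baseline a plain light switch sets.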
People often ask me how I keep up with everything. There are two answers to that.
The first is this: I don’t. I keep up with the specific things that people are paying me to look at. It just happens that this covers a broad spectrum of topics, from search engines to super yachts.
When you do research on such a broad range of topics, you gather a lot of context along the way. That helps you to get your head around other stuff, or at least makes you sound passably knowledgeable about a lot of things.
Shelagh Fogarty once described me as ‘a man who knows a lot about a lot’. ‘A lot about a little, and a little about a lot’, might be more accurate.
The second answer is that, as a rule, I listen when I walk and write when I sit. To my shame this largely excludes books from my intake. Instead, I consume a huge amount of material through podcasts. This lets me get the sense of the arguments in the big books of the day — frequently from talks given by the authors themselves.
I always intend to buy the books as well, or listen all the way through on Audible, but often it doesn’t quite happen.
At the end of the talk, as usual with the RSA’s excellent events series, the host invited questions. The first voice was very familiar to me. Even though he didn’t give his name I knew straight away it was my friend, the designer Johnny Grey. He asked what Sue had meant when she talked about ‘human-centred design’ — not because he was unfamiliar with the term, he explained, but because of a sense of caution about its use.
The answer revolved around addressing not just the needs and challenges facing the end customer but the needs of those people delivering the service as well.
This got me thinking.
When I first designed the Intersections foresight tool, it was to fulfil a need that I had. I wanted to structure my investigations into the near future of the different markets I was being asked to address.
The Five Vectors of Change had already emerged from the projects I was working on: five consistent trends that seemed to be affecting every sector, whether public or private, local, national or international. What I needed was a way to connect these trends to the realities of a specific sector I was addressing.
I realised, looking back at the projects I’d worked on, that the places where the five incoming trends caused the most dramatic change were where there was already pressure:
If there is stress on your margins then greater diversity in the supply chain can dramatically improve the situation.
If your customer service is poor, then rising consumer expectations of performance are going to be more problematic for you than others.
If you are a deeply vertically-integrated business, then you might struggle to adapt to an increasingly networked economy.
These are Intersections, the points at which incoming trends collide with existing Pressure Points.
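As a data structure, an Intersection is just a pair: one incoming trend meeting one existing Pressure Point, weighted by how strong each is. The names and scores below are hypothetical, purely to show the shape of the idea.

```python
# Hypothetical trend strengths and pressure-point severities (1 = mild, 3 = acute).
trends = {"networked economy": 3, "rising consumer expectations": 2, "supply diversity": 1}
pressure_points = {"deep vertical integration": 3, "poor customer service": 2, "margin stress": 1}

# Score every Intersection: a strong trend colliding with an acute pressure ranks highest.
intersections = sorted(
    ((t_score * p_score, trend, point)
     for trend, t_score in trends.items()
     for point, p_score in pressure_points.items()),
    reverse=True,
)
print(intersections[0])  # the most dramatic collision comes first
```

A ranking like this is one plausible basis for the prioritisation micro-tool mentioned later: the same scored list works as a project plan or a to-do list.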
The view from your window
Back when I was in marketing, our new business pitches always used to include a section called ‘The View from Your Window’. It was a few paragraphs that told the client that we understood their business and their market. When it was a big pitch we used to interview members of staff in the business to get this insight.
I realised I needed a similar process.
Over time, I’ve refined a set of questions that can be re-used across industries to understand the Pressure Points that companies are facing. It’s amazing how often issues that are widely recognised in the lower ranks can shock management. Organisations are often a lot more opaque than we think.
Pressure Points are human
I’ve always talked about my toolkit as being about structure, not people. I’ve left the more human elements of strategy and change to specialists in that area. I’m an engineer by training and I have a strong belief that there are structural design solutions to a lot of the problems that my clients — whether companies or industries — are facing.
What I realised from Johnny’s question and Sue’s answer is that actually I have been addressing a human component all along.
The questions that I ask people to find the Pressure Points, are about their feelings: what frustrates them, how they rate their own performance and that of the people around them. This has proven to be a very effective way to find problems in the core of a business, but it also speaks to their more human needs. To the changes that will improve their experience.
Now when people ask where the human element comes into my views on the future, I can give a much clearer answer.
The needs of the user
There’s a second constituency that Sue Siddall mentioned that I haven’t addressed, and that’s the needs of the user. The people actually applying my toolkit for themselves, as consultants or leaders in their own business.
One of the most valuable pieces of feedback from the first course at Salford was that people wanted the tools to be more usable on a day-to-day basis, not just for projects. They also wanted them to be simpler: I’m learning you can almost never make things too simple when it comes to designing tools.
For the next course I’m taking the five or six steps in each of the two tools taught on the course, and making each one into a ‘micro-tool’ in its own right. For example, one little tool can be used to prioritise the Intersections that you focus on, or it can be used as a format for a to-do list to prioritise your work. One that can be used to understand future impact, can also help you to prepare for a meeting with a new constituency — perhaps a client or a different function within your business.
Strategy and storytelling
The most human development of all to the toolkit over the last few months has been to the way that I communicate what it is. With due thanks to Phil Lewis, Applied Futurism is about strategy and storytelling.
How do you set strategy in an increasingly fast-moving world? How do you communicate your vision for the future to all the audiences that matter? How do you build an organisation that is truly future-ready?
These are the questions that Applied Futurism seeks to answer.
Yesterday, I joined a retail round table discussion hosted by my client Freeths solicitors. Senior executives from a range of big retail brands joined us for two hours of conversation and cracking food.
I kicked the discussion off with a little provocation. Here are the five bullets I used to get people talking.
In the future…
…augmented reality personalises every space
The first thing you need to know is that our physical and digital experiences will continue to collide. In ten years I believe we will spend 10–12 hours each day experiencing the physical world through a digital lens. That is to say, in ‘augmented reality’ (AR) or ‘mixed reality’ (MR).
Initially this will mean that everyone starts to wear smart glasses containing a smart-phone-scale computer, a pair of digital lenses, and a front-facing camera, as well as a variety of user interfaces: eye-tracking, bone-conducting microphone and speaker.
There’s much scepticism about this idea, in the wake of Google Glass. But I think a lot has changed in the nearly five years since Glass launched. For a start, the number of hours we spend glued to screens continues to rise, with the latest Deloitte figures showing many of us walk right across roads while staring at our phones. Secondly, our acceptance of cameras everywhere has grown. They’re now on many car dashboards, on cycle helmets, on drones, and in kids’ hands as little action cameras — every beach is peppered with people shooting in high definition. And no-one bats an eyelid.
This technology changes everything — particularly the front-facing camera. Now our machines can see what we see, and combined with all the other sensor data, including location, develop a hugely rich picture of our physical world interactions, as well as our digital interactions.
…more and more shopping is done for us
I’ve argued before that many of our low-engagement purchases will be handed over to a personal digital assistant. Toilet roll, tinned tomatoes, that sort of thing. No-one enjoys shopping for them, but no-one wants to run out. So why not let an AI-driven assistant ensure that you are kept supplied and never think about them again?
Increasingly I’m convinced that more of our high-engagement shopping might be taken over as well, or at the very least, assisted.
Right now, clothes shopping online remains a lottery. Inconsistent sizing, even within single brands, means that neither men nor women can shop confidently without trying goods on. And reverse logistics — the returning of goods — is an expensive nightmare for brands, especially when dealing with low-cost ‘fast fashion’.
Both of these issues can be solved. Manufacturing data could be applied to give the most detailed fit information, supplemented with shared data from people who have tried goods on. With a digital personal assistant holding your measurements — updated daily every time your head-mounted camera catches a mirror — you could buy with extreme confidence that something will fit.
Or your personal digital assistant could buy for you. Everyone loves surprise post. And we’re increasingly signing up for subscription-based purchases for everything from pants to organic vegetables. Why not give your AI some discretionary spend to surprise you with a new item of clothing every month — or even week.
With the rise of autonomous vehicles — including the rolling drones trialled in Greenwich last year — automated warehouses, and better integration of offline and online, the costs of reverse logistics start to fall. Brands can be more confident sending out goods that will fit and suit their clients. And know that if things do need to come back, it won’t cost the earth.
…but the tactile experience grows in value
All this suggests that there will be more damage to an already-challenged high street. But speak to 16–35 year-olds and it becomes clear that they have an enormous attachment to the high street and the physical experience it offers.
According to research I was involved with for the Salesforce Future Ready Retail programme, the high street is an important social venue for many (29%). 28% say they go for ‘something to do’ and 43% say they go just to get ‘out and about’. This trend is likely to increase: we have growing multiple occupancy in shrinking homes. People need a third space to escape to, so footfall shouldn’t be a problem.
But will they buy? Most of this cohort say they go to the high street to research or make purchases. 58% want to try items on or test them out. 54% want to touch or feel items before they buy. 51% value the high street for the instant access it provides them to goods. 38% are seeking ideas.
Altogether, 96% say they still like to visit actual shops on the high street and in shopping centres. The challenge though, is connecting this physical activity to digital commerce.
…necessity connects physical and digital
The reality is that many people try offline and buy online. Not only are the prices potentially better, but the convenience is increasingly greater. With rapid fulfilment, why lug your shopping back when it could be delivered to you, neatly packaged, by the time you arrive home?
As the figures above show, the offline experience is crucial to the buying process. The challenge is demonstrating that with enough confidence to continue investing in it. The widespread acceptance of augmented reality devices could solve this problem. The data will be there to track someone through a physical interaction with a product right through to their digital purchase. However, two problems remain.
First, we will not be able to make a causal connection, only a correlation. This shouldn’t be too much of an issue though: there is rarely a causal connection between physical advertising and the purchase, but nonetheless we continue to invest billions in it.
Second, there is no obvious mechanism for paying the provider of the physical world experience for the digital purchase. Should one retailer pay another because an online purchase was spurred by an offline experience in their store? Not likely.
More likely is that brands will start to have to foot the bill for physical exposure, and find ways to map the investment in one back to the returns in the other. In this scenario, department stores have an enormous opportunity, aggregating the costs of physical exposure and charging brands for the privilege. We may yet see the renaissance of the department store as a venue for people to touch and feel goods, even if their purchases are ultimately online. Some may feel this is already what they have become, but in the future it may be a sustainable business model.
…in-store the focus is on product and service, not transaction
Those investing in stores will want to maximise the value of that investment. This means focusing on product exposure and service quality, not on the space currently devoted to transactions. Tills will largely disappear, replaced perhaps by RFID systems, but more likely in the long term by computer vision systems tracking goods around (and out of) the store — as seen with Amazon Go.
Human staff will be augmented by virtual assistants, powered by the full range of data captured about each shopper and what that shopper chooses to share from their own digital assistant.
I’m delighted to hear a politician engaging with the coming challenges of automation. But I don’t think this is the right idea, because we already have a tax on robots: it’s called Corporation Tax.
Corporation Tax is a tax on profits. The more you make, the more you pay.
Companies invest in robots primarily because they increase profits. They do jobs more quickly, more cheaply, and more reliably than humans. Hence companies can produce more at lower cost. In theory then, those companies investing heavily in automation ought to be making disproportionately greater profits, and paying more tax.
Unfortunately, it doesn’t always work like that.
Analysis by the House of Commons Library (with the caveat that this was commissioned by Labour) suggests tax avoidance costs the state around £2.2bn per year. This does not include the re-routing of profits internationally into lower (or zero) tax destinations.
If it did, the figure would likely be much, much higher.
The global companies investing the most in automation are also those with the scale and capability to avoid tax most effectively.
A tax on robots sounds good. It plays to our deep-seated fears of being displaced by machines. Fears that go back centuries: the Luddites weren’t against technology, they were against losing their jobs to it.
But this is the problem with so much of modern politics. The answer to everything has to be a new policy, because that sounds good in speeches and looks good in headlines.
Taxing robots also sounds a lot more palatable to half the electorate than cutting down on tax avoidance. Talking about tax avoidance — as Labour has done — leads half the population to think that they will end up paying more tax. Rather fewer people think they will be affected by a robot tax. And it may well be the wrong people.
A brake on progress
Taxing robots specifically targets those companies that are driving innovation. They may well be the same companies investing the most in tax avoidance, but that doesn’t mean that automation is inherently wrong. Automation may cost a lot of jobs, but many of them will be ones people would rather not do. The question is perhaps not whether we should be protecting bad jobs but working out what people are going to do instead.
Every listed company is obliged by its commitment to shareholders to operate to its best potential. Sometimes this means their leaders prioritise short-term success over sustainability — a mistake that hurts us all. But over the long term, without a fundamental change in our economic system, we have to accept that the duty of leadership to shareholders means that companies are going to invest in automation.
If that switch is inevitable, do we want companies in the UK to be the last to make it, because we have made it expensive? Do we want them to be the last to reap the benefits of automation, or the first?
Software and hardware
Think about the types of robots that are being employed. Some of them are bound by location. These are primarily physical robots: drones for delivery, self-driving cars and trucks, warehouse operations. Still, many of the physical robots in manufacturing and production can easily be moved anywhere offshore — likely close to a port in a country with cheap electricity.
The real challenge is how you tax software robots. These will outnumber the physical robots by an order of magnitude. The digital systems that replace accountants, solicitors, call centre workers, retail staff, administrators. How do you tax those?
You can’t do it explicitly, because counting them would be near impossible. The closest analogy today is the effort that large software companies put into counting the licences for their tools used by corporations. That investment is enormous and the technical expertise required is considerable.
I don’t think it is practical for any state tax collector to tackle this challenge across all of the possible types of software robot, many of which will never be identifiable as having captured part or all of one human role.
The answer is international
What we have to return to then is a tax on a principle, rather than a tax on a particular aspect of business. If you make profits, it is because you have access to the infrastructure and assets of a nation — including its people. There is a cost to that access, and it is corporation tax.
What we might consider is creating bands of corporation tax based on the relative profits a company makes compared to its employment base. Huge profits but few employees might see you pay a higher rate than a company with low profits but many people. This might have the negative drag effect I decried earlier, but set at the right level and introduced progressively it could start to offset the losses from personal tax and employer contributions (PAYE and NI).
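That banding idea can be sketched as a simple calculation. Everything below is hypothetical: the thresholds, the rates, and the profit-per-employee bands are invented purely to illustrate the mechanism, not figures from any actual proposal.

```python
def banded_rate(profit: float, employees: int) -> float:
    """Pick a corporation tax rate from profit per employee.

    Bands and rates are illustrative only, not a policy proposal.
    """
    if employees == 0:
        return 0.30  # fully automated: top rate
    per_head = profit / employees
    if per_head > 1_000_000:   # huge profits, tiny headcount
        return 0.30
    if per_head > 100_000:     # moderately capital-intensive
        return 0.24
    return 0.19                # labour-intensive: standard rate

# A software firm: £50m profit, 20 staff (£2.5m per head) -> top band
print(banded_rate(50_000_000, 20))    # 0.3
# A retailer: £50m profit, 5,000 staff (£10k per head) -> standard band
print(banded_rate(50_000_000, 5000))  # 0.19
```

Introduced progressively, a scheme like this would let the rate rise with automation intensity without ever needing to count individual robots.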
None of this is feasible without international co-operation though. As I have said, the companies investing the most in automation are usually those with the scale and scope to move profits around the world to minimise their tax bill. Only greater international co-operation can make this harder and close the tax gap for each nation.
Robots not androids
When we talk about robots, particularly in the context of work, we naturally think of machines that look like us. If they look like us, then they can be counted and taxed like us. This is a fallacy.
The robots that replace many of us at work will look nothing like us. In fact they might not look like anything at all. Just a few million lines of code in a server somewhere in the world. They might never be bought or sold, but created in house from a collection of open-source components, so they can’t even be taxed at point of sale.
We can’t tax robots. Which leads us to the difficult but obvious conclusion: we have to get better at taxing profits.
Imagine that in twenty years you find out you can come back and give your younger self some advice. But you don’t have long: the machine can only keep you in the past for a minute.
What do you say?
Ignore, for a moment, the possibility of time paradoxes, and the sheer unlikeliness of time travel. Strip away the specifics: that relationship you should have dodged, next week’s lottery numbers. What would be the most common advice we would come back and give to our younger selves?
Here are some ideas:
Learn to learn
“The next twenty years are going to see change like never before. Keeping up is going to be a challenge, both at home and at work. Your best prospect for success is to be the fastest to adapt.
Learn to recognise the gaps in your skills and your knowledge and how to find the resources to fill them.”
Scepticism is your best defence
“The ability to broadcast information is growing much faster than your ability to verify it. You can’t rely on the sources you once could. There will be attempts to solve this with technology, but they will only ever be half an answer.
You need to learn to question everything you’re told and sold. Accept and challenge your own prejudices. Scepticism is your best defence: from salespeople, politicians, and yourself.”
Cling to your privacy
“People will tell you that data is the new oil. It isn’t. It’s the new prison.
The less control you have of your personal data, the more you are trapped in the expectations of others. Whether it’s brands and the things they want to sell you, insurers and their willingness to protect you, or employers and their willingness to hire you. Even friendships and relationships can be jeopardised by a digital history out of your control.
Don’t withdraw. Enjoy the advantages being offered in return for your data. But ensure you are informed and make every sharing decision consciously.”
Have you heard of transactive memory? It’s the outsourcing of memory to the people around us. You don’t remember your nephew’s birthday because you know your other half will. You don’t need to remember to get the car MOT’d because you know your partner will. By outsourcing this way we can store much more than we can fit in our own heads.
Human beings have always been looking for ways to be more than our own biology allows. We’re a race of toolmakers, determined to turn every material we can find to our advantage. Whether it’s a stone axe or a smartphone, we’re keen to augment ourselves to be more than we could otherwise be. Do more than we could otherwise do.
This deep and ingrained comfort with transactive memory, and our desire to augment ourselves, is why I think we will so happily accept AI extensions of our own selves. A transactional relationship with software agents that extend our memories and our processing power, and give us the ability to do what every busy person has always wanted: to be in two places at once.
Hold that thought.
For the time being, the biggest battleground in digital marketing is still search. What this means is people spending billions of pounds and using all sorts of sneaky means to ensure that when you type the relevant words into Google or Bing, theirs is the first brand that you see.
You may not think it, but Google and Bing are your friends here. Every day their engineers go to bat to make sure that what you see when you search is not what someone else wants you to see, but objectively the most relevant answer to your question.
On the other side though, every day, every brand’s agency is working to do the opposite. To ensure that whether you’re searching for car insurance or cat food, it’s their clients who appear right at the top of those search results.
Two sides locked in a constant battle.
Hold that thought.
Tell me: do you enjoy buying toilet paper? Really? Or tinned tomatoes? Or washing powder? All those things that make up a good chunk of your weekly shop. Things you really need, but honestly, do you really want? Do you lust after them? Are you fulfilled by finding that perfect pack of triple-ply?
What if you could have a transactional relationship with an artificial intelligence that ordered those things for you, and ensured that you never had to think about them again? They would always just be there. You would never run out of washing-up liquid, or dog food, or nappies again. The AI would have access to your credit card and an online store, with limited scope for discretionary spending against a list of key items.
Outsourcing plus tools.
Now imagine the battle that is going on behind the scenes. Today that battle is between marketing agencies and search engines. But what happens when you stop searching? When you allow an AI to do the searching for you? Imagine how much effort will go into influencing your AI to buy a particular brand. This is the next big battleground and there won’t be a single human on the front lines.
Personalised marketing engines will suck in huge amounts of data about you and your peers and serve endless offers to your AI, only for it to bat them back. 99% of them will be rejected. But every now and again, based on how well they appeal to the criteria your AI has been given, or has learned, it will change your brand of toilet paper, cat food, shower gel, or, yes, cereal.
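A minimal sketch of that gatekeeping behaviour, under loudly stated assumptions: the offer fields, price caps, and enjoyment threshold below are all invented for illustration, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    category: str      # e.g. "toilet paper"
    brand: str
    price: float
    enjoyment: float   # 0-1 preference score the AI has learned

# Criteria the owner has set (or the AI has learned); purely illustrative.
MAX_PRICE = {"toilet paper": 3.50, "cereal": 3.00}
MIN_ENJOYMENT = 0.7

def consider(offer: Offer, current_price: float) -> bool:
    """Accept a brand switch only if the item is on the key-items list,
    beats the incumbent on price, and clears the enjoyment threshold."""
    if offer.category not in MAX_PRICE:
        return False                      # discretionary: not its job
    if offer.price > min(MAX_PRICE[offer.category], current_price):
        return False                      # no saving over current brand
    return offer.enjoyment >= MIN_ENJOYMENT

offers = [
    Offer("toilet paper", "BrandA", 3.20, 0.9),
    Offer("toilet paper", "BrandB", 3.40, 0.4),  # cheap-ish, but disliked
    Offer("caviar", "BrandC", 9.99, 1.0),        # not a key item
]
accepted = [o for o in offers if consider(o, current_price=3.30)]
print([o.brand for o in accepted])  # ['BrandA']
```

The interesting battleground is in those criteria: marketers will be probing exactly which combinations of price and learned preference get an offer past the gate.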
What might that data be? Let me give you some ideas.
For a start, we will all be wearing cameras on our heads, all the time. Your AI won’t just know what brands you buy, how much you use, and when you need more, it will know what your friends and family buy and use. ‘People like you bought things like this.’ Imagine Amazon’s recommendation engine turned on its head and brought into the physical world.
Your smart glasses won’t just know what you ate elsewhere, they will know how much you enjoyed it. Heart rate, breathing, galvanic skin response, even EEG readings of brain activity: all of this is today’s technology, and its output can already be used by an AI to reliably interpret emotion. Sampled a different cereal elsewhere and liked it? You may find a box in your next order.
But only if it’s good for you. Your personal AI will know a lot about your health. We already pump data into systems like MyFitnessPal, recording our diets and streaming data from our Fitbits and connected scales. A few years ago I made a programme called ‘In the future, toilets will be our doctors’. I wasn’t kidding. You can learn a lot about what’s going on inside you by looking at what is coming out of you.
We are already on this journey. We have outsourced our memories to digital calendars. Our sense of direction to GPS.
We are increasingly comfortable with subscription-based shopping models for everything from films to food, razors to pants.
The future of retail — at least large chunks of FMCG — is automated. Decades of marketing to humans will increasingly be turned on the AIs that assist us, trying to game them into switching our brands. This is the new brand battleground.
Mum hasn’t gone to Iceland. Nor has dad. And they haven’t sent the kids. The AI has done the shopping and it has bought you exactly what you need.
Last week I gave the closing keynote at the enormous RESI 2017 residential property conference, sharing a stage with the housing minister Alok Sharma, the BBC’s Mark Easton, Dame Eliza Manningham-Buller, and Blur’s Alex James.
I wrote a talk for the event, but the night before I decided it was all wrong. Closing keynotes need to be full of energy — especially when people are still jaded from the previous night’s gala dinner. They need to give people some simple points to take away. And while they can summarise, the last thing people want to hear is a repeat of what has come before.
Looking at the agenda for the previous days I decided I needed to come up with something fresh. This is what I wrote. Though it was written for a property audience, I think it has wider relevance. Have a read and see what you think.
I’ve been asked to talk to you today about disruption. In the next twenty-five minutes I want to talk about ten things that are going to completely disrupt the physical world: your business, your home, and everyone’s lives.
But first I want to talk about what’s driving that disruption. Right here, right now there is one change driver that is bigger than Trump, bigger than Brexit, bigger than climate change. And it’s technology.
Technology is driving change both more consistently and more persistently than any of these factors today. You may be able to roll back whatever decisions a politician makes, given enough time. But you can’t un-invent the smartphone, or the atom bomb — unfortunately, given the sabre rattling from a certain chubby dictator.
The appliance of science
When I talk about technology, I’m not talking about the phone in your pocket, though that’s part of it. I’m talking about technology in the broadest sense. The appliance of science. We are a race of tool makers who have been applying science since the first caveman or woman picked up a rock and realised it was a more efficient way to stave in the head of whatever animal they were trying to catch. Technology is maths, wheels and language. Which I guess makes Shakespeare a coder.
Throughout our history technology has done one thing: it has lowered friction. Technology allows us to do things more efficiently, quickly, and painlessly than we otherwise could, and sometimes to do things we couldn’t otherwise do at all.
But that gives whoever has that technology a competitive edge. Because if someone else has that edge, then we want it. It doesn’t matter if it’s countries competing in an arms race, companies competing in a market, or you trying to keep up with the family at number 42 with the nice new Merc.
It is this competitive tension that keeps driving technology forward. The last ten years have seen technology transform our world. The next ten will see transformations of even greater magnitude.
1. The end of possessions
Technology has eliminated so much of the matter in our lives: newspapers and magazines, books, paper in general; CDs, DVDs, Blu-rays and all the paraphernalia needed to play them.
This has coincided with a shift to a much more experience-led culture. Expenditure on food and drink and holidays is up. People are focused on what they can do, not what they can own.
There’s still huge — perhaps increasing — value in tactile experiences like vinyl, in the face of mass digitisation. But the larger trend is clear: we can achieve the same or greater experiences through fewer physical objects.
2. Personal AI
We outsource memory to other people in our lives. How many times have you relied on a partner or family member to remember someone’s birthday, the MOT, or home insurance renewal? Why shouldn’t we outsource to machines as well?
The reality is that we already do. GPS has become our sense of direction, calendars and photos our memories.
The next step is letting them filter the world, and even take buying decisions, on our behalf. Right now we put this power in the hands of third parties like Facebook and subscription shopping services, when it should sit with our own personal AI, intimately familiar with our preferences and insulated from the influence of external parties.
3. Frictionless administration
With a personal AI hosting aspects of our identity, finance and vital documentation, we can look forward to truly frictionless administration. No more endless reams of paper or multi-page forms for every insurance policy, remortgage or investment. Our assistants will interact with the APIs of any intermediary, which in turn interact with providers and third parties. Blockchain may play a role in providing a more secure and transparent record.
4. Everything is smart
Our personal AIs will be driven by data captured from the world around us, and able to shape that world to our needs. Because everything will be connected. It costs less than a couple of pounds to add WiFi to anything these days — a few cents to do it at scale. Eventually the cost of doing so falls below the return — however slight it might be. And so everything gets some level of smarts, for sensing or control.
5. Distributed energy
We can power this smarter world because three things are happening. First, the consumption of each unit is declining: desktop PCs consume around 400W, laptops 75W, tablets and phones just 10W. Appliances get more efficient all the time.
Second, our ability to generate electricity cheaply and cleanly is improving — particularly at small scale with solar. Wind is already markedly cheaper than nuclear, as the last round of bidding for UK energy supply shows.
Third, we can now store energy better. Each generation of batteries stores more energy per kilogram, and newer chemistries promise cells made from cheap and readily available minerals.
6. Everything is electric
Because of this, gas starts to look as unattractive a home fuel as coal does to us now. Seen as dangerous and dirty, it will be installed less and less in new developments, as electricity becomes the preferred technology for heating and cooking, transport and travel, as well as all of our digital appliances.
7. Autonomous construction
Machines can already lay bricks and pour concrete faster than people, with large-scale 3D printers now producing whole buildings near-autonomously from a recycled slurry. As this technology advances it will change the nature of construction and maintenance. Autonomous machines will follow digital instructions to create and complete whole structures, utilising new materials and modular techniques.
Then machines will respond to sensor data to adapt those buildings to current need, within the parameters laid down by the original architects.
8. Dynamic addressing
Your phone is increasingly your address, enabling you to share your location with third parties to a high degree of accuracy. The incredible what3words gives a unique address to every few square metres of the earth. Given these capabilities, why do we have everything delivered to a fixed physical address? New fraud controls mean we should be less reliant on an address as validation of someone’s trustworthiness. Why not send goods to wherever people want them — whether that’s where they are or where they will be?
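The real what3words algorithm and word list are proprietary, so this is only a toy illustration of the underlying idea: carve a region into fixed squares and give each one a deterministic triple of words. The grid size and ten-word vocabulary are invented for the sketch.

```python
# Toy word-addressing scheme. Illustrative only: the real what3words
# algorithm and word list are proprietary and far larger.
WORDS = ["apple", "brick", "cloud", "daisy", "ember",
         "flint", "grape", "holly", "ivory", "jumbo"]

def square_to_words(x: int, y: int, cols: int = 1000) -> str:
    """Map grid square (x, y) to a deterministic three-word name.

    With only 10 words the triples repeat after 1,000 squares; a real
    system uses tens of thousands of words so every square is unique.
    """
    idx = y * cols + x                    # index of this square
    w1, rest = divmod(idx, len(WORDS) ** 2)
    w2, w3 = divmod(rest, len(WORDS))
    return ".".join(WORDS[w % len(WORDS)] for w in (w1, w2, w3))

print(square_to_words(5, 0))   # apple.apple.flint
print(square_to_words(23, 4))  # apple.cloud.daisy
```

The point of the design is that the mapping is fixed and memorable, so a word triple can stand in for coordinates anywhere, with no dependence on street addressing at all.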
9. Life through a lens
Yesterday’s Deloitte figures showed we spend an incredible amount of time staring at a screen. Tomorrow we will stare through it. Augmented reality enables more natural, human interactions with the digital world, and equips us with a general purpose sensor — the head-mounted camera — that enables a whole range of applications. I genuinely believe that in just ten years we will spend 10–12 hours per day in augmented reality, witnessing the world through a digital overlay. One that expands our senses, enhances our memory and cognition, and personalises our world. This isn’t a vision without risk, but I think it’s realistic.
10. Joy is paramount
One of the insights about the ‘millennial’ generation that I actually accept is the rising priority placed on experiences over possessions. Though widely pilloried, in retrospect I think this can only be seen as a good thing. We should enjoy life if we can, and our spaces and places, services and service, need to be shaped around that priority.