Monthly Archives


Posted by Tom Cheesewright

The art of the probable, the possible, and the desirable

‘The art of the possible’ is a phrase historically associated with realpolitik. It has come to mean ‘achieving what we can (possible), rather than what we want (often impossible)’. But as this piece in the New Statesman points out, it was once used a little more optimistically as a challenge to aged ideas.

For me, science is a much more optimistic ‘art’ of the possible than politics. It explores the boundaries of what the laws of physics permit us to achieve, pushing back those boundaries with knowledge all the time. Inside the envelope defined by science, everything else comes down to choices.

The art of the probable

What, then, is the art of the probable? What shapes those choices inside the bounds set by science?

For the most part, it is money. What is most profitable, or affordable, seems to be the greatest predictor of what will be. Technology drives change, but the direction of that change is largely steered by financial considerations.

There are, of course, other motivations — moral, emotional, environmental. But in our capitalist economy, and (still) reasonably stable democracy, dominated for most of the last thirty years by a particular strain of economics, money tends to drive what’s next.

The art of the desirable

Whether I’m writing or speaking, this reality often causes people consternation. They confuse what I believe will be the case with what I would like to be the case. They confuse futurism with politics — the art of the desirable.

You can’t eliminate politics, or personal bias, from your perspective. That’s why I do my best to systematise my analysis. But it will always be, to some extent, subjective.

Arguing for what you would like to see is the job of a politician. I can do that, but it’s not what people pay me for. They pay me to try to be objective about what I think — given all the factors — is possible or likely to happen.

Until the dominant political ideology goes through a major shift, that will largely be a case of understanding how economic factors shape choices within the boundaries defined by science.

Posted by Tom Cheesewright


Last night I went to speak to the people and partners of Soul, one of the companies that licenses the Applied Futurist’s Toolkit, about the future of marketing. I called the talk ‘AI and AIDA’ because I think the impact of new technologies will fall at every point on the customer journey, from awareness to action.


These are some of the conclusions I reached.

1. The canvas is growing

Following from the five vectors of change that I often discuss with clients, it’s clear that the breadth of channels through which brands can address customers and prospects is growing. This is perhaps most clear in the possibility (or probability) of always-on augmented reality.

In this scenario, every atom of the physical world, and all the spaces in between, become pixels on a four-dimensional canvas through which brands can communicate with us.

2. Machines own the medium

Combine this huge diversity of channels with the infinite possibilities for data-driven personalisation, and it makes sense that most of the brokering of the medium over which we are reached is handled by machines. People simply can’t process the data and deliver the creative at sufficient speed.

The machines will be working inside parameters defined by people, but the increasingly programmatic world of media we see today is really just the beginning.

3. AI on both sides of the battle

It won’t only be brands and agencies applying new technologies to this age-old problem. Facing a cacophony of digital noise, consumers will need to develop better filters, and these will probably come in the form of a digital assistant. One who knows us incredibly well and filters messages on our behalf, batting back the vast majority of bids for our mindspace.

Of course, this only works if the assistants are owned by us, not by the brands or media owners. This may be a privilege that not everyone can afford.


Posted by Tom Cheesewright

Your smartphone is a relic

Your smartphone is clunky. Your social networks, rudimentary at best. In a few short years we will look back at our current interactions with technology and laugh. Twitter, seen through the glass of a museum cabinet, will look as archaic as Ceefax. Facebook like MS-DOS. Smartphones like typewriters.

When people consider the rapid pace of technological change, they often talk about the raw power: transistor counts, processor speeds, storage volumes and data rates. They talk less often of the use to which all this great technology has been put.

So much of it has been applied to making technology more accessible, improving the user interface. In fact, perhaps the biggest change in computing technology of the last fifty years is not its speed, size or cost, but its accessibility.

Plot this curve forward and you see the potential for some very interesting shifts in how we interface with our machines. Let me make a few suggestions.

1. You will have an intimate relationship with an AI

The best interface is often no interface at all. Do you want a switch or do you want lights that are on, at the right level, at exactly the right time? Do you want to have to make appointments, pay bills, buy toilet roll (itself perhaps a product at risk, but that’s another story), or would you rather those things just happened?

We’re rapidly reaching the point where an AI (of sorts) can do all these things for us. But right now, it looks like those services will mostly be provided by one of the big tech giants in return for a little cash and a lot of access to our data.

Soon, I think we’ll see more independent services. Services in which we can have greater confidence that the data we give up isn’t being used to sell us stuff. This will come at a cost, but I think it’s one that many will choose to pay. Amongst young people, I’ve seen a growing consciousness of privacy and of data exploitation by large companies. Those who can will invest in a personal AI with which they have an intimate and long-term relationship. An AI that acts as an intermediary between them and the rest of the world, handling much of the administration of life, and interfacing with smart devices on their behalf.

This AI will know everything about you. Protecting it will be of enormous importance.

2. You will live in a mixed reality

The day is coming when you will spend the majority of your waking hours seeing and hearing the world through a digital overlay. When it won’t be entirely clear what is physical and what is virtual.

At first, this will come through glasses. Then perhaps through lenses, though these might present an interesting etiquette problem. Glasses you can easily remove when you want to show someone that you are focusing on them and not the streams of digital data buzzing past your eyeballs. Lenses? Not so much.

In this mixed reality, the nature of the interface between us and our information streams will be radically different. No more scrolling streams of text and images. It will need to be rather more subtle than that. Shades of colour to suggest mood, perhaps at the periphery of your vision. Maybe virtual clouds will warn you of weather changes. Perhaps emoji-fied avatars of your friends, inserted into the crowd, will tell you of their mood.

3. Your interface goes beyond sight and sound

Today the information we receive from our machines is really only carried on two pathways: sight and sound. Yes, we have basic haptics but, as the saying goes, we ain’t seen nothing yet.

I’ve long been fascinated by the work of David Eagleman on Sensory Substitution — carrying ‘sight’ and ‘sound’ via touch. It feels like we could add a lot of richness to our interface with greater application of this sort of technology — long before we start thinking about neural interfaces. Imagine using heat, or cold, or vibration, to give you a really subtle sixth sense for what’s happening in your wider digital world.


Posted by Tom Cheesewright

Hyperloop: plus ça change…

The laws of physics are constant. Once you understand them, your solutions to challenges like transportation are always going to look pretty similar. Which is why the Hyperloop is not new. It combines the technology of pneumatic trains, first deployed in the UK in the 1860s, with the technology of magnetic levitation, first demonstrated as a technology for trains in 1913.

This is not to say that the engineering challenge of the Hyperloop is simple. It will be a great feat when completed, and take advantage of huge leaps in our application of science since the days of the pneumatic train: computing, new materials, batteries, solar power and much, much more.

But the challenges it faces may not have changed much at all.




The hyperloop is, by its nature, an intercity transport platform. To get the advantages of its high speed, you need to be travelling a reasonable distance. Otherwise it has no chance to get up to speed before it is stopping again. Rough maths time: the proposed top speed of the hyperloop is around 1200 kilometres per hour, which is about 333 metres per second. At maximum acceleration, a Tesla car can pull about 1g, or 10 m/s². That would get you to top speed in a little over 30 seconds.

By comparison, a Pendolino train can reportedly accelerate at a maximum of 0.43 m/s² — i.e. it takes about 60 seconds to reach 60mph. Let’s say that the hyperloop can comfortably accelerate somewhere between the two. At slightly more than Pendolino rate — 0.5 m/s² — it’s going to take 11 minutes and, perhaps more importantly, 111km, to get up to top speed. Crank the acceleration up to 2 m/s² and that comes down to 2m47s and 27.8km. At 5 m/s² it falls to just over a minute and 11km.
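As a sanity check, the constant-acceleration maths can be sketched in a few lines (a simplification: a real pod would ramp its acceleration gradually rather than holding it constant):

```python
# Time and distance to reach the Hyperloop's proposed top speed (~1200 km/h)
# under constant acceleration: t = v / a, and d = v^2 / (2a).

TOP_SPEED_MS = 1200 / 3.6  # 1200 km/h is about 333 m/s

def time_and_distance(accel_ms2: float) -> tuple[float, float]:
    """Return (seconds, metres) needed to reach top speed at a constant acceleration."""
    t = TOP_SPEED_MS / accel_ms2
    d = TOP_SPEED_MS ** 2 / (2 * accel_ms2)
    return t, d

# Pendolino-plus, mid-range, half a g, and Tesla-like rates
for a in (0.5, 2.0, 5.0, 10.0):
    t, d = time_and_distance(a)
    print(f"{a:4.1f} m/s2: {t:6.0f} s to top speed, {d / 1000:5.1f} km covered")
```

The same formulae apply in reverse to braking, which is why the distance figures effectively double for a full journey.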

Now this may seem extreme, accelerating ten times as fast as a Pendolino, and I know some people who would be put off by that. Half a g is more than you would experience on takeoff and landing on a commercial airliner, for example — perhaps 2–3 times as much — though you might experience more on banking.

This, though, is what is proposed.

Stop and go

With around 11km to get up to speed, and the same again to stop, on a short hop like Manchester to Liverpool (roughly 30 miles/50km), you’re only going to be at cruising speed for a minute and a half or so. Indeed, the Northern Arc proposal suggests six minutes from Liverpool to Manchester, and seven from Manchester to Leeds. This would be absolutely transformative, genuinely making these cities part of the same economic zone, if the cost of travel is reasonable. Even Glasgow could be under an hour from Liverpool.
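Extending the same rough maths, a simple accelerate-cruise-brake model gives a feel for these journey times (the 350km Liverpool–Glasgow figure is my own rough estimate, and real services would add dwell and safety margins):

```python
# Rough journey time for a hyperloop hop, assuming the pod accelerates at a
# constant rate, cruises at top speed, then brakes symmetrically.

TOP_SPEED_MS = 1200 / 3.6  # about 333 m/s

def trip_time(route_km: float, accel_ms2: float) -> float:
    """Return journey time in minutes for an accelerate-cruise-brake profile."""
    route_m = route_km * 1000
    ramp_m = TOP_SPEED_MS ** 2 / (2 * accel_ms2)  # distance to gain (or shed) top speed
    if 2 * ramp_m >= route_m:
        # Route too short to reach top speed: accelerate to halfway, then brake.
        t = 2 * (route_m / accel_ms2) ** 0.5
    else:
        cruise_m = route_m - 2 * ramp_m
        t = 2 * (TOP_SPEED_MS / accel_ms2) + cruise_m / TOP_SPEED_MS
    return t / 60

print(f"Liverpool-Manchester (~50 km) at 5 m/s2: {trip_time(50, 5):.1f} min")
print(f"Liverpool-Glasgow   (~350 km) at 5 m/s2: {trip_time(350, 5):.1f} min")
```

At 5 m/s² the 50km hop works out at under four minutes of pod time, which squares with the Northern Arc’s six-minute figure once you allow for gentler real-world acceleration.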

But this presents a new issue: how do you find the space for new tubes — underground or elevated — between and through these densely-populated, organically-grown and mostly privately-owned urban areas? And how do you pay for those works?

If you look at the cost breakdown for HS2, it’s interesting to see that over £1.8bn of the original budget (since dramatically expanded) is allocated to land costs — half of that has already been spent. Once you strip out risk (nearly £13bn of the £33bn original 2011 costings), the biggest line items are tunnels, bridges, viaducts, and the construction works around the tracks themselves. These will be different for hyperloop but it’s hard to see how they will be *that* different.

Future technology in today’s world

I would *love* to see the Hyperloop become a reality in the UK, particularly the Northern Arc project that would have such benefits for the north of England and Scotland. But I fear that its futuristic vision may be stymied, or at the very least, slowed, by the challenges of construction in a complex and expensive environment.

For once as a futurist, I hope I’m wrong.


Posted by Tom Cheesewright

Hello cyberwoman

Where do you end and your machines begin? Do you know?

A year ago, I pointed out at TEDx that we are all bionic now. That the lines between us and our machines have already been blurred. We don’t need chips under our skin or a neural lace in our heads to be bionic. The bandwidth of communication between us and our machines has been so increased by predictive intelligence, rich sensor arrays and rapid interfaces that the machines may as well be part of us now.

Tomorrow I head to Tug Life 3 to talk about “Machines + Human Life: What next? And is it really better?”. And I’ll say that this is just the beginning. And I’ll ask, what do you mean by better? And for whom?

Let’s be clear: the reality of tomorrow’s world is one populated by cybermen and cyberwomen, unaware of where they end and their machines begin. People who may not be surgically altered for this interface, but who will certainly be adapted to it. People with sixth, seventh, eighth and more senses: for the direction of their friends or favoured hangouts. For the mood on their social streams. For the weather. All fed through permanent digital overlays on their existing senses: vision and sound but also touch and smell.

These people have outputs as augmented as their inputs. Their shouts reach billions, their gestures command objects to move. They can create faster than any human in history — whether objects or documents. Their imaginations made real at incredible speed. The only thing that prevents these superhumans becoming gods is the relative power of their peers, maintaining a balance.

Except there won’t be a balance. On the current course, the gap between the haves and have-nots will widen. Social mobility is hard enough between classes. How do you ascend to godhood? This is the subject of the 2013 Neill Blomkamp film Elysium, on which I did a round of media interviews at its DVD/Blu-ray launch. His answer? Violent revolution.

Maybe we will take a different course. Maybe the technology becomes so cheap and so accessible that everyone can have it without signing their privacy and rights away in return, the model that currently funds many tech-driven services. Maybe we have a less violent revolution in the interim and steer down a more egalitarian path.

I am excited by the prospect of human-driven evolution. The idea that we can be more — better — with the application of technology. This is nothing to fear. It is, after all, what we have always done: extend ourselves with tools. From the flint axe, to the abacus, to the supercomputer, we have always applied science to overcome the weaknesses of our minds and bodies.

But I am fearful for who this accelerated evolution might leave behind. Those without wealth or work may see no benefit without concerted effort from all of us who consider relative equality to be important.



Posted by Tom Cheesewright

Extinction level events

Last night I watched the Kevin Spacey film Margin Call. It’s fiction but it’s easy to believe it’s not far from the reality inside some of the large investment banks during the 2008 crash.

Avoiding spoilers — you really ought to watch this film if you haven’t — the fictional firm in the film is forced to take drastic action after it discovers it has underestimated the risk attached to some of its biggest assets — the all-too-real mortgage-backed securities.

In my experience, the problem is rarely that people underestimate risk. It’s that they don’t estimate it at all. At least not for those issues that might cause them serious harm.

See no evil, hear no evil

Part of the problem is visibility. Some recent big business casualties just didn’t see what was coming at all. They didn’t recognise the threat presented by ecommerce, or digital media, or global competition, or new channels of customer communication.

Part of the problem is closed-mindedness. People become experts in their own business but forget that doesn’t make them experts in what is happening outside. They dismiss events in adjacent industries as being irrelevant to the special circumstances of their own.

Yet all too often the external challenges, if unmet, are much greater than any internal strength.

Process over talent

This is not a personality fault of any individual. Visionary leaders who can keep one eye on the business and one on the breadth of external factors are extraordinarily rare.

But it is the fault of the leader if they fail to put in place processes to address this inherent weakness that most of us share.

Now, more than ever.

Because in an age of accelerated change these extinction level events come ever faster, driven by, or at least conducted through, technology. It doesn’t take an asteroid hitting earth, or its business equivalent, the 2008 crash, to wipe out companies and even whole industries. High frequency change means there are constant small shocks, each one great enough to collapse the unprepared.

Every organisation above a certain size should have a formalised process for scanning the near horizon and identifying threats. Every large organisation should be actively considering its fitness to react in the case that an extinction level event appears on the horizon.

Yet still, so few do.

Posted by Tom Cheesewright

Everyone is a sorcerer in a mixed reality

Are you familiar with the concept of astral projection? Any reader of Doctor Strange comics (or watcher of the film) will recognise the moment where the Sorcerer Supreme leaves his body and his spirit flies through space. This is something my nieces and nephews have been experiencing this week.

I have been sent a ViFly R220 drone to review for The Loadout and 5live. It is a racing drone that comes with a set of goggles providing a first-person view. In other words, you can fly the drone by seeing what the drone sees. Pop the goggles on, pick up the controller, and you can leave your body and soar through the air.

It’s extremely disorienting the first time you try it. I’d use that as an excuse for smashing the drone into a wall but I wasn’t wearing the goggles at the time. One of my nephews was. I’m hoping he hasn’t been scarred by the experience.

Drones for all

Racing drones with the paraphernalia for a first-person view remain a novelty for now. But inevitably this technology will become smaller and more widely available. If, as I expect, most of us start to sport mixed reality glasses or lenses at some point, switching between your ground-bound reality and soaring to the clouds — or anywhere else — could take just a moment: seeing the view from, and taking the controls of, any number of drones you may own or have access to.

‘What’s the point?’, you might ask. There are a few sensible challenges. Virtual environments are now so realistic that you can get much of the same experience from a computer game. So why risk real-world injury and cost? And beyond the thrill what’s the value?

Well, the risk of real-world injury and cost is the point. That’s what makes it exciting. And the real world will always bring more chance and variability than a game. But that only addresses the thrill-seeking angle: why would anyone else ever want to experience an aerial perspective in the first person?

Think about live events: sports, F1, marathons, parades. Think about traffic and being able to see what’s ahead. Think about any time you have wanted to be in two places at once or get a different perspective. I wrote four years ago about the FascinatE project to change the nature of television. With drones, the scope for this project is massively expanded. You could experience, first hand, events anywhere in the world.

Smaller, lighter, cheaper

Now think about what the drones will be like before long: smaller, lighter, cheaper, longer lasting. Imagine that your phone network or whoever you purchase your augmented reality equipment (and data connection) from has a network of drones in the sky that you can jump — virtually — into at any time, like a flying car club.

The skill will take time to gain, though these drones will likely have smarter AI assistance than the deliberately manual drone I was flying (crashing). The dislocation of your senses is an acquired taste: the brain’s natural reaction is to trigger a sense of sickness, some believe because the only time we would have experienced such a dislocation in the past was when we had ingested something poisonous and hallucinogenic. But for a generation raised on fast-paced computer games it should be a more easily acquired skill.

In a mixed-reality future, where everyone is equipped with augmented reality and access to endless low-cost, tiny drones, we are all sorcerers, capable of projecting our astral selves across the planet.


