Last night I went to speak to the people and partners of Soul, one of the companies that licenses the Applied Futurist’s Toolkit, about the future of marketing. I called the talk ‘AI and AIDA’ because I think the impact of new technologies will fall at every point on the customer journey, from awareness to action.
These are some of the conclusions I reached.
1. The canvas is growing
Following from the five vectors of change that I often discuss with clients, it’s clear that the breadth of channels through which brands can address customers and prospects is growing. This is perhaps most clear in the possibility (or probability) of always-on augmented reality.
In this scenario, every atom of the physical world, and all the spaces in between, become pixels on a four-dimensional canvas through which brands can communicate with us.
2. Machines own the medium
Combine this huge diversity of channels with the infinite possibilities for data-driven personalisation, and it makes sense that most of the brokering of the medium over which we are reached is handled by machines. People simply can’t process the data and deliver the creative at sufficient speed.
The machines will be working inside parameters defined by people, but the increasingly programmatic world of media we see today is really just the beginning.
3. AI on both sides of the battle
It won’t only be brands and agencies applying new technologies to this age-old problem. Facing a cacophony of digital noise, consumers will need to develop better filters, and these will probably come in the form of a digital assistant: one that knows us incredibly well and filters messages on our behalf, batting back the vast majority of bids for our mindspace.
Of course, this only works if the assistants are owned by us, not by the brands or media owners. This may be a privilege that not everyone can afford.
Your smartphone is clunky. Your social networks, rudimentary at best. In a few short years we will look back at our current interactions with technology and laugh. Twitter, seen through the glass of a museum cabinet, will look as archaic as Ceefax. Facebook like MS-DOS. Smartphones like typewriters.
When people consider the rapid pace of technological change, they often talk about the raw power: transistor counts, processor speeds, storage volumes and data rates. They talk less often of the use to which all this great technology has been put.
So much of it has been applied to making technology more accessible, improving the user interface. In fact, perhaps the biggest change in computing technology of the last fifty years is not its speed, size or cost, but its accessibility.
Plot this curve forward and you see the potential for some very interesting shifts in how we interface with our machines. Let me make a few suggestions.
1. You will have an intimate relationship with an AI
The best interface is often no interface at all. Do you want a switch or do you want lights that are on, at the right level, at exactly the right time? Do you want to have to make appointments, pay bills, buy toilet roll (itself perhaps a product at risk, but that’s another story), or would you rather those things just happened?
We’re rapidly reaching the point where an AI (of sorts) can do all these things for us. But right now, it looks like those services will mostly be provided by one of the big tech giants in return for a little cash and a lot of access to our data.
Soon, I think we’ll see more independent services, in which we can have a greater level of confidence that the data we give up isn’t being used to sell us stuff. This will come at a cost, but I think it’s one that many will choose to pay. I’ve seen a growing consciousness amongst young people of privacy, and of data exploitation by large companies. Those who can will invest in a personal AI with which they have an intimate and long-term relationship: an AI that acts as an intermediary between them and the rest of the world, handling much of the administration of life, and interfacing with smart devices on their behalf.
This AI will know everything about you. Protecting it will be of enormous importance.
2. You will live in a mixed reality
The day is coming when you will spend the majority of your waking hours seeing and hearing the world through a digital overlay, where it won’t be entirely clear what is physical and what is virtual.
At first, this will come through glasses. Then perhaps through lenses, though these might present an interesting etiquette problem. Glasses you can easily remove when you want to show someone that you are focusing on them and not the streams of digital data buzzing past your eyeballs. Lenses? Not so much.
In this mixed reality, the nature of the interface between us and our information streams will be radically different. No more scrolling streams of text and images. It will need to be rather more subtle than that. Shades of colour to suggest mood, perhaps at the periphery of your vision. Maybe virtual clouds will warn you of weather changes. Perhaps emoji-fied avatars of your friends inserted into the crowd will tell you of their moods.
3. Your interface goes beyond sight and sound
Today the information we receive from our machines is really only carried on two pathways: sight and sound. Yes, we have basic haptics but, as the saying goes, we ain’t seen nothing yet.
I’ve long been fascinated by the work of David Eagleman on Sensory Substitution — carrying ‘sight’ and ‘sound’ via touch. It feels like we could add a lot of richness to our interface with greater application of this sort of technology — long before we start thinking about neural interfaces. Imagine using heat, or cold, or vibration, to give you a really subtle sixth sense for what’s happening in your wider digital world.
The laws of physics are constant. Once you understand them, your solutions to challenges like transportation are always going to look pretty similar. Which is why the Hyperloop is not new. It combines the technology of pneumatic trains, first deployed in the UK in the 1860s, with the technology of magnetic levitation, first demonstrated as a technology for trains in 1913.
This is not to say that the engineering challenge of the Hyperloop is simple. It will be a great feat when completed, and take advantage of huge leaps in our application of science since the days of the pneumatic train: computing, new materials, batteries, solar power and much, much more.
But the challenges it faces may not have changed much at all.
The hyperloop is, by its nature, an intercity transport platform. To get the advantages of its high speed, you need to be travelling a reasonable distance. Otherwise it has no chance to get up to speed before it is stopping again. Rough maths time: the proposed top speed of the hyperloop is around 1200 kilometres per hour, which is about 333 metres per second. At maximum acceleration, a Tesla car can pull about 1g, or 10 m/s². That would get you to near enough top speed in about 30 seconds.
By comparison, a Pendolino train can reportedly accelerate at a maximum of 0.43 m/s², meaning it takes around a minute to reach 60mph. Let’s say that the hyperloop can comfortably accelerate somewhere between the two. At slightly more than the Pendolino rate, 0.5 m/s², it’s going to take 11 minutes and, perhaps more importantly, around 111km to get up to top speed (distance covered from a standing start is v²/2a). Crank the acceleration up to 2 m/s² and that comes down to 2m47s and about 28km. At 5 m/s² it falls to just over a minute and about 11km.
Now this may seem extreme, accelerating more than ten times as fast as a Pendolino, and I know some people who would be put off by that. Half a g is more than you would experience on takeoff and landing in a commercial airliner, for example (perhaps 2–3 times as much), though you might experience more when banking.
This, though, is what is proposed.
Stop and go
With around 11km to get up to speed, and the same again to stop, on a short hop like Manchester to Liverpool (roughly 30 miles/50km), you’re only going to be at cruising speed for a minute or two. Indeed, the Northern Arc proposal suggests six minutes from Liverpool to Manchester, and seven from Manchester to Leeds. This would be absolutely transformative, genuinely making these cities part of the same economic zone, if the cost of travel is reasonable. Even Glasgow could be under an hour from Liverpool.
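The rough maths above can be checked with a few lines of code. This is a minimal sketch using the constant-acceleration formulas — time to top speed t = v/a, distance covered d = v²/2a — assuming a standing start and a constant rate throughout, and ignoring comfort limits on jerk; the candidate acceleration values are the ones discussed above, not figures from any hyperloop proposal.

```python
# Constant-acceleration kinematics for the back-of-envelope figures above.
# Assumes a standing start and a constant acceleration all the way to top speed.

TOP_SPEED = 1200 / 3.6  # proposed ~1200 km/h, converted to m/s (~333 m/s)

def time_and_distance(a: float, v: float = TOP_SPEED) -> tuple[float, float]:
    """Return (seconds, metres) needed to reach speed v at constant acceleration a."""
    return v / a, v ** 2 / (2 * a)

# Pendolino rate, three guesses in between, and a Tesla-like 1g
for a in (0.43, 0.5, 2.0, 5.0, 10.0):
    t, d = time_and_distance(a)
    print(f"a = {a:>5} m/s²: {t:6.0f} s ({t / 60:4.1f} min), {d / 1000:6.1f} km")
```

Running this reproduces the figures in the text: roughly 11 minutes and 111km at 0.5 m/s², falling to just over a minute and about 11km at 5 m/s².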
But this presents a new issue: how do you find the space for new tunnels — underground or in the air — between and through these densely populated, organically grown and mostly privately owned urban areas? And how do you pay for those works?
If you look at the cost breakdown for HS2, it’s interesting to see that over £1.8bn of the original budget (since dramatically expanded) is allocated to land costs — half of that has already been spent. Once you strip out risk (nearly £13bn of the £33bn original 2011 costings), the biggest line items are tunnels, bridges, viaducts, and the construction works around the tracks themselves. These will be different for hyperloop but it’s hard to see how they will be *that* different.
Future technology in today’s world
I would *love* to see the Hyperloop become a reality in the UK, particularly the Northern Arc project that would have such benefits for the north of England and Scotland. But I fear that its futuristic vision may be stymied, or at the very least, slowed, by the challenges of construction in a complex and expensive environment.
For once as a futurist, I hope I’m wrong.
Where do you end and your machines begin? Do you know?
A year ago, I pointed out at TEDx that we are all bionic now. That the lines between us and our machines have already been blurred. We don’t need chips under our skin or a neural lace in our heads to be bionic. The bandwidth of communication between us and our machines has been so increased by predictive intelligence, rich sensor arrays and rapid interfaces that the machines may as well be part of us now.
Tomorrow I head to Tug Life 3 to talk about “Machines + Human Life: What next? And is it really better?”. And I’ll say that this is just the beginning. And I’ll ask, what do you mean by better? And for whom?
Let’s be clear: the reality of tomorrow’s world is of one populated by cybermen and cyberwomen, unaware of where they end and their machines begin. People who may not be surgically altered for this interface, but who will certainly be adapted to it. People with sixth, seventh, eighth and more senses: for the direction of their friends or favoured hangouts. For the mood on their social streams. For the weather. All fed through permanent digital overlays on their existing senses: vision and sound but also touch and smell.
These people have outputs as augmented as their inputs. Their shouts reach billions, their gestures command objects to move. They can create faster than any human in history — whether objects or documents. Their imaginations made real at incredible speed. The only thing that prevents these superhumans becoming gods is the relative power of their peers, maintaining a balance.
Except there won’t be a balance. On the current course, the gap between the haves and have-nots will widen. Social mobility is hard enough between classes. How do you ascend to godhood? This is the subject of the 2013 Neill Blomkamp film Elysium, for which I did a round of media interviews at its DVD/Blu-ray launch. His answer? Violent revolution.
Maybe we will take a different course. Maybe the technology becomes so cheap and so accessible that everyone can have it without signing their privacy and rights away in return, the model that currently funds many tech-driven services. Maybe we have a less violent revolution in the interim and steer down a more egalitarian path.
I am excited by the prospect of human-driven evolution. The idea that we can be more — better — with the application of technology. This is nothing to fear. It is, after all, what we have always done: extend ourselves with tools. From the flint axe, to the abacus, to the supercomputer, we have always applied science to overcome the weaknesses of our minds and bodies.
But I am fearful for who this accelerated evolution might leave behind. Those without wealth or work may see no benefit without concerted effort from all of us who consider relative equality to be important.