I returned to an old theme for a dinner with a selection of property, construction and retail professionals recently: the merging of the physical and digital worlds that is happening as each invades the other’s domain. This is some of what I said.
Perhaps the more advanced offensive is the progress of physical objects into the digital domain. The natural result of Moore’s Law advances and sheer economies of scale is that just about any device can become a ‘smart’ device for the price of a few pounds.
Increasingly, the most costly aspect of this transformation is the question of power, so cheap have processing and connectivity become. Just a few pence now will add a rudimentary chip with enough processing grunt to handle basic applications, and a Wi-Fi or Bluetooth connection.
If you want it to run unplugged for more than a few minutes, that costs a little more: batteries are still expensive. Eventually, though, we will solve this problem. And though physics might start to present a challenge to Moore’s Law in its most technical form, the exponential increase in computing ‘bang for buck’ will continue.
The result is that just about everything gets connected in one way or another. However small the utility of making something ‘smart’, the cost will ultimately fall below that threshold.
A living city
How does this affect the worlds of the property, construction or retail professional? A few examples.
Firstly, there will be no aspect of a building or space that you cannot measure or control, at any point in its life cycle. More than that, there is likely to be a level of automated intelligence controlling and managing all of these assets.
Whereas a building management system today might maintain environmental conditions, monitor fire safety, and minimise energy consumption, future systems might be able to wield much greater control and do so in collaboration with other buildings and spaces around them.
Imagine a building that largely builds itself, to the specifications in the design DNA that an architect defines. Imagine it can continuously optimise its internal layout to the needs of its users. Imagine it can collaborate with other nearby intelligences to maximise safety, comfort and utility for the people around it.
In the future our self-driving cars will be navigating their way around self-managing buildings, themselves an ecosystem of smart devices.
The virtual made real
While physical objects are becoming increasingly digital, so too are digital objects becoming increasingly physical. The combination of artificial intelligence with a range of new sensing and display technologies means that digital artefacts and devices increasingly interact with us in physical ways: voice and gesture, observation and inference.
Today’s example of this is perhaps the rapid growth in voice assistants: Amazon’s Alexa and Google’s Assistant. In thinking about the future we have to consider how these limited intelligences will continue to evolve, but also another key technology: augmented reality.
Augmented reality allows us to over-write the reality we see and hear with a new, digitally-generated reality. Every place and face can be changed. Every surface becomes a screen, every space an opportunity for a digital object — or person. Just don’t try to sit on a digital chair.
And augmented reality captures everything in our physical environment in digital form. Full-time cameras and microphones will allow tech giants like Google to complete their mission of indexing the world’s information, right down to the location of every thing, at every point in time.
The new PDA
Geeks of a particular vintage will remember the PDA, the pre-smartphone pocket computer. I think the name can now be recycled for a class of cloud-connected intelligent assistant that begins to take on many of the more mundane aspects of running our lives — and our businesses.
Imagine outsourcing the routine tasks that occupy your brain: administration, travel bookings, ensuring you have fresh milk in the fridge. We’re reaching the point where we can — and will — trust our digital assistants with our memories, our chores, and our payment cards.
With the information captured through our augmented reality devices, and the interactions we have with our voice assistants and social graphs, these new PDAs will know everything they need to know about us, and our world.
The future is often discussed as if it is a singular thing. As if everyone will experience the same future. They won’t, of course.
At the most fundamental level, even before you take into account geographic location or social and economic circumstances, we all experience the present slightly differently. There are as many worlds as there are people, each subtly shifted through the lens of our own interpretation.
For every single present experience, there is an infinite number of possible futures, all shaped by macro factors and personal choices between now and then. We can paint possible futures for the many based on a balance of probabilities, or we can help individuals and organisations to plot the path to their future, understanding the likely external influences that will affect their journey.
Doing this requires a clear understanding of the picture today, both objectively and subjectively: what are the facts and how do you perceive your position? You need to know where A is before you can plot a route to B. We understand this primarily by looking at pressure points: where are the existing stresses — particularly for organisations. Change is more likely to come first at these stress points.
Then it’s a question of understanding the macro factors: what is driving change and how? How will these factors interact with the pressures you already face? Will they create new opportunities or expand existing threats?
We can map these things from the perspective of a single individual, a company, or a whole industry. But the critical point is that it is your future, not a generic future. If you want to know what’s around your corner, you need to plot your own path.
I have long predicted the end of Facebook. As always with futurism, predicting ‘what’ is easier than ‘when’, and I’ll admit, I’ve been surprised at the network’s longevity and adaptability.
Through smart acquisitions and constant innovation, Mark Zuckerberg has plotted a route to survival and incredible scale for what remains the world’s number one social network. People questioned the platform’s ability to adapt to mobile, but it has thrived. It has found routes even into countries with limited internet infrastructure. And it has learned to monetise the incredible amount of traffic it receives, and the personal data it collects. Most surprisingly, it has recently taken a decision that might hurt its short-term revenues in favour of building sustainability through greater customer engagement and satisfaction.
But I still say that, eventually, the day will come when Facebook is outpaced. When it fails to spot the incoming changes with sufficient alacrity, and is left to fall by the wayside.
How will this happen?
Facebook has survived by being extremely cognisant of the threat it faces: disengagement from the young. Once this key taste-maker group shifts, the rest of the market will follow within a few years — as will advertisers, keen to reach young spenders. Both anecdote and research suggest this is already happening. Knowing this, Facebook acquired some of the likely destinations for this flight — Instagram and WhatsApp — and adapted their features to replicate the best of the other options. Facebook now owns three of the top four, and four of the top seven social networks by active users.
At the same time, obvious competitors to Facebook have made a series of serious mis-steps: Twitter’s lack of clear direction and failure to deal with abuse; Snapchat’s distraction with physical products and a failed interface redesign.
The result is that Facebook as a company has entrenched its hold on the current generation of social media — at least in northern/western markets. Displacing it will require not just great timing and creativity, but also an external disruption that creates a new opportunity.
The first disruption was mobile: if Snapchat or Twitter had executed flawlessly on mobile, or Instagram had chosen to remain a challenger, we could have seen a very different social landscape now. The next disruption will also likely be around the primary hardware through which we interact with our social networks.
It looks unlikely to be voice assistants: it’s hard to see voicemail re-invented as a social network in a very visual age. Though there might be an opportunity for audio-stickers — sending your friends funny sound effects or altered audio through this network of devices.
Instead the hardware shift that might create a crack in Facebook’s defences is augmented reality. The company is well aware of this and has been investing in AR for some time, building up strengths while the phone is still the platform, but aware that one day there will likely be a shift to a wearable system.
But these transitions are always tricky, especially when an established business has a portfolio to defend. New entrants can create a pure, powerful offering that captures the zeitgeist, and if they choose, remain independent long enough to challenge the incumbents.
This is what happened to Kodak. And one day, Facebook will have its Kodak moment.
Search for the term ‘inverting the pyramid’ and you will find pages devoted to the classic book on football tactics by Jonathan Wilson.
This blog post is not about that.
Dig a little deeper and you will come across the concept of the ‘reverse hierarchy’, where leaders imagine themselves as the support network for customer-facing staff imbued with greater control and responsibility to make them more reactive to customers’ needs.
This blog post is not about that either.
Imagine if something could literally invert the hierarchical model of organisations, leaving fewer people at the bottom than at the top. This is one of the potential consequences of automation.
That’s what this story is about.
Role-related automation risk
The most automatable roles seem to be those lower down the hierarchy: production, logistics, customer service, administration. No role is completely replaceable by machines, but automation can potentially slash the number of people employed in a single role.
In a call centre, this might mean first-line support is automated, leaving humans to manage the automation and handle second-line support — the more complex calls or unusual enquiries. In a factory, the robots might do the making and the humans handle the exceptions. Already in warehouses the machines do the heavy lifting and the humans do the complex packing.
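The first-line/second-line split described above can be sketched in a few lines of code. This is a toy illustration, not a real call-centre system: the canned answers and keyword matching are invented for the example, and a production system would use far more sophisticated intent recognition. The point is the shape of the pattern: automation handles the simple, common enquiries, and anything unrecognised escalates to a human.

```python
# Illustrative sketch: automated first-line support with human escalation.
# The answers and keywords below are invented for this example.

CANNED_ANSWERS = {
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
    "reset password": "Use the 'Forgotten password' link on the login page.",
}

def handle_enquiry(text):
    """Return (handled_by, response) for a customer enquiry.

    Simple, recognised enquiries are answered automatically; complex
    or unusual ones fall through to second-line human support.
    """
    lowered = text.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in lowered:
            return ("automation", answer)
    return ("human", "Passing you to one of our team.")

print(handle_enquiry("What are your opening hours?"))
print(handle_enquiry("My invoice is wrong and I don't understand why"))
```

The humans in this model spend their time on the exceptions, exactly as in the factory and warehouse examples: the machine handles volume, the person handles complexity.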
The result might be that the human part of the hierarchy becomes inverted: the lower down the hierarchy, the more machines there are, and the fewer people.
Augmenting the irreplaceables
Higher up the hierarchy, where human skills are less easily replaced, the ratio of machines to humans might be very different. If the automation model boosts profitability, growing the business might mean recruiting more people at the top — at least in proportion to those roles closer to production and customers that can be scaled with more investment in machines.
Of course you’re unlikely to have many CEOs or FDs, but below that board tier, there might be significant expansion. Yes, US companies may find themselves with even more ‘Vice Presidents’.
The result may look something like the image. A broad base of machine labour, with a high ratio of virtual employees (however you might count them — an interesting question in itself) to humans. Rising to a wide tier of senior humans, supported in something closer to a 1:1 ratio by machines that augment their capabilities. Above that, the classic pyramid is likely re-established.
Perhaps some organisations already look like this? Perhaps it has only been recognised as more of a flat hierarchy so far.
Either way, it will be interesting to watch.
What are we going to do in the future? For those compelled by the arguments for mass technological unemployment, it’s a constant question.
Maslow’s hierarchy presents a useful structure for the two critical parts of the answer.
Without work, how do we earn our safety and security, food and shelter, the bottom tiers of the pyramid?
Even if we can address those issues — driving the debate about Universal Basic Income — how do we fulfil our needs at the upper tiers? Self-esteem is often deeply connected to the value of our work. More practically, many of us meet our partners through work.
The answer often given is that we are freed for more creative pursuits. This doesn’t mean we become a nation of watercolour artists, with fields of easels at every beauty spot. Creativity has many faces, and freed from commercial constraint we may see the progress of scientific discovery accelerate, as well as cultural advancement.
For this to be true though, and for creative activities to provide the emotional returns that we all require, we need to know how to engage with them. And right now, I’m not sure that we do.
Being consistently creative in a rewarding way requires a great degree of discipline. Not just to keep you focused on a task, but to know when you cannot focus and to release yourself to freewheel while the brain recharges.
This is unscientific and vague language — I’m afraid I don’t have the terms to properly describe the process. But it’s familiar to anyone who has to create for large parts of their working life, whatever form that creativity takes. I’ve been writing professionally, in one form or another, for 18 years now. But I still have to wrestle myself away from the keyboard when the words aren’t coming. I have to break the conditioning that says good work is about effort and application, something that even now is deeply embedded.
If we are to be a nation of creatives at some point in the future, perhaps we have to start changing the way we teach people to work. And even if the robot jobs apocalypse never comes, this may just be a positive step anyway.
Once a month I head into the BBC to review the newspapers on Radio Manchester. It’s an early start that rather wipes out my prime working hours, and it doesn’t pay (unlike most of my media work these days). But it’s fun, and it’s a good excuse to read all the papers, side by side, for a couple of hours. I write it off as research, both into the big stories of the moment, and into the state of media and debate.
Much of the ‘news’, certainly on the front pages and in the comment sections, could best be categorised as ‘disaster porn’. One way or another, the world is going to hell in a handcart. I’m no media historian but I’d be willing to bet this isn’t an entirely new phenomenon. Risk represents an engaging story, especially when that risk is to your health, wealth, or firmly-held opinions.
The reality of course is (mostly) rather different. Taken over the long term, we have drastically improved our health and wealth, and arguably our firmly-held opinions: we no longer blame witches for misfortune. The path of progress will always be unsteady, and individuals and whole generations may suffer as a result. But as a species we seem to be on a fairly positive track, with one clear caveat: much of our progress has been at the expense of our environment. The papers that most enjoy their disaster porn stories seem less keen to report the realities of this issue.
The nature of my role and expertise means that the societal ‘disaster’ on which I am most frequently asked to comment is the intervention of technology in social interactions. Believe the papers, and the latest book-peddling therapist (sometimes with, but more often without, real evidence), and the internet is turning children into screen-addicted zombies, devoid of social skills and entirely dependent on their various devices for social interaction.
This narrative holds just until I get to the school gates and witness a few hundred children running around, shouting, laughing, playing football and tag, climbing, swinging, and living out imaginary worlds, as they always have.
Just yesterday, on TalkRadio, I was asked to comment on Apple shareholders’ push for greater parental controls on iOS devices. There’s a sense that ‘something must be done’ and that it is the tech companies that must do it. But I’m sceptical — both of the problem, and of the measures proposed to address it.
Firstly, if there was a generation of children who were negatively affected by excessive access to the internet, then I think they’re already in their teens and twenties. The ones who were young when smartphones and social media were novelties, and parents had even less concept of how — or why — to control access than they do today. I can’t evidence this, but it’s a hypothesis I’d be willing to test if anyone has the funding.
For the most part, the young people I see use their digital devices predominantly for three things: organising future events, capturing those events, and retrospectively discussing those events. Their digital interactions facilitate and revolve around their physical interactions. Yes, they love gaming, and YouTube, but mostly as a healthy part of their entertainment mix.
Secondly, even if there is a persistent negative effect beyond a limited population of children with addictive or social disorders, or suffering the effects of neglectful parenting, I’m very sceptical that technical solutions will offer any respite. What value are additional parental controls in a world of connected devices? How does that do anything but disguise and delay a potential problem until children leave home?
In our assessment of both problem, and solution, we underestimate the value of the human touch. How much we still crave social interaction — physical interaction even, which is what the teenagers are still chasing, just as they always were. And how much we need guidance, as we learn to explore the digital world, which is no less a mix of good and bad than the physical world it supplements.
“Welcome home, Tom. It’s a little cold. Would you like me to put the heating on?”
My house can talk. And text.
OK, it’s not my actual house. That would be weird. It’s an instance of Home Assistant, the rapidly-evolving open-source home automation software. Following a little configuration work over the Christmas break, I can now converse with my ‘house’ using Telegram, the WhatsApp-like messaging service. My house tells me things, like when people are arriving or leaving, and it can take instructions, like turning on lights, music or the heating.
Over time I can add more services. I’m thinking a concierge service for visitors might be quite cool: something that sets up their Wi-Fi and gives them access to the house’s services.
What’s the point of this?
For me this is a rudimentary and very small scale example of what I mean when I talk about ‘living cities’. Living cities are what happens when you bring together sensors, actuators and intelligence to start to respond to the needs of citizens. When you go beyond just ‘smart’ to bring some warmth and engagement.
There’s no real intelligence in my system — it’s entirely driven by events triggering certain messages. But even with this very simple technology, the house can start to engage with my needs and respond to them in a much more human way than it otherwise might. It can know that I usually like a certain temperature. That I like a certain playlist when I’m cooking, or the lights a certain way when watching a film. And it can tell me that it knows, in quite a natural fashion, and offer solutions to me at appropriate moments.
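The events-trigger-messages pattern described above can be sketched as a tiny event bus: named events fire handlers, and each handler composes a natural-language message. Everything here is illustrative — the event names, handlers and temperature rule are invented, and a real Home Assistant setup would express this as YAML automations and deliver the messages through its Telegram integration rather than in Python.

```python
# Minimal sketch of event-driven home messaging: no intelligence,
# just events mapped to handlers that produce human-readable messages.

class HomeEventBus:
    """Maps event names to handler functions."""

    def __init__(self):
        self.handlers = {}

    def on(self, event_name, handler):
        """Register a handler to run when event_name fires."""
        self.handlers.setdefault(event_name, []).append(handler)

    def fire(self, event_name, **data):
        """Fire an event; return the messages its handlers produce."""
        return [handler(data) for handler in self.handlers.get(event_name, [])]

bus = HomeEventBus()

# A presence event triggers a greeting, plus an offer based on a simple rule.
bus.on("person_arrived", lambda d: f"Welcome home, {d['name']}.")
bus.on("person_arrived",
       lambda d: "It's a little cold. Shall I put the heating on?"
       if d["temperature_c"] < 18 else "The house is warm.")

for message in bus.fire("person_arrived", name="Tom", temperature_c=16):
    print(message)
```

Wiring the output of `fire` to a messaging service is all it takes to make the house ‘talk’: the warmth comes entirely from how the messages are phrased, not from any underlying cleverness.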
To truly fit my definition of a ‘living’ system, my house would need ‘real’ intelligence: perhaps predicting needs I hadn’t explicitly expressed. And it would need to be able to evolve its behaviours — and even its physical space — to better meet those needs. Sadly, I can’t 3D-print walls yet. But it’s easy to see that technology coming.
In the meantime, it’s nice having a house that can look after itself. And me.
One of the primary objectives of the proto-science of alchemy was to turn lead into gold. It seems a rather base goal (forgive the pun), and more in the realm of magic than technology. Nonetheless, alchemists around the world laid down some of the foundations of modern science.
The alchemists never succeeded, but as it turns out, you can turn lead into gold. Since every element is merely a collection of protons, neutrons and electrons, if you can manipulate the content of a nucleus you can change lead into gold. People have done so. Unfortunately, the process isn’t exactly practical, requiring huge amounts of energy from a particle accelerator, or depositing the lead in a nuclear reactor.
Selling that might be even harder than selling Ratner’s jewellery.
Early in 2017 a team of scientists took the next step in creating truly programmable organisms. We may look back on this as synthetic biology’s ‘Turing moment’, the point at which an expensive specialist machine starts to become an affordable generalist platform.
Imagine being able to program a bacterium to produce materials, biofuel, cotton or spider silk. Imagine being able to program it to make medicines. Program one, feed it and watch it divide, exponentially increasing your production capacity.
The potential is endless, as are the pitfalls. Such power needs careful constraint. And yet, it is following the same path of all technologies: it is becoming cheaper and more accessible all the time.
Basic genetic engineering is already at the point of being a toy, in terms of its cost and ease. How long before I can buy a genetic programming platform as readily as a 3D printer?
Technology: the tools by which we manipulate nature
I have rather pigeonholed myself as a ‘tech expert’ over the years. Occasionally I struggle against this self-applied categorisation, worried that it limits my scope and people’s faith in my advice.
But then I follow little rabbit holes of research into alchemy (inspired by a throwaway comment on a recent episode of The Infinite Monkey Cage) and synthetic biology, and realise that technology — properly defined — is barely a pigeonhole. It represents the grand scope of our ability to affect our environment, an endeavour that I believe defines us as a species.
This is why I start with technology — in the broadest sense — when looking to the future. Technology is the means by which we make change, whether intended, or unintended.