
Frugal Innovation and the Maker Movement

Charles Leadbeater has a new book out. Leadbeater, writer, political adviser and all-round big thinker, has turned his attention to the forces of innovation outside of Silicon Valley: those inventors, activists and entrepreneurs who operate in a world of constraints as opposed to bountiful capital and light-touch regulation. What he calls ‘frugal innovation’. I haven’t had a chance to read the whole book yet, but as usual the RSA podcast is a good start.

The idea of frugal innovation seems particularly relevant having spent the weekend exhibiting at Maker Faire. As regular readers will know, I like making stuff and most recently have been building a home automation system using open source hardware. This is partly for fun, partly because I’ve always wanted a smart home (Tony Stark envy), and partly as an experiment to show what’s really difficult about these things (the user experience design, in case you’re curious).

When I found out Maker Faire was running again this year at the Museum of Science and Industry, I decided to take a stall and show off my work. If you haven’t been to a Maker Faire, think of it as show & tell for grown-ups. There are some stalls there where people sell things, but mostly it’s about sharing what you’ve made (and learned) with other people, both fellow makers and members of the public — lots of them young, which was great. The kids loved playing with Jock the RoboRaspbian, my Raspberry Pi-powered, web-controlled toy robot. And the adults generally liked the idea of keeping an eye on their homes remotely, and being able to cut their energy bills by turning off all the lights their kids leave on.

Maker Faire Manchester

The question I was asked most often was whether I planned to commercialise the system. The answer is always ‘no’. For a start, I’ve sworn off more start-ups for now — at least ones that aren’t connected to my core business. I also don’t think there’s a lot of money to be made from the system I’ve built: Samsung, Apple and the like have the manufacturing capability and supply chain to do things more efficiently and at greater scale than I ever could. But most importantly, what I have built depends hugely on the work of others — something that was true for all the makers I spoke to at the Faire.

The software that sits in each of my home automation nodes is heavily based on ‘RESTduino’, a project with multiple contributors, who have given their work to the community at no charge. The web platform uses libraries for various functions like talking to the nodes, graphing the data, and communicating with the energy monitoring system — all written by others and given to the community at no cost (‘open source’). Even the hardware I’m using — the Arduino — is open: anyone can replicate its design without licence fees.
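To give a flavour of how little glue code is involved, here is a minimal sketch of a web platform talking to one node over HTTP. The node address and pin numbers are made up, and the pin/value URL scheme follows the RESTduino-style convention; the exact paths depend on the firmware version you flash, so treat this as an illustration rather than a drop-in client.

```python
# A minimal sketch of driving a RESTduino-style node over HTTP.
# The node's IP address and pin numbers are illustrative; check your
# firmware version for the exact URL paths it exposes.
from urllib.request import urlopen

NODE = "http://192.168.1.50"   # hypothetical address of one Arduino node

def set_pin(pin: int, state: str) -> str:
    """Set a digital pin HIGH or LOW, e.g. set_pin(13, "HIGH")."""
    with urlopen(f"{NODE}/{pin}/{state}", timeout=5) as resp:
        return resp.read().decode()

def read_pin(pin: int) -> str:
    """Read the current state of a digital pin."""
    with urlopen(f"{NODE}/{pin}", timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(set_pin(13, "HIGH"))   # switch a relay or LED on
    print(read_pin(13))          # confirm the node's view of the pin
```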

All of this means it would be a complex affair to try to scale what I have built into a profitable business. But without it I wouldn’t have been able to build anything at all — or if I could, doing so would have taken ten times as long and cost ten times as much.

In Leadbeater’s last book, We Think, he looked at mass creativity as exposed on YouTube and other social channels. The Maker Faire showcases a more physical level of mass creativity, enabled by the open sharing of different hardware and software components. Every Maker takes those components and builds something uniquely their own, to fulfil their particular needs (or wants). Generally they then share the new components they have built to bridge the gaps back to the community, and the process continues.

As access to these components, and the ability to replicate them, is increasingly commoditised, it will be interesting to see what effect this has on the concentrated innovation of the Apples and Samsungs of this world. Imagine you can search a database of products and systems for a solution to a problem/challenge you are facing. You find a design — of software or hardware — that appeals, and then render it out, either as an installable application or a physical product through a 3D printer and some purchased components.

This exists (to some extent) today in Thingiverse; it’s just not widely used by the average consumer yet. But in just a few years we might all be sharing, or consuming, each other’s frugal innovations.


Intersection: Where Macro Trends Meet Your Market

Futurism is a practice with an increasing level of professionalism and process. Futurologists, trend forecasters, and strategists use a variety of different methodologies to understand what’s happening, filter the noise and try to inform and qualify their predictions.

At Book of the Future we have created our own approach that we call Intersection, to allow us to make practical predictions and inform the advice we give to clients. Specifically it is designed to help us understand and demonstrate how macro trends, related to or driven by technology, will impact on the specific sectors our clients are working within.

3D Lens

The process starts with our ‘3D Lens’: we believe that in a specific place (the UK) and over a specific period (the next twenty years), technology will be the biggest driver of change. This is predicated on the simple fact that within the time and space boundaries specified, we are expecting only steady, linear change in the other classic PESTLE factors — Political, Economic, Social, Legal, Environmental. By contrast technology is advancing at an exponential rate, as described by Moore’s Law, and this advance is touching every area of life and work.

Of course there is a small chance that we will have a revolution, a massive natural disaster, or some other shock event in the UK in the next twenty years, one that may have an enormous impact. But our role as futurists is not to try and second-guess the un-guessable — the ‘black swan’ events. It is to help the organisations that we work with to adapt to the visible future, the one that we believe will define the macro picture and that is already defining it today. As William Gibson said, “The future is already here, it’s just not evenly distributed.” We operate in the small pockets of the future that are here today and we help to expand them to encompass our clients.

Primary Trends

Within the scope of our 3D ‘Lens’ we break the primary technology-driven trends down to five core areas:

Accelerated

Put simply, things happen faster. Rich data moves at the speed of light around the world. Financial transactions take place so fast that they can no longer be handled by humans.

This changes businesses: there’s no value in six-month-old data when someone else can supply it in real time.

And it changes expectations: consumers and business users alike are rapidly frustrated by anything but an instantaneous response.

Ubiquitous

If there is a technological solution to a problem and it doesn’t cause grievous social harm, then someone will probably implement it.

If there are legal, environmental, social, financial or technical barriers that need to be overcome, then they likely will be, and sooner than you think.

Technology has been shrinking in size and cost, and growing in power and usability at an exponential rate for decades.

This trend will continue to the point where technology is near-invisibly integrated into the environment around us, and we are not always aware whether the capabilities we are using are ‘normal’ human ones or augmented by technology.

Agile

Business success in the past was often characterised by the ability to optimise processes, supply chains and prices.

This retains value, but as models, channels and demands are changing faster and faster, success is increasingly defined by agility: the ability to enter and conquer new markets and opportunities fast.

This affects the structure of organisations: rather than slick, vertically integrated monoliths, they need to be stratified into loosely coupled layers.

Each layer interfaces with the other but might also interface with third parties, offering its thin layer of optimised service as a building block in other people’s value stack.

Diversified

Technology has lowered the barriers to market entry. The capital costs of a start-up, barring any physical stock, are trending towards zero.

This means more players in any market but also more models, and more channels.

There won’t be a single paradigm in any industry any more: competitors may supply the same products or services in different ways.

Likewise, many and various sub-cultures and micro-markets will exist on a variety of standardised, open platforms.

Small

There is a growing global community that exists outside of national borders. Its members share an increasingly common culture, albeit coloured by local norms.

Social networks now capture a huge proportion of the global conversation, just as digital media services ensure the wide spread of common cultural reference points.

The last step towards true globalisation will be brought about by the ease with which products can now be moved: as data.

The rules of manufacturing and the supply chain are about to be radically re-written by hyper-local, automated manufacture, placing a huge emphasis on the ready supply of a variety of basic feedstocks.

Market Impact

With each organisation that we work with we look for the market impact of these macro trends, and we focus on areas of stress that already exist in the business. These are often surprisingly easy to find, at least when coming in with a fresh perspective, and appear in all areas of the business. New competition, changes in regulation, rising materials or labour costs, poor customer or supplier engagement, falling budgets or margins, breakdowns in compliance. You can usually find some of these in the industry press before you even start interviewing members of the organisation.

For each stress that we find, we test it against the macro trends and look to see whether it might be mitigated or exacerbated. We try to estimate the scale of the potential impact. Sometimes this is very hard, sometimes it is quite straightforward: if a formerly physical product can now be delivered digitally, the supply chain costs are likely to be orders of magnitude smaller.

Once we have assessed the market impact we can begin to rank the intersections and focus on those that will have the greatest effect. In reality the solutions we design around these intersections are often structural changes that will help to address others as well.
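To make the ranking step concrete, here is a toy sketch of how trend/stress intersections might be scored and ordered. The stresses, trends and scores below are invented purely to show the mechanics; in a real engagement they come from interviews, industry press and analyst judgement, not from any formula in this post.

```python
# Toy illustration of ranking trend/stress intersections by estimated impact.
# All stresses and scores are invented for the example.
impact = {
    # impact[stress][trend]: negative = trend exacerbates the stress,
    # positive = trend mitigates it; magnitude = rough scale (1-5).
    "rising labour costs":         {"Ubiquitous": +4, "Small": +2},
    "new digital-only competitor": {"Accelerated": -3, "Diversified": -4, "Agile": +2},
    "falling margins":             {"Accelerated": -2, "Ubiquitous": +3},
}

# Rank intersections by the absolute size of the estimated effect.
ranked = sorted(
    ((abs(score), stress, trend, score)
     for stress, row in impact.items()
     for trend, score in row.items()),
    reverse=True,
)

for magnitude, stress, trend, score in ranked:
    direction = "mitigates" if score > 0 else "exacerbates"
    print(f"{trend} {direction} '{stress}' (scale {magnitude})")
```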

Narrative vs Empirical

You couldn’t call this process scientific. There is no repeatable experiment. Different people following the same methodology might achieve a different result (though we are trying to formalise the process within the organisation at least, so that there is consistency across future engagements). But the evidence from our interactions with clients over the last 18 months is that it is undoubtedly valuable.

Feel free to use the information in this post to try to replicate the process in your organisation. Or if you’d like some help, you can always drop us a line.


Breaking Band: Building a Better Internet Infrastructure

If you follow me on social media, you will know that my broadband is down. It has been since the storms on Saturday.

I have no problem with my service going down. We’re going to have to get used to crazy weather over the next few years, and this lightning storm was like nothing else I have experienced in the UK. Lightning hit something, or water got somewhere it shouldn’t and things went down.

(Sh)It happens.

The problem is what happens next. Four days of wrangling with poor information, idiotic ‘customer service’ scripts, and under-equipped call centre staff insisting the problem is with my third-party router (only installed because the supplied one was utterly unreliable). Broken promises and repeatedly missed (self-imposed) deadlines. It took a concerted effort on Twitter to make something happen. That something is an engineer who has to come to my house today and work back from there, despite multiple customers in my area being simultaneously taken out (apparently not enough to justify it being called an ‘outage’).

There is no way this is an efficient way to run a business. But based on the chorus of recognition I’ve had across Facebook, Twitter and LinkedIn, my experience isn’t unusual for BT customers.

The Fourth Utility

Me whining doesn’t make for a great blog post though, and that’s not what this is about (though the scripts of the conversations with some of the call centre staff are pretty amusing). My point is that the internet is the fourth utility and it needs to be treated as such.

In the days when email and the web were the only things carried over the internet, it was annoying but not the end of the world if it went down. After all, for the first few years of consumer internet it was usually down: you had to dial up to get access.

But today the internet carries much more than these intermittent services. Across entertainment, environment and security it has become a platform in its own right on which we are increasingly reliant.

Without the internet I can’t access services that I pay for, like Netflix and Spotify. This adds to the ‘lost value’ of it being down.

My Nest thermostat can’t communicate with the world to find out what weather conditions it should be responding to. And I can’t communicate with it to turn it on and off, potentially reducing my comfort and costing me additional money in gas.

I can’t monitor the cameras covering parts of my property, or get alerts from my home automation system about intruders, floods or fires.

These are all what you might call ‘first world problems’ today, and I know I’m in the minority as a user of all these services. But they will be commonplace before long. And I pay good money for my internet platform to be able to use them.

Connected Age

We are moving to an age of ever-increasing connectivity. Some might decry our reliance on machines and their interconnection and I understand their concerns. I always think about the overweight chair-bound slobs in Disney’s Wall-E, beholden to their robot servants. But history suggests that technological advance usually drives life improvements, and human beings find ways to mitigate the risks they present.

If we are to continue the pace of technological advance, and retain our place as one of the more technologically-advanced nations, then we need to change the way we treat broadband provision. We need to stop looking at it as a luxury and understand it as a utility, and frame policy and provision appropriately.

At the moment there is far too little competition at the right levels of the market. Having most providers beholden to BT’s Openreach infrastructure does not drive innovation. The special deals that the government has with the big providers (BT and Virgin) to discount the tax they pay on their cables prejudices the market against new entrants. Regulation discourages the opening up of access to existing assets to allow the sharing of ducts, poles and other routes by providers, utilities and transport companies.

If my connection goes down in a few years’ time, I would like my provider to know before I do and tell me. I’d like them to start the diagnosis and repair automatically, before I have called and without any human intervention. If I’m unhappy with their service, I’d like to know that there are genuine, physical connection alternatives for my service, not just a re-branding of the same pair of wires.

Ofcom has highlighted that our broadband provision is some of the best in Europe. I’d agree with the FSB: it’s not good enough.


Emerging from the Colossal Cave

This post is based on the script from two presentations I gave this week, at Creative Kitchen in Liverpool and Tameside Together in Manchester. You can see the presentation, built using Impress.js, here.


You’re familiar with Moore’s Law, right? Coined by Intel co-founder Gordon Moore back in 1965, it suggests (based on the evidence Moore had witnessed back then) that the number of transistors that can economically be put on a silicon chip doubles every two years. In other words, your computing bang for your buck has been growing at an exponential rate for nearly fifty years.
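As a back-of-the-envelope illustration of what that doubling implies, the sketch below compounds a doubling every two years over fifty years; the starting transistor count is a placeholder, not a historical figure.

```python
# Back-of-the-envelope Moore's Law arithmetic: a doubling every two years
# compounds into roughly a 30-million-fold increase over fifty years.
# The starting transistor count is a placeholder, not a historical figure.
start_year, start_transistors = 1965, 64
for year in range(start_year, start_year + 51, 10):
    doublings = (year - start_year) / 2
    print(f"{year}: ~{int(start_transistors * 2 ** doublings):,} transistors")
```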

This law, and a number of parallel laws about the speed of our digital connections, and the amount of stuff we can stick on a hard disk, have described the technology revolution. Devices getting progressively smaller, cheaper, faster, better.

But I think they miss a vital component of what has changed about technology: it has become more human.

I’ve often described early computing experiences as like travelling to an alien planet. The machine you interacted with was a giant monolith, housed in its own environment, speaking its own language and using its own customs.

Now I have a new analogy.

One of the first computer games I ever played was Adventure, or Colossal Cave as it was otherwise known. It was on a BBC Micro. I was a bit young initially, but I have vivid memories of my dad and cousin getting quite into it. Colossal Cave was a swords-and-sorcery epic delivered entirely through the medium of a text interface: ‘Choose Your Own Adventure’ or role-playing novels (perhaps also only familiar to people of my particular vintage) brought to the small screen.

Now I can’t remember the exact plot or characters of the Colossal Cave, but I imagine, somewhere in the depths of this cave, you may have found an ogre. This, for me, is the early computer. Giant, hulking, slow-witted and inhuman.

A little closer to the surface, and a little more evolved, you may find an ork. These are your pre-GUI personal computers. Communication is easier, but they’re still dim and gruff.

Then come your goblins: smaller, nimbler and more able to interact. Laptops with graphical interfaces, and even access to the Internet.

Today the elf-like smartphone is all the rage. Slender, attractive, and much closer to human in its ability to interact through touch, motion and voice.

But the elf remains at the edge of the cave. It can look out into the light and shout to us, but it can’t influence our physical world. Without some form of prosthetic there are hard limitations on its reach and strength.

The history of computing over the last half-century for me is one of evolution. Of computers evolving towards a state where their interactions with us are not limited to the screen, and instead they can communicate with us on all the levels that we communicate with each other, and change our environment around us.

As designers and coders we have for years been shining a torch into the Colossal Cave, briefly illuminating the intelligence inside so that we can interact. Now is the time for the computers to emerge from the cave and begin to communicate with us on our terms. But they need our help to do so.

Stepping back from my well-stretched analogy for a minute, there is good reason for us to help.

Do we really want to interact with data via a screen? Even the loveliest high-resolution touch display is an artificial environment relative to the majesty of the world around us. It’s also incredibly low-bandwidth. Think about the breadth of senses you have, through which your brain manages to process information, microsecond by microsecond. Why limit ourselves to interacting over a few million pixels when such rich experiences are available to us?

Computers now have so much data at their disposal, and the intelligence to process it, that we can let them be autonomous. The screen and keyboard were created when we had to manually provide them with all of their inputs, all of their instructions. That is no longer the case. Computers can make decisions based on time, date, weather, environment, location, your social graph and any number of other data points. Why bind ourselves to manual control when they are capable of taking on tasks we no longer need to do?

There is a challenge here though. More than one in fact.

The first one is the age-old sci-fi question: should we? Should we give them this much power? Should we leave behind manual labour? What does it mean for jobs?

The answers to these questions are book-length in themselves, but I’m inclined to think we should accept and even encourage this next step in technological progress. For the simple reason that there are more challenges for human minds to tackle. Why not hand the problems we have already nailed over to machines, if they can solve them more efficiently?

The second challenge is around ‘how’. Because for all my bravado and optimism above, this stuff ain’t easy. Or more specifically, the user experience design challenge isn’t easy.

Here’s an example. I’ve been building my own home automation system, as I have documented on this blog. This is both fun (if you’re a geek like me) and a serious experiment: I’m using the smart home as a small scale model for the smart city. The basics are simple: a few hours, a few quid and some cobbled-together code gets you a system that measures all sorts of environmental variables and allows you to trigger electrical devices in response. But as soon as you start trying to design the user experience, it starts to get really complicated.

Take a simple lamp. I want lamps to come on when it’s dark and there’s someone in the room. And more importantly, to turn off when the room is empty, saving me money and cutting my carbon footprint. You’d think the rules for that would be pretty simple, and they are, until human behaviour gets involved.

Because sometimes we like it being dark. When we’re trying to sleep, or get a little cosy on the sofa to watch a film. When the house keeps turning the lights on in those situations, it gets pretty annoying. So what do you do? Create modes? Change behaviour throughout the day? Have a manual override? All of these things are possible but what you realise is that the number of permutations is enormous: automating response to human preference is really hard.
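To make the combinatorial problem concrete, here is a deliberately naive sketch of the lamp rule and the special cases that immediately pile on top of it. The mode names, inputs and the order in which rules win are all invented for illustration; they are not how my own system works.

```python
# A deliberately naive sketch of the lamp rule plus the special cases that
# immediately start to pile up. Mode names and inputs are illustrative only.
from dataclasses import dataclass

@dataclass
class RoomState:
    is_dark: bool
    occupied: bool
    mode: str = "normal"       # e.g. "normal", "movie", "sleeping"
    manual_override: str = ""  # "", "force_on", "force_off"

def lamp_should_be_on(state: RoomState) -> bool:
    # The 'simple' rule: dark + someone in the room = light on.
    want_on = state.is_dark and state.occupied

    # ...and then human behaviour gets involved.
    if state.manual_override == "force_on":
        return True
    if state.manual_override == "force_off":
        return False
    if state.mode in ("movie", "sleeping"):   # sometimes we like it dark
        return False
    return want_on

# Even this handful of inputs produces dozens of permutations once modes and
# overrides are added; every new preference multiplies them again.
print(lamp_should_be_on(RoomState(is_dark=True, occupied=True)))                # True
print(lamp_should_be_on(RoomState(is_dark=True, occupied=True, mode="movie")))  # False
```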

This is why we need more people from the creative and digital industries to start experimenting with physical computing. Sure there are a few forward-thinking agencies playing with wearables and microcontrollers. But think about how many websites are produced each year. Imagine how fast we could change our environment and our economy if we produced even a fraction as many digital, physical devices.

There is a particular opportunity here in cities with a manufacturing heritage (and often a surprisingly strong living industry), and a more recent digital scene. Manchester and Liverpool are the two places where I’ve been spreading this message this week.

It’s a simple message and not a particularly original one, but I hope I have carried it to some new audiences. Computing is emerging from the darkness of the cave. Now is the time to greet it and introduce it to our world.


If you’re interested in booking futurist speaker Tom Cheesewright for your event, you can find more information here.
