Twice this week, people have relayed to me incredible promises for the power of blockchain and how it will change the way we do everything. This is quite some feat for a technology that few people understand, even in principle, and even fewer can describe with clarity.
I have a number of issues with this idea. It's not that I don't think blockchain has great potential. There's a clear attraction in the robustness of its distributed nature and the potential for transparency that this represents. I can see how it might be valuable for managing contracts and deeds — matters of public record that don't necessarily have tight privacy concerns around them.
But blockchain is an architectural choice, not a technological solution in its own right. It is one way we might choose to tackle particular problems, and one of many. It is suited to some situations and not to others — like the storage of personal data.
However well encrypted it may be, you cannot store personal data in a blockchain-based system and comply with the General Data Protection Regulation (GDPR). The regulation may change, though I'm not totally convinced that it should. Even if it does, it will take a long time.
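The clash comes from the basic data structure: each block commits to the hash of the one before it, so "erasing" a record rewrites a block and breaks every later link. A minimal sketch in Python illustrates this — the field names and toy records here are purely illustrative, not any real blockchain's format:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents, including the previous hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    """Build a toy chain where each block commits to its predecessor's hash."""
    chain = []
    prev = "0" * 64  # genesis placeholder
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    """Check every block's 'prev' field matches the hash of the block before it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice@example.com", "bob@example.com"])
print(verify(chain))   # True: the chain is intact

# 'Erasing' the personal data rewrites block 0, so every later link breaks:
chain[0]["data"] = "[REDACTED]"
print(verify(chain))   # False: honouring a right-to-erasure request broke the ledger
```

This is exactly the property that makes a blockchain tamper-evident — and exactly what GDPR's right to erasure cuts against.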
Why Blockchain is not like IoT or AI
It’s great that people are enthused by the idea of a technology and its potential applications. But blockchain is quite different to other technological buzzwords doing the rounds at the moment, like AI and IoT (internet of things).
These are much broader classifications of groups of technologies (at least in the way that the terms are commonly used — academics might object to broader uses of the term ‘AI’). This leads to criticism that they are nothing more than marketing terms, and sometimes that is fair. These terms don’t define single architectural choices, but rather opportunities to tackle new problems, or address old ones differently. Within these definitions your solution can be endlessly tailored to the challenge at hand.
But the moment you say you're going to apply blockchain technology to a particular problem, you are dramatically narrowing your range of choices — perhaps beyond what is wise.
Blockchain will change some worlds
There will undoubtedly be some industries for which blockchain is a revolutionary technology. Some people will get incredibly rich off the back of it. Ultimately, perhaps it will prove to be a good basis for alternative currencies. But it isn't some universal technological panacea that will solve everything. While it might change some worlds, it won't change every world.
In between the two events my attitude to the coming transformations has become significantly more cautious. Not pessimistic, as suggested by F6S/tech.eu’s Jon Bradford, but concerned that the opportunity is so great that it cannot possibly be fulfilled.
The risk, and the fear, come from the fact that the opportunity is not purely commercial, it is social.
A year ago I spoke about the various frictions in the financial system that were the ignition points for new innovation: the speed and cost of moving money and taking payments; a lack of trust in the big banks. Today more friction points are being exposed and exploited all the time, driving the fintech boom.
But each new innovation seems to be an incremental improvement. A profitable sliver shaved off the giant banks, while the core remains unchanged. As Chris Gledhill, one of the other speakers at Future Money, put it when he left Lloyds to found Secco: “Even outside [the banks], the FinTech communities are innovating around existing financial protocols — making them cheaper, faster, better. They’re not trying to actually reinvent these things.”
What we have with the advent of technologies like the blockchain and its derivatives is an opportunity to reinvent the very structure of our financial system. The balance of power. The centralised nature. Not just to eliminate friction but to embed greater fairness.
This might sound like socialist moralising but really it should appeal to anyone of an entrepreneurial nature. By lowering friction but also redistributing power we can create a much more open marketplace. New platforms for innovation but also lower barriers to raising money, and collecting it from customers. Simpler access to new markets, nationally and internationally.
I am hugely hopeful that our financial system two decades from now looks radically different to how it does today. More open. More distributed. Lower cost and lower friction.
Will that come from individual innovations? Or does it require someone with a larger vision?
I’m not sure.
So I remain hopeful, but cautious.
Last week I was the target of the infamous ‘Windows tech support’ scam. It wasn’t the first time.
If you’re not familiar with this hustle, it typically starts with a call on your landline from an Indian call centre. The person at the other end tells you they are from Microsoft and that they have been monitoring your PC, and that it is infected with a virus of some description. In order to convince you they then walk you through opening up the Event Viewer, an administration tool, in order to show you a series of errors and warnings.
In reality these errors and warnings are completely harmless, but many people are convinced and subsequently talked into installing a remote access tool which then provides the scammers with access to their PC for real, ostensibly so that they can ‘fix’ the problem.
From there it's all downhill: charges, extortion, malware and more.
Now I knew it was a scam from the start. Even if I’d never read about the scam before I got the first call, I would have known it for what it was.
You could say it’s just down to experience. That technology has been a major part of my career and even before that I was mucking around with machines from a very early age. I understood what I was being shown and what it meant.
But actually the understanding that this was a scam came much earlier in the call than the point at which the caller directed me to the event log. I knew the moment they said they were from Microsoft and that they had been monitoring my machine.
A few things gave it away. The terrible quality of the phone line, for one. But even more than that, I knew Microsoft would not be monitoring my machine in this way. I knew they couldn't staff a call centre with people to remotely monitor and manage users' problems without some explicit contract, both because of the cost of doing so and the privacy issues it would raise.
None of this was particularly conscious. It was just that my sceptical spider-sense started buzzing.
I don’t think this instinctive scepticism is solely the domain of the geeky. I believe it can probably be taught. And doing so is one of the key parts of solving some of technology’s major security challenges.
Most of the security threats that we face, at home or at work, still require some form of human co-operation, willing or unwilling. Clicking on a dodgy email or link. Installing an insufficiently-checked app.
A healthier level of trained scepticism would prevent much of this behaviour.
How do we teach scepticism like this? I’ll cover that in my next post.