“Will we ever put microchips in our brains?”
People ask me this question frequently, often inspired by the latest science fiction showing transhuman characters with implanted digital technology. So sure are they that this will happen, the question is often not ‘if’ but ‘when’.
I have no doubt that at some point our technology will allow us to supersede our biology. At that point, inserting technology into our bodies will be entirely normal. But I think we have a way to go before this is everyday reality. Here are a few reasons why.
The rate of change in technology is too high
How often do you change your phone? Every two years? Every three? How often do you want to have major surgery?
For the last fifty years at least, the performance we get from our digital devices has doubled every two years. We are reaching the limits of what our current technology can do, leading many to see the end of Moore’s Law, the description of this incredible expansion in the bang for buck we get from machines. But quantum computing may be showing early promise of even faster advances: Neven’s Law suggests quantum computing power could grow at a doubly exponential rate.
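To put that doubling in perspective, here is a quick back-of-the-envelope calculation (a toy sketch, using only the two figures from the paragraph above: a fifty-year span and a two-year doubling period):

```python
def performance_multiplier(years: int, doubling_period: int = 2) -> int:
    """Relative performance gain after `years` of Moore's-Law-style
    doubling every `doubling_period` years."""
    return 2 ** (years // doubling_period)

# Fifty years of doubling every two years is 25 doublings:
print(performance_multiplier(50))  # 2**25 = 33,554,432
```

In other words, a roughly thirty-million-fold improvement over the period the paragraph describes. Any implant you fit today sits on the same curve.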
The result is that even if you just look at raw computing capability, the technology we might choose to install in our brains is advancing incredibly fast. And who wants to be stuck with an old model, especially when it is in their brains?
The situation gets even more absurd when you consider the rate of change in other fields of science and technology that will influence how a human/machine interface might be constructed. Everything from the materials from which we might make a neural interface, to our understanding of the brain to which it might connect, is expanding incredibly fast.
This won’t stop some people from doing it. There will obviously be medical cases where an intervention makes sense. And there will be those who want to be at the cutting edge. But for most people, the idea of carrying around technology that is instantly out of date inside their skull will probably put them off.
The security risks are too great to put microchips in our brains
Everything is hackable. There is no such thing as complete security.
To give you an example, a few years ago I brought the hacker Samy Kamkar over to the UK to speak at a conference I was hosting. He spoke about a situation where hackers found a way to get data off an air-gapped computer – i.e. one not connected to any network – inside a locked room. As long as they could get code onto that machine, they could turn its memory chips into rudimentary antennae to broadcast information over radio waves.
Since then people have found lots of ways to extract information from inaccessible machines. This article describes the same approach Samy described but also a way to use status LEDs and surveillance cameras as a communications channel.
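To make the idea concrete, here is a toy simulation of that kind of covert channel (purely illustrative: it models on/off keying of any observable emission, whether memory-bus radio noise or a blinking status LED, as a stream of bits, and does not touch real hardware):

```python
def encode_covert(message: bytes) -> list[int]:
    """Turn a message into on/off symbols, as a compromised machine
    might modulate memory-bus activity or a status LED."""
    bits = []
    for byte in message:
        for i in range(7, -1, -1):  # most significant bit first
            bits.append((byte >> i) & 1)  # 1 = emit, 0 = stay quiet
    return bits

def decode_covert(bits: list[int]) -> bytes:
    """What an attacker's radio or camera receiver would reconstruct."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

signal = encode_covert(b"key")
assert decode_covert(signal) == b"key"
```

The point is how little the attacker needs: anything the machine can switch on and off, and anything outside that can watch it, becomes a transmitter and receiver.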
If you are going to have a neural interface, then it is going to be connected to a network. Otherwise, what’s the point? That makes it much more susceptible to hacking and malware than the carefully protected computers in the examples above. And someone will try to hack it. That is guaranteed.
Are you ready to have your brain hacked? I thought not.
Direct to brain is the wrong interface
Perhaps the strongest reason not to have a neural interface though, is that you don’t need one.
The last sixty years of computing history may show an incredible increase in speed, storage and bandwidth. But what is much more interesting is the use to which all that power has been put. We have used a huge proportion of the available bits and bytes to make machines easier to use. No longer do we have to punch instructions into bits of card, or script complex instructions on the command line. We can shout across the room and the machine does what we want. At least, some of the time.
The need for a neural interface presumes that this trend does not continue, when it is very clear that it will. Because the next natural step, and one that we are working towards quite clearly, is to have no interface at all and let the machines take decisions on our behalf.
Feed an AI data from your life. From your social graph, your conversations, from the video camera you will soon be wearing on your face 10 hours a day to drive your mixed reality glasses, and from the physiological sensors it has on your body. Give it all this information and it will know you well enough to choose – and even buy – things for you. You won’t need an interface to your lights, heating, or door locks because they will all respond to you automatically. More and more of your shopping will be automated. You will take pleasure in doing some things manually, like picking some outfits, or putting on a record, or browsing a book shop. But 80% of things, the ones you would want a neural interface for, will be automated anyway.
The same will be true at work. Why have a neural interface when the machine can craft most of your messages for you, and interact with you in the most human domains: movement and language?
Will we ever put microchips in our brains? Yes. Just not soon.
I’m not ruling out the mass adoption of some form of neural interface. But it won’t happen soon, because the technology isn’t ready and it moves too fast. Because even if it were stable and complete, we couldn’t secure it. And most of all, because we just don’t need to put microchips in our brains.