I’m at a conference in Prague, bringing together the sales team for my client, BTC Europe, from across the continent. Everyone is speaking English, fluently. But will that be the case in thirty years’ time?
Live translation technology is now so good that remote conversations can be held in two languages, with a digital intermediary processing the translation in near real-time. It’s never quite as good as the staged demonstrations, of course, but it is nonetheless impressive. When everyone is sporting compatible hardware (mixed reality glasses), will we even bother to learn a foreign language?
Sadly, I think fewer people will bother. Those who do will recognise the value it adds: an understanding of structure and nuance, and the ability to connect with someone more closely.
In a world where machines can live-translate our words into any language, what else will machines be able to do in real-time with our communication? Given the proliferation of fake news recently, I wondered if we might also have a ‘spellcheck for truth’ built into our written and spoken communication.
This could work both ways. When we’re speaking, or writing, our personal digital assistant (reviving an old acronym, PDA, to describe an assistive AI) might highlight inaccuracies, reviewing what we have written against sources from across the web. I can imagine a subtle red glow at the edge of your field of vision when you say something a little off, through to a migraine-like pulsing if you tell a total porky*.
Of course, these sources themselves will need some sort of accuracy rating, and some people might decide such ratings are themselves a conspiracy and turn off any analysis. After all, some people still insist the world is flat.
Our PDA will also be able to analyse the information we receive, underlining written sentences in a new colour — I suggest a bovine waste shade of brown — to highlight when they’re untrue. Or we could have some sort of animated overlay on someone’s person, since we’re operating in mixed reality. ‘Liar, liar, pants on fire’? That could be entertaining.
Of course, there is no fact-checking source for some lies. But we will all have access to other indicators when someone is not telling the truth. Every pair of mixed reality glasses could, in theory, analyse someone’s voice patterns, heart rate, breathing, and perhaps even their sweat levels, and provide a level of lie detection. Would we find this too invasive? The technology largely exists today, but I’ve not noticed anyone discussing the prospect.
Then there is the question of whether we want absolute objectivity. I think if we tried to pursue it, we would realise a lot of our lives are based on small fictions. There is some analysis of reason that suggests it is entirely retrospective: we take decisions and then retrofit a narrative with facts to justify them. If this is true, deep analysis of the narratives of our lives that we tell ourselves could be deeply uncomfortable.
As always, I think reality will end up somewhere away from either extreme: today’s reality, where untruths seem to have incredible power, and a tomorrow where a fact-driven reality is a little too cold and hard. But that is still an incredible shift to come in the next thirty years.
*‘Porky pie’ = lie, if you’re unfamiliar with the vernacular