Screens are a seriously limited form of interaction between us and our digital worlds. Communicating via a screen is like a novice eating with chopsticks: not very efficient. There is only a small number of pixels with which to communicate, whether capturing your touch or returning information via the image. Compare this to the majesty of the physical environment and its 360-degree canvas of sounds, smells, and sensations on your skin. This is why I have long been a believer that the screen has a limited lifespan as the primary component in our digital interactions. The post-screen age is coming.
We hear a lot about ‘big data’, and about the increasing speeds with which data can be delivered to our devices: ‘superfast broadband’, 4G, fibre. We hear little about the speed with which we can consume and process that data. Part of the answer to this comes from improvements in the device’s capabilities, and part from improvements in user interface design. But fundamentally we will always be limited if we stick to the screen as the primary means of consuming and interacting with data.
In a post-screen world we use a rich array of sensors, combined with machine learning to interpret accurately what those sensors are telling us and to anticipate what we might want before we even ask for it. Control input and feedback come from a new range of touch technologies, combined with sound, voice and more visually integrated design features. Take the Lechal Pods I’ve been testing, which give satnav directions by vibrating under your left or right foot at the appropriate moments. Or the Withings Activité Pop, which displays your steps walked with a simple dial rather than a flashy digital screen.
Google and others have long been working on mid-air touch technologies, using radar to expand the canvas of the touch screen and give us new interaction opportunities. Wired reported a few days ago on a UK company looking to return information using directed sound waves to create the sensation of touch on the fingers.