“Welcome home, Tom. It’s a little cold. Would you like me to put the heating on?”
My house can talk. And text.
OK, it’s not my actual house. That would be weird. It’s an instance of Home Assistant, the rapidly-evolving open-source home automation software. Following a little configuration work over the Christmas break, I can now converse with my ‘house’ using Telegram, the WhatsApp-like messaging service. My house tells me things, like when people are arriving or leaving, and it can take instructions, like turning on lights, music or the heating.
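For readers curious what this looks like under the hood, the behaviour described above maps onto a standard Home Assistant automation: a state-change trigger, an optional condition, and a notify action. The sketch below is illustrative only — the entity IDs, the notifier name and the temperature threshold are hypothetical, not taken from my actual configuration, and it assumes you have already set up the Telegram bot integration.

```yaml
# A minimal Home Assistant automation of the kind described above.
# Entity names (person.tom, sensor.living_room_temperature), the
# notifier (notify.telegram_tom) and the 18°C threshold are all
# placeholder assumptions for illustration.
automation:
  - alias: "Greet Tom and offer heating"
    trigger:
      - platform: state
        entity_id: person.tom
        to: "home"
    condition:
      - condition: numeric_state
        entity_id: sensor.living_room_temperature
        below: 18
    action:
      - service: notify.telegram_tom
        data:
          message: >
            Welcome home, Tom. It's a little cold.
            Would you like me to put the heating on?
```

There is no machine learning here: the “intelligence” is just a hand-written rule, which is exactly the point made below about the system being entirely event-driven.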
Over time I can add more services. I’m thinking a concierge service for visitors might be quite cool: something that sets up their Wi-Fi and gives them access to the house’s services.
What’s the point of this?
For me this is a rudimentary and very small-scale example of what I mean when I talk about ‘living cities’. Living cities are what happens when you bring together sensors, actuators and intelligence to respond to the needs of citizens: when you go beyond merely ‘smart’ to bring some warmth and engagement.
There’s no real intelligence in my system — it’s entirely driven by events triggering certain messages. But even with this very simple technology, the house can start to engage with my needs and respond to them in a much more human way than it otherwise might. It can know that I usually like a certain temperature. That I like a certain playlist when I’m cooking, or the lights a certain way when watching a film. And it can tell me that it knows, in quite a natural fashion, and offer solutions to me at appropriate moments.
To truly fit my definition of a ‘living’ system, my house would need ‘real’ intelligence: perhaps predicting needs I hadn’t explicitly expressed. And it would need to be able to evolve its behaviours — and even its physical space — to better meet those needs. Sadly, I can’t 3D-print walls yet. But it’s easy to see that technology coming.
In the meantime, it’s nice having a house that can look after itself. And me.