
Humanoid Robots: The User Interface for the Internet of Things?

Two of my projects are merging. This morning, inspired by a second viewing of Iron Man 3 (yes, I want to be Tony Stark), I finally finished assembling RoboRaspbian. And realised that he should actually be part of Project Santander. Cue more modifications…

Quick recap: I like to build stuff, partly for fun, partly to exercise my brain, and partly to test ideas out about the future. RoboRaspbian started relatively simply: I found an original RoboSapien toy, minus his remote control, in a charity shop for £3 or so. Seemed like a bargain but I wanted to be able to control him.

I looked at replacing his microprocessor, but this seemed unnecessary when I could get all the movement I wanted just by sending the right commands to the existing one. I found that someone had turned a Raspberry Pi into a universal remote control capable of outputting those commands, and so the project became ‘strap a Raspberry Pi to the back of a RoboSapien’. With some relatively simple electronics to bridge the two (read: LOTS of trial and error), RoboRaspbian was born.
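For the curious, here's roughly what driving the robot from the Pi looks like in practice. This is a minimal sketch assuming the IR codes have been captured into a LIRC config; the remote name "RoboSapien" and key names like "FORWARD" are assumptions, not the actual names from my setup.

```python
import subprocess

# Hypothetical LIRC remote name; the real name depends on how the
# RoboSapien's IR codes were captured into /etc/lirc/lircd.conf.
REMOTE = "RoboSapien"

def ir_command(code):
    """Build the irsend invocation that fires one IR command at the robot."""
    return ["irsend", "SEND_ONCE", REMOTE, code]

def send(code):
    """Transmit the command via LIRC's irsend command-line tool."""
    subprocess.run(ir_command(code), check=True)

if __name__ == "__main__":
    # "FORWARD" is an assumed key name: whatever was recorded for the
    # walk-forwards button on the original remote.
    send("FORWARD")
```

LIRC does the heavy lifting of timing the IR pulses; the Python layer just decides which command to fire and when.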

Then I thought: wouldn’t it be nice if I could make him talk? More than just his original, limited vocabulary (mostly yawns and farts): get him to converse with the kids. I’d already done this on a previous robot project, Sammy. So I added the text-to-speech engine Flite, and fitted an amplifier into the small amount of spare space inside the robot’s chest. This hooks the audio output of the Raspberry Pi into the original speaker, matching the volume of his built-in sounds.
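Flite is nicely scriptable: its `-t` flag speaks a string of text straight out of the default audio output, so a thin Python wrapper is all you need. A minimal sketch:

```python
import subprocess

def say_command(text):
    # flite's -t flag synthesises and plays the given text directly
    return ["flite", "-t", text]

def say(text):
    """Speak a phrase through the robot's amplifier and speaker."""
    subprocess.run(say_command(text), check=True)

if __name__ == "__main__":
    say("Hello. I am RoboRaspbian.")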

So now I have a talking, gesturing robot. Nothing particularly smart about him though: he is entirely human-controlled. But hang on a second: I’ve spent the last few months rolling out sensors around my house*. They could feed him with all sorts of interesting data. Getting him to tell me when certain areas were too cold, or when humidity levels got too high would be really cool. Plus he can gather all sorts of information from the web: new emails/tweets etc.
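The glue between the sensors and the voice is simple: compare each room's readings against thresholds and turn any breaches into sentences for the TTS engine. A sketch of that logic; the room names, reading format, and threshold values here are illustrative assumptions, not my actual setup.

```python
# Assumed thresholds: alert below 16C or above 65% relative humidity.
THRESHOLDS = {"min_temp_c": 16.0, "max_humidity_pct": 65.0}

def alerts(readings, limits=THRESHOLDS):
    """Turn raw room readings into sentences a TTS engine could speak."""
    messages = []
    for room, data in sorted(readings.items()):
        if data["temp_c"] < limits["min_temp_c"]:
            messages.append(f"The {room} is too cold at {data['temp_c']} degrees.")
        if data["humidity_pct"] > limits["max_humidity_pct"]:
            messages.append(f"Humidity in the {room} is high at {data['humidity_pct']} percent.")
    return messages
```

Feed the output of `alerts()` to the speech wrapper and the robot nags you about the cold spare room, which is either charming or insufferable depending on your household.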

So, we have a new plan: humanoid (ish) robot becomes the user interface for the Internet of Things. Down the line I can add a microphone and make the voice interface two-way using something like PocketSphinx or Google’s Speech API.

This will require one simple (ha!) hardware modification: if the robot is to be on 24/7, I’m sure as hell not running him on batteries.

*Note: As I write this, the blog is running rather behind my actual progress with the home automation project, which is now stably monitoring multiple conditions in four different rooms.



Take a Picture, With Your Mind.

Imagine taking a picture just by thinking.

You train a neural interface to recognise the patterns of activity that are fired when you hit the shutter button on a camera, or your phone. Then when you think that thought, the neural interface triggers snapshots from discrete — and discreet — wireless cameras distributed around your body. One in your glasses, one in your shirt button, one in your shoes.

Software in the cloud stitches the images together into a multi-megapixel whole and works out what the likely focus was meant to be, dynamically polishing the output, sharing it to your social streams and storing it for posterity.

This isn’t some wild sci-fi fantasy. It’s a very close reality.

Neural interfaces are already consumer items, available for just a few tens of pounds in gaming systems. Recognising the same brain patterns being repeated should actually be relatively simple.

At Mobile World Congress last week Rambus showed me a camera the size of a pinhead. It needs no lens, and will cost less than 20p per unit once it is manufactured in volume.

Wireless data standards for short range transmission advance apace. Power requirements at the personal area range are low. And with the demonstrations the Alliance for Wireless Power showed me last week, a wireless charging unit could keep button-sized batteries powered up all day.

Send the images up to the cloud over 4G — it doesn’t have to be instantaneous if you’re not stood there holding your phone and waiting — or Wi-Fi. There’s loads of computing grunt on tap and automatic post-processing is already well developed.

This is real. The question is, do we want it?

People are already uncomfortable with Google Glass, but that stands out a mile. What happens when your smart wearable devices disappear into the fabric of your everyday clothing? It’s a theme to which I keep returning because it is imminent.

It’s up to us to discuss this stuff and set some rules, if we want them.


Smart Cities: We Need to Talk

Amid the hype and bluster of Mobile World Congress it is refreshing to hear someone admit they don’t know the answer. Francisco Jose Jariego Fente is Telefonica Digital’s Industrial Internet of Things Director. The question he willingly accepts he can’t answer is admittedly a tricky one: what is the business model for smart cities?

Telefonica has more evidence than most for what the answer, or answers, might be. Its project in Santander has proven there is little money to be made in the hardware: the city rolled out 12,000 sensors funded by a relatively small €1m from the EU. And the sum of the data collected from those sensors, just 5MB per day, similar to a single photo or MP3 file, suggests there is very little to be made in its carriage or storage.
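The per-sensor arithmetic makes the point vividly. Spreading 5MB a day across 12,000 sensors:

```python
sensors = 12_000
daily_total_bytes = 5 * 1024 * 1024   # 5MB per day across the whole city

per_sensor = daily_total_bytes / sensors
print(f"{per_sensor:.0f} bytes per sensor per day")
```

That's a few hundred bytes per sensor per day: less than this sentence. Nobody is building a carriage or storage business on traffic that thin.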

The biggest challenges, and hence the biggest potential revenues, come in processing and presenting the data in a useful form. This is where Telefonica has focused its efforts and is looking to commercialise the learning from the Santander experiment. IBM too has recognised that this is where the value lies.

But this value only becomes tangible when the rest of the smart city ecosystem is in place. Cities are complicated. They are managed by multiple authorities and commercial parties. They evolve constantly, reacting to the needs of their inhabitants. And those inhabitants themselves, who in many ways represent the city much more than its buildings or infrastructure, have a say in how it develops: any executive control is limited.

Building a smart city on a green field site like South Korea’s Songdo is one thing. But there are huge drivers to smarten all our cities. And that means retrofitting technology, processes and partnerships to an existing, evolved organic environment. One model isn’t going to fit every city. Making it happen will be a process of negotiation, integration, iteration. And there will be lots of different parties involved: political leaders, civil servants, service providers, technology companies, health services, police forces, property owners and most important of all, the citizens themselves.

Brokering a framework that keeps all of these people at least relatively happy, while delivering on the promise of smart cities, is no small task. It will only come through dialogue. But it’s a conversation we need to have. Because the promise of smarter cities is too great to ignore.

In the first instance there are simply lower costs, both financial and environmental. There are lifestyle benefits: less traffic, quicker parking, more efficient public transport. Taking things a step further, there are advantages to planners: recognising a noise problem in one place might inform a change in planning for a new building nearby, perhaps requiring materials that absorb or deflect sound, or the planting of trees as a screen. Ultimately, there is the prospect of properly understanding our cities and the interactions that make them live, so that we can make more informed decisions about their future, in local government, in corporations, and as individuals.

Smart cities have long held promise, but the complexity of the problem they present has slowed their progress. To get things moving, as we need to do, a broad and open conversation between all of the interested parties is required: to agree how the interactions will be managed and, vitally, how the costs and rewards will be divided.

Tom Cheesewright