Yearly Archives

85 Articles


The Quantified Self and the Future of Medicine

I have a chest infection. I am sure of this without tests because I get one pretty much every year. I’m asthmatic, which makes me more susceptible to infections, and means that they tend to take hold. One year I was taken to hospital in an ambulance when the oxygen reaching my blood got very low. Not a situation I want to repeat.

This morning I called my GP for an emergency appointment to be told they were trialling a new system: triage by phone before handing out appointments. Ten minutes after I called, my GP called me back. This is pretty much our conversation, constructed from memory and edited only for brevity.

Doctor: “What seems to be the problem?”
Me: “I think I have a chest infection again. I’m coughing up some unpleasant day-glo stuff, and wheezing a lot at night and in the morning. It’s been going on for about two weeks and I’ve been doubling up my preventer and taking six to eight blasts of Ventolin each day. My peak flow isn’t down too far but I think that’s only a matter of time.”
Doctor: “Are you allergic to penicillin?”
Me: “No.”
Doctor: “OK. I’ll prescribe you five days of antibiotics. The script will be ready at the front desk.”
Me: “When can I pick it up?”
Doctor: “It’s there now.”

I’ve cut out a brief conversation about whether I needed an appointment and my requesting an extra inhaler, but you get the idea. My (always excellent) GP dealt with my problem quickly and kept me out of the surgery, saving time and money.

This process is made easier by the fact that I present my GP with evidence when I say there’s something wrong. I keep a decent record of my inhaler usage and my peak flow. I’m also recording my physical activity at the moment and sometimes (when trying to lose weight) record my calorie intake and daily weight. If everyone could present their GP with this sort of data, we could probably save a lot of time. And money. And lives.

Imagine if there were a cut-down version of the device on my wrist (an Oregon Scientific Dynamo) in each of my inhalers. Instead of supplying a new body with each inhaler as manufacturers currently do, you would get one with your first prescription containing a tiny, low-power Bluetooth chip. Every time you took a blast it would sync up to your phone and store the data locally or, if you were happy to share it, in the cloud where it could be accessed by your GP.

You could set flags on this data to take action even before a patient has called in. Much cheaper to intervene when a situation can be controlled than to see that patient land in A&E. For example, if it’s winter and my inhaler usage goes up by a certain percentage, it may be worth giving me a call or even dropping me an email or text, prompting me to call in if I feel unwell.

Whether you like the privacy implications of this or not, the chances are something like it will soon be a reality. The reasons are the simple economics that we hear about every day: we have a large, ageing population who are likely to live a long time and be very expensive to support. We need to do what we can to a) improve their quality of life and b) reduce the burden they place on the health and care services. Monitoring people’s health and intervening early, when the outcomes will be best and the costs at their lowest, will play a large part in achieving this aim.


Give Your Good Ideas Away

I’m a strong believer that when it comes to start-ups, ideas are cheap, because the people with the gumption to turn an idea into something real are few and far between. I simply don’t believe a business idea on its own is worth very much.

That’s why I’m always sceptical when people ask me to sign an NDA before they’ll tell me their idea. What do they think I am going to do with it? Find a team and build a business myself faster than they can? Sell the idea to the hungry crowd of waiting investors who will drop millions on the strength of a few words? Anyone who has witnessed the reality of getting a new start-up off the ground will know that the idea is a small part of the recipe for a successful business. Important, but in itself, inconsequential.

I wish this were not the case. Because I am full of ideas. If I could sell each good one (where I determine what is ‘good’) for a few quid I would be a wealthy man indeed. But sadly that is just not the way business works.

So instead I have decided to give my ideas away. They are yours, freely released under a Creative Commons Attribution licence. In other words: use them, mix and mash them up, make millions if you can. All I ask is that if you do, you give me the credit for being the originator.

Here are the first two, the second of which just came to me over lunch with a fellow (and soon to be very successful) entrepreneur. If they already exist, forgive me. I am not omniscient and it is too time consuming pretending to be so.

Social Seller App

There are lots of ways to sell online, from eBay, Gumtree and Craigslist to Twitter, Facebook and Pinterest. Just as managing lots of social networks takes time, so does trying to list and sell items across these various networks.

Wouldn’t it be great if there were a Buffer-style app for listing goods across multiple social networks? One that handled the hosting of pics, the collection of payments (on those platforms where this was not integrated), communication with buyers, and the withdrawal of an item from all networks once it had sold on one.

You could even use it to schedule items across different platforms to get maximum returns: e.g. “try auctioning this on eBay but if it doesn’t hit its reserve, withdraw it and list it across these other platforms at a fixed price.” Wouldn’t that be useful?

The app could make money through fees on transactions, advertising on the hosted image pages, and subscriptions for pro level users at a variety of tiers.

Personalised Shopping Mag

Discovery is ugly in eCommerce. It’s great when you know what you want, but there’s no good equivalent of the idle browse of a bookshop or record shop that leads you to unexpected purchases. I do get some geeky pleasure browsing cars or electronic components on eBay, but this is not the niche on which multi-billion dollar businesses are made.

Flipboard has made browsing social networks a much more pleasant experience by turning them into an interactive magazine. Couldn’t someone do the same for online shopping? Take feeds of people’s favourite sorts of item and package them up with pictures and reviews into a personalised shopping magazine/catalogue?

The business model — affiliate fees — is simple and probably lucrative. And the technology isn’t that complex; it just needs a good design job.

Come on someone: I want this product.


Bionics 2.0: Peaches Geldof and the Fear of Google Glass

As I’m fond of saying, we are all bionic now. We have offloaded memory, navigation and other functions to our smartphones and cloud-connected devices. What wearable technologies really represent is the second wave of mass-adoption bionics. How we adapt and respond to this rapid advance is going to need some thought.

This morning I was up at the Beeb, talking about Peaches Geldof’s Twitter gaffe. It was striking that someone apparently bright and educated, who has worked in the media, could have the nous to acquire 160,000 followers but not recognise when her tweets might be illegal. And worse, that they might be damaging to the future lives of two innocent infants.

But I don’t necessarily lay all the blame at her door.

Digital social media is a new technology. We are still adapting our behaviours to its existence and learning our way around its flaws, laws and possibilities. Arguably we haven’t yet got to grips with email — also a social medium by many definitions. Etiquette for this now-antiquated form of digital communication continues to evolve, driven in part by changing modes of access.

Every new medium has had a challenging introduction. Read Tom Standage’s excellent ‘Writing on the Wall’ for the full story, but from the advent of the letter through the printing press, it has always taken time for societies and governments to catch up with the implications of new technologies.

Hence the fear generated by Google Glass and other coming wearables. It took Peaches Geldof seconds to tap out a series of law-breaching tweets. But at least she had to withdraw the phone from her pocket first and unlock it: a small window in which to consider her actions. Imagine what she could do with a camera strapped to her head and the ability to tweet a stream of consciousness straight from her lips.

I don’t think people are scared of the possibility that someone wearing their tech could be surreptitiously streaming pictures straight to the web. I think they’re scared of the fact that people will. It is going to take at least a decade after wearable tech becomes the norm before we get to a recognisable set of rules, defined enough for us to be able to say confidently what is acceptable and what is not. Codifying those rules into laws will likely take another decade.

If that sounds like a long time, bear in mind it is now a decade since the launch of MySpace and I’m still regularly asked to advise and instruct on the use of social media. Though Facebook is ubiquitous, Twitter and LinkedIn are used by fewer than a quarter of people in the UK. I see behaviour I think is odd on all three networks all the time, but rarely do I consider the incidents so clearly outside any accepted ‘rules’ as to upbraid the perpetrator. Others are either more confident or happier to sit in judgement, but the fact remains: we are still learning.

I was one of those people who happily rocked a Bluetooth headset back in the early noughties, until I realised (and others gleefully pointed out) that I looked like a dick. Unless you’re a secret service agent, you do not need to be in hands-free contact at all times. I will also sport any wearable tech I can get my hands on until we decide as a society what works and what doesn’t, what is cool and what isn’t, what is acceptable and what is beyond the pale. I will do so in the knowledge that it is a learning process and mistakes are part of that process. And while everything settles down, I won’t criticise others for their errors, as long as they don’t repeat them. Even if I think, as in the case of Peaches Geldof, they really should know better.


Reintermediation: Bringing Back the Middle Men (and Women)

Disintermediation was one of the biggest buzzwords of the first dotcom boom. It means ‘taking out the middle man’ and that’s exactly what the first wave of internet and web services did.

Websites gave customers much more direct access to suppliers, enabling lower prices in the process. No need for a distributor or wholesaler and a network of retailers if you can sell products directly from your factory warehouse online. Publishing information became so cheap that it was cost-effective to publish everything rather than just a selection of content chosen by an editor (another middle man).

This has been great for choice, with the near-infinite ‘long tail’ ensuring products and content to fit almost every niche.

The advantage of middle men (and women) is that they aggregate and filter, cutting out the crap and highlighting the gold. In doing away with the intermediaries we lost a lot of this function. But it is returning, in new, more efficient forms.

It starts with the creators of content, be they vendors, retailers, advertisers or media. Increasingly they are trying to personalise the content they deliver to customers using acquired or built profiles. Land on their website and you are likely to see content tailored to your interests: shoes in your style, stories about topics you have previously shown interest in, adverts for products you have previously examined.

Then there are the curators: human or automated intermediaries assembling coherent feeds of information. This could be users of Twitter focused on a single topic. It could be topic-based aggregated feeds in apps like Flipboard or Feedly.

Then there are the smart user agents: programmable software that does the searching on our behalf. Think of Google Alerts as a basic example: a piece of software that we own, or that sits in the cloud, which takes some parameters and gets to know us like a supercharged global TiVo, proactively filtering the morass of content available online and bringing us only the morsels most relevant to our interests.

This has its risks: it could be easy to hide in a bubble of selected information, ignorant of world events. But I know plenty of people for whom this is life today. Those of us who like to be informed will remain so.

These new middle-men, women and robots represent one of the most interesting new business opportunities on the web. The most important question is who will own the very personal data collected about our preferences? The profiles that these smart agents will collect — across media, over the course of years — will be incredibly rich.

Will we let the cloud giants — Google, Amazon — own this data? Or will we insist on holding it ourselves?


RoboRaspbian Part 8: It’s Alive! Make Your Own RaspberryPi-Powered RoboSapien

It’s alive! My hybrid RoboSapien/Raspberry Pi is up and running. Or at least shuffling.

Here’s how to build your own, along with the story of how I finally got mine working.

1. Get yourself a RoboSapien (mine was £2.50 from a charity shop) and a Raspberry Pi. You’ll probably want a case too.

2. Make yourself a buffer circuit or wedge to go between your Raspberry Pi and RoboSapien. This is REALLY simple.

  • Take a little piece of stripboard.
  • Solder a pair of jumper cables at one end, connected to an IR LED (mine came from an old Sky remote).
  • Solder a TSOP4838 38kHz IR receiver at the other end. These cost a couple of quid from eBay.
  • Wire the receiver into the RoboSapien — there are ground, Vcc and ‘IR OUT’ tabs on the head connector inside the RoboSapien. I used 3.5mm jacks to make it removable.
  • Make sure the LED and receiver are pointing at each other.
  • Pop a cut down IDE cable onto your RPi’s GPIO pins and at the other end plug in your jumper cables to GND and port 22.

3. Log into your Raspberry Pi remotely or with a screen and keyboard, and install the software as described in this post.

4. Find a way to mount and power your Raspberry Pi on the back of the RoboSapien. I intend to find a neater solution but for now I’m using cable ties, rip-off Meccano (my favourite prototyping tool) and a rechargeable 5Ah Li-Ion battery from ADATA.

5. Start LIRC_WEB by entering the relevant folder and typing ‘node app.js’ (there’s a short terminal sketch after this list).

6. Open up the relevant address on your smartphone — probably 192.168.XXX.XXX:3000 where XXX is the missing bits of your RPi’s IP address.

7. Play!
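For clarity, steps 5 and 6 look roughly like this at the terminal. The folder name and IP address below are just examples — use wherever you installed LIRC_WEB and your own Pi’s address:

    cd ~/lirc_web            # or wherever you installed LIRC_WEB
    node app.js              # starts the web interface on port 3000
    # then point your phone's browser at http://192.168.1.42:3000 (your Pi's IP)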

Now of course all I have really done here is replicate Alex Bain’s Raspberry Pi universal remote project, but with a little more wiring so that I can neatly strap this to the back of the RoboSapien. Why not just have the Raspberry Pi separate? Because where’s the fun in that? He’s not a real robot if his brain is elsewhere.

And anyway, then I couldn’t hook up the audio out on the RPi to the RoboSapien’s speaker to expand his vocabulary. As soon as that amp chip arrives…

Pics and vids to follow.


RoboRaspbian Part 7: #Fail

So, quick recap. I’ve patched into the IR line on my V1 RoboSapien (RS) in order to connect, via a buffer circuit, a Raspberry Pi.

The buffer circuit is designed to protect the Raspberry Pi and simply connects to the GPIO ports on the RPi and into the RS using a 3.5mm jack socket I’ve attached. The idea is to use LIRC and the neat LIRC_WEB application to squirt commands directly into the RS’s microcontroller from the backpack-mounted Pi, giving me a Wi-Fi-enabled, web-controlled RoboSapien. Awesome, in theory.

Software installed, I set about hooking up the buffer circuit to the Pi. I did this via an old IDE cable, with connectors cut down with a hacksaw to the requisite 26 ports.

I plugged the buffer into the back of the RoboSapien, and then into the relevant ports on the IDE cable. I ran the software on the Pi, via SSH. I opened up a web browser at the relevant port (3000) and clicked on some commands. Nothing.

Hmmm. Time for debugging.

Step 1 was to test the buffer circuit. Again.

I’d tested it using an Arduino as a PSU, with a switch to replicate the signal from the Pi. It worked fine, but with the LED inline to show it was putting out commands I was only getting about 2.5V across the connections into the RoboSapien. Was this the issue?

I removed the LED and the resistors. Less safe but looking at the specs of everything, the chance of the RS drawing too much current is pretty minimal.

Plugged it all back together and… nothing.

Then I looked at the IDE cable. A bunch of the wires twist around in the middle. Hence the ports on the cable connector wouldn’t match perfectly to the pinout on the board. It’s always the simple answer.

I soldered an LED to another 3.5mm jack socket and then stuck it on the end of the buffer board. I was confident in the location of the ground so just needed to move the signal connection around until I found pin 22. Three attempts in and when I clicked on the instruction in the web interface, the LED lit up.
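As an aside: if you have WiringPi installed, an even quicker way to hunt for the right pin is to drive it directly from the command line rather than clicking in the web interface. A rough sketch (the -g flag uses the same Broadcom GPIO numbering as LIRC’s pin 22):

    gpio -g mode 22 out      # set BCM GPIO 22 as an output
    gpio -g write 22 1       # drive it high: the LED should light
    gpio -g write 22 0       # and back low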

Refilled with enthusiasm I hooked it back into the RS. And again, nothing.

I tried a bunch of stuff. I stripped the RS down again to check the connections. I tried removing the buffer board altogether and connecting the Pi direct.

Still nothing. I was clearly getting the signal to the board but it was having no effect.

Back to basics: since I was using LIRC, why not hook up an IR LED and see if I could control the RS using this?

 

I ripped apart an old Sky remote and soldered on a couple of jumper cables — see pic. I stuffed these into the IDE cable and… success! It worked first time.
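If you want to test this sort of thing without the web interface, irsend is the command-line way to fire a code through LIRC. The remote and button names below are whatever you called them in your lircd.conf, not fixed values:

    irsend LIST "" ""                      # list the remotes lircd knows about
    irsend SEND_ONCE robosapien FORWARD    # fire a single command at the robot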

So, what has gone wrong?

I thought the remote sent serial commands to the RS that were simply captured by an IR receiver and fed directly to the microprocessor. But in fact the receiver unit is more than just a detector: it demodulates (i.e. removes the carrier signal from the data) and cleans it up. I have been feeding incomprehensible junk into my RoboSapien.

So what’s the answer?

I could try and find a way of sending the unmodulated commands, but that would mean lots of software work, not my forte.

Better I think to stick a second IR receiver module into the system somewhere and use it to electrically isolate the Pi from the RS, giving me the protection I was originally seeking.

So, it’s off to eBay to find me a TSOP4838 38kHz IR receiver. Or maybe a few. Sure they’ll come in handy…


RoboRaspbian Part 6: Software Shenanigans

In order to actually make my robot do something I need some software. And this is where my projects so often fall down: a coder I am not. So let’s keep it simple.

The idea is to make the Raspberry Pi act like a remote control for the RoboSapien. But instead of having an IR emitter and receiver between remote and robot there will just be a wire — a wire that goes straight (via my buffer) from the Raspberry Pi’s GPIO port 22 to the control board of the RoboSapien.

For this to work we need some software to make the Raspberry Pi act like a remote control, and some software to give me a user interface. For the former there is one obvious option: LIRC. For the latter I’m using Alex Bain’s LIRC_WEB, which sits on top of NodeJS.

Alex Bain’s website has some great instructions on installing all this. Assuming you have a working Raspberry Pi running the latest version of Raspbian Linux:
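First, LIRC itself, which boils down to roughly the following (a sketch rather than the definitive steps; GPIO 22 matches my wiring, and the input pin only matters if you also want to record remotes):

    sudo apt-get update
    sudo apt-get install lirc

    # in /etc/modules, load the GPIO driver:
    lirc_dev
    lirc_rpi gpio_in_pin=23 gpio_out_pin=22    # pin 23 is just an example input

    # in /etc/lirc/hardware.conf, point LIRC at it:
    DRIVER="default"
    DEVICE="/dev/lirc0"
    MODULES="lirc_rpi"

    sudo reboot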

To install NodeJS enter these instructions from a terminal:

(thanks to commenter gragib on Alex Bain’s blog)
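In outline it goes something like this (a sketch of one route only; the comment may well use a pre-built ARM binary rather than the distro packages):

    sudo apt-get update
    sudo apt-get install -y nodejs npm
    nodejs -v    # check it installed (the binary may be 'node' or 'nodejs' depending on the package)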

Then finally install LIRC_WEB from Alex Bain following the instructions here: http://alexba.in/blog/2013/02/23/controlling-lirc-from-the-web/
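In outline (the repository is Alex Bain’s lirc_web project on GitHub; treat the steps as a sketch):

    git clone https://github.com/alexbain/lirc_web.git
    cd lirc_web
    npm install
    node app.js    # serves the web interface on port 3000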

All this worked pretty much first time for me, leaving me just debugging to do…

Yeah. Right.


RoboRaspbian Part 4: RTFM-Buffering the GPIO Ports

The path of true hacking never runs smooth. Especially when your hacking takes place in snatched hours of work time or evenings. This part of this project has been the source of most of my balls-ups.

First of all, my choice of component. I looked around for a cheap way to buffer the GPIO ports on the Pi and came across the Slice of Pi/o. This is a lovely little self-build board that I found on eBay and in its description I saw the words

‘Buffered output/input (protects your PI from miswires, shorts, overload and general abuse)’.

Fab. Except I didn’t — against the advice of the retailer — read and understand the description properly. The Slice of Pi/o doesn’t buffer the existing ports, it offers 16 buffered ports connected via i2c — a serial bus system.

Now I’m familiar with i2c having used it in my previous Arduino projects. But there is no way on earth I have the coding chops to get LIRC to use the i2c bus.

The money isn’t wasted: the 16 extra ports may come in handy down the line for further expansions. But this chip isn’t going to help me protect my Pi.

The error was compounded when I soldered my first Slice of Pi/o wrong while trying to multi-task. Trying to unsolder the component again was a total fail, so that one has gone into the parts bin.

So, back to the drawing board. I think I will go for a simple transistor switch arrangement for the time being in order to shield the Pi, with resistors to limit the load on the GPIO pins and the 3.3V supply. This will also allow me to add an LED to show that commands are making it out of the Pi.

Below: Slice of Pi/O…Slice of Pi/Uh-Oh


RoboRaspbian Part 5: Building a GPIO Buffer

You don’t want to connect something hacky direct to your Raspberry Pi’s GPIO ports. Unless you have the spare cash to buy another RPi. To protect my investment I built a simple little circuit to buffer the link between the GPIO and my RoboSapien’s IR line in.

This simply consisted of a small transistor with resistors on the signal line and base to prevent it drawing too much current (by my calculation, 560 Ohm resistors should stop it drawing more than 5mA: 3.3V minus the ~0.7V base-emitter drop, across 560 Ohms, is roughly 4.6mA). I threw in an extra transistor and LED for good measure so that I could see signals making it as far as the board at least.

This all appeared to work in testing. Put some volts on the signal line, and the LED lit up.

Then it didn’t work when plugged into the RPi. So I ripped it back to its simplest possible form. Before I had taken a photo. Whoops.

But that’s a story for another day. First: software shenanigans.


Post-Enlightenment Blues

Last week I was buying some shoes for my daughter. Here’s how the conversation went at the till.

Shop Assistant: “Isn’t it warm?”

Me: “You should get used to that.”

SA: “You don’t believe in all that, do you?”

Me: “Me and just about every scientist in the world.”

SA: “Ooh there are lots of scientists who disagree.”

Me (gathering up my children): “Not many who aren’t funded by oil companies.”

It’s telling that we talk about ‘the Age of Enlightenment’. We can call it that because it ended. Across almost every sphere of life, views are coloured and decisions made not on the basis of fact and evidence but on faith, emotion and ideology.

This is fine for certain decisions. Back to the shoe shop. In the grand scheme of things it really doesn’t matter whether I chose purple shoes or blue ones. My one-year-old isn’t going to care either way. But it matters that I chose the right size.

The first thing the shop assistant did was measure my daughter’s feet. Imagine she told me my daughter needed a 4H, but I chose to buy a 4F because I didn’t like the idea of my daughter having wide feet.

The shop assistant wouldn’t have liked this. She may well have waved evidence in my face to try to convince me otherwise. But ultimately I was the one doing the spending, so she would probably have sold me the shoes. And this would be bad for my daughter.

Here we have three parties in a decision:

  • The Actor — the shop assistant
  • The Affected — my daughter
  • The Influencer — me

We have evidence on what is the right decision for the Affected: she needs a pair of 4H shoes. But the Actor’s decision is swayed more by what the Influencer believes than by the evidence of the impact on the Affected. This pattern is repeated across society.

Wasted Aid

Yesterday I met with David Trott, a consultant to the charities sector. He’s coming on board to advise us on the future of charities. David told me a much more disturbing story about fundraising for humanitarian aid.

Money for aid comes from us putting coins in a bucket or, more likely these days, giving by direct debit. As a result, we the givers get to influence how the money is spent far more than the evidence of the best way to spend it does.

David’s example is this: imagine you could jumpstart the creation of an affordable, national healthcare system in a deprived country, effectively setting that country on its way to having its own NHS. In the long term you can show that this will have the greatest impact on citizens’ lives for every pound spent.

It would be nearly impossible to fundraise for this as a goal. It just doesn’t grab people.

So instead you pick a single issue: child blindness for example. Kids are going blind for want of simple eyedrops. You put up posters and roll out TV ads: “You could help this child to see again for just £3.”

It tugs the heart strings and the money rolls in. Once you’ve raised money against this issue you have a mandate to spend the money on this issue and this issue alone. But the country has no infrastructure: where and how are you going to deliver the eye drops?

So you roll out clinics: small ones, specialising in the eye drops because that’s what you’re mandated to do. These clinics are separate from the country’s own limited health care services. They don’t deliver any other services. They confuse the local population who come to you with general health queries.

You tackle the blindness issue. Great. But you could have done so much more if the criteria for how the money was spent were defined not by the giver — the Influencer — but by what would deliver the best outcome for the Affected.

Political Blindness

In the political sphere the situation is slightly different. We, the people, are both Influencer and Affected. Though individual policies may impact some more directly than others, ultimately we succeed and fail as a society.

Which is what makes it so frustrating when evidence is ignored in policy making. Evidence that is usually telling politicians what would make life better for all of us. Evidence that many of us, as Influencers, find unpalatable.

Climate change, teenage pregnancy, offender rehabilitation, drugs and much, much more. Clear evidence is available about the way forward on all of these issues. But instead of taking note and pushing for the right sort of change, we the people prefer to keep our heads in a Dark Ages hole.

A New Hope

I wrote recently about transparency for the Institute of Leadership and Management. Ten years ago in ‘The Naked Corporation’, Don Tapscott and David Ticoll described the Internet as “a transparency medium without peer in human history.”

The web is arguably a presentation medium without peer: it is capable of rapidly delivering facts packaged in a compellingly visual and interactive format to vast numbers of people in seconds. It is this that has supported the rise of ‘armchair activism’, with millions who might not otherwise have been reached signing up to petitions and emailing their politicians to tell them what they think.

Now this ‘slacktivism’ can be just as easily misdirected as any other form of influence, especially once complex arguments are dumbed down for rapid sharing. But it at least engages the electorate in making a decision and showing their influence more often than every five years.

If we can use the tools of the Internet age to better communicate evidence over emotion and engage armchair activists in well-directed campaigns, we may just have a chance of bringing the country back into the light.


RoboRaspbian Part 3: Hacking-Connecting to the Microprocessor

Many of the RoboSapien hacks that I have seen have been pretty ugly. Clever, but not practical for a robot that is going to be in the presence of small children. So from day one I knew I had to put a decent amount of effort into making my hacks tidy.

I need to be able to mount the Pi onto the RoboSapien and there’s no way it is going to fit inside, so I’ll need to make some sort of backpack. I will also need to power the Pi, and I don’t think the D cells in the robot’s feet are going to cut it. Long term these will probably be replaced with some sort of Li-Ion arrangement and I may even look at some sort of charging dock. But this is a LONG way off. For now it just needs to work.

In order to keep my hacks neat and tidy I decided to terminate any internal hacks with connectors so that it’s easy to hook in and out without any risk of pulling wires. The simplest and cheapest option seemed to be 3.5mm stereo jacks from eBay — commonly used both for IR and audio connections. The downside of them is that they can short connections if inserted/removed with the power on, so I need to be extra careful about protecting my Pi.

As you can see from the pic I have drilled two holes in the back of the RoboSapien case on the opposite side to the power switch and added panel mount sockets. These are soldered to the back of the RoboSapien’s control board, hooking into the IR OUT (labelled OUT 1) and speaker connections using jumper cables, the pins on which can be bent to give a good fit to the board and make for easy soldering.

The location I’ve put them in is one of the few free spaces in the densely packed RoboSapien shell, but the result is good: everything screws back together neatly, and with nothing connected the connectors aren’t intrusive.

Next: buffering the GPIO ports on the Pi.


RoboRaspbian Part 2: The Plan-GPIO + LIRC

What do you do with a remote control robot that’s lost its remote control? Strap a dirty great computer to his back, obviously.

I want my charity shop RoboSapien to become a bit of a domestic robot pet. In the first instance I need to get him doing all the things he should do when controlled with a normal remote. Beyond that I’d like to see him talking, and maybe gaining some robot super powers, like the ability to interface with electronic gadgets around the home via IR and RF.

The plan is to interface a Raspberry Pi with him in two simple ways:

1. Hooking the Pi’s GPIO into the IR remote receiver line and using LIRC to send the original range of commands to give me movement etc

2. Hooking the Pi’s audio out into the RoboSapien’s speaker

I’ll need to protect the Pi in some way: don’t want to risk destroying it since the GPIO ports are unbuffered. And I’ll need to find a way to get commands to the Pi remotely that’s nicer than the command line.

As usual with these things there’s lots of help out there. Here’s my background reading:

Next: hacking begins…

Tom Cheesewright