Every journey to tomorrow has to start with an honest assessment of where you are today, and sometimes that requires uncomfortable self-examination.
As a society we like to think of ourselves as pretty sophisticated. Technology has advanced so much in the last hundred years that it is easy to look back a century and feel proud of our achievements. But have we really come that far?
Rather than compare ourselves to our past, how about comparing ourselves to our imagined future? Being an optimist I’m more of a fan of the Roddenberry-style utopian vision than the darker, post-apocalyptic nightmares (though the latter make for better films).
Unlike the societies of Star Trek or even Iain M Banks’ Culture novels, our society is still driven by personal gain rather than the advancement of the race. I’m not advancing communism as the way forward — that system has proved unable to harness the base human drives that push us on. Rather, I’m saying that the underlying instincts that drive us are still very individual and not much different from those of our prehistoric ancestors. Unfortunately I don’t think our instincts will evolve until we are all sufficiently comfortable that personal advancement is less vital.
That will require incredible economic and scientific leaps forward. Yet scientific advancement seems to be increasingly bound up in economic terms. Money is only spent where there is a clearly definable product as an outcome. Research for its own sake seems to be less and less prevalent. So I was pleased to hear that the Department for Innovation, Universities and Skills is pushing for a £1bn injection into research funding from the government, as part of the stimulus package. For reasons both economic and geeky, it would seem like a hugely valuable investment.
PS: Ever heard of the Singularity? The idea that the rate of scientific advancement will become exponential when machines start designing better machines? That can be looked at in two very different lights, depending on your optimism. Many people believe it’s not far off though…
The original Book Of The Future contained a page about Extra Sensory Perception. While I loved the picture of the ‘ESPER Battlecruiser’, and the description of its crew’s capabilities, I’ve always thought the whole spoon-bending thing was a bit of nonsense.
I remain sceptical of a human ability to read and influence minds remotely, at least at our current stage of evolution. But we are beginning to develop a kind of technological ESP.
Twitter has been defined as a social sixth sense. It’s a nice term and it gives non-users an idea of one of its main benefits: a constant awareness of the key events in your friends’ daily lives. But it does more than that.
Though I am relatively new to Twitter, it is fast becoming my primary source of media via links to interesting stories and videos. It is also beginning to have an impact on my diary, highlighting to me events I want to attend but didn’t previously know were happening. And this is just following twenty-odd people!
I believe that Twitter is just the beginning. I have written before about the ‘internet of things’ — the idea that some form of intelligence and connectivity will increasingly be a standard part of everyday objects. Everything from egg boxes to armchairs will be connected to the net and sharing the information it holds.
Imagine that intelligence being delivered to our brains in a way that doesn’t distract from our conscious acts, but that gives us an added level of peripheral vision about what’s going on in the world. The news feeds, social updates, and calendar reminders we set for ourselves today combined with dynamically generated information about the world around us. All the anecdotal examples we can think of today are pretty prosaic (car needing a service, milk being out of date, train running late) but history shows that all the best applications come once the platform is in place.
Sometimes we all suffer an overload of information, even with the current level of technology. So it’s easy to imagine today that this would all be too much. But human beings are evolving to match the built environment. I believe we will become better at processing large volumes of information concurrently (rather than being infantilised by technology, as Baroness Greenfield has suggested).
Though I like my occasional analogue week, I find the idea of technological ESP really appealing.
You might not think that the mobile industry could be accused of lethargy. It has been one of the fastest moving sectors for the last twenty years. Yet it has taken the arrival of a range of challengers from the internet for the industry to begin to fulfil its potential.
When I say “the mobile industry”, what I’m really referring to is the old guard. The companies that make up the standards bodies; the ones whose names are synonymous with mobile; the ones who have been coming to this conference since it was a few dressed tables in a Berlin hotel conference suite. Nokia, Ericsson, Motorola, Alcatel/Lucent, Nortel et al. And the operators they supply: Vodafone, Telefonica, Orange/France Telecom in Europe, Sprint and Verizon in the US, and all the many others responsible for establishing the major markets around the world.
What these companies have failed to do is understand their own value in the internet era. All have spent the last few years trying to fight the inevitable advance of the internet into mobile devices. They may argue that they have pushed services through as fast as technology and regulation would allow. But in reality they wanted to control the internet, something that has been proven impossible in the fixed line world.
Users want unrestricted access to the world wide web and all of its associated standards and applications. They will pay good money for this access. What they don’t want is walled gardens, restricted by punitive bandwidth charges and the operators’ poor attempts at delivering content.
If the operators had recognised this fact earlier — whether on their own or because they had been convinced of it by the vendors to whom they have historically been so closely tied — then there would have been little room for Google and Apple to come in and have such a radical impact. The whole nature of the mobile industry is changing. Improving for users, worsening for the operators.
The models of both Google (with its Android software platform for mobile devices, available in the UK on the T-Mobile G1) and Apple (via iTunes and the iPhone) relegate the operator to the position of dumb pipe. A supplier of bandwidth and nothing more. Not only that, they force the operator to handle all the expensive, unpleasant parts of service delivery — billing, sales, customer care — and cream off the most attractive profits — content and services.
It didn’t have to be this way. There is a huge amount of intelligence in mobile networks, and a huge amount of data about users. The mobile operators know your name and address and your billing details. They can find your current location and, given permission, look at all the places you have travelled recently. They know who you call and what content you buy.
Sound scary? Think about how much information most of us happily supply to Facebook, with whom we have no financial relationship. We seem quite happy for Facebook to access this information and act upon it.
Imagine if the mobile operators had understood the value of this data, and the vendors who supplied them had built services and solutions based upon it. Nokia and the other handset manufacturers wouldn’t be fighting for their lives against GPhones and iPhones that overnight made their entire ranges look clunky and dated. And operators would be selling access to data to a variety of applications rather than wasting time trying to be funky content companies.
This is obviously a vastly oversimplified argument. The reality is a lot more complex than this black and white blog can convey. But the point remains: there is money to be made from being a pipe. As long as you’re not dumb about it.
A follow-up from (or prelude to, if you’re reading this late Wednesday night) a little slot on the BBC Radio Manchester breakfast show to talk about the Conficker worm that is currently decimating corporate IT systems around the globe. Here are my top three tips if you want to keep your home computer protected.
1. Enable automatic updates. The Conficker worm takes advantage of a flaw in Windows that Microsoft fixed back in October. It is only spreading because people — and particularly companies — have not applied this ‘patch’ to their computers. Ensuring this is never a problem for you again couldn’t be simpler. You just need to enable automatic updates. On Windows Vista, click the ‘Start’ button, then select ‘Control Panel’. At the bottom you will find ‘Windows Update’. Run this and select ‘Change Settings’ at the top left hand side. Make sure the top option is ticked.
2. Make sure you have a good anti-virus system in place. There are all manner of different types of security software, but the only one you really need is a good anti-virus system. These days most of the other requirements are met by Windows (and if you use an alternative operating system such as Linux or Apple OS X, you probably don’t need them). You really don’t need to pay for a good anti-virus system. If you don’t have one, or if the licence on yours has run out and you no longer receive the crucial updates, download the free and powerful AVG (private use only).
3. Be careful! Viruses spread through emails, websites, shared networks, memory sticks and disks. Don’t open emails from people you don’t trust, don’t click on anything you don’t trust, don’t join networks you don’t trust, and be cautious about using other people’s memory sticks.
I am planning a LinuxMCE-based home automation and entertainment system. As well as all the fully multimedia units that will do voice, video and music, I wanted a rather simpler node on the network: a digital picture frame.
Using LinuxMCE for this is something of a ‘sledgehammer to crack a nut’ approach, but it will allow me to control the frame over the network. And I figure I might as well keep one consistent software system across the whole house.
Having seen a few articles around the web and in Custom PC on the subject, I started digging around in my box(es) of spares for some parts. In the end I settled on an old and very battered Sony Vaio notebook. The battery was completely shot and the case was falling apart, so it was neither economical nor practical to keep using it as it was. But it worked OK and had a very slim case, so it seemed to make an ideal donor.
In the photo above you can see the semi-dismantled laptop alongside the frame it was to be mounted in. This is a 30x40cm Copenhagen oak frame from Habitat with matching cutout.
Once the laptop was dismantled and the screen separated from the body of the machine, the next step was to mount both on a piece of foam artboard. I chose this for its strength to weight ratio, and the ease of cutting. Double-sided adhesive foam strips were used to hold it all together.
In the final image above you can see the loading screen for Kubuntu running on the device. I installed this and LinuxMCE before I dismantled the laptop, but with a USB keyboard and PCMCIA-mounted CD drive, it’s pretty easy to muck around with the software even with the whole lot inside the frame.
Unfortunately the software is proving tricky at the moment. LinuxMCE remains a little fiddly, and with the slightly unusual hardware combo found in a Sony Vaio, it is refusing to play ball. I’m sure it can be solved though, and I’ll put up a post about how once I’ve cracked it. I will also post a photo of the finished item — somehow I forgot to get a shot of that!
True to my word I have acquired my first pieces of HomeEasy kit — a remote and three sockets. Each one switches up to 3kW and can be configured to respond to up to six remotes, and sockets can be assembled into groups. All this for just £20. The equivalent in X10 would cost over £50.
The equivalent in X10 would also not perform anywhere near as successfully — at least not in my house where the powerlines are dirtier than a stag weekend in Amsterdam. The HomeEasy sockets switch instantaneously and successfully on every test so far.
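For fellow geeks, the pairing model is simple enough to sketch in a few lines of Python. This is just an illustration of the behaviour described above — the class names, button codes, and six-pairing limit as I’ve modelled it are mine, not the real HomeEasy RF protocol:

```python
# Toy model of HomeEasy-style pairing: every socket hears every RF
# broadcast, but only acts on codes it has been paired with. Group
# switching falls out naturally: pair several sockets with one code.

class Socket:
    MAX_PAIRINGS = 6  # each socket responds to up to six remotes

    def __init__(self, name):
        self.name = name
        self.paired = set()  # remote-button codes this socket answers to
        self.on = False

    def pair(self, code):
        if len(self.paired) >= self.MAX_PAIRINGS:
            raise ValueError("socket already paired with six remotes")
        self.paired.add(code)

    def receive(self, code, state):
        # Sockets simply ignore codes they haven't been paired with
        if code in self.paired:
            self.on = state


def broadcast(sockets, code, state):
    """A remote broadcasts over RF; every socket hears every message."""
    for s in sockets:
        s.receive(code, state)


lamp, heater, tv = Socket("lamp"), Socket("heater"), Socket("tv")
sockets = [lamp, heater, tv]

# Pair lamp and tv with button 1; pair all three with a 'group' button 9
lamp.pair(1)
tv.pair(1)
for s in sockets:
    s.pair(9)

broadcast(sockets, 1, True)   # button 1: lamp and tv on, heater untouched
broadcast(sockets, 9, False)  # group button: everything off
```

The nice property of this scheme is that the sockets need no knowledge of each other — a “group” is just a code that several of them happen to share.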
Just a quick clarification on my post about all things HD. The situation is better than I thought but there are still some issues.
First and foremost, the graphics side of things is not the issue, it is the sound.
Today most digital audio is carried through either optical or coaxial S/PDIF (Sony/Philips Digital Interconnect Format) connections. This is fine for most of the audio systems used today, and graphics specialist NVIDIA is doing what it can by adding an S/PDIF connection to all of its graphics cards with HDMI. This means that you can feed whatever audio source you have down the HDMI pipe. Well done NVIDIA.
For most people, building a system today, this is a solution. A modern motherboard or sound card that will decode the various surround sound formats will connect to this graphics card and enable you to experience all sorts of HD content in a close to perfect manner.
Unfortunately the next generation of audio codecs such as Dolby TrueHD aren’t compatible with S/PDIF and require the high bandwidth of HDMI v1.3 to be carried in their native format. This means that one piece of hardware in the system — most likely the motherboard — has to recognise the sound format and either decode it or know to pipe it raw down the HDMI connection.
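For the geeks, the routing decision boils down to something like this sketch. The codec and link names are illustrative, not a real driver API, and the real negotiation is of course more involved:

```python
# Sketch of the bitstream-or-decode decision described above.
# Legacy codecs fit down S/PDIF; the new lossless formats only fit
# down HDMI 1.3, so on older links they must be decoded to PCM first.

SPDIF_CODECS = {"PCM-stereo", "Dolby Digital", "DTS"}   # fit S/PDIF bandwidth
HDMI13_ONLY = {"Dolby TrueHD", "DTS-HD Master Audio"}   # need HDMI 1.3 to bitstream

def route_audio(codec, link):
    """Decide whether to pipe the codec raw or decode it in the PC."""
    if codec in SPDIF_CODECS:
        return "bitstream"        # the amplifier can decode these itself
    if codec in HDMI13_ONLY and link == "hdmi1.3":
        return "bitstream"        # pipe the raw format down HDMI 1.3
    return "decode-to-pcm"        # fall back: decode (or downmix) PC-side

print(route_audio("Dolby Digital", "spdif"))    # bitstream
print(route_audio("Dolby TrueHD", "spdif"))     # decode-to-pcm
print(route_audio("Dolby TrueHD", "hdmi1.3"))   # bitstream
```

The point is that *something* in the chain — motherboard, sound card, or player software — has to know which branch it is on; if nothing does, the new formats simply don’t play.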
Will 99.9% of people ever know the difference? Probably not, but that’s what being a geek is all about.
(PS — still no response from Intel…)
There’s a lot of confusion around High Definition at the moment. The variety of video and audio formats that constitute HD are complex enough in themselves, even before you examine the embryonic standards for how all this content is created, stored, channelled and played.
Because of this, and the fact that prices continue to tumble for HD gear, I’m a little wary about making a firm decision on what to buy. Being an early adopter can be expensive if you get it wrong, as all those people who bought an HD DVD player have learned. But a little research and ingenuity can often offset this risk.
If standards are changing constantly, what you want is a device that can change with them. Something that can be upgraded at a marginal cost. This has been the great success of the PlayStation 3 so far: its internet connection means that Sony has been able to offer users a series of upgrades already, consistently improving the device’s features and performance.
But the PlayStation 3 is limited to software upgrades. I want something that can take advantage of new hardware too. Which brings me back to my media PC.
The cost of PC components has fallen such that even if you wanted to build one from scratch, the price would be similar to a good Blu-ray player. If you already have a media PC, then upgrading to deliver HD content should be very cost effective.
Unfortunately, it seems that PC hardware hasn’t quite kept up with the bleeding edge. For example, the latest consumer Blu-ray players and surround sound amplifiers carry both audio and video signals over the HDMI connection. They also handle a variety of new high definition audio formats. No PC hardware that I have found to date carries built-in support for these features.
So for now I am going to suppress my early adopter tendencies and hold off on my HD upgrade. If you do find yourself drawn to Blu-ray though, for now you could do a lot worse than the PlayStation 3.
(PS — thanks to Nvidia’s PR for responding to my questions quickly despite being buried with work. No response from Intel so far…)
Been a while since I last posted about this project — in part due to my analogue week. This weekend would normally have been a great opportunity to crack on with it, but I have overcome my DIY malaise and instead spent the last three days filling, painting, caulking and sorting. I now have a much nicer looking kitchen and a garage that is halfway usable as a result.
In truth it is not just a lack of time that has stalled the project. The early results of my tests proved that the software works, at least in theory, but my subsequent tests have been a real reminder of the non-commercial nature of the product.
As a standalone machine the core/hybrid box I built works fine. I haven’t set up a proper remote yet, but it does everything that my current Windows Media Centre PC does. If I didn’t already have a working media centre PC, then I would probably go ahead and install it.
But the problem is that I do have a working media PC — one that works very well indeed. And without good reason, I’m loath to replace it with something that has exactly the same functions but that runs on an OS with which I am much less familiar.
What appeals to me about LinuxMCE is all the other stuff — the networked Media Director units, the clever Orbiter remote controls, and for the moment, those aren’t working so well. I can control things with a PocketPC-based Orbiter but performance is slow, the interface looks shonky, and it is somewhat unstable. The Media Director boots successfully and reliably over the network — very cool — but I’m having trouble working out how to share music properly between devices.
The whole thing remains appealing, and I haven’t given up on it by any means. But it is proving a little less simple than — in some places — it has been billed.
Reality has taken some of the edge off my enthusiasm but I’m going to push on. Next step is a different type of Media Director, and one with a slightly different purpose: a digital photo frame. Take one old laptop, add a nice frame… Results to follow when I’ve had the time to put it all together.