For a lot of my futurist career, blogging has been a major outlet. My posts are less frequent these days but occasionally I still use a blog post to organise my thoughts.

The archive of posts on this site has been somewhat condensed and edited, not always deliberately. This blog started all the way back in 2006, when working full time as a futurist was still a distant dream, and at one point it numbered nearly 700 posts. There have been attempts to reduce replication, trim out some weaker posts, and tell more complete stories, but also some losses through multiple site moves: it has been hosted on Blogger, WordPress, Medium, and now Squarespace. The result is that dates and metadata on the posts may not be accurate, and many may be missing their original images.

You can search all of my posts through the search box, or click through some of the relevant categories. Purists can search my more complete archive here.

The Next Big Thing(s)

What is 5G for? What replaces the smartphone? What is the next big thing? Perhaps these questions don't have singular answers.

Judging this year’s Tech of the Future category in the Global Mobile Awards, I was struck by the range of entries. They addressed many different market spaces, with different combinations of technology, and came from companies and collaborations of different sizes and shapes. While tech as an industry may still have its issues with diversity in hiring, its outputs are incredibly diverse.

This should perhaps not surprise us. The connected computing revolution has stripped much of the friction from innovation, equipping more people than ever with the tools and the knowledge to create. It has brought global organisations closer together, reducing the friction of communication and increasingly demolishing the remaining barriers, like language. A shared platform of connected computers has allowed more people to innovate, and to find an audience for those innovations in an enormous network of niches.

Given this breadth of innovation and the increasingly fractured nature of the audience, I have to wonder if we should be looking for a single ‘next big thing’.

What’s next?

This is a question that is asked frequently in the mobile industry, in one form or another. It takes the form of questions like: “What is 5G for?”, and “What will replace the smartphone?” Perhaps we should stop looking for a single answer to these questions and think much more in terms of answers, plural.

The whole point of 5G for me is that it can support a diverse range of applications with a level of ubiquitous connectivity we have only been able to dream of until now. If I still have to think about whether or not I am connected in five years’ time, then it will have been a spectacular failure. I should only have to think about the applications, and more specifically, my chosen blend of applications, unique to me. Some will be more popular than others, and hungrier for bandwidth or low latency. But with the rapidly growing array of connected devices, perhaps the biggest category in any analysis of traffic in the future will be ‘other’: an enormous group of individually small but collectively very large bandwidth consumers.

Beyond the smartphone

Perhaps the devices running those applications will be equally diverse. I am compelled by the vision for the future of mixed reality, and the replacement of handsets with headsets. But it is unlikely this will suit everyone. Processors, baseband units, cameras and batteries can be assembled into a huge variety of form factors. As design and manufacturing capabilities continue to advance, the size of a profitable market for individual devices is likely to shrink further. There will likely be a device for every niche, well beyond the current diversity of handset designs. And that’s before we get into the incredible range of M2M (machine to machine) or IoT (internet of things) devices that we already see appearing.

In summary, perhaps we should be less concerned about what is ‘the next big thing’. Instead we should focus on the things, plural. On continuing to enhance the environment for innovation and experimentation. On putting the tools of creativity into the hands of those who understand their niches and can build great things that those niches will love.


Is immersive entertainment the future?

What is immersive entertainment? And how will it change as audiovisual technology advances and our experience economy evolves?

The Rolling Stones are releasing a 'radical, new immersive concert screening concept' based on their enormous 2016 gigs in Cuba. So what is 'immersive entertainment', and is it the future?

Physical and digital

Going to a gig is immersive entertainment. It engages all of your senses, for better or worse. If you're screaming at your favourite teen idol or thrashing around in the mosh pit, you are 100% in the moment. This is true of almost any intense form of physical activity or engagement. It's why these things are so good for us: they take us out of ourselves, and focus us wholeheartedly on what we are doing.

The very need to describe something as immersive entertainment is for me an acknowledgement that this activity might not be as consuming as such a physical experience. That somehow, through effort, design, or technology, the provider is trying to make something that might not be truly immersive into an experience that matches these physical-world highs.

In the case of the Rolling Stones concert, this seems to amount to best-in-class audiovisual systems combined with some set dressing and live entertainers. These things together will not transport you back in time and across the ocean to Cuba. But they are designed to create as close an experience as you can get in your local concert venue.

Critical to the success of this endeavour will be the response of the rest of the audience. If they get into it, and you are surrounded by people having a good time, singing and dancing, then it will probably be very successful. If they treat it like a trip to the cinema, then it's unlikely to be close to immersive.

Future options for immersive entertainment

Today the state of the art for group entertainment is ultra HD projection. But in 10 years' time? Imagine the same event, with everyone gathering in a concert venue. But instead of the images being projected on a flat screen, you can see a virtual Mick strutting up and down the stage. He is indistinguishable from the real thing, until you remove your smart glasses and he disappears.

Maybe you decide to stay home and watch the gig, and your living room is transformed into the concert venue. You lose the live atmosphere, but drinks are cheaper and there are no queues for the toilet.

Neither of these options will stand up to the real thing. But with concert prices high and access limited, these sub-experiences are likely to be popular nonetheless. In this age of deepfakes, concerts need not be limited to current or living artists either. Why not time travel and see Springsteen at the Hammersmith Odeon in '75 (that's where I'd go), Nina Simone in '64 (yeah, I'd also go there) or Johnny Cash live at Folsom Prison (yep).

Experience economy

The idea of the experience economy is not new. It can arguably be traced back to the Tofflers' Future Shock in the 1970s. But it is true that a rising proportion of our expenditure is going on things we do rather than things we buy. In this world of FOMO, offering people the chance to get to a version of gigs that they missed - perhaps by decades - or couldn't afford to otherwise access will likely prove popular. And in 10 years' time, it might be the way that many of us experience live music.

But the real, physical experience will always command a premium. Because for the foreseeable future, it will remain the richer experience and the only one that is truly immersive.


Why milk tells us everything about the future of content

The future of content is shaped by the massive explosion in choice that we have experienced as consumers and creators alike.

The changing content of a bottle can tell us a lot about the future of content overall.

I’m old enough to remember having whole milk delivered by the milkman, and that first creamy mouthful you used to get off the top of the bottle if you didn’t shake it first. These days we don’t drink so much whole milk. First, we started skimming all the cream and fat out of it. Then we started switching to a whole different range of milks: oat, almond, and ten different flavours of soya. The milk we get has less cream in it, and we have a whole range of different milks to choose from.

The same basic things are true of the content we consume. There’s more choice, but the ratio of cream to milk is almost certainly worse than it was. When we consider the future of content, there are some things to celebrate, and some challenges ahead.

The cream ratio

Why is the cream ratio so much worse now? Because of two factors, both of which I believe are ultimately positive.

Firstly, more people now have access to the tools of creation and publishing. Education is more widespread. Digital publishing has brought the marginal cost of adding another video, e-book, or blog post to our collective digital catalogue down to near zero.

Secondly, no-one is going to stand in your way if you want to publish something, with a few notable exceptions. The old intermediaries may have ensured that the quality that reached a mass audience was generally higher, but their processes also did a lot to limit market access for many people with talent but not the right connections or background.

We have traded a higher overall standard of quality for much greater diversity and accessibility. And I think that is a trade worth making. But in the next few years, as the range of content in the global market continues to increase exponentially, we’re going to need some new ways to navigate the morass, both as publishers and as consumers.

The continuing exponential

Why do I believe that the future supply of content will continue to grow? Because the next generation of content creation tools will not just benefit from even greater resolution and bandwidth; more importantly, they will offer even more natural interfaces for content creation. In the near future we will all have the ability to create rich three-dimensional pieces: virtual objects, creatures, spaces and even whole worlds. And we will do this naturally, perhaps even subconsciously.

If the current direction of technological travel continues, we will all be recording our lives in rich multi-dimensional video for large portions of the day, simply as a factor of how the next generation of mixed reality devices – headsets – will work. And we will interface with, edit, remix, and broadcast this content with intuitive voice and gesture commands, assisted by artificial intelligences that will do much of the work for us.

How on earth will we navigate the range of choices available? And how will brands who want to reach us find their way to us?

Your decision engine

I foresee us increasingly outsourcing choice to machines. Personal assistants that know us deeply, intimately, and that can understand what we enjoy and what we value. They will know us by watching us, collecting data across multiple dimensions. They will watch our social interactions as they do now, but they will also watch our emotions. Monitoring our breathing, heart rate, galvanic skin response, and neurological activity, they will understand when we enjoy something. Examining our buying behaviour and our financial state, and watching the state of our possessions as they degrade and get used up, they will know what we need and can afford. Then they will take decisions on our behalf.

Who owns these taste-makers, and who can influence them, is perhaps the biggest battleground for business in the second quarter of this century. Will we own and control them ourselves, having them act as shields for our limited attention, and curators of our own personal universe? Or will they be delivered by the corporate behemoths, the things they want to sell us wound invisibly into the strands of our own expressed interests?

This matters, because in tomorrow’s blended reality, the content that reaches us literally defines how we will perceive the world, not just through the screen but everywhere we walk.

Watercooler moments in the future of content

The continuing expansion of the range of content presents challenges to publisher, consumer, and society.

As publishers and brands, how will we find an audience in the future? Part of the answer comes from knowing your audience and accepting the likely limitations on the scale of that audience. The future is a million niches, and many fewer events that unite them. This is why phenomena like The Bodyguard and Game of Thrones attract such a frenzy: cultural events that unite us are increasingly rare. Smaller niches tend to be more protective of their own carefully curated identity, in my experience. Cracking that shell from outside is hard, as Budweiser’s pride experiment showed.

As consumers, will we accept the role of the machine in defining our buying and consuming habits? In many ways, we already have. The lack of resistance to social media bubbles drenched in fake news over recent political cycles has shown that.

As a society, will we accept the decline of the watercooler moment, the cultural phenomena that bring us together? Or will we break out of our personalised worlds to find more shared moments?

On the last question, I am cautiously hopeful. But we need to be conscious of the changes that are happening to the market for content over the next decade. We need to consider what they mean for the future of content, and consider our interventions if we are going to succeed in what could be a very challenging environment, as publishers, as consumers, and as a society.


What Apple’s announcements mean for the future of TV

Apple's big-name content signings have caught all the headlines, but its move into curation is much more important for the future of TV.

Apple is making a move to dominate the future of TV. But it’s not the one you think...

It’s a few months since I stepped back from reviewing gadgets and commenting on general tech stories on the Beeb. Since then, I’ve paid less attention to the occasional slew of press releases that drop into my inbox from Apple. But this morning, as well as talking about the future of work, Julia Hartley-Brewer’s team on TalkRadio asked me to comment on yesterday’s announcements. So, I took a look.

What I saw fascinated me.

For me, Apple’s announcements are not so much about new products or services. They are about the way we navigate the explosion of choice in front of us when it comes to entertainment. In the announcements of Apple TV+, the new Apple TV app, and Apple Arcade, the new ad-free games service, the same words keep coming up: “curated”, “personalised”, “discover”. Apple is catching a lot of headlines for its big-name content signings. But I think its desire, and mission, to insert itself into our decision-making is much more interesting.

Fair warning: there may be some confirmation bias here. I’ve been obsessed for some time with how we navigate the surfeit of choice we now face, writing about the phenomenon of ‘reintermediation’ here, and here. This, to me, is just another example. But it’s important because of Apple’s scale and reach.

Apple swagger

Apple offering new content is undoubtedly important. It doesn’t matter how late you are to the party if you roll up with Apple-scale swagger. But Apple inserting itself into the process by which 1bn people choose content? That’s enormous. If Apple becomes the tastemaker, the front end to all TV services, then it will have incredible power over what we watch. It can diminish the value of the brands behind that discovery engine – brands that right now act as a heuristic for choice. Don’t know what to watch? There’s probably something good on the BBC.

I believe that in the future a lot of our decision-making processes will be augmented by smart machines. Many of us already let our digital music accounts do the choosing with automatically curated playlists. This potentially creates a more open market for content creators: no longer is your success at the whim of a big distributor if you can get found by the right discovery engine. But it also places enormous power with those discovery engines – just as we have already handed Google so much power by making it the primary means by which we navigate the web.

Does Apple own the future of TV?

This move by Apple is a smart one as it transitions to a higher proportion of service-based revenue. It makes Apple ownership stickier, because your preferences are bound to your Apple account. But it also allows Apple to exert that stickiness beyond its own ecosystem, if it can use its new channels as a Trojan horse to get the Apple TV app on to third-party devices and smart TVs.

Don’t get me wrong: I’m as dazzled by Oprah and Spielberg as the next person. But reintermediation is the bigger play here.


The ideology at the heart of the web

The goal for the web is to create a single source of knowledge accessible to all humanity. Every attempt to splinter it undermines this goal. We have a choice to make.

Has the initial ideology at the heart of the web been corrupted?

There are events in the cultural calendar that lead people to call a futurist. For the last 24 hours my phone has been buzzing with researchers for radio stations, wanting a comment on what the next thirty years of the web might look like, on this, the 30th anniversary of Tim Berners-Lee’s proposal for a new scheme for information management.

Berners-Lee has naturally featured at the heart of these conversations, with clips of his chat with Rory Cellan-Jones preceding my own interviews. Berners-Lee is concerned about the fragmentation of the web and internet into often proprietary and less-open segments that stymie the web’s evolved purpose: universal access to knowledge. Whether it’s China’s restricted version of the internet, or the polluted conversation spheres of Facebook and YouTube, these closed rooms are anathema to this ideal. Inside them, access to information is limited, monetised, or otherwise leveraged for control. But there’s something more fundamental about these operations that I think offends the web’s creator, and perhaps should concern all of us.

Each one of these domains can succeed in its less-noble goals because it is in some way closed. Building walls creates a smaller territory that can be more easily controlled. This is great for innovation: if you’re only trying to shape a smaller space you can do so much more quickly. Hence the speed at which Facebook and others can introduce features: “moving fast and breaking things”. The walls give you control over access, locking people in and keeping others out.

Where it gets really pernicious is when you give users the ability to build their own walls. This is something the author Matt Haig pointed out about Twitter recently:

https://twitter.com/matthaig1/status/1105382871412424704

Tribalism is the right word, and it’s not just Twitter. Every attempt to Balkanise the web and its offshoots is effectively an attempt to create, or to support others’ attempts to reinforce, tribal boundaries based on politics, culture, race, or any other factor. These may not be the intentions, but they are definitely counter to the original ideology at the heart of the web: a single, shared, global resource.

Berners-Lee’s work throughout the last thirty years has been about connecting humanity to the web: half the world remains offline. It’s a noble goal because of the inherent value of access to knowledge. But what underpins it is a recognition that we are a single group, in spite of our differences. A global web breaks down barriers rather than builds them. It forces us to confront the fact that we have much more in common than separates us. And it creates a platform for sharing knowledge and truths at a time when that couldn’t be more important, particularly with looming issues like climate change that can only really be tackled with political will underpinned by understanding.

When we consider the next thirty years, even before we get into the media through which we access the web, we have to consider whether we will protect the ideology at the heart of the web. Do we want a single resource for all people, or do we want to create a platform predicated on division?


Convergence is not (only) the future of gaming

Gaming legend Hideo Kojima thinks the worlds of film and gaming will merge through mixed reality. I think that he's right: expect convergence.

Hideo Kojima is a gaming legend. His plans to integrate gaming, film, music and more formed the basis of a quick interview I gave this morning on the sofa at BBC Breakfast.

It’s not a new idea that these different media might converge. In some ways it is happening already: look at the integration across the Marvel Universe, where comic stories weave in and out of games, TV shows and films. Or how film promotion now starts with experiential games seeded around the internet. People have long considered ways to make the cinema experience interactive — a group ‘choose your own adventure’. And the natural conclusion of high-end games is total immersion in an experience of cinematic reality via VR.

But I don’t think this is what Kojima is suggesting. Rather, what I interpret from his few words is that a single, multi-threaded narrative might be explored through multiple forms of media combined in a single entertainment package.

This makes a lot of sense with the convergence of entertainment delivery on a small number of devices: phones, tablets and streaming boxes. With some caveats, and the support of some high-end servers in the background, these devices are capable of delivering anything from a simple page of text to a rich VR experience. Why not utilise this breadth of capability to engage us in many different ways? It’s certainly one answer. But I don’t think this is the biggest opportunity in the future of gaming.

The largest single segment of the gaming market, following years of rapid growth, is mobile gaming. Within that, the largest phenomenon in recent years is Pokemon Go. Though limited, I think this AR experience points to what will be the most popular and pervasive form of gaming.

Lessons for tomorrow

Imagine real life, gamified through the overlay of the physical world with digital sights and sounds. Virtual places, people, objects and creatures that you can interact with as though they were real. We’ve acclimatised to people speaking to themselves on wireless headsets. People running around the streets chasing Pokemon seemed to generate a lot more smiles and goodwill than criticism and questioning. I think we’ll adapt to people playing in the streets in their own virtual world — eventually.

The revenue streams are certainly there to drive such an industry. Imagine an advert you have to interact with to win a game. Imagine that advert is a virtual character with a rounded virtual intelligence. This is a far cry from today’s billboards: this is hyper-targeted, totally personalised, and fully interactive.

Whether you like the sound of that or not, it’s coming.


Facebook Facing the Same Challenge As Every Media Owner: Editorial Integrity vs Advertising Revenue

So Facebook’s results are out and they are largely positive — at least as positive as the previous quarter’s. Which just goes to show how fickle the stock market can be: last quarter Facebook’s shares took a hammering, whereas this time around they’re up 13%.

Personally I’m more interested in the business than the vagaries of the stock market, and particularly in the challenge that Facebook is now facing. Because it appears to me that this very modern business is facing a very old challenge.

Newspapers and magazines can very rarely survive on the cover price alone. So they take in advertising. This advertising sometimes comes in very innocuous forms that can be of great value to the consumer: for example, small ads. Or it can be rather more insidious: poorly flagged ‘advertorials’ for example, adverts masquerading as independent editorial content.

At worst, the wall between editorial independence and advertising revenue can be demolished altogether, and when that happens, the ‘news’ is defined by whoever pays the most money.

Now take a look at Facebook. It may be we who generate the editorial content, rather than a team of journalists, but its business is not that different to that of a newspaper or magazine. More than 80% of its revenue comes from advertising. And as it tries to grow that revenue, it is going to be pushing the boundaries of our editorial independence.

Take for example, Promoted Posts: the ability for advertisers to bump up the visibility of their posts to those who have liked their page — and their friends. This is an explicit manipulation of the feeds we receive from our friends, confusing what might be important/valuable to us, with what someone else wants to be important.

Likewise with mobile: with limited screen real estate, how is Facebook going to insert ads without squeezing them into the streams that we really want to read?

These are my concerns for Facebook, because I think many users are already reaching the limit of their tolerance with the platform. Privacy breaches and unwelcome redesigns have already tested people’s commitment. And in the last couple of years we’ve seen growth rates slow and even temporarily reverse in some territories around the world.

As ever I’m not saying Facebook is going to fail tomorrow. Or that I don’t like the platform: I am one of the billion active users. But I think it has a serious challenge on its hands to sustain its position in the market, and I don’t fancy its chances in the long term.

We’re just not that tied to Facebook. The more it infringes on our editorial control, the more we will move away.


Twitter 'Favorites': A Case Study of Evolving Social Media Etiquette

How do you use Twitter’s ‘favorite’ button? Twitter itself suggests a couple of ways that people can use it here: https://support.twitter.com/articles/20169874-favoriting-a-tweet

“Favorites, represented by a small star icon in a Tweet, are most commonly used when users like a Tweet. Favoriting a Tweet can let the original poster know that you liked their Tweet, or you can save the Tweet for later.”

Personally, I use favourites[1] largely for the latter reason, as part of an attempt to overcome what I still believe is one of the biggest problems on the web: discovery.

I want to know who is talking about issues that are important to me, primarily the four categories we cover: the future human, future cities, future business and future communications. I also want to know about great keynotes (and not just TEDTalks), both as a speaker who is always looking to improve, and as a conference organiser with TMRW, thinking about the next event.

I don’t have time to manually scour Twitter for people talking about these things, so I use an automated tool that finds and favourites tweets containing certain key phrases[2]. This creates a shortlist (or sometimes a long list) of tweets for me (and Mason, my colleague who co-curates the feed) to check out.
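At its core, the matching step such a tool performs is simple substring filtering. Here is a minimal sketch of that idea; the function, phrase list, and tweets below are all my own illustrative inventions, not the actual tool's logic (which isn't public):

```python
def matches_phrases(tweet_text, phrases):
    """Return True if the tweet contains any of the tracked key phrases."""
    text = tweet_text.lower()
    return any(phrase.lower() in text for phrase in phrases)

# Illustrative phrase list; the real list is private and constantly tweaked.
PHRASES = ["future of work", "future cities", "keynote speaker"]

tweets = [
    "Great keynote speaker at today's conference on AI",
    "Lunch was excellent",
]

# Tweets that match any tracked phrase form the shortlist for manual review.
shortlist = [t for t in tweets if matches_phrases(t, PHRASES)]
```

The real service presumably wraps something like this around Twitter's search and favouriting features; the value for us is in the shortlist it produces.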

The phrases we search for are constantly being tweaked but as you can see from looking at the current favourites list, it has a pretty good hit rate of interesting stuff. From it I find new people to follow, interesting articles and things that we might retweet.

We also find abuse. Like this (excuse the language).

Now this person clearly uses favourites in the other way that Twitter suggests. They’re having a very difficult time. I can absolutely see how favouriting a tweet where they were documenting their problems could be seen as offensive — if you assume that by favouriting the tweet I was ‘liking’ their misfortune.

For me and others (I know I’m not alone in this), the favourite has two meanings. It’s not as simple as a Facebook ‘Like’. But we may well be in the minority, to the extent that in the future our usage of the favourite may be not only unrecognised but broadly considered wrong. Maybe that’s the case already?

Either way, this is an interesting little case study of how the meanings of simple gestures in social media can evolve rapidly, how they can be interpreted differently by different people, and how those differences in interpretation can cause offence.

[1] Dropping the quotes and adding a 'u' from this point on.
[2] By the way, I'm not trying to hide the fact that the tool I use to do this is promoted as a marketing tool, that that is how we discovered it, or that it works very well at finding us new followers as well as people to follow. But research is absolutely a key part of its value.

Is Social Media Good Or Bad?

Ah, those wonderful binary choices born of radio phone-ins. I spent this morning defending social media following its shocking abuse in the Criado-Perez affair.

Is social media bad? Of course it isn’t. Normally I’d say something along the lines of “It’s technology. It has no agency. It can’t be inherently good or bad. It’s how it is used.” But I’m not sure that’s entirely true in this case.

For a start, the platforms themselves may be all bits and bytes. But they are operated by companies that most definitely do have an agenda, encapsulated in everything from their user interface design to their usage policies. For a long time these policies led Facebook to justify removing images of breastfeeding while leaving untouched images of domestic abuse. Clearly, while there is no agency in the technology, there is in the people behind it.

There are many examples of the powers of social media being abused, beyond the Criado-Perez threats. There is the daily trolling, the torrent of threats and abuse that regularly seems to arrive at the accounts of prominent women, and the bullying that takes place on more closed social networks like Facebook. For a teenager being victimised, there are few places to hide these days.

But you have to balance all of this against the good that social media does. And not all of this can be put down to people doing good using social media. Some of it is intrinsic to the technology's very concept.

This comes down to democratised, disintermediated, decentralised, distributed publishing and communications. The ability to communicate with one person or many without limitations of cost, state or editorial control. These things are a fundamental part of the architecture of social media. There are weaknesses in this model — the lack of verification for example, or the power put into the hands of those wanting to abuse — but these for me are far outweighed by the good.

The power for communities, political and protest groups to self-organise faster, across greater geographies and without restrictions. The ability for people with shared niche interests to connect across the globe. The chance for families and friends distributed by the nature of our globalised world to stay in close contact. These things are almost immeasurably valuable and in 99% of cases are not abused.

Positive use of social media dramatically outweighs the negative. That's why stories like the threats to Caroline Criado-Perez make the news. Change needs to happen to protect people in her position and prosecute the criminals making those threats. But we have to make sure that the great things about social media are protected when we make those changes.


Social Media and The Hive Mind

Science fiction is full of instances of the ‘hive mind’, a collective consciousness shared across small groups or even entire species of beings. The narrative varies: sometimes they act as one individual through many bodies, and sometimes they have distinct personalities but share their thoughts through some form of telepathic communication.

While we’re a long way from telepathy, you can see the parallels between the latter description and the current generations of heavy social media users. We freely share many of our thoughts across the networks to our friends and family wherever they are. As a result I know what my friends are thinking, what they are doing (or have been doing) and where they are.

Some people are clearly more open than others: I consider myself a fairly heart-on-the-sleeve sort, but I can’t imagine sharing some of the updates (or pictures) posted by some in my network. But I can see that level of broadcast intimacy becoming increasingly the norm. It feels very in sync with the current culture of emoting, present in music, celebrity revelations, lifestyle magazines and newspaper columns.

And as the friction involved in sharing updates becomes lower, I can see us sharing more and more. Some of it will be automated (such as location), much of it will be banal. But it will all contribute to a kind of background hum that gives us an extra sense of what is going on with our connections. We are already developing and using meta analysis tools that will help us sift meaning from this hum: just look at trends on Twitter if you want to see the most popular memes circulating the earth.

Where will this all end? Well, if you take my rosily positive view of the world, it could all work out quite nicely. Tying this flight of fancy back to present-day issues, imagine a global consensus based on the collected thoughts of the species. A grand, global, technologically-inspired version of proportional representation. It would provide an interesting take on democracy.

I just hope our judgement — and our wit — can keep pace with the advances of technology.

[This blog post finishes a train of thought started on the way home from FiveLive last night, talking about the fact that young people are switching from SMS to instant messaging, according to a report from Mobile Youth. My take? Not surprising, given that IM is lower friction and lower cost.]
