On Saturday morning, 2nd Jan, you can see me on Channel 5’s Saturday Show. One of the questions I’ll be asked is ‘What’s big for 2016?’ In case I don’t do a very good job of answering it*, here’s what I hope is a better attempt.
In 2016 it will become clear that the niche trumps the trend.
Fashion is a funny concept. The idea that people of a whole range of shapes and sizes, tastes, cultures, colours and personalities will each season choose to sport the same set of styles? I’m sure you could make sense of it with some pop-evolutionary psychology: that it’s a status marker, or creates a sense of belonging. In fact, a quick Google search throws up a number of academic papers and presentations to this effect.
But in a globally connected age, with access to so many different options and influences, do we all need to dress the same?
Our sense of belonging doesn’t need to be established with the people who are geographically close to us. Or, more likely, we can make do with a smaller group of local peers if we know we’re part of a larger tribe online.
With access to global supply chains, we can source the fashion to suit our style at any time, and suppliers can find a market for products that previously may have been too niche to manufacture economically.
This rule doesn’t just apply to fashion. It applies to music, film, and television. We may all get swept up in Star Wars fever but at home we’re watching a hugely diverse array of programming.
The same rule applies at work as well. The low-friction nature of international, digital commerce seems to be dividing companies into ‘platforms’ and ‘players’. Platforms are horizontally relevant (across industries), high volume service providers that connect and support others: Google, Amazon, Facebook. Everyone else is a player: small, niche companies offering high value in small volumes, to an increasingly global market**.
Fashions and dominant trends, in any sector, are not going away. But I believe their importance will decline as the oracles that we each follow begin to diversify.
* Let’s just say I’ve seen the future…
** This makes the middle ground much harder to occupy, something I will write about more.
Wallman’s book was a response to the growing problem of consumer clutter weighing us down. But I’m not sure how long this is going to be a problem, because much of the stuff we buy is either being digitised, or rented, or is becoming more robust.
Much of the stuff around our homes is media of one form or another: books, films, games, newspapers, magazines. But these things are increasingly served up digitally to a device rather than consumed in a physical medium. The result is lots of digital clutter — a challenge in its own right — but though gigabytes may fill up your hard disk, they do not fill up your home.
Of course even the digital stuff doesn’t become clutter if you rent it rather than own it. You just have the challenge of discovering and accessing the stuff you want, when you want it — a different challenge altogether — as Spotify and Netflix demonstrate (and invest heavily to overcome).
It’s not just media that is increasingly rented: there’s a strong argument that any asset that gets underutilised will increasingly be borrowed rather than owned. Uber is aiming to do this with vehicles, one of the most under-used assets and one that, with the advent of self-driving cars, will be ripe for attacking with an alternative business model.
Our phones are effectively leased, or at least are on hire-purchase. As are many of our other devices these days.
There’s a limit to what we actually need to own.
The End of Obsolescence?
The devices that we do own appear to be lasting longer, not shorter. We may not behave like this culturally, but the reality is that many of our devices remain highly capable long after their prescribed window of value.
I recently bought a ten year old car (see here for my bangernomics practices) that drives like it is new. I have a seven-year-old laptop that is still more than capable for all the things that most people use a PC for. Because so much of the content and services we consume now exist in the cloud, the demands on the devices at the edge are often not that great.
As the business model for homes, cars, and digital devices shifts, the imperative to build in obsolescence starts to disappear. If you are the company financing the purchase, you want the products to maintain their function and hold their value. If you can’t convince a manufacturer to make products that fit your brief, the chances are that you will make your own: the barriers to entry for designing and manufacturing all of these products are falling.
This theory doesn’t address all categories of clutter: clothes, for example. But imagine if we could find a way to incentivise the production of jeans that last a decade. Shirts that don’t succumb to stains or wear and tear.
None of this helps with the jobs apocalypse that seems to be looming. But it may help us to save the planet. And, if Wallman is right, to be a bit happier.
This isn’t another self-help piece, exhorting you to do what you’re passionate about. Good as that advice is, for those privileged people who have a choice, I’m not in the business of motivational speaking.
Rather it’s about why we work.
Probably the most famous study of our motivations is Maslow’s Hierarchy of Needs. It’s a much-criticised model, but its simplicity (diminished over years of refinements) has secured its place in blog posts and PowerPoint presentations around the world for over 70 years. The Hierarchy suggests that we have to fulfil our base motivations — survival — first, and only then can we start working towards the ultimate goal of ‘self-actualisation’, via safety, social and self-esteem needs.
You can replace ‘self-actualisation’ with ‘doing something you’re passionate about’.
If you’re poor and in work you’re less likely to commit crime than if you are richer and out of work.
What this speaks to is the importance of work in the definition and reinforcement of our sense of identity. Our value. Or as Maslow would likely put it, self-esteem.
Being usefully engaged in work gives us an engagement in society, and a place in it. It gives us rewards for a job well done, not just financial but social. If we’re usefully engaged we can stand up straight and be proud. We know who we are and where we stand.
Note the caveat here: ‘useful’ work.
Take bankers, for example. A much-maligned class of worker, and perhaps for good reason. A study last year showed that bankers were more inclined to dishonesty, but only when decisions were placed in the context of their work.
Is this perhaps because much of the work of banking isn’t socially useful?
In the next few decades the total number of jobs available seems likely to decline with increasing automation. Having listened to Carl Benedikt Frey and Michael Osborne expound on their research first-hand, I’m increasingly convinced of this. Listening to Richard and Daniel Susskind talk about the future of the professions only reinforces this view.
Without a radical shift in the way we value work, many of the jobs available to humans are going to remain poorly paid: social workers, care workers, nurses and more. Jobs that are undeniably socially useful and unlikely to be displaced by automation, even though they will be enhanced with technology.
We will have to consider rebalancing this distribution. And we will have to consider what the rest of us will do to remain socially useful.
The risks are high if we don’t. At the extreme, Charles Taylor points out that an ‘identity crisis’ is a major factor in the recruitment of young men to join ISIS/Daesh.
If we don’t find new ways to engage more people — usefully — in work in the coming years, we could find that there are many more dislocated, disenfranchised people finding such causes to take them in.
At the How To Change the World conference this week we heard from a range of speakers who talked in one way or another about the control we will soon have over our own physical development. It included the application of stem cells and other techniques in the regeneration of human tissue and organs — even to defeat ageing. And the use of psychedelic drugs to consciously expand our own thinking and change our brain plasticity to enhance learning. The options are many.
Whether through biology or technology — and frankly the boundaries between the two are blurry, given the importance of quantum physics in both — we are now in control of our own evolution. Natural selection is no longer the force it was. What traits we want to select, we have to choose, or even design. At the conference, Professor Julian Savulescu termed this ‘evolution under reason’, but you could equally call it ‘rational selection’ or ‘engineered evolution’.
This throws up a number of ethical dilemmas, particularly around the prospects for inequality, as today’s debate around gene editing is highlighting.
Assuming we can address those to the satisfaction of most — at least the rational portion — the prospects are rather exciting.
I’ve always been rather squeamish about human modification. Tattoos and piercings are not for me. And no, I’m not interested in the spam adverts for other forms of male enhancement. But there are certainly aspects of my abilities over which I would like greater control. Particularly the mental ones.
Here are three examples that are top of my wishlist.
Like most people, there are particular times of the day when I am at my best. The exact hours change between summer and winter, but it’s always first thing in the morning. It’s not always possible, or desirable, to be at my desk by 7. And if I miss my window, which may only be three or four hours, then my day can be deeply unproductive. I might still plough through some expenses or achieve the rare feat of clearing my inbox, but I likely won’t create anything, and that’s largely what I get paid for.
There are other periods in the day when I get bursts of creativity, but these are less predictable. Even the usual methods of seeking distraction, or inspiration, or just letting my brain freewheel on a walk, often don’t give me more than a few minutes of renewed focus.
But what if I could turn this mind state on and off, with a switch or a pill? What could I achieve then?
There are a few options for this today. I could try drugs like Adderall and Ritalin, but these are illegal without prescription and have serious potential side effects. Similar drugs pop up as ‘legal highs’, but these carry all the same risks and more. If I were going to pop a pill I’d want it to be very well tested and regulated.
I could also try Transcranial Direct Current Stimulation, or tDCS, an increasingly popular alternative to drugs for DIY brain hackers. But again, the science on this is in its early days. While there are enthusiastic proponents, my natural scepticism leads me to want some solid trials before I start to experiment.
The answers aren’t there yet, but there are clear opportunities.
Computers have a neat way to deal with a shortage of short-term memory. They dump a chunk of it into long-term memory and then retrieve it when it’s needed.
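That swap-out, swap-in mechanic is essentially a least-recently-used cache backed by a bigger, slower store. Here is a toy Python sketch of the idea (my own illustration, not how any real operating system implements paging; the class and attribute names are mine):

```python
from collections import OrderedDict

class SwappingCache:
    """Toy model of memory swapping: a small, fast 'short-term' store
    that evicts its least-recently-used entry to a larger, slower
    'long-term' store, and pulls it back on demand."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self.short_term = OrderedDict()  # fast but limited, like RAM
        self.long_term = {}              # slow but roomy, like disk

    def put(self, key, value):
        self.short_term[key] = value
        self.short_term.move_to_end(key)  # mark as most recently used
        if len(self.short_term) > self.capacity:
            # Evict the least-recently-used entry: "swap out" to disk
            old_key, old_value = self.short_term.popitem(last=False)
            self.long_term[old_key] = old_value

    def get(self, key):
        if key not in self.short_term:
            # "Page fault": fetch it back from the long-term store
            self.put(key, self.long_term.pop(key))
        self.short_term.move_to_end(key)
        return self.short_term[key]
```

Here the OrderedDict plays the part of RAM and a plain dict the part of disk; a real OS shuffles fixed-size pages of memory rather than keys and values, but the shape of the trade-off is the same.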
Humans do something similar. Some can do this with their own minds, with pretty reliable recall. I am not one of those people. Instead I rely on tools: notebooks, apps, my calendar, photos.
I once tried to replicate the computer’s process more precisely. I maintained what I grandly called a ‘livepad’. A single cloud-stored document, always open, on which I could record notes, ideas, my todo list, unfinished blog posts. It worked for a while but my limited interface to it (the keyboard), unreliable connectivity, and simple lack of discipline meant that I dropped it after a while.
Imagine something similar, with a better interface, and a level of intelligence to it. A place where you could record ideas that could be replayed back to you at the right time. The added intelligence in the pad may even help you to find coherence and commonality in those ideas, as well as assisting you with more mundane tasks, like remembering where to be and when.
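A crude, unintelligent version of that idea fits in a few lines of Python. This is purely hypothetical (the LivePad name and its behaviour are my own invention): notes are recorded with an optional replay time and surfaced only once that time arrives.

```python
import datetime

class LivePad:
    """Hypothetical sketch of a 'livepad': a note store that replays
    time-tagged notes back to you when their moment comes."""

    def __init__(self):
        self.notes = []  # list of (replay_at, text); replay_at may be None

    def record(self, text, replay_at=None):
        """Record a note, optionally tagged with a time to resurface it."""
        self.notes.append((replay_at, text))

    def replay(self, now=None):
        """Return notes whose replay time has arrived, removing them
        from the pad; undated notes stay put."""
        now = now or datetime.datetime.now()
        due = [text for when, text in self.notes if when and when <= now]
        self.notes = [(w, t) for w, t in self.notes
                      if not (w and w <= now)]
        return due
```

The intelligence imagined above would sit on top of something like this: clustering the undated notes, spotting common themes, and choosing better replay moments than a fixed timestamp.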
High Bandwidth Interface
I think in words, more than pictures. Language is my preferred interface, and the way that I record and share language most frequently is via the keyboard.
The keyboard has proven to have incredible longevity. It is perhaps three hundred years old, based on the earliest patents. But it has limitations. I can only communicate words with it (for the most part). It is not that fast — certainly not in my hands.
I could try to learn to touch type, but even then I am limited to a relatively cumbersome interface. I can’t capture my thoughts on the move (though I do a decent job of writing blogs with my thumb while travelling on packed Tube trains). I could use a voice interface, but this isn’t exactly private and could be very annoying for those around me: I talk loud.
Instead I want the words to flow straight from my brain to the page, or the storage system.
This is some way off unfortunately. Though we are reaching the point where we can control artificial limbs with thoughts, the understanding of the brain on which this incredible achievement is based remains limited. For all our comprehension it is still largely a black box to us.
Evolution in Our Control
These are examples of what we might be able to add to human physiology in the years ahead. Even the drugs could be added to new glands as they are in Iain M Banks’ Culture novels. But they are elective and trivial compared to some of the choices we will have to make soon. We will have the capability to eliminate some genetically-carried diseases by selectively editing people’s genomes.
With that sort of power in our hands, we all need to think about the implications*.
* If you want to make a start, you could do worse than to watch this video from Professor Julian Savulescu:
In a recent LSE lecture, the historian Ian Morris noted that the greatest progress in human history often happens when civilisations bump up against each other. They might exchange slings and arrows, or bombs and bullets, or for that matter, bacteria and viruses. But they also exchange goods, ideas, foods, culture and technologies.
If this is true then you could argue that one of the reasons for our current accelerated rate of progress is the now constant overlapping of civilisations. We interact with others from around the world at multiple levels.
This globalisation has positives and negatives that have been well documented and oft-debated. But one that seems to have become accepted is that a natural result of globalisation is the domination of a small number of ideas — most notably in the form of brands.
You could point to any high street around the world as evidence for this. Familiar brands populate the prime spots.
But for me these brands are relics of the pre-Internet age. It takes a long time to build up the scale and reach to place a branch of McDonalds or Zara in so many locations. In the low-friction digital environment, companies might achieve this scale much more quickly but that presence will be much less durable.
I’ve long been a sceptic about the durability of companies like Facebook, and I remain so. But even if they do sustain, it’s somewhat irrelevant to the wider point: systems like Facebook are platforms over which others’ ideas are shared, more than ideas in their own right. The ideas they allow to be shared are more diverse, numerous and visible than ever — not singular and homogenised, as previous iterations of globalisation might suggest.
Just take a look at the diversity of topics that form tumblrs, the array of themes on deviantart, or the bewildering range of conspiracy theories. All of which can be expressed and find an audience like never before.
These ideas are competing. And we can only hope that some of them lose that competition, whether it’s fundamentalist terrorism or the misogyny of trolls.
But if and when they do lose, I don’t believe the diversity of ideas will diminish. The more rapid integration of global ideas may help us to find consensus in some key areas. But such is the freedom we now have, to access knowledge and to synthesise and express new ideas based on what we have found, that I think the diversity of ideas will continue to grow, not shrink, for some time to come. While some ideas are defeated, or at least returned to a small minority of minds, others will continue to co-exist, and still more will be introduced.
The hive mind that the Internet has created is not a recipe for homogenisation, as earlier forms of globalisation may have been. It is a commons, a space in which many ideas — creeds, brands, behaviours, interests, cultures — can and will co-exist.