The AI Opportunities Action Plan
What does the UK government’s new AI Opportunities Action Plan actually mean?
Not unusually, I find myself having to answer this question in a bit more depth than I might otherwise have done, and at short notice.
As I write this there is a little over three hours until I appear on national radio, and a little over five before I appear on national television. And Labour’s new announcement is the subject. What better way to work out what I think than to write a blog post about it and share the process with you?
Doing the reading
The starting point? What has been said. Thankfully there is a press release to work from on the government website. And a link to an earlier report, whose recommendations have been accepted in full and which I had at least scan-read before.
Here’s my précis of the plan’s three parts:
1. Infrastructure: If we’re going to be good at AI we need data centres and compute power. Putting this section up front means we can also shout about big numbers, because there is private sector money (£14bn) going into building this infrastructure. This is clearly useful and important, but given demand there’s a good chance all that money would have been spent anyway. That said, accelerating planning approvals and improving grid access for new infrastructure in the so-called AI Growth Zones should help.
2. Adoption: The government is creating a new unit inside the Department for Science, Innovation and Technology to pilot new AI ideas for the public sector. And departments across the rest of government are being encouraged to drive adoption in their respective sectors.
3. Leadership: The least detailed part of both the announcement and the report is how we ensure the UK is an “AI maker, not just an AI taker”. The short version: balanced regulation, business/research collaboration, and government support.
To Regulate or to Accelerate?
You might notice this is all a rather marked departure from the previous government’s approach to AI, which was a combination of ‘can I get a photo op with the techbros?’ and ‘SKYNET IS COMING’. I paraphrase, but it was very much a regulation/fear-first approach.
Regulation is clearly important, but I think the different approach being taken here is as much about the maturation of the AI story as it is about a different set of political priorities. Generative AI - the type that’s making most of the headlines - may sound intelligent some of the time. But it’s not going to try to take over the world. Believing it will is a foundational misunderstanding of the nature of the technology.
That obviously doesn’t mean we should take a laissez-faire approach. Gen AI has huge implications for IP rights, fraud, jobs, and more. But the highest-priority regulatory controls should be based on current reality, not science fiction.
Boosting adoption
The most exciting part of the plan for me is the section on boosting adoption. This perhaps sounds a little woolly, but if the government is genuine about adopting the report in full, there are some really detailed recommendations that could make a big difference. For example, forcing all government functions to expose their services, inputs and outputs through an API. As I wrote in High Frequency Change, this approach has been the foundation of Amazon’s agility. And in Future-Proof Your Business I detailed how to structure an organisation along these lines, in a framework I called Stratification.
Modularising government functions like this, and exposing their services and data to other government departments, the third sector, and private enterprise, could be an enormous source of both cost savings and innovation-led growth. Even better if it can be extended down to local government.
Just think about the amount of data re-entry and service replication that exists across the entire public estate. Now imagine being able to search for and access existing functions and data with minimal effort. Stick AI capabilities over the top of that and you might have a recipe not just for greater productivity but also for greater transparency.
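To make the “expose services, inputs and outputs through an API” idea a little more concrete, here is a minimal sketch of what one modularised government function could look like. Everything in it is illustrative and mine, not the report’s: the eligibility service, the scheme names and the fields are all hypothetical, and I’ve used Python with FastAPI purely because it keeps the example short.

```python
# A minimal, hypothetical sketch of "exposing a government function through an API".
# The service, scheme names and fields are invented for illustration only.
# Requires: pip install fastapi uvicorn

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Hypothetical Business Support Eligibility Service")

# Illustrative stand-in for data a department already holds internally.
GRANT_SCHEMES = {
    "small-business-energy": {"max_employees": 49, "open": True},
    "rural-connectivity": {"max_employees": 249, "open": False},
}

class EligibilityQuery(BaseModel):
    scheme_id: str
    employees: int

class EligibilityResult(BaseModel):
    scheme_id: str
    eligible: bool
    reason: str

@app.get("/schemes")
def list_schemes() -> dict:
    """Expose the function's outputs: which schemes exist and their status."""
    return GRANT_SCHEMES

@app.post("/eligibility", response_model=EligibilityResult)
def check_eligibility(query: EligibilityQuery) -> EligibilityResult:
    """Expose the function's service: answer a question another body would
    otherwise have to reimplement, or request by email and re-key."""
    scheme = GRANT_SCHEMES.get(query.scheme_id)
    if scheme is None:
        raise HTTPException(status_code=404, detail="Unknown scheme")
    if not scheme["open"]:
        return EligibilityResult(scheme_id=query.scheme_id, eligible=False,
                                 reason="Scheme is not currently open")
    if query.employees > scheme["max_employees"]:
        return EligibilityResult(scheme_id=query.scheme_id, eligible=False,
                                 reason="Business exceeds the employee threshold")
    return EligibilityResult(scheme_id=query.scheme_id, eligible=True,
                             reason="Meets the published criteria")
```

The point isn’t this particular service. It’s that once every function is reachable in this way, another department, a council, or indeed an AI agent acting on someone’s behalf can discover and call it programmatically, rather than duplicating the data entry and the logic behind it.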
Treating AI as a tool
I’m sceptical of all big government announcements, and especially so where tech is concerned. But I’m largely in favour of this one. AI is fundamentally a productivity tool, and we know that across both the public and private sectors we have a productivity problem. In this context, I’d rather have a government aggressively engaging with the challenge and the opportunity, across both adoption and regulation, than one that just stands back.
How effective the plan will be remains to be seen, as does how popular it will be. Whether you think the impact is structural or temporary, AI is going to take work from people in the cognitive domain and, through humanoid robots and self-driving vehicles, in the physical domain too. Some groups might find they’re not too happy with a government cheerleading this change.
But barring a policy of complete rejection, that change is going to happen. The only question is whether we’re riding the wave or getting swept away by it. And while I’ve never thought of Keir Starmer as a surfer, this is a valiant effort at the former.