UK to ‘do its own thing’ on AI regulation: what could that mean?
LONDON – The United Kingdom says it wants to do “its own thing” when it comes to regulating artificial intelligence, hinting at a possible departure from the approach taken by major Western partners.
“It’s really important that we as the U.K. do our part in terms of regulation,” Feryal Clark, the U.K.’s minister for AI and digital government, told CNBC in an interview that aired Tuesday.
She added that the government already has a “good relationship” with AI companies such as OpenAI and Google DeepMind, which have voluntarily opened up their models to the government for security testing.
“It’s really important that we get involved in that safety right at the beginning when the models are being developed … and so we will work with the sector on any safety measures that come up,” Clark added.
Her comments echoed Prime Minister Keir Starmer’s remarks on Monday that Britain has “the freedom now in terms of regulation to do it in a way that we think is best for the UK” after Brexit.
“You have different models around the world, you have the EU approach and the US approach — but we have the ability to choose the one that we think is in our best interests and that’s what we intend to do,” Starmer said in response to a question from reporters after the publication of a 50-point plan to make the UK a global leader in artificial intelligence.
Divergence from the US and the EU
Until now, Britain has refrained from introducing formal laws to regulate artificial intelligence, instead leaving individual regulators to apply existing rules to businesses developing and using the technology.
This differs from the EU, which has introduced comprehensive, pan-European legislation aimed at harmonizing technology rules across the bloc using a risk-based approach to regulation.
The US, meanwhile, lacks any federal-level regulation of artificial intelligence and has instead adopted a patchwork of regulatory frameworks at the state and local levels.
During Starmer’s election campaign last year, the Labour Party pledged in its manifesto to introduce regulations focusing on so-called “frontier” AI models — referring to large language models such as OpenAI’s GPT.
However, so far the UK has yet to confirm details of the proposed AI safety legislation, instead saying it will consult with industry before proposing formal rules.
“We will work with the sector to develop and advance this in line with what we said in our manifesto,” Clark told CNBC.
Chris Mooney, partner and head of commercial at London law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to AI regulation even as the EU moves forward with its AI Act.
“While the UK government says it has taken a ‘pro-innovation’ approach to AI regulation, our experience working with clients is that they find the current position uncertain and therefore unsatisfactory,” Mooney told CNBC via email.
One area where the Starmer government has talked about reforming AI rules is copyright.
At the end of last year, Britain opened a consultation on revising the country’s copyright framework, evaluating possible exceptions to existing rules for AI developers who use the works of artists and media publishers to train their models.
Businesses remain uncertain
Sachin Dev Duggal, CEO of London-based AI startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” proceeding without clear rules is “borderline reckless.”
“We’ve already missed key regulatory periods twice — first with cloud computing and then with social media,” Duggal said. “We can’t afford to make the same mistake with artificial intelligence, where the stakes are exponentially higher.”
“UK data is our crown jewel; it should be used to build sovereign AI capabilities and create UK success stories, not just fuel overseas algorithms that we cannot effectively regulate or control,” he added.
Details of Labour’s plans for AI legislation were initially expected to appear in King Charles III’s speech opening the British Parliament last year.
However, the government has only committed to establishing “appropriate legislation” on the most powerful AI models.
“The UK government needs to provide clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding that he had learned from sources that a consultation on official AI safety laws was “awaiting publication.”
“By releasing consultations and plans on a piecemeal basis, the UK has missed an opportunity to provide a holistic view of where its AI economy is going,” he said, adding that withholding details of new AI safety laws would lead to investor uncertainty.
Still, some people in the UK tech scene feel that a more relaxed, flexible approach to AI regulation could be right.
“From recent conversations with the government, it’s clear that there is a significant effort underway on AI safety,” Russ Shaw, founder of the advocacy group Tech London Advocates, told CNBC.
He added that the UK is well placed to adopt a “third way” on AI safety and regulation — “sector-specific” regulations covering different industries such as financial services and healthcare.