UK to ‘do its own thing’ on AI regulation – what could that mean?

14 Jan 25

Jaque Silva | NurPhoto | Getty Images

LONDON – Britain says it wants to do “its own thing” when it comes to regulating artificial intelligence, hinting at a possible divergence from approaches taken by its main Western peers.

“It’s really important that we as the UK do our own thing when it comes to regulation,” Feryal Clark, Britain’s minister for AI and digital government, told CNBC in an interview that aired Tuesday.

She added that the government already has a “good relationship” with AI companies such as OpenAI and Google DeepMind, which have voluntarily opened up their models to the government for safety testing purposes.

“It’s really important that we bake in that safety early on when the models are being developed … and that’s why we’ll be working with the sector on any safety measures that come forward,” Clark added.

Her comments echoed remarks by Prime Minister Keir Starmer on Monday that Britain has “the freedom now in terms of regulation to do it in a way that we think is best for the UK” after Brexit.

“You have different models around the world, you have the EU approach and the US approach – but we have the ability to choose what we think is in our best interests and we intend to do that,” Starmer said in response to a reporter’s question after announcing a 50-point plan to make the UK a global leader in AI.

Divergence from the US and the EU

Until now, Britain has refrained from introducing formal laws to regulate AI, instead deferring to individual regulatory bodies to enforce existing rules for businesses when it comes to the development and use of AI.

This is different from the EU, which has introduced comprehensive, pan-European legislation aimed at harmonizing rules for technology across the bloc by taking a risk-based approach to regulation.

Meanwhile, the US lacks any AI regulation at the federal level, with a patchwork of regulatory frameworks instead adopted at the state and local levels.

During Starmer’s election campaign last year, the Labour Party committed in its manifesto to introducing regulations focused on so-called “frontier” AI models – referring to large language models such as OpenAI’s GPT.

So far, however, the UK has yet to confirm details of the proposed AI safety legislation, instead saying it will consult with industry before proposing formal rules.

“We will work with the sector to develop it and take it forward in line with what we said in our manifesto,” Clark told CNBC.

Chris Mooney, partner and head of commercial at London-based law firm Marriott Harrison, told CNBC that the UK is taking a “wait and see” approach to AI regulation, even as the EU moves forward with its AI Act.

“While the UK government says it has taken a ‘pro-innovation’ approach to AI regulation, our experience of working with clients is that they find the current position uncertain and, therefore, unsatisfactory,” Mooney told CNBC via email.

One area where the Starmer government has signaled plans to reform AI rules is copyright.

Late last year, the UK launched a consultation on revising the country’s copyright framework to assess possible exemptions from existing rules for AI developers who use the works of artists and media publishers to train their models.

Businesses remain uncertain

Sachin Dev Duggal, CEO of London-based artificial intelligence startup Builder.ai, told CNBC that while the government’s AI action plan “shows ambition,” proceeding without clear rules is “borderline reckless.”

“We’ve already missed important regulatory windows twice — first with cloud computing and then with social media,” Duggal said. “We cannot afford to make the same mistake with AI, where the stakes are exponentially higher.”

“UK data is our crown jewel; it should be used to build sovereign AI capabilities and create British success stories, not simply feed overseas algorithms that we cannot effectively monitor or control,” he added.

Details of Labour’s plans for AI legislation were originally expected to appear in King Charles III’s speech at the opening of the UK Parliament last year.

However, the government only committed to creating “appropriate legislation” for the most powerful AI models.

“The UK government needs to provide clarity here,” John Buyers, international head of AI at law firm Osborne Clarke, told CNBC, adding that he has learned from sources that a consultation on formal AI safety laws “is due to be published”.

“By releasing piecemeal consultations and plans, the UK has missed the opportunity to provide a holistic view of where its AI economy is heading,” he said, adding that the failure to reveal details of the new AI safety laws would create uncertainty for investors.

However, some figures in the UK tech scene think a more relaxed and flexible approach to AI regulation may be in order.

“From recent discussions with the government, it’s clear that significant efforts are being made on AI safety,” Russ Shaw, founder of the advocacy group Tech London Advocates, told CNBC.

He added that the UK is well-positioned to adopt a “third way” for AI safety and regulation – “sector-specific” regulation governing industries as diverse as financial services and healthcare.
