To legislate or not to legislate? How EU and UK differ in their approach to AI


The boom of artificial intelligence has spurred a regulatory frenzy across the globe — and Europe is at the forefront of the developments.

Both the EU and the UK are attempting to find the elusive balance between leveraging AI’s growth and mitigating potential risks — but their approaches differ significantly.

The former has opted for a hands-on, risk-based approach, whereas the latter has promised a “pro-innovation” stance. However, with news emerging that the UK government is now drafting new rules to regulate the tech, this could be about to change. 

The EU’s stricter, more cautious approach is clearly seen in the AI Act, the world’s first comprehensive law on artificial intelligence.

The Act’s top-down and horizontal approach sets clear obligations for compliance across all applications and sectors. It also establishes the European AI Office, which will oversee the law’s implementation.

In contrast, the UK hasn’t opted for bespoke legislation so far. Instead, it has proposed a guidance framework for existing regulatory bodies. To support this approach, the government has pledged £10mn to prepare and upskill regulators so they can evaluate the opportunities and risks associated with the technology.

The UK is also following a vertical strategy, aiming to evaluate risks sector by sector based on five principles:

  • Safety, security, and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

“By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely,” Michelle Donelan, Secretary of State for Science, Innovation, and Technology, said in February.

The EU’s risk-based approach

EU lawmakers, on the other hand, believe that a risk-based approach and the push for trustworthy AI will unlock Europe’s competitive advantage.

The bloc’s rules target AI systems based on risk levels: unacceptable, high, limited, and minimal risk. AI tools that threaten safety and human rights, such as social scoring, are considered unacceptable, and are therefore banned altogether.

For the remaining categories, the concept is simple: the higher the risk, the tougher the rules. High-risk systems, for example, include AI technologies that can be used in law enforcement, healthcare, and critical infrastructure.

The AI Act applies to all AI companies (within or outside the EU) doing business in the bloc. Rule violations can lead to fines of up to 7% of a company’s global turnover.

In contrast, the UK is currently targeting voluntary agreements on AI safety with key companies and countries.

To legislate or not to legislate?

Critics have expressed concerns over the AI Act, fearing that the strict rules could impede innovation. European companies have also raised objections, warning that the EU could lose its competitiveness in the field.

But the UK might also be diverging from its pro-innovation, laissez-faire strategy before long.

The government is starting to draft its own legislation on artificial intelligence, people familiar with the matter told the Financial Times. The regulation would most likely limit the development of Large Language Models (LLMs) and require companies building advanced AI to share their algorithms with the government.

Related concerns range from potential misuse to market manipulation.

“The essential challenge we face is how to harness this immensely exciting technology for the benefit of all, while safeguarding against potential exploitation of market power and unintended consequences,” said Sarah Cardell, CEO of the UK’s Competition and Markets Authority (CMA).
