
Innovation and regulation - are they going to get the balance right for UK AI?

Thursday, 21 March 2024.

In February this year, the UK Government published its response to a consultation on the regulation of AI. The essential question was: "How pro-innovation are we going to be here in the UK, and does that require new legislation?"

Life sciences will be a key sector for implementing AI technology and mitigating (one way or another) the risks associated with that technology.

Before we look at the sector, it is worth thinking about the timeline of recent (2023 onwards) AI developments:

These are cross-sector policy and policy-influencing documents and events. The most recent of them gives an indication of how the life sciences sector is regarded and what developments are likely in the near future.

In its response, the Government states:  

"The world is on the cusp of an extraordinary new era driven by advances in Artificial Intelligence (AI). I see the rapid improvements in AI capabilities as a once-in-a-generation opportunity for the British people to revolutionise our public services for the better and to deliver real, tangible, long-term results for our country."

In keeping with this, the Government has made further strong statements about the UK being a safe and innovative place for AI, and a thought leader in that respect, with a firmly pro-innovation stance.

Life sciences aspects

The consultation response refers to several important uses and developments relating to AI in the life sciences sector.

As an example of a UK regulator leading the way, it refers to the MHRA's publication 'Software and AI as a Medical Device Change Programme 2021'.   

Advances in healthcare are expected. The Government chose the example of AI-driven diagnostics and, in particular, mentions DHSC funding for high-demand areas such as chest X-rays and CT scans.

Discrimination in healthcare is a concern. The response refers to a hypothetical situation in which discriminatory practices could arise as a result of an AI system being used to underpin complex healthcare processes. That situation could involve bias in the output of the AI model, a lack of transparency, and inadequate measures to mitigate the effects of the bias.

The MHRA is applauded for progressing its plans in ways that align with the AI principles published in the Policy Paper, particularly in its guidance on software and AI as a medical device.

Acknowledging the high degree of potential risk in the medical device sector, and the difficulties of collecting real-world evidence, the Government reported that an AI sandbox for medical devices was considered likely to be beneficial.

In the context of highly capable general-purpose AI systems, the response considers whether existing laws are sufficient. The concern is that, due to their general nature and the array of downstream uses, there could be certain products for which the risks remain unmitigated. However, the Government points out that, in relation to medical devices, the MHRA has existing powers under the Medicines and Medical Devices Act 2021 to hold manufacturers to account.

MHRA due to report

The Government reported that many respondents were concerned about the UK rushing to legislate. One suggestion, in particular, was that regulators could be made subject to a statutory duty to have regard to the AI principles in the Policy Paper. The Government has promised not to rush into new legislation, but it will assess that suggestion. Broadly, its view is that a non-statutory approach is preferable because it offers critical adaptability.

However, the Government has asked regulators to help. In response to that request, the MHRA is due to publish an update on its strategic approach to AI by 30 April 2024 (about six weeks before our PING conference on AI). The Government has also encouraged regulators to publish action plans to help drive transparency, and it has made clear that it is regularly reassessing its prioritisation of regulators. For the reasons above, and many others, the life sciences regulator should be a top priority for the Government.

Is legislation coming?

Legislation is an option. Indeed, it is tempting to ask why we do not already have pro-innovation, safety-conscious, world-leading legislation. Some of the consultation respondents must have thought the same.

The Government acknowledges that legislation will be required, but it wants to avoid making mistakes in any rush to legislate. It also prefers to remain adaptable while it establishes its approach and develops its understanding of risk.

According to its response, before legislating the Government will consider whether effective voluntary measures are available, whether risks can be mitigated under existing powers, and whether legislation would have adverse effects on innovation and competition. In short, it is unlikely that the UK will have specific AI legislation any time soon.

In the EU, the AI Act is due to become law in April 2024. AI systems in medical devices placed on the EU market, which are set to be categorised as 'high-risk' systems, will then have 36 months to start complying. No doubt the UK Government will be watching carefully and calculating how to be perceived as more 'pro-innovation' than the EU.

General medical device legislation is coming, and there will be opportunities to deal with AI-related issues in that legislation. But the indication so far is that it will not address them.


Are you working on your AI strategy and trying to plan for the coming changes? Contact Harry Jennings in our Pharmaceuticals and Life sciences team on 07789 533 122 to discuss this important topic and get your business ready for the legal impact of this technological revolution.