
AI and the Healthcare Sector - Changing Times

Published on Tuesday, 01 August 2023.

With the entry into the market of an array of third party products powered by GPT, and with these AI models appearing to be on a very steep development curve, AI looks set to rapidly transform the way many of us live and work.

ChatGPT was launched in November 2022 by the organisation OpenAI and has subsequently become synonymous with artificial intelligence (AI) for the general public, and the use of this tool has become commonplace. Since its launch, the product has been upgraded to use the much more powerful GPT-4 engine, and ChatGPT, along with similar products launched by competitors, is reportedly being used by innovative professionals across a variety of industries.

The capabilities of GPT and similar products - both current capabilities and those projected to be with us soon - present tremendous opportunities, but they come with risks and challenges. VWV's healthcare sector specialists have been considering some of the potential impacts of AI on the healthcare sector. This article touches on some of the areas where healthcare professionals might need to respond to new legal, regulatory and contractual challenges.

GPT (and Similar AI) For the Uninitiated

The AI which has generated such interest in the last few months has come in the form of OpenAI's ChatGPT (other models have been developed by other organisations, and we refer in this article to GPT as shorthand for all of these models). Users are able to engage in conversational exchanges with ChatGPT, prompting it to provide answers to even the most complex of questions. For end users of the healthcare system, ChatGPT and GPT-4 can be prompted to offer general lifestyle and health advice, manage administrative tasks forming part of health maintenance and - as healthcare providers are increasingly aware - offer medical advice.

GPT combines a vast set of data (both from the internet and elsewhere) with a tool which is able to mimic human conversation.  The tool works as a statistical model which is trained to predict what a reasonable human might write based on the data which GPT has access to.  The output from this tool can be remarkable, producing in seconds what it might take a human hours to generate.  The output often contains a high degree of accuracy, and what appears to be reasoned argument.  But the output can also sometimes be fallible, confidently stating falsehoods as fact. The nuances and medical knowledge gleaned from years of experience cannot be artificially recreated (yet).

There are limitations in terms of the accuracy and usefulness of the output that GPT produces. It is very important that it is 'handled with care', and that those limitations are recognised when using it. But the potential of this tool seems huge. Its accuracy is increasing, further, more powerful iterations of GPT seem inevitable, and its limitations seem likely to be eroded with those further iterations.

Use Cases for Healthcare Providers

ChatGPT is the interface with the GPT model which has caught the attention of the general public. But the engine behind it - GPT-4 - has been made available to third party product developers for use in their own applications. Whilst ChatGPT (and Microsoft's Bing) are the shop windows for GPT, it is likely to be those bespoke applications - designed for specific sectors and use cases and powered by GPT or other AI models - which will have the most transformative effect on our daily work and lives.

The NHS has embraced technology and created its own mobile application (app) which allows users to order repeat prescriptions, book or manage appointments, view their GP health records and register their organ donation decision.

ChatGPT has suggested the following use cases which the healthcare sector might expect to see in the near future - there will be many others.  However, each of these use cases carry with them risks and challenges, as well as opportunities, which healthcare providers will need to navigate.

  • Health Education: ChatGPT suggests that it can be used as a virtual health assistant to provide patients with reliable information about various health topics, preventive measures, lifestyle advice, medication instructions, and other general healthcare enquiries. It is very important to consider how accurate the information given to patients will be, and for healthcare providers to decide if they wish to endorse such a tool.
  • Patient triage: AI language models can assist in the initial assessment of patients by providing information and guidance based on their symptoms. When booking an appointment using the NHS App, users answer questions on symptoms they are experiencing, following which the app determines the urgency of their condition and directs patients to appropriate care pathways.
  • Follow-up Care: AI can offer post-visit support by answering non-urgent questions, providing guidance on self-care, and explaining test results or treatment plans. This can help alleviate patient concerns and ensure they have the necessary information for their recovery.
  • Appointment Scheduling: Again, the NHS App is already demonstrating a proposed use case for AI, which can be used to reduce the administrative burden on primary care staff. GPT and other systems can integrate with existing software to check availability and provide suitable time slots.
  • Administrative Tasks: AI is already used to streamline administrative processes across a range of businesses. In the healthcare sector, tasks such as updating patient records are likely to be increasingly supported by AI, saving time and resources for practitioners.
  • Management of Real Estate Assets: Property maintenance and repairs can be managed by AI, which can also help partnerships evaluate the financial performance of their real estate portfolio and provide administrative assistance to support healthcare landlords and tenants in complying with the obligations of their leases.

Risks and Challenges

As with all new developments, the rapid progress and uptake of AI-powered tools presents exciting opportunities for healthcare providers, but it introduces profound challenges too.

As detailed above, GPT mimics human conversation and assimilates a huge quantity of training data into a brief summary which can be acted upon accordingly. However, the data upon which the GPT output is based could be incorrect or only partially correct. Where a human practitioner can distinguish between accurate and misleading information, AI may struggle to do so. In the healthcare sector particularly, given the vast quantity of information people share virtually regarding health and lifestyle choices, there is a huge potential for misinformation to be presented as the truth, which could be particularly problematic if passed on to a vulnerable patient.

Along with issues surrounding accuracy, there will of course be legal and regulatory challenges, in an environment where the law and regulation may well struggle to keep up with the speed of development of these products.  But more broadly, healthcare providers will need to think about how they want to engage with these tools. 

On the one hand, there are potentially very real efficiency, cost and quality benefits in automating (or using AI to support) some roles and processes which could previously only be done by individuals. On the other hand, in the healthcare sector there is also an irreplaceable value in the 'human interface' which AI cannot offer. Regulation aside, the decision as to how healthcare providers choose to strike that balance is likely to reflect the culture, communities and priorities of each organisation.

Where the decision is made to engage with these tools, organisations will need to think carefully about the terms on which they do so, ensuring that AI contracts provide appropriate levels of protection and accountability.  Other challenges will be revealed as the use of these technologies broadens, but as well as those outlined above, they are certain to include some of the following:

  • Data Privacy and Security: The use of AI tools by healthcare providers - for example in connection with diagnoses, recruitment, managing appointments, etc - will often require the sharing and processing of personal data. We have advised cutting-edge AI businesses on data privacy and security issues, and anticipate that we will increasingly be asked to support healthcare providers in managing data privacy and compliance risks associated with their use of AI.
  • Accuracy and Reliability: ChatGPT's responses are generated based on patterns learned from vast amounts of training data. While it can provide helpful information, there is a possibility of errors, biases or incorrect information being generated. It is essential to thoroughly validate and monitor the model's performance to ensure the accuracy and reliability of the responses provided to patients. The nuance and care given to patients by healthcare professionals cannot be replaced by technology.
  • Informed Consent and Transparency: Patients need to be made aware that they are interacting with an AI system and understand its capabilities and limitations. Transparency on these points is crucial to managing patient expectations.
  • Liability and Accountability: Determining the accountability and liability in cases where the AI system provides incorrect or misleading information can be challenging. Potential risks should be considered, and protocols established to determine responsibility between the healthcare provider and the AI system provider.
  • Continuity of Care: While AI language models can provide assistance, they should not hinder the patient-doctor relationship. Patients, particularly those with additional vulnerabilities, will require access to human healthcare providers.
  • Digital Divide: The launch of ChatGPT and similar products, freely available to the public, will go some way to democratising access to AI. However, beyond those publicly available tools, it seems likely that unequal access to AI technologies will exacerbate existing inequalities in healthcare. There seems to us to be a risk of this disadvantaging patients from lower socio-economic backgrounds.
  • Ethical Considerations: The adoption of AI is likely to raise ethical questions around transparency, fairness, and accountability. We anticipate that institutions will look to establish robust governance frameworks to ensure responsible AI use.

AI technologies like GPT-4 have the potential to transform the healthcare landscape. However, the potential benefits come with a set of risks and challenges that healthcare providers and professionals must engage with to ensure a responsible and ethical adoption of AI. We will be developing the thoughts above (and other thoughts of our own) in future articles on some of the challenges and opportunities that AI presents for the healthcare sector.


If you are considering using AI to support your healthcare practice and would like some advice on the legal implications of this, please contact Ben Willis in our Healthcare team on 0117 314 5394, or complete the form below.
