ChatGPT was launched in November 2022 by OpenAI and has since become synonymous with artificial intelligence (AI) for the general public; use of the tool is now commonplace. Since its launch last year, the product has been upgraded to run on the much more powerful GPT-4 engine, and ChatGPT, alongside similar products launched by competitors, is reportedly being used by innovative professionals across a variety of industries.
The capabilities of GPT and similar products - both those available now and those projected to arrive soon - present tremendous opportunities, but they come with risks and challenges. VWV's healthcare sector specialists have been considering some of the potential impacts of AI on the healthcare sector. This article touches on some of the areas where healthcare professionals may need to respond to new legal, regulatory and contractual challenges.
The AI which has generated such interest in the last few months has come in the form of OpenAI's ChatGPT (other models have been developed by other organisations, and we refer in this article to GPT as a shorthand for all of these models). Users are able to engage in conversational exchanges with ChatGPT, prompting it to provide answers to even the most complex of questions. For end users of the healthcare system, ChatGPT and GPT-4 can be prompted to offer general lifestyle and health advice, manage administrative tasks that form part of health maintenance and - as healthcare providers are increasingly aware - offer medical advice.
GPT combines a vast set of data (drawn from the internet and elsewhere) with a tool able to mimic human conversation. It works as a statistical model trained to predict what a reasonable human might write, based on the data GPT has access to. The output can be remarkable, producing in seconds what might take a human hours to generate. It is often highly accurate and appears to contain reasoned argument. But it can also be fallible, confidently stating falsehoods as fact. The nuance and medical knowledge gleaned from years of experience cannot (yet) be artificially recreated.
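The statistical idea behind 'predicting what a reasonable human might write' can be illustrated with a deliberately simplified sketch. The code below is a toy bigram word counter of our own devising, not anything resembling GPT's actual architecture: it predicts the next word as the one that most often followed the current word in its training text, whereas GPT does something conceptually similar at vastly greater scale.

```python
from collections import Counter, defaultdict

# Toy training text (invented for illustration only).
training_text = (
    "the patient reported mild symptoms and the patient was advised "
    "to rest and the doctor reviewed the patient notes"
)

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen following the given word."""
    return counts[word].most_common(1)[0][0]

model = train_bigram(training_text)
print(predict_next(model, "the"))  # "patient" follows "the" most often here
```

The sketch also shows where the fallibility comes from: the model repeats whatever patterns dominate its training data, with no notion of whether those patterns are true.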
There are limitations to the accuracy and usefulness of the output GPT produces. It is very important that the tool is 'handled with care', and that those limitations are recognised when using it. But its potential seems huge: its accuracy is increasing, further, more powerful iterations of GPT seem inevitable, and its limitations seem likely to be eroded with each of them.
ChatGPT is the interface to the GPT model which has caught the attention of the general public. But the engine behind it - GPT-4 - has also been made available to third party product developers for use in their own applications. Whilst ChatGPT (and Microsoft's Bing) is the shop window for GPT, it is likely to be those bespoke applications - designed for specific sectors and use cases, and powered by GPT or other AI models - which will have the most transformative effect on our daily work and lives.
The NHS has embraced technology and created its own mobile application (app) which allows users to order repeat prescriptions, book or manage appointments, view their GP health records and register their organ donation decision.
ChatGPT has suggested the following use cases which the healthcare sector might expect to see in the near future - there will be many others. However, each of these use cases carries with it risks and challenges, as well as opportunities, which healthcare providers will need to navigate.
As with all new developments, the rapid progress and uptake of AI powered tools presents exciting opportunities for healthcare providers, but it introduces profound challenges too.
As detailed above, GPT mimics human conversation and assimilates a huge quantity of training data into a brief summary which can be acted upon. However, the data on which GPT's output is based may be incorrect or only partially correct. Where a human practitioner can distinguish between accurate and misleading information, AI may struggle to do so. In the healthcare sector particularly, given the vast quantity of information people share online about health and lifestyle choices, there is huge potential for misinformation to be presented as truth - which could be particularly problematic if passed on to a vulnerable patient.
Along with issues surrounding accuracy, there will of course be legal and regulatory challenges, in an environment where the law and regulation may well struggle to keep up with the speed of development of these products. But more broadly, healthcare providers will need to think about how they want to engage with these tools.
On the one hand, there are potentially very real efficiency, cost and quality benefits in automating (or using AI to support) some roles and processes which could previously only be performed by individuals. On the other, in the healthcare sector there is an irreplaceable value in the 'human interface' which AI cannot offer. Regulation aside, how healthcare providers choose to strike that balance is likely to reflect the culture, communities and priorities of each organisation.
Where the decision is made to engage with these tools, organisations will need to think carefully about the terms on which they do so, ensuring that AI contracts provide appropriate levels of protection and accountability. Other challenges will be revealed as the use of these technologies broadens, but as well as those outlined above, they are certain to include some of the following:
AI technologies like GPT-4 have the potential to transform the healthcare landscape. However, the potential benefits come with a set of risks and challenges that healthcare providers and professionals must engage with to ensure a responsible and ethical adoption of AI. We will be developing the thoughts above (and other thoughts of our own) in future articles on some of the challenges and opportunities of AI to the healthcare sector.