OpenAI - the organisation behind ChatGPT - launched its product in November 2022. By March 2023, it had been upgraded to use the much more powerful GPT-4 engine, and other entrants (not least Google) had launched similar tools of their own. With an array of third-party products powered by GPT entering the market, and with these AI models appearing to be on a very steep development curve, AI looks set to rapidly transform the way many of us live, work and learn.
The capabilities of GPT and similar products - both current capabilities, and those which are projected to be with us soon - present tremendous opportunities, but they come with risks and challenges. VWV's education and technology sector specialists have been considering some of the potential impacts of AI on the education sector. This blog touches on some of the areas where educators might need to respond to new legal, regulatory and contractual challenges. Each of those areas will be explored in more depth in future blogs.
The AI which has generated such interest in the last few months has come in the form of OpenAI's ChatGPT (other organisations have developed their own models, but we refer in this article to GPT as a shorthand for all of them). Users are able to engage in conversational exchanges with ChatGPT, prompting it to answer even the most complex of questions. ChatGPT and GPT-4 can be prompted to generate reports, advice, lesson plans and - as educators are increasingly aware - essays.
GPT combines a vast set of data (both from the internet and elsewhere) with a tool which is able to mimic human conversation. The tool is a statistical model trained to predict what a reasonable human might write, based on the data to which GPT has access. The output from this tool can be remarkable, producing in seconds what it might take a human many hours to generate. The output is often highly accurate and appears to contain reasoned argument. But it can also be fallible, confidently stating falsehoods as fact.
GPT has its limitations in terms of the accuracy and usefulness of the output that it produces. It is very important that it is 'handled with care', and that those limitations are recognised when using it. But the potential of this tool seems huge: its accuracy is increasing, further, more powerful iterations of GPT seem inevitable, and its limitations seem likely to be eroded with each of those iterations.
ChatGPT is the interface with the GPT model which has caught the attention of the general public. But the engine behind it - GPT-4 - has been made available to third-party product developers for use in their own applications. Whilst ChatGPT (and Microsoft's Bing) provides the shop window for GPT, it is likely to be those bespoke applications - designed for specific sectors and use cases, and powered by GPT or other AI models - which will have the most transformative effect on our daily work and lives.
Some of the use cases which the education sector might expect to see are outlined below - there will be many others. However, each of these use cases carries with it risks and challenges, as well as opportunities, which educators will need to navigate.
As with all new developments, the rapid progress and uptake of AI powered tools presents exciting opportunities for educators, but it introduces profound challenges too.
There has been a good deal of recent focus on the risk to academic integrity, with the potential for students to use these tools improperly in the completion of coursework and in assessments. How educators respond to that challenge - allowing students to embrace these new technologies whilst maintaining the integrity of assessments and the quality of learning - is likely to remain a hot topic in the sector for the foreseeable future.
There will of course be legal and regulatory challenges, in an environment where the law and regulation may well struggle to keep up with the speed of development of these products. But more broadly, educators will need to think about how they want to engage with these tools.
On the one hand, there are potentially very real efficiency, cost and quality benefits in automating (or using AI to support) some roles and processes which could previously only be done by individuals. However, in the education and development of young people, there is also an irreplaceable value in the 'human interface' which AI cannot offer. Regulation aside, the decision as to how schools and universities choose to strike that balance is likely to reflect the culture, communities and priorities of each organisation.
Where the decision is made to engage with these tools, organisations will need to think carefully about the terms on which they do so, ensuring that AI contracts provide appropriate levels of protection and accountability. Other challenges will be revealed as the use of these technologies broadens, but as well as those outlined above, they are likely to include the following:
We resisted the temptation to invite ChatGPT to produce this blog post for us in full! But we were curious to see what it might identify as some of the key implications for educators of the use of AI. ChatGPT recommended to us that educators consider the following steps in addressing some of the risks associated with AI. We can only endorse its recommendations:
AI technologies like GPT-4 have the potential to transform the education landscape. However, the potential benefits come with a set of risks and challenges that schools and universities must engage with to ensure a responsible and ethical adoption of AI. We will be developing the thoughts above (and other thoughts of our own) in future blogs on some of the challenges and opportunities of AI to the education sector.