
AI and the Education Sector - Changing Times

Wednesday, 17 May 2023.

Until recently, to most of us in the education sector, artificial intelligence (AI) was a somewhat abstract concept. But that abstract concept has become dramatically real with the launch and widespread use of ChatGPT and similar products.

OpenAI - the organisation behind ChatGPT - launched its product in November 2022. By March 2023, it had been upgraded to use the much more powerful GPT-4 engine, and other entrants (not least Google) had launched similar tools of their own. With an array of third-party products powered by GPT entering the market, and with these AI models appearing to be on a very steep development curve, AI looks set to rapidly transform the way many of us live, work and learn.

The capabilities of GPT and similar products - both current capabilities, and those which are projected to be with us soon - present tremendous opportunities, but they come with risks and challenges. VWV's education and technology sector specialists have been considering some of the potential impacts of AI on the education sector. This blog touches on some of the areas where educators might need to respond to new legal, regulatory and contractual challenges. Each of those areas will be explored in more depth in future blogs.

GPT (and Similar AI) for the Uninitiated

The AI which has generated such interest in the last few months has come in the form of OpenAI's ChatGPT (other models have been developed by other organisations, but we refer in this article to GPT as shorthand for all of them). Users are able to engage in conversational exchanges with ChatGPT, prompting it to answer even the most complex of questions. ChatGPT and GPT-4 can be prompted to generate reports, advice, lesson plans and - as educators are increasingly aware - essays.

GPT combines a vast set of data (both from the internet and elsewhere) with a tool which is able to mimic human conversation. The tool works as a statistical model which is trained to predict what a reasonable human might write, based on the data to which GPT has access. The output from this tool can be remarkable, producing in seconds what might take a human many hours to generate. It is often highly accurate, and often appears to contain reasoned argument. But it is also fallible, sometimes confidently stating falsehoods as fact.
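The 'predict what comes next' mechanism can be loosely illustrated with a toy model. The sketch below is a drastic simplification (a simple word-pair frequency table, not a neural network), and the training text is invented purely for illustration:

```python
from collections import Counter, defaultdict

# A toy corpus, invented for illustration only.
corpus = "the model predicts the next word the model learns from text".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" - the most common word after "the"
```

GPT works on the same statistical principle, but at a vastly larger scale: rather than counting word pairs, it learns patterns across billions of documents, which is what allows its output to read like fluent, reasoned prose.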

GPT has its limitations in terms of the accuracy and usefulness of the output it produces. It is very important that it is 'handled with care', and that those limitations are recognised when using it. But the potential of this tool seems huge. Its accuracy is increasing, further, more powerful iterations of GPT seem inevitable, and those iterations seem likely to erode its limitations.

Use Cases for Educators

ChatGPT is the interface with the GPT model which has caught the attention of the general public. But the engine behind it - GPT-4 - has been made available to third-party product developers for use in their own applications. Whilst ChatGPT (and Microsoft's Bing) are the shop windows for GPT, it is likely to be those bespoke applications - designed for specific sectors and use cases, and powered by GPT or other AI models - which will have the most transformative effect on our daily work and lives.

Some of the use cases which the education sector might expect to see are outlined below - there will be many others. However, each of these use cases carries with it risks and challenges, as well as opportunities, which educators will need to navigate.

  • Personalised Learning: For some time now, AI-powered tools have been helping to create customised learning experiences tailored to individual students' needs, abilities, and learning styles. Those tools only seem destined to become more powerful and useful.
  • Teaching Aids: AI-driven tools like ChatGPT can provide educators with support, offering real-time feedback on student performance, facilitating lesson planning, and generating teaching materials. Even just using ChatGPT (rather than a bespoke application) it is easy to produce and refine lesson plans and handouts using simple 'plain English' prompts.
  • Administrative Tasks: AI is already used to streamline administrative processes across a range of businesses. For educators, areas such as admissions, enrolment, and timetabling are likely to increasingly be supported by AI, saving time and resources for both staff and students.
  • Inclusive Education: AI has the potential to help bridge the gap for students with disabilities or special needs by providing tailored support and accessible learning materials, ensuring equal opportunities for all students.
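As an example of the 'plain English' prompting mentioned above, a lesson-plan request might be assembled programmatically before being sent to a chat model. The function and parameter names below are our own illustration, not part of any particular product's API:

```python
def lesson_plan_prompt(subject, year_group, duration_minutes, objectives):
    """Compose a plain-English prompt asking a chat model for a lesson plan.

    All names here are illustrative; the resulting string would simply be
    typed or sent to a tool such as ChatGPT.
    """
    objective_lines = "\n".join(f"- {o}" for o in objectives)
    return (
        f"Please write a {duration_minutes}-minute lesson plan on "
        f"{subject} for a {year_group} class.\n"
        f"The plan should cover the following learning objectives:\n"
        f"{objective_lines}\n"
        "Include a starter activity, a main task and a plenary."
    )

prompt = lesson_plan_prompt(
    subject="photosynthesis",
    year_group="Year 8",
    duration_minutes=50,
    objectives=[
        "Explain the inputs and outputs of photosynthesis",
        "Describe the role of chlorophyll",
    ],
)
print(prompt)
```

The point is not the code itself, but that the request is ordinary prose: the same prompt could equally be typed directly into ChatGPT and then refined conversationally.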


New (and Not So New) Risks and Challenges

As with all new developments, the rapid progress and uptake of AI-powered tools presents exciting opportunities for educators, but it introduces profound challenges too.

There has been a good deal of recent focus on the risk to academic integrity, with the potential for students to use these tools improperly for the completion of coursework and in assessments. How educators respond to that challenge - allowing students to embrace these new technologies whilst maintaining the integrity of assessments and the quality of learning - is likely to remain a hot topic in the sector for the foreseeable future.

There will of course be legal and regulatory challenges, in an environment where the law and regulation may well struggle to keep up with the speed of development of these products. But more broadly, educators will need to think about how they want to engage with these tools.

On the one hand, there are potentially very real efficiency, cost and quality benefits in automating (or using AI to support) some roles and processes which could previously only be done by individuals. However, in the education and development of young people, there is also an irreplaceable value in the 'human interface' which AI cannot offer. Regulation aside, the decision as to how schools and universities choose to strike that balance is likely to reflect the culture, communities and priorities of each organisation.

Where the decision is made to engage with these tools, organisations will need to think carefully about the terms on which they do so, ensuring that AI contracts provide appropriate levels of protection and accountability. Other challenges will be revealed as the use of these technologies broadens but, as well as those outlined above, they are certain to include the following:

  • Data Privacy and Security: The use of AI tools by educators - for example in connection with assessments, recruitment, admissions, etc - will often require the sharing and processing of personal data. We have advised cutting-edge AI businesses on data privacy and security issues, and anticipate that we will increasingly be asked to support educators in managing data privacy and compliance risks associated with their use of AI.
  • Digital Divide: The launch of ChatGPT and similar products, freely available to the public, will go some way to democratising access to AI. However, beyond those publicly available tools, it seems likely that unequal access to AI technologies will exacerbate existing inequalities in education. There seems to us to be a risk of this disadvantaging students from lower socio-economic backgrounds.
  • Ethical Considerations: The adoption of AI is likely to raise ethical questions around transparency, fairness, and accountability. We anticipate that institutions will look to establish robust governance frameworks to ensure responsible AI use.

ChatGPT's Recommendations

We resisted the temptation to invite ChatGPT to produce this blog post for us in full! But we were curious to see what it might identify as some of the key implications for educators of the use of AI. ChatGPT recommended to us that educators consider the following steps in addressing some of the risks associated with AI. We can only endorse its recommendations:

  • Develop AI Governance Frameworks: Establish policies, guidelines, and processes to ensure responsible, ethical, and transparent use of AI, addressing data privacy, security, intellectual property, accessibility, and ethical considerations.
  • Update Policies and Procedures: Review and revise existing policies, such as safeguarding, e-safety, SEND, curriculum, data protection, assessment, and staff development policies, to accommodate the integration of AI technologies.
  • Invest in Staff Training: Provide ongoing professional development opportunities for staff on AI tools, their potential applications, and the associated challenges, ensuring effective integration into the classroom.
  • Foster Digital Literacy and AI Ethics: Incorporate digital literacy and AI ethics into the curriculum, equipping students with the skills to navigate the evolving digital landscape responsibly.
  • Engage in Cross-sector Collaboration: Collaborate with AI developers, researchers, policymakers, and other education institutions to share best practices, insights, and experiences in implementing AI technologies.

AI technologies like GPT-4 have the potential to transform the education landscape. However, the potential benefits come with a set of risks and challenges that schools and universities must engage with to ensure a responsible and ethical adoption of AI. We will be developing the thoughts above (and other thoughts of our own) in future blogs on some of the challenges and opportunities of AI to the education sector.


Please do contact Edward Rimmell in our Contracts team on 0117 314 5232 should you have any queries or wish to discuss this further.