
Fighting fire with fire: what should higher education institutions consider when using AI tools to spot whether students are using AI to write their dissertations?

Monday, 02 December 2024

In the 2023/24 academic year, UK universities saw a sharp rise in the number of cases of students illicitly using AI tools, such as ChatGPT and other large language models, to write their dissertations or otherwise help them pass their assessments.

As AI rapidly becomes embedded in more and more of our day-to-day lives, it seems inevitable that this trend will continue in future years.

This is prompting universities and other higher education institutions (HEIs) to fight fire with fire and harness the power of AI to detect where students are using AI to complete their work.

However, the implementation of AI detection tools such as GPTZero or Turnitin raises several legal, practical and ethical issues that HEIs' key stakeholders must consider carefully before such tools are adopted.

Ownership and rights of use - do your due diligence

  • Complexity creates issues: The technology underpinning AI solutions is typically more complex than that of traditional software solutions, so HEIs should look for specific additional reassurance on ownership of the solution.
  • Supplier rights: Significant problems can arise if the supplier does not own, or have rights to, what it thinks or claims it does. For example, if an HEI purchases an AI solution to detect student plagiarism, only to discover that the supplier does not itself have the proper rights to the core algorithms, or that the AI solution is trained on student work from other institutions to which the supplier has no rights, the HEI could face third party intellectual property infringement claims.
  • Reliance on third party datasets: Many commercially available AI solutions have been developed and trained on wide-ranging third party datasets, which may have been collated using the controversial method of web-scraping without appropriate rights being granted by website and database owners. There is still great uncertainty and ongoing contention about the lawfulness of this type of development and training, which in turn creates uncertainty for HEIs as to the lawfulness and ownership of any AI solutions they look to integrate within their institutions.
  • Risk management: HEIs should ensure that the supplier has a proper licence for the training data that feeds the AI tool, and that they understand how the AI tool works, in order to make a more informed decision as to the risks. Before using any AI tool, HEIs should also confirm that no third party intellectual property rights have been infringed, for example by carrying out thorough due diligence on the supplier and its AI tool and by including indemnities in the agreement between the supplier and the HEI.

Privacy and data security concerns

  • Student data: Where students' data is inputted into an AI tool, the HEI needs to understand how that tool will use its students' data. This is of particular importance where the dissertation or other documents being inputted contain personal data. Many AI tools will use input data for machine learning purposes, continually training themselves to improve the AI's outputs for other customers. There is a risk of HEIs losing control of their input data, which may have legal implications, particularly in relation to intellectual property infringement, data protection compliance and breach of confidentiality.
  • Compliance with laws: HEIs will need to ensure that they comply with all legal and regulatory frameworks in using these AI tools. Part of this will include having robust internal policies in place which set out how student personal data will be processed as part of any AI tool.
  • Ownership and use of outputs: Consider what outputs may be generated as part of any AI tool used for detecting plagiarism (eg reports and statistics on student misconduct) and what you as an HEI would want to do with those outputs. HEIs need to ensure these outputs are under their control and are legally considered their proprietary data, not just the AI provider's by default. For example, an HEI would not want an AI provider commenting publicly on the extent to which its students commit plagiarism.

Confidentiality and transparency

  • Supplier secrecy versus transparency: AI businesses are likely to want to keep certain aspects of their technology secret. At the same time, if an HEI intends to rely on an AI tool to determine whether students have committed plagiarism, students will require transparency about how those decisions are made. Suppliers should be prepared to provide sufficient access to, and explanations of, how their AI tool works, and should commit to doing so contractually and within reasonable timelines.

Finding the right approach

  • Zero-tolerance versus limited use: Given the speed with which students' use of AI has become a significant issue, HEIs have not yet been able to determine, at a holistic level, what approach they wish to take to the use of AI by their students. Some have adopted a zero-tolerance approach, whilst others have encouraged students to use AI in limited, responsible ways to enhance (rather than replace) their own independent learning.
  • AI policies: Some HEIs think it unrealistic to expect students to make no use of AI tools during their studies. This stems partly from the fact that it is already difficult to detect whether a student has used AI in their work, and detection will only become harder as AI grows more sophisticated. Instead, and in the absence of any national guidelines from central government on this issue, these HEIs are formalising AI policies that set out the parameters within which students may legitimately use AI in their studies. The hope is that teaching students about the responsible integration of AI into their academic work will help maintain student trust and educational standards.

Supplier failure, accuracy and reliability

  • Accuracy: AI detection tools in the HE sector are by no means foolproof. There is always a risk of 'false positives', where a student's legitimate work is flagged as AI-generated, and 'false negatives', where AI-generated content goes undetected. HEIs should consider the limitations of any AI tools they look to implement to detect plagiarism and ask suppliers what steps they take to minimise both risks (an illustrative calculation follows this list).
  • Reliability: As noted above, AI solutions are often developed and trained on third party datasets. The diversity, volume, quality and authenticity of those datasets have a direct impact on the reliability of the outputs the AI solution produces. HEIs should therefore ask suppliers what types of data each AI solution is trained on. For example, an HEI should query whether a tool trained wholly or partly on AI-generated content, rather than genuine student work, could be relied upon to produce accurate results.
  • Disaster planning: If an HEI intends to rely heavily on AI to assess all student submissions, it needs to consider how it would cope if the AI tool became defective or the supplier suffered an insolvency event and could no longer provide the AI solution.
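
To see why false positives matter in practice, the short Python sketch below works through the base-rate arithmetic using entirely hypothetical figures: the cohort size, rate of illicit AI use and detector error rates are assumptions for illustration, not statistics from any supplier.

    # Illustrative only: how a seemingly low false-positive rate can still
    # produce many wrongly flagged students. All figures are hypothetical.

    def flag_breakdown(cohort_size, ai_use_rate, sensitivity, false_positive_rate):
        """Return (true positives, false positives) for a cohort of submissions."""
        ai_written = cohort_size * ai_use_rate
        honest = cohort_size - ai_written
        true_positives = ai_written * sensitivity       # AI-written work correctly flagged
        false_positives = honest * false_positive_rate  # legitimate work wrongly flagged
        return true_positives, false_positives

    # Hypothetical cohort: 10,000 submissions, 5% involving illicit AI use,
    # a detector catching 90% of AI work but wrongly flagging 2% of honest work.
    tp, fp = flag_breakdown(10_000, 0.05, 0.90, 0.02)
    print(f"Correctly flagged: {tp:.0f}")                          # 450
    print(f"Wrongly flagged: {fp:.0f}")                            # 190
    print(f"Share of flags that are false: {fp / (tp + fp):.0%}")  # ~30%

On these assumed figures, almost a third of all flagged submissions would be legitimate work, which underlines why a flag should trigger human review rather than automatic sanctions.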

Ethics, bias and discrimination

  • Understand the risks associated with AI bias: An AI solution has been trained in a certain way, by certain people, using certain training data. Like humans, it comes with preconceived ideas and prejudices, and it may learn from specific experience or narrowly sourced datasets.
  • Bias: For example, in an HEI-plagiarism context, AI tools may be less effective at identifying content produced by non-native English speakers, or they might disproportionately flag work by students from certain demographics or cultural backgrounds. This could result in bias in the way different students are treated and assessed. HEIs should: (a) ask suppliers what testing has been carried out on their AI tools to ensure fairness; and (b) seek to ensure that an element of human oversight remains in the student evaluation process (a minimal illustrative check follows this list).
  • Discrimination as part of AI learning: Specific care should be taken if the AI tool is comparing a student's work to previous submissions by the same student in order to identify plagiarism. The AI tool will need to be sophisticated enough to appreciate that a student's style of writing may change over time or depending on the subject matter.
  • Risk of bias may change over time: AI solutions will learn and adapt over time, leading to changes in behaviour that are not seen in traditional software services. This can create "model drift" or challenges in maintaining consistent performance and ensuring ongoing compliance. For example, where an AI tool identifies a false positive or a false negative, is the supplier able to correct this to avoid further errors in the future and avoid a situation where the AI tool is training itself on inaccurate results?
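
As a concrete form of the human oversight recommended above, an HEI could periodically compare the rate at which a detection tool flags work across different student groups. The Python sketch below is a minimal, hypothetical version of such a check: the group labels and results are invented for illustration, and any real analysis of student data would itself need to comply with data protection law.

    # Minimal sketch of a fairness check: compare the rate at which an AI
    # detection tool flags work across student groups. All data is invented.
    from collections import defaultdict

    # Hypothetical records: (student group, was the submission flagged?)
    results = [
        ("native English speaker", False), ("native English speaker", True),
        ("non-native English speaker", True), ("non-native English speaker", True),
        ("native English speaker", False), ("non-native English speaker", False),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, flagged in results:
        counts[group][0] += int(flagged)
        counts[group][1] += 1

    for group, (flagged, total) in counts.items():
        print(f"{group}: {flagged}/{total} flagged ({flagged / total:.0%})")

    # A markedly higher flag rate for one group would warrant human review of
    # those cases and further questions to the supplier about bias testing.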

Contracts

  • Limitations on liability: Suppliers are likely to be cautious in their contracts, and HEIs can expect to see significant liability limits, exclusions and disclaimers. If the supplier is not taking responsibility, the HEI is.

Conclusion

The use of AI tools to detect AI-generated content in dissertations presents both opportunities and challenges for HEIs. While these tools can be an effective way to uphold academic integrity, HEIs must be mindful of the commercial and legal risks they pose, their limitations, their potential biases and the broader ethical implications. It is crucial that HEIs take a holistic approach that combines AI detection with education on responsible AI use, clear policies on privacy and on students' use of AI, and a commitment to fairness. By doing so, HEIs can navigate the complexities of AI in academia while fostering an environment of trust and innovation.


For more information, please contact James Barr on 07393 149 979 or Patrick McCallum on 07385 667 207 in our Commercial team.
