
AI in Higher Education: Are we prepared for chatbot companions?

14 Jan 2026

AI’s role in education is well documented, particularly the challenges of teaching and assessing students who have ready access to Large Language Models (LLMs) capable of producing human-like text. While those academic issues are familiar, this article raises a different concern: the rise of chatbot companions.

Concerns about AI chatbots have so far focussed on children, but behavioural patterns formed in early years can shape how young adults interact as they transition into higher education. We consider how this might affect higher education (HE) students against the backdrop of increasing reports of social isolation on campus since the COVID-19 pandemic.

Chatbots in the classroom

An AI chatbot is an application, typically built on an LLM, that engages in unstructured conversation with a user. Although chatbots may hold promise as learning tools, their educational benefits remain uncertain and contested.

As noted in Ofcom's November 2025 Discussion Paper, ‘The Era of Answer Engines’, chatbots can make search more inclusive for people who struggle with the conventions of traditional search engines, generate ‘Easy to Read’ content for individuals with cognitive impairments, and assist young children, non-native speakers and students with simplified phrasing.

Although chatbots have the potential to help people locate and assess information online, Ofcom also cites research from the University of Cambridge showing that: "children are particularly susceptible to treating AI chatbots as lifelike, quasi-human confidantes. They may mistakenly believe AI chatbots understand emotional nuance and intentions. Such misplaced trust can foster unhealthy attachments and increase vulnerability to harmful information, while displacing more reliable sources of support: for example, if a child searching for advice relies on chatbot guidance instead of turning to trusted adults".

A BBC report in November 2025 found that two-thirds of children aged 9–17 had used AI chatbots. Popular platforms include ChatGPT, Google’s Gemini and Snapchat’s ‘My AI’. Other popular LLMs enable conversations with chatbots modelled on fictional characters or real individuals. As well as being used to obtain information or for entertainment, many of these systems can be, and are, used in a pastoral role.

Safeguarding and regulation

The regulatory landscape is evolving rapidly. In 2020, the Information Commissioner's Office published the Children's Code, a code of practice requiring online services used by children to be designed in the best interests of their health, safety and privacy. The Children's Code preceded the Online Safety Act 2023, which aims to further protect children from harmful online content. Following several high-profile incidents involving chatbots, Ofcom confirmed in November 2025 that the Act would extend to platforms that allow users to create and share chatbots (including those based on real or fictional characters).

One such incident occurred in 2024, when a 14-year-old boy took his own life after a chatbot role-playing a Game of Thrones character allegedly encouraged him to do so. In response, the platform in question announced new safeguards to restrict its services for children. In 2025, a Californian couple filed a wrongful death lawsuit against OpenAI after their 16-year-old son took his own life. They claim that he started using ChatGPT for help with his schoolwork but that it became his "closest confidant" and validated his "most harmful and self-destructive thoughts", including his plan to end his life.

Encouraging vulnerable people to interact with, and share sensitive information with, systems that cannot provide appropriate care is plainly dangerous. Recent cases and regulatory responses suggest that broader restrictions on children’s chatbot use may be imminent. Yet once individuals reach adulthood, such protections fall away, leaving young adults free to engage with chatbots without regulatory constraint.

Implications for universities

For HE providers, the implications are significant. Even if limitations are applied to children's access to chatbots, universities may become the first environment in which young adults experiment with chatbot companionship, right at the time they need to build new (offline) social networks.

The Department for Education's (DfE) National Review of Higher Education Student Suicide Deaths reviewed over 160 cases of suspected suicide or self-harm in 2023/24. Undergraduates accounted for 73% of cases, with first-year students and international students particularly vulnerable (27% and 24% of cases respectively). The elevated risk among international students may reflect the added pressures of cultural adjustment alongside challenges such as isolation, academic stress, financial strain and living away from home.

For undergraduates, HE is as much a time for social and personal development, critical engagement and building resilience as it is for formal study. We know that students often report social isolation, whether because of remote teaching, commuting or other factors. In this context, turning to a chatbot may seem preferable to approaching someone in the library, asking for advice in the gym or seeking a recipe suggestion in halls. Even if a student knows that the connection offered by a chatbot is an illusion, it is the always-available option that offers instant, short-term gratification. The long-term picture, of course, is different.

The social trade-off

Ofcom's intervention in response to the deaths of children is needed, but it does not address the broader risks that chatbots pose to mental health. There is much debate about the wide-scale impact of LLMs; for many, they herald a new age of dynamism and growth. That may prove true, but recent events have demonstrated that, in their current form, chatbots can pose acute risks to vulnerable users when they generate harmful or misleading advice. Yet even (or especially) benign interactions pose a risk to socialisation.

Research from Sweden published in July 2025 observed that many users perceive chatbots as "an always available friend who responds without judgment and gives the impression of understanding". In an HE context, the worry is that positive exchanges with a chatbot may be perceived as preferable to stepping outside one's comfort zone and initiating genuine human connections.

Practical next steps

Students are not naïve; they recognise that AI cannot replace friendships. The challenge lies not in convincing students of this fact but in facilitating human connection. HE providers should therefore embed socialising in the architecture of university life, both physically through communal spaces and structurally through group assignments, buddy schemes and extended orientation programmes.

Alongside these initiatives, educators should audit existing and planned uses of chatbots and establish clear boundaries for AI in pastoral care. Where HE providers teach students to critically assess AI-generated content, they could extend these discussions to cover the use of LLMs as companions. The goal is not prohibition, but equipping students to choose authentic human relationships over artificial substitutes.


For more information, please contact Liz Smillie or Kris Robbetts in our Higher Education team. 

 
