
Navigating Artificial Intelligence in Education: a brief guide for academies, schools and colleges
AI is rapidly transforming the educational landscape, offering both exciting opportunities and complex challenges, particularly in the context of equality and special educational needs and disabilities (SEND).
As generative AI tools become more accessible, settings must adopt a proactive, informed approach to ensure safe, equitable, and effective integration. This article summarises key guidance from the Department for Education (DfE) and the Equality and Human Rights Commission (EHRC) to support educational leaders in the state sector in navigating AI responsibly.
The DfE's new guidance, Generative artificial intelligence (AI) in education, published on 10 June, sets out clear principles for AI use, with education standards and child safety at the fore. It makes clear that AI should be used in ways that keep learning teacher-led, and that teachers must verify the accuracy of AI-generated content and protect personal data. The DfE's expectation is that "pupils across England will benefit from more face-to-face time with teachers as the Government forges ahead with plans to harness the power of AI to deliver educational excellence".
1. Understanding generative AI in education
Generative AI refers to technologies that can produce new content—text, images, audio, or code—based on patterns learned from large datasets. Tools like ChatGPT, Microsoft Copilot, and Google Gemini are examples of generative AI built on large language models (LLMs). These tools can assist with lesson planning, resource creation, feedback, and administrative tasks.
However, generative AI is not a substitute for professional judgement or deep subject knowledge. The DfE emphasises that teachers must critically assess AI-generated content for accuracy, appropriateness, and alignment with curriculum standards.
2. Product safety expectations
To ensure AI tools are safe for educational use, the DfE has published Generative AI: Product Safety Expectations, outlining seven key categories:
- Filtering: AI must reliably prevent access to harmful or inappropriate content, with moderation across languages and formats. Schools must refer to the DfE's filtering and monitoring standards to make sure they have the appropriate systems in place, including filtering and monitoring approaches that cover generative AI.
- Monitoring and Reporting: Systems should log activity, alert supervisors to safeguarding concerns, and provide understandable reports.
- Security: Products must resist unauthorised modifications and offer robust protection against misuse. This includes complying with age restrictions set by AI tools and open-access LLMs.
- Privacy and Data Protection: Compliance with the UK GDPR is essential, and the ICO's Age Appropriate Design Code may apply to the AI tool. Personal data must not be used for commercial purposes (eg training the AI model) without a lawful basis.
- Intellectual Property: AI tools must not store or use pupils’ or teachers’ original work for training without explicit consent.
- Design and Testing: Products should be thoroughly tested with diverse users before deployment.
- Governance: Clear risk assessments and complaints mechanisms must be in place.
In addition, the new 10 June guidance asks settings to:
- consider online safety, including AI, when creating and implementing their school or college approach to safeguarding and related policies and procedures
- consult Keeping children safe in education on what they need to do to protect pupils and students online and on their responsibilities for limiting children's exposure to risks from the school's or college's IT system, and follow the DfE's cyber security standards.
Meeting these expectations helps schools comply with their safeguarding duties under Keeping children safe in education, with data protection law, including the UK GDPR, and with the Online Safety Act 2023.
3. Equality and inclusion: meeting the Public Sector Equality Duty (PSED)
AI must be implemented in ways that uphold the Public Sector Equality Duty under the Equality Act 2010. Schools must consider how AI affects individuals with protected characteristics and take steps to eliminate discrimination, advance equality of opportunity and foster good relations.
The EHRC's guidance, Artificial intelligence: meeting the Public Sector Equality Duty (PSED), recommends conducting equality impact assessments when adopting AI technologies. This includes:
- Reviewing how AI may affect different groups.
- Seeking assurances from suppliers that training data is free from bias.
- Monitoring outcomes for unintended discrimination.
- Engaging with staff, pupils, and community groups to gather feedback.
Schools should also ensure that AI tools are accessible to pupils with SEND, and that digital divides do not exacerbate existing inequalities.
4. Practical applications and risks
Educators are already using AI to streamline lesson planning, generate resources, and personalise learning. However, risks include safeguarding concerns, over-reliance by pupils, misinformation, and biased content or content that does not sufficiently account for individual needs.
To mitigate these risks in line with the AI Opportunities Action Plan, we recommend that settings:
- Develop clear policies on AI use, including acceptable use by pupils and staff.
- Provide staff with training on AI tools and ethical considerations.
- Carefully consider the EHRC guidance, Assessing the equality impact of AI-based technology: six discussion points, prior to purchasing or using AI products.
- Use AI tools with built-in safety features.
- Supervise pupil use closely, especially for under-18s and pupils with SEND.
- Engage parents and carers in discussions about AI’s role in learning.
- Review homework and assessment policies to account for AI availability.
- Encourage critical thinking and digital literacy among students.
Conclusion
AI offers transformative potential for education, but its adoption must be guided by safety, equity, and integrity. By aligning with government frameworks and ethical standards, settings can harness AI to enhance teaching and learning while safeguarding pupils and staff.
No learner should be left behind on this journey: children and young adults with SEND must be adequately safeguarded, and 'one size fits all' approaches to AI in education avoided. AI tools should champion learners with SEND, enabling greater participation and equity in education, in line with UNESCO's human-centred approach to AI.
VWV can help with policy formulation, equality impact assessments and the Public Sector Equality Duty, advice on safeguarding and AI, staff matters and AI, data protection, and legal considerations around SEND and AI.