DEEP Research · March 2026 · 15 min read

What 33 International Frameworks Say About AI in Schools



And what most schools are missing

• • •

When a school leader asks what good AI policy looks like, the honest answer is: it depends which framework you ask. UNESCO has one view. The OECD has another. The UK Department for Education has published its own guidance. The EU AI Act introduces legally binding obligations. The Council of Europe has added a binding international convention. And there are frameworks from IB, COBIS, professional teaching bodies, and regional education authorities besides.

In total, the AI Literacy Audit Tool cross-references 33 such frameworks. That figure is not arbitrary. It reflects the genuine complexity of the international standards landscape that schools — particularly international schools — are expected to navigate.

Most schools are aware of one or two of these frameworks. Very few have systematically reviewed their practice against all of them. This article explains what the most important frameworks say, where they agree, where they diverge, and the specific gaps that appear most commonly when schools are audited against the full set.

Every international framework reviewed for this article identifies the same root problem: schools are adopting AI tools faster than they are developing the structures to use them safely and well.

• • •

The international frameworks landscape

The frameworks can be divided into four broad categories. The tables below summarise all 33 frameworks referenced in the AI Literacy Audit, organised by category.

Global Ethics and Principles

| Framework | Publisher | Primary Focus for Schools |
| --- | --- | --- |
| UNESCO Recommendation on the Ethics of AI | UNESCO | Human rights, transparency, accountability, and non-discrimination in AI systems |
| OECD AI Principles | OECD | Trustworthy AI: inclusive growth, human-centred values, transparency, robustness |
| European Ethical Guidelines for Trustworthy AI | European Commission | Seven principles for trustworthy AI: robustness, fairness, accountability, and others |
| BECTA Technology Principles (Legacy) | BECTA | Foundational technology governance principles still cited in UK inspection criteria |

Competency and Curriculum

| Framework | Publisher | Primary Focus for Schools |
| --- | --- | --- |
| UNESCO AI Competency Framework (Students) | UNESCO | Student AI literacy: understanding, creation, ethics, and critical evaluation |
| UNESCO AI Competency Framework (Teachers) | UNESCO | Teacher readiness: AI pedagogy, curriculum integration, and professional development |
| UNESCO ICT Competency Framework for Teachers | UNESCO | Foundational digital and AI competency for teachers in all school contexts |
| OECD AI in Education Framework | OECD | AI literacy, workforce readiness, and evidence-based adoption in education systems |
| Singapore AI for Students Framework | Singapore MOE | Progressive AI literacy from primary through post-secondary education |
| Australian Digital Technologies Curriculum | ACARA | Data, algorithms, and computational thinking as foundational digital literacy |
| ISTE Standards for Students | ISTE | Computational thinker, innovative designer, and digital citizen standards |
| ISTE Standards for Educators | ISTE | Designer, collaborator, and data-driven decision-making capabilities for teachers |
| European Schoolnet AI Framework | European Schoolnet | AI competency for European teachers: pedagogy, ethics, and implementation |
| Commonwealth Digital Education Leadership Framework | Commonwealth | Strategic AI leadership, governance, and infrastructure for Commonwealth schools |
| World Economic Forum Future of Jobs (Education) | WEF | AI literacy and human-machine collaboration as critical future skills |

Regulatory and Legal

| Framework | Publisher | Primary Focus for Schools |
| --- | --- | --- |
| EU AI Act (2024) | European Commission | Legal classification of AI systems; high-risk categories; transparency obligations |
| Council of Europe AI Convention | Council of Europe | Binding treaty on human rights, democracy, and the rule of law as applied to AI |
| GDPR and UK GDPR (Education Applications) | ICO / EU | Lawful basis for processing student data by AI systems; consent and transparency |
| UK DfE Generative AI in Education Guidance | DfE / UK Gov | Safe and responsible use of generative AI tools by staff and students |
| UK DfE AI in Education: Responsible Use | DfE / UK Gov | Procurement, data governance, and pedagogical considerations for AI tools |
| UK Government AI Safety Framework | DSIT / UK Gov | National AI safety standards and evaluation principles |
| EU Digital Education Action Plan | European Commission | Digital competency, infrastructure, and AI integration across EU education systems |
| Ofsted Research and Evaluation Framework | Ofsted | Evidence standards, curriculum coherence, and responsible technology use |
| NCSC Cyber Security in Schools Guidance | NCSC / UK Gov | Data protection, network security, and AI tool risk assessment |

Sector-Specific Guidance

| Framework | Publisher | Primary Focus for Schools |
| --- | --- | --- |
| IB AI in Education Guidance | IB Organisation | Academic integrity, AI-assisted work, and assessment design for IB programmes |
| COBIS AI Guidance for British International Schools | COBIS | Practical governance, policy, and implementation guidance for British schools abroad |
| New Zealand AI in Education Guidance | Te Kura / NZ Gov | Equitable and ethical AI use with a focus on Māori and Pasifika learners |
| Council of Europe Digital Citizenship Education | Council of Europe | Digital rights, responsibilities, and literacy including AI awareness |
| Microsoft AI for Good in Education | Microsoft | Responsible AI principles applied to educational tool deployment |
| Google for Education AI Principles | Google | Transparency, privacy, and fairness in AI tools designed for schools |
| JISC AI in Further and Higher Education | JISC | Academic integrity, staff readiness, and institutional AI strategy |
| EdTech Evidence Group AI Framework (UK) | EEG | Evidence standards for AI tool evaluation and procurement in UK schools |
| NAACE AI and Education Framework | NAACE | Computing and AI curriculum guidance for UK schools |

Reading across these four categories, two things become apparent. First, no single body owns the standards landscape. Schools must draw from intergovernmental organisations, national governments, professional associations, and sector bodies simultaneously. Second, the frameworks serve very different purposes: some are aspirational and ethical, some are practical and operational, and some are legally binding.

That last distinction matters more than is generally appreciated.

• • •

Where the frameworks agree

Despite the variety of publishers, regions, and purposes, every framework reviewed for this article converges on five core positions. Schools that have addressed all five can be confident they are building on solid ground. Schools that have addressed only some are more exposed than they realise.

1. AI literacy is a fundamental skill, not an optional extra

UNESCO, OECD, ISTE, the Singapore MOE, and the Australian curriculum all treat AI literacy as a fundamental competency for young people, equivalent in importance to reading or numeracy. This is not framed as preparation for a technology career. It is framed as preparation for citizenship, critical thinking, and informed participation in a society shaped by AI systems.

The implication for schools is significant: AI literacy cannot be delegated to computer science departments. It belongs across the curriculum, embedded in the way students read, write, research, and evaluate information in every subject.

2. Teacher competency must precede student competency

The two UNESCO competency frameworks, the European Schoolnet guidance, ISTE's educator standards, and the DfE guidance all make the same sequencing argument: you cannot develop student AI literacy without first developing teacher AI competency. A school that invests in student AI tools without investing equivalently in teacher development is building on unstable ground.

The frameworks are specific about what teacher competency means. It is not familiarity with AI tools. It is the pedagogical capacity to design learning experiences that develop students' critical relationship with AI, and the professional confidence to model that relationship in the classroom.

3. Policy must be specific, current, and operationally coherent

The DfE guidance, COBIS, IB, JISC, and the EU Digital Education Action Plan all stress that a generic AI policy is insufficient. A policy must name the tools in use, specify what is and is not permitted in different contexts, address academic integrity, cover data handling, and be reviewed frequently enough to remain relevant. The IB has gone furthest on this, requiring schools to update assessment guidance annually to reflect the evolving capability of AI tools.

4. Student data in AI systems requires explicit governance

Every framework that addresses data — from GDPR and UK GDPR through to the EU AI Act and the NCSC guidance — treats student data as a category requiring heightened protection. The specific requirements vary, but the common position is clear: schools need to know what data AI tools are collecting, how it is used, and whether that use has a lawful basis. The casual adoption of free AI tools that process student inputs without explicit data processing agreements is a compliance risk that most schools have not yet addressed.

5. Human oversight is non-negotiable

The OECD AI Principles, the European ethical guidelines, UNESCO's ethics recommendation, and the Council of Europe convention all place human oversight at the centre of responsible AI use. In an educational context, this means AI-generated scores, recommendations, and assessments should always be subject to professional review. It also means that decisions about students should never be fully delegated to AI systems, regardless of the claimed accuracy of those systems.

Five principles that all 33 frameworks share: AI literacy as a fundamental skill, teacher competency first, specific and current policy, explicit data governance, and meaningful human oversight of AI decisions.

• • •

Where the frameworks diverge

Consensus on principles does not mean consensus on implementation. The frameworks diverge in ways that create genuine difficulty for schools trying to be compliant across multiple contexts.

On the purpose of AI education

UNESCO frames AI literacy primarily through an ethical and rights-based lens: students should understand AI in order to defend their rights and participate critically in democratic life. The OECD frames it more economically: AI literacy is a workforce competency that supports productive participation in an AI-driven economy. The DfE takes an operational middle position, focusing on safe and effective use of specific tools.

These are not contradictory, but they lead to different emphases in curriculum design. A school trying to satisfy all three simultaneously will need a broader and more nuanced AI literacy curriculum than one aligned to a single framework.

On legal obligations versus aspirational guidance

The EU AI Act is legally binding. GDPR is legally binding. The Council of Europe AI Convention is a binding treaty. Everything else in the table above is guidance, recommendation, or voluntary framework.

This matters because many school leaders treat all frameworks as equivalent in weight. They are not. A school that implements UNESCO's ethics recommendations and ignores EU AI Act obligations because it does not view itself as a primarily European institution may be legally exposed — particularly if it recruits students from EU member states, uses AI tools provided by EU-registered companies, or has staff who are EU citizens.

On assessment integrity

The IB has produced the most developed and specific guidance on AI and assessment integrity of any framework reviewed here, including requirements around disclosure, tool-specific guidance, and task design principles. The DfE guidance is practical but less prescriptive. Most other frameworks treat assessment integrity as a subset of academic honesty without providing the operational detail that schools actually need.

The result is that most schools are writing assessment integrity policies without a clear external standard to benchmark against, and are therefore either under-specifying (leaving staff without sufficient guidance) or over-specifying in ways that are unenforceable.

On safeguarding and AI-specific risk

Safeguarding receives the most uneven treatment across the frameworks. The NCSC guidance, UK GDPR, and the DfE responsible use framework all address AI-specific safeguarding risks. Many of the international frameworks do not, either because they predate the emergence of generative AI tools capable of producing harmful synthetic media, or because safeguarding is treated as a national matter outside their remit.

This creates a gap in international schools in particular: they may be drawing on globally-oriented frameworks that simply do not address deepfakes, AI-facilitated grooming, or synthetic media involving students. Those schools need to supplement international frameworks with jurisdiction-specific safeguarding guidance.

The EU AI Act gap most schools have not closed: The EU AI Act classifies certain AI systems used in education as "high risk" — specifically those used for evaluating students, determining educational access, and assessing learning outcomes. High-risk AI systems are subject to mandatory transparency requirements, including disclosure to students and parents that an AI system is being used, and documentation of how that system has been validated. Most schools using AI tools for marking, feedback, or assessment recommendation are not yet compliant with these requirements. These obligations can apply even to schools based outside the EU if the tool is provided by an EU-registered company or processes data on EU citizens.

• • •

What most schools are missing

When the AI Literacy Audit Tool cross-references a school's documents against this full framework set, the gaps that appear most consistently are not the obvious ones. Most schools have something in place on academic integrity and basic data protection. The gaps tend to appear in five less-visible areas.

Cross-framework coherence

Schools typically develop their AI policy by consulting one or two frameworks, usually the DfE guidance and either UNESCO or ISTE. The resulting policy is coherent within that narrow reference set but may be silent on obligations introduced by other frameworks. The EU AI Act transparency requirements are the most commonly absent. The Council of Europe convention's human rights framing is rarely reflected in school-level documentation.

The policy-to-practice gap

Multiple frameworks note that policy coherence is not the same as policy implementation. The DfE guidance explicitly warns against AI policies that exist on paper but have not been operationalised in classroom practice. The audit consistently finds schools whose AI acceptable use policy states one thing and whose Schemes of Work imply another — not through deliberate inconsistency, but because the two documents were written by different people at different times without cross-referencing.

Structured professional development

UNESCO and ISTE both distinguish between ad hoc staff awareness and structured professional development pathways. The former is a starting point; the latter is what the frameworks require. The most common finding in audits is that a school has done one or more AI INSET sessions but has no documented progression pathway, no mechanism for tracking staff competency development, and no differentiation between what class teachers, heads of department, and senior leaders need to know.

AI-specific safeguarding provisions

Safeguarding policies that were written before 2023 almost certainly do not address synthetic media, AI-facilitated contact with students, or the use of AI tools that process biometric or behavioural data. Even safeguarding policies written in 2023 may not reflect the rapid expansion of accessible generative AI tools that has occurred since. The frameworks that do address this — principally the NCSC guidance and UK GDPR — require schools to have explicitly assessed AI-specific safeguarding risks. Most have not done so in writing.

A documented tool evaluation process

The EdTech Evidence Group framework, the OECD AI in Education guidance, and the EU AI Act all require or recommend a documented process for evaluating AI tools before adoption, covering evidence of effectiveness, data handling, transparency about training data, and assessment of potential harms. Most schools adopt AI tools on the basis of teacher recommendation or free availability, without a formal evaluation process. This is both a governance gap and a safeguarding gap.

• • •

What to do with this information

The frameworks are not designed to overwhelm schools. They exist because international bodies, governments, and professional associations have thought carefully about what responsible AI in education looks like — and the fact that 33 of them have converged on similar core positions is actually reassuring: there is broad consensus on what good looks like.

The challenge is that reading 33 frameworks, extracting the relevant requirements, and mapping your school's current documents against all of them is not a realistic task for any senior leadership team. It would take weeks of careful reading, and the landscape continues to shift as new frameworks are published and existing ones are updated.

This is the problem the AI Literacy Audit Tool is designed to solve. Upload your school's existing documents and the system cross-references them against all 33 frameworks simultaneously, identifying the specific gaps in your current provision, generating a scored report across 9 dimensions, and producing a board-ready summary you can present to governors.

You do not need to read every framework. You need to know what they say about your school.

• • •

See how your school measures up against all 33 frameworks

The AI Literacy Audit Tool cross-references your school's existing documents against every framework in this article simultaneously. Upload your policy, your Schemes of Work, and your staff handbook and receive a scored, evidenced report across 9 dimensions in under 10 minutes.

Run your free AI Literacy Audit at audit.deepeducationnetwork.com

• • •

About DEEP Education Network: DEEP Education Network is a professional development platform supporting over 1,000 educators across 50+ countries. We specialise in helping schools and school leaders navigate AI integration through courses, training, and practical frameworks grounded in education research. The AI Literacy Audit Tool was built from our analysis of 33 international AI frameworks and is designed to give school leaders a rigorous, evidenced picture of their AI readiness in minutes, not months. audit.deepeducationnetwork.com

If this was useful, share it with a colleague. The conversation about AI in schools is too important to leave to chance.


Alex Gray

Head of Sixth Form & BSME Network Lead for AI in Education. Alex explores how artificial intelligence is reshaping teaching, learning, and the future of work — with honesty, clarity, and a focus on what matters most for educators and students.

