§1 Purpose & Scope
This policy establishes the governance framework for the safe, ethical, and educationally sound use of artificial intelligence (AI) tools across [SCHOOL NAME]. It applies to all staff, students, administrators, and any third party operating on behalf of the school.
This is a living document. It is versioned, school-owned, and subject to mandatory review every six months or upon any significant change in law, technology, or incident.
Scope: This policy covers all AI-enabled tools used in instruction, administration, assessment, communication, and student support — including AI features embedded in platforms the school already operates (Google Workspace, Microsoft 365, LMS platforms).
Alignment with Digital Citizenship Policy: This AI Policy is a companion document to the school's existing Digital Citizenship Policy. Where the two documents address the same conduct, the more specific or more restrictive provision applies. The Digital Citizenship Policy's expectations around responsible use, online safety, and academic integrity extend fully to AI tools. (Digital Citizenship Policy reference will be linked here upon receipt.)
§2 Definitions
- Artificial Intelligence (AI): Software systems that generate outputs (text, images, decisions, recommendations) based on training data, including but not limited to generative AI, machine learning classifiers, and automated decision tools.
- Generative AI: AI that creates new content (text, images, code, audio) in response to user prompts — e.g. ChatGPT, Google Gemini, Microsoft Copilot.
- AI Feature: An AI capability embedded within an existing platform (e.g. Gemini in Google Docs, Copilot in Word, smart suggestions in an LMS).
- Student Data: Any personally identifiable information relating to a student, including academic records, usage data, behavioral data, or biometric data.
- AI Policy Owner: The designated staff member with day-to-day governance responsibility for this policy.
- Governance Panel: Cross-functional body responsible for tool approval, policy updates, and incident review.
- Titular: The individual whose personal data is being processed (Ley 1581/2012, Art. 3).
- Education Records (FERPA): Records, files, documents, and other materials that contain information directly related to a student and are maintained by the school (20 U.S.C. § 1232g).
- Interesado / Data Subject: An identified or identifiable natural person whose personal data is processed (GDPR Art. 4).
- DPIA: Data Protection Impact Assessment — mandatory for high-risk processing (GDPR Art. 35).
§3 AI Tools Currently in Use
The following tools have been inventoried and reviewed under this policy. All tools not listed require approval before use with students or student data (see §8).
Note on embedded AI: Google Workspace for Education and Microsoft 365 for Education include AI features that may be enabled by default. Administrators must review tenant-level AI settings and configure them in line with this policy before student use.
§4 Permitted Uses of AI
Staff
- Lesson planning, resource creation, and differentiation support
- Drafting communications (with human review before sending)
- Administrative summarization and documentation (non-student-identifiable)
- Professional development and training activities
- Formative assessment feedback generation (reviewed by teacher before delivery)
- Using AI features within approved platforms (Gemini in Google Workspace, Copilot in M365)
Students (with teacher oversight)
- Approved AI literacy activities as part of curriculum
- Using AI features within school-approved platforms (LMS tools, Grammarly, Canva AI within set permissions)
- Research assistance where disclosure and critical evaluation are taught
- Accessibility tools that include AI features (text-to-speech, translation, reading support)
Administration
- Operational reporting and analytics via approved platforms
- Parent/community communication drafting (human-reviewed)
- Non-identifiable data analysis for school improvement
§5 Restricted & Prohibited Uses
Prohibited for all users:
- Using AI to generate, distribute, or store content that exploits, endangers, or sexualizes minors
- Automated disciplinary decisions without human review
- Biometric identification of students without explicit informed consent and legal basis
- Using student data to train external AI models
- Sharing personally identifiable student information with unapproved AI tools
- AI-generated academic work submitted as the student's own without disclosure
- Using generative AI tools not on the approved list with student accounts
Restricted (Staff Only — Governance Panel Pre-Approval Required)
- AI-assisted behavioral prediction or risk-scoring of individual students
- Facial recognition or emotion detection tools
- Automated attendance tracking via AI vision systems
- Any AI tool that processes sensitive student data (health, SEND status, family situation)
§6 Student Data & Privacy
Digital consent ages by jurisdiction:

| Jurisdiction | Consent age | Rule |
| --- | --- | --- |
| 🇨🇴 Colombia | 14 | Under 14: guardian consent. 14–18: own consent (sensitive data: guardian consent under 16). |
| 🇺🇸 USA | 13 / 18 | COPPA: under 13 requires parental consent. FERPA: parental rights until 18. |
| 🇪🇸 Spain | 14 | Under 14: guardian consent (LOPDGDD Art. 7.2). |
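The thresholds above can be encoded as a simple lookup so that enrolment or tool-provisioning scripts flag when guardian consent is required. This is an illustrative sketch, not part of the policy; the function and dictionary names are assumptions.

```python
# Consent-age rules per jurisdiction, restating the table above.
# Illustrative only; names are hypothetical, not mandated by this policy.
CONSENT_RULES = {
    "CO": {"self_consent_age": 14, "sensitive_data_age": 16},   # Ley 1581/2012
    "US": {"self_consent_age": 13, "parental_rights_until": 18}, # COPPA / FERPA
    "ES": {"self_consent_age": 14},                              # LOPDGDD Art. 7.2
}

def guardian_consent_required(jurisdiction: str, age: int,
                              sensitive: bool = False) -> bool:
    """Return True if a parent/guardian must consent before an AI tool
    processes this student's personal data."""
    rules = CONSENT_RULES[jurisdiction]
    if age < rules["self_consent_age"]:
        return True
    # Colombia: sensitive-data processing needs guardian consent under 16.
    if sensitive and age < rules.get("sensitive_data_age", 0):
        return True
    return False
```

For example, `guardian_consent_required("CO", 15, sensitive=True)` returns `True`: a 15-year-old in Colombia may consent to general processing but not to sensitive-data processing.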
Governing Framework — Colombia
- Ley 1581/2012 — Ley de Protección de Datos Personales: requires informed consent, data minimization, and purpose limitation for all personal data processing.
- Decreto 1377/2013 — Implements the Personal Data Management Programme; schools must maintain a data treatment policy and register databases with the SIC.
- Directiva Externa 002/2024 (SIC) — Specific SIC guidance on processing personal data for AI systems; applies to AI tools used by educational institutions.
- Students under 14: written consent of parent/guardian required before any personal data is collected by an AI tool. Students 14–18 may consent to general data processing; sensitive data processing requires guardian consent for those under 16.
- Breach notification: 15 business days from detection — submit via RNBD portal to SIC and notify affected data subjects.
- The school must maintain a current Política de Tratamiento de Datos Personales published on its website.
All AI tools that process student data must be registered in the school's data inventory. Vendor data processing agreements must explicitly prohibit training AI models on student data and must confirm compliance with Ley 1581/2012.
Governing Framework — United States
- FERPA (20 U.S.C. § 1232g) — Prohibits disclosure of education records without consent. AI vendors accessing education records must qualify as "school officials" with legitimate educational interest and be bound by the same FERPA requirements as the school.
- COPPA (16 CFR Part 312) — Requires verifiable parental consent before collecting personal information from children under 13. Schools may consent on parents' behalf under the "school exception," but only where the tool collects data solely for the school's educational purposes and not for any commercial purpose.
- CIPA — Requires internet filtering and an Internet Safety Policy if the school receives E-rate funding. AI tools that enable open internet access or content generation must be reviewed for CIPA compliance.
- SOPIPA (CA) / State Laws — Many states (CA, NY, CO, TX, etc.) have additional student privacy laws prohibiting commercialization of student data. Check your state's requirements.
- AI vendors must sign a FERPA-compliant Data Processing Agreement and a student data privacy agreement aligned to state law.
- No federal breach notification requirement under FERPA, but most states require notification within 30–60 days. The school must comply with the applicable state law.
The school must ensure that any AI feature enabled within Google Workspace for Education or Microsoft 365 for Education is configured in the Administrator console to comply with COPPA (no data collection from under-13 users without parental consent where the school exception does not apply).
Governing Framework — Spain (EU)
- RGPD / GDPR (EU 2016/679) — All AI processing of student data must have a valid legal basis (Art. 6). For minors, consent (Art. 7–8) requires particular care; the school must apply age-appropriate safeguards.
- LOPDGDD (LO 3/2018) Art. 7.2 — Spain sets the digital consent age at 14 (below GDPR's default 16). Students under 14 require parental/guardian consent for all data processing. Students 14+ may consent if information is provided in clear, plain language.
- EU AI Act (EU 2024/1689) — Applies in stages, with most high-risk obligations taking effect from August 2026. AI systems used for student assessment or behavioral monitoring are classified as high-risk (Annex III) and require conformity assessments, transparency obligations, and registration in the EU AI database. Emotion-recognition systems in education are prohibited outright (Art. 5).
- GDPR Art. 35 — DPIA: Mandatory Data Protection Impact Assessment before deploying any AI tool that involves large-scale processing of student data or high-risk processing (e.g. AI-driven assessment, behavioral analytics).
- Breach notification: 72 hours from awareness of a breach — notify AEPD. If high risk to data subjects, notify individuals without undue delay.
- A DPD (Delegado de Protección de Datos) is required and must be registered with AEPD within 10 days of appointment.
The school's Google Workspace or Microsoft 365 tenant must be configured under the Education Plus / A3/A5 tier with the appropriate data residency settings (EU) and with student AI features restricted to age-appropriate controls. Gemini for Workspace and Microsoft Copilot for Education must be reviewed against AEPD guidance before enabling for students under 14.
§7 Safeguarding & Child Protection
All AI use in the school must be consistent with the school's safeguarding obligations. AI does not reduce or transfer safeguarding responsibility — it introduces additional considerations.
Safeguarding Principles for AI
- No AI tool may be used to monitor student behavior, communications, or location without explicit school policy, parental notification, and legal basis.
- AI-generated content shown to students must be filtered and reviewed for age-appropriateness before delivery.
- Any AI system flagging student risk or distress (e.g. mental health indicators) must route to a qualified human — never to automated action alone.
- Staff must be trained to recognise when a student may be disclosing harm through AI-mediated interactions (chatbots, AI tutors).
Colombia — Child Protection Framework
- Ley 1098/2006 — Código de Infancia y Adolescencia: guarantees the full development and protection of minors. AI tools must not compromise the rights enshrined in this code.
- Ley 1273/2009 — Delitos informáticos: criminalises unauthorised access to systems and misuse of data; applies to AI systems that may expose student data.
- Report any AI-facilitated safeguarding incident to ICBF (Instituto Colombiano de Bienestar Familiar) in addition to internal processes.
- CONPES 4144 (2025) establishes safe AI design as a national principle — the school's AI governance must align with this.
USA — Child Protection Framework
- COPPA — Prohibits collection of personal data from children under 13 for commercial purposes. AI tutoring or chatbot tools that collect conversation data from under-13 students without the school exception must have verifiable parental consent.
- KOSA (Kids Online Safety Act) — Proposed federal legislation (passed the U.S. Senate in 2024 but not yet enacted) that would impose a duty of care on platforms used by minors. As a matter of school policy, AI tools used by students must not be designed to cause harm, exploit attention, or promote self-harm content.
- Title IX obligations extend to AI: the school must ensure AI tools do not create or perpetuate a hostile environment based on sex, race, disability, or other protected characteristics.
- Mandated reporter obligations apply to staff who become aware of abuse or neglect through any channel, including AI-mediated disclosures.
Spain — Child Protection Framework
- Ley Orgánica 8/2021 — Protección Integral a la Infancia y la Adolescencia frente a la Violencia (LOPIVI): schools have affirmative obligations to prevent digital violence, including AI-enabled harassment or exploitation.
- LOPDGDD Arts. 79–97 — Digital rights apply to students; schools must protect the right to digital security, to be forgotten, and to protection of minors' digital identity.
- EU AI Act — Prohibited Practices (Art. 5): the following are banned in educational contexts: subliminal AI manipulation, exploitation of vulnerabilities of minors, emotion inference about students (except for medical or safety reasons), and real-time remote biometric identification in public spaces.
- Any AI system that detects or infers a student's emotional state is prohibited in education under the EU AI Act (Art. 5), save for medical or safety purposes; any permitted deployment requires a DPIA (GDPR Art. 35).
- Report safeguarding incidents involving AI to the designated child protection coordinator and, where legally required, to Fiscalía de Menores or local authorities.
§8 AI Tool Approval Process
Any AI-enabled tool not currently on the approved list (§3) must pass the following process before use with students, student data, or in official school communications.
1. Request Submission: Staff member submits a Tool Approval Request to the AI Policy Owner, including tool name, vendor, intended use, target users (staff/grade levels), and data types accessed.
2. Vendor Due Diligence: IT/compliance reviews the vendor's privacy policy, terms of service, data processing agreement, and security certifications, confirming:
   - Compliance with Ley 1581/2012 and willingness to sign a data treatment agreement (Colombia)
   - FERPA/COPPA compliance and willingness to sign a Student Data Privacy Agreement (USA)
   - GDPR compliance; DPA (Data Processing Agreement) under GDPR Art. 28; EU data residency or SCCs (Spain/EU)
   - No use of student data to train AI models
   - Data deletion capabilities and retention limits
3. Risk Classification: The Governance Panel classifies the tool as Low / Medium / High risk based on data accessed, student age groups, and AI capability. Before approval, high-risk tools require a SIC-aligned impact assessment (Colombia), legal review (USA), and a full DPIA under GDPR Art. 35 (Spain/EU), as applicable.
4. Panel Decision: The Governance Panel votes; majority approval is required. The decision is logged with rationale and approvers named. Outcome: Approved / Approved with conditions / Rejected.
5. Registry Update & Communication: The approved tool is added to the §3 inventory, staff are notified, and training is provided if required. Rejected tools are logged with the reason to prevent duplicate submissions.
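The approval workflow above can be tracked as a structured record rather than loose emails. The sketch below is a hypothetical illustration, assuming a simple Python record kept by the AI Policy Owner; all field and class names are assumptions, not policy requirements.

```python
# Hypothetical record for the five-step approval workflow in §8.
from dataclasses import dataclass, field
from typing import Optional

RISK_LEVELS = ("low", "medium", "high")
# Possible step-4 outcomes: approved / approved_with_conditions / rejected.

@dataclass
class ToolApprovalRequest:
    tool_name: str      # step 1: request submission fields
    vendor: str
    intended_use: str
    target_users: str
    data_types: list
    risk: Optional[str] = None                           # set at step 3
    panel_votes: dict = field(default_factory=dict)      # name -> bool, step 4
    outcome: Optional[str] = None

    def classify(self, risk: str) -> None:
        """Step 3: Governance Panel assigns a risk level."""
        assert risk in RISK_LEVELS
        self.risk = risk

    def decide(self) -> str:
        """Step 4: simple majority of named panel votes; approvers stay logged."""
        yes = sum(1 for v in self.panel_votes.values() if v)
        self.outcome = "approved" if yes > len(self.panel_votes) / 2 else "rejected"
        return self.outcome
```

A rejected or conditionally approved request would keep its logged rationale, supporting step 5's rule against duplicate submissions.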
§9 Roles & Responsibilities
Head / Principal
- Ultimate accountability for this policy
- Approves policy versions and major changes
- Chairs board/governor briefings on AI governance
- Signs off on high-risk tool approvals
Superintendent (Policy Co-Owner)
- Strategic leadership and district-wide accountability for AI governance
- Co-signs all policy versions and major changes
- Represents AI governance to the board and community
- Final authority on high-risk tool decisions
IT Director (Policy Co-Owner)
- Day-to-day governance stewardship and tool registry
- Configures AI controls in Google Workspace / M365 admin consoles
- Coordinates incident response and vendor compliance
- Drafts policy updates for panel review
Data Protection Lead / IT Lead (Colombia: Encargado de Datos · USA: Privacy Officer · Spain: DPD, Delegado de Protección de Datos)
- Colombia: manages SIC database registration (RNBD); handles breach notification to the SIC within 15 business days
- USA: FERPA compliance; vendor DPA management; state breach notification compliance
- Spain: mandatory DPO role, registered with the AEPD within 10 days of appointment; manages the DPIA process; notifies the AEPD within 72 hours of a breach
- Configures AI settings in Google Workspace / M365 admin consoles
- Reviews vendor security certifications
Governance Panel
- Reviews and votes on tool approval requests
- Reviews incidents and recommends policy changes
- Meets at minimum quarterly
- Members: AI Policy Owner, IT Lead, a teacher representative, a senior leader, and (optionally) a parent/community rep
Teaching Staff
- Use only approved AI tools with students
- Complete annual AI governance training
- Report incidents and new tool requests to AI Policy Owner
- Supervise student AI use in accordance with this policy
Students
- Use AI tools only as directed by staff within approved platforms
- Disclose AI use in academic work per teacher instruction
- Report concerns about AI content or interactions to a trusted adult
§10 Incident Response
Incident Classification
- P1 — Critical: Data breach involving student PII; safeguarding concern; unauthorised AI decision affecting a student's welfare or rights.
- P2 — Serious: Prohibited AI content delivered to students; unapproved tool used with student data; vendor compliance failure.
- P3 — Minor: Policy non-compliance without data exposure; student misuse of approved tool; staff using AI without disclosure.
Response SLAs
Colombia

| Deadline | Milestone | Action |
| --- | --- | --- |
| 1h | P1 — Internal alert | AI Policy Owner and Head notified |
| 24h | P1 — Containment | Tool suspended; data access revoked |
| 15 business days | SIC notification | Via RNBD portal (Ley 1581/2012) |
| 48h | P2 — Response | Panel convened; parents notified if required |
| 5 days | P3 — Resolution | Log filed; corrective action assigned |

USA

| Deadline | Milestone | Action |
| --- | --- | --- |
| 1h | P1 — Internal alert | AI Policy Owner and Head notified |
| 24h | P1 — Containment | Tool suspended; data access revoked |
| Per state law | Breach notification | Typically 30–60 days; check your state |
| 72h | FERPA review | Determine whether education records were affected |
| 5 days | P3 — Resolution | Log filed; corrective action assigned |

Spain (EU)

| Deadline | Milestone | Action |
| --- | --- | --- |
| 1h | P1 — Internal alert | DPD and Head notified |
| 24h | P1 — Containment | Tool suspended; data access revoked |
| 72 hours | AEPD notification | GDPR Art. 33; via the AEPD electronic portal |
| Without undue delay | Affected individuals | Notify if high risk to data subjects (GDPR Art. 34) |
| 5 days | P3 — Resolution | Incident register updated; corrective action assigned |
Incident Log
All incidents P1–P3 must be logged in the Policy Incident Register (maintained by the AI Policy Owner), including: date, description, classification, actions taken, resolution, and lessons learned.
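The jurisdiction-specific regulatory deadlines above can be restated as a small lookup, useful for computing a breach-notification due date when a P1 incident is logged. This is an illustrative sketch under stated assumptions: names are hypothetical, and the business-day counting for Colombia is simplified to calendar days.

```python
# Regulatory breach-notification deadlines restated from the SLA tables.
# Illustrative only; not a substitute for legal advice.
from datetime import datetime, timedelta

BREACH_NOTIFICATION = {
    "CO": {"authority": "SIC (RNBD portal)", "deadline": timedelta(days=15)},
    # 15 *business* days under Ley 1581/2012; simplified to calendar days here.
    "US": {"authority": "per state law", "deadline": timedelta(days=30)},
    # Most states allow 30-60 days; 30 is the conservative floor.
    "ES": {"authority": "AEPD", "deadline": timedelta(hours=72)},  # GDPR Art. 33
}

def notification_due(jurisdiction: str, detected: datetime) -> datetime:
    """Earliest regulatory notification deadline after a P1 breach is detected.
    A real implementation should use a working-day calendar for Colombia."""
    return detected + BREACH_NOTIFICATION[jurisdiction]["deadline"]
```

For example, a breach detected in Spain on 1 March must be notified to the AEPD within 72 hours, i.e. by 4 March.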
§11 Governing Legal Framework
- Ley 1581/2012 — Ley de Protección de Datos Personales
- Decreto 1377/2013 — Reglamentación de Ley 1581
- Ley 1098/2006 — Código de la Infancia y la Adolescencia
- Ley 1273/2009 — Protección de la información y los datos (delitos informáticos)
- Ley 059/2023 — Lineamientos de política pública para el desarrollo de la IA
- Directiva Externa SIC 002/2024 — Tratamiento de datos personales en sistemas de IA
- CONPES 4144/2025 — Política Nacional de Inteligencia Artificial
- Regulatory authority: Superintendencia de Industria y Comercio (SIC) — sic.gov.co
- Child welfare authority: ICBF — icbf.gov.co
- FERPA — 20 U.S.C. § 1232g; 34 CFR Part 99
- COPPA — 15 U.S.C. §§ 6501–6506; 16 CFR Part 312
- CIPA — 47 U.S.C. § 254(h) (E-rate recipients)
- KOSA — Kids Online Safety Act (proposed; not yet enacted)
- SOPIPA (CA) / State equivalents — check applicable state law
- U.S. Dept of Education AI Guidance (2024) — ed.gov
- Regulatory authorities: U.S. Dept of Education (FERPA) · FTC (COPPA) · State Education Agency
- RGPD / GDPR — Regulation (EU) 2016/679
- LOPDGDD — Ley Orgánica 3/2018, de 5 de diciembre
- EU AI Act — Regulation (EU) 2024/1689 (applicable in stages from August 2026)
- LOPIVI — Ley Orgánica 8/2021 de protección integral a la infancia
- LOE/LOMLOE — Ley Orgánica 2/2006 (mod. LO 3/2020)
- Anteproyecto de Ley de Buen Uso y Gobernanza de la IA (draft bill on the good use and governance of AI, approved by the Council of Ministers, March 2025)
- Authorities: AEPD (aepd.es) · AESIA (AI supervision) · INTEF (pedagogical guidance)
- AEPD resources for schools: the guide "Protección de datos en centros educativos" (aepd.es)
Legal Notice: This policy is an operational governance framework. It is not legal advice. The school must seek independent legal counsel for formal legal ratification and compliance verification.
§12 Policy Review & Version Control
Review Cadence
- Scheduled review: Every 6 months (next: September 2026)
- Triggered review: Any P1 incident, a new law or regulation relevant to AI/data, a significant new tool deployment, or a change in school structure
- All policy changes require Governance Panel approval and Head/Principal sign-off
- Previous versions are archived with effective dates
Version History
| Version | Date | Author | Changes |
| --- | --- | --- | --- |
| 1.0 | 2026-03-20 | [NAME] | Initial publication |
| — | — | — | — |
Signatures & Adoption
AI Policy Owner
Signature
Name & Title
Date
Head / Principal
Signature
Name & Title
Date