Jurisdiction Framework: Colombia (Ley 1581/2012 · Ley 1098/2006 · SIC) · United States (FERPA · COPPA · DOE / FTC) · Spain — EU (GDPR · LOPDGDD · EU AI Act · AEPD)
Living Policy v1.0 — 🇨🇴 Colombia · 🇺🇸 United States · 🇪🇸 Spain (EU)

Artificial Intelligence Governance & Safety Policy

[SCHOOL NAME] — K–12
Version: 1.0
Effective Date: 2026-03-20
Next Review: 2026-09-20
Policy Owners: Superintendent & IT Director
Governing Authorities: Superintendencia de Industria y Comercio (SIC) · U.S. Department of Education / FTC · AEPD · AESIA
§1 Purpose & Scope

This policy establishes the governance framework for the safe, ethical, and educationally sound use of artificial intelligence (AI) tools across [SCHOOL NAME]. It applies to all staff, students, administrators, and any third party operating on behalf of the school.

This is a living document. It is versioned, school-owned, and subject to mandatory review every six months or upon any significant change in law, technology, or incident.

Scope: This policy covers all AI-enabled tools used in instruction, administration, assessment, communication, and student support — including AI features embedded in platforms the school already operates (Google Workspace, Microsoft 365, LMS platforms).
Alignment with Digital Citizenship Policy: This AI Policy is a companion document to the school's existing Digital Citizenship Policy. Where the two documents address the same conduct, the more specific or more restrictive provision applies. The Digital Citizenship Policy's expectations around responsible use, online safety, and academic integrity extend fully to AI tools. (Digital Citizenship Policy reference will be linked here upon receipt.)
§2 Definitions
§3 AI Tools Currently in Use

The following tools have been inventoried and reviewed under this policy. All tools not listed require approval before use with students or student data (see §8).

Tool / Platform | Type | AI Features in Use | Users | Risk | Status
Google Workspace for Education | Integrated Platform | Gemini AI (drafting, summarization, Smart Compose, Meet transcription) | Staff + Students | Medium | Approved
Microsoft 365 for Education | Integrated Platform | Copilot (writing assist, Teams AI, Reading Coach, Reflect) | Staff + Students | Medium | Approved
Schoology | LMS | AI-assisted grading feedback, content recommendations, analytics | Staff + Students | Low | Approved
Edmodo | LMS | AI content suggestions, engagement analytics | Staff + Students | Low | Under Review
Canvas (Instructure) | LMS | Impact analytics, AI feedback on assignments, early alert system | Staff + Students | Low | Approved
Google Classroom | LMS | Gemini integration, practice sets, auto-grading assist | Staff + Students | Low | Approved
ChatGPT (OpenAI) | Generative AI | Text generation, tutoring, Q&A | Staff only | High | Restricted
Grammarly | Writing AI | Grammar, style, tone, plagiarism suggestions | Staff + Students (Gr 6+) | Low | Approved
Canva AI | Creative AI | AI image generation (Magic Media), text-to-design, background removal | Staff + Students (Gr 4+) | Medium | Approved
Note on embedded AI: Google Workspace for Education and Microsoft 365 for Education include AI features that activate by default. Administrators must review tenant-level AI settings and configure them in line with this policy before student use.
§4 Permitted Uses of AI

Staff

Students (with teacher oversight)

Administration

§5 Restricted & Prohibited Uses
Prohibited for all users:
  • Using AI to generate, distribute, or store content that exploits, endangers, or sexualizes minors
  • Automated disciplinary decisions without human review
  • Biometric identification of students without explicit informed consent and legal basis
  • Using student data to train external AI models
  • Sharing personally identifiable student information with unapproved AI tools
  • AI-generated academic work submitted as the student's own without disclosure
  • Using generative AI tools not on the approved list with student accounts

Restricted (Staff Only — Governance Panel Pre-Approval Required)

§6 Student Data & Privacy

Governing Framework — Colombia

  • Ley 1581/2012 — Ley de Protección de Datos Personales: requires informed consent, data minimization, and purpose limitation for all personal data processing.
  • Decreto 1377/2013 — Partially regulates Ley 1581/2012; schools must maintain a data treatment policy and register their databases with the SIC (RNBD).
  • Directiva Externa 002/2024 (SIC) — Specific SIC guidance on processing personal data for AI systems; applies to AI tools used by educational institutions.
  • Students under 14: written consent of parent/guardian required before any personal data is collected by an AI tool. Students 14–18 may consent to general data processing; sensitive data processing requires guardian consent for those under 16.
  • Breach notification: 15 business days from detection — submit via RNBD portal to SIC and notify affected data subjects.
  • The school must maintain a current Política de Tratamiento de Datos Personales published on its website.

All AI tools that process student data must be registered in the school's data inventory. Vendor data processing agreements must explicitly prohibit training AI models on student data and must confirm compliance with Ley 1581/2012.

Governing Framework — United States

  • FERPA (20 U.S.C. § 1232g) — Prohibits disclosure of education records without consent. AI vendors accessing education records must qualify as "school officials" with legitimate educational interest and be bound by the same FERPA requirements as the school.
  • COPPA (16 CFR Part 312) — Requires verifiable parental consent before collecting personal information from children under 13. Schools may provide consent on behalf of parents under the "school exception," but only where the data is collected solely for the school's educational purposes and not used for any commercial purpose.
  • CIPA — Requires internet filtering and an Internet Safety Policy if the school receives E-rate funding. AI tools that enable open internet access or content generation must be reviewed for CIPA compliance.
  • SOPIPA (CA) / State Laws — Many states (CA, NY, CO, TX, etc.) have additional student privacy laws prohibiting commercialization of student data. Check your state's requirements.
  • AI vendors must sign a FERPA-compliant Data Processing Agreement and a student data privacy agreement aligned to state law.
  • No federal breach notification requirement under FERPA, but most states require notification within 30–60 days. The school must comply with the applicable state law.

The school must ensure that any AI feature enabled within Google Workspace for Education or Microsoft 365 for Education is configured in the Administrator console to comply with COPPA (no data collection from under-13 users without parental consent where the school exception does not apply).

Governing Framework — Spain (EU)

  • RGPD / GDPR (EU 2016/679) — All AI processing of student data must have a valid legal basis (Art. 6). For minors, consent (Art. 7–8) requires particular care; the school must apply age-appropriate safeguards.
  • LOPDGDD (LO 3/2018) Art. 7.2 — Spain sets the digital consent age at 14 (below GDPR's default 16). Students under 14 require parental/guardian consent for all data processing. Students 14+ may consent if information is provided in clear, plain language.
  • EU AI Act (EU 2024/1689) — Fully applicable from August 2026. AI systems used for student assessment or behavioral monitoring are classified as high-risk (Annex III) and require conformity assessments, transparency obligations, and registration in the EU AI database. Emotion-recognition systems are prohibited in educational institutions (Art. 5), except for medical or safety reasons.
  • GDPR Art. 35 — DPIA: Mandatory Data Protection Impact Assessment before deploying any AI tool that involves large-scale processing of student data or high-risk processing (e.g. AI-driven assessment, behavioral analytics).
  • Breach notification: 72 hours from awareness of a breach — notify AEPD. If high risk to data subjects, notify individuals without undue delay.
  • A DPD (Delegado de Protección de Datos) is required and must be registered with AEPD within 10 days of appointment.

The school's Google Workspace or Microsoft 365 tenant must be configured under the Education Plus / A3/A5 tier with the appropriate data residency settings (EU) and with student AI features restricted to age-appropriate controls. Gemini for Workspace and Microsoft Copilot for Education must be reviewed against AEPD guidance before enabling for students under 14.
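The breach-notification deadlines above differ by jurisdiction and are easy to miscount under pressure. A minimal sketch of the three computations, assuming Monday–Friday business days and ignoring public holidays (function names and the holiday simplification are illustrative, not prescribed by this policy):

```python
from datetime import datetime, timedelta

def sic_deadline(detected: datetime, business_days: int = 15) -> datetime:
    """Colombia (Ley 1581/2012): 15 business days from detection.
    Counts Mon-Fri only; Colombian public holidays are not modeled here."""
    day = detected
    remaining = business_days
    while remaining > 0:
        day += timedelta(days=1)
        if day.weekday() < 5:  # weekday() 0-4 = Mon-Fri
            remaining -= 1
    return day

def aepd_deadline(aware: datetime) -> datetime:
    """Spain (GDPR Art. 33): 72 hours from awareness of the breach."""
    return aware + timedelta(hours=72)

def us_deadline(detected: datetime, state_days: int) -> datetime:
    """USA: state-dependent, typically 30-60 calendar days; pass your state's limit."""
    return detected + timedelta(days=state_days)
```

For example, a breach detected on Wednesday 2026-04-01 gives an SIC deadline of 2026-04-22; a real implementation should also subtract national holidays from the business-day count.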

§7 Safeguarding & Child Protection

All AI use in the school must be consistent with the school's safeguarding obligations. AI does not reduce or transfer safeguarding responsibility — it introduces additional considerations.

Safeguarding Principles for AI

Colombia — Child Protection Framework

  • Ley 1098/2006 — Código de Infancia y Adolescencia: guarantees the full development and protection of minors. AI tools must not compromise the rights enshrined in this code.
  • Ley 1273/2009 — Delitos informáticos: criminalises unauthorised access to systems and misuse of data; applies to AI systems that may expose student data.
  • Report any AI-facilitated safeguarding incident to ICBF (Instituto Colombiano de Bienestar Familiar) in addition to internal processes.
  • CONPES 4144 (2025) establishes safe AI design as a national principle — the school's AI governance must align with this.

USA — Child Protection Framework

  • COPPA — Prohibits collection of personal data from children under 13 for commercial purposes. AI tutoring or chatbot tools that collect conversation data from under-13 students without the school exception must have verifiable parental consent.
  • KOSA (Kids Online Safety Act) — Passed the U.S. Senate in 2024 but not yet enacted; if it becomes law, it would impose a duty of care on platforms used by minors. Regardless, AI tools used by students must not be designed to cause harm, exploit attention, or promote self-harm content.
  • Title IX obligations extend to AI: the school must ensure AI tools do not create or perpetuate a hostile environment based on sex, race, disability, or other protected characteristics.
  • Mandated reporter obligations apply to staff who become aware of abuse or neglect through any channel, including AI-mediated disclosures.

Spain — Child Protection Framework

  • Ley Orgánica 8/2021 — Protección Integral a la Infancia y la Adolescencia frente a la Violencia (LOPIVI): schools have affirmative obligations to prevent digital violence, including AI-enabled harassment or exploitation.
  • LOPDGDD Arts. 79–97 — Digital rights apply to students; schools must protect the right to digital security, to be forgotten, and to protection of minors' digital identity.
  • EU AI Act — Prohibited Practices (Art. 5): the following are banned in educational contexts: subliminal AI manipulation, exploitation of vulnerabilities of minors, emotion inference about students (except for medical or safety reasons), and real-time remote biometric identification in public spaces.
  • Any AI system that detects or infers a student's emotional state falls under the Art. 5 prohibition in educational settings; any permitted exception (medical or safety) still requires a DPIA (GDPR Art. 35).
  • Report safeguarding incidents involving AI to the designated child protection coordinator and, where legally required, to the Fiscalía de Menores or local authorities.
§8 AI Tool Approval Process

Any AI-enabled tool not currently on the approved list (§3) must pass the following process before use with students, student data, or in official school communications.

1. Request Submission — The staff member submits a Tool Approval Request to the AI Policy Owner, including: tool name, vendor, intended use, target users (staff/grade levels), and data types accessed.
2. Vendor Due Diligence — IT/compliance reviews the vendor's privacy policy, terms of service, data processing agreement, and security certifications, confirming:
  • Compliance with Ley 1581/2012 and willingness to sign a data treatment agreement
  • FERPA/COPPA compliance; willingness to sign a Student Data Privacy Agreement
  • GDPR compliance; DPA (Data Processing Agreement) under GDPR Art. 28; EU data residency or SCCs
  • No use of student data to train AI models
  • Data deletion capabilities and retention limits
3. Risk Classification — The Governance Panel classifies the tool Low / Medium / High risk based on data accessed, student age groups, and AI capability. High-risk tools additionally require, by jurisdiction, an SIC-aligned impact assessment (Colombia), legal review (USA), and a full DPIA under GDPR Art. 35 (Spain) before approval.
4. Panel Decision — The Governance Panel votes; majority approval is required. The decision is logged with rationale and approvers named. Outcome: Approved / Approved with conditions / Rejected.
5. Registry Update & Communication — The approved tool is added to the §3 inventory, staff are notified, and training is provided if required. Rejected tools are logged with the reason to prevent duplicate submissions.
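The risk-classification step in this process can be sketched as a small helper. The factors follow step 3 (data accessed, student age groups, AI capability), but the specific thresholds and names below are illustrative assumptions, not policy:

```python
# Illustrative sketch of the §8 risk-classification step.
# Thresholds and field names are assumptions, not prescribed by the policy.

def classify_risk(accesses_pii: bool, youngest_grade: int, generative: bool) -> str:
    """Classify a tool Low / Medium / High from the three step-3 factors."""
    if generative and accesses_pii:
        return "High"
    if accesses_pii or (generative and youngest_grade <= 5):
        return "Medium"
    return "Low"

def required_reviews(risk: str) -> list[str]:
    """High-risk tools need the jurisdiction-specific assessments named in step 3."""
    reviews = ["vendor due diligence", "governance panel vote"]
    if risk == "High":
        reviews += ["SIC-aligned impact assessment (CO)",
                    "legal review (US)",
                    "DPIA under GDPR Art. 35 (ES)"]
    return reviews
```

A generative tool touching student PII would classify as High and pick up all three jurisdiction-specific reviews; the Governance Panel's judgment, not the helper, remains authoritative.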
§9 Roles & Responsibilities
Head / Principal
  • Ultimate accountability for this policy
  • Approves policy versions and major changes
  • Chairs board/governor briefings on AI governance
  • Signs off on high-risk tool approvals
Superintendent (Policy Co-Owner)
  • Strategic leadership and district-wide accountability for AI governance
  • Co-signs all policy versions and major changes
  • Represents AI governance to the board and community
  • Final authority on high-risk tool decisions
IT Director (Policy Co-Owner)
  • Day-to-day governance stewardship and tool registry
  • Configures AI controls in Google Workspace / M365 admin consoles
  • Coordinates incident response and vendor compliance
  • Drafts policy updates for panel review
Data Protection Lead — Encargado de Datos (Colombia) / Privacy Officer (USA) / DPD, Delegado de Protección de Datos (Spain) — with IT Lead
  • Colombia: manages SIC database registration; handles breach notification to the SIC within 15 business days
  • USA: FERPA compliance; vendor DPA management; state breach notification compliance
  • Spain: mandatory DPO role, registered with the AEPD within 10 days of appointment; manages the DPIA process; notifies the AEPD within 72 h of a breach
  • Configures AI settings in Google Workspace / M365 admin consoles
  • Reviews vendor security certifications
Governance Panel
  • Reviews and votes on tool approval requests
  • Reviews incidents and recommends policy changes
  • Meets at minimum quarterly
  • Members: AI Policy Owner, IT Lead, a teacher representative, a senior leader, and (optionally) a parent/community rep
Teaching Staff
  • Use only approved AI tools with students
  • Complete annual AI governance training
  • Report incidents and new tool requests to AI Policy Owner
  • Supervise student AI use in accordance with this policy
Students
  • Use AI tools only as directed by staff within approved platforms
  • Disclose AI use in academic work per teacher instruction
  • Report concerns about AI content or interactions to a trusted adult
§10 Incident Response

Incident Classification

Response SLAs

Colombia
Deadline | Stage | Action
1 h | P1 — Internal alert | AI Policy Owner and Head notified
24 h | P1 — Containment | Tool suspended; data access revoked
48 h | P2 — Response | Panel convened; parents notified if required
15 business days | SIC notification | Via the RNBD portal · Ley 1581/2012
5 days | P3 — Resolution | Log filed; corrective action assigned

United States
Deadline | Stage | Action
1 h | P1 — Internal alert | AI Policy Owner and Head notified
24 h | P1 — Containment | Tool suspended; data access revoked
72 h | FERPA review | Determine whether education records were affected
State law | Breach notification | Typically 30–60 days; check your state
5 days | P3 — Resolution | Log filed; corrective action assigned

Spain
Deadline | Stage | Action
1 h | P1 — Internal alert | DPD and Head notified
24 h | P1 — Containment | Tool suspended; data access revoked
72 h | AEPD notification | GDPR Art. 33 — via the AEPD electronic portal (sede electrónica)
Without undue delay | Affected data subjects | Notify if high risk — GDPR Art. 34
5 days | P3 — Resolution | Incident register entry; corrective action assigned

Incident Log

All incidents P1–P3 must be logged in the Policy Incident Register (maintained by the AI Policy Owner), including: date, description, classification, actions taken, resolution, and lessons learned.
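The register entry described above can be modeled as a simple record type. The field names below are an illustrative mapping of the required fields (date, description, classification, actions, resolution, lessons learned), not a mandated schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentRecord:
    """One row of the Policy Incident Register, per §10's required fields."""
    incident_date: date
    description: str
    classification: str                     # "P1" | "P2" | "P3"
    actions_taken: list[str] = field(default_factory=list)
    resolution: str = ""
    lessons_learned: str = ""

    def is_open(self) -> bool:
        """An incident stays open until a resolution is recorded."""
        return not self.resolution
```

Keeping `is_open` derived from the resolution field gives the AI Policy Owner a trivial way to list unresolved incidents ahead of each quarterly panel meeting.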

§11 Governing Legal Framework
Colombia
  • Ley 1581/2012 — Ley de Protección de Datos Personales
  • Decreto 1377/2013 — Reglamentación de Ley 1581
  • Ley 1098/2006 — Código de la Infancia y la Adolescencia
  • Ley 1273/2009 — Protección de la información y los datos (computer crimes)
  • Ley 059/2023 — Public-policy guidelines for the development of AI
  • Directiva Externa SIC 002/2024 — Processing of personal data in AI systems
  • CONPES 4144/2025 — National Artificial Intelligence Policy
  • Regulatory authority: Superintendencia de Industria y Comercio (SIC) — sic.gov.co
  • Child welfare authority: ICBF — icbf.gov.co

United States
  • FERPA — 20 U.S.C. § 1232g; 34 CFR Part 99
  • COPPA — 15 U.S.C. §§ 6501–6506; 16 CFR Part 312
  • CIPA — 47 U.S.C. § 254(h) (E-rate recipients)
  • KOSA — Kids Online Safety Act (passed the Senate in 2024; not yet enacted)
  • SOPIPA (CA) / state equivalents — check applicable state law
  • U.S. Dept of Education AI Guidance (2024) — ed.gov
  • Regulatory authorities: U.S. Dept of Education (FERPA) · FTC (COPPA) · State Education Agency

Spain (EU)
  • RGPD / GDPR — Regulation (EU) 2016/679
  • LOPDGDD — Ley Orgánica 3/2018, de 5 de diciembre
  • EU AI Act — Regulation (EU) 2024/1689 (applicable from August 2026)
  • LOPIVI — Ley Orgánica 8/2021 de protección integral a la infancia
  • LOE/LOMLOE — Ley Orgánica 2/2006 (as amended by LO 3/2020)
  • Anteproyecto de Ley de Buen Uso y Gobernanza de la IA (draft bill approved by the Council of Ministers, March 2025)
  • Authorities: AEPD (aepd.es) · AESIA (AI supervision) · INTEF (pedagogical guidance)
  • AEPD resources for schools: guide "Protección de datos en centros educativos" (aepd.es)
Legal Notice: This policy is an operational governance framework. It is not legal advice. The school must seek independent legal counsel for formal legal ratification and compliance verification.
§12 Policy Review & Version Control

Review Cadence

Version History

Version | Date | Author | Changes
1.0 | 2026-03-20 | [NAME] | Initial publication
Signatures & Adoption
AI Policy Owner
Signature
Name & Title
Date
Head / Principal
Signature
Name & Title
Date