When AI Makes Mistakes in Class: Who Is Responsible?
Artificial Intelligence (AI) has permeated the educational environment, transforming classroom dynamics, expanding pedagogical possibilities, and introducing novel challenges. As AI-powered systems increasingly support or automate tasks—from grading to student feedback and adaptive learning recommendations—educators and institutions must grapple with a crucial question: When AI makes mistakes in class, who is responsible? This question is far from abstract; it carries direct implications for professional liability, student wellbeing, institutional trust, and legal compliance across Europe.
The Complexity of Accountability: An Evolving Landscape
Responsibility in the context of AI errors is a multidimensional issue. In a typical classroom, consider an AI-powered tool that erroneously marks a student’s answer as incorrect, or, more seriously, perpetuates biases in learning recommendations. The immediate harm can range from a minor disruption to a student’s confidence, to systemic discrimination or privacy breaches. Determining who is answerable—teachers, technology vendors, or educational institutions—depends on understanding how AI functions, who controls its deployment, and what frameworks govern its use.
“The introduction of AI into classrooms should not dilute the duty of care owed to students; rather, it necessitates a nuanced redistribution of that duty across all stakeholders.”
Teachers: The Human in the Loop
Teachers remain the most visible face of accountability in education. The European Union’s legal tradition emphasizes the role of the human operator—the teacher—as the immediate guardian of students’ rights and welfare. Teachers are expected to exercise professional judgment, even when supported by AI. This means they must:
- Understand the capabilities and limitations of the AI systems they use
- Monitor outputs critically, especially in high-stakes contexts
- Intervene when AI recommendations appear questionable or unjust
However, the expectation that teachers can always detect or correct AI errors is unrealistic. Many AI systems operate as “black boxes,” with opaque decision-making processes. Teachers are often not trained as data scientists or AI ethicists, and their workload may prevent thorough oversight. The European Commission’s Ethics Guidelines for Trustworthy AI underscore the importance of human oversight, but also recognize its practical limits.
Vendors: The Designers and Suppliers
AI vendors play a pivotal role in the educational ecosystem. They are responsible for developing, testing, and maintaining the software that schools and teachers rely on. In the EU, the AI Act (Regulation (EU) 2024/1689) classifies many educational applications as “high-risk” and imposes substantial obligations on their vendors, termed “providers” in the Act. These obligations include (a sketch of machine-readable documentation follows this list):
- Transparency about system capabilities and limitations
- Robustness and safety checks
- Bias mitigation strategies
- Provision of adequate documentation and user training materials
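One way to give these obligations practical shape is for a vendor to ship documentation in a machine-readable form that schools can inspect before procurement. The sketch below is illustrative only: the class, field names, product name, and example values are assumptions made for this article, not terms defined in the AI Act.

```python
from dataclasses import dataclass

# Illustrative sketch (not an AI Act requirement): a machine-readable
# documentation record a vendor might publish alongside an educational AI tool.
@dataclass
class SystemDocumentation:
    name: str
    intended_use: str                          # e.g. "assistive grading of short answers"
    known_limitations: list[str]               # disclosed failure modes
    evaluated_bias_metrics: dict[str, float]   # e.g. score gaps across student groups
    human_oversight_required: bool = True
    training_materials_url: str = ""

doc = SystemDocumentation(
    name="ExampleGrader",                      # hypothetical product name
    intended_use="Assistive grading of short free-text answers",
    known_limitations=[
        "May misjudge creative or unconventional phrasing",
        "Not validated for languages other than English",
    ],
    evaluated_bias_metrics={"score_gap_by_language_background": 0.07},
)
```

A record of this kind gives schools and teachers something concrete to check observed behaviour against when an error occurs.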
When an AI tool makes a mistake due to flawed design, insufficient training data, or a lack of transparency, vendors may bear direct responsibility. However, contractual arrangements can complicate this: many vendors limit their liability through terms of service, shifting some risks back to users or institutions.
Schools and Educational Institutions: The Organisational Layer
Educational institutions act as the gatekeepers for technology adoption. They are expected to conduct due diligence before procuring AI tools, ensure compliance with data protection laws (such as the General Data Protection Regulation, GDPR), and provide adequate support and training for teachers. If an institution fails to vet an AI system properly or neglects to establish protocols for oversight and redress, it may share in the responsibility for any harm caused.
Moreover, schools are generally the “data controllers” under GDPR, meaning they determine the purposes and means of processing students’ personal data, even if the processing is performed by a third-party vendor. This places a significant legal burden on institutions to ensure the AI tools they deploy respect students’ rights to privacy and non-discrimination.
EU Case Law: Evolving Precedents
European case law is still catching up with the rapid deployment of AI in education. However, several cases and regulatory opinions shed light on how responsibility is likely to be allocated.
Data Protection: The Role of GDPR
In Schrems II (C-311/18), the Court of Justice of the European Union (CJEU) emphasized the responsibility of data controllers (often schools) to ensure that personal data processed by third-party vendors is adequately protected. While the case focused on international data transfers, its logic applies to educational AI: if a school deploys an AI system that mishandles student data, it cannot simply blame the vendor—it remains responsible for due diligence and oversight.
Algorithmic Decision-Making and Discrimination
EU law prohibits discrimination on grounds such as racial or ethnic origin, religion or belief, disability, age, and sex (see, among other instruments, Directive 2000/43/EC and Directive 2000/78/EC). If an AI system introduces biased outcomes, liability may extend to all parties involved—the teacher who used the system, the school that deployed it, and the vendor that designed it—depending on their respective roles and the nature of the error.
“Where algorithmic systems perpetuate inequality, the responsibility is collective, reflecting the interconnected nature of human and technological agency.”
Product Liability
The EU’s product liability regime (Directive 85/374/EEC, now being replaced by a revised directive that explicitly treats software, including AI systems, as a “product”) can make vendors strictly liable for defects that cause harm. However, the application of this regime to complex AI systems is still being tested in the courts, and many cases settle before reaching judgment.
Decision Tree: Determining Responsibility in Practice
Given the complexity of AI deployments, a decision tree can support educators and administrators in identifying where responsibility lies when an AI system makes a mistake (a minimal code sketch of this tree follows the list below):
- Step 1: Identify the Nature of the Error. Was the mistake due to incorrect data input, flawed AI logic, inadequate teacher oversight, or institutional failure?
- Step 2: Assess Human Involvement. Was the teacher expected to review or override AI decisions? Did they follow established protocols?
- Step 3: Review Vendor Documentation. Does the vendor provide clear guidance on the system’s use and limitations? Was the error foreseeable or preventable based on available documentation?
- Step 4: Examine Institutional Policies. Did the school provide adequate training and support? Were risk assessments conducted prior to deployment?
- Step 5: Reference Legal and Contractual Obligations. What do contracts, data protection laws, and relevant directives stipulate about liability?
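To make the five steps more tangible, here is a minimal sketch of how an institution might encode the decision tree as an internal triage aid. The incident fields, categories, and logic are assumptions made for illustration; the output is a prompt for further review, not a legal determination.

```python
# Minimal sketch of the decision tree above, assuming a simple incident record
# with yes/no answers to each step. Field names are illustrative assumptions.

def allocate_responsibility(incident: dict) -> list[str]:
    """Return the parties whose role warrants closer review for this incident."""
    parties = []

    # Steps 1-2: human involvement - was review expected but skipped?
    if incident.get("review_expected") and not incident.get("teacher_reviewed"):
        parties.append("teacher")

    # Step 3: vendor documentation - was the error outside the disclosed limitations?
    if incident.get("error_in_disclosed_limitations") is False:
        parties.append("vendor")  # an undisclosed failure mode points to the vendor

    # Step 4: institutional policies - training and risk assessment before deployment?
    if not incident.get("training_provided") or not incident.get("risk_assessment_done"):
        parties.append("institution")

    # Step 5: contracts and law may redistribute liability; flag for legal review
    if incident.get("contract_limits_vendor_liability"):
        parties.append("legal review of contract terms")

    return parties or ["no clear allocation - escalate for case-by-case review"]
```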
Scenario Example: Automated Grading Error
Suppose an AI grading tool misinterprets a student’s creative answer as incorrect, resulting in a low grade. Who is responsible?
- If the teacher blindly accepted the AI’s output without review, they may bear primary responsibility, especially if the system was advertised as “assistive” rather than fully autonomous.
- If the vendor promoted the tool as fully reliable and failed to disclose its limitations, they may be liable for misleading claims or poor design.
- If the school failed to provide adequate training, or pressured teachers to use the tool without oversight, institutional responsibility comes into play.
Each situation must be analyzed in context, using the decision tree above to guide the allocation of responsibility; the short example below shows how this scenario maps onto the earlier sketch.
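Under the same illustrative assumptions, the grading incident might be recorded and run through the sketch like this:

```python
# The grading scenario above, expressed as an incident record for the sketch.
grading_incident = {
    "review_expected": True,                   # tool marketed as "assistive"
    "teacher_reviewed": False,                 # output accepted without review
    "error_in_disclosed_limitations": False,   # creative answers not mentioned in the docs
    "training_provided": False,                # school offered no training
    "risk_assessment_done": True,
    "contract_limits_vendor_liability": True,
}

print(allocate_responsibility(grading_incident))
# ['teacher', 'vendor', 'institution', 'legal review of contract terms']
```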
Building a Culture of Shared Accountability
Effective use of AI in classrooms requires a culture of shared accountability. No single actor can guarantee perfect outcomes; instead, responsibility must be distributed in a way that encourages vigilance, transparency, and continuous improvement.
Best Practices for Educators
- Engage in regular professional development on AI tools
- Document instances of AI errors and interventions (see the logging sketch after this list)
- Communicate transparently with students and parents about the role of AI in assessments
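For the documentation point above, even a lightweight, append-only log is enough to establish a record of errors and interventions. The sketch below shows one possible format; the column names, file name, and helper function are assumptions, not a mandated schema.

```python
import csv
import os
from datetime import date

# Illustrative columns for a teacher-kept log of AI errors and interventions.
FIELDS = ["date", "tool", "description", "action_taken", "student_informed"]

def log_ai_error(path: str, entry: dict) -> None:
    """Append one AI-error record to a CSV file, writing a header if the file is new."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

log_ai_error("ai_error_log.csv", {
    "date": date.today().isoformat(),
    "tool": "ExampleGrader",                   # hypothetical tool name
    "description": "Creative answer marked incorrect by automated grading",
    "action_taken": "Grade overridden after manual review",
    "student_informed": "yes",
})
```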
Best Practices for Institutions
- Establish clear policies for AI procurement, monitoring, and incident response
- Encourage interdisciplinary collaboration between IT, pedagogy, and legal departments
- Foster an environment where teachers feel empowered to question and challenge AI outputs
Best Practices for Vendors
- Invest in explainable AI and user-friendly documentation
- Proactively address bias and error rates, sharing this information with clients
- Provide technical support and rapid redress mechanisms for reported issues
“Accountability in AI is not a burden to be borne alone, but a shared commitment to ethical and effective education.”
Looking Forward: Legislative and Technological Developments
The EU AI Act, whose obligations take effect in stages, introduces a more defined regulatory architecture for AI in education and will likely increase obligations for all parties. Teachers may need new training and certification; vendors will face higher standards for transparency and safety; schools will be expected to develop robust governance structures. At the same time, advances in explainable AI may empower educators to better understand and correct AI errors, narrowing the gap between human and machine judgment.
As European educators strive to harness the promise of AI, the question of responsibility remains central. By cultivating a shared understanding of roles, rights, and risks, the educational community can ensure that when AI makes mistakes, the response is not confusion or finger-pointing, but a coordinated effort to protect students and advance learning.