AI is Not Pedagogy: The Human in the Room Matters
Gist: Examining why artificial intelligence, despite its considerable capabilities, cannot replace the pedagogical expertise and relational work of teachers, and why conflating technological tools with teaching practice reveals a fundamental misunderstanding of what education actually is.
There’s a particular discomfort I experience when listening to technologists discuss the future of education. Not because I’m opposed to technological advancement, far from it, but because of the persistent category error that underlies so much of the discourse: the assumption that AI is pedagogy, or that it could become pedagogy with sufficient advancement. This conflation troubles me not out of Luddite resistance to change, but because I have spent considerable time thinking about what actually happens when humans learn together in shared spaces.
The error is subtle but consequential. We speak of ‘AI-powered learning’ and ‘intelligent tutoring systems’ as though these phrases describe pedagogical acts rather than technological affordances. We measure educational AI’s success by metrics that tell us almost nothing about the complex, relational work that constitutes teaching. And in doing so, we risk designing futures where the teacher, the actual human in the room, becomes positioned as merely another technological interface, rather than the irreplaceable centre of the pedagogical encounter.
The Seductive Logic of Technological Solutionism
Technological solutionism is a term coined by Evgeny Morozov (2014) in ‘To Save Everything, Click Here’. It refers to a mindset according to which all of society’s problems (security, health, transport, education, and so on) can be solved by making technological solutions available to individuals.
The appeal of AI in education follows a familiar pattern. We identify legitimate problems: teachers are overworked, student-to-teacher ratios are untenable, personalised instruction is difficult to scale, assessment is time-consuming. Then we propose technological solutions that promise to address these inefficiencies. AI can grade papers, provide instant feedback, adapt content to individual learning speeds, answer repetitive questions, and analyse student engagement patterns.
All of this is true. And yet, something essential gets lost in this framing.
What gets lost is the recognition that pedagogy is not primarily about information transmission or task optimisation. Pedagogy is the deliberate, relational practice of creating conditions for learning. It involves reading a room (and I mean actually reading—the micro-expressions, the quality of silence, the energy shift when understanding emerges or confusion sets in). It requires making split-second decisions about when to push and when to hold back, when to answer directly and when to let students struggle productively with not-knowing. It demands the capacity to adapt, to improvise, to respond to the unexpected, to recognise teachable moments that no algorithm could have predicted.
Research bears this out, though perhaps not in ways that translate easily into the metrics we typically use to evaluate educational technology. Teacher-student rapport (a somewhat old-fashioned term for the quality of the relationship between teachers and learners) significantly influences student academic engagement across multiple dimensions: behavioural, affective, and cognitive (Zhou, 2021; Schnider et al., 2020; Frisby & Martin, 2010). The mechanisms here are not reducible to information delivery. Students who experience strong rapport with teachers are more willing to engage with difficult material, more likely to persist through challenges, and more capable of meaningful cognitive work.
This isn’t simply about being ‘nice’ or creating pleasant classroom atmospheres, though those matter too. It’s about something more fundamental: the way that learning is always already a relational and social phenomenon, embedded in webs of human connection that give it meaning and purpose.
I also recommend reading Clément Marquet’s ‘Ce nuage que je ne saurais voir. Promouvoir, contester et réguler les data centers à Plaine Commune’ (roughly, ‘That cloud I cannot see: promoting, contesting and regulating data centres in Plaine Commune’), which is also available in English.
What AI Actually Does (and Doesn’t Do)
I want to be precise here. My argument is not that AI has no place in educational contexts. That would be both false and unhelpful. AI systems can provide valuable support: they can handle certain repetitive tasks, offer students additional practice opportunities, make certain types of data visible to teachers, and provide scaffolding for independent work. Selwyn’s (2018) book ‘Should Robots Replace Teachers?’ frames the need for questions that “relate to [the] nature of education as a profoundly social – and therefore human – process”.
But, and this is the crucial distinction, these are affordances, not pedagogical acts.
Consider what happens when students interact with an AI teaching assistant. Research examining this phenomenon found that whilst students appreciated the anonymity and reduced self-consciousness that came with asking questions to AI systems, they simultaneously worried about receiving unreliable or unexplained answers that could negatively impact their grades (Seo et al., 2021). The AI could answer questions, certainly. But it couldn’t do the pedagogical work of understanding why a student was confused, recognising patterns in their misconceptions, or helping them develop the metacognitive awareness to evaluate their own understanding.
More troublingly, students and instructors both expressed concerns about how AI support might reduce student agency and ownership of learning—what the researchers called the risk of ‘over-standardisation’ of the learning process. The AI could suggest paths forward, but it couldn’t make the nuanced judgement about when suggestion becomes prescription, when scaffold becomes crutch.
These limitations aren’t simply technical problems to be solved with more sophisticated algorithms. They’re categorical differences between tool use and pedagogical expertise.
The Irreducible Expertise of Teachers
Here’s what makes me uncomfortable about much AI-in-education discourse: it often proceeds from an impoverished model of what teachers actually do. Teaching gets implicitly reduced to content delivery, question answering, and assessment: functions that can, theoretically, be automated or augmented.
But this bears little resemblance to the actual phenomenology of teaching.
We know we have met a teacher when we come away amazed not at what the teacher was thinking, but at what we are thinking. We will forget what the teacher is saying because we are listening to a source deeper than the teachings themselves. A great teacher exposes the source and then steps back. (Carse, 1994: 70)
The teacher who notices that a usually engaged student has been quiet for three sessions and makes a point of checking in privately. The teacher who recognises that a class’s resistance to a particular text isn’t about the difficulty of the material but about its failure to connect with students’ lived experiences, and who pivots accordingly. The teacher who understands that a student’s ‘wrong’ answer actually reveals sophisticated thinking about a problem and uses that moment to deepen everyone’s understanding. The teacher who creates space for students to sit with difficulty, with not-knowing, with productive confusion, and who knows when that space has become unproductive frustration.
None of this is possible without embodied presence, without sustained attention to the particular humans in a particular room (physical or virtual) at a particular moment. None of it is reducible to pattern matching or data analysis, however sophisticated.
The evidence suggests that teachers themselves recognise this. A systematic review of research on training teachers to use AI found that the biggest challenge facing trainers wasn’t technical skill development but motivation (Aljemely, 2024). Teachers lack motivation not because they’re resistant to change or technophobic, but because they understand something that technologists often miss: AI systems don’t actually address the pedagogical challenges that make their work difficult. They don’t help with the complex relational work, the moment-to-moment responsiveness, the ethical judgements about how to balance care with challenge.
The Problem with Presence
Let me stay with this question of presence, because I think it illuminates something important about why the human in the room matters. I have written before about presence in relation to virtual and immersive spaces.
In online learning research, ‘presence’ is typically understood as ‘a factor that makes students and instructors perceive each other’s existence during the learning process’ (Seo et al., 2021). But this definition, whilst useful, doesn’t quite capture what I’m driving at.
Presence isn’t merely about perception of existence. It’s about the quality of attention we bring to one another. It’s about the capacity to be affected by and responsive to what’s happening in the moment. It’s about vulnerability, the teacher’s willingness to not-know, to improvise, to risk failure in pursuit of genuine connection and understanding. All of this takes time, practice, and experience.
AI systems can simulate certain aspects of presence. They can provide feedback that feels personalised. They can appear to respond to student inputs. But this is fundamentally different from the presence of another human who is genuinely attending to you, who can be surprised by you, who brings their own uncertainties and enthusiasms into the encounter.
Research on teacher-student relationships consistently shows that this qualitative dimension of presence matters enormously for learning outcomes (Hagenauer, Muehlbacher, & Ivanova, 2022; Rodgers & Raider-Roth, 2006). But it’s notoriously difficult to operationalise or measure using the metrics we typically apply to educational technology. How do you quantify the impact of a teacher’s genuine enthusiasm for a subject? How do you measure the pedagogical value of a teacher’s strategic use of their own uncertainty to model intellectual courage?
Pedagogy as Ethical Practice
Perhaps what troubles me most about the equation of AI with pedagogy is the way it obscures the essentially ethical nature of teaching.
Pedagogy, properly understood, is always a practice of judgement. It requires constant decision-making about questions that don’t have algorithmic answers: How much challenge is too much for this particular student at this particular moment? When does care become paternalism? When does academic rigour become cruelty? How do I balance the needs of individual students with the dynamics of the whole group?
These aren’t simply matters of optimisation. They’re questions about how we want to be with one another, about what kinds of human flourishing we’re trying to make possible, about how we navigate inevitable conflicts between competing goods.
Consider the teacher who must decide whether to let a student’s offensive comment pass (maintaining class flow) or address it directly (risking derailment but honouring the pedagogical responsibility to cultivate inclusive learning spaces). Or the teacher who must balance a student’s need for accommodation with the maintenance of meaningful academic standards. Or the teacher who must judge when it’s appropriate to share their own struggles with material and when such sharing would inappropriately burden students.
AI systems can’t make these judgements, not because they lack sufficient processing power, but because these are fundamentally human questions about values, relationships, and the kind of world we’re collectively creating. They require not just intelligence but wisdom, not just data but judgement informed by long experience of what it means to be human and fallible and striving.
What This Means in Practice
I’m aware that what I’m arguing here could be dismissed as romantic humanism—an unexamined privileging of the ‘human’ that ignores both the genuine limitations of human teachers and the genuine affordances of AI systems. So let me be concrete about what I’m not saying.
I’m not saying that AI has no role in educational settings. As I mentioned earlier, there are genuine affordances: tools for practice, systems for making certain kinds of student data visible, platforms for extending learning beyond scheduled class time.
I’m not saying that all human teachers are inherently wonderful and that human teaching is always superior to technological alternatives. Teachers can be terrible at their work. They can be boring, incompetent, cruel, burned out, unprepared. The existence of pedagogical expertise doesn’t mean all humans possess it.
I’m not saying we shouldn’t study what AI can do in educational contexts or experiment with different implementations.
What I am saying is that we need to maintain clarity about category differences. AI systems are tools, potentially powerful ones, but they’re not pedagogical agents. The pedagogical work still requires human teachers: their judgement, their presence, their capacity for genuine relationship, their ethical responsiveness.
This matters practically because it should shape how we design and deploy educational AI. If we’re clear that AI is a tool for teachers rather than a replacement for teachers, then we design differently. We focus on systems that augment teachers’ capacity to do the work only they can do, rather than systems that try to automate teaching itself.
We might, for instance, develop AI tools that help teachers quickly surface common patterns in student work, freeing up time for the kind of individualised attention that requires human judgement. We might create systems that provide students with low-stakes practice opportunities, so that classroom time can be devoted to more complex collaborative work. We might use AI to handle administrative tasks that take teachers away from teaching.
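To make the first of these suggestions concrete, here is a minimal sketch, in Python with scikit-learn, of what ‘surfacing common patterns in student work’ could look like. It is an illustration under invented assumptions, not a description of any existing system: the sample answers are made up, the choice of three clusters is arbitrary, and a real tool would need far more care. The sketch groups short free-text answers so a teacher can skim one representative answer per cluster rather than reading the whole pile.

```python
# A toy "pattern surfacing" aid: cluster short student answers so a teacher
# can review one representative per cluster instead of every response.
# Requires scikit-learn; the answers below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

answers = [
    "The object falls faster because it is heavier.",
    "Heavier things fall faster than light things.",
    "Both fall at the same rate because gravity accelerates them equally.",
    "They hit the ground together since acceleration is independent of mass.",
    "Air resistance slows the feather, so the hammer lands first on Earth.",
    "On the Moon there is no air, so both land at the same time.",
]

# Represent each answer as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(answers)

# Group the answers; k is a judgement call, fixed at 3 for this toy data.
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# For each cluster, report its size and the answer closest to its centre.
closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
for c in range(k):
    size = int((km.labels_ == c).sum())
    print(f"Cluster {c} ({size} answers): {answers[closest[c]]}")
```

The modesty of the sketch is the point: it compresses a teacher’s reading load and nothing more. Deciding what a cluster of similar answers means, and what to do about it pedagogically, is exactly the judgement the rest of this piece argues cannot be delegated.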
But we don’t pretend that these tools are themselves pedagogical. We don’t mistake efficiency gains for educational transformation. And we don’t allow the seductive promise of technological solutions to distract us from the harder work of supporting the humans who do the teaching.
The Indispensability of the Teacher
Let me return to where I started: the human in the room matters. Not as a contingent fact that might change with sufficiently advanced technology, but as a fundamental feature of what education is.
Education is the process by which we initiate new members into shared worlds of meaning. It’s how we pass on not just information but ways of thinking, ways of being, ways of attending to what matters. It’s how we learn to participate in communities of practice that stretch backwards to our predecessors and forwards to those who will come after us.
This is work that can only be done in relationship, by people who care about both the subject matter and the students, who bring their whole selves—including their limitations and uncertainties—into the encounter. It requires presence, vulnerability, ethical judgement, and the kind of attentiveness that can only come from one human to another.
AI can support this work. It can’t replace it.
And if we lose sight of this distinction, if we allow ‘AI-powered learning’ to become synonymous with ‘education’, we risk building systems that are ever more efficient at doing something that isn’t actually teaching at all.
The human in the room matters because teaching is fundamentally a human practice. Not in some nostalgic or sentimental sense, but in the most basic sense: it’s part of how we become and remain human together. It’s how we create the conditions for people to develop not just knowledge but judgement, not just skills but wisdom, not just information but understanding.
That’s not work that algorithms can do, no matter how sophisticated. That’s the work of pedagogy. And it requires the teacher.
What’s your experience with AI in educational settings? How do you see the relationship between technological tools and pedagogical expertise? I’m particularly interested in hearing from teachers about where you see AI genuinely supporting your work and where you experience it as missing the point entirely.

