Why Telling Teachers to Lead the AI Transition Is Educational Malpractice
Category: The Educator's Dilemma | Reading time: 6 min
At a recent education and technology conference, a panel moderator asked a room full of educators what they needed to feel confident integrating AI into their classrooms. The responses came fast: more training, clearer guidelines, better tools, more time. What struck me was not the content of the answers. It was the exhaustion underneath them.
These were not resistant teachers. They were not technophobes clinging to chalkboards. They were professionals who had already absorbed remote learning, learning management system overhauls, post-pandemic re-enrollment crises, and chronic underfunding — and were now being handed one more transformation and told to lead it.
I study AI adoption in higher education. I also teach in it. And I want to name something directly: when institutions point to teachers as the primary change agents for AI integration, they are not empowering educators. They are offloading an institutional responsibility onto the most structurally vulnerable people in the system.
What We Are Actually Asking Teachers to Do
To understand the weight of what is being asked, it helps to break AI integration into what it actually requires at the institutional level. I think about this in terms of three distinct pillars — and the failure to keep them separate is where most AI-in-education strategies collapse.
Pillar One is safe adoption — ensuring that AI tools being introduced into learning environments are vetted for data privacy, age-appropriateness, equity of access, and alignment with learning goals. This is a policy and procurement function. It requires legal review, IT governance, and institutional risk assessment. It is not a classroom teacher's job.
Pillar Two is curriculum redesign — rethinking what we ask students to do, how we assess them, and what learning outcomes we are actually trying to build, given that AI now exists as a permanent feature of the cognitive landscape. This is a curriculum development function. It requires instructional designers, faculty governance, and serious pedagogical debate. It can involve teachers, but it cannot be carried entirely by them on top of full teaching loads.
Pillar Three is systemic overhaul — the deeper question of what education is for in an AI-saturated world, and whether the structures, incentives, and metrics of our institutions are designed to answer that question honestly. This is a leadership function. It requires institutional courage, not just institutional communication.
What most institutions are doing right now is collapsing all three pillars and handing the rubble to classroom teachers with a professional development workshop and a ChatGPT account.
The Burnout Data Is Not Background Noise
This matters all the more because educator burnout is not a peripheral concern — it is the context in which AI integration is being demanded. A 2024 RAND Corporation survey found that nearly half of teachers reported frequent job-related stress, with workload consistently cited as the primary driver. In higher education, adjunct and contingent faculty — who now make up the majority of the teaching workforce — frequently carry heavy course loads with minimal institutional support and none of the job security that would make volunteering as innovation pilots worthwhile.
Asking these educators to simultaneously teach, grade, learn new AI tools, redesign their curricula, and serve as the human face of an institutional transformation is not a growth opportunity. It is a system externalizing its costs onto its most precarious workers.
This is not new. Education has a long history of institutional inertia dressed as teacher empowerment. When standardized testing regimes expanded, teachers were told to innovate within the new constraints. When learning management systems arrived, teachers were told to redesign their courses. When the pandemic hit, teachers were told to pivot to online delivery in a weekend. The pattern is consistent: the institution protects its structure, and the teacher absorbs the disruption.
What Genuine Institutional Leadership Looks Like
I am not arguing that teachers should be excluded from AI integration conversations. Quite the opposite — their classroom-level knowledge is irreplaceable data for any institution serious about designing AI adoption that actually works. But there is a difference between including teachers as informed contributors and conscripting them as unpaid implementation leads.
Genuine institutional leadership on AI looks like a few specific things. It looks like dedicated release time for faculty who are asked to redesign courses — not a summer stipend that amounts to poverty wages for weeks of professional labor. It looks like an AI governance committee with real authority, not an advisory group whose recommendations get thanked and shelved. It looks like curriculum offices that take on the structural redesign work rather than sending teachers to a three-hour workshop and calling it transformation. And it looks like institutional leaders willing to have the harder conversation: that some of what we currently call education is assessment theater, and AI has simply made that visible.
The EDUCAUSE 2024 Horizon Report identifies AI as the defining challenge for higher education in this decade. The report is careful and well-researched. What it cannot compel is the institutional will to respond at the level the challenge actually demands.
The Accountability Gap
Here is the structural problem underneath the pattern. When a teacher's AI integration effort fails — when students game the tool, when learning outcomes drop, when the new pedagogy produces confusion rather than clarity — the accountability lands on the teacher. The institutional decision to deploy without design infrastructure, without governance, without genuine curriculum support, disappears into the background.
This accountability asymmetry is what makes the current arrangement not just ineffective, but unjust. Teachers are being asked to carry reputational and professional risk for decisions that were made well above their pay grade, in procurement offices and administrative retreats, with little input from the people who will live with the consequences.
Changing this requires institutions to do something genuinely difficult: accept that AI integration is a governance challenge first, a curriculum challenge second, and a classroom challenge third. The order matters. When you reverse it — as most institutions currently do — you get burned-out teachers, inconsistent student experiences, and AI pilots that produce impressive slide decks and underwhelming learning outcomes.
The educators I have spoken with do not need another professional development workshop. They need institutional partners who are willing to do their share of the work — and leaders with enough intellectual honesty to admit that the current arrangement is not empowerment.
It is abdication with better branding.
What would it look like if your institution's AI integration strategy were held accountable to educator workload data, not just student performance metrics? And who in your organization is positioned to ask that question out loud?
Priscillar McMillan is a doctoral research assistant in Information Technology in Education at the University of Nevada, Reno, and founder of Kowa Agency. She writes weekly on AI, learning systems, and the institutional decisions that will shape education for the next generation.