Faculty Are Not Resisting AI. They Are Responding Rationally to an Irrational Ask.
Category: The Educator's Dilemma | Reading time: 6 min
The Framing That Poisons the Conversation
There is a story that circulates in higher education administration about AI adoption, and it goes roughly like this: the technology is ready, the students want it, the evidence supports it, and the obstacle is faculty resistance — professors who are too attached to traditional methods, too skeptical of innovation, too protective of their disciplinary authority to embrace what is coming. The solution, in this story, is better communication, more professional development, and patient culture change.
I teach at a university. I study AI adoption in higher education. And I want to offer a different framing, one grounded in what the research on faculty AI adoption actually shows rather than in what makes for a clean institutional narrative.
Faculty are not, as a population, irrational resisters of useful technology. What the evidence shows is something more specific and more uncomfortable: faculty are making rational cost-benefit assessments under conditions of genuine uncertainty, inadequate support, and misaligned institutional incentives — and arriving at conclusions that look like resistance from a distance but look quite different up close.
What Faculty Are Actually Weighing
When a faculty member declines to integrate an AI feedback tool into their course, they are typically weighing a specific set of real considerations. Will this tool produce reliable, accurate feedback on the kinds of work my students produce? Will it disadvantage students who lack the devices, connectivity, or digital fluency to use it effectively? Will it create accessibility problems for students with disabilities? Will the time I spend learning the system, configuring it for my course, and troubleshooting student issues be offset by any actual improvement in learning? And underneath all of these: if this pilot fails or produces unintended consequences, who bears the professional cost?
These are not irrational questions. They are exactly the questions a responsible educator should ask before deploying an untested technology into a learning environment where real students bear the consequences of getting it wrong. The research on faculty AI adoption consistently finds that skepticism is highest not among the least competent or least innovative faculty but among faculty with the deepest pedagogical knowledge — the ones who have thought most carefully about how learning works and are therefore most capable of identifying the ways a new tool might disrupt it.
A 2024 EDUCAUSE survey found that faculty concerns about AI center heavily on academic integrity, equity of student access, reliability of AI outputs in discipline-specific contexts, and uncertainty about institutional support if something goes wrong. These are not technophobic concerns. They are legitimate professional concerns that institutions have frequently failed to address before asking faculty to absorb the risk of innovation.
The Professional Development Mismatch
The dominant institutional response to faculty AI hesitancy is professional development — workshops, webinars, lunch-and-learns, peer learning communities. Some of this is genuinely useful. Most of it misses the actual problem.
The research on effective professional development for technology integration in higher education identifies several conditions that predict whether faculty will actually change their practice: sustained engagement over time rather than one-time events; content that is pedagogically grounded rather than technically focused; departmental or disciplinary communities rather than generic institution-wide programming; and concrete support for the implementation work of adapting a specific course. Professional development that covers how to log into a platform and generate a report does almost nothing to address whether integrating that platform will improve learning in a particular disciplinary context. That question requires disciplinary knowledge the facilitator frequently does not have.
What faculty consistently report wanting and rarely receiving is something more substantial: time to experiment without the pressure of student performance consequences, access to disciplinary colleagues who have genuinely integrated AI into similar courses and can speak honestly about what worked and what did not, and institutional signals that imperfect pilots during a genuine learning period are acceptable outcomes. Instead, what they frequently receive is an implicit expectation of seamless adoption on an accelerated timeline, with professional development as a substitute for the structural support that would actually change behavior.
The Incentive Structure Nobody Wants to Name
There is a structural dimension to faculty AI adoption patterns that the professional development conversation consistently avoids. In most research universities and a significant proportion of teaching institutions, faculty reward structures center on research productivity, grant acquisition, and publication record. Teaching innovation — including the substantial time investment of thoughtfully integrating AI tools into courses — is rewarded weakly, if at all, in tenure and promotion decisions. Adjunct and contingent faculty, who teach the majority of undergraduate courses at many institutions, lack the job security that would make voluntary investment in institutional AI initiatives a rational use of their limited professional time.
Asking faculty to lead AI transformation under these incentive conditions is asking them to invest substantially in an activity that their institutions do not adequately value, at a risk level their institutions have not adequately mitigated, for a transformation whose benefits will accrue primarily to the institution's efficiency metrics and the vendor's contract renewal. The surprise should not be that some faculty decline. The surprise is that as many faculty engage as willingly as they do.
Genuine institutional support for faculty AI adoption would look like course releases or reduced loads for faculty undertaking substantial curriculum redesign. It would look like AI integration contributions weighted explicitly in annual reviews and promotion files. It would look like multi-year pilot commitments that give faculty enough time to learn a system properly rather than a semester to demo it. And it would look like institutional acknowledgment that the first iteration of any genuine innovation will be imperfect, and that imperfection during learning is not a professional failure.
What Sustainable Adoption Actually Looks Like
The institutions showing the most durable and meaningful faculty AI adoption in the research literature are not the ones that ran the most professional development workshops or sent the most communications about AI strategy. They are the ones that treated faculty as design partners rather than adoption targets — institutions where faculty had genuine voice in which tools were selected, how they were configured, and what success looked like.
Participatory models of AI integration, where faculty with disciplinary expertise co-design implementation with educational technologists and learning scientists, produce tools that are better calibrated to actual learning goals, and faculty who are genuinely invested in making those tools work because they had a hand in shaping them. This is slower than top-down adoption mandates. It is also more durable, more equitable, and more honest about the complexity of the work.
Faculty are not the obstacle to AI transformation in higher education. They are the professional community whose knowledge is most essential to getting it right. Institutions that understand this will build AI education systems that actually improve learning. Institutions that frame faculty judgment as resistance to be overcome will build AI education systems that improve efficiency metrics and leave the deeper questions unanswered.
What would it look like if your institution's next AI initiative began with a genuine question to faculty — not 'how do we get you on board?' but 'what would you need in order for this to actually work for your students?'
Priscillar McMillan is a doctoral research assistant in Information Technology in Education at the University of Nevada, Reno, and founder of Kowa Agency. She writes weekly on AI, learning systems, and the institutional decisions that will shape education for the next generation.