The AI in Education Conversation Is Missing Most of the World
Category: Global South & Equity | Reading time: 7 min
What Seven Years in a Zambian Classroom Taught Me About EdTech
I spent seven years as a Guidance Teacher in Zambia's Ministry of General Education, teaching ICT and career guidance in a system that was doing remarkable things with very constrained resources. I watched brilliant students learn on equipment that would be considered obsolete in any well-resourced setting. I watched teachers design genuinely creative pedagogy around infrastructure limitations that most EdTech researchers have never encountered and do not think to account for.
When I read the AI in education literature now — as a doctoral researcher at the University of Nevada, Reno, and as a former UNICEF consultant — I am struck by how consistently that world is absent from the conversation. The studies are conducted at well-resourced universities in high-income countries. The platforms are designed for reliable broadband, modern devices, and students whose first language is English. The findings are presented as if they describe education universally, when they describe a particular slice of it — the slice that already has the most.
This is not a peripheral equity concern. It is a fundamental validity problem. If the AI-in-education research base is built almost entirely on evidence from high-income contexts, then the frameworks, policies, and systems being designed from that evidence are not designed for most of the world's learners. They are designed for the world's most advantaged learners and exported everywhere else.
What the Research Base Actually Looks Like
A systematic review of AI feedback and equity in higher education reveals that research examining Global South implementation contexts is starkly underrepresented relative to the number of students those contexts actually serve. When Global South contexts appear in the literature at all, they tend to appear as sites of need — places where AI might solve access problems — rather than as sites of knowledge, where different pedagogical traditions, infrastructure realities, and community values might actually improve AI system design for everyone.
The research that does exist from Global South contexts surfaces patterns that high-income country research consistently misses. Connectivity is intermittent rather than continuous, meaning AI systems that require persistent internet access fail not occasionally but routinely. Device diversity is substantially wider, with students accessing learning platforms on everything from low-end Android phones to shared community computers, meaning platform designs optimized for laptop browsers exclude large populations by default. English-language training data underlies most AI educational tools, producing systems that perform significantly less reliably for students working in Bemba, Swahili, Shona, or any of the hundreds of languages in which actual learning happens across the African continent.
These are not edge cases. They are the modal conditions for a significant proportion of the world's higher education students. Treating them as afterthoughts in research design is not just an equity failure. It is a scientific failure, because it means the evidence base does not describe the phenomenon it claims to describe.
The Infrastructure Assumption Nobody States
Every AI feedback system I have encountered in the literature operates on a set of infrastructure assumptions that are so foundational they are never made explicit. Reliable broadband. Consistent electricity supply. Devices capable of running contemporary web applications. Institutional IT infrastructure capable of integrating with third-party platforms. Data storage and processing capacity sufficient for real-time adaptive responses.
In Zambia, in much of Sub-Saharan Africa, in significant portions of South and Southeast Asia, and in rural and remote communities on every continent, including in the United States, these assumptions fail routinely. When they fail, the AI system does not degrade gracefully. It simply does not work. The student who needed the personalized feedback receives nothing, and the gap between them and the student whose infrastructure held opens a little wider.
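The engineering counterpart to "does not degrade gracefully" is well understood. As a minimal sketch — with `fetch_remote_feedback` standing in for any hypothetical cloud AI feedback call, and a toy rule-based rubric as the offline fallback — an offline-first design might look like this:

```python
import json
from pathlib import Path

# Hypothetical names throughout: `fetch_remote_feedback` is a stand-in
# for a networked AI feedback service; RULES is a toy offline rubric.

CACHE = Path("feedback_cache.json")

RULES = {
    "short": "Your answer is very brief; try explaining your reasoning step by step.",
    "default": "Feedback is queued and will sync when a connection is available.",
}


def fetch_remote_feedback(answer: str) -> str:
    """Placeholder for the network call. The raised error simulates the
    intermittent connectivity described above."""
    raise ConnectionError("no network")


def local_feedback(answer: str) -> str:
    """Rule-based fallback that works entirely offline."""
    return RULES["short"] if len(answer.split()) < 20 else RULES["default"]


def get_feedback(answer: str) -> str:
    """Try the remote service; degrade to local feedback instead of failing."""
    try:
        fb = fetch_remote_feedback(answer)
    except (ConnectionError, TimeoutError):
        fb = local_feedback(answer)  # graceful degradation, not a blank screen
    # Persist locally so the student keeps the feedback across outages.
    CACHE.write_text(json.dumps({"answer": answer, "feedback": fb}))
    return fb
```

The point of the pattern is not that rule-based feedback matches an adaptive model; it is that the student with intermittent connectivity receives something useful and nothing is lost when the network returns.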
I have worked as a UNICEF consultant on technology and education in contexts where these infrastructure realities are not hypothetical planning considerations but daily operational facts. The difference between how EdTech is designed in those contexts and how it is designed in Silicon Valley or Cambridge is not just a matter of resource allocation. It is a matter of who is in the room when design decisions are made, whose experience is treated as the default, and whose experience is treated as a special case to be addressed later if resources allow.
What Culturally Responsive AI Design Would Actually Require
Designing AI feedback systems that work equitably across Global South contexts is not primarily a technical challenge. The technical solutions — offline-capable architectures, low-bandwidth optimization, multilingual natural language processing — exist or are in active development. The harder challenge is the epistemological one: who gets to define what good learning looks like, whose pedagogical traditions inform how feedback is structured, and whose evidence counts as evidence.
Most AI educational feedback systems embed, invisibly, a set of assumptions about what learning looks like that reflect particular cultural and institutional contexts. The assumption that learning is primarily individual rather than collective. The assumption that the teacher's role is to transmit information rather than to model membership in a knowledge community. The assumption that correct answers are the primary signal of understanding rather than the quality of reasoning or the richness of explanation. These are not universal features of good pedagogy. They are features of a particular tradition — and building them into AI systems at the foundational level means exporting them globally as if they were.
Research on AI in Global South EFL education offers an instructive example. Studies examining AI tools in English as a Foreign Language contexts across African and Asian settings consistently find that tools designed around native-English pedagogical norms underserve learners whose relationship to English is complex, multiple, and embedded in multilingual realities that monolingual AI systems cannot model. The finding is not that AI cannot support EFL learning. It is that AI designed without that complexity in mind does not.
The Contribution Global South Contexts Make to the Field
I want to be direct about something that the equity framing sometimes obscures. The argument for centering Global South contexts in AI education research is not only that it is fair to those students — though it is. It is that those contexts surface design requirements, failure modes, and pedagogical insights that make AI systems better for everyone.
Teachers who have designed effective pedagogy under significant infrastructure constraints understand something about the relationship between technology and learning that teachers who have always had reliable tools do not. Students who navigate multilingual learning environments develop cognitive capacities that monolingual assessments do not capture. Communities with strong oral knowledge traditions have epistemological frameworks that challenge and enrich the text-centered assumptions baked into most AI educational tools.
When Global South researchers, educators, and students are co-designers rather than afterthought beneficiaries of AI education systems, the systems that result are more robust, more flexible, and more honest about what they are actually doing and for whom. That is not charity. That is better science.
What would it require for your institution's AI education initiatives to treat Global South implementation evidence not as a special case but as a primary source — and to build that standard into vendor selection, research partnerships, and pilot design from the beginning?
Priscillar McMillan is a doctoral research assistant in Information Technology in Education at the University of Nevada, Reno, and founder of Kowa Agency. She writes weekly on AI, learning systems, and the institutional decisions that will shape education for the next generation.