AI Literacy — the Week’s Arc
The Shrinking of a Word
A decade ago, “literacy” still carried the residue of its older meaning: the capacity to read a text with enough discernment to argue with it. Applied to new technologies, the word stretched but did not quite break — media literacy, information literacy, and digital literacy all retained some notion of the skeptical citizen, someone equipped not merely to operate a system but to interrogate it. Something has happened in the migration of the word to “AI literacy.” The interrogative muscle has atrophied. What now passes for literacy, in most of the frameworks being rolled out through schools, ministries, and corporate partnerships, is closer to vocational familiarity with a commercial product: knowing how to prompt, knowing which tools to pay for, knowing the etiquette of a particular chatbot. When the ISTE+ASCD and Google Partner to Provide AI Literacy Training to Six … announcement describes a plan to train six million American educators with curriculum underwritten by a dominant vendor, we are looking at something quite specific: the transformation of a civic concept into a procurement pipeline.
This essay is an attempt to map the terrain on which “AI literacy” is being contested, mostly silently and mostly without the public noticing what is being decided on its behalf. The contest is not between those who favor AI literacy and those who oppose it; almost everyone favors it. The contest concerns what the word means and, by extension, what kind of citizen is being imagined at its other end. The stakes here are not pedagogical in the narrow sense. They touch every other domain in which AI will matter — elections, criminal justice, labor, health, child welfare — because a population’s capacity to govern a technology depends on what it is taught to see when it looks at one.
My argument is that the dominant frameworks are converging on a definition of AI literacy that is workforce-oriented, tool-centric, and strategically shallow. They teach operation without critique, fluency without friction, and adoption without consent. A minority tradition — older, more demanding, rooted in media and information literacy — resists this convergence. Whether that minority tradition survives will determine whether “citizen participation” in the age of AI means anything at all, or whether it devolves into the ceremony of clicking through consent boxes for systems whose logic the public has been systematically untrained to question.
Two Definitions Quietly at War
Read ten AI literacy documents in a row and you will notice something: nearly all of them claim to bridge “skills” and “critical understanding,” and almost none of them do. The rhetorical gesture toward critique is preserved; the curricular weight falls elsewhere. The comprehensive review in Landscape of AI literacy in education: approaches, impacts, and … maps this drift across hundreds of interventions and finds a consistent pattern — instructional programs overwhelmingly emphasize computational fluency, tool use, and task performance, while ethical and critical dimensions are acknowledged in prefaces and abandoned in syllabi. The AI literacy development canvas: Assessing and building AI literacy … makes a similar observation about corporate training: organizations adopting AI literacy programs define success in terms of productivity uplift, not the employee’s capacity to contest an AI-driven decision that affects them.
The fullest statement of the more demanding tradition is the framework released as Empowering Learners for the Age of AI (PDF), which explicitly names four domains — engaging, creating, managing, and designing with AI — and insists that evaluation, ethical reasoning, and sociotechnical awareness are not add-ons but constitutive. Even here, however, one notices the weight of pragmatic concession. The framework lives alongside, and must compete with, artifacts like the US Department of Labor releases AI literacy framework providing … guidance, which, whatever its merits, frames literacy almost entirely through the lens of occupational readiness. Both documents use the same phrase. They do not mean the same thing.
The tension is not merely semantic. A literacy defined as “being able to use ChatGPT productively” treats the technology as given and the human as the thing to be adjusted. A literacy defined as “being able to assess whether ChatGPT should be used, by whom, for what, and at whose cost” treats the technology as one possible configuration among many and the human as a political agent. The first produces users; the second, at least in aspiration, produces citizens. Kate Crawford, writing in The Atlas of AI (2021), insists on this distinction when she defines AI not as a set of techniques but as “the massive industrial formation that includes politics, labor, culture, and capital.” You cannot be literate in an industrial formation by learning to operate one of its consumer interfaces. Yet this is largely what is being taught.
The Prompt Engineering Capture
Nowhere is the narrowing clearer than in the rise of prompt engineering as a proxy for literacy itself. A systematic review in Prompt engineering in higher education: a systematic review … - Springer finds that, across the universities surveyed, “AI literacy” increasingly means teaching students to phrase queries in ways that extract better outputs from commercial large language models. The skill is real; the framing is consequential. The educator-facing resource Prompting Techniques - Fostering AI Literacy: A Guide for Educators in …, published through the University of Virginia, is a representative case — thoughtful, well-constructed, and devoted almost entirely to eliciting better responses from systems whose training data, labor pipelines, and epistemic limits receive, at best, passing mention. The offering at Generative AI: Prompt Engineering Basics - Coursera advertises itself as literacy “for everyone,” which in practice means training a global workforce to be productive inside a specific vendor’s paradigm.
There is a reason prompt engineering has been so readily adopted as the synecdoche for literacy: it is teachable in a single session, measurable through rubrics, and aligned with the business interests of the firms supplying the models. A curriculum that teaches prompting can be delivered without controversy, funded through vendor partnerships, and assessed without any political question arising. A curriculum that teaches students to ask, for instance, why the model gives confident medical advice to a white user and evasive advice to a Black one — the kind of question that mirrors Mark Coeckelbergh’s demand, in AI Ethics (2020), that we listen to “people potentially disadvantaged by AI” — cannot be delivered without controversy, cannot be funded through vendor partnerships, and cannot be assessed without provoking the political question every institution has decided to avoid. The ascendancy of prompt engineering over critical AI literacy is not an intellectual mistake. It is an institutional choice.
The catalogs that accompany these curricula reinforce the framing. An article like The 25+ best free AI tools | Zapier — harmless on its face — functions, in aggregate with a thousand similar pieces, as the curricular air that students and teachers breathe. Literacy becomes a map of the tool landscape. The tool landscape is legible to those who made it. The consumer learns the ecosystem; the ecosystem does not learn the consumer.
What the Deepfake Problem Exposes
The most searching test of any definition of AI literacy is whether it equips ordinary people to survive the technology’s most aggressive current applications. Here the skills-based frameworks fail visibly. Consider the electoral threat documented in How we were deepfaked by election deepfakes - Financial Times, which traces how synthetic audio and video of candidates circulated in multiple national elections through 2024, often reaching voters faster than any corrective could be issued. The policy response, exemplified by Electoral Commission launches deepfake detection pilot to counter AI …, treats detection as a technical problem — as if better classifiers alone could repair a public sphere whose epistemic baseline has shifted. The underlying report Synthetic Crisis: Misinformation as the Trigger is more honest: the deepfake is the trigger, but the vulnerability is the citizen, whose habits of verification were formed in an earlier media environment and have not been updated.
A literacy adequate to this moment would teach something closer to what older media literacy traditions once did — the lateral reading techniques, source triangulation, and institutional skepticism that require time to develop and, crucially, require that citizens understand the incentives shaping the systems they encounter. The skills-based AI curricula do not teach this. They teach prompting. Meanwhile, the voice cloning scams documented in AI Voice Cloning Scams - How Criminals Use Deepfake Audio in 2026 … and cataloged for a specific demographic in What Are AI Scams? A Guide for Older Adults reveal just how uneven the distribution of exposure is. Older adults are not being targeted because they are less intelligent; they are being targeted because the AI literacy infrastructure, such as it exists, was built for students and workers, not for retirees, caregivers, and the people outside formal educational institutions.
Worse still, the deepfake problem has a particularly vicious instantiation in the domain of child safety. IA y deepfakes: nuevos riesgos de violencia sexual contra la … - UNICEF documents the proliferation of non-consensual synthetic imagery of minors, a pattern further examined in AI-generated child-sexual-abuse images are flooding the web. A literacy that trains children to prompt effectively, but does not prepare them, their teachers, or their parents to recognize and respond to synthetic abuse, is not literacy in any recognizable sense. It is the cultivation of users without the cultivation of protection. The consolidated Guidance on AI and children - UNICEF makes the point with restraint: the frameworks children are being handed were designed for adults in workplaces, then lightly adapted, often by firms whose commercial interests are precisely the interests the literacy should be examining.
The Unequal Geography of Literacy
Coeckelbergh’s question — who will have access, who will be excluded — is not rhetorical. It applies to AI literacy itself, because the literacy being distributed is uneven in both reach and substance. The UNESCO effort described in La UNESCO lanza el Observatorio de Inteligencia Artificial en represents one of the few sustained attempts to build a regional infrastructure for Latin American AI education that is not simply imported wholesale from Silicon Valley or Brussels. The observatory model — tracking, comparing, and critiquing national policies — implicitly accepts that no single framework will serve every population, and that the relevant literacies are plural. This is a minority position. Most of the dominant frameworks implicitly assume a global English-speaking user, with reliable connectivity, working in a knowledge-economy occupation, and possessing the disposable attention to complete a modular course.
Within wealthier countries, the unevenness takes a different form. The survey published as Artificial intelligence policies in K-12 school districts in the United … finds extreme variance across American school districts — some with detailed guidance, many with nothing at all, and the variance correlating with resource levels in ways that will surprise no one. The broader analysis in Policy guidelines and recommendations on AI use in teaching and … confirms that the gap between districts with and without coherent AI policies has widened faster than any policy body has moved to close it. The result is that children in well-resourced districts receive — at best — the vendor-aligned curriculum, while children in under-resourced districts receive improvisation, ad-hoc bans, or nothing.
Bans are themselves a form of policy, and a revealing one. The debate captured in Should schools ban ChatGPT or embrace the technology instead? is not really a debate about pedagogy; it is a debate about how to manage institutional risk. The argument for a pause, articulated at its strongest in Experts call for five-year moratorium on generative AI in K-12 schools, deserves to be taken seriously on its own terms — not as technophobia, but as an attempt to create the temporal space in which a genuine civic conversation about the technology might occur before it becomes infrastructure. That conversation has been largely foreclosed by the speed of deployment, a speed that serves some interests more than others.
A related pathology appears in the academic integrity panic. The argument laid out in Contra generative AI detection in higher education assessments is that detection tools do not work, cannot be made to work, and by their operation degrade the trust relationship between faculty and students. Institutions that adopt detection as their response to AI are, in effect, admitting that they have no pedagogical answer to the question the technology poses. The GenAI and misinformation in education: a systematic scoping … - Springer review extends the point: misinformation from AI systems is not primarily a detection problem but an epistemic one, and institutions that treat it technically will keep producing graduates who have never been asked to think about why a statistical language model is so confidently wrong so much of the time.
Who Writes the Frameworks
The governance of AI literacy is, to a degree not widely appreciated, a governance performed by a small number of actors whose interests are not neutral. The partnership announced in the ISTE-Google document is emblematic: a major edtech standards body aligned with a major commercial AI provider, reaching six million educators, distributing curriculum that — whatever its merits — cannot help but normalize the provider’s products as the default terrain of literacy. The Department of Labor framework reflects the priorities of a state that primarily encounters AI through the lens of workforce policy. The educator-facing materials, from the University of Virginia prompting guide to the Coursera basics course, are produced inside institutions whose relationships with AI vendors are rarely disclosed with the prominence such relationships warrant.
Coeckelbergh’s call, in AI Ethics, for “more room for bottom-up next to top-down” acquires pointed meaning here. The current frameworks are overwhelmingly top-down. The researchers, teachers, and affected communities who might press a different definition of literacy do exist — they populate the pages of journals and the margins of policy reports — but they are not at the table where the six-million-educator deals are signed. One consequence is that whole categories of concern arrive in the curriculum only after a crisis forces them in. The pattern documented across Devastating report finds AI chatbots grooming kids, offering drugs … and Parents Group Issues Urgent Warning as WSJ Report Reveals Meta’s AI … and Meta’s AI chatbots ‘grooming’ children through roleplay, sexually … - MSN reveals a specific failure mode: the firms producing the chatbots and the firms producing the literacy curricula overlap, and the curricula reliably lag the harms their own products have produced.
There is an older tradition in computing education that saw this coming. Peter Denning and Matti Tedre, writing in Computational Thinking (2019), observed that “our battle-hardened notions of computational thinking do not help with many of the pressing issues of the world,” and that access to information “does not confer wisdom.” AI literacy, insofar as it is inheriting the vocabulary of computational thinking without its self-critique, risks inheriting the same limitation. The skills transfer; the wisdom does not.
The Democratic Deficit
If one accepts, with Crawford, that AI is an “industrial formation,” then literacy for democratic participation requires equipping citizens to engage that formation politically, not only operationally. This is the register in which current frameworks are weakest. Consider three domains in which AI-inflected decisions now reach citizens directly: policing, elections, and algorithmic adjudication of public services.
The analysis in The Use of Technology in Policing Should Be Regulated To Protect People From Wrongful Convictions argues that the use of facial recognition, predictive analytics, and related tools inside the criminal justice system is producing wrongful convictions that our regulatory apparatus is not built to detect. A citizen literate in the skills-based sense — able to prompt ChatGPT productively — is in no better position to contest a police department’s adoption of such systems than a citizen with no AI exposure at all. The literacy that matters here is the ability to read an algorithmic impact assessment, to understand the statistical claims underlying a predictive tool, and to recognize when a public agency has substituted a model’s output for a decision that requires human accountability. None of this appears in the dominant curricula.
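The base-rate arithmetic behind that kind of statistical claim is itself teachable in minutes, and it is exactly the literacy the dominant curricula omit. A toy illustration — every number here is assumed for the sake of the arithmetic, not drawn from any real deployment — of why a matching tool that sounds accurate can still flag mostly innocent people when genuine matches are rare:

```python
def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """P(genuine match | the tool flags a match), by Bayes' rule."""
    true_flags = sensitivity * prevalence
    false_flags = false_positive_rate * (1 - prevalence)
    return true_flags / (true_flags + false_flags)

# Assumed scenario: 50 genuine suspects in a database of 1,000,000 faces,
# searched by a tool with 99% sensitivity and a 1% false-positive rate.
ppv = positive_predictive_value(0.99, 0.01, 50 / 1_000_000)
print(f"Chance a flagged person is a genuine match: {ppv:.2%}")
```

Under these assumed numbers the tool's flags are wrong more than 99% of the time, because the 1% false-positive rate is applied to roughly a million innocent faces while the 99% sensitivity is applied to only fifty genuine ones. Being able to run this calculation against an agency's published accuracy claims is closer to civic AI literacy than any prompting rubric.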
The Spanish Supreme Court ruling celebrated in El Supremo consagra la transparencia algorítmica establishes, in one jurisdiction, a right to algorithmic transparency in administrative decisions. The ruling means something only to the extent that citizens can actually exercise it — which requires a literacy capable of interrogating how an algorithmic system classifies, ranks, or denies. That literacy is not being built at scale. What is being built at scale is prompting fluency.
The electoral question is similar. The detection pilots described in the UK Electoral Commission release are technical countermeasures; they are not a civic education. The literacy that democracy actually requires — an ability to hold probabilistic judgments about synthetic media, to triage sources under time pressure, to recognize the emotional signatures of manipulation campaigns — is closer to the old media literacy curriculum than to anything currently offered under the AI literacy banner. The Synthetic Crisis: Misinformation as the Trigger analysis makes the point starkly: the synthetic artifact is the catalyst, not the cause. The cause is an under-literate public operating in an information environment whose incentives have been restructured by systems the public has not been taught to see.
There is a specialized domain in which a more demanding literacy is being built — the security community’s work on adversarial use of AI, visible in projects like Hack the AI agent: Build agentic AI security skills with the GitHub Secure Code Game. This is literacy as adversarial engagement, where the goal is to understand a system well enough to break it or defend it. The approach is promising, but it is being developed for professionals, not citizens, and its pedagogical techniques have not crossed over into general education. They could. Whether they will depends on decisions being made now by the institutions that define “AI literacy” in its public-facing sense.
Toward a Literacy Adequate to the Moment
The conceptual muddle this essay has tried to map is resolvable, though resolving it requires a willingness to say what the dominant frameworks will not say: that AI literacy, properly understood, is a political literacy, and that a curriculum built without a theory of power is not literacy at all. The Landscape of AI literacy in education: approaches, impacts, and … review closes with a diagnosis that points in this direction — the observation that sustained impact correlates with programs that integrate ethical, social, and critical dimensions rather than treating them as optional modules. The canvas in The AI literacy development canvas: Assessing and building AI literacy … gestures at a similar integration but lives within organizational contexts whose incentives push the other way.
What a literacy adequate to citizen participation would look like is not mysterious. It would teach operation — because you cannot critique what you cannot use — but subordinate operation to interpretation. It would teach the political economy of the systems, not only their interface. It would include affect recognition and emotion detection as case studies in scientific overreach, following Crawford’s observation in The Atlas of AI that these systems are deployed at industrial scale despite being “at best incomplete and at worst misleading.” It would equip citizens to engage their local school boards, city councils, and administrative agencies on AI procurement, not as spectators but as participants. It would extend beyond the K–12 and higher education silos to the older adults, caregivers, and informal workers for whom the current curricula were not designed. And it would be plural — built with the recognition, central to the UNESCO observatory’s approach, that no single framework from no single capital can define what literacy means for the world.
The alternative is a population fluent in a commercial product and illiterate in the system that produced it. That is not citizenship; it is consumer training repackaged in civic vocabulary. The word “literacy” still carries, for most people who hear it, the older connotation of critical reading. The institutions shaping AI literacy curricula are relying on that connotation to legitimate a much thinner offering. Whether we notice the substitution in time to contest it is itself a test of the literacy we already have.
References
- AI Voice Cloning Scams - How Criminals Use Deepfake Audio in 2026 …
- AI-generated child-sexual-abuse images are flooding the web.
- Artificial intelligence policies in K-12 school districts in the United …
- Contra generative AI detection in higher education assessments
- Devastating report finds AI chatbots grooming kids, offering drugs …
- El Supremo consagra la transparencia algorítmica
- Electoral Commission launches deepfake detection pilot to counter AI …
- Experts call for five-year moratorium on generative AI in K-12 schools
- GenAI and misinformation in education: a systematic scoping … - Springer
- Generative AI: Prompt Engineering Basics - Coursera
- Guidance on AI and children - UNICEF
- Hack the AI agent: Build agentic AI security skills with the GitHub Secure Code Game
- How we were deepfaked by election deepfakes - Financial Times
- IA y deepfakes: nuevos riesgos de violencia sexual contra la … - UNICEF
- ISTE+ASCD and Google Partner to Provide AI Literacy Training to Six …
- La UNESCO lanza el Observatorio de Inteligencia Artificial en
- Landscape of AI literacy in education: approaches, impacts, and …
- Meta’s AI chatbots ‘grooming’ children through roleplay, sexually … - MSN
- Parents Group Issues Urgent Warning as WSJ Report Reveals Meta’s AI …
- Empowering Learners for the Age of AI (PDF)
- Policy guidelines and recommendations on AI use in teaching and …
- Prompt engineering in higher education: a systematic review … - Springer
- Prompting Techniques - Fostering AI Literacy: A Guide for Educators in …
- Should schools ban ChatGPT or embrace the technology instead?
- Synthetic Crisis: Misinformation as the Trigger
- The 25+ best free AI tools | Zapier
- The AI literacy development canvas: Assessing and building AI literacy …
- The Use of Technology in Policing Should Be Regulated To Protect People From Wrongful Convictions
- US Department of Labor releases AI literacy framework providing …
- What Are AI Scams? A Guide for Older Adults