Through McLuhan’s Lens: The Invisible Curriculum
What the Cursor Teaches Before You Type
April 21, 2026 | 2720 words
Watch what happens in the half-second before a sentence forms. A user opens ChatGPT, places the cursor in the prompt box, and notices that the box is small — smaller than the response area that will soon unfurl beneath it. The proportions are pedagogical. The interface has already taught, before a single keystroke, that the user’s contribution is meant to be brief and the machine’s contribution is meant to be ample. The blinking cursor waits. There is no draft folder, no margin for marginalia, no visible history of how the question was arrived at. There is only the prompt line, which asks, by its very shape, for something compact, declarative, and immediately answerable.
Now consider the autocomplete suggestion that appears mid-sentence in a writing assistant — the ghost-text that completes a thought the user had not yet finished having. Accepting the suggestion feels efficient. Declining it requires a small act of resistance: typing through the proposal, overriding a phantom that has already occupied the space where the next word should form. Over hundreds of such micro-decisions in a working day, the writer learns something the tool never announced. The writer learns that thinking is a race against a prediction, that hesitation is a cost, that the sentence one would have written unaided is, statistically, the less probable sentence — and probability has been quietly installed as a synonym for quality.
Or consider the “regenerate” button. It promises a second draft with a single click, and in doing so it teaches a posture: dissatisfaction as default, iteration as consumption rather than labor, revision as a slot-machine pull rather than a return to one’s own thinking. The user learns to evaluate outputs quickly, at a glance, because three more are a button away. The faculty of sustained attention to a single piece of prose — the faculty of staying with a draft long enough to discover what is wrong with it — is not forbidden. It is simply made to feel inefficient.
None of this is in any syllabus. None of it appears in the prompt-engineering guides now circulating through schools and universities. This is the invisible curriculum of AI: what the tool teaches the user about cognition, authority, and effort, wholly independent of whatever output it produces. To name it is the first task of AI literacy. To fail to name it is to offer training in prompt craft while the deeper instruction proceeds uninterrupted.
The Medium Is the Message, Re-read at the Prompt Line
Marshall McLuhan’s most famous claim — that the medium is the message — has been repeated so often that it has been flattened into a slogan about how form matters. Returned to its original force, it is stranger and more useful: the content carried by a medium is a distraction from the medium’s actual effects, which operate at the level of scale, pace, and pattern in human affairs. The light bulb, in McLuhan’s favorite example, has no content, yet it restructures the nocturnal life of a city more profoundly than any broadcast.
Applied to generative AI, the claim cuts against almost every current conversation. The overwhelming majority of AI literacy discussion concerns what users put into the prompt and what comes out: the quality of the question, the quality of the answer, the accuracy of the facts, the originality of the prose. This is the content layer. It is the layer where hallucinations are debated, where rubrics for evaluating AI-assisted work are drafted, where guidelines for disclosure are negotiated. A McLuhan-inflected reading suggests that the content layer, however urgent it feels, is precisely where the medium’s real instruction is hidden by being overlooked.
Consider what the prompt interface teaches independent of any output. It teaches that inquiry is conversational and turn-based, which quietly excludes forms of thinking that require silence, incubation, or a walk. It teaches that knowledge arrives on demand, which recalibrates the user’s tolerance for not-knowing; the productive discomfort of sitting with a question long enough for its shape to clarify is reframed as a friction to be eliminated. It teaches that an authoritative voice is fluent, confident, and evenly paced — because that is what the model generates — and this cadence becomes, over time, the reader’s template for what expertise sounds like. It teaches that revision is external: a new prompt, a new generation, rather than a re-entry into one’s own draft.
Most consequentially, the interface teaches that the unit of thought is the exchange. A question goes in, an answer comes out, and the cycle closes. The forms of thinking that do not fit this unit — the essay that takes three weeks to discover its argument, the proof that requires a month of false starts, the interpretation that depends on having read something else two years earlier — are not suppressed. They are simply rendered invisible by the shape of the container. The medium is the message in the precise sense McLuhan intended: what the prompt box does to the scale and pace of cognition matters more than anything ever typed into it.
This is why “prompt engineering” as a pedagogical frame is insufficient, and in some ways counterproductive. It is training in the content layer. It improves the user’s skill at filling the container while leaving the container’s effects on the user unexamined.
What Goes Numb
McLuhan’s second major contribution to this analysis is his account of extension and numbness. Every technology that extends a human faculty, he argued, induces a corresponding numbness in the faculty extended — what he sometimes called auto-amputation. The stirrup extends the leg and numbs the body’s sense of balance on horseback. The phonetic alphabet extends the eye and numbs the ear. The automobile extends the foot and numbs the musculature of walking distances. The point is not that extensions are bad; it is that users rarely notice what they have stopped doing, because the faculty in question is no longer exercised and therefore no longer registers its own absence.
Generative AI extends composition, synthesis, summarization, and a widening band of judgment. The question is not whether this is useful — it is, often, enormously so — but what goes numb. And here the honest answer is more specific than the usual hand-wringing about “critical thinking.”
What goes numb, first, is the experience of productive difficulty at the sentence level. Writing is not primarily a transcription of prior thought; for many writers, it is the means by which thought becomes articulate. The struggle to find the right word is also the struggle to find out what one actually believes. When the right word is proposed by a model, the struggle is shortened, and the discovery that would have occurred during the struggle does not occur. The user does not feel this loss, because the loss is the absence of a thought that was never formed. Numbness, in McLuhan’s sense, is precisely this: the inability to perceive what has been given up, because the faculty that would have perceived it is the faculty that was extended.
What goes numb, second, is the user’s feel for the shape of their own ignorance. Before search engines, and far more before chatbots, a researcher developed a tactile sense of where the edges of their knowledge lay — which sources they had consulted, which they had not, which questions remained open. Chat interfaces collapse this topography. The model responds with equal fluency to questions the user could have answered, questions the user could have looked up, and questions at the frontier of a specialized field. The phenomenology of “I don’t know this yet, but I know how far I am from knowing it” is eroded. What replaces it is a flattened confidence: the sense that anything askable is answerable, and that the cost of asking is uniformly low.
What goes numb, third, is the capacity to tolerate a first draft as a first draft. The regenerate button trains a relationship to one’s own work in which unsatisfactory output is to be replaced rather than understood. Writers have long known that what is wrong with a draft is often the richest information the draft contains — the place where the argument is thin is the place where more thinking is needed. Replacing the draft with a better-sounding one short-circuits this diagnosis. The draft becomes disposable, and with it, the habit of reading one’s own work for what it reveals about one’s own thinking.
None of these numbings is announced. They proceed by the ordinary mechanism McLuhan described: the extended faculty ceases to be felt. This is why AI literacy that restricts itself to teaching effective use will always miss its own subject. One cannot teach effective use of a medium whose primary effect is to make its effects imperceptible.
The Rear-View Mirror and the Word “Tool”
There is a further reason the invisible curriculum remains invisible, and it has to do with the frame through which the current moment is being perceived. McLuhan’s image of the rear-view mirror — we march backward into the future, seeing the new medium through the categories of the old — explains much of what is now said about AI.
The dominant frame is the tool frame. AI is described, in policy documents and classroom guidelines and op-eds, as a tool, which places it in a lineage with the hammer, the calculator, the word processor, the search engine. The frame is intuitive, which is part of its appeal, and it has the further advantage of domesticating a technology that might otherwise feel unfamiliar. Tools are things one uses. Tools are instrumental; their ethics concern how they are wielded. Tools do not, by the logic of the frame, act upon their users.
A McLuhan-inflected reading will notice immediately that the tool frame is itself the rear-view mirror. It is an attempt to understand a medium that restructures cognition, authority, and the pace of inquiry by assimilating it to the category of hand implements. The hammer does not propose the next nail. The calculator does not complete one’s arithmetic before one has finished thinking it. The search engine returns ranked links to documents written by humans; the chatbot returns synthesized prose that occupies the ecological niche once held by those documents. These are not differences of degree. They are differences that the tool frame is structurally incapable of registering, because the frame was built for technologies whose effects stop at the user’s hand.
Evidence of this framing distribution is now empirical. In one corpus of 6,660 articles on AI in education, 704 address the practical question of how to implement AI, while the question of whether to implement it scarcely appears. The tool frame dominates; the partner frame — the suggestion that AI is something one thinks with, or is thought by — is nearly absent; the threat frame is present but marginal. Three observations follow.
First, the ratio itself is a curriculum. Before a single user sits down at a keyboard, the discourse has taught that implementation is inevitable. The “how” has been posed so loudly that the “whether” has become inaudible. This is not a neutral distribution of attention. It is a rehearsal of a conclusion.
Second, the near-absence of the partner frame forecloses the vocabulary in which the invisible curriculum could even be described. To name what the medium is doing to the user requires concepts — co-composition, cognitive delegation, stylistic absorption, epistemic offloading — that the tool frame actively discourages. A hammer does not have a style one can absorb. A tool is something one picks up and puts down; the idea that one might be picked up by it has no place in the frame.
Third, the marginality of the threat frame is sometimes treated as evidence of a mature, non-alarmist discourse. It can equally be read as evidence that the discourse has already adopted the medium’s own preferred self-description. Media, McLuhan observed, tend to impose their own assumptions on the unwary, and one of those assumptions is almost always that the medium is benign. The rear-view mirror flatters the new arrival by making it look like the old one.
The Turn: The Discourse Is Also a Medium
Here the floor shifts.
The argument so far has treated the invisible curriculum as something AI tools teach their users. But the AI literacy discourse is itself a medium, with its own invisible curriculum, and it is teaching educators before educators teach students.
Consider what the obsessive focus on “how do we teach students to use AI well?” is doing to those who ask it. The question is practical, urgent, and in its own terms answerable. It is also a container, and like any container, it shapes what can be placed inside it. By asking how to teach use, the discourse presupposes that use is the unit of analysis. By presupposing use, it brackets the prior question — what is the medium doing to the learner’s cognition, attention, and sense of authority? — as either settled or out of scope. The bracketing is not argued; it is enacted, in the shape of the question itself.
This is why the near-absence of student voice in AI integration decisions is not merely an equity problem. It is an epistemological symptom. In the corpus noted above, institutional and faculty perspectives dominate; student perspectives are largely absent. The invisible curriculum of this distribution is a lesson in who has standing to speak about one’s own cognitive formation. Students are positioned as recipients of policies made on their behalf, which teaches — before any classroom AI conversation occurs — that the reshaping of one’s own thinking by a medium is a matter on which one does not have primary authority to comment. The lesson is delivered by the structure of the discourse, not by any single document within it.
The silences compound. Long-term cognitive effects are a discourse gap. Non-Western perspectives are a discourse gap. What these silences teach is that the timescale of concern is the semester, the framework of concern is the Anglophone research university, and the imagination of what counts as a legitimate question about AI is bounded accordingly. A medium that will plausibly reshape the cognitive ecology of a generation is being discussed in vocabularies drawn from a narrow slice of institutional experience. The vocabularies are not wrong. They are partial in ways their partiality does not announce.
Now the turn can be stated plainly. Calling AI a tool is the first lesson of the invisible curriculum, and it is taught to faculty before it is taught to students. The tool frame arrives in professional development sessions, in institutional guidance, in the op-eds educators read over coffee. By the time an instructor designs a module on AI literacy, the frame has already done its work. The module will teach prompt craft, disclosure norms, and evaluative heuristics — useful, all of it — while the frame that shaped the module remains unexamined. The invisible curriculum is thus transmitted intact, from discourse to educator to learner, each stage reinforcing the preceding one’s assumption that the medium is instrumental and its effects on cognition are downstream of use.
To see this is to see that AI literacy cannot be built on top of the tool frame without inheriting the frame’s blindnesses. The literacy itself needs a different foundation.
What AI Literacy Could Be Instead
For educators of AI literacy and for learners trying to understand what they are absorbing, the implication is not a new checklist. It is a reorientation. Three shifts suggest themselves.
The first shift is from use to effect as the primary object of study. An AI literacy curriculum organized around effect would begin not with how to write a better prompt but with what happens to the user during and after the exchange. What did the learner stop doing that they would otherwise have done? What sentence did they not finish forming because a suggestion formed it? What question did they not sit with because an answer arrived in three seconds? These are empirical questions, answerable through reflective practice, journaling, and comparative exercises in which the same task is undertaken with and without assistance. They are also the questions the tool frame cannot pose.