Through McLuhan’s Lens
May 10, 2026 | 2685 words
The Missing Conversation: The Silences That Shape AI Discourse
In a corpus of 6,135 articles published this week on artificial intelligence, the word “Regulation” appears 94 times. “Machine Learning” appears 18 times. That is a ratio of more than five to one in favor of the political-procedural over the technical-substantive. In the same body of coverage, the framing of AI as a “tool” dominates, the framing of AI as a “threat” recurs reliably, and five other framings — AI as partner, AI as transformation, AI as governance, AI as equity, AI as automation — each appear in under five percent of articles. Three specific conversations are nearly absent: the voice of students in the policy decisions being made about their futures, the long-term cognitive effects of daily AI use, and any sustained engagement with non-Western perspectives on what these systems are and should be.
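For readers who want the arithmetic explicit, the ratio and the under-five-percent threshold above reduce to a few lines. This is a minimal sketch using only the counts quoted in this paragraph; the corpus itself is not reproduced here:

```python
# Term counts quoted in the text (corpus of 6,135 articles).
articles = 6135
regulation_mentions = 94
machine_learning_mentions = 18

# Ratio of political-procedural to technical-substantive vocabulary.
ratio = regulation_mentions / machine_learning_mentions
print(f"regulation : machine learning = {ratio:.2f} : 1")  # just over 5 : 1

# A frame appearing in fewer than 5% of articles falls below the
# "marginal" threshold the essay uses for the five minor framings.
threshold = 0.05 * articles
print(f"under-five-percent threshold: {threshold:.0f} articles")
```

The point of the sketch is only that “more than five to one” is the literal quotient, not a rhetorical flourish.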
These are not gaps in a single newsroom’s reporting. They are the shape of an entire week of public discourse. And the shape of the discourse, not its content, is what this essay is about.
It is tempting to read the numbers above as an indictment — of journalists, of editors, of the attention economy, of whoever else is convenient. That reading would miss the more useful observation. The numbers describe a medium. They describe what the current medium of AI discourse can carry and what it cannot, what it amplifies and what it filters out. And as Marshall McLuhan spent his career trying to teach an unwilling public, the medium is doing something to us that we cannot see while we are inside it.
The Figure Everyone Watches, the Ground That Governs
McLuhan’s most quoted line — “the medium is the message” — is also his most misunderstood. Readers tend to hear it as a claim about content versus delivery, as though he were saying that how a thing is said matters more than what is said. He was making a stranger and more useful claim. In Understanding Media, he argued that every medium reorganizes the sensorium of the people who use it. The medium’s real “message” is the new pattern of perception, attention, and social relation it installs. The content carried by the medium — the news story, the television program, the AI policy debate — is a kind of decoy that keeps the watchdog of the conscious mind occupied while the medium does its quieter, structural work.
This is the frame to bring to a corpus where Regulation outpaces Machine Learning five to one.
The figure — the thing everyone is watching — is the content of AI coverage: which company released what model, which senator proposed which bill, which task force issued which framework. The ground — the thing actually shaping the public’s relationship to AI — is the distribution of that coverage. It is the fact that a citizen reading widely this week will encounter the word “regulation” again and again, and the phrase “machine learning” almost not at all. It is the fact that “partner” and “equity” and “transformation” register as rounding errors. The ground is what the discourse trains the reader to expect, to demand, to feel competent discussing — and, equally, what it trains the reader to leave alone.
McLuhan’s figure/ground move, borrowed from Gestalt psychology and threaded through nearly all his later work, is a reader-empowering instrument. It says: stop staring at the picture. Look at the frame. Look at what the picture cannot contain. The figure is loud, but the ground is what governs.
Apply it here. A 94-to-18 ratio is a frame. It tells the reader that AI is a thing to be governed by procedures whose technical substance lies somewhere offstage. It positions the reader as a subject of policy — someone for whom rules will be made — rather than as a person who could, with effort, understand the systems acting on them. The shape of the discourse constitutes its audience. That constitution is the message.
The Rear-View Mirror Pointed at a New Country
McLuhan had a second instrument worth bringing to these numbers: what he called the rear-view mirror. In The Medium Is the Massage, and in interviews across his career, he returned repeatedly to the observation that human beings, confronted with a genuinely new technological environment, instinctively reach for the vocabulary of the previous one. We drive into the future, he wrote, looking into the rear-view mirror. We see the road behind us and call it the road ahead.
The dominance of “regulation” in this week’s AI corpus is a textbook case.
“Regulation” is a word with a long industrial pedigree. It evokes the Food and Drug Administration approving a compound, the Federal Communications Commission allocating spectrum, the Environmental Protection Agency setting an emissions cap. It assumes a recognizable object — a drug, a broadcast frequency, a smokestack — that can be inspected, classified, and bounded. It assumes a clear distinction between the regulator (public authority) and the regulated (private producer). It assumes that the relevant harms are measurable and the relevant remedies are procedural.
None of these assumptions translate cleanly to systems that are simultaneously infrastructure, content, labor substitute, cognitive prosthesis, surveillance apparatus, and cultural medium. But “regulation” is the vocabulary the public has, so “regulation” is the vocabulary the public uses. The 94 mentions are the rear-view mirror in operation. They describe a new country in the language of an old one.
Notice what the language carries with it. When AI is framed as an object of regulation, the implicit drama is between firms and states. The citizen’s role in that drama is to wait — to be protected, eventually, by rules whose technical content they are not expected to follow. The substantive question of what machine learning actually is — how it is trained, on what data, with what failure modes, optimizing for what — slides offstage. It becomes specialist knowledge, the domain of the people drafting the rules, not of the people the rules will govern.
Meanwhile, the 18 mentions of “Machine Learning” are doing a different kind of work. They are the thin signal of an alternative discourse in which the reader is treated as someone capable of understanding the mechanism, not merely the policy surrounding the mechanism. That discourse exists. It is just heavily outnumbered.
Ask the skeptical question McLuhan trained his readers to ask: who benefits from this ratio? When public attention is anchored on procedure, the substance of these systems becomes the private property of the people who build them and the people who write rules about them. The five-to-one ratio is not neutral. It quietly assigns competence. It quietly tells the reader where their thinking is welcome and where it is not.
What Has Gone Numb
McLuhan’s third instrument is the most uncomfortable. In Understanding Media, he proposed that every technological extension of the human body is accompanied by a corresponding numbness — what he sometimes called auto-amputation. The wheel extends the foot, but the body that uses wheels begins to lose something the walking body had. The book extends the eye and the memory, but the literate mind loses capacities the oral mind retained. The medium gives, and the medium takes, and the taking happens below the level of conscious notice. We do not feel the amputation. That is the whole point. The numbness is the mechanism by which we tolerate the extension.
Now look again at the frames that appear in under five percent of this week’s coverage: partner, transformation, governance, equity, automation. And look at the three explicitly missing conversations: student voice, long-term cognitive effects, non-Western perspectives.
These are not random absences. They are coherent. They point, together, at the parts of the human and social experience of AI that the current medium of discourse cannot easily metabolize.
The “partner” frame would require treating AI as something one collaborates with, which would in turn require describing what collaboration with such a system actually feels like, what it changes in the person doing the collaborating, what new capacities it offers and what existing capacities it dulls. That is a phenomenological conversation. It does not fit easily into a regulatory news cycle. So it stays under five percent.
The “equity” frame would require asking who is in the room when these systems are designed, whose languages they speak well and whose they mangle, whose labor was scraped to train them, whose labor they are now displacing, and which populations bear the cost of their failures. That is a structural conversation about power. It fits poorly into a discourse organized around firm-versus-state regulatory drama, because it implicates both firms and states. So it stays under five percent.
The conversation about long-term cognitive effects would require admitting that we do not yet know what daily reliance on generative systems does to memory, to writing, to the formation of judgment, to the patience required for a difficult thought. It would require speaking honestly about the possibility that something is being given up. It would, in McLuhan’s terms, require feeling the amputation. So it does not happen, or happens rarely, and at the margins.
The absence of student voice in policy decisions is perhaps the cleanest example. The population whose cognitive and professional lives will be most reshaped by these systems is the population least quoted in coverage of the rules being made about them. They are spoken about, not with. They are an object of the discourse, not participants in it.
The absence of non-Western perspectives is the largest of the silences and the easiest to overlook from inside an English-language press environment. The questions of what intelligence is, what authorship means, what a tool’s relationship to its user should be, whether the individual or the collective is the relevant unit of harm and benefit — these are questions on which the world’s intellectual traditions differ profoundly. A discourse that treats Silicon Valley assumptions and Brussels regulatory instincts as the two poles of the conversation has already truncated the conversation by an enormous factor. The truncation does not register as a loss inside the discourse, because the discourse is the medium that has produced the numbness.
In History and Communications, McLuhan, working through the legacy of Harold Innis, returned repeatedly to the observation that every dominant medium of communication carries an inbuilt bias — toward certain kinds of content, certain time horizons, certain spatial reaches, certain populations as speakers and others as audiences. The bias is rarely chosen. It is structural. Innis’s “monopolies of knowledge” are not conspiracies; they are the natural sedimentation of which questions a medium can ask and which it cannot. The current medium of AI discourse has a bias, and the under-five-percent frames and the missing conversations are its signature.
The Turn: The Discourse Is the AI You Should Be Worried About
This is the point in the essay where the figure and ground should swap.
A reader who began this piece worried about AI — about job displacement, deepfakes, surveillance, classroom cheating, election manipulation, whatever else has been the figure in recent headlines — has been operating, reasonably, inside the discourse. The discourse offers a menu of worries and a menu of remedies, and the reader, doing their civic duty, picks from the menus.
The harder observation is that the menu itself is the more powerful artifact. The discourse about AI is, in McLuhan’s exact sense, a medium. It has a structure. It has biases. It produces a certain kind of citizen — one who knows the word “regulation” and not the word “embedding,” one who can name three AI companies and zero training datasets, one who has strong feelings about whether AI will take their job and almost no feelings about whether AI is changing how they think. That citizen is not stupid. That citizen is the natural output of a medium with a 94-to-18 ratio and a sub-five-percent partner frame.
The discourse is doing more to shape the public’s relationship to AI than any particular AI product is doing. The chatbot is the figure. The discourse about the chatbot is the ground. And the ground is what governs.
This is not a comforting observation, but it is a usable one. It means that the most important media literacy task of this moment is not learning to spot AI-generated content. It is learning to spot the structure of conversations about AI — to notice, in real time, when a piece of coverage is reaching for the rear-view mirror, when it is constituting its reader as a subject of policy rather than a person capable of understanding, when it is letting an entire frame stay under five percent without remarking on the absence.
Notice also whose interests this structure serves. When the substantive technical conversation stays specialist and the regulatory conversation stays procedural, the people who actually shape these systems — engineers, executives, the small set of researchers at a handful of labs — retain effective control over the substantive layer while ceding the visible layer to legislators and reporters. The visible layer is loud. The substantive layer is quiet. This is a workable arrangement for power. It is a less workable arrangement for everyone else.
When the partner and equity frames stay marginal, the conversation cannot easily reach the questions that would most threaten existing arrangements: how the value created by these systems should be distributed, who gets to refuse them, whose ways of knowing count as expertise in their design. The marginality of those frames is not censorship. It is something more durable than censorship. It is the bias of a medium.
Reading with Figure/Ground Awareness
What does it mean, then, to read this week’s AI coverage — or next week’s, or next year’s — with the perceptual instrument McLuhan offers?
It means, first, treating the article in front of you as a sample of a distribution, not as a self-contained object. Ask what frame this piece is using. Ask which of the five under-represented frames it is not using. Ask whether the language is industrial-regulatory (“oversight,” “compliance,” “framework,” “guardrails”) or substantive-technical (“training data,” “model weights,” “inference,” “evaluation”) or experiential (“what it felt like to,” “what changed in my work when”) or structural (“who bears the cost,” “whose labor,” “whose languages”). The vocabulary is the tell. A piece written almost entirely in the first vocabulary is a piece participating in the 94. A piece that reaches for the other vocabularies is doing something rarer.
It means, second, noticing who is quoted and who is spoken about. A piece about education policy that quotes no students is a piece operating inside the medium’s bias. A piece about AI in a particular country that quotes only Western analysts is a piece operating inside the medium’s bias. These are not failures of individual journalists; they are the signature of the ground. But the reader who can see the signature is no longer fully governed by it.
It means, third, taking the silences as data. If a week of coverage barely mentions long-term cognitive effects, that is not because there is nothing to say. It is because the medium of current discourse cannot easily metabolize that conversation. The reader who notices the silence has located something the discourse is structurally unable to show them, and can go looking for it — in longer-form writing, in research literature, in their own experience, in conversation with people whose lives intersect with these systems differently.
It means, fourth, holding the rear-view mirror at arm’s length. “Regulation” is a word from a previous environment. It may turn out to be the right word for this one, but it should not be the only word. A public that can only talk about AI in the vocabulary of the FDA and the FCC is a public that has been handed an old map for a new territory and told the map is current.
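The vocabulary “tell” described in the first step above can be sketched as a crude keyword tally. Everything here is illustrative: the four phrase lists are assumptions drawn from the examples given in the text, not a validated lexicon, and a real analysis would need stemming, phrase variants, and a far larger word set:

```python
# Toy profiler for the four vocabularies named in the text.
# Phrase lists are illustrative assumptions, not a validated lexicon.
VOCABULARIES = {
    "industrial-regulatory": ["oversight", "compliance", "framework",
                              "guardrails", "regulation"],
    "substantive-technical": ["training data", "model weights",
                              "inference", "evaluation"],
    "experiential": ["what it felt like", "what changed in my work"],
    "structural": ["who bears the cost", "whose labor", "whose languages"],
}

def vocabulary_profile(text: str) -> dict:
    """Count how often each vocabulary's phrases appear in an article."""
    lowered = text.lower()
    return {name: sum(lowered.count(phrase) for phrase in phrases)
            for name, phrases in VOCABULARIES.items()}

sample = ("Lawmakers proposed a new compliance framework with guardrails, "
          "but said little about training data or model weights.")
print(vocabulary_profile(sample))
```

A piece scoring almost entirely in the first column is, in the essay’s terms, participating in the 94; a piece that registers in the other three is doing something rarer.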
McLuhan’s wager, across Understanding Media, The Medium Is the Massage, and the many interviews collected over his career, was that perceptual awareness is itself a form of freedom. Not a compl