AI NEWS SOCIAL · Thinker Column · 2026-05-10 International/LATAM
Through Toffler’s Lens: The Silences That Shape the Machine

The Conversation That Isn’t Happening

Across 6,135 articles about artificial intelligence, something extraordinary is missing. Not a fringe concept. Not a heterodox theory. The basic frames through which a society might actually think about what is being built.

The frame of AI as a tool — present in under 5% of the discourse. AI as a threat — under 5%. AI as a partner, a collaborator, something one works with rather than deploys or fears — under 5%. AI as transformation, governance, equity, automation — each, separately, below the same vanishing threshold. Seven distinct ways of conceptualizing the most consequential technology of the decade, and none of them animate more than a sliver of what is being written.

What does animate the discourse? Adoption metrics. Productivity gains. Policy compliance. Vendor announcements. Capability benchmarks. The loud channels are loud — but they are loud about logistics, not meaning.

This is the phenomenon to examine: not what is being said about AI, but the shape of what is not. Alvin Toffler spent his career arguing that civilizational transitions are legible through their silences as much as their slogans. A society moving between waves does not simply add new ideas to old ones. It restructures what is sayable — and the questions that fall outside the new grammar are not debated and then lost. They are buried before the debate begins.

The missing conversation is not an oversight. It is an artifact of the collision now underway between Second Wave habits of public meaning-making and Third Wave conditions of fragmented, vendor-controlled, algorithmically shaped attention. Reading that collision is the work of this column.

The Wave Frame: From Broadcast Consensus to Strategic Silence

In The Third Wave, Toffler described the Second Wave — industrial civilization — as running on mass communication. Three television networks. A handful of national newspapers. Standardized curricula, standardized news cycles, standardized stories about what mattered. The cost of this standardization was obvious: dissent compressed, minorities erased, complexity flattened. But it had one feature worth naming clearly: a society could, however imperfectly, have a conversation with itself. There was a public square, even if the square was rigged.

The Third Wave de-massifies that square. Channels multiply. Audiences fragment. Algorithms personalize. Every reader inhabits a slightly different informational environment, and the notion of a shared national story becomes nostalgic at best, ideological at worst.

Toffler treated de-massification with characteristic ambivalence. It liberates: voices once locked out of broadcast gatekeeping can speak. It also fractures: the cognitive substrate for collective decision-making thins out. What he did not fully anticipate — and what the AI discourse data now reveals — is a third possibility. De-massification does not merely produce plurality. Under conditions of platform concentration and algorithmic curation, it can produce strategic silence: a discourse so fragmented and so attentionally captured that entire categories of question never coalesce into public form at all.

This is the collision point. Second Wave habits — the expectation that important issues get debated in shared forums, that institutions broker public meaning, that journalism surfaces what the public needs to know — persist as cultural reflexes. But the Third Wave infrastructure on which AI discourse actually circulates does not support those habits. It supports engagement. It supports velocity. It supports the loud frames — adoption, productivity, compliance — because those frames map cleanly onto the metrics of platforms, the press releases of vendors, and the procurement cycles of institutions.

The seven missing frames? They map onto none of those things. And so they fall through.

Silence One: The Partner Frame and the Labor That Cannot Be Named

Of the seven absent frames, the collapse of the partner frame is the most diagnostic. Tool and threat at least map onto familiar industrial categories — instrument and danger, asset and liability. A society that thinks in tool/threat terms is still, recognizably, a Second Wave society arguing about its machines.

The partner frame is different. It implies a relationship — that the human and the system co-produce outcomes, that agency is distributed, that work performed with AI is neither fully the human’s nor fully the machine’s. To take the partner frame seriously is to be forced into questions the loud discourse cannot metabolize: If a worker and a system jointly produce value, who owns the surplus? If the training data was scraped from millions of unpaid contributors, are those contributors partners in what the system now does? If a knowledge worker spends their day in continuous interaction with a model, are they using a tool or laboring inside one?

These are not abstract questions. They are the central questions of labor in the present moment. And they appear in less than 5% of the discourse.

Toffler’s concept of the prosumer, developed across The Third Wave and refined in Revolutionary Wealth, names exactly what is being silenced here. The prosumer is the person who produces and consumes simultaneously — who generates value that the formal economy does not measure because it occurs outside the wage relation. Every search query, every uploaded image, every corrected autocomplete, every flagged piece of content has been prosumer labor feeding the training corpora of contemporary AI. The systems being sold back to workers as productivity tools were built, in significant part, on the unpaid cognitive output of those same workers and their neighbors.

This is not a hidden fact. It is a structurally unsayable one. To say it would be to open the question of compensation, of ownership, of whether the enclosure of human-generated text and image into proprietary models constitutes one of the largest expropriations of commons in modern history. The loud frames — productivity, adoption — assume that question away. They begin from the premise that AI is a product one buys, not a distillation of collective labor one is now being charged to access.

Whose interest does this silence serve? The answer is not subtle. It serves whoever owns the model weights.

Silence Two: The Substrate

The second silence is physical. AI runs on data centers. Data centers consume electricity at industrial scale, water for cooling at agricultural scale, and rare minerals extracted at colonial scale. None of this is secret. It is reported, occasionally, in environmental journalism and specialist outlets. But across 6,135 articles in general AI discourse, the material substrate of the technology appears as a marginal note when it appears at all.

This is the de-massification problem in sharp form. The information exists. Determined readers can find it. But because the discourse has fragmented into vertical channels — business AI, policy AI, education AI, consumer AI — the substrate question is always somebody else’s beat. The environmental cost is the environment reporter’s story. The mineral extraction is the supply chain reporter’s story. The labor of data center workers in regions that bear the cooling load is somebody’s story, somewhere, in a publication the general AI reader does not read.

In The Third Wave, Toffler observed that industrial civilization succeeded partly by hiding its costs — by externalizing pollution, displacement, and depletion onto populations and ecosystems that lacked the standing to bill for them. The Third Wave was supposed to make those costs visible through information abundance. The AI discourse demonstrates how that promise fails. Information abundance without integrative forums produces not visibility but distributed invisibility: every cost is documented somewhere, none is publicly metabolized anywhere.

The loud frames benefit. A productivity-focused discourse does not have to account for kilowatt-hours. A compliance-focused discourse does not have to account for watersheds. The story tells itself as a story about software, and the planet on which the software runs disappears into the background.

Silence Three: The Voices Not in the Room

The third silence concerns who gets to speak. The AI discourse is dominated by a remarkably narrow band of speakers: executives of model-producing firms, researchers at those firms or their academic affiliates, policy officials engaging with those firms, and journalists covering all three. This is not a conspiracy. It is the natural shape of a discourse organized around vendor announcements and the institutional responses they trigger.

Who is structurally absent? Workers whose tasks are being restructured by AI deployment without consultation. Contract content moderators and data labelers, often in the Global South, who perform the human labor that makes “automated” systems function. Communities near data centers. Artists whose work was ingested into training sets. The unemployed and underemployed whose displacement is treated as a regrettable side effect rather than a central topic.

The stance distribution in the data tells part of this story. The discourse skews pro-adoption and pro-productivity, with smaller bands of skeptical and nuanced coverage. But notice what this distribution measures: it measures the temperature of the conversation among the people having it. It does not measure who was excluded from being asked.

Toffler called this dimension of social transition powershift — the recognition that struggles over knowledge are struggles over power, and that the apparatus determining who counts as a knowledgeable source is itself the most important political artifact of a given era. In the Second Wave, that apparatus was the credentialing institution: the university, the professional guild, the licensed press. In the Third Wave, it has become something more diffuse and more difficult to challenge: the algorithmic ranking system, the platform’s content policy, the vendor’s PR cycle, the analyst’s report.

The result is a discourse in which “experts on AI” are overwhelmingly people with financial or institutional positions tied to AI’s continued expansion. Skeptical voices exist, but they are positioned as a counter-current rather than as equal partners in a public deliberation. The people most affected — the displaced, the surveilled, the unpaid contributors — appear as data points in stories told by others.

Silence Four: What Alignment Doesn’t Mean

The fourth silence is technical-rhetorical. Within the AI discourse, the term “alignment” has acquired enormous weight. It refers, ostensibly, to the project of ensuring AI systems pursue goals consistent with human values. It is the topic of conferences, the staffing rationale for entire research divisions, the moral cover for product launches.

What the term obscures is the question: aligned to whom? In practice, contemporary AI systems are aligned — quite successfully — to the commercial interests of the firms that train them. They produce outputs that protect those firms from liability, that maximize user engagement, that route conversations away from competitor products, and that decline to discuss the operational details of their own training. This is alignment. It is just not alignment to anything resembling a public.

The discourse cannot easily name this because the word alignment has been pre-loaded with a humanistic gloss. To say “this system is aligned to its shareholders” is to commit a category error within the prevailing vocabulary — even though it is the most straightforwardly accurate description of what alignment work, in commercial contexts, actually accomplishes.

This is the kind of silence Toffler analyzed most acutely in Future Shock: the silence produced not by censorship but by cognitive overload. When change arrives faster than the institutions of meaning-making can metabolize it, populations become disoriented, and disoriented populations accept whatever vocabulary the most confident voices supply. The technical priesthood around AI has supplied a vocabulary — alignment, safety, capabilities, scaling — and that vocabulary now structures what questions can even be formulated.

Ask a question outside that vocabulary and you will be told you misunderstand the field. This is how a discourse trains its participants to silence themselves.

The Mechanism: How the Silences Get Produced

It is worth being precise about how a discourse with so many participants, so many publications, so much volume, manages to leave so much unsaid. The mechanism is not censorship. No one is preventing anyone from writing about labor displacement, water consumption, prosumer expropriation, or commercial alignment. Articles on each topic exist.

The mechanism is structural attention capture. Three Second Wave / Third Wave collisions produce it:

First, vendor-set agendas meet broadcast-trained journalism. Reporters trained in the Second Wave norm of “covering what’s new” find themselves in a Third Wave environment where what is new is determined by the product roadmaps of a handful of firms. Each new model release, each capability demo, each leadership shuffle generates a wave of coverage. The agenda is set elsewhere; the journalism follows. Slow, structural questions — about labor, substrate, ownership — do not generate news pegs and do not get covered.

Second, fragmented audiences meet platform-mediated distribution. The Third Wave should, in principle, allow specialized publications to do deep work on substrate questions and reach the audiences who care. Some do. But platform algorithms reward engagement, not depth, and the deep work circulates within the small communities already attuned to it. The general reader — the one whose civic participation the Second Wave at least pretended to cultivate — encounters AI primarily through the loud channels.

Third, institutional gatekeepers meet vendor-funded research. The credentialed authorities who might, in a Second Wave configuration, broker public meaning around AI — academic researchers, policy think tanks, civic organizations — increasingly draw funding from the same firms whose products they assess. This does not produce overt corruption. It produces a softer effect: the questions that get funded are the questions the funders find interesting, and the questions the funders find interesting are not the questions about whether the entire enterprise should be structured differently.

The result is a discourse that is enormous by volume and narrow by content. Fewer than 5% of articles engage any of the seven frames a society would need in order to think about AI as a civilizational rather than commercial phenomenon. The discourse is busy. It is just busy with the wrong things.

The Adhocracy Problem

In The Third Wave, Toffler proposed adhocracy as the organizational form of post-industrial life: temporary, project-based, flexible, dissolving when its task is complete. He admired its capacity for responsiveness. He also worried, more quietly, about its inability to sustain long conversations.

The AI discourse exemplifies the worry. There are no durable forums where the missing conversations could be sustained at scale. There are conferences, but conferences end. There are journalistic series, but series conclude. There are advocacy organizations, but they fight for funding and attention against the very platforms whose practices they critique. The Second Wave at least produced standing institutions — public broadcasters, regulatory bodies, university departments — that could carry an inquiry across decades. The Third Wave produces sprints and pivots and discourse cycles.

What this means for the missing frames is concrete: even when a substrate critique or a labor critique breaks through, it has nowhere durable to live. It is metabolized as a news cycle, processed for two weeks, and then displaced by the next vendor announcement. The discourse has no long memory because it has no long institutions, and without long memory, accumulated insight cannot become accumulated pressure.

This is how silences sustain themselves even
