The Longer View
The Assistant Arrives Pre-Installed
I. The question of the week
The Copilot-ification of the classroom and workplace is not, strictly speaking, a story about artificial intelligence. It is a story about default settings. Somewhere between the November 2024 update that folded enhanced reasoning into Microsoft 365 Copilot (covered as Microsoft 365 Copilot Reloaded With Enhanced AI Capabilities) and the back-to-school posts of August 2025, the question shifted from whether institutions would adopt generative AI to what it meant that the assistant was already sitting in the sidebar when the teacher opened her lesson plan. This week's column traces that short rhetorical distance — and the much longer distance the underlying infrastructure covered at the same time.
The arc has one clean inversion. Through late 2024 and the first half of 2025, published commentary on embedded assistants — GitHub Copilot, Microsoft 365 Copilot, Claude-in-Copilot, Gemini for Education, Firefly Assistant — ran optimistic by a comfortable margin. Then, in the third quarter of 2025, for the first time, critical framings outnumbered celebratory ones. The reversal did not track a dramatic incident; no Copilot scandal preceded it. What had changed was the shape of adoption itself. The pilot had become the platform. The workshop had become the procurement line-item. The feature that educators and public servants had opted into had become the environment they opted out of only with friction.
This essay follows that transit through two registers: what the conversation has been saying, month by month, and what has actually been shipping, rolling out, and embedding itself in the infrastructure of schools, universities, and government agencies. Both arcs matter. The rhetoric tells you what the public discourse was prepared to defend at each moment. The reality tells you what it was defending against — or catching up to.
II. What we've been saying
In December 2024, when a Rhode Island College professor described preparing his writing students for the age of AI in An AI-Enhanced Writing Course: A Glimpse into the Future of Education, the framing was experimental and forward-leaning: imagine, the piece invited, a classroom where Copilot was "seamlessly" integrated. The same month, the IT department at the University of Oxford announced that it was using Microsoft 365 Copilot "to future-proof education" in University of Oxford's IT department embraces Copilot, a headline whose compound verbs — future-proof, embrace — did most of the argumentative work. These are pilot-era documents. They belong to a moment when the assistant still needed to be introduced.
The first quarter of 2025 was when the rhetoric of introduction scaled up to the rhetoric of instruction. McGill University announced its Copilot AI module on myCourses, a self-service tutorial delivered through the learning management system — the assistant not just available but curriculumized. A small-town Pennsylvania newspaper reported on school districts trying to navigate fast-paced AI technology surge, with a technology coordinator demonstrating Copilot's ability to draft math questions at grade level. A peer-reviewed study out of Kuwait, Enhancing Creativity and Cultural Integration in Art Education, treated Copilot as a creativity-enhancement tool for design students — the assistant now a variable in empirical research. Even the critical notes of the quarter, exemplified by Child Trends' Why School Districts Need AI Policies to Support STEM Education, operated in a corrective rather than oppositional register: districts were unprepared, they needed policy, but the direction of travel was assumed.
The second quarter continued the pattern while widening the frame from classroom to enterprise. Microsoft's own Inside Track team published Driving the future of work, an unusually candid piece of corporate communications that described the internal project of getting Microsoft's own workforce to actually use Copilot — a piece whose existence tacitly admitted that deployment and adoption are not the same thing. Nous Group's evaluation of the Australian government's whole-of-government trial, Copilot comes to Canberra, described what was then the largest enterprise AI rollout in the public sector. Microsoft's own Empowering educators with AI innovation and insights consolidated the education offering around a platform logic: Copilot Chat for teen students, Copilot+ PCs, an annual AI in Education report. The rhetoric had entered its infrastructure phase.
Then came the third quarter, and the inversion. Published commentary did not abandon optimism — Fulton County Schools' 87,000-student deployment was covered approvingly in How Fulton County Schools use Copilot Chat to empower student innovation, and Melbourne's Scotch College Junior School documented embedding generative AI across our school as a readiness-graded integration — but the balance of tone tilted. Sarah Hernholm's The AI Education Gap: Why Schools Need AI Policies Now anchored a late-Q2/early-Q3 argument that sixty percent of teachers were already using AI with students who had never been taught to use it responsibly — the pilot had outrun the pedagogy. A community-college instructor writing in The Conversation asked publicly whether embedding the tools was worth doing at all before answering in the affirmative — but the fact that the question now required defending was itself the news. A 2025 AACU study referenced in Coaching the Future found that ninety-five percent of higher-education leaders agreed teaching models would shift with AI, while more than a third reported their institutions were not ready.
By the fourth quarter, the polarity had settled. Microsoft's back-to-school promotional content — Top 5 Copilot AI features for students — ran alongside unambiguously skeptical journalism like AI in the Classroom: Are Robots the Future of Teaching?, whose opening described AI "quietly at work" in already-deployed grading algorithms. The question in the early pieces had been what Copilot could do. The question in the later pieces was what Copilot was already doing. As our briefing of 2025-05-04 observed, authors writing about AI in education almost uniformly expressed the purpose of "enhancing personalized learning" — a phrase that survived across the arc, from the Oxford pilot to the Fulton County case study, doing consistent work: converting infrastructure into pedagogy, and procurement into care.
III. What's been happening
The rhetoric traveled a linear path. The reality moved in layers.
The first layer was productization. Microsoft's November 2024 Copilot update consolidated admin controls, extended reasoning capabilities, and hardened the enterprise licensing model — the quiet, unglamorous work of making an assistant ship-ready at scale. By the time Oxford's IT Services announced its embrace of Copilot in the same month, the product was already past the demo stage; what Oxford was adopting was not an experiment but a standard-issue enterprise tool.
The second layer was curricularization. McGill's Copilot AI module on myCourses placed the assistant inside the learning management system itself, not as a supplement but as an object of instruction. Southern Columbia Area's technology coordinator, quoted in the Daily Item, was already using Copilot to generate grade-appropriate math problems — a workflow, not a pilot. The Kuwaiti design-education study in MBSE tested Copilot as a measured variable, which is the point at which a tool stops being novel. Between January and March 2025, the assistant moved from IT-department announcement to teacher workflow to research instrument.
The third layer was enterprise scale. Copilot comes to Canberra tested whether the assistant's promise translated into public-servant productivity at the scale of an entire national administration. Microsoft's own change management blog post — written, per its own disclosure, with AI assistance — documented the internal campaign to move Microsoft's workforce from license-holder to daily user. What both pieces quietly registered is that deploying the tool and getting it used are different problems. The tool had shipped. Adoption was the new project.
The fourth layer was institutional embedding. Fulton County Schools' Copilot Chat deployment reached roughly 87,000 students. Scotch College Junior School described staff "integrating AI into their practice in ways that align with their readiness, confidence, and classroom context" — a phrase whose careful variability indicates an institution that has chosen to normalize what it cannot standardize. Microsoft's Empowering educators announcement brought general availability of Copilot Chat for teen students, meaning that the assistant had crossed from university and enterprise into the under-eighteen demographic with whatever that entails for consent, pedagogy, and data practice.
Beneath these layers, one statistic bears lifting out. Hernholm reported that sixty percent of teachers are now using AI in their lessons while most schools have not developed policies teaching students to use the same tools responsibly. That figure is what the inversion in Q3 was reacting to. It is not that Copilot had failed to arrive. It is that it had arrived without the pedagogical scaffolding its champions had been promising, for eighteen months, would accompany it. The AACU finding cited in Coaching the Future — 95% of higher-education leaders agreeing teaching models would shift, more than a third reporting institutional unreadiness — is the same gap measured from the top.
The fifth layer, emerging now, is consumer productization. Microsoft's Top 5 Copilot AI features for students no longer addresses administrators, IT departments, or even faculty. It addresses students directly as consumers of productivity software. The institutional channels — the district purchase, the university license, the government trial — are now complemented by a direct-to-learner retail surface, study aid by study aid. This is what the assistant looks like when it no longer needs permission. Our earlier briefing on AI literacy (2025-04-27) argued that AI literacy was becoming foundational rather than technical — a claim that lands differently now, because the infrastructure has made the literacy non-optional.
Two passages from the library sit over this arc. Alvin Toffler, writing in Revolutionary Wealth (2006), noticed that "only much later did corporations train large numbers of users. Guru prosumers were the indispensable, yet unrecognized, drivers of the PC revolution." The Copilot rollout has inverted that sequence: the corporation trained the users first; the prosumers, insofar as they exist, came into a room where the assistant was already installed. And Meredith Broussard, in Artificial Unintelligence (2018), observed that "school is one of the most gorgeously complex systems humankind has built. I go into my classroom every day and leave surprised by what has transpired." The question posed by the last six quarters is whether an assistant trained on the average can hold its ground inside a system whose value is the surprise.
IV. Where they meet, where they miss
The rhetoric and the reality meet most cleanly at the level of what both sides agree has happened: the tool arrived, it was adopted, it is being used at scale. The conversation and the rollout share a vocabulary — "integration," "embedding," "readiness" — and share an endpoint, the classroom or workplace in which the assistant is simply present.
They miss at the level of agency. Throughout late 2024 and early 2025, published commentary described adoption as a series of choices: a professor at Rhode Island College choosing to build an AI-enhanced writing course, an IT department at Oxford choosing to embrace Copilot, a teacher in Pennsylvania choosing to let the software draft a worksheet. By the third quarter of 2025, the decisions documented in the same publications were no longer about whether to use the tool but how to manage its already-accomplished presence. Microsoft's own change management post is the giveaway. Change management is not what you do when you are considering a change. It is what you do when the change has occurred and the people affected by it have not yet caught up.
This is why the inversion in Q3 is not a verdict on Copilot. Nothing in the product broke. The shift registers the moment at which public conversation caught up with a procurement decision it had been narrating as a pedagogical one. The Forbes piece on the AI education gap and the AACU-cited unreadiness are not critiques of AI in education so much as retrospectives on eighteen months during which infrastructure was laid faster than discourse could metabolize it.
Shoshana Zuboff, in The Age of Surveillance Capitalism (2019), opens her section on authority by recalling a pulp mill manager's question: "Are we all going to be working for a smart machine, or will we have smart people around the machine?" The Copilot-ification arc has, in effect, outsourced that question to default settings. A teacher at Scotch College integrating AI according to her "readiness, confidence, and classroom context" is not answering Zuboff's question so much as routing around it; the smart machine is there, the smart person is there, and the institutional question of which holds authority over the other is deferred by being distributed across individual workflows.
The library has a second pertinent observation. Toffler, in Future Shock (1970), argued that "education must prepare people to function in temporary organizations — the Ad-hocracies of tomorrow," against a school system whose "standardized basic unit" was the teacher-led class. What the last six quarters of coverage have documented is the opposite move: the ad-hocracy arriving inside the standardized unit, not replacing it. Fulton County's 87,000 students are still enrolled in grade-level classes; the difference is that each student now has an assistant the curriculum did not design. Marshall McLuhan and Quentin Fiore, in The Medium is the Massage (1967), noticed the gap between "the modern home environment of integrated electric information and the classroom," a nineteenth-century environment in which "information is scarce but ordered and structured." Copilot closes that gap by importing the home environment into the classroom. Whether the classroom's ordering survives the import is the question the next eighteen months will answer.
The deepest miss is definitional. The rhetoric has treated the Copilot-ification as the introduction of a tool. The reality has been the replacement of a default. When the assistant is in the sidebar of the software that every teacher and every public servant already uses, "adoption" is not the right word for what happens next. The student who uses Copilot to summarize a reading has not adopted an AI tool; she has used Word. The defaults have moved. The conversation has not fully noticed.
V. The longer view
Broussard's observation that school is "one of the most gorgeously complex systems humankind has built" is worth holding against the back-to-school content that now catalogs the top five Copilot features for students. The complexity she described — students with deadlines, family drama, travel plans, their own children — is not the kind of complexity an assistant trained on average-case knowledge-work is built to meet. This is not an argument against the tool. It is an argument about where the tool is deployed, and by whom, and with what acknowledgment that the room it walks into is not empty.
What the arc has shown, across six quarters and the single inversion they contain, is that the decisive action happened before the conversation began. The product was hardened, licensed, and defaulted-on while the discussion was still framed around whether a writing professor should pilot it. By the time the critical voices pulled even with the celebratory ones in the third quarter of 2025, the infrastructure question had already been settled by procurement, licensing, and platform design. What remains open is not whether the assistant is present but whether anyone inside the classroom or the agency still gets to choose the default.
The Copilot-ification of the classroom and workplace is not the story of a tool being adopted; it is the story of a default being changed, and the conversation arriving eighteen months late to notice.
References
- Microsoft 365 Copilot Reloaded With Enhanced AI Capabilities
- An AI-Enhanced Writing Course: A Glimpse into the Future of Education
- University of Oxford's IT department embraces Copilot to future-proof education with AI
- Learn how to use Copilot AI module now on myCourses (McGill)
- School districts trying to navigate fast-paced AI technology surge
- Enhancing Creativity and Cultural Integration in Art Education: Evaluating the Role of Microsoft Copilot Among Kuwaiti Design Students
- Why School Districts Need AI Policies to Support STEM Education
- Driving the future of work: How we're approaching Microsoft 365 Copilot change management at Microsoft
- Copilot comes to Canberra: Lessons from the world's largest whole-of-government AI trial
- Empowering educators with AI innovation and insights
- Personalizing education and enhancing administration with AI
- How Fulton County Schools use Copilot Chat to empower student innovation
- From curiosity to capability: Embedding generative AI across our school
- The AI Education Gap: Why Schools Need AI Policies Now
- Generative AI is coming to the workplace, so I designed a business technology class with AI baked in
- Coaching the Future: AI, Experiential Learning, and Ethics in Business Education
- 5 ways to use Copilot and AI tools to spark curiosity this school year
- Top 5 Copilot AI features for students
- AI in the Classroom: Are Robots the Future of Teaching?