The Vanishing Apprenticeship
I. The question of the week
The question is what “knowing how to code” means when the machine offers to write the code for you, and whether the people asking the question have any incentive to answer it honestly. Across four quarters of trade press, vendor announcements, executive podcasts, and earnest LinkedIn monologues, the question has been asked and answered and asked again — with answers swinging from “everyone must learn the tools now” to “the tools are quietly hollowing out the craft” and back. The arc is not a steady accumulation of insight; it is a market in moods, and the moods have inverted at least once.
The arc has a shape worth tracing. In late 2024 the conversation was almost uniformly enthusiastic: AI as the defining technical skill of the decade, the next stage of the developer’s career, the must-have line on a résumé. By the first quarter of 2025 a counter-current had appeared — senior engineers writing about junior colleagues who could not debug without a chatbot, productivity studies whose enthusiasm wilted under scrutiny. Through the spring the optimistic frame surged again, carried by executive predictions and policy moves at large employers. And then, in the third quarter, the critical voices returned with new authority — not because adoption had stalled but precisely because it had become near-universal, and the question of what was being adopted, and at whose expense, finally had data behind it.
What follows is an attempt to trace both layers — what has been said about coding skills in the age of AI assistants, and what has actually been happening to the people doing the coding — and to mark the places where the two have met, the places where they have missed each other entirely, and the place the discourse has so far refused to look.
II. What we’ve been saying
The autumn of 2024 produced a remarkably narrow band of opinion. AI was the skill, full stop. A Forbes column titled 5 Free Online Courses To Learn AI In 2025 opened with the assertion that if one technical skill stood out for professional growth in 2025, it was artificial intelligence — “more than a nice-to-have.” Forbes India ran Global AI and the future of work, describing AI as “one of the most transformative technologies of the 21st century” reshaping industries and redefining employment. Trade outlets ran taxonomies of the jobs that will thrive in 2030, almost all of them requiring some new species of AI fluency. The frame was inheritance: the old technical skills were yielding to a new layer of the stack, and the worker’s job was to climb up to it.
By January 2025 the frame began to crack. ZDNet reported, in Generative AI is now a must-have tool for technology professionals, that job postings mentioning generative AI had risen by a factor of 3.5 over the year, locking in the new requirement at the level of hiring. ITPro covered O’Reilly research showing declining interest in traditional coding languages mirrored by uptick in AI skills demand — the substitution narrative arriving as engagement data. Sam Altman, with the bluntness the role permits, told students to “get good at using AI tools, like we got good at coding back in school”, a sentence that telegraphed the entire repositioning: coding was the skill of the previous generation; tool fluency was the skill of this one.
But the same quarter produced the first sustained backlash. A widely circulated essay from senior developer Namanyay Goel, titled The Hidden Cost of AI-Assisted Development: Skill Erosion, argued that the tools were inadvertently degrading the underlying competencies of the people using them — that the convenience of suggestion was eroding the muscle of comprehension. Practitioner write-ups began sounding the same note: a LinkedIn analysis of AI in Software Development: Productivity Booster or Hidden Risk? catalogued the new dependencies, while The Challenges and Risks of AI Adoption in Software Development treated AI as a central pillar of modern engineering and worried, in the same breath, about what was happening to the pillars beneath it.
Spring inverted again. Trade press carried executive prediction as news. Computerworld’s From prompts to production: AI will soon write most code, reshape developer roles reported Mark Zuckerberg’s claim, made at Meta’s LlamaCon event, that AI could handle half of all software development within a year. Pymnts described how AI Coding Assistants Give Big-Tech Powers to Small Businesses, the democratization frame restored. The Pakistani business press, in AI-driven reskilling is a must, translated the same gospel for South Asian software firms. The Register, more soberly, ran AI software development has proven its worth but is risky, letting the qualifier do most of the editorial work. By June, Canva announced that AI use was now mandatory in coding interviews — the design firm reasoning that AI-assisted coding “better reflects real job conditions.” This was the moment the rhetorical frame closed. Knowing how to code, in the older sense, was no longer being assessed; what was being assessed was facility with the assistant.
The third quarter brought the return of the skeptics, now with the standing of late commentators on a settled fact. CIO Dive, summarizing Google’s DORA report in Developers heavily rely on AI for software development, used “rely on” — not “use,” not “augment with” — and the verb did the arguing. CNN, in Google says 90% of tech workers are now using AI at work, made adoption itself the headline, leaving the question of effect open. And the AI-literacy frame, which our own earlier briefings tracked through the spring, hardened into something closer to a corporate compliance regime: AI literacy now becomes essential, as ResearchGate’s discussion forum had it; Why AI literacy will future-proof your career, as a Mashable-style explainer phrased the same demand.
What had been a debate about whether to use the tools became, by autumn, a debate about what kind of programmer the tools were producing.
III. What’s been happening
Beneath the rhetoric, four things were measurably true. First, the underlying capability had become quite good at one kind of benchmark and remained limited at others. The Stanford HAI AI Index Report 2024 recorded a GPT-4 variant scoring 96.3% on HumanEval — a leaderboard for short, well-specified coding problems — an 11.2-point improvement on the previous high and, as the report’s language-model chapter notes, performance that “has increased 64.1 percentage points” since 2021. The number is real and the number is also narrow: HumanEval problems are bounded and unambiguous, the kind of code-writing in which the assistant excels. The benchmark says little about debugging unfamiliar systems, reading hostile codebases, or producing software whose failure modes the author can anticipate.
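To make that narrowness concrete, consider a toy problem in the HumanEval mold: the entire specification fits in a docstring, and grading means running a handful of tests. The example below is illustrative, not an item drawn from the benchmark itself.

```python
# A toy problem in the spirit of HumanEval (illustrative, not an actual
# benchmark item): a short, fully specified function with unambiguous tests.

def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u, case-insensitive) in text.

    >>> count_vowels("Benchmark")
    2
    """
    return sum(1 for ch in text.lower() if ch in "aeiou")

# Grading is equally bounded: run the tests, count the passes. Nothing here
# resembles debugging an unfamiliar system or reading a hostile codebase.
assert count_vowels("Benchmark") == 2
assert count_vowels("") == 0
```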
Second, adoption ran ahead of evaluation. Google’s 2025 DORA survey, the basis for both the CIO Dive report and the CNN coverage, found that roughly nine in ten developers used AI tools at work for tasks like writing and modifying code. The number is the headline. It is also the methodological problem: when ninety percent of a population uses a tool, the comparison group from which one might learn whether the tool helps has effectively disappeared. The productivity claims that accompanied early Copilot studies become, in this environment, much harder to evaluate; what counts as developer productivity is now partly defined by the presence of the assistant.
Third, hiring rewrote itself faster than curricula. The 3.5x increase in job postings mentioning generative AI documented in the ZDNet report preceded any consensus on what such postings should require. The O’Reilly engagement data in the ITPro write-up showed declining interest in traditional languages on a learning platform — a leading indicator for what new entrants would actually know. Canva’s interview-mandate announcement was, in this context, the procedural endpoint: the assessment had been redesigned around the tool because the work had been redesigned around the tool. The credentialing system absorbed the change before the educational system understood it.
Fourth — and this is the part the optimistic press has the most trouble with — the people closest to the daily work began documenting harms that the surveys do not measure. Goel’s skill-erosion essay was practitioner ethnography, not a study, but its observations were reproduced across the industry: junior engineers shipping code they could not explain, code review degrading into a pattern-matching exercise, debugging skill atrophying because the chatbot’s first guess was usually plausible enough to commit. The Register’s longer feature — which framed the technology as having proved its worth — was unusually candid about the risks: error rates, hallucinated dependencies, security vulnerabilities introduced by suggestions that were syntactically clean and semantically wrong. As Janelle Shane observes in You Look Like a Thing and I Love You (2019), the systems that pass for fluent often do so by gimmick — by handling a known set of cases well enough that the cases they cannot handle are mistaken for noise rather than evidence.
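The “syntactically clean and semantically wrong” failure mode is easiest to see in a small sketch. The example below is hypothetical, not taken from The Register’s reporting: an assistant-style suggestion that is idiomatic Python, passes a hurried review, and is quietly unsafe.

```python
import random
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

# A plausible assistant-style suggestion: clean, idiomatic, and wrong for the
# job. random.choices() is not cryptographically secure, so these "reset
# tokens" are predictable to an attacker who observes enough of them.
def make_reset_token(length: int = 32) -> str:
    return "".join(random.choices(ALPHABET, k=length))

# The fix is a one-line switch to a CSPRNG, but only a reviewer who still
# knows what to look for will ask for it.
def make_reset_token_safe(length: int = 32) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```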
The labor-market story, meanwhile, ran on a separate track. Coverage of the future of work — the Inkl synthesis, the Forbes India panorama — leaned heavily on macro projections that had been recycled, with adjustments, since the early 2010s. Shoshana Zuboff, in The Age of Surveillance Capitalism (2019), traces the lineage of these projections back to Frey and Osborne’s much-cited 2013 study of technological unemployment, and notes how the same theme — that machine learning would soon “realize cost savings” by replacing categories of cognitive work — has been adapted, decade by decade, to whatever automation was currently being sold. The 2024–2025 wave of coverage of AI and developer jobs reads, on this longer view, as the latest iteration of a recurring discourse in which the displacement claim is made, the timing slips, and the next wave of tools arrives to reset the clock.
What is genuinely new, the data suggests, is not the displacement but the dependency: not that programmers are being replaced en masse, but that the practice of programming is being reorganized around an intermediary whose internal logic neither the user nor, in important respects, the vendor fully understands.
IV. Where they meet, where they miss
The discourse and the reality meet most cleanly on the surface fact: adoption is real, broad, and consequential. The 90% figure is not vendor spin. The benchmark gains are not imaginary. Small firms genuinely can ship software they could not have shipped two years ago. The optimistic press is not lying about these things; it is reporting them with the enthusiasm of a beat that depends on access.
Where they meet less cleanly is on what the adoption means for skill. The boosters and the skeptics agree that the tools change what developers do day-to-day; they disagree on whether the change preserves, transforms, or erodes the underlying competence. This is a question that the available evidence cannot yet settle, because — as noted above — when nearly everyone uses the tool, the counterfactual disappears. The honest position is that we are running a global, uncontrolled experiment on the formation of a generation of software engineers, and the results will not be legible for several years.
The discourse misses the reality in three specific ways:
The first miss is the apprenticeship question. The trade press treats coding as a body of knowledge — languages, frameworks, patterns — and asks whether AI tools augment or substitute for that knowledge. But programming has always been transmitted at least as much through apprenticeship: through the experience of writing bad code, having it rejected in review, debugging through the night, gradually building the intuition for which abstractions hold and which leak. The Canva interview-mandate, the Google adoption numbers, and the Goel skill-erosion essay all point at the same thing from different angles: the apprenticeship structure is being silently rewritten by people whose interest is in shipping code now, not in producing the next generation of senior engineers a decade from now.
The second miss is the redefinition of the term “skill.” When Sam Altman tells students to “get good at using AI tools”, he is not making a neutral observation about the future of work; he is selling a product whose value rises in proportion to how much of what we currently call coding can be reframed as the wrong skill. The AI-literacy frame, which our April 2025 critical analysis of AI literacy coverage already noted was being mobilized to justify procurement decisions, performs a similar move at the institutional level. Calling something “literacy” makes it sound foundational; calling it “tool use” makes it sound trivial. The vendors get to choose which framing is in the air.
The third miss is the most fundamental, and it is the one How to Speak Machine (John Maeda, 2019) gestures at in a passage that has aged with unsettling speed: “while they are brilliantly composing lines of code, they’re starting to wonder what happens when their incomplete computational systems learn to ‘autocomplete.’” The question is not whether developers should use autocomplete; it is whether a profession defined by its capacity to specify systems precisely can survive its absorption into systems whose specifications are, by design, opaque. The current discourse treats this as a tooling question. It is closer to a question about what kind of intellectual labor programming is going to be.
V. The longer view
The arc traced here — boosterism, backlash, surge, sober reassessment — is almost entirely a discourse arc, conducted in trade outlets and on conference stages by people whose careers depend on the outcome they describe. The reality arc is slower, and most of it has not happened yet. It will be visible in the senior engineers of 2032, in whether they can do the things their predecessors could do, in whether the systems they build fail in the ways software has always failed or in new ways we do not yet have vocabulary for. It will be visible in whose code review judgment becomes the bottleneck once the assistant has produced everything that looks plausible. It will be visible in what gets taught, in computer-science programs and in the workplace mentorships that no provost has yet noticed are dissolving.
What the longitudinal frame makes clear is that the question the trade press keeps asking — should developers use AI tools? — is the wrong question, and has been the wrong question for at least a year. The relevant questions are who decides what counts as a coding skill, who absorbs the cost when the tools are wrong, and who is responsible when the next cohort of engineers arrives without the foundations the previous cohort built by hand. None of those questions has a vendor with an incentive to answer it.
The skill that will turn out to matter is the one no one is selling a course in: the judgment to know when the autocomplete is wrong.
References
- AI software development has proven its worth but is risky — The Register
- The Challenges and Risks of AI Adoption in Software Development
- The Hidden Cost of AI-Assisted Development: Skill Erosion — Namanyay Goel
- AI in Software Development: Productivity Booster or Hidden Risk? — LinkedIn
- Global AI and the future of work — Forbes India
- 5 Free Online Courses To Learn AI In 2025 — Forbes
- How AI is Reshaping the Future of Work: Top Jobs That Will Thrive in 2030
- Declining interest in traditional coding languages mirrored by uptick in AI skills demand — ITPro
- ‘Get Good At Using AI Tools, Like We Got Good At Coding Back In School’: OpenAI Boss Sam Altman
- Generative AI is now a must-have tool for technology professionals — ZDNet
- ‘AI-driven reskilling is a must’
- AI and the Future of Work: Moving into a World Where Algorithms Replace Humans
- AI Coding Assistants Give Big-Tech Powers to Small Businesses — Pymnts
- From prompts to production: AI will soon write most code, reshape developer roles — Computerworld
- AI literacy now becomes essential — ResearchGate
- Canva makes AI use mandatory in coding interviews
- Developers heavily rely on AI for software development: Google — CIO Dive
- A 28-Year-Old AI Billionaire Reveals Game-Changing Advice For Teens
- Google says 90% of tech workers are now using AI at work — CNN
- Why AI literacy will future-proof your career
- HAI AI Index Report 2024 — Stanford Institute for Human-Centered AI
- The Age of Surveillance Capitalism — Shoshana Zuboff, 2019
- You Look Like a Thing and I Love You — Janelle Shane, 2019
- How to Speak Machine — John Maeda, 2019