AI NEWS SOCIAL · The Longer View · 2026-05-10 International/LATAM
Paying Interest on Borrowed Thought

I. The question of the week

Sometime in February 2025, a phrase that had been moving quietly through workshops and op-eds achieved escape velocity. Cognitive debt — borrowed from software engineering’s “technical debt,” itself borrowed from finance — became the dominant frame for a worry that had been articulated more clumsily before: that students and workers using generative AI were not merely learning differently but accumulating a deficit they would eventually have to repay. By autumn, an MDPI paper on social work education titled itself, without irony, Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education. The metaphor had finished its journey from worry to organizing principle.

This week’s arc tracks that phrase, and the broader argument it carries, across four quarters: from the panic of early 2025, triggered by a single Microsoft–Carnegie Mellon paper, through the cognitive-offloading literature of the spring, into the remediation industry of the summer, and arriving at the integration frameworks now being drafted by deans and provosts. The shape is familiar to anyone who has watched a moral panic become a curriculum reform. What is less familiar — what this piece will argue — is that the language of debt did real work along the way. It converted a contested question about what thinking is, and what schools are for, into an accounting problem with a managerial solution. The studies thinned out. The frameworks thickened. The thing the phrase was supposed to be naming — call it intellectual development, or judgment, or the capacity to be wrong in instructive ways — became progressively harder to see beneath the spreadsheet that was now being laid over it.

II. What we’ve been saying

The discourse has a precise origin. In February 2025, a study by Microsoft Research and Carnegie Mellon circulated under headlines that compressed its findings to a slogan: AI use correlates with reduced critical thinking. Within a week, Rolling Out was reporting that “excessive reliance on AI tools may be silently eroding workers’ ability to think critically,” R&D World was framing the same finding around an “antidote,” and PC-Tablet was offering strategies for “mindful AI usage.” The headline that landed hardest — Is AI Making You Dumber? Shocking Findings on Critical Thinking and Cognitive Skills! — pre-existed the study’s nuance and outlived it.

What is striking about the Q1 coverage is not its alarm but its uniformity. The same finding produced the same metaphors: erosion, atrophy, dependence, withering. A Lund University thesis from the same month, Critical thinking in the age of big data and AI, reached more equivocal conclusions about Big Data Analytics and white-collar decision-making, but received almost no popular pickup. The genre demanded loss, and loss was what got cited.

By Q2, the vocabulary had widened. Cognitive offloading, a term with a respectable cognitive-science pedigree, became the umbrella for a faster and looser set of claims. Psychology Today ran “Cognitive offloading shrinks mental muscles. Here are 4 ways students can stay sharp.” A Thai journal published The Negative Effects of Over-Reliance on AI Tools in IT Student Learning. A philosophy-adjacent piece in the Northeastern Chronicle asked whether we were “delegating thought.” An entry in the Futurity Proceedings posed the binary directly: Cognitive Erosion or Extension? The framing in each was almost identical: a stipulated harm, a list of corrective practices, an appeal to mindfulness. The studies underneath these pieces were heterogeneous in method and modest in scope; the rhetoric on top of them was not.

By midsummer, the phrase cognitive debt itself crystallized. eSchool News was now offering K–12 administrators a how-to: “How to avoid cognitive debt by building critical thinking skills.” The metaphor’s grammar was assertive in a way the earlier vocabulary had not been. Erosion describes; offloading describes; debt prescribes. It carries a creditor. It implies a repayment schedule. It tells the institution it has standing to collect.

Q3 saw the shift this column has seen before in other arcs: from diagnosis to therapy. Psychology Today ran “Why your mind risks going soft with AI, and how to sharpen it again.” Forbes Australia made it executive — “Why critical thinking is so crucial in the AI era” — and Springer’s Indian Journal of Surgery offered the same medicine to its own profession in Critical Thinking and Artificial Intelligence—Vital for Researchers, Reviewers, and Editors. The advice converged on a handful of moves: prompt more deliberately, verify outputs, ask better questions, do not paste the first draft. This is sound counsel. It is also, considered as a body of literature, the displacement of a pedagogical question — what does it mean to learn to think? — onto a hygiene question — how do you keep your AI use clean?

The Q4 turn closed the loop. Paying the Cognitive Debt treats the phrase as a settled fact and proposes an “experiential learning framework” for managing it. The conversation that began with a study warning of harm now operates under the assumption that harm exists, must be paid down, and can be administered. As this publication’s AI-literacy briefings of 2025-06-08 and 2025-07-20 noted in passing, the AI-literacy discourse has done this before: a problem named in the language of skill, a solution proposed in the language of curriculum, an institutional buyer waiting at the end of the sentence.

III. What’s been happening

Beneath the rhetoric, the empirical picture is thinner and stranger than the headlines suggest.

The Microsoft–CMU study that anchored Q1 measured self-reported critical thinking among 319 knowledge workers using generative AI in their jobs. Self-report is a notorious instrument; people who feel they have offloaded a task tend to report having offloaded the cognition involved in it, which is a different claim from the claim that the cognition has decayed. The Frontiers paper from the same year, Evaluating the impact of AI on the critical thinking skills of university students, used the Technology Acceptance Model and structural equation modelling on student survey data — again, self-perception, mediated by a framework designed to study adoption rather than cognition. The Oregon State University work summarized in Study warns AI reliance erodes STEM students’ thinking skills identifies a “self-reinforcing cycle” through interviews. The Thai IT-student paper draws on a single institution. The MDPI societies paper on cognitive offloading is itself an exploration of correlations.

None of this is to say the findings are wrong. It is to say that the evidentiary base for a phrase as confident as cognitive debt is, at present, a small set of correlational, largely self-report studies, mostly from 2024–2025, mostly conducted in the first two years of widespread generative AI use. The base rate for any longitudinal claim — what happens to a student’s reasoning capacity over four years of AI-assisted study — does not yet exist. The studies are reporting that students who use AI a lot also report relying on it; the inferential leap to “their faculties have atrophied” requires evidence the literature has not yet produced.

What has been happening, concretely, is the build-out of an institutional response in advance of the evidence. The eSchool News piece is pitched at district administrators. The Springer surgery piece is pitched at journal editors. The Psychology Today and Forbes pieces are pitched at individual professionals shopping for routines. The MDPI social-work paper is pitched at programme directors writing accreditation responses. Each of these audiences buys something: a workshop, a curriculum module, a consulting engagement, a software tool. Cognitive offloading in education: How AI Use Can Undermine Critical Thinking — published on a vendor site — illustrates the supply side of this market with unusual frankness.

On the other side of the ledger, the integration discourse has been busy too. Just Thinking: How Visual Tools and AI Can Unlock Critical Thinking in Education argues — citing a 2023 UNESCO report — that AI, properly scaffolded, can develop the same faculties the cognitive-debt literature says it erodes. How can students prepare for their careers? 4 takeaways from an AI architect advises undergraduates to lean in. These pieces are not in dialogue with the cognitive-debt literature; they run in parallel, sharing vocabulary, citing different studies, reaching opposite conclusions, and selling adjacent products.

What is largely missing from both stacks is engagement with what critical thinking actually is. The MIT Press Essential Knowledge volume on Critical Thinking is careful to distinguish “the critical thinking movement,” concerned with logic and argumentation, from “critical pedagogy,” which draws on postmodernism and deconstruction to “ask questions about what we can really know.” The popular literature on cognitive debt uses critical thinking as if it were one thing — an unambiguous capacity measurable on a Likert scale — and treats its decline or preservation as the relevant outcome. The Critical Thinking Complete text reminds its reader that critical thinkers “differentiate forms of questions because the form of the question determines the kind of thinking being called for.” It is worth asking what form of question the cognitive debt literature is itself asking, and what form of thinking it calls for.

The other quiet development worth noting is on the bias side. Critical Thinking: Your Guide to Effective Argument defines publication bias as “the temptation for journals to publish research with positive or striking outcomes in preference to other, equally valid research that demonstrates a lack” of effect. Almost every study driving the cognitive-debt arc reports an effect; the null results, if they exist, have not yet found a venue or a headline. Hindsight bias — the same source’s term for “treat[ing] unforeseen events as though they were foreseeable” — operates here as well: the after-the-fact certainty that of course AI was going to damage thinking, of course the schools should have prepared, of course we now need the framework.

IV. Where they meet, where they miss

The rhetoric and the reality meet at one point: students and workers using generative AI heavily are reporting, with some consistency, that they feel they are thinking less. This is not nothing. Self-report is weak evidence for capacity but reasonable evidence for experience, and the experience being reported — that the AI is doing the part of the work that used to feel like thinking — deserves to be taken seriously. The pedagogical worry is real even if the neurological one is unproven.

They miss in three ways, each of which the cognitive debt frame helps to obscure.

First, the metaphor smuggles in an accountant. Debt has a creditor, a balance, a repayment schedule. Once cognition is debt, the institution that “holds” it — the university, the employer, the credentialing body — acquires the standing to set terms. The “experiential learning framework” of the social-work paper is the natural outcome: a managed-repayment plan for a debt the framers also defined. As Critical Thinking: Your Guide to Effective Argument notes in its discussion of decision-making heuristics, the form a problem is given largely determines the range of solutions deemed acceptable. Framed as debt, the problem can only be paid; framed as a pedagogical question, it might require schools to change what they teach and how they test.

Second, the discourse treats critical thinking as a single thing measurable from outside. Critical Thinking - The MIT Press Essential Knowledge series is explicit that the practice of reasoning involves not only logic and argumentation but the harder, slower work of critical pedagogy — “ask[ing] questions about what we can really know.” The studies driving the cognitive-debt headlines do not measure that second register, because there is no Likert scale for it. What they measure, well or poorly, is the perceived effort of formal problem-solving. The slippage between “students report exerting less analytical effort on prompt-able tasks” and “students are losing the capacity for judgment” is the slippage on which the entire genre depends.

Third — and this is where the column’s editorial commitment is sharpest — the remediation literature sells the cure separately from the diagnosis. The same outlets that report the harm sell the prompts, the workshops, the curricula, the visual-thinking tools, the mindfulness routines. The reader is invited to worry, then invited to buy. The articles do not lie; they are, individually, often careful. In aggregate, they form a market. The earlier briefings on AI literacy from this publication — the 2025-02-23 critical analysis and the 2025-03-23 critical analysis — flagged a version of this pattern in the literacy space: a competency, once named, becomes a product, and the institution that names it most assertively gets to sell the certification.

What the discourse misses, most consequentially, is its own reflexivity. A literature on critical thinking that does not apply critical thinking to itself — that does not ask whether cognitive debt is a useful metaphor or a captured one, whether the studies measure what they claim, whether the proposed frameworks address the diagnosis or merely accompany it — is performing the very deficit it warns about. The Critical Thinking Complete text observes that critical thinkers “learn how to think sociologically and, thus, how to recognize when [their] ideas are controlled by social rituals, expectations, and taboos.” The ritual currently controlling the cognitive-debt conversation is the institutional procurement cycle.

V. The longer view

There is a real question buried in this arc, and it is not the one the headlines have been asking. The question is not whether generative AI weakens or strengthens critical thinking — a binary that the existing evidence cannot adjudicate and that the metaphor of debt is poorly designed to hold. The question is what schools and workplaces are for, once a substantial portion of the formal reasoning they used to demand can be performed by a machine in seconds. That question is hard. It implicates curriculum, assessment, credentialing, labour markets, and the public mission of education in ways that no “experiential learning framework” will resolve from inside a single department. It is also the question that the cognitive-debt vocabulary helps institutions postpone, by converting a structural problem into an individual hygiene practice and selling the soap.

The longitudinal pattern is clear enough now to name: a thin empirical base, a thick metaphor, a remediation market, and an integration framework arriving in time to absorb both the worry and the budget. The reader who has followed this column’s other arcs will recognize the shape. What is owed here is not interest on borrowed thought; what is owed is the harder accounting — of what we were teaching before the tools arrived, and whether we knew why.

References

  1. AI dependency weakens critical thinking, study finds
  2. AI Reliance: How Microsoft’s Study Reveals Critical Thinking Skills Are Being Eroded
  3. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking
  4. Are We Giving Artificial Intelligence Too Much Power? The Ethics Of Delegating Thought
  5. Cognitive Erosion or Extension? The Psychological Impact of Outsourcing Thinking to AI
  6. Cognitive offloading in education: How AI Use Can Undermine Critical Thinking
  7. Cognitive offloading shrinks mental muscles. Here are 4 ways students can stay sharp.
  8. Critical Thinking and Artificial Intelligence—Vital for Researchers, Reviewers, and Editors
  9. Critical thinking in the age of big data and AI
  10. Evaluating the impact of AI on the critical thinking skills of university students
  11. How can students prepare for their careers? 4 takeaways from an AI architect
  12. How to avoid cognitive debt by building critical thinking skills
  13. Is AI Making You Dumber? Shocking Findings on Critical Thinking and Cognitive Skills!
  14. Just Thinking: How Visual Tools and AI Can Unlock Critical Thinking in Education
  15. Microsoft and CMU: GenAI may undercut critical thinking, but there’s an antidote
  16. Paying the Cognitive Debt: An Experiential Learning Framework for Integrating AI in Social Work Education
  17. Study warns AI reliance erodes STEM students’ thinking skills
  18. The Negative Effects of Over-Reliance on AI Tools in IT Student Learning
  19. Why critical thinking is so crucial in the AI era
  20. Why your mind risks going soft with AI, and how to sharpen it again.
  21. Critical Thinking - The MIT Press Essential Knowledge series
  22. Critical Thinking - Complete
  23. Critical Thinking: Your Guide to Effective Argument