AI NEWS SOCIAL · Category Report · 2026-04-19
Higher Education — the Week’s Arc

The Arrival Without a Reception Committee

Something peculiar has happened to higher education over the past three years, and it is not what the press releases or the panic pieces have told us. A technology of genuine cognitive consequence arrived on campuses in November 2022, and institutions responded less by thinking about teaching and learning than by writing policies about policies. The archive of the last thirty-six months now contains hundreds of framework documents, provostial memoranda, task-force reports, and “points to consider” briefings — a governance literature so voluminous that it has begun to obscure the pedagogical question it was ostensibly convened to address. When Stephen Marche warned in Will ChatGPT Kill the Student Essay? that the humanities had weeks, not years, to respond, he was right about the timescale and wrong about the response: the response, when it came, was administrative rather than curricular.

The sculpture professor who asks what is actually happening — not what is being said to happen — deserves a map rather than a manifesto. The map has three overlapping territories: a governance discourse dominated by administrators and consultancies; a pedagogical discourse produced mostly by individual faculty and small research groups; and a student discourse that exists almost entirely as behavior rather than commentary. These territories barely touch. The governance documents describe a world of principled frameworks and institutional readiness; the pedagogical research describes a world of failing assessments, anxious instructors, and uncertain learning outcomes; the students, meanwhile, have already integrated generative AI into the substrate of their coursework in ways that no policy document has caught up with. A recent global survey in Higher education students’ perceptions of ChatGPT: A global study of early reactions found that students across dozens of countries had formed coherent, largely positive working relationships with the technology within months of its release — well before most of their universities had produced a single official sentence about it.

The argument I want to develop here is that the imbalance among these three discourses — governance producing frameworks faster than anyone can apply them, pedagogy documenting classroom realities that policy has not absorbed, students running ahead of everyone — is not a temporary lag. It is the shape that AI adoption in higher education has taken and will continue to take unless institutions deliberately reconstruct the conversation. What is missing, and what deserves to be named, is a serious partnership framing: a mode of engagement in which faculty, students, and administrators jointly determine what it means to learn alongside these systems, rather than around them.

The Governance Bloat

The most striking feature of the AI-in-higher-education literature is how much of it is about procedure rather than practice. Open Toward an AI-Ready University from the University of Toronto, or the Standards and Recommendations for the Use of Generative AI in Teaching and Learning at Northeastern, or the Creating the AI-Enabled Community College task-force report from Achieving the Dream, and you will find the same architecture: a values statement, a risk register, a principles list, a governance structure, an implementation roadmap. These are serious documents produced by serious people, and they are not wrong. What they mostly do not contain — because their genre does not accommodate it — is a description of what happens in a particular classroom when a particular instructor confronts a particular cohort of students who have all used ChatGPT on the reading.

The European picture is similar. The large comparative study Analysis of Artificial Intelligence Policies for Higher Education in Europe, assembled by Stracke and colleagues across multiple countries, documents a continent-wide flurry of policy production in which strategic clarity consistently outruns pedagogical specificity. In the United Kingdom, the Policy and guidance on the use of generative artificial intelligence in UK higher education review finds the same pattern: institutions have produced elaborate permission structures, but the question of how a history tutorial should actually change when students can produce plausible 1,500-word essays in forty seconds remains, as it were, somebody else’s problem. Even the most practically oriented guides, such as Texas A&M’s Use Guidelines and Ethics resource, tend to operate at the level of permission and prohibition rather than curricular redesign.

There are honorable attempts to close the gap. The Responsible Adoption of Generative AI in Higher Education: Developing a “Points to Consider” Approach Based on Faculty Perspectives paper is explicit about the limits of top-down frameworks and tries, through sustained consultation with faculty, to move the conversation from compliance to craft. Its central insight — that principles cannot substitute for context, and that every discipline will need its own negotiated settlement — is the insight most of the governance literature avoids. Mark Coeckelbergh’s argument in AI Ethics (2020) that AI governance needs “more room for bottom-up next to top-down” and must listen to “researchers and professionals who work with AI in practice” is nowhere more applicable than here. The governance documents read as though written by people for whom the classroom is a rumor.

The cost of this imbalance is not merely aesthetic. It shapes what institutions measure, what they reward, and what they protect. A university that has produced a 40-page AI strategy can tell its trustees it is responding; it cannot tell its adjuncts how to grade.

What the Students Are Actually Doing

If the governance discourse is long on principles and short on practice, the empirical literature on student use has the opposite problem: it is overflowing with behavioral data that the policy documents seem not to have read. The Systematic Rapid Review of Empirical Research on Students’ Use of ChatGPT in Higher Education synthesizes dozens of studies and finds a remarkably stable picture across contexts: majorities of students are using generative AI regularly, most are using it for tasks that sit in the grey zone between permissible assistance and prohibited delegation, and almost none are being guided by their instructors in how to do so well. The Navigating the Complexity of Generative Artificial Intelligence in Higher Education: A Systematic Literature Review reaches compatible conclusions from a different angle, and adds that the research base, though growing fast, remains skewed toward STEM disciplines and English-language institutions.

The texture of student use is more interesting than the frequency. An exploratory study of university students reported in Using Generative AI to Enhance Experiential Learning: An Exploratory Study of ChatGPT Use by University Students documents something policy documents rarely acknowledge: students are not primarily using these tools to cheat on essays. They are using them to unstick themselves — to get an initial purchase on a confusing reading, to break through a drafting impasse, to translate technical jargon into plain terms, to simulate a tutor at 2 a.m. when no human tutor exists. The Indonesian case study in ChatGPT: The Future Research Assistant or an Academic Fraud? finds a similar pattern with a sharper edge: students who lack access to well-resourced libraries, graduate mentors, or English-language support are using ChatGPT as a substitute infrastructure for the scholarly apparatus that wealthier institutions provide.

This is a redistribution of educational capital that deserves more attention than it has received. The novice programmers studied in Assessing novice programmers’ perception of ChatGPT: performance, risk, decision-making, and intentions describe the tool as a private tutor that never tires of their questions — an access good, not a shortcut. The global cohort in the early-reactions study cited above reports using AI primarily to close gaps that their instructors do not have time to close. A governance discourse that treats student use mainly as an integrity problem has missed what students themselves say they are doing.

There are, of course, darker patterns. The French-language study L’humanisation des chatbots pédagogiques: un levier pour l’apprentissage ou un risque de dépendance cognitive? raises the specter of cognitive dependence — the concern that students who offload every friction to a helpful machine may be weakening the very mental muscles that friction was supposed to build. This is a real question, and it cannot be answered by policy. It can be answered only by teachers who know their students and their discipline well enough to distinguish productive from corrosive use. Which is precisely the kind of judgment the governance literature does not equip anyone to make.

Faculty Between Skepticism and Surrender

The faculty perspective, when it appears in the discourse at all, is usually flattened into one of two caricatures: the Luddite holdout who wants to ban everything, and the early adopter who has rebuilt her syllabus around prompt engineering. The actual landscape, as the EDUCAUSE piece Listening to Skepticism: What Faculty Concerns About Generative AI Reveal carefully documents, is more textured and more interesting. Faculty skepticism, the article argues, is usually not technophobia; it is a set of specific, defensible pedagogical commitments — to the cognitive work of struggle, to the relational dimension of teaching, to disciplinary standards that were hard-won and are now being asked to justify themselves on short notice. When administrators interpret skepticism as a resistance problem to be managed, they lose access to the discipline-specific wisdom that is the only thing capable of making any AI policy actually work.

The comparative view from University Teachers’ Vantage Points on ChatGPT Integration in Education: Upsides and Downsides reinforces the point. Instructors interviewed across institutions describe a tangle of pressures: administrative expectations to “integrate” AI without additional training time, students who expect instructors to have worked out a coherent stance, colleagues split between enthusiasm and alarm, and a genre of professional-development workshops that tends to showcase possibilities without acknowledging costs. The graduate teaching assistants surveyed in Generative AI in Higher Education: Graduate Teaching Assistants’ Practice and Reflection on ChatGPT for Module Assessment describe an even more exposed position: they are the ones actually grading the papers whose authorship is now in question, and they have been given almost no institutional support for making those judgments.

Computer-science faculty, who might be expected to have the clearest relationship with the technology, are in some ways the most conflicted. The interview study Exploring the Role of AI Assistants in Computer Science Education: Methods, Implications, and Instructor Perspectives finds instructors torn between recognizing that their students will work with these tools in industry and worrying that the foundational struggles of learning to program — the productive frustration that builds intuition — are being short-circuited. A parallel classroom study, Beyond the Hype: A Cautionary Tale of ChatGPT in the Programming Classroom, found that students with ChatGPT access produced working code faster but demonstrably understood it less well, and struggled more on subsequent tasks where the tool was withdrawn. This is the kind of finding that ought to reshape pedagogy. It has not, so far, reshaped much policy.

What faculty across the literature repeatedly ask for is less a framework than a conversation: time, with colleagues, to work out discipline-specific answers to discipline-specific problems. The quick-start document ChatGPT and artificial intelligence in higher education: quick start guide is useful precisely because it is modest; it offers instructors a vocabulary rather than a verdict. Most faculty I have read do not want their institutions to tell them what to do. They want their institutions to acknowledge that what is being asked of them — to redesign assessment, rebuild academic-integrity norms, and retrain themselves in a technology that changes every quarter — is enormous, and that the absence of sustained institutional support for this work is itself a failure.

The Detection Delusion

One of the clearest places to see the governance-pedagogy imbalance at work is in the brief, embarrassing history of AI-detection tools. When ChatGPT appeared, a significant fraction of institutional energy went into procuring and deploying detection software, on the assumption that the academic-integrity problem was a technical one that could be solved technically. The technical literature demolished this assumption almost immediately. Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases compared the leading detectors on realistic student assignments and found their accuracy wildly insufficient for high-stakes use, with false-positive rates that would destroy student trust if the tools were actually relied upon. Student Mastery or AI Deception? Analyzing ChatGPT’s Assessment Proficiency and Evaluating Detection Strategies pushed the analysis further, showing that ChatGPT could pass a wide range of standard assessments at levels indistinguishable from competent students, while detection strategies remained brittle and easily circumvented by light paraphrasing.
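
The arithmetic behind that judgment is worth making explicit, because it explains why "pretty accurate" is not good enough for discipline cases. The sketch below is a back-of-the-envelope illustration, not a result from either study: the sensitivity, false-positive rate, and share of AI-written submissions are hypothetical numbers chosen only to show how quickly honest students end up among the accused.

```python
# Base-rate arithmetic for an AI-text detector in a grading pipeline.
# All numbers are hypothetical illustrations, not figures from the cited studies.

def detector_outcomes(n_submissions, ai_share, sensitivity, false_positive_rate):
    """Return (true positives, false positives, share of accusations that are wrong)."""
    ai_written = n_submissions * ai_share
    human_written = n_submissions * (1 - ai_share)
    true_positives = ai_written * sensitivity              # AI text correctly flagged
    false_positives = human_written * false_positive_rate  # honest work wrongly flagged
    flagged = true_positives + false_positives
    # Of everyone the detector accuses, what fraction is actually innocent?
    share_innocent = false_positives / flagged if flagged else 0.0
    return true_positives, false_positives, share_innocent

# A 2,000-student intro course; suppose 15% of submissions are AI-written,
# and the detector catches 80% of them at a 5% false-positive rate.
tp, fp, innocent = detector_outcomes(2000, 0.15, 0.80, 0.05)
print(f"Correctly flagged: {tp:.0f}")   # 240
print(f"Wrongly accused: {fp:.0f}")     # 85
print(f"Share of accusations hitting honest students: {innocent:.0%}")  # ~26%
```

Under even these charitable assumptions, roughly one accusation in four lands on a student who did nothing wrong; pushing the false-positive rate down typically trades away sensitivity, and light paraphrasing erodes both. That is the base-rate logic behind the conclusion that output-layer detection cannot carry high-stakes weight.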

The review essay ChatGPT and the rise of generative AI: Threat to academic integrity drew the lesson that the empirical studies implied: the integrity problem cannot be solved at the output layer, because the output is indistinguishable. It has to be solved at the design layer, by building assessments whose value to students depends on their actually doing the work. This is not a new idea — it is the oldest idea in progressive pedagogy — but it requires the one thing governance frameworks cannot provide, which is course-by-course redesign by experienced teachers with time to do it.

The spectacle of universities spending money on detection tools that do not work, while underinvesting in the pedagogical redesign that would make detection unnecessary, is a small but exact allegory of the larger imbalance. It is also a reminder of Kate Crawford’s observation in The Atlas of AI (2021) that AI systems “are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it.” Detection was never really a technical question. It was an institutional wish to convert a pedagogical problem into a procurement decision.

Hype Merchants and Banhammer Columnists

If the governance discourse is overly polite and the empirical literature is under-read, the popular commentary on AI in higher education oscillates between two equally distorting poles. At one end sits the enthusiast coverage exemplified by This CEO has teamed up with Google, Microsoft, and McKinsey, in which Sal Khan and a consortium of tech firms propose an applied-AI bachelor’s degree as the natural future of undergraduate education. The piece, like much of the genre, slides easily from “students should develop AI literacy” — a defensible claim — to “the curriculum should be reorganized around the products of companies who are also the interviewees.” The slippage is not accidental; it is the business model of the genre. At the other end sits the prohibitionist reflex captured in Colleges And Schools Must Block And Ban Agentic AI Browsers Now, whose title gives the argument: block first, think later.

Neither pole engages the classroom. The enthusiast coverage skips the pedagogical middle — the hard work of figuring out what actually helps students learn — and jumps straight to workforce readiness and degree restructuring. The prohibitionist coverage treats the problem as one of perimeter security, as though the tools were viruses that could be firewalled out of student lives. Both misread the situation because both evade the question a sculpture professor already knows how to ask: what is this particular practice for, what does it cultivate, and what does substituting a machine for part of it do to the thing we were trying to teach?

The more measured end of the popular literature, such as AI in Education: How Artificial Intelligence Is Changing Teaching and Learning, does better, but even these treatments tend toward the panoramic, describing the landscape without entering any of its buildings. The specific is where the real action is, and the specific is precisely what is hardest to convey in general-interest prose.

The Biases in the Machine

One area where the governance discourse and the empirical literature do meet — briefly, uncomfortably — is the question of bias. The scoping review Potential Societal Biases of ChatGPT in Higher Education catalogs documented biases along axes of gender, race, language, and cultural reference in the outputs of large language models deployed in educational contexts. These are not hypothetical. They affect what examples a student sees in a generated explanation, whose scholarship is cited, what writing styles are normalized, and whose English is treated as default. A university that has outsourced part of its explanatory apparatus to a model whose training corpus skews heavily Anglophone and heavily Western has made a curricular decision, whether or not it has acknowledged doing so.

This is the point at which the social dimension of AI ethics, in Coeckelbergh’s sense, becomes pedagogical rather than abstract. It matters which voices the model amplifies, which it flattens, which it simply cannot produce. Kate Crawford’s reminder that AI “is not an objective, universal, or neutral computational technique” lands harder in a classroom than in a keynote, because the classroom is where the cost of the illusion of neutrality is actually paid. And yet the governance documents, with their values statements about equity and inclusion, rarely specify what faculty should actually do when a model’s explanation of the French Revolution quietly centers the perspectives its training data happened to have most of.

The reward-based pedagogical approach sketched in Encouraging Responsible Use of Generative AI in Education: A Reward-Based Learning Approach is one of the more interesting concrete responses in the literature: rather than either banning use or passively permitting it, the approach builds assignments that reward students specifically for critical engagement with the model’s outputs — for spotting errors, challenging framings, comparing with human sources. This is, importantly, a pedagogical answer to a pedagogical problem, and it illustrates the kind of work that the governance literature tends to gesture at without providing.

The Missing Partnership

The deepest absence in the current discourse, and the one that the sculpture professor is perhaps best positioned to notice, is the missing language of partnership. The dominant framings are all asymmetric: governance speaks of compliance, policy speaks of permission, commerce speaks of integration, popular coverage speaks of threat or salvation. Almost no one speaks of a joint practice in which faculty, students, and perhaps thoughtful administrators are working out, together and in public, what learning with these systems should look like in a given discipline. The Faculty Presentations archive at National Louis University is a small example of what this mode can look like at scale: not policy, not prohibition, but faculty sharing specific experiments with specific assignments, with named costs and named benefits. It is the genre the field needs more of and produces least of.

Partnership language matters because it reframes the central question. The governance question is: how do we regulate this? The pedagogical question is: how do we teach with this? The partnership question is different: what do we — faculty, students, and the technology’s designers — owe one another as we figure out what learning means under these new conditions? The question admits that students have expertise the faculty do not (they have been using these tools intensively for three years), that faculty have expertise the students do not (they know what their discipline is for), and that neither group has the kind of expertise the technology’s designers have about what the systems can and cannot do. A conversation among these three is the conversation that is almost entirely absent from the documents I have been reading.

What would it take to build it? It would take institutional investment in the one currency that governance documents cannot manufacture, which is faculty time — time to redesign, to confer, to try failing experiments, to write honestly about what did not work. It would take student involvement that goes beyond the survey instrument and into genuine co-design. It would take administrators willing to tolerate a period of visible disarray rather than impose premature coherence. And it would take the recognition, harder to come by than it should be, that the arrival of a technology of this magnitude is not primarily a compliance event. It is an intellectual event, and intellectual events are addressed by intellectual communities, not by frameworks.

What Is at Stake

The stakes of getting this wrong are not catastrophic in the way the panic literature suggests, but they are more serious than the enthusiasts admit. They lie in the slow erosion of practices whose value is not immediately legible. The apprentice sculptor who never struggles with a difficult medium does not become a sculptor; the undergraduate who never writes a bad paragraph that her instructor marks up does not become a writer; the novice programmer who never debugs her own code does not become a programmer. The empirical literature is already showing this in specific domains — the Beyond the Hype study discussed above is only the clearest example — and the broader hypothesis, that cognitive offloading at scale will produce a generation of students who can produce polished artifacts without the understanding those artifacts used to index, is not fringe speculation. It is a reasonable extrapolation from what we already observe.

But it is equally true that the opposite error — treating the technology as an enemy of learning and trying to hold the line against it — will fail, and fail in ways that are bad for students. The global-study data in Higher education students’ perceptions of ChatGPT and the systematic review in A Systematic Rapid Review of Empirical Research on Students’ Use of ChatGPT in Higher Education both show that students are using the tools regardless of institutional stance, and that the main effect of prohibition is to drive use underground, deprive students of guidance, and concentrate the benefits on students who are already most resourced and most adept at getting around rules. Prohibition, in practice, is a regressive policy dressed as a principled one.

The path between these errors is narrow and passes through the work that only teachers can do: course by course, assignment by assignment, cohort by cohort, deciding what struggles are worth preserving, what tasks should be reimagined, what new capacities are now worth teaching. Listening to Skepticism is right that the faculty members most reluctant to embrace AI are often those most attuned to what their discipline is for; they are the people any institution would want designing its response, if the institution were willing to listen. The governance literature’s instinct to route around skepticism — to treat it as change-management friction — is the instinct most likely to produce bad policy. The partnership mode, in which skeptics are co-authors rather than obstacles, is the mode most likely to produce policy worth having.

Higher education will not be destroyed by AI, nor will it be saved by it. It will be changed, in ways that depend almost entirely on the quality of the conversations institutions choose to hold. The governance documents, the policy reviews, the task-force reports — these are not unimportant, but they are not the conversation. The conversation is the one that has not yet fully begun: the one in which faculty and students work out, with institutional support rather than institutional anxiety, what it means to teach and learn in the presence of a machine that can produce plausible answers to almost any question. That conversation is where the pedagogy is. It is where the stakes are. It is, so far, where too few of the documents are looking.

References

  1. A Systematic Rapid Review of Empirical Research on Students’ Use of ChatGPT in Higher Education
  2. AI in Education: How Artificial Intelligence Is Changing Teaching and Learning
  3. Analysis of Artificial Intelligence Policies for Higher Education in Europe (C. M. Stracke, D. Griffiths, D. Pappa, S. Bećirović, E. Polz, L. Perla, A. Di Grassi, S. Massaro, M. P. Skenduli, D. Burgos, V. Punzo, D. Amram, X. Ziouvelou, D. Katsamori, S. Gabriel, N. Nahar, J. Schleiss, P. Hollins; International Journal of Interactive Multimedia and Artificial Intelligence, vol. 9, no. 2, pp. 124-137, 2025; http://dx.doi.org/10.9781/ijimai.2025.02.011; https://core.ac.uk/download/646475361.pdf)
  4. Assessing novice programmers’ perception of ChatGPT: performance, risk, decision-making, and intentions
  5. Beyond the Hype: A Cautionary Tale of ChatGPT in the Programming Classroom
  6. ChatGPT and artificial intelligence in higher education: quick start guide
  7. ChatGPT and the rise of generative AI: Threat to academic integrity
  8. ChatGPT: The Future Research Assistant or an Academic Fraud?
  9. Colleges And Schools Must Block And Ban Agentic AI Browsers Now
  10. Creating the AI-Enabled Community College
  11. Detecting LLM-Generated Text in Computing Education: A Comparative Study for ChatGPT Cases
  12. Encouraging Responsible Use of Generative AI in Education: A Reward-Based Learning Approach
  13. Exploring the Role of AI Assistants in Computer Science Education: Methods, Implications, and Instructor Perspectives
  14. Faculty Presentations
  15. Generative AI in Higher Education: Graduate Teaching Assistants’ Practice and Reflection on ChatGPT for Module Assessment
  16. Higher education students’ perceptions of ChatGPT
  17. L’humanisation des chatbots pédagogiques: un levier pour l’apprentissage ou un risque de dépendance cognitive?
  18. Listening to Skepticism
  19. Navigating the Complexity of Generative Artificial Intelligence in Higher Education: A Systematic Literature Review
  20. Policy and guidance on the use of generative artificial intelligence in UK higher education
  21. Potential Societal Biases of ChatGPT in Higher Education
  22. Responsible Adoption of Generative AI in Higher Education: Developing a “Points to Consider” Approach Based on Faculty Perspectives
  23. Standards and Recommendations for the Use of Generative AI in Teaching and Learning at Northeastern
  24. Student Mastery or AI Deception? Analyzing ChatGPT’s Assessment Proficiency and Evaluating Detection Strategies
  25. This CEO has teamed up with Google, Microsoft, and McKinsey
  26. Toward an AI-Ready University
  27. University Teachers’ Vantage Points on ChatGPT Integration in Education: Upsides and Downsides
  28. Use Guidelines and Ethics
  29. Using Generative AI to Enhance Experiential Learning: An Exploratory Study of ChatGPT Use by University Students
  30. Will ChatGPT Kill the Student Essay?
