AI NEWS SOCIAL · Category Report · 2026-04-19
Social Aspects of AI — the Week's Arc

A strange asymmetry governs the public conversation about artificial intelligence. The harms are meticulously catalogued — bias audits, disparate-impact reports, wrongful-arrest tallies, classroom surveillance exposés — while the harmed themselves appear in the discourse largely as case studies, plaintiffs, or statistical silhouettes. The journalists, civil-liberties lawyers, academic researchers, and governance consultants who populate the citation stream have built something like an entire industry of documentation. Meanwhile the Indonesian gig driver, the Black student whose essay is downgraded, the French welfare claimant flagged by a fraud algorithm, the teenager arrested after a predictive-policing hit: these figures are conjured, named briefly, and then the conversation returns to its preferred subjects — model capabilities, ethical frameworks, regulatory posture.

This is not an accident of journalism. It is a structural feature of how AI discourse organizes power. When one examines the landscape of writing on the social aspects of AI, a single theme — ethical failures, harms, bias incidents — accounts for roughly two-fifths of all coverage. That concentration, which might look like vigilance, is worth interrogating. What does it mean that the most-discussed feature of AI is its documented wrongness? Who is doing the documenting, and for whom? Why does the production of harm-evidence so rarely translate into structural remedy? And crucially, whose voices are absent from the record even as their bodies, their labor, and their futures are the substance of it?

This essay argues that contemporary AI discourse has developed a peculiar grammar in which harms are named with increasing precision even as the agents of harm remain diffusely described, and in which the people affected appear as subjects of study rather than speakers of analysis. The 39.5% share devoted to ethical failures is not a victory of critical attention; it is, in part, a substitute for accountability. Documentation has become the preferred response to a problem whose solution would require confronting institutional power. As Old Law, New Bias: Applying Civil Rights Doctrine to Algorithmic Discrimination observes, existing legal frameworks have struggled to translate decades of harm scholarship into enforceable remedy — an observation that should unsettle anyone who assumes that naming a problem is the first step toward solving it.

The Archive of Harm and the Absence of the Harmed

Consider the extraordinary corpus that has accumulated around algorithmic wrongful arrest. More than a Dozen Wrongful Arrests Due to Police Reliance on Facial Recognition Technology is, on its face, a catalogue of injury: named individuals, specific jurisdictions, documented consequences. The ACLU has done the slow evidentiary work that journalism and civil rights litigation require. And yet notice the shape of the discourse that this work produces: the ACLU speaks, the police departments occasionally respond, the technology vendors rarely respond at all, and the arrested individuals appear as discrete episodes whose narrative function is to instantiate a pattern. The pattern is the product; the person is its raw material.

This is not a critique of the ACLU. It is a critique of a discursive economy in which advocacy organizations must translate experience into legible evidence before the conversation will admit it. AI-Powered Surveillance Is Turning the United States into a Digital Police State extends this documentation to predictive policing, license-plate recognition, and aerial surveillance — again with impressive specificity, again with the harmed population rendered in aggregate. Even the most sympathetic coverage of ACLU and 75 Organizations Sound Alarm on Meta’s Plan to Add Facial Recognition Technology to Ray-Ban and Oakley Eyeglasses proceeds from the premise that civil society speaks on behalf of a public that is not itself organized to speak. Meta, meanwhile, speaks for itself — through product launches, corporate blog posts, regulatory filings.

Kate Crawford’s argument in The Atlas of AI (2021) bears repeating here: “Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction. Its systems are embedded in social, political, cultural, and economic worlds, shaped by humans, institutions, and imperatives that determine what they do and how they do it.” The discourse largely agrees with Crawford in the abstract, but its practical grammar continues to treat AI as an impersonal actor — “the algorithm decides,” “the system flagged,” “the model predicted” — precisely because the alternative would require naming the humans, the institutions, and the imperatives responsible. Documentation proliferates; attribution recedes.

The French case is instructive. La reconnaissance faciale utilisée illégalement depuis 2015 par la police et la gendarmerie reveals that French police have used facial recognition illegally for nearly a decade — an astonishing fact, when one pauses to absorb it, and one that the follow-up investigation in Révélations Disclose : la police française utilise illégalement la reconnaissance faciale specifies further. Note the verb tense: utilise, present tense, ongoing. The documentation is robust. The consequences for the institutions and officials involved are, to date, negligible. A decade of illegal deployment has produced a CNIL investigation and a round of investigative journalism, but no systematic accounting of who authorized what, who was surveilled, or what redress is available to those who were.

The 39.5% and Its Discontents

When a single thematic concern — documented ethical failure — occupies roughly two-fifths of the discourse, we should ask what else is not being discussed. The dominance of harm-documentation crowds out at least three other conversations: the mechanics of institutional power that deploys these systems; the affirmative construction of alternatives; and the perspectives of the communities that experience the harm most acutely but whose testimony is treated as evidentiary raw stock rather than as analytic material.

Mark Coeckelbergh anticipates this problem in AI Ethics (2020): “With regard to the ‘who’ question concerning AI ethics, we need more room for bottom-up next to top-down, in the sense of listening more to researchers and professionals who work with AI in practice and indeed to people potentially disadvantaged by AI.” The formulation is polite but the implication is sharp. The current AI ethics discourse, even at its most critical, tends to reproduce a top-down structure: experts speak about the disadvantaged, rather than alongside them. The proliferation of ethics frameworks, audits, and governance recommendations — many of them excellent — nevertheless instantiates a model in which those with institutional standing interpret the harms of those without it.

The education literature exemplifies this pattern. A Systematic Review of AI Ethics in Education surveys an impressive body of scholarly work on bias, fairness, and pedagogical harm. FairAIED: Navigating Fairness, Bias, and Ethics in AI in Education offers a technical taxonomy of the problem space. These are serious contributions. Yet the ratio of meta-analysis to intervention is striking: we now know a great deal about how AI systems in education fail students of color, students with disabilities, and students whose first language is not English, and comparatively little about what successful institutional resistance to harmful deployment actually looks like. AI Shows Racial Bias When Grading Essays — and Can’t Tell Good Writing from Bad demonstrates, with empirical force, that automated essay-scoring systems penalize Black students’ writing. The demonstration is now several years old. The deployment of automated scoring has, in the intervening period, expanded.

The research literature on algorithmic bias in educational prediction tells a similar story. Are algorithms biased in education? Exploring racial bias in predicting college success establishes the presence of racial bias in predictive models used for college admissions and retention. Study Uncovers Racial Bias in University Admissions and Decision-Making AI Algorithms adds institutional specificity. Yet the response from admissions offices, vendors, and accrediting bodies has been to commission further studies — often conducted by the same firms that developed the biased systems. The documentation loop closes without the system changing.

This is the structural pathology of the 39.5%. Harm-documentation is a necessary condition for accountability, but it is not a sufficient one. When the discourse treats documentation as if it were action, the documentation itself becomes a form of managed deferral. Institutions learn to absorb critique without modifying behavior; vendors learn to respond to audits with cosmetic adjustments; regulators learn to treat reports as milestones rather than mandates.

The Blame Economy: Students, Workers, and the Strategic Diffusion of Accountability

Nowhere is the asymmetry of AI discourse clearer than in the allocation of blame. Consider the disproportionate attention paid to student use of generative AI. ChatGPT à l’école : entre tabou et encouragement, le dialogue compliqué entre professeurs et élèves frames the question as one of student ethics and pedagogical adaptation. Universities must respond to students’ emotional reliance on AI invokes the language of addiction and dependency. The greatest risk of AI in higher education isn’t cheating — it’s the erosion of learning itself elevates the discourse somewhat, but still locates the crisis at the site of the student. The OpenAI executive who decided to deploy a chatbot optimized for engagement to a captive audience of adolescents is, in these accounts, largely absent. The university administrator who signed the site license is absent. The venture capitalist whose thesis depends on AI’s deep penetration into education is absent.

This is not coincidence. It is a discursive habit with a long pedigree: when technology produces social harm, the user is foregrounded as the locus of moral responsibility, while the producer recedes into a structural position of neutrality. The tobacco executive, the opioid manufacturer, the social-media platform — each has benefited from the same rhetorical move. In the case of AI in education, the move is especially visible because the student-user is already positioned as a subordinate within the institutional hierarchy, and therefore available for disciplinary critique in ways that the vendor is not.

Schools are using AI to spy on students and some are getting arrested inverts this frame with unusual clarity. Here the students are not the offenders but the surveilled, and the AI is not a temptation but a disciplinary apparatus operated by schools and vendors in tacit partnership with police. The article documents cases in which children have been arrested on the basis of algorithmic interpretations of their text messages and search queries. This is not a story about student ethics. It is a story about the extension of carceral logic into pedagogy, mediated by software whose vendors have every incentive to expand deployment. The fact that this framing is treated as a distinct story rather than the same story as the “students cheating” narrative reveals how thoroughly the blame economy structures what counts as a single subject of discussion.

The same asymmetry governs labor discourse. They claim Uber’s algorithm fired them. Now they’re taking it to court describes drivers whose livelihoods were terminated by automated decision systems whose logic they could not inspect and whose outputs they could not contest. Algorithms don’t care: how AI worsens the double burden for Indonesia’s female gig workers extends the analysis to a population that Western AI discourse almost never addresses: women performing care work and platform labor simultaneously, under conditions in which the algorithm’s indifference to gendered time constraints functions as a material form of discrimination. Lawsuit claims discrimination by Workday’s hiring tech and the accompanying analysis in What the Workday Lawsuit Reveals About Future of AI Hiring document the first serious legal challenges to algorithmic hiring discrimination. When AI plays favourites: How algorithmic bias shapes the hiring process adds scholarly texture.

Across these cases, the accountability question recurs: who is the defendant? The vendor builds the system; the employer deploys it; the algorithm decides; the worker bears the consequence. Each party points to the adjacent one. The legal architecture that would assign responsibility along this chain is still being constructed, and, as the Workday case suggests, the construction is slow, expensive, and borne by the plaintiffs themselves. The discourse around these cases is robust; the remedy remains embryonic.

The Geography of Silence

The structural silences of AI discourse are geographic as well as demographic. The center of gravity — measured by citations, funding, regulatory attention, and journalistic coverage — sits in the United States and Western Europe. The peripheries appear, when they appear, as destinations for harm that originated elsewhere. The cultural cost of AI in Africa’s education systems and its French-language counterpart L’IA dans l’éducation africaine : progrès ou perte de mémoire raise a question that the dominant discourse rarely poses: what happens when educational systems trained on Western corpora, optimized for Western pedagogical assumptions, and priced for Western markets are deployed in contexts whose linguistic, cultural, and epistemic traditions they were never designed to accommodate?

The answer, which UNESCO articulates with notable directness, is cultural erosion — a process in which African languages, historical narratives, and knowledge systems are progressively devalued not by explicit colonial mandate but by the quieter logic of algorithmic defaults. This is a form of harm that the dominant bias-and-fairness literature is poorly equipped to name, because it treats representational harm as a question of demographic parity within a presumed common framework rather than as a question of whose framework gets to be the common one.

The United Nations human rights office has gestured at the broader pattern. Racismo e IA: “Los sesgos del pasado abren la puerta a los sesgos del futuro” frames algorithmic bias as the technological extension of historical racial hierarchies — a framing that, while familiar in critical race scholarship, remains marginal in mainstream AI ethics. The question the OHCHR raises is uncomfortable for the dominant discourse because it implies that technical fixes cannot remedy structural wrongs: an AI system trained on data produced by a racist criminal justice system will reproduce the system’s racism even after the most scrupulous debiasing, because the debiasing operates on the system’s outputs rather than on the conditions of its inputs.
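
A toy sketch can make that last point concrete. In the synthetic example below (all numbers invented, not drawn from any cited study; the group names, rates, and thresholds are purely illustrative), two groups behave identically, but one group's behaviour is recorded as an offence nearly twice as often. A model that faithfully predicts those recorded labels flags that group roughly twice as often; choosing per-group thresholds equalizes the output rates, but the biased labels that every subsequent audit and retraining run will treat as ground truth remain untouched, which is what it means for debiasing to operate on the outputs rather than on the conditions of the inputs.

```python
# Synthetic illustration only: invented numbers, no real dataset or cited system.
import random

random.seed(7)
N = 100_000

def record_prob(group, behaviour):
    # Biased recording process: identical behaviour, but group B's behaviour
    # is nearly twice as likely to end up in the data as an "offence".
    return min(1.0, behaviour * (0.9 if group == "B" else 0.5))

rows = []
for _ in range(N):
    group = random.choice("AB")
    behaviour = random.random()          # same underlying distribution for A and B
    label = random.random() < record_prob(group, behaviour)
    rows.append((group, behaviour, label))

def score(group, behaviour):
    # The best predictor of the recorded label is the recording probability
    # itself; any model trained on these labels approximates it.
    return record_prob(group, behaviour)

def flag_rate(group, flags):
    idx = [i for i, (g, _, _) in enumerate(rows) if g == group]
    return sum(flags[i] for i in idx) / len(idx)

# 1) One global threshold: group B is flagged about twice as often,
#    although the two groups behave identically by construction.
flags_raw = [score(g, b) > 0.35 for g, b, _ in rows]
print("flag rates, single threshold:",
      {g: round(flag_rate(g, flags_raw), 3) for g in "AB"})

# 2) Output-level "debiasing": per-group thresholds chosen so both groups
#    are flagged at the same rate. The disparity in outputs disappears.
target = flag_rate("A", flags_raw)

def group_threshold(group):
    scores = sorted(score(g, b) for g, b, _ in rows if g == group)
    return scores[int(len(scores) * (1 - target))]

thresholds = {g: group_threshold(g) for g in "AB"}
flags_eq = [score(g, b) > thresholds[g] for g, b, _ in rows]
print("flag rates, per-group thresholds:",
      {g: round(flag_rate(g, flags_eq), 3) for g in "AB"})

# 3) But the recorded labels -- the data the system is trained on and
#    audited against -- still over-represent group B, because the
#    adjustment never touched the recording process that produced them.
label_rates = {
    g: sum(l for gg, _, l in rows if gg == g) / sum(1 for gg, _, _ in rows if gg == g)
    for g in "AB"
}
print("recorded 'offence' rates in the training data:",
      {g: round(label_rates[g], 3) for g in "AB"})
```

The adjustment in step 2 is cosmetic in exactly the sense the paragraph above describes: it changes what the system reports, not what the data records.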

The French welfare case pushes the analysis further. France: Discriminatory algorithm used by the social security agency must be stopped describes the Caisse nationale d’allocations familiales’ use of a risk-scoring algorithm that disproportionately flagged disabled claimants, single mothers, and low-income recipients for fraud investigation. The algorithm was not a rogue deployment; it was an officially sanctioned instrument of state administration. Its targets were precisely the populations that the welfare state exists, in principle, to serve. The French government’s response — initial denial, then gradual acknowledgment under legal pressure — follows the same pattern as the facial-recognition case: the documentation accumulates, the institution equivocates, the deployment continues. Lutter contre les discriminations produites par les algorithmes et l’IA documents the broader landscape of algorithmic discrimination in French administration. The Défenseur des droits — an office whose specific function is to identify such discrimination — has produced careful reports. The reports have not, to date, dismantled the systems they describe.

Coeckelbergh’s question, again from AI Ethics (2020), is the one the geography of silence forces us to ask: “Who will have access to the technology and be able to reap its benefits? Who will be able to empower themselves by using AI? Who will be excluded from these rewards?” The dominant discourse addresses the first question at length, the second occasionally, and the third almost never as a question in its own right — only as the shadow of the first two.

The Affect Industry and the Capture of Interior Life

One domain where the asymmetry of power in AI discourse becomes especially conspicuous is emotion recognition. Crawford notes in The Atlas of AI (2021) that affect-recognition systems are part of “an industry predicted to be worth more than seventeen billion dollars,” despite the fact that “there is considerable scientific controversy around emotion detection, which is at best incomplete and at worst misleading.” The gap between the industry’s commercial momentum and its empirical foundation is not a secret. It is extensively documented in technical venues such as the Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis, which lays out, in careful computational-linguistics register, the specific ways in which current affect-recognition techniques misrepresent the phenomena they claim to measure.

And yet the industry continues to grow. Deployments proliferate in hiring, in education, in border control, in advertising. The discursive response — academic ethics sheets, civil-society critiques, occasional regulatory gestures — operates in a different temporal register from the deployment itself. Papers take years; deployments take weeks. This temporal asymmetry is itself a feature of power: the critical discourse always arrives slightly too late to matter for the decisions already made, yet early enough to inform the next round of deployments, which vendors proceed to undertake anyway.

The affect-recognition case also exposes how thoroughly the dominant discourse accepts the frame that the vendors themselves have established. The question that gets asked is: Does this emotion-recognition system work accurately? The question that largely does not get asked is: What business does any institution have making automated inferences about another person’s interior state in the first place? The first question admits incremental improvement; the second admits only refusal. The discourse has chosen the first.

Law as Arriving Spectator

The legal system, in theory, is where harm-documentation ought to translate into accountability. In practice, American and European legal frameworks are adapting to algorithmic harm at a pace that the harms themselves have long since outstripped. AI Lawsuit Pushes the Boundaries of AI Litigation — and May Signal a New Wave catalogues the emerging case law with the cautious optimism of practitioners watching a doctrine take shape. The cases are individually significant — each one incrementally extends the applicability of existing civil rights law to algorithmic decision-making — but the aggregate effect remains modest relative to the scale of deployment.

The problem is not merely that law moves slowly. It is that the doctrinal categories of existing civil rights law were designed for a world in which discrimination occurred through identifiable human decisions. Old Law, New Bias: Applying Civil Rights Doctrine to Algorithmic Discrimination articulates the difficulty: disparate-impact doctrine, the most promising existing vehicle for algorithmic-discrimination claims, requires evidentiary showings that plaintiffs are poorly positioned to make against defendants who control the training data, the model weights, and the deployment logs. The burden of proof falls on the very people whose access to the relevant evidence is systematically denied.

Meanwhile the more novel harms — synthetic media used for abuse, generative tools repurposed for exploitation — push at the outer limits of what existing law can address at all. Law enforcement is trying to combat abusive AI. Experts say easier said than done describes the specific difficulty of prosecuting AI-generated child sexual abuse material under statutes written for photographic evidence of actual abuse. The moral clarity of the harm is not matched by the legal infrastructure to address it. Law enforcement adapts; the adaptation is partial and jurisdictionally uneven; the harm scales faster than the response.

Municipal and state-level responses have occasionally moved faster than federal ones. San Francisco prohíbe a policía usar reconocimiento facial describes one of the earliest municipal bans on police facial recognition, a policy that has since been emulated, modified, and in some jurisdictions reversed. The pattern of these local responses — bold initial action, gradual erosion under political and technological pressure — suggests that regulatory victories in the AI space are provisional in a way that victories in other civil-rights domains were not. The technology keeps improving; the vendors keep marketing; the police keep wanting; the ban keeps looking, to its opponents, like an impediment to public safety rather than a protection of liberty.

One way of reading the emerging litigation landscape is as a bet on the residual legitimacy of courts as fora in which the harmed can compel a response from the powerful. It is a costly bet. The Workday plaintiffs, the Uber plaintiffs, the ACLU’s facial-recognition clients — each of them bears years of delay, financial exposure, and emotional strain for the chance of a ruling whose precedential reach remains uncertain. The asymmetry of stakes is glaring. For the individual plaintiff, the case is a life. For the defendant corporation, the case is a line item.

Toward a Different Grammar

What would it mean to shift the grammar of AI discourse away from the architecture of managed documentation and toward something closer to shared authorship? It would mean, at minimum, three things.

It would mean treating the testimony of affected populations as analytic contribution rather than evidentiary raw material. The Indonesian gig workers, the French welfare claimants, the Black students whose essays are algorithmically downgraded, the students surveilled by their own schools, the African students whose cultural inheritance is being quietly overwritten — these populations possess the kind of specific, situated knowledge about algorithmic harm that no external researcher can fully reconstruct. The discourse does not lack their stories; it lacks their interpretations. There is a difference between being cited and being read.

It would mean naming the agents of harm with the same precision that the harms themselves are named. When the discourse shifts from “the algorithm decided” to “the vendor built, the institution deployed, the regulator failed to oversee, the executive authorized,” the possibility of accountability begins to take shape. The passive voice is not a neutral stylistic choice; it is a structural defense of the powerful.

And it would mean recognizing that the 39.5% concentration on ethical failures is not, in itself, a sign of a healthy critical culture. It is, in its current form, closer to what the sociologist Stanley Cohen called “states of denial” — an elaborate acknowledgment that operates as a form of non-action. The ACLU’s documentation is invaluable; the academic literature is necessary; the journalism is courageous. And none of it will produce structural change without a concomitant conversation about institutional redesign, regulatory enforcement, and the transfer of interpretive authority from those who study harm to those who experience it.

The reader of AI discourse should, after closing this essay, notice something uncomfortable in the familiar cadences of the coverage. Notice who speaks and who is spoken about. Notice how the algorithm is personified precisely in the moments when a human agent could be named. Notice how the catalog of harm grows richer year by year even as the mechanisms of deployment continue largely unimpeded. Notice, in A Systematic Review of AI Ethics in Education and in FairAIED: Navigating Fairness, Bias, and Ethics in AI in Education, the implicit division of labor in which ethicists produce frameworks, vendors produce systems, and the people subject to the systems produce neither frameworks nor systems but only data and occasional litigation.

The power in AI discourse sits, at present, with those who can convert harm into publication, publication into reputation, and reputation into the next grant, consulting contract, or regulatory appointment. This is not the same as the power of the vendors — who sit further up the chain — but it shares with that power a fundamental orientation away from the populations at the bottom of it. The harmed speak when they are summoned to speak, in the registers the discourse has prepared for them. The question of what they might say if the summoning were different — if the registers were their own, if the frameworks were built from their experience rather than onto it — is the question that the 39.5% has so far failed to ask itself.

Until it does, the archive of harm will continue to expand, each entry more specific than the last, each victim named more carefully, each technical analysis more sophisticated — and the systems themselves will continue to ship, to scale, to decide. The grammar of who speaks determines what can be said. And what cannot yet be said, in the dominant AI discourse, is the sentence that begins: We, who are the subject of these systems, refuse.

References

  1. A Systematic Review of AI Ethics in Education
  2. ACLU and 75 Organizations Sound Alarm on Meta’s Plan to Add Facial Recognition Technology to Ray-Ban and Oakley Eyeglasses
  3. AI Lawsuit Pushes the Boundaries of AI Litigation — and May Signal a New Wave
  4. AI Shows Racial Bias When Grading Essays — and Can’t Tell Good Writing from Bad
  5. AI-Powered Surveillance Is Turning the United States into a Digital Police State
  6. Algorithms don’t care: how AI worsens the double burden for Indonesia’s female gig workers
  7. Are algorithms biased in education? Exploring racial bias in predicting college success
  8. ChatGPT à l’école : entre tabou et encouragement, le dialogue compliqué entre professeurs et élèves
  9. Ethics Sheet for Automatic Emotion Recognition and Sentiment Analysis
  10. FairAIED: Navigating Fairness, Bias, and Ethics in AI in Education
  11. France: Discriminatory algorithm used by the social security agency must be stopped
  12. L’IA dans l’éducation africaine : progrès ou perte de mémoire
  13. La reconnaissance faciale utilisée illégalement depuis 2015 par la police et la gendarmerie
  14. Law enforcement is trying to combat abusive AI. Experts say easier said than done
  15. Lawsuit claims discrimination by Workday’s hiring tech
  16. More than a Dozen Wrongful Arrests Due to Police Reliance on Facial Recognition Technology
  17. Old Law, New Bias: Applying Civil Rights Doctrine to Algorithmic Discrimination
  18. Lutter contre les discriminations produites par les algorithmes et l’IA
  19. Racismo e IA: “Los sesgos del pasado abren la puerta a los sesgos del futuro”
  20. Révélations Disclose : la police française utilise illégalement la reconnaissance faciale
  21. San Francisco prohíbe a policía usar reconocimiento facial
  22. Schools are using AI to spy on students and some are getting arrested
  23. Study Uncovers Racial Bias in University Admissions and Decision-Making AI Algorithms
  24. The cultural cost of AI in Africa’s education systems
  25. The greatest risk of AI in higher education isn’t cheating — it’s the erosion of learning itself
  26. They claim Uber’s algorithm fired them. Now they’re taking it to court
  27. Universities must respond to students’ emotional reliance on AI
  28. What the Workday Lawsuit Reveals About Future of AI Hiring
  29. When AI plays favourites: How algorithmic bias shapes the hiring process