Highlighted passages from the first reading:
...It is because LLMs moved in the space of a few years from being a technical improvement in machine translation (Vaswani et al., 2017) to being proclaimed as the royal road to AGI that these debates really matter. LLMs are remarkably good generative statistical models of human language (including human-written computer code). This allows them to process language in ways that resemble human discourse and to be jury-rigged to create texts that loosely approximate human reasoning. This is a new material reality, a new force in the world, but one whose actual implications are obscured by the mythic garb it is swaddled in... The possible futures we face are much messier and more varied than stark visions of omnipotent AGI, just as our immediate past was. They will be shaped by the collision between imperfect and highly complex technologies and imperfect and highly complex human social systems (Matias, 2023; Nelson, forthcoming). It is impossible to predict the consequences, but we can map, study, and think about them as they are happening.
...From our perspective, the Singularity began two centuries ago with the Industrial Revolution (Shalizi, 2010), and it has been much messier and more variegated than anyone could have known. The modern social sciences are the offspring of the enormous shocks that it entailed (Tilly, 1984; Nelson, forthcoming). They now need to work together with computer science and related disciplines (science and technology studies; communications) to map what is best grasped as another stage in the Long Industrial Revolution. AI may turn out to be very important, but in quite different ways than our inherited myths suggest.
We build on ongoing collaborative work (Farrell et al., 2025) with Alison Gopnik and James Evans, which argues that it is a category error to think of "large models" as self-motivated agents in the making. Instead, they are better understood as "cultural" (Yiu, Kosoy and Gopnik, 2024) and "social" technologies, resembling libraries and languages on the one hand and markets and bureaucracies on the other. Here we focus on how to study these technologies' consequences for human society, emphasizing the social rather than the cultural aspects. We particularly emphasize how AI is a social technology, a systematic means of reorganizing social relationships among human beings (Therborn, 1978). Earlier social technologies include not just other information technologies, but institutions of governance such as bureaucracies, markets, and even democracy (Farrell, 2025).
...Briefly: LLMs create social relations between their users and the authors of the texts in their training corpora. With the right access to the model and the corpus, one can trace the connections from system output back to individual source texts and their authors (Grosse et al., 2023). These social relations are mechanically mediated, giving users the illusion that they are interacting with just the machine and not an assemblage of people. But mediated social relationships and their illusions are a common fact of modern life.
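To make the tracing idea concrete, here is a minimal sketch. It is not the influence-function method of Grosse et al. (2023), which requires gradient access to the trained model; it is a crude stand-in that ranks a small hypothetical corpus of source texts by bag-of-words similarity to a model output. All names and data below are invented for illustration.

```python
# Toy illustration: attributing a model output back to candidate source texts.
# This is NOT the influence-function method of Grosse et al. (2023), which
# needs gradient access to the model; it ranks corpus documents by cosine
# similarity of bag-of-words vectors as a crude stand-in.
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Lowercased bag-of-words representation of a text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0

def trace_sources(output: str, corpus: dict[str, str], k: int = 3):
    """Rank the k corpus documents most similar to a model output."""
    out_vec = bow(output)
    scores = {author: cosine(out_vec, bow(text)) for author, text in corpus.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical corpus keyed by author, and a hypothetical model output.
corpus = {
    "author_a": "markets coordinate dispersed knowledge through prices",
    "author_b": "bureaucracies classify people into legible categories",
    "author_c": "democracy aggregates preferences through voting",
}
print(trace_sources("prices let markets coordinate dispersed knowledge", corpus))
```

Real attribution methods work on model internals rather than surface word overlap, but the interface is the same: an output goes in, a ranked list of probable human sources comes out.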
...This then opens a different perspective on the collision between new technologies such as AI and existing social systems. As DeDeo (2017) suggests, we urgently need to discover how the new coarse-grainings of AI interact with the existing abstractions through which humans simplify an inherently complex world to make it tractable. Both AI and older social technologies are, among other things, forms of information processing. We should investigate how the former are variously reinforcing, reshaping, or replacing the latter.
...From Simon's perspective, humans must usually satisfice rather than optimize: settling for 'good enough' solutions rather than the best possible ones.
...Simon's arguments can be reframed in terms of a more recent literature in complexity science on 'coarse-grainings.' Crudely speaking, a coarse-graining is a stripped-down representation of some complex phenomenon that seeks to capture its key aspects and dynamics. Coarse-grainings are ubiquitous because no scientific model, organism, or artifact can actually grasp the full detail of its environment. Rather than the crawling molecular chaos of physical reality, they always deal with abstractions: compressed and selective summaries that ignore most details. In this sense, coarse-grainings include not only sophisticated mathematical models, statistical approximations of economies and turbulent weather systems, and "blurry JPEGs" of the World Wide Web (Chiang, 2023), but also the individual and collective representations that many social animals, including human beings, use to keep track of social structures and relationships.
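A minimal sketch of what coarse-graining means operationally, under an arbitrary binning scheme assumed here for illustration: many distinct fine-grained states are mapped onto a few coarse categories, and the differences between states that share a bin are simply gone.

```python
# Minimal illustration of coarse-graining: many distinct fine-grained values
# are mapped onto a few coarse categories, discarding the differences between
# values that land in the same bin.
fine_grained = [0.12, 0.14, 0.48, 0.51, 0.53, 0.97]

def coarse_grain(x: float) -> str:
    """Map a value in [0, 1] onto three coarse bins (an arbitrary scheme)."""
    if x < 1 / 3:
        return "low"
    if x < 2 / 3:
        return "mid"
    return "high"

coarse = [coarse_grain(x) for x in fine_grained]
print(coarse)  # ['low', 'low', 'mid', 'mid', 'mid', 'high']
# 0.48, 0.51, and 0.53 are now indistinguishable: the abstraction is
# manipulable precisely because it has thrown that detail away.
```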
...such small-bore representations are hopelessly inadequate for modern human societies, which require impersonal social and informational technologies that can summarize social relations at very large scale. Rather than tracking a few individuals in a close-knit hunter-gatherer community where everyone knows everyone well, or even a village or town, we need to manage interactions that may involve millions—even billions—of people at once. Building and improving the means to do this has involved the development of institutions such as markets, bureaucracies, and even democracy that can handle relatively impersonal relationships at scale, using coarse-grainings that make these relationships comprehensible.
...Social technologies like markets, bureaucracy, and democracy allow human beings to become what economic historian Brad DeLong (2026) calls an "anthology intelligence," capable of deploying accumulated cultural knowledge in a coordinated way towards large-scale ends. That is their positive aspect. In their negative aspect, these systems regularly appear monstrous to those who find themselves at the wrong end of the power relations they create. Markets, bureaucracies, and even democracies have furthermore devoured older and more intimate forms of social organization, replacing them with vast systems that are regularly indifferent, and sometimes inimical, to the particular fates and desires of individuals and groups.
...AI is not the apotheosis of the robot gods nor yet of their human masters. It is a new machinery of complex information processing, perhaps even comparable to markets, bureaucracies, and democracy. Its one weird trick is to take enormous bodies of digitized information, whether social-economic, textual, visual, or otherwise, and generate abstractions that look to capture their leading statistical characteristics.
...The LLMs that we emphasize are no more than coarse-grainings of the vast corpora of textual information that they have been trained on, post-processed to seem more natural in their interactions with humans and to carry out more complex tasks. They are also no less. It is astonishing that we now have manipulable representations of entire bodies of human culture which can be set to work via an ordinary-language interface to produce new outputs. These technologies are a new stage in the trajectory of institutional and organizational development that has run through modernity and the Long Industrial Revolution, offering new ways of managing complexity while creating their own complexities too.
...where do we even start? We suggest emphasizing two important dimensions of comparison and interaction: lossiness and power. Coarse-grainings are lossy by definition, raising the question of exactly which information gets discarded, and which is retained. Coarse-grainings also regularly get embroiled in power relations, because abstractions can create winners and losers in many social conflicts. The two often affect each other: different simplifications will advantage different groups.
Coarse-grainings necessarily discard information to create manipulable but lossy abstractions. As Maxim Raginsky (2025) puts it, "abstraction hides a great deal of complexity from view, and this is both its main virtue and its primary peril." The protocols that allow the internet to function at scale hide a great deal of heterogeneity, exposing the system to unexpected perturbations and failure modes. Much the same is true of the simplifying statistics through which central bankers perceive the economy (Davies, 2025), which reduce the complexity of vast economic systems to a small number of target variables, such as inflation, that can be monitored for signs of instability, and of the categories (such as census classifications) through which bureaucrats see the societies that they seek to order (Scott, 1998). All ignore some aspects of the system in order to focus attention on others.
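A hedged illustration of the central-banker case, with invented numbers: two economies that report the same target variable while differing drastically in the detail the summary throws away.

```python
# Invented numbers: two economies with identical average inflation but very
# different underlying price dynamics. The single summary statistic a central
# banker watches cannot distinguish them.
from statistics import mean, stdev

economy_a = [2.0, 2.1, 1.9, 2.0, 2.0]    # broad, stable price rises by sector
economy_b = [0.0, 0.1, -0.2, 0.1, 10.0]  # stagnation plus one runaway sector

for name, sectors in [("A", economy_a), ("B", economy_b)]:
    print(f"economy {name}: mean inflation {mean(sectors):.1f}%, "
          f"sector spread {stdev(sectors):.1f}")
# Both report 2.0% mean inflation; only the discarded dispersion reveals
# that economy B is in trouble.
```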
...Cronon (1991) provides a nice example of how coarse-graining led to struggles over actual coarse grains in nineteenth-century Chicago. Creating national grain markets required defining and assessing broad categories of grain, so that buyers could distinguish good grain from mediocre or bad without having to inspect it themselves. This facilitated trade: "all honest members benefited from knowing exactly what they were buying and selling" (119). But the crude scheme of gradation advantaged the elevator owners who bought grain, allowing them to "mix across grades" at the expense of the farmers they bought from (134), for example by combining just enough high-quality grain with lower-quality product that the final mix qualified for a more lucrative grade. These crude categories generated considerable unrest among farmers, who felt that they were being stolen from but had difficulty mobilizing against a technical-seeming system that was rigged in ways that were hard to explain.
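The arithmetic of "mixing across grades" is easy to reconstruct. The grade threshold, prices, and quantities below are invented, but the mechanism is the one Cronon describes.

```python
# Invented numbers illustrating Cronon's "mixing across grades": an elevator
# owner blends just enough high-quality grain into a low-quality lot so the
# mix clears the threshold for a more lucrative grade.
THRESHOLD = 0.80                         # hypothetical minimum score for "No. 1"
PRICE = {"No. 1": 1.10, "No. 2": 0.90}   # hypothetical price per bushel

high_q, high_bushels = 0.95, 400   # premium grain bought from one farmer
low_q, low_bushels = 0.70, 600     # mediocre grain bought from another

blend_q = (high_q * high_bushels + low_q * low_bushels) / (high_bushels + low_bushels)
grade = "No. 1" if blend_q >= THRESHOLD else "No. 2"
revenue_blended = (high_bushels + low_bushels) * PRICE[grade]
revenue_separate = high_bushels * PRICE["No. 1"] + low_bushels * PRICE["No. 2"]
print(f"blend quality {blend_q:.2f} -> grade {grade}")
print(f"blended: ${revenue_blended:.2f} vs sold separately: ${revenue_separate:.2f}")
```

The elevator owner's gain here ($1,100 against $980) comes entirely from the coarseness of the grading scheme: a finer-grained measure of quality would price the blend at its true average, and the arbitrage would vanish.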
...Instead of myth-making about what a stylized future technology might do to stereotyped bureaucracies, we should ask how new and ineluctably messy social technologies such as AI combine lossiness and power relations in different ways than older and ineluctably messy social technologies such as bureaucracy, and what happens when the two technologies become entangled.
...Contrary to the language about "carefully weigh[ing] tradeoffs," it simply does not provide any objective means of weighing the kinds of choices across non-commensurables that are essential to the bureaucratic process.
...such coarse-grainings may create systematic blind spots around ideologically inconvenient facts or questions. Chinese LLMs such as DeepSeek are trained to avoid direct answers on topics such as the Tiananmen Square massacre (Lu, 2025). This training is spotty, but may well improve: 'constitutional' AI techniques (Anthropic, 2026) can surely be deployed for such ends. Yet if LLMs become load-bearing parts of the ideological infrastructure, such blind spots may have negative consequences as well as benefits, making it harder for officials to see the fissures that these systems conceal by seamlessly stitching other facts and ideas together. There are historical precedents. In Maoist China, provincial statistics systematically exaggerated the size of harvests, leading central authorities to requisition crops accordingly and leaving over 30 million people to starve to death (Wallace, 2014). As a general matter, higher ideological coherence and coordination power may come at the expense of increased lossiness and an inability to see problems whose nature is at odds with the regime's organizing myths. While there are possible fixes, there are reasons to suspect that increased use of AI may worsen rather than alleviate such problems. Other interpretations of the technology suggest countervailing forces to the tendency toward conformity, but these may have their own trade-offs. The potential coordination value of LLM-based ideological-cultural oracles is partly offset by the risk that these oracles may equivocate or, even worse, provide radically different responses to different questioners, based on differences in prompts: exact word choices, implicit assumptions, or artful manipulation by bureaucrats with their own agendas. The extent of this problem will depend not only on the technology itself, but also on the institutional circumstances in which it is used (who has access, and under what conditions).
...The frustrations of actually existing bureaucracy do not merely arise from inept or technically inadequate solutions to the principal-agent problem. They also emerge from the collision of multiple incommensurable demands, each with its own problems and benefits, so that there are no optimal design solutions. Those who build or reform bureaucracies, like those who build other artifacts, need to satisfice across multiple intersecting needs and pathologies. Designs that neatly address one kind of problem may radically worsen others. Actually existing AI has its own imperfections, some of which are endemic. Grafting AI systems onto existing bureaucracies will solve some problems, but it will worsen others and create altogether new ones. It will not eliminate the political difficulties of mediating across different, often non-commensurable, goals. Imagining that bureaucracy could be replaced wholesale with AI is plausible only if one waves away the actual difficulties associated with real social technologies.
...This may, however, mean that chimerical melds of bureaucracy and AI will be subject to more contention rather than less, and that the contention will get weirder. The aspects of the signals lost to compression might be extremely hard to describe in previously existing, humanly meaningful terms, but they might well make more trouble for some people than for others nonetheless. Just as nineteenth-century Midwestern farmers had difficulty articulating how exactly they were hurt by grain classification schemes, twenty-first-century citizens and interest groups may flounder in describing what AI is doing to them, even if they know that things are happening that they do not like.