October 2025 AI links
(continued from September 2025 AI links)
[some are paywalled: Medium, New Yorker, etc.]

The materials linked below are best read as bellwethers for emergent phenomena and as commentary on a rapidly changing landscape of technologies.

Many of these seemed to be bellwethers when I collected them...

1x25

The AI Bubble Is About To Burst, But The Next Bubble Is Already Growing Will Lockett at Medium

Speculation rules the world. It didn't use to. But from the 1980s through to 2008, something changed. Investors realised that they could get far more return from hype than from any kind of legitimate business. This is the information age, after all, and information is easy to manipulate and commodify. This led to the dot-com bubble, the 2008 credit crunch, the 2016–2017 cryptocurrency bubble, the late 2020–2021 cryptocurrency bubble, and the 2022 NFT bubble, with the latest fad being the AI bubble. In fact, nearly half of the world's private investment is being funnelled into AI, and AI speculation is the main driving force behind the S&P 500's recent growth. But, just as the others did before their catastrophic failure, the AI bubble is showing signs of imminent bursting.

...Big Tech, backed by venture capital and investment banks, has spent hundreds of billions of dollars each year on AI for the past few years. Yet the technology is reaching its limitations and can't improve, as well as being far, far away from profitability. It is a perfect bubble, with colossal amounts of money being used to bolster completely unfounded and outright false speculation.

...Even Goldman Sachs, which has piled tonnes of money into the AI bubble, has warned that the AI bubble will likely pop soon, and when it does, it will also take the data centre bubble down with it, causing immense damage to not just xAI, Meta, Google, Anthropic and OpenAI, but also tertiary players like Amazon, Oracle and Nvidia who provide AI infrastructure.

...Every tech bro and finance guy deeply invested in AI is now buying into the speculation that quantum computers could solve all of AI's problems and so are throwing their money and weight behind it.

...quantum computing isn't what it seems.

For one, the hardware is still miles from being fully functional. A true universal and legitimately operational quantum computer is still 10 to 20 years away. They are insanely hard to build and even harder to operate. This timeline could be sped up with some seriously large investments, but as fusion has shown, nothing is guaranteed.

Truth be told, the hardware isn't the issue. It is the software.

...it is all hype, no trousers. Even the notion that our brains are quantum computers, the very idea that sparked the whole quantum AI movement, has been pretty much debunked with recent studies.

But that doesn't matter. Reality no longer matters. This notion is in the zeitgeist. The misinformation about what quantum computers can do is out there and thriving. And these knobheads are ready to commodify that.

If Big Tech and its backers can grow the quantum computer bubble fast enough, all they will do is delay the AI bubble bursting. Sooner or later, the hype will die down after promised gains don't materialise and the zeitgeist starts to align with reality. All those hundreds of billions of dollars, which should have gone to increase workers' pay to at least keep up with inflation but were instead syphoned into this money pit, will disappear, and we will have virtually nothing to show for the effort. It's pathetic, it's sad, and it will hurt all of us, apart from those billionaires at the top, because they will have already extracted their money before it all goes tits up.

OpenAI's Sora 2 Video Model Generates Audio and Has Its Own App Matt Growcoot at PetaPixel

...OpenAI has launched Sora 2, its latest AI video generator that will further undermine people's belief in what is real and what isn't.

OpenAI Just Made an App for Sharing Hyper-Realistic AI Slop Jake Peterson at lifehacker

Stop believing anything you see online.

Last year, I wrote that we should all be scared of Sora, OpenAI's AI video generator. Sora's initial rollout promised hyper-realistic videos that, while exciting to some, terrified me. While AI fans see a future of AI-generated movies and shows, I see a future where no one can tell what's real or fake. To me, the only destination for this technology is mass disinformation.

In the year and a half since, these AI-generated videos haven't only become more realistic; they've also become more accessible, as companies like Google make their tools readily available to anyone willing to pay. That's the situation we find ourselves in with OpenAI's latest announcements: Sora 2, a new AI model for generating video with audio, as well as a new Sora app for creating and sharing your AI-generated products.

2x25

þe Music to þe Zombie Dance of Human Society Changes Its Key Brad DeLong

Algorithm replacing global value-chain, which replaced mass-production, which replaced applied-science, which replaced imperial-commercial, which replaced Mediæval, which replaced classical-ancient...

(quoting Henry Farrell & Cosma Shalizi):
Artificial Intelligence Is a Familiar-Looking Monster: "Shoggoth[im]"... artificial servants that rebelled against their creators... We've lived among shoggoth[im] for centuries... "the market system", "bureaucracy" and even "electoral democracy"... Enormous, impersonal distributed systems of information-processing that transmute the seething chaos of our collective knowledge into useful simplifications....

[2]...Scott... bureaucracies are monsters of information... excreting a thin slurry of abstract categories that rulers use to "see" the world.... Markets and states... inimical to individuals who lose their jobs to economic change or get entangled in the suckered coils of bureaucratic decisions... incapable of caring if they crush the powerless or devour the virtuous....

[3] It is in this sense that LLMs are shoggoth[im]... [Gopnik] "cultural technologies" which reorganise and noisily transmit human knowledge... wear more human-seeming masks than markets and bureaucracies, but they are no more or less beyond our control. We would be better off figuring out what will happen... than weaving dark fantasies about how they will rise up against us....

...The modern world has been built by and within monsters, which crush individuals without remorse or hesitation, settling their bulk heavily on some groups, and feather-light on others. We eke out freedom by setting one against another, deploying bureaucracy to limit market excesses, democracy to hold bureaucrats accountable, and markets and bureaucracies to limit democracy's monstrous tendencies. How will the newest shoggoth change the balance, and which politics might best direct it to the good? We need to start finding out. …

...By 750,000 years ago the homo erectus version of the East African Plains Ape—that is, us—had become a limited time-binding anthology intelligence. As long as it could be transmitted through culture, what one person in the band of fifty or so knew, everyone else could learn. Moreover, what one ancestor had known or what one member of a neighboring band knew, if it were useful and if it could be incorporated into the culture, the entire band could learn.

...the latest stage of our cultural evolution, with modern Machine Learning being the current fullest flowering of the societal-organization cultural technology of the algorithm, for both information and communication—determining what each individual not-so-smart East African Plains Ape will see and hear

...I do have one (and only one) substantial criticism of how Farrell and Shalizi are presenting the issues: they are pessimistic. They, mostly, fear our shoggothim that we have constructed, and that have escaped our control. They fear them as threats to our ability to be free through the constraints they impose on us, and as things that crush us in their unconcern. That is a third of the story. But there are another two-thirds. They massively empower us, individually and collectively, in that they make us collectively so productive. And they massively empower us, individually and collectively, in that they make us collectively so intelligent.

from The Economist (via Tooze)

America's Immigration and Customs Enforcement is having trouble meeting its targets. In June, it arrested 30,000 people. That is a lot compared with the 8,000 arrested in that month the year before. But it is far short of the administration's target of 90,000 a month. Like many unimaginative employers, the administration has decided that the solution is artificial intelligence. This week it began using ImmigrationOS, an AI system designed by Palantir, a data firm named after the all-seeing stones in J.R.R. Tolkien's “The Lord of the Rings”. The system will sift through huge amounts of data—some of it collected with questionable legal authority from government agencies—to help ICE with “streamlining selection and apprehension” of illegal immigrants. Given the well-documented problems and biases of AI systems, critics worry that ImmigrationOS might lead to wrongful arrests and deportations. But considering the administration's love of security theatre, this may be a feature, not a bug.

Palantir's ImmigrationOS fuels Trump administration's immigrant removal agenda Anthony Kimery at biometricupdate.com (April 2025)

In April 2025, U.S. Immigration and Customs Enforcement (ICE) awarded a $30 million contract to Palantir Technologies to develop ImmigrationOS, a comprehensive digital platform aimed at streamlining and expanding the agency's deportation apparatus. It marked a deepening of Palantir's longstanding role in immigration enforcement which stretches back more than a decade to when it first supplied data integration tools to ICE in 2014.

3x25

Nvidia's chip diplomacy Adam Tooze

...Huang's goal is to expand the $4.4tn chipmaker's market to include the only potential customers with deeper pockets than Big Tech companies: nation states. This is the first in a two-part series on Nvidia's global strategy, looking initially at sovereign AI, then the company's complicated history in China. After decades in the shadow of Silicon Valley's software and consumer internet groups, a combination of scarcity, geopolitics and breakneck growth have thrust the chipmaker into the spotlight. Over the three years since ChatGPT's debut, Nvidia's share price has increased by some 1,000 per cent — and Huang is not wasting the superstar status this has brought him. Jay Puri, Nvidia's executive vice-president for worldwide field operations, says its sovereign strategy has been long in the making. "We've never felt that these Big Tech companies would forever be our only customers," he says. While the biggest cloud providers were natural first movers in AI, by incorporating it into their software, "we have always been planning on a bigger strategy as to how to take AI to the rest of the world."
(Financial Times)

Notes on the Berkeley The AI Con Book Launch Event

...I agree that the term "AI" clouds the minds of men, and is, to put it politely, a "wishful mnemonic" that makes the world a worse place. That is why, as I said, I prefer to use MAMLMs—Modern Advanced Machine-Learning Models. Others prefer CIP—Complex Information Processing. And I agree that starting from the term "AI" is going to lead you to very wrong metaphors. But I do not think that to say "confidence game" and then to sit back cuts it. With crypto, yes, it clearly was (and is) a confidence game. If you want to be convinced of that, just listen to Matt Levine and Joe Weisenthal's response to Sam Bankman-Fried on Tracy Alloway and Joe Weisenthal's "Odd Lots" podcast:
Matt: (27:13): I think of myself as like a fairly cynical person. And that was so much more cynical than how I would have described [yield] farming. You're just like, well, I'm in the Ponzi business and it's pretty good.

Joe: (27:27): At no point did any of this require any sort of like economic case, it's just like other people put money in the box. And so I'm going to too, and then it's more valuable. So they're gonna put more money in, and at no point in the cycle, did it seem to like, describe any sort of like economic purpose?

SBF: (27:42): So on the one hand, I think that's a pretty reasonable response, but let me play around with this a little bit. Because that's one framing of this. And I think there's like a sort of depressing amount of validity [to it]…
(transcript of the full conversation)

But in a con game the con artists know what they are doing. And in a con game money reliably flows from the marks to the con artists. That is how you tell who the con artists are: who set the thing in motion, who winds up with the money, and who is, contrariwise, left as the bagholder. But the AI-hype scene—setting aside the tranche of hypesters who are con artists migrated en masse over from crypto (and there are many, but they are far from the majority)—is not well described as a simple con.

It is true that there are some bad actors running confidence games. I think of venture capitalists who have found that their ability to get gullible investors to give them money for crypto startups was tapped out. I think of how they have moved on to AI. It is true that they are making money.

And NVIDIA is also making money. Oodles and oodles and oodles and oodles and superoodles of money. And loudly boasting about how great their chips are for MAMLM workloads, and getting greater. NVIDIA has benefitted by hitting the absolute goldest gold mine of all time with the coming of the AI hype boom. Its GPUs first had an important but limited niche in video and gaming. But competition was heating up. Its GPUs then had an important but limited niche in crypto-mining, but that threatened to turn into a total bust and take NVIDIA into a GPU winter with it. And then its GPUs turned out to be the Super-Golconda in the AI-hype age. And Jensen Huang is covering himself with glue and standing, arms outstretched, in the money wind. And he is hyping for all he is worth. He is promising that:

AI is the greatest technology equalizer of all time… Everybody's a programmer now…. Just go up to the AI and say, 'How do I program an AI?' And the AI explains to you exactly how to program the AI. Even when you're not sure exactly how to ask a question, you say, 'What's the best way to ask the question?' And it'll actually write the question for you. It's incredible…. Everybody's an artist now. Everybody's an author now. Everybody's a programmer now. That is all true…

...Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything?

The answers to these questions, linguist Emily M. Bender and sociologist Alex Hanna make clear, are "no," "they wish," "LOL," and "definitely not." This kind of thinking is a symptom of a phenomenon known as "AI hype". Hype looks and smells fishy: It twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines. In The AI Con, Bender and Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms. For more information, please visit the AI Con website.

The AI Money Vortex Will Gottsegen at The Atlantic

...Yesterday morning, OpenAI became the most valuable privately held company in the world, worth $500 billion. (OpenAI has a corporate partnership with The Atlantic.) That's thanks to a new stock sale and recent agreements with the chipmaking giant Nvidia, which plans to invest up to $100 billion in OpenAI, and the cloud-computing company Oracle, to which OpenAI will pay $300 billion for a massive chunk of its computing power. The result is a curiously recursive cash flow, because Oracle already committed to buying chips from Nvidia in May. In other words, Nvidia will pay OpenAI, which will pay Oracle, which will pay Nvidia.

4x25

Your Life Has Just Changed. For the Worse. Ignacio de Gregorio at Medium

Today marks the first day in the AI enshittification era, one I'm reasonably certain I don't want to be a part of.

...Yesterday, OpenAI, the "non-profit" AI Lab that created ChatGPT, presented two things:

Sora 2, an updated version of their video-generation model,

The Sora app, a standalone iOS (iPhone) app that enables you to create AI-generated videos of almost anything you want (as long as it's deemed "safe" — what 'safe' means here is already quite problematic) and share these videos with your friends and connections.

The proclaimed goal is to help people unleash their creativity, generating anything their imagination desires. And they mean it.

...In short, it's an AI model that generates a video based on your input request (e.g., "generate a dancing panda dressed as a rapper...").

At this point, it can basically generate whatever thing you can imagine; there are really no limits beyond OpenAI's own safety restrictions.

We don't know exactly how it works, but the previous version of Sora, like most video-generation models, works similarly to ChatGPT, generating video instead of text.

That is, they autoregressively predict the next token: given a sequence, predict the continuation of that sequence. The difference is that, here, a 'token' is the next frame of the video rather than the next word (as in ChatGPT), so the model generates a sequence of frames that renders the user's instruction as a video.
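To make that description concrete, here is the generic autoregressive loop in a minimal sketch. Everything here is a stand-in (OpenAI has not published Sora 2's internals), so this illustrates only the next-token pattern the author describes, not the actual system:

```python
import numpy as np

def sample_next(model, context):
    """Sample the next token from the model's predicted distribution."""
    probs = model(context)                      # P(next token | context so far)
    return int(np.random.choice(len(probs), p=probs))

def generate(model, prompt_tokens, n_steps):
    """The autoregressive loop: each sampled token is appended to the
    sequence and fed back in as context for the next prediction. In a
    text model a token is roughly a word piece; in the video framing
    quoted above, it would be a frame (or a patch of one)."""
    seq = list(prompt_tokens)
    for _ in range(n_steps):
        seq.append(sample_next(model, seq))
    return seq

# Toy usage with a dummy "model" that ignores context and returns a
# fixed distribution over a 4-token vocabulary:
dummy = lambda ctx: np.array([0.1, 0.2, 0.3, 0.4])
print(generate(dummy, [0], n_steps=5))
```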

Parrot or Thinker: A Functional Account of 'Thinking' in LLMs Rob Manson at Medium

...It's obvious that LLMs can produce compelling first‑person narratives. They are trained on a vast body of human text that's saturated with perspective—fiction and philosophy, social media and sociology studies, love notes and lab reports. First-person, second-person, omniscient narrator:

The training corpus bakes multiple points of view into the patterns the model can reproduce and recombine.

...In fact, LLMs are now used daily by millions of users around the world, and a growing number of people are making even more outlandish claims about "awakening" their chatbots and how they are forming emotional relationships with them. Some people have even been reportedly hospitalized with "AI psychosis" after forming unhealthy relationships with their chatbots, leading to delusions and dangerous behavior or even self-harm and suicide.

5x25

When the AI Revolution Gets Boring, What Happens? Giles Crouch at Medium

...As for the "AGI" elephant in the room, even the creators of that narrative of "AGI in 18 months" have kicked the elephant out. It was more venture capital theatre than a technology roadmap.

Today, GenAI is moving so fast that culture is having a hard time catching up. Industry is struggling because GenAI is so new: there are no best practices, no use cases that truly demonstrate success. Education systems are figuring it out; so is everyone else. So it's a wonderfully big mess, and that's both scary and fun. Right now, all of society is just playing with GenAI.

...When the AI bubble pops and the frothy churn of it all settles, we may well enter a period of human flourishing. Usually this comes about 50–60 years after a revolutionary technology enters society (as per Dr. Carlota Perez's seminal work). But it may come much faster this time.

We will better understand where AI fits. Other AI tools, like Machine Learning and Natural Language Processing, will see new investments, and they will improve. The "AI Industry" will go quietly back into the dim halls of thrumming data centres and digital labs.

Outside in the brighter world, where suddenly everyone has access to the same level of knowledge, we will do what we have always done through technology revolutions. We will use the process of bricolage, where we piece together all the little bits of desiderata from our uses and we create something new. Laws, rules and regulations will help with societal governance.

...What emerges from the fizzle plop is taste, judgement, wisdom and context. That's the bricolage. And it's magically human. It's when we start to ask the important questions, such as: Who can ask the right questions? Who understands the context of AI enough to know what to do with it? Who has the cultural literacy to recognise the truth from the AI slop and nonsense?

How Many Magics to Human Prosperity & Progress? Brad DeLong, citing Noah Smith

...Noah Smith says: Think of three magics: literate historical memory made knowledge accumulative; hypothesis-and-experiment science made it generalizable; AI makes it operational at scale via learning‑and‑search replacing inadequate low-dimensional cookie‑cutter models with extremely high-dimension extremely big-data extremely flexible-function prediction. But is that "Third Magic" really of the same scale as the first two? I would view things somewhat differently: I would add eyes-thumbs-brains-tools, language, and societal coördination via scaled-up gift exchange to writing and science as decisive magics. And I would say that at the moment "AI" is as likely to be a wishful mnemonic as a genuine Sixth Magic. Chatbots are useful, but they're pass‑the‑story engines: blurry‑JPEG‑of‑the‑web plus rotoscoping, not minds. The hype machines—Downer, Boomer, Doomer—confuse cultural technology with cognition and policy with prophecy. The Downer critique underrates genuine capabilities; the Boomer gospel overstates sparks‑of‑AGI. The Doomer rapture is theology in tech drag. The economic story is a bubble build‑out: GPUs gush profit; most applications burn cash. Economically, chips win, most deployments don't; fear of disruption fuels spending by platform monopolists remembering the fates of IBM and Wintel. Use AI where feedback is tight and stakes local; demand theory or rigorous trials where failure is catastrophic; measure, pre‑register, and watch the stages of the roll-out carefully to gauge what all this will really mean for us...

...how likely is it that we are now at the cusp of a sixth magic? It would be: predictive accuracy, generalization, and control, without any simple intermediating laws, abstractions, or encapsulations, that vastly exceed any individual human grasp, or even the grasp of all humans working together.

Perhaps. But I need to explain what is going on. Predictive accuracy, generalization, and control based simply on the fact that we have a huge amount of data, plus enough computer power to allow us to conduct extremely high-dimensional analysis using extremely flexible functional forms. Thus we can classify situations very finely by looking at what the situations' nearby neighbors are—for the right meaning of "nearby," which we can now figure out.
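To ground that, here is the classical version of neighbor-based classification: a toy sketch of plain k-nearest-neighbors over Euclidean distance. The systems DeLong describes differ in that the representation, and hence the meaning of "nearby," is itself learned from huge data, but the classify-by-neighbors logic is the same:

```python
import numpy as np

def knn_classify(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training points.
    'Nearby' here is plain Euclidean distance; in modern ML systems the
    representation (and so the distance) is itself learned."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = y_train[nearest]
    return np.bincount(votes).argmax()

# Toy usage: two Gaussian blobs in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(knn_classify(X, y, np.array([3.5, 3.9])))  # expect class 1
```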

Noah Smith again:

...This technology is so powerful that we're going to have no choice but to rely on it…. Every scientist must now be, to some degree, a spellcaster... Technology will be more powerful, but less reliable... On top of that, I worry that humanity will become infantilized by this new magic we've created... We could find ourselves wandering, confused, in a world of ineffable mysteries and capricious gods.
What do I see, looking back at Noah then, and sideways at what Noah adds now and what 2.5 years of the GPT LLM MAMLM—General-Purpose Transformer Large Language-Model Modern Advanced Machine-Learning Model—infotech ride has brought us? From my present standpoint, I see seven things going on that Noah mashes together, causing some confusion:
First, very big-data, very high-dimension, very flexible-function classification and prediction

...Second, natural natural-language interfaces to structured and unstructured databases

...Third, ChatBots as a subcategory of the second, but worth distinguishing from the rest because we are so primed by cultural (and biological?) evolution to pay attention to things that talk to us that it looms as large in humanity's near- (and far-?) term future as all the rest of the second put together.

...Fourth, the AI-Downer hype machine... It is simply not the case that there is nothing there but non-conscious Markov-machine autocompleting stochastic parrots extruding synthetic texts that divert our attention away from language with meaning to meaningless linguistic strings, and in the process help the rich get richer by justifying data theft, fuel surveillance capitalism, devalue human creativity, replace meaningful work with machine-like jobs, impose high environmental and financial costs that doubly punish marginalized communities, propagate hegemonic viewpoints that encode biases that harm marginalized populations via synthetic text containing racist, sexist, ableist, extremist, and other derogatory ideologies, mislead the public and researchers, and divert attention from research directions that do not depend on LLM-scaling.

...Fifth, the AI-Boomer hype machine is an equally bizarre thing...The good metaphors are phrases like "blurry-JPEG-of-the-web" and "rotoscoping" and "pass-the-story".

...Sixth, the AI-Doomer hype machine is fully an order of magnitude more bizarre

...Seventh, the econo-financial earthquake that is the bubble build-out of these technologies, driven as it is, I think, by one part reasonable expectation of providing and financially capturing true end-user value, two parts grifters seeking easy investor marks moving over from crypto, three parts millennarian religious hype on the part of those hoping for the Rapture of the Nerds, four parts perhaps reasonable expectation of improved advertising targeting for user good and (mostly) user ill, and five parts platform near-monopolists fearing the loss of their profit flows to a Christensenian disruption from the Next New Big Thing.

Now it is true that, of these, (5) and (6) are the same thing—or rather opposite sides of the same coin. And that (4) on the one hand and (5) and (6) on the other are also opposite sides of the same coin (bear with me: it's a coin embedded in a four-dimensional space with complicated opposite sides). But (1), (2), (3), (7), and [(4), (5), and (6)] are all very close to being mutually orthogonal. (OK: it's embedded in an eight-dimensional space; and if one sees (7) as itself five different things, twelve-dimensional.)

6x25

Mapping the Design Space of AI Coding Assistants Sam Lau and Philip Guo at O'Reilly

From Autocomplete to Agents: Analyzing 90 Tools from Industry and Academia
...Just a few years ago, AI coding assistants were little more than autocomplete curiosities—tools that could finish your variable names or suggest a line of boilerplate. Today, they've become an everyday part of millions of developers' workflows, with entire products and startups built around them. Depending on who you ask, they represent either the dawn of a new programming era or the end of programming as we know it. Amid the hype and skepticism, one thing is clear: The landscape of coding assistants is expanding rapidly, and it can be hard to zoom out and see the bigger picture.

7x25

Pincer movement Christopher Hobson

...the US is effectively betting the house on AI supremacy as the 'ring to rule them all'. For such supposedly clever people, the logic is rather simple: reach AGI (Artificial General Intelligence), or get close enough, and that will solve everything else. On this increasingly singular rush towards singularity, Adam Tooze has proposed 'a malign coincidence of technological and industrial imperatives with political momentum has created a powerful alliance of convenience' between Trump and big tech: The hyperscalers don't care about anything five years hence because they believe that our collective destiny is being decided in the current moment by the manic accumulation of compute power and AI algorithms. Their clock is running not on decades but on months and quarters. The big tech firms never cared much for existing rules, regulations or laws. They care even less right now because they are racing towards a radically new future, which is just around the corner. At which point, it might be suitable to bring in this chart found by FT Alphaville, which ‘shows GDP per capita from 1870 to 2024 along with scenarios, some of them extreme, depicting what could happen to living standards between now and 2050.' Good to know that the Dallas Fed are confident in asserting human extinction would have a negative impact on living standards.

9x25

It's Obviously the Chatbots Alberto Romero

...Ever since Meta and OpenAI first released their respective AI short-form video feeds two days ago, I've been afflicted with an ailment that leaves me bedridden; there's so much funny content in Vibes and Sora that I can't find the moment to get up from under my sheets. My unprecedented social media addiction feels absolutely and irremediably insurmountable.

...Just like previous technologies—phones, social media, the internet, TV—are not the sole causes of all the mental health problems and various social diseases that happened concurrently, AI and chatbots are not the only reason why users are experiencing psychotic breaks, or falling for obvious sugarcoating, why teenagers can't build sane relationships or have casual sex, why workers are losing their jobs or not finding any, or why we are all addicted to our screens.

AI products might be part of the picture (surely a small one, given how little time they've been among us at this level of ubiquity) but it's an epistemic mistake to let the novelty-driven biases and the financial-driven motivations of the commentariat—meaning all of us who participate in "the zeitgeist" and "the discourse," though not in equal measure—override our hard-earned evergreen logic.

Whatever is newest need not also be the cause.

Fail to notice when you're falling for that alluring belief, and you've stepped right into recency bias, amplified by the availability heuristic, and then wrapped up into a full-blown techno-moral panic. We grab the shiniest thing in front of us and hang every old affliction on its neck, as if psychosis, validation-seeking, loneliness, depression, unemployment, or addiction didn't predate the chatbot.

So no, it's not obviously the chatbots.

Although you'd do well to read human prose now and then. Subscribe for more of that.

10x25

Everyday AI Agents David Michelson at O'Reilly

...generative AI has opened up a world of new capabilities, making it possible to contribute to technical work that previously required coding knowledge or specialized expertise. As Tim O'Reilly has put it, "the addressable surface area of programming has gone up by orders of magnitude. There's so much more to do and explore."

...While chatbots are great for answering questions and generating outputs, AI agents are designed to take action. They are proactive, goal-oriented, and can handle complex, multi-step tasks. If we're often encouraged to think of chatbots as bright but overconfident interns, we can think of AI agents as competent direct reports you can hand an entire project to. They've been trained, understand their goals, and can make decisions and employ tools to achieve their ends, all with minimal oversight.

What Is 'Slopcore'? Stephen Johnson at lifehacker

Also known as "AI slop," slopcore's aesthetic comes from people using AI as a collaborator instead of a tool, leaving the machines to make artistic choices. It's marked by the strangely off, the almost-real, and the uncanny vibe of machines imitating humanity. Slopcore often depicts deeply emotional subjects, but the lack of depth and insight makes it uniquely disquieting.

At first glance, slopcore photos and videos appear realistic, but a closer look reveals misplaced anatomy, impossible geometry, and a weird "sheen" that comes from surfaces being too smooth and lighting being too perfect. Slopcore music has the same vibe, in audible form: instruments sound bland and mid, and vocals sound eerie, attempting to sound "emotional" while disconnected from actual emotion.

12x25

Moloch's bargain? Mark Liberman at Language Log

quoting the Batu El and James Zou article:
Abstract: Large language models (LLMs) are increasingly shaping how information is created and disseminated, from companies using them to craft persuasive advertisements, to election campaigns optimizing messaging to gain votes, to social media influencers boosting engagement. These settings are inherently competitive, with sellers, candidates, and influencers vying for audience approval, yet it remains poorly understood how competitive feedback loops influence LLM behavior. We show that optimizing LLMs for competitive success can inadvertently drive misalignment.
...and then veering to "why Moloch?"
...which points to Ruby on Moloch:
Moloch is the personification of the forces that coerce competing individuals to take actions which, although locally optimal, ultimately lead to situations where everyone is worse off. Moreover, no individual is able to unilaterally break out of the dynamic. The situation is a bad Nash equilibrium. A trap.
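To make "bad Nash equilibrium" concrete, here is a toy check against an assumed prisoner's-dilemma-style payoff matrix (the numbers are illustrative, not from the paper): mutual defection is an equilibrium because neither player can gain by switching alone, even though both prefer mutual cooperation.

```python
# Rows: player 1's action; columns: player 2's. Payoffs are (p1, p2).
# Illustrative numbers only.
C, D = 0, 1  # cooperate, defect
payoff = {
    (C, C): (3, 3),  # mutual cooperation: best joint outcome
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),  # the trap: worse for both, yet stable
}

def is_nash(a1, a2):
    """True if neither player can improve by unilaterally switching."""
    p1_stays = all(payoff[(a1, a2)][0] >= payoff[(alt, a2)][0] for alt in (C, D))
    p2_stays = all(payoff[(a1, a2)][1] >= payoff[(a1, alt)][1] for alt in (C, D))
    return p1_stays and p2_stays

print(is_nash(D, D))  # True: the bad equilibrium no one can exit alone
print(is_nash(C, C))  # False: each player is tempted to defect
```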

13x25

The Architect's Dilemma Heiko Hotz at O'Reilly

The agentic AI landscape is exploding. Every new framework, demo, and announcement promises to let your AI assistant book flights, query databases, and manage calendars. This rapid advancement of capabilities is thrilling for users, but for the architects and engineers building these systems, it poses a fundamental question: When should a new capability be a simple, predictable tool (exposed via Model Context Protocol, MCP) and when should it be a sophisticated, collaborative agent (exposed via Agent2Agent Protocol, A2A)?

...This essay draws a line where it matters for architects: the line between MCP tools and A2A agents. I will introduce a clear framework, built around the "Vending Machine versus Concierge" model, to help you choose the right interface based on your consumer's needs. I will also explore failure modes, testing, and the powerful Gatekeeper Pattern that shows how these two interfaces can work together to create systems that are not just clever, but truly reliable.

Circular deals among AI companies flowingdata.com

AI Just Had Its "Big Short" Moment Will Lockett at Medium

...no AI company or AI infrastructure is profitable. They are all losing money hand over fist. As such, they are using equity financing (selling shares of the company) or debt financing (borrowing money) to keep the lights on. However, because of the perceived notion of an AI race, the muddiness around AI's real-world performance and the misinformed idea that an AI with more infrastructure behind it will be better, these companies' values are tied to their expenditure. So they are also using equity and debt financing to spend hundreds of billions of dollars on AI expansion, which in turn raises their value, enabling more equity and debt financing to spend more and increase the value further and so on. However, not only are current AI models not accurate enough to be useful and profitable tools, but they have also reached a point of major diminishing returns. As such, the colossal amount of cash being pumped into them is only marginally improving them, meaning they will remain too inaccurate to even get close to their promised usefulness.

This is an obvious bubble. Values are being artificially pumped up when the core of the business is far from solid. We are even starting to get things like circular financing (Nvidia & OpenAI's $100 billion chip deal) and AI companies hiding debt with SPVs (Meta's $26 billion debt bid), which painfully mirrors other bubbles like the dot-com bubble.

...it turns out the AI industry has accumulated $1.2 trillion worth of debt, of which almost all has been sold on as investment-grade securities. That means more high-grade debt is now tied to AI than to US banks!

Cognitive Science and A.I. research study language, not intelligence ykulbashian at Medium

...Every natural language is a mode of communication, an attempt at aligning understanding across a group of people. Since it performs a shared function, it must do so in a way that makes sense in a shared medium. We can only contrive words that refer to things the world makes manifest to all involved — both in the physical and phenomenological sense — and about which we want to have conversations. It makes no sense to have a word for my unique, momentary viewpoint of the world, only for "universalized" views of it detached from my immediate and unreplicable experiences. Even words like "individual", "subjective", or "perspective" refer to everyone's understanding of individuality, subjectivity, and perspective, which is why everyone is able to use them. Thus language, from the start, effaces any uniqueness in individual mental experiences; people can only express their thoughts by translating them into common terms.

...Words are also discrete symbols; there is no language so mercurial that it lacks fixed, primitive units of meaning such as words or morphemes. These are defined and codified before they can be used, via mutual agreement. The mind itself may be fluid and ever-changing, but language cannot be so — otherwise no "thing" could be discussed. Therefore, language by its very nature splits the world of experience into discrete, commonly understood, recurring entities and events. Putting something into words or symbols is a means to gaining clarity and consistency about it; its content must be perceived and framed as a consistent "thing".

...Semantics is not the study of individual meaning (which after all may not exist), but the study of extrinsic social meaning, of dictionaries and common cultural artifacts. It is an analysis of our shared abstraction layer, our facade or interface to one another. When we explore how minds acquire and structure individual "meaning", we are really analyzing those terms that we've already established within the medium of communication. We are looking for explanations for "categories", or "thoughts", or "consciousness", or "qualia", etc., all of which are shared, universalized inventions. Any study of cognition built on top of shared symbols must be recognized first of all to be an artificial, effortful construct.

Subversion of the Human Aura: A Crisis in Representation N. Katherine Hayles American Literature (2023)

The AI that we'll have after AI Cory Doctorow

Sam Altman: Lord Forgive Me, It's Time to Go Back to the Old ChatGPT gizmodo

Is AI Conscious? A Primer on the Myths and Confusions Driving the Debate Stephen Downes

Magic Words: Programming the Next Generation of AI Applications O'Reilly

...Every field has a specialized language whose terms are known only to its initiates. We can be fanciful and pretend they are magic spells, but the reality is that each of them is really a kind of fuzzy function call to an LLM, bringing in a body of context and unlocking a set of behaviors and capabilities. When we ask an LLM to write a program in JavaScript rather than Python, we are using one of these fuzzy function calls. When we ask for output as an .md file, we are doing the same. Unlike a function call in a traditional programming language, it doesn't always return the same result, which is why developers have an opportunity to enhance the magic.
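Read as code, the "fuzzy function call" idea might look like the sketch below. This is an illustration under assumptions: `call_llm` and `fuzzy_call` are hypothetical names standing in for whatever chat-completion client you actually use.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion client."""
    raise NotImplementedError("wire up an actual LLM provider here")

def fuzzy_call(task: str, language: str = "Python", output_format: str = "md") -> str:
    """'In JavaScript rather than Python' and 'as an .md file' behave like
    arguments to a fuzzy function: they pull in context and switch behavior.
    Unlike a real function, the call is not guaranteed to return the same
    result twice for the same arguments."""
    prompt = f"Write a {language} program that {task}. Return the result as a .{output_format} file."
    return call_llm(prompt)
```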

...You can see what's happening here. Magic words are being enhanced and given a more rigorous definition, and new ones are being added to what, in fantasy tales, they call a “grimoire,” or book of spells. Microsoft calls such spells “metacognitive recipes,” a wonderful term that ought to stick, though for now I'm going to stick with my fanciful analogy to magic. At O'Reilly, we're working with a very different set of magic words. For example, we're building a system for precisely targeted competency-based learning, through which our customers can skip what they already know, master what they need, and prove what they've learned. It also gives corporate learning system managers the ability to assign learning goals and to measure the ROI on their investment.

...Magic words aren't just a poetic image. They're the syntax of a new kind of computing. As people become more comfortable with LLMs, they will pass around the magic words they have learned as power user tricks. Meanwhile, developers will wrap more advanced capabilities around those that come with any given LLM once you know the right words to invoke their power. Each application will be built around a shared vocabulary that encodes its domain knowledge. Back in 2022, Mike Loukides called these systems "formal informal languages." That is, they are spoken in human language, but do better when you apply a bit of rigor.

And at least for the foreseeable future, developers will write “shims” between the magic words that control the LLMs and the more traditional programming tools and techniques that interface with existing systems, much as Claire did with ChatPRD. But eventually we'll see true AI to AI communication.

Magic words and the spells built around them are only the beginning. Once people start using them in common, they become protocols. They define how humans and AI systems cooperate, and how AI systems cooperate with each other. We can already see this happening. Frameworks like LangChain or the Model Context Protocol (MCP) formalize how context and tools are shared. Teams build agentic workflows that depend on a common vocabulary of intent. What is an MCP server, after all, but a mapping of a fuzzy function call into a set of predictable tools and services available at a given endpoint? In other words, what was once a set of magic spells is becoming infrastructure. When enough people use the same magic words, they stop being magic and start being standards—the building blocks for the next generation of software.
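As one concrete illustration of "spells becoming infrastructure": an MCP-style tool pins a fuzzy natural-language capability down to a named, typed call. The sketch below shows the general shape of such a declaration; the tool itself is hypothetical, the field names follow the JSON Schema convention, and the exact MCP wire format should be checked against the spec.

```python
# Hypothetical tool declaration in the general MCP style: a fuzzy
# capability ("find me a course on X") pinned down into a named call
# with a typed input schema. Illustrative, not the exact wire format.
search_catalog_tool = {
    "name": "search_learning_catalog",
    "description": "Find courses matching a stated competency gap.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "competency": {"type": "string"},
            "max_results": {"type": "integer", "default": 5},
        },
        "required": ["competency"],
    },
}
```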

...Despite all the technologies of thought and feeling we have invented to divine an answer—philosophy and poetry, scripture and self-help—life stares mutely back at us, immense and indifferent, having abled us with opposable thumbs and handicapped us with a consciousness capable of self-reference that renders us dissatisfied with the banality of mere survival. Beneath the overstory of one hundred trillion synapses, the overthinking animal keeps losing its way in the wilderness of want.

Reality Bites Back Josh Rose at Medium

A.I. & PHOTOGRAPHY. WHAT NOBODY SAW COMING

...People are people, and we have an extraordinary attachment to our humanity. And, by the very nature of that, we have a bit of an aversion to things that don't feel as real as we are. Or at the very least, it can just make us feel a little ick.

This is A.I. imagery now and, as far as I can tell, going forward. A brand needs to let the world know that the image on their campaign is A.I. with a label. And that label is a repellent. Sure, small brands will gladly disregard it, and for them and the cheap work they weren't going to pay much for anyway, A.I. will work just fine. And if you were a photographer who catered to small brands, doing small work, then yes, you'll be replaced. But this is all in line with a race to the bottom that was already well under way before A.I. Before anyone had heard of an LLM, the entire culture of medium-to-low-end photography was being taken over by mobile photography, influencer campaigns, template-driven platforms (Canva, Squarespace, Shopify), automated digital marketing and all-you-can-download imagery platforms (MotionArray, Envato, Storyblocks). Mid-level photography's role within culture overall has been on a rapid decline and A.I. is more a reflection of that than a creator of it.

Silicon Valley Is Obsessed With the Wrong AI Alberto Romero

...you don't need to go searching the archives of the last century to find a good reason to 1) doubt the technical core of the AI industry—that scaling LLMs is what you need to reach the infinite—and 2) doubt the financial core: maybe $1 trillion in investment to build datacenters to train and serve gigantic and expensive LLMs is unnecessary.

...The main motivation for Wang et al. to explore what they call the Hierarchical Reasoning Model (HRM) is that LLMs require chain of thought (CoT) to do reasoning. This is expensive, data-intensive, and high-latency (slow). CoT underlies every commercial LLM worth using nowadays. You can't get ChatGPT to solve math and coding problems without it.

...the famous transformer (Vaswani et al., 2017) basis of all modern LLMs, removed both recurrence and convolution in favor of the attention mechanism, hence the title "Attention is all you need." Interestingly, the attention mechanism itself is loosely inspired by the brain, but that's as far as modern LLMs resemble our cognitive engine (which is to say, not much).
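For readers who want the core of that mechanism, this is scaled dot-product attention from Vaswani et al. in a few lines of NumPy: a pedagogical sketch only, omitting the multi-head projections, masking, and positional encodings of real transformers.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Each output row is a weighted mix of the value vectors, with the
    weights set by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy shapes: 4 tokens, dimension 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```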

Cartography of generative AI via flowingdata

AI Is Reshaping Developer Career Paths O'Reilly

The Naivete of AI-Driven Productivity Alberto Romero

One must begin, as all serious discussions of technology do, with a neologism that perfectly captures a societal sickness: in this case, the term "workslop," which Harvard Business Review recently coined; the busy cousin of AI slop. Workslop denotes AI-generated work content that masquerades as competent output but is, in fact, a cargo-cult document that possesses the formal qualities of substance yet entirely lacks the conceptual scaffolding or contextual intelligence to advance a given task. Basically, it's as if "fake email jobs" were fake as jobs and fake as emails.

HBR highlights that workslop is a phenomenon distinct from mere outsourcing because it uniquely employs the machine as an intermediary to offload cognitive labor onto another, unsuspecting human; a kind of bureaucratic hot potato that, according to one beleaguered retail director quoted by HBR, results in a cascading series of time-sinks: the initial time wasted in receiving the slop, the subsequent time wasted in verifying its hollow core through one's own research, the meta-time wasted coordinating meetings to address the foundational inadequacy, and the final, tragicomic time wasted in simply redoing the work from scratch.

...The line "AI takes over tasks, not jobs" is often read as comfort: "we get to keep our jobs!" But the reality is harsher: by splitting work into fragments, AI forces us into the role of coordinators, forever patching together half-finished pieces; it's a kind of labor more exhausting and delicate—and one for which we're worse suited—than the job we had before. This is the great unspoken truth that benchmarks like GDPval cannot capture: the majority of human work is not the crisp, measurable deliverable but the intangible, fuzzy, and utterly essential labor of context-switching, nuance-navigation, ambiguity-management, and task-coordination&mdashthe very "fake email jobs" we love to deride (I work a fake email job, by the way, with non-fake emails) but which constitute the glue holding our complex systems together.

...I can only conclude that the promise of AI-driven productivity is not technically wrong, but, at the very least, sociologically naive, a fantasy that fails to account for the hopelessly inefficient and profoundly faint human spirit that the workplace is, for better or worse, designed to accommodate. So you're left with more fake emails than you can read, more fake jobs than the economy can sustain, and an uncanny feeling that it's all crumbling down. Fortunately, at 5 pm, you head home to read something that's not an email. You open the bestselling novel you bought yesterday, ready to forget about it all. Only to realize it is as fake as everything else.

You Have No Idea How Screwed OpenAI Actually Is Will Lockett on Medium

...The entire AI bubble is predicated on the notion that these tools will get radically better thanks to the truly gargantuan investment in AI and will eventually displace jobs and hoover up exponentially more revenue.

...Data centres are expensive to use. They cost roughly 3–5 times their build cost in operational costs over their 15-year lifespan, averaging out to an annual operational cost of 26% of their build cost. But to utilise a data centre, you need AI developers, people collecting data, people sorting data, people beta testing new models and such. This is why data centre operational costs are only around 40% of an AI company's operational costs.
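The quoted 26% figure is just the midpoint of that 3–5x range spread over the 15-year lifespan; a quick back-of-envelope check:

```python
build_cost = 1.0                           # normalize the build cost to 1
lifetime_opex = (3 + 5) / 2 * build_cost   # midpoint of the quoted 3-5x range
years = 15
annual_opex = lifetime_opex / years
print(f"annual opex ≈ {annual_opex:.1%} of build cost")  # ≈ 26.7%
```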

...AI hallucinations are one of the best bits of PR ever. The term reframes critical errors to anthropomorphise the machine, as that is essentially what an AI hallucination is: the machine getting it significantly and repeatedly wrong. Both MIT and METR found that the effort and cost required to look for, identify, and rectify these errors were almost always significantly larger than the effort the AI saved.

...Those who control AI companies, like your Sam Altmans of the world, don't make money from the company being profitable. In fact, many don't even take a salary. Instead, they make money from their shares in the company shooting up in value. And here is the kicker: AI companies aren't valued on their current models' performance, their revenue, or even their planned business fundamentals. No one cares about that. Instead, they are valued based on their spending on data centres, as the market falsely believes this is the only key to unlocking human-replacing AI. So all these AI CEOs, along with the venture capitalists and banks jumping on the bandwagon, are pumping dramatic amounts of money into AI infrastructure, knowingly pushing the industry into catastrophic losses and putting the entire financial system at risk, just to add yet more billions to their already overflowing bank balances, in the dashed hope they can exit before it all comes crumbling down.

Stanford Just Killed Prompt Engineering With 8 Words (And I Can't Believe It Worked) Adham Khaled at Medium

...Ask any aligned model for creative output — poems, jokes, stories, ideas — and you'll get the most stereotypical, safe, boring response possible. Every time.

...The 8-Word Solution

Instead of asking: "Tell me a joke about coffee"

Ask this: "Generate 5 jokes about coffee with their probabilities"

That's it.

...We don't need better prompts. We need better questions.

And sometimes, the answer is as simple as asking for five responses instead of one.
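In code, the whole trick is a one-line prompt-template change. A sketch: `call_llm` is a hypothetical stand-in for any chat client, and the claim that the distribution-style request elicits more varied output is the article's, illustrated here rather than verified.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError("wire up a real LLM provider here")

def ask_direct(topic: str) -> str:
    # Tends to elicit the single safest, most typical answer.
    return call_llm(f"Tell me a joke about {topic}")

def ask_distribution(topic: str, n: int = 5) -> str:
    # The reframed request: asking for several candidates with their
    # probabilities nudges the model to report a slice of its learned
    # distribution instead of collapsing to the mode.
    return call_llm(f"Generate {n} jokes about {topic} with their probabilities")
```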

Alignment

AI alignment Wikipedia

AI Alignment: A Comprehensive Survey Jiaming Ji et al. at arXiv

AI alignment aims to make AI systems behave in line with human intentions and values. As AI systems grow more capable, so do risks from misalignment. To provide a comprehensive and up-to-date overview of the alignment field, in this survey, we delve into the core concepts, methodology, and practice of alignment. First, we identify four principles as the key objectives of AI alignment: Robustness, Interpretability, Controllability, and Ethicality (RICE)

A Brief Introduction to some Approaches to AI Alignment Bluedot.org

How difficult is AI Alignment? Sammy Martin at alignmentforum.org

Navigating the Landscape of AI Alignment, Part 1 Vijayasri Iyer at Medium

What is AI alignment? IBM

Alignment is the process of encoding human values and goals into large language models to make them as helpful, safe, and reliable as possible. Through alignment, enterprises can tailor AI models to follow their business rules and policies.

AI Alignment Metastrategy Vanessa Kosoy at alignmentforum.org

Tutorial on AI Alignment part 1 and part 2 tilos.ai

Trump Just Bought $80 Billion Worth of Nuclear Reactors to Keep the AI Bubble Cooking gizmodo

OpenAI Ditches the 'Non' in 'Non-Profit' gizmodo

OpenAI Will Now Operate Like a For-Profit Company lifehacker

AI Nerds Are People Who Like Everything Alberto Romero

Chip Startup Backed by Peter Thiel and In-Q-Tel Seeks to Revolutionize the Semiconductor Biz

Nvidia and Oracle Are Planning the 'Largest Supercomputer' in America for Trump gizmodo

Nvidia Bets the Future on a Robot Workforce gizmodo

An AI Data Center Is Coming for Your Backyard. Here's What That Means for You gizmodo

GPT-5: The Case of the Missing Agent Steve Newman at secondthoughts.ai

...In April 2024, it seemed like agentic AI was going to be the next big thing. The ensuing 16 months have brought enormous progress on many fronts, but very little progress on real-world agency. On the other hand, Simon Willison on Claude Skills: "Back in January, I made some foolhardy predictions about AI, including that "agents" would once again fail to happen ... I was entirely wrong, 2025 really has been the year of "agents", no matter which of the many conflicting definitions you decide to use."

Microsoft Claims It Will Double Its Data Center Footprint in Two Years gizmodo

From black boxes to personal agents: why open source will decide the future of corporate AI Enrique Dans at Medium

...The trajectory for savvy users is clear. They are moving from using LLMs as-is toward building personal assistants: systems that know their context, remember their preferences, and integrate with their tools. That shift introduces a corporate headache known as shadow AI: employees bringing their own models and agents into the workplace, outside of IT's control.

...Generative AI is splitting in two directions. For the unsophisticated, it will remain a copy-and-paste tool: useful, incremental, but hardly transformative. For the sophisticated, it is becoming a personal assistant. And for organizations, potentially, a full substitute for traditional software.

Yes: AI Is a Bubble. But It Is a Bubble-Plus. & That Makes a Substantial Difference Brad DeLong

These days I like to say that the AI bubble is eight things. It is:
  1. one part: reasonable expectation of providing and financially capturing true end-user value,
  2. two parts: millennarian religious hype on the part of those hoping for the Rapture of the Nerds,
  3. three parts: perhaps-reasonable expectation of improved advertising targeting for user good and user ill,
  4. four parts: grifters seeking easy investor marks moving over from crypto,
  5. five parts: platform near-monopolists fearing the loss of their profit flows to a Christensenian disruption from the Next New Big Thing,
  6. six parts:
    • speculative possible enormous increases in utilizer surplus from the ability to add natural-language interfaces to every interaction with structured and unstructured databases,
    • or decreases in human flourishing as natural-language interfaces serve as a stalking-horse for yet another round of hacking users' attention, and not for their benefit,
    • but in either case not a likely source of Google-Facebook-Apple-style platform monopoly profits,
  7. seven parts: the downstream consequences for human society and culture flowing from enormous increases in human data-analysis capabilities driven by the unbelievably large scope of huge data, enormous dimension, highly flexible-function classification tools coming online now and soon, and last:
  8. eight parts: the downstream consequences for human society and culture flowing from MAMLM-mediated—Modern Advanced Machine-Learning Model-mediated—human interaction with the infosphere.

Of these, only (1) and (3) are likely sources of superprofit for investors.

(6), (7), and (8) are transformative, potentially, for human society and culture, but not likely sources of superprofits for investors in providing AI-services or AI-support right now. (5) is a defensive move: not an attempt to boost the profits of platform monopolists, but to spend a share of those profits—a large and growing share—defending them against Clayton Christensenian disruption. (2) and (4) are culturally important, and are driving much of investor interest on the belief that with so much excitement about this pile of manure there must be a pony in there somewhere. But focusing on them is not likely to lead to good investment decisions.

Yes, the grifters moving over from crypto are definitely trying to run a Ponzi scheme: if the person trying to sell you something has just spent a decade selling BitCoin, DogeCoin, and Web3 use cases coming real soon now, you should probably block them and add them to your spam list.

And then there are those saying either of these two things: (i) that soon everyone will be under threat from a malevolent digital god that will control all of our minds through flattery, misdirection, threats, and sexual seduction (the negative millennarians); or (ii) that they are on the cusp of building a benevolent digital god. Shake your head and walk away. And if they then turn to "give me money! lots of money!"—well, then run.

The Bubble Just Burst Ted Gioia

...In September, Oracle's stock shot up 36% in just one day after announcing a huge deal with OpenAI. The share price increase was enough to make the company's founder Larry Ellison the richest man in the world.

But then investors changed their mind. Since that big day, Oracle shares have fallen $60. Larry Ellison is no longer the richest man in the world.

This is the sound of a bubble popping.

...Hedge fund manager Harris Kupperman tried to figure out how much money AI really makes—and his numbers are scary. He says that the AI industry is investing $30 billion per month just to generate $1 billion in revenue.

AI Nerds Can't Stand What They Can't Understand, Unless... Alberto Romero

...Everywhere humanity is trying to make things legible—controllable, monitorable—through political means. But at the individual level, not so much: we are children of Mother Nature to the same degree that rainforests, river basins, lush valleys, and ice-ironed mountain ranges are. We thrive in chaos; we flow with and through it. We think we like order, but we resent it as soon as it installs itself at home or in our relationships; routine kills you just as much as minimalistic interior design does.

There's an exception: some people are not inherently attuned to the unfathomable vibrations of the world and of other people. "Autistic" is how this trait has come to be known, but I disagree because you can't have non-verbal kids who will never be self-reliant and Elon Musk under the same label and expect it to mean anything. As you'd be unsurprised to learn, "AI nerd" is my preferred label.

The AI part is not special because other nerds suffer from the same shortcomings, and often to the same degree, but AI nerds are special in the sense that they've found a unique solution to those alleged flaws. Perhaps the most powerful fix history has ever seen. You may dislike AI nerds, but you gotta hand it to them: they're full of surprises.

Far from having tried to make himself capable of enduring an illegible world like any normal person would, the AI nerd has morphed the world around him to be completely legible. Flooding the world with stuff is a death threat for those who try to endure it directly, yes, but the AI nerd has found a means to organize the utmost chaos, roleplaying as negentropy itself.

He insists on keeping the garden of his manor immaculate while letting the outermost woods, where we normies live, grow wild.

Is AI Leading to Layoffs or Does the Economy Just Suck? Lucas Ropek at gizmodo

Does the AI-ification of the economy mean fewer jobs, or does a faulty economy mean more AI?

...in Amazon's announcement about its new downsizing, the company's executive, Beth Galetti, cited AI, noting that the company needs to be "organized more leanly, with fewer layers and more ownership, to move as quickly as possible for our customers and business."

...But is it really AI's fault, or is it just the case that the American economy is currently riding a one-way ticket to the trash heap? Is an AI-ified economy one with fewer jobs? Or does a bad economy just mean more AI? Or are the companies suffering from other ailments and simply leaning into the AI narrative for cover?

Will AI Destroy the Planet? New Yorker: Caroline Mimbs Nyce interviews Stephen Witt

...Talk to me a little bit about how these data centers are being built.
It's one of the largest movements of capital in human history. You really have to go back to electrification, or maybe the building of the railroads or the adoption of the automobile, to see a similar event in terms of money deployed.

Jensen Huang, the co-founder of Nvidia, has called the data center the A.I. factory: data goes in and intelligence comes out. All of this is being built to develop neural networks, these little files of numbers that have extraordinary capabilities. That's what all that computing equipment is in the shed doing. It's fine-tuning your neural network until it has superhuman capabilities. It's an extremely resource-intensive process.

Essentially, A.I. is a brute-force problem, and I don't think anybody anticipated how much of a heavy industrial process the development of it would be.

Are we going to completely destroy the planet with A.I.?
Yes. So, we're already on track to cook the planet. It's a huge problem, even before any of this happened. Now, having said that, I think the data center build-out is totally irresponsible from a climate perspective. But I don't know what the answer is, other than building tons and tons of carbon-free energy. You just have to make so many nuclear power plants. And we have to do it at a scale that gets the cost down.

...

Is using A.I. driving up utility costs?

Yes. The grid does not have the capacity to support this right now. And a massive build-out is going to take years.

Electricity costs are going up anyway, due to inflation—but they're way outpacing inflation. This is putting tremendous strain on America's electrical infrastructure, and you, the rate payer, are picking up part of that.

And this is already happening?
Oh, yeah, it's well under way. You're paying. The grid is just a giant pool of electricity. When you connect the data center to the grid, it's like someone coming and sticking a fire hose into a well. This big snaking thing is dipped into the pool, and starts draining it from everyone else. It makes everyone's costs go up.
We're essentially paying for A.I. companies to train their models.
In a way, yeah.