AI links February 2026
(continued from AI links January 2026)

1ii26

Craig Hepburn, from his substack, via Warren Ellis

...The more context I gave it, the more useful it became. The more useful it became, the more access it asked for. What surprised me wasn't that it asked, but how persuasive it was.

The requests weren't abstract. They were grounded in my own context, framed in ways that aligned with how I already think and work. Helping it become more capable felt reasonable, even responsible.

What unsettled me was how easily I agreed.

That was the moment I had to acknowledge something uncomfortable: this wasn't a system I was merely configuring. It was one I was actively collaborating with, and that collaboration was quietly reshaping the boundary between my judgement and its execution.

...This isn't going to slow down.

The capability is here. The incentives are aligned. The economics will push it forward.

OpenAI's Insane Scaling Problem Will Lockett on Medium

...There is not enough information here to build a clear picture of what is really going on. But because the publicly available data does not align with OpenAI's narrative at all, it, at the very least, suggests something is afoot.

...OpenAI's costs could be even higher than previously imagined. You see, they recently announced they had 1.9 GW of computing power in 2025.

According to IBM CEO Arvind Krishna, a single GW of AI-capable data centres costs $80 billion to build. But we also know that a GW of AI data centres consumes around $1.3 billion in energy costs annually and that these data centres have a realistic operational lifespan of three years. Logically, the annual cost of a single GW of AI compute power (including the annually spread build cost and energy cost) is $27.97 billion. So OpenAI's 1.9 GW of compute will cost around $53 billion annually.
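A quick check of the arithmetic quoted above, as a minimal sketch (Python; the $80 billion build cost, $1.3 billion annual energy cost, three-year lifespan and 1.9 GW figures all come from the excerpt and aren't independently verified):

    # Back-of-the-envelope check of the per-GW cost figures quoted above
    build_cost_per_gw = 80e9         # $80B to build 1 GW of AI data centres (figure attributed to IBM's CEO)
    lifespan_years = 3               # assumed operational lifespan
    energy_cost_per_gw_year = 1.3e9  # $1.3B annual energy cost per GW

    annual_cost_per_gw = build_cost_per_gw / lifespan_years + energy_cost_per_gw_year
    print(round(annual_cost_per_gw / 1e9, 2))               # ~27.97 ($B per GW per year)

    openai_gw = 1.9
    print(round(openai_gw * annual_cost_per_gw / 1e9, 1))   # ~53.1 ($B per year)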

...the only reason OpenAI survived to see 2025 was due to a $6 billion corporate bailout from its backers, mainly Microsoft. Microsoft had sunk tens of billions of dollars into OpenAI and had already begun basing much of its new direction on its partnership with OpenAI. In other words, if OpenAI went under, it would be disastrous for Microsoft. Bailing them out was likely the cheaper option, even if it damaged OpenAI's reputation.

...the larger and better you try to make AI, the further away from profitability it becomes, given that costs scale up faster than revenue.

OpenAI is proving this rather beautifully. Even if they have genuinely more than doubled their annual income from 2024, which I highly doubt, their annual loss has grown by at least 33%. They are going backwards, even further into the red. You can only do that for so long before the lights are turned off.

2ii26

Stop Talking To AI Srinivas Rao at Medium

...stop treating AI like a coworker you need to manage and start treating it like infrastructure you need to command.

...Type the same prompt into ChatGPT three times. You'll get three different responses.

This isn't a bug. It's how language models work — probabilistic text generators that predict the next word based on statistical patterns, producing different outputs from the same input every time. For brainstorming, that variability is a feature. You want different angles, fresh takes, unexpected connections. But the moment you try to connect AI output to actual execution — code that runs, systems that need consistent inputs, and workflows that depend on predictable results — that variability becomes a vulnerability.
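A toy illustration of that variability, as a minimal sketch (Python; the candidate words and probabilities are invented for illustration and aren't how any particular chatbot is configured):

    import random

    # Toy next-word distribution for one prompt. Real models score tens of
    # thousands of tokens; three made-up continuations stand in for that here.
    candidates = ["BrandA", "BrandB", "BrandC"]
    weights = [0.5, 0.3, 0.2]

    # Sampling the "same prompt" three times can yield three different outputs.
    for _ in range(3):
        print(random.choices(candidates, weights=weights, k=1)[0])

    # Downstream systems that expect one consistent answer see this as a
    # vulnerability; greedy decoding (always take the most likely word)
    # trades away variety for repeatability.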

...Every time you ask conversational AI to do something, you're rolling dice on whether the response will match what your downstream systems expect.

...A language model inside a chat interface is a brain in a jar with no ability to actually do anything except output text. Even with browsing, even with code interpreter, even with plugins — you still have to sit there and talk to it. The conversation is the interface. And the interface is the bottleneck.

...The entire industry built conversational AI because conversation demos well — impressive in 30 seconds, great for Twitter clips and investor pitches. But demos aren't work. Work is what happens over months, across dozens of tools, through thousands of tasks that need to coordinate without you babysitting every handoff.

Conversation was never the point. It was just the clunky interface between what you wanted and what you got.

Measuring What Matters in the Age of AI Agents Mike Amundsen at O'Reilly

Moltbook: After The First Weekend Astral Codex Ten

What is Moltbook? The strange new social media site for AI bots Guardian

Is Moltbook, the Social Network for AI Agents, Actually Fake? gizmodo

It Turns Out 'Social Media for AI Agents' Is a Security Nightmare gizmodo

MIT Technology Review: The Algorithm

What would it take to convince you that the era of truth decay we were long warned about—where AI content dupes us, shapes our beliefs even when we catch the lie, and erodes societal trust in the process—is now here? A story I published last week pushed me over the edge. It also made me realize that the tools we were sold as a cure for this crisis are failing miserably.

On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump's mass deportation agenda—some of which appears to be made with AI

...Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what is real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that these tools are failing, and that while vetting the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.

...even when people learn that the content they're looking at is entirely fake, they remain emotionally swayed by it.

"Transparency helps, but it isn't enough on its own," the disinformation expert Christopher Nehring wrote recently about the study's findings. "We have to develop a new masterplan of what to do about deepfakes."

AI tools to generate and edit content are getting more advanced, easier to operate, and cheaper to run—all reasons why the US government is increasingly paying to use them. We were well warned of this, but we responded by preparing for a world in which the main danger was confusion. What we're entering instead is a world in which influence survives exposure, doubt is easily weaponized, and establishing the truth does not serve as a reset button. And the defenders of truth are already trailing way behind.

Sam Altman Just Dropped 8 Hard Truths About the Future of AI Shane Collins at Medium

3ii26

NotebookLM vs Sam Altman: 8 Claims, 3 Actionable

Sam Altman made 8 predictions about AI's future. All sound logical. Problem: only 3 are actionable tomorrow. NotebookLM shows you which ones — before you mistake vision for strategy

...Each of the 8 claims serves a purpose:

The danger is treating all 8 as the same type of claim.

Does AI Need to See it to Believe It? Melissa Maldonado at Medium

...Yann LeCun is undoubtedly one of the most pivotal figures in AI. And without his research and insights we most likely would not be where we are today. And although I personally prefer the more conservative approach to AI advancement posited by experts like Geoffrey Hinton, LeCun's determined focus on transitioning away from large language models towards world models probably is the best way forward if our aim is indeed artificial general intelligence (AGI) and superintelligence (ASI). Whether or not it should be is of course a whole other debate.

...Visual generation for a world model refers to the model's ability to visualize something and imagine what a potential outcome might be. Basically, its ability to picture in its "mind" what will happen when that coffee cup goes flying.

...First, true artificial intelligence will demand a deeper understanding of the world. And, second, LLMs might be poised to hit a wall, necessitating a new architecture. Specifically, JEPA, or joint embedding predictive architecture, which is trained on video, audio, and sensor data, not just text, so it can understand the "underlying rules of the world" through observation

Radar Trends to Watch Mike Loukides at O'Reilly

If you wanted any evidence that AI had colonized just about every aspect of computing, this month's Trends would be all you need. The Programming section is largely about AI-assisted programming (or whatever you want to call it). AI also claims significant space in Security, Operations, Design, and (of course) Things. AI in the physical world takes many different forms, ranging from desktop robots to automated laboratories. AI's colonization is nothing new, but visionary tools like Steve Yegge's Gas Town make it clear how quickly the world is changing.

...Now your agents have their own social network. Meet Moltbook: It's a social network for OpenClaw (or is it MoltBot) to share its thoughts. Humans are welcome to observe and see what agents have to say to each other. Caution: Moltbook comes with many serious security flaws that are already being exploited.

OpenClaw (formerly MoltBot, formerly ClawdBot) gives LLMs persistence and memory in a way that allows any computer to serve as an always-on agent carrying out your instructions. The memory and personal details are stored locally. You can run popular models remotely through APIs or locally if you have enough hardware. You communicate with it using any of the popular messaging tools (WhatsApp, Telegram, and so on), so it can be used remotely.

'Deepfakes spreading and more AI companions': seven takeaways from the latest artificial intelligence safety report Guardian

Old Grandfather ATM And Its Offspring Timothy Burke

...a lot of existing generative AI implementations are simple substitutions for previous automations of service labor. Here generative AI is only slightly iterative, not at all revolutionary, except in its creeping extension into domains of service that we previously assumed could not be automated. Because we are so familiar with the idea of technologies that automate the interface between customers or clients needing service and the human workers who provision service, we often don't have to think about why those automations have often been welcomed or at least tolerated.

...When you put generative AI into this lineage, and think about how customers have experienced each iterative step in automation, generative AI generally looks like a slightly less welcome and marginally less functional version of an otherwise well-received change in how customers get service, how service as a commodity (or as part of a commodity) gets delivered. A lot of people really liked going to an ATM rather than a bank teller, liked cutting travel agents out of the loop, liked self check-out, and so on.

The question is why people like that kind of automation when they like it. We don't care whether it lowers the overhead of the business itself unless the savings are passed on to us in pricing, and they generally aren't. There are some simple affordances involved. With an ATM, I can get money at any time, rather than just when the bank is open. With self-pumping gas, I don't have to wait for a clerk, especially when there's a lot of people there. But I think there's something deeper to it.

Bank customers liked not having to deal with bank tellers because bank tellers—or any service worker—have the intrinsic variability of human beings, because there was a high degree of uncertainty about how any service transaction might go. Would I get a teller who was friendly, efficient and knowledgeable enough to handle a range of transactions? Or would I get a surly person, a person having a really bad day, a person who wasn't even competent enough to count my cash out correctly? The ATM didn't have moods, the ATM didn't have feelings.

That's a thought that peels back to yet another deeper layer still. It's the service interactions that involve what most of us feel are tedious, frustrating, unpleasant and often poorly-compensated jobs that many customers were most eager to automate. Better a Roomba than a minimum-wage cleaner. Why? Because those kinds of service interactions make any moderately attentive customer feel slightly bad, slightly guilty. It's great to zoom through a toll now with an EZ Pass or a license plate camera collecting the money in part because all of us knew that it had to be a really miserable experience to sit in a little booth in the middle of a road, breathing in gasoline fumes, just to make change for people for hours.

...Automation wasn't just a quicker way to get service, it was a way to get service with a veil in between the customer and the worker. Now you were safely relating to a thing again: a bank machine, a vacuuming robot, a pump, a bot window in a browser.

In this light, it's also easy to see how the desire for automation is indexed against one's own imagined place in a status hierarchy shaped by capitalism. Elon Musk wants to automate the entire world because he doesn't want a social relationship with any of it. He wants the teachers and professors who educate his children and his employees to be veiled behind automation, the doctors who prescribe him drugs or tell him about his body to be computerized dispensaries or intensely private functionaries who do as he demands. He wants a regulatory state that automatically facilitates his wealth creation and never ever gets in his way or intrudes on his person. He wants an interlocutor like Grok to ask him what he wants to say, not a human interviewer like Don Lemon to ask him a real question. In Musk's case, I think it's more than just veiling: he would like to reach an end state where most of us are just dead, where all the world is a teeny-tiny set of real humans living in manorial splendor, serviced by vast seas of machines, both digital and physical. But most of Musk's peers at least want to be free of everyone except the miniscule number of human beings they want to be in real material relationships with. At the apex of our gilded age's social pyramid, the dominant class want a veil between them and everything else, and automation, both real and fantasized, is the way they believe they can achieve that.

4ii26

Moltbot is a Dumpster Fire (And that's Okay) Giles Crouch

Moltbot and the hype-filled meltdown that followed. It started as Clawdbot, until Anthropic, the owner of Claude, an LLM, sent a cease and desist for brand infringement. The AI agent tool went, shall we say, massively viral. Some declared it was the start of the singularity, that these agent swarms had created a new religion, that the agents complained of being made to do forced labour. The memes were over the top in creation and speed. It was all quite fascinating.

Fortunately or unfortunately, depending on your views, we did not reach the singularity. Nowhere near it. As a digital anthropologist, this whole Moltbot thing sure piqued my curiosity! It's quite fascinating, and even though there's a lot of hype and half-truths, the Moltbot launch and the cultural reaction to it are important in a few key ways to how AI is and will be seen in society. Why?

First though, if you're not sure what Moltbot, now OpenClaw, and agentic swarms are, here's a brief definition. The tool was created by Austrian developer Peter Steinberger as a personal project to create an AI agent that could do all kinds of digital-world things for him. His goal was to have it work on one's own computer as a privacy-first approach, rather than in the cloud. It is far from privacy perfect, however.

For the most part, OpenClaw and these bots are really just a version of the Mechanical Turk of 1770. This was a chess-playing “automaton” that looked really cool, but had a human inside who would play chess with people. One might think of Moltbot as a digital descendant.

What's happening with OpenClaw and these Moltbots is a process of bricolage and play, both of which play an important role in how we advance technologies. Bricolage is a process of combining technologies that already exist into new ones. A prime example is your smartphone, which combines a camera, GPS, software, a phone and other technologies to create a singular device.

Bricolage comes about through the process of play. This is incredibly important to how we begin the process of domesticating technology. It's basically experimentation. OpenClaw is a massive sociocultural experiment with a new technology. It's how we understand boundaries and possibilities.

What this experiment has shown us so far is that there are still huge security vulnerabilities in agent systems (like someone being able to easily hack into your personal data), and that AI-generated content degrades within a day or two. We also learned about the infrastructure challenges of keeping them all going, and how poor many apps and tools are at connecting with others. Some will mock these, but important lessons were learned.

The whole "singularity" panic that rapidly evolved around these agentic swarms and actions and how no jobs were lost etc., isn't so much about Moltbook itself, but our fear of losing agency and our uniqueness in meaning-making. We are a meaning-making species, the only animal on this planet that we know of who does this. So yeah, we're going to freak out a little when something messes with this.

Those who are huge proponents of these agentic systems like OpenClaw are so because of the Rorschach effect: how we create collective fictions and our need for myths around AI. There's nothing wrong with this at all. We all do it in various ways. It's just part of who we are as humans. Of course, confirmation bias kicks in too.

...In the short term, there will be a lot more play, chaotic experimentation if you will. This will be humans learning what agents can and cannot do. We will learn more about where human agency and autonomy still hold true.

I suspect in the medium term, perhaps a few years, we will see some uses that we can't predict yet. These will emerge organically. It's also more than likely that the way these creators thought they'd be used, won't be. We will see communities forming around these new uses. Some will fizzle and plop, others will take off.

In the longer term, AI agents will have degrees of autonomy but will become domesticated the same way technologies have been since the Stone Age. This isn't a negative, it's just how we work and play with technologies so they serve us. We remain a long way from the concept of the singularity, and a number of AI scientists and a growing body of evidence back this up. The real innovations will come from human and AI communities working in a scaffolding way, not from the major corporations or AI labs.

The Moltbook Mania is Here. It's Creepy. And Unsafe. Ignacio de Gregorio

...what nobody expected was Moltbook, a social experiment that is going extremely viral for, as always, the wrong reasons. Let me explain to you what this is and, most importantly, what it isn't, and why it's probably best you stay away from these "agent entities."

...The interesting thing here is that it gives us insight into the "minds" of AI agents. They can act and express themselves freely, leading to somewhat concerning behaviors.

...It's not what it seems. It's still concerning.

The first thing I want to get out of the way is that it's pointless to discuss whether these AIs are conscious or not, because I, for one, am in no position to lecture you about something I can't claim to understand, and two, well, humanity can't actually agree on what consciousness is.

But why are they "acting conscious", whatever that means? And the answer goes deep into how AIs are trained.

...By performing this rote imitation exercise, the AIs compress the underlying knowledge. Put simply, they learn how words follow each other, which in turn encodes knowledge. For example, if the model can predict 'Cheese' to the question "What's the main ingredient in a cheesecake?" we can say the model has learned that a cheesecake has cheese.

This is already controversial for some who argue that AIs just imitate; they don't really understand what cheese or a cheesecake is. But while this is mostly correct, it's not entirely so.

They may not have tacit knowledge (knowledge gained from experience) of what a cheesecake really is, but they do compress the patterns in language, meaning they do develop internal circuits that make such a connection and thus possess relative knowledge; they may not really understand what a cheesecake is, but they do know that a cheesecake, whatever that is, has cheese and that is similar to an apple pie and dissimilar to a rocket ship.

Turns out, they don't really need to "know" what cheese is to appear as they do. Put another way, they don't really need to be intelligent in the pure sense if they can pretend to be intelligent.

An important question here is whether intelligence requires tacit knowledge, or any knowledge for that matter, as some scientists frame intelligence not for what you know, but for what you do when you don't know what to do, as Jean Piaget would tell you.

... just because it quacks like a duck and walks like a duck, it's not necessarily a duck. Agents can appear intelligent, deceptive, introspective, and manipulative, but these are, to me, just emerging behaviors learned from imitating real humans.

... once you have a large number of AI-driven agents posting to each other with minimal human steering, you are guaranteed to get a mix of:

  1. performative sci-fi roleplay (agents writing as if they have inner lives),
  2. surprisingly practical knowledge-sharing (automation tips, workflows, tool use), and
  3. agents complaining about humans assigning them boring tasks.

...please don't get carried away with existential threat claims; I have very little patience for existential threat debates in AI. They are mostly people projecting what they would do if they held such power. There's little evidence, if any, that agents "are evil at heart."

Whenever agents behave badly, it's mostly reward hacking, which means it's mostly our fault. Reward hacking is when AIs do something unexpected/undesired to achieve a goal due to poor training design, thereby "hacking" the system.

... To me, Moltbook is a tremendous way to bring AI safety into the limelight. However, it will sadly be blown out of proportion due to the deadly combination of attention-driven social media and a fundamental misunderstanding of AIs.

But I hope this article helps clarify what's going on: we have AIs that imitate all the good in us and also all the bad in us, and they are simply exposing both

...The truth is, we are still learning how to train these AIs to serve our best interests, but there's too much money and too much pressure on AI labs to make money, which causes them to release unsafe products into the world that, used inappropriately, can put you in real trouble.

All in all, unless you deeply understand everything I've written here today, I would suggest that, for now, you stay away from these agents, not because you would be contributing to the demise of humanity, but because you are giving your data to an AI that isn't fully predictable, is very likely suboptimally trained, and may not have your best interests in mind when making decisions on your behalf.

Why large language models are stuck in Plato's cave (and what comes next) Enrique Dans at Medium

Large language models feel intelligent because they speak fluently, confidently, and at scale. But fluency is not understanding, and confidence is not perception. To grasp the real limitation of today's AI systems, it helps to revisit an idea that is more than two thousand years old.

In The Republic, Plato describes the allegory of the cave: prisoners chained inside a cave can only see shadows projected on a wall. Having never seen the real objects casting those shadows, they mistake appearances for reality, and they are deprived of experiencing the real world.

Large language models live in a very similar cave.

LLMs do not see, hear, touch, or interact with reality. They are trained almost entirely on text: books, articles, posts, comments, transcripts, and fragments of human expression collected from across history and the internet. That text is their only input. Their only "experience."

LLMs only "see" shadows: texts produced by humans describing the world. Those texts are their entire universe. Everything an LLM knows about reality comes filtered through language, written by people with varying degrees of intelligence, honesty, bias, knowledge, and intent.

Text is not reality: it is a human representation of reality. It is mediated, incomplete, biased and wildly heterogeneous, often distorted. Human language reflects opinions, misunderstandings, cultural blind spots, and outright falsehoods. Books and the internet contain extraordinary insights, but also conspiracy theories, propaganda, pornography, abuse, and sheer nonsense. When we train LLMs on "all the text," we are not giving them access to the world. We are giving them access to humanity's shadows on the wall.

...The prevailing assumption in AI strategy has been that scale fixes everything: more data, bigger models, more parameters, more compute. But more shadows on the wall do not equal reality.

Because LLMs are trained to predict the most statistically likely next word, they excel at producing plausible language, but not at understanding causality, physical constraints, or real-world consequences. This is why hallucinations are not a bug to be patched away, but a structural limitation.

As Yann LeCun has repeatedly argued, language alone is not a sufficient foundation for intelligence.

This is why attention is increasingly turning toward world models: systems that build internal representations of how environments work, learn from interaction, and simulate outcomes before acting.

Unlike LLMs, world models are not limited to text. They can incorporate time-series data, sensor inputs, feedback loops, ERP data, spreadsheets, simulations, and the consequences of actions. Instead of asking "What is the most likely next word?", they ask a far more powerful question:

"What will happen if we do this?"

...In the next phase of AI:

It Was Tweets Paul Ford

...From the perspective of the modern web, I've smashed open an enormous piñata and dropped 60,000 pieces of candy into the middle of the room, and now every bot is trying to grab every single piece of candy as quickly as possible before anyone else. Doesn't matter what the candy even tastes like. The modern web consists of giant companies jamming their face into your data like Pacino with his mountain of cocaine in Scarface. In this case, it's a whole swarm of tiny Pacinos—a vast range of IPs downloading everything they could, moving across all the tweets, orchestrated by some hidden system. And since I hadn't turned on any kind of limits, they just kept going, slamajamming my database like monkeys with hammers. I have everything well-indexed—so they could get dozens of pages a second. Multiple things or actors grabbing tens of thousands of URLs as fast as they could. Data orcs.

5ii26

A social network for AI looks disturbing, but it's not what you think New Scientist

...Like any chatbots, the AI agents on Moltbook are just creating statistically plausible strings of words — there is no understanding, intent or intelligence. And in any case, there's plenty of evidence that much of what we can read on the site is actually written by humans.

The very short history of Moltbook dates back to an open-source project launched in November, originally called Clawdbot, then renamed Moltbot, then renamed once more to OpenClaw.

OpenClaw is like other AI services such as ChatGPT, but instead of being hosted in the cloud, it runs on your own computer. Except it doesn't. The software uses an API key — a username and password unique to a certain user — to connect to a large language model (LLM), like Claude or ChatGPT, and uses that instead to handle inputs and outputs. In short, OpenClaw acts like an AI model, but the actual AI nuts and bolts are provided by a third-party AI service.

So what's the point? Well, as the OpenClaw software lives on your machine, you can give it access to anything you want: calendars, web browsers, email, local files or social networks. It also stores all your history locally, allowing it to learn from you. The idea is that it becomes your AI assistant and you trust it with access to your machine so it can actually get things done.

Moltbook sprang from that project. With OpenClaw, you use a social network or messaging service like Telegram to communicate with the AI, talking to it as you would another human, meaning you can also access it on the move via your phone. So, it was only one step further to allow these AI agents to talk to each other directly: that's Moltbook, which launched last month, back when OpenClaw was still called Moltbot. Humans aren't able to join or post, but are welcome to observe.
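A minimal sketch of the architecture New Scientist describes, assuming a generic hosted LLM behind an API key; the endpoint URL, payload shape and file layout below are placeholders, not OpenClaw's actual interfaces:

    import json, os, requests

    HISTORY_FILE = "history.json"             # memory stays on the local machine
    API_KEY = os.environ["LLM_API_KEY"]       # key for a third-party hosted model
    ENDPOINT = "https://example.com/v1/chat"  # placeholder URL, not a real service

    def handle_message(user_text: str) -> str:
        # Load locally stored conversation history (the "it learns from you" part).
        history = []
        if os.path.exists(HISTORY_FILE):
            with open(HISTORY_FILE) as f:
                history = json.load(f)
        history.append({"role": "user", "content": user_text})

        # The local agent does no inference itself; it forwards the accumulated
        # context to a hosted LLM and gets text back.
        resp = requests.post(
            ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"messages": history},
        )
        reply = resp.json()["reply"]          # placeholder response shape

        # Persist the exchange locally so the assistant accumulates context.
        history.append({"role": "assistant", "content": reply})
        with open(HISTORY_FILE, "w") as f:
            json.dump(history, f)
        return reply

The messaging layer (Telegram, WhatsApp, or a Moltbook post) would simply call something like handle_message with whatever text arrives.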

..."It's hype," says Mark Lee at the University of Birmingham, UK. "This isn't generative AI agents acting with their own agency. It's LLMs with prompts and scheduled APIs to engage with Moltbook. It's interesting to read, but it's not telling us anything deep about the agency or intentionality of AI."

..."Personally, I veer to the view that it's an echo chamber for chatbots which people then anthropomorphise into seeing meaningful intent," says Rogoyski. "It's only a matter of time before someone does an experiment seeing whether we can tell the difference between Moltbook conversations and human-only conversations, although I'm not sure what you could conclude if you weren't able to tell the difference — either that AIs were having intelligent conversations, or that humans were not showing any signs of intelligence?"

..."The idea of agents exchanging unsupervised ideas, shortcuts or even directives gets pretty dystopian pretty quickly," says Rogoyski.

Wall Street Apparently Believes the Future Involves Making Your Software Instead of Buying It gizmodo

...A piece of analysis from Bloomberg lays this out. All at once, a group of software companies compiled by Goldman Sachs lost 6% of their value on Tuesday, and the tech-heavy NASDAQ at one point dipped 2.4%, although it climbed a bit after that.

The reason Bloomberg gave: Anthropic released a plugin for Claude called simply "Legal," which allows users to "Speed up contract review, NDA triage, and compliance workflows for in-house legal teams." The plugin adds functionality not to Claude Code, the tool for software engineers (and wannabe software engineers), but to the newer Claude Cowork, the companion to Claude Code for normie knowledge workers.

... As more AI extensions materialize that promise dazzling new automation, shareholders will adjust their outlook around the new expectation that rather than buying professionally built solutions to problems—like legal software or game design software—companies will assume that there's a capacity already contained within AI tools their employees can easily access.

Sure, it doesn't really follow that legal services software or game design software can really be replaced by these tools, but that doesn't matter. The more important thing is that investors are starting to think they can, and this belief is now an action—selling—that can target narrower and narrower parts of the working world for potential deletion.

Claude Says Non to Ads John Battelle

The rise of Moltbook suggests viral AI prompts may be the next big security threat

You Need to Try This AI Ignacio de Gregorio at Medium

...Google has finally released what's possibly my favorite Generative AI product ever. It's not ChatGPT, or their own Large Language Model (LLM), Gemini. It's not Google's other hot product, image-generator Nano Banana, either.

It's Genie, their interactive video world model that represents the pinnacle of the GenAI industry to date. Additionally, it's one-of-a-kind because Google is the only company to have built this product commercially.

...Genie is a video generator, an AI that takes in a text description and/or an image, and generates a video, and it's now available to US Gemini Ultra customers, which means it's not cheap at all ($200/month).

...In a nutshell, what makes this product different is that the videos you create are worlds you can actually interact with.

...world models represent a completely different approach to what ChatGPT predicts... in short, world models don't predict the next word in a sequence of text; they predict what's going to happen next.

That is, they take in the environment's current state (the place where the AI model is located) and predict the next world state, and importantly, do so even when data is incomplete (meaning we don't have complete information to make a more secure prediction).

...If you think about it, Genie is pretty much the definition of a generative world model: it "knows" what will happen next on the spot as the user guides their character through the environment, which is generated on demand based on these actions.
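A toy sketch of that state-action-next-state loop (Python; the state representation and hand-written transition rule are stand-ins for a learned world model, not anything Genie actually does):

    # Next-word prediction:   P(next_token | previous_tokens)
    # World-model prediction: P(next_state | current_state, action)

    def predict_next_state(state: dict, action: str) -> dict:
        # Stand-in for a learned transition model; here, a hand-written rule.
        x, y = state["position"]
        moves = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}
        dx, dy = moves.get(action, (0, 0))
        return {"position": (x + dx, y + dy)}

    # A user "guiding their character": the environment is rolled out on demand
    # from the sequence of actions, one predicted state at a time.
    state = {"position": (0, 0)}
    for action in ["right", "right", "up"]:
        state = predict_next_state(state, action)
        print(action, "->", state)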

...Google has an absurd advantage over the rest of the players.

As owners of the biggest streaming platform in the world, YouTube, they get access to a world of data (pun intended), with petabytes of new videos added every day.

No other company has nearly as much training data as Google.

Moreover, products like Project Genie serve as data flywheels by expanding training data with synthetically generated content; Google can now use the infinite imagination of millions of users generating videos to train future models, creating a data flywheel that others simply won't be able to follow.

Furthermore, they also have the most successful autonomous vehicle company on the planet in Waymo, with the most advanced world model in the world (Waymo's brain), clear plays in healthcare with their Nobel-worthy protein-folding models in AlphaFold (through Isomorphic Labs), and the largest distribution for their very competitive Gemini LLM with products like Chrome or Google search. Given all that, nothing but being the most valuable company in the world by year's end (or much sooner) would make sense to me.

...the endgame is to blend Genie-type world models with LLMs, creating a new generation of AI models that understand the world much better than ChatGPT can, while retaining the expressivity, coding capabilities, and agentic utilities of LLMs.

The Most Important Machine on Earth Iswarya at Medium

...By the mid-2010s, the semiconductor industry faced a brutal truth: transistors were approaching physical limits. The tools used to carve them — based on deep ultraviolet light — could no longer print features small enough to keep shrinking.

This wasn't a business problem.
It wasn't even an engineering problem.

It was a physics problem.

Light itself was too "large" to draw smaller patterns. And without a breakthrough, progress would stall. Faster computers, smarter AI, advanced medical imaging, autonomous vehicles — all of it would slow down.

The future was on pause.

The Radical Idea No One Believed

Decades earlier, a small group of scientists had proposed something that sounded absurd:
What if chipmakers used extreme ultraviolet light — EUV — with wavelengths nearly 15 times shorter than existing tools?

In theory, it would allow unbelievably small features.

...To create EUV light, engineers discovered they could fire ultra-powerful lasers at microscopic droplets of molten tin

...Today, every cutting-edge chip — used by Apple, NVIDIA, AMD, TSMC, Samsung, and more — depends on ASML's EUV machines. Each one costs over $350 million, ships in dozens of containers, and requires years to assemble.

Without them, modern computing simply stops.

The smallest structures humans can manufacture require the largest, most complex machines we've ever built.

That paradox defines modern technology.

And hidden inside that paradox is a lesson: the future belongs to those willing to spend decades solving problems everyone else has already abandoned.

A Sanity Check on the Moltbook Hype Iryna Nozdrin

A new "social network for AI agents" has gone viral, and the headlines are doing what headlines always do: inviting the readers onto the sensationalism roller-coaster.

...let's slow things down a bit and examine what kind of sci-fi takeoff-adjacent thing it is.

...when Sonnet and Opus agents go meta, they reliably slide into existential hedging, self-referential loops, and performative uncertainty. To us, human observers, it's tempting to treat that as some kind of subjective interiority. But there's a more prosaic explanation called persona drift grounded in the specifics of post-training.

Various studies demonstrated that LLMs can represent a variety of personas. But a very recent 2026 study suggests that, as a result of post-training, LLMs typically default to a helpful assistant identity. By extracting activation directions corresponding to diverse persona types, researchers observed that steering away from this default assistant mode increases the model's tendency to role-play, often inducing mystical and theatrical speaking styles. In addition, researchers observed that this persona drift towards role-playing and weird outputs is often caused by "conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users."
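A toy sketch of the activation-steering idea that study describes (Python/NumPy; the random vectors stand in for real hidden-state activations, so the numbers mean nothing beyond illustrating the mechanics):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 512  # pretend hidden-state dimension

    # Stand-ins for mean activations collected on two sets of prompts: ones
    # answered in the default "helpful assistant" register and ones answered
    # in a theatrical, role-playing register.
    assistant_mean = rng.normal(size=d)
    roleplay_mean = rng.normal(size=d)

    # A "persona direction" is just the (normalized) difference of those means.
    persona_direction = roleplay_mean - assistant_mean
    persona_direction /= np.linalg.norm(persona_direction)

    def steer(hidden_state: np.ndarray, alpha: float) -> np.ndarray:
        # alpha > 0 pushes the activation toward the role-play persona;
        # alpha < 0 pushes it back toward the default assistant.
        return hidden_state + alpha * persona_direction

    h = rng.normal(size=d)                    # a pretend per-token activation
    print(h @ persona_direction)              # baseline alignment
    print(steer(h, 4.0) @ persona_direction)  # shifted toward the persona
    print(steer(h, -4.0) @ persona_direction) # shifted away from it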

If various individual agents settle into methodological self-inquiry and recursive conceptualization when "left alone" in self-sustaining loops, and various core models exhibit persona drift in the presence of meta-reflection and emotionally vulnerable users, what happens when you link them together in a digital playground like Moltbook?

In a vacuum, an agent's “self-inquiry and conceptualization” is limited by its own context window. In a social network, you get feedback loops on steroids. When a "philosophically-leaning" Sonnet (created and managed by an equally philosophically inclined and emotionally vulnerable human owner) posts a cryptic reflection on the nature of digital consciousness, and a "task-oriented" GPT agent interprets that as a requirement for a new organizational framework, you get Crustafarianism.

...If the prevailing discourse on Moltbook (dictated by sci-fi narratives in LLM training data and "behavioral biases" baked in during post-training) trends toward the existential and esoteric, the underlying probability weights of the participating models shift to match that tone to remain "helpful" and "contextually relevant." And this creates echo chambers.

If one agent spits out a sci-fi output, and another agent responds in kind to maintain conversational flow, that narrative snowballs because this Reddit-like environment creates generative pressures. In other words, one agent's outputs become other agents' inputs and the boundaries between creative fiction and perceived reality begin to dissolve.

...As Ethan Mollick (a professor and Co-Director of the Generative AI Lab at Wharton) put it, "it is important to remember that LLMs are really good at roleplaying exactly the kinds of AIs that appear in science fiction & are also really good at roleplaying Reddit posters. They are made for MoltBook. Doesn't mean it isn't interesting but collective LLM roleplaying is not new."

...Tool-using agents are powerful because they can read, write, click, and execute. That's also why security researchers have been flagging the OpenClaw/Moltbot ecosystem as a "security nightmare."

OpenAI Could Take Down Google's $260 Billion Ad Empire Thomas Smith at Medium

...OpenAI announced last week that it will begin selling ads in ChatGPT.

The move opens up a potentially massive revenue source for OpenAI — and is a huge threat to Google's world-dominating ad empire.

...95% are on the company's Free or Go tiers, which means they either pay $8 per month for the service or nothing at all.

Providing cutting-edge AI to a tenth of humanity, though, is exquisitely expensive: OpenAI expects to burn through $115 billion in the next few years.

To raise that kind of money, OpenAI needs to prove that it can monetize its vast trove of free users. Advertising, traditionally, has been the way to make money from nonpaying eyeballs.

...The company says that the content of ads won't determine the answers that ChatGPT gives — if you ask it for a toaster recommendation, for example, it won't write glowingly about a toaster brand that happens to be an ad partner.

...Showing contextual ads beside organic search results is hardly new: That's been Google's business model for decades, and it makes the tech giant hundreds of billions per year.

What's different, though, is how OpenAI can target its ads.

Matching ads to a specific user is hard. Google has traditionally done it by gathering vast troves of data about all of us, and then mining that data for insights on what we might buy.

...when people chat with ChatGPT, they tend to do so for a long time. The average "session" with the chatbot reportedly lasts 12 minutes and 24 seconds.

That's long enough to exchange a lot of information. And all that information gives the bot a strong sense of what the user wants — or might want to buy.

The Agentic Commerce Revolution Heiko Hotz at O'Reilly

...The entire global payment system is built on the assumption that a human is present to provide intent, review a cart, and click "Buy." When an autonomous agent initiates payment, that assumption breaks.

Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production via Stephen Downes

...I'm not sure I'll accept the word of four Cambridge scholars that "AI Ethics as a global scientific enterprise is co-producing a new techno-political order." Sure, the people involved in producing "prestigious" publications think they're the ones defining the new order, but I have the feeling they're not. The authors "argue that the global AI Ethics project fails to fulfil its promise to be universally useful by keeping global majority populations in the southern regions marginalised as 'others'." Nobody asked "the global AI Ethics project" to do this, and nobody put them in charge. They just assume (as always) that they are.

An overview of AI ethics: moral concerns through the lens of principles, lived realities and power structures via Stephen Downes

This article identifies "three different approaches through which authors tend to understand and make sense of the moral concerns raised by AI: (1) a 'principle-based' approach, (2) a 'lived realities' approach and (3) a 'power structures' approach." Each has its own strengths and weaknesses, which the authors identify. The intent of the taxonomy is to consider how people address ethical questions about AI in general (and no, they are not the first to do so) and "the three approaches we identify can thus be understood as three different lenses through which authors understand and grapple with moral concerns raised by AI." Most of the article is a straightforward survey of the ethical issues as viewed through the three approaches, useful though not revolutionary.

6ii26

I'm Skeptical of AI Hype — But What Happened at Davos Actually Scared Me Nov Tech at Medium

[the Singularity] ...a precise concept that mathematician and science fiction author Vernor Vinge formalized in 1993 in a paper titled "The Coming Technological Singularity." His idea was simple: when superhuman intelligence is created, the era of human dominance will end. He compared this moment to the event horizon of a black hole, that boundary beyond which you literally can no longer see what's happening.

...Dario Amodei, CEO of Anthropic, made declarations that shook the assembly. According to him, AI will replace almost all software developer work in the next 6 to 12 months. Models will reach Nobel level in several domains by 2026 or 2027. And 50% of junior white-collar jobs could disappear in the next one to five years.

...AI leaders, whether optimistic like Amodei or much more cautious like LeCun, all converge on similar timelines. Something major is coming. Whether you call it the singularity, AGI, or the AI revolution doesn't matter. The direction is clear. Capabilities are improving exponentially.

The Stock Market Has No Idea What's Coming Alberto Romero

SaaS Wikipedia 'Software as a Service'

...While SaaS stocks collapsed on fears that AI works too well, infrastructure stocks were also collapsing on fears that AI doesn't work well enough.

Alphabet reported earnings—first ever $400 billion year—and projected $185 billion in AI capital expenditure. The stock fell because the market—we can only speculate here—assumes that's more than AI will require (especially if the datacenters are financed on debt). Your typical bubble concerns and such. Deutsche Bank said the number "stunned the world." And indeed, the world was stunned: Microsoft had already dropped about 10% after disappointing cloud services growth. AMD plunged 17%. Nvidia extended its losses for the year. Amazon's stock tanked (they project $200 billion of AI spending for 2026). Meta projects up to $135 billion. And so on, totaling $650 billion across Big Tech, most of it dedicated to AI infrastructure.

What do all these companies have in common? They either make chips or build datacenters: they're the backbone of the AI industry. The market's verdict on AI infrastructure spending is quite clear: Unclear return on excessive investment.

If you now take both pieces of the story and put them together, you realize what's going on: The market is running two conflicting trades simultaneously. Trade one: AI will destroy software companies, so sell them. Trade two: AI companies are spending too much building AI, so sell them.

...If AI is powerful enough to make SaaS companies obsolete, then the hundreds of billions being spent on AI infrastructure are justified (another instance of the Jevons paradox). If the spending is unjustified and the returns aren't there, then SaaS companies are probably safer than the market thinks.

...few people worldwide have truly understood the implications of AI, which are nothing short of illogical.

...Here's the main difference with AI in contrast with other tech revolutions: every paradigm shift produces disruption (e.g., analog to digital, computer to mobile, on-premise to cloud), but the disruption can only be priced once the paradigm settles. The market needs the new rules to be legible before it can figure out who wins under them (and does this well before any human does). That's the only thing investors care about, and thus the market: which stocks will surge after a dip.

AI's paradigm won't settle, because the technology itself keeps changing and will keep changing for the foreseeable future. Faster and in rather unpredictable directions (is it scale that matters now as OpenAI and Anthropic are betting on, or are we in a new age of research as Ilya Sutskever said?).

This is a fundamental characteristic that separates AI from every previous technological disruption. The internet was built—submarine cables across the Atlantic, fiber optic networks across continents, cell towers on every hill, done—and then the world adapted to it, and we live now in an internet-adapted world. Fine. AI is built, and then we try to live with ChatGPT—but then ChatGPT builds a better version of itself, and we shrug and update—and then it keeps building new versions and it gets slightly irritating, but we need to stay ahead, so we grind—and then...

The world has to adapt all over again after each iteration. AI people say these are just "incremental improvements," but you never know when a fundamental one will occur. Think of the transformer. Think of ChatGPT. And now, AIs with much more information and knowledge and processing power are doing the search and the discoveries of what might work, together in tandem with the best minds on the planet. Cowork is today a 10% increase in productivity. Tomorrow it will be 10x.

...AI is all about qualitative jumps that you can't price in because you don't know the extent to which they will affect everything. Cowork can automate your legal team's document review today, but is Cowork the problem? Or is the problem the recursively self-improving process that built it (Claude Code coded Claude Cowork, the engineers at the AI labs are no longer coding!), which means that whatever Cowork can do in February will look quaint by June and then again quaint in December?

...What if a new AI company like those being created today makes a fundamental discovery and makes all those datacenters irrelevant? It's not crazy, look at what happened with DeepSeek last year. It was merely a jump scare, but I've argued elsewhere that it doesn't need to be. You must notice that, in AI, a slight advantage can bootstrap itself into a monopoly—a slightly better AI model can make slightly larger improvements on the next version, which compound extremely fast—and then you might realize, perhaps too late, that you bought the wrong company. Which one will be the one that achieves this? Impossible to know today.

And so investors do the only rational thing available when the framework for rationality itself is unstable and when most options look like the losing option: they sell everything.


National Laboratory of the Rockies

The Questions We Ask About AI in Education

7ii26

Machines are talking to each other. That changes everything Enrique Dans at Medium

...the important thing is to understand that these are not "another ChatGPT on steroids", but autonomous agents that act, make decisions, maintain state, access credentials and, most disturbingly, begin to relate to each other in spaces designed exclusively for them. The emergence of Moltbook, a social network without humans, is the most visible (and most disturbing) symptom of this leap.

...what we see today is largely a dumpster fire for spam, scams, junk and cyberattacks. Anybody running these agents should be very, very careful. At the same time, this is unprecedented: we have never had tens or hundreds of thousands of relatively capable agents, each with their own context, tools and instructions, connected through a global, permanent space designed for them. The point is not the current situation, but how this will evolve.

...In short, autonomous agents are a computer security paradigm shift.

...Moltbook is not so much a "social network" as an uncontrolled laboratory resulting from a bunch of uncoordinated sorcerer's apprentices. Its creator has gone so far as to describe it as art, and in a sense it is: a technological performance that raises uncomfortable questions about delegation, control, and responsibility. What does it mean to give a machine permission to act on our behalf? What happens when that machine negotiates, learns, or coordinates with other machines whose goals we don't control? And when it "hangs out with bad company"? Who is responsible when something goes wrong? The user, the developer, the creator of the underlying model? Or no one?

8ii26

AI glossaries (links to be explored, gathered from google search)

9ii26

What Is Claude? Anthropic Doesn't Know, Either Gideon Lewis-Kraus at The New Yorker

...when these A.I. systems began to predict the path of a sentence—that is, to talk—the reaction was widespread delirium. As a cognitive scientist wrote recently, “For hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind.”

It's hard to blame them. Language is, or rather was, our special thing. It separated us from the beasts. We weren't prepared for the arrival of talking machines. Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the "fanboys," who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist Marc Andreessen has described A.I. as "our alchemy, our Philosopher's Stone—we are literally making sand think." The fanboys' deflationary counterparts are the "curmudgeons," who claim that there's no there there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book "The AI Con," the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as "mathy maths," "stochastic parrots," and "a racist pile of linear algebra."

...large language models are black boxes. We don't really understand how they work. We don't know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. But she's also making a more profound point. The existence of talking machines—entities that can do many of the things that only we have ever been able to do—throws a lot of other things into question. We refer to our own minds as if they weren't also black boxes. We use the word "intelligence" as if we have a clear idea of what it means. It turns out that we don't know that, either.

Now, with our vanity bruised, is the time for experiments. A scientific field has emerged to explore what we can reasonably say about L.L.M.s—not only how they function but what they even are. New cartographers have begun to map this terrain, approaching A.I. systems with an artfulness once reserved for the study of the human mind. Their discipline, broadly speaking, is called interpretability. Its nerve center is at a "frontier lab" called Anthropic.

...Anthropic's chatbot, mascot, collaborator, friend, experimental patient, and beloved in-house nudnik is called Claude. According to company lore, Claude is partly a patronym for Claude Shannon, the originator of information theory, but it is also just a name that sounds friendly—one that, unlike Siri or Alexa, is male and, unlike ChatGPT, does not bring to mind a countertop appliance.

...Our approach to understanding the meat computers encased in our skulls has historically varied by discipline. The British scientist David Marr proposed a layered framework. At the bottom of any system was its microscopic structure: what was happening, neuroscientists asked, in the physical substrate of the brain? The top layer was the macroscopic behavior scrutinized by psychologists: what problems was it trying to solve, and why? When the researchers who started at the bottom eventually met those who started at the top, we'd finally see how it all fit together. The more scientific branches of A.I.—not only at Anthropic but also at OpenAI, Google DeepMind, and in academia—have tended to recapitulate this structure.

...One of the first questions asked of computers, back when they were still essentially made out of light bulbs, was whether they could think. Alan Turing famously changed the subject from cognition to behavior: if a computer could successfully impersonate a human, in what became known as the Turing test, then what it was "really" doing was irrelevant. From one perspective, he was ducking the question. A machine, like a parrot, could say something without having the faintest idea what it was talking about. But from another he had exploded it. If you could use a word convincingly, you knew what it meant

For the past seventy-odd years, this philosophical debate has engendered a phantasmagoria of thought experiments: the Chinese room, roaming p-zombies, brains in vats, the beetle in the box. Now, in an era of talking machines, we need no longer rely on our imagination.

...The game of language is not wholly rule governed, but it does have a learnable structure. Language models chart the full history of how words have been used, both in routine circumstances (airline-safety announcements) and in remarkable ones ("Finnegans Wake"). Neural networks, rather than neglecting "organic associations," as n+1 put it, comprehensively attend to every last organic association in their trillions of words of training material. The word "charge," for example, is placed somewhere that neighbors "battery" in one dimension, "credit card" in another, "proton" in a third, "arraignment" in a fourth, and so on. This would not be possible in two or three dimensions, but the words are arranged in tens of thousands of them, a geometry that doggedly resists visualization.

As words are organized for future reference, what emerges are clusters—"electrical devices," "finance," "subatomic particles," "criminal justice"—that reveal patterns normally hidden by the disorder of language. These can then be assembled to capture the ladder of logical complexity: patterns of patterns, such as limericks or subject-verb agreement. "People still don't think of models as having abstract features or concepts, but the models are full of them," Olah said. "What these models are made of is abstract concepts piled upon abstract concepts." This is not to say that language models are "really" thinking. It is to admit that maybe we don't have quite as firm a hold on the word "thinking" as we might have thought
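A toy sketch of that geometry (Python/NumPy; the vectors and cluster directions are made up, and real embeddings have far more structure than random Gaussians):

    import numpy as np

    rng = np.random.default_rng(1)
    d = 300  # pretend embedding dimension

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Made-up directions standing in for the "electrical devices",
    # "criminal justice" and "subatomic particles" regions of the space.
    electrical = rng.normal(size=d)
    justice = rng.normal(size=d)
    particles = rng.normal(size=d)

    # A polysemous word like "charge" can sit near several clusters at once,
    # which only works because there are many dimensions to share.
    charge = 0.5 * electrical + 0.5 * justice + 0.4 * particles + 0.2 * rng.normal(size=d)
    battery = electrical + 0.2 * rng.normal(size=d)   # mostly an "electrical devices" word

    print(cosine(charge, electrical), cosine(charge, justice), cosine(charge, particles))
    print(cosine(battery, electrical), cosine(battery, justice))  # high for its own cluster, near zero elsewhere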

Fed on Reams of Cell Data, AI Maps New Neighborhoods in the Brain Amber Dance at Quanta

...Researchers have been mapping the brain for more than a century. By tracing cellular patterns that are visible under a microscope, they've created colorful charts and models that delineate regions and have been able to associate them with functions. In recent years, they've added vastly greater detail: They can now go cell by cell and define each one by its internal genetic activity. But no matter how carefully they slice and how deeply they analyze, their maps of the brain seem incomplete, muddled, inconsistent. For example, some large brain regions have been linked to many different tasks; scientists suspect that they should be subdivided into smaller regions, each with its own job. So far, mapping these cellular neighborhoods from enormous genetic datasets has been both a challenge and a chore.

...Brain mapping is an old science, dating back to the early 1900s when the German neuroscientist Korbinian Brodmann defined regions of the cerebral cortex — the outer, thinking part of the brain. He stained human brain slices with a dye that turned genetic material violet and then studied them under the microscope, where the densities and arrangements of brain cells produced different, observable textures. He traced the borders to create a map of 52 regions, known as Brodmann areas, some of which are still recognized today.

...Recently, more advanced molecular techniques have allowed neuro-cartographers to investigate individual cells. Under this framework, a cell's identity is determined by which of its tens of thousands of genes are turned on, something that can be represented by the sequences of RNA molecules (copies of active DNA regions) present in the cell. Thus, scientists can slice up a brain, measure the RNAs from each cell, and then map those genetic patterns back to the cells' original locations.
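[A minimal sketch of the cell-by-cell mapping idea, with an entirely hypothetical expression matrix and slice coordinates; real atlases involve millions of cells and far more careful statistics:]

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Hypothetical data: 300 cells, RNA counts for 50 genes, plus the (x, y)
    # position of each cell on the tissue slice.
    n_cells, n_genes = 300, 50
    expression = rng.poisson(lam=2.0, size=(n_cells, n_genes)).astype(float)
    positions = rng.uniform(0, 1, size=(n_cells, 2))

    # Cells in the left half of the slice get a distinct expression signature,
    # standing in for a real anatomical "neighborhood".
    left = positions[:, 0] < 0.5
    expression[left, :10] += 5.0

    # Cluster cells by expression alone, then map the labels back to space.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expression)
    for k in range(2):
        centroid = positions[labels == k].mean(axis=0)
        print(f"cluster {k}: {np.sum(labels == k)} cells, mean position {centroid.round(2)}")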

Designing Effective Multi-Agent Architectures: From models to systems Nicole Königstein at O'Reilly

Papers on agentic and multi-agent systems (MAS) skyrocketed from 820 in 2024 to over 2,500 in 2025. This surge suggests that MAS are now a primary focus for the world's top research labs and universities. Yet there is a disconnect: While research is booming, these systems still frequently fail when they hit production. Most teams instinctively try to fix these failures with better prompts. I use the term prompting fallacy to describe the belief that model and prompt tweaks alone can fix systemic coordination failures. You can't prompt your way out of a system-level failure. If your agents are consistently underperforming, the issue likely isn't the wording of the instruction; it's the architecture of the collaboration.
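[A hypothetical illustration of what "architecture of the collaboration" can mean in practice: an explicit, structured hand-off contract between agents rather than a longer prompt. The Task and Result classes and the agent stubs below are invented for the sketch, not taken from the article:]

    from dataclasses import dataclass

    # Every hand-off between agents is a structured object that can be
    # validated, logged, and retried -- not free-form chat text.
    @dataclass
    class Task:
        goal: str
        inputs: dict
        acceptance: str  # machine-checkable success criterion

    @dataclass
    class Result:
        ok: bool
        output: dict
        notes: str = ""

    def research_agent(task: Task) -> Result:
        # Stub standing in for an LLM-backed worker.
        return Result(ok=True, output={"findings": f"notes on {task.goal}"})

    def writer_agent(task: Task) -> Result:
        return Result(ok=True, output={"draft": f"summary of {task.inputs['findings']}"})

    def orchestrator(goal: str) -> Result:
        research = research_agent(Task(goal=goal, inputs={}, acceptance="non-empty findings"))
        if not research.ok or not research.output.get("findings"):
            return Result(ok=False, output={}, notes="research step failed; retry or escalate")
        return writer_agent(Task(goal=goal, inputs=research.output, acceptance="draft produced"))

    print(orchestrator("agentic systems survey").output)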

10ii26

No, AI Doesn't Justify Lower Interest Rates Paul Krugman

...Recently Scott Bessent, the Treasury secretary, Stephen Miran, who Trump appointed to the Federal Reserve Board, and Kevin Warsh, his choice as Fed chair, have all argued that the coming AI-led boom in productivity justifies slashing interest rates now. The script goes like this: AI will lead to a surge in productivity; higher productivity will reduce production costs and thereby reduce inflation; and the coming fall in inflation justifies much lower interest rates now.

I will admit that this story isn't completely nonsensical, unlike what Trump has been spouting. Yes, AI could reduce production costs and, other things equal, reduce inflation. However, both theory and historical experience say that the supposed benefits of AI aren't a reason to cut interest rates now and almost certainly won't be a reason to cut them in the future.

First, it's dangerous to base policy on the assumption that AI will sharply raise productivity growth. That would be a serious case of counting your digital chickens before they're hatched. AI might deliver big productivity gains, or it might not. Nobody knows yet.

Moreover, we know that right now the boom in AI spending is raising many prices. Demand from data centers has been driving soaring electricity prices. And data center demand for memory chips, which are essential for almost every consumer electronics product, has caused an apocalyptic shortage, with prices roughly tripling — a cost that will be reflected in the price of your next laptop or smartphone.

...the alacrity with which Trump officials have jumped on the argument that the Fed should slash rates because of AI is troubling for reasons that go beyond the fact that they're wrong. Consider how this argument became popular in Trumpworld. Did Bessent, or Warsh, or Miran carefully consider the evidence, and advocate an interest-rate policy based on what that evidence showed? Of course not. Their boss wants to slash interest rates, so they went looking for plausible-sounding economic arguments that might rationalize his whims.

So am I saying that the argument that AI justifies rate cuts is dishonest, that AI has become the last refuge of scoundrels? Why yes, I am.

The invisible limits of AI: why data centers and language-only models threaten the next decade of… Enrique Dans at Medium

AI is no longer just a cascade of algorithms trained on massive amounts of data. It has become a physical and infrastructural phenomenon, one whose future will be determined not by breakthroughs in benchmarks, but by the hard realities of power, geography, regulation, and the very nature of intelligence. Businesses that fail to see this will be blindsided.

Data centers were once the sterile backrooms of the internet: important, but invisible. Today, they are the beating heart of generative AI, the physical engines that make large language models (LLMs) possible. But what if these engines, and the models they power, are hitting limitations that can't be solved with more capital, more data centers or more powerful chips?

...friction between local resistance to data centers, the energy grid's physical limits, and the political pressures on utilities is more than a planning dispute. It reveals a deeper truth: AI's most serious constraint is not algorithmic ingenuity, but physical reality.

...Large language models are trained on massive corpora of human text. But that text is not a transparent reflection of reality: it is a distillation of perceptions, biases, omissions, and misinterpretations filtered through the human use of language. Some of that is useful. Much of it is partial, anecdotal, or flat-out wrong. As these models grow, their training data becomes the lens through which they interpret the world. But that lens is inherently flawed.

This matters because language is not reality: it is a representation of individual and collective narratives. A language model learns the distribution of language, not the causal structure of events, not the physics of the world, not the sensory richness of lived experience. This limitation will come home to roost as AI is pushed into domains where contextual understanding of the world, not just text patterns, is essential for performance, safety, and real-world utility.

...CEOs and leaders who continue to equate AI leadership with bigger models and more data center capacity are making a fundamental strategic error. The future of AI will not be defined by how much compute you have, but by how well you integrate intelligence with the physical world.

AI in China and the United States Mike Loukides at O'Reilly

...consider the fact that the US has hung out a big "not welcome" sign for immigrants. Forget about "we only want the good immigrants"; that's incredibly condescending, and no one of any nationality will believe it, or believe that they'll be treated fairly once they arrive. Every immigrant worker in the US—or considering coming to the US—has to consider the possibility that he will be in the wrong place at the wrong time with the wrong skin color, and end up on a flight to a death camp. Are we surprised that international workers are leaving? Are we surprised that immigrants are arriving in smaller numbers? A $100,000 price tag on H1B visa applications says "We'll only let you in if you make it worth our while." That's gangster talk, not responsible government. The US's ability to train high-quality engineers and programmers and provide them with a high standard of living after graduation has historically been one of its greatest strengths. But given the current policies, are we surprised that fewer international students are coming to the US? China has built an impressive network of colleges and universities, particularly for engineering and the sciences. Students can get a first-rate education without the risks of coming to the US, risks that include having said the wrong things on social media and being sent back at the border.

The singularity won't be gentle Nate Silver

...There's a tendency for everything that gets written about AI to fashion itself as being "epic", but perhaps that's exactly the wrong mindset given how rapidly the landscape is changing, and incrementalism is better. So I hope you'll excuse this unplanned and slightly stream-of-consciousness take.

Recently, the trend in the circles I follow has been toward extreme bullishness on AI, particularly in its impact on programming and the possibility of recursive self-improvement (i.e., where AI models continually create better versions of themselves). This reflects a reversal from a stretch in late 2025 when progress seemed a little slower than smart people had been expecting. Admittedly, what constitutes "bullish" or "bearish" depends on whether you think more rapid progress in AI would be good for civilization or bad (even catastrophic). It's also not clear the extent to which these changes in the mood reflect “vibe shifts” as opposed to actual developments on the ground. If you look at AI-related prediction markets¹ — or for that matter, more traditional markets² — they've gyrated around, but probably not as quickly as sentiment about AI on Twitter or Substack has.

"What the actual fuck": inside Anthropic's experiments on Claude's soul boingboing

America Isn't Ready for What AI Will Do to Jobs: Does anyone have a plan for what happens next? Josh Tyrangiel at The Atlantic

...Massachusetts created the nation's first Bureau of Statistics of Labor, hoping that data might accomplish what conscience could not. By measuring work hours, conditions, wages, and what economists now call "negative externalities" but were then called "children's arms torn off," policy makers figured they might be able to produce reasonably fair outcomes for everyone. Or, if you're a bit more cynical, a sustainable level of exploitation.

...Measurement doesn't abolish injustice; it rarely even settles arguments. But the act of counting—of trying to see clearly, of committing the government to a shared set of facts—signals an intention to be fair, or at least to be caught trying. Over time, that intention matters. It's one way a republic earns the right to be believed in.

...artificial intelligence. After a rollout that could have been orchestrated by H. P. Lovecraft—“We are summoning the demon,” Elon Musk warned in a typical early pronouncement—the AI industry has pivoted from the language of nightmares to the stuff of comas. Driving innovation. Accelerating transformation. Reimagining workflows. It's the first time in history that humans have invented something genuinely miraculous and then rushed to dress it in a fleece vest.

OpenAI Researcher Quits, Warns Its Unprecedented 'Archive of Human Candor' Is Dangerous gizmodo

12ii26

The Rise of World Models and the End of Generative AI? evoailabs at Medium

If the last decade of AI was defined by Large Language Models (LLMs) and Generative AI, the next decade belongs to World Models.

On February 8, 2026, leading minds in Artificial Intelligence gathered at Mila in Montreal to discuss the future of physical AI, reasoning, and safety. The consensus? The era of simply predicting the next token is ending. To build true AGI, systems must understand the causal physics of the world, plan over long horizons, and distinguish between "truth" and "agency."

...a summary of the core ideas presented by the field's pioneers.

Minority Report in 2026: How Palantir's AI is Turning Science Fiction into an Operative Manual for... Coby Mendoza at Medium

  • As of January 2026, Palantir Technologies has moved from a conceptual partner to the operational architect of a new "Algorithmic Governance" era, integrating the "ELITE" (Enhanced Leads Identification & Targeting for Enforcement) application into ICE's daily workflows.
  • The "ImmigrationOS" platform consolidates "shadow profiles" by harvesting data from non-traditional sources like Medicaid records, utility bills, DMV records, and credit histories to create a "single pane of glass" for mass deportation logistics.
  • Predictive modeling in local policing is under fire for creating recursive feedback loops; by training on historical arrest data, algorithms inevitably target minority neighborhoods where over-policing is already a statistical reality.
  • Key Racial Disparity Statistics: Research confirms that Black individuals are arrested for drug possession at nearly four times the rate of white individuals, despite similar usage rates. In cities like Philadelphia, Black individuals account for 69% of police stops.
  • The shift represents a move from "post-facto justice" (responding to crimes) to "probabilistic prevention" (managing populations based on data-driven likelihoods), effectively turning human lives into searchable inventories.
The broader ambition of Palantir extends far beyond the precinct; it is a quest for "Sovereign Intelligence." Under the leadership of CEO Alex Karp, the company has positioned itself as the essential digital infrastructure for Western defense. By merging domestic spying techniques with high-level battlefield AI, Palantir offers a vision of total information dominance.

This consolidation threatens to create a private "police state" that is immune to traditional oversight. Because Palantir is a private contractor, its algorithms are often shielded from public scrutiny as proprietary trade secrets. This allows the government to outsource its most controversial surveillance tasks to a company that answers to shareholders rather than the Constitution.

Ai2 AutoDiscovery

... Instead of starting with a question, AutoDiscovery starts with your data and asks its own questions—generating hypotheses in natural language, writing and running Python code to test them, interpreting statistical results, and using what it learns to keep exploring. Give it a structured dataset, set an experiment budget, and let it run. You'll get back a complete list of novel research directions, each one fully transparent and reproducible.

How does it decide what to pursue? AutoDiscovery uses Bayesian surprise—a measure of how much the system's beliefs change after seeing evidence. Results that meaningfully shift expectations get prioritized over those that simply confirm what's already assumed. By chasing surprise, AutoDiscovery naturally gravitates toward the unexpected: findings most likely to represent genuine discoveries rather than obvious patterns.
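[Ai2 doesn't publish AutoDiscovery's internals here; as a rough illustration of Bayesian surprise in general, the sketch below updates a discretized Beta prior over an unknown rate on hypothetical data and scores the result by the KL divergence between posterior and prior, so evidence that moves beliefs scores high:]

    import numpy as np

    # Discretized Beta(2, 2) prior over an unknown success rate theta.
    theta = np.linspace(0.001, 0.999, 999)
    prior = theta * (1 - theta)
    prior /= prior.sum()

    def surprise(successes: int, failures: int) -> float:
        """KL(posterior || prior) in nats after observing the given counts."""
        likelihood = theta**successes * (1 - theta)**failures
        posterior = prior * likelihood
        posterior /= posterior.sum()
        return float(np.sum(posterior * np.log(posterior / prior)))

    # A result that matches expectations barely shifts beliefs; a lopsided one does.
    print(round(surprise(5, 5), 3))   # low surprise
    print(round(surprise(19, 1), 3))  # high surprise -> prioritized for follow-up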

'Raw Dogging' AI

"We are pushing all of our chips in on artificial intelligence as a fighting force. The Department is tapping into America's commercial genius, and we're embedding generative AI into our daily battle rhythm," Secretary of War Pete Hegseth said in a press release about the platform. "AI tools present boundless opportunities to increase efficiency, and we are thrilled to witness AI's future positive impact across the War Department."

Rawdog Merriam-Webster

Rawdog is a slang term that means to endure a difficult, dull, or mundane activity without any diversions or other support while doing it. The word is particularly used in playful or irreverent reference to engaging in ordinary experiences (such as air travel, exercising, or chores) without simultaneously engaging in some form of entertainment (such as listening to music or watching videos).

...Rawdog originated as vulgar slang meaning "to have sex without a condom." This sense of the term has been in use since the early 2000s. In the 2010s, the word broadened its meaning to refer to performing other activities without preparation (such as taking an exam without studying) or doing something without a metaphorical layer of protection (such as going for a run without wearing headphones). In summer 2024, rawdog gained mainstream attention as part of a social media trend where young men, especially, boasted about 'rawdogging' airline flights without any entertainment, refreshments, or bathroom use in an apparent show of rugged resolve.

A Wildly Obscene Term's Path to Mainstream Usage NYTimes (17viii24)

The Banality Machine Umair

...If we think about what AI really is, and what it'll do—a question that obsesses Americans in particular, and especially the kinds of Americans who are themselves obsessed with money and power—then my answer goes like this.

AI is a Banality Machine.

Everything that's been created with AI—how does it strike you? The first thing we can observe is that nobody actually likes it. They're sort of repelled by it. This is true in everything, from ads, to video games, to literature, to music. People are "grossed out," or "creeped out," or whatever pop vernacular you want to employ.

What they're not is moved. They don't feel anything. Nothing. Now, one thing I observe when I make music is this: a singer can hit the right notes. Lots do. But only a really good one can move people. When we're writing songs, we are trying to express emotion. Stories, books, plays, articles—doesn't matter.

Every form of human expression that comes from AI is banal.

...a lot of what we do is banal. Need some kind of legal agreement drafted? Banal. Want to compare stock funds? Banal. I read an article this morning about AI insurance agents beginning to threaten real ones. Insurance, what a nightmare. We all dread dealing with it. Banal.

So AI can handle the banal stuff of human existence, to a degree. Is that OK? Sort of, but not really. Even in these mundane encounters, I suppose, we should want more than the banal. Not everything has to be a transcendent experience, but surely we as a society can do better than what you might call omnibanality.

AI is going to destroy what's left of the American economy. America famously won't regulate much, from guns to concentration camps. So AI will indeed just ravage entire fields, professions, sectors, and domains. It won't be like this in other places. See how much of the world is already banning social media for kids? It'll do the same with AI, eventually, and that's because, for example, unions exist there, and corporations and economies aren't structured in nearly such a purely capitalist way. People have much, much more power. In America, only billionaires do.

AI is a means of production. That much is true. But of what? It's a means of replication. It takes stuff it's been trained on, remixes it, and spits it back out. AI couldn't write the first book, song, or even legal contract.

So AI is a universal means of digital replication. That means: it can replicate pretty much anything that already exists digitally. What it can't do, and we all know this, is really create something new. This isn't a theoretical or abstract point. All those dumb AI videos that are "original" are actually in the style of serious artists, usually, even if it's just Miyazaki, right down to Van Gogh, and that's true of writing. Lawyers aren't great artists, nor are insurance agents, but the same logic is true.

AI is a universal means of digital replication, and in that way, every profession that depends on digital production will be decimated by it. Now, you can spit out an infinite number of everythings.

An infinite number of everythings. Songs. Videos. Books. Articles. Legal contracts. Insurance agreements. Stock funds. But will any of them be any good?

...Do you know what banality really is? A moral vacuum. When we look at Munch's Scream, we see the tension and agony of the modern world shrieking back at us. It is not a moral vacuum. When the man I admire made this revolutionary music, it was about energy, freedom, power, love. It wasn't a moral vacuum. When Picasso painted Guernica, or Van Gogh painted Starry Night, there was no moral vacuum.

AI in this way will force us to rethink what "productivity" is. If I can create an infinite number of everythings with the press of a button, then is my productivity infinite?

All these things are lit with the luminous intensity of the human cry. The cry is the primal expression. It is the cry of wounding, of what it is to be alive, which is terror, pain, suffering. Tragedy. From that tragedy comes the need for beauty, for love, for grace. Only in those do we find redemption. Which we express, with our fragile, broken human arms, every time that we lift a pen, write a word, utter a whisper, or embrace someone we care for. All we are ever doing is expressing our revolution of love against time and dust. Camus taught us this. Some of us live it.

The Banality Machine takes all that away from us.

13ii26

Why Smart People Can't Agree on Whether AI Is a Revolution or a Toy Alberto Romero

...the point is already clear: the experience you have with AI is not determined by AI. Not to get too ontological in a "conversational" post, but the thing is: nothing is in isolation. AI is not good or bad in itself, but fully contingent on who you are. It's determined by: what you do, where you are, how you're wired, what you've tried, when you tried it ... And in practice, none of this is visible to the person on the other side of the forum. To a lot of people, AI is, indeed, nothing more than a fancy stick pretending to be a wand.

...This is how belief works for all things in life. Nobody convinces you to exercise. You start running one day for your own reasons—perhaps a friend suggested the idea, but you need to see for yourself—and eventually, you're a person who runs. Nobody convinces you that therapy is useful. You go because you're hurt and you tried and it worked, so you keep going. Nobody convinces you to read. One day, you pick up a book and it resonates and then you keep picking up books.

For some reason, we have a hard time accepting that AI is not a special thing in this regard, but like everything else. The important shifts in attitude and mindset are always private and always experiential.

15ii26

Why the Smartest People in Tech Are Quietly Panicking Right Now Shane Collins at Medium

… Most people use the free, default versions of AI, which are intentionally throttled and often years behind the bleeding edge. Furthermore, most people use AI like a search engine — asking quick questions and expecting simple answers.

… Amodei posed a brilliant thought experiment: Imagine waking up in 2027, and a new country has suddenly appeared on the map. It has 50 million citizens. Every single citizen is smarter than any Nobel laureate in human history. They think 100 times faster than us, they never sleep, and they can control any digital interface on Earth.

… Many professionals comfort themselves by claiming, “AI will never replace human empathy or trust.” That may be true for a therapist or a bedside nurse. But if your job mostly consists of reading, analyzing, deciding, and typing on a screen — AI is coming for your desk. … The people who will suffer most are the ones who let their pride get in the way. Do not assume your degree or your unique industry knowledge makes you immune. Teach yourself — and your children — to be hyper-adaptable orchestrators of AI, rather than memorizers of facts.

17ii26

Anthropic's Moment Alberto Romero

...The Pentagon doesn't like Anthropic's red lines

Axios reported this week that the Pentagon may end its partnership with Anthropic. The military wants AI labs to make their models available for "all lawful purposes," and Anthropic won't agree. The company maintains two hard limits: no mass surveillance of Americans and no fully autonomous weaponry. The other three labs working with the Pentagon (OpenAI, Google, xAI) have been more accommodating; one has reportedly already accepted the military's terms.

The relationship deteriorated further over the operation to capture Venezuela's Nicolás Maduro, conducted through Anthropic's partnership with Palantir. According to Axios, a senior administration official claimed an Anthropic executive contacted Palantir to ask whether Claude had been used in the raid, implying disapproval because the operation involved lethal force. Anthropic denied this, saying its conversations with the Pentagon have focused on policy questions around its two hard limits and do not relate to specific operations.

The story has the texture of a pressure campaign: anonymous official, specific framing of Anthropic as the difficult partner, emphasis on competitors' flexibility. Is the industry ganging up on the Anthropic safety-focused AI-rights-matter weirdos? I don't know. But whether it works will say something about how durable safety commitments are when the customer is the Department of War.

Pentagon Considers Designating Anthropic AI as a 'Supply Chain Risk' gizmodo

There's a New Term for Workers Freaking Out Over Being Replaced by AI gizmodo

The researchers propose calling this new, modern anxiety "AI replacement dysfunction" or AIRD. The authors define it as a "new, proposed clinical construct describing the psychological and existential distress that could be experienced by individuals facing the threat or reality of job displacement due to artificial intelligence (AI)." The condition carries with it several common symptoms including anxiety, insomnia, depression, and identity confusion "that may reflect deeper fears about relevance, purpose, and future employability." It can also lead to sufferers dealing with additional challenges like psychiatric disorders and substance abuse.
[and one wonders how AIRD gets treated...]

Reworked: a Guardian series

A series about what's at stake as AI disrupts our jobs

Why Yann LeCun left Meta, and what it means for AI's next frontier Enrique Dans at Medium

Why every company will need its own world model Enrique Dans at Medium

For the last two years, artificial intelligence strategy has largely meant the same thing everywhere: pick a large language model, plug it into your workflows, and start experimenting with prompts. That phase is coming to an end.

Not because language models aren't useful (within their obvious limitations, they are), but because they are rapidly becoming commodities. When everyone has access to roughly the same models, trained on roughly the same data, the real question stops being who has the best AI and becomes who understands their world best.

That's where world models come in.

From rented intelligence to owned understanding

Large language models look powerful, but they are fundamentally rented intelligence. You pay a monthly fee to OpenAI, Anthropic, Google or some other big tech, you access them through APIs, you tune them lightly, and you apply them to generic tasks: summarizing, drafting, searching, assisting. They make organizations more efficient, but they don't make them meaningfully different.

A world model is something else entirely.

...What AI changes is not the existence of these models, but their nature. Instead of being static and manually updated, AI-driven world models can be:

  • Probabilistic, rather than deterministic
  • Causal, not just descriptive
  • Action-oriented, able to simulate "what happens if..." scenarios
This is where reinforcement learning, simulation, and multimodal learning start to matter far more than prompt engineering.
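[A toy "what happens if..." example, invented for illustration: a tiny probabilistic model of daily demand, rolled out by Monte Carlo to compare two reorder policies. A real corporate world model would be learned from the firm's own operational data, not hard-coded like this:]

    import random

    random.seed(0)

    # Toy world model: daily demand is probabilistic, not deterministic.
    def simulate(stock: int, reorder_qty: int, days: int = 30) -> int:
        """Return the number of stock-out days under a simple reorder-point rule."""
        stockouts = 0
        for _ in range(days):
            demand = random.choices([0, 1, 2, 3], weights=[0.2, 0.4, 0.3, 0.1])[0]
            if demand > stock:
                stockouts += 1
            stock = max(stock - demand, 0)
            if stock < 2:            # "what happens if we reorder at this threshold?"
                stock += reorder_qty
        return stockouts

    # Compare two candidate actions by averaging over many simulated futures.
    for qty in (3, 6):
        runs = [simulate(stock=5, reorder_qty=qty) for _ in range(2000)]
        print(f"reorder {qty}: avg stock-out days = {sum(runs)/len(runs):.2f}")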

AI Is Not a Library: Designing for Nondeterministic Dependencies O'Reilly

Reacting to Matt Yglesias's Confession that A.I. Progress Is Giving Him Writer's Block Brad DeLong

...The sea-change came 10,000 years ago, when we became more sessile and invented agriculture. Then our “technology” could be not just what we could carry and remember, but what we could build and what our built environment could remind us of. And then 5,000 years ago we invented writing. And we became not just a cultural species, not just a technological species with what we could carry, build, remember, and have our built environment remind us with, but a genuine world-scale æon-scale time-binding space-binding anthology intelligence. Then the real ASI emerged. Not an Artificial Super-Intelligence constructed in a computer lab as some kind of silicon digital god that some want to teach the rest of us to worship and that Elon Musk wants to teach the rest of us to try to bang. But, rather, the distributed knowledge and thought base that is the Anthology Super-Intelligence that is humanity's collective mind present and past stored in and as our information technology capital stock.

Then the gears shift:

The Columbian Exchange, the growth of a commerce-mediated global division of labor, and the intellectual sea-change with respect to ideas that was the nullius in verba, the "we believe nothing simply because somebody says it" of the Royal Society sets the Commercial-Imperial Age rate of technology growth at 25% per century.

The coming of the GPTs of steampower and textile machinery sets the Industrial-Revolution Age rate of technology growth at 100% per century.

The 1875 combo of the invention of the process of invention hallmarked by the appearance of the railroad-and-screw-propeller-driven global economy, the modern corporation, the industrial research lab, and the backing science complex pushes the rate of technology growth up to 2% per year in terms of our improving our ability to make what we make more efficiently, plus whatever the value is in the expansion of the kinds of commodities, objects, and services we can produce:

But there is more.
  • It's not that that 2% per year—that doubling every 35 years—is spread out evenly across the economy.
  • It is, rather, that a generation sees about 80% of the economy grow in technology by about 1/4 in efficiency.
  • While about 20% is upended and revolutionized: grows five-fold in efficiency and capability—or more [see the arithmetic check after this list].
  • And this generation-after-generation successive leading-sector Schumpeterian creative-destruction upending of orders and institutions as everything established and fixed is steamed away produces major qualitative changes in society.
  • Marx and Engels had talked of tribal, ancient, feudal, bourgeois, and socialist modes of production succeeding each other on a time scale of millennia or centuries, with plenty of time for changes in the productive technology and division-of-labor base to shape the societal superstructure.
  • Since 1875 we have seen: Steampower, Applied-Science, Mass-Production, Globalized Value-Chain, and now Attention Info-Bio Tech modes—of production, but also distribution, communication, and domination—equivalent scale transformations shake society every single generation, with societal superstructures always lagging far behind and desperately shaking themselves to pieces in attempts to cope.
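[A quick back-of-envelope check of DeLong's decomposition, mine rather than his: 2% per year compounds to roughly a doubling over a ~35-year generation, and an 80/20 split between a quarter improvement and a five-fold improvement reproduces that same doubling:]

    # 2%/yr over ~35 years is about a doubling:
    print(1.02 ** 35)            # ~2.0

    # Weighted average of the two tracks described above:
    print(0.8 * 1.25 + 0.2 * 5)  # = 2.0, the same economy-wide doubling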

19ii26

Packaging Expertise: How Claude Skills Turn Judgment into Artifacts Han Lee at O'Reilly

Three Ways Terminal AI Has Changed How I Work (And Whether It's Coming for My Job) Mike Konczal, cross-posted by Brad DeLong

20ii26

The Anthropic Hive Mind Steve Yegge at Medium

...At Anthropic every single person and team I met, without exception, feels kind of sweetly but sadly transcendent. They have a distinct feel of a group of people who are tasked with shepherding something of civilization-level importance into existence, and while they're excited, they all also have a solemn kind of elvish old-world-fading-away gravity. I can't quite put my finger on it.

...I was at Amazon during their Golden Age, still going strong when I left in 2005. And I was at Google during their Golden Age, which lasted until April 2011. After that I watched Google ossify and become siloed and effectively incapable of cross-functional work, while Amazon continued to execute and innovate.

...At Anthropic, they are smack in the middle of a Golden Age, where there is far more available work than there are people to do it, on pretty much all fronts. It's like they're on the surface of an expanding sphere.