November 2025 AI links
(continued from October 2025 AI links)
[some are paywalled, e.g. Medium, New Yorker, etc.]

The materials linked below are best read as bellwethers for emergent phenomena and commentary on a rapidly changing landscape of technologies.

Many of these seemed to be bellwethers when I collected them...

1xi25

AI Browsers: The Risks & Dangers Giles Crouch at Medium

...We are at an inflection point, a huge shift, in how we interact with information. While OpenAI's Atlas and Perplexity's Comet promise to amplify our intelligence and let us browse at the speed of thought, that's not quite what's happening. We are building a new infrastructure for the internet, one where we no longer navigate ourselves, but where we send agents to do it all. Sounds cool. Or is it a trap?

...Humans are a storytelling, meaning-making, socially embedded species. But when we automate commerce decisions, this doesn't happen. Instead, these AI browsers may well be reshaping human behaviours to work in favour of the platforms, the algorithms. Just because we can ask more questions doesn't mean we think better. The use of agents in AI browsers makes them transaction engines, not decision support systems.

These browsers have been designed specifically for control. They break down boundaries, requiring us to give them unrestricted access to our entire digital lives and our finances, and to give up the ability to experience serendipity, to explore in curious ways.

...The entire value proposition of AI browsers depends on them having unrestricted access to your digital life. Friction is the enemy of rent extraction as per technofeudalism.

...Traditional browsers were basically “pipes” for information flow. AI browsers are trying to become nodes with agency. Not just delivering information, but also interpreting, filtering and acting.

...These browsers claim to extend your cognition, but they're actually creating a hybrid cognitive system with its own emergent properties. And we don't yet understand what those are.

...The only responsible AI company I've seen is Anthropic with Claude. Rather than build a browser, they created a Chrome extension. And it's quite brilliant. Fewer security issues, more agentic controls. ...Anthropic isn't trying to replace the infrastructure of browsing, but instead, and much more cleverly, adding the cognitive layer atop existing infrastructure. One can maintain more agency. And it integrates well with Claude's far better desktop app, which can more easily integrate with your devices. Not perfect, but a better approach.

...These aren't tools for human flourishing, they're transaction engines optimised for platform profit, wearing the mask of personal assistants. Seeing us as only Homo Economicus. We're witnessing not the evolution of browsers, but the industrialisation of human agency itself.

Data centers & electricity demand Adam Tooze

...US utilities and AI developers have made a co-ordinated rush for gas turbines, spurred on by OpenAI and SoftBank's $500bn Stargate project to invest in AI infrastructure, including gas-fired power stations. The US is forecast to account for 46 per cent of global gas turbine orders this year, up from a recent historical average of 29 per cent, Dora Partners data shows. Data centre developers are even developing gas power plants independent of the grid. ExxonMobil is planning one such plant at a cost of about $15bn, according to a person familiar with the project. Christian Bruch, chief executive of Siemens Energy, says that two years ago, it only sold one gas turbine in the US. "One, in the whole year. Now, we are at what, 150 or so?" he adds, before his assistant chimes in to clarify that it is closer to 200 units.

...Unlike with many other sources of power, China does not dominate the supply of key equipment and infrastructure for gas plants. Its domestic electricity generation is primarily a mix of coal and renewables, with gas accounting for 3 per cent. Industry executives and analysts say that Chinese rivals are too far behind to break into the oligopoly any time soon. Incremental gains by the turbine manufacturing industry's big three (GE Vernova, Siemens Energy, and Mitsubishi Power) to reach about 64 per cent efficiency, saving operators millions of dollars in fuel over years, were hard earned and not easily replicated. "There's no significant challenge to the gas turbine industry from China," says Anthony Brough, president of Dora Partners.

Cyber Shadows and Human Futures Angus Peterson at Medium

...transformations, crossing the technological, informational, and political arenas, interlock like gears in a machine that seems to run faster than anyone can control. AI reshapes labor markets; shrinking opportunity feeds discontent; disinformation channels that discontent toward convenient scapegoats; and polarized politics obstruct collective solutions. The outcome is a feedback loop of instability. Economists call it "polycrisis," but for most households it feels simpler: everything that once felt solid now wobbles.

...Wealth inequality and economic fragility are not natural outcomes of innovation; they are policy choices... The path ahead will not be smooth. Automation will continue to compress some forms of employment. Cyber risks will escalate as connectivity deepens. Disinformation will exploit the very openness that defines democracy. Yet history offers comfort: societies have faced convulsions before and adapted. The task now is to make adaptation deliberate, not accidental; to steer the transformation instead of being dragged by it.

3xi25

The Case That A.I. Is Thinking New Yorker

...The most basic account of how we got here goes something like this. In the nineteen-eighties, a small team of cognitive psychologists and computer scientists tried to simulate thinking in a machine. Among the more famous of them were David Rumelhart, Geoffrey Hinton, and James McClelland, who went on to form a research group at U.C. San Diego. They saw the brain as a vast network in which neurons fire in patterns, causing other sets of neurons to fire, and so on; this dance of patterns is thinking. The brain learns by changing the strength of the connections between neurons. Crucially, the scientists mimicked this process by creating an artificial neural network, and by applying a simple algorithm called gradient descent to increase the accuracy of its predictions. (The algorithm could be compared to a hiker navigating from a mountaintop to a valley; a simple strategy for eventually finding one's way is to insure that every step moves downhill.) The use of such algorithms in large networks is known as deep learning.
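The hiker analogy can be made concrete in a few lines of code. This is a toy, self-contained sketch of gradient descent on a one-variable function; the function, starting point, and step size are illustrative choices, not anything from the article:

```python
# Toy gradient descent: walk downhill on f(x) = (x - 3)^2,
# whose minimum sits at x = 3.

def f(x):
    return (x - 3) ** 2

def grad_f(x):
    return 2 * (x - 3)  # derivative of f

x = 10.0     # start high on the "mountain"
step = 0.1   # learning rate: how far each downhill step goes
for _ in range(100):
    x -= step * grad_f(x)  # always step against the gradient, i.e. downhill

print(round(x, 4))  # close to 3.0, the valley floor
```

Each step shrinks the distance to the minimum by a constant factor, which is why the simple "every step moves downhill" strategy eventually settles in the valley; in deep learning the same update is applied to millions or billions of connection weights at once.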

Other people in A.I. were skeptical that neural networks were sophisticated enough for real-world tasks, but, as the networks got bigger, they began to solve previously unsolvable problems. People would devote entire dissertations to developing techniques for distinguishing handwritten digits or for recognizing faces in images; then a deep-learning algorithm would digest the underlying data, discover the subtleties of the problem, and make those projects seem obsolete. Deep learning soon conquered speech recognition, translation, image captioning, board games, and even the problem of predicting how proteins will fold.

Today's leading A.I. models are trained on a large portion of the internet, using a technique called next-token prediction. A model learns by making guesses about what it will read next, then comparing those guesses to whatever actually appears. Wrong guesses inspire changes in the connection strength between the neurons; this is gradient descent. Eventually, the model becomes so good at predicting text that it appears to know things and make sense. So that is something to think about. A group of people sought the secret of how the brain works. As their model grew toward a brain-like size, it started doing things that were thought to require brain-like intelligence. Is it possible that they found what they were looking for?
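The guess-compare-adjust loop described above can be sketched at miniature scale: a bigram model over characters, trained by gradient descent to predict the next token. The corpus, learning rate, and epoch count here are illustrative assumptions; real models use vastly larger networks and data, but the update rule is the same in spirit:

```python
# Minimal next-token prediction: learn which character follows which,
# by nudging "connection strengths" whenever a guess is wrong.
import math

corpus = "abababab"  # toy training data (illustrative only)
vocab = sorted(set(corpus))
idx = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

# One logit ("connection strength") per (previous char, next char) pair.
W = [[0.0] * V for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

lr = 1.0
for _ in range(50):
    for prev, nxt in zip(corpus, corpus[1:]):
        p, n = idx[prev], idx[nxt]
        probs = softmax(W[p])  # the model's guess about what comes next
        # Cross-entropy gradient: predicted probability minus the one-hot
        # target. A wrong guess shifts the weights -- this is gradient descent.
        for j in range(V):
            target = 1.0 if j == n else 0.0
            W[p][j] -= lr * (probs[j] - target)

# After training, the model has learned the corpus's pattern:
pred_after_a = vocab[max(range(V), key=lambda j: W[idx["a"]][j])]
print(pred_after_a)  # 'b', since 'b' always follows 'a' in the corpus
```

Scaled up from a 2x2 weight table to billions of weights and a large slice of the internet, this same loop is what produces models that "appear to know things and make sense."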

OpenAI Inks First Multi-Billion Dollar Deal With Amazon gizmodo

Radar Trends to Watch: November 2025 O'Reilly

6xi25

Jensen Huang Gets It Wrong O'Reilly

Apple Will Reportedly Rent Siri a New Brain from Google for $1 Billion Per Year gizmodo

You Have No Idea How Screwed OpenAI Is Alberto Romero

The unsettling truth is that no one knows where the United States is going with the immense investment in AI (we do know where Europe is going with the absence of it, though), or what the constant soaring of the Magnificent Seven's stocks means, or whether it's possible to sustain the economy with a bunch of companies that make deals with one another and sell, on the one hand, ads, and, on the other, chips to train AI models—to run more ads. So we wait.

7xi25

How AI can adapt to Indigenous knowledge Anna Dorn at Medium blog

AI presents itself as universal, yet it’s built on datasets, languages, and values rooted in specific cultural worldviews. Indigenous lawyer Dr. Pam Palmater argues that AI extends the reach of colonial control into the digital age. Tools like drones, facial recognition, and predictive policing are already used to monitor Indigenous communities, often misidentifying people and reinforcing existing biases. Large language and vision models amplify harm by scraping Indigenous languages, cultural practices, and images from the web without permission or context, in turn creating a "stereotyped, mish-mash of Indigenous cultural elements."