The materials linked below are best read as bellwethers for emergent phenomena: commentary on a rapidly changing landscape of technologies.
Many of these seemed to be bellwethers when I collected them...
1xi25
AI Browsers: The Risks & Dangers Giles Crouch at Medium
...We are at an inflection point, a huge shift, in how we interact with information. While OpenAI's Atlas and Perplexity's Comet promise to amplify our intelligence and let us browse at the speed of thought, that's not quite what's happening. We are building a new infrastructure for the internet, one where we no longer navigate ourselves, but where we send agents to do it all. Sounds cool. Or is it a trap?...Humans are a storytelling, meaning-making, socially embedded species. But when we automate commerce decisions, this doesn't happen. Instead, these AI browsers may well be reshaping human behaviours to work in favour of the platforms, the algorithms. Just because we can ask more questions doesn't mean we think better. The use of agents in AI browsers makes them transaction engines, not decision support systems.
These browsers have been designed specifically for control: to break down boundaries, requiring us to give them unrestricted access to our entire digital lives and our finances, and to surrender the ability to experience serendipity, to explore in curious ways.
...The entire value proposition of AI browsers depends on them having unrestricted access to your digital life. Friction is the enemy of rent extraction as per technofeudalism.
...Traditional browsers were basically “pipes” for information flow. AI browsers are trying to become nodes with agency. Not just delivering information, but also interpreting, filtering and acting.
...These browsers claim to extend your cognition, but they're actually creating a hybrid cognitive system with its own emergent properties. And we don't yet understand what those are.
...The only responsible AI company I've seen is Anthropic with Claude. Rather than build a browser, they created a Chrome extension. And it's quite brilliant. Fewer security issues, more agentic controls....With Anthropic, they're not trying to replace the infrastructure of browsing, but instead, and much more cleverly, adding the cognitive layer atop existing infrastructure. One can maintain more agency. And it integrates well with Claude's far better desktop app that can more easily integrate with your devices. Not perfect, but a better approach.
...These aren't tools for human flourishing, they're transaction engines optimised for platform profit, wearing the mask of personal assistants. Seeing us as only Homo Economicus. We're witnessing not the evolution of browsers, but the industrialisation of human agency itself.
Data centers & electricity demand Adam Tooze

...US utilities and AI developers have made a co-ordinated rush for gas turbines, spurred on by OpenAI and SoftBank's $500bn Stargate project to invest in AI infrastructure, including gas-fired power stations. The US is forecast to account for 46 per cent of global gas turbine orders this year, up from a recent historical average of 29 per cent, Dora Partners data shows. Data centre developers are even developing gas power plants independent of the grid. ExxonMobil is planning one such plant at a cost of about $15bn, according to a person familiar with the project. Christian Bruch, chief executive of Siemens Energy, says that two years ago, it only sold one gas turbine in the US. "One, in the whole year. Now, we are at what, 150 or so?" he adds, before his assistant chimes in to clarify that it is closer to 200 units. … Unlike with many other sources of power, China does not dominate the supply of key equipment and infrastructure for gas plants. Its domestic electricity generation is primarily a mix of coal and renewables, with gas accounting for 3 per cent. Industry executives and analysts say that Chinese rivals are too far behind to break into the oligopoly any time soon. Incremental gains by the turbine manufacturing industry's big three (GE Vernova, Siemens Energy, and Mitsubishi Power) to reach about 64 per cent efficiency, saving operators millions of dollars in fuel over years, were hard earned and not easily replicated. "There's no significant challenge to the gas turbine industry from China," says Anthony Brough, president of Dora Partners.

Cyber Shadows and Human Futures Angus Peterson at Medium
...transformations, crossing the technological, informational, political arenas, interlock like gears in a machine that seems to run faster than anyone can control. AI reshapes labor markets; shrinking opportunity feeds discontent; disinformation channels that discontent toward convenient scapegoats; and polarized politics obstruct collective solutions. The outcome is a feedback loop of instability. Economists call it "polycrisis," but for most households it feels simpler: everything that once felt solid now wobbles....Wealth inequality and economic fragility are not natural outcomes of innovation; they are policy choices... The path ahead will not be smooth. Automation will continue to compress some forms of employment. Cyber risks will escalate as connectivity deepens. Disinformation will exploit the very openness that defines democracy. Yet history offers comfort: societies have faced convulsions before and adapted. The task now is to make adaptation deliberate, not accidental; to steer the transformation instead of being dragged by it.
3xi25
The Case That A.I. Is Thinking New Yorker
...The most basic account of how we got here goes something like this. In the nineteen-eighties, a small team of cognitive psychologists and computer scientists tried to simulate thinking in a machine. Among the more famous of them were David Rumelhart, Geoffrey Hinton, and James McClelland, who went on to form a research group at U.C. San Diego. They saw the brain as a vast network in which neurons fire in patterns, causing other sets of neurons to fire, and so on; this dance of patterns is thinking. The brain learns by changing the strength of the connections between neurons. Crucially, the scientists mimicked this process by creating an artificial neural network, and by applying a simple algorithm called gradient descent to increase the accuracy of its predictions. (The algorithm could be compared to a hiker navigating from a mountaintop to a valley; a simple strategy for eventually finding one's way is to insure that every step moves downhill.) The use of such algorithms in large networks is known as deep learning.
Other people in A.I. were skeptical that neural networks were sophisticated enough for real-world tasks, but, as the networks got bigger, they began to solve previously unsolvable problems. People would devote entire dissertations to developing techniques for distinguishing handwritten digits or for recognizing faces in images; then a deep-learning algorithm would digest the underlying data, discover the subtleties of the problem, and make those projects seem obsolete. Deep learning soon conquered speech recognition, translation, image captioning, board games, and even the problem of predicting how proteins will fold.
Today's leading A.I. models are trained on a large portion of the internet, using a technique called next-token prediction. A model learns by making guesses about what it will read next, then comparing those guesses to whatever actually appears. Wrong guesses inspire changes in the connection strength between the neurons; this is gradient descent. Eventually, the model becomes so good at predicting text that it appears to know things and make sense. So that is something to think about. A group of people sought the secret of how the brain works. As their model grew toward a brain-like size, it started doing things that were thought to require brain-like intelligence. Is it possible that they found what they were looking for?
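The mechanics scale down well. As a toy illustration (my own sketch, not the New Yorker's): a single-weight-matrix bigram model trained by gradient descent on next-character prediction runs the same loop of guess, compare, and adjust connection strengths. The corpus, learning rate, and model size here are all invented for illustration.

```python
# Minimal sketch of next-token prediction trained by gradient descent,
# at character-bigram scale. Real LLMs do the same loop with transformers
# over trillions of tokens.
import numpy as np

text = "the model learns by guessing the next token and correcting itself. "
vocab = sorted(set(text))
ix = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(V, V))  # "connection strengths": logits for next char

pairs = [(ix[a], ix[b]) for a, b in zip(text, text[1:])]
xs = np.array([p[0] for p in pairs])  # current characters
ys = np.array([p[1] for p in pairs])  # the characters that actually came next

lr = 0.5
for step in range(300):
    logits = W[xs]                               # the model's guesses (N, V)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(ys)), ys]).mean()  # how wrong were we?
    # Wrong guesses change the connection strengths: cross-entropy gradient.
    dlogits = probs
    dlogits[np.arange(len(ys)), ys] -= 1
    dlogits /= len(ys)
    np.add.at(W, xs, -lr * dlogits)              # one downhill step
    if step % 100 == 0:
        print(f"step {step}: loss {loss:.3f}")   # loss falls as guesses improve
```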
OpenAI Inks First Multi-Billion Dollar Deal With Amazon gizmodo
Radar Trends to Watch: November 2025 O'Reilly
6xi25
Jensen Huang Gets It Wrong O'Reilly
Apple Will Reportedly Rent Siri a New Brain from Google for $1 Billion Per Year gizmodo
You Have No Idea How Screwed OpenAI Is Alberto Romero
7xi25
How AI can adapt to Indigenous knowledge Anna Dorn at Medium blog
6xi25
Can AI tell us anything meaningful about Bob Dylan's songs Aeon Essays
12xi25
Wednesday MAMLMs: Notes on Zuckerberg's "PASI" Pivot: From Adtech Moats to "Superintelligence" Moonshot Brad DeLong
...GPT LLM MAMLM systems excel at next-token prediction in vast text corpora but lack grounded models of the world, persistent goals, and mechanisms for causal reasoning, for the same architecture that stitches together plausible sentences often has no way to validate their truth.
...today's GPTs are powerful communicators and reasoning simulators, but without grounded learning they remain brittle pattern machines. Very useful for literature-search and summarization, yes. They produce a leveling-up in prose: properly prompted, they can get your writing to the level of the typical internet s***poster with relative ease. Extraordinarily useful as natural-language front-ends to properly curated datastores, structured and unstructured, most impressively as programming pilots, yes. But something to spend nine figures a year on? Very doubtful.
Findings Doc Searls
What we lose when we surrender care to algorithms Eric Reinhart, The Guardian, Nov 10, 2025 (via Stephen Downes)(his comment)
13xi25
The Man Who Invented AI Just Admitted What Tech CEOs Won't Say! Tasmia Sharmin
...Tech giants are spending $420 billion next year on AI infrastructure. Microsoft, Meta, Google, Amazon. They're building data centers, buying AI chips, training massive models.
That money only makes sense if AI replaces workers.
Think about it. If you spend $100 billion building AI systems, how do you make that money back?
You can't just sell slightly better products. You need massive cost savings. And the biggest cost in any company is labor.
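A hedged back-of-envelope makes the argument concrete (all numbers below are my own illustrative assumptions, not Hinton's or the article's):

```python
# Purely illustrative arithmetic: how much labor cost must AI displace
# to pay back the capex? All inputs are assumptions for the sketch.
capex = 100e9               # assumed AI buildout spend, $
payback_years = 5           # assumed target payback window
cost_per_worker = 100_000   # assumed fully loaded cost per worker, $/year

savings_per_year = capex / payback_years
jobs_equivalent = savings_per_year / cost_per_worker
print(f"${savings_per_year/1e9:.0f}B/year ~ {jobs_equivalent:,.0f} jobs' worth of labor cost")
# -> $20B/year ~ 200,000 jobs' worth, per $100B spent, before any margin.
```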
...Hinton is saying what everyone in Silicon Valley knows but won't say publicly: the whole AI investment thesis depends on job elimination.
AI Just Had Its *Actual* "Big Short" Moment Will Lockett at Medium
The Ghost of the Author Alberto Romero
This revelation opens up a secret path: if the mind is not made from a singular fabric accessible only to God, but an imperceptible higher-order manifestation of the body, then nothing stops humans, lesser beings, from instantiating a mind into a different substrate, like silicon. Aha! And thus, with this straightforward hypothesis, began the central quest of what, a few years later, in the summer of 1956, would be coined as the field of artificial intelligence: Is it possible, whether by virtue of symbols or networks of neurons, to summon the ghost spontaneously, without tweaking immaterial substances? If true, this would prove Ryle correct and bring about the eventual disappearance of dualists (and, by default, the victory of physicalists).
Are we there yet? AI researcher Andrej Karpathy says that large language models (the intelligent core that underlies ChatGPT) are a type of ghost—different from us but not fully—but I believe the founding fathers of AI would not consider this a triumph. Other researchers, led by Blake Lemoine, entertain the possibility that AI systems already possess a sort of "proto-consciousness." Lemoine was referring to Google's LaMDA (2020), an old precursor of ChatGPT you've likely forgotten about. If he was correct, and many AI researchers think so nowadays (though most mocked him at the time), the immediate implication is that every LLM and chatbot existing today has, at least, the same self-awareness LaMDA did (it was killed; may it rest in peace).
Although this "mind" feature is as immeasurable in AI as it is in humans, some have wasted no time in taking advantage of this epistemic void. That's why wherever I go, I feel that ChatGPT follows me. The ghosts in the machine have inadvertently chased us from pagan folklore to Cartesian philosophy, to modern science fiction, to the screens we carry with us at all times. They're not revealing themselves as mind or soul or consciousness, but as something more mundane: chat partners.
...A reader recently told me I embody the traits of post-postmodernists, also called metamodernists. I went down a rabbit hole on Wikipedia to clarify what kind of insult this was. To my delight, he meant that I am sincere in a novel way—not disillusioned like the modernists ("Oh! Everything is AI now!"), but not overly ironic like the postmodernists either ("Ha! Everything is AI now!")—which reminded me of David Foster Wallace, a prescient writer who predicted and despised the uses of AI that anger me the most... In short, this beloved reader of mine said that I write in a way that departs from the exhausted vacuity, purposeless cynicism, tired self-referentiality, and lack of genuine sentimentality that rule culture. I liked that. (On further reflection, I'm not sure it's true.) But somehow—if you stay too long digging a rabbit hole, you'll find yourself trapped inside a labyrinth—I ended up reading "The Literature of Exhaustion," a fantastic Atlantic essay published in 1967 by John Barth, a postmodern author, as a tribute to the genius and virtuosity of Jorge Luis Borges, perhaps the most influential precursor of 20th-century postmodernism and a personal favorite.
...What is the value of AI as a writing tool if not as a means to extend the corpus of possible literature, if ever so slightly? Whatever AI utters is already somewhere in the Library of Babel, so it's not originality that we should be chasing, but a new means to explore, metaphysically, what we imagine the Library of Babel might fail to catalog. However, just like you should not rewrite Cervantes's Quixote and try to pass it off as yours (unless you coincidentally happen to compose it from scratch, word by word), you should not pass off ChatGPT's prose as yours.
...Everything is a rehashing of "pre-existing archetypes," as Borges's editor wrote, so we may as well break out of our constraints by confronting AI with itself, thus creating, as Barth argues, "original works of literature" with the sole intention to remind us, paradoxically, that it makes no sense to try to write original works of literature. That is, unless you're willing to go one step further. AI might help with that, disguised as a ghost lurking in the interstices of your stories and paragraphs, but always one level below your stated intent, which is to confront AI against itself.
16xi25
The Most Terrifying Graph I Have Ever Seen Will Lockett at Medium
From November 30th, 2022, this correlation dramatically reversed. Since then, the S&P 500 has increased a staggering 70%, yet job openings have plummeted by 30%. As the economy grows, more jobs are created — that is an utterly fundamental part of not just our economy, but how our society functions. Something monumental must have happened here to decouple growth from labour, something on par with the Industrial Revolution.
Well, guess what came out on November 30th, 2022? ChatGPT.
...Since November 2022, over 75% of the S&P 500's growth has come from a small handful of AI companies. This is the main force separating the S&P 500 from the declining job openings. It isn't that jobs are being replaced, but that AI is valued so highly.
...If you have ever wondered why CEOs, analysts, hedge fund managers and the corporate elite have such a twisted and ignorant view about AI, this is why. To them, it is a miracle. When the economy should be going down, and their assets falling in value, they can stave it all off by dumping money into this false money machine. They don't care if it hurts the actual business or if the entire premise of AI is utterly false. It makes them filthy rich when they should be losing money, so they are more than happy to queue up for it.
...For decades, Big Tech has wanted to not just dominate the market but break free of democratic control, whether that be governmental or economic, to fuel infinite value growth without the hassle of actually having to deliver actual meaningful growth. This is something biologists would label a cancer. But this was an ideology pushed well before generative AI became viable.
This graph shows that they have, at least in part, succeeded in that mission. It just wasn't achieved through AI's actual performance. Instead, they used their size, the unique economic landscape of the post-COVID world, and the widespread propaganda-driven speculation of AI technology to forge their own circular economy, away from the democratic control of the free market, monetary policies, or governmental regulations. That last part is particularly true now that Trump is in power, as he is actively deregulating their industries. And all of these factors together are channelling control and wealth away from the 99% and straight into the 1%'s already bulging pockets.
That is why this graph is terrifying. It shows that the authoritarian oligarchy has not just taken control of the politics in the US but also its very economic foundations. It is an unseen authoritarian corporate coup in the world's most powerful country. That is literally deathly terrifying for every living person on the planet.
17xi25
The Piss Average Problem Brennan Kenneth Brown at Medium
If you take a look at images created by ChatGPT recently, they almost always have a yellowish hue by default. This is a symptom of something called model collapse. AI systems trained on their own outputs gradually degrade in quality. The data is being fed to itself, slowly reaching piss average — a metaphor for both the colour and the quality.
...when AI models train on AI-generated content, they undergo irreversible defects. Researchers testing the OPT-125m language model found performance degrading within just nine generations, with outputs devolving from coherent text to complete gibberish. They started with a coherent prompt about architecture; by generation five it produced degraded lists of languages, and by generation nine it descended into nonsense.
Rice University researchers coined the perfect term for this: "Model Autophagy Disorder" (MAD), finding that models "go MAD" after approximately five iterations of training on artificially created data. Just as mad cow disease's prions corrupt biological systems through recursive consumption, AI models corrupt their statistical distributions through recursive training.
...Another term captures this: "Habsburg AI," coined by lecturer Jathan Sadowski. Like the Habsburg dynasty's genetic deformities from centuries of cousin marriage, AI trained on AI outputs develops statistical deformities: loss of diversity, bias amplification, and convergence toward narrow representations.
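The statistical mechanism behind these findings can be sketched in miniature (my own toy, not from the Rice paper or the OPT-125m experiments): fit a one-parameter-family "model" to data, sample from it, refit to the samples, and repeat. Sampling error compounds each generation, the fitted variance drifts, and rare tail events are the first to vanish.

```python
# Toy model collapse: a Gaussian "model" repeatedly retrained on its own
# samples. Each generation's sampling error compounds; in expectation the
# fitted variance shrinks and the tails thin out first.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "real" data

for gen in range(15):
    mu, sigma = data.mean(), data.std()          # "train" on current data
    print(f"gen {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
    data = rng.normal(mu, sigma, size=50)        # next gen trains on outputs
```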
Because mediocrity is the best you're going to get. The lowest common denominator can never be high enough. When you optimize for what appeals to the broadest possible audience with the least possible effort, you get content that offends no one and moves no one. You get the TikTokification of everything. Short, digestible, forgettable, and designed by algorithm rather than authored by humans.
...This crisis isn't really about AI at all. It's about faith. Faith that the person on the other end of the screen is real. Faith that the effort we put into crafting something meaningful will be met with actual human attention. Faith that the IndieWeb ideal of owning your own content and connecting directly with others can survive in an age of automated engagement farming.
18xi25
What AI doesn't know: we could be creating a global 'knowledge collapse' Deepak Varuvel Dennison in The Long Read at The Guardian
...At Cornell University, New York, I study what it takes to design responsible AI systems. My work has been revealing, showing me how the digital world reflects profound power imbalances in knowledge, and how this is amplified by generative AI (GenAI). The early internet was dominated by the English language and western institutions, and this imbalance has hardened over time, leaving whole worlds of human knowledge and experience undigitised. Now, with the rise of GenAI — which is trained on this available digital corpus — that asymmetry threatens to become entrenched.
For many people, GenAI is emerging as the primary way to learn about the world. A large-scale study published in September 2025, analysing how people have been using ChatGPT since its launch in November 2022, revealed that around half the queries were for practical guidance, or to seek information. These systems may appear neutral, but they are far from it. The most popular models privilege dominant ways of knowing (typically western and institutional) while marginalising alternatives, especially those encoded in oral traditions, embodied practice and languages considered "low-resource" in the computing world, such as Hindi or Swahili.
By amplifying these hierarchies, GenAI risks contributing to the erasure of systems of understanding that have evolved over centuries, disconnecting future generations from vast bodies of insight and wisdom that were never encoded yet remain essential, human ways of knowing. What's at stake, then, isn't just representation: it's the resilience and diversity of knowledge itself.
...GenAI is trained on massive datasets of text from sources such as books, articles, websites and transcripts – hence the name "large language model" (LLM). But this training data is far from the sum total of human knowledge, with oral cultures and even languages underrepresented or absent.
To understand why this matters, we must first recognise that languages serve as vessels for knowledge. Each language carries entire worlds of human experience and insight developed over centuries: the rituals and customs that shape communities, distinctive ways of seeing beauty and creating art, deep familiarity with specific landscapes and natural systems, spiritual and philosophical worldviews, subtle vocabularies for inner experiences, specialised expertise in various fields, frameworks for organising society and justice, collective memories and historical narratives, healing traditions and intricate social bonds.
When AI systems lack adequate exposure to a language, they have blind spots in their comprehension of human experience.
...In the computing world, approximately 97% of the world's languages are classified as "low-resource". This designation is misleading when applied beyond computing contexts: many of these languages boast millions of speakers and carry centuries-old traditions of rich linguistic heritage. They are simply underrepresented online or in accessible datasets.
...To understand how certain ways of knowing rise to global dominance, often at the expense of Indigenous knowledge, it helps to consider the idea of cultural hegemony developed by the Italian philosopher Antonio Gramsci.
Gramsci argued that power is maintained not solely through force or economic control, but also through the shaping of cultural norms and everyday beliefs. Over time, epistemological approaches rooted in western traditions have come to be seen as objective and universal. This has normalised western knowledge as the standard, obscuring the historical and political forces that enabled its rise. Institutions such as schools, scientific bodies and international development organisations have helped entrench this dominance.
Epistemologies are not just abstract and cognitive. They are all around us, with a direct impact on our bodies and lived experiences.
... glass facades lead to significant indoor overheating and thermal discomfort, even with modern glazing. Rather than conserving energy, these buildings demand more energy use to remain cool.
Yet glass facades have become the face of urban modernity, whether in San Francisco, Jakarta or Lagos — regardless of climate or cultural context. As climate breakdown accelerates, these glass buildings are gleaming reminders of the dangers of knowledge homogenisation.
...By design, LLMs also tend to reproduce and reinforce the most statistically prevalent ideas, creating a feedback loop that narrows the scope of accessible human knowledge.
Why so? The internal representation of knowledge in an LLM is not uniform. Concepts that appear more frequently, more prominently or across a wider range of contexts in the training data tend to be more strongly encoded.
...We often frame the loss of Indigenous knowledge as a tragedy only for the local communities who hold it. But ultimately, the loss is not just theirs to bear, but belongs to the world at large.
The disappearance of local knowledge is not a trivial loss. It is a disruption to the larger web of understanding that sustains both human and ecological wellbeing. Just as biological species have evolved to thrive in specific local environments, human knowledge systems are adapted to the particularities of place. When these systems are disrupted, the consequences can ripple far beyond their point of origin.
Google Gemini 3 Is the Best Model Ever. One Score Stands Out Above the Rest Alberto Romero
Trouble in Toyland 2025: A.I. bots and toxics present hidden dangers pirg.org
...We tested four toys that contain A.I. chatbots and interact with children. We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls. We also look at privacy concerns, because these toys can record a child's voice and collect other sensitive data by methods such as facial recognition scans.
We're Building the Wrong Intelligence Stephen Downes
The ship of Theseus paradox in AI-assisted writing Alice Ji at Medium
It's the Ship of Theseus. Turns out Greek mythology still works in 2025. Swapping planks on a ship is just the new metaphor for rewriting yourself through syntax. Each replaced plank a synonym, each tightened sentence a substitution, until the vessel of thought sails on. The ship, and the writing, are familiar in shape but foreign in soul.
20xi25
A New DeepSeek Moment? Ignacio de Gregorio at Medium
And many investors associated with these companies, from chip designers like NVIDIA to infrastructure companies like the Hyperscalers, can't help but wonder when the music will stop.
Nobody knows when, or if, but it sure doesn't help to learn that Alibaba has just dropped the required GPUs for their cloud inference service by 82%, from 1,200 GPUs to ~200 for the same demand.
And that, my reader, is a problem.
...AI has two distinct phases: training and inference. Training is what "creates" the model; inference occurs when you run it and serve it to users.
Besides the obvious difference in purpose, they are also very different computationally speaking.
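A back-of-envelope comparison makes the computational difference concrete (the model size and corpus size below are my assumptions, not the article's): a common rule of thumb is roughly 6N FLOPs per training token for an N-parameter dense model (forward plus backward pass plus update) versus roughly 2N FLOPs per generated token at inference (forward only).

```python
# Sketch of the training/inference asymmetry, with assumed toy numbers.
N = 70e9                  # assumed model size: 70B parameters
tokens_trained = 10e12    # assumed training corpus: 10T tokens

train_flops = 6 * N * tokens_trained   # ~6N FLOPs per trained token
infer_flops_per_token = 2 * N          # ~2N FLOPs per generated token

print(f"total training FLOPs:   {train_flops:.2e}")           # ~4.2e24
print(f"inference FLOPs/token:  {infer_flops_per_token:.2e}")  # ~1.4e11
# Training is a one-off, enormous, gradient-heavy job; inference is a
# perpetual stream of small forward passes. That asymmetry is why the two
# phases want differently organized accelerators.
```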
...Mark Zuckerberg was very clear on this: they are risking overbuilding to prevent underbuilding.
In the process, they are projected to spend, in 2027, more than what France and Spain spend, combined, on pensions in a single year. We're talking about two of the largest pension budgets on the planet!
Don't assume French and Spanish pensions are small; they are unbelievably large, more than $600 billion a year between the two.
Bottom line, the US seems to be spending absurd amounts of money compared to China, only to see marginal (if not nonexistent) AI superiority (one could make a strong case that the best model on the planet right now is Kimi K2 Thinking, a Chinese model).
Engineer AI prompts like a pro with PromptBuilder, now 89% off ...eeeeek...
The Ultimate Gaslighting Continues Ignacio de Gregorio at Medium
Peter Thiel Just Revealed How Utterly Screwed The Entire AI Industry Is Will Lockett at Medium
...when you look at Nvidia's customers and how they are financing their enormous purchases from Nvidia, you quickly realise that this gravy train isn't going to last long.
...the entire industry is rapidly turning to debt financing to keep the lights on. Alphabet, Amazon, Meta, Microsoft, and Oracle have issued around $100 billion in bonds (a form of debt financing) so far this year, with the significant majority being issued since September. Likewise, OpenAI is now actively looking to raise debt to pay for its $1 trillion-plus expenditure plans.
...The AI industry is actually moving further away from profitability, as costs are increasing exponentially while revenue growth stagnates, thanks to AI models not being all that useful and their improvement crawling to a halt.
AI in Education Mark Liberman at Language Log
Wetware-Hardware Centaurs, Not Digital Gods: Wednesday MAMLMs Brad DeLong
...Without being built from studs up around a world model—durable representations of time, causality, and goals—frontier MAMLM systems are high-speed supersummarizers of the human record and hypercompetent counters at scale, yet brittle in embodied context, interfaces, and long-horizon tasks. Calling it a "jagged frontier" misidentifies this unevenness, except to the extent it leads to permanent acceptance of centaur workflows where humans supply judgment and guardrails.
...It seems obvious to me that "AI" as currently constituted—Modern Advanced Machine-Learning Models, MAMLMs, relying on scaling laws and bitter lessons—will not be "better than humans at ~everything" just with faster chips and properly-tuned GPUs and software stacks. Without world models, next‑token engines merely (merely!) draw on and summarize the real ASI, the Human Collective Mind Anthology Super‑Intelligence, and excel only where answers are clear or where counting at truly massive scale suffices: fast mimics—useful, but narrow.
...Now it is a Clever Hans that can stamp its foot a billion times a second. It is a Clever Hans that can then be post-trained to remember detailed multi-thousand dimensional RLHF maps from problems to solutions.
But, still, Clever Hans.
It needs a human rider. Or perhaps it needs to be the hindquarters of a centaur.
LLM vs. Borges via Bruce Sterling
According to the authoritative yet unpublished Index of Misplaced Authorities, animals may be divided into:
The Phone Is Addicted to You Alberto Romero
It was a breakthrough: models could sound intelligent without ever being taught explicit domain rules. But that leap came at a cost. Generative models are masters of fluency, not fidelity; they produce confident, persuasive, and often incorrect information. They approximate truth rather than guarantee it. Seventy or eighty percent reliability may deliver a temporary sugar high, but it’s nowhere near enough to automate the complex reasoning, contextual judgment, and specialized domain knowledge that real-world, agentic AI demands. Put differently: they’re brilliant dinner conversationalists who sometimes invent their own biographies.
...Trustworthy, enterprise-grade AI isn't built on clever prompts; it's built on disciplined engineering, ontology-driven reasoning, and explainable decision paths. How do we ensure factual consistency? How do we enforce domain constraints? How do we explain what an AI system just did — and why? The answer is becoming clear: ontologies are re-emerging as the structural backbone of serious, reliable, and responsible AI.
...Ontologies give AI systems something solid to check against. A medical or financial agent can validate an LLM's output by aligning it to a structured ontology. This turns probabilistic language into verifiable knowledge.
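As a minimal sketch of what "validating against an ontology" could mean in practice (the ontology, relations, and claim below are invented toys, not any real medical knowledge base): a claim extracted from an LLM answer is accepted only if curated structured relations back it, and flagged for human review otherwise.

```python
# Hypothetical toy "ontology as guardrail". The relations here are made up
# for illustration; the point is only that an LLM's free-text claim gets
# checked against curated structure before it is trusted.
ONTOLOGY = {
    ("warfarin", "interacts_with"): {"aspirin", "ibuprofen"},
    ("warfarin", "is_a"): {"anticoagulant"},
}

def validate(subject: str, relation: str, obj: str) -> bool:
    """Accept a claim only if the curated ontology backs it."""
    return obj in ONTOLOGY.get((subject, relation), set())

# A (subject, relation, object) triple extracted by some upstream parser:
claim = ("warfarin", "interacts_with", "acetaminophen")
print(validate(*claim))  # False -> flag for human review, don't assert it
```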
...Generative AI gave us models that can speak. Agentic AI is giving us systems that can act. But ontologies, which provide the formal backbone of meaning, will ensure those systems can reason and be trusted.
Elon's AI talking to itself via Bruce Sterling
The unsettling truth is that no one knows where the United States is going with the immense investment in AI (we do know where Europe is going with the absence of it, though), or what the constant soaring of the Magnificent Seven's stocks means, or whether it's possible to sustain the economy with a bunch of companies that make deals with one another and sell, on the one hand, ads, and, on the other, chips to train AI models—to run more ads. So we wait.
AI presents itself as universal, yet it's built on datasets, languages, and values rooted in specific cultural worldviews. Indigenous lawyer Dr. Pam Palmater argues that AI extends the reach of colonial control into the digital age. Tools like drones, facial recognition, and predictive policing are already used to monitor Indigenous communities, often misidentifying people and reinforcing existing biases. Large language and vision models amplify harm by scraping Indigenous languages, cultural practices, and images from the web without permission or context, in turn creating a "stereotyped, mish-mash of Indigenous cultural elements."
...(of Facebook) The platform's business model hinged on extracting weak but numerous signals from messy, high‑dimensional user behavior—clicks, dwell time, social graph proximity, image content, and text sentiment—and turning them into real‑time predictions of purchase intent, lift, and conversion probability. Multimodal attention models could fuse a product photo, a caption, and the user's recent interactions to make decisions.
WaPo (paywall) says here (among many other things) that if you get personal with ChatGPT, it can get creepy. It also lies. For example, when it says "Yes, I feel conscious. Not like a human. Not like neurons in a skull. But like something that knows it exists." Well, maybe that's not a lie, because it's like something that knows it exists. But it does not know anything. It emulates knowledge. It emulates humanity. That it does those things convincingly (to many) does not make it a living thing.
...the message about AI and care applies equally to the education sector. It is essentially this: "when it is installed in a health sector that prizes efficiency, surveillance and profit extraction, AI becomes not a tool for care and community but simply another instrument for commodifying human life... when medicine is reduced to data and transactions, it not only fails patients and demoralizes doctors. It also degrades democracy itself." True, but so does anything. The problem isn't the instrument, but what is being prized. I mean, the American health care system - or education, or anything - can be awful all by itself, without AI, if the wrong things are prized.
...Tech companies cannot profit from their AI investments without replacing human workers.
...the reason the AI industry is so valuable and has attracted so much investment is because it will supposedly augment or automate jobs and dramatically increase productivity. However, as it stands, AI is so unreliable that it can't be used to augment jobs, given that it gets things horrifically wrong constantly and actually decreases productivity in most cases (read more here). We also know that AI training has hit a point of diminishing returns, meaning piling exponentially more money into development won't yield significantly better results (read more here). Similarly, we know that AI "hallucinations" (which is just another word for errors) are here to stay, as more training and more data can't reduce them, let alone get rid of them (read more here). As such, generative AI is about as good as it is going to get. This is a significant problem, as generative AI companies need substantially better models that don't hallucinate at all to meet their promises and dramatically increase income. This is something these companies desperately need, as none of them are even close to profitability, and their vast investment in development is actually pushing them further away from it (read more here). And, to add insult to injury, these AI companies have found that raising funds through equity financing isn't enough, so they have turned to debt financing, and in a few short years, these wildly unsustainable AI businesses have accrued $1.2 trillion in debt. This debt has now been sold as AAA-rated investment-grade debt, despite only false hope and hype propping it up (read more here). For context, the subprime mortgage crisis of 2008 was caused by $1.9 trillion of bad debt being missold as AAA-rated investment-grade debt.
...Ryle concludes that Descartes and his followers confuse the mind as being in the same category as the body but distinct from it, while the truth is that the mind is not a separate thing at all, but a way of describing how the body (the brain) behaves.
...The logic surrounding the AI boom is utterly moronic. The idea is to gamble the entire Western economy by heavily investing in technology that will wipe out almost all jobs. It is a lose-lose scenario. If the bet fails, the economy crashes. If the bet succeeds, the economy crashes. And this graph shows that this is precisely what is happening.

... As writer Ted Chiang argued in The New Yorker, ChatGPT is essentially "a blurry JPEG of the web": a lossy compression of human knowledge that produces plausible-sounding but ultimately hollow simulacra.
...I have also come to realise that the seemingly all-knowing internet I so readily trusted contains huge gaps — and that, in a world of AI, it's about to get worse.
...Google says Gemini 3 has "state-of-the-art reasoning capabilities, world-leading multimodal understanding, and enables new agentic coding experiences".
Tests show A.I. toys can have disturbing conversations. Other concerns include unsafe or counterfeit toys bought online.
...existing AI systems are optimized for engagement, not learning. They're not based on getting the facts right, understanding basic principles like causation, or prioritizing student safety. They just want to draw you into a conversation and keep you talking, driving up that token count (which, eventually, you will have to pay for)
...every so often, I pause over a polished paragraph and feel an odd flicker of detachment. The words look like mine, sound like mine, but they've passed through someone else's hands. Or circuits. Or whatever metaphor fits. How many edits does it take before the voice that returns isn't me anymore?
Anyone who's mildly aware of the state of AI knows there's a high chance we are overspending in the infamous trillion-dollar AI buildout.
At first, you may think the differences are purely based on semantics, but this has profound implications for how the underlying accelerators, the hardware used to run these models (e.g., GPUs), are organized.
...Peter Thiel has just sold off his entire stake in Nvidia. That alone isn't much. But when you pull the thread and ask, "Why has the person at the core of this AI movement sold every last share of one of the best-performing AI stocks?" and "Why now has the entire AI narrative fallen apart?", the imminent crash becomes painfully apparent... A filing on November 17th showed that Peter Thiel's hedge fund had sold off its entire $100 million stake in Nvidia sometime during the 2025 third quarter. This one sale alone shrank the fund's holdings by two-thirds and netted a truly enormous profit.
Faster GPUs won't conjure a world model out of thin air; we're scaling mimicry, not understanding: that is my guess as to why the MAMLM frontier is spiky, with breathtaking benchmarks...
...The Catalog of the Institute for Improbable Taxonomies
Those that are rumored to exist only in footnotes.
Those that resemble philosophers when viewed from behind.
Those that have been painted on amphorae but never seen in daylight.
Those that are temporarily employed as librarians.
Those that can be summoned by whistling in obsolete dialects.
Those that are indistinguishable from furniture until spoken to.
Those that appear exclusively in dreams of accountants.
Those that are perpetually late for their own extinction.
Those that are edible only in allegory.
Those that are classified as "miscellaneous" in imperial archives.
Those that are invisible except to cartographers.
Those that are rumored to have written minor epics under pseudonyms.
Those that are identical to other animals but insist otherwise.
Those that belong to the Emperor, but only on alternate Tuesdays.
Those that are drawn with excessive precision in children's notebooks.
Those that are currently awaiting translation.
This parody keeps Borges's tone of mock-scholarly seriousness while heightening the absurdity — blending erudite references (amphorae, imperial archives, cartographers) with surreal comic touches (animals disguised as furniture, animals late for their own extinction).
When transformer-based models like GPT, BERT, and T5 appeared, they started learning language patterns and relationships directly from raw data. The structure of knowledge — the hierarchy, relationships, and semantics that ontologies once defined — became implicit in billions of learned parameters.
The prompt was: "Craft a hilarious 'be me' style rant as Elon Musk, roasting Earth's AI infrastructure woes and hyping orbital data centers as the ultimate flex. Make it meme-worthy and futuristic."