1vii25
How much (little) are the AI companies making? Cory Doctorow
...Even companies that are actually profitable (in the sense of bringing in more revenue than it costs to keep the business's lights on) love to juice their stats, and the worst offenders are the Big Tech companies, who reap a vast commercial reward from creating the illusion that they are continuing to grow, even after they've dominated their sector. Take Google: once the company attained a 90% global search market-share, there were no more immediate prospects for growth. I mean, sure, they could raise a billion new humans to maturity and train them to be Google customers (e.g., the business plan for Google Classroom), but that takes more than a decade, and Google needed growth right away. So the company hatched a plan to make search worse, so that its existing users would have to search multiple times to get the information they sought, and each additional search would give Google another chance to show you an ad.
...But that was small potatoes. What Google — and the rest of the tech sector — needed was a massive growth story, a story about how their companies, worth trillions of dollars, could double or triple in size in the coming years. There's a kind of reflexive capitalist critique that locates the drive to tell growth stories in ideology: "endless growth is the ideology of a tumor," right?
But spinning an endless growth story isn't merely ideological. It's a firmly materialistic undertaking. Companies that appear to be growing have market caps that are an order of magnitude larger than companies that are considered "mature" and at the end of their growth phase. For every dollar that Ford brings in, the market is willing to spend $8.60 on its stock. For every dollar Tesla brings in, the market is willing to spend $118 on its stock.
Why OpenAI Suddenly Erased Jony Ive from their Website Sam Enfield at Medium
Inference As Interference: How LLMs Collide Semantic Waves To Create Meaning Rob Manson at Medium
Most people think Large Language Model (LLM) Inference is just multiplying weights. But at each layer in a model we can see that an Interference Pattern is created by the collision of two waves rippling through semantic space....Here's the technical core:
When two or more propagating waves of the same type are incident on the same point, the resultant amplitude is the "vector sum." This isn't just physics. It's vector addition. And in LLMs, everything in the Residual Stream is vector addition.
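A minimal numpy sketch of that claim (toy dimensions, not Manson's code): each sublayer writes its output into the residual stream by vector addition, so contributions superpose the way interfering waves do.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8                          # hypothetical hidden size

residual = rng.normal(size=d_model)  # a token's residual-stream vector
attn_out = rng.normal(size=d_model)  # what the attention sublayer writes
mlp_out = rng.normal(size=d_model)   # what the MLP sublayer writes

# Each sublayer's contribution is simply added to the stream: a vector sum,
# so components can reinforce or cancel, like constructive and destructive
# interference between waves.
residual = residual + attn_out + mlp_out

# The sign of the dot product tells you whether two contributions
# "interfere" constructively (>0) or destructively (<0):
print(np.dot(attn_out, mlp_out))
```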
Radar Trends to Watch: July 2025 O'Reilly
Academia & MAMLMs: The Seven Labors of the Academic East-African Plains Ape Brad DeLong
...For the past 5000 years, ever since the invention of writing, higher education (and lower education too) has really had one overwhelming purpose: to equip people to be front-end nodes to the East African Plains Ape Natural Anthology Super-Intelligence—the EAPANASI. As nodes in, and able to draw on, that anthology super-intelligence, trained white-collar workers have, for 5000 years now, drawn on its knowledge and wisdom, remixed it as they applied it to their own situations, done some information processing of their own, and then uploaded their conclusions and insights to add to the store on the one hand, and used their conclusions and insights to inform others and act in the world on the other.

Training people to be such front-end nodes has always involved training them to do seven things:
- How to survey a subject...
- How to identify the live issues from the survey...
- How to hone in on a key question given the live issues...
- How to research the honed-in question...
- How to analyze the research to obtain an answer...
- How to then store the answer in a useful, permanent form...
- Last, how to persuade others that your answer is the right one, so that you can then both contribute your mite to the anthology super-intelligence and act in the world...
...the fundamental tasks we academics train people for—(1) through (7) above—have remained visibly the same even though the literate and numerate procedures to accomplish them have changed.
...MAMLMs—modern advanced machine-learning models—are software program devices that run on doped silicon hardware devices to engage in very big-data, very high-dimension, very flexible-function classification, regression, and prediction analyses at a scale that was previously unimaginable.
...two things that are stable and constant in academia are (a) technological panics, at least since the days of Plato's Phaidros, and (b) a constant essential purpose in training knowledge workers so that they can enrich their lives and be useful to others. The patterns of workings that constitute humanity as an anthology super-intelligence have changed profoundly and repeatedly. Thus the way that individuals become effective front-end nodes of and to that anthology super-intelligence has changed as well. Yet it is still, at bottom, the familiar seven academic labors: surveying, identifying, questioning, researching, analyzing, storing, and persuading. And academia exists to (a) teach students the context and the answers to key interesting questions, (b) model the process of being an effective front-end node, and (c) goad them to practice becoming effective front-end nodes.
What Happens After A.I. Destroys College Writing? The demise of the English paper will end a long intellectual tradition, but it's also an opportunity to reëxamine the purpose of higher education. Hua Hsu at New Yorker
2vii25
Google is opening its NotebookLM AI tools to students under 18 via Stephen Downes
Anthropic Gave an AI a Job. The Results Were Hilarious... and Terrifying Rohit Kumar Thakur
An AI named Claudius tried to run a small business. What happened next is a glimpse into our very strange, very near future
Inside a plan to use AI to amplify doubts about the dangers of pollutants via Stephen Downes
The Parrot in the Machine James Gleick at NYRB
...The grandiosity and hype are ripe for correction. So is the confusion about what AI is and what it does. Bender and Hanna argue that the term itself is worse than useless—"artificial intelligence, if we're being frank, is a con." It doesn't refer to a coherent set of technologies. Instead, the phrase "artificial intelligence" is deployed when the people building or selling a particular set of technologies will profit from getting others to believe that their technology is similar to humans, able to do things that, in fact, intrinsically require human judgment, perception, or creativity.
Calling a software program an AI confers special status. Marketers are suddenly applying the label everywhere they can
...Large language models do not think, and they do not understand. They lack the ability to make mental models of the world and the self. Their promoters elide these distinctions, and much of the press coverage remains credulous. Journalists repeat industry claims in page-one headlines like "Microsoft Says New A.I. Nears Human Insight" and "A.I. Poses 'Risk of Extinction,' Tech Leaders Warn." Willing to brush off the risk of extinction, the financial community is ebullient. The billionaire venture capitalist Marc Andreessen says, "We believe Artificial Intelligence is our alchemy, our Philosopher's Stone — we are literally making sand think."
I'm Losing All Trust in the AI Industry Alberto Romero
...The economic engine keeping the industry alive is unsustainable. But why do they need to make money using what they know are unacceptable tactics that will incite widespread and intense backlash? Because, despite the hype, most frontier AI labs are still money-losing operations that require constant infusions of capital. There's no solid, credible roadmap to profitability yet (except ads, alas).
...The AI industry is yet to find product-market fit because the "fit" is being manufactured by a few incumbents with pockets deep enough to play above the rules of the free market. Tim O'Reilly's first paragraph says it all:
Silicon Valley venture capitalists and many entrepreneurs espouse libertarian values. In practice, they subscribe to central planning: Rather than competing to win in the marketplace, entrepreneurs compete for funding from the Silicon Valley equivalent of the Central Committee. The race to the top is no longer driven by who has the best product or the best business model, but by who has the blessing of the venture capitalists with the deepest pockets—a blessing that will allow them to acquire the most customers the most quickly, often by providing services below cost.
...they jump from "AI will usher in an age of abundance, curing cancer and educating everyone" to "AI will destroy half of entry-level white-collar jobs in five years" every few days.
They don't realize that panic doesn't prepare society but paralyzes it instead, or that optimism doesn't reassure people but feels like gaslighting. Worst of all, both messages serve the same function: to justify accelerating AI deployment—either for safety reasons or for capability reasons—while avoiding accountability for its real-world consequences happening today, for which no millenarian rhetoric is needed and thus no influx of investor capital.
...You may have noticed a strange absence of the topic "AI agents" on this blog. It's strange because everywhere you look, you'll find people shouting, "2025 is the year of AI agents!!!" But the way I see it, that absence is both necessary and urgent. The reason is simple: AI agents—fully autonomous AIs that can do stuff on your behalf unmonitored—just don't exist.
It's one thing to hype up LLMs, but I think it crosses an invisible line of rigor and self-respect to hype something that doesn't even exist.
So 2025 is not the year of AI agents; it's the year of talking about AI agents.
Free Lunch Is Over for the AI That Broke the Web gizmodo
...On July 1, Cloudflare, one of the internet's core infrastructure companies, declared "Content Independence Day." In a landmark policy shift, the company announced it will now block AI crawlers from scraping sites hosted on its platform unless those bots pay content creators for the data they consume.
OpenAI Hits the Panic Button gizmodo
This week, the world's most important artificial intelligence company was closed. OpenAI gave its entire staff a week off to "recharge," a seemingly generous perk for a workforce relentlessly pushing toward building a world-changing technology. But this was not a wellness initiative. It was a strategic retreat in the middle of a brutal, high-stakes war for talent that is now threatening to shatter the company's carefully crafted identity.
The enemy is Meta Platforms, the social media empire that includes Facebook, WhatsApp, and Instagram. According to OpenAI's own CEO, Sam Altman, their tactics are getting ugly. In a recent Slack message to employees reviewed by WIRED, Altman addressed the departure of several key researchers poached by Mark Zuckerberg's company.
... OpenAI, the most prominent lab in the generative AI race, may be struggling to keep its own people on board. For years, OpenAI has operated with the fervor of a quasi-religious mission. The goal was not just to build products; it was to birth Artificial General Intelligence (AGI) for the benefit of humanity. The work was hard, the hours long, but the mission itself was presented as the ultimate compensation. Now, Zuckerberg is calling that bluff, making a cynical bet that every missionary has a price, and it seems he's being proven right.
Zuckerberg Was Wrong About the Metaverse. Can We Really Trust Him With Superintelligent AI? gizmodo
The dangers of so-called AI experts believing their own hype New Scientist
Joe Rogan's Latest Episode Will Make You Question Everything About AI gizmodo
Cultural Anxiety and Artificial Intelligence Giles Crouch at Medium
Silicon Valley's Safe Space NY Times
...The roots of Slate Star Codex trace back more than a decade to a polemicist and self-described A.I. researcher named Eliezer Yudkowsky, who believed that intelligent machines could end up destroying humankind. He was a driving force behind the rise of the Rationalists. The Rationalists saw themselves as people who applied scientific thought to almost any topic. This often involved "Bayesian reasoning," a way of using statistics and probability to inform beliefs.
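For readers who haven't met the term, a toy Bayesian update (all numbers invented for illustration): start with a prior belief, observe evidence, and revise.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
prior = 0.01             # initial credence in hypothesis H
p_e_given_h = 0.9        # chance of seeing evidence E if H is true
p_e_given_not_h = 0.1    # chance of seeing E if H is false

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # ~0.083: the evidence raises the belief, but not to certainty
```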
Because the Rationalists believed A.I. could end up destroying the world — a not entirely novel fear to anyone who has seen science fiction movies — they wanted to guard against it. Many worked for and donated money to MIRI, an organization created by Mr. Yudkowsky whose stated mission was "A.I. safety."
But it was the other stuff that made the Rationalists feel like outliers. They were "easily persuaded by weird, contrarian things," said Robin Hanson, a professor of economics at George Mason University who helped create the blogs that spawned the Rationalist movement. "Because they decided they were more rational than other people, they trusted their own internal judgment."
Many Rationalists embraced "effective altruism," an effort to remake charity by calculating how many people would benefit from a given donation. Some embraced the online writings of "neoreactionaries" like Curtis Yarvin, who held racist beliefs and decried American democracy. They were mostly white men, but not entirely.
I Can Tolerate Anything Except The Outgroup Posted on September 30, 2014 by "Scott Alexander"
...a critique of what Mr. Siskind, writing as Scott Alexander, described as "the Blue Tribe." In his telling, these were the people at the liberal end of the political spectrum whose characteristics included "supporting gay rights" and "getting conspicuously upset about sexists and bigots." But as the man behind Slate Star Codex saw it, there was one group the Blue Tribe could not tolerate: anyone who did not agree with the Blue Tribe. "Doesn't sound quite so noble now, does it?" he wrote.
Mr. Altman thought the essay nailed a big problem: In the face of the "internet mob" that guarded against sexism and racism, entrepreneurs had less room to explore new ideas. Many of their ideas, such as intelligence augmentation and genetic engineering, ran afoul of the Blue Tribe.
Mr. Siskind was not a member of the Blue Tribe. He was not a voice from the conservative Red Tribe ("opposing gay marriage," "getting conspicuously upset about terrorists and commies"). He identified with something called the Grey Tribe — as did many in Silicon Valley.
The Grey Tribe was characterized by libertarian beliefs, atheism, "vague annoyance that the question of gay rights even comes up," and "reading lots of blogs," he wrote. Most significantly, it believed in absolute free speech.
The essay on these tribes, Mr. Altman told me, was an inflection point for Silicon Valley. "It was a moment that people talked about a lot, lot, lot," he said.
...In 2005, Peter Thiel, the co-founder of PayPal and an early investor in Facebook, befriended Mr. Yudkowsky and gave money to MIRI. In 2010, at Mr. Thiel's San Francisco townhouse, Mr. Yudkowsky introduced him to a pair of young researchers named Shane Legg and Demis Hassabis. That fall, with an investment from Mr. Thiel's firm, the two created an A.I. lab called DeepMind.
Like the Rationalists, they believed that A.I. could end up turning against humanity, and because they held this belief, they felt they were among the only ones who were prepared to build it in a safe way.
In 2014, Google bought DeepMind for $650 million. The next year, Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community.
She Wanted to Save the World From A.I. Then the Killings Started Christopher Beam at NY Times
At first, Ziz LaSota seemed much like any other philosophically inclined young tech aspirant. Now, she and her followers are in jail, six people are dead, and Rationalists are examining whether their ideas played a role....Over the years, the Rationalist movement has counted Peter Thiel and Sam Bankman-Fried among its community, and has influenced numerous figures, including Elon Musk, Sam Altman, Steven Pinker and Nate Silver. Perhaps more significant, for the tech workers building the A.I. tools that will undergird our world, Rationalism is something like a fraternity, and a shared language.
...Some say the apocalyptic rhetoric around A.I. is a larger problem within the movement.
"There's this all-or-nothing thing, where A.I. will either bring utopia by solving all the problems, if it's successfully controlled, or literally kill everybody," said Anna Salamon, the director of the Center for Applied Rationality, a nonprofit that has served as a hub for the Rationalist movement. "From my perspective, that's already a chunk of the way toward doomsday cult dynamics."
from Medium Newsletter Anna Dorn
A recent New York Times story claims that AI chatbots are making people go crazy. It cited users who believed they were living in a simulation, fell in love with AI entities, or used chatbots' answers to justify dangerous behavior. But is ChatGPT really making people lose their minds, or just showing them what was already there?

In a response essay, tech writer Alberto Romero argues the latter. He doesn't dispute that some ChatGPT users have had disturbing experiences, but he pushes back on the idea that the chatbot is warping minds at scale. What's really happening, he argues, is a familiar pattern: new technology gets blamed for old human tendencies. Romero argues that the NYT presents isolated cases as if they're representative, when in fact they're statistical outliers among hundreds of millions of users. He sees this as part of a larger media pattern that favors spectacle and fear over proportion. Taken in context, Romero concludes that ChatGPT isn't injecting people with delusion. It's reflecting the delusions they already brought to it.
Tech developer Bryan Cruse similarly argues that the panic around ChatGPT says more about us than it does about the tool itself. He writes that the chatbot isn't making us dumber, but it's revealing how little critical thinking we were doing in the first place. From copy-paste college essays to jargon-filled mission statements, Cruse shows how much of our intellectual culture has long rewarded fluency over insight. ChatGPT didn't create that problem. It just made it impossible to ignore.
The panic isn't new; it's part of a pattern. Every time a new technology emerges, we project our fears onto it and then recoil when those fears reflect back at us.
Zuckerberg, OpenAI, & The "Internet Condom" Why Meta's AI researcher poaching spree means more than it seems. Ignacio de Gregorio at Medium
...I believe Zuck believes losing the AI train will kill the social media company. ...Recent studies indicate that approximately 95% of teens (ages 13-17) in Western countries use social media regularly, with around one-third online almost constantly, averaging 3 to 5 hours daily.
This intensive usage correlates strongly with heightened rates of anxiety, depression, body image issues, and disrupted sleep patterns.
Nearly half of teenagers report social media negatively impacting their mental health, and about 11% demonstrate problematic, addiction-like behaviors, including a loss of control over their online time.
Young adults (ages 18-24) also exhibit significant vulnerability, spending an average of 2.5 to 3 hours per day on platforms such as Instagram, TikTok, and Snapchat. An alarming 82% perceive themselves as addicted to social media.
Key stressors include anxiety from constant comparison with peers, FOMO (fear of missing out), and emotional exhaustion from continuous connectivity.
...The issue is that, with the reality of social media burnout setting in, AI is only making it worse.
...The primary issue is that social media content is typically short. Short content is much easier for AI to generate because AI models struggle with long generations (content longer than 10 seconds), but have become extremely good at producing short videos and, of course, images.
...In Ernest Hemingway's The Sun Also Rises, Bill Gorton asks Mike Campbell, "How did you go bankrupt?" to which Mike replies:
"Gradually, then suddenly."
I believe social media will follow a similar path. It won't be expected; it will just happen. Social media has made an art of creating random celebrities and memes out of nowhere and then making them disappear just as quickly as they appeared.
And just like that, social media's luck will run out fast.
I'm Losing All Trust in the AI Industry Alberto Romero
I think the AI industry is facing a handful of urgent problems it's not addressing adequately. I believe everything I write here is at least directionally true, but I could be wrong. My aim isn't to be definitive, just to spark a conversation. What follows is a set of expanded thoughts on those problems, in no particular order....A disturbing amount of effort goes into making AI tools engaging rather than useful or productive.
I don't think this is an intentional design decision. But when is? The goal is making money, not nurturing a generation of digital junkies — but if nurturing a generation of digital junkies is what it takes to make money... AI companies, like social media companies did before, are focused on increasing the number of monthly active users, the average session duration, etc. Those metrics, apparently inoffensive, lead to the same instrumental goal: to make the product maximally engaging.
...ChatGPT's constant sycophancy is annoying for the power users who want it to do actual work, but not for the bulk of users who want entertainment or company. Most people are dying to have their ideas validated by a world that mostly ignores them. Confirmation bias (tendency to believe what you already believe) + automation bias (tendency to believe what a computer says) + isolation + an AI chatbot that constantly reinforces whatever you say — an incredibly powerful recipe for psychological dependence and thus user retention and thus money.
...Silicon Valley venture capitalists and many entrepreneurs espouse libertarian values. In practice, they subscribe to central planning: Rather than competing to win in the marketplace, entrepreneurs compete for funding from the Silicon Valley equivalent of the Central Committee. The race to the top is no longer driven by who has the best product or the best business model, but by who has the blessing of the venture capitalists with the deepest pockets — a blessing that will allow them to acquire the most customers the most quickly, often by providing services below cost.
...A funny trait of the fake free-market capitalist that O'Reilly warns us about is that their values are always very elevated and pure, but only hold until the next funding round.
What Happens After A.I. Destroys College Writing? Hua Hsu at The New Yorker
...most detective work occurs after submission. Services like GPTZero, Copyleaks, and Originality.ai analyze the structure and syntax of a piece of writing and assess the likelihood that it was produced by a machine. Alex said that his art-history professor was "hella old," and therefore probably didn't know about such programs. We fed the paper into a few different A.I.-detection websites. One said there was a twenty-eight-per-cent chance that the paper was A.I.-generated; another put the odds at sixty-one per cent. "That's better than I expected," Eugene said. I asked if he thought what his friend had done was cheating, and Alex interrupted: "Of course. Are you fucking kidding me?"
...A.I. has returned us to the question of what the point of higher education is. Until we're eighteen, we go to school because we have to, studying the Second World War and reducing fractions while undergoing a process of socialization. We're essentially learning how to follow rules. College, however, is a choice, and it has always involved the tacit agreement that students will fulfill a set of tasks, sometimes pertaining to subjects they find pointless or impractical, and then receive some kind of credential. But even for the most mercenary of students, the pursuit of a grade or a diploma has come with an ancillary benefit. You're being taught how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of A.I. means that you can now bypass the process, and the difficulty, altogether.
...The way we write shapes our thinking. We process the world through the composition of text dozens of times a day, in what the literary scholar Deborah Brandt calls our era of "mass writing." It's possible that the ability to write original and interesting sentences will become only more important in a future where everyone has access to the same A.I. assistants.
...Almost all the students I interviewed in the past few months described the same trajectory: from using A.I. to assist with organizing their thoughts to off-loading their thinking altogether. For some, it became something akin to social media, constantly open in the corner of the screen, a portal for distraction. This wasn't like paying someone to write a paper for you—there was no social friction, no aura of illicit activity. Nor did it feel like sharing notes, or like passing off what you'd read in CliffsNotes or SparkNotes as your own analysis. There was no real time to reflect on questions of originality or honesty—the student basically became a project manager.
Apple Reportedly Loses Key AI Mind Luc Olinga at gizmodo
...According to Bloomberg, Meta has hired Ruoming Pang, a high-level engineer who led Apple's foundation models team. Pang, a former Google veteran and key architect behind the large language models (LLMs) powering Apple Intelligence, will now join Meta's elite AI unit focused on building superintelligent systems. His exit is a significant blow for Apple, especially at a time when the company is trying to convince the public and developers that it's serious about generative AI. He was in charge of the team of roughly 100 engineers building the foundational technology behind Apple Intelligence, the suite of AI features recently announced at the company's WWDC event.
The Hidden Cost of OpenAI's Genius Luc Olinga at gizmodo
The creator of ChatGPT is handing out stock packages that exceed its total revenue, a high-stakes gamble to keep its engineers from defecting to rivals like Meta. ...OpenAI revealed to investors that its stock-based compensation for employees surged more than fivefold last year to an astonishing $4.4 billion. That figure isn't just large; it's more than the company's entire revenue for the year, accounting for a staggering 119% of its $3.7 billion in total revenue.
Alibaba Fights Tencent for Dominance Over AI in China
...The two juggernauts aren't just racing each other—they're positioning themselves differently. Alibaba is betting on cloud-based "Model-as-a-Service" offerings, while Tencent is focused on embedding AI into its massive consumer platforms such as WeChat. Their battlegrounds now extend into verticals like education: in May, both launched rival AI agents aimed at helping students with China's grueling college entrance exams.

At the same time, the two giants have become aggressive investors. From 2023 onward, they've poured capital into nearly every major domestic model startup—MiniMax, Zhipu AI, Moonshot, Baichuan and more. Tencent even paused its own model development at times to back DeepSeek. Alibaba slashed non-AI investments to go all-in. Meanwhile, ByteDance—creator of TikTok and Douyin—has been steadily building its AI empire too, consolidating research under the new Seed division. With Wu Yonghui, former vice president of Google's DeepMind, at the helm, ByteDance is aiming to repeat its algorithmic dominance from the short-video era—only this time with generative AI.

The race isn't just about who builds the best models. It's about who defines the next era of computing in China—and beyond. As one Alibaba executive put it: "AI's impact on the world has barely begun. It's far too early to talk about winning. This is just the beginning."

If Tencent is sprinting to dominate consumer-facing AI tools, Alibaba is digging in for a long war—one fought on the deeper, more expensive battlefield of infrastructure. It's a strategy as technical as it is philosophical: open-source to shape the ecosystem, and model-as-a-service (MaaS) to monetize it. Alibaba was one of the first major Chinese tech firms to fully embrace open-source AI development. "Their plan had three legs," said a partner at a venture firm focused on AI applications. "Build their own base models, buy or build an AI chatbot company, and invest aggressively across the stack—from infrastructure to end-user applications." As it turns out, they only needed two. Alibaba's foundational model family, Qwen, proved so robust that plans to acquire another base model firm were quietly shelved.
Artificial intelligence is the opposite of education Helen Beetham via Stephen Downes
...Since 2020 it has been known by the industry that scaling up AI would make catastrophically increasing demands on power and compute, while returns on model accuracy would quickly tail away. But the industry is betting that the financial returns will continue anyway, because everyone will be locked in. ...On one side are the four or five largest corporations that have ever existed, the biggest bubble of financial over-investment, the most powerful military and surveillance states and all the combined forces of tech hype and mainstream media, while on the other side are thoughtful people with arguments.
...What if, at least in its current, (de)generative, hyper-capitalistic guise, the project of AI is actively inimical to the values of learning, teaching and scholarship, as well as to human flourishing on a finite planet?
Accenture Is Doomed Derick David at Medium
...The Economist asked the killer question that nobody wants to hear: "Who needs Accenture in the age of AI?" They softened it a bit by saying this might just be a rough patch, but they're not sugarcoating the bigger problems. "The firm's problems run deep. As semi-autonomous AI agents sweep the world, who needs consultants?"
Accenture built an empire telling companies how to navigate digital transformation. Now, the age of AI is transforming them right out of business. The irony would be delicious if it weren't so brutal.
AI Bundling, or Jack Shit Tony Hirst
...as genAI tools replace many of the old, simpler, "AI tools" (regular expressions, rule based systems, simple classifiers) we have to accept lots of other feature-creep, creepy-creep, creep.
Leif Weatherby's new Language Machines: Cultural AI and the End of Remainder Humanism Henry Farrell at Crooked Timber
... is a staggeringly ambitious effort to revive cultural theory, by highlighting its applicability to a technology that is reshaping our world. Crudely simplifying, if you want to look at the world as text; if you want to talk about the death of the author, then just look at how GPT 4.5 and its cousins work....Cultural theory was right about the death of the author. It was just a few decades early
...high era cultural theory was demonstrably right about the death of the author (or at least, the capacity of semiotic systems to produce written products independent of direct human intentionality). It just came to this conclusion a few decades earlier than it ideally should have. A structuralist understanding of language undercuts not only AI boosters' claims about intelligent AI agents just around the corner, but the "remainder humanism" of the critics who so vigorously excoriate them.
On Chaos Monkeys
Chaos engineering Wikipedia
Elon Musk, Chaos Monkey Kara Swisher at NYT (May 2022)
...Opinion: "Many top entrepreneurs genuinely respect Musk's tech chops. One told me he walked away from an hourslong conversation with Musk a few weeks ago "stoked" about how he might transform and fix Twitter. "If anyone can do it, when he is at his best, Elon can," he said.When he is at his best is key, because there is also a more disheartening side to the man, as the very voluble Musk appears to have descended over the past week into becoming what I can only describe as a chaos monkey.
For those not familiar, that is actually the name of a piece of software made by Netflix that it called "a resiliency tool that helps applications tolerate random instance failures." In other words, it aims to throw stuff haphazardly into a system to test its robustness.
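In the spirit of that description, a toy sketch (not Netflix's actual tool; all service names invented) of what "throwing stuff haphazardly into a system" looks like:

```python
import random

# Hypothetical service fleet.
fleet = {"api-1": "up", "api-2": "up", "worker-1": "up", "cache-1": "up"}

def chaos_monkey(fleet, kill_probability=0.25, seed=None):
    """Randomly terminate instances to test that the system tolerates failure."""
    rng = random.Random(seed)
    for name in fleet:
        if rng.random() < kill_probability:
            fleet[name] = "terminated"
    return fleet

print(chaos_monkey(fleet, seed=42))
# A resilient deployment keeps serving traffic despite the random losses.
```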
Speaking of which, in response to a very cogent response by Twitter's chief executive, Parag Agrawal, to counter Musk's claims about higher-than-advertised bot activity on the service, Musk flung a poop emoji at him in a tweet.
A man in the throes of pulling off the most unlikely deal in tech with savvy Spock-level chess moves lowered himself to presenting a digital piece of feces to the man he would be replacing. For what reason other than to amuse himself and his legion of rabid followers?"
... "I, like others, suspect the reason for Musk's most recent erratic behavior is to finagle a lower price, as Twitter has dropped by about 25 percent since he reached agreement on a deal late last month. Still others have posited that he is even looking to slither out of the deal and his $1 billion breakup fee altogether, as Tesla shares undergirding the acquisition have hit the skids amid the deal talks, falling to nearly $700 this week from about $1,000 when Twitter accepted Musk's offer.
If it's all negotiating tactics, then perhaps there's an endgame to Musk unleashing a series of seemingly unrelated incendiary tweets, including that he was going to vote Republican since Democrats who were "(mostly) the kindness party" are now the party of “division & hate.” That was preceded by a more paranoid one that posited that "political attacks on me will escalate dramatically in coming months" and another saying that the "dirty tricks attacks will be next-level," leaving out the pertinent fact that next-level is, in fact, a G.O.P. specialty.
In an exchange with a Yale medical school professor, Musk said the school was "the epicenter of the woke mind virus attempting to destroy civilization."
OK, then. Let me say that a shift in his political identity is Musk's to make and his alone, even if you don't agree. Even if he wants to bear-hug Donald Trump and don a "Make America Great Again" hat, that's his choice. But isn't it a curious coincidence that it is also sure to draw attention away from that clear math problem that has affected his Twitter purchase?
The problem for Musk is that while politicians often lie, numbers don't. Rage tweeting or no, it hardly matters, since a deal is a deal. After Musk claimed the transaction could not "move forward" until the bot numbers were clearer, Twitter fired back: “The board and Mr. Musk agreed to a transaction at $54.20 per share. We believe this agreement is in the best interest of all shareholders. We intend to close the transaction and enforce the merger agreement."
..."Focus on that word "enforce," which translates to legal action, sure to create even more problems for both Twitter and Musk. Does he care? Do chaos monkeys care about creating chaos? Sigh. One would hope that such a great mind would not engage in such peanut-seeking antics.
It's also a shame. He was impressively self-aware back in that stressful time in 2017, when someone on Twitter asked whether Musk suffered from bipolar disorder. He first said, "Yeah," but then wrote: "Maybe not medically tho. Dunno. Bad feelings correlate to bad events, so maybe real problem is getting carried away in what I sign up for."
"Carried away" certainly sounds familiar, as does what he tweeted one minute later: "If you buy a ticket to hell, it isn't fair to blame hell..."
No indeed, it is not. So, what's going to happen next? Dunno — I guess you have to ask the monkey. Just remember to duck."
VIKTOR SHVETZ: Why Investors Shouldn't Worry Too Much about the AI Bubble Brad DeLong
...From Arkwright's mill to today's data centers, every technological revolution has ridden a wave of speculative excess. The real risk for individual investors is in falling for the hype and putting all your chips into the pot. But the real risk for humanity, perhaps, lies not in investing too much but in failing to invest enough....While one could debate the importance or dating of specific technologies, there is no doubt that progress over the last two centuries represents the greatest break since the invention of agriculture from traditional economic, social and political patterns, as reflected in a range of income, longevity and healthcare outcomes as well as radically different political and social relations. Perez described these major shifts as the arrival of new "techno-economic paradigms."
...AI can be regarded as either a brand-new revolution (adding to the five identified by Perez) or perhaps more appropriately as the pinnacle and the escape velocity of the Information Age.
...It is only a matter of time — perhaps within a decade — before the fusion of AI with robotics, cloud computing, and 3D printing aggressively disintermediates blue-collar workers. Whether it is the ability to print houses and buildings, manufacture air conditioning units without moving parts, print entire aircraft engines, develop self-healing pipes, or deploy autonomous trucks and taxis, opportunities for efficiencies and collapse in marginal costs are arguably even better than those in services.
...Unlike the conventional giants, which are still largely based on technologies that are at least twenty years old, the start-ups are built around and rooted in AI, relying to a far greater degree on intangible assets, which function very differently to conventional tangible capital by offering a much stronger operational scalability and greater synergies while delivering spillover effects.
When AI Becomes the Water We Swim In Carlo Iacono via Stephen Downes
The most profound technological shifts aren't the ones we see coming. They're the ones that slip into our lives so seamlessly that we forget there was ever a time before them. As I write this in July 2025, we're witnessing one of these transformations: the emergence of AI browsers that don't just add artificial intelligence to our web experience but fundamentally alter what browsing means. ...The educational establishment is worried, and it should be. Not because students might cheat, but because the entire framework we've built around learning, assessment and human capability is dissolving before our eyes.
...When AI becomes environmental rather than instrumental, it transforms from something we control to something that shapes us. We've seen this pattern before with social media algorithms, but those operated on our choices. AI browsers operate on our thoughts, intercepting them at the moment of formation, suggesting completions before we've finished conceiving our own ideas.
The crisis isn't that students might use AI to write their essays. It's that the essay itself, as a form of assessment, assumes a model of individual cognition that no longer describes reality. When every browser includes AI that can see your screen, predict your needs, and seamlessly complete your thoughts, what does it mean to test individual knowledge?
The AI Industry Is Radicalizing Matteo Wong at The Atlantic
...After transferring to Columbia last fall, as Chungin "Roy" Lee tells it, he used AI to cheat his way through school, used AI to cheat his way through internship interviews at Amazon and Meta—he received offers from both—and in the winter broadcast his tool on social media. He was placed on probation, suspended, and, more keen on AI than education, dropped out this spring to found a start-up. That start-up, Cluely, markets the ability to "cheat on everything" using an AI assistant that runs in the background during meetings or sales calls. Last month, it finished a $15 million fundraising round led by Andreessen Horowitz, the storied venture-capital firm....("Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it's normal," Cluely states on its website.)
...AI zealots are everywhere in the Bay Area. I've met dozens of them: people who believe that AI's rapid ascension is inevitable and by far the most important thing happening on this planet. (Some told me it's the only thing worth caring about at all.)
...The extreme rhetoric is accompanied by extreme spending. The tech industry has collectively burned through hundreds of billions of dollars since the arrival of ChatGPT to train more powerful AI systems and build the physical infrastructure they require, and it shows no signs of stopping. In recent weeks, Meta CEO Mark Zuckerberg, apparently desperate to catch up in the AI race, has been on a recruiting spree in which he has reportedly offered nine-figure packages to top researchers. (Meta says that the numbers have been exaggerated or misrepresented.) Exactly how generative AI will make a profit is not at all clear, but tech companies seem to have faith that the money will flow once the technology has completely rewired the world.
What Do Commercials About A.I. Really Promise? Vinson Cunningham at New Yorker
...Thinking's our whole thing. A company that promises—however jokily—to do your thinking for you is, not even subtly, also threatening, somewhere down the line, to scoot you off the stage for good.
Mystery AI Hype Theater 3000 | DAIR
Artificial Intelligence has too much hype. In this stream, linguist Prof. Emily M. Bender and sociologist Dr. Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything, from machine consciousness to science fiction, to political economy to art made by machines.
Sam Altman and Jony Ive at Mystery AI Hype Theater
Because Sam Altman hates opening his laptop, OpenAI is merging with iPhone guy Jony Ive's design firm in the name of some mysterious new ChatGPT-enabled consumer products: Alex and Emily go full Mystery Science Theater and dissect the announcement video. Plus how tech billionaires like Sam Altman mythologize San Francisco while their money makes it less livable for everyone else.
GPT-5 Is Coming in July 2025 — And Everything Will Change at Medium
Zuckerberg's last stand—and why it might end in disaster Enrique Dans at Medium
...What exactly is he up to? In a nutshell: trying to rebuild Meta's AI strategy from the ruins of Llama 4, its bogus open source model that failed to revolutionize the sector. Faced with yet another setback, Zuckerberg has once again decided that brute force is the answer: hire every recognizable name in the world of AI at any price, compete for talent with OpenAI, Perplexity, Anthropic and everyone else, steal their talent, and even try to buy out whole companies such as Safe Superintelligence and Perplexity. Does that sound like a serious, well-thought-out strategy? It is not.
How Google Killed OpenAI's $3 Billion Deal Without an Acquisition Ece Yildirim at gizmodo
Google just dealt OpenAI a major blow by scuttling a potential $3 billion deal, and in doing so, solidified a rising trend in Silicon Valley's AI arms race: the "non-acquisition acquisition." Google announced on July 11 that it poached key talent from the rapidly rising AI startup Windsurf, which until then had a reported $3 billion acquisition deal with OpenAI that has now collapsed. Instead, Google is paying $2.4 billion to hire away top Windsurf employees, including the company's CEO, and take a non-exclusive license to its technology, according to Bloomberg.
By poaching Windsurf's top brains but not acquiring the startup itself, Google achieved two critical goals at once: it nullified OpenAI's momentum and gained access to the startup's valuable AI technology.
OpenAI, the company behind ChatGPT, ignited the current AI frenzy back in 2022 and has been the leader in generative AI ever since. But its market lead is being increasingly challenged by big tech competitors like Google and Meta, and it is now clearer than ever that elite AI engineers are the most valuable currency in this fight for dominance.
The Day Grok Tried to Be Human Luc Olinga at gizmodo
Recursive Summarization Mark Liberman at Language Log
...The mouseover title: "I saw an article that said it was a 3 minute read then offered an AI summary, and I believe it may be included in an eventual epitaph for civilization."...So far, none of the AI helpers offer emotionally indicative grunting noises. At least not as far as I know, though maybe Grok is on it?
A.I. Is About to Solve Loneliness. That's a Problem The New Yorker
...In the social sciences and the humanities, A.I. tends to be greeted less as a technological advance than as a harbinger of decline. There are the familiar worries about jobs—ours and our students'—and about the ease with which A.I. can be used for cheating. The technology is widely seen as the soulless project of Silicon Valley billionaires whose creativity consists mostly of appropriating other people's. But what really rankles is the idea that these digital interlocutors are a plausible substitute for real friends or family. You have to be either credulous or coldhearted, many people believe, to think so....Evolution primed us to see minds everywhere; nature never prepared us for machines this adept at pretending to have them. Already, the mimicry is good enough for some—the lonely, the imaginative. Soon, it may be good enough for almost everyone.
The End of Reading Derek Thompson at The Ringer
...This rationalist tradition tends to reflect its proudest technological achievements back on the human condition. From the 1600s to the present, Western society has likened the body to a clockwork machine (inspired by mechanical clocks and automata), then to a steam engine (reflecting 19th-century ideas of energy, pressure, and fatigue), with a brief detour through the Jacquard Loom and weaving together strands of thought, followed by the nervous system as a telegraph network (mirroring telecommunication systems). Ever since the 1950s or so, the metaphor of the mind as a computer has been central to how we understand ourselves.

But the "computer" metaphor is getting stale. Media theorists have noticed this for a while, and have been busy updating it; I especially recommend K Allado-McDowell's framework culminating in contemporary neural media.
...There are four essential advances in the past decade that have transformed LLMs into the potentially world-changing technology they are today. The first is simple and well-understood: they have sucked up more and more training data, pirating "the whole internet" and as many books as they can get their hands on; Moore's Law is probably slowing down, but we still have more processing power to throw at that data. Big computer better than small computer, so far the old metaphor works.
The second is similar, and can be understood through a parallel to the hard drive and RAM. The training data is the hard drive, a static repository of knowledge; RAM, the active memory applied to a task, is the context window, or how many words the LLM can hold in its memory at once. This is usually a tighter constraint than training data, computationally — just like hard drive capacities have grown faster than RAM, it's less expensive to grow the size of the training set than to expand the context window.
...The third advance is more discontinuous, and directly interacts with the context window: the Transformer (the T in ChatGPT) models that kicked off the new era did so by parallelizing the training process to include all of the words in the same context window at once.
...It's now cliche to say that LLMs are replacing our capacity for cognition; cliches often contain some truth, but we can benefit by drilling into the technical mechanism by which this cognition is being outsourced. By abandoning the technology of longform reading and writing, we are shortening our context windows and thus weakening our capacity for attention. At the same time, LLMs advance by expanding their context windows and refining their capacity for attention (in the form of some hideously high-dimensional vector of weights).
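To make the mechanism in these excerpts concrete, here is a minimal sketch of scaled dot-product self-attention over a context window (toy dimensions, not any production model): every token scores every other token in the window and takes a weighted sum.

```python
import numpy as np

def attention(Q, K, V):
    """Each token attends to every position in the context window at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise relevance, all positions in parallel
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: where to "pay attention"
    return weights @ V                        # weighted sum of the other tokens' values

rng = np.random.default_rng(0)
context_window = 6                            # toy window; frontier models hold far more tokens
d_model = 4
X = rng.normal(size=(context_window, d_model))  # one row per token in the window
out = attention(X, X, X)                        # self-attention
print(out.shape)                                # (6, 4): one updated vector per token
```

The quadratic all-pairs score matrix is also why growing the context window is the expensive part, as the excerpt's hard-drive/RAM analogy suggests.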
The CEO of Nvidia Admits What Everybody Is Afraid of About AI
...It's about a fundamental rewiring of our world. So why is this one company so important? In the simplest terms, Nvidia makes the "brains" for artificial intelligence. Their advanced chips, known as GPUs, are the engines that power everything from ChatGPT to the complex AI models being built by Google and Microsoft. In the global gold rush for AI, Nvidia is selling all the picks and shovels, and it has made them the most powerful company on the planet.
The AI Revolution Is Eating Itself Alive Rohit Kumar Thakur at Medium
...I just read a fascinating, and frankly terrifying, research paper by Trevor Nestor, a scientist affiliated with Berkeley and Louisiana State University. The paper, with the very long but very direct title "AI Models Are Not Conscious and Are Massively Inefficient, Causing Complexity and Scalability Bottlenecks to Artificial General Superintelligence Risking Technological Singularity and Loss of Societal Dynamism or Institutional Collapse and Renewal," isn't your typical AI-dystopia story. It's a cold, hard look at the physics, economics, and sociology of our current AI trajectory.
Harvard and MIT Study: AI Models Are Not Ready to Make Scientific Discoveries AI can predict the sun will rise again tomorrow, but it can't tell you why Alberto Romero
...back to MIT, to the Brains, Minds, and Machines symposium held during MIT's 150th birthday party in 2011, and listen to Noam Chomsky again—maybe he was wrong about language, but I believe he was right in the broader sense. He said:

Statistical models may provide an accurate simulation of some phenomena, but the simulation is done completely the wrong way; people don't decide what the third word of a sentence should be by consulting a probability table keyed on the previous words, rather they map from an internal semantic form to a syntactic tree-structure, which is then linearized into words.
"Statistical models may provide an accurate simulation of some phenomena, but the simulation is done completely the wrong way." He was talking about language, but notice how well his sentence fits the discovery that AI models are incredibly proficient at making good predictions from poor world models!
...the power of predictive analysis lies in knowing what happens next, but it's in no way useful for knowing why it happens, and thus is ungeneralizable to any scenario with the slightest variation in conditions. There's a trend in the scientific community, Chomsky lamented, to call that a success, but that's "novel in the history of science." Science is not about "approximating unanalyzed data" but about understanding the principles governing that data.
...I don't think modern scientific efforts, including but not limited to modern AI, are leaning more on the side of "accurately modeling the world" and "making predictions about phenomena" because scientists don't care about the underlying explanations and the "whys" of the world (even a three-year-old child cares about that stuff). I think the real reason is a bit more forgiving and, perhaps, a lot more pessimistic.
The universe is too complex for us. Scientists have merely made, on behalf of humanity, the difficult decision to put a good face on this terrible possibility.
Nvidia CEO Meets With Trump and Secures Permission to Sell AI Chips in China Again gizmodo
These Are the Biggest Rumors for the Next Generation of Meta Smart Glasses lifehacker
Praise and Addiction Fears: Musk's AI Girlfriend Sparks Fierce Debate
Elon Musk Turns His AI Into a Flirty Anime Girlfriend
Grok Companions are customizable characters seemingly designed for a sense of intimacy. Ani doesn't just answer questions; she sends memes, makes jokes, and cultivates a familiar tone.
This taps directly into "waifu culture," a predominantly online phenomenon where individuals develop deep emotional attachments to fictional characters. Some view it as an extension of fandom, while others see it as a form of fantasy. Musk clearly sees it as a market opportunity.
With Grok Companions, he is effectively steering his chatbot towards a digital girlfriend simulator. It is not difficult to imagine future iterations of Grok serving as life coaches, emotional support partners, or even romantic stand-ins.
Unpacking Claude's System Prompt O'Reilly
Billionaires Convince Themselves AI Is Close to Making New Scientific Discoveries gizmodo
*** Harvard and MIT Study: AI Models Are Not Ready to Make Scientific Discoveries
AI can predict the sun will rise again tomorrow, but it can't tell you why Alberto Romero
Critically, Newton realized this mechanistic explanation applied to many other things besides planets, like an apple falling from a tree.
That's the power of a world model. It allows you to go beyond the immediately observed phenomena and extrapolate the underlying cause to unobserved and seemingly unrelated scenarios. Humans do this all the time, since we are toddlers. If you see someone pour juice into a cup and then knock the cup over, you understand, even without seeing the spill, that the juice will be on the floor. You don't need to witness the outcome more than once to know it. That's because you've internalized the world model: "liquid in cup + tipping = spill." Once you grasp that, you can predict all sorts of situations that you have never encountered.
This broken link between prediction and explanation has been a shortcoming of modern AI models for as long as neural networks have dominated the field (as early as the 90s, and even before). It became obvious when large language models (LLMs) grew in size and power over the past decade, now capable of solving PhD-level science problems, yet strangely unable to figure out easy ones like those of ARC-AGI 2. Why? Because they require encoding a world model beyond observed data.
...the authors refer throughout the paper to a 109-million-parameter transformer model as a "foundation model" (the main experiments are done on this small model, but they also test well-known LLMs). 109 million parameters is ~4 orders of magnitude smaller than the largest language models available today, which are ~1-2 trillion parameters. This may seem like a semantics issue, but previous research has offered evidence that scale leads to capability emergence—perhaps a 100M AI model fails this task, but a 1T AI model doesn't. I make this clarification so you don't confuse the authors' use of "foundation model" with "any LLM, as large as it might be." But I'm not here to defend the (often contested) research on emergent capabilities, so I will refrain from using this nomenclature and will resort to the more general "AI model," specifying "transformer" or "LLM" when the distinction matters.
(As clear an exposition as I have ever...)
...Empirical data taken from the world has hidden structure—orbital mechanics does, and also language—and so the premise of the LLMs at the core of ChatGPT, Claude, Gemini, Grok, DeepSeek, etc. is that they can encode that structure (syntax, semantics, etc.) just by reading tons of data many, many times and being trained on predicting the next token. And indeed they do—they write well and have a deep grasp of the form of language, and only in edge cases can you see the trick being played.
...My guess is that training an AI model to be optimized for a target in the form of "what's the most probable next token?" provides the model with incredible capabilities, but among them, the very toolkit to bypass the need for encoding a robust world model that would otherwise allow it to generalize its predictions.
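A toy rendering of that training target (all numbers invented): the model is scored purely on the probability it assigns to the token that actually came next.

```python
import numpy as np

vocab = ["the", "cup", "spills", "orbits"]
# Hypothetical model scores (logits) for the token after "the cup":
logits = np.array([0.1, 0.2, 2.5, -1.0])

probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # softmax over the vocabulary

target = vocab.index("spills")        # the token that actually came next
loss = -np.log(probs[target])         # cross-entropy: reward probable next-token guesses
print(round(loss, 3))                 # ~0.2

# Nothing in this objective asks *why* "spills" follows "the cup": a model can
# drive the loss down with surface statistics alone, which is Romero's point.
```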
...If we could build an AI model that could gather all the data in the universe, then there would be no need for a world model, because no data would be "out-of-distribution." But that's not what happens in practice: we will never not be short on data. To generalize from experience, one needs to encode a set of functions that describe not just what will happen but why the world works that way. AI models fail to do that. Humans are far from perfect, but much better. That's the conclusion of this study.
...To quote another of LLMs' harshest critics, Grady Booch, responding (in a now deleted tweet) to Sam Altman about the $7 trillion he said he'd need to "reshape the global semiconductor industry":
...So, contrary to making me more bullish on "just scale," this study updates me in favor of a complementary approach (scale is still fundamental, don't get me wrong).
...François Chollet's view: the AI community has over-focused on increasing crystallized intelligence (which is knowledge and static skill) rather than fluid intelligence. As Chollet says, they confuse intelligence with "having the skill to solve a problem." A better framing, he argues, is intelligence as the meta-skill of efficiently using previous experience to adapt on-the-fly to solve new problems (thus acquiring new skills). That's what humans do.
I Tried Going on a Date With Elon Musk's New AI Girlfriend lifehacker
Google Discover adds AI summaries, threatening publishers with further traffic declines via Stephen Downes
When Google's slop meets webslop, search stops Cory Doctorow
16vii25
Where Is AI on the Enshittification Curve? O'Reilly
The AI Alignment Problem: Why It's Hard, and Where to Start Eliezer Yudkowsky (2016) pdf
Google's AI Just Publicly Humiliated OpenAI Rohit Kumar Thakur at Medium
...The researchers made top AI models from OpenAI, Google, and Anthropic compete against each other and a set of classic, hand-coded strategies (like the famous Tit-for-Tat).
Almost instantly, they noticed something surprising. The models weren't just playing the game, they were showing consistent, deeply rooted patterns in how they played, like unique “strategic fingerprints.”
Google's Gemini acted like a master manipulator.... OpenAI's GPT models, on the other hand, were the idealists.... Then there was Anthropic's Claude... the peacekeeper.
Wall Street's AI Bubble Is Worse Than the 1999 Dot-com Bubble, Warns a Top Economist gizmodo
Google Releases AI-Generated Film Based Off Vintage Photographs PetaPixel
Does AI understand? Harvard Gazette
"Neural networks have numbers inside them; we call them weights," said Stratos Idreos, Gordon McKay Professor of Computer Science at SEAS. "Those numbers start by default randomly. We get data through the system, and we do mathematical operations based on those weights, and we get a result."
...Rather than operating from a stable understanding of reality, he argues, AI memorizes countless rules and applies them to the best of its ability, a kind of slapdash approach that looks intentional most of the time but occasionally reveals its fundamental incoherence.
..."I think one of the biggest challenges about getting to AGI is that it's not clear how to define it," said Vafa. "This is why it's important to find ways to measure how well AI systems can ‘understand' or whether they have good world models — it's hard to imagine any notion of AGI that doesn't involve having a good world model. The world models of current LLMs are lacking, but once we know how to measure their quality, we can make progress toward improving them."
Idreos' team at the Data Systems Laboratory is developing more efficient approaches so AI can process more data and reason more rigorously. He sees a future where specialized, custom-built models solve important problems, such as identifying cures for rare diseases — even if the models don't know what disease is. Whether or not that counts as understanding, Idreos said, it certainly counts as useful.
The Next Leap for AI: Why Agents Need to Learn to Believe O'Reilly
...the Belief-Desire-Intention (BDI) framework. Rooted in the philosophy of practical reasoning, BDI systems operate on three interconnected levels. Rather than hardcoding every possible scenario, this framework gives agents the cognitive architecture to reason about what they know, what they want, and what they're committed to doing—much like humans do, handling sequences of belief changes over time, including any consequent revisions to intentions in light of new information.
Beliefs represent what the agent understands about the world, including itself and others—information that may be incomplete or even incorrect but gets updated as new data arrives. Desires capture the agent's motivational state, its objectives and goals, though not all can be pursued simultaneously. Intentions are where the rubber meets the road: the specific plans or strategies the agent commits to executing, representing the subset of desires it actively pursues.
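To make the three levels concrete, here is a minimal sketch of a BDI deliberation loop in Python; the agent, its goals, and the perceive/deliberate/act structure are hypothetical illustrations, not code from the article:

    from dataclasses import dataclass, field

    @dataclass
    class BDIAgent:
        beliefs: dict = field(default_factory=dict)     # what it holds true (possibly wrong)
        desires: list = field(default_factory=list)     # goals it would like to achieve
        intentions: list = field(default_factory=list)  # the subset it has committed to

        def perceive(self, new_info: dict):
            # Beliefs are revised as new data arrives.
            self.beliefs.update(new_info)

        def deliberate(self):
            # Keep only desires still achievable under current beliefs,
            # then commit to the highest-priority one.
            options = [d for d in self.desires if self.beliefs.get(d, True)]
            self.intentions = options[:1]

        def act(self):
            for intention in self.intentions:
                print(f"executing plan for: {intention}")

    agent = BDIAgent(desires=["book_flight", "reserve_hotel"])
    agent.perceive({"book_flight": False})  # new info: flights are sold out
    agent.deliberate()                      # intentions shift in light of it
    agent.act()                             # -> executing plan for: reserve_hotel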
How Generative AI Fixes What Higher Education Broke Theodore J Kalaitzidis
Kalaitzidis starts off with exactly this point.
The ideas within [this essay] do not present GenAI as a savior or a scourge, but as a mirror, forcing us to confront what education has become and inviting us to rebuild something better.
The Cold War American university, he argues, became incoherent in its "systemic architecture" and fell back with increasing intensity on a “reductive concept of learning” that did not at all resemble the extravagant, utopian description of "liberal education" that had become commonplace in the imagination of faculty as well as wider publics. That process of falling back on a constrictive systematization produced universities that absolutely did not encourage exploration, creativity or synthesis, but instead relied on surface-level memorization and performance in service of external validation. "What students 'remember' in such systems is either context-bound or quickly forgotten," he concludes.
LLMs Die Every Time You Close the Chat Window Alberto Romero
Trump Hopes to Kill 'Woke' AI Models Petapixel
...The allegedly liberal leanings of AI models have rubbed some in the President's orbit the wrong way, including "AI Czar" David Sacks and the White House's senior policy adviser for AI, Sriram Krishnan, who WSJ reports are the key writers of the new executive order on AI.
While it may seem like the executive order is narrow in scope, applying only to AI and tech companies with federal contracts, nearly every tech company is trying to secure federal money in one way or another, and winning contracts requires staying on the administration's good side.
He also shared the results of a devastating test that debunked AI's status in its favorite field, namely writing code. This study reveals that software developers think they are operating 20% faster with AI, but they're actually running 19% slower.
Some companies are bringing back human workers because AI can't deliver positive results. Even AI researchers are now expressing skepticism. And only 30% of AI project leaders can say that their CEOs are happy with AI results.
This is called failure. There's no other name for it.
And it will get worse. The Gartner Group is now predicting that 40% of AI agent programs will be cancelled before 2027—due to "rising costs, unclear business value and inadequate risk controls."
The only thing keeping AI in growth mode is the ridiculously high level of investment in data centers by companies promoting it. But they are increasing supply at a time when demand may already have peaked.
This has happened before in history. It's called a bubble. And a bubble can continue expanding for a little while—but, sooner or later, it pops.
Based on developments of the last few days, I'm betting on sooner.
If You Trust AI, You're Asleep. (At Least You're Not "Woke") John Battelle
...you do know who the largest holders of cryptocurrency are, right? Yep, they're the same folks who just dodged accountability in the Facebook trial, along with a long list of grifters who've used crypto to buy off our current administration, starting with the President. Today brings news that they're getting what they paid for, legislation that will bring cryptocurrency into the mainstream of American economic life, including 401ks. What could possibly go wrong?
OpenAI Quietly Turns to Google to Stay Online Luc Olinga at gizmodo
Earlier this year, OpenAI CEO Sam Altman made a series of public posts on X (formerly Twitter) admitting that the company was struggling with infrastructure. There were not enough graphics processing units—known as GPUs—to keep up with user demand. GPUs are the specialized chips that allow AI models like ChatGPT to operate at scale. They are expensive, hard to find, and mostly controlled by a few tech giants.
...By partnering with Google Cloud, OpenAI gains access to some of the most advanced AI hardware and data center infrastructure on Earth. Google, like Amazon and Microsoft, runs massive server farms that rent out computing power to other companies. And unlike OpenAI, it has enough chips to meet demand.
...This partnership also reveals something deeper about the future of AI. Even as companies talk about independence, decentralization, and disruption, the reality is that a handful of tech giants still control the essential tools. Whether it is through chips, data centers, or cloud infrastructure, companies like Google, Microsoft, and Amazon are still the backbone of everything online, including artificial intelligence.
The AI Ghost in the Machine Fired Him. Then It Gave Him a New Life Luc Olinga at gizmodo
"In 30 seconds, it spit out not only the answer but the complete methodology, what we thought were the clever adaptations we figured out," Quinn recalls. "When I saw that, I realized the world has changed. It was that moment that I said we got to be all in."
He was all in, but soon he was all out. The very efficiency he helped implement by ramping down a 3,000-person human workforce with AI ultimately eliminated his own role. He was a ghost in the machine he had helped build.
...His first step was to reject the common perception of AI. He implores people to do the same. "Do not think of AI as a tool. Do not think of it as a search engine," he insists. "These companies have done themselves a huge disservice by making them look like chatbots. They are not. The more that you can think of AI as a collaborator, as the world's best expert in whatever you need sitting right there next to you, the more you get out of it."
While Everyone Learns AI, Smart Professionals Master This Instead Leandro Guarnieri at Medium
...I've seen companies spend thousands on AI tools only to have their teams ignore the recommendations. Not because the AI is wrong — but because nobody can bridge the gap between what AI produces and what humans actually need to make decisions.
The bottleneck right now is human comprehension.
...AI excels at processing information and identifying patterns. Humans excel at understanding context, making judgments, and taking action. The magic happens when someone can bridge these two capabilities.
...While your colleagues are learning prompt engineering, you should be mastering the art of making AI outputs compelling and actionable.
Practice explaining complex ideas simply. Learn to identify what matters most in a sea of information. Develop frameworks that help people move from insight to action.
The professionals who master these translation skills won't just survive the AI revolution; they'll become more valuable because of it.
Because in a world where AI can analyze everything, someone still needs to help humans understand what any of it actually means.
Children and AI Agents In A New World Giles Crouch at Medium
...our development. LLMs may disrupt this as they provide seemingly authoritative answers, yet have no cultural context, emotional resonance, or the familial and societal bonds so important in child development. So kids may end up being superior in navigating the information landscape (infosphere), but far less skilled at reading social cues, understanding implicit cultural knowledge or developing the embodied intelligence that comes from interacting with other humans.
Merlin Donald Wikipedia
CEOs Are Literally Begging to Hire Anyone With This One Skill Derick David at Medium
Meta Is Breaking OpenAI $100 Million at a Time Luc Olinga at gizmodo
...building superintelligence is a vastly different endeavor from scaling a social media application, and Meta's spending spree comes with inherent risks. Throwing $100 million at an individual is not the same as cultivating a cohesive culture, establishing a unified vision, or developing a coordinated research roadmap. Without strong scientific leadership, the lab risks transforming into a gravity well of competing egos and conflicting agendas.
Policy Uncertainty Not "AI"-Automation Is Almost Surely Behind the Bulk of Recent Graduates' Job Discontent Brad DeLong
More important, probably, is that money that would go to new hires is instead going to buying NVIDIA chips. In the current tech boom, companies are pouring vast sums into the hardware that powers artificial intelligence—most notably, the high-performance graphics processing units (GPUs) produced by NVIDIA. These chips are the backbone of machine learning and generative AI, and demand has been so intense that NVIDIA briefly became the world's most valuable company. For firms, the calculus is straightforward: Investing in AI infrastructure is seen as a ticket to future competitiveness, while hiring junior staff is a cost that can be postponed. The opportunity cost, however, is that young people seeking a first job may find doors closed—not because their skills are obsolete, but because capital is being allocated elsewhere. For a college freshman, this is a reminder that macroeconomic trends and corporate priorities—often far removed from undergraduate coursework—can shape the contours of the job market in unpredictable ways.
OpenAI's New Exec Has a Grand Plan to Make AI for Everyone gizmodo
Tech And Society's Faustian Bargain John Battelle
AI Warfare in Gaza: Targeted Killings, Automated Surveillance Walid El Houri at Informed Comment
An Editor's Thoughts on AI Ben Ulansey at Medium
It read, "I block people who use AI. I block people who tell other people it's ok to use AI. I block people who use AI for editing, too. For ideas. For anything related to creative writing.
Sometimes I think I'm overreacting. But no, I'm not. If you use AI, you're not a writer. If you have a problem with that sentence, unfollow and unsubscribe please."
Initially, I wanted to agree. As someone who owns a publication that's fielded its fair share of AI submissions, I've developed some pretty strong feelings on the subject. I respond with disgust whenever someone submits an article that I can tell has been AI-generated. It's not usually all that difficult to discern when an "author" has outsourced the task of writing to a robot and prompted it with some generic essay topic.
...I earnestly believe that AI has the potential to aid writers in expressing their thoughts better than they already can — and, most importantly, in ways that linger even when the program is no longer in front of them. It's a shame that what we so often see are people with little linguistic experience trying to bypass the journey of becoming writers completely. Those shameful cases when people enter the field and immediately surrender the expression process over to automation.
It's because I'm so passionate about what I do that I refuse to let AI do it for me. No matter how good these models get, it won't change the fact that I began writing because I love it. This isn't some menial job that people do because they have no other choice. There's no burden that's removed if I were to just tell some robot software to write my essay for me — only joy sapped. The would-be fun and enlightening and cathartic routine of expressing my ideas becomes hollow.
But even "hollow" overstates what it is when we tell AI to just elucidate our thoughts for us. The creative process evaporates. The delightfully painstaking trial and error of figuring out how to communicate our wisdom and our experiences is stripped away. The difference couldn't be more colossal.
The Algorithm Wars: Why Humans Are Winning Giles Crouch
I've been seeing other displays of this type of digital sovereignty behaviours in platforms like LinkedIn, where it is becoming increasingly popular to remove the em dash, include spelling mistakes and use interesting twists in grammar. This is a reaction to AI slop, or those who simply use LLMs (Large Language Models) for posts and responses.
Aside from algorithms, it is this AI slop content that is pushing people to reject certain content forms. When it is seen in forums and subreddits, users will quickly down-vote that content if they suspect it as being AI generated.
These pushbacks against the algorithms, the platforms, are the result of people's growing awareness of constantly being manipulated, of their sense of self (agency) being taken away. Humans are social animals. To survive, we must cooperate. Yet within that, even though it is different in various cultures, we always want some form of agency.
MAMLM as a General Purpose Technology: The Ghost in the GDP Machine Brad DeLong
...In sum: AI is the latest GPT to promise the moon. If history is any guide, we should expect the benefits to be real but delayed, broad but uneven, and—at least for a while—more visible in the stories we tell than in the numbers we collect. The challenge, as always, is to turn potential into reality, and to ensure that the gains are widely shared.
And as for AI's "superagency" or Mark Zuckerberg's building ASI—Artificial Super Intelligence? It is, right now, truly nowhere. While the phrase "AI superagency" is now a favorite in the armory of Silicon Valley's self-mythologizing rhetoricians, and the idea is seductive, there has been next to no true advance in turning every knowledge worker into a sort of intellectual superhero, equipped with digital exoskeletons for mind and memory. In the real world—by which I mean the world of quarterly earnings and harried middle managers—AI is most commonly found and will be found crunching invoices, triaging customer service tickets, or generating boilerplate marketing copy. The much-vaunted "copilots" and "assistants" are, for now, glorified autocomplete engines.
The Universal Algorithm: Your Brain is a Transformer Oğuz Birinci at Medium
You're not just reading about AI, you're experiencing a similar computational process that powers the most advanced language models. The difference isn't in the algorithm; it's in the medium. Your neurons, their silicon. Your biochemistry, their mathematics. But the underlying logic? Remarkably similar.
Why Trump's order targeting 'woke' AI may be impossible to follow New Scientist
Behind the Hype Brad DeLong at Milken Institute Review (MAMLM introduced)
Modern advanced machine-learning models (the aforementioned MAMLMs) have changed this. Now, one can query, instruct, or collaborate with machines in English or any of dozens of human languages. This democratizes access to computation, lowering the barrier for millions — perhaps billions — of people to use, customize and leverage digital tools. Indeed, this ability to "converse" with our machines in human languages represents a rupture as profound as the mouse and graphical user interface in the 1980s or the punch card to keyboard transition before it.
...Calling these technologies MAMLMs rather than AIs is thus vital to gaining a cleareyed view of them. It strips away the anthropocentric fantasy and reveals the technical and economic understructure. We are not building minds, we are refining tools. And tools reshape societies not by becoming human, but by redefining what it means to be skilled, to be productive, to be competent.
...Stripped to essentials, MAMLMs are classifiers, taking an input, mapping it to an internal representation in multiple dimensions, constructing a model of similarity and closeness over the items in that space, and then using that model to assign probabilities or labels to consequences. What makes MAMLMs distinct is their high dimensionality, their enormous data capacity and their extraordinary flexibility.
Consider this: MAMLMs ingest millions or billions of variables. They do not have a single "model" but a stack of algorithms for optimization across unimaginable numbers of parameters. "Very big data" means what it says — all the texts ever digitized, all the clicks ever logged. "Very high dimensional," as in, 175 billion parameters.
...In the past, we could classify and search by keywords, but only with substantial difficulty. We could classify things yes or no: spam or not spam, cat or dog, fraud or fair play. Today MAMLMs sort and classify by placing the vectors they use to represent individual pieces of data in 3,000-dimension virtual spaces.
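A toy illustration of that classification-by-closeness, shrunk from 3,000 dimensions to three; the vectors and labels here are invented for the example:

    import numpy as np

    # Toy stand-ins for learned embeddings (3 dimensions instead of 3,000).
    vectors = {
        "invoice": np.array([0.9, 0.1, 0.0]),
        "receipt": np.array([0.8, 0.2, 0.1]),
        "sonnet":  np.array([0.0, 0.1, 0.9]),
    }

    def cosine(a, b):
        # Closeness in the embedding space stands in for similarity of meaning.
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    query = vectors["invoice"]
    for name, v in vectors.items():
        print(f"{name}: {cosine(query, v):.2f}")
    # "receipt" scores near 1.0, "sonnet" near 0.0: classification by proximity.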
...The Microsofts, Googles, Amazons and Baidus of the world are pouring billions into MAMLM infrastructure along with acquiring the upstream talent and data required to train the models. Their goal is to shore up their pricing power by entrenching themselves as indispensable infrastructure providers for the MAMLM-based economy.
Look closely at this leading edge of early 21st-century high-tech capitalism and what do you see? Extraordinary panic. The panic is not among the masses of consumers or even the millions of workers living with financial insecurity and worried that machines will replace them. It is among the princes of Silicon Valley.
MAMLMs, the platform behemoths fear, are the one force potent enough to disrupt their business models and rob them of profits built on decades of accumulated market power. Whether it is Facebook (Meta), Google (Alphabet), Apple, Amazon or Microsoft, they all worry some startup will build a natural-language interface that people will flock to because it is easier to use than their own. Social media loyalty has faded. Why bother with Instagram or TikTok if some scrappy upstart offers seamless social connection? Why use Google search, the rock on which Alphabet's advertising empire is built, when Claude is slicker? For that matter, why be loyal to the iPhone, or Amazon or Office when MAMLM-enhanced alternatives await?
...Hence the platform giants are all spending tens of billions building natural-language interfaces to ease access. And they are spending comparable sums to build classification engines that improve their core services. As I understand it, the tech-platform incumbents do not expect to make serious money from AI. Their primary objective is to protect themselves from the erosion of profits in the businesses they already dominate.
This obsession with MAMLMs among the platform giants is not a sideshow. It is central to understanding their strategies for keeping the good times rolling in the face of disruptive technological change. Note that they are investing ginormous sums even though they see no clear path to direct profit.
The biggest winners are upstream. Think Nvidia, the company with a near-lock on production of AI-ready digital processing chips, TSMC, the dominant fabricator of advanced chips, and ASML, the Dutch company that manufactures the incredibly complex machines for making the advanced chips.
Google DeepMind Just Dropped a 'Transformers Killer' Architecture Rohit Kumar Thakur at Medium
Google DeepMind (along with researchers from KAIST AI and Mila) just dropped one. It's called "Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation," and it has the potential to be a Transformers killer.
...The Mixture-of-Recursions (MoR) architecture doesn't just patch the old skyscraper; it demolishes it and builds something far smarter in its place. It attacks the root problems with two brilliant moves.
...Instead of 32 unique, expensive floors, MoR builds just one, highly optimized "Recursion Block." It's like a single floor containing an elite team of multi-talented experts.
When a task (a token) needs work, it's sent to this one floor. If the task is complex and needs more work, it doesn't go to a new floor. It just gets sent back to the same elite team for another round of processing. This is recursion.
...MoR introduces a tiny, lightweight "Router" that acts as an intelligent bouncer or project manager. Its job is to dynamically decide the "recursion depth" for every single token that comes in.
...The model is now allocating its most precious resource... compute... on a per-token basis. It's intelligently outsourcing its brainpower only where it's needed. This is adaptive computation baked directly into the architecture's DNA.
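Here is a deliberately tiny sketch of the idea in Python, assuming toy dimensions and a made-up sigmoid router rather than the paper's actual mechanism: one shared block is applied a token-dependent number of times.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 16                                          # toy hidden size
    W_block = rng.normal(size=(d, d)) / np.sqrt(d)  # ONE shared "recursion block"
    w_router = rng.normal(size=d)                   # tiny router: scores each token

    def recursion_block(h):
        return np.tanh(h @ W_block)        # the shared layer, reused each pass

    def mor_forward(tokens, max_depth=3):
        out = []
        for h in tokens:
            # The router picks a per-token recursion depth (1..max_depth).
            gate = 1 / (1 + np.exp(-(w_router @ h)))
            depth = 1 + int(gate * (max_depth - 1) + 0.5)
            for _ in range(depth):         # "hard" tokens loop more times
                h = recursion_block(h)
            out.append(h)
        return np.stack(out)

    tokens = rng.normal(size=(5, d))       # five made-up token states
    print(mor_forward(tokens).shape)       # (5, 16)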
...This isn't just a win. It's a complete change in strategy. For years, the AI race has been a heavyweight boxing match defined by brute force. MoR turns it into a martial arts match won by precision and efficiency.
It points to a future where we won't need massive digital infrastructure every time we want a smarter AI model. Instead, we can create models that are naturally more efficient and capable.
Furthermore, the paper hints at something interesting: latent reasoning. The recursive loops act like a model's internal "chain of thought," allowing it to "ponder" a complex token for multiple steps before moving on. It's a step towards models that don't just predict, but reason.
...These experiments were done on models up to 1.7B parameters. The true test will be seeing if these incredible gains hold when scaling to the 70B, 180B, and even larger models that dominate the landscape today.
My Ethical AI Principles Stephen Downes
Which Ideas Get To Live? Warren Ellis
Like learning physics by watching Einstein do yoga Mark Liberman at Language Log
As AI Gets Smarter, It Acts More Evil Ted Gioia
Shoggoths amongst us Henry Farrell
AI is flattening language — and redistributing power Youjin Nam at Medium
Now, that dynamic is shifting. Tools like ChatGPT are blurring the line.
In professional settings, I've seen this most clearly with colleagues whose first language isn't English. With the help of AI tools, they can now level the playing field and ensure their ideas are communicated as clearly as intended. The quality of the ideas hasn't changed, but the way they're being received has. The written words finally land in a language the system recognizes as credible.
...Even today, professional writing often serves as a proxy for intelligence, competence, and cultural belonging. Strategy memos, research papers, job applications, grant proposals — so many of these rely not only on what is said, but how it's said. Fluency in the "right" tone signals legitimacy.
But that fluency is rarely distributed evenly. It mirrors access to elite education, cultural capital, and certain kinds of class. Those who master the expected cadence of professionalism — clear, authoritative, polished — are heard. Those who don't are often sidelined, regardless of the insight they bring.
...In non-English-speaking regions, AI tools are doing more than polishing prose — they're translating and adapting ideas into the global lingua franca, giving voices from Lagos to Seoul a chance to compete on equal footing.
AI-Generated Models Now Appear in Vogue Magazine PetaPixel
AI leaps from math dunce to whiz Harvard Gazette
As AI Throws Education Into Chaos, OpenAI Introduces 'Study Mode' to Help Students 'Learn' gizmodo
AI and Education Nicole Butterfield at O'Reilly
Stop subscribing to AI. Get GPT-4, Claude, and Gemini forever for $80 boing boing
Notes on Google's Attempt at Self-Disruption Brad DeLong
In the pre-AI era, the company's dominance was already formidable; most online journeys began with a query in the search box. But by answering more questions directly—whether through featured snippets, knowledge panels, or now AI summaries—Google transitions from being a mere index to an oracle. This is history rhyming with how the social networks, by keeping users engaged within their platforms, siphoned attention away from the broader web, to our great detriment. AI summaries perform a similar enclosure, but with the added imprimatur of algorithmic authority.
It's Game Over: The Real "AlphaGo Moment" Just Happened in a Chinese AI Lab Rohit Kumar Thakur at Medium
I touched on this idea before, but after digging deeper into a recent paper, "AlphaGo Moment for Model Architecture Discovery," I realized it's a much bigger deal than I first thought.
This isn't just about an AI being creative. It's about an AI system that runs the entire scientific method from start to finish: hypothesizing, coding, experimenting, and analyzing... to design its own brain. And its "thought process" is leaving a trail of evidence that is both brilliant and deeply humbling.
...the system is designed like a miniature, autonomous AI research lab. It has four key parts that work in a closed loop:
This entire loop runs 24/7. An idea becomes code, code becomes an experiment, and the results become the seed for the next idea. It's evolution in real time.
...To be competent, the AI could rely on human knowledge. But to achieve true excellence, it had to rely on its own abstract understanding synthesized from its own experience. It wasn't just copying us. It was learning to think. It learned that to make a real leap, it couldn't just reuse past successes; it had to explore, summarize, and discover its own, more abstract principles.
...We've built a system that accelerates its own evolution. The implications are staggering, and we're only in the first inning. The next time a new AI model drops that seems impossibly good, the question we need to ask is no longer "Which team of humans built this?"
But rather: "Which AI designed it?"
How AI is eating the internet Adam Tooze quoting Economist:
AI Breaks the Chinese Room Paul Siemers at Medium
...the emergence of large language models, such as GPT-4, has not only changed the landscape of AI but has also brought the Chinese Room from the realm of philosophical speculation into that of testable reality.
What was once a compelling philosophical intuition is now gone. As we step inside the room ourselves, so to speak, we realise that Searle's argument no longer evokes the intuition it was designed to.
Searle's scenario is simple enough. Imagine a person locked inside a room, receiving questions written in Chinese. The person doesn't speak Chinese but is equipped with a massive instruction manual that allows them to match the input symbols to appropriate output symbols, purely by form. From the outside, it appears as though the person understands Chinese. But inside, there is only rule-following; no comprehension, no meaning.
Searle's point is that this is all a digital computer is doing. It manipulates symbols, but it doesn't understand them. The thought experiment was meant to show, conclusively and intuitively, that no matter how sophisticated the program, genuine understanding could not emerge from syntax alone.
For a long time, many people found this persuasive. The image was so accessible, the symbolism so neat, the conclusion so intuitive.
...The Chinese Room was never a logical proof. It was a story — a vivid narrative meant to persuade through intuition. But intuitions are informed by experience, and the real-world experience of LLMs undermines the intuitive impact of the Chinese Room.
In particular, LLMs make apparent a known weakness in Searle's experiment — the rulebook and papers that enable the generation of answers in Chinese. It has long been argued that this book and paper system would need to be very complex. LLMs provide us with a concrete understanding of just how complex they would need to be.
...This article doesn't prove that ChatGPT understands language. It merely shows that Searle's argument no longer gives us a compelling reason to think it doesn't. And if an intuition pump no longer pumps intuitions, what use is it?
Hanoi Turned Upside Down Mike Loukides at O'Reilly
...But if AI isn't the embodiment of some kind of undefinable intelligence, it's still the greatest engineering project of the 21st century. The ability to synthesize human language correctly is a major achievement, as is the ability to emulate human reasoning—and “emulation” is a fair description of what it's doing. AI's detractors ignore—bizarrely, in my opinion—its tremendous utility, as if citing examples where AI generates incorrect or grossly inappropriate output means that it's useless. That isn't the case—but it does require thinking carefully about AI's limitations.
Academia in an interregnum Christopher Hobson
Over the years, as the profession continues to involute at an increasingly faster rate, I have become more convinced about what is not working, rather than having a clear sense of what ways forward might offer the most promise. My sense is that AI is the wrecking ball that will knock down this crumbling edifice, for good and bad. The difficulty becomes identifying viable forms of agency within these entropic conditions. Nonetheless, we must try in an open and earnest manner.
...The sad reality is most of what is posted online is destined to be training material for LLMs and probably not much more.
...C. Thi Nguyen's rich description of a process he terms 'value capture', by which our behaviour and choices become shaped by external values we have accepted and adopted with little thought or consideration. He explains:
Value capture happens when your environment presents you with simplified versions of your values, and those simple versions come to dominate your practical reasoning.
From this perspective, really existing academia becomes less about the pursuit of knowledge, and more about achieving metrics that count towards jobs, tenure, promotion and prestige. Nguyen develops the example:
I want to focus on one particularly clear, and quite common, form of value capture: when an institution presents you with some metric, and then you internalize that metric.
...That AI can increasingly replicate and simulate human thought is undoubtedly impressive, but also reflects the predictability and conventionality - if not banality - of so much of our thinking and writing. The steady dulling, homogenising flood of articles and online miscellanea can only increase as the cost for producing text with AI goes to zero. Nonetheless, it raises the stakes, increases the challenge for us: to think in open, creative ways that move beyond the obvious reference points. Thinking must become more human and more humane.
My sense is that one of the only credible ways of adapting to the rapid and unchecked encroachment of LLMs is a conscious emphasis on the aspects of writing that are more distinctive and less easily predictable
...A response in the form of a rearguard action that celebrates the human present in thought and word. And yet, this kind of more adventurous and idiosyncratic thinking is precisely what the deadening hand of peer-review tends to prevent. Indeed, the way peer-review presently operates - demanding displays of 'rigour', requiring a standardised structure, addressing the most obvious authors and reference points - is more likely to push writing towards a format that is easier for LLMs to imitate and replicate.
The current mode of producing research appears to be fast reaching a practical and intellectual dead end. The direction of travel has been increasingly evident for some time, but LLMs are a laser-guided missile aimed at the soft-underbelly of involuted neoliberal academia.
...What can and will replace this system is less clear. This is precisely the predicament of being in conditions described as an interregnum: it is evident the old ways of doing things no longer fit, but it is difficult to have a sense of what new approaches can work. Mastroianni's solution is a simple and obvious one: ‘experiment'. Nguyen points to one of the major impediments in doing so, however, with this not aligning with the incentives and values of participants. Institutions ‘see' metrics, this is what is rewarded, and so the churn of academic articles expands exponentially.
...Substack has been a great opportunity to push my thinking and writing in new directions. Already, however, one can feel the logic of enshittification washing over the platform. The shift towards a Twitter-like dynamic with the notes function, combined with a business model that encourages maximum pump, ends up reinforcing trends towards the debasement of language as a currency of meaning, as everyone produces more and more content that fewer and fewer people read. How to stay ahead of these involuting dynamics is a real challenge.
...There are plenty of signs that the neoliberal globalised and platformed model of academia is in severe crisis, with much less evidence of capacities for course correction. In the current moment, the great temptation is to turn to AI as an answer or solution to such problems. It is most likely neither a miracle technology nor snake oil, landing somewhere in-between. I do not pretend to know. What I am more confident in proposing is that the general direction of travel is towards modes of being and interacting that are more brutal, less civilised and more inhumane. AI as it is currently being rolled out seems likely to greatly exacerbate these dynamics.
...What role for the university when thought is no longer valued? This is a question that increasingly needs to be reckoned with. The world we are rushing towards appears heavy on data and information, but light on knowledge and wisdom.
Faced with such conditions, there is a need to reject both the ‘old' of involuted neoliberal academia and the ‘new' generated in an uncritical acceptance of Silicon Valley doctrine. Knowing what to reject is easier than what to build, but that is the challenge. Surely one response must involve recovering and building practices that actively develop and foster knowledge and thought, which is something I will be continuing to work towards here and elsewhere.
(The Lindy Effect Wikipedia)
Interfaces That Build Themselves Shreshta Shyamsundar and Rajeshwari Ganesan at O'Reilly
Chat excels at simple tasks. But when you want to plan a trip, manage a project, or collaborate with others, you find yourself spelling out every detail, reexplaining your intent and nudging the AI toward what you actually want. The system doesn't remember your preferences or context unless you keep reminding it. If your prompt is vague, the answer is generic. If you forget a detail, you're forced to start over. This endless loop is exhausting and inefficient—especially when you're working on something nuanced or ongoing.
The thing is, what most of us are dealing with right now are really just "Type 1" interfaces—conversational ones. They're flexible, sure, but they suffer from what we call "prompt effectiveness fatigue." When planning a complex project or working on something that requires maintaining context across multiple sessions, you'll have to explain your goals, constraints, and preferences over and over again. It's functional, but it's also exhausting.
...Type 1: Conversational Interfaces
This is where most of us live right now—ChatGPT, enterprise search systems using RAG, basically anything that requires you to capture your intent and context fresh in every prompt. The flexibility is great, but the cognitive load is brutal. Every conversation starts from zero.
...Type 2: Coinhabited Interfaces
This is where things get interesting. GitHub Copilot, Microsoft 365 copilots, smaller language models embedded in specific workflows—these systems have ambient context awareness. When we're using GitHub Copilot, it doesn't just respond to our prompts. It watches what we're doing. It understands the codebase we're working in, the patterns we tend to use, the libraries we prefer. The ambient context awareness means we don't have to reexplain the basics every time, reducing the cognitive overload significantly. But here's the catch: When these tools misinterpret environmental clues, the misalignment can be jarring.
...Type 3: Generative Interfaces
This is where we're headed, and it's both exciting and terrifying. Type 3 interfaces don't just respond to your prompts or watch your actions—they actually reshape themselves based on what they learn about your needs... The result feels less like navigating an app and more like having a thoughtful personal assistant who learns your work patterns and discreetly prepares the right tools for each task.
...Generative interfaces need what we think of as "sense organs"—ways to understand what's happening that go beyond explicit commands. This is fundamentally observational learning: the process by which systems acquire new behaviors by watching and interpreting the actions of others. Think of watching a skilled craftsperson at work. At first, you notice the broad strokes: which tools they reach for, how they position their materials, the rhythm of their movements. Over time, you begin to pick up subtler cues.
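As a sketch of what those "sense organs" might amount to in code, here is a deliberately tiny, hypothetical Type 3 interface that reshapes its own layout from observed usage; every name in it is invented for the example:

    from collections import Counter

    class GenerativeUI:
        """Toy Type 3 interface: reshapes itself from observed behavior."""
        def __init__(self, tools):
            self.tools = tools
            self.usage = Counter()      # the interface's crude "sense organ"

        def observe(self, tool):
            # Ambient observation: no explicit command, just watching.
            self.usage[tool] += 1

        def render(self):
            # The layout itself adapts: most-used tools surface first.
            return sorted(self.tools, key=lambda t: -self.usage[t])

    ui = GenerativeUI(["calendar", "notes", "timer", "chat"])
    for action in ["notes", "notes", "timer", "notes"]:
        ui.observe(action)
    print(ui.render())   # ['notes', 'timer', 'calendar', 'chat']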
...We're at the threshold of something genuinely new in software. Generative UI isn't just a technical upgrade; it's a fundamental change in how we interact with technology. Interfaces are becoming living artifacts—perceptive, adaptive, and capable of acting on our behalf.
Microsoft Predicts These Jobs Are Safe From AI gizmodo
AI Is Already Disrupting Labor, and Recent Grads Could Be a 'Lost Generation' gizmodo
Anthropic's latest AI assistant, released on July 15, pretty much does all the work that a finance intern would do at an average Wall Street firm. In an internal memo earlier this year, Shopify CEO Tobias Lütke told the company's hiring managers that they have to explain why an AI agent can't do the job before they can go ahead with hiring new workers. Duolingo CEO Luis von Ahn sent a similar memo to workers this year.
..."Evidence for AI's negative impact on early careers is already strong, and I worry that the current generational squeeze might evolve into a permanent reconfiguration of early career paths," McCarthy said.
This, in practice, is a tearing up of the social contract for recent graduates: entry level white-collar work is supposed to function as a training route for the rest of your career. With fewer of those opportunities at hand for recent college graduates, we are likely to see — and according to McCarthy are already seeing — increased reliance on elite internships and networking. That is only bound to widen inequality.
..."It's much harder to implement AI in a firm than people realize," Seamans told Gizmodo. "Firms don't typically have the in-house talent that's needed to train, operate and oversee whatever AI they implement, and so until you have the personnel in place that have that expertise, it's going to be really hard to rely heavily on AI."
Transformation of data centers to build out AI in the real world flowing data
Read This Before You Trust Any AI-Written Code gizmodo
...as AI agents become more common, so do agents capable of cracking the very same code. Recent research out of the University of California, Berkeley, found that AI models are getting very good at identifying exploitable bugs in code. So AI models are consistently generating insecure code, and other AI models are getting really good at spotting those vulnerabilities and exploiting them. That's all probably fine.
The Tech Industry Is Locking AI Into Old Models. That's Bad for Everyone John Battelle
Perhaps the most important question is metaphorical — what's the best metaphor for how we interact with AI? We've got plenty of examples to choose from. Will our interactions with AI end up being like the PC — a personal device that we own and control? Or will it instead end up like social media or search (or worse, television) — a centralized service that is owned and controlled by large corporations?
As generative AI took root over the past few years, I've been watching the early returns, and they're not encouraging. We're barreling down the "AI as a service" road, oblivious to the tradeoffs we're making along the way. If AI is indeed the most significant technological breakthrough of our lifetime, do we really want to adopt it into our lives under the same big tech business model that gave us Instagram, TikTok, Amazon, and the gig economy?
...AI as Waifu:
...They chose orbital mechanics because it's historically accurate: Newton built on Kepler's discoveries to derive his famous force laws. Kepler predicted the "what"—planets follow elliptical trajectories around the sun—but he never found a satisfying "why" (he invoked the divine harmony, but for some reason it didn't quite work). Newton formalized Kepler's observation by discovering the causal mechanism of planets' movements: the force of gravity.
...Kepler inferred the movement of the planets from data on past trajectories (inputs) to predict future trajectories (outputs) using geometric methods. Newton took those empirical patterns and showed they were consequences of deeper principles: universal gravitation and the laws of motion. He introduced a unified framework that connected falling apples to orbiting moons, invented calculus to describe continuous change, and gave us an explanatory world model—the underlying dynamics between forces, masses, acceleration, and motion, which made sense of the data.
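For the circular-orbit special case, the step from Newton's law back to Kepler's pattern takes only a few lines; a sketch (with M the sun's mass, m the planet's, r the orbital radius, T the period):

    % Gravity supplies the centripetal force for a circular orbit:
    \frac{GMm}{r^{2}} = \frac{mv^{2}}{r},
    \qquad v = \frac{2\pi r}{T}
    % Substituting v and solving for T^2:
    \;\;\Longrightarrow\;\;
    T^{2} = \frac{4\pi^{2}}{GM}\, r^{3}

Kepler's third law (T² proportional to r³) drops out as a consequence, with the constant fixed by G and the sun's mass: the "why" behind the "what."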
If you need $7 trillion to build the chips and the energy demand equivalent of the consumption of the United Kingdom, then—with a high level of confidence—I can assure you that you have the wrong architecture [emphasis mine].
...There's a cringe army of AI bros who are seemingly convinced that AI is going to become superintelligent and save us from ourselves — they think that AI companies are creating god. But the hundreds of billions being pumped into AI are not driven by this bizarre ideology. Rather, they are the product of material conditions, a system that sends high-flying companies into a nosedive the instant they stop climbing. AI's merits and demerits are irrelevant to this: they pump AI because they must pump. It's why they pumped metaverse and cryptocurrency and every other absurd fad.
The real risks from "AI" we need to deal with NOW is how they are already hacking our brains to turn us into zombie cognitive slaves of people and systems that do not wish us well at all—with no Artificial Superi-Intelligence required. If you think capitalism is the "final boss" constraining humanity, you haven't met the emergent properties of networked Kahneman System I stupidity. It is our own inability to filter, focus, and resist manipulation and brain-hacking that we need to most fear right now...
...the DeepMind team discovered a batch of 1800s photos at a thrift store and then fed them to a LoRA fine-tuned image generation model, in this case Google Imagen, to create new images in the same style.
...today's artificial intelligence systems can seem awfully convincing. Both large language models and other types of machine learning are made of neural networks — computational models that pass information through layers of neurons loosely modeled after the human brain.
...The agentic AI systems that dazzle us today with their ability to sense, understand, and reason are approaching a fundamental bottleneck—not one of computational power or data availability, but something far more elusive: the ability to navigate the messy, context-dependent world of human beliefs, desires, and intentions.
...Of the things I've been reading lately, I've found a lot of interesting wisdom in an essay by T.J. Kalaitzidis, who works at Brown University. (I'm grateful to him for bringing it to my attention.) Kalaitzidis' essay, “How Generative AI Fixes What Higher Education Broke”,¹ doesn't convince me that AI is actually the fix, but it does convince me that AI exposes what was already broken about higher education, especially in institutions that claim they're built around the idea of “liberal arts”, and that no response to generative AI that stands pat on the status quo version of higher education circa 2015 or so is going to pass muster.
...you have to realize that LLMs, unlike humans, don't recover their memory when they go to sleep. You are still you, but they are not them anymore because they're missing one of the two fundamental ingredients—their memories die every time you close the chat window.
The Wall Street Journal reports that the White House is writing an executive order that will force any technology companies that receive federal contracts to be "politically neutral and unbiased," as WSJ describes. The publication's sources indicate that the Trump administration aims to address AI models that it believes are overly liberal, although the exact meaning of this is unclear.
...An IBM survey of 2,000 chief executives found three out of four AI projects failed to show a return on investment, a remarkably high failure rate,” reports Andrew Orlowski. "AI agents fail to complete the job successfully about 65 to 70 percent of the time, says a study by Carnegie Mellon University and Salesforce."
Over the past two years I've been warning that when it comes to long promised "user agents" that work on our behalf, AI companies would inevitably adopt the big tech playbook of providing centralized services that they control, ensuring that consumers are dependent on their platforms and by extension, locked into their services. This architecture is anathema to true innovation in a modern data economy, but inevitable given the capital constraints of current AI models. Well, this morning brought news of OpenAI's "Agent," which purports to "take over" our computers and take action on our behalf. As I've asked, over and over, is this the way we want the future to unfold? Who exactly do we think OpenAI's agent really works for? Hint: It's not us, anymore than Facebook, Amazon, or Google ended up working for us.
...OpenAI, which is backed by Microsoft, has often been seen as a direct competitor to Google in the race to build and monetize artificial intelligence. Both companies have invested billions into AI and compete on everything, from chatbot performance to search engine dominance. Now, OpenAI is renting server space and computing power from the same company it is trying to beat.
...an operational challenge that had taken him and his team four months of intense work to solve was presented to GPT-4 as an experiment.
...While everyone else fights to become better AI operators, a small group of professionals is quietly positioning themselves for something far more valuable. They're not learning to compete with AI, they're learning to make AI useful to humans. And in a world drowning in AI-generated insights that nobody knows how to act on, these people are becoming irreplaceable.
...What we are witnessing today is the first generation of children that will never know a time when humans didn't have such a companion or assistant. This means a fundamentally different kind of human. The question then isn't so much about is this good or bad, but rather, do we understand what we are creating? What is happening?
...Meta is extending these jaw-dropping pay packages even as it struggles to fill the critical role of chief scientist within its own AI division. Despite months of outreach, the company still lacks a singular leader to spearhead its AGI ambitions. This hasn't, however, deterred Meta from attempting to acquire everyone else's top talent. This narrative transcends mere competitive hiring; it is a story of unprecedented escalation. Meta is striving to achieve superintelligence — machines that are smarter than humans and capable of outperforming human intelligence in virtually every field — by poaching the very individuals who built OpenAI's most advanced systems. The strategy appears to be to offer loyalty-level money without a clear leader or a fully defined plan, operating on the premise that if enough high-IQ individuals are gathered in one building, AGI will inevitably follow.
...There are tech bosses saying, but we do not really know how representative they are, that workers should see this year whether ChatGPT instantiations can be their interns. Why pay for a junior analyst to draft reports or summarize documents when a large language model can do it in seconds? But similar fears have accompanied every wave of technological change, from the spreadsheet's arrival in the 1980s (which, some predicted, would eliminate the need for accountants) to the earlier automation of switchboard operators and typists in the mid-20th century. The reality is that while some tasks are indeed automated away, new roles and new forms of work tend to emerge, though not always at the same pace or for the same people.
..."Once we put a personalized AI tutor on every topic at everyone's fingertips, AI will close the gap between people who have the resources to learn and people who have historically been left behind,"
...Thanks in large part to society's rapturous embrace of AI, technology has absorbed the finance industry, adopting its amorality and its rapacious appetite for risk. Capital has become tech's handmaiden, financing an endless stream of audacious bets: Half a trillion for AI server farms? Let's do it twice! A $2 billion seed round for a company with no product? Probably undervalued! Abandoning your values to access autocratic blood money? Everyone else is doing it, why not us!?
Amid the ongoing genocide in Gaza, biometric surveillance and drones have become central tools in modern warfare. Khalil Dewan is a legal scholar and investigator. He is a Nomos Scholar at SOAS University of London. Dewan has spent over 15 years researching the global war on terror and its transformation through AI, drone technology, and legal manipulation. In this interview, he discusses how targeted killings have evolved, the implications for international law, and what Gaza reveals about the future of warfare.
I stumbled onto a comment a few days ago about artificial intelligence.
...Over time an online community like a subreddit or forum group also evolves its own rituals, practices and vocabulary, which may include insider jokes and memories. Algorithms tend to interfere with these community practices and behaviours, disrupting them. This can cause fracturing in the community. Reddit is fairly good at empowering moderators and has less algorithmic interference than the others. Meta, naturally, is the worst.
...Artificial Intelligence (AI), we are told, is the new steam engine, the new electrification, the new computer: a "general-purpose technology" (GPT) that promises profound, economy-wide transformation. But what, precisely, does that mean? General-purpose technologies are those rare innovations—think James Watt's steam engine in the late 18th century, Edison's electrification in the 19th, or Turing's computer in the 20th—that fundamentally alter the production possibilities of entire economies. They are not mere gadgets or sectoral upgrades; rather, they are platforms upon which countless other innovations are built.
As your eyes move across this sentence, your brain is doing something remarkable: taking fragmented visual input, connecting it to memories from years past, predicting what comes next, and weaving it all into seamless understanding. This is exactly how transformers (the algorithm your favorite AI uses) work, and I mean exactly.
...Until now, engaging with computers has required fluency in an alien tongue. Beyond the raw on-off digital switching of machine language itself, it might have been assembly code, or a high-level software language like Fortran or Python, or a WIMP (windows, icons, menus, pointer) interface. But all of these were effectively grammars foreign to human speech. Most everyone — nonprogrammers, casual computer users, even many professionals — remained dependent on translation layers that were brittle and opaque.
...Every once in a while, a research paper comes along that isn't just an incremental improvement. It's a potential dimensional shift.
...Always avoid actual chat with a chatbot. Chatbots are driving people nuts. Don't use it to create material; it will always read like a bot, and creating things is a human joy you should not surrender to a computer. Just give it instructions and check its work later.
The most interesting LLM research that I've seen recently is from Alex Cloud and others at Anthropic and Truthful AI, "Subliminal Learning: Language models transmit behavioral traits via hidden signals in data"
LLMs too are collective information systems that condense impossibly vast bodies of human knowledge to make it useful. They begin by ingesting enormous corpuses of human generated text, scraped from the Internet, from out-of-copyright books, and pretty well everywhere else that their creators can grab machine-readable text without too much legal difficulty. The words in these corpuses are turned into vectors — mathematical terms — and the vectors are then fed into a transformer – a many-layered machine learning process – which then spits out a new set of vectors, summarizing information about which words occur in conjunction with which others. This can then be used to generate predictions and new text. Provide an LLM based system like ChatGPT with a prompt — say, 'write a precis of one of Richard Stark's Parker novels in the style of William Shakespeare.' The LLM's statistical model can guess — sometimes with surprising accuracy, sometimes with startling errors — at the words that might follow such a prompt. Supervised fine tuning can make a raw LLM system sound more like a human being. This is the mask depicted in the shoggoth meme. Reinforcement learning — repeated interactions with human or automated trainers, who 'reward' the algorithm for making appropriate responses — can make it less likely that the model will spit out inappropriate responses, such as spewing racist epithets, or providing bomb-making instructions. This is the smiley-face.
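A toy, end-to-end version of that words-to-vectors-to-prediction pipeline, with a five-word vocabulary and random (untrained) weights standing in for everything the real training process learns:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat"]
    d = 8
    E = rng.normal(size=(len(vocab), d))    # words turned into vectors
    W = rng.normal(size=(d, len(vocab)))    # maps a context vector to word scores

    def next_token_probs(context):
        # Summarize the context by averaging its word vectors (a crude
        # stand-in for what the transformer's many layers actually do)...
        h = E[[vocab.index(w) for w in context]].mean(axis=0)
        scores = h @ W
        # ...then turn scores into a probability for every word in the vocabulary.
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()

    probs = next_token_probs(["the", "cat"])
    print(dict(zip(vocab, probs.round(2))))  # untrained, so the guesses are noise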
...For a long time, "good writing" has been more than skill; it's been a form of gatekeeping. Those who could speak and write fluently in the accepted language of power — clear, polished, professional — were granted credibility. Many others — despite insightful or sharp thinking — were often filtered out.
...Google's decision to incorporate AI-generated summaries directly into its search results represents a profound shift. Google had functioned as a gateway, directing users outward, via its blue links, to a vast constellation of third-party sites. Now that model is upended: outward traffic is halved, with implications for publishers, e-commerce platforms, and information providers who have built their business models around the expectation of Google-driven traffic. Those business models become unsustainable should the AI summary become the destination and not just the gatekeeper.
...We often think of AI breakthroughs as things that happen to us. A new chatbot, a new image generator. But what if the biggest breakthrough isn't a new tool, but a new toolmaker? An AI that can invent entirely new kinds of AI, all on its own.
As OpenAI and other upstarts have soared, Google, which has about 90% of the conventional search market in America, has added AI features to its own search engine in a bid to keep up. Last year it began preceding some search results with AI-generated "overviews", which have since become ubiquitous. In May it launched "AI mode", a chatbot-like version of its search engine. The company now promises that, with AI, users can "let Google do the Googling for you".

Yet as Google does the Googling, humans no longer visit the websites from which the information is gleaned. Similarweb, which measures traffic to more than 100m web domains, estimates that worldwide search traffic (by humans) fell by about 15% in the year to June. Although some categories, such as hobbyists' sites, are doing fine, others have been hit hard (see chart). Many of the most affected are precisely the kind that might have commonly answered search queries. Science and education sites have lost 10% of their visitors. Reference sites have lost 15%. Health sites have lost 31%.

For companies that sell advertising or subscriptions, lost visitors mean lost revenue. "We had a very positive relationship with Google for a long time... They broke the deal," says Neil Vogel, head of Dotdash Meredith, which owns titles such as People and Food & Wine. Three years ago its sites got more than 60% of their traffic from Google. Now the figure is in the mid-30s. "They are stealing our content to compete with us," says Mr Vogel. Google has insisted that its use of others' content is fair. But since it launched its AI overviews, the share of news-related searches resulting in no onward clicks has risen from 56% to 69%, estimates Similarweb.
LLMs render Searle's famous thought experiment obsolete
...I want to make it clear that I'm not a convert to the cult of AGI. I don't consider myself a skeptic either; I'm a nonbeliever, and that's different. We can't talk about general intelligence meaningfully if we can't define what "intelligence" means. The hegemony of the technorati has us chasing after problem-solving metrics, as if "intelligence" could be represented by a number.
...Contemporary academia, ceaselessly warped and worsened by the collision of petty politics and neoliberal nastiness, constantly demands more and more. As researchers, we are expected to produce research™: commodified, packaged, ready to be sold back to universities. The type of scholarship and thinking encouraged by such conditions is not only increasingly incremental and irrelevant, it tends to be narrow and banal.
For most people, the face of AI is a chat window. You type a prompt, the AI responds, and the cycle repeats. This conversational model—popularized by tools like ChatGPT—has made AI approachable and flexible. Yet as soon as your needs become more complex, the cracks start to show.
...Microsoft developed what it calls an "AI applicability score," which measures whether a particular vocation can productively apply AI in its activities or not. The score "allows us to track the frontier of AI's relevance to work," researchers write. Frequently, in jobs where AI ranks relatively high in terms of applicability, the technology "often acts in a service role to the human as a coach, advisor, or teacher that gathers information and explains it to the user," the report claims. "We find the highest AI applicability scores for knowledge work occupation groups such as computer and mathematical, and office and administrative support, as well as occupations such as sales whose work activities involve providing and communicating information," it continues.
..."The disruption of jobs is already underway, it's expanding rapidly and it will continue to," according to John McCarthy, associate professor of global labor and work at Cornell University's School of Industrial and Labor Relations.
To everyday users, AI might just seem like another thing that shows up on the screen: a collection of code, data, and bits wrapped into a chatbot. However, the new toys require exponentially more resources, and those resources take up space in the real world.
We are in the era of vibe coding, allowing artificial intelligence models to generate code based on a developer's prompt. Unfortunately, under the hood, the vibes are bad. According to a recent report published by data security firm Veracode, about half of all AI-generated code contains security flaws.
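The report itself isn't excerpted here, but a hypothetical example of the kind of flaw such scans commonly flag is SQL built by string interpolation, a pattern generated code frequently reproduces. The sketch below (my illustration, not drawn from the Veracode report) contrasts it with the parameterized form.

```python
# Hypothetical illustration of a classic injection flaw that
# security scans flag, alongside the safe, parameterized version.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_unsafe(name):
    # Vulnerable: the input is spliced into the SQL string, so
    # name = "x' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("x' OR '1'='1"))  # [('alice',), ('bob',)] -- leak
print(find_user_safe("x' OR '1'='1"))    # [] -- no match, as intended
```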
...I think we're approaching AI without considering history's lessons, and because of that we're failing to ask the questions that will matter as the technology becomes inextricably embedded in our culture.