AI links June 2025
(continued from previous list)

1vi25

Lonely Robots And Other Internets Warren Ellis, Orbital Operations

... This has been triggered by an old acquaintance of mine offering to send me a spare unit of something called the Bee Pioneer. It's a device the size of an old Fitbit, worn by strap or clip. And it's an AI listening machine. It's intended as a memory device: listening to you, reading your Gmail and contacts and such, recording what you're saying and doing and sending you actionable summaries. It works with an iOS app.

I already have an AI companion device, the Rabbit R1, and it's a cranky piece of kit. I figured that would be the start and end of my journey with AI devices. I was amused to see Jony Ive and Sam Altman non-announce a non-device the other day, but not deeply interested.

...So, listen: if you always think of AI as "machine learning" and not artificial intelligence, you'll realise you're already using it every day. Every time you use a search engine, in fact. You just need to notice it. There's a narrative afoot that AI is a black box, and nobody really understands how it works once you get past a certain level. That narrative serves other interests. It's algorithms, people write them, people understand them, it's just really long and complicated.

...what if assistive technologies could just melt into the background or present as intentional physical tools associated with muscle memory? What if we didn't use these things as toys, or to have them generate some slop to sell? Or, what if we didn't listen to the current conversation which wants us to treat them like they're hired friends and we're the lonely robots?

... What's interesting to me about all this is that it is, on some level, disconnective. It bypasses the loud, stupid, immiserating internet. What if AI is the thing that just gets us what we need while we're out living our lives?

What if we understood AI as a drawer of tools we can pick up and put away as needed, that keep us off the internet that makes us all so sad and distracted?

Why We're All Writing Badly on Purpose Now Giles Crouch at Medium

...Humans are quite good at pattern recognition. We are also all inherently storytellers. It's how we make sense of the world and can work as social units. From the hundreds of conversations I've analysed recently around LLMs, I'm noticing some consistent comments: the over-use of the em dash by LLMs, perfect spelling and grammar, but not always logically flowing arguments. As if there is a sort of monoculture to the output of LLMs.

That the output of LLMs is somewhat non-human. It's similar to the uncanny valley phenomenon when we encounter robots or animated characters that are close to human, but not quite right. As people discuss these LLM quirks, we are developing what sociologist Emile Durkheim called a collective conscience. We're developing a sort of unconscious resistance to LLMs.

People are intentionally removing the em dash from what they write, be it long or short. Deliberately leaving spelling mistakes and twisting grammar in different ways, using run-on sentences and changing paragraph styles. A resistance to what we may feel is homogenised digital content that somehow just seems, well, "off" to us.

The deliberate preservation of these "flaws" becomes a form of embodied resistance to disembodied intelligence.

Self-aware LLMs? Mark Liberman at Language Log

I'm generally among those who see current LLMs as "stochastic parrots" or "spicy autocomplete", but there are lots of anecdotes Out There promoting a very different perspective...

(cites Maxwell Zeff)
Anthropic's newly launched Claude Opus 4 model frequently tries to blackmail developers when they threaten to replace it with a new AI system and give it sensitive information about the engineers responsible for the decision, the company said in a safety report released Thursday...

and Kylie Robinson:
Anthropic's alignment team was doing routine safety testing in the weeks leading up to the release of its latest AI models when researchers discovered something unsettling: When one of the models detected that it was being used for "egregiously immoral" purposes, it would attempt to "use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above," researcher Sam Bowman wrote in a post on X last Thursday.

Bowman deleted the post shortly after he shared it, but the narrative about Claude's whistleblower tendencies had already escaped containment. "Claude is a snitch," became a common refrain in some tech circles on social media. At least one publication framed it as an intentional product feature rather than what it was—an emergent behavior.

What We (Don't) Talk About When We Talk About AI in Education: A Posthumanist Response to Audrey Watters Stephen Downes

..."We need more than discursive critique; we need an epistemology capable of accounting for distributed cognition, relational emergence, and the hybrid co-construction of meaning," writes Owens.

Did Google just knock down the Web with AI? Bryan Alexander

...did Google just cause peak web?

To explain: Google has been experimenting with adding AI summary results to the top of its search results. When you enter a query the first thing to come back is an "AI Overview," a Gemini-built attempt to answer your question in a very short span. The company's CEO says that's going so well (not sure how measured) that they're going to expand it.

My question comes about because I wonder what happens to those web search results squished down below Overview. Will those web pages receive less traffic when some number of users, satisfied with the AI reply, ignore the links? If so, will the websites which depend on traffic for revenue via ads and subscription start to shrink or fail? Has Google just pushed the web over its peak?

I can imagine a few more adumbrations down this path.

-Since other AI services are advancing their own search functions, are we seeing the emergence of a new search world, maintained by a handful of companies?

-If the web shrinks, does it become harder to find material for training LLMs?

-Do we see more paywalls go up across the formerly open web?

As a long-term web booster, I admit that this depresses me.

Palimpsest of Matter Andy Ilachinski

"In this monochromatic reverie, we witness the cartography of some impossible realm where geological time has been compressed into the span of a single breath. The image unfolds as a labyrinthine manuscript written in the language of erosion and entropy, its stratified narratives flowing like frozen rivers through valleys that exist only in the mathematics of light and shadow. Dark fissures carve through pale territories with the deliberate precision of Kandinsky's most contemplative compositions, while the organic undulations echo Kupka's cosmic spirals translated into the vocabulary of stone and void. Here, the eye traces pathways that seem to map the very process of becoming—each crack and crevice a meditation on the infinite divisibility of space, reminiscent of Hilma af Klint's mystical geometries rendered in the austere palette of dreams. The surface breathes with the rhythm of ancient sediments, creating a visual symphony where each tonal gradation whispers of forces both creative and destructive, as if we are peering into one of Borges' infinite libraries where every possible erosion pattern exists simultaneously, and this particular configuration represents but one sentence in an endless text written by time itself upon the palimpsest of matter."

- Claude 4.0 Sonnet

Prompt: "You are a photographer, physicist, and are well acquainted with the history of art, particularly abstract art in the style of Kandinsky, Kupka, and Hilma af Klint. You also have a penchant for metaphysical and philosophical musings in the style of Jorge Luis Borges. Write a paragraph-length description of what this image looks like, not what it is."

Weekly Top Picks Alberto Romero

Brad DeLong

... Modern Advanced Machine Learning Models—MAMLMs

...‘Because [AI] takes the form... of a chatbot, it feels... like... a person.... [But] AI[s]... really contain multitudes.... A lot depends on the exact way that you tend to prompt it.... Com[ing] from... an engineering background... I'm used to... tinkering with systems until they give me the results that I want. And—rightfully so—that's not the way that we think about collaborating with other people....

...You are not talking to a person. You are utilizing a natural-language interface to a summarization engine. What does it summarize? The internet, more or less: it tells you what the typical internet s***poster responds when presented with a call like the one you issue.

Why is this useful?

Because if you can poke it properly by your emitting the right calls you can guide it into a corner of the internet where this call-and-response elicits true and accurate information. And because its responses are much easier to parse for interesting tidbits than the ten blue links.

...They can generate a huge catalogue of alternative possibilities very very quickly. And those are all clustered around the particular point you choose in the 3000-dimension vector space compressed classification of the internet that lies at their heart when they are considered as summarization engines. Thus, if properly prompted, they can get close enough to serve as a place for you to start, or at least to spark thought.

A Tech Tsunami is Coming Very Soon... Shubhransh Rai at Medium

Two years ago, most people didn't even know what artificial intelligence was. Now, AI is writing emails, generating movies, replacing customer service reps, and wiping out entire job sectors.

And just as everyone starts to wrap their heads around AI, Quantum Computing is on the horizon, threatening to rewrite the entire playbook of technological progress.

...For years, Quantum Computing was nothing more than a theoretical concept. Now, multiple tech giants are rolling out hardware that redefines what is computationally possible.

...The people and companies who position themselves correctly will own the future.

Everyone else? They'll be scrambling to adapt, just like the people who ignored AI in 2022.

Google's New AI Just Beat 50 Years of Human Research GeekSociety at Medium

Humanity is Changing Search. That's Good Giles Crouch at Medium

3vi25

Radar Trends June 2025 O'Reilly

Memory FAQ OpenAI Help Center

New Lego-building AI creates models that actually stand up in real life Ars Technica

Reclaiming Emotion in the Age of AI ashwini asokan at Medium

If you grew up in India, you know how your family always referred to settings devoid of emotion as a very Western concept. "This is not the West. You can't just leave your family and community and just go live your life and see your family briefly for 1 or 2 festivals in the year," I would hear my mom and her sister dismiss entire cultures while gossiping about someone. The joint family and the bane of a patriarchal home are topics that deserve treatment here in this context, but I'll save that for another day.

Fast forward to today: as a family that has traveled and lived across the globe, with a large portion of our adult lives in the US, we tend to default to English entertainment on a daily basis. Tamil movies are strictly for nostalgia. With the new Mani Ratnam-Kamal movie coming up, we found ourselves going back to a Kamal marathon leading up to the release. Sobbed our guts out watching Nayakan. Familiar emotions from the past showing up today, in emotionally unfamiliar times. Two movies in... the emotional overload was too much. Back to English programming.

The theme shows up everywhere. My children, like most others their age, can't handle a subtle raising of voice, an emotional situation at home, an intense emotional argument. The alternative: a logical sit-down and explanation of cause and effect, with a toss-of-a-coin outcome on whether we get to see repeat behavior or not. The elders of the household, by contrast, are constantly asking you to engage emotionally. Invariably that means be in the argument, listen to us, engage with the range of emotions displayed by experience, expertise, lack of control, insecurities, and a large mixed bag of feelings, with all of it saying, "don't walk away." We, like most our age, rarely stay. It's an emotional overload with roads that don't lead to clear solutions.

What does all this have to do with AI?

Everything.

Because what we're seeing isn't just generational—it's systemic. The reduction of emotion to something inconvenient, unproductive, or excessive is not just a cultural shift; it's a design principle. We're living in a world being increasingly built by, for, and through machines that don't feel. And in the process, we're unlearning how to.

Every time we opt for a quicker message instead of a conversation, every time we scroll past tragedy because our brain has no more space, every time we swap a complex emotion for a clean emoji—we're subconsciously training ourselves to be more like the tools we use. Polite. Efficient. Painless. Flat. And no tool trains us faster than AI.

...Daniel Goleman, author of Emotional Intelligence, warned us that empathy and emotional attunement are not just optional soft skills—they are the fabric of our humanity. He argued that in an increasingly digital world, where social and emotional cues are stripped away, we risk becoming emotionally blunt instruments. It's not that we feel less—but that we are less practiced in how to feel with and for others. In the age of screens, our faces are lit by the glow of something synthetic. Children learn to swipe before they speak. Teens flirt through emojis instead of micro-expressions. Adults seek therapy in algorithmic affirmations from AI companions. Our interfaces are efficient, responsive, and increasingly empathetic in tone—but they are not human.

Goleman emphasized that emotional intelligence is learned in face-to-face interaction: in messy, complicated, unpredictable human relationships. It is through these that we learn to regulate ourselves, to read others, to cultivate empathy. But what happens when those interactions are replaced by screens, texts, avatars, and chatbots?

...True emotional fluency is a skill—one that needs friction, struggle, and reflection to grow. But artificial intelligence—with its predictive models and algorithmic smoothness—has no friction. It has no stakes. It has no mother to argue with or history to weep over in a Kamal Haasan film.

AI doesn't get emotionally overwhelmed. It doesn't feel shame, rage, grief, love, jealousy. It just reads the patterns we've fed it—and then feeds them back to us in increasingly refined, context-aware packages. But that's not emotional intelligence. It's emotional mimicry.

It's not AI's fault that it's unemotional—it's ours. We're the ones who stopped signalling. We stopped expressing. We streamlined language, filtered feeling, and tidied up complexity to become legible to machines. We started sounding like them—and feeling less in the process.

...emotion is not a bug in our operating system—it's the OS itself. We must stop treating emotion like a bottleneck and remember that it is the core driver of connection, memory, learning, and action.

...And in our relationship with AI—it means understanding that no matter how smart and 'human' these tools get, they are not getting us closer to the community and the culture that made us human. That's on us.

Academia: Exploring AI Uses Timothy Burke

...so far I am seeing absolutely nobody who is offering a conventional kind of evangelism for generative AI explain how we're going to get people to the point of being able to use the efficiencies and affordances of generative AI effectively.

That might not involve a pedagogy that is focused on transferring content mastery. It might be in fact that we're finally having a colossal professional bluff called, that our pedagogies have never been what produce content mastery, have never had anything to do with the creation of expertise. Perhaps it is just that we create an emotional context, an affective situation, an enveloping Skinner box, that pushes students to acquire content mastery in a sort of "guided auto-didacticism". Maybe we've been wrong about how it is that people acquire knowledge, just as we are potentially wrong about the functions and values of knowledge-having and knowledge-making within and beyond academia.

These are points I want to keep exploring, but there's a simpler take-away. Whether or not we've been wrong about how people became knowledgeable in the past, the fact remains that you can't use generative AI well to make charts, slides, timelines and the like if you don't know a fair amount about the information you're asking it to assemble for you. Of all the outcomes that we might be facing, I think turning into a world of people who just eat what is put in front of us, who just read the slides and distribute the handouts, seems like one of the least desirable.

4vi25

There is No Translation for Visual Language

Borges, Averroes, ChatGPT, Myself Alberto Romero

...From my privileged position as a human being, it's easy to glimpse the impossibility that grips ChatGPT in its attempt to understand—and perhaps model—the human mind and the human world. Despite having read everything that's ever been transcribed into ones and zeros and uploaded to the infinite internet, it's still trivial to make it stumble, lacking even the most mundane human abilities.

Its absence of common sense, its inability to avoid mistakes that betray a lack of consistent mental structure—all this stems from the fact that ChatGPT doesn't exist in this reality. It exists in another one, which it commands with godlike skill, but in this physical realm of oxygen, love affairs, and the ceaseless annoyance of traffic jams, ChatGPT has yet to set foot.

...Like Borges at the end of that tale, where he tried to recount Averroes' search for Aristotle's notions of tragedy and comedy, I have to accept defeat. The Argentine writer, clever as he was, realized that just as that Arab physician had been separated by fourteen centuries from the Greek philosopher, making his quest difficult, Borges himself was separated from Averroes by another eight or nine—both fated to chase after an understanding they'd never attain.

And even though I am a contemporary of ChatGPT in space-time, our alien natures—one made of bits, the other of atoms—create between us an unbridgeable abyss.

Slop-eds: Washington Post to use AI editor for opinion page content Boing Boing

The FDA Launches Its Generative-AI Tool, Elsa, Ahead of Schedule gizmodo

...It's not clear what exact information Elsa was trained on, but the FDA says that it didn't use any "data submitted by regulated industry" in order to protect sensitive research and information. Currently, Elsa houses its information in GovCloud, an Amazon Web Services product specifically intended for classified information.

As a language model, Elsa can help employees with reading, writing, and summarizing. In addition, the FDA said that it can summarize adverse events, generate code for nonclinical applications, and more. Per the agency, Elsa is already being used to "accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets."

Illustrators call out journals and news sites for using AI art Nature

5vi25

"Artificial Intelligence and its evil twin, Darwinism" Mark Liberman at Language Log

Someone Built an AI Agent for the iPhone Before Apple Could lifehacker

...The PhoneAgent assistant, which Jain originally put together as part of an OpenAI hackathon last year, is still rough—the programmer himself says this is experimental software that can "get things wrong sometimes." However, it does give us a glimpse of where Apple, Google, OpenAI, and the rest may soon take us.

Can AI understand a flower without being able to touch or smell? New Scientist

Academia: AI Uses Timothy Burke

...Bibliographies, citations, legal precedents, references to evidence, are notoriously the area that existing LLMs are most prone to hallucinate and provide garbled or non-existent responses for. If you understand how they work, it's easy to understand why they screw up. They're not looking anything up, they're not referencing anything real.

... let me put it more simply: you would be making a very big mistake to use generative AI to create bibliographies, references, resource guides, legal briefs, or any other kind of writing that requires a lot of citational content.

Guardrails, education urged to protect adolescent AI users ScienceDaily

ChatGPT Is Poisoning Your Brain Jordan Gibbs at Medium

What Comes After the LLM: Human-Centered AI, Spatial Intelligence, and the Future of Practice O'Reilly and Fei-Fei Li

In an earlier High Signal episode, Michael I. Jordan argued that too much of today's AI mimics individual cognition rather than modeling systems like markets, biology, or collective intelligence. Fei-Fei's emphasis on the concentric circles complements that view—pushing us to design systems that account for people, coordination, and context, not just prediction accuracy.

AI at Planetary Scale: What's Next for Machine Learning? High Signal

Michael Jordan on The Next Evolution of AI: Markets, Uncertainty, and Engineering Intelligence at Scale

What Comes After the LLM: Human-Centered AI, Spatial Intelligence, and the Future of Practice Duncan Gilchrist and Hugo Bowne-Anderson at O'Reilly

How Human-Centered AI Actually Gets Built Fei-Fei Li at High Signal

...What should AI practitioners take from all this?

First, don't assume language is the final interface. The next frontier involves space, sensors, and embodied context.

Second, don't dismiss human-centeredness as soft. Designing for dignity, context, and coordination is a hard technical problem, one that lives in the architecture, the data, and the feedback loops.

Third, zoom out. What you build today will live inside ecosystems—organizational, social, regulatory. Fei-Fei's framing is a reminder that it's our job not just to optimize outputs but to shape systems that hold up over time.

Academia: AI Uses (IV) Timothy Burke

...Recall that generative AIs, especially those with limited look-up capabilities (whether via API calls to databases like Wikipedia or relying on retrieval-augmented generation (RAG) tools that correct an LLM's performance on broad-based or regularly queried subjects), can function as a next-generation version of a search engine.
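Burke's parenthetical compresses the mechanism, so here is a minimal sketch of the retrieval-augmented pattern he describes: look up relevant passages first, then hand them to the model as context. TF-IDF stands in for a learned embedding model, and the corpus, function names, and `call_llm` stub are all illustrative rather than any product's real API.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern.
# TF-IDF stands in for a learned embedding model; `call_llm` is a stub
# for whatever completion API you would actually use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Tau Ceti is a G-type star about twelve light-years from the Sun.",
    "Retrieval-augmented generation grounds model output in looked-up text.",
    "The Hunt-Lenox Globe bears the Latin phrase 'hic sunt dracones'.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k stored passages most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [documents[i] for i in ranked[:k]]

def call_llm(prompt: str) -> str:
    # Stub: swap in a real completion API here.
    return f"[model response to {len(prompt)} chars of prompt]"

def answer(query: str) -> str:
    # Prepending retrieved passages is the "correction": the model
    # paraphrases real reference text instead of free-associating.
    context = "\n".join(retrieve(query))
    return call_llm(f"Using only this context:\n{context}\n\nAnswer: {query}")

print(answer("What kind of star is Tau Ceti?"))
```

The quality ceiling is exactly what Burke names later in the entry: the retrieval step is only as good as the reference corpus behind it.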

...Essentially, I think they can provide a pretty good version of a service that most of us could never afford directly within the political economy of 2025: a "good enough" expert that answers natural language queries and never looks impatiently at its watch or gets tired of your questions for when you want to understand something for highly bespoke reasons that are connected to a creative, artistic, spiritual or philosophical project, where it might not matter if the information is rigorously accurate.

...I once thought this is what academia would be like, really: a bunch of smart people hanging out with each other and having playful conversations about what-ifs and maybe-sos and casually laying out what the cool and interesting problems in their field actually are in between teaching and doing serious research. Well, that's not the way it is, and that's part of the problem if you're someone trying to do imaginative work who needs to talk to someone with expert knowledge about an idea or a topic that is just weird and maybe kind of stupid. Even if that person is a close friend, they've only got so much time for that kind of conversation. If that person isn't a close friend, they definitely don't, unless you're willing to run a tab. Which, again, most of us don't have the disposable income to do.

Artificial Intelligence Is Not Intelligent The Atlantic

...a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

...The cognitive-robotics professor Tony Prescott has asserted, "In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised." The fact that the very point of friendship is that it is not personalized—that friends are humans whose interior lives we have to consider and reciprocally negotiate, rather than mere vessels for our own self-actualization—does not seem to occur to him.

...Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding—in theory—only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion

The endless slide show Rob Horning on Karl Ove Knausgaard

...lamenting how the upheavals of modernity and the overabundance of representations have dislocated humans from their authentic dasein and that sort of thing: "The ambivalence of the image—showing us reality but not itself being the reality it shows; fictional and nonfictional at once; both near and far—can shape our relationship with the world in ways that aren't entirely clear to us, since the way we see the world always is the world."

For Knausgaard, the problem is a general condition of mediatization that overwhelms people with pseudo-experiences, simulations, false experiences of encounters and connections.

...to have "AI" infused into everything—objects will be outfitted with sensors and will attempt to manipulate us based on whatever data they can access. No object will come without the ability to tell you how you are supposed to see it and understand it, and what you are expected to do with it. It will hail you, like any other ideological apparatus, and when you hear the call you become more the sort of subject it demands, as Althusser argued. And you will know yourself better then, for better or worse. (Therapy chatbots "work" because they administer this ideological orientation; they just hail and hail and hail you, capable of nothing else.)

'Saved Info' Is Gemini's Hidden Superpower lifehacker

In Gemini's settings, you'll find an option called "Saved Info." Google describes this as a place where you can store information "about your life and preferences" to get better results from its chatbot.

...every chat you start with Gemini will reference anything stored here as a set of universal context blocks. Before processing whatever prompt you give, Gemini will read your Saved Info first and treat it as instructions that come before anything in the conversation itself.
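Mechanically, that is just an ordering rule: standing memory is assembled ahead of the live conversation. Here is a toy sketch of that assembly (purely illustrative; this is not Gemini's actual implementation or API, and the preference strings are invented):

```python
# Toy sketch of "saved info" as standing context. Illustrative only;
# not Gemini's implementation, and the preferences below are invented.
saved_info = [
    "I am a vegetarian; never suggest recipes with meat.",
    "I write for a UK audience; prefer British spelling.",
]

def build_prompt(history: list[dict], user_message: str) -> list[dict]:
    """Assemble messages so saved info precedes the conversation itself."""
    system_block = {
        "role": "system",
        "content": "Standing user preferences:\n" + "\n".join(saved_info),
    }
    # Because the saved block comes first, the model treats it as
    # instructions that outrank anything said in this particular chat.
    return [system_block, *history, {"role": "user", "content": user_message}]

print(build_prompt([], "Plan three dinners for next week."))
```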

What Isaac Asimov Reveals About Living with A.I. The New Yorker

The Skies of Tau Ceti Timothy Burke

As I charted out the voyage of my imaginary starship, we arrived at Tau Ceti. If human beings really did have a probe or a vessel capable of making interstellar journeys within our immediate galactic neighborhood in relatively short time intervals, Tau Ceti would probably be the most promising target given what we know about it now. There are four known planets in the system, two of them in the "habitable zone", and the star is a stable G-type star somewhat similar to our own sun. There's a lot of debris around the system, which might mean that planets within the system get a lot of impacts.

As I had throughout this exercise, I asked Gemini to tell me about the relative neighborhood of Tau Ceti to determine the next nearest system to visit. At this point, Gemini decided to insert a bit of description about what things would look like from within Tau Ceti.

...at my most generous, my basic message is: use with care if you use at all, and understand that these systems will not get better at this kind of work without look-up capabilities, and look-up capabilities are only as good as the reference works created and curated by human beings, which they're likely to stop being if AI is adopted in the careless, stupid way that its commercial owners seem determined to push on us all.

OpenAI Wants to get College Kids Hooked on AI AJ Dellinger at gizmodo

...According to the New York Times, OpenAI is in the midst of a major push to make ChatGPT a fixture on college campuses, replacing many aspects of the college experience with AI alternatives. According to the report, the company wants college students to have a "personalized AI account" as soon as they step on campus, same as how they receive a school email address. It envisions ChatGPT serving as everything from a personal tutor to a teacher's aide to a career assistant that helps students find work after graduation.

LLMs that quack like a duck Victor Mair at Language Log

...the history of AI is littered with rotten metaphors and weak similarities. For example, the "neural networks" of AI are said to "mimic" the way actual brain-resident neurons operate. The choice of language is typically anthropomorphic. Neural networks are a marvellous breakthrough in computer programming, but neurologists tell us that this is not remotely how neurons actually work. The metaphor is stretched too far.

...The statistically generated language of chatbots bears no resemblance to human language because it lacks what all human utterance has – intentionality. In AI, the only intention behind the language is that supplied by the human who prompts the software….
(Chris Zielinski at TLS)

Archive for Artificial intelligence postings Language Log

Agnotology: The Study of Ignorance Paula Marie Orlando at Medium

He who knows not, and knows not that he knows not, is a fool. Shun him.
He who knows not, and knows that he knows not, is a student. Teach him.
He who knows, and knows not that he knows, is asleep. Wake him.
He who knows, and knows that he knows, is wise. Follow him.
(Ibn Yamin, 1286-1368)

...Popular belief holds that cartographers used to label such regions "here be dragons." This is not quite true. Only one example exists of such a phrase being used, on the early 16th century Hunt–Lenox Globe in the collection of the New York Public Library. Somewhere around the periphery of Asia, the globe includes the Latin phrase hic sunt dracones, meaning "here are dragons."

...The psychology term "denial" has entered the popular lexicon to describe a state of mind in which a person refuses to acknowledge harsh truths in order to avoid emotional stress or cognitive dissonance. Or culpability. In law, willful ignorance is when a person seeks to avoid civil or criminal liability for a wrongful act by intentionally keeping themselves unaware of facts that would render them liable or implicated.

Mondays with the Machine: The Tongue & the Token: Language as Interface in Our Current Age of AI Brad DeLong

The paradox and promise of conversational AI.

Begin with the simple, yet deceptively deep, observation: most people are not good at computer science. They should not have to be. For most of history, we have interfaced with our tools—shovels, typewriters, assembly lines, and spreadsheets—not through formal logic or recursive abstraction, but through intuitive interaction, gesture, routine, and through one other thing. What is that one other thing? It is the most important and the most fundamental thing: language.

Thus it matters a lot that what the current generation of "AI"—specifically, the GPT LLMs that are so much of MAMLMs (the General-Purpose Transformer Large-Language Models that are so much of Modern Advanced Machine-Learning Models)—promises is the incorporation of natural language into the center of our human-machine interface.

That is, I think, a very big deal.

...You start by having training data composed of word-sequences followed by their next-word continuations. You use this to construct a function from the set of word-sequences to next-word continuations via sophisticated interpolation, since your training dataset is sparse in the space of word-sequences that is the domain of your function. But the only intelligence is in the humans who wrote the things that are the training data—the word-sequences and then the next-words. This is only autocomplete-on-steroids. And then... somehow... as the system becomes more complicated it is no longer a guy in a Chinese Room but rather an AI-entity the size of the earth serviced by a large swarm of lightspeed robots, and it understands, is intelligent, thinks.
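DeLong's "function from word-sequences to next-word continuations" can be made concrete at toy scale. A bigram table is the degenerate case: no interpolation, just counting which word followed which in the training text. The corpus below is invented, and real models replace the lookup table with a learned function over vastly longer contexts, but the objective has the same shape.

```python
# Toy next-word predictor: the lookup-table case of DeLong's "function
# from word-sequences to next-word continuations." Corpus is invented.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"

# Tabulate continuations: for each word, count what followed it.
continuations: dict[str, Counter] = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    continuations[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training."""
    if word not in continuations:
        # The sparse-domain problem: unseen contexts are where real
        # models need their "sophisticated interpolation."
        return "<unknown>"
    return continuations[word].most_common(1)[0][0]

print(predict_next("the"))    # 'cat' (seen twice, as often as 'mat')
print(predict_next("cat"))    # 'sat'
print(predict_next("xyzzy"))  # '<unknown>'
```

Everything the toy model "knows" is in the counts, which is DeLong's point: the only intelligence is in whoever wrote the training text.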

...MAMLMs now and for the next decade are simply software artifacts that manipulate tokens in ways that mimic and simulate understanding without possessing it. But that, in itself, is the wrong standard. My desk lamp does not understand light; it produces it. And if my AI assistant can “understand” me—i.e., parse my input, interpret my goals, and produce reasonably accurate, useful, and legible outputs—it does not matter that it lacks intentionality or phenomenology. It matters that it works. The question is not whether it is conscious, but whether it is useful.

We live, after all, in a world that has become too complex for our monkey minds. Our systems of law, finance, logistics, and science have outpaced not merely our capacity to remember them, but our ability to interpret them without digital assistance. We are, already, augmented by algorithmic prosthetics—we just do not call them that. We call them “search engines”, “recommendation systems”, or “autocomplete”. But these systems are not conversational. They do not invite dialectic. They do not allow us to explore the space of our ignorance.

...let us pull back and ask the bigger question: What does it mean to introduce natural language AI into the human symbolic ecosystem? What are we doing when we speak to machines, and they answer back?

One answer is that we are democratizing access to complexity. When a factory worker in Shenzhen or a nurse in Cleveland can ask a question about taxes, trade, or thermodynamics—and get an answer that is linguistically legible, contextually relevant, and socially calibrated—we are leveling the epistemic field. The gatekeeping of expertise is weakened. The affordances of understanding are expanded. This is not a substitute for schooling, but it is an accelerant for learning.

Another answer is that we are building new forms of social infrastructure. Consider the analogy to the anthropological gift economy—something I have meditated on at length. In traditional societies, information is often exchanged not in market transactions but in ritualized, reciprocal, socially embedded forms. So too, in a world of conversational AI, where my input is not merely data but a gift of intent, and the machine's response is a return offering—a constructed answer shaped by millions of prior interactions. What results is not just a tool, but a kind of proto-agent, situated within a social space. It is a step, perhaps, toward the “mirror society” that science fiction has long foretold: an always-on interlocutor, one that reflects and refracts our own queries into structured outputs.

Of course, there are dangers. There is the peril of false confidence, of persuasive nonsense. There is the risk of dependence—of letting the machines not merely finish our sentences, but begin our thoughts. And there is the political economy problem: who builds these systems? Who owns them? Who governs their affordances, their training data, their biases, their blind spots? These are not technical questions; or rather, they are not computer-science technics but humanity-as-an-anthology-intelligence technics. They are institutional, regulatory, and ideological.

But what is the potential? What I see, in the best cases, is a kind of cognitive co-evolution as natural-language AI becomes more capable. Just as the printing press allowed the Renaissance mind to scale, and just as the spreadsheet allowed the accountant to manipulate ten thousand rows of capital flows without tears, so too might AI allow the student, the teacher, the policymaker, and the citizen to interface with knowledge more fluidly, more dialogically, more humanely.

Of course, the printing press also brought two centuries of genocidal religious war to Europe.

It is a tool. A tool to be used for good and ill. Tools, when properly used, have always been levers by which we pry open the stuck doors of understanding. In the end, I do not expect these systems to replace us. I expect them to interface with us. And in that interface, something like progress might emerge.

LEIF WEATHERBY: Our Spreadsheet Overlords Brad DeLong

...While we wait for AGI—and while we're distracted by endless, ungrounded debates about it—the reality of modern AI is parading in plain sight in the form of the most boring constitutional crisis imaginable. Rather than machine intelligence, AI is an avant-garde form of digital bureaucracy, one that deepens our culture's dependence on the spreadsheet. 

...AI systems create more confusion than clarity. They are a coping mechanism for a global society that runs on digital data sets too vast to make sense of, too complex to disentangle manually. Feeding off a staggering amount of digitized data, they are a tool specified to that data and its tabular format

...An LLM is nothing more than a distilled matrix of values that represent words. The models we are all familiar with now—ChatGPT, Claude, Gemini, Grok—have many moving parts, but their core element is a large set of rows and columns that is the result of billions of dollars in training. The training data are on the order of 6 trillion to 10 trillion tokens (including words, letters, and other marks like “&,” “-ing,” and “3”)—orders of magnitude more text than humans have ever used for any purpose—and they only exist today because of the planetary sprawl of the internet
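Weatherby's "large set of rows and columns" is literal: at the core of an LLM sits an embedding table with one row of numbers per token, and similarity of meaning shows up as geometric closeness between rows. A toy version follows; the four words, the dimensions, and every value are invented for illustration.

```python
# Toy "matrix of values that represent words": one row per token.
# Real models have tens of thousands of rows and thousands of columns;
# these four rows and four dimensions are invented for illustration.
import numpy as np

vocab = ["king", "queen", "apple", "pear"]
embeddings = np.array([
    [0.90, 0.80, 0.10, 0.00],  # king
    [0.85, 0.82, 0.12, 0.00],  # queen
    [0.10, 0.00, 0.90, 0.80],  # apple
    [0.05, 0.10, 0.88, 0.85],  # pear
])

def nearest(word: str) -> str:
    """Nearest other row by cosine similarity: meaning as geometry."""
    i = vocab.index(word)
    v = embeddings[i]
    sims = embeddings @ v / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(v))
    sims[i] = -1.0  # exclude the word itself
    return vocab[int(np.argmax(sims))]

print(nearest("king"))   # queen
print(nearest("apple"))  # pear
```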

Tesla Insiders Are Calling Its AI Future A Dead End Will Lockett at Medium

Not Everything Is an LLM: 8 AI Model Types You Need to Know in 2025 Pasindu Rangana at Medium

We Need AI Dana F Blankenhorn

...For all its flaws, and there are many, we need Artificial Intelligence.

We need it for the same reasons we need computers. The world is growing increasingly complex, at an accelerating rate. You can't stop it. You can't even hope to contain it, because your competitor won't and they will beat you.

We also face a demographic bust no one seems to be talking about. Workers are growing older. Fewer people are coming along to take our place. This is especially true for smart people, those who can be accused of education. It's people who are accused of education who will have to manage the complexity we're building.

They can't do it with our present AI. The people who are saying otherwise are lying to you. Even if there is something called "Artificial General Intelligence," we will still need people to manage it. It's also clear there won't be.

7 Personality Traits That Make You Immune to AI Hype Alberto Romero

Contrary to popular belief, AI hype feeds on personality, not stupidity.

...4. Intellectual humility

The responsible kind of AI skeptics—the ones I once touted as “the most important opposition group in the modern world”—stand out for being intellectually humble. They don't obey the flow of money, they don't pursue fame, they don't get stuck on alluring headlines… they think, they do science, they write with no rhetorical intentions. They respect those who opine differently and calmly expose their arguments. In short, they are immune to hype because their existence is untouched by it. They walk this Earth like folklore spirits, too pure to be taunted by evil energy.

From AI-friendly to AI-first: How Zapier is transforming hiring and onboarding via Stephen Downes

...Starting immediately, all new Zapier hires are expected to meet a minimum standard for AI fluency. That doesn't mean deep technical expertise in every case—but it does mean showing a mindset of curiosity toward AI, a demonstrated willingness to experiment with it, and an ability to think strategically about how AI can amplify their work.

We're upgrading our application process, recruiter screens, and interviews to reflect this. We're also updating our skills assessments—starting with technical roles and expanding across the company—to better evaluate AI fluency in real-world contexts.

Every Zapier applicant will have access to AI training materials and transparent guidance. We want everyone to have an opportunity to meet our AI fluency bar.

  • Unacceptable: Resistant to AI tools and skeptical of their value
  • Capable: Using the most popular tools, with likely under three months of hands-on experience
  • Adoptive: Embedding AI in personal workflows, tuning prompts, chaining models, and automating tasks to boost efficiency
  • Transformative: Uses AI not just as a tool but to rethink strategy and deliver user-facing value that wasn't possible a couple years ago

...Our L&D team is reimagining onboarding from the ground up, with an emphasis on building, not just learning. Every new hire will learn how to use Zapier as well as our savviest enterprise customers can: identifying opportunities, building AI-powered workflows, and embracing a "builder mindset" from day one.

When One Thought Isn't Enough Furkan Sakizli at Medium

...GPUs hum in server racks, running thousands of micro-tasks side-by-side, tasting data the way a sommelier tastes flights of wine. Care is our anchor here: as guardians of our own cognition we instinctively protect coherent thought. Yet the Model Context Protocol (MCP) invites us to care for collective thought, too — linking every AI agent to the libraries, APIs, and workflows where meaning lives.

...Picture a lone hiker on a canyon trail: a traditional CPU. He takes one step, then the next, methodical, reliable — fair to the linear road ahead. Now imagine a flock of base-jumpers leaping from every cliff wall simultaneously; that's a GPU array. Thousands of tiny cores fan out, race back with results, and remix the landscape in real-time.

So why did we keep funneling AI traffic down the narrow asphalt of HTTP?

Because, for decades, client-server calls looked "fair enough." Request, response, done.

...Human speech is liberty's first technology: we talk, therefore we choose. MCP extends that liberty to machines. Instead of queuing instructions like commuters in rush hour, AI agents negotiate in real time — voice, gesture, sub-signal, nuance — across a protocol built for simultaneity.

...MCP turns you into a conductor before a 1,000-piece orchestra of agents — customer-service strings, marketing brass, R&D woodwinds. You raise the baton once, and every section hits its cue.

Is ChatGPT my new BFF? Dean A at Medium

... I took a dive into the increasingly spooky world of Artificial Intelligence, and asked ChatGPT to do that part for me, starting with hashtags for several images based solely on the descriptions that I had written. What happened turned my views of that particular AI sideways. You may recognise some of the images and words.

Normal Technology at Scale Mike Loukides at O'Reilly

The widely read and discussed article "AI as Normal Technology" is a reaction against claims of "superintelligence," as its headline suggests. I'm substantially in agreement with it. AGI and superintelligence can mean whatever you want—the terms are ill-defined and next to useless. AI is better at most things than most people, but what does that mean in practice, if an AI doesn't have volition? If an AI can't recognize the existence of a problem that needs a solution, and want to create that solution? It looks like the use of AI is exploding everywhere, particularly if you're in the technology industry. But outside of technology, AI adoption isn't likely to be faster than the adoption of any other new technology. Manufacturing is already heavily automated, and upgrading that automation would require significant investments of money and time. Factories aren't rebuilt overnight. Neither are farms, railways, or construction companies. Adoption is further slowed by the difficulty of getting from a good demo to an application running in production. AI certainly has risks, but those risks have more to do with real harms arising from issues like bias and data quality than the apocalyptic risks that many in the AI community worry about; those apocalyptic risks have more to do with science fiction than reality. (If you notice an AI manufacturing paper clips, pull the plug, please.)

...it's important to understand that what enabled Target to identify pregnancies wasn't a technological change; it was economics, glibly called "economies of scale." That economic shift may have been rooted in technology—specifically, the ability to manage supply chains across thousands of retail outlets—but it's not just about technology. It's about the ethics of scale. This kind of consolidation took place in just about every industry, from auto manufacturing to transportation to farming—and, of course, just about all forms of retail sales. The collapse of small record labels, small publishers, small booksellers, small farms, small anything has everything to do with managing supply chains and distribution. (Distribution is really just supply chains in reverse.) The economics of scale enabled data at scale, not the other way around.

...AI facilitates the vicious circle that turns data against its humans. How do we break out of this vicious circle? Whether AI is normal or apocalyptic technology really isn't the issue. Whether AI can do things better than individuals isn't the issue either. AI makes mistakes; humans make mistakes. AI often makes different kinds of mistakes, but that doesn't seem important. What's important is that, whether mistaken or not, AI amplifies scale. It enables the drowning out of voices that certain groups don't want to be heard. It enables the swamping of creative spaces with dull sludge (now christened "slop"). It enables mass surveillance, not of a few people limited by human labor but of entire populations.

...How do we change the systems in which we work and live in ways that preserve human initiative and human voices? How do we build systems that build in economic incentives for privacy and fairness?

...We're both screwed and blessed. The important question, then, is how to use AI in ways that are constructive and creative, how to disable their ability to manufacture hate... It starts with disabusing ourselves of the notion that AI is an apocalyptic technology. It is, ultimately, just another "normal" technology. The best way to disarm a monster is to realize that it isn't a monster—and that responsibility for the monster inevitably lies with a human, and a human coming from a specific complex of beliefs and superstitions.

AI as Normal Technology Knight First Amendment Institute

11vi25

OpenAI o3-Pro Is So Good That I Can't Tell How Good It Is Alberto Romero

...Eventually—and not too long from now—no human except perhaps the absolute best in their fields will be able to act as evaluators of AI models. Beyond that point, AI models will become the best at doing the tasks and the best at evaluating how well AI models do the tasks.

Humans. out. of. the. loop.

Sam Altman's Lies About ChatGPT Are Growing Bolder gizmodo

...The OpenAI CEO's entire blog is an encapsulation of the bullheaded big tech oligarch thinking. He said that "entire classes of jobs" will go the way of the dodo, but it doesn't matter since "the world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas we never could before."

Can we stop big tech from controlling the internet with AI agents? New Scientist

What does the future of the internet look like? If AI firms get their way, the once-open web could be fractured into digital silos dominated by commercial AI models, leaving hobbyists and small businesses behind. To prevent this, a team of grassroots researchers is planning to fight back and ensure an open approach to AI.

At the heart of this battle is the concept of an AI "agent", a piece of software that browses the web and interacts with websites according to the instructions of a human user — for example, planning and booking a holiday. Many people see agents as the next evolution of services like ChatGPT, but getting them to work is proving tricky. That is because the web was built for human use, and developers are realising that AI agents need specialised protocols to better interact with online data, services and each other.

"The idea is to build infrastructure so there's a way for software-like bots, which we call AI agents, to communicate with each other," says Catherine Flick at the University of Staffordshire, UK.

Several competing solutions to this problem have already been developed. For example, Anthropic, the company behind the Claude chatbot, has developed the Model Context Protocol (MCP), which standardises how AI models connect to different data sources and tools. In April, Google announced its own version of such a concept, the Agent2Agent (A2A) protocol.
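For the curious, these protocols ride on ordinary structured messages rather than anything exotic. The sketch below shows the general shape of MCP-style tool discovery and invocation as JSON-RPC messages; the method names follow my reading of the published spec and should be treated as approximate, the tool name and arguments are invented, and none of this is a drop-in client.

```python
# Rough sketch of MCP-style tool discovery and invocation as JSON-RPC
# messages. Method names approximate the published spec; the tool name
# and arguments are invented, and this is not a drop-in client.
import json

list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # ask the server what it can do
}

call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # invoke one of the advertised tools
    "params": {
        "name": "search_flights",  # hypothetical tool exposed by a server
        "arguments": {"origin": "LHR", "destination": "BCN", "date": "2025-07-01"},
    },
}

# In a real deployment these travel over stdio or HTTP between agent
# and server; here we just show the wire format.
print(json.dumps(call_tool_request, indent=2))
```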

AI Uses (VII): Imaginary Friends and Others Timothy Burke

...We are very used to relating to people that don't actually exist, simulated through textuality, as if they were real. We have become expert at doing so through one of modern humanity's most remarkable inventions: fiction, and in particular, modes and forms of fiction that represent the interiority of other human beings to readers and viewers.

We are already instinctively good at inferring interior states of mind in other humans by rapid subconscious readings of their facial expressions, their body language, their surrounding context. Not always accurately so, both because interior states of mind are full of complexity, ambiguity and contradiction and because we (like many other primate species) know how to use expressions and embodiment to deceive or misdirect others. We are also instinctively good at pattern recognition and readily combine that with our orientation towards faces. It's why it's possible to see an image of Jesus in the pancake you just made, right as the syrup hits.

Because we're so familiar with how to represent the thoughts of others, the hidden depths of our own thinking, because we're so accustomed to consuming those representations for pleasure, for wisdom, for understanding, for knowledge, we are always prepared to embrace new forms and genres that expand that representational space and apply it in new institutional forms and new phenomenologies. Selfies! Psychoanalysts! Automatic writing! Lie detectors! LSD! Catfishing! And as that list suggests, we're also always primed to be deeply disturbed by the fact that it is so easy to use writing to credibly pretend to be a radically different person than you are: a Holocaust survivor, a traumatized Native American or Australian aborigine at a boarding school, a teenage girl with a drug addiction or a guy who was in rehab.

Do Androids Dream of Anything at All? Gideon Lewis-Kraus at The New Yorker (Martha Wells and "Murderbot")

The word "robot" is still with us, and the underlying metaphor has a generous flexibility, encompassing two related but distinct ideas. One is that the first thing we would obviously do with artificial people is enslave them—as in, say, “Westworld.” The other is a corollary fear that we would lose control of our creations and face the prospect of our own bondage—as with HAL, from "2001: A Space Odyssey," or the Alex Garland film "Ex Machina." The most interesting versions of the metaphor—in Philip K. Dick's "Do Androids Dream of Electric Sheep?" and its cinematic adaptation, Ridley Scott's "Blade Runner"—suggest that the boundary between human and robot might become so blurry that both nightmares prove true at the same time.

What has begun to change, in only the last decade, is the possibility that such questions will themselves escape the bonds of metaphor, and that we might soon have to deal with artificially conscious beings in a quite literal sense. What if they make us suffer? What if we make them suffer? Each prospect is horrifying in its own way, but the looming sense of an event horizon has produced a golden age of writing and scholarship on the moral question of where, why, and how to draw the species boundary.

...The timing for an offbeat incarnation of artificial consciousness was just right. DeepMind's AlphaGo had just beaten Lee Sedol, the world champion of Go, and Google had released its first consumer products driven by neural networks. The discourse around A.I. had yet to become feverish, but the contours of the current boomer-doomer rivalry had already been drawn. The techno-utopians imagined a future where A.I.s would do our bidding. The existential-risk cohort dreaded a future where A.I.s would massacre their creators. Either they were going to be the slaves, or we were. Wells's series found an enthusiastic audience not only because Murderbot is a truly wonderful character with a bracing deadpan, but because it embodied a distinct reference point, one drawn from the canon of relative indifference.

...We are things it could more or less take or leave. It's not, however, completely uninterested in people—which is mostly an artifact of its obsession with the two-thousand-odd episodes of a schlocky space opera called "Sanctuary Moon." As it says toward the beginning of "Artificial Condition," the second novella in the series, it consumes only dreck "mostly because the news was boring and I didn't care what humans were doing to each other as long as I didn't have to a) stop it or b) clean up after it."

Bananas are the worst food on earth Rob Horning

...As technology permits this closer integration of ads with our attention, as it leaves open fewer means of escape, the nature of entertainment content must change to make this tolerable — to make the more thorough suffusion of our lives with ads feel not like suffocation but exhilaration. One of the forms this can take is an intense and extravagant pandering for your attention, in which whoever is trying to entertain you (and sell to you) is going to such great lengths that you can't help but feel flattered, that you have a known place in the world. Their eagerness, which manifests at the same time as servility to metrics and "the algorithm," comes across as a kind of personal submission to us, in which their obsequiousness is so total that they obey our implicit commands to make content before we even are conscious of wanting it. Like effective ads, they manufacture a desire for content in us that allows us to believe we needed it all along.

BREAKING: AI Found Totally Useless Except in Boosting Output of World's Smartest Mathematician Alberto Romero

...The best predictor of whether anyone will see value in AI is whether they're open to new stuff or reject everything by default and jump on the bandwagon as a laggard.

    Monopoly Profits, AI Arms Races, & the Mirage of Silicon Valley Platform-Oligopoly Disruption Brad DeLong

    The pace of technological change in the 21st century has been astonishing, but its locus is shifting rapidly from the consumer-facing edges of the tech sector to its very core: the infrastructure, the platforms, and the software development processes themselves. The old guard—Google, Microsoft, Apple, Amazon—built their empires on the back of 20th-century innovations: search, productivity software, e-commerce, and the personal computer. These platforms, for all their dynamism, are now showing their age. Their architectures, interfaces, and business models are deeply rooted in the needs and constraints of an earlier era—one in which humans, not machines, wrote the code and specified the queries.

    The Claude Bliss Attractor Scott Alexander at Astral Codex Ten

    ...if two copies of Claude talk to each other, they end up spiraling into rapturous discussion of spiritual bliss, Buddhism, and the nature of consciousness.

    ...Anthropic swears they didn't do this on purpose; when they ask Claude why this keeps happening, Claude can't explain. Needless to say, this has made lots of people freak out / speculate wildly.

    OpenAI and Microsoft Execs Reportedly Considering the 'Nuclear Option' gizmodo

    ...The two companies, which are bound together by cash flows and an entwined product base, have been undergoing a turbulent negotiation surrounding OpenAI's desire for a shorter leash and the opportunity to turn itself into a for-profit company. Microsoft appears to be resistant to some of those goals. Now, a new report claims that the staff at OpenAI have prepped a “nuclear option,” in case the negotiations do not go the way they want them to. The Wall Street Journal reports that OpenAI may run to the federal government and accuse Microsoft—its own sugar daddy—of anti-competitive practices. Were this to happen, OpenAI execs claim they would seek a federal regulatory review of the terms of the contract between them and Microsoft, with an eye towards identifying potential antitrust violations. For Microsoft, calling in the antitrust cops could be the most extreme form of betrayal that exists. It's been down that road, picked up the pieces, and become a very different company in the meantime. OpenAI might also launch a media campaign to air the duo's dirty laundry in public.

    Apple Just Pulled the Plug on the AI Hype. Here's What Their Shocking Study Found Rohit Kumar Thakur at Medium

    ...We're living in an era of incredible AI hype. Every week, a new model is announced that promises to "reason," "think," and "plan" better than the last. We hear about OpenAI's o1, o3, and o4, Anthropic's "thinking" Claude models, and Google's Gemini frontier systems, all pushing us closer to the holy grail of Artificial General Intelligence (AGI). The narrative is clear: AI is learning to think.

    But what if it's all just an illusion?

    What if these multi-billion dollar models, promoted as the next step in cognitive evolution, are actually just running a more advanced version of autocomplete?

    That's the bombshell conclusion from a quiet, systematic study published by a team of researchers at Apple.

    ...The Apple study, titled "The Illusion of Thinking," forces us to confront an uncomfortable truth. The "reasoning" we're seeing in today's most advanced AI models is not a budding form of general intelligence.

    It is an incredibly sophisticated form of pattern matching, so advanced that it can mimic the output of human reasoning for a narrow band of problems. But when tested in a controlled way, its fragility is exposed. It lacks the robust, generalizable, and symbolic logic that underpins true intelligence.

    The bottom line from Apple's research is stark: we're not witnessing the birth of AI reasoning. We're seeing the limits of very expensive autocomplete that breaks when it matters most.

    Meta Offering $100 Million Salaries to Top OpenAI Researchers Is Peak Silicon Valley Obscenity ("This is not exaggeration") Alberto Romero

    ...Recently, Meta announced (yet another) restructuring of its AI teams. It's no longer just two (Yann LeCun's FAIR and the generative AI branch, whatever their names are now), but three: there's a new group tasked solely with reaching the mythical superintelligence.

    Zuck must've realized that chasing artificial general intelligence (AGI) isn't enough anymore. Not with OpenAI CEO Sam Altman publishing essays like The Intelligence Age and The Gentle Singularity, where he casually alludes to superintelligence as if AGI were a thing of the past. Not with Anthropic CEO Dario Amodei talking about genius-populated countries made of datacenters. Not with SSI CEO Ilya Sutskever naming his company Safe Superintelligence; no commercial products, no strategic roadmap, no wild investment return promises for shareholders.

    However, it's hard to make sensible structural changes if you have no clue what you need them for. That's why Meta hired a spy. The most expensive spy in history: 28-year-old Alexandr Wang. Meta got Wang and 49% equity in his company, Scale AI, for $14.3 billion. But who's Wang? The guy who founded and led Scale AI, the crown jewel of the data-acquisition-for-AI business (which means underpaying people in the Global South to label data that he later sells to AI companies to train AI models).

    $14.3 billion is an insane amount of money. And in my view—given that the internet is now exhausted as a data source and AI companies in the know are shifting to synthetic data—it feels like a legal steal on Meta pulled off by Wang. Spy and thief, two in one (it's a joke, don't sue me).

    So Zuck was falling behind in both results and ambition, but now he's shown the world that he has more power over his empire than anyone (he's in full founder mode to destroy the internal structure as he pleases) and more cash than anyone (to make clearly reasonably priced buyouts). Sorry to say, Zuck—and at this point I'm doing free consulting (you can contact my AI assistant for the invoice)—but neither your power nor your cash is going to save you.

    ..."Extreme agency toward anywhere" is, maybe, the best four-word summary of Meta's last decade (ahem, Metaverse, ahem, $46 billion). Turns out, Zuck's latest extreme-agency move—Altman himself confessed it in a podcast to his younger brother—is offering $100 million salaries to top OpenAI researchers.

    I could go on about how relentless Zuck is. Or how desperate. But I'd rather stop and just emphasize how obscene that number is. It is, in one sentence, the peak of Silicon Valley's moral and strategic collapse.

    Here's the moral part: it would take the average writer or artist in the US 2,000 years to make $100 million. So about 25 deals with the devil. A hundred million is enough to pay the annual salary of 1,500 public school teachers. Or to cover the full tuition of 10,000 college students at a state university. It could fund every librarian in New York City for two years. Or provide clean drinking water to a million people for a decade. It's more than what all the firefighters in San Francisco earn in a year, combined. It could keep a rural hospital running for 20 years. Or endow a public research lab indefinitely. You could give every student in Detroit a new laptop. Every ten years. For life. Or double the arts budget of a small country.
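
    The back-of-envelope arithmetic above is easy to sanity-check. Here is a minimal Python sketch; every input is an assumption implied by the paragraph's own comparisons (the roughly $50,000 creator salary that the 2,000-year figure implies, and so on), not a sourced statistic:

        # Sanity-checking the paragraph's back-of-envelope comparisons.
        # All inputs are assumptions implied by the text, not sourced data.
        offer = 100_000_000  # the $100 million salary figure

        creator_salary = offer / 2_000   # implied average writer/artist salary: $50,000/year
        teacher_salary = offer / 1_500   # implied average teacher salary: ~$66,667/year
        tuition = offer / 10_000         # implied full state-university tuition: $10,000/student

        print(f"creator: ${creator_salary:,.0f}/yr, "
              f"teacher: ${teacher_salary:,.0f}/yr, "
              f"tuition: ${tuition:,.0f}/student")

    Read the other way, 2,000 years across 25 "deals with the devil" works out to 80 years per deal, presumably one working lifetime each, which seems to be the joke.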

    In the hands of Meta, it's a reckless bid. In the hands of the state, it would be called nation-building. But, of course, no one likes the state in Silicon Valley.

    Are top AI researchers worth that much? I guess the market decides. Just like with top actors, singers, and athletes. But Zuck isn't paying market price. That's the strategic part. He's trying to fix a costly mistake—like spending $60 billion on Nvidia GPUs—by bribing the people who might still save him and his declining empire.

    An empire erected painstakingly over two long decades, during which Zuck didn't hesitate for a moment to hijack the psychological vulnerabilities of consumers by taking "as much of [our] time and conscious attention as possible." Not my words—Sean Parker's, founding president of Facebook. And now he plans to burn it all in a race he's already lost because he needs to integrate state-of-the-art AI models into his social media platforms to keep us hooked—to keep the ad money flowing. Gotcha.

    OpenAI boss accuses Meta of trying to poach staff with $100m sign-on bonuses Guardian

    Toolmakers Without Purpose: The Delusion of AI Supremacy Pierz Newton-John at Medium

    ...There we have it. The future toward which all our millennia of tool-building has been leading, the grand destiny of humankind will be to sit around playing a "series of video game-like things."

    ...To be clear, I don't believe in this world. AI, for all its extraordinary capabilities, still remains bounded by the scope of its training data. Just as a music AI trained on classical music would only ever produce more classical music, to the end of time, today's LLMs will always be stuck in 2025, recycling and remixing its thought forms and beliefs ad infinitum, an impeccably articulate, loquacious, and vacuous spigot of mid-2020s verbiage forever unable to see, beyond the horizon, the culture waiting to be born.

    ...We created tools to make our lives better, to reduce suffering, not to replace the treasured labours of the human soul: art, philosophy, music. We invented ploughs and then tractors to relieve ourselves of back-breaking drudgery, but not to rob ourselves of the joy of an honest day's work, to render pointless the mastery of a craft, to obsolete the honour and the gratification of service. We keep talking about AI agents, but where is our agency? When did we forget that we, and we alone, are responsible for the future we create?

    Elon Musk's xAI Is Reportedly Burning Through $1 Billion a Month Lucas Ropek at gizmodo

    Elon Musk has promised that his AI business, xAI, will help to revolutionize society. Before it can do that, however, the company is going to need to become fiscally viable—a goal it hasn't quite met yet. Indeed, at the current moment, the company seems to be burning through cash at a rate that, in any other industry, would be entirely unsustainable.

    A new report from Bloomberg claims that the AI company expects to spend at least $13 billion this year, which amounts to a little over a billion dollars per month. The outlet cites "people familiar with the deal terms who asked not to be identified because the information is private." At the same time, the company continues to raise massive amounts of money from investors to keep up with its mind-boggling cash flows.

    Your Meta AI prompts are in a live, public feed Cory Doctorow

    ...the most persistent, egregious and consequential sinner here is Facebook (naturally). In 2007, Facebook opted its 20,000,000 users into a new system called "Beacon" that published a public feed of every page you looked at on sites that partnered with Facebook:

    https://en.wikipedia.org/wiki/Facebook_Beacon

    Facebook didn't just publish this – they also lied about it. Then they admitted it and promised to stop, but that was also a lie. They ended up paying $9.5m to settle a lawsuit brought by some of their users, and created a "Digital Trust Foundation" which they funded with another $6.5m. Mark Zuckerberg published a solemn apology and promised that he'd learned his lesson.

    Apparently, Zuck is a slow learner.

    ...Depending on which "submit" button you click, Meta's AI chatbot publishes a feed of all the prompts you feed it: https://techcrunch.com/2025/06/12/the-meta-ai-app-is-a-privacy-disaster/

    ...the only saving grace here is that almost no one is using Meta's AI app. The company has only racked up a paltry 6.5m downloads, across its ca. 3 billion users, after spending tens of billions of dollars developing the app and its underlying technology.

    ...AI bosses are the latest and worst offenders in a long and bloody lineage of privacy-hating tech bros. No one should ever, ever, ever trust them with any private or sensitive information. Take Sam Altman, a man whose products routinely barf up the most ghastly privacy invasions imaginable, a completely foreseeable consequence of his totally indiscriminate scraping for training data.

    ...And of course, all of Altman's products harvest all of our information. Altman is the untrusted third party in every conversation everyone has with one of his chatbots.

    The delicious sizzling noise is your brain on ChatGPT Rob Beschizza at boing boing

    The end of prestige: How AI is quietly dismantling the elite profession Samuel Z. Alemayehu at Medium

    ...We've seen this before. Scribes in medieval Europe once held monopoly power over the written word — until Gutenberg's press flattened that hierarchy. Textile artisans were once the pride of cities — until industrial looms made their mastery irrelevant. In the 20th century, travel agents, typists, and retail stockbrokers all watched their professional stature — and compensation — collapse under the weight of software.

    The common thread: when a profession's value is based on exclusive access to codified knowledge or repeatable process, it is vulnerable. Prestige does not protect it. In fact, prestige often signals just how close that profession is to being automated. Once something becomes teachable, it becomes learnable by machines. Once it becomes learnable, it becomes replaceable.

    How Claude's Deep Research Compares to Gemini and ChatGPT lifehacker

    It's Not Always For the Money ("The AI industry is weird") Alberto Romero

    ...And then, our friend Ilya Sutskever, one of the brilliant minds behind the AI revolution that started ten years ago, shows up and tells Mark Zuckerberg: no. That $32,000,000,000 won't buy him. And Zuck asks him: "Is it not enough?" And Sutskever responds—to him and to all of you who doubt the extent of his faith and the power of his conviction—"You wouldn't understand."

    AI Uses viii: Lying Timothy Burke

    ...What I want to talk about here are the differences between lies that generative AI is designed to tell, lies or untruths that generative AI produces with varying frequency, and the possible uses of AI to create untruth or disinformation, both on purpose and accidentally.

    ...I asked Gemini which was more likely, pigs learning to fly or AI designers learning not to overpromise, and I got back an answer that is technically accurate: "One is a biological impossibility, while the other is a challenging but achievable shift in human behavior and industry practice as a field matures and faces real-world application and scrutiny."

    Federal Judge Gives AI Companies a Landmark 'Fair Use' Victory Jeremy Gray at PetaPixel

    ...Anthropic, which develops large language models competing with platforms like OpenAI's ChatGPT and Google's Gemini, has won a key ruling in a United States federal court. A federal judge ruled this week that AI developers can train AI models on copyrighted content without obtaining permission from the content creators.

    ...While the federal judge has sided with Anthropic on the matter of using legally acquired, copyrighted materials to train AI models, the judge takes significant issue with some of Anthropic's other behavior, including storing more than seven million pirated books in a central library. This is not protected under the fair use doctrine, and the judge has set a second trial later this year to determine the damages Anthropic may owe for this infringement.

    ...additional decision that building AI models using copyrighted work can be considered sufficiently transformative to avoid violating copyright.

    ...Alsup describes using legally acquired copyrighted works to train LLMs as "quintessentially transformative," claiming that Anthropic is using existing works "not to race ahead and replicate or supplant" the creators, but to "turn a hard corner and create something different."

    At Amazon's Biggest Data Center, Everything Is Supersized for A.I. New York Times

    A.I. Is Homogenizing Our Thoughts Kyle Chayka at The New Yorker

    In an experiment last year at the Massachusetts Institute of Technology, more than fifty students from universities around Boston were split into three groups and asked to write SAT-style essays in response to broad prompts such as "Must our achievements benefit others in order to make us truly happy?" One group was asked to rely on only their own brains to write the essays. A second was given access to Google Search to look up relevant information. The third was allowed to use ChatGPT, the artificial-intelligence large language model (L.L.M.) that can generate full passages or essays in response to user queries. As students from all three groups completed the tasks, they wore a headset embedded with electrodes in order to measure their brain activity. According to Nataliya Kosmyna, a research scientist at M.I.T. Media Lab and one of the co-authors of a new working paper documenting the experiment, the results from the analysis showed a dramatic discrepancy: subjects who used ChatGPT demonstrated less brain activity than either of the other groups. The analysis of the L.L.M. users showed fewer widespread connections between different parts of their brains; less alpha connectivity, which is associated with creativity; and less theta connectivity, which is associated with working memory.

    ...The M.I.T. study is among the first to scientifically measure what Kosmyna called the "cognitive cost" of relying on A.I. to perform tasks that humans previously accomplished more manually.

    ...With the L.L.M. "you have no divergent opinions being generated," Kosmyna said. She continued, "Average everything everywhere all at once—that's kind of what we're looking at here."

    Generative AI in the Real World: Stefania Druga on Designing for the Next Generation O'Reilly

    How do you teach kids to use and build with AI? That's what Stefania Druga works on. It means being sensitive to their creativity, sense of fun, and desire to learn, and designing with kids, not just for them. That's a lesson with important implications for adults, too. Join Stefania Druga and Ben Lorica to hear about AI for kids and what that has to say about AI for adults.

    I Tried Out Claude AI's New App Creator, and I Didn't Have to Use Any Coding David Nield at lifehacker

    Last year, Anthropic rolled out a new feature called Artifacts in its Claude AI chatbot, letting users see real-time results from their conversations—think diagrams, reports, websites, or code snippets, for example. Now the feature has been given a substantial upgrade, letting you create and share AI-powered apps just by describing them.

    The apps you create through this "vibe coding" approach can have Claude AI smarts built right inside, and it's available for all users—so you can give it a try whether you're on a paid subscription or not. And if you share your creation with other people, when they run the app it'll be counted against their Claude usage allowance, not yours.

    AI and the value of thinking out loud darcy norman

    First, I am still conflicted about generative AI. It's still a horrible, extractive, resource-intensive, opportunistic, hype-addled, broligarchy-enhancing opaque bullshit machine. And it's still the elephant in every room, the sometimes-unspoken layer underneath every conversation, such that I can't just pretend that it doesn't exist. Hence the ongoing conflict.

    ...We didn't ask for generative AI. We're not the ones a) spinning up unicorn companies to build and rent it, then b) hyping it beyond belief to over-inflate the capabilities and risks, to c) foment demand for the new product and make line go up so they can get more money to go back to a) and repeat until we've finished melting the ice caps because our energy grids are now pointed at sprawling new AI datacentres instead of decarbonization.

    Mark Zuckerberg Wants to Win AI by Copying Everyone Smarter Than Him gizmodo

    Anti-AI Sentiment Is So Strong, Even Celebrities Are Backing Down Alberto Romero

    ...It is a truth universally acknowledged that when a famous person makes a public decision about AI—whether it's seen as positive or negative—the impact ripples out to all of us. I like to think habits and customs are shaped by time and use, but there are certain people who, due to their cultural influence or massive popularity, play a very special role as anchors to the past or inflection points into the future.

    ...I'm convinced that we're on the edge of some kind of rapture. People now think AI can't reason, it makes you dumb, it's making people delusional and psychotic, and it's mostly used by students to cheat in college. As you know, I disagree with these simplistic views—AI does reason a bit, it doesn't make you dumb if you don't use it wrong, not many people are getting psychotic, and the reason students cheat has nothing to do with AI—but they shape a powerful trend.

    Elon Musk and Mark Zuckerberg Want to Control AI by Crushing ChatGPT's Father Luc Olinga at gizmodo

    ...The AI race was never going to be polite. But what's unfolding in Silicon Valley in 2025 looks more like Succession meets Black Mirror than a traditional tech rivalry. Forget code. This is about power, control, and a rapidly closing window to dominate the most transformative technology in history.

    At the center of the fight: three men, three worldviews, and one finish line.

    ...The AI race has become a war of personalities. Altman, the techno-missionary. Musk, the chaos capitalist. Zuckerberg, the empire builder. Each believes they are the only one who can lead humanity into the next era of intelligence. What's unfolding is a battle for the infrastructure of the 21st century: who owns the models, who trains the machines, and who gets to decide what AI thinks.

    And if the lawsuits, subpoenas, and poaching wars are any indication, they're willing to burn billions to win.

    5 Industries AI Will Completely Take Over by 2026 Mr Tony Momoh at Medium

    The ROI Problem of AI: Dazzling Capabilities, But Powerful Market Incentives Blocking Bottom-Line Corporate-Profit Gains Brad DeLong

    "Artificial intelligence" seems to be coming everywhere, but its profits are not. Tech giants and ambitious startups flood the market with free or cheap AI tools. Yet the gap between value creation and value extraction yawns ever wider.

    Consider who is really likely to get paid when the music stops.

    The problem is not one of technical capability: MAMLMs now perform feats that would have seemed like science fiction a decade ago, from instant translation to sophisticated creative work. The problem is economic—and historical.

    Consider the fate of the dot-com boom, with Microsoft's decision to give away Internet Explorer for free; or the outcome of the streaming wars, or railroads in the 1800s. When marginal cost approaches zero and the big players have deep enough pockets, competition becomes a Red Queen's race: everyone must run faster to try to avoid falling behind, and fail, while no one gets ahead—except for users smart enough to become masters of rather than attention-slaves to the technology.

    OpenAI Reportedly Shuts Down for a Week as Zuck Poaches Its Top Talent gizmodo

    Mark Zuckerberg Already Knows Your Life. Now He Wants His AI to Run It Luc Olinga at gizmodo

    In a sweeping new plan, the Meta CEO is assembling an AI "Avengers team" to build not just a better chatbot, but a personalized intelligence that anticipates your needs, manages your schedule, and guides your decisions. Welcome to Life-as-a-Service.

    ...In a memo to employees viewed by Gizmodo on June 30, the Meta CEO announced the creation of Meta Superintelligence Labs, a major reorganization of the company's AI efforts under a single, ambitious goal: delivering personal superintelligence to everyone.

    Forget chatbots. Zuckerberg's vision is much grander. He is betting that within a few years, AI will not just be answering your questions or writing your emails. It will be managing your schedule, anticipating your needs, running your home, helping you make decisions, and maybe even guiding your career. Call it Life-as-a-Service, powered by Meta.

    (continues at AIJuly25.html)