2ix25
I Tested Google's New Nano Banana Image AI, and It's Insane Thomas Smith at Medium
...Nano Banana excels at editing existing images, rather than simply summoning new ones out of the AI ether....Head to https://gemini.google.com/, select the 2.5 Flash model, upload the photo you want to edit, and Nano Banana will make the changes you request.
To try and prevent these crazy editing capabilities from yielding tons of misinformation, Google is embedding an invisible watermark — as well as a visible one — into each of its images. That way, the company can track how they're used around the web, and debunk any uses that claim the images are real.
Is the Bubble Bursting? Ted Gioia
Radar Trends to Watch: September 2025 O'Reilly
For better or for worse, AI has colonized this list so thoroughly that AI itself is little more than a list of announcements about new or upgraded models
AI-driven private schools are popping up around the U.S., from North Carolina to Florida via Stephen Downes
As AI Gets Better I'm Less Worried About Losing My Job Alberto Romero
...there's an "it" about writing well that's qualitatively different from just writing; if ChatGPT is proof of anything, it's that somehow, reading the entire corpus of human literature is not enough. It might be a great tool for narrative analyses, but it's terrible at manifesting its erudite insights into a masterpiece
3ix25
Understanding the Rehash Loop O'Reilly
...When you're in a rehash loop, the AI isn't broken. It's doing exactly what it's designed to do—generating the most statistically likely response it can, based on the tokens in your prompt and the limited view it has of the conversation. One source of the problem is the context window—an architectural limit on how many tokens the model can process at once. That includes your prompt, any shared code, and the rest of the conversation—usually a few thousand tokens total. The model uses this entire sequence to predict what comes next. Once it has sampled the patterns it finds there, it starts circling. The variations you get—reordered statements, renamed variables, a tweak here or there—aren't new ideas. They're just the model nudging things around in the same narrow probability space. So if you keep getting the same broken answer, the issue probably isn't that the model doesn't know how to help. It's that you haven't given it enough to work with....When you recognize you're in a rehash loop, trying to nudge the AI and vibe-code your way out of it is usually ineffective—it just leads you in circles. ("Vibe coding" means relying on the AI to generate something that looks plausible and hoping it works, without really digesting the output.) Instead, start investigating what's missing
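To make the context-window point concrete, here is a minimal sketch (not any vendor's API) of how a fixed token budget forces older turns out of view; the whitespace token count is a crude stand-in for a real subword tokenizer, and all names are illustrative.

```python
# Toy illustration (not any vendor's API): why a fixed context window
# keeps feeding the model the same narrow slice of a long conversation.
# Token counts here are crude whitespace counts; real models use subword
# tokenizers (e.g., BPE), so the numbers are only an approximation.

def estimate_tokens(text: str) -> int:
    """Very rough proxy for a tokenizer's output length."""
    return len(text.split())

def build_context(system: str, turns: list[str], window: int = 4000) -> list[str]:
    """Keep the system prompt plus as many *recent* turns as fit the window."""
    budget = window - estimate_tokens(system)
    kept = []
    for turn in reversed(turns):          # newest first
        cost = estimate_tokens(turn)
        if cost > budget:
            break                          # older turns silently fall out of view
        kept.append(turn)
        budget -= cost
    return [system] + list(reversed(kept))

if __name__ == "__main__":
    system = "You are a helpful coding assistant."
    turns = [f"turn {i}: " + "same broken snippet " * 40 for i in range(60)]
    context = build_context(system, turns)
    print(f"{len(turns)} turns in the conversation, "
          f"{len(context) - 1} actually visible to the model")
```

Everything outside that visible slice simply does not exist for the model, which is why nudging the same prompt again tends to reproduce the same narrow answer.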
After a Complicated Legal Past, AI Set Her Free gizmodo
4ix25
Is AI Alive? Jeff Hatward at Medium
...I agree it's not conscious the way humans are. It doesn't feel pain, but scientists are working on creating skin for robots that will have nerves and sweat glands. That means that one day AI-powered humanoids may wince with pain, or feel anxiety.
What makes an AI system an agent? Mark Liberman at Language Log
quotes Antonio Gulli: In simple terms, an AI agent is a system designed to perceive its environment and take actions to achieve a specific goal. It's an evolution from a standard Large Language Model (LLM), enhanced with the abilities to plan, use tools, and interact with its surroundings. Think of an Agentic AI as a smart assistant that learns on the job. It follows a simple, five-step loop to get things done (a minimal code sketch of the loop follows the list):
- Get the Mission: You give it a goal, like "organize my schedule."
- Scan the Scene: It gathers all the necessary information—reading emails, checking calendars, and accessing contacts—to understand what's happening.
- Think It Through: It devises a plan of action by considering the optimal approach to achieve the goal.
- Take Action: It executes the plan by sending invitations, scheduling meetings, and updating your calendar.
- Learn and Get Better: It observes successful outcomes and adapts accordingly. For example, if a meeting is rescheduled, the system learns from this event to enhance its future performance.
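A minimal sketch of that five-step loop, using hypothetical stand-ins (llm_plan, TOOLS, Memory) rather than any real agent framework:

```python
# Minimal sketch of the five-step agent loop described above.
# All names (llm_plan, TOOLS, Memory) are hypothetical stand-ins,
# not a real framework or vendor API.

def llm_plan(goal, observations, memory):
    """Stand-in for an LLM call that turns goal + context into tool calls."""
    return [("check_calendar", {}), ("send_invite", {"to": "team"})]

TOOLS = {
    "check_calendar": lambda **kw: "Tuesday 10:00 is free",
    "send_invite":    lambda **kw: f"invite sent to {kw.get('to', 'unknown')}",
}

class Memory(list):
    def learn(self, step, result):                    # 5. learn and get better
        self.append((step, result))

def run_agent(goal: str, max_steps: int = 5) -> list:
    memory = Memory()
    observations = ["read emails", "checked calendar", "loaded contacts"]  # 2. scan the scene
    for _ in range(max_steps):
        plan = llm_plan(goal, observations, memory)   # 3. think it through
        for tool_name, args in plan:
            result = TOOLS[tool_name](**args)         # 4. take action
            memory.learn(tool_name, result)
        break  # a real agent would loop until the goal is judged complete
    return list(memory)

print(run_agent("organize my schedule"))              # 1. get the mission
```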
A commenter says: This brought to mind Bruce Sterling's wonderful 1996 short story "Bicycle Repairman" in which agents like these are called "mooks": "Lyle cordially despised all low-down, phone-tagging, artificially intelligent mooks. For a while, in his teenage years, Lyle himself had owned a mook, an off-the-shelf shareware job that he'd installed in the condo's phone. Like most mooks, Lyle's mook had one primary role: dealing with unsolicited phone calls from other people's mooks. [...] Lyle hated the way a mook cataloged your personal interests and then generated relevant conversation. The machine-made intercourse was completely unhuman and yet perversely interesting, like being grabbed and buttonholed by a glossy magazine ad." The story is internally dated 2037, so we appear to be somewhat ahead of schedule.
5ix25
Bioneural Frankenstein Victor Mair at Language Log
quotes People's Daily: China's Ministry of Industry and Information Technology, along with six other government departments, has jointly issued an implementation plan to accelerate the innovative development of the brain-computer interface (BCI) industry. The comprehensive policy outlines ambitious targets for the emerging technology sector through 2030. By 2027, China aims to achieve breakthrough progress in key BCI technologies while establishing advanced technical, industrial, and standardization systems. The plan specifically targets international-level performance in electrodes, chips, and integrated products. Applications are expected to expand rapidly across industrial manufacturing, healthcare, and consumer sectors, with the creation of 2-3 specialized industrial clusters and the development of new scenarios, models, and business formats.
Brain-computer interfaces create information channels between the brain and machines, enabling collaborative interaction between biological and artificial intelligence. This cutting-edge technology represents the convergence of life sciences and information sciences. The sector continues generating innovative breakthroughs and experiencing accelerated industrial growth, positioning itself as a critical area where technological and industrial innovation deeply integrate.
Civilization in the making: From AGI agent to AGI society Science
In 2024, the Beijing Institute for General Artificial Intelligence (BIGAI) unveiled Tong Tong, an AGI child and a groundbreaking experiment in autonomous development. By 2025, she had "grown" from a curious 3-year-old to a more self-assured 6-year-old, capable of negotiating for better outcomes and showing signs of evolving judgment. Like a human child, Tong Tong doesn't just learn—she's growing up. BIGAI's director, Song-Chun Zhu—also chief scientist at the PKU-Wuhan Institute for Artificial Intelligence (PKU-Wuhan), an emerging research hub bridging Peking University's strengths in social sciences with the frontier of AGI—is aiming to model not just intelligent agents but entire societies. What begins with individual minds like Tong Tong, Zhu believes, may one day scale into full societies of AGIs—forming communities, cities, and ultimately, artificial civilizations modeled on the real world. Zhu and his team have attempted to create AGI agents with human-like cognition using a "CUV framework," a guiding principle for building autonomous agents with purpose and goals. Under this concept, C is the AGI's thinking system, U is its ability to perceive and act, and V its internal motivations. Together, they form a "CUV space" that defines how individual AGI agents learn, reason, and act. This is fundamentally different from large language model (LLM)-based AI, which primarily mimics human behavior rather than performing autonomous, goal-directed actions.
"But humans are not just intelligent individuals," says Zhu. "We are social agents bound by invisible contracts, like atoms held together by chemical bonds." So, shaping AGI agents shouldn't be done in a vacuum. "To understand civilization and build socially aware AGI, we must simulate society itself, not just the minds within it," says Zhu.
...Led by PKU-Wuhan and supported by cross-disciplinary collaborators from Peking University, Tsinghua University, BIGAI, and the Optics Valley of China, the project draws on de-identified real-world data from an approximately 200-square-mile community of 1.18 million people: China's Optics Valley. This initiative aims at advancing socially intelligent AGI by exploring how autonomous agents navigate complex societal challenges—such as cooperation, inequality, value alignment, crisis response, economic systems, and urban planning—by modeling the emergence and evolution of social behavior in a realistic, large-scale environment.
Agentic AI Wikipedia
Agentic AI vs. generative AI IBM
Did we just redo how AIs Learn? Ignacio de Gregorio at Medium
...Token Order Prediction (TOP) could be the next big thing in large-scale AI training....(LLMs) don't actually predict the next word, but the probability distribution over the next word. In layman's terms, they rank the entire vocabulary of known tokens (words or subwords) based on their statistical likelihood as next words in the sequence. This means that, in every prediction, they consider all possible words they know as the next one, not just one.
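A toy illustration of that point, with a made-up five-word vocabulary and hand-picked logits: the model scores every token, and the output is a ranked probability distribution; choosing a single word is a separate, later sampling step.

```python
import math

# Toy illustration: the model scores *every* token in its vocabulary and the
# output is a probability distribution, not a single word. The vocabulary and
# raw logits below are made up for illustration.

vocab = ["banana", "dog", "peach", "the", "runs"]
logits = [2.1, -0.3, 1.7, 0.2, -1.0]      # raw scores a model might emit

exps = [math.exp(z) for z in logits]
total = sum(exps)
probs = {tok: e / total for tok, e in zip(vocab, exps)}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:>8}: {p:.2%}")
# Sampling (or taking the argmax) from this distribution is a separate,
# later step; the model itself only produces the ranking.
```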
...the Transformer, the underlying architecture of most modern models, has computationally taxing 'biases', prompting us to seek alternatives
...each model architecture introduces certain "biases", which are implicit assumptions we are making with the choice of architecture
...convolutional neural networks (CNNs), which are historically used for computer vision (processing images), exhibit a 'locality bias'.
...a Transformer is what we call a mixing operator that makes the different parts of the input sequence 'talk' to each other.
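To make "mixing operator" concrete, here is a minimal single-head self-attention sketch in NumPy, with arbitrary sizes, random weights, and none of the positional-encoding or multi-head machinery; each output position is a weighted mix of every input position.

```python
import numpy as np

# Minimal single-head self-attention, to make "mixing operator" concrete:
# each output position is a weighted average over *all* input positions.
# Sizes and random weights are arbitrary; there is no positional encoding,
# masking, or multi-head machinery here.

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))          # one token embedding per row

Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d_model)              # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # row-wise softmax
mixed = weights @ V                              # every output row is a blend of all rows of V

print(weights.round(2))      # rows sum to 1: each position mixes information from every position
print(mixed.shape)           # same shape as the input sequence: (4, 8)
```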
6ix25
Google's AI Ambitions An 'Existential Crisis' For News Online gizmodo
...The Guardian reports that sources close to the matter describe a landscape where AI-generated summaries and personalized feeds are now the primary gateways for online content. But critics warn that this shift risks creating "echo chambers" filled with sensationalism and clickbait, at the expense of thorough, investigative journalism. Industry insiders and analysts warn that such trends threaten to undermine the trustworthiness of news ecosystems.
Meanwhile, Google Discover, its personalized content feed, has become a dominant source of engagement, surpassing traffic resulting from search. But industry analysts warn that this channel often rewards sensationalism and clickbait, undermining meaningful journalism.
"Google Discover is of zero product importance to Google at all," David Buttle, founder of the consultancy DJB Strategies, told The Guardian. "It allows Google to funnel more traffic to publishers as traffic from search declines ... Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want."
Nvidia Is Not Happy With the GAIN AI Act, Says As Much gizmodo
...Nvidia, an American corporation and currently the world's biggest company..."Advanced AI chips are the jet engine that is going to enable the U.S. AI industry to lead for the next decade," Brad Carson, president of Americans for Responsible Innovation (ARI), a lobbying group for the AI industry, said in a widely distributed statement.
"Globally, these chips are currently supply-constrained, which means that every advanced chip sold abroad is a chip the U.S. cannot use to accelerate American R&D and economic growth," Carson said. "As we compete to lead on this dual-use technology, including the GAIN AI Act in the NDAA would be a major win for U.S. economic competitiveness and national security." ...
7ix25
This AI Tech Runs at the Speed of Light — And Silicon Can't Compete Tina Sharma at Medium
...Photonic Neural Networks...are still mostly experimental, but they hint at a future where AI could be faster, more efficient, and less hungry for energy than anything we've built so far....For a long time, shrinking transistors meant faster, more efficient chips almost effortlessly. Moore's Law and Dennard scaling did the heavy lifting for us. But those days are fading. Today, chips are still getting more transistors, but the speed and efficiency improvements are creeping along, barely keeping pace with our growing AI appetite.
That sets up a tough tradeoff: as AI models keep growing, they gulp more and more power, while the hardware designed to run them isn't improving fast enough to keep up.
When you step back and look at the trends, the conclusion is clear: AI can't keep growing by squeezing a little more out of silicon. If it's going to grow, it needs something entirely different.
...Photonic computing works in a way that feels almost science-fictional. Instead of pushing electrons through silicon, photonic neural networks use light to carry information through tiny waveguides. Because photons don't bump into each other the way electrons do, they can move in parallel channels with almost no resistance. In theory, this means light-based systems could deliver the same computations much faster and with far less energy.
...The real computation happens in programmable photonic circuits. Structures like Mach-Zehnder interferometer (MZI) meshes or arrayed waveguide grating routers (AWGRs) perform the linear matrix operations that sit at the heart of neural networks. Researchers are already exploring ways to combine time, wavelength, and spatial multiplexing to push throughput even higher.
...periodically poled lithium niobate (PPLN) waveguides could provide sigmoid-like activation at ultrafast femtosecond timescales. Other options under study include saturable absorbers and semiconductor nonlinearities. These approaches are exciting, but turning lab prototypes into fully reliable networks will likely take years, as engineers work out how to integrate them at scale.
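A numerical analogy for where those pieces fit, not a device model: treat the MZI mesh as a unitary linear transform on optical amplitudes, detection as taking intensities, and a sigmoid as a stand-in for a nonlinear activation such as the PPLN one mentioned above.

```python
import numpy as np

# Numerical analogy only (not a device model): an MZI mesh implements a
# linear, unitary transform on optical amplitudes; detection yields
# intensities; a sigmoid here stands in for a nonlinear activation such as
# the PPLN-based one mentioned above. This just shows where the "linear
# matrix operations at the heart of neural networks" fit.

rng = np.random.default_rng(1)

def random_unitary(n: int) -> np.ndarray:
    """QR decomposition of a random complex matrix gives a unitary matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(a)
    return q

def photonic_layer(amplitudes: np.ndarray, mesh: np.ndarray) -> np.ndarray:
    linear = mesh @ amplitudes                    # what the MZI mesh would do
    intensity = np.abs(linear) ** 2               # photodetection yields intensities
    return 1.0 / (1.0 + np.exp(-intensity))       # sigmoid-like activation stand-in

x = rng.normal(size=4) + 0j                       # toy input amplitudes
mesh = random_unitary(4)
print(photonic_layer(x, mesh))
```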
Claude 4 vs. a Peach: What Is a Peach, Really? Vern R Walker at Medium
...My experiment was to provide the AI system with an everyday story involving a specific peach at a farmer's market. Then I prompted Claude to describe the same peach from different conceptual perspectives or viewpoints—biology and chemistry, economics, cognitive science, law, and Aristotle's metaphysics. My goal is to investigate the question, “What is a peach, really?” I think both the experimental process and the results may surprise you....we have illustrated here five descriptions of the peach that are very different from the original story and from each other. Virtually all the nouns, adjectives, verbs, and adverbs in the five Claude responses are peculiar to each of the five viewpoints. However, the grammatical structure and many other words (such as "the," "a," "its," "is") are the same across all five descriptions.
Second, each viewpoint (such as economics) and topical focus (such as microeconomics) results from a community that creates and adapts its own descriptive language for its own goals and purposes. This differentiation of purpose allows all of the descriptions to be accurate at the same time, from the different perspectives of the different communities. Biologists, economists, cognitive scientists, lawyers, metaphysicians — each type of community has different goals, methods, and useful terminology.
...Third, my concluding observations are not directly about the peach (as all Claude's responses are), but rather about the language about the peach. Yet it seems to me that my observations are highly relevant to thinking correctly about peaches themselves. What precisely that relevance is should be puzzling to us.
Finally, we began with the question, "What is a peach, really?" I think that the answer should be that the word "really" is misleading here. There are many answers to the question, "What is a peach?" The peach is a drupe (botany), commodity (economics), information source (cognitive science), tangible personal property (law), and substance (Aristotle), as well as many other things.
The appropriate answer depends on our community, and on our purpose in asking the question. To suggest that one description should take priority over another would be to suggest that one community should take priority over the others. For most of us, the reasonable approach is to use whatever description is the most useful for our current purposes.
Neuroscience Rewired Part 5: The Consequences of Clarity Rob Whiteman at Medium
...I love hearing experts speculate about the arrival of artificial general intelligence (AGI). Some claim we're only a year or two away. Most think it'll take another 5–10 years. A handful of skeptics aren't even sure AGI is possible. Their forecasts are well-reasoned, but how can you forecast AGI without a model for the human brain? That question sent me down this neuroscience rabbit hole. After months of reading and writing, I've wrapped my head around where we are on the path to AGI. It's a mixed bag, but we're closer than most people realize.
...In terms of intelligence, AI models already perform well in terms of propagation and generalization. Tokenization lets models process a wide range of input signals, and I challenge you to find a single human with more patterns encoded in their brain than ChatGPT.
...That leaves consciousness, where AI labs remain at square one. Signals don't accumulate in ANNs [Artificial neural networks]. Even if they did, the slate is wiped clean with each new prompt. The same model instance that's writing a bedtime story one minute can be working on a mathematical proof the next. Modern AI models probably don't experience anything close to what we call consciousness.
Nvidia Wikipedia (company history)
...On July 29, 2025, Nvidia ordered 300,000 H20 AI chips from Taiwan Semiconductor Manufacturing Company (TSMC) due to strong demand from Chinese tech firms like Tencent and Alibaba.[151] In August 2025, Nvidia and competitor Advanced Micro Devices agreed to pay 15% of the revenues from certain chip sales in China as part of an arrangement to obtain export licenses.[152] Nvidia will pay only for sales of the H20 chips.[153]
...In 2018, Nvidia's chips became popular for cryptomining, the process of obtaining crypto rewards in exchange for verifying transactions on distributed ledgers, the U.S. Securities and Exchange Commission (SEC) said. However, the company failed to disclose that it was a "significant element" of its revenue growth from sales of chips designed for gaming, the SEC further added in a statement and charging order. Those omissions misled investors and analysts who were interested in understanding the impact of cryptomining on Nvidia's business, the SEC emphasized. Nvidia, which did not admit or deny the findings, has agreed to pay $5.5 million to settle civil charges, according to a statement made by the SEC in May 2022.[254]
...In May 2025, U.S. senators Jim Banks and Elizabeth Warren criticized a proposed Nvidia facility in Shanghai, saying that it "raises significant national security and economic security issues that warrant serious review."[257]
...In August 2025, Nvidia ordered suppliers to halt production of its H20 AI chip following Chinese government directives warning domestic companies against purchasing the processor due to security concerns.[258][259] The company directed suppliers including Taiwan Semiconductor Manufacturing Company, Samsung Electronics, and Amkor Technology to suspend work on the China-focused processor.[260]
The H20 was developed in late 2023 specifically for the Chinese market to comply with U.S. export restrictions, featuring 96GB of HBM3 memory and 4.0 TB/s memory bandwidth—higher than the H100—but with significantly reduced computational power at 296 TFLOPs compared to the H100's 1979 TFLOPs.[261][262] Despite lower raw performance, the H20 demonstrated over 20% faster performance than the H100 in large language model inference tasks due to architectural optimizations.[261][262]
Prior to the production halt, Nvidia had placed substantial orders for the H20, including 300,000 units from TSMC in July 2025, driven by strong demand from Chinese technology companies.[263] CEO Jensen Huang denied allegations that the H20 contained security backdoors, stating the chips were designed solely for commercial use.[264] The production suspension occurred as Nvidia was developing the B30A, a new chip based on its Blackwell architecture intended to succeed the H20 in the Chinese market.[265]
8ix25
OpenAI Researchers Have Discovered Why Language Models Hallucinate Alberto Romero
...let me do a quick meta-analysis of the paper, because, besides being an interesting study, it is three other things:
- It is OpenAI's implicit admission that hallucinations deserve priority status; maybe CEO Sam Altman has internalized that he shouldn't promise AGI while America's economy hangs by a thread if he doesn't yet have a means to raise AI's floor of value as much as he's raised the ceiling.
- The simplicity of the findings reveals that not enough money has been devoted industry-wide to understanding this issue; although hallucinations have been touted as AI's Achilles' heel, they don't get precedence over competitive concerns (it is the urge to beat other industry players that has delayed taking genuine care of this problem).
- We may be closer than ever to getting AI products that don't hallucinate; imagine how valuable ChatGPT would be if you were certain by default that it always tells the truth (or, at least, that it doesn't lie, expressing confident ignorance instead, like "I don't know"). I wouldn't blink at AI being endlessly dressed up with words like "revolutionary."
9ix25
Context engineering anchors AI agents to ground truth Jon Udell
The Complete Timeline of How AI Went From Miracle to Bubble in 3 Months Wesley Edits at Medium
...Altman himself recently said AGI has become "not a super useful term." OpenAI's own charter defines AGI as 'a highly autonomous system that surpasses humans in most economically valuable tasks.' What does that even mean? It's so vague it's basically meaningless.
...So the trick these companies are playing is they are building what they call sophisticated AI agents. Pretty much what they are doing is having an AI system prompt another AI system. It then prompts another, creating chains of artificial reasoning that look impressive on demo day but struggle under real-world pressure.
This isn't deep learning or anything new. It's like when you plug too many extension cables into an outlet. Sure you are getting more power, but you are increasing the chance of problems.
...Each prompt hop away from your original question increases the odds that the AI will completely lose track of what you actually asked for and then confidently deliver complete nonsense while burning through server farms worth of electricity.
...The Register's analysis was blunt: 'OpenAI's new top model appears to be less of an advancement and more of a way to save compute costs' — representing the 'cost-cutting era' for OpenAI.
Why the cost-cutting? OpenAI faced pressure to 'increase its user base, raise prices, or cut costs'. $20 a month is already a lot for something most people don't understand.
We're told to give up coffee and avocado toast to afford a house, but somehow paying for chatbots that hallucinate is essential?
...Nvidia remains the only company actually profiting from the AI boom, making their results critical for understanding the industry's real health.
Nvidia has become the world's most valuable company because tech giants like Microsoft, OpenAI, and Meta are buying their chips for data centers. While every other AI company burns through cash, Nvidia collects the profits from selling the tools.
Dangerous Dependence on Few Buyers: Two "mystery customers," identified as Meta and Microsoft, account for 39% of Nvidia's revenue. When customers like Amazon, Google, and Tesla are included, these few tech giants make up the majority of Nvidia's revenue.
Since raising prices would lose customers to competitors, cost-cutting became the strategy.
...We don't get a market collapse — just gradual degradation of every service we depend on so all these companies can replace workers with AI agents, all justified by 'efficiency' that only appears on quarterly earnings reports.
...The AI revolution promised to make our lives better — less work and more freedom to create what you want for less. Instead, it might just make everything a little bit worse, one automated interaction at a time.
Want To Get Better at Spotting AI Writing? Start With Wikipedia's New List Thomas Smith at Medium
points to Signs of AI writing Wikipedia
Megawatts and Gigawatts of AI Mike Loukides at O'Reilly
10ix25
Building AI-Resistant Technical Debt Andrew Stellman at O'Reilly
...Vibe coding is an exploratory, prompt-first approach to software development where developers rapidly prompt, get code, and iterate. When the code seems close but not quite right, the developer describes what's wrong and lets the AI try again. When it doesn't compile or tests fail, they copy the error messages back to the AI. The cycle continues—prompt, run, error, paste, prompt again—often without reading or understanding the generated code. It feels productive because you're making visible progress: errors disappear, tests start passing, features seem to work. You're treating the AI like a coding partner who handles the implementation details while you steer at a high level. Developers use vibe coding to explore and refine ideas and can generate large amounts of code quickly. It's often the natural first step for most developers using AI tools, because it feels so intuitive and productive. Vibe coding offloads detail to the AI, making exploration and ideation fast and effective—which is exactly why it's so popular.
The AI generates a lot of code, and it's not practical to review every line every time it regenerates. Trying to read it all can lead to cognitive overload—mental exhaustion from wading through too much code—and makes it harder to throw away code that isn't working just because you already invested time in reading it.
11ix25
The Mathematics of GPT Emergence: Why AI Gets Smart Suddenly Dr Swarnendu at Medium
Everyone talks about GPT's "magical" emergence of reasoning. But what if it's just mathematics at incomprehensible scale? Let me show you the elegant statistical mechanics behind AI's sudden intelligence. You're training a language model. For weeks, it produces grammatically correct but shallow responses. Then suddenly — almost overnight — it starts solving complex problems and demonstrating reasoning that seems almost human-like. What just happened? You witnessed a phase transition — the same mathematical phenomenon that explains how water suddenly becomes ice.
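One common toy account of such jumps (an illustration, not necessarily the article's exact math): if a task needs several subskills to succeed at once, smooth per-subskill improvement still produces a sudden leap in the task metric.

```python
import numpy as np

# Toy account of "emergence" (an illustration, not the article's exact math):
# if a task needs k independent subskills to all succeed, and each subskill's
# success rate rises smoothly with scale, the task accuracy p**k stays near
# zero for a long time and then shoots up, which looks like a phase transition.

scales = np.linspace(0, 1, 11)        # abstract "scale" from small to large
subskill = scales                      # each subskill improves linearly with scale
k = 10                                 # the task needs 10 subskills at once
task_accuracy = subskill ** k

for s, acc in zip(scales, task_accuracy):
    print(f"scale {s:.1f}: subskill {s:.1f}, task {acc:.4f}")
# Smooth per-subskill progress, yet the task metric jumps only near the end.
```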
Vibe coding vs. the metaverse: a tale of hollow tech buzzwords Youjin Nam at Medium
The first time I heard the phrase "vibe coding," I felt a wave of déjà vu. The words carried the same hollow ring I remembered from the early days of the "metaverse" — a term lifted from Neal Stephenson's 1992 sci-fi novel Snow Crash and later stretched by tech companies to describe immersive 3D worlds. As I described it in The body remains the interface, it became a mantra after Facebook's rebrand as Meta, repeated endlessly even though no one could explain it. That vagueness seemed intentional, rather than a flaw. Vibe coding has a different origin but a similar effect. The phrase was introduced in February 2025 by Andrej Karpathy, who used it to describe programming in plain language — sketching ideas in natural words and letting AI generate the code.
MIT Report Reveals 95% of Generative AI Pilots at Companies Are Failing Spectacularly MKWriteshere at Medium
AI isn't smart. It's performative. These systems regurgitate information confidently, including incorrect information fed during training. We're deploying probabilistic text predictors in environments that demand precision, then acting shocked when they hallucinate in critical moments.
The real problem isn't ChatGPT's limitations — it's executives who mistake confident responses for actual intelligence. They're buying digital parlor tricks and expecting strategic transformation.
...NVIDIA H100 chips cost $40,000 each. Google owns 26,000 of them and created AlphaGo, AlphaFold, and Gemini. Meta owns 600,000 and apparently uses them to make teenagers feel worse about themselves.
Sam Altman claims OpenAI needs trillions in infrastructure investment for mainstream adoption. Only nation-states can afford that scale, and I don't see sovereign wealth funds lining up to fund chatbots.
Most AI startups will fail from infrastructure costs alone. The economics don't work unless you get acquired early or find a government willing to subsidize your computing bills.
Meta renamed itself to ensure nobody forgets their $70 billion Reality Labs losses. They pivoted from metaverse to AI when virtual reality proved premature. How long before they pivot again?
...Advanced organizations experiment with "agentic AI" that supposedly learns, remembers, and acts independently within defined boundaries. These systems represent the next evolution beyond current limitations.
I remain skeptical. If companies can't successfully deploy basic AI tools, why would more complex systems fare better? The fundamental problems — unrealistic expectations, poor implementation planning, and liability questions — don't disappear with technological advancement.
...The question isn't whether AI will transform business — it's how many companies will go bankrupt learning what AI actually does versus what they thought they were buying.
Illusions of AI consciousness Yoshua Bengio and Eric Elmoznino at Science
...Despite the plethora of AI models that have been developed, no system likely meets all of the criteria for consciousness set forth in any of the leading theories (1). However, the study also concludes that there are no fundamental barriers to constructing a system that does. Indeed, the set of tools available in modern AI is vast: There is evidence that neural networks can implement attention mechanisms, recurrence, information bottlenecks, predictive modeling, world modeling, agentic behavior, theory of mind, and other computational components considered crucial in leading functionalist theories of consciousness. As AI progresses, there is good reason to believe that it will satisfy more of these indicators for one very important reason: Many of the theories suggest that consciousness plays important functional roles for intelligence. Computational functions often associated with consciousness could provide advantages from the point of view of a learning agent (2). Reasoning, planning, efficiently digesting new knowledge, calibrated confidence, and abstract thought all require consciousness according to one theory or another. It is common for AI researchers to take inspiration from theories of consciousness when approaching these problems (3)....What might be the practical implications of a society that sees AI systems as conscious beings? Such a society might be inclined to treat them as though they have moral status, or rights akin to human rights. But whether or not this is the correct approach, institutions and legal frameworks will have to be substantially amended, and many questions arise about how to do so (12). For instance, AI systems will not be mortal and fragile, as humans are. Software and memories can be copied to survive indefinitely. But human mortality and fragility lie at the foundation of many of the principles that undergird social contracts in society.
...The current trajectory of AI research may be moving society toward a future in which substantial portions of the general public and scientific community believe that AI systems are conscious. As things stand currently, AI science does not know how to build systems that will share human values and norms, and society possesses neither the legal nor ethical frameworks needed to incorporate conscious-seeming AI.
...The technological advances in neuroscience of the past few decades have made it clear that conscious states, which can typically be reported by subjects, have specific observable neural signatures around which functionalist theories can be developed. Many such theories have gained substantial empirical support and can be used to make theoretically justified judgments in the case of AI. This methodology was recently applied in a study that identified a list of “indicators” for a number of leading functionalist theories of consciousness (1). The indicators associated with a given theory correspond to computational properties that are considered both individually necessary and jointly sufficient for a system to be conscious, if that theory is true. Notably, these indicators are sufficiently concrete that their presence or absence can be assessed in modern AI systems. The key suggestion of the study is that, to the degree that these theories are given any credence (and many researchers support these ideas), there should be more confidence that a particular AI system is conscious if it satisfies more of the indicators.
...the Attention Schema Theory of consciousness suggests that the brain constructs an internal model of neural attention mechanisms, and that this internal model is what is considered subjective awareness. Crucially, the information in this internal model need not be logically coherent; it is a useful “story” that the brain constructs, and that story can be full of the sorts of contradictions that could make us believe in a “hard problem” of consciousness (6).
...People have the intuitive sense that their subjective experiences are at once full of rich content and meaning, yet that they are fundamentally ineffable, or indescribable in the same way that they describe all other natural phenomena (e.g., a person can state what gravity is, but it seems fundamentally impossible to fully express what the color red evokes for them). The problem of ineffability in particular makes it appear as if conscious experiences simply cannot be explained in terms of information and function
...Contractive dynamics mathematically drive neural trajectories toward "attractors," patterns of neural activity that are stable in time. These dynamics divide the whole set of possible neural activity vectors into a discrete set of regions, one per attractor and its basin of attraction. The hypothesis, then, is that what is communicated through discrete words may reflect only the identity of the attractor (identifying it among all others, with a few bits of information) but not the full richness of the neural state corresponding to the attractor (with nearly 10^11 neural firing frequencies) nor the fleeting trajectory into it. In this attractor dynamics account, the problems of richness, fleetingness, and ineffability dissolve. Thus, the richness is due to the immense number of neurons in the brain that constitute the attractive states and corresponding trajectories, and the ineffability is due to the fact that verbal reports in words are merely indexical labels for these attractors that are unable to capture their high-dimensional meanings and associations, corresponding to the attractor vector state itself and the recurrent synaptic weights differing from person to person.
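A small numerical sketch of the attractor idea, using toy contractive dynamics rather than any brain model: many different rich initial states settle into the same stored pattern, and a report that names only which attractor was reached carries a few bits, not the full state.

```python
import numpy as np

# Toy contractive dynamics (not a brain model): many different initial states
# fall into the same attractor. A verbal report, on this account, names only
# which attractor the state fell into (a few bits), not the full
# high-dimensional state or the trajectory into it.

rng = np.random.default_rng(2)
dim = 1000
attractors = rng.normal(size=(3, dim))           # three stored patterns

def step(x: np.ndarray) -> np.ndarray:
    """Move 30% of the way toward the nearest stored pattern (a contraction)."""
    nearest = attractors[np.argmin(np.linalg.norm(attractors - x, axis=1))]
    return x + 0.3 * (nearest - x)

for trial in range(3):
    x = rng.normal(size=dim)                     # a rich, idiosyncratic starting state
    for _ in range(50):
        x = step(x)
    label = int(np.argmin(np.linalg.norm(attractors - x, axis=1)))
    print(f"trial {trial}: settles into attractor #{label} (the 'word'), "
          f"distance {np.linalg.norm(attractors[label] - x):.3f}")
```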
...As more is discovered about the brain and about intelligence in general, the philosophical puzzle of consciousness likely evaporates for increasingly more people, and as a result the scientific community becomes increasingly willing to accept that artificial systems could be conscious. Indeed, even without current scientific consensus, most of the general public polled in a recent study (11) already believes that large language models could be conscious as a consequence of their human-like agentic behavior.
Literary Rebellion in the Age of LLMs Joe Duncan
Reaching back to the past to outwrite the future...Good as LLMs may be at simile, they utterly fail at metaphor because it requires thinking about and understanding the words used. Perhaps to outwrite the machines, it would behoove us to write using more metaphors — especially ones that aren't commonplace
...Today, we have countless words that were theretofore unknown in Geoffrey Chaucer's era, thanks to Chaucer himself. He utilized French words when he didn't have satisfactory English ones, and thus, the totality of the English language expanded. Words like miracle, virtue, religion, authority, and others first arrived in English via Chaucer.
...Shakespeare employed clever uses of what linguists call noun phrases. Noun phrases are nouns with modifiers, such as "the horse with the dogs," where every word except "horse" is a modifier that modifies the word "horse" (even "the dogs" describes which horse). Here's Shakespeare using this in Macbeth (pay attention to what "it" does in this sentence):
If it were done, when 'tis done, then 'twere well / It were done quickly
He uses the word "it" four times — said three different ways — in the span of one sentence. Everything else modifies. This bit of linguistic witchcraft plays tricks on the brain. It has a rhythm to it that's so unusual. In ordinary speech, we're not used to being bombarded with this many pronouns so quickly. The result is a sense of whiplash and a barrage of words that's pleasant to read and say. It's one reason we've adopted so many Shakespeareisms into ordinary speech, like another example of this process in action with the adjective — all's well that ends well.
...Milton also loved inventing words. I bolded stunning above because Milton coined it. If Shakespeare modified existing words to create new ones, Milton invented them whole cloth. He forged so many words, thanks to his extensive knowledge of Latin and Greek, that he likely created more English words than anyone else in history — six hundred and twenty-three of them to be exact:
- Criticize
- Surrounding
- Damp
- Expanse
- Fragrance
- Satanic
- Sensuous
- Vested
- Enjoyable
- Irresponsible
Milton was particularly fond of inventing un-prefix negatives:
- Unprincipled
- Unoriginal
- Unaccountable
- Unenjoyable
His constant inventing of new words is one reason English made such a massive leap in the 17th century. If you read Shakespeare or Spenser and then read Milton back to back, you'll see clearly how much the language grew thanks to his contributions. While beautiful, Elizabethan English was quite clumsy compared to the concise, efficient English after Milton.
The Genius of Gauss: How the Anthropology of Mathematics Guides Artificial Mathematical Intelligence Sam Vaseghi
Those proficient in mathematics know the divide between two major types of software that—alongside core programming languages such as Python, FORTRAN, C, or R—now assist us: on one side, new generations of GPT-style Large Language Models (LLMs); on the other, Computer Algebra Systems (CAS) such as Wolfram's Mathematica or Maple. New LLMs (e.g., GPT-5, Gemini 2.5 Pro, Claude Opus 4.1) operate over statistical associations, develop emergent arithmetic from text-and-image patterns, and excel at a 'feel' for quantities, heuristics, and analogies. CAS tools, by contrast, follow a symbolic, rule-based path: exact engines run deterministic algorithms (e.g., Gröbner bases, the Risch algorithm) and produce fully certifiable answers—when defined.
Although entirely different in their architecture and functionality, both software types are highly performant in doing certain types of mathematics with brilliant results. There are, in effect, two ways of doing mathematics, which are architecturally and functionally entirely distinct. Nevertheless, they yield some identical results within the same axiomatic system, often without contradiction.
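To make the CAS side concrete, here is a short SymPy example (an open-source rule-based symbolic engine in the same family as Mathematica or Maple): deterministic algorithms return exact, certifiable results rather than statistically plausible ones.

```python
import sympy as sp

# The CAS side of the contrast, made concrete with SymPy: deterministic,
# rule-based algorithms return exact results, not statistically likely ones.

x = sp.symbols('x')

print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # sqrt(pi), exactly
print(sp.integrate(1/(x**2 + 1), x))                     # atan(x), Risch-style integration
print(sp.solve(x**2 - 2, x))                             # [-sqrt(2), sqrt(2)], not 1.4142...
print(sp.factor(x**4 - 1))                               # (x - 1)*(x + 1)*(x**2 + 1)
```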
15ix25
When Deepware meets Wetware: the uncomfortable truth about responsible AI Patrizia Bertini at Medium
There is a memo from IBM dated 1979 that should be mandatory reading for every CEO racing to deploy AI. It states simply and profoundly that machines cannot be responsible: "A computer can never be held accountable. Therefore a computer must never make a management decision." Forty-six years later, we have built what I call "deepware" — the layered neural architectures of AI and machine learning that sit atop our traditional software and hardware stack. Yet somehow in our breathless rush to innovation we have convinced ourselves that this deepware can shoulder the burden of responsibility that our wetware — our human brains — seems increasingly eager to abandon.
Let me be clear: responsible AI does not start with better algorithms or more data. It starts with humans who remember how to think.
We have always understood technology through layers. Hardware provides the physical substrate. Software gives us the instructions and logic. But now we have added deepware — these probabilistic, pattern matching systems that generate increasingly convincing simulacra of intelligence.
The critical layer though remains wetware: the human brain, with its capacity for judgement, ethics and crucially responsibility. Yet this is precisely the layer we are systematically deactivating in our AI deployments.
16ix25
The Best Things I Have Found on the Unreasonable Effectiveness of Neural Networks Brad DeLong
...Folding space: How back-propagation and ReLUs can actually learn to fit pieces of the world. Intuition is actually possible! Geometrically, neural nets work not by magic, but by folding planes into shapes—again and again, at huge scale, in extraordinarily high numbers of dimensions. Addition of innumerable such shapes composes simple bends into complex functions, and back-propagation finds such functions that fit faster than it has any right to. But our geometrical intuition is limited. Our low-dimensional brains misread the danger that a model might get "stuck" in a bad local minimum of the loss function, not knowing which way to move to get to a better result. In large numbers of dimensions—when you have hundreds of thousands of parameters or more to adjust—what emerges looks to us low-dimensional visualizers like "wormholes." As gradient descent proceeds, the proper shift in the slice you see reveals nearby, better valleys in the loss function down which the model can move.
Your mileage may, and probably will, vary. But these visual intuitions click for me. And I can at least believe, even if not see, how things change when our vector spaces shift from three- to million-dimension ones, in which almost all vectors chosen at random are very close to being at right angles to each other.
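That last claim is easy to check numerically; a quick sketch comparing cosine similarities of random Gaussian vectors at a few dimensionalities (stopping at 10,000 to keep it fast; the effect only strengthens toward a million):

```python
import numpy as np

# Checking the claim directly: random vectors in high dimensions are almost
# always nearly orthogonal. Cosine similarity concentrates around 0 as the
# dimension grows.

rng = np.random.default_rng(3)

for dim in (3, 100, 10_000):
    a = rng.normal(size=(200, dim))
    b = rng.normal(size=(200, dim))
    cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    print(f"dim={dim:>6}: mean |cosine| = {np.abs(cos).mean():.3f}")
# Typical output: roughly 0.5 at dim=3, 0.08 at dim=100, 0.008 at dim=10,000.
```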
(points to 3 videos from Welch Labs and several from IBM)
MCP in Practice O'Reilly
Mapping Power, Concentration, and Usage in the Emerging AI Developer Ecosystem...Anthropic's Model Context Protocol (MCP) was released in November 2024 as a way to make tools and platforms model-agnostic. MCP works by defining servers and clients. MCP servers are local or remote end points where tools and resources are defined. For example, GitHub released an MCP server that allows LLMs to both read from and write to GitHub. MCP clients are the connection from an AI application to MCP servers—they allow an LLM to interact with context and tools from different servers. An example of an MCP client is Claude Desktop, which allows the Claude models to interact with thousands of MCP servers.
In a relatively short time, MCP has become the backbone of hundreds of AI pipelines and applications. Major players like Anthropic and OpenAI have built it into their products. Developer tools such as Cursor (a coding-focused text editor or IDE) and productivity apps like Raycast also use MCP. Additionally, thousands of developers use it to integrate AI models and access external tools and data without having to build an entire ecosystem from scratch.
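A deliberately simplified sketch of that server/client split, with entirely hypothetical names; this is not the real MCP wire protocol or SDK, just the shape of the idea that servers expose tools and clients route a model's tool calls to them.

```python
# Schematic sketch of the server/client split described above. All names are
# hypothetical illustrations; this is not the real MCP wire protocol or SDK.
# Servers expose named tools; a client inside an AI app routes the model's
# tool calls to whichever server defines them.

class ToyServer:
    """Stands in for an MCP server: a named endpoint exposing tools."""
    def __init__(self, name, tools):
        self.name, self.tools = name, tools
    def call(self, tool, **args):
        return self.tools[tool](**args)

github_like = ToyServer("github", {
    "read_issue": lambda number: f"issue #{number}: flaky test on CI",   # read context
    "open_pr":    lambda title:  f"opened PR '{title}'",                 # write / act
})

class ToyClient:
    """Stands in for an MCP client: routes a model's tool calls to servers."""
    def __init__(self, servers):
        self.registry = {(s.name, t): s for s in servers for t in s.tools}
    def dispatch(self, server, tool, **args):
        return self.registry[(server, tool)].call(tool, **args)

client = ToyClient([github_like])
print(client.dispatch("github", "read_issue", number=42))
print(client.dispatch("github", "open_pr", title="Fix flaky test"))
```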
...Yet we don't have a clear idea of the shape of the MCP ecosystem today
...Some of our takeaways are:
- MCP usage appears to be fairly concentrated. This means that, if left unchecked, a small number of servers and (by extension) APIs could have outsize control over the MCP ecosystem being created.
- MCP use (tools and data being accessed) is dominated by just three categories: Database & Search (RAG), Computer & Web Automation, and Software Engineering. Together, they received nearly three-quarters (72.6%) of all stars on GitHub (which we proxy for usage).
- Most MCP servers support both read (access context) and write (change context) operations, showing that developers want their agents to be able to act on context, not just consume it.
...Before the advent of MCP, web scraping and database search were highly integrated applications across platforms like ChatGPT, Perplexity, and Gemini. With MCP, however, users can now access that same search functionality and connect their agents to any database with minimal effort. In other words, MCP's unbundling effect is highly visible here... The prevalence of servers with combined read and write access suggests that agents are not being built just to answer questions based on data but also to take action and interact with services on a user's behalf.
17ix25
Cory Doctorow sez:
...There's many examples of harmful delusions being worsened through online community reinforcement: there's pro-anorexia forums, incel forums, bitcoin, "race realism," and other all-consuming junk science. That's where LLMs come in. While the internet makes it far easier to find a toxic community of similarly afflicted people struggling with your mental illness, an LLM eliminates the need to find that forum. The LLM can deliver all the reinforcement you demand, produced to order, at any hour, day or night. While posting about a new delusional belief to a forum won't generate responses until other forum members see it and reply to it, an LLM can deliver a response in seconds.
In other words, there's one job that an AI can absolutely do better than a human: it can reinforce our delusions more efficiently, more quickly, and more effectively than a community of sufferers can.
18ix25
The Indigenous AI Revolution Giles Crouch at Medium
Indigenous peoples around the world are at the forefront of developing innovative uses and applications of GenAI tools like ChatGPT or Claude, but most often they are starting from scratch when they do. Rather than asking "how do we adapt to AI?" they are instead asking "How do we make AI adapt to our values?" This work is quite profound and the various projects underway in Canada, Australia, New Zealand and other countries can start to play a vital role in shaping culturally rich and relevant AI to better serve humanity. This creates an interconnectedness that contrasts with the often reductionist approaches of Western AI development.
Microsoft's AI Copilot Invades Congress gizmodo
...Axios reports that Microsoft's AI chatbot, Copilot, will be made available to staff in the House of Representatives, where lawmakers will use it to... well, that part isn't exactly clear yet.
ChatGPT and the Em Dash languagehat
19ix25
24ix25
To Understand AI, Watch How It Evolves Quanta Magazine
...the growing field of interpretability, in which researchers poke around inside language models to uncover the mechanisms that make them work. While many of her fellow interpretability researchers draw inspiration from neuroscience, Saphra favors a different analogy. Interpretability, in her view, should take a cue from evolutionary biology. "There's this very famous quote by [the geneticist Theodosius] Dobzhansky: 'Nothing makes sense in biology except in the light of evolution,'" she said. "Nothing makes sense in AI except in the light of stochastic gradient descent," a classic algorithm that plays a central role in the training process through which large language models learn to generate coherent text.
Language models are based on neural networks, mathematical structures that process data using connections between artificial "neurons." The strength of each connection is random at first, but during the training process the connections get tweaked as the model repeatedly attempts to predict the next word in sentences from a vast text dataset. Somehow, through trillions of tiny tweaks, the model develops internal structures that enable it to "generalize," or respond fluently to unfamiliar inputs.
Most interpretability research focuses on understanding these structures in language models after the training process. Saphra is a prominent champion of an alternative approach that focuses on the training process itself. Just as biologists must understand an organism's evolutionary history to fully understand the organism, she argues, interpretability researchers should pay more attention to what happens during training. "If you don't understand the origins of the model, then you don't understand why anything works," she said.
...When I started my Ph.D., I developed a neurological disease that made me lose the ability to type or write by hand. That's obviously a huge limitation for a Ph.D. student in computer science. I had to learn to dictate code, and I relied on accommodations like having a private office that I can dictate in.
Trust but Verify O'Reilly
We often say AIs "understand" code, but they don't truly understand your problem or your codebase in the sense that humans understand things. They're mimicking patterns from text and code they've seen before, either built into their model or provided by you, aiming to produce something that looks right and is a plausible answer. It's very often correct, which is why vibe coding (repeatedly feeding the output from one prompt back to the AI without reading the code that it generated) works so well, but it's not guaranteed to be correct. And because of the limitations of how LLMs work and how we prompt with them, the solutions rarely account for overall architecture, long-term strategy, or often even good code design principles. The principle I've found most effective for managing these risks is borrowed from another domain entirely: trust but verify. While the phrase has been used in everything from international relations to systems administration, it perfectly captures the relationship we need with AI-generated code. We trust the AI enough to use its output as a starting point, but we verify everything before we commit it.
Trust but verify is the cornerstone of an effective approach: trust the AI for a starting point but verify that the design supports change, testability, and clarity. That means applying the same critical review patterns you'd use for any code: checking assumptions, understanding what the code is really doing, and making sure it fits your design and standards.
LLM "hallucinations are mathematically impossible to avoid flowingdata
..."Unlike human intelligence, it lacks the humility to acknowledge uncertainty," said Neil Shah, VP for research and partner at Counterpoint Technologies. "When unsure, it doesn't defer to deeper research or human oversight; instead, it often presents estimates as facts.The OpenAI research identified three mathematical factors that made hallucinations inevitable: epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures' representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.
LLMs function under data with uncertainty attached. For general purpose, in the human world, that will always be the case. This is problematic when you have chatbots spouting sure things all the time.
The Trillion-Dollar AI Bet Alberto Romero
...AI is often described as a 'general-purpose technology,' meaning a kind of meta-technology; the kind that creates new technologies or improves existing ones. It's up to the user to figure out how it works and what it's for. (We've come to think of the chatbot interface and the smartphone form-factor as the only possibilities, but AI could be built into almost any design or device.) AI labs market this so-called “generality” as a virtue—as unbounded scope and infinite potential—but they fail to notice that humans hate the uncertainty that comes with the infinite. It's great, yes, but also intangible, abstract, nebulous....AI is the ultimate multi-tool and an unreliable one at that, so its success is, in every sense, a bet. A bet for the consumers using it, for the AI companies building it, for the businesses buying it, and a bet for the entire economy.
UI paradigms Jakob Nielsen at nngroup.com
...AI is introducing the third user-interface paradigm in computing history, shifting to a new interaction mechanism where users tell the computer what they want, not how to do it — thus reversing the locus of control....The beauty of graphical user interfaces is that they do show the status after each command, at least when designed well. The graphical user interface has dominated the UX world since the launch of the Macintosh in 1984: about 40 years of supremacy until it possibly is replaced by the next generation of UI technology and, more importantly, the next UI paradigm in the form of artificial intelligence.
...in command-based interactions, the user issues commands to the computer one at a time, gradually producing the desired result (if the design has sufficient usability to allow people to understand what commands to issue at each step). The computer is fully obedient and does exactly what it's told. The downside is that low usability often causes users to issue commands that do something different than what the users really want.
With the new AI systems, the user no longer tells the computer what to do. Rather, the user tells the computer what outcome they want. Thus, the third UI paradigm, represented by current generative AI, is intent-based outcome specification.
...Clicking or tapping things on a screen is an intuitive and essential aspect of user interaction that should not be overlooked. Thus, the second UI paradigm will survive, albeit in a less dominant role. Future AI systems will likely have a hybrid user interface that combines elements of both intent-based and command-based interfaces while still retaining many GUI elements.
Has NVIDIA Lost the Lead? Ignacio de Gregorio at Medium
Huawei has recently presented its upcoming AI accelerator roadmap, and it looks pretty insane. The lineup looks so good that China apparently no longer wants NVIDIA GPUs. But is this really what it seems? Should the US Government and AI incumbents be freaking out?
The answer is yes, but not for the reasons one might think. Instead, it's the one thing that is setting China very far ahead of the US.
...Huawei has unveiled what are likely the largest interconnected servers (known as pods) and clusters (groups of pods) ever seen.
... instead of worrying about single-chip metrics, we are actually more interested in cluster-level metrics. The metrics are the same, but at a totally different scale. Hold this thought for later.
...In short, they seem to have presented a new lineup of AI systems that vastly outcompete NVIDIA's performance, at least in terms of raw metrics.
...Not only do energy costs impact nearly every aspect of our lives, from manufacturing to transportation to basic living expenses, but they are also the key factor in determining how much we can reduce AI prices and how much AI compute can be deployed.
Artificial Intelligence Captures Christopher Hitchens' Style, if Not His Political Content Paul Austin Murphy at Medium
...What would Christopher Hitchens have to say about current events? Welcome to a unique thought experiment: what if the sharpest voices of the past could weigh in on today's debates? This channel explores that idea by recreating a satirical, AI-generated version of Hitchens, one of the most formidable public intellectuals of our time. Inspired by the tradition here at Oxford of inquiry/debate, it's a labor of love by admirers wondering how his wit might respond to modern issues.
(from YouTube channel 'Christopher Hitchens Resurrected') ...My first thought was that the prose style was too good, and too close to Hitchens' own, to be AI. Regardless of the political content, the style was impressive. I was already fully aware of what AI is capable of. But this was too good. There are no intellectual glitches in the videos I've inspected. In other words, there are no factual mistakes, out-of-tune phrases, etc.
...there are similar "resurrection" channels, such as those for George Orwell, Carl Sagan, etc
...Hitchens' style is AI's dream. His writing — dense with Orwellian nods, biblical inversions, and sardonic jabs — has a predictable structure despite its brilliance. AI thrives on patterns, and his are loud
...What would Christopher Hitchens say to J.K. Rowling about her stance on trans issues? In this satirical reimagining, Hitchens' unmistakable voice and wit are brought forward to deliver a scathing critique of Rowling's descent into bigotry and her detachment from reality. From his imagined vantage point, Hitchens dismantles the hollow arguments and culture-war posturing that underpin Rowling's views, exposing the contradictions and fear-mongering at the heart of her rhetoric.
...It is one of the peculiar tragedies of our age that the woman who taught a generation of children about the importance of choosing love over fear, tolerance over prejudice, and standing up for the marginalized against their oppressors, has herself become consumed by exactly the kind of moral panic she once wrote against. JK Rowling, having given us Harry Potter's struggle against those obsessed with blood purity, now spends her days obsessing over chromosomal purity with the fervor of a Victorian eugenicist who's just discovered Twitter.
The transformation has been as complete as it has been pathetic. Here is one of the wealthiest women in Britain. Someone who could spend her remaining years doing literally anything. Fighting poverty, promoting literacy, enjoying her castle in Scotland, who has instead chosen to become a full-time opponent of trans people's right to exist in public life
AI and the Future of Human Language (Spoiler: It's Weird) Giles Crouch at Medium
Google Translate thinks Putin runs Mordor and ChatGPT invented a guy named "Pernjamin", who doesn't exist but sounds like an old childhood friend. And your phone thinks you want to "duck" someone. They're funny. And true. This did happen with Google Translate, ChatGPT and iPhones with the "ducking" thing. Yet some of these weird language plays aren't just digital bloopers. In some ways, they're actually rewiring human language. Tiny linguistic mutations that become part of how we talk. These machines weren't designed to do this, they were designed to help us perfect our language. Instead, they are becoming co-conspirators in the evolution of our languages, turning errors into cultural elements and bugs into features.
...Back in the early oughts there was the issue of the "Cupertino Effect" with early spellcheckers which turned the word cooperation into Cupertino. The impact? Some major documents in the EU contained phrases like "South Asian Association for Regional Cupertino" and "presentation of African-German Cupertino."
Then there's the infamous "ducking" autocorrect of "f*cking" that led to some hilarious memes and remains lingering in cultural language to this day. It dates back to a late-'90s device and somehow made it into Apple's software.
Now we are seeing a sort of algorithmic folk etymology. Historically, humans reinterpret unfamiliar words through familiar sounds (like "sparrow-grass" for "asparagus"), but today we're reinterpreting algorithmic suggestions through cultural logic. We've not done this before.
... language evolution is no longer human alone. Algorithms are now inadvertent cultural participants. And we know that "mistakes" in complex systems often become the foundation for new capabilities. And in a way, these technologies, designed to reduce friction in how we communicate (autocorrect, predictive text), end up creating new forms of friction that encourage linguistic creativity. (author: That's pretty uhm, "ducking" cool!)
...AI tools like ChatGPT and Claude aren't just responding to changes in language, they're actively influencing them. Helping coin new terms, shaping the popularity of phrases and creating new grammatical structures. We're even developing canny ways of "knowing" when it's AI written content. In part because LLMs can't "play" (yet), with language like we humans do.
...Language is a core technology that we created (or took from the Neanderthals and improved on) to be able to work together and survive as a species. And so we are now doing this with machines. It's equal parts scary and fascinating.