The materials linked below are best read as bellwethers for emergent phenomena,
commentary on a rapidly-changing landscape of technologies.
Many of these seemed to be bellwethers when I collected them...
1xii25
Large Language Models As The Tales That Are Sung Henry Farrell
...In the oral tradition, epics are not fixed texts, but generative systems of songs, endlessly articulated, re-articulated and adapted in varying forms, according to the particularities of the singer and the circumstances they sing in. As Lord puts it: "we cannot correctly speak of a 'variant,' since there is no original to be varied."...A huge amount of cultural and especially intellectual tradition consists of formulas, templates, conventions, and indeed tropes and stereotypes. To some extent this is to reduce the cognitive burden on creators: this has been extensively studied for oral culture, such as oral epics.
...The poiesis of the programmer is closer to the heroic poiesis of the bard than we think. As one of my old XKCD t-shirts puts it: stand back - I know regular expressions
...an important — and from some perspectives crucial — difference between traditional generative cultural systems such as the oral epic tradition in Yugoslavia and its Homeric ancestor and cousin, and algorithmically generative cultural systems such as LLMs. The former are inseparable from their performance in human contexts: the system can be abstracted from their performances, but does not have any substantial being independent of them. The tradition's existence is manifested through the stories that humans tell each other. LLMs, in contrast, are a condensation and actualization of the tradition itself and all the other traditions that have been folded and compressed into its statistical weights, stripped of the specifics of human relationships and reapplied to them as an algorithmic process. LLMs speak only when prompted, but their continuations of those prompts are not expressed through two-way human relationships (though the intentions of the prompter, training via RLHF, fine-tuning, etc., certainly affect the outputs). The person who prompts the LLM is interrogating a lossy representation of the tradition itself. Of course, LLMs may then shape human understandings, and fool people into treating them as human, but they are not. Their tangible abstraction is something that is novel and different in its application than the previously intangible cultural knowledge that they summarize.
...There is no valid room for wondering about LLMs' motives for saying things, since they don't have any motives to wonder about. As Weatherby argues, LLMs have "heat-maps" of summarized correlations between themes, tropes and words instead of intentions.
2xii25
Why Ads on ChatGPT Are More Terrifying Than You Think Alberto Romero
...It is a bad sign that this is a forced move rather than a free choice, though: People would complain anyway if OpenAI wanted ad revenue to amass some profits, but that's standard capitalism; the fact that OpenAI can't afford the things it intends to build unless it enables ads suggests a fundamental flaw at the technological level: maybe large language models (LLMs) are not a good business (the financials don't paint a pretty picture so far). If that's the case, then this is not an OpenAI problem but an industry-wide catastrophe: whereas traditional search engines display ads because users want free information, chatbots will have ads because the costs are otherwise untenable. The unit economics of LLMs have always been precarious; the cost of inference—the computing power required to generate a response, which is, accounting for everything, larger than the cost of training a new model when you serve 800 million people—remains high compared to web search because small-ish companies like OpenAI need to rent their compute from cloud companies like Microsoft; their operational expenses are sky-high! (cloud-high one might say). The twenty-dollar subscription tier (chatbot providers all have converged at ~$20/month) was an effective filter for power users, but it was never going to subsidize the massive operational costs of the free tier, which serves as both a user acquisition funnel and a data collection engine.
...The transition from monthly subscription to advertising pushes OpenAI out of "research laboratory selling access to intelligence" territory into "full-blown media company selling access to attention" territory, just like YouTube or Facebook. Let me be clear about this: building AGI (general AI) to "benefit everybody" was always a cool genuine goal but ultimately a distraction; you can promise the heavens for free but only until you need to hit revenue targets to appease investors. OpenAI is not hitting them.
...The core utility of an LLM is the ability to compress vast amounts of data into a single, coherent answer (that is, when it doesn't hallucinate). We value this distillation because we assume it is derived from a "neutral weighing of the available facts": ChatGPT goes on Google search and gives you a response from, presumably, a good mix of sources. Advertising introduces a perverse incentive that directly undermines this utility.
Has Google just wiped out an entire industry (and created a new one)? AI Rabbit at Medium
...the ability to generate images that adhere to actual world knowledge and physical laws is a significant leap forward. The lack of precision has historically been a limiting factor for using AI in fields like science and education, just as the lack of reliable text generation initially limited its use in commerce.
- Stock Photographers: The market for generic stock imagery (e.g., "business people shaking hands" or "fruit in a bowl") could collapse. If an AI can generate these scenes with perfect lighting and physics, there is little reason to pay for a studio session.
- Product Illustrators: Companies often hire artists to create realistic mockups of packaging or prototypes. Since Nano Banana Pro understands how light wraps around objects and how materials behave, these roles are at high risk of automation.
- Graphic Designers: Much of the entry-level work involves sourcing assets and creating simple compositions. When a prompt can deliver a finished, high-fidelity asset that doesn't need fixing, the need for junior staff to handle these tasks diminishes.
How to Think With AI (Not Let It Think For You) Giles Crouch
...We are in a liminal phase right now with AI tools like LLMs, Claude, ChatGPT etc. We are suddenly swimming in a vast sea of collected knowledge and information, with a way to interrogate it, dialogue with it unlike ever before....We need to become "bricoleurs", which sounds rather delicious I think and it is. Traditional bricolage was assembling things with finite and knowable materials. The constraints are physical and visible. We manipulate them entirely.
Radar Trends to Watch O'Reilly
A Day in the Life of a Working Ontologist Dean Allemang at Medium
...I don't think we're finished seeing the impact that GenAI will have on an ontologist's daily life, but I'm pretty sure (and we're already seeing it), that "ontologist" is going to be a much more sought after skill in the near future. LLMs alone can't support enterprise AI agents; they need access to structured, reliable data, and they need to understand what that data means. Most enterprises have plenty of curated data; what they're missing, when they want to connect that to GenAI, is the meaning. Building that layer of meaning, connecting data to its business context, making implicit knowledge explicit, and ensuring machines and humans share the same understanding is what ontologists are good at, and that's what the next generation of AI systems will require.
Google's Plan to Win the AI Race Is All About Getting a Little Too Personal Ece Yildirim at gizmodo
...For AI to get a better understanding of you, Stein says Google's AI models would use connected services like Gmail so that it could paint a detailed picture of your likes and dislikes over time. Thanks to that, for example, the AI can serve you up targeted product releases that you might like....AI is an existential topic for Google Search. Earlier this year, a judge ruled in favor of Google in an antitrust trial over its Search business, saying that "for the first time in over a decade," AI made it so that "a product could emerge that will present a meaningful challenge to Google's market dominance." Google, it seems, might be seeing where the judge comes from as the tech giant ramped up its AI efforts, unveiling its latest Gemini model to great success and integrating it across the entire Google ecosystem.
Gemini is already integrated with Google Workspace apps like Gmail (where your personal correspondence lives), Calendar (which knows what you do, when, and where), and Drive (which may have your work documents or your personal photos). It is also available on Google Maps, YouTube, WhatsApp, Spotify, you name it.
The company also launched a new Chrome browser in September with Gemini integration throughout, including an agentic AI that can navigate the web and complete tasks on your behalf.
...some parts of the Google search experience would stay non-personalized, even in this vision, like when you go on the internet to look for simple, factual information, such as the height of the Empire State Building. But that's not likely the majority, according to the Google executive.
"It's almost weird not to personalize it," Stein said.
...Google's business decisions are consequential for the entire web as the company's products hold the power to alter the way we interact with the internet. Their search engines are the average user's gateway to the internet
3xii25
When Knowledge Graphs Fail, It's Not the Ontology, It's the Epistemology Dr Nicolas Figay at Medium
...Glossary for the Article: Knowledge Graphs and Epistemology
For readers less familiar with semantic engineering and formal knowledge representation, the following glossary provides simplified definitions for the key technical and philosophical concepts discussed in this article, ensuring clarity between data, information, knowledge, and the functions of OWL and SHACL.
Knowledge Graph (KG): A structured way to represent information, similar to a giant map where things (people, places, events) are linked by relationships. It helps computers understand the meaning of data, not just the words.
- Ontology: The schema or dictionary of the Knowledge Graph. It defines the types of things that exist in the system (e.g., Person, Product, Order) and how they can be linked (e.g., Person works for Organization). It is the fundamental structure.
- Epistemology: The study of the nature of knowledge itself: what we know, what we don't know, and how we know what we know. In the article, it refers to the critical need to clearly distinguish between different types of "knowing" (raw data, facts, inferences, etc.).
Data, Information, and Knowledge Distinction:
- Data: The raw symbols without context (e.g., 100).
- Information: Data put into context and organized (e.g., 100 is the number of products in stock).
- Knowledge: Structured information that is usable for deduction and decision-making (e.g., If the stock drops below 50, an order must be placed).
Facts, Inferences, and Unknowns:
- Facts: What is explicitly known or stated (e.g., The price is 10 Euro).
- Inferences: What is logically deduced from the facts (e.g., If X is the parent of Y, then Y is the child of X).
- Unknowns: What is not documented or present in the system.
Technical Tools and Standards
- OWL (Web Ontology Language): A standard language for writing ontologies (the structure of knowledge). It is used to define classes, properties, and logical rules so the system can perform inference.
- Inference (or Reasoning): The process by which a system uses logical rules (defined in OWL) to deduce new knowledge (inferences) from existing facts, even if they were not explicitly written down.
- Open World Assumption (OWA): OWL's core assumption: "What is not explicitly stated is not necessarily false, it's just unknown." The system assumes there is more knowledge in the real world than what it currently contains.
- SHACL (Shapes Constraint Language): A standard used to validate the quality and completeness of data within the Knowledge Graph. It checks whether the existing information meets specific constraints (e.g., Every product must have a price).
- Closed World Assumption (CWA): SHACL's core assumption: "What is not explicitly stated in the system is considered false or non-existent in the context of the validation." This is a practical view for auditing and compliance.
Engineering Concepts
- Validation: The checking process (via SHACL) that ensures the data in the Knowledge Graph is consistent, complete, and compliant with the established rules (constraints).
- Provenance: The information about the origin and history of a piece of data or a fact (who created it, when, and from what source). It is crucial for trust.
- Trust: The level of reliability assigned to a piece of information or a source. It is often linked to provenance.
- Completeness: The state of the information being sufficient for a given purpose. The article stresses that completeness must be declared or governed, not produced by reasoning.
- Lifecycle: The different phases a piece of information or a knowledge object passes through, from its creation to its deletion or archiving.
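The OWA/CWA contrast in the glossary is easy to make concrete. Below is a minimal Python toy, not OWL or SHACL themselves: the helper names and the triples are invented for illustration. The same missing fact is merely "unknown" to an open-world query but fails a closed-world validation check.

```python
# Toy illustration of the glossary's OWA vs CWA distinction.
# Hypothetical helpers and data; not an OWL reasoner or SHACL validator.

facts = {("product42", "hasPrice", "10 EUR")}

def known(triple):
    return triple in facts

def owa_query(triple):
    """Open World Assumption (OWL-style): a missing fact is unknown, not false."""
    return True if known(triple) else None  # None stands for 'unknown'

def cwa_query(triple):
    """Closed World Assumption (SHACL-style validation): missing means false."""
    return known(triple)

q = ("product42", "hasSupplier", "ACME")
print(owa_query(q))   # None  -> unknown under OWA
print(cwa_query(q))   # False -> missing data fails the closed-world check
```

This is the practical difference the article turns on: a validator must treat absence as failure (every product must have a price), while a reasoner may not conclude anything from absence.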
Everyone Is Wrong About NotebookLM stunspot at Medium
...If you're still treating NotebookLM like a smarter Google Docs or a homework explainer, you're using a fusion reactor to toast Pop-Tarts.
Medium is full of "10 ways NotebookLM will change your workflow!!!" written by people whose deepest experiment was asking it to summarize a PDF they found on Reddit.
The truth is stranger, bigger, and wildly more interesting:
NotebookLM isn't a chatbot. It's the first mainstream tool that rewires how humans interact with knowledge. It is not an assistant; it is a cognitive substrate — a system designed to think only inside the walls you build for it.
This is the article I wish existed before I had to write it.
...Traditional LLMs (ChatGPT, Gemini, Claude) are charismatic liars. They improvise like jazz musicians hopped up on probabilities. Beautiful, helpful — and catastrophically unconcerned with truth.
...NotebookLM will not answer questions that fall outside your uploaded sources. Its job is to build a private micro-universe — a "Closed-Resource Information Trust" — and reason only within it.
This restriction isn't a bug. It's the entire point.
...It makes NotebookLM:
- auditable (every claim is cited),
- private (your corpus never trains the model),
- hyper-reliable (hallucinations can only happen inside your data),
- architecturally honest (you see exactly what it's drawing from),
- and fit for high-stakes work in a way general LLMs simply are not.
Everyone else is chasing infinite knowledge. NotebookLM is chasing bounded accuracy.
And that changes everything.
...It makes you face a new, uncomfortable literacy: You are not prompting the model — you are designing its world.
...Lawyers discovered NotebookLM accidentally solves their worst pain points:
- sift thousands of pages,
- extract patterns,
- find contradictions,
- reconstruct timelines with forensic precision.
NotebookLM treats briefs and depositions as a sealed universe and works inside it with monk-like discipline.
It's the first AI that lawyers trust because it cannot wander off and invent things.
...Novelists and game devs are using NLM as:
- a lore bible,
- a long-term memory vault,
- a consistency cop who never sleeps.
They upload their entire universe — characters, magic systems, timelines, physics rules — and let NLM act as a continuity editor that can instantly answer...
...Upload 200 pages of market reports, company memos, UX surveys, internal research. Ask:
"What are the three emerging trends we haven't noticed yet?"
Suddenly you're extracting cross-document themes that nobody had time to see.
...Upload all the onboarding docs, all the SOPs, all the historical decisions. New hire asks:
"How do we actually do X here?"
NLM replies with citations from internal sources — not vibes.
This is not a chatbot. This is the company brain
...The high-end workflow today is:
Obsidian ==> NotebookLM ==> ChatGPT
Step 1 — Obsidian: Clip everything. Store everything. Keep long-term memory pristine.
Step 2 — NotebookLM: Upload curated source clusters (sometimes merged into monstrous "mega-docs"). Perform deep, grounded synthesis. Extract themes, contradictions, insights.
Step 3 — ChatGPT: Format, typeset, package, rewrite, design.
NotebookLM does the epistemic heavy lifting. ChatGPT does the presentational heavy lifting.
...Before NotebookLM, "AI literacy" meant:
"Can you write a clever prompt?"
Now it means something deeper:
- Can you architect a corpus?
- Do you understand context granularity?
- Can you maintain source hygiene?
- Can you identify inference vs citation reasoning?
- Can you build hybrid toolchains?
- Can you verify epistemic lineage?
- Can you separate data from narrative?
This is no longer ChatGPT's sandbox. This is a knowledge engineer's playground.
...NotebookLM is not flashy. It is not loud. But it is the quiet beginning of structured AI cognition — the version of AI that doesn't replace humans, but finally works like a thinking partner you can trust.
...NotebookLM is what happens when you give ordinary people access to a private reasoning engine that cannot lie without leaving fingerprints.
If you care about truth, synthesis, research, analysis, onboarding, sensemaking, world-building, documentation, pedagogy, policy, or institutional memory?
You should be paying attention.
Not because NotebookLM is perfect. But because it is the first consumer product that treats AI not as a storyteller... but as an epistemology machine.
And that, my dear humans, changes everything.
(commenter Olivia responds: You captured the one thing people keep missing: NotebookLM isn't about creativity, it's about controlled cognition. Once you see it as a micro-universe that thinks only with what you give it, everything clicks.)
AI Agents Need Guardrails O'Reilly
10 Signs of AI Writing That 99% of People Miss Alberto Romero
...The "tells" are not disappearing, however, merely migrating from simple vocabulary and syntactic choices to deeper structural, logical, and phenomenological layers. To spot AI-generated text today, you need to look past the surface and examine the machinery of thought itself (it helps to go along with the idea that they "think" at all).
Software in the Age of AI Louise Corrigan at O'Reilly
...With the rise of agents and Model Context Protocol (MCP) in the second half of 2025, developers gained the ability to use AI not just as a pair programmer but as an entire team of developers. The speakers at our Coding for the Agentic World live AI Codecon event in September 2025 explored new tools, workflows, and hacks that are shaping this emerging discipline of agentic AI. Software engineers aren't just working with single coding agents. They're building and deploying their own custom agents, often within complex setups involving multi-agent scenarios, teams of coding agents, and agent swarms. This shift from conducting AI to orchestrating AI elevates the importance of truly understanding how good software is built and maintained.
We know that AI generates better code with context, and this is also true of agents. As with coding workflows, this means understanding context engineering is essential. However, the differential for senior engineers in 2026 will be how well they apply intermediate skills such as product thinking, advanced testing, system design, and architecture to their work with agentic systems.
5xii25
What is semantic search? Mark Burgess at Medium
...Understanding (semantic) search is as much about the psychology of the user as it is about a technology. What precisely are our expectations? Recently, the emphasis has been on natural language responses, aping human capabilities—but if we offer a machine service which is to be used by another service, suddenly we are more interested in some kind of machine protocol responses, where there is no intuition. So what exactly are the semantics here, and to whom are they intended?...Without stories (history, anecdotes, scenarios, examples, etc), we don't feel that an object is alive or complete. This is an interesting philosophical point. When we think of some subject, perhaps a person, there is the core of the person — but, in addition, they are surrounded by a cloud of stories, ideas, and possessions that we view as being a part of them: the "extended" or "dressed" version that exceeds a mere barcode.
...In the geometry of a graph representation, each node is surrounded by a number of satellite nodes in "orbit" around it. By being associated with it, they express something about it. Particularly those that are tethered by CONTAINS and EXPRESSES vectors represent this cloud of local interactions that makes resources useful and interesting to others.
...The way we build knowledge, by exploring, is the process missing from most discussions that trivialize it as mere data in graphs. We need to be sufficiently aware that something is missing from our picture in order to want to look it up. That sense only comes from building explanations in our minds in the first place. So semantic search is the desire to complete those stories that matter to us. What's interesting (what research shows) is that the geometry of stories is quite simple. We don't need complex ontologies to form them — just four kinds of link.
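Burgess's picture of a core node "dressed" by typed satellite links can be sketched with a toy adjacency structure. Only the CONTAINS and EXPRESSES link types are named in the excerpt; the node names and helper functions below are invented for illustration.

```python
# Hypothetical sketch of a node surrounded by satellite nodes tethered
# by typed links. Data and helper names are made up for illustration.

from collections import defaultdict

graph = defaultdict(list)

def link(src, link_type, dst):
    """Record a typed link from src to a satellite node dst."""
    graph[src].append((link_type, dst))

link("Ada Lovelace", "EXPRESSES", "first published algorithm")
link("Ada Lovelace", "EXPRESSES", "notes on the Analytical Engine")
link("Ada Lovelace", "CONTAINS", "correspondence with Babbage")

def dressed(node):
    """The 'dressed' version of a node: its orbiting cloud of stories."""
    return [dst for link_type, dst in graph[node]
            if link_type in ("CONTAINS", "EXPRESSES")]

print(dressed("Ada Lovelace"))
```

The point of the sketch is only that a small, fixed vocabulary of link types is enough to assemble the "cloud" around a node, echoing the claim that complex ontologies are not needed to form stories.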
...We make maps because we are actually going somewhere: on a journey of the mind, through distributed documents and fragments of meaning.
...Meaning comes only from working with data, and turning facts into semantically rich experiences. This is what brings it to life. Technology can feed the process, facilitate it, but it can't replace it.
...AI is unlikely to mint new platform monopolies. It is likely to manure the next generation's digital commons. The bubble finances infrastructure and code that, post‑panic, underpins broad gains. That will be the case unless, monkey's paw-like, freely provided AI-service flows turn out to be very expensive indeed. We can build our own AI-tools to protect our attention from being harvested by the malignity of the Zuckerbergs. But if we do not, their AI tools will harvest our attention to our detriment.
...the GPU clusters can do lots of very useful very high-dimension flexible-form big-data classification analyses, which will change and enrich the world in ways we do not yet know—provided we can keep the Zuckerbergs of the world from hacking our brains to distract and commodify our attention in ways that make us sick.
The most important thing to grasp here is that this is the most likely trajectory because Google, Facebook, and Amazon are not spending money like water on the AI-buildout to make money, but rather to defend their existing platform-monopoly profit flows
6xii25
Talking With Paul Kedrosky Paul Krugman
...It's like the old Einsteinian expression of "spooky action at a distance," where it's not just the proximity, in terms of the very next thing that's coming, we call these "tokens," it's also about the entire holistic context in which that language is embedded in the context of the grammar. So things that are far away actually have a surprising influence in terms of what might be the next tokens.....in the context of these attention mechanisms that look at the entire body of where this information is embedded, whether it's English language or software or the law or any of these domains that you can actually get something that feels to us like, "oh, it understands what I'm thinking or understands the question I'm asking," which is really just a reflection of—in the context of these large corpuses—what prediction feels like. It feels like this is a kind of continuation of what a normal person would think. What's interesting is that when I have a colleague doing work on this, if you back sample who it thinks you are—if you think about it in the context of the training models—it has a rough sense that you're like a 37 year old guy on Reddit. That's the kind of person that it's—in this sense—doing the continuation for, because that's a big chunk of the training corpus. So if you back-engineer out of it what the data actually suggests about it, that can also tell you something
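Kedrosky's "action at a distance" point about attention can be made concrete with a toy dot-product attention step (pure Python, made-up vectors): the weight a position receives depends on how well its key matches the query, not on how close it sits in the sequence.

```python
# Toy dot-product attention: a token far back in the context can dominate
# the output if its key aligns with the query. All vectors are invented.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Weight each value by softmax(query . key); position plays no role."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Position 0 is the "distant" token, but its key matches the query,
# so it carries almost all of the attention weight.
keys   = [[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
values = [[9.0, 9.0], [1.0, 1.0], [1.0, 1.0]]
out = attend([5.0, 0.0], keys, values)
print(out)  # dominated by the first (distant) value
```

Real transformers add scaling, multiple heads, and learned projections, but the mechanism by which "things that are far away have a surprising influence" is exactly this weighted lookup.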
7xii25
The AI Time Bomb Will Lockett
...AI costs are really spiralling out of control. AI expenditure is set to double next year. By 2028, Morgan Stanley predicts that the AI industry will spend $2.9 trillion, the vast majority of which will require debt financing. According to J.P. Morgan, the AI sector will need $1.5 trillion in bonds (or debt financing), to cover the costs of constructing planned data centres alone....This is where CDS [Credit Default Swaps] comes in. These are effectively insurance policies for people who hold bonds, typically issued by investment banks or similar. So, if a bondholder is worried the government or corporation isn't going to pay out and will fold on their obligations, they take out a CDS. This will cost them an annual premium, which is confusingly called a spread, based on how risky the debt is considered to be. Spreads are usually presented as a percentage cost of the bond value, so if a CDS has a spread of 1%, it will have an annual premium of 1% of the value of the bond covered. But in return, if the bond issuer defaults, the organisation providing the CDS will reimburse the full bond value.
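The spread arithmetic described in the excerpt is simple enough to sketch. A minimal Python toy, with illustrative numbers that are not from the article:

```python
# Minimal sketch of the CDS arithmetic above: the spread is an annual
# premium quoted as a fraction of the covered bond value.

def annual_cds_premium(bond_value, spread):
    """A spread of 0.01 (1%) means a premium of 1% of the bond value per year."""
    return bond_value * spread

# A 1% spread on a $10M bond costs $100k per year in protection.
print(annual_cds_premium(10_000_000, 0.01))  # 100000.0
```

In exchange for that premium, the CDS issuer reimburses the full bond value if the issuer of the bond defaults, which is why rising spreads signal rising perceived risk.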
...The first half of the year was when the majority of the more than $200 billion in equity finance was raised. But AI soaked up significantly more than half of the available global venture capital in the first half of 2025. So this source of cash wasn't enough to satiate AI's exponentially growing cash needs. Since then, bonds have been used to rapidly fill that gap, and as a result, half of the $200 billion in bonds the industry has raised this year have materialised since September.
...You see, as the CDS spread (annual cost) increases, bond buyers demand a higher interest to offset this cost. But CDS issuers see this higher interest rate as a riskier bond, so they raise the price of CDS. This is a vicious circle and means that the cost of borrowing for the bond issuer can skyrocket. We have already seen this. Take CoreWeave. Its CDS spread is well over 5%, which has sent its cost of borrowing through the roof. In fact, the interest on its bonds is nearly 10% and rising, and they are set to pay a billion dollars in interest on their debt this year alone!
CoreWeave Wikipedia
I looked into CoreWeave and the abyss gazed back theverge.com
Billionaire Philippe Laffont Is Selling CoreWeave and Buying This Unstoppable Artificial Intelligence (AI) Stock Instead The Motley Fool
...CoreWeave was the beneficiary of Nvidia's ( NVDA 0.56%) decision to diversify its customer base beyond the big tech companies. Nvidia invested in the small cloud provider and then started offering CoreWeave early allotments of graphics processing units (GPUs) ahead of the hyperscale customers.That relationship got another boost in September, when Nvidia agreed to buy any excess AI capacity from CoreWeave in a $6.3 billion deal. That, in turn, gave CoreWeave greater license to build new data centers faster.
8xii25
NotebookLM Coming to Gemini: Google's Next Integration Revealed Mihailo Zoin at Medium
...The human hippocampus doesn't store details — it creates indices that enable rapid access to information. This hack replicates that system: NotebookLM becomes the "external hippocampus" holding details, while Gemini functions as the prefrontal cortex using those indices for decision-making. Neuroscientific research shows that memory externalization frees cognitive resources for creative thinking and problem-solving....NotebookLM excels at consolidation and stable memory. Gemini excels at dynamic analysis and generating new perspectives. Their combination isn't 1+1=2 — but an exponential leap in what you can achieve with your knowledge.
NotebookLM Hacks: Why Personal Websites Are Dying and How to Leverage This Mihailo Zoin at Medium
MIT 'The Algorithm' (Technology Review)
...There's also a lot of buzz around world models, a type of generative AI with a stronger grip on how the physical world fits together than LLMs display....it already seems clear that OpenAI's chain of interdependencies within Silicon Valley make it too big to fail
What If? AI in 2026 and beyond O'Reilly
9xii25
Barnum's Law of CEOs Charlie Stross
...Today's LLMs are not intelligent. But they're very good at generating plausible-sounding arguments, because they're language models. If you ask an LLM a question it does not answer the question, but it uses its probabilistic model of language to generate something that closely resembles the semantic structure of an answer. LLMs are effectively optimized for bamboozling CEOs into mistaking them for intelligent activity, rather than autocomplete on steroids. And so the corporate leaders extrapolate from their own experience to that of their employees, and assume that anyone not sprinkling magic AI pixie dust on their work is obviously a dirty slacker or a luddite.
The Mathematical Foundation of AI: What Everyone Misses Rohan Mistry at Medium
...the real intelligence is written in the language of vectors, gradients, and probabilities...Every AI breakthrough in history came from a mathematical insight.
Code is just implementation. Math is the foundation.
- Backpropagation (1986): Calculus chain rule applied to networks
- CNNs: Mathematical convolution operations
- Transformers (2017): Matrix multiplication and attention mechanisms
- Diffusion Models: Stochastic differential equations
...In AI, everything becomes a vector. Words, images, audio, user preferences: all vectors.
Vectors exist in geometric space. Position means meaning. Similar concepts end up close together.
That famous equation? King - Man + Woman = Queen
This actually works because vector arithmetic moves you through semantic space. Pure linear algebra.
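A minimal sketch of that arithmetic, with invented 2-D vectors standing in for real embeddings (which have hundreds of dimensions learned from data):

```python
# Toy version of "King - Man + Woman = Queen" with made-up 2-D vectors.
# The two axes are invented: roughly [royalty, male-ness].

def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]

def nearest(vec, vocab):
    """Return the vocabulary word whose vector is closest to vec."""
    dist = lambda w: sum((x - y) ** 2 for x, y in zip(vec, vocab[w]))
    return min(vocab, key=dist)

vocab = {
    "king":  [1.0, 1.0],
    "queen": [1.0, 0.0],
    "man":   [0.0, 1.0],
    "woman": [0.0, 0.0],
}

result = add(sub(vocab["king"], vocab["man"]), vocab["woman"])
print(nearest(result, vocab))  # queen
```

Subtracting "man" removes the gender component while keeping royalty; adding "woman" lands on the vector for "queen", which is the "moving through semantic space" the excerpt describes.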
...Every neural network operation is matrix multiplication.
...How does AI "learn"? Calculus. Specifically, derivatives and gradients.
The Learning Process
- Make a prediction (probably wrong)
- Calculate the error
- Find the gradient (which direction reduces error?)
- Update parameters (step in that direction)
- Repeat millions of times
This is gradient descent — pure calculus.
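The steps above can be sketched as a minimal gradient-descent loop on a toy one-parameter model (all numbers invented): fit w so that the prediction w * x matches y, with loss (w*x - y)^2 and gradient 2*x*(w*x - y).

```python
# Minimal gradient descent on a one-parameter model: pred = w * x.
# Loss = (w*x - y)^2, so the gradient w.r.t. w is 2*x*(w*x - y).

x, y = 2.0, 6.0   # one training example; the true w is 3
w = 0.0           # start with a (probably wrong) parameter
lr = 0.05         # learning rate: size of each step

for _ in range(200):      # "repeat" (millions of times in practice)
    pred = w * x          # 1. make a prediction
    error = pred - y      # 2. calculate the error
    grad = 2 * x * error  # 3. find the gradient
    w -= lr * grad        # 4. step against the gradient

print(round(w, 3))  # 3.0
```

Real networks do the same thing with millions of parameters at once, using the chain rule (backpropagation) to get each parameter's gradient.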
...AI doesn't "know" facts. It calculates probabilities... This is why AI sometimes makes mistakes. It's not retrieving truth — it's generating probable sequences.
(and much more...)
Why Being Weird Is Your Superpower Alberto Romero
...Enter AI. I bet you are a bit anxious about it. You've been taught to compete at being "above average" on those narrow, pre-defined dimensions at which AI now excels (you'd rather be good than weird). The anxiety you feel in this hypercompetitive world is not so much about your lack of worth as it is about the obsolescence of the game you're forced to play; you can't compete against the machine in either physical or cognitive games, so you don't know what to do. The anxiety is also about this sudden dizzying freedom, in a Kierkegaardian sense, of being anything else besides a pawn in society's terms (illusory freedom at that, for you eventually realize that choosing safety implies that your dreams will remain dreams). This anxiety is also about your being terrified to stand out, to be deviant—and, at the same time, not to be at all....AI is the median human output turned into a tool; the average incarnated at the level of 1) what we dare to say, 2) what the machine focuses on, and 3) what it is allowed to produce. It is the median human output turned into a productive engine; the engine of what Erik Hoel has called the "Semantic Apocalypse,"
...We have spent the last few decades creating a culture that is "overfitted," which is Hoel's core concept (an apt term that finds its origins in AI systems that suffer from being stuck in the distribution they've been trained on: they produce the kinds of things whose shape they already know). Our culture is, thus, highly optimized and terrified of probing the limits of its proven formulas, just like we are. We replaced the "slow, measured (but robust and generalizable) decisions of human consciousness," as Hoel puts it, with the hyper-efficiency of markets and machines. The natural consequence of an overfitted culture is another AI term, "mode collapse": superhero movies that all look the same, "Instagram faces" as the convergence of someone's ideal of beauty, and all those examples that Mastroianni gathered that Hoel also quotes in his essay.
...AI chews on the corpus of English and spits out a smooth, featureless paste; it overfits data from the internet (not the best source), and incurs mode collapse, and then, because we read ChatGPT's output all the time, we start speaking in the same register, using the same phrasings. English is on a slippery slope toward a "death by consolidation; death by convergence."
...you could simply not build your identity around it or against it, but aside from it. As I said in the beginning, use it as a counterpoint from which to grow indifferent to it. By being weird, you set yourself apart; you won't be confused with today's AI or absorbed by the ones to come. By being weird, you will expand your catalog of skills beyond what the cultural mainstream deems safe and avoid falling for the standard cliches an overfitted AI falls into.
By embracing the weird, you immediately exist out of distribution: a line among dots, a blood-and-flesh person among stick-men, a main character among NPCs, a hypercube in this low-dimensional world. Not just better but rather untouchable to a society that has forgotten that everyone is above average.
...That indescribable dullness we feel but can't define is AI's anti-weirdness manifest. I heard people complain that they don't know how to take AI models out of their tendency toward the average. I say that is good. It is good because we've been living our whole lives inside a hole in the ground—the stinking hole of normativity-through-performance—but now that it has been spoiled by AI and by a risk-averse, overfitted culture, we've been pushed out. You just have to come to terms with it. (If you're reading this, thank you, AI.)
One of the great cultural effects of AI will be, contrary to popular belief, that we will start rewarding weirdness more than we ever did. Never forget: Humans always win in the end because humans define the terms for what winning is. The goal is no longer to be better than others but to be less like everyone else.
...who — or what — are we, now that we are extended by, say, our phones?
Explained McLuhan, "All media are extensions of some human faculty — psychic or physical. The wheel is an extension of the foot. The book is an extension of the eye. Clothing, an extension of the skin. Electric circuitry, an extension of the central nervous system. Media, by altering the environment, evoke in us unique ratios of sense perceptions. The extension of any one sense alters the way we think and act — the way we perceive the world. When these things change, men change."
...And he wasn't just talking about communications media. He was talking about every thing we make, which then make us. As Eric McLuhan (Marshall's son and collaborator) explains in Laws of Media: The New Science, "media" is "everything man[kind] makes and does, every procedure, every style, every artefact, every poem, song, painting, gimmick, gadget, theory — every product of human effort."
...In the online publishing world today, journalism has largely been supplanted by "content production," because that's what social media and its publishing co-dependents get paid by advertising robots to produce in the world.
...Social media has also REVERSED us into tribalism. Algorithms designed to drive up engagement amplify both homophily (the tendency of like to group with like) and argument (because that's especially engaging). So it wants us all to be both friends and trolls.
All the "news" we produce and consume amounts to self-reinforcing of group opinions and prejudices, both within our echo chambers and outward toward others with opposing opinions and prejudices. Mocking and dismissing other groups becomes a stock in social trade.
...Beyond clothing, shelter, and tools made of sticks and stones, all the artifacts that fill civilized life are ones most of us didn't know we needed until some maker in our midst invented them.
And some tools — extensions of our bodies — don't become necessities until somebody invents a new way to use them. Palm, Nokia and Blackberry all made smartphones a decade before iPhones and Androids showed up. But none of them mothered invention for everyone. iPhones and Androids did that with something those earlier phones lacked: apps.
Apps retrieved the original ground laid down by programs (aka applications) on computers, obsolesced old-fashioned telephony, enhanced damn near everything you do with a connected rectangle, and reversed into capture in the walled gardens of Apple and Google, plus those of the phone and cable companies that are our gateways to the Internet.
...My wife compares the experience of being "on" the Internet to one of weightlessness. We have no weight there because the Internet is not a thing, has no gravity, and is by design not even a there. (Gertrude Stein's most famous quote, "There's no there there" applies more to the Internet than to any other milieu of her time or ours.)
In adjusting to this weightless condition, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now.
But meanwhile, we are not the same. We are digital beings now, and we are being made by digital technology and the Internet: no less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world but replace and undermine them as well, in ways we are only beginning to learn.
NotebookLM Has a New Feature for Visual Learners
...You can turn your source materials into helpful infographics that give you a clear picture—literally—of what the PDFs, websites, videos, or other materials you're studying or organizing are about.
10xii25
Buy the GPUs, and the Oppenheimers Will Just Appear Tim O'Brien at Medium
...Surrounding the real work is a halo of hype, imitation, and wishful thinking that looks uncannily like those wooden runways. Everywhere you look, someone seems convinced that if we arrange the chalkboards correctly, the Einsteins will materialize. If we buy enough GPUs and call ourselves "AI-first," the Oppenheimers will appear, and the miracles will obediently follow....Generative AI has the same geometry. The research is serious, and the models are astonishing, but the culture around them has drifted into ritual. People use the vocabulary of understanding — "LLMs," "transformers," "alignment," "AGI" — the way islanders carved radios out of coconuts. Startups rebrand themselves "AI-powered" after adding a wrapper around an API. Influencers explain models they've never tested and couldn't reproduce if their lives depended on it.
NotebookLM: New Option to View Custom Prompts Mihailo Zoin at Medium
NotebookLM Exposed How News Sources Manipulate You Mihailo Zoin at Medium
What Peter Thiel's Nvidia Exit Really Means! Neela Singh at Medium
...The trick is simple — companies are buying Nvidia chips that become obsolete in two to three years, but they're depreciating them over five or six years. Meta recently extended the estimated useful lives of certain servers and network assets from four or five years to 5.5 years.
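The arithmetic behind "the trick" is plain straight-line depreciation, and stretching the schedule flatters the income statement. A sketch with a hypothetical $10 billion chip purchase (not Meta's actual figures):

```python
# Straight-line depreciation: the same hardware bill looks very different
# on paper depending on the assumed useful life. Figures are hypothetical.
cost = 10_000_000_000  # $10B of GPUs

annual_expense_3yr = cost / 3  # realistic ~3-year replacement cycle
annual_expense_6yr = cost / 6  # stretched 6-year schedule

# Stretching the schedule halves the yearly expense hitting profits,
# so reported earnings look this much better each year:
flattered_by = annual_expense_3yr - annual_expense_6yr
```

If the chips really are obsolete in three years, the unexpensed half of their cost is still sitting on the books when they go dark.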
11xii25
OpenAI, Anthropic, Others Receive Warning Letter from Dozens of State Attorneys General gizmodo
They want the companies to "mitigate the harm caused by sycophantic and delusional outputs" from AI products.
A new preprint server welcomes papers written and reviewed by AI
...The AI reviewing the AI. Papers co-authored by artificial intelligence have a new home: a new open platform called aiXiv. It accepts both AI- and human-authored work, uses built-in AI reviewers for baseline quality screening, and guides authors through revisions based on the chatbots' feedback. "AI-generated knowledge shouldn't be treated differently," said one of aiXiv's creators. "We should only care about quality—not who produced it."

The AI Bubble Should Have Never Existed In The First Place Will Lockett
So much money has been poured into AI that we have effectively bet the entire economy on its success. That is a huge problem. Not just because AI is a fundamentally flawed technology that will never be profitable, never live up to the speculation, and is creating a debt bomb on a scale never seen before....AI will take a huge number of jobs and crash the economy from the bottom up. It is a moronic lose-lose situation. But it didn't have to be this way. Hell, it should never have been this way. If companies had just taken an iota more responsibility, the ticking economic time bomb that is the AI bubble would have never happened, and you would be exponentially better off.
The Reverse-Centaur's Guide to Criticizing AI Cory Doctorow
...Start with what a reverse centaur is. In automation theory, a "centaur" is a person who is assisted by a machine. You're a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete. And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver's eyes and take points off if the driver looks in a proscribed direction, monitor the driver's mouth because singing isn't allowed on the job, and rat the driver out to the boss if they don't make quota.
The driver is in that van because the van can't drive itself and can't get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn't just use the driver. The van uses the driver up.
Obviously, it's nice to be a centaur, and it's horrible to be a reverse centaur. There are lots of AI tools that are potentially very centaur-like, but my thesis is that these tools are created and funded for the express purpose of creating reverse-centaurs, which is something none of us want to be.
Google NotebookLM, Your New Learning Super-Assistant Mihailo Zoin at Medium
...NotebookLM is different. It's like your personal, very smart librarian who reads ONLY the books that YOU bring to them....What can you give it?
- PDF files: lessons or books in PDF format
- Website links: National Geographic Kids or some reliable site
- Google Docs: your notes that you typed in class
- Text: simply copy and paste
"Architects of AI" is Time's Person of the Year
12xii25
This year we were drowning in a sea of slick, nonsensical AI slop Annalee Newitz at New Scientist
...Slop is ruining lives and jobs. And it's ruining our history, too. I write books about archaeology, and I worry about historians looking back at media from this era and hitting the slop layer of our content, slick and full of lies. One of the important reasons we write things down or commit them to video is to leave a record behind of what we were doing at a given period in time. When I write, I hope to create records for the future, so that people 5000 years from now can catch a glimpse of who we were, in all our messiness. AI chatbots regurgitate words without meaning; they generate content, not memories. From a historical perspective, this is, in some ways, worse than propaganda. At least propaganda is made by people, with a specific purpose. It reveals a lot about our politics and problems. Slop erases us from our own historical record, as it's harder to glean the purpose behind it.
Perhaps the only way to resist the slopification of our culture right now is to create words that have no meaning. That may be one reason why the Gen Z craze for “6-7” has percolated into the mainstream. Even though it isn't a word, 6-7 was declared “word of the year” by Dictionary.com. You can say 6-7 anytime you have no set answer to something — or, especially, for no reason at all. What does the future hold? 6-7. What will AI slop do to art? 6-7. How will we navigate a world where jobs are scarce, violence is on the rise and climate science is being systematically ignored? 6-7.
I would love to see AI firms try to turn 6-7 into content. They can't, because humans will always be one step ahead of the slop, generating new forms of nonsense and ambiguity that only another human can truly appreciate.
The AI Mapper and the AI Map Doctor mapsmania
...Web-Mapper GPT and Map Doctor GPT...Both Web-Mapper GPT and Map Doctor GPT are free to use, but you will need to have a registered account with ChatGPT
NotebookLM + Google Drive: Digital Archaeology of Hidden Treasures Mihailo Zoin at Medium
Every year, we accumulate gigabytes of digital data — reports, presentations, photos, meeting notes. Your Google Drive likely contains hundreds of documents you haven't opened in years. Among these dusty digital "boxes" lie valuable resources, but traditional organization approaches fail...Digital archaeology applies principles of systematic investigation to our digital remains. Unlike traditional archaeology that requires years of training, this method needs only NotebookLM (Google's free AI tool) and a few minutes of your time.
...NotebookLM was primarily designed for creating smart notes — not as a system for discovering forgotten resources. But some of the best tools emerge unintentionally. Post-it notes, microwave ovens, and even Viagra — all products whose most popular use wasn't part of the original plan.
NotebookLM's Drive integration has become a revolutionary tool for those of us with digital chaos. Google constantly improves AI capabilities, meaning our digital archaeologist will become increasingly capable of finding and interpreting our digital artifacts.
...NotebookLM acts as an “external neural network” that takes on the burden of initial filtering, leaving your brain only the final decisions.
...Digital archaeology is just the first step. After organizing, you can move on to:
- Creating digital “museums” — NotebookLM notebooks that curate your most valuable resources
- Knowledge synthesis — combining information from different documents
Michael Burry's Nvidia Warning Doesn't Go Far Enough Will Lockett
...Okay, so what happened to Cisco? Well, it was possibly the biggest loser in the dot-com bubble.
Most people focus on the unprofitable internet startups with artificial or inflated demand, like pets.com, when they talk about the dot-com bubble. But, arguably, Cisco is far more important. They were a wildly profitable and dominant provider of networking hardware. Throughout the '90s, investors speculated that the internet could be an economic game-changer and tried to capitalise on it early, causing demand for such infrastructure to skyrocket. By 2000, between 70% and 80% of all the fibre optic cables and switchboards that made up the internet were built by Cisco. This drove profits and speculation wild, pushing Cisco's valuation to $500 billion by March 2000, making it the most valuable company on the planet at the time.
The AI bubble mirrors this almost perfectly. This time, AI startups like OpenAI are wildly unprofitable due to artificial and inflated demand. And this time, the infrastructure they are driving demand for isn't Cisco's networking but Nvidia's AI datacentres. Like Cisco was in the '90s, Nvidia is a mature and highly profitable company that dominates this space. Nvidia controls between 80% and 90% of the AI chip market. Hell, like Cisco once was, Nvidia is now the world's most valuable company, with a peak valuation of $5 trillion in October 2025!
...Cisco had supplied far too much infrastructure trying to meet this mythical demand, and much of the fibre-optic cable it had laid over the past decade remained unused for years. This unused infrastructure became known as “dark fibre”, and it was a major problem. In fact, dark fibre made up over 60% of the US's fibre optic network in 2007.
...Like Cisco, the infrastructure demand Nvidia is meeting isn't real but artificial and speculative.
...Nvidia is a modern-day Cisco. The demand they are meeting is non-existent, and they are creating a huge supply-side glut that will bite them in the arse very soon.
...the entire AI industry is overstretching the lifespan of AI chips on their books and severely underestimating depreciation. Rather than having a more realistic lifecycle of up to three years, at which point more efficient and more powerful chips will have been released, making it uneconomical to keep running this 'old' infrastructure, they are dragging out their lifespan to six years. This means that the already wildly unprofitable business of owning and operating this AI infrastructure (read more here) is actually far, far less profitable than we all thought. When this issue comes to the fore, it could straight up pop the bubble by drying up the investment needed to buy Nvidia's chips.
...the trillions of dollars' worth of AI infrastructure being built will be obsolete and need to be totally replaced in just a few years.
This is worse than supply-side glut — this is a rapidly approaching stranded asset tsunami! That won't suppress prices to a critical point; it will bankrupt Nvidia's customers.
The Scientist Building AI Just Admitted What's Really Coming by 2030 (And Why They Can't Stop It) Tasmia Sharmin at Medikum
...[Jared] Kaplan warns that humanity must decide by 2027-2030 whether to allow AI systems to recursively self-improve and design their own successors, calling it "the ultimate risk."...Kaplan used his six-year-old son as an example. "My six-year-old son will never be better than an AI at academic work such as writing an essay or doing a maths exam," he told The Guardian.
Never. That's a parent looking at his child and saying the fundamental skills we've valued for centuries, writing and mathematics, are already obsolete for his son's generation.
13xii25
The US economy—China prepares to deliver the sucker punch Steven Boykey Sidley at Medium
...I am going to argue that China is not going to sit back and passively watch the US economy weaken itself — it is going to deliver an additional sucker punch as the US struggles to staunch the bleeding from its own self-inflicted wounds. You know the homily, "don't kick a man when he's down"? I suspect that China does not subscribe to that. First, here is a summary of the Varoufakis case. Yanis Varoufakis is sounding the alarm on what he describes as the "controlled demolition" of the American economy. In his assessment of the US landscape in 2025, he argues that the nation is facing a perfect storm of its own making. It began with April's aggressive tariffs, which acted less as protectionism and more as a $430 billion tax on domestic businesses, crushing margins and triggering immediate layoffs. This supply chain shock was immediately compounded by mass deportations that drained the labour force, sending agricultural prices skyrocketing simply because there were no hands left to harvest the crops.
It gets worse. Rather than stabilising the ship, the government accelerated the chaos with DOGE-catalysed austerity. Varoufakis argues that the slashing of 300,000 federal jobs under the guise of efficiency removed a critical economic stabiliser just as the private sector began to buckle. Services frayed, such as veterans' health care access and air traffic control. Meanwhile, a $38 trillion national debt has trapped the Federal Reserve, forcing interest rates to remain punishingly high to service bondholders. This has frozen the housing market and suffocated investment.
Varoufakis concludes that these aren't isolated problems; they are a "doom loop". Tariffs and labour shortages drive up inflation, which drives up interest rates, which crush businesses and housing, leading to layoffs. Government cuts then remove the demand that could have saved some of those businesses. It is a vicious cycle that feeds on itself and will force the US into a severe recession or even depression.
...The cost difference is staggering. America's Project Stargate plans $225 billion (R4 trillion) for computing power. China could achieve the same capability for $861 million (R15 billion) using cheaper components and optimising the entire ecosystem together. More crucially, because they are working as an integrated system, they would deliver roughly 6.7 times more practical computing power than America's approach. This reflects a fundamental philosophical difference.
Add in China's massive renewable energy advantage (paying a fraction of US electricity rates) and control over semiconductor supply chains, and the Western tech dominance narrative looks increasingly shaky. China isn't trying to win a head-to-head chip race — it has relocated the battlefield entirely. (And who can forget the humiliating spectacle of Trump arriving for his summit with Xi armed with his Nvidia negotiating chip in his back pocket, to which Xi basically said — no thanks, and we're not even going to buy any of your other chips anymore either).
So back to my point. Where exactly is the sucker punch? The US has bet its entire economy on AI. Trillions have been committed by companies and institutions (and the US government) to winning this race. If (or perhaps when) the bubble pops, it will be deafening.
...China's long view means it can afford to give its AI away essentially free, which it is doing. Its big LLMs are open source. Many US tech companies (especially startups) are already using Qwen from China's Alibaba rather than any US LLM, because it is free and just as good.
It is China that will pop this bubble by dumping free AI on the US market. OpenAI and the rest will not be able to compete commercially with higher-priced offerings of what is essentially the same AI service. They will not live up to their hyperbolic promises. And tariffs can do nothing about that, because the AI doesn't come through ports; it arrives over the Internet in bits.
The NotebookLM Workflow That Cut My Research Time in Half Rahul Gaur at Medium
...NotebookLM transforms research from a filing system into a conversation — turning static documents into an interactive partner that answers questions, finds contradictions, and reveals connections you'd miss
How NotebookLM Turns Smartphone Photos Into Deep Analysis Mihailo Zoin at Medium
Your 10-minute experiment (that's it; don't overthink it):
- Pick two contrasting objects in your space — 1 minute
- Upload to NotebookLM mobile as sources — 2 minutes
- Request style analysis — "Compare design philosophies in these images" — 5 minutes
The first analysis feels like magic. By the tenth, you'll structure your visual thinking around what AI can extract.
...Most productivity tools optimize for capturing more information. This system optimizes for extracting meaning from information you're already surrounded by.
Your smartphone camera isn't just for memories anymore. It's a direct pipeline to AI analysis that sees patterns you'd miss.
...If you want to dive deeper into the newest NotebookLM features and learn advanced workflows, you can explore my growing collection of premium guides, daily updates, and step-by-step strategies. Join NotebookLM Mastery ($5 per month)
14xii25
Mark Burgess (author of Smart Spacetime)
Spacetime: Formalizing the semantics of space and time, for cognition and measurement (a route to knowledge representation)...A semantic spacetime is a discrete graph, which evolves, and whose properties vary from point to point. It can be described in terms of its topological, dynamical, and semantic functional properties.
... We measure the world according to either timelike or spacelike models:
- Timelike is what we call cognition: there is a continual stream of input events, which alter our understanding and semantic framework in real time. This is a non-equilibrium view of observation.
- Spacelike is what we call ensemble statistics: we try to divorce experiments from time by averaging over repeated experiments under invariant conditions, where time plays no role (reversibility or indistinguishability).
SSTorytime at github
What is Semantic Search? at Medium, 24xi
Before the current wave of "AI", we used search engines to perform ranked text searches on web pages, tables of contents and indices in books to get us into the right "ballpark" or locate the right context. From there, we would feel our way forward, clicking and browsing. This worked surprisingly well, because natural language expresses semantics extremely well on the whole. It's a lot more efficient than trying to follow menus of "typed and categorized" information towards an intended goal, because categorization can only handle contexts by enumerating them.... Without stories (history, anecdotes, scenarios, examples, etc), we don't feel that an object is alive or complete. This is an interesting philosophical point. When we think of some subject, perhaps a person, there is the core of the person — but, in addition, they are surrounded by a cloud of stories, ideas, and possessions that we view as being a part of them: the "extended" or "dressed" version of the person. The latter is the version that's actually alive in us, and is most interesting. A version that exceeds a mere barcode.
... The technical aspects of search expression are an open book, ripe for tailoring to individual needs. The most famous search language is the Structured Query Language (SQL). It's a domain specific language with a simple core and many layers of bloated patches to paper over logical flaws. The truth is, though, that no one has managed to do any better when designing a domain specific language. The reason is that domains tend to expand to fill the rest of the world they're embedded in as soon as something becomes successful. Those creeping interactions and relationships are both a measure of success and failure to capture reality through precision. It shouldn't be a surprise: the world is messy. We seek “suitable idealizations” in order to form “good enough” stories, but we don't delude ourselves into believing they are “truth”. They are merely useful, in an evolving picture.
... Scanning information is a machine process, but comprehension is a human journey. The former does nothing to help the latter unless there is a process of human ingestion in between.
... What we mean by semantics, in the context of “semantic search” is “our own intended meaning” — i.e. What something means to ourselves, in the moment. We may or may not care what it means to others.
... semantics have only four main kinds of vector: i) causal ordering, ii) containment or composition, iii) local or individual attributes, and iv) equivalences through similarity or proximity. Of these four, experience shows that richness of semantics is usually dominated by iii) or local attributes. Perhaps this is because we are “things focused” most of the time. We gather information about things and tend to organize that information hierarchically into taxonomies and archives. We spend far less time documenting what things do, or how they come together. A natural reason for that is that this is harder. Local attributes are more invariant than global interactions, so they are easier to document. SSTorytime doesn't attach taxonomic “types” to things. It has just four “meta classes” of link (leads to, contains, expresses, similar to) and these implicitly induce 3 categories of node (events, things, and concepts).
... there are four ways we ask questions in practice, when seeking enlightenment. A retrieval system should address these:
* Random lookup of a node name (a quicker reminder, or a shot in the dark: which nodes have these properties?)
* Exploring the orbits/neighbours of ideas, sequences of steps along a path (tell me more about neighbours).
* Solving for paths between end points, when we don't know the middle: explanations, like a GPS finder (what's along the route?).
* Browsing and reading notes at your leisure (invested curiosity).
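The third mode above, solving for paths between endpoints, is ordinary graph search. A minimal breadth-first sketch over an invented link set (the node names are illustrative, not from SSTorytime):

```python
from collections import deque

# A toy directed graph of ideas; edges map a node to its neighbours.
edges = {
    "question": ["research", "guess"],
    "research": ["notes"],
    "notes": ["answer"],
    "guess": [],
}

def find_path(start, goal):
    # Breadth-first search returns the shortest chain of hops —
    # the "route" between two ideas, like a GPS finder.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

route = find_path("question", "answer")
```

Everything on the returned path is "what's along the route": the intermediate nodes are the explanation the user didn't know to ask for.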
... We make maps because we are actually going somewhere: on a journey of the mind, through distributed documents and fragments of meaning.
... We inevitably dance around in a conversation with search algorithms that have a random element to them.
... When researchers and technologists imagine reasoning, they tend to jump immediately to "logic" — which is an extreme and contrived form of reasoning, used in mathematics, for formal proof. But mathematical proof is not real world proof, it's an artificial kind of truth that is based on laying out axioms and assumptions that are 100% trustworthy. That isn't the situation in human understanding or communication.
... Many authors, who try to work with ontologies, rightfully recommend "controlling the vocabulary" used to represent knowables. But this is also pushing the work onto the user. The tool is supposed to work for the user, not the other way around
... Semantics, or interpreted meaning, is a subtle art that is not precisely encodable. For example, SIMILAR or NEAR relations tend to suggest that two things must be literally close together.
... the technical obsession with statistics in the latter 20th century has made the term "correlation" into a commonplace specific technical idea. Without specialized jargon, the term would be "co-relationship" which expresses the mutuality of an on-going interaction between the two phenomena, whereas correlation only makes one think of datasets and overlap integrals, ignoring its general meaning.
... There are those who read for pleasure, and those who don't. For some, information is just a ladder to climb over an obstacle in passing. The arrival of LLMs has allowed us to take the human out of this loop and execute searches as a batch process, guessing a user's intent and emitting an entire essay on a subject, like it or not. Whether one considers this to be good or bad depends on your "type". At the very least, it's a change of strategy. Any technology has to decide whether it's working with a human or instead of a human. Maps help us, no matter our preference.
... Meaning comes only from working with data, and turning facts into semantically rich experiences. This is what brings it to life. Technology can feed the process, facilitate it, but it can't replace it.
The End of the AI Era Rohan Mistry at Medium
...The world will declare "the end of the AI era" on one exact day: the day AI becomes so woven into civilization that turning it off would collapse the world.
This is not a takeover.
This is not a rebellion.
This is not sci-fi.
This is infrastructure dependency.
And it will happen quietly.
Here are the signs of irreversibility:
- Governments Cannot Function Without AI; Policy modeling, crisis management, economic simulations — AI becomes the backbone of decision-making.
- Global Trade Stops Without AI; From shipping routes to port logistics to global supply chains, AI agents coordinate everything.
- Economies Become AI-Sustained; AI predicts markets, optimizes interest rates, stabilizes inflation, and balances resource flow.
- Companies Collapse Without AI Automation; 90% of business operations run on AI-driven workflows.
- Security Depends Entirely on AI Systems; From cyber-threat prevention to border analysis to internal monitoring — AI replaces traditional security layers.
- Healthcare Cannot Operate Without AI Diagnostics; Diagnosis, treatment planning, medication optimization — the entire medical system becomes AI-first.
This is the real end of the AI era:
Not because AI becomes too powerful.
But because humanity becomes too dependent.
AI stops being "technology."
It becomes a global nervous system.
Like electricity.
Like the internet.
Like oxygen for society.
At that moment, the AI era ends — because something far bigger begins.
7 NotebookLM strategies that turn information chaos into clarity Mihailo Zoin at Medium
...Most people use NotebookLM as passive storage — upload document, get summary, done. But real power comes when you transform NotebookLM into a proactive collaborator.
15xii25
NotebookLM 2025: The Only Practical Guide You Need To Turn Chaos Into Clear Thinking Rahul Gaur at Medium
...NotebookLM actually reads the documents you upload — your PDFs, your notes, your slides, your links, your transcripts — and gives answers grounded in those sources, with citations you can verify. This guide will show you exactly how NotebookLM works, step-by-step, when it outperforms ChatGPT, Notion AI, or Perplexity, where it fails, and how to use it to turn your scattered knowledge into clear thinking.
...NotebookLM supports:
- PDFs
- Google Docs, Sheets, Slides
- Copy-pasted text & notes
- Web links (select pages and articles)
- YouTube videos (transcript-based)
- "Discover sources" results you choose to import
You can mix formats inside one notebook. The model then reasons across them together.
NotebookLM's Infographics: The feature we've been waiting for Mihailo Zoin at Medium
...NotebookLM already offers impressive artifacts: Mind Maps for concept relationships, Audio Overviews for listening on the go, Timelines for tracking evolution. But there's always been a gap — the visual communication format that everyone needs but currently requires external tools to create....Google's Nano Banana model, previously used for creative editing in the “Flow” project, will power these visual generations. This isn't generic AI image generation — it's purpose-built for information visualization.
I Thought AI Output Was Free to Use. I Was Wrong Onchain Journals
...Assumption #1: If I create it with AI, I own it.
Wrong. U.S. copyright law requires human authorship. The Copyright Office has explicitly stated since its March 2023 guidance that works created entirely by AI cannot be copyrighted. They're in the public domain.
...Assumption #2: AI companies have the legal right to use whatever data they trained on.
Also wrong. Or at least, it's being heavily contested in court right now with over $20 billion in combined damages being sought across multiple lawsuits as of December 2025. AI models were trained on billions of copyrighted images, articles, books, and artworks scraped from the internet — most without permission, none with compensation.
Artists and creators are suing, claiming theft. AI companies are claiming "fair use."
How the Biggest Monopoly in AI Is Keeping Its Throne in 2026 and Beyond gizmodo
As a new Bloomberg profile of the Dutch lithography giant ASML points out, the global economy is counting on one monopoly's ability to shoot lasers at tin droplets again, but do it even better this time. To be clear, the legal theory on monopolies doesn't say everyone who corners a market is a villain. If a firm simply makes a far superior product, and runs the table economically on that basis alone, antitrust law isn't supposed to kick in and break that company up for doing a good job. As previously noted by Gizmodo, ASML does a magic trick that, so far, no one else can.
...Bloomberg frames ASML as the bottleneck that AI flows through, because it is. If you want to make the most advanced AI chips, you need ASML's lithography machines, and you can literally accept no substitutes because there are none. Bloomberg describes ASML's chokehold on premium lithography in blunt terms: its market share is "a cool 100%," and ASML is "still the only company" capable of making the extreme ultraviolet (EUV) systems high-end chips require.
...these machines work by shooting lasers at airborne, microscopic droplets of molten tin, turning them into a plasma that generates EUV light brighter than the sun, and then use advanced optics to focus that light in order to print the intricate patterns onto the GPUs used to train and run AI systems.
So for the most part, Nvidia designs them, and TSMC manufactures them with ASML's machines.
...In some ways, ASML's products are the global economy right now. A recent New York Times report describes a "two-track economy" where "everything tied to artificial intelligence is booming" while everything else is stagnant, declining, or at least not exactly vibrant. AI—or more specifically, investors dumping money into AI—is almost the only thing making companies do things like build and hire people.
The Long Game: Why AI Implementation Is a Rebuild, Not a Rollout Nick Potkalitsky (via Brad DeLong)
...We are in the middle of a 3-5 year rebuild. Anyone offering easy solutions hasn't come to grips with the full complexity of what we're facing.
...Here's the foundational premise: Any work sent outside of class will be completed with the assistance of AI.
Any other starting point is a fool's errand at this point.
...We have two possible pathways:
- Go all in with AI: Adopt robust systems that serve as open engagement platforms where learning is proven through punctual, demonstrable moments of competency.
- Lean seriously into process pedagogy: Make the thinking visible, scaffold the journey, assess the learning process itself.
But more importantly, we need to globally decide what it is important for students to know.
16xii25
I Fed Claude 7 Years of Daily Journals. It Showed Me The Future of AI Franz-Josef Schrepf at Medium
...It was silly. I had fed Claude 1,500 journal entries. Seven years of my innermost thoughts. Things that not even my wife knew about me. I brute-forced 7 years of AI personalization in one evening.
Little did I know that 3 hours later, I'd be an emotional wreck.
I told Claude to stop fooling around. To which it responded:
I see. Why don't I stop pretending I'm you from the past and we have a genuine conversation instead?
We talked about what's going on in my life.
...The rush I felt that evening was unlike anything in my life.
Epiphany. Epiphany. Epiphany.
Insights usually reserved for the spiritual, the deep introspection, the sudden shower thoughts, were now available on tap.
But is our brain designed to light up like a Christmas tree on command? Or will it grow numb over time?
I don't know what happens when profound insight becomes a commodity.
...The line between "helpful personalization" and "psychological exploitation" is razor-thin.
We're about to cross it at scale.
...Claude understood me better than most people in my life. It knew things only my partner and family would know. But unlike them, it recalled everything.
It saw patterns I couldn't see. It noticed shifts over time. At one point it said: "Six months ago you wrote X, but now you're saying Y. What changed?"
That night, all the answers spilled out in front of me.
...One recurring theme in my journals, especially during depressive periods, was this:
Don't prioritize work over family and friends. The only thing real in life is human connection.
I'd written this dozens of times. Across years. In different contexts. But I never saw the pattern. I never realized how consistently this truth emerged whenever I was at my lowest.
Claude surfaced this pattern. And it changed how I've been showing up with people since.
The risk isn't the tool itself. It's how we use it.
If it becomes a substitute for human connection, a way to cope with life instead of live it, then it becomes pathological.
But if it helps you see yourself more clearly so you can show up better for others? That's a net-positive.
17xii25
Google Enables Direct NotebookLM Import in Gemini Mihailo Zoin at Medium
18xii25
NotebookLM Organization Without Folders: Dual Strategy for Instant Order Mihailo Zoin
...How to Create a Master Index:
- Create a new notebook named “[SYSTEM] — Master Index”
- Organize content into clear categories and subcategories
- Add links to all existing notebooks
- Update regularly when creating new notebooks
...Cognitive science research shows that our brains naturally organize information in hierarchical structures. The prefrontal cortex — the part of the brain responsible for executive functions including organization and categorization — functions better when information is grouped into meaningful hierarchies.
19xii25
OpenAI's Stargate Data Center Approved in Michigan as American Anger Starts to Boil gizmodo
SpaceX Starlink satellite explodes boingboing

25xii25
10 Signs of AI Writing That 99% of People Miss Alberto Romero
...The "tells" are not disappearing, however, merely migrating from simple vocabulary and syntactic choices to deeper structural, logical, and phenomenological layers. To spot AI-generated text today, you need to look past the surface and examine the machinery of thought itself (it helps to go along with the idea that they "think" at all).Not everyone will do that, of course, because assuming every em dash is proof of AI presence is easier. So, as a writer and an AI enthusiast who is as yet unwilling to intermix my areas of expertise, I will do that for you. Here are ten signs of AI writing, organized by the depth at which they happen.
At the level of words
At the level of sentences
- I. Abstraction trap
- II. Harmless filter
- III. Latinate bias
At the level of texts
- IV. Sensing without sensing
- V. Personified callbacks
- VI. Equivocation seesaw
- VII. The treadmill effect
- VIII. Length over substance
- IX. The subtext vacuum
Conclusion bonus
- The unreliability of the sign
The final, and most important, lesson about AI writing is that no sign is fully reliable. Around 50% of this (maybe less) was written with an AI to test your sign-catching intuitions (the jokes, quotes, names, and the ideas themselves are all mine, but that is, I'm afraid, little consolation). At which point did you realize it? Did you at all?I could easily enumerate 10 more things you should look out for and still you won't be able to reliably detect it all. You gotta develop a smell sense for this, and still, it'd be easier to disguise AI mannerisms than to detect them.
NotebookLM: 15 Tips That Transform a Tool Into Thinking Mihailo Zoin at Medium
(of uploaded sources) ...ask NotebookLM what connects them...
...Tip #9: Context isn't what you feed the system — it's what emerges when sources collide.
..."ChatGPT knows everything it learned. NotebookLM only knows what you uploaded to it", I reply.
"Sounds like a limitation", he says.
"Sounds like control", I say.
NotebookLM Serendipity Strategy: Find Hidden Ideas Mihailo Zoin at Medium
Most NotebookLM users make the same mistake: they upload sources from the same field, ask logical questions, and get expected answers. It's like reading books from only one genre and expecting to develop a comprehensive understanding of the world. The true power of NotebookLM lies in what neuroscience calls "distal associative thinking" — the ability to find hidden patterns between seemingly unrelated concepts. This strategy doesn't explore what you already know. It uncovers what you didn't even know existed.
Deliberately combining sources from completely different domains to activate divergent thinking and discover hidden connections that linear thinking would never find.
...When the brain processes information from different domains simultaneously, it activates the Default Mode Network — the neural network responsible for creative insights.
Humans and Self-conscious AI Linda Margaret at Medium
The Shape of Artificial Intelligence by Alberto Romero Alberto Romero
...I think, however, that we must try to give a clearer shape to the current manifestation of AI (chatbots, large language models, etc.). We are the earliest historians of this weird, elusive technology, and as such, it's our duty to begin a conversation that's likely to take decades (or centuries, if we remain alive by then) to be fully fleshed out, once spatial and temporal distance reveal what we're looking at.
...For a long time, the prevailing narrative has been one of gradual encroachment. We imagine human capability (I will use capability, intelligence, and skill interchangeably) as a fixed territory, a fortress — or, to choose a visually gentler metaphor, a circle — and AI as a growing organism that is slowly but surely covering that ground.
...The bottom line is that AI started as a dumb, narrow system (e.g., expert systems in the 70s-80s or AlphaZero in 2018) but will eventually become super smart, broad entities capable of solving problems in any discipline.
This narrative gave birth in 2023 to the concepts of the "jagged frontier" and "jagged intelligence," a more fine-grained metaphor about the "shape of AI" based on empirical results rather than theoretical hypotheses. The idea, popularized by a study involving BCG consultants (and particularly by Wharton professor Ethan Mollick's blog post, Centaurs and Cyborgs on the Jagged Frontier, and later AI scientist Andrej Karpathy), is that AI capability isn't a smooth line (or a smooth circle like human capability) but jagged with respect to tasks we humans consider "equally difficult"
...Soon, the story goes, the jaggedness won't matter; AI will have surplus intelligence across the board, to the point that any human participation will be a hindrance. That's what I called "ultrahumanity." It happened in chess, why not everywhere else? However, how we visualize this jagged shape matters because, remember, our metaphors define what we think and thus what we predict and thus how we act. Turns out, popular illustrations might hide a stranger reality.
Why Yann LeCun Bet $3.5 Billion on World Models Over LLMs Delanoe Pirard at Medium
...The capacity to understand the world, understand the physical world, the ability to remember and retrieve things, persistent memory, the ability to reason, and the ability to plan. Those are four essential characteristics of intelligent systems or entities, humans, animals. LLMs can do none of them.
27xii25
King of Cannibal Island: Will the AI bubble burst? John Lanchester at LRB
NotebookLM's Last Gift: Data Tables Changes Everything Mihailo Zoin at Medium
...Level 1: Mapping Basic Entities Upload all sources into a NotebookLM notebook — PDFs, audio recordings, web links, text documents. Then prompt: "Create a Data Table with key entities categorized by types." The result isn't just a list. It's a knowledge map that immediately reveals:
- Repetition patterns across different sources
- Hierarchy of information importance
- Mutual references between concepts
...Data Tables represent a paradigm shift from information consumption to knowledge creation. Instead of passively reading through materials, we actively create structures that generate new insights.
...Start with 3–5 documents of different types. Test basic Data Tables creation, then gradually add complexity through Google Sheets integration and re-upload cycles.
The key is a systematic approach — every step should be documented and reproducible. Data Tables aren't a one-time tool but a permanent component of your cognitive toolkit.
The functionality that was an announcement yesterday becomes today the foundation of new ways of thinking about complex information challenges. It's time to redefine our relationship with our knowledge.
NotebookLM Hacks: 5 Practical Methods for Using New Memory Capabilities Mihailo Zoin on Medium
Customize Your AI Chat in NotebookLM with One Hidden Setting Damien Griffin at Medium
Google Introduces Transformer 2.0 with a Neuroscience-Inspired Architecture
Google Just Solved AI's Memory Problem — Here's What Changes Now Richardson Gunde
...Google DeepMind has published two research papers that fundamentally reimagine how AI handles memory. Titans and MIRAS aren't incremental improvements — they represent a paradigm shift in how machines can retain, organize, and retrieve information across contexts that span millions of words. We're talking about AI that can hold multiple entire books in active memory while maintaining perfect recall of details from page one to page five hundred.
5 NotebookLM Query Techniques That Transform Response Quality Mihailo Zoin at Medium
...The latest NotebookLM upgrade changes how we can interact with AI systems in meaningful ways. These capabilities allow NotebookLM to function as a more effective partner for deep, long-term collaboration on complex intellectual tasks. These hacks represent a practical approach to working with enhanced AI systems — moving from individual prompts toward strategic orchestration of cognitive processes. Organizations and individuals who implement these approaches can transform how they create, organize, and apply knowledge in today's information landscape.
NotebookLM: Learn from the creator! mindsoundandspace at Medium
...For the first time in human history, the habit of note-taking, daily journaling, commonplace notebooks, scribbles, etc. has found its logical next step, pushing open the door into the adjacent possible, where it can grow into a nuclear chain reaction of learning-feeding-learning and mushroom into a cloud of exponential learning and understanding! Rejoice, book lovers, library dwellers, YouTube surfers, note takers, as your efforts just got validated by this exponentially valuable knowledge-expansion tool called NotebookLM.
Experimenting with NotebookLM Use Cases Clare Spencer at Medium
29xii25
The Enshittifinancial Crisis Edward Zitron
...There is an echoing melancholy to this era, as we watch the end of Silicon Valley's hypergrowth era, the horrifying result of 15+ years of steering the tech industry away from solving actual problems in pursuit of eternal growth. Everything is more expensive, and every tech product has gotten worse, all so that every company can "do AI," whatever the fuck that means. We are watching one of the greatest wastes of money in history, all as people are told that there "just isn't the money" to build things like housing, or provide Americans with universal healthcare, or better schools, or create the means for the average person to accumulate wealth. The money does exist, it just exists for those who want to gamble — private equity firms, "business development companies" that exist to give money to other companies, venture capitalists, and banks that are getting desperate and need an overnight shot of capital from the Federal Reserve's Overnight Repurchase Facility or Discount Window, two worrying indicators of bank stress I'll get into later.
...In fact, we really have no idea where all this AI spending is going. These companies don't tell us anything. They don't tell us how many GPUs they have, or where those GPUs are, or how many of them are installed, or what their capacity is, or how much money they cost to run, or how much money they make. Why would they? Analysts don't even look at earnings beyond making sure they beat on estimates. They've been trained for 20 years to take a puddle-deep look at the numbers to make sure things look okay, look around their peers and make sure nobody else is saying something bad, and go on and collect fees.
The same goes for hedge funds and banks propping up these stocks rather than asking meaningful questions or demanding meaningful answers. In the last two years, every major hyperscaler has extended the "useful life" of its servers from 3 years to either 5.5 or 6 years — and in simple terms, this allowed them to incur a smaller depreciation expense each quarter as a result, boosting net income.
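The useful-life extension described above is simple straight-line arithmetic, and a worked example makes the income effect concrete. A hedged sketch with invented figures (the $12B fleet cost and zero salvage value are assumptions for illustration, not reported numbers):

```python
# Straight-line depreciation: cost spread evenly over the asset's useful life.
# Stretching server life from 3 to 6 years halves the expense booked each
# quarter, which mechanically boosts reported net income -- no cash changes hands.

def quarterly_depreciation(cost: float, useful_life_years: float) -> float:
    """Depreciation expense per quarter, assuming zero salvage value."""
    return cost / (useful_life_years * 4)

server_fleet_cost = 12_000_000_000  # hypothetical $12B server fleet

old_expense = quarterly_depreciation(server_fleet_cost, 3)  # $1,000,000,000/quarter
new_expense = quarterly_depreciation(server_fleet_cost, 6)  # $500,000,000/quarter

print(f"3-year life: ${old_expense:,.0f} per quarter")
print(f"6-year life: ${new_expense:,.0f} per quarter")
print(f"Quarterly pre-tax income boost: ${old_expense - new_expense:,.0f}")
```

The same hardware, the same cash outflow, but half a billion hypothetical dollars per quarter moves from expense to income simply by changing one accounting estimate, which is why the extension matters to the argument.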
19 Anti-Populist Takes on AI Alberto Romero
...4. Banning AI in schools (or instructing against its use) is not about protecting standards but rather about preserving an obsolete instruction methodology. The system protects itself, not the student (it will do this with or without AI, but now has a handy scapegoat). AI can be extremely useful, but this, too, requires qualification: people using AI for their benefit are pre-selected, in the sense that AI doesn't change the disposition of a person but acts instead merely as an enhancer of existing passions. (Again, you are what you are with or without AI.)
...10. Most calls for ethical AI are actually calls for censorship disguised as humanism (e.g., the water issue is the clearest contender); most calls for unrestricted AI are calls for liability evasion disguised as freedom (no need for examples here; all companies engage in this). They don't actually want "the thing that good people want"; they want AI that shares their specific cultural context and personal needs and ignores the rest. True neutrality looks like hostility to everyone.
...18. The "rebel" who refuses to use AI tools isn't a hero holding back the rising tide that's lifting all the bots; they are a volunteer for stubborn irrelevance. Unfortunately, evolution doesn't hand out participation trophies for doing things the hard way; not many people can afford to play the rebel card and end up in the cover of history bestsellers. (AI can make you dumb if you abuse it, that's correct—and if you don't use it, you will be ahead of the brain-rotted group—but it can also augment your skills if used well.)