In Real Life

8v25

My current reading of Nicholas Carr's Superbloom: How Technologies of Connection Tear Us Apart and Max Fisher's The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World has encouraged me to recognize the deficiencies in my understanding of the origins and current state of the digital world of 2025: the worlds of "social media" in which I am not a participant, the effects of "Artificial Intelligence" that have erupted around us, and the mindspaces of the tech bros whose power and influence seem to dominate the emergent world of "the Cyber". I need a place to gather the various threads, to collect the toothsome bits as they arrive, and to try to distill some coherent sense of what's unfolding before us.

I need to construct an understanding of my own place in this emergent digital world, recognizing that oook.info is vieux jeu, an artifact of Web 2.0 and 20 years out of date, but still my preferred mode of engagement with the interwebs, my own version of Rudy Rucker's Lifebox, a personal Archive.

My various feeds and subscriptions bring me new material bearing upon the Cyber every day, and the best I can do is preserve the links and make extracts to try to exemplify what each contributes to my developing understanding of the kaleidoscopic changes. One from yesterday:

AI Apps Need a Higher Minimum Age (Alberto Romero at The Algorithmic Bridge)
...A kid, even under parental control, restricted access, or stricter safety barriers, is easy prey for companies like Google or OpenAI.

Google argues an "AI for kids" can level the playing field for "vulnerable population," but that's just a PR motive, and the NYT wastes no time to make it clear: "Google and other A.I. chatbot developers are locked in a fierce competition to capture young users." No fierce competition is ever a race to see which company makes a kids' product safer or, well, less engaging.

These companies have surely put the standard kids guardrails in place (Google says the service complies "with the federal children's online privacy law") but in case you're new to this: no AI company—not Google nor OpenAI nor anyone else—has learned how to keep users or AI models from jumping over their containment or behavioral measures. AI is not a risk-free toy for kids. Google says "Gemini can make mistakes" and calls it a day.

...There are many reasons why kids under 13 shouldn't have access to ChatGPT or whatever else. One of them is that AI models are, quite appropriately, black mirrors.

"Mirrors", because their main personality trait is agreeableness (as revealed by that wild Rolling Stone piece or the recent ChatGPT update that got widely rejected for excess of sycophancy and which OpenAI had to roll back), and "black"—as in "black boxes"—because no one knows for sure what they can or can't do and that causes us to overlook dangerous abilities that are just a good prompt away.

Kids need someone who pushes back against their still-forming ideas more than someone—they might not be able to tell there's no one behind the screen—who validates whatever thoughts they happen to have.

...In case you're willing to side with Google on the basis that parental control is enough, let me gently say that you don't spend enough time online. Digital native kids are good at bypassing things. Much better than their parents. If a kid wants to access something, chances are high they'll find a way. Friends exist. Reddit exists. ChatGPT exists (or have we forgotten about how easy it is to make it jailbreak itself?)

...AI apps need a higher minimum age requirement. Of course, kids can also bypass age limits. This is not intended for them, but for companies like Google, to force them to not facewash their attempts to capture a profitable and untapped market by saying "parents are in control" or "special guardrails are in place," and instead simply not do it. Humans have certainly lost their common sense if we can't agree that not everything should be turned into a market.

And Cory Doctorow:

Rah-rah-Rasputin
Mark Zuckerberg has told investors how he plans to make back the tens of billions he's spending on AI: he's going to use it to make advertisements that can bypass our critical faculties and convince anyone to buy anything. In other words, Meta will make an AI mind-control ray and rent it out to grateful advertisers.

Here, Zuck is fulfilling the fundamental duty of every CEO of every high-growth tech company: explaining how his company will continue to grow. These growth stories are key, because growth stocks trade at a huge premium relative to the stocks of "mature" companies. Every dollar Meta brings in boosts their share price to a much greater degree than the dollars earned by companies with similar rates of profit, but slower rates of growth. This premium represents a bet by investors that Meta will continue to grow, which means that the instant Meta stops growing, the value of its shares will plummet, to reflect the fact that it is a "mature" company, not a "growth" company.

So Zuck needs to do everything he can to keep investors believing that Meta will continue to grow. After all, Zuck's key employees and top managers all take much (or even most!) of their compensation in Meta stock...
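
Doctorow's growth-premium mechanism is simple enough to sketch in a few lines. The figures below are invented for illustration (a hypothetical earnings-per-share and price-to-earnings multiples, not Meta's actual numbers); only the mechanism comes from his piece: the same earnings are worth far more under a growth multiple than a mature one.

```python
# Toy illustration of the growth-stock premium Doctorow describes.
# All numbers are invented; only the mechanism is his.

def share_price(eps: float, pe_multiple: float) -> float:
    """Price a share as earnings-per-share times the market's P/E multiple."""
    return eps * pe_multiple

EPS = 10.0        # hypothetical earnings per share, in dollars
GROWTH_PE = 40.0  # premium multiple while investors believe the growth story
MATURE_PE = 15.0  # typical multiple for a "mature" company

as_growth = share_price(EPS, GROWTH_PE)  # $400
as_mature = share_price(EPS, MATURE_PE)  # $150

drop = (as_growth - as_mature) / as_growth
print(f"Growth story intact: ${as_growth:.0f}")
print(f"Story collapses:     ${as_mature:.0f} ({drop:.0%} drop, on identical earnings)")
```

Identical profits, a very different price: the premium is a bet on the story, and when the story dies the multiple, and with it the share price, collapses.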

And this today from Anna Dorn, via Medium Newsletter:

When someone is labeled "liberal" or "conservative," what does that actually tell you? In American media, these terms don’t describe beliefs. They signal identity. And that shorthand often obscures more than it reveals.

History and politics writer George Dillard reflects on the death of Pope Francis and how quickly American outlets tried to slot him into a category. Was he a progressive pope or a conservative one? Neither label quite fits. Francis opposed abortion and same-sex marriage, but also condemned capitalist greed, defended migrants, and advocated for environmental protection. Dillard argues that forcing him into a red-or-blue framework doesn’t clarify his beliefs. It reduces Catholic doctrine to a familiar script, making a global religious leader legible to an American audience by rewriting him in our own image.

This kind of flattening shows up again in how the media covers world leaders. Dillard critiques the reflex to describe Jair Bolsonaro as "the Trump of Brazil," Boris Johnson as "Trump with a British accent," and Emmanuel Macron as "the French Obama." These comparisons are seductive because they require no new understanding. They reduce distinct political histories to archetypes in our domestic drama.

PhD candidate Kem-Laurin Lubin explores why binary thinking persists so stubbornly. Part of the reason is cognitive: we’re naturally drawn to opposites and easy categories. But it’s also structural. Social media platforms are designed to reward fast, polarized responses. Like or ignore. Believe or dismiss. Support or oppose. Lubin points to LinkedIn’s reaction buttons, and to music algorithms that sort users by race, taste, or assumed politics. What starts as a tool for convenience becomes a system of constraint. The more streamlined the interface, the harder it becomes to think beyond the choices we’re offered.

It's not just that binary thinking polarizes; it narrows how we understand the world. What if the most honest answers don't fit at all?

Concerns raised over Foresight AI trained on 57 million NHS medical records (New Scientist)

An artificial intelligence model trained on the medical data of 57 million people who have used the National Health Service in England could one day assist doctors in predicting disease or forecast hospitalisation rates, its creators have claimed. However, other researchers say there are still significant privacy and data protection concerns around such large-scale use of health data, while even the AI’s architects say they can’t guarantee that it won’t inadvertently reveal sensitive patient data.

The model, called Foresight, was first developed in 2023. That initial version used OpenAI's GPT-3, the large language model (LLM) behind the first version of ChatGPT, and trained on 1.5 million real patient records from two London hospitals.

Now, Chris Tomlinson at University College London and his colleagues have scaled up Foresight to create what they say is the world’s first "national-scale generative AI model of health data" and the largest of its kind.

Sue Halpern at NYRB:

For the love of money
In early January Mark Zuckerberg, the CEO of Meta, announced that the company was ending fact-checking on its social media platforms, Facebook, Threads, and Instagram. The reason, he said—parroting right-wing talking points—was that flagging hate speech and misinformation was a form of censorship. The company's fact-checking program was instituted after the 2016 US presidential election, when Facebook, as the company was then known, was roundly criticized for allowing Russian-generated propaganda to tip the scales to Trump; it was canceled shortly after Zuckerberg made a post-election trip to Mar-a-Lago to pay obeisance to the man who had been threatening to jail him for life. (In case allowing falsehoods and hate on its platforms was not sufficient, Meta also paid Trump $25 million to settle a 2021 lawsuit in which Trump claimed that Meta illegally kicked him off its platforms after the January 6 Capitol riot, and Zuckerberg donated $1 million to Trump's inauguration fund.)

...Like many companies, Facebook's corporate goal is growth. But unlike a company that wants to find new markets for its tennis rackets or phone chargers, Facebook isn't moving physical products. Rather, it is in the business of capturing people's attention. More users mean more personal data, and data is the engine propelling the company's core business: advertising. In 2023 and again last year, nearly 100 percent of Meta's revenue came from selling ads. The problem with this model is that it requires more and more users—and more and more data about them—to attract new advertisers and retain old ones.

The problem is that all of these excerpts bear upon the larger questions posed by the Carr and Fisher books...