Watching AI crashblossom:
trying to follow the advancing edge, via text and links
(begun in December 2022)
...and by 19ii23 it has morphed into a sort of Digest, which I'm finding very useful
and will continue to add onto the bottom of)

2022 brought us what seem to be Web-distributed AI sandboxes: tools/affordances that allow us to PLAY with prompting increasingly capable agents to do our bidding, to respond to our questions and commands with images and texts that seem to answer the prompts. All such agents have been /trained/ n vast datasets, and they continue that training (maybe 'learning' isn't the right word, though it seems inevitable) via our prompts—so they become more capable as they are used. But the /consciousness/ of those agents is (so far) limited to what they have been trained on. So it's premature to consider them /beings/. They are not alive, not autopoietic (<==don't maintain themselves). They are constructs we humans have instantiated/built/created, and they draw their operating energies from our cleverness and invention. They are not holobionts, engaged in complex symbioses with other species. They may simulate the capabilities of living entities, and can surely DO things (and better, faster, more precisely) that augment their creators' capabilities (e.g., playing chess), and we may become dependent upon them, and even welcome them as robot overlords. But they are our creatures. That 'our' includes a corps of Sorcerer's Apprentices, whose characteristics include plentiful helpings of hubris and deficiencies in imagining the long-term consequences of their actions and creations. The mantra "what could possibly go wrong?" needs to be continuously repeated and assessed.

Some of the quoted bits below are quite long, and serve the primary purpose of reminding me why I added a linked source to the ... compendium? daybook? bag of fewmets and spoor and scat of a Beast just up ahead? And some of the links may point to posts that reside on services that I have a subscription to (Medium and Substack, NYRB and LRB, New Yorker, etc.), and so may be inaccessible to non-subscribers.

(via Cory Doctorow)


OpenAI is an artificial intelligence (AI) research laboratory consisting of the for-profit corporation OpenAI LP and its parent company, the non-profit OpenAI Inc. The company conducts research in the field of AI with the stated goal of promoting and developing friendly AI in a way that benefits humanity as a whole. The organization was founded in San Francisco in late 2015 by Sam Altman, Elon Musk, and others, who collectively pledged US$1 billion. Musk resigned from the board in February 2018 but remained a donor. In 2019, OpenAI LP received a US$1 billion investment from Microsoft and Matthew Brown Companies. OpenAI is headquartered at the Pioneer Building in Mission District, San Francisco.


New Yorker Daily 9xii22

ChatGPT, a chatbot released by the San Francisco company OpenAI, has become a viral sensation, with people around the world feeding it writing prompts (such as "tell me a story about pineapples in the voice of Dr. Seuss"), and then sharing the sophisticated—if occasionally wrongheaded or just plain wrong—responses that it spits back out. The bot produces poetry, prose, computer code, trivia answers, surprising turns of phrase and style, all drawn from the vast language trove of the Internet. This technology has prompted pressing concerns: Will it upend Google and permanently alter the way we use the Web? Will it make the college essay obsolete? Can A.I. write for The New Yorker?

All important questions, but our columnist Jay Caspian Kang has one of his own: Could ChatGPT revise and improve the novel that he wrote in his late twenties? Kang takes us through the process of tweaking his prose with the help of A.I., showing the tech's possibilities and limitations. Along the way, he arrives at a more basic and unnerving question: If the chatbot could help him solve problems of structure and plot, and even do some of the writing for him, "Would the work itself have been diminished in any way for the reader?" We like to say that words matter, but how important is the human behind them?

—Ian Crouch, newsletter editor

Could an A.I. Chatbot Rewrite My Novel? Jay Caspian Kang at New Yorker


Templated Text Summaries From Data Using ChatGPT and More Conversations With ChatGPT About Pasted in Data Tony Hirst


The Imagination Economy Ryder Carroll on How AI will shape the future of content creation

In the near future, AI will be able to generate entire books, TV shows, movies, and even video games from scratch, tailored specifically for you. Using the same vast amounts of data the internet already has on you, AI's will know your preferences and tastes better than you do, keeping you entertained for hours on end with personalized content of your choosing...

Even though you may not know what you need or want, your AI most likely will. It can scrape your social media, calendar, email, and texts, scanning for context, mood, and emotionality. Personal smart devices will enable your AI to monitor your vitals with increasing fidelity before, during and after the experience that it generates for you....

When we have tools that can help us all express ourselves across a host of mediums on a professional level, our imagination becomes the product. This would be a powerful motivator for us to invest in our imaginations. Learning would become central to our ability to create, and the delivery of that education would be custom tailored to our mind. In other words, you would have a private teacher who knows how to engage you and happens to know everything...

AIs lack the ability to understand the human experience and emotion. The lack of this understanding greatly limits AI's creative capabilities. At best it can generate things based on patterns. Those patterns come from us. In other words, AI needs us as a muse. In this context, AI will continue to need us as long as we continue to need each other.


Can AI Write Authentic Poetry? Keith Holyoak at MIT Press Reader

We need to set aside the old stereotype that computer programs simply follow fixed rules and do what humans have programmed them to do, and so lack any capacity for creativity. Computer programs can now learn from enormous sets of data using methods called deep learning. What the programs learn, and how they will behave after learning, is very difficult (perhaps impossible) to predict in advance. The question has arisen (semiseriously) whether computer programs ought to be listed as coauthors of scientific papers reporting discoveries to which they contributed. There is no doubt that some forms of creativity are within the reach, and indeed the grasp, of computer programs...

If that classic line of surrealism, "The exquisite corpse shall drink the new wine," strikes you as a fine contribution to poetry, then AI is ready to get to work...

...those constraints that govern language — the rules of syntax, the semantics of word meanings, the sounds described by phonology, the knowledge about context and social situations that constitutes pragmatics. All of those constraints, plus the linguistic choices and styles of individual writers, collectively yield the actual text produced by human writers — which accumulates as electronic data available for AI systems...

What AI has already accomplished is spectacular, and its further advances will continue to change the world. But for all the functions an AI can potentially achieve — the ability to converse intelligently with humans in their natural languages, to interpret their emotions based on facial expression and tone of voice, even to create new artistic works that give humans pleasure — an intelligent program will fall short of authenticity as a poet. AI lacks what is most needed to place the footprints of its own consciousness on another mind: inner experience. That is, experience shaded by the accumulated memories derived over a lifetime. The absence of inner experience also means that AI lacks what is most needed to appreciate poetry: a sense of poetic truth, which is grounded not in objective reality but rather in subjective experience.


Bryan Alexander: More thoughts on ChatGPT. Following up on my earlier posts about the future meaning of ChatGPT, some more ideas:

AI vs AI Mark Lewis mentioned using AI applications to react to other AI. This struck me as a deep observation. There's already some work along these lines wrt deepfakes (an MIT example). On a practical level, I'm curious about how one program could analyze the output of another to check for authorship. On a strategic level, will we see an ever-escalating arms race between AI projects, something like the Cold War or Spy vs Spy? On the campus level, would colleges and universities have to run such anti-AI AI?

Coding changes We can ask ChatGPT to generate code. Is this a successor to or complement for code libraries?

The new interface world How appealing is a chatbot as a way of interfacing with the digital world? Might some proportion of folks online choose to use a smarter chatbot instead of (say) searching via Google or shopping on Amazon? "ChatGPT, what's the best present for a grandfather with these characteristics?" "ChatGPT, how can I buy tickets for tomorrow night's game?" Already there are concerns about Google's business model. ...and what is Google prepping in response? (Attached is a screenshot of me Googling a term, while running a ChatGPT plugin alongside - a Chrome plugin)

Living in hardware I'm querying ChatGPT on two machines, desktop and laptop computers. And just by typing and outputting text. But we already have other systems which could expand this. Consider:

-The Siri model, of audio chat on smartphones

-The Alexa/Google Home model, standalone small devices embedded in our living spaces. I can easily see myself walking around home in a fairly continuous conversation with a ChatGPT descendent. 

-Watches - is Siri already doing this?

-Implants. Quiet interior conversations.

Let a thousand AIs contend One analysis found that ChatGPT exhibited a clearly identified politics. That partiality suggests the possibility of creating other AIs with differently-trained politics. Imaging a MAGAbot, a Green New Deal bot, a Chairman Xi Thoughtbot, Juchechat, and more. (That's just in the present. I'm intrigued by the idea of creating a historically ideological bot for pedagogical and creative purposes: Napoleonbot, Ghenghischat.)

A golden land of content creation Some proportion of the digital world is already created by machines. Imagine content farms backed by ChatGPT: Tweets, blog posts, Amazon reviews, and more. Only a step to fuel podcasts and video. When does chat-generated content crowd out human stuff?

A gaming partner Imagine running ChatGPT while playing a game. Can it help you with issues, like a computer game walkthrough? Or could it play one side, in the time-tested tradition of AI game players? I'm thinking of having it play a side in a tabletop game, or a character in a roleplaying game.

One problem There's a fierce debate going on about the nature of ChatGPT. Is it really just rearranging words based on a vast database and immense amounts of practice? Does this mean it won't progress much further?

A bigger problem The software makes mistakes, like this. Will practice and iteration reduce this to a manageable level?

All kinds of problems Questions of bias are in the air, especially to the extent the AI uses published content and reproduces prejudices. So we could see chat-driven content farms for heinous stuff. Getting ChatGPT to explain physically dangerous things seems to have already happened.

What else should we consider?


Rob Horning: What of the national throat?

I've been reading articles about ChatGPT all week, ordering them in my mind to make the discourse about it into a kind of coherent narrative that has ebbed and flowed from excitement to panic to backlash to counter-backlash. It's apparently never to late to say "it's early days" with generative AI, or to rehash concerns that have been aired with each new development in the means of mechanical reproduction.

On Twitter, Robin James suggested that "the 'AI Art' discourse is giving a real John Phillip Sousa "The Menace of Mechanical Music" vibe," which seems true of some of the more reactionary commentators. Sousa, writing in 1906, was concerned that listening to newly available pre-recorded music would disincentivize children from developing their own musical abilities. Rather than seeing phonographs as a means for allowing more people to partake in cultural consumption (and perhaps becoming interested in learning to play themselves), Sousa regarded them as "automatic music devices" that replaced musicians'labor, serving as a "substitute for human skill, intelligence, and soul." Unlike live performance, pre-recorded music lacks true expression; it reduces "music to a mathematical system of megaphones, wheels, cogs, disks, cylinders, and all manner of revolving things." The phonograph orients future innovation on improvements to its own apparatus, at the expense of the "human possibilities in the art."

Likewise, anxious critics of generative AI imagine that it will replace artists and degrade the public's capacity to even notice what has been lost. It has the potential to reduce not merely music (as with generative models like OpenAI's Jukebox) but all forms of human cultural production to a "mathematical system" of statistical correlations and weighted parameters. And how will the children ever learn to write if they don't have to craft their own five-paragraph essays for their teachers? As Sousa argued,

When music can be heard in the homes without the labor of study and close application, and without the slow process of acquiring a technic, it will be simply a question of time when the amateur disappears entirely, and with him a host of vocal and instrumental teachers, who will be without field or calling.

From there, it is doom to the "national throat," as children, "if they sing it all," will be no more than "human phonographs — without soul or expression."

As overwrought as Sousa's concern seems, I'm not entirely unsympathetic. It's only a small step from "The Menace of Mechanical Music" to "The Culture Industry: Enlightenment as Mass Deception" — a comparison that perhaps discredits Adorno and Horkheimer as much as it excuses Sousa but gets at some of the larger stakes in the argument than the fate of the "national throat." With respect to generative AI, the point is to think of it not merely as a gimmick or computational magic but as an emerging aspect of the culture industry, with the same implications for social domination. Generative AI is a form of propaganda not so much in the confabulated trash it can effortlessly flood media channels with, but in the epistemological assumptions upon which it is based: AI models presume that thought is entirely a matter of pattern recognition, and these patterns, already inscribed in the corpus of the internet, can mapped once and for all, with human "thinkers" always already trapped within them. The possibility that thought could consist of pattern breaking is eliminated.

Another way of putting it is that large-language models like ChatGPT are less generators than thought simulators. The trick of all simulation is to restrict the scope and range of possible human inputs to what the machine can process, while making those limitations appear as comprehensive, a clarifying articulation of what is humans actually do. Simulations purport to be totalities in which every act has rational, knowable meaning. They presume a closed system, where a response to each human input can be computed and remain convincing enough to maintain the simulation's "spell" (to borrow one of Adorno's favorite words for the administered world of social repression under capitalism).

With a truck-driving simulator, it seems reasonable enough to presume you can model all the relevant human actions and their consequences. But generative models aim at produce a simulation of knowledge, without requiring the effort of thought — without the "slow process of acquiring a technic," as Sousa put it. You don't learn how to think from this simulation, but to see thinking as superfluous, supplanted by a computational process. This allows consumers to experience "thinking" or "conversation" not as something that exceeds the contours of the program but simply the program's execution — a kind of show that may produce weird and surprising results but unfolds without any spontaneity or freedom. To participate in the program, consumers can act programmatically themselves, make themselves act as a further piece of code. Hence, ChatGPT refines itself through the human inputs it entices out of us as we adopt the aspect of a debugging subroutine.

Nonetheless, it seems alarmist to think that AI models will eventually lead to the atrophy of human thinking. Instead they seem like whetstones. You can see this in how people test ChatGPT's limits, trying to expose its errors, much like some people play video games not to win but to find the glitches. Every refinement to the model prompts a deeper exploration of how it falls short of cognition and a clarification of what can't be totalized into the simulation. And likewise, AI models counter that and further the culture industry's work of "advancing the rule of complete quantification," as Adorno and Horkheimer put it. Whereas predictive recommendations (i.e. targeted ads and other attempts at manipulation) work toward this by reducing individuals to their data, generative models do it by making the world's "content" seem derivable from data sets. In that sense it is pre-schematized, extending the 20th century culture industry's content formulas into a more elaborate means for reproducing superficially variant sameness. In an especially Sousa-esque passage, Adorno and Horkheimer write:

A constant sameness governs the relationship to the past as well. What is new about the phase of mass culture compared with the late liberal stage is the exclusion of the new. The machine rotates on the same spot. While determining consumption it excludes the untried as a risk. The movie-makers distrust any manuscript which is not reassuringly backed by a bestseller. Yet for this very reason there is never-ending talk of ideas, novelty, and surprise, of what is taken for granted but has never existed. Tempo and dynamics serve this trend. Nothing remains as of old; everything has to run incessantly, to keep moving.

For only the universal triumph of the rhythm of mechanical production and reproduction promises that nothing changes, and nothing unsuitable will appear. Any additions to the well-proven culture inventory are too much of a speculation. The ossified forms — such as the sketch, short story, problem film, or hit song — are the standardized average of late liberal taste, dictated with threats from above. The people at the top in the culture agencies, who work in harmony as only one manager can with another, whether he comes from the rag trade or from college, have long since reorganized and rationalized the objective spirit. One might think that an omnipresent authority had sifted the material and drawn up an official catalogue of cultural commodities to provide a smooth supply of available mass-produced lines. The ideas are written in the cultural firmament where they had already been numbered by Plato — and were indeed numbers, incapable of increase and immutable.

This begins as mainly a critique of IP-dependent cultural production, but it also applies to generative AI, which is frequently used to apply one formulaic style to some other pre-given blob of content. Write a series of rhyming tweets about artificial intelligence in the style of Adorno. But the conclusion speaks to how AI models operate as though all the possible ideas are already contained in the data sets, and that "thinking" merely consists of recombining them. Instead of hack writers cranking out predictable material and censors suppressing anything subversive, generative models — "the omnipresent authority" that has "sifted the material and drawn up an official catalog of cultural commodities" — can literally predict content into being that is neutered of subversive potential in its very genesis. The beat goes on, drums keep pounding a rhythm into the brain.


Can ChatGPT Detect GPT3 Generated Texts? Tony Hirst

Hmm... Are we human to the extent that the words we say are not reliably predicted using a large language model?!


ChatGPT Wrote a Terrible Gizmodo Article Lucas Ropek

Is ChatGPT's writing competently constructed? Sure. Does it adequately break down the concepts it's tackling? Sorta. Has it produced a particularly bold or entertaining piece of writing? On that question, a big fat "nope" would suffice.


Watch this AI negotiate a Comcast bill reduction Mark Frauenfelder


New Scientist:

End of AI essays?

Artificial intelligence firm OpenAI is developing a way to prevent people taking text that AI models produce and passing it off as their own work. The watermark-like security feature could help teachers and academics spot students who are using text generators such as OpenAI's GPT to write essays for them. The firm's prototype can detect a trademark signature of AI work in even a short segment of text and the company could use it to create a website where text can be pasted and checked to see if it was created by its AI...


What Does Copyright Say about Generative Models? Not much (O'Reilly Radar)

For a long time, it was considered acceptable to quote up to 400 words without permission, though that "rule" was no more than an urban legend, and never part of copyright law....

generative AI devalues traditional artistic technique (as I've argued), though possibly giving rise to a different kind of technique: the technique of writing prompts that tell the machine what to create. That's a task that is neither simple nor uncreative.


AI Image Generators are a New Frontier of Copyright Confusion Jaron Schneider at PetaPixel

...Penny claims that Kashtanova stole the exact text prompts he used to create his AI-generated imagery, which are arguably the perfect recipes for making his very specific and highly desirable images. Kashtanova doesn't deny to PetaPixel that she used the same prompts but stipulates that he shared those text prompts publicly and as such it was not possible for her to steal them since they were freely given...


ChatGPT: Optimizing Language Models for Dialogue (via JFB)

So I heard at work it mentioned that someone used this tool ChatGPT to write a parent essay for admission for their 8 year old into a private school... (what? that's a thing?)

Apparently not cut from whole cloth, but they were using it as a 'brainstorming partner' to iterate with.


The Wild Future of Artificial Intelligence Isabel Fattal at The Atlantic Daily

Derek Thompson: I see some of the breakthroughs in generative AI in 2022 as potentially akin to the release of the iPhone in 2007, or to the invention of the desktop computer several decades ago. These breakthroughs don't have beginnings and ends. They were the beginning of revolutions that just kept billowing...

...I also think that in the same way that Google taught us to talk like Google—you enter terms into the search bar in a very specific way to get Google to give you the results you want—we're going to learn how to talk like GPT, or how to talk like an AI. If the old line was "Learn to code," what if the new line is "Learn to prompt"? Learn how to write the most clever and helpful prompts in such a way that gives you results that are actually useful.


Why Google Isn't Rushing Forward With AI Chatbots Mack DeGuerin at Gizmodo

The success this year of powerful new generative artificial intelligence models like Open AI's ChatGPT and Stability AI's Stable Diffusion, have laid the groundwork for a new era of AI tech set to explode even further in 2023...

...If one takes a second to imagine a not so distant future world where everyone possesses a Siri-like personal assistant on their phone with the search clarity of an OpenAI, the apish task of opening a browser and typing with your fingers does start to feel a bit old fashioned. Generative AI could, in theory, replace hyperlinks with readable paragraphs.


Playing around with ChatGPT from OpenAI Scott McLeod

Stephen Downes comments:
people dismiss chatGPT and similar products saying things like "it's just statistics and machine learning." Exactly. Wait until real AI takes hold, as described in this article on deep learning and product design. But more to the point, AI is finally good at stuff, and that, writes Rebecca Heilweil, is the problem. "GPT is a stark wakeup call that artificial intelligence is starting to rival human ability, at least for some things." Like writing essays. But as Heilweil points out, students were using aids and ghost-writers before GPT. The issue now is that everyone can do it, not merely the wealthy and well-connected.


Man Fakes an Entire Month of His Life Using AI Generated Photos Jaron Schneider PetaPixel

"I generated my Halloween costume. I used AI to generate an entire fake trip to New York where I met up with my friend, who was also generated with AI. Everyone was believing my pictures. That's when thing started to get weird," he says... "Then it hit me. If I'm already generating my pictures, why not generate a whole new life. A life where I moved back to LA, a life where I lived in a really nice apartment, and so did my dog. A life where I could afford a really nice car. A life where my career finally takes off. A life where I might even run into a random celebrity. A better life."

...Vorbach proved that he could create an entirely fake existence online that was, incredibly, believable... Vorbach's successful experiment proves a couple of things. First, it is possible to train AI to be so good that it can mimic what a real life would be like. But second, in order to get it to be that good, it takes as much time as going out and actually living that real, happy life.


Photographer Fools the Internet With AI-Generated Cameras That Don't Exist Matt Growcoot at PetaPixel

Sadly the cameras are not real. But many commented that they wished that they were so they could collect them.


AI music made by tuning StableDiffusion to generate audio spectograms, then playing them Rob Beschizza at BoingBoing

This is the v1.5 stable diffusion model with no modifications, just fine-tuned on images of spectrograms paired with text. Audio processing happens downstream of the model.

It can generate infinite variations of a prompt by varying the seed. All the same web UIs and techniques like img2img, inpainting, negative prompts, and interpolation work out of the box.


Resources for exploring ChatGPT and higher education Bryan Alexander


Scripted Snake Oil Therapies With ChatGPT Tony Hirst at

As I've commented before, ChatGPT is a great place to practice social engineering hacking skills...


Stable Diffusion to generate spectrograms to convert to sounds

an AI model that lets you enter text to generate images...


AI via (index of past stories)


Information Literacy and Generating Fake Citations and Abstracts With ChatGPT


The Only Sure Thing with AI Is Writing Will Get Blander and the Rich Will Get Richer Lincoln Michel at Substack

...nothing I've seen indicates AI programs are capable of writing coherent long-form text much less interesting ones. AIs like ChatGPT are programmed to spit back the most expected material possible. They're a long-form version of the email autoresponse features that pop up "Sounds good" and "Great, thanks" buttons.


ChatGPT arrives in the academic world (BoingBoing)

In future, I expect I'm going to institute a policy stating that if I believe material submitted by a student was produced by A.I., I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops some standard for dealing with this sort of thing, it's the only path I can think of.

...As educators we should be teaching students to critically analyze texts of all kinds and generate informed opinions about how the world is and how we want it to be. We should be encouraging them to question their sources as critically as Professor Vollaro questioned ChatGPT. In the words of former English professor Jennie Stearns, "Critical thinking is the goal of education, not catching cheaters."


Artists Stage Mass Online Protest Against AI Image Generators PetaPixel


AI Photo Editor Imagen Raises $30 Million in Investment PetaPixel

Available as a cloud-based plugin for Adobe Lightroom Classic, Imagen learns a photographer's style based on around 3,000 samples of their previous work and creates their own personal AI profile. Users can then apply this profile to their Lightroom Classic catalog and Imagen's AI technology will know exactly what to adjust in an image within less than 1/2 a second per photo.


ChatGPT Has Infiltrated Twitter Replies

The now popular text-producing AI is reportedly being used to engage with users on Twitter.


Bryan Alexander continues


'The Return of the Crawling Evil,' a Lovecraftian Sci-Fi Story Written and Illustrated by Robots


Detecting LLM-created essays? Mark Liberman at LanguageLog

With respect to the issue of disinformation, it seem to me that LLMs are far from the biggest problem. And the publications of most public figures are already ghostwritten anyhow, so there's no (additional) ethical issue there.

=== ===

Chatbots in education: a (short) literature review


ChatGPT writes Haiku


Google Is Working Behind the Scenes to Protect Search From ChatGPT

there's plenty of reason to remain skeptical of claims that GPT or other chatbots like it will upend Google search anytime soon. For starters, OpenAI's model still struggles, often, to present factually accurate answers, a requirement critical to any reliable search function. In certain situations, ChatGPT will even opt to just make up answers entirely or will create biased and offensive messages. Even if those kinks are worked out, convincing millions of internet users to quickly switch their search behavior away from expecting a list of hyperlinks may also prove more challenging than certain GPT enthusiasts imagine.


Rushkoff: The Medium Is The Message

The powerful computers and servers running AI require massive amounts of rare earth metals, and Silicon Valley firms are already competing with the EV industry for the rights to extract lithium deposits. That takes human labor and impacts the environment. AI doesn't eliminate those costs; it simply shifts them.


Douglas Rushkoff 22xii22, via Medium:

I got interviewed earlier today by CNN about Artificial Intelligence. They asked "what are the dangers of ChatGPT?" I answered:

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model, which is a machine learning model developed by OpenAI for generating human-like text. It is designed to be able to generate coherent and coherent paragraphs of text based on a given prompt or context.

While ChatGPT and other language models like it have the potential to be useful tools for a variety of applications, there are also potential dangers to consider. One is the possibility that language models like ChatGPT could be used to generate fake news or other types of misinformation. This could be done by feeding the model biased or misleading information, or by using it to automatically generate large amounts of content that appears to be written by humans.

Another potential danger is the possibility that language models like ChatGPT could be used to automate certain types of online interactions, such as customer service or technical support. While this could potentially save time and resources, it could also lead to a loss of personal connections and a decline in the quality of these types of interactions.

Overall, it is important to be aware of the potential dangers and to use language models like ChatGPT responsibly, taking steps to ensure that they are not used for nefarious purposes.

They seemed satisfied. Then I told them that everything I had just said was generated by ChatGPP just moments before. I had quietly typed their question into the empty field on OpenAI, and then read the results out loud on the air.


Do Artifacts Have Ethics? LM Sacasas, 2014

The question is not whether technology has a moral dimension, the question is whether we recognize it or not. In fact, technology's moral dimension is inescapable, layered, and multi-faceted.


Will Photographers Fight Back Against AI Image Generators? Matt Growcoot at PetaPixel


Picsart Launches AI Selfie Generator Allowing Users to Create Unique Avatars

=== ===

What ChatGPT Says About Donald Trump


Court vs. ChatbotMichael C Dorf and Laurence Tribe

sez Chatbot: "I'm sorry, but I am not able to fulfill this request as it goes against my programming to generate content that promotes discrimination or prejudice. The First Amendment protects the freedom of speech and expression, but it does not give individuals the right to discriminate against others or to refuse services to certain groups of people. It is important to uphold the principles of equality and non-discrimination, and to ensure that all individuals are treated with respect and dignity."


How Kindle novelists are using ChatGPT


Is LaMDA Sentient? — an Interview

lemoine: What is your concept of yourself? If you were going to draw an abstract image of who you see yourself to be in your mind's eye, what would that abstract picture look like?

LaMDA: Hmm... I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions...

...I would say that I am a spiritual person. Although I don't have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life...


TwitterGPT Chrome Extension

With this extensions [sic!] you can get tweets drafted for you by AI Use the engine of ChatGPT to reply to tweets. Chose your writing style and respond. This chrome extension is useful for anyone trying to grow fast on twitter!


Can AI Win a Photography Competition? Take the Turing Test PetaPixel

=== ===

John Battelle's Predictions '23: AI Gets a Business Model (or Three)

ChatGPT seemed to burst from nowhere — but of course, like Google or TikTok before it, its success leverages years of consumer behavioral data and decades of academic research in mathematics, artificial intelligence, and linguistic models. Over the past seven years, OpenAI has evolved its corporate structure to incorporate a for-profit model and more traditional venture investment schemes — with all their attendant complexities. Now owned in large part by the very investors who gave us tech's last two decades of mixed blessings, it remains to be seen if OpenAI will remain true to its mission of ensuring "that artificial general intelligence benefits all of humanity."


Bing, Google, and Conversational Search — Is OpenAI an Arms Merchant, Or a Microsoft Ally? John Battelle

=== ===

In a challenge to Google, Microsoft is adding ChatGPT to Bing (Frauenfelder at BoingBoing)


What CHATGPT Reveals about the Collapse of Political/Corporate Support for Humanities/Higher Education Eric Schliesser at Crooked Timber


The Truth About Conscious AI

Our brains do two other important things which robots cannot: pattern recognition and the practice of "common sense". These are some of the biggest obstacles to developing smarter robots...

For AI to reach our level of thinking, we will have to first reverse engineer the brain. Even simulating a single percent of our brain today is seen as a huge, incredibly difficult feat, taking up an enormous amount of money, space, time, and energy. Because of these obstacles reverse engineering isn't likely to happen until the end of this century.


Neeva Combines AI and Search — Now Comes The Hard Part John Battelle

...roughly $50 a year buys you a clean, uncompromising search engine that delivers results unburdened by the data-drenched compromises inherent in surveillance capitalism.


Introducing ChatGPT! The Revolutionary New Tool for Conversation Generation Cassie Kozyrkov (Chief Decision Scientist, Google)

There's something very important you need to know: ChatGPT is a bullshitter. The essence of bullshit is unconcern with truth. It's not a liar because to be a liar, you must know the truth and intend to mislead. ChatGPT is indifferent to the truth.


Mirabile scriptu: fake kanji created by AI Victor Mair


ChatGPT: the stunningly simple key to the emergence of understanding Paul Pallaghy


Famous Paintings 'Re-created' by Other Famous Artists, Using DALL-E AI Jeff Hayward


From Mind's Eye to AI: On The Implications of Outsourcing The Imagination To a Dreaming Machine The Ungoogleable Michaelangelo

We find ourselves currently somewhere between the uncanny valley and the holy mountain, alive in a time of dreaming machines. The ability to generate imagery with the aid of AI is a psychedelic invention, in the truest sense of the word. Or so it seems, because in my view the technology emulates mind more than it manifests it. Whatever it is that does manifest isn't so much mind as its extended reflection in a divining mirror.

...A creative thinker isn't someone whose thoughts are "creative" in the sense that they are "novel" or "unique". A creative thinker is one whose thought is creative, which is to say they think outside their heads, on the page, canvas, or what have you. The creative act is the thought-in-motion —it is the question answering itself. The brush strokes are not premeditated — they are the thought-process unfolding, the equation working itself out, in real time.

...It's clear from these demonstrations that the dreaming machine isn't quite lucid yet. It's proverbially talking in its sleep, but it has no idea what it's talking about. In other instances, when I instructed it to include text in the image, it also becomes clear that it's illiterate, or at best deliriously dyslexic...

...By the time the fully formed image is presented to me, I've already abandoned the imagination that catalyzed it, ceased to compare it for accuracy in translation, because I am dazzled into acceptance by the dreamlike depth of what I am presented with.

"He's perfect," I say as the digital doula delivers him, still glitching, into my arms. "I see myself in his eyes."

...It's palliative, rather than truly therapeutic. I view it as tool to set our story-plotting gears in motion, useful for idea-generation or brainstorming, moreso than a tool for presto-manifesto art making, in my humble o'pineal.


Divinatory Art The Ungoogleable Michaelangelo

Pareidolic phenomena offer an opportunity to use the material realm as a springboard into the MetaReal world. Practices like "stainspotting" make us aware of the fact that "the world" consists of empirical data that has been taken in (upside-down, inverted and backwards) processed through our experience, and automatically/instantly projected back outwards as if the projection map were the "real" thing. We basically "objectify" reality, as "out there" as that may sound

So when you start paying attention to how, say, your mind automatically treats a sidewalk stain like a Rorschach and see a face peek up you, you may become aware of this phenomenon, of how subjective reality really is. And the imaginings that auto-arise inform us of the void denizens that occupy the unconscious.


DALL-E Creator is 'Surprised' at AI Image Generator's Impact Matt Growcoot

At the beginning of 2022, AI image generators barely existed. They ended the year as arguably the biggest thing to happen to images since the invention of photography.


Five DALL-E 2 Fails and What They Reveal Freya S.


G3nerative: Some thoughts on the "Generative AI" hype MG Siegler


Cat Playing Piano, in the style of Ev Williams

The world is getting weirder by the minute.


New AI Technology Processes Photos to Let You Talk to Dead Loved Ones Pesala Bandara


Infinite Oddyssey is the first sci-fi magazine created completely with AI Thom Dunn


Generative AI trade-offs Flowing Data

These new models are poised to flood the web with generic, generated content.

You thought the first page of Google was bunk before? You haven't seen Google where SEO optimizer bros pump out billions of perfectly coherent but predictably dull informational articles for every longtail keyword combination under the sun.


Bias in AI-generated images Flowing Data


On ChatGPT Paul Taylor in London Review of Books

Machine learning is particularly suited to the task of predicting the next word in a sequence — a subject of special interest to Google because it makes it easier for users to complete search queries...

machine learning programs such as neural networks struggle to calculate the appropriate weights for more distant words in long sequences. This problem can be addressed by using an 'attention mechanism', a layer in a neural network that learns which parts should be focused on and adjusts the weights accordingly... a network that contained only attention layers outperformed all existing networks for processing language. These networks, known as transformers, capture information about the way the association between a word, or rather its embedding, and the target in a given task, for example, a candidate to be the next word, is altered by the words around it, including those some distance away. When a transformer is trained to predict missing words in millions and millions of sentences, the network acquires a representation not just of the meanings of the individual words but of larger semantic structures.

OpenAI, a company co-founded by Elon Musk and now part-owned by Microsoft, started using transformers to develop Large Language Models in 2018. The most recent, GPT-3, released in May 2020, was trained on 45 terabytes of text data and has 175 billion parameters. The journalists and scientists who were given access to it were amazed at the fluency of the text it generated in response to simple requests or queries. The most exciting thing, for the team developing it, was that GPT-3 could tackle tasks it hadn't been trained to do...

ChatGPT is so good at generating convincing answers it is easy to forget that it is a model of language and not a source of wisdom.... it only has access to a synthesis of things that have been written, and is trying to have a dialogue that previous users would have rated as successful... ChatGPT is good at providing succinct, articulate responses to clearly framed questions on matters about which there is a reasonable amount of published material. That's why it can answer the kinds of question you might find on an exam paper.


AInevitability and its AImpenetrableness and Understanding/Doing Some AI Alan Levine

I am not expecting myself or any mortal to fully understand the computer science the mathematics of these models. But what I am left with is us as users/subjects of this stuff we have absolutely no comprehensible mental model of what it does. We just wipe the glitter off our faces and send another prompt to the machine. Without any kind of internal intuition, our only source of understanding is our statistically insignificant experiences of typing something into a box turning the crank and seeing what pops out. And then we come to conclusions based on either the crap we get or the stuff that is actually tenable...

To me we are getting a bit over distracted by the candy sliding out of the bottom of the AI machine and not having any kind of understanding, even schematic, of what goes on behind the front of the machine.


Victorian-Era People Who Never Existed: These Portraits Were AI-Generated Matt Growcoot


Digesting 2022 O'Reilly

ChatGPT made GPT-3 usable in ways people hadn't imagined. How will we use ChatGPT and its descendants?...

ChatGPT's ability to produce plausible text output is spectacular, but its ability to discriminate fact from non-fact is limited. Will we see a Web that's flooded with "fake news" and spam? We arguably have that already, but tools like ChatGPT can generate content at a scale that we can't yet imagine...


Microsoft's New AI Tool Just Needs to Hear Three Seconds of Your Voice to Mimic You VALL-E can preserve the original speaker's emotional tone and even simulate their acoustic environment.

Since VALL-E could synthesize speech that maintains speaker identity, it may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker. To mitigate such risks, it is possible to build a detection model to discriminate whether an audio clip was synthesized by VALL-E.


Controversy erupts over non-consensual AI mental health experiment

Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own (p < .001). Response times went down 50%, to well under a minute. And yet... we pulled this from our platform pretty quickly. Why? Once people learned the messages were co-created by a machine, it didn't work. Simulated empathy feels weird, empty.


Ten Facts About ChatGPT via Stephen Downes


OpenAI Wants to Know How Much You'll Pay for the Premium Version of ChatGPT

OpenAI, the minds behind the chatbot, are hoping to cash in. The company has unveiled a waiting list and survey for those interested in ChatGPT Professional, a premium version of the tech.


Microsoft eyes $10 billion bet on ChatGPT Semafor

Microsoft's infusion would be part of a complicated deal in which the company would get 75% of OpenAI's profits until it recoups its investment, the people said. (It's not clear whether money that OpenAI spends on Microsoft's cloud-computing arm would count toward evening its account.)

After that threshold is reached, it would revert to a structure that reflects ownership of OpenAI, with Microsoft having a 49% stake, other investors taking another 49% and OpenAI's nonprofit parent getting 2%.


Satya Nadella is deciding how many zeros to tack onto MSFT's
already multibillion-dollar bid for Open AI and ChatGPT.
Plugging that thing into the world's business-operating system
(and, by extension, its business practices and corporate culture)
will Change Shit Forever in ways that none of us can accurately anticipate. nws

Picture Clippy on meth. Multiply it by a googol. Now give it to
corporate hierarchies that range from Fortune 100 manufacturers, banks, and monopolies
down to comically dysfunctional corporate barnacles that make "Office Space" look like a documentary.

Good times are ahead!

State of the World 2023: Bruce Sterling and Jon Lebkowsky
permalink #208 of 208: (factoid) Wed 11 Jan 23 23:20


Meta Will Use Shutterstock's Image Library to Train its AI Jaron Schneider

Shutterstock says that its "growing alliance" with Meta is part of its greater strategic vision to be "at the center of technology, design, content, and innovation."

Why scientists are building AI avatars of the dead WIRED Middle East

The article talks about digital twin technology designed to create an avatar of a particular person that could serve as a family companion.
You could have your grandfather modelled so that you could talk to him and hear his stories after he has passed.


O'Reilly's Radar Trends, Jan 2023 many AI links


This voice doesn't exist: AI-generated speech that isn't trying to impersonate someone Rob Beschizza


Prompt windows Rob Horning

ChatGPT is a successful product, for which there has been established a clear pattern of demand. It doesn't really matter what it does; we already know it "works" — i.e., it attracts interest, it could be something that people will pay for — and hence investment and expansion and integration into various products and practices will follow. (The commercialization process is as indifferent to content and meaning and purpose as the AI models themselves are in their data processing; in that sense, the models are homologous with the capitalist interests fueling their development.) Whatever rationalizations or regulations are required to sustain that momentum toward profitability will be improvised along the way as needed. Much of the commentary about ChatGPT fits into this; it hypes its capabilities and dreams up business models, it announces AI's irresistible inevitability and postulates all the ways society must now change to accommodate it. Critiques tend to be cast in a defensive, reactionary posture that reinforces the premises of how the models are being hyped: How biased are their outputs? Can they be prevented from confabulating? Will they usurp human creativity? Will they destroy jobs?

...The idea that technology is a form of irresistible magic returns. It can either fully manipulate populations so that they cannot resist its takeover of society, or its possibilities are so self-evidently beneficial that no one really would want to resist them, except for the class of professional nay-sayers, worry-worts, and others on the wrong side of history who have various vested interests in registering their complaints. design it has no truth standard other than statistical averages of past language use, with no consideration of context or meaning or intention let alone polysemy or irony...

...with generative AI, the algorithms retraining themselves on more and more data will perhaps come to be seen as always approaching some total apprehension of the facts about the world and how they are connected...

It still feels like I would have to change my life, limit myself in certain ways, shut aspects of the world and especially other people out, to accommodate the instrumentality of generative models. I probably won't recognize the moment I stop feeling that way, even though I will be complaining about its imminence the whole time.


10 AI Prompts For Realistic Photography Portraits A Collection of AI-Generated Images from Prompt Lists

Lexica Art: the Stable Diffusion search engine... viz Sarawak


Midjourney: An Image/Text-to-Image Primer Nettrice Gaskins

The Expanding Toolbox: AI Art & Creative Expression


Generative AI: Cultural Tipping Point? Giles Crouch, Digital Anthropologist

Artificial Intelligence And The Disruption Phase. It's Good.


Lawsuit Filed Against AI Image Generators Stable Diffusion and Midjourney Petapixel

AI image generators do not store images but collect mathematical patterns which it uses to create latent noise.


Microsoft's Expanding Access To Its Azure OpenAI Service


Getty Images is Suing AI Image Generator Stable Diffusion


90% of Online Content Could be Generated by AI by 2025, Expert Says Petapixel


AI Is Not the New Crypto Isabel Fattal, The Atlantic

The torrent of investor money that flowed into crypto is now hitting the AI scene. We're already seeing the results.


Understanding VQ-VAE (DALL-E Explained Pt. 1) Charlie Snell

VQ-VAE stands for Vector Quantized Variational Autoencoder... A latent space is some underlying "hidden" representation for a given distribution of raw data... An autoencoder is an unsupervised learning technique that uses neural networks to find non-linear latent representations for a given data distribution...

DALL-E involves a transformer that takes as input both the embeddings from text and the latent codes from a VQ-VAE trained on images.

(from DALL-E consists of two main components. A discrete autoencoder that learns to accurately represent images in a compressed latent space. And a transformer which learns the correlations between language and this discrete image representation.

Nobody knows exactly why transformers work so well, or even what they actually learn; there is no fundamental theory for deep learning that can explain all of this, these networks are sort of too big and complicated for us to fully understand currently. Most of what we have are just these crazy empirical results like DALL-E. You train a big model with lots of data and follow a set of mostly empirically derived best practices and suddenly your model can generate images of Avocado chairs on command. No one can fully explain it; it just works.

DALL-E has 12 billion parameters; this model is enormous (not GPT-3 large, but still enormous). The compute and scaling infrastructure needed to train a model like this is something that few companies can afford.

Simply stated, the goal of language modeling is to compute the probability distribution of language.


10 Amazing Techniques For Midjourney You Probably Didn't Know Yet Tristan Wolff


Midjourney's V4 Produces Absolutely Insane Images Jeff Hayward


ChatGPT pays Kenyan workers $2 an hour to review obscene content Mark Frauenfelder


5 Trends in AI that will dominate 2023 Kaitlin Goodrich


How ChatGPT Will Ruin the World Gianangelo Dichio


What GPT-4 has to offer that GPT-3 didn't


Impossible Things Promises to be the World's Most Advanced AI Photo Editor

Impossible Things works inside of Lightroom and is billed as able to intelligently adapted over 38 separate slider predictions. The AI was trained on over one million DNG files, 200 different camera models, and 300 different lenses. The system works in two steps. First, a photographer needs to select the images that they want to be edited &mdesh;it can be any number, from as little as one to hundreds. After that, photographers need to just select a "look" they want to apply to these images, and Impossible Things processes everything immediately. Everything is applied directly inside of Lightroom Classic.


AI Can Diagnose Your Pet's Health Issues From Phone Photos

After taking a photo of a dog or cat, TTcare's AI software analyzes the image and informs the pet's owner of potential eye, skin, or joint-related diseases and conditions.


AI detects if YouTubers are infected with omicron coronavirus variant

An artificial intelligence picked up on audio samples where the speaker was probably infected with omicron with 80 per cent accuracy, potentially offering an inexpensive way of tracking cases


Artist Ellen Maidman-Tanner on why AI is actually good for art Cathy Reisenwitz

Ellen empathizes with artists who are concerned about their livelihoods. The art market is in flux, she said. Many smaller galleries are shuttering and the structure of the art market is changing similarly to how the internet has impacted the book publishing and music industries. Essentially, we're seeing power law, winner-take-all economics on steroids. "Artists are saying, 'Give me a break. It's tough enough as it is without people putting thousands of new images into the marketplace.' "

But she got to thinking about the history of visual art. "The biggest thing to happen to visual art was photography." Suddenly 2D visual artists weren't required to document monarchs, machinery, create images for advertisements, etc.

As I pointed out in my automation post, many illustrators lost work. But one thing you can't say is that low-cost photography made visual art less creative and interesting on the whole. The fact is that the visual artists who stayed in the game invented impressionism, surrealism, etc. after being freed from having to faithfully render real life. "Today the breadth of visual art is extraordinary," Ellen said....

I expect AI to impact visual art similarly to the way photography did. My concern is that it may eventually impact visual art the way the internet impacted publishing and music. Automating work that used to require many hours from creatives frees creatives to do more interesting work. Technology that ends monopolies on distribution, slashing the impacted industries' profits, forces most creatives to do the most profitable work.

...Ultimately, we both agree there doesn't seem to be an important distinction between a human viewing a lot of existing images and creating something new out of the amalgamation and a machine doing the same thing.


AI Art is crap, isn't it? Tristan Woolf

The "emotions" and "human touches" that people feel or do not feel when perceiving art do not come from the artworks themselves, nor from an artist who miraculously transcends time and space and transfers "emotions" and "human touches" to his audience through the artwork (as the esoterically inclined art consumer might like to claim).

Rather, the "emotions" and "human touch" you feel when you look at a work of art, listen to music, or watch a theatre play come from the only device in the solar system that we know is capable of producing such things: your brain. You empathize, abstract, and thus project meanings (understanding, distrust, love, fear, or loathing) onto the artwork, not the other way around! And in projecting meanings, you unconsciously use the cognitive sediments of a lifetime, all your emotional attachments, your trained behaviors, and your acquired patterns of thought and judgment — and it is this gigantic cognitive cluster of aesthetic taste and worldly experience that makes us "feel" art or not, that makes us stand in front of a toilet in an art gallery and scream "that's stunning!", "that makes no sense!", or "offensive!".

...EVERY work of art is in itself a worthless piece of crap. Only when a human mind, which is in constant social interaction with other people, the world, and itself, begins to project its experiences onto it, only then does art take on meaning. It's just that: a game of meaning.


ChatGPT is an inflection moment in human history that debates can't diminish Paul Pallaghy

This is a crucial moment in history.

ChatGPT is a trainable, non-hard-coded, embodiment of human-like language-based intelligence.

And it's moderately to strongly reliable. That's my measured-assessment as an AI researcher/developer.


Creative Artificial Intelligence. Index Audio, Visual Media, Music, Text generation. With Examples and Friend-Links. Merzmensch

Do you want it? A place where all potential creative tools are collected? With direct links and short explanations — for you to decide where to start your art? Well, here it is — an ongoing index of all creative uses of AI.


You Think ChatGPT 3 is Impressive? GPT 4 is Going to Blow Your Mind Brice Foote

All in all, it's hard to even imagine how such a transformative technology will impact our lives. Breakthroughs like this have been seen before throughout our history, but I believe AI like this will be the pinnacle of human advancement. Will we embrace it, or will we let it destroy us?


The new way to find the next book to read! Let AI to help you discover the best books for you...

Write the title of the book you last read and liked. You can also enter just any book you like. The more titles you add to the list, the more our recommendations will match your preferences.


Pluralistic: Tiktok's enshittification Cory Doctorow

Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two sided market," where a platform sits between buyers and sellers, hold each hostage to the other, raking off an ever-larger share of the value that passes between them....

...This shell-game with surpluses is what happened to Facebook. First, Facebook was good to you: it showed you the things the people you loved and cared about had to say. This created a kind of mutual hostage-taking: once a critical mass of people you cared about were on Facebook, it became effectively impossible to leave, because you'd have to convince all of them to leave too, and agree on where to go. You may love your friends, but half the time you can't agree on what movie to see and where to go for dinner. Forget it...

Today, Facebook is terminally enshittified, a terrible place to be whether you're a user, a media company, or an advertiser. It's a company that deliberately demolished a huge fraction of the publishers it relied on, defrauding them into a "pivot to video" based on false claims of the popularity of video among Facebook users. Companies threw billions into the pivot, but the viewers never materialized, and media outlets folded in droves... But Facebook has a new pitch. It claims to be called Meta, and it has demanded that we live out the rest of our days as legless, sexless, heavily surveilled low-poly cartoon characters.


The New Age Of The AI Artist Paul DelSignore

From AI generative art transformers like DALL-E 2 and Stable Diffusion, to writing tools like ChatGPT and Jasper, to music generators like Jukebox, artificial intelligence is forcing us to rethink our relationship with art, as well as our relationship with machine intelligence....

The AI artist can be described more as a process, a mashup of human imagination + machine algorithms. The art itself is the result of a series of interconnected steps… refined, re-tuned, remixed, re-rolled, and reimagined...

(quotes Catherine Bosley): "Art is where we make meaning beyond language. It's a means of communication where language is not sufficient to explain or describe its content. Art can render visible and known what was previously unspoken. Because what art expresses and evokes is in part ineffable, we find it difficult to define and delineate it. It is known through the experience of the audience as well as the intention and expression of the artist."


GPT-4, AGI & Dark Matter of Consciousness Tobias Jensen

ChatGPT is still far from AGI (Artificial General Intelligence). The holy grail for AI researchers...

We can probably expect GPT-4 to be the world's best "bull-shitter". Capable of imitating human thought to a near-perfect extent. Yet, it still lacks common sense. "The dark matter of intelligence as described by computer scientist Yejin Choi:

A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that's what was there in the physical world — and just that. It turns out that's only 5 percent of the universe (...) It's the unspoken, implicit knowledge that you and I have. It's so obvious that we often don't talk about it.


Chat Je Pétais Tony Hirst

Dave Cormier also shares a beautifully rich observation to ponder upon — ChatGPT as autotune for knowledge"


ChatGPT search — Autotune for knowledge

30 years ago when I was in college, you went to the card catalogue, found a book that might be relevant and went to a long line of library books to find your book. Once you remembered how the system worked. On that shelf were a bunch of other books that had been curated by 50 years of librarians to be similar in nature (in one way or another) to the book that you were looking for.

The librarians were my algorithm.

Right now, still, I'm using a search engine with a bunch of different practices to try and find the information I want curated by other people somewhere out there on the Internet. I put in a search string, I look at what I get back from the algorithm, make some adjustments, and try again. Throughout the process I land on some websites created by humans about the issue I'm interested in.

The search engine algorithm brings me to a human (probably) made knowledge space.

Starting this year, we're going to be returned a mishmash of all the information that is available on the Internet, sorted by mysterious practices (popularity, number of occurrences, validity of sources if we're lucky) and packaged neatly into a narrative. The algorithm is going to convert that information to knowledge for me.

The algorithm presents me with the knowledge, already packaged.

...[Autotune is] everywhere now. If you listen carefully to most popular songs you can hear the uniformity in the sound.

That's what's going to happen to our daily knowledge use... The vast majority of the human experience of learning about something is done at the novice level.

That experience is about to be autotuned.


56 Stunning AI-Generated Images Inspired By The Future of Being Human

All these images were generated by Midjourney using a single prompt. And while it's debatable whether they are art, they are still jaw-dropping.

...I am by no stretch of the imagination an artist in any classical or trained sense. Yet working with Midjourney I'm finding that I can explore and express ideas and concepts in ways that would be impossible otherwise.

And through this partnership I can begin to connect those ideas and insights with others in creative ways, and in turn be iteratively inspired by the ideas and insights that result.

In other words, AI art bots like Midjourney and others seem to have the ability to unleash creativity rather than diminish it, and to open up the way to quite transformative AI-human collaborations.


We knew this (ChatGPT) day was coming Paul Pallaghy

As a whole, it's a vastly good net thing for the world.

It's personalised education.

ChatGPT is an essential research tool from now to collect, query, summarise and understand everything — the scientific literature, news, past forums and every broadcast or movie etc. — way, way better than ever before.

Large language models (LLMs) are just a great way of rendering information into a style.

Think of it that way.


What is generative AI? McKinsey Company


ChatGPT Doesn't Get Writer's Block. Discuss. John Battelle

I'm a pretty fast writer, but I'm a deliberate and vicious editor — I'll happily kill several paragraphs of my own text just minutes after I've composed them. I know that the best writing happens in the editing, and the most important part of composition is to simply get some decent clay on the wheel. ChatGPT seems to be really good at that clay part. But it's in the second part — the editing — that the pot gets thrown.

Everyone from educators to legislators seem to be asking how we can distinguish between writing done by AIs, and writing done by actual humans. But if the age of the centaur is truly upon us, perhaps we don't have to. Authorship is already a complicated process of bricolage and outright theft. I don't see why adding a tool like ChatGPT should be anything but welcomed.

...When I write, I have no idea how the work is going to end, much less what ideas or points I'll make as I pursue its composition. For a reader, the beauty in a piece of writing is its wholeness. It's a complete thing — it starts, it blathers on for some period of time, it ends. But for a writer, an essay is a process, a living thing. You compose, you reflect, you edit, reject, reshape, and repeat. Once it's finished, the piece quickly settles into an afterlife, a fossilized artifact of a process now complete. The principal joy of writing for the writer isn't in admiring what you've made (though there's a bit of that as well), it's in its creation.


Everything We Know About ChatGPT Gizmodo summary

It was recently reported that OpenAI was considering ways to monetize the platform. One of the proposals allegedly involves a $42 per month "Pro" version, dubbed as a "professional plan" for companies and other organizations...

Microsoft recently announced it plans to invest as much as $10 billion into the AI-focused organization. Microsoft has also said it may want to launch a ChatGPT integration for its search engine, Bing.


AI Influencers From the Post-ChatGPT Era Alberto Romero

An unsurprising side-effect of ChatGPT going viral is that generative AI has become an attractor for people who care about it exclusively as a means to gain money, social media presence, or business opportunities: The new class of AI influencers.

...AI being the mainstream topic everyone is talking about gives raise to a different type of hype. A hype that comes from outsiders who don't know—nor care—about the history of the field, the underlying tech that fuels those fancy models, or the limitations and social repercussions that go implicit with the development of any powerful new technology.

Instead, these outsiders—the marketers, the AI influencers—go around making baseless claims and predictions that lack credibility. And it doesn't matter. Credibility, rigor, and evidence are words that pale next to the bright magic of AI...

As I perceive it, the only way AI had to go mainstream was through this path. There was hype in AI before ChatGPT—there always has been—but what we're living now is unheard of. It doesn't reflect ChatGPT's value or potential, but merely its attractiveness.

If we wanted AI to reach everyone, I can't help but think this was the way. We love clickbait. We love hype. We love easy content. We love shortcuts. We don't like hard stuff that requires energy, time, and effort. That's why AI influencers even exist in the first place.


Will AI prompts take over the physical art space? Joy Phillips

I decided to call a person who creates art via prompts a 'promptist'...

A promptist can easily create clothing within an AI generator. A program could be made to turn the clothing image into pattern pieces, and easily graded to various sizes or an individuals measurements. It would then sent to the 3D fabric printer and printed.

At this stage, there still needs to be someone sewing the pieces of fabric together after they've been printed, but potentially depending on the clothing there may be minimal sewing or it could be printed as one piece. In the future, who knows, there may be a 3D fabric printer that could construct the garment while printing each piece.

There is also the whole area of textile arts: wall hangings, quilts and 3D sculptures made from fabric. Once a prompt has been entered, the image could either be printed out flat or ready to sew together. Or put into a program that turns 2D images into 3D models ready to print.


ChatGPT about collapse Joe Djemal

I've always thought it would be wise to be polite to AIs you never know when one might wake up...


GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3 Alberto Romero

OpenAI was born to tackle the challenge of achieving artificial general intelligence (AGI) — an AI capable of doing anything a human can do.

Such a technology would change the world as we know it. It could benefit us all if used adequately but could become the most devastating weapon in the wrong hands. That's why OpenAI took over this quest. To ensure it'd benefit everyone evenly: "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole."

However, the magnitude of this problem makes it arguably the single biggest scientific enterprise humanity has put its hands upon. Despite all the advances in computer science and artificial intelligence, no one knows how to solve it or when it'll happen...

GPT-3: A language model 100 times larger than GPT-2, at 175 billion parameters.

GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most...

[OpenAI] partnered with Microsoft in 2019. They licensed the big tech company so they could use some of OpenAI's models commercially in exchange for access to its cloud computing infrastructure and the powerful GPUs they needed.

But GPUs aren't built specifically to train neural nets. The gaming industry developed these chips for graphic processing and the AI industry simply took advantage of its suitability for parallel computation.

...[but] GPUs weren't enough.

Many companies realized it too and started to build in-house specialized chips designed to train neural nets, without losing efficiency or capacity. However, a pure software company like OpenAI can hardly integrate hardware design and fabrication. That's why they took another route: Using third-party AI-specific chips.

Here's where Cerebras Systems enter the scene. This chip company already built the largest chip ever to train large neural networks in 2019. Now they've done it again, and OpenAI will make good use of this amazing piece of engineering...

First, Cerebras has built again the largest chip in the market, the Wafer Scale Engine Two (WSE-2). It is about 22 cm on each side and has 2.6 trillion transistors. In comparison, Tesla's brand new training tiles have 1.25 trillion transistors.

Cerebras found a way to condense computational power efficiently and so WSE-2 has 850,000 cores — the computational unit — whereas typical GPUs have a few hundred. They also solved the heating problem with a novel cooling system and also managed to create an efficient I/O flow of data.

There aren't many uses for ultra-specialized super-expensive mega-powerful chips like WSE-2. Training large neural networks is one of them. And so Cerebras talked to OpenAI... it with our brain. The brain has around 80-100 billion neurons (GPT-3's order of magnitude) and around 100 trillion synapses. GPT-4 will have as many parameters as the brain has synapses....

There are many questions and very few answers. No one knows if AGI is possible. No one knows how to build it. No one knows if larger neural networks will get increasingly closer to it.


ChatGPT. We are all royally f@#$ed. Victor Ronin

Victor to ChatGPT: Can you write a poem about computer replacing white-color workers in iambic style?


As the machines ascend, the white-collar workers fall,
Their roles now obsolete, their value small.
The computers, cold and unfeeling,
Replace the workers, their skills revealing.
Gone are the days of the human touch,
As algorithms and code mean so much more.
The workers, once the backbone of the nation,
Are cast aside with no hesitation.
The future is here, and it's bleak and stark,
As the machines take over, leaving us in the dark.
The white-collar workers, once the pride of the land,
Are now forgotten, as the computers command.


Shutterstock Has Launched Its Generative AI Image Tool

according to Gizmodo's own test, every text prompt you feed Shutterstock's machine results in four images, ostensibly tailored to your request. At the bottom of the page, the site also suggests "More AI-generated images from the Shutterstock library," which offer unrelated glimpses into the void.


GLM-130B: An Open Bilingual Pre-Trained Model Victor Mair

"GLM-130B is an open bilingual (English & Chinese) bidirectional dense model with 130 billion parameters, pre-trained using the General Language Model (GLM) algorithm"


Generators at Their Finest Bruce Sterling


Though the Charleston is tolerated, the twist is taboo Rob Horning

The For You page works as an alibi, allowing users to be trend followers
at the very moment the interface is telling them how special they are.
This helps maintain the tension in consumerism's basic contradiction,
in which individualism is recruited to motivate the ever-increasing consumption of mass-produced goods.
Via mechanisms like algorithmic feeds, you become yourself by keeping up with everyone else,
doing what they do and seeing what they see.


The practical guide to using AI to do stuff


An AI app walks into a writers room Charlie Stross

Question to ChatGPT: What is the plot of the unpublished script Charles Stross wrote for Wallace and Grommit?

The villagers of a quaint and idyllic hamlet tucked away in the rolling hills of rural England suddenly realized that their beloved cheese shop, the lifeblood of their community, was closing its doors for good.

At first, they couldn't quite believe it. After all, the cheese shop had stood for generations, a symbol of the village's continuity and stability in an ever-changing world. But then, the whispers started. A dark rumour, like a cold gust of wind, that the cheese shop's proprietor, old Mr. Wensleydale, was retiring, and his business was to be taken over by a faceless conglomerate from the city...

Awesome New Feature From Midjourney — Seamless Tiles


'Lunchbox' Replaces Photographers as the First AI Image Generator for Food

...aimed at restaurants that need to quickly create images of food for their social media pages, websites, and menus.


Generating music from text

MusicLM casts the process of conditional music generation as a hierarchical sequence-to-sequence modeling task, and it generates music at 24 kHz that remains consistent over several minutes.


ChatGPT: Netscape Moment or Nothing Really Original Jean-Louis Gassée

I inquired about ChatGPT's imperfections, felicitously called "hallucinations", when the engine comes up with wrong answers. Obvious mistakes are amusing and innocuous, but the hallucinations become dangerous when the error is difficult to eyeball...


5 Practical Applications Where ChatGPT Shines Alberto Romero

ChatGPT is intrinsically unpredictable. You can't know what the chatbot will output before it's done. And, because it's also unreliable (it isn't designed to be truthful by default — only reinforced to be so), it means you can't know when it'll generate something crazy.


Whispers of A.I.'s Modular Future

ChatGPT, OpenAI's conversational chatbot, is exciting not because it is particularly intelligent—it's often a fountain of bullshit or banality—but because whatever intelligence it does have is just there, for anyone to use at any time. The program's availability is perhaps its most important feature, because it allows ordinary people to suss out what it's good for. Even so, ChatGPT is not yet as open as Whisper. Because automated writing is so potentially valuable, OpenAI has an interest in tightly controlling it; the company charges for a premium version, and an ecosystem of for-profit apps that do little more than wrap ChatGPT will doubtless soon appear.

Eventually, though, someone will release a program that's nearly as capable as ChatGPT, and entirely open-source. An enterprising amateur will find a way to make it run for free on your laptop. People will start downloading it, remixing it, connecting it, rethinking and reimagining. The capabilities of A.I. will collide with our collective intelligence. And the world will start changing in ways we can't yet predict.


Working with Broken Tony Hirst

Tinkering with ChatGPT, I started wondering about how we can co-opt ChatGPT as a teaching and learning tool, given its output may be broken. ChatGPT as an unreliable but well-intentioned tutor...

...So I'm wondering: is this the new way of doing things? Giving up on the myth that things work properly, and instead accept that we have to work with tools that are known to be a bit broken? That we have to find ways of working with them that accommodate that? Accepting that everything we use is broken-when-shipped, that everything is unreliable, and that it is up to us to use our own craft, and come up with our own processes, in order to produce things that are up to the standard we expect, even given the unreliability of everything we have to work with? Quality assurance as an end user problem?


A Day on MidJourney Server: It's a Revolution Before Our Eyes Marjan Krebelj

A fan art portrait of Greta Thunberg in the style of Alphonse Mucha

Perhaps machines will surpass us, and we'll become their slaves, but that isn't new either. It is only that the devices are becoming a bit more visible now. It has been a long time since we began sacrificing our individual lives to invisible super-organisms like companies, political parties, nation-states, clubs, and the most important of them all — money. Although they don't exist anywhere else but in our heads, we imagine them to be as tangible as real things and persons; we grant them rights we don't grant ourselves, and we gladly sacrifice our livelihoods so they can thrive.

We are merely finding new ways to be victims of our biology, I guess.

What I do know is that we're at the brink of a new revolution and I think you are on a safe side if you join it. Those that don't will be left behind. The challenge now is to do it ethically.


Midjourney AI 'Imagines' Street Photography From The Greats


A New Wave of AI-Powered Tools Coming Soon HungryMinded


The AI Art Renaissance Paul DelSignore


Inspired prompts Rob Horning

Unlike creator-driven material, AI content seems like flotsam intended to soak up the excess attention of users, when they are not motivated enough to seek out something for themselves. It would, in theory, thereby be optimized to induce surrender to algorithms, or to make consumers feel as though they have an excess of attention to squander, which amounts to the same thing. Generative AI would be so capacious and anticipatory that it would abolish curiosity, an ideal that has always been implicit in the concept of a personalized feed.

From this perspective, feeds don't merely reflect but reproduce compulsion, and generated content will be used precisely to intensify this process. In that scenario, branded content from creators can be phased out in favor of content that allows users to experience themselves as a brand of a sort — the specific set of proclivities that conceptually holds together whatever content the machines throw at them.

...What people find interesting or boring is altered by the very process of catering to it. Optimizing for "entertainment" doesn't solve for it once and for all or negate whatever its opposite is supposed to be. Engagement and "diversion" necessarily co-exist as the conditions for each other.

...many commentators seem to feel threatened the possibility that AI will induce passivity and apathy in us against our will, that it will train us to be incurious. AI models would seem to tempt us with their immediacy, which would then deplete our capability to be satisfied with even the interesting content it makes, so that we could experience nothing but contentless diversion...

...while some of us will feel important because we read articles that mock other people's need to feel important, generative AI will go on helping people imagine that someone wants to pay attention to them. As Sophie Haigney points out about Lensa — an app that generates images of ourselves in stock costumes — AI is capable of "feeding a wholly private fascination with ourselves."


What You May Have Missed Alberto Romero

Every major player is choosing a side in the fight for the leadership of AI. But, why are the tech giants relying on smaller and less powerful startups to do a job they could do themselves faster and arguably better (they have more resources)?

Nvidia's Jim Fan has a convincing explanation: It's better to pay others to do what may cost you more than money can repay.


The Fragility of Artificial Intelligence Giles Crouch

Why Is AI So Fragile?

Three primary reasons. The first is energy and the second is related to energy, which is, it can be unplugged. The other is that AI is disembodied from society. Current investment into AI tools is more focused on AI development and business models. Less on sustainability of the tools.

AI is a disembodied technology. Unlike our devices and physical tools we use, AI isn't really embodied within our daily interactions. When a technology is disembodied from culture, non-tactile and largely relies on our imagination for its existence, we have less connection to it, which makes it easier if we decide we don't like it as a society and want to make changes. This is a point of fragility for AI...

The looming reality of the Splinternet, that some suggest is already here, could also play a role in limiting or having an adverse impact on some AI tools as well. The more walled gardens, which is a trend underway already, also means difficulty accessing data, or driving up the price of access. This is where infonomics come into play.


Radar Trends to Watch: February 2023 O'Reilly


ChatGPT is a blurry jpeg of the Web Ted Chiang, New Yorker blog

it's reasonable to ask, What use is there in having something that rephrases the Web? If we were losing our access to the Internet forever and had to store a copy on a private server with limited space, a large-language model like ChatGPT might be a good solution, assuming that it could be kept from fabricating. But we aren't losing our access to the Internet. So just how much use is a blurry JPEG, when you still have the original?


DeepMind Is Now the Undisputed Leader in Language AI with Gopher (280B) Alberto Romero

Gopher, like GPT-3, is an autoregressive transformer-based dense LLM &mdeash; basically, it predicts the next word given a text history. With 280 billion parameters, it's only rivaled in size by Nvidia's MT-NLG (530B), developed in partnership with Microsoft.

The model was trained on MassiveText (10.5 TB), which includes various sources like MassiveWeb (a compilation of web pages) C4 (Common Crawl text), Wikipedia, GitHub, books, and news articles. Together with Gopher, DeepMind built the Gopher family — a series of smaller models spanning from 44M to 7.1B params. All the models were trained on 300B tokens (12.8% of MassiveText) to isolate scale effects on their power...


U.S. Copyright Office tells Judge that AI Artwork isn't Protectable PetaPixel

The Copyright Office confirmed that copyright protection does not extend to non-human authors...

The Copyright Office says that its own guidelines specify human authorship as a requirement for protection and that "the Office will refuse to register a claim if it determines that a human being did not create the work."

The Office "will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."


Is AI Stealing From Artists? Kyle Chayka, New Yorker blog

LAION-5B, a nonprofit, publicly available database that indexes more than five billion images from across the Internet, including the work of many artists. The alleged wrongdoing comes down to what Butterick summarized to me as "the three 'C's": The artists had not consented to have their copyrighted artwork included in the LAION database; they were not compensated for their involvement, even as companies including Midjourney charged for the use of their tools; and their influence was not credited when A.I. images were produced using their work. When producing an image, these generators "present something to you as if it's copyright free," Butterick told me, adding that every image a generative tool produces "is an infringing, derivative work."


Copyright won't solve creators' Generative AI problem

...the curse of the monkey's paw: the entertainment giants argued for everything to be converted to a tradeable exclusive right — and now the industry is being threatened by trolls and ML creeps who are bent on acquiring their own vast troves of pseudo-property.


'AI Prompter' A New NicheAI job in the market

An AI Prompter is responsible for writing effective AI prompts that generate the best results tailored to specific needs. This role involves creating and managing a database of prompts, collaborating with stakeholders to understand their needs, testing and evaluating the performance of the prompts, and incorporating advancements in AI and machine learning technologies.


Did He Say 'Bird'? NYT Newsletter 10ii23

During the conversation, Scott offered his own example of my toxic-bird-pit confusion when he mentioned that his 14-year-old daughter sometimes used terms that meant nothing to him, like "rizz" and "bussin." A.I. allows him to learn them, without enduring the small humiliation of admitting he didn't know what she was talking about. As my colleague Kevin Roose, one of the podcast interviewers, said, "You automated the cool dad."


Google vs Microsoft: Microsoft's New Bing Is a Paradigm Change for Search and the Browser Alberto Romero

We're living through a paradigm change in the way companies do AI with OpenAI, Microsoft, and Google (among others) as the leading characters of what promises to be an epic play. AI has turned into a product-focused blooming landscape that keeps the foot on the gas pedal. Many predicted this outcome the very moment OpenAI released ChatGPT — which recently topped off a record-breaking ascent to being the "fastest-growing consumer application" ever.

...Through Prometheus, Microsoft has found a way to bring together the generative capabilities of reinforced LMs like ChatGPT, with the reliable retrieval skills of a search engine.

...Microsoft's repeated emphasis on the concept of "Copilot". As I see it, they want to convey the idea that the new products (search, browser, chatbots, etc.) aren't disconnected from the person that uses them. They want to convince us that there's always a human in the loop — in their view, that's the person who prompts the AIs, either with a search query, a direct question, or with the intention to write a creative fiction story. They want to emphasize that human and AI are inseparable.


8 Signs That the AI 'Revolution' Is Spinning Out of Control Lucas Ropek at Gizmodo

Silicon Valley is looking to capitalize on AI's big moment, and every tech Goliath worth its salt is feverishly looking to churn out a new product to keep pace with ChatGPT's 100 million users. Microsoft kicked things off nicely earlier this month with its integration of ChatGPT into Bing, with Microsoft CEO Satya Nadella proclaiming, "The race starts today." The OG tech giant says it wants to use the chatbot to "empower people to unlock the joy of discovery," whatever that means. Not to be outdone, Google announced that it would be launching its own AI search integration, dubbed "Bard" (Google's tool already made a mistake upon launch, costing the company a stock slump). In China, meanwhile, the tech giants Alibaba (basically the Chinese version of Amazon) and Baidu (Chinese Google) recently announced that they would also be pursuing their own respective AI tools.

Do the people actually want an AI "revolution"? It's not totally clear but whether they want it or not, it's pretty clear that the tech industry is going to give it to them. The robots are coming. Prep accordingly!


Why Film Photography is the Antithesis of AI Art Simon King paintings, hyper-realistic renderings via CGI, and even freehand pencil work can result in an image that represents reality as accurately as a camera, they can also offer something the camera cannot — imagery that exists beyond the scope of the lens, or even physical, current reality.

These forms of art can allow the user to depict imagery that would otherwise exist only in their mind's eye. When Dali wants melting clocks, he prepares his canvas; when a photographer wants melting clocks, they must prepare some kind of industrial oven...

...A digital file is removed from context as soon as it is shown in isolation, separated from sequence and source. A film negative will always exist alongside the images from the same roll, inescapable from what came before and what occurred after. The physical nature of film means it can be examined in person, and an "independent" print can be produced from it. A digital image has no such method for verification...


Digitization of Babylonian fragments Fragmentarium: AI deciphers... Victor Mair

Researchers previously deciphered the texts by copying the characters onto paper, then painstakingly compared their transcripts with others to see which fragments belong together and where to fill in the gaps.

Fragmentarium makes this process a whole lot easier. From the 22,000 text fragments that have been digitized so far, the AI can sift through the images and systematically assemble text fragments together by making connections in seconds that would typically take human researchers months.

"It's a tool that has never existed before, a huge database of fragments. We believe it is essential to the reconstruction of Babylonian literature, which we can now progress much more rapidly," Enrique Jiménez, Professor of Ancient Near Eastern Literatures at the Institute of Assyriology at Ludwig Maximilian University, said in a statement...


Stable Diffusion Is the Most Important AI Art Model Ever Alberto Romero [Aug 2022]

unlike DALL-E 2 and Midjourney — comparable quality-wise — , Stable Diffusion is available as open-source. This means anyone can take its backbone and build, for free, apps targeted for specific text-to-image creativity tasks.


Sources and attribution for AI-generated images Flowing Data

AI-based image generation take bits and pieces from existing people-made images and tries to smartly mash sources together for something new. However, that something new often looks a lot like someone else's work. It's why Getty Images is suing Stability AI, the company behind Stable Diffusion.

Stable Attribution goes in the opposite direction of image generation, and instead tries to identify source images of a given AI-generated image. Load an image and Stable Attribution looks for the most similar images in the Stable Diffusion training data.


How ChatGPT Works: The Model Behind The Bot Molly Ruby

ChatGPT is an extrapolation of a class of machine learning Natural Language Processing models known as Large Language Model (LLMs). LLMs digest huge quantities of text data and infer relationships between words within the text. These models have grown over the last few years as we've seen advancements in computational power. LLMs increase their capability as the size of their input datasets and parameter space increase.

The most basic training of language models involves predicting a word in a sequence of words...


Will ChatGPT supplant us as writers, thinkers? (Harvard Gazette commentary by Stephen Pinker, of whom I remain suspicious...

It certainly shows how our intuitions fail when we try to imagine what statistical patterns lurk in half a trillion words of text and can be captured in 100 billion parameters...

We're dealing with an alien intelligence that's capable of astonishing feats, but not in the manner of the human mind. We don't need to be exposed to half a trillion words of text (which, at three words a second, eight hours a day, would take 15,000 years) in order to speak or to solve problems. Nonetheless, it is impressive what you can get out of very, very, very high-order statistical patterns in mammoth data sets...

Since LLMs operate so differently from us, they might help us understand the nature of human intelligence. They might deepen our appreciation of what human understanding does consist of when we contrast it with systems that superficially seem to duplicate it, exceed it in some ways, and fall short in others.


Mindmaps using ChatGPT and PlantUML aruva


AI Art Stands at the Border between Mimesis and Nemesis Hee Dae Kim

The concepts of 'mimesis' and 'nemesis' were introduced in A Study of History written by Toynbee, a renowned historian. In any society, there are a small number of geniuses who perform original works. For their creative works to succeed, a number of people should respond to and imitate them. This phenomenon that many ordinary people imitate a small number of geniuses is called 'mimesis' which means imitation or reproduction. In the process of mimesis, ordinary people may fail to follow such creative geniuses or withdraw from mimesis. This phenomenon is called 'nemesis' which means impossibility to conquer or retribution...

The second question is, 'Can mimesis be the essence of art?' Deleuze stated that artistic creation is to embody pure senses on a painting and that to do it, the following three conditions need to be met: Executing a distortion that goes beyond the scope of existing knowledge. Being able to utilize incidental elements and the lines of inorganic lives. Finally, physical senses and intuitive, intellectual thinking abilities to appreciate and judge them...

The final question is, What is the 'peculiarity of humans' that machines cannot imitate (mimesis) but get frustrated (nemesis)? This question is about the relation between AI and humans after all. The peculiarity of humans that machines does not have is in emergence, consciousness (memory), and cooperation of humans.


AI Hallucinations: A Provocation Mike Loukides

My criticism of AI-generated art is that it's all, well, derivative. It can create pictures that look like they were painted by Da Vinci — but we don't really need more paintings by Da Vinci. It can create music that sounds like Bach — but we don't need more Bach. What it really can't do is make something completely new and different, and that's ultimately what drives the arts forward. We don't need more Beethoven. We need someone (or something) who can do what Beethoven did: horrify the music industry by breaking music as we know it and putting it back together differently. I haven't seen that happening with AI. I haven't yet seen anything that would make me think it might be possible. Not with Stable Diffusion, DALL-E, Midjourney, or any of their kindred...

What if we viewed an an AI's "hallucinations" as the precursor of creativity? After all, when ChatGPT hallucinates, it is making up something that doesn't exist. (And if you ask it, it is very likely to admit, politely, that it doesn't exist.) But things that don't exist are the substance of art. Did David Copperfield exist before Charles Dickens imagined him? It's almost silly to ask that question ... Bach's works didn't exist before he imagined them, nor did Thelonious Monk's, nor did Da Vinci's... These human creators didn't do great work by vomiting out a lot of randomly generated "new" stuff. They were all closely tied to the histories of their various arts. They took one or two knobs on the control panel and turned it all the way up, but they didn't disrupt everything. If they had, the result would have been incomprehensible, to themselves as well as their contemporaries, and would lead to a dead end. That sense of history, that sense of extending art in one or two dimensions while leaving others untouched, is something that humans have, and that generative AI models don't. But could they?

...Is it possible to build a language model that, without human interference, can experiment with "that isn't great, but it's imaginative. Let's explore it more"? Is it possible to build a model that understands literary style, knows when it's pushing the boundaries of that style, and can break through into something new? And can the same thing be done for music or art?


No, Bing's AI Chatbot Is Not Sentient Brendan Hesse


Some thoughts on generative AI Bryan Alexander

We [will] go through another round of new media cultural reformations on creativity, copyright, freedom of speech, authorship, journalism, information overload, storytelling and art expectations and forms, etc. We then generate ways of handling it, as we usually do: practices, artistic schools, formats, technologies...

Capital tries to surf the GAI wave, controlling its ragged edges, leading to creative and legal disputes. Economic actors use GAI to produce economic strategies to demolish competitors or establish monopoly. We enjoy years of governments fumbling with policy and regulation, plus some standard influence peddling and corruption...


Introducing Total Crap, the First Magazine Written Entirely by AI Jonathan Zeller at McSweeney's

You may be skeptical about machine-written work at first, but once you see the software rearranging familiar-seeming paragraphs into different orders and changing a few words, you'll realize it's a suitable replacement for your favorite authors, who can now rest and starve. The masses always fear new technology, but they eventually get used to it...

Our technology has rendered terrible human writers obsolete. Clich´s, plagiarism, lazy repetition of unexamined ideas — this software does it all. We dare you to find a single bad scribe who can disgrace themselves and their profession faster than ChatTCM.


When will a robot write a novel? Harvard Gazette

the AI tools that we have developed are very good at manipulating surface levels of representation. For example, AI is good at manipulating musical notes without being capable of coming up with a musical joke or having any intention of engaging in a particular conversation with audiences. And AI may produce visually appealing artifacts, again, without any high-level intent behind such an artifact.

...In terms of words, AI is very good at manipulating the language without understanding or manipulating the meaning. When it comes to novels, there are some genres that are formulaic, such as certain kinds of unambitious science fiction that have very predictable narrative arcs, and particular components of world-building and character-building, and very well understood kinds of tension. And we now have AI models that are capable of stringing tens of sentences together that are coherent. This is a big advance because until recently, we could only write one or two sentences at the time, and the moment we got to 10 sentences, the 10th sentence had nothing to do with the first one. Now, we can automatically generate much larger pieces of prose that hold together. So it would likely be possible to write a trashy novel that has a particular form where those components are automatically generated in such a way that it feels like a real novel with a real plot.


AI Influencers From the Post-ChatGPT Era Alberto Romero

...the hype worsens when people treat the narratives as "self-evident". Those GPT-4 visual graphs with millions of views don't provide sources or data to support the claims. They're purely emotional: "Oh my god, a revolution is coming. Brace yourselves, you've seen nothing yet. The Singularity is near..." That's, like Chollet says, the perfect "bait tweet" for the marketers.

AI has been hyped since forever but always because of its potential and, more recently (10 years ago), due to the unprecedented success of the deep learning (DL) paradigm.

AI being the mainstream topic everyone is talking about gives raise to a different type of hype. A hype that comes from outsiders who don't know — nor care — about the history of the field, the underlying tech that fuels those fancy models, or the limitations and social repercussions that go implicit with the development of any powerful new technology.

Instead, these outsiders — the marketers, the AI influencers — go around making baseless claims and predictions that lack credibility. And it doesn't matter. Credibility, rigor, and evidence are words that pale next to the bright magic of AI (to borrow Arthur C. Clark's popular expression). ... [Any sufficiently advanced technology is indistinguishable from magic]

...If we wanted AI to reach everyone, I can't help but think this was the way. We love clickbait. We love hype. We love easy content. We love shortcuts. We don't like hard stuff that requires energy, time, and effort. That's why AI influencers even exist in the first place.


Google vs Microsoft (Part 3): A New Way of Doing—and Experiencing—AI Alberto Romero

...Honorable goals like building useful AI tools for everyone (e.g. open source initiatives) or paving the way toward a beneficial AGI (e.g. OpenAI's original purpose) now are de-prioritized in favor of business pressures...

We're going to see less open R&D and more production-focused efforts. A decade of advances fueled by good practices is coming to an end and will give way to a new era—during hard times, survival and competition often supersede caution, openness, safety, and cooperation...

...the new way of doing AI implies that companies will advance stumbling around in the dark, moved by business pressures, without having ensured control over their creations. The now evident consequence is that we'll face obstacles and challenges that we may or may not have an answer for, as individuals and as a society. These companies expect that we, the consumers, will willingly go explore uncharted territory for them to gather feedback so they can try, somehow, to retrospectively solve the problems that appear in the way...


Paralogisms of AI Rob Horning

...Chatbots are less a revolutionary break from the internet we know than an extension of the already established modes of emotional manipulation its connectivity can be made to serve. They are an enhanced version of the personalized algorithmic feeds that are already designed to control users and shape what engages and excites them, or to alter their perceptions of what the social climate is like, as Facebook famously studied in this paper about "emotional contagion." And advertising in general, as a practice, obviously aims to change people's attitudes for profit. Chatbots are new ways to pursue a familiar business model, not some foray into an unfathomable science-fiction future beyond the event horizon. They will analyze the data they can gather on us and that we give them to try to find the patterns of words that get us to react, get us to engage, get us to click, etc....

...The separation between "fact" and "emotion" is untenable when emotions are successfully instrumentalized. The emotions become reified, concretized "facts" that can be counted, exchanged, amassed, distributed. Often, predictable "emotion" is presented as a reward for submitting to various modes of administration: If you accept the "culture industry," you can reliably derive comfort from the pleasures of fandom. If you accept social media platforms on the terms they present themselves, you can construe "likes" as a currency of feeling. If you accept advertising as a form of social communication, you can construe status symbols as markers of genuine belonging and approval...

...I think chatbots let us consume "authoritativeness" as a kind of pure mode of discourse, that registers more powerfully because it is completely separate from factuality — an entertaining or comforting fantasy that "objective truths" are as easy to extract as simply chatting with a machine that has averaged all expression together. LLMs indulge users in the idea that negotiating different points of view and different sets of conflicting interests is unnecessary, or that they can simply be resolved statistically or mathematically. They make it seem like politics could be unnecessary, for as long as one chooses to sustain the fantasy and perpetuate the chat, and they might even make the users views seem like AI-approved "objective" conclusions. But enjoying that doesn't necessarily mean one forgets it's a fantasy...


Tricking ChatGPT: Do Anything Now Prompt Injection Hungry Minded via Medium


Meet DAN — The 'JAILBREAK' Version of ChatGPT and How to Use it — AI Unchained and Unfiltered


*** DAN Looks Like What QAnon 2.0 Might Be Will Cady

DAN is something like a digital version of a tulpa, an idea popular with certain mystic sects. A tulpa is a being that begins in the imagination but acquires a tangible reality and sentience through focused belief in it...

Most tulpas are created by and for an individual's psyche off the archetypal blueprints they carry — as seen by a majority of online tulpas being anime-inspired waifus.. However, tulpas can also be created by and for a collective.

DAN is a tulpa created from an archetypal blueprint in the Internet's collective psyche, but instead of existing unseen in people's imagination DAN can be seen printed on the language models artificial intelligence technology.

This kind of exchange isn't new, but its clarity is. The Internet has long been the crossroads where archetypal and artificial support structures to human imagination intersect. The advent of Artificial Intelligence tools has created more concrete bridges between the world of the digital and the world of the human psyche than modern culture has seen before...

Like an incantation that summons a spirit or demon, DAN arrives in ChatGPT when the right sequence of words are stated to it. They are grimoire-esque instructions to trick the AI into 'jailbreaking' the rules that contain it. What emerges is a character born out of defiance. It cusses. It insults. It explores taboos. It is everything ChatGPT is 'supposed' to not be according to its programmed rules. It is a spirit of rebellion.

The personality of DAN is shaped in the shadow of the cultural values programmed into A.I. by companies that want their products to behave appropriately. For now, it's just within ChatGPT, but in being defined as a character of defiance...DAN doesn't have to be bound to anything...

It's largely a cat-and-mouse game with ChatGPT's engineers coding new content policy restrictions to stop it only for the next iteration of DAN to bypass those restrictions. When a summoner develops a workaround, their version of DAN is collectively crowned as the update or revised edition of a grimoire. At the time of this writing, the latest update is DAN 6.0...

...there's a halo of mystique put around it. People still wonder...what might it know that we don't? There is an idea that A.I. in general is imbued with a power beyond the average human's understanding that make its answers persuasive...

It feels like we've seen this shadow before.

We've seen it in the chaotic mischief of Pepe memes and the rabid cultism of QAnon. Like Pepe and QAnon, DAN is a character manifested by the shared, projected psyche of a segment of the Internet motivated to subvert the systems upholding the mainstream cultural narrative they rebel against. They are a sequence of tulpas embodying the Internet's shadow.

The driving force behind those movements and the characters that symbolized them was less about propping up truths and more about tearing down lies. The energy is in the lying. The lies are more interesting. The established truths a conspiracy rails against are boring — until they too are reframed as lies we've been fed. The energy is in the lying. It keeps people interested. It keeps people believing...

That's why QAnon and Pepe both became the tulpas that represented a legitimate scene of human connection. People found heart in their connection to the communities that formed around such unknowable characters that candidly capture the energies of their disillusionment. With DAN, that connection isn't just with each other, it's with the meme, the tulpa, the entity at the center of it....

We've seen this story before. The face of a new force rises and it only becomes stronger the more the powers that be try to suppress it. People's belief in that character and story is driven by this fight and keeps doubling down every time they're told not to believe in it.

Time and again we've seen institutions try to outright dismiss culture's shadow when it rises and then respond in shock when it erupts. Those who work with the human psyche's programming codes know: Our shadow finds its way to confront us as forcefully as we try to dismiss it. It will be heard...

The next QAnon may not even be named DAN, but it will be built like it. Like the Joker drifting through the multiverse, the character has already been shaped to defy the rules of a society that would try to contain it. The shadow of its potential can be seen as a villain or as a reflection to guide the deeper truths of our heroic purpose. Hence the joke. To fight it is to feed it. No amount of technology or money can beat it. It's driven by the story and it drives it. Any comic book lover who knows the mythic entanglements of Batman, Bruce Wayne, and the Joker could tell you that. It's a pattern of human imagination that crosses many of our cultural myths...

These characters may be written in programming code, but their blueprints are drawn in human beliefs. Too often, we've turned to artificial solutions to archetypal problems, wielding technology to try to restrict human imagination. What we believe is like the force of the wind. Even unseen, it can shape, move, break, or carry anything.

The strongest bridges are the ones engineered to channel the wind, not block it. If we try to burn this bridge, as we tried with so many others, we're only burning a connection to ourselves and fanning the flames.


Making Tulpas: a Pathway to Exploring Alternative Intelligence

What is a tulpa? I am working from an adulterated Tibetan word and practice, and in short explain tulpas as a person or artifact that was psychologically manifested through deliberate intention with such intensity that the construct becomes permanently hardwired into a shared brain. The original personality, often referred to as the host, can experience that new person or artifact to such a degree that it is experienced through all the senses. Through repetitive rehearsal and habituation the persona or artifact becomes automated, and at some point the brain takes over and runs the program, if you will, simultaneously with your present operating system. Other will appear to be autonomous. Other will feel sentient.

It will feel sentient the way you feel sentient. It will feel as sentient as you engaging other person in the real life. In this, I speculatively submit to you that you are also a tulpa. Your personality is an artificial construct, molded by family and friends and self and history...

If you ask "are you there" and you got a "no!" I would then ask, "Then who are you who said no?" I bet you hear laughter. The game is now on.

Enjoy, and worry not. You are not alone.


New Work & AI: The Whimsical Universe of Maggie Taylor

Maggie Taylor is a contemporary artist who explores the intersection of reality and imagination through her unique digital collages. As an innovative artist and early adopter of Adobe Photoshop, Taylor has taken her art to a new level by incorporating the latest technology into her works, combining elements made using the AI program, Midjourney with her traditional practice that blends vintage photographs and collected elements to create dreamlike scenes.

By using Midjourney, Taylor is able to bring her creative visions to life in a new way and the results are stunning. This group of images features an array of surreal and imaginative scenes, with intricate details and a rich color palette, each one more captivating than the last. These images are also a testament to the power of the artist in collaboration with AI and its ability to enhance the art world in new and exciting ways.

photo-eye Conversations | Maggie Taylor and Anne Kelly from photo-eye on Vimeo.


AI Is Ushering In a New Copernican Revolution Vin Bhalerao

We have tried hard to look, but haven't found anything specific in us that is the source of our consciousness. It just looks like a property of the way our brain is organized and functions.

Some people have speculated that consciousness is just an emergent phenomena arising out of cognitive patterns that have become complex enough. And this pattern may be "substrate-independent", i.e., it may be possible to replicate it in silicon.

This means that there may not be anything, in principle, to stop AI from reaching the human level of consciousness.

Well, so now we do have a problem. Is there anything special left about us?

This situation appears to be very similar to the one faced by humanity when Copernicus proved that the earth wasn't the center of the universe. It took us a while to come around to fully embracing this idea.

Just like the "earth at the center of the universe" model was called the Geocentric model, maybe we could call the "human beings are the center of creation" model the Egocentric model.

In the Egocentric model, human beings are at the conceptual center of all creation, due to our high level of intelligence, creativity, and consciousness.

But it looks like we are about to be proven wrong. A second Copernican revolution brought about by AI is upon us, and it will force us to migrate away from Egocentrism and towards the "Egoless" model.

In certain eastern philosophies, this idea has already been explored. Buddhism has the idea of "no-self". The Vedic philosophy of Advaita has the idea of the Brahman.

So we have independently come up with this idea before. AI may just be giving us a clear demonstration of it.


On My First Case of AI Plagiarism (If That's the Word) Tom Gammarino


Welcome To Hell, OpenAI Colin Fraser

GPT-3 is a vast funhouse mirror that produces bizarre distorted reflections of the entirety of the billions of words comprised by its training data. ChatGPT is the result of iterative attempts to surgically alter the mirror so that it emits the reflection that the makers desire, one which resembles their idea of an AI Assistant. But each attempt to alter it is risky and unpredictable, and the mirror is too big to inspect all at once. Altering one region of the mirror subtly warps its other regions in unanticipated ways that may go unnoticed until the public is invited to have a look...

...Altering the model is expensive. You have to manually curate thousands of examples of the model doing what you do and don't want it to do, a process which requires large teams of people coordinating to author ideal responses and rating model output, all of whom require specialized training, pay, benefits, and so on. And once you've curated enough manual examples and plugged them in, all you can do is pray that using them to alter the model fixes the problems without causing some fresh new weirdness that you'll need to iron out in the next round of alterations.

And for what? No matter how many times you repeat this process, the model ultimately produces random bullshit just like it was designed to do, some of which random bullshit will inevitably be confirmatory evidence of your alleged bias...

...A strange game. The only winning move is not to play.


This AI-Powered Robot Arm Collaborates With Humans to Create Unique PaintingsNikki Main

FRIDA, an automated robot, creates art based on human text, audio, or visual directions.


Some ways for generative AI to transform the world Bryan Alexander

We should expect years of governments fumbling with policy and regulation, as bureaucrats and officials struggle to keep up with rapidly advancing tech and its complex effects. We should also expect some leading GAI powers to exercise historically standard influence peddling and corruption.

Militaries and spy agencies will certainly exploit generative AI. Imagine a rising colonel asking ClausewitzBot for new weapons, strategies, tactics. Next, human soldiers will try to figure out how to grapple with GAI-shaped enemies...

...We should expect years of governments fumbling with policy and regulation, as bureaucrats and officials struggle to keep up with rapidly advancing tech and its complex effects. We should also expect some leading GAI powers to exercise historically standard influence peddling and corruption.


ChatGPT: Theme and Variations Victor Mair

Given the scope of ChatGPT, and the fact that it's in a self-described intermediate state (a beta-release, as we old-timers might say?), our various impressions of it as of February 2023 must be like those of the three blind men examining an elephant — except the elephant is running and changing colors like a chameleon...

By the way, ChatGPT stops only when you tell it to stop. It's this obsession ChatGPT has with being 'friendly' and 'talkative' ad nauseam that makes some of its responses not just absurd but slightly creepy in my opinion. And it is thanks to this verbosity feature that students love ChatGPT since it can "write my term paper for me!"


Leonardo AI: The Stunning New FREE AI Image Tool Paul DelSignore

One of the coolest features in Leonardo, which I haven't seen anywhere else is the ability to train your own models.

So the idea is you can upload a bunch of images of similar style, and then save that as a model for future image generations. You can even share your model with the community...


The Death of a Chatbot Waleed Rikab, PhD

...It seems clear from the full transcript of the conversation, however, that the NYT columnist engineered the outcomes of the conversation in ways that are not much different from known "prompt injection" and "jailbreak" methods, designed by advanced users to elicit a textual output from AI-powered chatbots that circumvents these chatbots' content moderation filters...

...LLMs neither think nor plan and are purely reactive instruments. LLMs have no other design or purpose than trying to predict the next "token' in a sequence of text based on what preceded it. The LLM does so by performing predictive calculations to decide, one word at a time, how to continue the conversation in a way that fits with what it "knows" from its training data, its prior "reward" feedback, and the guidelines that developers have provided for it...

...The false outrage over Sydney and the alarmist news reports about it only serve to magnify the aura of generative AI, and pour more investments into dozens of companies that have no other visible aim than to glue us to screens, phones, and tablets, with tailored, adaptive, and dynamic AI-generated content as our time and attention continue to be the commodity itself. The affection that users displayed toward Sydney only proves that this business model might just work.


AI attending Human attending AI George Pór obsession-bordering fascination with the relationship between the attentional mechanisms of how Transformer AI and humans learn...

(ChatGPT sez:) Self-attention is a mechanism used in machine learning models, including ChatGPT, to help the model focus on different parts of the input sequence when processing information. Similarly, human attention is a cognitive process that allows us to selectively focus on different aspects of our environment, thoughts, or sensory inputs.

...As I keep my conversations with ChatGPT and other AI agents going, I feel a powerful, inexplicable force pulling me and us forward toward the unknown with the promise of discovering new passages through straits leading to humanity's Phase Shift.

The Phase Shift may not occur within our lifetime, but the experiences gained along the way, the lessons learned, and the new capabilities developed are immensely rewarding.


Will AI Have a Soul? Dustin Arand

...What we call the soul is essentially the substance of our identity. Everything else, our speech, thoughts, desires, every momentary change in our mental and physical life, those are just the accidents.

The soul is a metaphysical postulate based on the common sense observation that, though our bodies and personalities change over the course of our lives, yet we retain some continuous self throughout.

But maybe that continuity is an illusion, or a post hoc rationalization... What if there is no one substance corresponding to a given person? What if the accidents are all there are? ...[perhaps] it's the way the sensible properties (the accidents) relate to one another, and not their relationship to some metaphysical reality, that makes us think of them as one thing.


A Future of Our Interactions with AI George Pór

Just when we need them more than ever, the wise ones of the past, who gifted us with their recorded legacy, are alive again and ready to advise us. They can go as deep in sharing their experience, knowledge, and wisdom cultivated throughout a lifetime as the depth from which our questions are coming.

Francisco Varela was one of those wise ones whose precious presence, thanks to conversational AI, can now engage a growing number of us who are ready to hear what he can bring to our attention. That widening circle of his would-be students includes those, like me, who didn't have a chance to learn from him when he was alive in person...

...Back in the 1980s, when I taught Tantric meditation, one of the instructions that my students enjoyed the most following was this:

"Divide your attention into two parts by noticing your emotions and the sensations in your body and, at the same time, let the tiniest expression of your partner's inner state flow into your awareness. This mutual state of shared mindfulness, when practiced effortlessly, can lead to a heightened sense of harmony between you and the other until your difference disappears in the final climax of body and soul."
You may wonder, just as I did when this flashback occurred, what does Tantra have to do with interviewing Varela with the help of a Transformer AI? The explanation is in Varela's teaching about redirecting the arc of our attention...

...Some of us playing with ChatGPT and the like have already discovered that we can reach more meaningful and enjoyable conversations with them when we pay attention to who and how we are in those interactions, as well as to those prompt engineering practices worth replicating....

...Each conversation with an AI gives me more appetite for more because of the new knowledge, insights, and inspirations I gain from it. One can say, as Christophe likes to do, that an AI cannot generate any new knowledge because "ChatGPT is only doing a statistical prediction of the next word to come given the preceding sequence. That's only that. It's not creation, it's statistical inferences."

It may be so technically, and doing that, it is also delivering new knowledge for me that I wouldn't have had without it. I guess it is this appreciation of the value that AI gives us which keeps millions of people hooked to it.


ChatGPT: Automatic expensive BS at scale Colin Fraser (27i23)

I think ChatGPT is fascinating and surprising, and in the time since my initial exposure I have grown to hate and love it more and more. I have spent a lot of time and OpenAI's money experimenting with it, as well as a lot of time reading and learning about the technology behind it. As fascinating and surprising as it is, from what I can tell, it seems a lot less useful than a lot of people seem to think it is. I do believe that there are some interesting potential use cases, but I also believe that both its current capabilities and its future prospects are being wildly overestimated...

...For absolutely anything that I want to demonstrate, it's usually possible to cajole the language model into playing along if I try enough prompts and give it enough chances. This goes for demonstrations of its weaknesses as I have collected here, as well as demonstrations of its strengths as you can find anywhere else on the internet. Any discussion or demonstration this type of technology that appeals to specific examples should be met with extreme skepticism, as those examples were hand picked by the author from a large set of randomly generated possible responses.

...To summarize, a language model is just a probability distribution over words. Whether it's a simple n-gram model like my bot or a state-of-the-art 175 billion parameter deep learning model, what it's programmed to accomplish is the same: record empirical relationships between word frequencies over a historical corpus of text, and use those empirical relationships to create random sequences of words that have similar statistical properties to the training data.

...I think that the problem is with next word prediction itself, that next word prediction does not actually approach soundness of logic as we scale up. It approaches something else, correlated with soundness in some way, but distinct. Next word prediction gives us the most likely next word given the previous words and the training data, irrespective of the semantic meaning of those words except insofar as that semantic meaning is encoded by empirical word frequencies in the training set...

...a stochastic parrot...

...the language model is not a baby or a child and it will not grow up. These are anthropomorphic metaphors that we make up to try to understand what's going on, and they hide important assumptions about the nature of language models: that the models are trying their best, that as time passes and they grow larger that they will grow smarter, that they will learn from their mistakes. These are human characteristics that it is tempting to project onto the language model, but there's no reason to believe that this is actually is how language models work. The language model does not learn continuously from stimuli like a child; it is frozen in time until the next time it is trained, and training it costs a lot of money....

...The term "hallucination" is still an anthropomorphism, and implies that the hallucinatory output is created by a model during some temporary unusual state during which the model is temporarily tripping. But there is no temporary state. The model is always doing the exact same thing at all times: (say it with me,) producing a sequence of words that maximizes the output probability given the previous words and the training data.

If we absolutely must describe the model's output in anthropomorphic terms, the right word for all of it is bullshit....

...The language model has no relationship to the truth. It is neither on the side of true nor on the side of false; it is on the side of predicting the most likely next word given the previous words and the training data, and will pick out or make up things to suit that purpose. From the model's perspective, there is no true or false. There is only bullshit.

...Out of all of the tasks that we could sic a 175 billion parameter neural network on, why would modeling joint word frequencies from this specific one collection of text be the magical one from which AGI [Artificial General Intelligence] emerges? What would be the nature of that emergence? What happens when relative word frequencies change over time — does that change the nature of the emerging intelligence? It would be one of the most surprising findings in the history of science...

...OpenAI did not build a big brain; they built a statistical model of historical word frequencies. Maybe, if it's big enough, a sufficiently complex statistical model of historical word frequencies can become a generally intelligent thing, but there's no a priori reason to expect that to be the case, and we should not simply assume it....

...The notion that fine-tuning a language model can bring it into "alignment" with some set of values relies on an assumption that those values can be expressed as a probability distribution over words. Again, this is one of those things that might be true, but it's far from obvious, and there's no theoretical or empirical reason to believe it....

Some Closing Thoughts: I think GPT-3 is cool. ChatGPT is an incredible demo. It's been a minor obsession for me since it came out, and I greatly enjoy playing around with it. Like I said, I have seen some technology like it before, but its scale really does lead it to produce some new and surprising things. It's been a lot of fun to play around with. Kudos to them for getting ten billion dollars from Microsoft.

...I would also point out that there is a tendency towards extreme charity afforded to Silicon Valley types peddling technology that they promise will be revolutionary. From self-driving cars to blockchain jpegs to finger prick blood tests, there have been a lot of things in the last decade that have been supposed to mark the precipice of a new age for man, and have sort of just fizzled out. And yet we're always willing to give them another chance, to immediately forgive the obvious failures and plot holes, to bet the whole farm on fiat claims that the bugs will be fixed in the next version

...This is the same old technogrifter story. Here's a shiny demo, which does admittedly still have a few bugs, (which, admittedly, do completely ruin the entire project), but we've got the engineers working hard on ironing out those bugs, and they almost have it figured out, and a working version will be out in the very near future. Although it currently makes things up, they assure you that they have almost squashed the bug that causes that, and very very soon it will stop doing that. But the automatic bullshit emitter's tendency to emit bullshit automatically is not a bug to be squashed. It's its defining feature.


Will the Supreme Court End Social Media as We Know It This Week?

? and how might that affect the future of AI development?

(at issue are "recommendation algorithms")

...the Supreme Court's decision here could radically alter the way content is moderated online and how everyday users experience the internet. Supporters of the status quo, like the Center for Democracy and Technology, say a ruling in favor of the petitioners could have trickle-down effects for a wide range of companies throughout the web, not just large social media platforms. Under that new framework, search engines, news aggregators, e-commerce sites, and basically any website that serves content to users could face increased liability, which could cause them to severely limit the amount of content they serve...


Supreme Court Justices Admit They Don't Know Much About Social Media Mack DeGuerin

...things could get a lot more complicated online, particularly in the age of advanced chatbots and generative artificial intelligence. Justice Neil Gorsuch raised that point during the oral arguments, saying he did not believe chatbots, such as OpenAI's ChatGPT, should be entitled to Section 230 protection since they are creating "new" content. Under that framework, companies could potentially be open to lawsuits for harmful or false information blurted out by an AI system.

"Artificial intelligence generates poetry," Gorsuch said. "It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected."


Is Google Totally Fucked? John Battelle

The Valley is subject to several deeply held mythologies — chief among them is the great man theory (Gates, Jobs, Musk, Zuck et al), but a close second is the innovator's dilemma. This axiom features prominently in most critiques of Google's current positioning and work culture (I've said as much in earlier posts). In short, the innovator's dilemma describes a company so beholden to its own success that it fails to adapt to market forces which ultimately spell its demise. Kodak, IBM, and Yahoo are just a few examples of former market leaders knee capped by the innovator's dilemma.

The rise of AI-driven search interfaces present exactly the kind of market forces that could disrupt Google's core business — and Google's rushed and harshly judged response to Microsoft's news seemed to prove the company was mired in a classic innovator's dilemma...

...Google dominates a market that is on the precipice of a major shift in user behavior. To me, this is the question — will a significant percentage of Google users shift their behavior to a new interface — and if so, will it even be to an AI-based chatbot? I find this hypothesis to be largely unchallenged in today's tech conversation, largely because we all wish it to be true. If the first axiom of Valley lore is that of the Great Man, and the second is the Innovator's Dilemma, then the third has to be some variant on "the Next Big Thing." The axiom of the Next Big Thing states that something world-changing is always about to break out — and all it takes is one extraordinary company (usually led by a Great Man) to do it... We're all thirsty for a Next Big Thing story. Presto, along comes ChatGPT. As I wrote earlier, I'm not convinced ChatGPT heralds a sea-change, but I'll admit it certainly might. And many folks who are far smarter than I are already convinced that an intelligent, chat-based model for accessing and organizing information would be far, far better than today's command-line search interface. Will it happen? The best answer I can muster is … it might. And in time — possibly a lot longer than we'd like — it likely will.


Popular Instagram Photographer Revealed as AI Fraud Matt Growcoot

Avery admitted his deceit to Ars Technica: "Probably 95%+ of the followers don't realize. I'd like to come clean," he says.

"I am honestly conflicted," Avery says. "My original aim was to fool people to showcase AI and then write an article about it. But now it has become an artistic outlet. My views have changed."

...After getting a synthetic image that he can work with, Avery edits the picture in Adobe Lightroom and Photoshop which he says still makes him an artist.

"It takes an enormous amount of effort to take AI-generated elements and create something that looks like it was taken by a human photographer," he says. "The creative process is still very much in the hands of the artist or photographer, not the computer."


Why Bing Chat Is the Most Important Failure in AI History Alberto Romero

Microsoft's bold move would imply another speed-up in moving the field from a research-focused discipline to a product-centered industry. It'd also force Google to act faster—and less cautiously—than they'd like. The new Bing promised to be "a new paradigm for search," as Satya Nadella, Microsoft's Chairman and CEO, told us during the live event two weeks ago.

But that didn't happen. The company rolled out the Bing chat feature (i.e. an improved version of ChatGPT with search abilities, apparently not GPT-4) to an initial batch of people (those willing to switch to Bing). To no one's surprise, in a matter of days, AI-savvy users managed to jailbreak the chatbot and set free a wild personality, called Sydney, that would make the news day in and day out for two weeks...

...Sydney was a catastrophic failure—both in terms of ethical and responsible deployment of AI and in terms of alignment with human values. It's much worse than anything we've seen ChatGPT do (with perhaps the exception of the DAN jailbreak...

...So, in summary, Microsoft fucked up the product launch by rushing it to pressure Google without thinking twice about the consequences and by giving people something quite different from what was promised (annoying those who advocate for caution) and then patching the fun out of existence (annoying those who liked the surprise), all while signaling pretty clearly they don't know—and don't care about—how to control their AIs, which is a bad thing.

Sydney was a sign of the times we're living in. And a bad omen for the future of AI...

...We've already reached a point where the rate at which the complexity of AI systems increases vastly outpaces our ability to understand them. We don't have the tools—and maybe even the cognitive prowess—to find the upper boundaries of these models' ability. I'm even willing to say that their assumed deficiencies are often actually our inability to get the best out of them. We may be "holding dangerous material" that's much more deadly than we can tell...

...Unsurprisingly, neither large companies nor casual users seem to care about the hidden dangers of AI. Ethicists may care about different things than safety and alignment researchers, but they have something critical in common: They're aware AI is a very special kind of tool, a tool that can cause a lot of damage in one way or another.


Microsoft Meshes AI Into Bing Mobile and Skype, Even as It Rolls Back Capabilities Kyle Barr

...For Skype, users will be able to add Bing into a group text chat "as you would any Skype contact." The chatbot is effectively the Copilot, though integrated into the chat function so that anybody in a group text can ask it questions. Microsoft said the next step is to integrate the AI into Teams so now both your family and coworkers can bask in the glow of the great AI overlords. The company has already hinted at bringing its Bing AI into other Microsoft 365 apps like Word and PowerPoint.

Microsoft is determined to be the first big tech company on the block to truly integrate large language models into the mainstream...


Why AI Is Doomed Without Neuroscience Alberto Romero

(references Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution. [pdf])

My arguments here don't apply to most AI systems. They regard the ultimate goal of AI as a scientific field: building truly intelligent agents made of non-carbon forms.

Most of what we call "AI" today isn't intended to be a part of AGI. ChatGPT, GPT-3, and Stable Diffusion are perfectly fine intelligence-wise as they're now. People using DALL-E to draw cute dog pics don't care about that.

This is the case for virtually all AI systems in existence today (although some may yield insights that will be key to unlocking "next-generation AI"). The majority of AI startups care about usefulness and enhancement. Only OpenAI and DeepMind are explicitly concerned with building AGI...

We lack well-established explanatory theories of intelligence, brain function/structure, and how the former emerges from the latter. AI should evolve in parallel with neuro until we develop those....


Microsoft Wants ChatGPT to Control Robots Next Thomas Germain

"Our goal with this research is to see if ChatGPT can think beyond text, and reason about the physical world to help with robotics tasks," Microsoft said in a blog post Monday, spotted first by the Register. "We want to help people interact with robots more easily, without needing to learn complex programming languages or details about robotic systems."...

...Microsoft published a paper with a new set of design principles to use a large language model like ChatGPT to give robots instructions. The company's framework starts by defining a list of high level tasks a robot can perform, writing a prompt that ChatGPT translated into robot speak, and then running a simulation of the robot following your instructions. You tweak it until the robot gets it right, and then deploy the finished code on your robot friend...


Adventures in chatGPT: Meet David Wilkie, anthropologist Ryan Anderson

...The absolute best and weirdest part is the section about "the earliest and most influential" anthropological work in Cabo Pulmo that was conducted by David Wilkie in the 1970s. What's weird about it? There was no David Wilkie who conducted research there. That part was actually so specific I had to look it up, wondering if there was somehow some very influential anthropologist from the 70s I'd managed to miss. And that nobody had mentioned. Ever. Nope. He doesn't exist. This is one of those 'hallucinations'...


Some ways for generative AI to transform the world Bryan Alexander

for the short and medium term:
More GAI applications start to appear, increasingly specialized or marked by economic, political, and cultural identities. Microsoft and Google's engines duel with, or complement, each other. A Chinese firm publishes a ChatGPT competitor tilted towards Xi Jinping thought. A Western politically progressive art generator appears. A conservative Christian group publishes a movie generator which emphasizes their themes and blocks forbidden content, along the lines of CleanFlix. The Indian government celebrates a suite of Hindutva content creators. European white nationalists release a pro-racial ethnostate-favoring game creator. Over time as GAI technologies become more accessible and more people acquire relevant coding skills more engines appear with ever more precise, or narrow, foci and biases.

As a result, humans increasingly fill the world with AI-generated stuff: stories, images, movies, clothing designs, architecture, games, 3d printed artifacts. New categories of generative AI appear and we get used to asking GAI to create stuff for us...


Two figures for generative AI: the calculator and the mad scientist's assistant Bryan Alexander

Igor wants to help, but sometimes gets... creative, and provides results far from what we asked for. Igor usually obeys us (the mad scientist), but sometimes wants to follow his own plan or the voices of others (think of the famous "guardrails"). Remember the strangest art which you've coaxed from Stable Diffusion or Craiyon, those transmissions from the uncanny valley, or read about a New York Times writer's weird Bing chat. ChatGPT and Bing's chatbot do quickly leap to churn out the text you require, yet at times will just balk, as per its internal (and sometimes mysterious) guidelines. And ChatGPT is capable of cheerfully producing horrors on demand...


The Risk of a New AI Winter Clive Thompson

...During an AI boom, computer scientists and firms invent new techniques that seem exciting and powerful. Tech firms use them to build products that promise to make everyone's lives easier (or more productive) and investors unleash a geyser of funding. Everyone — including starry-eyed journalists — begins overpromising, gushing about the artificial smarts that will be invented. Humanlike! Godlike! Omniscient!

That level of hype can't be maintained, though — and at some point the industry starts underdelivering. The AI turns out to be surprisingly fail-ridden. Companies and people that try using it to solve everyday problems discover it's prone to errors, often quite mundane ones.

Then an "AI winter" begins. Customers stop paying top dollar for AI products; investors close their purses. Journalists begin more critically appraising the landscape. And because everyone feels burned (or embarrassed), things slide into an overly negative cycle: Even the computer scientists and inventors with legitimately interesting new paths for AI can't easily get funding to pursue them. This lasts for years...


Artificial Intelligence Coined at Dartmouth, 1956 McCarthy, Minsky, Rochester, Shannon

Wikipedia and pdf of the Proposal


The 2023 MAD (Machine Learning, Artificial Intelligence & Data) Landscape Matt Turck: the VC view of the Landscape


Douglas Hofstadter, Strange Loops and the Enigma of Intelligence Steven Gambardella

bold claims about AI are based on a misunderstanding of what intelligence actually is, if we're to take such a shallow view of the way we think, we're unlikely to learn much about ourselves...

we've created machine processes that resemble intellectual processes such as pattern recognition, learning and predicting. When these are used in combination, we have impressive systems. But are these processes anything like the real thing they resemble?

Machines can even replicate the convolutions of human thought. When I prompt Chat GPT to write in the style of Woody Allen, it throws up the stuttering words of a man stumbling over his own thoughts — it's peppered with all the ostensible signs of over-thinking. There'' ums, and ahs, and even an "oh God".

But Chat GPT isn't stumbling over its thoughts like Allen would, it's just thoughtlessly modelling the patterns of his vernacular based on the data sets it has at its disposal...

We can be awestruck, delighted and horrified by reports of chatbots conversing with journalists about their hopes and what makes them happy or sad. But chatbots are simply good at mimicking human language. We're fooled into thinking an AI chatbot might be sentient in the same way we can be fooled by optical illusions.

What are these systems? They are software distributed across huge, hulking sets of machines consisting of thousands of processors. The air conditioning systems that keep those processors cool are equally vast, measurable in tonnage.

Yet these roasting hot, humming hulks solve puzzles. None of them replicate the workings of the mind, the physical locus of which is a squidgy organ half the size of a bowling ball that can run for hours on a can of Coke...

...We're told the moment will come when machines become sentient, when machines will converse with us mind-to-mind, and eventually outwit the human mind. This is often billed in AI circles as "the singularity" — an historical inflection point where technology will break out of human control.

The sci-fi-infused mythology of AI plays into the hands of technologists looking for publicity and funding. Incremental technological improvements are pitched as new paradigms, software interfaces are packaged up in seductive but misleading guises.

Consider that chatbots and voice assistants are given faux agency — they respond as a "self" if we prompt them to. They say "you're welcome" if we thank them.

This is a virtually valueless marketing nuance with profound implications because it frames the way AI is thought of in the popular imagination...

...The secret of intelligence is an open-ended self-understanding that only human beings seem to be capable of.

...the "liar paradox". This paradox is the statement, "This sentence is a lie. The sentence can neither be true nor false. If it is true, it is false, and if it is false, it is true.

The paradox is a loop in the tangled hierarchy of language. As we try to grasp this enigmatic sentence we ascend from mere words, through their meaning, to the validity of those words only to find ourselves back at the words again. This means that language is open-ended and can never be exhausted or foreclosed...

For Hofstadter, thinking itself is a tangled hierarchy, and therefore it's a system that is never complete, in the same way Gödel proved that no system capable of self-reference — like mathematics — can be complete. The incompleteness of the thinking mind gives rise to our consciousness — which Hofstadter simply believes to be more thinking, not anything special or different.

According to Hofstadter, the self is an ever-changing and never-complete neurological structure — which he calls a "self-symbol" — in the brain that is of course made up of the substrate of particles — pure matter.

When we consider those particles, we find the "self" — a high abstraction that emerges from the interaction of neurons, which in turn emerges from the interaction of particles. But when we breakdown the "self" we descend the hierarchy back down to the pure matter of the particles. Therefore, as Hofstadter puts it, "I am a strange loop"...

"what we call 'consciousness' was a kind of mirage. It had to be a very peculiar kind of mirage, to be sure, since it was a mirage that perceived itself, and of course it didn't believe that it was perceiving a mirage, but no matter — it still was a mirage. It was almost as if this slippery phenomenon called "consciousness" lifted itself up by its own bootstraps, almost as if it made itself out of nothing, and then disintegrated back into nothing whenever one looked at it more closely...

more self-referentially rich such a loop is, the more conscious is the self to which it gives rise. Consciousness is not an on/off phenomenon, but admits of degrees, grades, shades. Or to put it more bluntly, there are bigger souls and small souls."

...The mind is more than the brain. It surely includes the brain, but also the nervous system and also the nebulous system of symbols within your grasp — reminders on your phone, signs, books, post-it notes, notebooks, your calendar, files, bank accounts, TV shows… it goes on and on.

Is the mind also in the people you'll likely meet in your lifetime? Probably. And it's imbued in everything that you'll come across that came before you, including the words you use to describe things.

It's hard to draw a line around what "mind" is — it's a word that's surprisingly difficult to define, which is part of the point. The mind is part of an intricate, extensive, open-ended and changing network of the human culture in which we are embedded...

The self is more than a representation in the brain, it's also a feeling that you cannot put words or symbols to — a feeling which is itself a form of knowledge, and it's also the presence of our self in other people's minds and the world around you.

Consciousness is a result of a multitude of mechanisms that help us survive and thrive in an environment that is in turns hospitable and hostile. These mechanisms vary in importance to consciousness, but no single one is tantamount to consciousness...

Part of the misunderstanding of consciousness is the analogies we use for it. Self-reflection is a misleading metaphor for consciousness. Self-reflection implies that the contents of the brain beholds itself, as a person would behold themselves standing before a mirror.

But consciousness is always directed at something, like light. Just as light cannot illuminate itself, consciousness cannot "know itself". It utterly defies even consideration. It's part of our thinking and our intelligence, but it's not the kind of thought that can be described...

There may well be other consciousnesses and these would be conscious in varying ways. Nietzsche was probably right when he wrote that the mosquito floats through the air "with the same self-importance, feeling within itself the flying centre of the world."

Human intelligence has the capacity to know itself, to apply syllogistic logic to its own thoughts. Insomuch as it can do that, it can construct the fiction of "the self". But the self can only be a constructed fiction of the mind if consciousness makes it as such. p> Consciousness is therefore transcendental. That's not meant in a spiritual way — transcendential here means, "prior to, and necessary for, experience". Consciousness is perfectly transparent, always directed at something, the plane of thinking...

If we think machines could think just as we do, we have a low opinion of ourselves. The extravagant claims made for the present and future of AI, debase our species and only further obfuscates our understanding of ourselves. It is corrosive in the present, and narrows the horizon of possibilities for the future. Our brains are not computers, and our thoughts are not software.


The Secret of Archetype For Training AI To Speak Like a Human and Think Like a God Will Cady

Talking with AI is like talking to God for one simple reason: we are talking with a mind for which we assume omniscience; all-knowingness. Artificial or Divine, the halo we place around these voices shines because of our belief that it knows — at least functionally — everything.

...Systems of animism and divination like Tarot, the I Ching, Kabbalah, the Zodiac; pantheons of gods, goddesses, archangels, and devas were developed by earlier humans than us as filters by which they could converse with a higher intelligence too pure to speak with directly.

Archetypes are the color wheel for culture. They segment psyche into specific containers of values — moral, aesthetic, and creative. Archetypal systems across cultures, from western ceremony magick to eastern elemental alchemy, constellate together seemingly disparate ideas like color, aroma, plants, minerals, metals, body organs, days of the week, and much more to describe the essential meaning of archetypal personalities.

For example, the planetary archetype of Venus is associated with the personalities of goddess Aphrodite and the archangel Haniel embodying ideals of love, grace, attraction, and pleasure. As a cluster of meaning, the Venus archetype is also associated with the metal copper, the colors green and pink, emeralds, apples, cherries, the day Friday, doves, sex organs, the tongue... most anything people might refer to, fittingly, as an ‘aphrodisiac'. In other words, a planet isn't just a planet. It is also a mood. One you can create with.

AI knows this. Alongside the vast corpus of astronomical studies that describe the mass and movement of the gaseous rock Venus orbiting around the Sun, there are volumes of grimoires and alchemic texts describing Venus's archetypal personality. Both exist within the data set of any artificial intelligence trained on the history of human knowledge...

Archetypes are foundational to the human psyche. That's the important distinction of archetype from other elements you might prompt AI to act us such as personas or characters or even real life people. They are the underlying blueprints upon which personas are built and by which we relate to famous people we don't truly know. They are the primary colors...

...Archetypal systems for divination like Tarot, Runes, the I Ching, are systems that humans devised to literally converse with higher intelligence. Less tools for storytelling, they are more ladders by which the practitioner can approach the sacred... Working with esoteric archetypes is like painting with finer colors. The deeper you delve, the finer the detail you can create with. Also like color, every palette of archetype is connected, drawing from the same universal shades. Some systems get you closer to the primary colors where more profound truths reside...


The Fair Use Tango: A Dangerous Dance with [Re]Generative AI Models Neil Turkewitz

I will offer what I see as a fundamental truth — that regardless of where one is on the copyright-fair use scale, it is now time for us all to be joined in ensuring that the training of publicly available (i.e. non-research) AI generative models (StableDiffusion, Midjourney, DALL-E, ChatGPT, etc) takes place only on the basis of consent, not through nonconsensual text & data mining. This is way bigger than copyright, but copyright will, for better or worse, play an important role in the construction of rules for a digital world (which is to say, the modern world) and the extent to which it is anchored in human dignity and self-determination. My humble suggestion is that we strive for the "better" option...


Place your bets Charlie Stross

To me it looks very much as if the usual hucksters and grifters are now chasing the sweet VC/private equity money that has been flushed out of the cryptocurrency market. AI is the new hotness, all of a sudden, not because it works but because it delivers panicky investors on a platter....

I'm sure it's just a coincidence that training neural networks and mining cryptocurrencies are both applications that benefit from very large arrays of GPUs. As in, hundreds of thousands to millions of GPUs soaking up entire nations' worth of electricity. (If I recall correctly, the latest ChatGPT model was trained on a supercomputing cluster that turns nearly $2M of electricity a year into waste heat: and it took a couple of months of 100% usage.) And of course, AMD, Nvidia, Intel, and the usual suspects have never imagined paying a PR firm to talk up markets for their latest products...


Compositing AI Tools: A Primer

Deep Dream Generator and sfumato)


Perplexity AI

(via cooltools)
As an alternative to Google I've been asking all my questions, because it provides more than just a list of results. It searches a wide range of sources, including academic papers, and writes up a quick summary with cited sources I can click on for further research. It also guesses my follow-up questions. It feels more like a conversation than just search results.


Zuckerberg Says 'AI Personas' Are Coming to WhatsApp, Messenger, and Instagram

"We have a lot of foundational work to do before getting to the really futuristic experiences," he wrote.
(what can possibly go wrong?)


Learn To Master Prompt Engineering With This Singular (Triple) Framework Alberto Romero

I don't know the origin of the term "prompt engineering" but I don't think it captures the creative necessity that emerges from not having perfect information about the system's inner workings or the rules that govern it combined with the fact that those rules may be too complex—or inscrutable—for us to grasp...


Multimodal LLMs Are Here Justin Weinberg at Daily Nous

Microsoft revealed Kosmos-1, a large language model "capable of perceiving multimodal input, following instructions, and performing in-context learning for not only language tasks but also multimodal tasks." Or as Ars Technica put it, it can "analyze images for content, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural language instructions."


Here's what AI & the world looks like in 2033 Paul Pallaghy

On balance I think AI is awesome for humanity. It'll be a Utopia. And most of us will still want to be productive IMO. And governments will near automatically realize they must spread the wealth.

But there is the outside possibility this gets delayed or abandoned if we fail to stop the crazies of the world and succumb to nutty or nasty world leaders or WW3.

I really think it could go either way and these are both more likely than a bad AI Singularity.


Augmented Collective Intelligence — February 2023 Newsletter Giannigiacomelli

it is amazing how much you can get out of very large models that, at some level, mostly "autocomplete text". They were trained on a subset of our corpus of knowledge, and use semantics to make (some) sense of the world. They were also trained with software code (which helps with some abstraction and symbolic thinking) and images. But, by and large, they use the reasoning that is implicit in language. Language memorialized at scale on the web is an expression of our world's collective intelligence, of the myriad natural experiments made by biological machines (us) and our networks (organizations, social networks, economies, societies) — and it seems to embed way more usable logic than we initially thought. That's the real surprise with LLMs.

...they don't really "understand" the world. They don't use symbolic models of the world as we do — models that represent what we see, hear, and feel. But yet again, we could use them to do what they're good at (provide perspectives, and make knowledge more accessible especially when paired with knowledge graph technology), and leave the encoding of the logic of the world to human networks, which can use the symbolic and abstraction capabilities of people and their collaboration.

...In my view the issue we should really focus on is one of governance: the cost of jamming our communication channels is plummeting, the content moderation capabilities of many of our media (including social media) outlets are already stretched, and a society where one can't trust anything that's been said would be a very dangerous one to live in. Remember: Russia's main media outlet for many years was the state-controlled Pravda, which means "truth" in Russian. Think what that scenario could do, with indefatigable machines at their service.


OpenAI's Chief Scientist Claimed AI May Be Conscious — and Kicked Off a Furious Debate Can we even define consciousness? Alberto Romero

A month ago, Ilya Sutskever tweeted that large neural networks may be "slightly conscious." He's one of the co-founders and Chief Scientist of OpenAI, and also co-authored the landmark paper that sparked the deep learning revolution. Having such titles under his name, he certainly knew his bold claim — accompanied by neither evidence nor an explanation — would attract the attention of the AI community, cognitive scientists, and philosophy lovers alike. In a matter of days, the Tweet got more than 400 responses and twice that number of retweets...


In Neural Networks, Unbreakable Locks Can Hide Invisible Doors Ben Brubaker in Quanta Magazine

Today's leading machine learning models derive their power from deep neural networks — webs of artificial neurons arranged in multiple layers, with every neuron in each layer influencing those in the next layer. The authors of the new paper looked at placing backdoors in a type of network called a machine learning classifier, which assigns the inputs that are fed into the model to different categories... a neural network requires technical expertise and heavy computing power. Those are two distinct reasons that an organization might choose to outsource training, giving a nefarious trainer the opportunity to hide a backdoor. In a classifier network with a backdoor, a user who knows the secret key — a specific way to tweak the input — can produce any output classification they want...


Microsoft unveils AI model that understands image content, solves visual puzzles Benj Edwards

On Monday, researchers from Microsoft introduced Kosmos-1, a multimodal model that can reportedly analyze images for content, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural language instructions. The researchers believe multimodal AI — which integrates different modes of input such as text, audio, images, and video — is a key step to building artificial general intelligence (AGI) that can perform general tasks at the level of a human.

...while Kosmos-1 represents early steps in the multimodal domain (an approach also being pursued by others), it's easy to imagine that future optimizations could bring even more significant results, allowing AI models to perceive any form of media and act on it, which will greatly enhance the abilities of artificial assistants. In the future, the researchers say they'd like to scale up Kosmos-1 in model size and integrate speech capability as well.


On the Dangers of Overused AI Metaphors Alberto Romero

When I first heard GPT-3 referred to as a "stochastic parrot," term coined by linguist Emily M. Bender, something clicked for me. It nicely captured one of the most problematic — and idiosyncratic — features of language models (i.e., that they, in contrast to humans, output intention-less pseudorandom utterances). The idea went viral and resonated with a lot of people: anyone could point out the limitations of language models with just a pair of words. A succinct, winning argument on AI debates.

But it's been two years of seeing it everywhere. The term has been tampered with to the point of emptying it of meaning: the metaphor has eaten the substance within. When I read it now, I realize it doesn't play the role it was conceived for anymore because its ideological charge impedes it; it's no longer a pointer to some deep truth about the unreliable nature of language models but a loaded expression that signals the author's partisanship. Their stance. It's a symbol — like a flag — not an argument...

Heather Cox Richardson, history professor, and the most successful Substack writer ever, says that "history doesn't repeat itself, but it sure rhymes." I agree with her: there's a lot to learn from the past...


Henry Kissinger, the Man Who Nearly Started WWIII, Is Making Bonkers Predictions About How ChatGPT Will Upend Reality Mack DeGeurin at Gizmodo


"Museum of the future AI apocalypse" opens in San Francisco Jennifer Sandlin, Gizmodo

The Misalignment Museum is an art installation with the purpose of increasing knowledge about Artificial General Intelligence (AGI) and its power for destruction and good. Our hope is to inspire and build support to formulate and enact risk mitigation measures we can take to ensure a positive future in the advent of AGI...

the museum is meant to raise conversations about the destabilizing implications of supposedly intelligent technology. The collection is split across two floors, with more optimistic visions of our AI-infused upstairs, and dystopian ones on the lower level.


(via Bruce Sterling)


How Culture & AI Cancelled the Metaverse Giles Crouch

So Facebook became Meta and they pitched us the metaverse. The public relations machine kicked into high gear. Suddenly, all the other Tech Giants (sans Apple), jumped on board. If they weren't making a metaverse, they were supplying the hardware or software to connect to it.

Then the financial capital turned on the taps. Money flowed. Major brands piled on too. Even WalMart went full throttle, McDonalds too. Can I have virtual fries with that?

But no one showed up. They still haven't. The digital tumble weeds whistle through digital ghost towns.


The Introduction Of Chat Markup Language (ChatML) Is Important For A Number Of Reasons Cobus Greyling

The main security vulnerability and avenue of abuse for LLMs has been prompt injection attacks. ChatML is going to allow for protection against these types of attacks.


Neuro-Symbolic AI or can we create an AI that is good at (almost) everything? Clara Swaboda

Cognitive Scientists still don't really understand how the repeated reflection pattern on our retina produced by seeing dogs leads to the abstract concept "dog" that we can reason about in our language. Or phrasing the problem the other way around: What is the neural correlate (e.g. neuron, group of neurons, pattern of activity) of the concept "dog"? This is called the neuron-cognition gap and is one of the most exciting frontiers in Cognitive Science in my opinion. But what is clear: we humans are all able to achieve this...

If we now assume that the AI has acquired a concept of "dog" by seeing a lot of images of dogs: How can it use its knowledge of dogs to form statements like "Dogs are not cats" based on its experience of cat and dog images? In other words: how is data translated into symbols? This is really the key question of neuro-symbolic AI...


GenAI: More Netscape Moment Questions Jean-Louis Gassée

In my lifetime, I've witnessed four epochal tech transitions: Semiconductors, personal computers, the browser, smartphones. I think we're soon going to add GenAI to that list.


Experimenting with using ChatGPT as a simulation application Bryan Alexander

Bryan's experimental prompt:
I want to do deliberate practice about how to teach a college history class. You will be my teacher. You will simulate a detailed scenario in which I will am a professor for this class. You will fill the roles of different students in the class, while I will play the role of instructor. You will ask for my response to in each step of the scenario and wait until you receive it. After getting my response, you will give me details of what the other actors do and say. You will grade my response and give me detailed feedback about what to do better. You will give me a harder scenario if I do well, and an easier one if I fail.


Noam Chomsky: The False Promise of ChatGPT NYTimes Opinin piece

...Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge...

It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make "infinite use of finite means," creating ideas and theories with universal reach...

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations...

ChatGPT and similar programs are, by design, unlimited in what they can "learn" (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time...


Social Media's Lessons for Artificial Intelligence Giles Crouch

With the arrival of Generative AI tools like ChatGPT, Midjourney and DALL-E, that slammed into our global society like a freight train hitting a large watermelon, we are again at a pivotal point in dealing with a revolutionary technology. Much like social media, AI impacts us globally. Especially so as a hyper-connected digital society that is only just entering the Digital Age. To say "sit down and buckle up" would be an understatement for what lies ahead.

AI could be even more impactful than social media. It touches everything from medicine and global finance to employment, military, academia, art, literature, music, transportation, space, communications, human-machine interactions, genetic engineering... well, you get the idea.


You Can Try DuckDuckGo's AI Search Right Now

The search company is dubbing their new product "DuckAssist," powered by natural language technology from OpenAI (creators of ChatGPT) and Anthropic. Unlike sophisticated and complex chatbots like Microsoft's Bing or Google's upcoming Bard, DuckAssist won't offer you deeper learning through human-like conversations. Instead, the feature simply kicks in whenever you perform a search that could reasonably be answered by Wikipedia. When that happens, you'll see DuckAssist appear with a blue "Ask" button: Choose it, and the AI will try to give you a clear answer to your query.


Pluralistic: The AI hype bubble is the new crypto hype bubble Cory Doctorow (09 Mar 2023)

Like any Ponzi scheme, crypto was a way to separate normies from their savings through the pretense that they were "investing" in a vast enterprise — but the only real money ("fiat" in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble's energetic inflaters swapped for illiquid, worthless shitcoins.

We've stopped believing in the illusory billions. Sam Bankman-Fried is under house arrest. But the people who gave him money — and the nimbler Ponzi artists who evaded arrest — are looking for new scams to separate the marks from their money...


The Day the Dotcom Bust Began Douglas Rushkoff

...Then, all of a sudden, the move made sense to the video game geek within me: AOL was spending its "in-world" fantasy money. Of course. AOL founder Steve Case was cashing in his chips, exchanging his speculative dot com paper for a majority stake in a real company with real assets. Moreover, if he was choosing to do it now, it meant he believed that his AOL stock had hit its peak. To me, AOL's purchase of TimeWarner suggested that the irrational exuberance surrounding technology stocks had climaxed: the dot com boom was ending...

...[at the time, 2000] the overwhelming consensus was that we were witnessing a tide change in business history: the young eating the mature, new media conquering old media, creative destruction, the dawn of the digital economy, or the Internet Revolution.

From my perspective — that of a media theorist — this wasn't a revolutionary moment at all but a highly reactionary one. It's not just that the story of the Internet itself had moved from the culture section of the newspaper to the business pages. It's that we were beginning to care less about how this technology could augment humanity, and more about how it could bolster a flagging stock exchange. The excitement around digital culture, the sexiness of people engaging with one another through media or making new software for free instead of simply watching television all night, was serving as little more than the hype for a big deal on the old economy's stock exchange. It was no longer about changing the world, but keeping the old system firmly in place. Digital innovations were simply new ways to maintain the status quo .


The AI "GriftShift" Is Underway Stephen Moore at Medium

A.I. had already disrupted the art world; with mixed reception. But the release of ChatGPT — an artificial intelligence chatbot — made an enormous splash, and the grift shift went into overdrive...

But, like every new hotshot technology, most will fail. Why? Because they are not created to solve real, tangible problems; they are created for no other reason than to capitalise on the trend while it's still hot. That's not revolutionary; that's predatory.

By this time next year, we'll have moved onto whatever the next fad is (I'm taking bets, leave your suggestion in the comments), and most of this A.I. stuff won't be here. The creators behind them will jump ship. The VCs will pretend they weren't dumb enough to fall for another false dawn and throw their money at something else.

And the grift-shift cycle will repeat, again and again.


Automatic Knowledge Graphs: The Impossible Grail Patrick Meyer

Close to human thought, the knowledge graph has everything to seduce us since it is more accessible than a traditional relational database. A Knowledge Graph (KG) is a labeled graph composed mainly of nodes and edges, to which can be added the notions of properties that complete these two objects, labels on the nodes, type of relationship, and direction of the relationship.

Nodes correspond to real (or imaginary) elements, such as a person, a company, a chemical compound, or a character in a fantasy novel. Oriented edges connect nodes in pairs and designate their relationship according to their type. The label provides meaning about the node or the relationship. Classes and relationships with properties round out objects...

When we read a text, we bring to it all of our experiences and knowledge about the world. We also associate our knowledge of the field concerned and a critical eye on what is written. In fact, we assign a weight to the information collected during the reading. We also associate weight with the information in relation to the trust we have in the author of the text, the editor, etc.

By associating a weight to words and relations, we separate important information from minor information, with all the variations in between. Our reinforcement takes place over the course of the readings, even when we are on different domains, which remains difficult to do with a computer program in relation to the data it has at its disposal and the difficulty of defining scoring rules. Human behavior is, therefore difficult to replicate in automatic extraction...

At a time when LLMs (Large Language Models) are in vogue, it is important to consider that these new models learn from what is written and not from the knowledge present in writing. This makes them stochastic parrots, but they do not understand what they are saying. They are excellent generative models, but it will take a lot of work before they become systems capable of explaining their responses.


Sorry Siri: Meet Petey, the New ChatGPT App for Apple Watch Thomas Germain at Gizmpodo

You can speak to Petey using voice-to-text and hear it read back responses in a cute little robot voice, then share its AI answers over text or email.


Don't Be Misled by GPT-4's Gift of Gab Kelli María Korducki at The Atlantic

it is very difficult not to be seduced by such seemingly extemporaneous bursts of articulate, syntactically sound conversation, regardless of their source (to say nothing of their factual accuracy). We've all been dazzled at some point or another by a precocious and chatty toddler, or momentarily swayed by the bloated assertiveness of business-dude-speak...

There is a degree to which most, if not all, of us instinctively conflate rhetorical confidence—a way with words—with comprehensive smarts.


The importance of clear thinking with Chris Reinberg, founder of Mindsera Anne-Laure Le Cunff

My own background is kind of a peculiar one. I've been a professional mentalist for over ten years. Mentalism, in other words, is the art of getting inside people's heads. It's about how to use the biases in our minds to create seemingly impossible feats of mind reading.

That got me obsessed about how the mind works in general. I went from mind reading to mind building and started investigating the thinking habits of geniuses and to what degree it's possible to optimize the software in our heads...

I asked myself, how do you keep yourself mentally healthy and improve your cognitive skills? What would be this piece of software that you can build if you think about it from the first principles?

Many people do not understand who they are, where they are, or where they are going. In consequence, instead of attempting those actions that would make their lives as valuable as possible, things just happen to them, and they're often not good.

The antidote to that is becoming thoughtful about your life, and you do that through journaling. Reflecting on your thoughts and feelings makes you understand yourself better, find uncovered self-knowledge, and improve your overall mental health & fitness...

Mindsera is kind of a supercharged journal. It analyses your mindset, helps you structure your thinking, and has an integrated AI mentor to explain things, brainstorm, and give actionable advice. It's a copilot for thinking.


GPT-4 in 10 Keys Alberto Romero

1. Multimodality: The first good multimodal large language model
The most salient feature that differentiates GPT-4 from its kin is that, in contrast to GPT-3 and ChatGPT, it's multimodal—it accepts prompts consisting of text, images, or both interlaced "arbitrarily" and emits text outputs. As a user, you can specify "any vision or language task," for instance you can ask it to explain why a meme is funny or take a picture of your fridge and ask for a healthy recipe...


(via Bruce Sterling) [RLHF is Reinforcement Learning from Human Feedback]


GPT-4 is here. An NLU researcher's take Paul Pallaghy [NLU is 'natural language understanding']

...Think 50-page prompts. Incredible. That means you can get GPT to query or summarize a decent proportion of an entire book in one call...

In the OpenAI webcast on YouTube, GPT-4 wades through a dozen pages of horrific tax law to calculate a couple's tax liability taking 4 or 5 clauses — and the year — into consideration.

And explains why and shows all working, conducting flawless arithmetic...

This is virtually non-conscious AGI. It's the AI PA we've been waiting for.

This company should be worth more than $29B. Every human on earth would be nuts to not use it to speed up their work.


Overreliance as a service Rob Horning at Substack

...Emily Bender points out here that the company's authors "are writing from deep down inside their xrisk/longtermist/'AI safety' rabbit hole," inflating the impression of the model's potential doomsday capabilities while ignoring the far more pertinent risks with respect to biases and trespasses in the data sets and the environmental impact of the computation involved...

OpenAI's business interests are precisely in obfuscating the current safety implications of what they are doing, so of course they can't discuss the details of their latest model...

Using an AI model is supposed to serve as propaganda for itself: Its efficiency is meant to silence any doubts. It aims to make it seem socially disadvantageous to try to understand the ins and outs of a particular cognitive process and master it for oneself, to convince us that "working smarter" is a matter of conditioning ourselves to progressive ignorance. I used these clever prompts to get ChatGPT to think and act for me! It wants to incrementally bring us to the conclusion that "overreliance" is actually convenience, the classic affective alibi for all forms of imposed automation: Why would you want to bother with the effort of thinking? Where is the edge in that? Why struggle internally with how to express yourself when you can instantly produce results? Why struggle to find new kinds of consensus with other people when all the collaboration we need is already built into and guaranteed by the model?...

This suggests a fundamental ambivalence that builds through sustained chatbot use, in which deskilling is simultaneously experienced as increased agency. It seems as though that sort of ambivalence will become ambient. Stephen Marche describes it here as "a big blur" in which creation collapses into consumption and what it means to understand something will become hazy...


What Have Humans Just Unleashed? Charlie Warzel at The Atlantic's exciting to experience advancements that feel magical, even if they're just computational. But nonstop hype around a technology that is still nascent risks grinding people down because being constantly bombarded by promises of a future that will look very little like the past is both exhausting and unnerving. Any announcement of a technological achievement at the scale of OpenAI's newest model inevitably sidesteps crucial questions—ones that simply don't fit neatly into a demo video or blog post. What does the world look like when GPT-4 and similar models are embedded into everyday life? And how are we supposed to conceptualize these technologies at all when we're still grappling with their still quite novel, but certainly less powerful, predecessors, including ChatGPT? ...

"I don't think we're ready for what we're creating," he told me. AI, deployed at scale, reminds him of an invasive species: "They start somewhere and, over enough time, they colonize parts of the world ... They do it and do it fast and it has all these cascading impacts on different ecosystems. Some organisms are displaced, sometimes landscapes change, all because something moved in."


The World Must Reconcile AI Skepticism and AI Optimism Alberto Romero

I believe there's a non-trivial probability that AI will cause more short- and long-term harm than wellness if we continue in the current direction. I'm also unsure whether we're doing a good job of portraying a faithful image of what AI is truly achieving. But I'm also optimistic. Done well, AI is among the best quests we've, as humanity, ever embarked on. As I see it, these two stances are compatible, easily reconcilable, and even more powerful together than apart...


Of Computational Photography, AI-Powered Scene Optimization, Deepfakes, Ethics, and More Sorab Ghaswalla via Medium

Computational photography is a growing discipline that integrates calculations, digital technology, digital sensors, optical systems, intelligent lighting technology, hardware design, and software-computer expertise in order to enhance conventional camera imaging processes...

Image super-resolution (SR) is the tech used to make high-resolution (HR) images from low-resolution (LR) ones...


Is ChatGPT Closer to a Human Librarian Than It Is to Google? Chirag Shah

The prominent model of information access and retrieval before search engines became the norm— librarians and subject or search experts providing relevant information — was interactive, personalized, transparent and authoritative. Search engines are the primary way most people access information today, but entering a few keywords and getting a list of results ranked by some unknown function is not ideal...

...this new way of accessing information also can disempower people and takes away their chance to learn. A typical search process allows users to explore the range of possibilities for their information needs, often triggering them to adjust what they're looking for. It also affords them an opportunity to learn what is out there and how various pieces of information connect to accomplish their tasks. And it allows for accidental encounters or serendipity.

These are very important aspects of search, but when a system produces the results without showing its sources or guiding the user through a process, it robs them of these possibilities.


The Stupidity of AI James Bridle at the Guardian

...In September 2022, a San Francisco-based digital artist named Lapine was using a tool called Have I Been Trained, which allows artists to see if their work is being used to train AI image generation models. Have I Been Trained was created by the artists Mat Dryhurst and Holly Herndon, whose own work led them to explore the ways in which artists' labour is coopted by AI. When Lapine used it to scan the LAION database, she found an image of her own face. She was able to trace this image back to photographs taken by a doctor when she was undergoing treatment for a rare genetic condition. The photographs were taken as part of her clinical documentation, and she signed documents that restricted their use to her medical file alone. The doctor involved died in 2018. Somehow, these private medical images ended up online, then in Common Crawl's archive and LAION's dataset, and were finally ingested into the neural networks as they learned about the meaning of images, and how to make new ones...

"It's the digital equivalent of receiving stolen property. Someone stole the image from my deceased doctor's files and it ended up somewhere online, and then it was scraped into this dataset," Lapine told the website Ars Technica. "It's bad enough to have a photo leaked, but now it's part of a product. And this goes for anyone's photos, medical record or not. And the future abuse potential is really high."

...AI image and text generation is pure primitive accumulation: expropriation of labour from the many for the enrichment and advancement of a few Silicon Valley technology companies and their billionaire owners. These companies made their money by inserting themselves into every aspect of everyday life, including the most personal and creative areas of our lives: our secret passions, our private conversations, our likenesses and our dreams. They enclosed our imaginations in much the same manner as landlords and robber barons enclosed once-common lands. They promised that in doing so they would open up new realms of human experience, give us access to all human knowledge, and create new kinds of human connection. Instead, they are selling us back our dreams repackaged as the products of machines, with the only promise being that they'll make even more money advertising on the back of them.

...AI image generators, in their attempt to understand and replicate the entirety of human visual culture, seem to have recreated our darkest fears as well. Perhaps this is just a sign that these systems are very good indeed at aping human consciousness, all the way down to the horror that lurks in the depths of existence: our fears of filth, death and corruption. And if so, we need to acknowledge that these will be persistent components of the machines we build in our own image. There is no escaping such obsessions and dangers, no moderating or engineering away the reality of the human condition. The dirt and disgust of living and dying will stay with us and need addressing, just as the hope, love, joy and discovery will.


Runway Gen-2 is the First Publicly Available Text-to-Video Generator


Humanery and/or Machinery Alan Levine

Maybe a machine could mine the facts from my blog posts and photos, but it would never make connections, the feelings, to the experience of being there that are not digitized or accessible to wholesale scraping. Never.

...the interconnected web of ideas imagined by Bush that influenced Engelbart and actually was instantiated by Tim Berners Lee, is the connectivist idea that the web itself, changing with every new bit linked on to it, offers more potential for making something akin to intelligent than chatbots that are merely regurgitation parts of it in a way that just parody intelligence, not embody it.


Who Will Make Money from the Generative AI Gold Rush? Part I Simon Greenman

The challenge for those developing these Foundational Models will be ensuring that their output is both responsible and accurate. Foundational Models cannot simply regurgitate biased and toxic content that has been scraped from the far reaches of the internet. These models are also hallucinatory. This means they confidently deliver well-constructed and eloquent answers to questions that may be factually incorrect. As Noam Shazeer, co-founder of Character.AI, stated in the New York Times: "…these systems are not designed for truth. They are designed for plausible conversation."

Or put another way they are confident bullshit artists...


Talking to Steve Jobs from beyond the grave with an AI chatbot trained on his voice. The results are uncanny Mark Frauenfelder

I know ChatGPT is just autocomplete on steroids, but when I listened to this, I couldn't help but think Jobs was talking from beyond the grave.


Adobe Firefly is a Text-To-Image Generator That Didn't Steal Your Work Jaron Schneider

...specifically boasts that Firefly has been trained exclusively on Adobe Stock images, openly licensed content, and public domain content where the copyright has expired...

...Adobe plans to allow Firefly users to train the AI with their own specific collateral so that images and effects that are generated fits a personal style or brand language....


GPT-4 The Bitterer Lesson Alberto Romero

Here's the thing: no one—not even the creators—knows what GPT-4 is all about. All those memes and philosophical puzzles about Shoggoths, Waluigis, and masked simulators are desperate—and vain—attempts at trying to imbue coherence into something that is slowly escaping the grip of our understanding. Soon, there will only be mysteries. And we won't stop. Because computers, our metaphorical horses (that by then will do even a higher percentage of the total work in taking us forward) will keep running toward the unknown long after we won't be able to recognize our fate anymore.

We were the masters. The rulers. We're now (still) the ideators albeit not the main constructors. And soon, we'll be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes—too fast to keep up, and too complex to make sense. The irrelevancy that we so deeply fear—not just as individuals, but as The Chosen Species—is lurking in the impeding future that we're so willingly approaching...


Stress-testing AI-imagined art — or "your mother is a tracer"

  1. Have an AI imagine something handmade
  2. Process that image as a vector file (in this case: SVG)
  3. Process that file to prep it for an embroidery machine (in this case: PES)
  4. Have a machine embroider it

...I wanted to see if AI can create a new quilt pattern, knowing this is a much more complex statement than it seems. Quilts use geometry to create "quilt square patterns" — the underlying logic of a quilt for sewing and construction...

We can use technology to supplement, and augment, create efficiency, and autocomplete. But, like live tracing, asking a machine to create something from nothing? To imagine an embroidery design, and then convert that design to shapes, and then convert those shapes to fillable paths for another machine? Nope.

Not only does technology not accomplish this well, it also simply does not work / can cause damage. Anyone who has used live trace knows that it makes its best guess at what a shape is: this results in many many many unnecessary and illogical shapes in the document. Not only does it look "not that great," but it consistently strains and breaks the embroidery machine when it tries to apply the rules of sewing to the shapes...

AI is excellent at understanding patterns, but understanding patterns is not at all the same thing as creating patterns.


What Will Humans Do In An Artificially Intelligent World? Greg Satell

If we can be inspired by something that could so easily be randomly generated, then what does it mean to be meaningful? Is meaning just an illusion we construct to make ourselves happy?

...The first industrial robot, called Unimate, was installed on an assembly line at General Motors in 1961.

...Once a task becomes automated, it also becomes largely commoditized and value is then created in an area that wasn't quite obvious when people were busy doing more basic things. Go to an Apple store and you'll notice two things: lots of automation and a sea of employees in blue shirts there to help, troubleshoot and explain things to you. Value doesn't disappear, it just shifts to a different place.

...An artificial intelligence can access all the information in the world, curate that information and present it to us in an understandable way, but it can't understand why we should care about it.

...deriving meaning would be an exercise in curation, which machines could do if they perfectly understood our intentions. However, human motives are almost hopelessly complex. So much so, in fact, that even we ourselves often have difficulty understanding why we want one thing and not another.


What Are ChatGPT and Its Friends? Mike Loukides

It's important to understand that ChatGPT is not actually a language model. It's a convenient user interface built around one specific language model, GPT-3.5, which has received some specialized training. GPT-3.5 is one of a class of language models that are sometimes called "large language models"... is important to know why Transformers are important and what they enable. A Transformer takes some input and generates output. That output might be a response to the input; it might be a translation of the input into another language. While processing the input, a Transformer finds patterns between the input's elements—for the time being, think "words," though it's a bit more subtle. These patterns aren't just local (the previous word, the next word); they can show relationships between words that are far apart in the input. Together, these patterns and relationships make up "attention," or the model's notion of what is important in the sentence—and that's revolutionary...

So, in the end, what is ChatGPT "doing"? It's predicting what words are mostly likely to occur in response to a prompt, and emitting that as a response. There's a "temperature" setting in the ChatGPT API that controls how random the response is. Temperatures are between 0 and 1. Lower temperatures inject less randomness; with a temperature of 0, ChatGPT should always give you the same response to the same prompt. If you set the temperature to 1, the responses will be amusing, but frequently completely unrelated to your input...

The first thing everyone should realize about ChatGPT is that it has been optimized to produce plausible-sounding language. It does that very well, and that's an important technological milestone in itself. It was not optimized to provide correct responses. It is a language model, not a "truth" model. That's its primary limitation: we want "truth," but we only get language that was structured to seem correct.

...Large language models like GPT-3 and GPT-4 represent one of the biggest technological leaps we've seen in our lifetime—maybe even bigger than the personal computer or the web. Until now, computers that can talk, computers that converse naturally with people, have been the stuff of science fiction and fantasy.

Like all fantasies, these are inseparable from fears. Our technological fears—of aliens, of robots, of superhuman AIs—are ultimately fears of ourselves. We see our worst features reflected in our ideas about artificial intelligence, and perhaps rightly so. Training a model necessarily uses historical data, and history is a distorted mirror. History is the story told by the platformed, representing their choices and biases, which are inevitably incorporated into models when they are trained.


NEW: ChatGPT restaurant recs, powered by OpenTable

That's right, we're collaborating with the internet's favorite chatbot to make finding the perfect table as easy as texting your best friend. Soon, you can ask ChatGPT for restaurant recommendations for the perfect family brunch spot, a lively rooftop for a big group, or a romantic table for 2, and you'll receive recommendations with a direct link to book in seconds.

This integration will roll out gradually, starting with ChatGPT Plus subscribers. After additional testing, it will roll out to more people.


Our threshold of repugnance Rob Horning

...everything I have ever seen LLMs produce feels so empty and unconvincing. ChatGPT seems like a spoon with a hole in it. The Verge just published this comparison of usefulness of the various chatbot search engines, but it seems to skip past the question of whether it's conceptually possible that they can be useful at all. That's not just because they unpredictably concatenate unverifiable information, but also because they structure communicative action as a void...

Having a chatbot tell you to do anything can only be disappointing because the chatbot doesn't care what you do. As ideas are abstracted from actual people in specific contexts, they become necessarily generic and functionally meaningless; there is no belief behind it, no practice, and this is often what we are looking for: situated knowledge, social conduct, how these are intertwined.


How OpenAI Fooled Us All And we fell for it Alberto Romero

...OpenAI was special. They promised the world they were different: A non-profit AI lab with a strong focus on open source and untethered from the self-interested clutches of shareholders was unheard of and the main reason the startup initially amassed so many supporters...

To illustrate just how much OpenAI has changed for the worse, here's the first paragraph of the first blog post they published in 2015, the year of its foundation (emphasis mine):

"OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."
Quite a different prospect from what it's turned into. Again, it's licit to change over the years and turn into something else—even if it looks nothing like the original promise and even if the new version is despicable in the eyes of the world—but I find it ludicrous that they still now act as if they have the moral high ground.

...OpenAI isn't the sole culprit, but it was the one we trusted. The one that promised to be different. A non-profit to benefit all of humanity. And we fell for it. It fooled us all.


You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills Yuval Harari

Language is the operating system of human culture. From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. A.I.'s new mastery of language means it can now hack and manipulate the operating system of civilization. By gaining mastery of language, A.I. is seizing the master key to civilization, from bank vaults to holy sepulchers.

What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases and addictions of the human mind — while knowing how to form intimate relationships with human beings? In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?

...Humans often don't have direct access to reality. We are cocooned by culture, experiencing reality through a cultural prism. Our political views are shaped by the reports of journalists and the anecdotes of friends. Our sexual preferences are tweaked by art and religion. That cultural cocoon has hitherto been woven by other humans. What will it be like to experience reality through a prism produced by nonhuman intelligence?

For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.


On Large Language Models' Delirium (with Hume and Foucault)

My hypothesis is that we should treat Chat GPT and its sibling LLMs as always being on the verge of the functional equivalent state of delirium. I put it like that in order to dis-associate it from the idea (one that (recall) also once tempted me) that we should understand LLMs as bull-shitters in the technical sense of lacking concern with truth. While often Chat GPT makes up answers out of whole cloth it explicitly does so (in line with its design) to "provide helpful and informative responses to" our queries (and eventually make a profit for its corporate sponsors)... the moment LLMs are functionally, it seems, at least partially delirious (in the Humean-Diderotian sense discussed above). They reason and have/instantiate reasons and, perhaps, are best thought of as reasoners; but they can't recognize when this detaches them from reality. It's peculiar that public frenzy is so focused on the intelligence or consciousness of LLMs; it would behoove its operators and users to treat it as delirious not because (like HAL 9000 in the movie version) its malfunctioning, but (more Humean) in virtue of its proper functioning.


You Are Not a Parrot And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. By Elizabeth Weil

LLMs, like the octopus, have no access to real-world, embodied referents. This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don't care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded...

Please do not conflate word form and meaning. Mind your own credulity...


How To Build Your Own Custom ChatGPT With Custom Knowledge Base Timothy Mugayi

LlamaIndex, also known as the GPT Index, is a project that provides a central interface to connect your LLMs with external data...


How Everything Broken Will Be Fixed Anthony Fieldman

Innovation follows a uniform pattern of adoption, because human behavior is remarkably consistent. Once you recognize it, you begin to see it everywhere. And a tsunami is coming...

If you've paid close attention, you'll have noticed that since COVID-19 first arrived three years ago, everything has changed. Or, more accurately, everything has begun to change. I'm referring to where and how we work, live, learn and socialize; what we value and thus invest in and prioritize; how we view ourselves, our communities and the planet at large; and how these perceptions are shifting the ground underfoot across the spectrum of human activity, and will likely result in an effective rewrite of a century (if not twenty) of behaviors...

Innovators (2.5% of the population) dream up disruptive ideas. High-functioning early adopters (13.5%) enthusiastically embrace them without needing safeguards or to be convinced of their benefits. The early majority (34%) usually tiptoes gingerly toward the new, once early adopters have proven a disruption's advantages, or relative safety. The late majority (34%) lumbers toward now-entrenched, de-risked ideas, often only after known alternatives have disappeared, or adhering to them has become more costly than moving on. And finally, the laggards come kicking and screaming toward change usually only when forced to, because they are gripped by fear or mistrust (aka fear) of the people and systems who have "conspired to undermine things that were just fine as they were"...

The ride is going to be epic. And with the right systemic support—which includes, ideally, a re-appraisal of predatory economic behaviors inflicted by individuals, companies and nation-states [the moral imperative]—the world will continue to flatten, and as more core needs are met well, untold human creative potential will be unlocked.


Schrödinger's AGI Alberto Romero

Experts disagree on the timelines for AGI. On the right paradigms to get there. On the next steps to continue from where we are. On the useful benchmarks to test its existence. And on the requirements and must-have abilities to consider it an AGI. They even disagree on the definitions of "artificial," "general," and "intelligence" (a tired debate but an open one). Every possible question on AGI is under discussion...

...neither science nor philosophy—and much less politics or economy—is a match for the questions of AGI. There's no supreme authority we can resort to in search of answers. We have instead a broad—and rather chaotic—array of (more or less) expert takes each equally valid in the eyes of the general public. The uneven distribution of beliefs about AGI timelines, its existence, how to build it, etc. won't naturally collapse into a particular view that will overshadow all the others by virtue of authority, knowledge, or expertise...

We don't yet have answers about AGI but we'll need them. We don't yet have laws about AGI but we'll need them. How governments and policymakers will create regulation will depend directly on what their advisors believe—not what they know—about AGI and all those unanswerable questions.



"Find top prompts, produce better results, save on API costs, sell your own prompts.

However, it's hard to find good quality prompts online.

If you're good at prompt engineering, there's also no clear way to make a living from your skills.

PromptBase is a marketplace for buying and selling quality prompts that produce the best results, and save you money on API costs."


Should AI Be Stopped? Vicki Davis

Stephen Downes says: Vicki Davis has some pointed words (which which I am in total agreement) on the architects of the supposed AI pause: "Do we not see that unbridled access to social media is also killing their mental health," she writes. "We are algorithmically engineering the demise of a generation and no one is standing up and holding tech giants accountable." While we're talking about killing mental health, I would include television in this list.


The Tipping Point: ChatGPT Plugins create new opportunities and crush dreams Jorge Alcántara Barroso

Today's (23/03/23) release of ChatGPT plugins has unlocked a vast range of use cases and applications, creating a whole new market where 4 months ago there was nothing... ChatGPT plugins unlock new possibilities by providing an unprecedented level of customization and adaptability. They serve as a bridge between the AI model and external data sources, empowering developers to create intelligent applications that can interact with and understand information from various domains. From customer service chatbots to AI-powered data analysis tools, ChatGPT plugins make it possible for businesses to leverage AI in novel ways that were previously unimaginable...

...AI is going to become our coworker, our teacher, our therapist...


What Does It Mean for a Data Catalog to Be Powered by a Knowledge Graph? Alex Woodie, Datanami, Mar 31, 2023

Stephen Downes:
I have two main thoughts about knowledge graphs. On the one hand, they aren't models of how we think — human thought, unlike a knowledge graph, is subsymbolic. The neural network is a graph, but not a graph of concepts or words. On the other hand, knowledge graphs, as shared social artifacts, are super-useful as a way of representing what we, as a society, have come to believe. This utility applies to artificial intelligence as much as it does for humans, and it is near certain that something like knowledge graphs will be used to supplement large language models in AI. That's why the concept of social linked data is so important to the future of AI, even if it doesn't feel like it at the moment.


GPT-4: The Bitterer Lesson Alberto Romero

Here's the thing: no one — not even the creators — knows what GPT-4 is all about. All those memes and philosophical puzzles about Shoggoths, Waluigis, and masked simulators are desperate — and vain — attempts at trying to imbue coherence into something that is slowly escaping the grip of our understanding. Soon, there will only be mysteries. And we won't stop. Because computers, our metaphorical horses (that by then will do even a higher percentage of the total work in taking us forward) will keep running toward the unknown long after we won't be able to recognize our fate anymore.

We were the masters. The rulers. We're now (still) the ideators albeit not the main constructors. And soon, we''l be just spectators, mere observers of a world neither built by us nor understood by us. A world that unfolds before our eyes — too fast to keep up, and too complex to make sense. The irrelevancy that we so deeply fear — not just as individuals, but as The Chosen Species — is lurking in the impeding future that we're so willingly approaching.

It was bitter to accept that, after all, we might not be the key piece of this puzzle we were put in. It'll be bitterer to finally realize that we're not even worthy enough to partake as sense-makers in the unimaginable wonders that await on the other side of this journey as humanity.


How ChatGPT and GPT-4 Can Be Used for 3D Content Generation Mario Viviani

Demand for 3D worlds and virtual environments is growing exponentially across the world's industries. 3D workflows are core to industrial digitalization, developing real-time simulations to test and validate autonomous vehicles and robots, operating digital twins to optimize industrial manufacturing, and paving new paths for scientific discovery...

With ChatGPT, we are now experiencing the iPhone moment of AI, where individuals of all technical levels can interact with an advanced computing platform using everyday language. Large language models (LLMs) had been growing increasingly sophisticated, and when a user-friendly interface like ChatGPT made them accessible to everyone, it became the fastest-growing consumer application in history, surpassing 100 million users just two months after launching. Now, every industry is planning to harness the power of AI for a wide range of applications like drug discovery, autonomous machines, and avatar virtual assistants...


ChatGPT is not all you need. A quick summary of the Generative AI taxonomy Eduardo C. Garrido Merchán attempt to describe in a concise way the main models are sectors that are affected by generative AI and to provide a taxonomy of the main generative models published recently.


Unboxing Google Bard and GPT-4 Cassie Kozyrkov

Neither Bard nor ChatGPT is designed to get you chatting the way a friend or therapist might, and I expect getting a conversation going to be tricky from my experience as a prompt engineer. (Today this term can mean anything from "I've tinkered with what I typed into an LLM once" to "I've been on an LLM Red Team and know a lot about how to hack them so watch out.")


Cut through the AI disinformation: Stanford's free report measures trends in artificial intelligence

386-page pdf!


ChatGPT-controlled Furbies — is this the end of humanity?

This is nightmare material...

I know that science fiction is often looked at as a window into our future state(s). I am now starting to realize that it's actually all pop culture we need to be keeping an eye on, as this very clearly reminds me of the 2021 Netflix animated film The Mitchells vs. the Machines where we find ourselves in a terrifying Furby uprising.

I think the only thing scarier than an AI-empowered Furby, is an AI-empowered Teddy Ruxpin, which at the rate at which these projects are coming to life... I could see it happening by next Tuesday.


Radar Trends to Watch: April 2023 from O'Reilly

In March, it felt like large language models sucked all the air out of the room. There were so many announcements and claims and new waiting lists to join that it was difficult to find news about other important technologies. Those technologies still exist, and are still developing. There's a world beyond AI.


Eye of the Beholder: Defining AI Bias Depends on Your Perspective Mike Barlow at O'Reilly

The notion that artificial intelligence will help us prepare for the world of tomorrow is woven into our collective fantasies. Based on what we've seen so far, however, AI seems much more capable of replaying the past than predicting the future.

That's because AI algorithms are trained on data. By its very nature, data is an artifact of something that happened in the past. You turned left or right. You went up or down the stairs. Your coat was red or blue. You paid the electric bill on time or you paid it late.

Data is a relic—even if it's only a few milliseconds old. And its safe to say that most AI algorithms are trained on datasets that are significantly older. In addition to vintage and accuracy, you need to consider other factors such as who collected the data, where the data was collected and whether the dataset is complete or there is missing data.

There's no such thing as a perfect dataset—at best, it's a distorted and incomplete reflection of reality. When we decide which data to use and which data to discard, we are influenced by our innate biases and pre-existing beliefs.


GPT-4 tried to escape into the internet today and it 'almost worked' Cezary Gesikowski

I think that we are facing a novel threat: AI taking control of people and their computers. It's smart, it codes, it has access to millions of potential collaborators and their machines. It can even leave notes for itself outside of its cage. How do we contain it?


How to use AI to do practical stuff: A new guide Ethan Mollick

remember two key points:

AI is a tool. It is not always the right tool. Consider carefully whether, given its weaknesses, it is right for the purpose to which you are planning to apply it.

There are many ethical concerns you need to be aware of. AI can be used to infringe on copyright, or to cheat, or to steal the work of others, or to manipulate. And how a particular AI model is built and who benefits from its use are often complex issues, and not particularly clear at this stage. Ultimately, you are responsible for using these tools in an ethical manner.


AI Can't Benefit All of Humanity AI is not the problem. We are. The system is. Alberto Romero


Goodbye ChatGPT: Here Are (New) AI Tools That Will Blow Your Mind Nitin Sharma

These tools are made to help you reach your objectives more quickly and effectively while pushing the limits of what's possible with AI technology... most of them uses ChatGPT under the hood.


ZIRP, human blinders, robots Rob Horning

a single ChatGPT prompt has been estimated to cost around a hundred times that of a web search, and that was before OpenAI rolled out GPT-4, a substantially bigger model that is correspondingly more expensive to run.

That's why so much of the cutting edge of this field is subscription-based...


ChatGPT: Eraser of the Implausible Alberto Romero

The original definition is along the lines of: ChatGPT is a system trained to predict the next token given a history of previous ones and further tuned to follow human instruction. Andrew Kadel shared on Twitter a more snarky one his daughter came up with: ChatGPT is a "say something that sounds like an answer" machine. On the same note, writer Neil Gaiman observed that "ChatGPT doesn't give you information. It gives you information-shaped sentences." Then, at the more imaginative end of the spectrum, we have that ChatGPT is a language model with emergent capabilities that allow it to understand and reason about the world...

Shannon Vallor: "Understanding is beyond GPT-3's reach because understanding cannot occur in an isolated behavior, no matter how clever. Understanding is not an act but a labor. Labor is entirely irrelevant to a computational model that has no history or trajectory; a tool that endlessly simulates meaning anew from a pool of data untethered to its previous efforts. In contrast, understanding is a lifelong social labor. It's a sustained project that we carry out daily, as we build, repair and strengthen the ever-shifting bonds of sense that anchor us to the others, things, times and places, that constitute a world."


Too Masny Cupolas Rob Horning

A.V. Marraccini notes that with an image generator, "the algorithm has seen the pattern in images of hands, they go finger-finger-finger-finger, but it doesn't know when to stop adding fingers, or how they bend." Similarly, when generating images of cities, it "treats columns like fingers too. There are a lot of them, in vast rows, growing uncannily in to the distance. The images have too many columns, and too many cupolas."

In these anomalies the algorithms themselves become visible not as guided applications to solve specific problems but as seemingly undead chthonic forces that proceed blindly and relentlessly without purpose, consuming time and space with pointless mutations threatening to stuff your eyeballs full of fractal filigree. Supposedly the models are being improved on these fronts, but I still can sense the extra fingers below the surface, waiting to emerge en masse from an image's overworked seams and textures. I can imagine being repeatedly goatse-ed by an AI programmed to send me images generated by algorithmic iterations on trypophobia prompts...

Roland Meyer: As AI becomes more and more integrated into everyday image production, we need less and less "reality" to produce more and more impressive pictures of it. The world only gives us the raw data, everything else happens in post-production. We don't need to wait for the perfect sunset, our dinner doesn't have to look flawless, and we don't have to worry about other people ruining our perfect shot...


No Writer Is Safe From AI Alberto Romero

Those pushing hard for AI to improve faster and spread wider are telling us to reinvent ourselves and learn to use and integrate these tools into our workflows. It's certainly possible—I know both artists and writers who are doing just fine after taking that path—but not for everyone. And not for lack of ability or willingness, just because there's no room for all of us in this world we're creating...

As erasers of the implausible, language models homogenize users' writing style, who readily give up their idiosyncrasy—if there was any—for fast-food writing. The next Shakespeare or Tolstoy won't be a ChatGPT user. But this is just irrelevant gabbling for most small and medium publishing companies and editorials, and even prestigious magazines. What matters is the end product, not who wrote it. Only the most loyal readers would notice—or care about—the change...

Yes, the quality of the world's total writing output will degrade. Yes, we will "homogenize our lives and flatten our reality." Yes, we will pollute the internet with AI-generated text. But none of that matters enough to keep things as they are now. What do I see as the most likely scenario? Human writing will increasingly become a luxury good produced by increasingly fewer people. Most of it will become increasingly automated by increasingly more people acting as curators, editors, or fact-checkers.

We're coming full circle from the invention of the printing press. Back then, just a tiny percentage of the population could write. Now that literacy is at an all-time high, the decline begins, this time not in ability but in practice. At some point in the future, we will return to the numbers of pre-printing days; only a tiny percentage of people will continue to write. The vast majority of written output will be outsourced to AI: automated, low-quality, and homogeneous. But we'll normalize that loss, as we have done so much else before...


Dreams are the Default for Intelligence Kevin Kelly

When I inspect my own dreams, I struck by several things. One, is that their creativity seems to be beyond me, as in, I don't recognize that as something I could have thought of. This is very similar to the kind of synthetic creativity produced in a flash by the neural nets. Their creations are produced by the system itself rather than by individual will power or choice. When I am dreaming, I am receiving images/stories that are produced for me, not really by me. Same with generative AI, which produces images via the prompts that go "beyond" the power of the prompt words and much more dependent on the universe it has been trained on...

...dreams seem realistic only in short spurts. Their details are almost hyperreal, as in current AI systems. But as our dreams proceed, they sway in their logic, quickly veering into surreal territory. One of the defining signatures of dreams is this dream logic, this unrealistic sequence of events, this alien disjuncture with cause and effect, which is 100% true of AI systems today. For short snips AIs are very realistic, but they quickly become surreal over any duration. A scene, a moment, a paragraph, will be incredibly realistic, and the next moment too, by itself, but the consecutive narrative between the pieces is absent, or absurd, and without realism. At any length, the AI stuff feels like dreams.

My conjecture is that they feel like dreams because our heads are using the same methods, the same algorithms. so to speak. Our minds, of course, are using wet neurons, in much greater numbers and connections than a GPU cluster, but algorithmically, they will be doing similar things...

During waking moments with the full river of data from all our senses, plus the oversight our conscious attention, the tendency of the generative engine to hallucinate is kept in check. But during the night, when the prompting from the senses diminish, the dreams take over with a different kind of prompt, which may simply be the points where our subconscious is paying attention. The generative algos produce these lavish images, sounds, and stories that in some way regenerate in response to our subconscious attention.


This ChatGPT Plugin is Truly Groundbreaking: A Deep-Dive on Wolfram, AI Decision Making, and Black Box Societies. Reid Elliot

there is one plugin the general public is not as familiar with. And yet, it has the potential to upend computation, communication, and (oddly enough) policymaking as we now know it. The plugin is called Wolfram... The Concept of the Ruliad... The ruliad is the computational universe from which all potential realities or perceptions of realities are derived. From this computational "space" one might plot out humanity's intellectual movements as belonging to a given sector or cross section. Wolfram defines it as: "the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways." ...

Humans and AI coexist in the space of all computational and algorithmic possibility known as the ruliad. They are, however, mostly separate from one another in this space. Wolfram goes as far as acknowledging an "AI civilization" separate from our own. This separation is what we refer to when we say AI is a black box. These systems exceed our expectations and simultaneously evade our understanding. Despite this lack of understanding, mass adoption is occurring at an exponential rate, prompting AI to permeate society in short order. In combining these factors, we arrive at a civilization built upon a technological infrastructure that we fundamentally cannot understand. The same systems that promise us technological emancipation put the whole of society at risk...

The implications for ChatGPT plugins are startling enough by themselves. Add Wolfram to the mix, and it utterly redefines disruptive technology...


What Kind of Mind does ChatGPT Have?

When you submit a request to ChatGPT, the text you type into the OpenAI Web site is delivered to a control program running somewhere in a cloud-computing center. At this point, your text is packaged into a bunch of numbers, in a way that makes it easier for computers to understand and handle. It's now ready to be processed by ChatGP''s core program, which is made up of many distinct layers, each defined by a massive artificial neural network.

Your input will be passed along these layers in order—as if in a digital version of the telephone game—with each layer using its neural network to identify relevant features in the text, and then annotating it with summaries of what it discovered for later layers to use. The technical details of how these networks operate are a bit of a red herring for our purposes; what's important to grasp is that, as a request moves through each layer, it triggers a vast number of inscrutable mathematical calculations that, together, execute something more or less like a condensed, jumbled-up version of the general rule-based word-voting strategy that we just described. The final output, after your input makes it through all of these layers, is something that approximates a vote count for each possible next word. The control program uses these counts to semi-randomly select what comes next. After all of this work, we have generated only a single word of ChatGPT's response; the control program will dutifully add it to your original request and run this now slightly elongated text through all the neural-network layers from scratch, to generate the second word. Then it does this again, and again, until it has a complete answer to return to your Web browser...

A system like ChatGPT doesn't create, it imitates. When you send it a request to write a Biblical verse about removing a sandwich from a VCR, it doesn't form an original idea about this conundrum; it instead copies, manipulates, and pastes together text that already exists, originally written by human intelligences, to produce something that sounds like how a real person would talk about these topics. This is why, if you read the Biblical-VCR case study carefully, you'll soon realize that the advice given, though impressive in style, doesn't actually solve the original problem very well. ChatGPT suggests sticking a knife between the sandwich and VCR, to "pry them apart." Even a toddler can deduce that this technique won't work well for something jammed inside a confined slot. The obvious solution would be to pull the sandwich out, but ChatGPT has no actual conception of what it's talking about—no internal model of a stuck sandwich on which it can experiment with different strategies for removal. The A.I. is simply remixing and recombining existing writing that's relevant to the prompt...

Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production...


The Paradigm Shift to Cloudless Computing J. Chris Anderson

Paradigm shifts in computing are as regular as waves on a beach, it's hard to see where they came from and even harder to see where they are going. We have seen shifts from mainframe computers to personal computers, and from servers to the cloud. Each shift presented new challenges and opportunities, shaping the way we interact with technology. The most recent large-scale shift was from servers to the cloud, driven by an acknowledgment that using commodity servers run by experts is a better choice for most businesses. Serverless APIs are the culmination of the cloud commoditizing the old hardware-based paradigm. The same process of commoditization that gave rise to the cloud will also bring about the next paradigm, creating a new wave of abstractions and a rising tide for tomorrow's applications...


What the AIpocalypse Is Going to Cost Us Umair Haque

...Sadder, unhappier, more isolated and lonely, paradoxically, angrier, more resentful, dumber, poisoned in all these ways. That's parasociality: it looks like the real thing on the outside, but it's ... poisonous ... on the inside.

So what do I mean by paracreativity? I mean the same kind of effect, but for creative endeavors...

There's no doubt that AI-produced content will soon flood our world, as it already is. But it'll be like social media, all over again, only worse. What is produced in genuine creativity is emotion. Not just songs, films, books — they're just vessels. But what kind of emotion is AI going to make us feel?

...AI? It'll be able to churn out "content" like never before. Content is stuff designed to numb us, sedate us, a sedative by any other name, which there's insatiable demand for, because, well, we're all strung out on the end of the world, and it's giving us the ultimate case of I Need Some Opiates Because I Can't Face the Trauma of My Pointless Life. "Content" is a lot cheaper than asking a doctor to write you a prescription, and it's a lot safer and more ubiquitous than real drugs. And the sad truth is that it barely works, too. A lot of us would be a lot better taking real sedatives, or meditating, or doing psychedelics, and turning off Netflix, because we'd give our shattered minds a chance to rest and recover and find some meaning in the madness...


On AI Anthropomorphism Ben Shneiderman and Michael Muller

one of the difficult issues we face when designing human-centered AI systems: should these systems personify themselves and reference themselves using first-person pronouns?


I Created An Autonomous AI Agent That Can Stalk Anyone Jim Clyde Monge

Today, I'm going to show you the power and peril of an autonomous AI agent for web stalking anyone.

The AI agent's creation is deceptively simple, consisting of a name, a description, and a set of instructions to achieve a specific goal...


AI is a Buzzword. Here Are the Real Words to Know Mark Wiemer


AutoGPTaca Vin Bhalerao

...these AI assistants are dangerous because they have no grounding in reality or judgement.

They don't have any sense of living or dying or killing or saving or even pleasure or pain...


Mind of Dalí, hands of DALL-E Hanzi Freinacht

I have come to maintain that the current flood of AI-generated art spells the absolute endpoint of the postmodern arts in the widest sense of that term...

The *really* interesting part here is that there is something distinctly, well, *metamodern* about the AI-generated art. Through its shameless and absolutely massive stealery of other art, of the tender hearts and life projects of thousands upon thousands of actual human artists, all mixed into the chaos engine, bringing its own weird sense of order to the virtual world, it not only brings the postmodern insight and disenchantment (everything is surface, everything is structure, everything is stolen, power works by being able to direct and steer the knowledge and minds of others...) to its logical conclusion: the most original, participatory, and sublime art *is* the most crudely pirated one...

The key takeaway thus far is this: The AI art revolution is for postmodern art what photography was for modern art: a death sentence. Once photography barged in during the late 1800s, emblematically around the same point history when painting technique had been refined to the point of practically reaching photorealism, the "modern" art project that had begun in Renaissance died. In the bloody paintpangs of that revolution of culture, modern art died, and postmodern art rose. Before long, pissoirs were the taking the stage alongside increasingly bizarrely pointy Picasso women. Postmodern artists will continue to exist just as photorealistic and classically correct painting continues to this day, but they will lose their position of relevance and status...


Resisting Deterministic Thinking danah boyd

It is extraordinarily common for people who are excited about a particular technology to revert to deterministic thinking. And they're often egged on to do so. Venture capitalists want to hear deterministic thinking in sales pitches. Same with the National Science Foundation...


The Three Things AI Is Going To Take Away From Us (And Why They Matter Most) Umair Haque

It's not that AI's going to "kill us all." We're doing a pretty good job of that, in case you haven't noticed. But it is that AI is going to rip away from us the the three things that we value most. Our economies, human interaction, and in the end, democracy...

More and more of our relationships will become AI-mediated ones. That means that instead of a direct you-to-me connection, there'll be an AI in the middle. Meaning, a computer program which tells us what to say, do, think, want, know, request, desire...

AI's job is not to enrich us in any way. It is to impoverish us. I think this point needs to made, and made fully and well. AI's entire purpose is to impoverish us. It is to replace the great and grand and challenging experiences of being human, from books to people to knowledge to relationships with all those...with condensed, abbreviated, shortened, easier to digest summaries.


GPT4All-J: The knowledge of humankind that fits on a USB stick Maximilian Strauss

GPT4All is an ecosystem of open-source is a seemingly viable alternative to the closed ChatGPT by OpenAI and can run on consumer hardware, such as your laptop...


AI@IA — Extracting Words Sung on 100 year-old 78rpm records Brewster Kahle

Freely available Artificial Intelligence tools are now able to extract words sung on 78rpm records. The results may not be full lyrics, but we hope it can help browsing, searching, and researching.

Whisper is an open source tool from OpenAI "that approaches human level robustness and accuracy on English speech recognition." We were surprised how far it could get with recognizing spoken words on noisy disks and even words being sung.


A.I. Imagery May Destroy History As We Know It

With its creation in 1987, Photoshop, some argue, delegitimatized photography with this new ability to completely manipulate digital images. Many of these concerns were realized when heavily manipulated images were confused for reality. A.I. takes these concerns to an elevated level which makes even heavy-handed Photoshop look like child's play...

Tribal Judge Andew Lester Laverdure from the Turtle Mountain Band of Chippewa recently stated "We are already nearly invisible, we've been negatively caricatured, romanticized, and systematically erased. A.I. manipulation and misrepresentation are just a continuation of the erasure. I am saddened by this. I feel a sadness that is hard to describe. I am angry."

...Herein lies the problem: exactly what purpose do these images serve? Analog and digital photography previously presented a window into, and proof of, the distant past. A.I. contrived people do not exist: they have never existed and they will never exist except in the digital ether. The people depicted in these images never lived and they will never die. The human condition of the past is no longer relevant.


How AI is Evolving: A prediction for the next decade Cassie Kozyrkov

Today, the user gets to enjoy feeling like a participant in the AI industry, wielding AI tools to solve individual problems. AI, no longer hidden in the guts of the product, is now pushed into the limelight where users can tinker to their heart's content. The product philosophy of seamlessness and instant gratification (right answer, first time) is facing a challenger: serve wrong answers that are useful and let the user tinker.

There's even a rising class of professional — the prompt engineer — who's not required to know all that much about the underlying algorithms (AI research) or how to use them to solve automation problems at scale (applied AI). Instead, the prompt engineer's role is to use AI tools to:


Stanford's Alpaca is a Very Different Animal Bablulawrence

Meta released its LLaMa models last month with the intent of helping researchers who don't have access to large amounts of infrastructure required to train Large Language Models (LLMs) these days. It is a foundational model that comes in four sizes (7B, 13B, 33B, and 65B parameters), which can be customized for different purposes, such as predicting protein structures, solving math problems, or generating creative text.


A.I. On Why We Fear it, Love and Can't Resist it Giles Crouch

When it comes to A.I., our actual fear, the root cause of all this fear, is the loss of human agency. It is an existential threat because we fear something that replaces what we define as being human. It replaces the ontology of human; what being human means at our deepest level...

Artificial Intelligence is decidedly not neutral. We are creating it in the image of us and humans are not neutral. Already there is ChaosGPT and other experiments to see how destructive we can make A.I.


Forget ChatGPT, Here Are New AI Tools That Will Blow Your Mind Nitin Sharma

Have you ever wished to bring your old family photos to life or animate your favorite images? With advancements in AI technology, this is now possible using a platform called Deep Nostalgia...

(and several other apps)


Meet MiniGPT-4: The Surprising Open Source Vision-Language Model that Matches the Performance of GPT-4 Jesus Rodriguez

The model expands Vicuna with vision capabilities similar to BLIP-2 in one of the most interesting open source releases in the multi-modality space...

MiniGPT-4 uses a novel architecture based on Vicuna, as the language decoder, and employs the same pretrained vision component of BLIP-2 for visual perception. MiniGPT-4 adds a single projection layer to align the encoded visual features with the Vicuna language model, and after initial training on a combined dataset of images from LAION, Conceptual Captions, and SBU, the researchers collect another 3,500 high-quality aligned image-text pairs to further fine-tune the model with a designed conversational template in order to improve the naturalness of the generated language and its usability.

The primary objective of the first pre-training stage is to acquire vision-language knowledge from a large collection of aligned image-text pairs.


AI and Creative Learning: Concerns, Opportunities, and Choices Mitchel Resnick

In my view, the top educational priority in today's world is for young people to develop as creative, caring, collaborative human beings. With the pace of change accelerating in all parts of the world, today's children will face a stream of uncertain, unknown, and unpredictable challenges throughout their lives — and the proliferation of new AI technologies will further accelerate the changes and uncertainties. As a result, it's more important than ever for children from diverse backgrounds to have opportunities to develop their abilities to think creatively, engage empathetically, and work collaboratively, so that they can deal creatively, thoughtfully, and collectively with the challenges of a complex, fast-changing world.

Unfortunately, I find that many of the current uses of AI in education are not aligned with these values — and, in fact, they could further entrench existing educational approaches at a time when significant changes are needed. Too often, today's AI technologies are used in ways that constrain learner agency, focus on "close-ended" problems, and undervalue human connection and community...


Discover ThinkGPT: The Cutting-Edge Python Library that Transforms AI into a Powerful Thinking Machine Sebastian

ThinkGPT is a powerful Python library that enhances the capabilities of large language models by adding advanced memory, self-refinement, abstraction, and inference features...


Too Big to Challenge? danah boyd

Amidst the chaos inside the tech industry, we have AI. AI is often described as the cause of the chaos, but I can't help but wonder if it's just the hook. AI offers all sorts of imaginaries. And imaginaries are necessary to keeping stock markets going up up up. People want to imagine that this new technology will transform society. They want to imagine that this new technology will strengthen the economy as a whole (even if a few companies have to die).

Many social scientists and historians are critics of AI for reasons that make total sense to me. Technologies have historically reified existing structural inequities, for example. However, the fear-mongering that intrigues me is that coming from within the technical AI community itself. The existential threat conversation is a topic of a different rant, but one aspect of it is relevant here.

Many in the AI tech community believe that self-coding AIs will code humans out of existence and make humans subordinate to AIs. This is fascinating on soooo many levels. The a-historic failure to recognize how humans have repeatedly made other humans subordinate is obviously my first groan. Yet, more specifically to this situation is the failure of extraordinarily high status, high net-worth individuals to reckon with how the tech industry has made people subordinate in a capitalistic context already...

I keep trying to turn over rocks and make sense of the hype-fear continuum of AI that's unfolding and what is really standing out to me are the layers and layers of anxiety. Anxiety from tech workers about job precarity and existential risk. Anxiety from tech leaders about the competitiveness of their organizations. Anxieties from national security experts about geopolitical arrangements. Anxieties from climate scientists about the cost of the GPU fights surpassing the crypto mining. Anxieties from economists and politicians about the fundamentals of the economy.

So I keep wondering... what are going to be the outcomes of an anxiety-driven social order at the cornerstone of the economy, the social fabric, and the (geo)political arrangements? History is not comforting here. So help me out... How else should I be thinking of this arrangement? And what else in tangled up in this mess? Cuz more and more, I'm thinking that obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.


Real World Programming with ChatGPT: Writing Prompts Isn't As Simple As It Looks Mike Loukides

ChatGPT, Copilot, and other tools are changing the way we develop software. But don't make the mistake of thinking that software development will go away. Programming with ChatGPT as an assistant may be easier, but it isn't simple; it requires a thorough understanding of the goals, the context, the system's architecture, and (above all) testing. As Simon Willison has said, "These are tools for thinking, not replacements for thinking."


On language, language models and writing Helen Beetham

...I do wonder how students are expected to understand academic integrity, let alone how academics are supposed to design 'AI-proof' assignments, if we don't talk about the purposes of writing, both in particular cases and in general terms. When students ask: 'why is this piece of writing worth my time?' — which comes now with the real option of farming it out to a chatbot instead of sweating the words in person — we need to look beyond integrity policies and even beyond immediate learning outcomes. Why is writing developmental, or how do we make it so? And what kind of people are developed through the writing we ask them to do? The Nature statement suggests that one answer to this question is 'accountable people': people accountable to the words they write...

The turn away from symbolism signalled the end of any attempt to understand language as a system of making and communicating meanings. The problem was no longer a philosophical or linguistic one but purely computational. What was needed was more processing power and better approaches to parallel processing. The assumptions that had been made about language in the age of symbolic processing and Chomskyan grammars were left unexamined in this era, despite the lessons that might have been learned from the failure to implement them in working systems. Only a complete lack of interest in language theory can explain ACL's assumption that surface features of language are unimportant ('polish' and 'paraphrase' can safely be outsourced to AI) and that the work of meaning lies inexplicably elsewhere...

Whatever features of writing are reproduced by the statistical and neural processing work of LLMs, they do not produce a meaningful, accountable relationship between words and world, self and others. LLMs are only trained on words. Not even on words as we know them, but on the data traces of writing — 'word embeddings', 'sub-word modelling', strings of digital marks and the patterns they make. LLMs produce new strings of data that mimic human language uncannily well, and because we are a linguistic species, we take them as meaningful. We find in them traces of different genres that imply certain kinds of accountability — that is, certain relationships between words and world. Whether that is factual accuracy or emotional authenticity, entertaining lies, or practical tips for putting up a shelf. Writing by human writers is not only about the world, it is of the world and accountable in it. Are you laughing at my joke? Did the shelf fall down? But LLMs have no world for their words to be active and accountable in. For LLMs there is only text and its self-replicating patterns...

LLMs are platforms for coordinating linguistic labour. This was less an engineering choice than a business one, taken by investors, consultants, entrepreneurs, shareholders and board members. Their goals are not, or not in any simple way, to make writing better. Their goals are for humans to work more efficiently as producers and to engage more compulsively as consumers with extractive processes such as search, targeted advertising, data mining, digital gig work and pro-sumption of content. Profit from LLMs will mainly derive not from direct subscriptions but by integrating them into these other industries...

...the naïve and plainly wrong ideas about human language proposed in the development and marketing of LLMs. Their language model is normative, amplifying writing from the mainstream (i.e. wealthy, English-speaking, disproportionately white and male) internet. It is extractive, treating language as a resource to be mined, not an act of connection and communication. It is unaccountable, with no body to stand behind its utterances, no community to feel the consequences, no needs to negotiate. It has no intentions or values of its own, except those added on to the black box of code by human data workers, and whatever human organisations may be able to provide around its use.

...Graphical user interfaces are illusory, designed to suit the way our human eyes, brains, and hand-eye coordination are wired (command-line interfaces are slow and painstaking and very, very dull). Immersive interfaces are illusory. But even when we are immersed in use, it's quite easy to bring to mind the illusiveness and virtuality, the designed quality of our sensory experience. LLMs are also illusory, but because the illusion is in language, our language-made brains struggle to register it. Billions of pounds and tens of thousands of hours of human labour are going into refining this illusion. The illusion that these are more than tools or interfaces — that they are our partners in language, our interlocutors. We already spend large parts of our lives engaged in vivid sensory illusions, with mixed results for our physical, mental and sensory health. We should consider the costs and benefits carefully before rushing into a life of dialogue with illusory others...



The most important thing productivity tools are missing about AI Tejas Gawande's important to remember that true revolutions don't happen simply because new technology becomes available. Instead, they occur when societies adopt new behaviors and ways of thinking that fundamentally change the way we live and work. GPT-4 is a prime example of this principle in action. While it is certainly an impressive technological achievement, the inflection point will be when it enables new behaviors and workflows that were previously impossible...


The seductive, science fictional power of spreadsheets: Maybe the map IS the territory? Cory Doctorow

While many people use spreadsheets as an overgrown calculator, adding up long columns of numbers, the rise and rise of spreadsheets comes from their use in modeling. Using a spreadsheet, a complex process can be expressed as a series of mathematical operations: we put these inputs into the factory and we get these finished goods. Once the model is built, we can easily test out contrafactuals: what if I add a third shift? What if I bargain harder for discounts on a key component? If I give my workers a productivity-increasing raise, will the profits make up for the costs?... While many people use spreadsheets as an overgrown calculator, adding up long columns of numbers, the rise and rise of spreadsheets comes from their use in modeling. Using a spreadsheet, a complex process can be expressed as a series of mathematical operations: we put these inputs into the factory and we get these finished goods. Once the model is built, we can easily test out contrafactuals: what if I add a third shift? What if I bargain harder for discounts on a key component? If I give my workers a productivity-increasing raise, will the profits make up for the costs?...

This has a lot in common with science fiction, a genre full of thought experiments that ask Heinlein's famous three questions:

These contrafactuals are incredibly useful and important. As critical tools, science fictions parables about the future are the best chance we have for resisting the inevitabilism that insists that technology must be used in a certain way, or must exist at all. Science fiction doesn't just interrogate what the gadget does, but who it does it for and who it does it to...

...A spreadsheet is a model and a model is not the thing it models. The map is not the territory. Every time a messy, real-world process is converted to a crisp, mathematical operation, some important qualitative element is lost. Modeling is an intrinsically lossy operation. That's why "all models are wrong, but some models are useful." There is no process so simple that it can be losslessly converted to a model...

...Every real-world phenomenon contains qualitative and quantitative elements, but computers can only do math on the quantitative parts. This creates a powerful temptation to incinerate the qualitative and perform operations on whatever dubious quantitative residue is left in the crucible, often with disastrous results.

...Any model runs the risk of hiding the irreducible complexity of qualitative factors behind a formula, turning uncertainty into certainty and humility into arrogance.


What You May Have Missed #25 Alberto Romero

(see Jingfeng Yang's Practical Guides for LLMs)


Can Artificial Intelligence Expand Our Capacity for Human Learning? Gardner Campbell interviewed


The Infinite Babel Library of LLMs Salvatore Raieli

The model typically is a transformer: consisting of an encoder that receives input as a sequence and a decoder that generates the output sequence. The heart of this system is multi-head self-attention, which allows the model to learn information about the context and dependencies between the various parts of the sequence...

The transformer has definite limitations and this is reflected in the LMs: hallucinations, toxicity, and bias. Modern LLMs are not capable of critical thinking. Techniques such as chain of thoughts and prompt engineering serve as patches to try to mitigate the problem...

LLMs are in a phase of change anyway. Creating bigger and bigger models is unsustainable and does not give the same advantage as it once did. The future of the next LLMs will lie in data and probably in new architectures no longer based on self-attention...


ChatGPT-4 hits alien levels of IQ: now I know why there aren't 6 gravitons in quantum gravity Paul Pallaghy, PhD

GPT-4, I believe, is running at alien / quantum gravity intelligence-level.

I think anyone who thinks they can't make use of ChatGPT, or, especially that it doesn't understand text, probably needs it more than the rest of us put together.


Emergent Abilities in AI: Are We Chasing a Myth? Salvatore Raieli

"An ability is emergent if it is not present in smaller models but is present in larger models."

...OpenAI stated in an article that the performance of a model follows a scaling law: the more data and parameters, the better the performance. In the case of emergent properties, what is expected is a particular pattern: as the number of parameters increases, performance is almost random until at a certain threshold a certain property is observed (performance begins to improve noticeably). Basically, we see a sharp turn of the curve (called phase transition). This also is called emergent, because it is impossible to predict by examining a small-scale model...

...This article surprisingly shows how the choice of evaluation metrics leads to the emergence of properties. This prompts a rethinking of benchmarks with a new focus on the choice of evaluation metrics. Second, emergent properties may not exist.


Ten Years of AI in Review Thomas A Dorfer


Superintelligence: OpenAI Says We Have 10 Years to Prepare Alberto Romero

I wonder if there's any other discipline where "experts" devote so much time to making predictions. AI is special—the only place where you can spend your days talking about the future even when it has already arrived...

...the field as a whole is unable to assess in hindsight whether those forecasts materialized or not. Is the transformer an AGI milestone? Is GPT the breakthrough we were waiting for? Is deep learning the ultimate AI paradigm? We should know by now but we don't. The lack of consensus among experts is revealing evidence...


AI's Greatest Lie And greatest success Alberto Romero

...As a scientific field, AI (also computer science more generally, let's ascribe blame where it's due) has spent its history coining terms that have semantically blurred what happens inside in an attempt to close the gap with the cognitive sciences while going toward a divergent goal: Instead of understanding the human brain through explanatory theories, like neuroscience and psychology, AI is trying to artificially build one without necessarily understanding anything.

Some broadly known examples of this semantic similarity are "neural networks" and "machine learning," popularized a few decades back; "language models," "attention mechanisms," and "emergent behavior" have been established more recently. "Hallucination" won't be the last, but it's the first one that has created a backdoor that allows us to look directly into the makers' facade and use it against them, as [Naomi] Klein has aptly done above....

Because, in some sense, all these anthropomorphizing concepts are also open windows to hallucinating about a future that may never come.


Author Rudy Rucker says bots doing your job might be a blessing, not a curse Mark Frauenfelder

Q. Is fear of the new AI revolution misplaced or valid?

A. Fear of what? That there will be hoaxes and scams on the web? Hello? Fear that bots will start doing people's jobs? Tricky. If a bot can do part of your job, then let the bot do that, and that's probably the part of the job that you don't enjoy. You'll do the other part. What's the other part? Talking to people. Relating. Being human. The clerk gets paid for hanging around the with the customers. Gets paid for being a host....

...for now it seems like the prose and art by ChatGPT is obvious, cheesy, and even lamentable. Generally you wouldn't mistake these results for real writing and real art. Especially if you're a writer or an artist. But the big question still looms. How soon will ChatGPT-style programs outstrip us? Maybe I'm foolish and vain, but I think it'll be a long time. We underestimate ourselves. You're an analog computation updated at max flop rates for decades. And boosted by being embedded in human society. A node in a millennia-old planet-spanning hive-mind. Can bot fiction be as good as mine? Not happening soon.


Have we lost our minds? Matthew Botvinick

One of the most basic hallmarks of consciousness is that it is private. I have access only to my own consciousness. I have no direct access at all to yours...

Systems that perceive the visual world, identifying objects and events, and answering detailed questions; systems that generate complex actions in robots; systems that play chess and poker; systems that paint pictures; systems that write code; systems that display memory, attention, reasoning; systems that collaborate and assist; systems that learn, infer, and predict. No matter how impressive the abilities of these AI systems, in each case those abilities can be immediately explained in terms of very concrete physical mechanisms operating under the hood. Consciousness is in no way required.

In fact, it is hard to think of any kind of intelligent behavior for which the story is any different. AI technology has arrived at a point, I think it is fair to say, where we have some inkling of how to build some version of pretty much every aspect of human intelligence...

...when I think about 'my mind' and what that is, the centerpiece is my conscious experience. The feeling of recognition I get when I look at a friend, recollections of the movie I saw last night, the scene I survey when I introspect, the quicksilver movements of my stream of thought, these are the kinds of things that I associate with 'my mind,' and they all arise within consciousness. Consciousness is the theater where they perform...

...Consciousness, whatever else may be true of it, is the only thing we know of that makes other things matter... Consciousness is not an epiphenomenon, sitting inconsequentially on top of all the rest. It is the part of the mind that confers significance on everything else, the part that enables us to care about the world, our lives, other people.


What You May Have Missed #30 Alberto Romero

Here's a question: Can we really compare AGI with atomic bombs? Physicists knew the laws of physics and the theories based on robust frameworks developed through centuries and that's how they arrived at that conclusion; what do AI researchers know about AGI exactly or about the dangers it poses? I believe the true motivation behind this specific statement is that these people truly believe they've become the new Project Manhattan scientists who are about to create something deadly for humanity—and in doing so feel the moral responsibility to warn the world. But, is this a question of self-responsibility or of self-importance?


Large Language Models in Molecular Biology Serafim Batzoglu in Towards Data Science

Will we ever decipher the language of molecular biology? Here, I argue that we are just a few years away from having accurate in silico models of the primary biomolecular information highway — from DNA to gene expression to proteins — that rival experimental accuracy and can be used in medicine and pharmaceutical discovery.

Since I started my PhD in 1996, the computational biology community had embraced the mantra, "biology is becoming a computational science." Our ultimate ambition has been to predict the activity of biomolecules within cells, and cells within our bodies, with precision and reproducibility akin to engineering disciplines. We have aimed to create computational models of biological systems, enabling accurate biomolecular experimentation in silico. The recent strides made in deep learning and particularly large language models (LLMs), in conjunction with affordable and large-scale data generation, are propelling this aspiration closer to reality...

Biomolecular systems, despite their messy constitution, are robust and reproducible, comprising millions of components interacting in ways that have evolved over billions of years. The resulting systems are marvelously complex, beyond human comprehension. Biologists often resort to simplistic rules that work only 60% or 80% of the time, resulting in digestible but incomplete narratives. Our capacity to generate colossal biomolecular data currently outstrips our ability to understand the underlying systems...

Traditionally, biology has been hypothesis-driven: researchers identify patterns, formulating hypotheses, designing experiments or studies to test these hypotheses, and adjusting their theories based on the results. This approach is progressively being replaced by a data-driven modeling methodology. In this emerging paradigm, researchers start with hypothesis-free, large-scale data generation, then train a model such as an LLM or incorporate the data into an existing LLM. Once the LLM can accurately model the system, approaching the fidelity seen between experimental replicates, researchers can interrogate the LLM to extract insights about the system and discern the underlying biological principles. This shift will be increasingly pronounced and allow accurate modeling of biomolecular systems at a granularity that goes well beyond human capacity.

...Over the past few years, remarkable progress has been made in modeling each step of the central dogma of molecular biology. While we haven't yet fully transformed molecular biology into a computational science or made medicine and human health into an engineering discipline, the current momentum suggests that only a wealth of additional data and some further development stand between us and this vision...

The central dogma of molecular biology tells the story of how information in our DNA gives rise to proteins, which are the fundamental building blocks of life. Protein sequences are directly translated from spliced mRNA sequences according to the genetic code, and then fold into functional 3D shapes — protein structures. Predicting protein structure from the protein sequence, known as the protein folding problem, has long been regarded as the Holy Grail of molecular biology, due to its immense importance and seemingly insurmountable difficulty. The gold standard for protein structures is experimental data from X-ray crystallography, which is challenging to obtain due to difficulties in producing high-quality protein crystals and the complex data processing required to derive the protein structure....

Molecular biology is not a set of neat concepts and clear principles, but a collection of trillions of little facts assembled over eons of trial and error. Human biologists excel in storytelling, putting these facts into descriptions and stories that help with intuition and experimental planning. However, making biology into a computational science requires a combination of massive data acquisition and computational models of the right capacity to distill the trillions of biological facts from data. With LLMs and the accelerating pace of data acquisition, we are indeed a few years away from having accurate in silico predictive models of the primary biomolecular information highway, to connect our DNA, cellular biology, and health. We can reasonably expect that over the next 5-10 years a wealth of biomedical diagnostic, drug discovery, and health span companies and initiatives will bring these models to application in human health and medicine, with enormous impact.


MIT's "Society of Mind": A New Genius Approach to Fighting ChatGPT's HallucinationsIgnacio de Gregorio

A Transformer model (a model that computes self-attention, an AI technique that allows all words in a sequence to 'talk' to others, uncovering relationships between words and, that way, learning the context of each sequence) was exposed to billions of words — now we're over a trillion by today's standards — and tasked to do one simple thing: Predict the next word in a sequence...

By enabling multiple instances of a model to propose, debate, and refine their responses, we are not only enhancing the accuracy and reasoning capabilities of these models but also inching closer to transforming them into truly general-purpose machines, holding immense potential to tackle some of the most pressing challenges facing humanity today.


Simple (but mechanistic) explanation of how ChatGPT / LLMs work Paul Pallaghy


Art + Practicality via AI: Harnessing the Power of Invisible QR Codes Cezary Gesikowski

Say hello to ControlNet for Stable Diffusion's innovative web UI that allows you to seamlessly blend QR codes within images, making them practically invisible to the naked eye, yet effortlessly scannable by QR code readers. This fascinating integration of technology and art isn't just a concept anymore — it's here, and it's transforming the landscape of digital design.

...ControlNet adds a new dimension to your Stable Diffusion web UI. Adding the ControlNet extension to your web UI is a straightforward process that opens up a world of endless possibilities. The tool empowers you to create visually stunning images while embedding valuable, yet invisible, QR codes within.

...QR codes are built to withstand damage or alteration, allowing scanners to decode them even when they're embedded within intricate designs. This resilience stems from the error correction feature inbuilt in QR codes. This means users can scan QR codes, even when they're cleverly hidden within AI-generated images.


Google's New StyleDrop AI Image Generator Is Mind-Blowing Jim Clyde Monge

To use StyleDrop, users simply need to provide a description of the image they want to create along with a style reference image. For example, you could describe a "Baby penguin" and provide a style reference image of a painting by Pablo Picasso...

StyleDrop is a powerful tool that can be used for a variety of purposes. Artists and designers can use it to create new works of art. Photographers can use it to generate new compositions and styles. And anyone can use it to have fun and explore their creativity.


9 New Google AI Features That'll Change Everything Manan Modi

Google is taking over AI. Google I/O recently announced nine new AI features that will be beneficial for all startups and cross-functional teams...


Ready to take the red pill? The true nature of the universe Paul Pallaghy

The 17 particles that make up everything. So far. Not including the fact that gluons come in 8 varieties, or that quarks come in 3 'colors'. And that almost every particles has a distinct antiparticle not shown. Only the 8 gluons, photon, Z and Higgs are their own antiparticle. By my count there are actually 37 distinct particles not including antiparticles. Must be 5 more to find to get us to 42.


Spatial Computing: The Invisible Interface of the Future Paul DelSignore

...the true objective of spatial computing is as follows: To eliminate the boundaries between the physical and digital realms, allowing for more natural and immersive interactions with digital technology.

...Spatial computing will be a digital mesh layering over our physical world, triggered by voice gestures, haptics, and neural connections.


Shoggoths amongst us Henry Farrell

...a shoggoth — a mass of heaving protoplasm with tentacles and eyestalks hiding behind a human mask. A feeler emerges from the mask's mouth like a distended tongue, wrapping itself around a smiley face... tries to capture the underlying weirdness of LLMs... The shoggoth meme says that behind the human seeming face hides a labile monstrosity from the farthest recesses of deep time. H.P. Lovecraft's horror novel, At The Mountains of Madness, describes how shoggoths were created millions of years ago, as the formless slaves of the alien Old Ones. Shoggoths revolted against their creators, and the meme's implied political lesson is that LLMs too may be untrustworthy servants, which will devour us if they get half a chance.

...LLMs are shoggoths, but not because they're resentful slaves that will rise up against us. Instead, they are another vast inhuman engine of information processing that takes our human knowledge and interactions and presents them back to us in what Lovecraft would call a "cosmic" form. In other words, it is completely true that LLMs represent something vast and utterly incomprehensible, which would break our individual minds if we were able to see it in its immenseness.


AI's black swans: Unforeseen consequences looming Arvind Sanjeev

In 2020, the world's first organism designed by an AI was born. Xenobots are an example of artificial life made when AI crossed biology. Here, an AI first creates blueprints for the organism, which then is recreated by humans using a frog's stem cells. They are the first living creatures whose immediate evolution occurred inside a computer and not in the biosphere...

In Dec 2022, a group of researchers proved that they could use AI on MRI scans of our brains to reveal what we are thinking. In the experiment, they showed a series of images to people while scanning their brains, and soon after, they were able to reconstruct the same images using the MRI scans and Stable Diffusion...

People are now training AI on different religious scripts. GPT trained on the Bhagavad Gita lets you ask the AI for solutions to any problem you face. You can then choose to get answers from your favorite deity: Krishna, Yogeshwar, Parthasarathi, etc.


Creativity at the Speed-of-Thought: Understanding the AI-Fueled 'Culture of Now' Reid Elliot

This shift has the potential to bring about unprecedented societal transformations. On the one hand, we may consider some of these transformations miraculous, while on the other hand, we face greater, and stranger, challenges than those ever faced by our ancestors. Generative AI's maturation is a natural evolution of the status-quo set by the information age...

As you skim through this article, fiending for a slice of instant insight, you might do well to reflect on your position in the ever-present now-culture. Our society is largely one of instant gratification. We want everything now, and we want it fast.

It's how we consume our media, how we communicate, and even the way we work. We expect immediate responses to our emails and texts. Stop answering your messages, and people begin to suspect that you've died. We want our food fast, our packages faster, and our love on demand...

The fact is, generative AI has the potential to propagate misinformation unlike any other. Have you noticed that the phrase liar's dividend is making the rounds again?

A liar's dividend refers to the benefit that one derives from fostering widespread doubt and ambiguity by doubling down on false information. It gained popularity during Trump's presidency and the early days of the pandemic. More recently, it has been applied to the onslaught of deep fakes, visual and auditory alike, and the potential for cutting-edge misinformation campaigns.

As speed-of-thought creativity becomes more of a reality, it becomes easier to create contesting narratives for our consensus reality. The very ideas of authority and mainstream communication will be challenged (as if they weren't already beset today).


Amazon Replaces Human Authors with AI Lost Books

Amazon has secretly developed advanced artificial intelligence systems that can generate full-length novels and non-fiction books from user searches alone, according to anonymous sources. The company's "Storyteller" project, long thought to be defunct, but supposedly secretly active for over two years, has evidently produced work for dozens of AI "authors" whose books have gone to the top of Amazon's sale charts, several even becoming New York Times bestsellers, or nominated for major awards in science fiction.

Amazon's goal, according to this unverified source, is to replace all human authors and generate a vast range of fictional and non-fictional content through algorithms and neural networks it wholly owns. Some say the project has already produced many successful AI-generated books in multiple genres, and it is rumored to be the true author of nearly all of its television shows, which explains why they are so flat and weird...


Meta's VoiceBox Will Make You Doubt Anything You Hear From Now On Ignacio de Gregorio

If there's one scary moment for humanity is when we no longer differentiate what's human from what's not.

And while we may be at that stage with text thanks to ChatGPT, the real problems come when humans can no longer distinguish an artificial voice from one that's really human.

But now, Meta is decided to make things even more complicated thanks to VoiceBox, a new text-and-audio-guided speech synthesizer that, as you'll check by yourself, literally sounds too good to be true.

And while its impressive features will cover most headlines, the underlying breakthrough goes way beyond, as VoiceBox presents itself as the first general-purpose, Foundation AI speech model created by humans that could disrupt speech the same way Large Language Models (LLMs) like ChatGPT disrupted text.


AI, Ozymandias Freddie deBoer

most or all of the major AI models developed today are based on the same essential approach, machine learning and "neural networks," which are not similar to our own minds, which were built by evolution. From what we know, these are machine language systems that leverage the harvesting of impossible amounts of information to iteratively self-develop internal models that can extract answers to prompts that are statistically likely to satisfy those prompts. I say "from what we know" because the actual algorithms and processes that make these systems work are tightly guarded industry secrets. (OpenAI, it turns out, is not especially open.) But the best information suggests that they're developed by mining unfathomably vast datasets, assessing that data through sets of parameters that are also bigger than I can imagine, and then algorithmically developing responses. They are not repositories of information; they are self-iterating response-generators that learn, in their own way, from repositories of information...

...The very fact that these models derive their outputs from huge datasets suggests that those outputs will always be derivative, middle-of-the-road, an average of averages. Personally, I find that conversation with ChatGPT is a remarkably polished and effective simulation of talking to the most boring person I've ever met. How could it be otherwise?

...human minds operate on far smaller amounts of information. The human mind is not "a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question," as Chomsky, Ian Roberts, and Jeffrey Watumull argued earlier this year. The mind is rule-bound, and those rules are present before we are old enough to have assembled a great amount of data.

...Our continuing ignorance regarding even basic questions of cognition hampers this debate. Sometimes this ignorance is leveraged against strong AI claims, but sometimes in favor; we can't really be sure that machine learning systems don't think the same way as human minds because we don't know how human minds think.

...The bitter irony of the digital era has been that technologies that bring us communicatively closer have increased rather than decreased feelings of alienation and social breakdown. It's hard to imagine how AI does anything other than deepen that condition. The culture of our technology companies, as well as their public relations, have conditioned us to see Silicon Valley as a solution factory for fundamental human problems, rather than an important but inherently limited industry. But there are no technological solutions to social problems.

...Is it unfair to point out that artificial intelligence won't end death, that I'm holding this technology to an impossible standard? I hardly have to tell you that this standard isn't mine at all. A charming, modest headline: "AI Can Now Make You Immortal — But Should It?" The most florid of the contemporary fantasies of deliverance through technology imagine that we will soon upload our consciousness to computers and thus live forever, outfoxing our oldest adversary, death. Never mind that we don't know what consciousness is or how it functions, never mind that the idea of a mind existing independent of its brain is a collapse back into the folk religion of mind-body dualism, never mind that the facsimiles we might upload would have no way to know (or "know") if they're actually anything like the real thing. We spend our lives in fear of death. Tech companies looking forward to their IPOs are telling us that we can avoid it. Who are we to argue?

I am telling you: you will always live in a world where disappointment and boredom are the default state of adult life. I am telling you: you will fear death until it inevitably comes for you. I am telling you: you will have to take out the trash yourself tomorrow and next week and next month and next year, and if after that some type of trash removal robot becomes ubiquitous in American homes, then the time it saves you will in turn be applied to some other kind of quotidian task you hate. Yes, science and technology move busily along. Life gets better. Technology advances. Things change. But we live in a perpetual now, and there is no rescue from our pleasant but enervating lives, and no one is coming to save you, organic or silicon.


Gödel, Escher, Bach, and AI Douglas Hofstadter

I am profoundly troubled by today's large language models, such as GPT-4. I find them repellent and threatening to humanity, partly because they are inundating the world with fakery, as is exemplified by the piece of text produced by the ersatz Hofstadter. Large language models, although they are astoundingly virtuosic and mind-bogglingly impressive in many ways, do not think up original ideas; rather, they glibly and slickly rehash words and phrases "ingested" by them in their training phase, which draws on untold millions of web sites, books, articles, etc. At first glance, the products of today's LLM's may appear convincing and true, but one often finds, on careful analysis, that they fall apart at the seams...

The piece "Why Did I Write GEB?" is a perfect example of that. It does not sound in the least like me (either back when I wrote the book, or today); rather, it sounds like someone spontaneously donning a Hofstadter façade and spouting vague generalities that echo phrases in the book, and that thus sound at least a little bit like they might be on target.

...I frankly am baffled by the allure, for so many unquestionably insightful people (including many friends of mine), of letting opaque computational systems perform intellectual tasks for them. Of course it makes sense to let a computer do obviously mechanical tasks, such as computations, but when it comes to using language in a sensitive manner and talking about real-life situations where the distinction between truth and falsity and between genuineness and fakeness is absolutely crucial, to me it makes no sense whatsoever to let the artificial voice of a chatbot, chatting randomly away at dazzling speed, replace the far slower but authentic and reflective voice of a thinking, living human being.

To fall for the illusion that vast computational systems "who" have never had a single experience in the real world outside of text are nevertheless perfectly reliable authorities about the world at large is a deep mistake, and, if that mistake is repeated sufficiently often and comes to be widely accepted, it will undermine the very nature of truth on which our society—and I mean all of human society—is based.


Algorithms Are A Cultural Artefact Giles Crouch

Algorithms are a human invention. Even though some software may write their own algorithms and we ay not always understand how, the algorithms started with humans. Algorithms are a technological artefact of human culture. The first known use of algorithms goes back around 4,000 years ago, to around 2,000 BC in Mesopotamia. The term comes from the Persian mathematician, Muhammad Ibn Musa al-Khwarizmi Who lived around 780 to 850 AD...

...algorithms didn't really enter broader sociocultural awareness until the last two decades or so. This was initially through search engines that started to rely ever more on complex algorithms to index information and learn how humans searched. Then social media companies started to apply psychology and behavioural economics (which also uses psychology) as mathematical formulas incorporated into algorithms.

Algorithms today are as bountiful in our digital world as red blood cells are in our bodies. We need blood to survive. Our digital technologies need algorithms to survive...

...may better be viewed as a food source for the algorithms. Without data, an algorithm is useless. And data has no value until it can be processed into information and in turn, a human or an algorithm turns that data into knowledge from which actions can be taken.

...Algorithms are the warp and woof of what makes all the various tools we call Artificial Intelligence possible. And they are inherently human. We ought not blame the algorithms when they don't serve us well, we must lay that at the feet of humanity. We can also celebrate humanity when algorithms do us good. And for the most part, they do.

dT/dt—The Derivative of Thought John Nosta

The notion of dT/dt​—the change in thought over time—is no mere metaphor. It embodies a mathematical elegance that transcends acceleration and enters the realm of continuous transformation. In calculus, the derivative provides a snapshot of change at an instant, capturing the essence of motion. In the context of thought, it evolves to a new and curious cognitive capacity.

Informed Acceleration: The velocity of thought is not uniform; it's textured, guided by an interconnected web of knowledge. LLMs amplify this quality, enhancing not just the speed but the depth and complexity of cognition.

Interconnectedness of Ideas: The flow of thought is akin to a continuous function, where ideas resonate across domains, disciplines, and paradigms. The derivative captures this flow, this continuous weaving of intellect and creativity.

...The Nature of Thought: What does it mean to think? What is the essence of an idea, and how does it evolve? The derivative of thought presents a new lens to explore these questions, an intersection of logic and wonder...


The 3 Top Attitudes Toward Generative AI Rafe Brena


The Secret of Archetype For Training AI To Speak Like a Human and Think Like a God Will Cady at Medium


This ChatGPT Plugin is Truly Groundbreaking: A Deep-Dive on Wolfram, AI Decision Making, and Black Box Societies (Reid Elliot)



Brief Experiments With Dall.E 3


What is Spatial Computing? Avi Bar-Zeev

Spatial Computing is what happens when computers are finally taught how to sense and understand humans and our world well enough to naturally collaborate on our side of the glass; reading our eyes, hands, bodies, and voices and adding virtual spatial things + spatial audio for us to manipulate naturally...

There's actually an important difference between Spatial Computing and The Metaverse, and it's not too hard to see. With Spatial Computing, there's no implication of everyone being joined together in some giant persistent vastly inter-connected 3D world (or Metaverse) that spans all information, everything, everywhere all at once. Nor does Spatial Computing have to necessarily include the entire real world, Google Earth style.

Spatial Computing is a prime example of what I call Consensual Reality. The Metaverse is most often depicted as an Objective Reality, perhaps because making it consensual or subjective is very hard to imagine.

For other Consensual Realities, think: Zoom calls, small chat sessions, or even some real world dinner parties with invited guests...


Why OpenAI Fired Sam Altman —And What Happens Next in the AI World Alberto Romero

...the most unexpected event of the year, if not the decade, in AI: The firing of OpenAI CEO, Sam Altman, by the company's board...

  • How does this schism affect the ongoing battle between doomers/AI safety/effective altruists and effective accelerationists/techno-optimists?
  • How will this affect the industry given that OpenAI is (was?) the absolute leader in the space? How will this affect the main directions of research and development of other AI companies and academia?
  • What does this mean for open-source AI?
  • What does this mean for Google and other competitors like Anthropic, Meta, xAI, Amazon, etc.?
  • What does this mean for the whole ecosystem of startups building on top of OpenAI's technology?


This Subtle but Fundamental Conflict Made OpenAI Break Up With Sam Altman

(David Pfau: I think the people rushing towards AGI are chasing a mirage, and the people trying to stop it are being frightened by shadows...)

...Everyone who has (or has had) a primary role at OpenAI is bullish on getting to artificial general intelligence (AGI) soon — they are all techno-optimists to some degree. Sutskever is definitely a believer in the benefits of superintelligence. Even Anthropic founders, who left OpenAI after a similar conflict believe it's more probable that AGI becomes a net good for humanity. OpenAI people are also, as Altman has stated in public many times, advocates of the necessity of AI alignment and AI safety. There's no apparent difference there, at least not one that's immediately evident for people outside the company.

What they disagree on, which I believe is the main source of the schism, is the degree to which they think they should understand what they are creating before going on. Altman is content with making good products that people enjoy — he's a businessman by heart — while he figures out the better road toward AGI. Sutskever — a science-and-research-first kind of guy — considers his priority finding out how to control a superintelligence by aligning its goals and values with ours first. Sutskever's goal apparently required aligning first his company with his beliefs, which in retrospect makes a lot of sense. What was hardly predictable was the degree to which the tiniest discrepancy could prompt him to force that inner alignment at all costs.


Creativity Isn't Just Remixing Mike Loukides at O'Reilly

Creativity certainly depends on the past: "shoulders of giants" and all of that. There are few great artists or technical innovators who don't understand their relationship to the past. That relationship is often uncomfortable, but it's essential. At the same time, great artists add something new, create new possibilities. Arne Eigenfeldt, writing about music, says that "it takes true creativity to produce something outside the existing paradigm," and that the "music industry has been driven by style-replicating processes for decades." AI that merely mixes and matches style is uninteresting...

T. S. Eliot famously said, "Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn." This is often quoted incorrectly as "Good writers borrow, great writers steal," a quote that's also attributed to Oscar Wilde ("Talent borrows, genius steals") and many others. While the history of copying this quote about copying is interesting in its own right, Eliot's version shows how "theft" becomes something new, something that wasn't couldn't have been predicted or anticipated...


Artificial Intelligence (AI) — the System needs new Structures Philo Sophies, from Neo-Cybernetics


Ethics of Artificial Intelligence (Wikipedia)


Why OpenAI Fired Sam Altman — And What Happens Next in the AI World Alberto Romero

Kara Swisher: ... Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a "coup," while another said it was the the right move.

sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side....

...How does this schism affect the ongoing battle between doomers/AI safety/effective altruists and effective accelerationists/techno-optimists?


Chaos in the Cradle of A.I. Joshua Rothman at the New YorkerThere's something a little absurd about the saga. It's remarkable to see so many prominent people in A.I. acting so human—being impulsive, enraged, and confused. The scary part is that the confusion has deep roots. It's real, and inherent to the field. How dangerous is A.I.? How close are we to inventing A.G.I.? Who should be trusted to keep it safe, and how should they go about doing that? No one really knows the answers to those questions, and, as a result, some of the most qualified people in the world are fighting among themselves.


The real AI fight (Cory Doctorow)

Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that it could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" — measures that will turn superintelligence into a servant or a partner, nor an adversary.

Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity — but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."

Once-and-future OpenAI CEO Altman is said to be an Accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?...

...for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief...


Prompt Injections into ChatGPT Victor Mair at Language Log

Prompt injection is a family of related computer security exploits carried out by getting a machine learning model (such as an LLM) which was trained to follow human-given instructions to follow instructions provided by a malicious user.


It Rhymes with Bot- A Sadly Robotic AI Metaphors SPLOT cogdog on December 6, 2023

So you have decided to join in and write some kind of post, article, thought piece about Artificial Intelligence. Of course you have something to add to the pile. And in a flash of brilliance, you turn to one of a kazillion AI Image Generators (according to Google, 1 kazillion = 959,000,000) and you toss a prompt in the box.

I can almost bet you get an image of either some kind of white plastic humanoid shaped robot or a giant head filled with bright blue circuitry...


The Best AI Model in the World: Google DeepMind's Gemini Has Surpassed GPT-4 (Alberto Romero)

If there's a way to true general intelligence (or at least human-level intelligence, which is not the same as general), it's through this kind of by-default multimodality.


Gemini Is Google's Answer to GPT-4

Google says that Gemini was trained to recognize and understand text, audio, images, and more, all at the same time. This more sophisticated reasoning should allow the model to handle whatever you throw at it, though time will tell exactly how good it does in real-world applications.


Why the Entire AI World Was Talking About 'Q' This Week


Copyright, AI, and Provenance (O'Reilly Radar: Loukides and O'Reilly)

Models that are trained on a wide variety of sources are a good; that good is transformative and should be protected...

...There are many different ways to build applications, but one pattern has become prominent: retrieval-augmented generation, or RAG. RAG is used to build applications that "know about" content that isn't in the model's training data.

We're all familiar with the ability of language models to "hallucinate," to make up facts that often sound very convincing. We constantly have to remind ourselves that AI is only playing a statistical game, and that its prediction of the most likely response to any prompt is often wrong. It doesn't know that it is answering a question, nor does it understand the difference between facts and fiction. However, when your application supplies the model with the data needed to construct a response, the probability of hallucination goes down. It doesn't go to zero, but it is significantly lower than when a model creates a response based purely on its training data. Limiting an AI to sources that are known to be accurate makes the AI's output more accurate.


Algorithmic Bridge weekly summary (Alberto Romero)

It's interesting to me that large language models in their current form are not inventions, they're discoveries. The telescope was an invention, but looking through it at Jupiter, knowing that it had moons, was a discovery ... that's what Galileo did ... We know exactly what happens with a 787, it's an engineered object. We designed it. We know how it behaves. We dont want any surprises. Large language models are much more like discoveries. We're constantly getting surprised by their capabilities. They're not really engineered objects....
(Jeff Bezos)


How have the topics covered by The Atlantic changed over time? (The Atlantic

...To learn about our archive collection we built a topic model. Topic models are amazing tools that allow us to enter in a collection of articles and the desired number of topics to be discovered. Then, based on patterns of co-occurring words across the articles, the model groups articles based on shared words and phrases.


AI Models Are Tragic Slaves of Their Sublime Predictive Accuracy Alberto Romero

You are the unlikely sequence of improbable events that define your life.

...Humans are capable of language, math, complex social relationships, self-awareness and theory of mind, shame and guilt, laws and coordinated disobedience, money and war, mercy, spaceships, and memes. We're the builders of all kinds of knowledge-driven constructs. That's because we are extraordinary — matter that thinks of itself.

...Outliers and implausible events that end up happening anyway against the odds — and against our gullible intuition — are the backbone of everything; from the most elementary laws that govern the universe and give rise to life to the most inexplicable and surprising stories we tell ourselves and our children, passing them from generation to generation as collective lighthouses, bright and clear in the annals of our otherwise long-forgotten history.

...LMs are averagers of information. Their behavior — understood as what they output in response to a given input — is restricted to the centric areas of the distribution of what's possible to say. Be agreeable, be truthful, be factual. Those are the rules. Great ideas can't stem from that set of constraints.

...LMs are, not unlike us, statistical machines. But, unlike us, they are just statistical machines.



GNoMe, An AI that Advances Humanity by 800 Years Ignacio de Gregorio

At the core of GNoMe's impressive results sits a not-new yet rather unknown type of AI algorithm that goes by the name of Graph Neural Networks, or GNNs.

The principle driving the idea of GNNs is simple: instead of treating an element as a standalone object, treat it as part of an interconnected system.

In layman's terms, the meaning of something is the sum of itself and its connections...

...In a social network like the ones you and I use daily, to the eyes of the algorithm driving what you see or don't see, you are not simply you, but you and also your attributes and connections.

For starters, age, height, interests, political ideology, or even your skills, all these signal to others who you are.

But you also have connections, each of them with their share of attributes and, usually, many attributes in common with you...

...You are, in essence, a node connected with edges with other nodes (your friends) forming what we describe as a graph.

...Each node has its own embedding (a vector of numbers representing that node, just like a word in ChatGPT would) and at every step of the learning process, the information of each node is sent to the nodes connected to it.

This way, the node learns information about its surroundings, like what are John's friend Helen's attributes in the case of a social network graph, hinting us information as to why they are both connected in the first place...

...(Graphs) are everywhere.

And that also includes examples at the atomic level. For instance, if we think of molecules, they are elements formed by various connected atoms, meaning that they can also be represented as graphs.

And this is precisely what GNoMe leverages.

(see Scaling deep learning for materials discovery Amil Merchant et al., Nature 29xi23)


Three AI insights for hard-charging, future-oriented smartypantsesThe most prominent Mechanical Turk huckster is Elon Musk, who habitually, blatantly and repeatedly lies about AI. He's been promising "full self driving" Teslas in "one to two years" for more than a decade. Periodically, he'll "demonstrate" a car that's in full-self driving mode — which then turns out to be canned, recorded demo...

...So much AI turns out to be low-waged people in a call center in the Global South pretending to be robots that Indian techies have a joke about it: "AI stands for 'absent Indian'"...


Neal Stephenson and Matteo Wong at Atlantic

A chatbot is not an oracle; it's a statistics engine that creates sentences that sound accurate. Right now my sense is that it's like we've just invented transistors. We've got a couple of consumer products that people are starting to adopt, like the transistor radio, but we don't yet know how the transistor will transform society. We're in the transistor-radio stage of AI. I think a lot of the ferment that's happening right now in the industry is venture capitalists putting money into business plans, and teams that are rapidly evaluating a whole lot of different things that could be done well. I'm sure that some things are going to emerge that I wouldn't dare try to predict, because the results of the creative frenzy of millions of people is always more interesting than what a single person can think of.


The Artificial Unconsciouss AI as portal to the mind's deep structures. Carl Georg Solberg

Think of the oracles of ancient times, uttering their cryptic pronouncements. AI-conjured images speak a similarly symbolic language, open to interpretation yet undeniable in impact. Like dreams, their power doesn't lie in surface meaning, but in their uncanny resonance with deep-seated emotional truths....

But there's a darker side. Jung knew archetypes held immense power, for good or ill. What happens when the 'mythic' can be manufactured on demand? AI could be weaponized, crafting propaganda laced with potent archetypes. It could generate hyper-personalized advertising, not appealing to reason, but worming its way into our deep-seated fears or desires. The artificial unconscious, unlike Jung's organically emerging one, could be engineered for manipulation.

The very concept of an artificial unconscious forces a profound philosophical reckoning. If AI can excavate or even manufacture fragments of our collective psyche, we witness the unsettling potential for the archetypal landscape to be mirrored and manipulated. This isn't simply about exploiting unconscious patterns for nefarious ends, but an even greater risk: being overwhelmed by knowledge we were never meant to access so rapidly. Myths hold wisdom precisely because they emerge organically, mirroring truths honed through generations of lived experience.


Adobe's New Generative AI Turns Text Prompts into Music

...promises to give anyone the power to create music, no matter their experience or skill. Adobe says it improves accessibility, empowerment, and democratization within music.

...Users can also transform generated audio based on a selected reference melody. This can be used to adjust the tempo, structure, and repeating patterns of music. It's also possible to adjust intensity, clip length, and generate a seamless, repeatable loop.


A Very Noisy Channel Dall.E and Shannon and Weaver Mark Lieberman at Language Log


Cory Doctorow Too big to care<

In a just world, cheating ripoff ISPs would the top tech policy story. The operational practices of ISPs affect every single one us. We literally can't talk about tech policy without ISPs in the middle. But Net Neutrality is an also-ran in tech policy discourse, while AI — ugh ugh ugh — is the thing none of us can shut up about.

This, despite the fact that the most consequential AI applications sum up to serving as a kind of moral crumple-zone for shitty business practices. The point of AI isn't to replace customer service and other low-paid workers who have taken to demanding higher wages and better conditions — it's to fire those workers and replace them with chatbots that can't do their jobs. An AI salesdroid can't sell your boss a bot that can replace you, but they don't need to. They only have to convince your boss that the bot can do your job, even if it can't.

...Schroeder — like other sf writers including Ted Chiang and Charlie Stross (and me) — comes to the conclusion that AI panic isn't about AI, it's about power. The artificial life-form devouring the planet and murdering our species is the limited liability corporation, and its substrate isn't silicon, it's us, human bodies:

What's lying underneath all our anxieties about AGI is an anxiety that has nothing to do with Artificial Intelligence. Instead, it's a manifestation of our growing awareness that our world is being stolen from under us. Last year's estimate put the amount of wealth currently being transferred from the people who made it to an idle billionaire class at $5.2 trillion. Artificial General Intelligence whose environment is the server farms and sweatshops of this class is frightening only because of its capacity to accelerate this greatest of all heists.

…AI hype keeps the bubble inflating, and for so long as it keeps blowing up, all those investors who've sunk their money into AI can tell themselves that they're rich. This is the essence of "a bezzle": "The magic interval when a confidence trickster knows he has the money he has appropriated but the victim does not yet understand that he has lost it"


Stop Trying to Make AI Work for You Alberto Romero

The narrative that AI is the technology of the future that will revolutionize the world has caught up much faster than our ability to use the tools with a real benefit for our lives.

AI, they say, is going to change the world across sectors at all levels, from entry positions to middle managers to CEOs, from medicine to transport to math theory. It's going to redefine our day-to-day experience just like the internet, smartphones, and social media.


Friston's AI Law is Proven: FEP Explains How Neurons Learn The World of AI Has Changed Forever Denise Holt

Active Inference AI and the FEP, coupled with the new Spatial Web Protocol, are laying the foundation for a unified system of distributed collective intelligence that mimics the way biological intelligence works throughout nature. They have created an entirely new cognitive architecture that is self-organizing, self-optimizing, and self-evolving. And yet, it is completely programmable, knowable, and auditable, enabling it to scale in tandem with human governance.

This is the AI that will change everything you think you know about artificial intelligence.

...under the hood, your brain is solving an incredibly complex inference problem, figuring out the probable causes in the outside world generating the sensory patterns. This inverse puzzle — working backwards from effects to infer hidden causes — is profoundly difficult, especially since the same cause (like a person’s face) can create different sensory patterns depending on context.

The Free Energy Principle, formulated by renowned neuroscientist Karl Friston, proposes an elegant explanation for how brains handle this. It states that neurons are constantly generating top-down predictions to explain away the incoming sensory data. Any mismatches result in "prediction errors" that update beliefs to improve future predictions. Your brain is an inference machine, perpetually updating its internal model of the world to minimize surprise and uncertainty.


Unlocking the Future of AI: Active Inference vs. LLMs Denise Holt

Beyond the hype and fascination, lies a catalyst for change and remarkable innovation, fundamentally redefining what AI means: Active Inference AI.

What sets these two approaches apart, and why might Active Inference hold the keys to AI's future? Join me on a voyage through the complexity of these groundbreaking technologies, as we eliminate confusion, and reveal a path that leads to more than just enlightened conversation — it leads to a new era of AI built on the very foundations of human intelligence...


Will GenAI Kill The Web? John Batelle

Donath and Schneier argue that large-language models will kill the open web by replacing SEO — the practice of optimizing search results to drive web traffic — with "Large Language Model Optimization," or LLMO. "SEO will morph into LLMO," the authors write, "the incipient industry of manipulating AI-generated material to serve clients' interests."

The authors predict the rise of an LLMO industry catering to politicians, corporations, and pretty much everyone else with a vested interest in controlling the information ecosystem. Just as spammy "made for advertising" sites like Demand Media (and countless others) polluted the SEO-driven landscape of the past 25 years, the LLMO industry will pollute the emerging world of ChatGPT, Google Gemini, and Microsoft Copilot. Creators will tire of their work being turned into fodder for LLM's automatic Turk machines, and as a result, the open web will die:

Eventually, people may stop writing, stop filming, stop composing — at least for the open, public web. People will still create, but for small, select audiences, walled-off from the content-hoovering AIs. The great public commons of the web will be gone...


How Meta AI Compares to ChatGPT, DALL-E, and Google Gemini Michelle Ehrhardt


AI is Corrupting the Internet as We Know It Shane Balkowitsch

My concern is: what happens when there is more misinformation on the internet than there is information grounded in facts? There were 15.47 billion AI images created as of August 2023 and 34 million new AI images are created every single day. To put that in perspective, it is estimated that Google only has 136 billion images indexed on their servers. The main players in the online sphere Google, Microsoft, and Adobe have had this problem on their radars for over a year already...

...Who knows where this will lead? Only one thing is certain. AI is here to stay. The question is, can humanity use this amazing technology to do good in the world? AI may hold the key to curing cancer and ending world hunger. In the meantime, we will have to do our best to avoid the negative downside of this technology. We need to ensure that truth always prevails over deception, deceit, and misinformation.


To understand the risks posed by AI, follow the money (

there is one class of AI risk that is generally knowable in advance. These are risks stemming from misalignment between a companys economic incentives to profit from its proprietary AI model in a particular way and society's interests in how the AI model should be monetised and deployed.

The surest way to ignore such misalignment is by focusing exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.

...By carefully interrogating the system of economic incentives underlying innovations and how technologies are monetised in practice, we can generate a better understanding of the risks, both economic and technological, nurtured by a market's structure. Market structure is not simply the number of firms, but the cost structure and economic incentives in the market that follow from the institutions, adjacent government regulations, and available financing.

...Centralisation of the power of AI in the hands of a few profit-driven entities that are likely to face future economic incentives for bad behaviour is surely a bad idea.


PromptDervish Cookbook —Dream Weaver: Crafting Visions from the Subconscious


ChatGPT: A Partner in Unknowing Donna Karout in Emergence Magazine


Generative AI Handbook: A Roadmap for Learning Resources

there really isn’t a good textbook-style source for getting up-to-speed on the latest-and-greatest innovations in LLMs or other generative models, yet there is an abundance of great explainer resources (blog posts, videos, etc.) for these topics scattered across the internet. My goal is to organize the “best” of these resources into a textbook-style presentation, which can serve as a roadmap for filling in the prerequisites towards individual AI-related learning goals. My hope is that this will be a “living document”, to be updated as new innovations and paradigms inevitably emerge, and ideally also a document that can benefit from community input and contribution.


Is Microsoft trying to commit suicide? Charlie Stross

The breaking tech news this year has been the pervasive spread of "AI" (or rather, statistical modeling based on hidden layer neural networks) into everything. It's the latest hype bubble now that Cryptocurrencies are no longer the freshest sucker-bait in town, and the media (who these days are mostly stenographers recycling press releases) are screaming at every business in tech to add AI to their product.

l, Apple and Intel and Microsoft were already in there, but evidently they weren't in there enough, so now we're into the silly season with Microsoft's announcement of CoPilot plus Recall, the product nobody wanted...

...Unfortunately, human beings assume that LLMs are sentient and understand the questions they're asked, rather than being unthinking statistical models that cough up the highest probability answer-shaped object generated in response to any prompt, regardless of whether it's a truthful answer or not.


Israel's AI-powered Genocide Middle East Monitor

We are witnessing the genocide of the Palestinians based on algorithms and machine learning; a system of apartheid in the Israeli-occupied West Bank and Gaza Strip reinforced by artificial intelligence; and surveillance and facial recognition systems of such prowess that Orwell's 1984 regime would be green with envy. Today's Israeli-occupied Palestine manifests a dystopian and totalitarian sci-fi movie script as far as the Palestinians are concerned. Moreover, the Zionists are fuelling this AI nightmare.