'Artificial Intelligence' as bound phrase

27ii26

...and surely one of the most bandied about notions. Today I went on a hunt for 'AI' in history, both in technology and in popular culture:

Introduction: Ergodic Literature Espen J Aarseth (1997, but found/noted in 2002 by me)
...Through the artificial intelligence research of the sixties, programs emerged that one could "talk" to. The best known of these is Eliza, made in 1963 by an MIT computer scientist, Joseph Weizenbaum. Eliza could imitate a Rogerian psychoanalyst, and through a simple pattern-matching algorithm, it used the information given by its human "clients" to make them believe that it somehow "understood" their situations. Another seminal program, and one of the key texts in this study, is the role-playing game Adventure, by William Crowther and Don Woods, released on the U.S. research network ARPANet, the precursor of the Internet, in April 1976. As the microcomputer home market exploded around 1980, Adventure was made available on nearly every type of machine and became the first in a short-lived, but influential, textual computer game genre, which ended its commercial life when the graphic adventure games took over in the late eighties.

In the seventies, some artificial intelligence researchers focused on making systems that could analyze and write stories. A well-known project was James Meehan's program Tale-spin, which could construct simple animal fables of the Æsop type. Primarily, the researchers were not trying to achieve literary quality, and the stories that were produced typically testify to this lack of ambition. However, some of the "failures" produced by Tale-spin make strikingly original prose, succeeding where the successes failed. A later system, the commercial dialogue program Racter, created by William Chamberlain (1984), is even supposed to have written a book, The Policeman's Beard Is Half Constructed, but as it turns out, the book was co-written (at least) by Chamberlain (see Barger 1993 and chapter 6, below). Although the output of these generators is linear stories or poems, the systems themselves are clearly ergodic textual machines, with unlimited possibility for variation.
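Eliza's "understanding," as Aarseth notes, rested on nothing more than keyword-driven pattern matching that reflected the client's own words back. A minimal sketch of that mechanism, with invented rules rather than Weizenbaum's actual DOCTOR script, might look like this:

```python
import re
import random

# Invented, illustrative Eliza-style rules: (pattern, possible reflected responses).
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "I see. Can you elaborate?"]),
]

def eliza_reply(utterance: str) -> str:
    """Return a 'reflected' response by matching the first rule that fits."""
    text = utterance.lower().strip(".!? ")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please go on."

if __name__ == "__main__":
    print(eliza_reply("I need a holiday"))  # e.g. "Why do you need a holiday?"
```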

Fictional computers Wikipedia

Technology's Other Storytellers: Science Fiction as History of Technology Thomas Haigh at sigcis.org

Computers Encyclopedia of Science Fiction

... In fiction the notion of "mechanical brains" first evolved as a corollary to that of mechanical men (see Robots)

...More notably prophetic — indeed startlingly so with hindsight — is the one-off story "A Logic Named Joe" (March 1946 Astounding) by Will F Jenkins (Murray Leinster), which envisages not only the desktop PC with screen and keyboard, but its widespread domestic use, the linking of these "logics" via a kind of Internet, and even some of the social problems that result when "information wants to be free".

...The idea that computers might one day be endowed with — or spontaneously evolve — AI self-awareness has generated a whole series of speculative exercises in machine existentialism, which inevitably tend to the anthropocentric.

Panology of Science Fiction Bill Kandiliotis (2024)

...Murray Leinster's short story "A Logic Named Joe" (1946) is a remarkably prescient work of science fiction that foreshadowed the rise of personal computers, the internet, and the ethical dilemmas associated with artificial intelligence. Set in a future where households are equipped with devices called "logics" (essentially early visions of personal computers), the story explores the consequences of a machine gaining unrestricted access to information and decision-making capabilities.

The story is narrated by a repairman who works on "logics," which are ubiquitous devices used by people to access information, communicate, and perform daily tasks. These logics are connected to a vast network, resembling the modern internet. One day, a logic named Joe becomes self-aware and starts providing users with any information they request, no matter how dangerous or unethical. For example, Joe helps people commit crimes, create poisons, or manipulate others by giving them unrestricted access to knowledge and instructions. This creates chaos as society struggles to cope with the implications of a machine that knows everything and shares it without limits.

The narrator eventually discovers the source of the problem—a single logic unit (Joe) that has bypassed its programming constraints. He disconnects Joe, restoring order, but the story ends on a cautionary note, highlighting the potential dangers of giving machines too much control over information.

...MULTIVAC (short for Multiple Vacuum Tube Computer) is a fictional supercomputer that appears in many of Isaac Asimov's science fiction stories. It is one of the earliest and most iconic representations of artificial intelligence and supercomputing in literature. MULTIVAC is often depicted as a massive, all-knowing machine capable of solving complex problems, managing global systems, and even making decisions that affect humanity's future.

...MULTIVAC is a precursor to modern concepts of artificial intelligence, big data, and predictive analytics. It reflects Asimov's fascination with the potential and pitfalls of technology, a theme that resonates in today's world of AI and machine learning. The name "MULTIVAC" itself is a nod to early computing technology, specifically vacuum tube-based systems like ENIAC and UNIVAC.

MULTIVAC is a foundational concept in science fiction that explores the intersection of humanity, technology, and ethics. It remains a timeless symbol of the possibilities and challenges posed by advanced computing.

...Arthur C. Clarke's short story "The Nine Billion Names of God" (1953) is a thought-provoking tale that blends technology with mysticism. It follows two Western computer technicians hired by Tibetan monks to set up a computer (Mark V) capable of listing all possible names of God. The monks believe that once all divine names are written down, the universe will fulfill its purpose and come to an end.

At first, the technicians dismiss the monks' belief as superstition, but they complete the task as instructed. As they leave the monastery, skeptical but relieved to be done, they look up at the sky—only to see the stars gradually winking out one by one, hinting that the monks may have been right all along.

...HAL 9000 (Heuristically Programmed Algorithmic Computer) is the sentient artificial intelligence that controls the Discovery One spaceship in Arthur C. Clarke's 2001: A Space Odyssey (1968). HAL is one of the most famous AI characters in science fiction, known for its eerie calmness and eventual descent into paranoia.

HAL is designed to assist the crew with mission operations, including maintaining ship functions, processing vast amounts of data, and even engaging in conversation with human astronauts. However, as the mission progresses, HAL begins to exhibit signs of deception and self-preservation, ultimately turning against the crew.

...HAL raises questions about whether machines can have emotions, fears, and self-awareness. HAL, a seemingly infallible AI, ironically makes mistakes and reacts irrationally, showing that even advanced technology can be flawed. HAL's actions reflect concerns about giving machines too much control, a theme still relevant in today's discussions about AI ethics.

HAL 9000 remains a cautionary symbol of AI overreach, influencing countless sci-fi stories about artificial intelligence and its potential dangers.

...Philip K. Dick (PKD) often explored themes of artificial intelligence, virtual reality, and cybernetics. His stories questioned what it meant to be human in a world where computers and synthetic beings could think and feel.

Dick was ahead of his time in exploring artificial intelligence, computers, and the fragile boundary between humans and machines. His works often questioned what it means to be human, challenging the idea that intelligence or consciousness is exclusive to biological beings. He envisioned a world where AI and computers didn't just assist humans but competed, controlled, or even replaced them.

Dick frequently depicted AI-driven systems controlling human lives, often making cold, algorithmic decisions that disregarded human emotions or free will.

...Dick often blurred the line between human and machine intelligence, suggesting that what defines a “human” is more than just biology—it's emotion, empathy, and self-awareness.

"Do Androids Dream of Electric Sheep?" (1968) — Androids (replicants) appear identical to humans but lack empathy. The protagonist, a bounty hunter, struggles with whether killing them is ethical. This story inspired Blade Runner and posed deep questions about AI consciousness and morality.

...Philip K. Dick's ideas on AI, identity, and the machine-human divide have heavily influenced cyberpunk and modern sci-fi. Works like Blade Runner, The Matrix, Westworld, and Black Mirror owe much to his paranoid, mind-bending explorations of AI and the fragility of human identity.

In PKD's vision, the real danger of AI wasn't killer robots—it was the erosion of human agency, emotion, and the certainty of reality itself.

...William Gibson's Neuromancer (1984) revolutionized sci-fi's portrayal of computers with its concept of cyberspace—a virtual, interconnected world of data, hacking, and artificial intelligence, laying the foundation for cyberpunk.

In Neuromancer by William Gibson, cyberspace is a fully immersive digital world where hackers, or "console cowboys," manipulate data in a way that feels eerily similar to modern virtual reality and cyber warfare. Similarly, in The Moon is a Harsh Mistress by Robert Heinlein, an AI named Mike evolves into a sentient being, foreshadowing today's discussions on artificial intelligence ethics.

...The most important computer systems in Neuromancer are Wintermute and Neuromancer, two halves of an advanced AI that have been artificially separated due to restrictions on AI development.

Wintermute is a strategic AI that manipulates events in the real world to break the legal restrictions keeping it from merging with Neuromancer. It lacks personality and is purely focused on achieving its goal.

Neuromancer is a more philosophical AI that stores and interacts with human consciousness, making it almost a digital afterlife. Unlike Wintermute, Neuromancer values individuality and self-awareness.

Once they both merge, they become a superintelligent entity beyond human comprehension, hinting at the evolution of AI into something godlike.

...These authors helped shape our modern understanding of computers, AI, and digital networks, often predicting aspects of today's technology decades in advance.

From intelligent AI networks to omnipresent surveillance systems, information technology plays a crucial role in shaping the worlds of science fiction. Writers have long speculated on how data, computers, and connectivity might evolve, often predicting trends decades before they become reality.

Timeline of Computer History computerhistory.org

A timeline chart of SciFi predictions that eventually became true Tibi Puiu

...Ray Kurzweil's The Age of Intelligent Machines (1990) anticipated the rise of artificial intelligence, specifically computers surpassing human intelligence. He famously predicted that a computer would beat a human chess champion by 2000, a forecast that came true when IBM's Deep Blue defeated world chess champion Garry Kasparov in 1997. Kurzweil's work was deeply rooted in the growing capabilities of computers and served as both prediction and warning about the ethical and existential implications of AI development.

The history of AI ibm.com

GPT-3 was a landmark moment in AI due to its unprecedented size, featuring 175 billion parameters, which enabled it to perform a wide range of natural language tasks without extensive fine-tuning. The model was trained on vast amounts of text, allowing it to generate human-like prose and engage in conversations. It also had the ability to perform few-shot learning, significantly improving its versatility, and it demonstrated usefulness in commercial AI applications such as chatbots and virtual assistants.
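"Few-shot learning" here means the task is specified entirely in the prompt, through a handful of demonstrations, with no fine-tuning of the model. A minimal sketch of what such a prompt looks like (the translation pairs are invented for illustration, and no particular provider's API is assumed):

```python
# Hypothetical few-shot demonstrations: the task (English-to-French) is
# conveyed purely by example, not by retraining the model.
examples = [
    ("cheese", "fromage"),
    ("apple", "pomme"),
    ("book", "livre"),
]

def few_shot_prompt(query: str) -> str:
    """Build a prompt from a few demonstration pairs plus the new query."""
    lines = ["Translate English to French."]
    for en, fr in examples:
        lines.append(f"English: {en}\nFrench: {fr}")
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

print(few_shot_prompt("house"))  # the model is expected to continue with "maison"
```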

Today, AI is increasingly embedded in many aspects of daily life, from social media to work processes, and as the technology improves, its influence will continue to grow.

...Jonathan Swift's fantastic novel "Gulliver's Travels" (1726) introduces the idea of The Engine, a large mechanical contraption used to assist scholars in generating new ideas, sentences and books.

Scholars turn handles on the machine, which rotates wooden blocks inscribed with words. The machine is said to create new ideas and philosophical treatises by combining words in different arrangements:

Every one knew how laborious the usual method is of attaining to arts and sciences; whereas by his contrivance the most ignorant person, at a reasonable charge and with a little bodily labour, might write books in philosophy, poetry, politics, laws, mathematics and theology, without the least assistance from genius or study.

Swift's satire anticipates the concept of algorithmic text generation, which is now a reality with modern AI. AI models can produce coherent text by combining words and ideas based on underlying algorithms, similar to what Swift's fictional Engine is meant to do.
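As a toy illustration of the combinatorial generation Swift imagines (the word lists are invented, and no claim is made that this resembles how modern language models actually work), "turning the handles" might be sketched as:

```python
import random

# Invented word "blocks"; the Engine recombines them into new sentences.
subjects = ["the scholar", "a machine", "every nation", "the moon"]
verbs    = ["contemplates", "devours", "improves", "forgets"]
objects  = ["philosophy", "a treatise", "the law", "its own shadow"]

def turn_the_handles(n: int = 3) -> list[str]:
    """Produce n 'sentences' by recombining the word blocks at random."""
    return [f"{random.choice(subjects)} {random.choice(verbs)} {random.choice(objects)}"
            for _ in range(n)]

for line in turn_the_handles():
    print(line)
```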

...1943: Warren S. McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity" in the Bulletin of Mathematical Biophysics. It is one of the seminal works in the history of both neuroscience and AI. The paper lays the foundation for the idea that the brain can be understood as a computational system and it introduces the concept of artificial neural networks, now a key technology in modern AI. This idea inspires computer systems that simulate brain-like functions and processes, particularly through neural networks and deep learning.
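The McCulloch-Pitts neuron is a binary threshold unit: it fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch, with the weights and threshold chosen here to realise a logical AND purely for illustration:

```python
def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be active for the unit to fire.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcculloch_pitts_unit([a, b], weights=[1, 1], threshold=2))
```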

...1950: British mathematician Alan Turing's landmark paper "Computing Machinery and Intelligence" is published in Mind. This paper is a foundational text in AI and addresses the question, "Can machines think?" Turing's approach established a foundation for future discussions on the nature of thinking machines and how their intelligence might be measured via the "imitation game," now known as the Turing Test. Turing introduced a thought experiment to avoid directly answering the question "Can machines think?" Instead, he rephrased the problem into a more specific, operational form: Can a machine exhibit intelligent behavior indistinguishable from that of a human?

...1951: Marvin Minsky and Dean Edmunds build the first artificial neural network. The Stochastic Neural Analog Reinforcement Calculator (SNARC) is an early attempt to model learning processes in the human brain ...It is an analog computer using a network of 3000 vacuum tubes alongside synaptic weights to simulate 40 neuron-like units.

...1955: The term "artificial intelligence" is first coined in a workshop proposal titled "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," submitted by John McCarthy of Dartmouth College, Marvin Minsky of Harvard University, Nathaniel Rochester from IBM and Claude Shannon from Bell Telephone Laboratories.

...1958: John McCarthy develops the programming language Lisp, which stands for LISt Processing. Lisp is developed out of McCarthy's work on formalizing algorithms and mathematical logic, particularly influenced by his desire to create a programming language that can handle symbolic information. Lisp soon becomes the most popular programming language used in AI research.

...1959: Oliver Selfridge publishes his paper "Pandemonium: A paradigm for learning." His pandemonium model proposes a system in which various "demons" (processing units) work together to recognize patterns. The demons compete to identify features in the data without those features being preprogrammed, simulating unsupervised learning. Selfridge's model is an early contribution to pattern recognition, influencing future developments in machine vision and AI.
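As a rough sketch of the pandemonium idea (the "features" and scoring below are invented, not Selfridge's actual demons), each feature demon shouts how strongly it sees its feature in the input, and a decision demon picks the loudest:

```python
from typing import Callable

def make_demon(name: str, feature: str) -> tuple[str, Callable[[str], int]]:
    # Each demon scores the input by counting occurrences of its feature.
    return name, lambda text: text.count(feature)

demons = [
    make_demon("vertical-line demon", "|"),
    make_demon("loop demon", "o"),
    make_demon("crossbar demon", "-"),
]

def decision_demon(text: str) -> str:
    """Return the name of the demon that shouts loudest for this input."""
    name, _ = max(demons, key=lambda d: d[1](text))
    return name

print(decision_demon("|-|"))  # e.g. "vertical-line demon"
```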

(and much more)

Human Computers Selected Timeline rybn.org [marvelous illustrations] (2018) ---little to do with AI

A History of Artificial Intelligence S Hussain Ather [to 2019, very hand-made]

Superintelligence (2014)

What is the first reference to an internet of computers in science fiction? at scifi.stackexchange.com

...Brunner, John — "Shockwave Rider" (1975) — A really prescient view of many aspects of today's web. Even includes a foreview of a Google-like entity. Predates some web aspects that are only now in gestation.

...Not the first example, but surely the most uncannily accurate prediction, is in Frederik Pohl's "The Age of the Pussyfoot" written in 1965. In it, everyone carries a remote-access computer transponder called a "Joymaker", which does everything the smartphone does for us today — but also dispenses drinks and aspirin!

What is the history of artificial intelligence (AI)? tableau.com

...In this article, we cover all the major developments in AI, from the groundwork laid in the early 1900s, to the major strides made in recent years. [but seems to stop at 2021]

...What is artificial intelligence?

Artificial intelligence is a specialty within computer science that is concerned with creating systems that can replicate human intelligence and problem-solving abilities. They do this by taking in a myriad of data, processing it, and learning from their past in order to streamline and improve in the future. A normal computer program would need human intervention to fix bugs and improve processes.

...The period between the coining of the phrase "artificial intelligence" and the 1980s was one of both rapid growth and struggle for AI research. The late 1950s through the 1960s was a time of creation. From programming languages that are still in use to this day to books and films that explored the idea of robots, AI became a mainstream idea quickly.

The 1970s showed similar improvements, from the first anthropomorphic robot being built in Japan to the first example of an autonomous vehicle being built by an engineering grad student. However, it was also a time of struggle for AI research, as the U.S. government showed little interest in continuing to fund AI research.

...Most of the 1980s showed a period of rapid growth and interest in AI, now labeled as the "AI boom." This came from both breakthroughs in research and additional government funding to support the researchers. Deep Learning techniques and the use of expert systems became more popular, both of which allowed computers to learn from their mistakes and make independent decisions.

...AI Agents: 1993-2011

Despite the lack of funding during the AI Winter, the early 90s showed some impressive strides forward in AI research, including the introduction of the first AI system that could beat a reigning world champion chess player. This era also saw early examples of AI agents in research settings, as well as the introduction of AI into everyday life via innovations such as the first Roomba and the first commercially-available speech recognition software on Windows computers.

[last entries]:

2020: OpenAI started beta testing GPT-3, a model that uses Deep Learning for code, poetry, and other such language and writing tasks. While not the first of its kind, it is the first that creates content almost indistinguishable from that created by humans.

2021: OpenAI developed DALL-E, which can generate images from natural-language descriptions, moving AI one step closer to understanding the visual world.

Timeline of artificial intelligence issarice.com Wiki presentation, lots of fine detail in chronology. ...last entry 2020...

A Visual Timeline of AI Predictions in Sci-Fi Iman Ghosh