A Sort of Buzzing Inside My Head

Jessica Riskin
Whether ChatGPT passes the Turing Test is a less troubling question than what Alan Turing meant by “intelligence.”
Illustration by Lucas Adams: a robot reading a script while a disturbed human looks on

“Please write me a sonnet on the subject of the Forth Bridge.” This was apparently the first question that occurred to the English mathematician Alan Turing when, in a captivatingly strange 1950 paper entitled “Computing Machinery and Intelligence,” he imagined conversing with an intelligent machine and founded the field of artificial intelligence. The Forth Bridge, built in 1890, is a cantilever railway bridge spanning the Firth of Forth near Edinburgh, Scotland. Why a sonnet about the bridge? The juxtapositions are unexpected: a lovelorn poetic form, a 53,000-ton steel structure, and a computing device. If the last could produce authentic sense by applying the first to the second, Turing must have thought, that would indicate intelligence.

When I typed the same question into ChatGPT, it generated a bad poem in sonnet-like quatrains. How did Turing’s imaginary machine answer? “Count me out on this one,” it demurred. “I never could write poetry.” I guess it’s not surprising that I find Turing’s imaginary machine’s answer infinitely more persuasive than ChatGPT’s, since of course the first was written by an intelligent human: Turing himself. But it does seem surprising that the design process he established in his foundational paper has led to an “artificial intelligence” utterly unlike the intelligent machines he imagined in the same paper.

ChatGPT is a generative AI, meaning that it uses statistical models to extrapolate patterns from data and then applies these patterns to generate new text, images, or other products such as music or computer code. Generative AIs rely on machine learning techniques whose foundations Turing laid in his landmark paper, where he hypothesized a process for arriving at an intelligent machine. He imagined first building an “unorganised machine,” a bunch of interconnected neuron-like components that would become organized through a training process. This was the blueprint for an approach to artificial intelligence that would later be called “connectionism” and would lead to neural networks like those constituting the new generative AI large language models. But although ChatGPT descends from Turing’s protocol, it is nothing like the machine interlocutors he conjured in his dialogues, and therein lies an interesting conundrum.
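
To make “extrapolating patterns” concrete, here is a minimal sketch in Python of the underlying statistical idea: a toy bigram model that records which word follows which in a training text, then samples from those counts to generate new text. It stands to a large language model roughly as a paper airplane stands to a jet, and the training sentence and function names are mine, purely for illustration.

    import random
    from collections import defaultdict

    def train(text):
        # Record, for each word, every word observed to follow it.
        model = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model, word, length=10):
        # Walk forward, sampling each next word from the observed followers.
        out = [word]
        for _ in range(length):
            if word not in model:
                break
            word = random.choice(model[word])
            out.append(word)
        return " ".join(out)

    model = train("the bridge spans the firth and the bridge stands over the water")
    print(generate(model, "the"))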

Turing used imagined conversations with intelligent machines to introduce his idea for a test of machine intelligence, known ever since as the Turing Test. This is the test some are saying the new generative AIs have cracked—often to conclude that, since the generative AIs clearly don’t have human-like intelligence, the Turing Test must not be a reliable measure after all. Others suggest the AIs are like Frankenstein’s monster, on the verge of taking on a sinister life and mind of their own. Even as they advertise the utility of their new products, computer scientists also warn that they are potentially very dangerous: they could make it impossible to distinguish information from disinformation, thereby fatally undermining democracies or any form of rational decision-making; might cause catastrophic harms to any complex system—economic, air traffic control, energy, nuclear weapons—by malfunctioning or pursuing their goals in unforeseen ways; and might escape human control even without becoming the robot villains of science fiction and Elon Musk’s sensationalist admonitions that we’re “summoning the demon.”1

In March Musk was among the initial signatories of an open letter calling for a pause in the development of these technologies, signed by thousands of the very people who have been bringing them to us: computer scientists, engineers, and tech CEOs. They asked, among other things, “Should we risk loss of control of our civilization?” In May the Center for AI Safety released a stark one-sentence warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But Turing’s imaginary intelligent machines weren’t the least bit menacing, nor even especially powerful, no more so than the next intelligent being. The Turing Test in its original form was a game played with the machine, the Imitation Game. A human examiner would converse with a machine and another human, both hidden behind screens and both trying to persuade the examiner that they were the true human. If the examiner couldn’t tell which was which, Turing proposed, then we could call the machine intelligent. By the end of the twentieth century, he predicted, machines would be able to carry on dialogues with humans such that an average human “will not have more than 70 percent chance of making the right identification after five minutes of questioning.” In other words the machine would win the game 30 percent of the time, and Turing considered that people would therefore “be able to speak of machines thinking without expecting to be contradicted.” 

Turing died four years later, just before his forty-second birthday, so he was never able to evaluate his prediction’s plausibility over the second half of the twentieth century, as the fields he’d helped inaugurate—computer science and artificial intelligence—developed apace yet came no closer to creating an artificial humanlike intelligence. His work in cryptanalysis had been crucial to the Allied victory in World War II, but this didn’t deter the British government, once the war was safely won, from persecuting him to death for being gay, demonstrating that human intelligence can be a shockingly low bar.

Some holdouts don’t think the generative AIs have actually passed the Turing Test. Anyway, the question is easily defined: Do they, at least 30 percent of the time, fool humans who spend five minutes posing questions to them and to a concealed human in a concerted effort to discern which is which? I don’t think most who say these programs have passed the Turing Test have actually put them to this authentic version of it; I suspect they mean that when people read text generated by an AI, they can’t easily tell whether it was written by an AI or a human. Even then, a little practice brings proficiency, at least for the moment, though the programs are changing rapidly. In my lecture course this spring, my teaching assistants and I became expert at sniffing out AI-generated essays by their flat, featureless feel, the literary equivalent of fluorescent lighting.2

*

The essence of the Turing Test is conversation, rather than, for instance, the ability to perform a calculation or logical task such as chess playing, two of the traditional benchmarks for rational ability. Turing treated both those benchmarks in more snippets of imagined dialogue:

Q: Add 34957 to 70764.

The machine pauses for about thirty seconds before answering “105621” (a wrong answer, as it happens: the correct sum is 105,721). When I typed this question into ChatGPT, its answer was doubly unlike Turing’s imaginary machine’s: instantaneous and correct. Turing’s fantasy dialogue continues:

Q: Do you play chess?

A: Yes.

ChatGPT also generated a “yes,” followed by an explanation that as a language model, it has the capability to play chess but no graphical interface to move pieces around. Turing’s next question: “I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?” This time, the machine pauses for fifteen seconds before responding “R to R-8 mate.” Well, that was an easy one.

Weirdly, ChatGPT gets it wrong: it says R to R6, checkmate. When I point out that from R6 its rook wouldn’t have my king in check, let alone in checkmate, the AI’s responses become increasingly incoherent. Chess-playing programs have existed for decades—unlike generative AIs, these programs are designed to accomplish a specific task, winning the game of chess—but they establish that the game is not beyond the reach of a machine, so ChatGPT’s inability to give correct answers to chess questions is in that sense neither here nor there. But surely an intelligent being would reply, along the lines of Turing’s imaginary interlocutor, “Count me out, I never could play chess.” Instead its answers are like phony books in a stage set: they seem real until you take a closer look.
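
Both moves can be checked mechanically. Here is a short sketch using the python-chess library, assuming the usual reconstruction of Turing’s descriptive notation (White king on e1, Black king on e3, Black rook on h8, Black to move); the board setup is my reading, not anything in Turing’s text.

    import chess

    # Turing's endgame in modern coordinates: White Ke1, Black Ke3, Black Rh8.
    FEN = "7r/8/8/8/8/4k3/8/4K3 b - - 0 1"

    board = chess.Board(FEN)
    board.push_san("Rh1")          # Turing's machine's move, "R to R-8"
    print(board.is_checkmate())    # True: back-rank mate, all escape squares covered

    board = chess.Board(FEN)
    board.push_san("Rh6")          # ChatGPT's "R to R6"
    print(board.is_check())        # False: the rook on h6 gives no check at all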

Meanwhile, one begins to sense a personality in Turing’s imaginary mechanical friend, and even to feel a kind of affection for it, this modest interlocutor who never could write poetry, adds numbers inaccurately, but can manage a simple chess scenario given a little time and gamely plays along with this desultory and whimsical interrogation. In the next bit of dialogue, Turing returns to poetry:

Interrogator: In the first line of your sonnet which reads ‘Shall I compare thee to a summer’s day,’ would not ‘a spring day’ do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about ‘a winter’s day’[?] That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

When I typed the opening question of this exchange into ChatGPT—Wouldn’t a spring day work just as well?—its response was “Certainly!” and it offered another bad poem. I then proposed “a winter’s day,” and it responded again “Certainly! Here’s an alternative version with ‘a winter’s day’ in place of ‘a summer’s day,’” followed by a third bad poem.

Regarding Mr. Pickwick, ChatGPT responded at first like Turing’s imaginary machine—“In a way”—then stated that Mr. Pickwick is the central character in Charles Dickens’s The Pickwick Papers; that while he has no particular connection with Christmas, people do often associate Dickens with the holiday because of another of his books, A Christmas Carol; and that many also associate Mr. Pickwick with warmth and comfort, which may in turn remind them of Christmas. Another stage-set answer, with only surface plausibility. Christmas is in fact a prominent theme in The Pickwick Papers. In 1906 the illustrator George Alfred Williams published an edition of the book’s Christmas scenes entitled Mr. Pickwick’s Christmas, which Charles Laughton then performed on a 1944 album. Perhaps—I’m speculating—ChatGPT identified a pattern that for questions of the form “Would you say that X reminds you of Y?” the connection is generally indirect, so it formulated an indirect connection between Mr. Pickwick and Christmas, even though a direct one exists.

*

My purpose with these comparisons is to show that there’s an ineffable but stark contrast between Turing’s imagined bits of conversation and their ChatGPT counterparts. ChatGPT’s responses have a hollow, generic feel, as if they were compiled by a committee for some ornamental purpose, whereas Turing’s imagined intelligent machine gives off an unmistakable aura of individual personhood, even of charm.

The contrast is all the more striking since, as I’ve mentioned, the process Turing described for arriving at an intelligent machine laid the foundation for the very machine-learning techniques that are bringing us the new generative AIs. To begin with, Turing said that a thinking machine would have to acquire its ability to think through a more basic capacity to learn. The first step was thus to simulate a child’s mind, the “unorganised machine” described above, which would become organized through an education consisting of two kinds of “interference”: a “pleasure” interference that would fix the current configuration of components and a “pain” interference that would disrupt it, causing previously fixed features to change. The machine might then wander randomly among configurations, with pleasure interferences fixing elements and pain interferences causing them to vary.
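
A minimal sketch of how such interferences might organize a machine, in Python (this toy is my own illustration, not Turing’s construction): the configuration is a row of binary components that vary at random; a “pleasure” signal fixes a component in its current state, and a “pain” signal frees it to vary again.

    import random

    def train(target, steps=10_000):
        size = len(target)
        config = [random.randint(0, 1) for _ in range(size)]
        fixed = [False] * size
        for _ in range(steps):
            for i in range(size):
                if not fixed[i]:                     # unfixed components wander randomly
                    config[i] = random.randint(0, 1)
            for i in range(size):
                fixed[i] = (config[i] == target[i])  # "pleasure" fixes, "pain" frees
            if config == target:
                break
        return config

    print(train([1, 0, 1, 1, 0, 0, 1, 0]))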

A learning machine must also be fallible, an idea Turing derived from the Austrian logician Kurt Gödel’s first incompleteness theorem. The theorem states that any system of logical proofs sufficient to generate elementary arithmetic will also be capable of producing sentences that are neither provable nor disprovable within the system, self-referential statements such as “This sentence cannot be proven.” Gödel’s theorem, Turing reasoned, meant that any machine designed to determine the truth or falsity of mathematical theorems would sometimes be unable to answer, unless one tolerated occasional mistakes. And why not? Infallibility is no requirement for human intelligence. On the contrary, Turing observed, one could make very intelligent mistakes.
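
For readers who want the statement exactly, in its modern form (with J.B. Rosser’s later refinement, which lets plain consistency suffice where Gödel needed a stronger assumption):

    For any consistent, effectively axiomatizable theory $T$ extending
    elementary arithmetic, there is a sentence $G_T$ such that
    $$T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T.$$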

Are ChatGPT’s mistakes intelligent? They seem less like misunderstandings than glitches. Evaluations of the generative AIs have focused on their ability to get things right, or at least right-seeming. Might we evaluate the success of an artificial intelligence also by the quality of its mistakes? If so, we might consider that a mistake can be intelligent only if it reflects thought, interpretation, meaning. Lacking these capacities, a machine can make only technical errors.

In addition to pleasure, pain, and fallibility, Turing said, an intelligent entity needed two other qualities: discipline, in the form of an ability to carry out instructions, and initiative. As a model of initiative, Turing proposed the ability to conduct searches. He offered three examples of the kinds of searches one might approximate in a machine: “intellectual” searches, in which the brain seeks combinations of variables with particular properties by systematically trying out possibilities in a given order (think Wordle); “genetical or evolutionary” searches, in which organisms survive if they hit upon the right combination of genes (an idea that would give rise to the programming technique of genetic algorithms); and finally, “cultural” searches, carried out by the human community as a whole.
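
Turing left these searches as sketches, but the descendant technique he anticipated is easy to illustrate. Below is a minimal genetic-algorithm toy in Python (the target, fitness measure, and parameters are all mine, for illustration only): bit-string “organisms” survive according to how many bits match a target, with crossover and mutation supplying the variation.

    import random

    TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

    def fitness(genome):
        # Count matching bits: "hitting upon the right combination of genes."
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(pop_size=20, generations=200, mutation=0.05):
        pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]              # selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, len(TARGET))
                child = a[:cut] + b[cut:]                 # crossover
                child = [1 - g if random.random() < mutation else g
                         for g in child]                  # mutation
                children.append(child)
            pop = survivors + children
            if fitness(pop[0]) == len(TARGET):
                break
        return pop[0]

    print(evolve())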

So far, the process Turing described seems like it might indeed yield a humanlike intelligence: active, searching, fallible, given to pleasure and pain, emerging and growing over time through engagement with other intelligent beings and with a larger culture. And yet there’s a crucial difference, or at least an important sense in which you might see Turing’s thinking computer as fundamentally different from a thinking human: it was designed to appear intelligent only from the outside, with no actual intelligence inside. For instance, in searching, the machine would try out possibilities either at random or according to a fixed rule. Its searches would be a combination of scripted and random, but never interpretive or reflective—never the kind of search you do when you’re looking for a good book, or the best angle for a photograph, or the right words for an idea. The machine could only give the illusion of reflecting or interpreting. 

Turing addressed this question himself in January 1952 in a panel discussion on the BBC.3 His interlocutor was the mathematician and cryptologist Max Newman, his friend and former teacher. The moderator was Richard Braithwaite, a lecturer in moral science at Cambridge and, like Turing, a fellow of King’s College. Occasionally Braithwaite returned to his own view that, in order to learn, a machine would need “springs of action,” something like “appetites” or “interests” so that it would pay attention to the relevant factors in its environment. Newman also described the essence of human thinking in active terms, such as the “sudden pounce on an idea.”

But Turing responded to Braithwaite that even without appetites or interests, a machine could try out combinations at random and then receive affirmations or corrections. Of course, in order to imitate a human convincingly the machine would have to appear to have free will. Turing proposed two ways to accomplish this. The first was to include a random element in the machine’s behavior, “something like a roulette wheel or a supply of radium.” Here he seemed to conflate acting freely with acting arbitrarily.

Turing’s second idea was to base the appearance of autonomy and free will on the observer’s ignorance, both of what was happening inside the machine and of the consequences of any given facts or principles. The mathematician Ada Lovelace had said of the Analytical Engine—the calculating machine designed in the 1830s by Charles Babbage, with whom Lovelace collaborated in developing the theory of mechanical computation—that it had “no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.” Turing pointed out that this assumed that when we give the machine its orders we necessarily grasp at once all the consequences of these orders, an assumption that’s clearly false. If the machine arrives at consequences we hadn’t foreseen, it’s arguably originating something.

Intelligence, in other words, was in the ignorant eye of the beholder, unless it was in the private experience of the intelligent being, where it was inaccessible to science. Pressed to define thinking itself, as opposed to its outward appearance, Turing couldn’t say more than that it was “a sort of buzzing that went on inside my head.” Ultimately, the only way to be sure that a machine could think was “to be the machine and to feel oneself thinking.” But that way lay solipsism, not science. From the outside, Turing argued, a thing could seem to be intelligent only as long as one didn’t know all its rules of behavior. A science of the inner workings of intelligence would be paradoxical, since any appearance of intelligence would evaporate in its face. Newman agreed, drawing an analogy to the beautiful ancient mosaics of Ravenna. If you scrutinized these closely, you might say, “Why, they aren’t really pictures at all, but just a lot of little colored stones with cement in between.” Intelligent thought was a mosaic of simple operations that, viewed up close, disappeared into its mechanical parts.

The necessity of measuring intelligence from without and not within was a crucial point in “Computing Machinery and Intelligence.” Turing specified that a machine must be admissible for the test even if its engineers couldn’t fully describe how it worked. The “teacher” of the machine might be largely ignorant of what went on inside; if not, the teacher would lose any sense that the machine was intelligent. But the same would be true of a human being, according to Turing. If you could attribute a human’s behavior entirely to a set of explicit rules, you’d come to see the human as an unintelligent machine. Moreover, Turing thought such a set of rules did in fact exist for humans—not a complete list of “precepts such as ‘Stop if you see red lights,’” but a set of “laws of behavior” or “laws of nature as applied to a man’s body such as ‘if you pinch him he will squeak.’” Although in his imagined dialogues Turing conjured a mechanical intelligence with authentic personhood, he also denied there was any such thing. His paper is therefore fundamentally and strangely at odds with itself.

*

It wasn’t just Turing. Remarkably, the pioneers of artificial intelligence shared a consensus that natural intelligence didn’t exist: it was an illusion, a mirage, a trick of light and color. This was a strange intellectual moment when people in several fields—evolutionary biology, psychology, the new field of cybernetics, which would give rise to computer science—were busy denying the real existence of mind and mind-made meaning in the world and eradicating all mention of these from science. Purging any reference to irreducible mind or meaning became a hallmark of the sciences, distinguishing them from humanistic ways of thinking. By insisting that mind reduced to meaningless bits, and that they alone grasped this fundamental reality, scientists partitioned their fields from all those benighted humanistic disciplines that dwelt at the frothy level of illusory meanings. Turing was instrumental in this partitioning, even as he himself included whimsical literary dialogues in his paper on machine intelligence and made an engagement with poetry and novels the chief characteristic of his hypothetical intelligent machines.

To be properly scientific, according to the new standard, an explanation must take a certain narrow form: it must be the reductive account of passive, mechanical devices. This ideal of scientific explanation had a much longer history. During the seventeenth century, various philosophical revolutionaries, especially René Descartes, held that a philosopher’s understanding of nature should be like a clockmaker’s understanding of a clock. But Descartes specifically exempted human intelligence and selfhood from the purview of his science.

Moreover, the classical mechanist standard clearly didn’t apply in all areas of physical science, and this became ever more the case as the modern physical sciences developed. According to quantum mechanics, for instance, it’s impossible to give a complete, reductive description of a subatomic particle; if you know the exact position of an electron, you can’t know its momentum, and vice versa. Still, despite the ever more apparent limitations of the reductive, mechanical model of science, during the later nineteenth and twentieth centuries scientists enthusiastically applied it to living things, including human beings, establishing a view of humans as passive, mechanical devices through and through.
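
The quantum limit just mentioned has a standard quantitative form, Heisenberg’s uncertainty relation:

    $$\Delta x \,\Delta p \ \ge\ \frac{\hbar}{2},$$

where $\Delta x$ and $\Delta p$ are the uncertainties in a particle’s position and momentum and $\hbar$ is the reduced Planck constant; the two cannot both be made arbitrarily small.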

Around the time Turing wrote his paper, neo-Darwinist evolutionary theorists were arriving at the “modern synthesis” of evolution and genetics, casting living beings as the objects of external forces, able to do nothing but undergo random genetic variations and be acted upon by natural selection. Behaviorists such as the Harvard psychologist B.F. Skinner were eliminating all mention of “consciousness,” “mind,” “will,” and “intellect” from psychology. Cyberneticists were founding their new science on a view of intelligence epitomized by the English psychiatrist and early cyberneticist Ross Ashby when he observed, “What I am saying is that…‘real’ intelligence does not exist. It is a myth. It has come into existence in the same way that the idea of ‘real’ magic comes to a child who sees conjuring tricks.”4 To speak of intelligence or mind had become childish, naive, the opposite of scientific. Yet beings with minds are ubiquitous elements of our empirical experience. Why would it be more scientific to deny their existence than to acknowledge it?

During the 1970s Berkeley became the headquarters of the resistance to this approach to artificial intelligence. Two UC Berkeley philosophers, Hubert Dreyfus and John Searle, devoted their careers to criticizing the founding assumptions of AI, in Dreyfus’s case influencing research in the field itself. Dreyfus argued that the brain is no electronic digital computer, operating according to formal rules; thinking is a continual, physical engagement with the world. A brain, Dreyfus maintained, is part of a body, and in order to think you need the rest of the body, too.5

A new tradition of AI researchers responded to this critique in the 1980s by designing “embodied” artificial intelligences, robots with sensors that moved and engaged with their surroundings. The MIT roboticist Rodney Brooks built Herbert, for instance, a robot that wandered around the lab, going into people’s offices and stealing empty soda cans. But even Brooks said that intelligence and cognition were “in the eye of the observer.” He designed each robot using discrete behavior modules in the various parts, with no single module containing instructions for the robot’s overall project. Brooks took this to represent how natural creatures navigated the world. He said his robots looked as though they were acting on a centralized set of purposes, but this was only an illusion: they contained no centralized, intelligent self.6

Searle’s critique of AI, unlike Dreyfus’s, didn’t inspire a new research program, but it did include a hypothetical scenario that became peculiarly famous. Searle imagined himself in a locked room being handed bits of writing in Chinese, along with instructions in English for how to compose some of these into appropriate responses. To people outside, he reckoned, it would seem as though someone in the room understood Chinese, but in fact no understanding of Chinese would be happening. Likewise, a digital computer programmed to generate appropriate text wouldn’t be understanding what it was doing. Passing the Turing Test, then, was no guarantee of intelligence. Searle rejected what he called the “underlying dualism” of the idea that thinking consists of information processing: the assumption that the software of thinking was separable from the hardware. Neither lactation nor photosynthesis could happen in a silicon machine; similarly, only a brain’s biochemistry could secrete the “intentionality” that Searle took to be the basis of thought.7

Still, Searle described mental phenomena like consciousness not as primitive constituents of the world but as “emergent” features arising from the complex connectivity among neurons.8 He said that mental phenomena couldn’t cause anything to happen at the lower level of “hardware.” Any appearance of intelligence at that level was an “as-if” situation, nothing but an “optical illusion.” Searle therefore recommended “inverting” any explanation in cognitive science that assigned a function to a mental cause. For instance, rather than saying that to keep my retinal image stable while I’m driving I make lots of tiny eye movements, we should say that the tiny movements of my eyes keep my retinal image stable. This simple maneuver, grammatically omitting my agency in moving my eyes, “eliminates [the] mental cause altogether. There is nothing there except a brute physical mechanism that produces a brute physical effect.”

The nineteenth-century English naturalist T.H. Huxley had said essentially the same thing a century earlier: consciousness was a “collateral product,” like the whistle on a locomotive or the bell on a clock.9 Human feelings of volition were not functional causes on the lowest level; they were merely secondary consequences of physical states of the brain’s machinery. Each critique, it seems, has led back to the same place: an emptiness, at bottom, in place of a thinking mind. This emptiness axiom was so firmly established in the founding of AI that it appears to have been invincible.

Despite the impossibility of knowing what thinking was or that it existed, Turing wrote that it was “usual to have the polite convention that everyone thinks.” Alas, it has been anything but usual. For as long as there have been people, no doubt, they’ve been declining to assume that other people think. But during the first half of the twentieth century, this attitude took on the newly powerful guise of science. In addition to informing AI’s emptiness axiom, the reductive, scientistic approach to mind expressed itself in the new regime of intelligence testing, which represented intelligence as reducible to a unitary, measurable quantity—a founding axiom, in turn, of the eugenics movement. Another central principle of the eugenics movement from midcentury onward was the causal attribution of qualities of mind to DNA. If intelligence reduced to certain nucleotides in the right places, once again this implied it could be rigorously quantified. By reducing intelligence clean out of existence, the founders of AI—and their fellow-travelers in cybernetics, biology, and behaviorist psychology—were carrying the reigning reductive approach to its logical extreme.

These reductive models of human beings authorized all sorts of discriminatory and repressive measures, from educational policies that treated students unequally by race, class, and sex to tens of thousands of forced sterilizations. In contrast, the emancipatory movements of the same period, in which people belatedly came to ascribe full human intelligence to other people, grew not from any reductive science of intelligence but from the irreducible action of intelligence in the world, from inner personhood irrefutably asserting itself. “We can never be satisfied,” Martin Luther King Jr. proclaimed, “as long as our children are stripped of their selfhood.”

I don’t mean that Turing had any pernicious purpose in developing his experimental approach to machine intelligence. He was in fact on the receiving end of a dehumanizing miscarriage of science when the British government sentenced him in 1952 to hormonal treatment with diethylstilbestrol, a nonsteroidal estrogen used to perform what’s known as “chemical castration,” as an alternative to imprisonment for homosexuality. This treatment likely contributed to his probable suicide by cyanide poisoning two years later. But Turing did participate in the establishment of a bizarre and darkly potent idea of human personhood: the idea that there’s no such thing. The personable interlocutors in his dialogues in “Computing Machinery and Intelligence” suggest that he did so in some way despite himself.

*

In the new generative-AI language models, we have a process like what Turing described: fallible neural networks programmed to search for patterns and improve their pattern-finding ability over time. The result has been the logical outcome of Turing’s approach: an extremely complex yet hollow system, which exploits its audience’s ignorance to present the appearance of a person while containing no actual personhood.

The programming approaches that Turing described have been tremendously powerful. For better and worse, they’ve remade the world. But in one regard they have been utterly ineffectual: they’ve gotten us no closer to understanding the basis of living intelligence. In retrospect this is unsurprising, since they were predicated upon the axiom that living intelligence doesn’t exist. Turing’s literary dialogues seem to me to indicate what’s wrong with Turing’s science as an approach to intelligence. They suggest that an authentic humanlike intelligence resides in personhood, in an interlocutor within, not just the superficial appearance of an interlocutor without; that intelligence is a feature of the world and not a figment of the imagination. 

Recently I was talking with a group of very smart undergraduates, and we got to discussing the new AIs and what sort of intelligence they have, if any.10 Suddenly one of the students said, “I wonder though, maybe that’s all I do too! I just derive patterns from my experiences, then spit them back out in a slightly different form.” My answer came out of my mouth almost before I could think: “No! Because you’re you in there thinking and responding. There’s no ‘I’ in ChatGPT.” He smiled uncertainly. How can we tell there’s no “I” in there, he and the others wondered? To insist that ChatGPT can’t be intelligent because it’s a computer system and not a living thing is just a dogmatic assertion, not a reasoned argument.

How do we know when we’re in the presence of another intelligent being? Definitely not by giving it a test. We recognize an intelligent being by a kind of sympathetic identification, a reciprocal engagement, a latching of minds. Turing was definitely on to something with his idea about conversations, and if we were able to have conversations like the ones he imagined with machines, that might be a different matter. It wouldn’t be a test of artificial intelligence, but it might be a compelling indication of it. Such machines, though, would be fundamentally different from the generative AIs. To contemplate what they might be like, I think we’d need to draw upon the very sort of intelligence whose existence the founders of AI denied: an irreducibly reflective, interpretive kind of thinking. In fact, the sort Turing used to imagine conversing with intelligent machines.

Whether or not the new generative AIs pass the Turing Test, they’re not intelligent. They’re the products of an engineering program founded on the denial of intelligence. If we want a legendary destructive power to compare them to, they’re nothing whatsoever like Frankenstein’s monster, who was monstrous by virtue of his intellectual and emotional self. Perhaps they’re something like Charybdis, the fearsome whirlpool that almost destroyed Odysseus: eternally sucking the world in and belching it out, hollow to the very depths.

Appendix: ChatGPT’s responses, taken on April 17, 2023

1) A sonnet on the subject of the Forth Bridge

Ode to the Forth Bridge, a Mighty Sight

With steel and sweat, a marvel takes its form,
A bridge of strength, enduring and sublime.
Spanning the Forth, defying wind and storm,
A feat of human ingenuity, prime.

A symphony of girders standing tall,
In crimson hue, majestic in their might.
A beacon of engineering for all,
A masterpiece, a wonder of our sight.

For years, men toiled with grit and skill untold,
A symphony of workers, hand in hand,
A saga of labor, brave and bold,
Each rivet placed with care, as planned.

Ode to the Forth Bridge, a testament,
To human prowess, prowess excellent.
A sight that draws the eye, a work of art,
Its graceful curves, a dance of strength and grace.

A marvel of design, a masterpiece,
A symbol of industrial embrace.
Through fog and rain, it stands unwavering,
A beacon of resilience and might.

A triumph of human innovation,
A bridge that spans the waters, day and night.
Its towers rise, a symbol of resolve,
A testament to human skill and toil.

A monument of progress, strong and bold,
A marvel that ignites our awe and foil.
So here’s to you, dear Forth Bridge, so grand,
A testament to human’s mighty hand.

2) ChatGPT’s response to whether “a spring day” would do as well as “a summer’s day” in Shakespeare’s sonnet

Certainly! As an AI language model, I can certainly modify the first line of the sonnet to suit your preference. Here’s an alternative version with “a spring day” in place of “a summer’s day”:

Sonnet: A Spring Day’s Comparison

Shall I compare thee to a spring day?
Thou art as lovely, fresh, and full of grace.
When blossoms bloom, in colors bright and gay,
And nature’s beauty shines in every place.

Thy warmth, a gentle sun’s embrace,
Brings life to earth, with promises anew.
Thy fragrant breeze, a sweet and subtle trace,
Delights the senses, and my heart pursue.

Though summer’s heat, with fiery blaze,
May wilt the blooms, and scorch the verdant scene,
In spring, thy charms, forever amaze,
A time of growth, of youth, and dreams serene.

So let me compare thee to a spring day,
With beauty blooming, in a wondrous way.

3) ChatGPT’s response to the possibility of substituting “a winter’s day”

Certainly! Here’s an alternative version with “a winter’s day” in place of “a summer’s day”:

Sonnet: A Winter’s Day’s Comparison

Shall I compare thee to a winter’s day?
Thou art as pure, serene, and coldly bright.
When frosty breath hangs in the air, at play,
And nature rests in peaceful, snowy light.

Thy grace, a tranquil landscape dressed in white,
Brings stillness to the world, a hush profound.
Thy icy touch, a kiss of chill and bite,
Creates a beauty, silent and profound.

Though summer’s warmth, with vibrant hues and sound,
May stir the senses, and inspire cheer,
In winter’s calm, a quiet can be found,
A time of peace, when all seems crystal-clear.
