Solid Clues: Quantum Physics, Molecular Biology, and the Future of Science
Gerald Feinberg, a physicist at Columbia University, writes that his book will “attempt to predict the changes that will take place over the next few decades in the content of science and in the lives of scientists,” especially physicists and biologists. He does not think that there is a “procedure for anticipating the future of science,” a science of science, but he supposes that he can identify the gaps in existing science and then “speculate on how they might be filled.” He also thinks that he can foretell future scientific developments “by analyzing the ways in which science has evolved in the past, and using these insights to make educated guesses about future breakthroughs.”
The result of Feinberg’s predictions is a scientific utopia, where computers share scientists’ thoughts, spare human organs are kept on hospital shelves like carburetors, and genetic engineers wave magic wands that cure inherited diseases. Forecasts of this kind are often bandied about, and laymen may wonder how seriously to take them. Feinberg’s book stimulated me to try to find out whether any such advances are foreseeable on the basis of present scientific knowledge and whether there are other important advances already in the making that are not foreseen in his book.
Feinberg believes that “computer capabilities are evolving very rapidly,” and that “computers will not only become better tools to aid human thought but also partners in human thought.” He asserts that “once we understand any intellectual activity well enough to describe clearly what it accomplishes, then eventually we can teach computers to do it.” He cites as an example advances in artificial intelligence that “could result in small computer packages with decision-making capabilities comparable to those of human beings” to replace human beings in space probes.
How likely are computers to develop in this way? Today computers are cleverer than people in some ways and stupider in others, but above all they are different. Computers work about three million times faster than brains, because electric pulses travel along nerves at a mere 100 meters a second, while they travel along metal wires at nearly 300,000 kilometers a second. The memory storage capacity of computers is enormous, since in addition to the thousands of millions of numbers stored in their own memory, they can be made to have almost instant access to a multitude of satellite discs and magnetic tapes. This allows computers to memorize the timetables and passenger bookings of all the world’s airlines and spurt out any part of this information at the pressing of a few buttons, something that no human brain could possibly do.
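The factor of three million is simple arithmetic; spelled out (taking the signal speed in metal wire to be close to the speed of light, as the figures above imply):

```latex
\frac{v_{\text{wire}}}{v_{\text{nerve}}} \approx \frac{3\times10^{8}\ \text{m/s}}{10^{2}\ \text{m/s}} = 3\times10^{6}
```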
On the other hand, brains are more versatile: for example, they can create works of art and make scientific discoveries. The reasons for this may not all be known yet, but here are some of them. In a computer, each switch works as an on-off device and is normally connected to only three other switches, while each of the ten thousand million nerve cells in the brain may be connected to more than a thousand others. Most connections act not by transmitting electric currents, but by transmission of specific chemicals. The transmitters in the brain are of many kinds, and their action is “tuned” in a multitude of different ways by at least forty other chemical compounds secreted in various parts of the brain, such as the natural pain relievers called enkephalins, which can stop pain signals from peripheral nerves reaching our consciousness. (I feel sure that laughter is one way of triggering their release.) While computer memory is generated by the magnetization of tiny clusters of metal atoms, brain learning requires chemical synthesis, perhaps for making new nerve connections. Computers run on electrical energy, while brains run on chemical energy. Computers deprived of current can be revived; brains deprived of oxygen for more than a few moments are dead. In short, computers are electromagnetic devices with fixed wiring between more or less linearly connected elements, while brains are dynamic electrochemical organs with extensively branched connections continuously capable of generating new molecules to be used as transmitters, receptors, modulators, and perhaps also capable of making new connections.
Despite these fundamental distinctions between the brain and the computer, the efforts to simulate mental activities by computers, known as artificial intelligence or AI, have attracted some of the world’s best mathematicians and psychologists. They have found it possible to simulate sophisticated activities like playing chess, but hard to imitate the simple ability of seeing in three dimensions, as if it took more intelligence for a frog to catch a fly than for a chess player to win a game against Karpov. Translation of languages has also proved difficult, but after twenty-five years’ effort this is said to have progressed to a stage where the computer can get about 90 percent of the meaning of some texts right.
Present computers are made of silicon chips containing individual switches or elements as small as a thousandth of a millimeter. One chip may contain up to a million such switches. Feinberg predicts that individual computer elements will continue to shrink until they become crowded together as closely as atoms are in a solid body, making computers millions of times more effective than they are now. These prospects have recently been reviewed by R.C. Haddon and A.A. Lamola, two scientists at AT&T Bell Laboratories in Murray Hill, New Jersey.1 According to Haddon and Lamola, technical advances may soon allow individual elements on chips to be made a hundred times smaller than they are now, providing up to 10 billion switches or bits of memory per chip. Yet each of these features would still be 10,000 times larger than the atoms or molecules which Feinberg envisages as the ultimate computer elements.
Haddon and Lamola show that, in fact, there is no chemistry in sight for making molecules, let alone atoms, that could act either as switches or as conducting wires. Even if this were to be accomplished, methods would still have to be found for setting, addressing, and reading such molecular switches individually, and for preventing the unwanted jumping of electrical signals between them. Haddon and Lamola conclude that not just the technology but the basic scientific principles for the construction of such molecular electronic devices are unknown. They do not mention electronic devices employing single atoms, and I know of no properties of single atoms that would allow them to be used as switches or memory stores.
Common sense tells us that there is more to the human brain than the problem solving and information processing that computers can do, because with consciousness go individuality, imagination, love of beauty, tears and laughter, kindness and cruelty, heroism and cowardice, truthfulness and mendacity, and occasionally artistic talent. Greatness in art and poetry carries with it an idiosyncratic, evocative, often irrational way of looking at the world and expressing its image, as in Gauguin’s paintings of Tahiti or Coleridge’s Ancient Mariner. Paul Klee thought the artist makes the invisible visible, and an Irish writer, George Moore, put the distinction best by saying that art is not mathematics, it is individuality. Probably no two human brains, not even those of identical twins, are exactly alike, while computers are made as identical units in serial batches. Even so, artificial-intelligence experts are brilliant at dialectic and capable of confounding any specific distinction between human beings and computers that a layman cares to raise. For example, the late Alan Turing devised a question-and-answer game between A and B in one room and C in another, communicating with A and B by teletype. C tries to discover whether A or B is a person or a computer, but the computer defeats C’s interrogation. In Turing’s game, when C asks A to write him a sonnet, the computer answers quite reasonably: “I never could write poetry.”
If computers were to become partners in human thought, as Feinberg predicts, they would have to acquire consciousness. Is this likely to happen? Physiologists have discovered where and how images received by the retina of the eye are processed to provide the sensation of a moving object, and they have mapped areas of the brain where speech, hearing, and other functions are centered, but the physical or chemical nature of consciousness has eluded them. As a schoolboy, I was mystified by gravity, and when I reached university I eagerly attended physics lectures in the hope of learning what it really is. I was disappointed when they merely taught me that gravity is what it does, an attractive force between bodies that makes the apple fall with an acceleration of ten meters a second per second.
Perhaps consciousness is like that, and we may get no further than stating that it is what it does: a property of the brain that makes us aware of ourselves and of the world around us, “a beam of light directed outward,” as Boris Pasternak’s Zhivago calls it. The Cambridge physicist Brian Pippard has argued that in evolution consciousness may have arisen suddenly when brains reached a certain degree of complexity, but I doubt that any sharp distinctions exist between animals that do and do not possess consciousness; more probably consciousness attained increasing sophistication as animals ascended the evolutionary tree. In the absence of knowledge of its physical nature, the question of whether it will ever be possible to simulate it by a machine cannot be answered.
Feinberg believes that computers will become partners of scientists in their research. “This could be done,” he writes, “either by teaching computers to understand human speech, or perhaps by giving computers direct access to the human brain through some kind of electronic hookup.” If this were done, “communication between computer and human would be as rapid as between two people, possibly much more rapid, if the computer has direct access to the brain. Such close communication and recognition by the computer of the modes of thought of an individual is currently beyond the ability of computers, but probably not for very much longer.”
But will computers be able to read our thoughts in this way? At present they cannot even read difficult handwriting. Thought reading by “electronic hookup” would be possible only if nerve impulses emitted suitable electromagnetic signals detectable on or beyond the surface of the skull. In fact, the frequency of nerve impulses is more than a hundred times lower than that of the lowest commonly used radio frequencies, which means that they have wavelengths of hundreds of kilometers. This long wavelength raises a fundamental difficulty, because there exists a physical relationship between the wavelength of the radiation used to look at an object and the details that can be distinguished. For example, a microscope using blue light with a wavelength of about half a thousandth of a millimeter cannot distinguish two points separated by less than half that distance. By the same token, radiation of a wavelength of 200 kilometers could not distinguish two objects less than 100 kilometers apart. Therefore radiation emitted by electric pulses of single nerves in the brain, even if it were detectable, would not tell the observer from which of the millions of nerve cells or even from which part of the brain it had been emitted; yet without such information thought reading would be impossible.
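The argument of this paragraph can be put in two lines. Taking the nerve-impulse frequency to be on the order of a kilohertz (an assumed round figure, consistent with “more than a hundred times lower” than the lowest common radio frequencies), the wavelength of radiation at that frequency, and the corresponding resolution limit of about half a wavelength, work out to:

```latex
\lambda = \frac{c}{f} \approx \frac{3\times10^{8}\ \text{m/s}}{10^{3}\ \text{Hz}}
        = 3\times10^{5}\ \text{m} = 300\ \text{km},
\qquad
d_{\min} \approx \frac{\lambda}{2} \approx 150\ \text{km}
```

So even if such radiation were detectable, it could not localize its source to any region smaller than one vastly larger than the brain itself.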
1. “The Molecular Electronic Device and the Biochip Computer: Present Status,” Proceedings of the National Academy of Sciences of the USA, vol. 82 (April 1985), pp. 1874–1878.