Gerald Feinberg, a physicist at Columbia University, writes that his book will “attempt to predict the changes that will take place over the next few decades in the content of science and in the lives of scientists,” especially physicists and biologists. He does not think that there is a “procedure for anticipating the future of science,” a science of science, but he supposes that he can identify the gaps in existing science and then “speculate on how they might be filled.” He also thinks that he can foretell future scientific developments “by analyzing the ways in which science has evolved in the past, and using these insights to make educated guesses about future breakthroughs.”

The result of Feinberg’s predictions is a scientific utopia, where computers share scientists’ thoughts, spare human organs are kept on hospital shelves like carburetors, and genetic engineers wave magic wands that cure inherited diseases. Forecasts of this kind are often bandied about, and laymen may wonder how seriously to take them. Feinberg’s book stimulated me to try to find out whether any such advances are foreseeable on the basis of present scientific knowledge and whether there are other important advances already in the making that are not foreseen in his book.

Feinberg believes that “computer capabilities are evolving very rapidly,” and that “computers will not only become better tools to aid human thought but also partners in human thought.” He asserts that “once we understand any intellectual activity well enough to describe clearly what it accomplishes, then eventually we can teach computers to do it.” He cites as an example advances in artificial intelligence that “could result in small computer packages with decision-making capabilities comparable to those of human beings” to replace human beings in space probes.

How likely are computers to develop in this way? Today computers are cleverer than people in some ways and stupider in others, but above all they are different. Computers work about three million times faster than brains, because electric pulses travel along nerves at a mere 100 meters a second, while they travel along metal wires at nearly 300,000 kilometers a second. The memory storage capacity of computers is enormous, since in addition to the thousands of millions of numbers stored in their own memory, they can be made to have almost instant access to a multitude of satellite discs and magnetic tapes. This allows computers to memorize the timetables and passenger bookings of all the world’s airlines and spurt out any part of this information at the pressing of a few buttons, something that no human brain could possibly do.
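The "three million times faster" figure follows from simple arithmetic on the essay's own round numbers, as this small sketch shows (the speeds are the essay's figures, not precise measurements):

```python
# Back-of-envelope check of the speed ratio quoted above.
# Both figures are the essay's round numbers, not precise measurements.
nerve_speed_m_per_s = 100               # electric pulses along nerves
wire_speed_m_per_s = 300_000 * 1000     # signals along metal wires, ~speed of light

ratio = wire_speed_m_per_s / nerve_speed_m_per_s
print(f"Signals in wires travel ~{ratio:,.0f} times faster than nerve impulses")
# ~3,000,000 — the "three million times faster" of the text
```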

On the other hand, brains are more versatile—for example, they can create works of art and make scientific discoveries. The reasons for this may not all be known yet, but here are some of them. In a computer, each switch works as an on–off device and is normally connected to only three other switches, while each of the ten thousand million nerve cells in the brain may be connected to more than a thousand others. Most connections act not by transmitting electric currents, but by transmission of specific chemicals. The transmitters in the brain are of many kinds, and their action is “tuned” in a multitude of different ways by at least forty other chemical compounds secreted in various parts of the brain, such as the natural pain relievers called enkephalins, which can stop pain signals from peripheral nerves reaching our consciousness. (I feel sure that laughter is one way of triggering their release.) While computer memory is generated by the magnetization of tiny clusters of metal atoms, brain learning requires chemical synthesis, perhaps for making new nerve connections. Computers run on electrical energy, while brains run on chemical energy. Computers deprived of current can be revived; brains deprived of oxygen for more than a few moments are dead. In short, computers are electromagnetic devices with fixed wiring between more or less linearly connected elements, while brains are dynamic electrochemical organs with extensively branched connections continuously capable of generating new molecules to be used as transmitters, receptors, modulators, and perhaps also capable of making new connections.

Despite these fundamental distinctions between the brain and the computer, the efforts to simulate mental activities by computers, known as artificial intelligence or AI, have attracted some of the world’s best mathematicians and psychologists. They have found it possible to simulate sophisticated activities like playing chess, but hard to imitate the simple ability of seeing in three dimensions, as if it took more intelligence for a frog to catch a fly than for a chess player to win a game against Karpov. Translation of languages has also proved difficult, but after twenty-five years’ effort this is said to have progressed to a stage where the computer can get about 90 percent of the meaning of some texts right.


Present computers are made of silicon chips containing individual switches or elements as small as a thousandth of a millimeter. One chip may contain up to a million such switches. Feinberg predicts that individual computer elements will continue to shrink until they become crowded together as closely as atoms are in a solid body, making computers millions of times more effective than they are now. These prospects have recently been reviewed by R.C. Haddon and A.A. Lamola, two scientists at AT&T Bell Laboratories in Murray Hill, New Jersey.1 According to Haddon and Lamola, technical advances may soon allow individual elements on chips to be made a hundred times smaller than they are now, providing up to 10 billion switches or bits of memory per chip. Yet each of these features would still be 10,000 times larger than the atoms or molecules which Feinberg envisages as the ultimate computer elements.

Haddon and Lamola show that, in fact, there is no chemistry in sight for making molecules, let alone atoms, that could act either as switches or as conducting wires. Even if this were to be accomplished, methods would still have to be found for setting, addressing, and reading such molecular switches individually, and for preventing the unwanted jumping of electrical signals between them. Haddon and Lamola conclude that not just the technology but the basic scientific principles for the construction of such molecular electronic devices are unknown. They do not mention electronic devices employing single atoms, and I know of no properties of single atoms that would allow them to be used as switches or memory stores.

Common sense tells us that there is more to the human brain than the problem solving and information processing that computers can do, because with consciousness go individuality, imagination, love of beauty, tears and laughter, kindness and cruelty, heroism and cowardice, truthfulness and mendacity, and occasionally artistic talent. Greatness in art and poetry carries with it an idiosyncratic, evocative, often irrational way of looking at the world and expressing its image, as in Gauguin’s paintings of Tahiti or Coleridge’s Ancient Mariner. Paul Klee thought the artist makes the invisible visible, and an Irish writer, George Moore, put the distinction best by saying that art is not mathematics, it is individuality. Probably no two human brains, not even those of identical twins, are exactly alike, while computers are made as identical units in serial batches. Even so, artificial-intelligence experts are brilliant at dialectic and capable of confounding any specific distinction between human beings and computers that a layman cares to raise. For example, the late Alan Turing devised a question-and-answer game between A and B in one room and C in another, communicating with A and B by teletype. C tries to discover whether A or B is a person or a computer, but the computer defeats C’s interrogation. In Turing’s game, when C asks A to write him a sonnet, the computer answers quite reasonably: “I never could write poetry.”

If computers were to become partners in human thought, as Feinberg predicts, they would have to acquire consciousness. Is this likely to happen? Physiologists have discovered where and how images received by the retina of the eye are processed to provide the sensation of a moving object, and they have mapped areas of the brain where speech, hearing, and other functions are centered, but the physical or chemical nature of consciousness has eluded them. As a schoolboy, I was mystified by gravity, and when I reached university I eagerly attended physics lectures in the hope of learning what it really is. I was disappointed when they merely taught me that gravity is what it does, an attractive force between bodies that makes the apple fall with an acceleration of ten meters a second per second.

Perhaps consciousness is like that, and we may get no further than stating that it is what it does: a property of the brain that makes us aware of ourselves and of the world around us, “a beam of light directed outward,” as Boris Pasternak’s Zhivago calls it. The Cambridge physicist Brian Pippard has argued that in evolution consciousness may have arisen suddenly when brains reached a certain degree of complexity, but I doubt that any sharp distinctions exist between animals that do and do not possess consciousness; more probably consciousness attained increasing sophistication as animals ascended the evolutionary tree. In the absence of knowledge of its physical nature, the question of whether it will ever be possible to simulate it by a machine cannot be answered.

Feinberg believes that computers will become partners of scientists in their research. “This could be done,” he writes, “either by teaching computers to understand human speech, or perhaps by giving computers direct access to the human brain through some kind of electronic hookup.” If this were done, “communication between computer and human would be as rapid as between two people, possibly much more rapid, if the computer has direct access to the brain. Such close communication and recognition by the computer of the modes of thought of an individual is currently beyond the ability of computers, but probably not for very much longer.”

But will computers be able to read our thoughts in this way? At present they cannot even read difficult handwriting. Thought reading by “electronic hookup” would be possible only if nerve impulses emitted suitable electromagnetic signals detectable on or beyond the surface of the skull. In fact, the frequency of nerve impulses is more than a hundred times lower than that of the lowest commonly used radio frequencies, which means that they have wavelengths of hundreds of kilometers. This long wavelength raises a fundamental difficulty, because there exists a physical relationship between the wavelength of the radiation used to look at an object and the details that can be distinguished. For example, a microscope using blue light with a wavelength of two thousandths of a millimeter cannot distinguish two points separated by less than half that distance. By the same token, radiation of a wavelength of 200 kilometers could not distinguish two objects less than 100 kilometers apart. Therefore radiation emitted by electric pulses of single nerves in the brain, even if it were detectable, would not tell the observer from which of the millions of nerve cells or even from which part of the brain it had been emitted; yet without such information thought reading would be impossible.
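The resolution argument above can be made concrete with a short calculation. The firing rate assumed below (about 1.5 kilocycles per second) is my own round figure, chosen to reproduce the essay's 200-kilometer example; the half-wavelength resolution limit is the standard diffraction rule the text invokes:

```python
# Diffraction-limit sketch: radiation of wavelength λ cannot resolve
# detail much finer than λ/2. The nerve-impulse rate is an assumed
# round figure matching the essay's 200 km example.
c = 3.0e8                       # speed of light, m/s
nerve_pulse_rate_hz = 1.5e3     # assumed firing rate, cycles per second

wavelength_m = c / nerve_pulse_rate_hz
resolution_m = wavelength_m / 2
print(f"wavelength ≈ {wavelength_m/1000:.0f} km, "
      f"finest resolvable detail ≈ {resolution_m/1000:.0f} km")
# ≈ 200 km wavelength, ≈ 100 km resolution — vastly coarser than any brain
```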

It is true that brain activity is detectable by electrodes placed against the skull, but this activity differentiates merely between gross states such as wakefulness and sleep. In fact individual nerves are well insulated from one another and can be monitored only by implanting microelectrodes in the brain through holes drilled into the skull, as David Hubel and Torsten Wiesel at Harvard did in monkeys in order to study the processing of visual information. Hubel and Wiesel used single electrodes, but to give “computers direct access to the brain by some kind of electronic hookup” would require thousands, if not millions, of electrodes to be implanted in the brain, each recording the pulses of an individual nerve cell. To Feinberg “this does not seem too difficult to accomplish,” but I am not sure that artificial-intelligence enthusiasts and other scientists would volunteer to have themselves wired up to their computers in this way, nor, even if they did, how their computers would interpret the signals they received. Elsewhere in the book Feinberg suggests that the problem might be overcome by inducing nerves to grow from electrodes placed under the skull to thought-producing nerve cells in the brain, but in fact there is no foreseeable way either of producing such nerves or of directing their growth so that each connects with a single specific nerve cell out of millions of others.

According to Feinberg, biotechnology will be “the most important type of future technology.” In his new world, biotechnology will “include the capability of modifying the most fundamental aspects of human life, such as aging, sexuality, inborn inequalities, and sociopathic behavior.” For example, he foresees “a method for growing organs, genetically identical or very similar to a person’s own,” which “could ensure successful transplants when the original was destroyed by disease or injury. These transplants would not be subject to the immune reaction that interferes with present transplants of foreign organs” and would therefore eliminate the present “problem of scarcity of organs for transplants.”

If these predictions were to come true, The New York Times of 2050 might carry news items like the following:

PRIZE WRESTLER SUES NEWLY WED OCTOGENARIAN

Accusations of fraud were raised in a Brooklyn court by former prize wrestler Achilles Gordon against 83-year-old realtor Fred Steel, alleging that Steel had offered Gordon $5,000 for one of his gonads, but on recovery from the anaesthetic Gordon received a check for only $1,000. Steel denied ever having offered more.

In Feinberg’s new world, Steel would face no problem other than the paternity of his children; at present, however, Steel’s white blood cells would destroy Gordon’s graft unless he took immunosuppressive drugs for the rest of his life. No answer to graft rejection is in sight. Surgeons hope that one may be found some day, but fear that it would produce a black market in organs such as already exists in India, where a kidney is said to cost about $4,000. Kidney transplants have now become commonplace (about 50,000 have been performed), heart and heart–lung transplants are increasingly successful, and pancreas transplants for severe diabetics are beginning. All these organs come from recent cadavers and cannot be kept on hospital shelves, because they do not survive in isolation for more than a few hours; and in every case immunosuppressive drugs are required if the transplants are not to be rejected.

Feinberg believes, however, that “once biologists understand the process of development as it takes place normally, it should be possible to induce it artificially,” so that any given organ could be created with the use of cells taken from a person’s body which contain all the genetic information originally used to produce it. Scientists might then be able to grow new organs within a person’s body or in some artificial medium. But if transplant organs were to be made available off the shelf, as Feinberg predicts, they would have to be grown from single cells by cloning.

What are the prospects of this happening? The English botanist Frederick Steward discovered how to grow carrot plants from single cells taken from fully grown plants, and the English zoologist John Gurdon showed that tadpoles will grow from eggs whose nucleus has been replaced by the nucleus of an adult frog’s skin cell. These experiments proved that most body cells contain all the genetic information for the growth of the entire plant or animal, and thus paved the way for the cloning of genetically identical organisms. On the other hand, though the nucleus of a liver cell, when transferred into an egg, may allow the egg to grow into a tadpole, an isolated liver cell will not grow into a new liver, or an isolated heart cell into a new heart. Such cells normally grow in culture dishes only if they have already taken the first step toward malignancy, and then they grow as sheets of single cells, not as whole organs. Moreover, these cells are all alike, whereas organs such as hearts and kidneys are made up of a multitude of different cells integrated into a complex, highly differentiated fabric.

The only nonmalignant cells that have been usefully cultured are skin cells to cover burn wounds. Fifty years ago severe burns were fatal when they covered more than a third of the body’s skin. Recently, Howard Green and his colleagues at Massachusetts General Hospital in Boston have excised tiny patches of healthy skin from severely burned patients and have grown them in culture to as much as 50,000 times their original area. Last year they saved the lives of two children whose burn wounds covered more than 95 percent of their skin; half of their new skin came from patches grown in culture. So far, this method works only with cultures grown from the patient’s own skin, because foreign skin is rejected even more violently than other grafted organs, such as kidneys or hearts, are in the absence of immunosuppressive drugs.

Many people fear that Steward’s and Gurdon’s discoveries may one day make it possible to clone humans, but so far only plants and amphibians have been cloned successfully, and biologists have failed in their attempts to clone mice.

Feinberg writes that one of the forms of biotechnology that are most likely to be developed “in the very near future” is “the cure or elimination of genetic diseases.” Genetic diseases account for up to a third of children’s admissions to hospitals and nearly half of all deaths of patients under the age of fifteen. Feinberg does not explain the formidable difficulties that stand in the way of curing them. The collection of chromosomes that carry human genes consists of a meter of DNA distributed over forty-six chromosomes, and its information content is equivalent to a library of five thousand volumes. To cure a genetic disease, the genetic engineer has to find and correct what may be no more than a single misprint in any one of these volumes. Techniques for finding misprints are very advanced, but those for correcting them are haphazard. Genetically defective strains of mice can be transformed into healthy strains only by injecting thousands of copies of DNA containing the healthy gene into their fertilized eggs. The hope is that at least one of these copies will be incorporated into the chromosomes in such a way as to cure the defect.
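The "library of five thousand volumes" analogy above can be checked with a rough calculation. The base-pair spacing and the length of a "volume" below are my own assumptions, not figures from the text:

```python
# Back-of-envelope check of the "five thousand volumes" analogy.
# The base-pair spacing and volume length are assumed figures.
dna_length_m = 1.0               # "a meter of DNA", as the essay says
bp_spacing_m = 0.34e-9           # ~0.34 nm per base pair along the helix
letters_per_volume = 600_000     # assumed: a thick book of ~300,000 words

letters = dna_length_m / bp_spacing_m        # ~3 billion "letters" of text
volumes = letters / letters_per_volume
print(f"≈ {volumes:,.0f} volumes")           # roughly five thousand
```

A single misprint among some three billion letters is indeed a formidable thing to find, let alone correct.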

This haphazard way may succeed in some subjects, but in others the injected gene may be expressed in the wrong tissue at the wrong time or its accidental insertion into another gene may cause a new genetic defect. This does not matter too much in experiments with mice, where scientists can select the healthy mouse among many defective ones, but it would be unacceptable for human beings. To correct a genetic lesion reliably, a healthy gene would have to be spliced in the correct position in place of the defective one. There are no techniques in sight for doing this.

On the other hand, Feinberg does not mention that molecular biologists and physicians have developed effective new methods of antenatal diagnosis which have already led to dramatic reductions in the number of children born with one of the most crippling and widely spread genetic diseases. This is thalassemia major, an anemia which is common in Mediterranean countries and especially in Southeast Asia. The carriers of the disease are healthy, but one in four of their children is likely to be affected. A simple blood test can now tell parents whether they are carriers. If so, the mother can have a tiny fiber snipped from the membrane surrounding her eight- or nine-week-old embryo and analyzed for its DNA. The result tells whether the embryo has inherited the defective gene from only one parent and will be healthy or from both parents and will be diseased. The parents can then decide if they want the pregnancy to be terminated.

Bernadette Modell, a London pediatrician, experienced the distress which the upbringing of severely crippled thalassemic children brought to the Cypriot families living in England, and enrolled colleagues to help her offer antenatal diagnosis to expectant mothers. She soon found them crowding her clinic, and her work proved so successful that doctors from Mediterranean countries came for training to organize antenatal diagnosis there, with the result that by the end of 1983 the number of thalassemic babies born per year had been reduced from 70 to 2 in Cyprus, from 300 to 150 in Greece, from 70 to 30 in Sardinia, and from 25 to zero in the Italian city of Ferrara. She has recently been to a meeting in Bangkok attended by physicians from the Southeast Asian countries, where the disease is most frequent. Those attending decided to initiate a pilot study in antenatal diagnosis of thalassemia.

Physicians have recently tried to cure thalassemic children by bone marrow transplants rather than genetic engineering. E.D. Thomas at the Fred Hutchinson Cancer Research Center in Seattle, who pioneered such transplants for children with leukemia, was the first to try this, and he was followed by physicians in England and Italy; by now nine out of eighteen such transplants have been successful.

Another method under discussion for the treatment of genetic diseases is the introduction of missing genes by a virus. The gene will first be incorporated into the chromosome of a harmless relative of the polio virus. Bone marrow cells of the patient will then be withdrawn and incubated with the virus to allow its genes to be incorporated into the cell’s chromosomes. The cells will then be put back into the patient, with the hope that they will produce the enzyme coded for by the missing gene. This method would initially try to relieve two particularly severe genetic diseases where even a small fraction of normally functioning genes would help.

Sickle-cell anemia is a similar, though not usually quite as severe, disease, common mainly among blacks. For example, one fifth of the population of South Carolina are carriers of it. Thanks to the work of Yuet Wai Kan at the Howard Hughes Medical Institute in San Francisco it can now also be diagnosed at eight weeks of pregnancy, but it would take money for antenatal clinics and a doctor with the humanity and drive of Bernadette Modell to spread the knowledge that this can be done, before the incidence of the disease can be significantly reduced.

Not all common genetic disorders can yet be diagnosed before birth. Down’s syndrome and spina bifida can. Cystic fibrosis has so far eluded all attempts to discover its genetic cause, but biochemists have discovered that in mothers carrying a cystic fibrosis fetus, the activity of a certain enzyme in the fluid surrounding the fetus tends to be depressed. This finding has allowed a team in Edinburgh to diagnose cystic fibrosis at seventeen to nineteen weeks of pregnancy in about 90 percent of cases where the disease was later proved to be present. Hemophilia and some muscular dystrophies can now be diagnosed if they are inherited, but they often arise from new mutations that become apparent only after the baby’s birth. In summary, the prospects are that antenatal diagnosis will drastically reduce the number of babies born with severe genetic diseases long before these can be cured.

Feinberg sees the future mainly as a collection of technological fixes in the United States, but he does not consider how science might be used to eliminate poverty, ignorance, and disease in the rest of the world. This surely is our greatest challenge. Nor does he adequately address questions about the future that concern us in the Western world, where adult working lives are cut short mostly by cardiovascular diseases, cancer, and traffic accidents. Does science in the twenty-first century offer any cures for them?

Feinberg has very little to say on these important questions. He tells us almost nothing about the future of cancer research except that the “war on cancer” initiated by President Nixon has so far been unsuccessful. In fact, the past few years have seen the greatest advances in cancer research since Peyton Rous discovered the first cancer virus in chickens in 1910. Molecular biologists have pinpointed the cancer-producing activity of bird and animal viruses to specific genes and have shown that, upon infection, these genes are transferred from the single chromosome of the virus to a chromosome of the host.

This work was not immediately relevant to human cancers, because hardly any of them are caused by viruses, but it was followed by another discovery that has now brought us close to an understanding of the causes of some of the most frequently occurring human cancers. Molecular biologists found that genes similar to those of viral cancer genes are part of our normal genetic makeup. Certain mutations can turn these genes into cancer genes, and the positions where these mutations occur are exactly those where the normal human genes differ from their viral counterparts. In some cancers these normal genes are unchanged, but transposed to other chromosomes. It seems that some of these genes control cell duplication in ways not yet completely understood, and that mutations or transpositions allow cell duplication to get out of hand.2 Knowledge of the exact molecular mechanism which causes normal cells to become malignant may not lead directly to prevention or cure of cancer, but it is the first requisite for it.

Another very promising advance is the discovery that the human body itself makes proteins capable of fighting heart attacks and cancer. Molecular biologists have isolated the genes of some of these proteins, cloned them, incorporated them in bacteria, and used the bacteria to manufacture these proteins in quantity. One of the proteins, now under clinical trial, dissolves blood clots formed in arteries during heart attacks and another makes breast tumors in mice die off. The blood-clotting proteins that hemophiliacs lack may soon be produced in this way, which would eliminate the present danger of viral infection. What Helena says in All’s Well that Ends Well, that “Our remedies oft in ourselves do lie,” may be literally true and lead to great medical advances that no one has foreseen.

Feinberg does not address the problem of reducing injuries and loss of life from traffic accidents. If, like Feinberg, I were inclined to plan a scientific utopia, I would try to prevent injuries from road accidents by equipping all cars with microcomputers that guide them safely to their destinations at publicly controlled speeds, a measure that would pay for itself through the enormous savings in medical and Social Security costs.

In 1982 road accidents in Britain killed 6,000 people, severely injured 80,000, slightly injured 250,000, and cost more than $3 billion. Thousands of victims suffer spine injuries that paralyze them permanently from the neck or from the hips downward, depriving them of control of their limbs and bladder. Advances in microelectronics and surgery enabled a London neurologist, Giles Brindley, to implant microelectrodes in the spinal cord that restored bladder control. His work has been taken up by others, and by now these electrodes work well in sixty patients. This gives him hope that implanted electrodes will one day enable him to make the lame walk and use their hands, the deaf hear, and the blind read print.

Some of Feinberg’s book is devoted to modern concepts of cosmology. He does not forecast package tours to the rim of black holes nor does he advocate the colonization of space as Freeman Dyson did in Disturbing the Universe. Such fantasies may become technically feasible, but I doubt that even town dwellers used to commuting in their tightly closed cars from their tightly closed offices would want to live in space where they can never breathe fresh air or see a tree or hear a bird, while looking through the porthole of their ship, like Oscar Wilde’s prisoner in The Ballad of Reading Gaol,

With such a wistful eye
Upon that little tent of blue
Which prisoners call the sky

—except that theirs would be black.

I found Feinberg’s chapters dealing with the birth and nature of matter hard to comprehend. For example, the paragraph that follows conveyed no meaning to me:

The broken symmetry of the properties of particles is a consequence of a broken symmetry of the underlying quantum field. The equations that describe quantum fields are thought to be symmetric; there are simple mathematical relations between the equations describing different fields, such as those associated with quarks and those associated with electrons. However, physicists have realized over the past twenty years that many of these equations have solutions that are not symmetric. These solutions correspond to average levels of the quantum field in some region of space that is different for one field than for another. When this situation applies in some region, the symmetry for those fields is said to be broken. Because these average field values influence the properties of any particles present in the region, these particles may also be observed to differ, even though they are described by similar equations.

While I enjoyed every page of another recent book on a similar subject, Steven Weinberg’s The Discovery of Subatomic Particles (W.H. Freeman, 1983), I had to force my way through Gerald Feinberg’s prose. Weinberg makes his reader share the exciting scientific adventures of people of flesh and blood, and he asks himself at every sentence: Would this convey any meaning if the subject were new to me? Robert Graves once said that the writer must cultivate “the reader over your shoulder.”

Feinberg’s glib forecasts about the future of science are linear extrapolations of current progress, carried into the clouds of science fiction. I believe that scientists writing for the general public should keep their feet on the ground, since otherwise they destroy their credibility. Besides, precisely because the human mind is not like a computer, past progress has rarely been linear, and the greatest advances, like Puck, have popped out of unexpected corners.

This Issue

September 26, 1985