All the major belligerents in World War I save one were party to the Hague Convention of 1899, whose signers had agreed “to abstain from the use of all projectiles the sole object of which is the diffusion of asphyxiating or deleterious gases.” The exception was the United States. The idea of outlawing poisonous weapons dated back to the American Civil War, but Alfred Mahan, the naval theorist and an American delegate at the Hague conference, explained that no such chemical artillery had yet been developed and that “until we knew the effects of such asphyxiating shells, there was no saying whether they would be more or less merciful than missiles now permitted.” Mahan added that

it was illogical, and not demonstrably humane, to be tender about asphyxiating men with gas, when all were prepared to admit that it was allowable to blow the bottom out of an ironclad at midnight, throwing four or five hundred into the sea, to be choked by water, with scarcely the remotest chance of escape.1

However, the concerns of the European nations went beyond considerations of tenderness to troops in the field. The emergence of the European chemical industry in the second half of the nineteenth century, especially in Germany, had given rise to fears that lethal gases could be produced as weapons that, if used on the battlefield, would drift with the winds, jeopardizing civilians. They were thus outlawed in advance of being produced.

The first significant gas attack of the war, launched by the Germans at Ypres on April 22, 1915, kept to the letter of the convention, if not its spirit, by releasing chlorine gas from 5,730 cylinders fixed in place, rather than from projectiles. The chlorine formed a thick green-yellow cloud about five feet in height that wafted westward on a breeze toward the Allied trenches and gradually grew to a height of about thirty feet. Soon hundreds of men were choking and vomiting and dying while the rest of the troops fled in panic to the rear. A German officer recalled of the gas attack:

…The commission for poisoning the enemy, just as one poisons rats, struck me as it must any straightforward soldier: it was repulsive to me. If, however, the poison gas were to result in the fall of Ypres, we would win a victory that might decide the entire campaign. In view of this worthy goal, all personal reservations had to be silent. So onward, do what must be done! War is necessity and knows no exceptions.

Following such logic, the Allies established chemical warfare programs, and so did the United States, which consolidated its various new chemical weapons efforts in May 1918 into an Army Chemical Warfare Service. By then, both sides had abandoned the legal nicety of the mode of attack used at Ypres and were firing artillery shells filled with different gases, including phosgene, which was eighteen times more toxic than chlorine, and mustard agent, an oily astringent that wounded or killed by burning skin, eyes, bronchia, and lungs. By the Armistice, in November 1918, the two sides between them had deployed some twenty-two different chemical agents, delivering them by shells, mortars, grenades, and aerial bombs. All told, some 560,000 people fell victim to gas.2

Advocates of chemical weapons insisted that they were humane instruments of war, killing far fewer of their victims than bullets and high explosives did, but the weapons were memorably indicted for their effects, notably in John Singer Sargent’s Gassed of 1918–1919, a searing depiction of a file of troops blinded by mustard gas; and in the poet Wilfred Owen’s “Dulce et Decorum Est” of 1917, his vivid rendering of the agony of a gas victim. Seen as repugnant on the battlefield, chemical weapons were made all the more threatening by the expectation, arising from the development of the airplane, that they could now be turned deliberately against cities and civilians, including women and children. Critics, military men among them, found that prospect barbarous. General John J. Pershing, the head of the American Expeditionary Force in Europe, spoke for many when he declared that “chemical warfare should be abolished among nations, as abhorrent to civilization.”3

According to a poll in 1922, the American public almost unanimously supported an international ban against chemical weapons, and so did a sizable number of Europeans. At Geneva in 1925, an international conference on the arms trade adopted a “Protocol on the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare.” However, although in the next decade some forty nations would ratify the Geneva Protocol, it languished in the US Senate, in effect defeated by a gas lobby including Chemical Warfare Service officers, leading chemists, the American chemical industry, and a number of war veterans who subscribed to the view of gas as a humane weapon. The lobby managed to keep the protocol bottled up in the Senate Foreign Relations Committee, contending that a strong chemical arsenal was needed to deter other countries from starting wars.


Despite the lobby’s victory, the isolationism of the era kept military budgets down, including appropriations for the Chemical Warfare Service. Moreover, much of the professional military establishment continued to dislike chemical weapons, finding them troublesome because gas wandered wherever the winds might take it and indecisive because protective clothing and masks could defend soldiers against it. Military leaders also worried about the adverse public reaction to the use of chemicals against civilians.

In World War II, however, fearing that the Germans might once again resort to gas, the United States and Britain adopted a two-pronged policy for chemical warfare that foreshadowed their later strategy for nuclear weapons—a no-first-strike commitment coupled with strong deterrence. President Roosevelt declared that the nation would under no circumstances resort to chemical weapons unless they were deployed against its forces first and that it would then retaliate immediately and fully against the Axis power using them. By then the United States had revived its capacity to wage chemical warfare, building facilities to manufacture mustard, phosgene, and other poisons. The Allies were prepared to drop hundreds of tons of phosgene and mustard gas on German cities within forty-eight hours of an attack.

Roosevelt’s policy of no-first-use was rooted in his conviction that chemical weapons were barbaric and inhumane. Winston Churchill’s view was pragmatic. In July 1944, after the Germans had begun their attacks on England with V-1 rockets, an indiscriminate assault on civilians, Churchill asked his military chiefs for a “cold-blooded calculation” of the payoffs of using poison gas against German forces. “It is absurd to consider morality on this topic…,” he wrote, adding, “In the last war the bombing of open cities was regarded as forbidden. Now everybody does it as a matter of course.” The military advisers successfully persuaded Churchill against first use of chemical weapons, pointing out that the costs would likely include the chemical bombing of British cities by the Germans.4

Before the war, the Chemical Warfare Service, short on funds, staff, and public enthusiasm for its mission, had busied itself with devising peaceful uses for its chemicals, particularly as insecticides.5 Poison chemicals are dual-use technologies. Those lethal to human beings can be used to attack bugs, and those murderous to insects can be strengthened to kill human beings. In fact, in 1936, at a branch of the chemical giant I.G. Farben in Leverkusen, Germany, a chemist named Gerhard Schrader discovered a new lethal agent while trying to devise a synthetic insecticide that would kill weevils in grain silos. Schrader added phosphorus to an organic molecule, thus making a chemical called an “organophosphate”; then, finding it promising as a bug killer, he added cyanide to the molecular mix. Tests on animals, including apes, revealed that as little as a tenth of a milligram of the substance attacked the nervous system, producing dramatic effects, including contraction of the pupils, shortness of breath, convulsions, and finally death.

The German military recognized Schrader’s innovation as the first major advance in chemical weaponry since mustard, code-named it “Tabun,” and encouraged Schrader and I.G. Farben to pursue further work on nerve agents. In 1938, Schrader devised an improvement that became “Sarin,” acronymically named for himself, Otto Ambros, the head of the chemical weapons effort at I.G. Farben, and two officers in army ordnance. After the invasion of Poland, the German army asked I.G. Farben to construct a plant at Dyhernfurth that was intended to produce one thousand metric tons of Tabun per month.

The inventions of Tabun and Sarin inaugurated a new era in chemical weapons, and its story is authoritatively told in Jonathan B. Tucker’s War of Nerves: Chemical Warfare from World War I to Al-Qaeda. A chemical and biological weapons specialist at the Monterey Institute of International Studies in California, Tucker advocates the abolition of chemical weapons. His book, a chilling story told with lucidity and restraint, suggests why it has been so difficult to get rid of them. It covers many countries and is grounded in numerous archival sources, including government records and collections on chemical and biological warfare as well as national security. While Tucker’s book is short on analysis, it is outstanding among the histories of chemical warfare because of its focus on nerve agents, its depth of coverage, and, as its subtitle indicates, its range, including chemical warfare in Hitler’s war, the spread of chemical weapons to third-world states, and their possible use by terrorists.

At several points during the war, some of Hitler’s advisers urged that Tabun be dropped by air on enemy cities or used against Soviet troops. Hitler of course had no compunction about gassing Jews to death in concentration camps; he was averse, however, to using chemical weapons, partly because he had been gassed in 1918, while serving in the German army, but, more importantly, because Otto Ambros told him in 1943 that Tabun production was behind schedule and that the United States had likely developed Tabun or could do so quickly and produce it in great quantities.


American scientists had not devised any nerve agents and the Allies had only intimations of the German work. Surprised when they discovered its extent and sophistication, the United States and Britain launched postwar programs of research and development in nerve agents, moving a number of German scientists and engineers to the US, Britain, and Canada. Among them was Gerhard Schrader, who reported that the Soviets had captured the Tabun plant at Dyhernfurth. In 1947, as Soviet–American relations were deteriorating into cold war, President Harry Truman withdrew the 1925 Geneva Protocol from its longstanding pending status before the Senate Foreign Relations Committee, and that year the United States, Canada, and Great Britain entered into a tripartite agreement that coordinated their respective chemical weapons programs and gave each country access to the findings of the others.

The hardening of the cold war in 1949 and 1950—with the testing of the Soviet atomic bomb, the Communist takeover of China, and the outbreak of the Korean War—prompted hawks in and out of the administration to call for abandoning Roosevelt’s retaliation-only policy. The Air Force and the Army, including the Chemical Corps (the sanitized name given to the Chemical Warfare Service in 1946), and civilian defense consultants all touted chemical weapons as an alternative to the former nuclear monopoly for holding back Soviet ground forces in Europe and countering the Chinese human-wave attacks in Korea.

But before the start of the Korean War, Truman reaffirmed the policy of using chemical weapons only for retaliation. He had been advised by General Omar Bradley, the chairman of the Joint Chiefs of Staff, that neither the American people nor their European allies would accept a different strategy. During the war, the two-pronged policy remained in place. Research, development, and production of chemical weapons, particularly nerve agents, were stepped up so that, beginning in the 1950s, the United States engaged with the Soviets in what Tucker aptly calls “a shadowy chemical arms race.”

The two-pronged policy was reaffirmed by President Eisenhower in March 1956. However, that October, Tucker writes, a secret Defense Department directive “essentially freed the Pentagon to initiate the use of poison gas during a conventional conflict,” thus tacitly liberating the military from the restraint of no-first-use.

The administration of John F. Kennedy, aware that the Soviet nuclear arsenal was by then formidable, reemphasized the need to develop chemical and biological weapons, arguing that conventional forces must be strengthened in order to avoid the risk of full-scale nuclear war in deterring and countering a Soviet invasion of Western Europe or Japan. Chemical weapons were comparatively cheap and they could also be adapted to a limited conflict, such as the brushfire wars that might break out in the third world. Between 1961 and 1964, the Chemical Corps’s budget tripled. Mustard and Sarin munitions were secretly deployed on Okinawa without informing the Japanese government. The Pentagon negotiated the right to store chemical weapons in West Germany, France, and Italy at US-controlled depots.

All the while, the manufacture of chemical weapons was even more shrouded in secrecy than the parallel nuclear arms race. Tucker reports that at most 5 percent of Congress was aware of the chemical weapons complex that the two-pronged policy called into being. The Rocky Mountain Arsenal, near Denver, Colorado, included a plant to produce Sarin—the military’s gas of choice in the early 1950s—a venture that contaminated local groundwater and jeopardized bird life. Sarin was succeeded as the preferred gas by “VX,” a new and far more toxic nerve agent that was devised at the Edgewood Arsenal, near Baltimore, Maryland, and was soon produced at a sprawling, fully automated plant three miles south of Newport, Indiana, on State Highway 63.

The Soviets, for their part, sponsored research into nerve agents but preferred to exploit America’s new gases, notably VX, whose formula they obtained through espionage. From 1963 onward, Warsaw Pact plans for war against NATO included the surprise use of chemical weapons on the battlefield. In the later 1950s and again in the early 1960s, the British shut down much of their research on nerve agents and production of them, finding it pointless to pay for weapons that they could not use under the Geneva Protocol and concluding, in any case, that chemical weapons would likely not prevent a battle with nations of the Warsaw Pact from escalating to a nuclear level. Still, the British continued to maintain stockpiles of chemical weapons as a deterrent against their use by the Warsaw Pact and to collaborate with the United States and Canada on research in chemical agents through what was now called the Tripartite Technical Cooperation Program.


In mid-March 1968, a shift of wind during a live-agent test of VX at the Dugway Proving Ground, in northwest Utah, rained oily droplets of the nerve agent onto several large flocks of sheep grazing in Skull Valley, some twenty-seven miles northeast of the test area. The sheep began acting strangely and soon several thousand were dead or severely injured. Despite efforts by Dugway officials to cover up the incident, the connection between the test and the disaster was publicly established. The next year, people on Okinawa were outraged when a Sarin leak at an army depot revealed that the gas had been placed on their island without their knowledge, let alone consent. The United States was forced to remove the weapons from the island and to admit that it had stockpiled chemical weapons in West Germany.

The incidents broadly exposed the existence of the US chemical weapons program and, amid the climate of the war in Vietnam and the environmental movement, provoked outcries against it. Chemists and biologists were already criticizing the use of herbicides in Vietnam, and many nations were castigating the United States for deploying riot-control gases there. Now investigations by reporters and Congress revealed that the military was storing toxic chemicals near—and transporting them through—heavily populated areas. The chemical weapons program was attacked as a threat to public health and safety, to the environment, and to the United States’ moral standing internationally.6

The outcry had an effect on President Nixon, who was eager to make his mark as an international statesman. In November 1969, Nixon renounced the use of offensive biological weapons and lent the support of the United States to a proposed international ban on their development, stockpiling, and production.7 He restored the no-first-use policy for chemical weapons and declared that the United States would restrict further production of them, keeping those it had only as a deterrent. He also pledged that the US would seek an international ban on such weapons and that his administration would resubmit the Geneva Protocol to the Senate for ratification. Late the next year, Nixon ended the use of herbicides in Vietnam, but he allowed continued use of riot-control gases there, holding that such action did not violate the Geneva Protocol.

The Senate ratified the protocol in 1975—the endorsement had been slowed by disagreement over whether it covered riot-control agents—but the agreement prohibited only the use of chemical weapons, not their development. Arms analysts were now calling the weapons developed since 1915 “unitary weapons,” in contrast to “binary weapons,” which were seen as those of the future. The unitary variety was chemically complete, containing a single substance ready to do its lethal work. The binary variety consisted of two separate components, neither of which was toxic; they became lethal only when combined in, for example, a shell in flight. Binary weapons could be stored in domestic and foreign military depots without posing the threats to the environment and public health that had provoked objections to the storage of unitary weapons. Nixon’s pledge in 1969 had left open a loophole for the use of binary weapons should they be developed. The Chemical Corps, seeing that its future lay with binaries, devoted a quarter of its research budget to them in 1970, and two thirds of it in 1973.

That year, the Arab–Israeli Yom Kippur War renewed fears in the United States of Soviet abilities to make chemical weapons. With the help of Moscow, the Egyptians had begun developing a chemical arsenal in the 1960s. Now the combatants on both sides of the war, including Syria, were equipped with chemical weapons, and the Egyptian and Syrian forces possessed sophisticated defenses against poison gas such as gas masks and antidotes for nerve agents. In 1974, General Creighton Abrams, the army chief of staff, told a congressional committee, “Our forces are not equipped in that fashion.”

The heightened fear of the Soviet chemical arsenal stimulated a decade-long struggle over whether the United States should proceed with the development of binary weapons. Blue-ribbon defense panels and anti-Soviet chemical hawks contended that the nation’s unitary arsenal was inadequate to deter a Soviet ground attack in Europe; binary weapons, they said, would be more acceptable to the public and to military planners. Some opponents castigated binary weapons just as their predecessors had denounced unitary ones, arguing that their use particularly threatened innocent civilians. Others warned that going ahead with them would jeopardize the disarmament talks on chemical weapons, which Nixon had encouraged and were underway in Geneva. The opponents held the hawks at bay until, according to reports from refugees, the Soviets used chemical weapons against the mujahideen guerrillas after their invasion of Afghanistan in 1979. President Carter had previously blocked funding for binary weapons development. Now, in 1980, he signed an omnibus military construction bill that had passed Congress by large majorities in both houses and that included authorization for a pilot binary program.

President Reagan wanted to turn the pilot program into a production program, but Congress refused to go along during his first administration. Because Reagan’s initiative threatened to escalate the chemical arms race rather than bring it to the halt that Nixon had proposed, opposition came from both parties and from both liberals and conservatives. Reagan’s move was closely contested in the Senate, where, in 1983 and 1984, it squeaked through only because of the tie-breaking vote of Vice President George H.W. Bush, and it repeatedly failed in the Democratic-controlled House. However, in 1985, the binary authorization passed the Senate easily, having gained the votes of several senators who were persuaded that the US was falling behind the Soviets. Then the House voted for the weapons just a few days after Hezbollah hijacked TWA Flight 847, killing one of five navy men on the plane. Tucker considers the hijacking, followed by the vote, a major turning point in American chemical weapons policy.

Binary weapons were also being developed in Britain, several Warsaw Pact countries, and the Soviet Union, but opposition to chemical weapons in Western Europe was strengthening. Whatever the expectations that binaries would be politically acceptable, NATO insisted that the new weapons be stored at bases in the continental United States and brought to Europe only in case of war or crisis. West German Chancellor Helmut Kohl added the requirement that the 120,000 unitary chemical weapons that the United States had long stationed in Europe be withdrawn, a demand that Reagan heeded, much to the outrage of conservatives.

The end of the cold war brought the long US–Soviet chemical arms race largely to a halt. The key shift came in 1990, when the United States concluded a sweeping bilateral chemical disarmament agreement with the Soviets that committed the two countries to cease production of all chemical weapons, including binaries, and to reduce their chemical arsenals within eight years to five thousand metric tons.

In September 1992, the Geneva negotiations finally yielded a chemical weapons convention and within several months it was endorsed by the UN General Assembly and signed by 130 countries. A major extension of the Geneva Protocol, it banned the development, production, stockpiling, and transfer of chemical weapons and required that all existing stockpiles of them be destroyed within a decade. President George H.W. Bush signed the convention, but President Clinton was suspicious that Russia, despite Gorbachev’s pledges, was developing a new class of super nerve agents; he therefore delayed its submission to the Senate. When it arrived there Senator Jesse Helms, the arch-conservative Republican, kept it in the Foreign Relations Committee. Tucker points out that the American chemical industry, in a sharp reversal of the role it had played in blocking approval of the Geneva Protocol in the 1920s, helped pry it loose. Its leaders feared that the industry would suffer commercially in Europe if the US failed to ratify the convention, and in the interest of public relations they wanted to demonstrate that they were not chemical killers. In 1997, after heated debate, the Senate endorsed the measure by almost two to one, which meant that the United States was now committed to forgo future development of chemical weapons and to destroy its existing stockpiles over the next ten years.


Egypt, Syria, Lebanon, Libya, and Iraq all refused to sign the Chemical Weapons Convention, contending that chemical weapons could be eliminated from the Middle East only if there were a total regional ban on all weapons of mass destruction—including, so it was implied, Israel’s nuclear arsenal. Since at least the 1970s all five countries had possessed the technical capacity to acquire and deploy chemical weapons. In its war with Iran in the 1980s, Iraq showed that, in the absence of any deterrent threat of retaliation, chemical weapons could be used to military advantage.

In 1981, his troops stalled in their year-old war against Iran’s far more numerous army, Saddam Hussein saw in chemical weapons what he called a “force multiplier.” Iraq built a research, development, and production facility for chemical weapons fifty miles northwest of Baghdad, naming it the Muthanna State Establishment for Pesticide Production. Iraq obtained the necessary chemicals and equipment from a variety of suppliers, including more than thirty Western firms, fourteen of them in West Germany.

Tucker describes Iraq’s first use of chemical warfare against Iranian troops in 1983 and its subsequent use of Tabun and Sarin in 1986. In its chemical attacks in 1987 and 1988 against Kurdish villages and the Kurdish city of Halabja, Iraq used nerve agents among other weapons. Some five thousand people were killed at Halabja and another seven thousand were injured or suffered subsequent illnesses. Evidently before the attacks against the Kurds, the chemical campaign was discussed at taped meetings of Saddam’s Revolutionary Command Council of the Baath Party High Command. Izzat Ibrahim al-Douri, Saddam’s vice-president, wondered aloud whether chemical attacks would be effective against civilians, suggesting that their use might provoke an international outcry.

“Yes, they’re very effective if people don’t wear masks,” Saddam responded.

“You mean they will kill thousands?” Douri asked, to which Saddam replied, “Yes, they will kill thousands.” He added matter-of-factly, “They will prevent people eating and drinking the local water, and they won’t be able to sleep in their beds. They will force people to leave their homes and make them uninhabitable until they have been decontaminated.”8

In charge of the campaign was Ali Hassan al-Majid—later known as Chemical Ali—who was also caught on tape saying, “I will kill them all with chemical weapons. Who is going to say anything? The international community? F—them!”

The international community, in fact, said very little. Reagan administration officials protested Iraq’s use of chemical weapons, but not very vigorously; they were eager to maintain Iraq as a counterforce to Iran in the region. For example, in December 1983, in a meeting with Iraq’s deputy foreign minister, Tariq Aziz, special envoy Donald Rumsfeld expressed the United States’ principled opposition to the use of chemical weapons but said that Washington’s interest in normalizing relations with Iraq was “undiminished.” In September 1988, a month after the end of the Iran–Iraq War, the New York Times columnist Flora Lewis criticized the “deafening silence” of Western governments in the face of Iraq’s extensive use of chemical weapons. She predicted that the record of indifference and inaction would encourage other states in the Middle East to acquire chemical arms as deterrents of their own.

In fact, in October, Ali Akbar Hashemi Rafsanjani, the speaker of the Iranian parliament and future president of the country, remarked:

Chemical and biological weapons are the poor man’s atomic bomb. We should at least consider them for our defense. Although the use of such weapons is inhuman [sic], the war taught us that international laws are only drops of ink on paper.

In 1989, Iran signed contracts with Swiss and German chemical firms to build pesticide plants that could be converted into chemical weapons factories. Egypt and Syria also expanded their capacities for chemical warfare.

At the start of the Gulf War in 1991, Saddam authorized the use of chemical weapons against Israeli, Saudi, and American forces, but Tucker writes that he was deterred from actually using them by a warning from the United States that it would deliver a devastating response if Iraq conducted chemical warfare. The cease-fire terms in UN Security Council Resolution 687 prohibited Iraq from possession of chemical, biological, or nuclear weapons, or missiles with a range of more than 150 kilometers.

In 1991, shortly after the approval of the resolution, Iraq declared much of the contents of its chemical arsenal, and during the next three years UN inspectors destroyed thousands of chemical weapons and supplies. However, they could not account for the entire declared arsenal. The Iraqis claimed that in 1991 they had secretly destroyed a significant part of their chemical arsenal themselves, but they failed to supply documentary or physical evidence that they had done so. Tucker provides a convenient summary of the disastrous cat-and-mouse game that Saddam played with the inspectors and the UN between the early 1990s and the United States invasion of Iraq in March 2003. Contrary to a US intelligence estimate in the fall of 2002 and Secretary of State Colin Powell’s claims to the UN Security Council in February 2003, Saddam did not possess a chemical arsenal. On October 6, 2004, the report of Charles Duelfer, the head of the US-sponsored Iraq Survey Group, concluded, Tucker writes, that “Iraq had destroyed its undeclared chemical stockpile in 1991 and had not renewed production thereafter.” Tucker adds, “Saddam had deliberately created ambiguity about whether or not he possessed chemical weapons so as to deter Iran from attacking and to intimidate his domestic enemies.”


The willingness of foreigners to supply dual-use chemicals and facilities has made it extremely difficult to control the acquisition of chemical weapons by nations unable to manufacture them on their own. It is all the more difficult to prevent the acquisition and use of chemical weapons by terrorists, against whom the deterrent of retaliation is futile. For example, the Aum Shinrikyo cult in Japan, having accumulated several hundred million dollars from legal and illegal businesses, managed to purchase the chemical precursors of Sarin and manufacture the nerve agent at its main compound, near Mount Fuji. In June 1994, Aum operatives failed in an attempt to gas a dormitory for judges, but in March 1995, they successfully released Sarin from plastic bags into Tokyo subways at a central transfer station during the height of the morning rush hour. Twelve people died in the attack; hundreds more were injured, many of them severely and permanently. The incident left Tokyo terrified for months, with many refusing to ride the subways.

The Tokyo subway attacks heightened apprehensions that chemical weapons might become a weapon of choice for terrorist attacks against American and European cities. In 1998, after terrorist bombings of two American embassies in Africa, President Clinton ordered a Tomahawk missile strike against targets in Afghanistan and Sudan, including the al-Shifa chemical factory in Sudan that was linked to al-Qaeda and was said to be manufacturing VX. The evidence that al-Shifa was producing chemical weapons was controversial—Sudanese officials insisted that it was manufacturing pharmaceuticals. But two members of the National Security Council staff cited testimony by a former al-Qaeda operative that chemical weapons were being made in Khartoum; and they argued that physical evidence of a chemical weapons component had been found outside the al-Shifa plant. Moreover, Richard Clarke, the White House counterterrorism expert, testified, Tucker writes, that “he continued to believe that the Al-Shifa factory had been involved in VX production.”9

Tucker concludes that although the Chemical Weapons Convention applies to states,

it can be of value in slowing the spread of toxic weapons to rogue states and terrorist organizations …by strengthening the restrictions on trade in precursor chemicals and increasing the vigilance of the international chemical industry about the proliferation threat.

The convention has been signed and ratified by all but a few of the world’s nations. And in response to an initiative by President George W. Bush in May 2003, more than sixty countries have agreed to cooperate in joint operations to seize illicit shipments of weapons of mass destruction, including their constituent parts, to and from states and nonstates. But several countries, including Syria, North Korea, and, according to US intelligence estimates, probably Iran, maintain chemical weapons stockpiles, and China has refused to sign the Bush initiative. In Iraq, bombs releasing chlorine gas have become increasingly common; in late February, American troops in Iraq raided a bomb-making factory in Falluja that contained cylinders of chlorine gas.

Any agreement to halt proliferation is vulnerable to chemical innovation of the type that Schrader achieved in 1936, which could lead to the development of chemical weapons that are deadlier and easier to use. For the moment, the existing nerve agents remain the poor man’s atomic bomb, coveted by rogue nation-states and terrorists alike. In January 2004, President Bashar al-Assad suggested that chemical weapons have enabled Syria to counter Israel’s nuclear weapons, adding, “It is not difficult to get most of these weapons anywhere in the world…at any time.” Once obtained, they might well be used, especially if the nations of the world, including the United States, fail to enforce their strictures against them, as the US failed to do in the 1980s in Iraq.

This Issue

April 12, 2007