Peter van Agtmael/Magnum Photos

Robert Chamberlain, a veteran of two tours to Iraq and a Rhodes scholar who has struggled with post-traumatic stress disorder, on the day he was promoted to the rank of army major, Brooklyn, 2011

Almost every major war brings the introduction of a terrifying new weapon. During the US Civil War, Union and Confederate troops employed a revolutionary bullet—known as the Minié ball after its creator, the French army captain Claude-Étienne Minié—that spun from the gun barrel, dramatically increasing its velocity, accuracy, and lethality. World War I saw submarines, poison gas, and nonstop artillery barrages. World War II introduced indiscriminate air assaults, kamikaze attacks, and, of course, the atomic bomb. The Vietnam War brought the helicopter gunship, napalm, and chemical defoliants. With these weapons came an ever-expanding vocabulary to depict their hellish consequences, from shell shock to radiation poisoning to Agent Orange Syndrome.

In 2005 Barack Obama told his Senate colleagues that traumatic brain injury (TBI), a condition afflicting thousands of soldiers, “could become the ‘signature wound of the Iraq War.’” Little studied at the time, it appeared to result from the impact of roadside bombs planted by insurgents. Before long, however, the term “signature wound” had come to include the vaguely defined condition known as post-traumatic stress disorder (PTSD). These “are two of the signature wounds of this conflict,” an army official told Congress in 2007. “We’re working [on] them, but we have a lot of work to do.”

David Kieran is hesitant to describe these conditions as the raging epidemics portrayed in much of the media. “This is not to say…that [they] are not materially real,” he writes in Signature Wounds. “Of course they are.” The problem, as Kieran sees it, is that the prevailing narrative was shaped by the forces most opposed to the conflicts in Iraq and Afghanistan: the antiwar left. As a result, he contends, the distinction between the “ordinary and temporary readjustment challenges” faced by all veterans and the “long-term, debilitating psychological issues” faced by some was seriously blurred.

It’s natural for critics of any war to stress the toll on soldiers and civilians alike. Yet even a cursory reading of Signature Wounds demonstrates that public concern for the mental health of returning Iraq and Afghanistan veterans ranged widely across the political map. It may be true, as Kieran contends, that congressional Democrats like Obama focused on TBI to help neutralize Republican charges that those who opposed the Iraq war didn’t much care about the welfare of the troops who were fighting it. But this was hardly their primary motive, and to blame the left for concocting an inflated story of a mental health crisis, much less ensuring its widespread distribution, is an extraordinary stretch.

Kieran’s view is all the more curious because he possesses an otherwise sure grasp of the pressures of modern warfare and the US military’s stepped-up efforts to address the consequences. Unlike many, he concludes that the Pentagon has become deeply invested in mental health issues, and that the Department of Veterans Affairs—everyone’s favorite whipping boy—has made great strides as well, despite the stinginess of politicians who purport to “love our vets” but rarely provide the resources needed to assist them. Still, says Kieran, it wasn’t the military’s foresight that brought about these changes; it was rather the brutal, spirit-breaking conflicts in Iraq and Afghanistan that plague us to this day.

One of the weaknesses of Signature Wounds is the absence of historical background. Kieran ignores the age-old fascination with mental trauma on the battlefield, from Herodotus to Shakespeare to Stephen Crane and beyond. In the seventeenth century, army doctors in Europe wrote of a condition they called “nostalgia,” in which soldiers fighting for long stretches, far from home, became “sad, taciturn, listless, solitary…indifferent to everything which the maintenance of life requires.” Worse still, wrote one physician, “neither medicaments, nor arguments, nor promises nor threats of punishment are able to produce any improvement.”

During the Civil War, medical officers used terms such as “irritable heart” and “soldier’s heart” to describe a widening list of symptoms. Dr. George Burr caused a minor sensation by contending that “serious injury to the nervous system” could occur “without the body receiving either wound or contusion.” Writing in the prestigious New York Medical Journal in 1865, Burr claimed that the impact of nearby explosions—known colloquially as “the wind of passing shells”—produced temporary blindness, hearing loss, and paralysis by compressing the brain. Most physicians of that era thought the very premise absurd, as did military leaders who viewed emotional upset of any sort as a moral failing of weak-minded men.

Civil War historians disagree about the number of troops afflicted with severe emotional trauma. Some who have studied individual units during the conflict (and after) see it as a substantial problem; others dispute this, insisting that Civil War soldiers be judged by the world in which they lived rather than by the psychiatric standards of today. Their point is that men of this era experienced warfare with different expectations than their modern counterparts—they were more stoic, more religious, and more likely to view an early death as a common occurrence, which it was.

As psychiatry advanced in the late nineteenth century, medicine became more receptive to the connection between the mind and the body. That, in turn, raised the question of whether certain personality types were especially susceptible to battle trauma, and, if so, whether a well-trained professional could weed them out in advance. During World War I, a much-quoted and somewhat alarming survey of American recruits showed fully half of them testing at or below the level of “moron,” the mental age of a child between ten and twelve. (Many blamed it on the “low intelligence” of immigrants then pouring into the US from Southern and Eastern Europe.) Less well known, however, was the military’s attempt to exclude the “insane,” “feeble-minded,” and “psychopathic.” Though accounts are sketchy, the rejection rate appears to have been quite low, around 2 percent.

The war demonstrated the failure of these early screening attempts—not just among American troops, but almost everywhere they were tried. So many British soldiers developed emotional problems during the brutal months of trench warfare that physicians invented a blanket term to describe them: shell shock. Medical opinion of battlefield trauma softened a bit: what had once been viewed as an affliction of the weak and unstable was seen, in some quarters, as a danger to all.

Military leaders almost unanimously disagreed. And so, too, did US officials who faced the heavy costs of these “nonphysical” diagnoses. The federal government spent nearly $1 billion in the 1920s and 1930s on benefits for World War I soldiers who claimed a psychiatric disability, a staggering sum, due largely to the lobbying efforts of new groups like the American Legion and the Disabled American Veterans. Fearing a repeat, the War Department developed a more comprehensive mental screening process for World War II draftees—or so it appeared. Relying on the expertise of distinguished psychiatrists such as William C. Menninger and Harry Stack Sullivan, it devised a program (including a fifteen-minute interview) to uncover signs of depression, alcoholism, “homosexual proclivities,” “stupidity,” and other “abnormalities.” In all, about two million men—or 12 percent of draftees—were rejected for psychological reasons, with another 750,000 discharged following their induction.

The effort failed miserably. What could a psychiatrist, much less a poorly trained draft board member, glean from a brief, formulaic interview? Precious little, it turned out. The program not only depleted the military’s draft pool but also stigmatized those it rejected. Meanwhile, the number of shell shock cases among the troops kept climbing as the war expanded—especially in places where soldiers experienced long stretches of combat.

Much as they had during World War I, military psychiatrists found that the best ways to treat battlefield trauma were also the simplest—rest, relaxation, and hot food, offered close to the front lines—and that most men returned to duty within several days. While more severe cases might require drugs, therapy, and hospitalization, a short, comforting break from the battlefield was usually enough.

To many in the military, however, even modest remedies smacked of coddling. The most notable example was General George S. Patton—“Old Blood and Guts” to his admirers, “Our Blood, His Guts” to wary GIs. With his riding breeches and ivory-handled pistols, Patton had led the rout of crack German armored divisions in North Africa before commanding American troops in Operation Husky, the Allied invasion of Sicily, in 1943. “Battle,” he believed, “is the most magnificent competition in which a human being can indulge.”

In Sicily, Patton was warned about “a very large number of ‘malingerers’…feigning illness in order to avoid combat duty.” Shortly thereafter, while visiting an evacuation hospital near the front lines, he encountered a recent arrival with no apparent battle wounds. He asked what was wrong, and the soldier—feverish, exhausted, disoriented—replied, “I guess I can’t take it.” Patton exploded, slapping the soldier’s face with his gloves and violently tossing him from the medical tent.

Still seething, he ordered his senior officers to deny “such cases” medical care. And a few days later, as if to show he meant business, Patton assaulted another hospitalized soldier—“It’s my nerves,” the man told the general: “I can hear the shells come over, but I can’t hear them burst”—by punching “the yellow son of a bitch” and waving a pistol in his face. “You’re a disgrace to the Army and you’re going right back to the front to fight,” Patton raged. “In fact, I ought to shoot you myself right now, goddam you!”

A medical officer at the hospital filed a report about these incidents, which reached General Dwight D. Eisenhower, the supreme Allied commander. Disgusted—if not exactly surprised—he sent Patton a note warning him that such behavior raised “serious doubts in my mind as to your future usefulness” and ordering him to apologize to the soldiers he’d assaulted. The complication, Eisenhower admitted, was that Patton’s warrior code paid big dividends in the field. “[He’s] the best ground gainer developed so far by the Allies,” Ike conceded. “Patton is indispensable to the war effort—one of the guarantors of our victory.”

Privately, Patton remained defiant, insisting that a bunch of muddle-headed pacifists had concocted a phony illness to undermine military discipline. “There’s no such thing as shell shock,” he declared. “It’s an invention of the Jews.”

Still, military psychiatry made some notable strides. The enormous jump in uniformed psychiatrists, from fewer than a hundred in 1940 to several thousand in 1945, produced a wealth of studies and anecdotal information pointing to the same conclusion: “constant exposure” to “intense combat” could turn any soldier temporarily insane, and the “breaking point” came at about 210 days. According to one military report, the focus had shifted “from problems of the abnormal mind in normal times to problems of the normal mind in abnormal times.”

Among the more enduring portraits of World War II is the relatively seamless reentry of American veterans into civilian life. “They were mature beyond their years, tempered by what they had been through, disciplined by their military training and sacrifices,” wrote Tom Brokaw in The Greatest Generation. “They stayed true to their values of personal responsibility, duty…and faith.” It’s an honorable depiction with some notable holes. In truth, the return of 16 million veterans, more than half of whom had seen combat, was a daunting prospect. Magazine stories asked, “Will Your Boy Be a Killer When He Returns?” A prominent Columbia University sociologist warned that “unless and until he can be renaturalized into his native land, the veteran is a threat to society.” More common were reports from mainstream publications like Newsweek that tens of thousands of returning soldiers had developed “some kind of psychoneurotic disorder” overseas that required further attention.

Military leaders preached a different reality: American GIs had come home stronger, fitter, more responsible, and better disciplined as a result of their service in a just and necessary war. The army even enlisted the acclaimed Hollywood director John Huston to make a documentary, Let There Be Light, about the fine work being done to help the (supposedly few) veterans who had returned with mental health issues. But trouble arose when Huston, taking his responsibility quite seriously, asserted in the film’s opening scroll that “twenty percent of our army casualties suffered psychoneurotic symptoms,” including “a sense of impending disaster, a feeling of hopelessness and utter isolation”—before proceeding to show exactly what that damage entailed.

Alarmed by the documentary, the War Department banned it on grounds of patient confidentiality, although Huston claimed to have written permission from everyone he filmed. Decades would pass before it was made public. “They wanted to maintain the ‘warrior’ myth,” Huston recalled. “Only a few weaklings fell by the wayside. Everyone [else] was a hero.”

It wasn’t just military hard-liners who felt this way. Even the highly respected General George C. Marshall, who would go on to serve both as secretary of state and secretary of defense in the Truman administration, loathed the thought of indulging veterans with psychiatric illnesses he believed to be fake or exaggerated—yet impossible to disprove. “He wears the clothes of an invalid,” said Marshall, and “he escapes from those duties which he seeks to evade…. He enjoys a life of leisure with one great goal ahead: to wit, a discharge for physical disability, a comparatively highly paid job as a civilian, and eventually a pension from the Veterans Administration.”

Progress came grudgingly. Studies of the mental health of soldiers in Vietnam noted important changes—some gleaned from the lessons of previous wars, others resulting from new combat conditions. According to army physicians stationed there, “No one served in the theatre of war for longer than a year; there was plenty of rest and recreation during the tour of duty; battles were short; soldiers had to endure few major artillery barrages.” As Kieran sees it, the Rambo-like portrait of the returning Vietnam veteran “as permanently and debilitatingly traumatized” is largely a fiction of Hollywood. “In fact,” he writes, “only about 15 percent…screened positive for PTSD, and just over 5 percent had ever had a ‘major depressive episode.’ The vast majority of Vietnam veterans, that is, went on to lead lives relatively unhindered by their wartime experiences.”*

Kieran hardly minimizes the fact that 15 percent remains a disturbing figure, or that soldiers’ morale and discipline faltered badly in the war’s final years. A good part of Signature Wounds is devoted to the military’s analysis of its strategic and cultural failures in Vietnam. What emerged, he notes, was the blueprint for an all-volunteer fighting force based on “garrison leadership,” which preaches discipline, order, and attention to the soldier’s well-being.

Behind this lay the assumption that America’s future wars would be the opposite of Vietnam: short and decisive, whether against the Communists in Europe or lesser opponents elsewhere. For a time, this held true: armed intervention in Grenada, Panama, the Balkans, and Kuwait (Desert Storm) proved relatively quick and painless. But one of the consequences, it appears, was that psychiatric issues became easier to ignore. The few mental health providers assigned to Grenada were left standing on the tarmac because their names hadn’t been included on the flight manifest. And military psychiatrists who took part in Desert Storm were surprised to find that the soldiers were more stressed by the lack of material comforts than by worries about Saddam’s “weapons of mass destruction.” Then came September 11 and its dreadful aftermath in Afghanistan and Iraq.

In 1951 General Omar Bradley famously described the proposal by right-wing hawks to expand the Korean conflict into Communist China as “the wrong war, at the wrong time, in the wrong place, and with the wrong enemy.” For America’s military, the 2003 invasion of Iraq was all that and more. The astonishing failure of the Bush-Cheney-Rumsfeld administration to plan for an extended conflict put unprecedented demands on the all-volunteer force. “The soldiers and marines…were perpetually at risk,” Kieran writes, “required to fight longer, with fewer breaks, and on more deployments than any previous fighting force that the United States had fielded.”

The army’s post-Vietnam goal had been to expand the “dwell time” between deployments. But the ideal ratio—three years home for every year in the field—proved insufficient to sustain the 150,000-troop rotation required for Iraq and Afghanistan. Before long, the ratio had fallen to two-to-one, then one-to-one, and finally (during the “surge” of 2007) to an exhausting fifteen-month deployment with just nine months in between.

Recruiting also slowed significantly as the war dragged on, leading the army to lower its personnel standards, which in turn increased the likelihood that soldiers with substance abuse issues and behavioral problems would be deployed to Iraq. Meanwhile, the burden of maintaining adequate troop strength fell to National Guard and Reserve forces, few of whom had signed up expecting to be sent to a combat zone. By 2005 these forces made up about half the units in Iraq; inadequately trained yet given the dangerous duty of protecting convoys, they suffered higher casualties than their active-duty counterparts.

The main reason was the “improvised explosive devices” (IEDs) used by insurgents to disable military vehicles. With US forces employing better body armor, the chances of surviving a shrapnel wound had dramatically increased. The problem was that soldiers exposed to IEDs regularly experienced concussive brain injuries—some severe, some less so. How, exactly, did one measure such injuries, or differentiate between them and PTSD, since both presented the same general symptoms? So little research had been done that it became a guessing game, with army doctors mostly choosing the TBI diagnosis as a way of protecting their patients. “It was physical, and people could grasp that as something that they were okay with,” the surgeon general admitted. “Having a psychological reaction to combat didn’t have the merit or cachet.”

It was the old stigma in modern garb. The difference this time was that it didn’t go unchallenged. In 2008 four-star General Carter Ham publicly acknowledged his own struggles with PTSD, which he attributed to seeing soldiers blown apart in suicide bombings. “You need somebody to assure you that it’s not abnormal,” Ham said of the insomnia, depression, and mood swings that he, and thousands more, had faced in shame and silence.

Ham quickly found an ally in General George Casey, who had commanded US forces in Iraq before becoming the army’s chief of staff. “I can’t say that I went [there] with the notion that I needed to do something about PTSD and TBI,” Casey recalled. What changed his thinking were the surveys that crossed his desk showing an extreme reluctance among his troops to admit to any psychological problem for fear of losing respect, being passed over for promotion, or even being dismissed from the service. One survey put the figure of those who would avoid seeking psychiatric care at 90 percent, and others showed the stigma to be most pronounced among young officers engaged in day-to-day combat—those most dependent on keeping their men in action. “Ninety percent. That’s a cultural issue,” Casey said, “but the Army didn’t want to hear it.”

Changing that culture became a primary goal. There’s little doubt that the “top-down” pronouncements of Ham, Casey, and others made it more acceptable for the average soldier to seek professional help, or that such pronouncements caught the Pentagon’s eye. Kieran is superb in demonstrating how the military became a major research center for the study of brain injuries and stress-related disorders. It was one thing to acknowledge that concussions can result from a direct blow to the head—slamming into a car windshield or helmet-to-helmet contact on the football field—and quite another to attribute them to the damage done by an explosion, often yards away, that causes no visible injuries. Army studies suggested, much as Burr had in 1865, that the blast waves emanating from an IED (his “wind of passing shells”) were powerful enough to cause brain damage at the cellular level; the variables included the distance from the explosion and the frequency of exposure.

Also studied were “co-occurring factors” that worsened the post-concussion symptoms for battlefield soldiers, as opposed to, say, professional athletes. Returning Iraq veterans who met the criteria for mild TBI—about 15 percent of those who served there—reported more serious mental health problems than other groups. As one researcher explained:

When…you’re a Ben Roethlisberger [quarterback for the Pittsburgh Steelers], and you…recover from your daze, and you’ve got a trainer by your side, and the crowds are cheering, this is a different context from when you wake up after an IED blast…where people have been killed, and friends may be badly injured, and they may still be shooting at you.

What researchers couldn’t agree upon, however, was the relationship between the two signature wounds. Debate raged over whether mild TBI could lead to PTSD, whether the same incident could produce both conditions, and whether mild TBI was being overdiagnosed at the expense of PTSD. The good news, says Kieran, was that the army now accepted the premise that “behavioral health issues were medical problems to be solved through rigorous, evidence-based research.”

One comes away from Signature Wounds with a healthy respect for the military’s attempts to understand and manage these problems, and an even greater contempt for the armchair hawks most responsible for creating them. Old stigmas were confronted, leading to the medicalization of illnesses once disparaged, and new programs were begun to prepare the troops (and their families) for the punishing conditions ahead.

At the VA, meanwhile, a flurry of initiatives confronted a disturbing rise in active-duty suicides, the rate of which by 2007 exceeded the civilian suicide rate (adjusted for factors like age and gender) for the first time ever. Some—like a twenty-four-hour crisis hotline, better tracking of soldiers with suicidal behavior, and more rapid counseling—seemed long overdue; others—such as gently encouraging troubled veterans to remove firearms from their homes—were canceled after raising the ire of Fox News and the gun lobby. It wasn’t necessarily combat stress or TBI that caused most suicides, studies found; it was the deteriorating relationships, financial troubles, legal difficulties, drug problems, and day-to-day anxieties brought on by the hectic pace of training and relocation. Suicide rates among active military remain significantly higher today than they were in 2003—yet another legacy of the disastrous Iraq incursion.

Signature Wounds, says Kieran, is “a story of an Army that worked incredibly hard to care for soldiers under the unprecedented strain of the nation’s longest wars.” What makes it so unsettling is the knowledge that most of the reforms he describes were put in place to enable soldiers to withstand conditions of perpetual combat that are unreasonable and most likely intolerable. “The human mind was not made for war,” General Casey remarked. “That’s the starting point for everything.”