One week to the day before the New Hampshire primary last February 26, Representative John Anderson of Illinois, his daughter, his traveling staff, and his trailing press corps drove through sunny weather and a strangely snowless countryside from Manchester to Hanover—all in one van. Mr. Anderson, then exciting more public interest as a character in “Doonesbury” than as a Republican presidential campaigner, was looking forward to what he considered a big event in his campaign: he was to be interviewed by an ABC News television crew. For a contender buried in the pack of seven “major” candidates, a network TV spot was a rarity indeed.
But one of the two reporters accompanying Mr. Anderson heard the news of the ABC interview with a sinking heart. He was not sadistic enough to tell the elated candidate the cruel truth—that the ABC crew was working on a documentary, which would not be shown until summer, a little late to influence the New Hampshire primary.
Two days later another Republican hopeful, Senator Howard Baker of Tennessee, appeared for an early morning rally at the fire station in North Londonderry, a small community not far from Manchester. At that time, primarily because of his prominence as Senate Minority Leader, Mr. Baker was considered one of “the top three”—or, as John Anderson enviously termed them, “the charmed circle”—which also included Ronald Reagan, the leader in national polls, and George Bush, the surprise winner of the Iowa caucuses.
The main advantage of being among the top three was the attention of television crews, which cost the networks something like $2,000 a day to deploy; at those rates, and considering the scarcity of time on thirty-minute evening news shows, the cameras were seldom pointed at the lowly likes of Mr. Anderson or Representative Philip Crane. But at the North Londonderry firehouse, that chilly February morning, a total of seven cameras, network and local, filmed Mr. Baker’s typically low-keyed speech—while, at most, perhaps two dozen laconic Hampshiremen and women listened with no great enthusiasm.
That lack of local interest in Howard Baker foretold his fate; within a week or two, he was not only out of the top three but out of the race. On the other hand, Mr. Anderson, with a surprising second-place finish in the Massachusetts primary, sprang right out of “Doonesbury” and into the “charmed circle” he had so envied—ultimately, of course, into the national spotlight and an independent presidential candidacy that has both major parties looking over their collective shoulders.
At least two conclusions suggest themselves from these cautionary tales. One is that in presidential politics, television can neither redeem an otherwise lifeless campaign (Baker signally lacked organization, an issue appeal, or the kind of victor’s aura Iowa had given Bush), nor kill by inattention a campaign that has a real base of public support (which Mr. Anderson, as “the only moderate in the race,” needed only the opportunity to demonstrate).
But the other conclusion is that nothing, any more, is quite so important to a presidential candidate as television coverage. Television made Jimmy Carter in 1976, it gave George Bush his brief fling with national prominence in 1980, it has carried John Anderson—a national unknown in January of this year—into serious contention for the presidency, and it is the primary instrument by which Ronald Reagan will reach the White House, if he does.
Presidential politics today, it is reasonably fair to say, is television. Party politics in America has given way to media politics, and the full consequences of that momentous shift probably are yet to be seen; among them, surely, is the loss of function of the traditional parties and the widening gap between the media arts of running for president and the grinding politics of governing the country.
But it is not just television that has changed the way we choose presidents almost beyond recognition—hence changed the kind of presidents we are likely to elect, and what they will do with the office when they win it. When Hubert Humphrey won the Democratic nomination in 1968 without winning or even entering a single primary, a reaction centered in the Democratic Party led to “opening up the system” for nominating presidential candidates; and when the vast sums raised for Richard Nixon’s re-election in 1972 were shown to have been tainted by scandal, steeped in influence, and poured into Watergate, another reaction—this time in Congress—produced a complex federal subsidy scheme to “take the money out of politics.”
Both reforms succeeded—succeeded so well, in fact, that they turned the nominating system upside down and inside out, raising in the process the question of whether the system is now too open on the one hand and too constrained on the other by federal restrictions on fund-raising and spending. Like most reforms conceived in committee cerebration, moreover, these produced side effects foreseen by none but the longheaded.
The new system produced, for 1980, the apparently certain nomination of President Carter by the Democrats and of Ronald Reagan by the Republicans—a pair of ex-governors, one of whom had in late June only 30 percent public approval for his handling of the presidency, and the other of whom had been rejected twice by his party and lacked, at age sixty-nine, any demonstrable experience in foreign policy, national security, or congressional affairs.
“This is what reform gets us?” a reader wrote to me last spring. And when I published in the New York Times the rather snide conclusion that Carter’s record of ineptitude was the worst since Warren G. Harding’s, several letters informed me that this was a slur on Harding.
In one poll, 58 percent of the respondents termed themselves unhappy with the choices offered them by the two parties; and John Anderson and his managers freely concede that his independent campaign was made possible only by the unpopularity of the Reagan-Carter match-up, which left many a voter in both parties looking for an alternative and gave Anderson roughly 20 percent of the vote in pre-convention polls.
It is not clear, however, that the system inevitably produced Reagan and Carter, or that other nominees would have emerged from a different system—say, the old, pre-1968 method—of separating wheat from chaff. Reagan, for instance, was a front-runner and won; Carter was an underdog (last fall) and won.
Both, it’s true, were veterans of the 1976 campaign, the first under the new dispensation, and presumably profited from that experience. But it seems unlikely that if nominations were still dominated by party leaders and professionals, an incumbent president would have been challenged for renomination, as Carter was. On the other hand, the necessity to run in primaries, which Gerald Ford did not want to do, foreclosed his chances and kept Reagan’s most formidable foe off the field.
Aside from the end product in any one year, however, sharp questions about the new nominating system are now being raised by many students of politics—practitioners, academics, journalists. And though one and a half elections—1976 and the primary half of 1980—provide limited experience by which to judge, a number of cogent criticisms seem to be emerging already, from the relatively obvious (thirty-six state primaries are too many) to the comparatively subtle (is proportional selection of delegates as fair as it seems?). Naturally, proposals to reform the reforms (regional primaries, for example) are being heard.
Here, in summary form, are the major problems—at least as I see them—of the way we nominate now:
The Early Primaries and Caucuses. Something has to come first, of course; if not the New Hampshire primary, then the Iowa caucuses, or whatever. But in a nominating system in which public contests between two or more candidates are largely determinative, the first such contests—particularly the very first—are bound to draw press coverage out of all proportion to their intrinsic importance. Iowa and New Hampshire may have only eight and four electoral votes, respectively, but if they provide the arenas for the first victories and the first defeats, the press will descend in numbers more appropriate to a national convention.
Editorialists as well as press critics can and do argue that this should not be so, that editors and political reporters ought to discipline themselves to give coverage to the early campaign events in proportion to their intrinsic importance. But a happening that is first of many does take on outsized even if momentary importance, particularly when candidates have been organizing and campaigning for months and when the public—the press must assume—is hungry for some measure of who’s doing well and who’s not.
Besides, in a free and highly competitive business, does the Washington Post ask the New York Times what kind of coverage it plans for the Iowa caucuses? Does NBC ask CBS? Of course not. They all assume the other fellow will go all out, and they plan to match or outdo him. And if newspapers and television did try to restrain coverage generally, they might lay themselves open to the charge they least want or need—that of collusion to affect public opinion.
The result in a contest-centered system and a media age is that the first “winner” reaps a disproportionate harvest of publicity; television, in particular, quickly stamps him (or maybe some day her) as a frontrunner and parlays his name, face, and foibles (Carter’s teeth, Bush’s jogging) into national familiarity. Carter, up against a relatively faceless field in 1976, was never headed after gaining such a media advantage in Iowa and New Hampshire; Bush, facing the famous Reagan in 1980, was boosted into position as his most persistent Republican challenger.
The Proliferation of Candidates. The availability of all those primaries, plus the provision of federal financing even for unknown candidates who meet a relatively low threshold of fund-raising, ensures a big field of contenders in the out-party and makes likely a challenge even to an incumbent. That’s fine for giving new faces a break, offering the voters a variety of choices, and keeping a president on his toes.
The problem is that it means somebody can come in first in a multi-candidate primary, and thus be declared a “winner,” with perhaps as little as 28 percent of the vote, as Carter did in New Hampshire in 1976. Less than a third of the voters of a minority party in a state with four electoral votes is not representative of much of anything, but the resulting press “circus” took the Georgian a long way in 1976—which is why some critics say that primaries plus federal subsidies plus television have taken the nomination away from one small and unrepresentative group (party leaders and pros) and given it to another such group (a few New Hampshire or Iowa Democrats or Republicans).
Primary Spending Limits. Not only can someone with a small percentage of the total party vote in a small state be catapulted by omnipresent media into national prominence and frontrunner status; no other candidate—including some who might have finished only a percentage point or two behind—can then rush out to his supporters, beg or borrow an infusion of funds, and outspend that “frontrunner” the next time out in order to catch up. Acceptance of a federal subsidy also means acceptance of federal spending limits for each primary.
This is a classic example of how a reform meant to preclude any candidate from having a financial advantage over another instead can give a huge advantage to the winner of the first public contest. Those who fall behind at the outset are, in effect, penalized by restrictions on what they can spend and where.
John B. Connally, Jr., tried to get around this problem in 1980 by raising huge sums privately (about $14 million) and refusing federal subsidy, thus entitling himself to ignore the spending limits. He then took South Carolina and Florida as the targets for intense efforts—a scheme that might have worked for a more appealing candidate. It produced a total of one delegate for Big John before he went home to Texas.
Proportional Selection of Delegates. In the Democratic Party, and to a somewhat lesser extent in the Republican, unit rules and winner-take-all primaries are now prohibited. A candidate entered in a primary or competing for delegates at a convention, if he or she reaches a minimum level of support, is entitled to a number of delegates proportionate to his or her final share of the vote.
What could be fairer than that? Nothing, on the face of it, and in fact this was one of the more eagerly accepted Democratic reforms, following the contentious 1968 campaign. But in both of Carter’s winning campaigns, proportional selection gave him a considerable advantage—again derived from getting out front early.
Once an early frontrunner takes a delegate lead, and assuming a few victories or at least a decent showing in all remaining primaries, he has a good chance to maintain or add to that lead even when he comes in third or fourth. With the delegates being divided among all candidates, no one is likely to win so many more than the early frontrunner in any state, or group of states, as to catch up or go ahead. The frontrunner goes on piling up his total.
Thus, even though Carter lost numerous primaries in the late 1976 race, the early lead he had established was never seriously challenged. And in one brief early stretch in 1980—March 11 to March 18—encompassing five primaries (three in the South) and seven state caucuses (almost all in states favorable to him), the president took such a huge lead over Senator Edward Kennedy that he was all but guaranteed victory no matter what happened the rest of the way.
Even after Kennedy won New York and Connecticut on March 25, he would have had to capture about 60 percent of all remaining delegates—which meant defeating Carter nearly two to one in each primary thereafter—to overcome the early Carter lead. Merely beating the president by, say, fifty-one to forty-nine would yield almost no change in their relative delegate strength. So even though Kennedy later carried Pennsylvania, New Jersey, and California, his margin of victory was never large enough to yield many more delegates than Carter won even while losing.
Tightly Pledged Delegates. When John F. Kennedy smashed Hubert Humphrey in the West Virginia primary in 1960, it wasn’t the delegates he won that mattered; rather, Kennedy proved that a Catholic could win in a non-urban, semi-Southern state. When Nelson Rockefeller defeated Barry Goldwater in Oregon in 1964, he kept his candidacy alive not with Oregon’s handful of delegates but by underlining doubts about the conservative Goldwater’s electability.
Under the post-1968 system, however, candidates enter most primaries because that is the best way to accumulate delegates—as Gerald Ford quickly found out last spring when he tried to make the race outside the primaries; and the rules provide that delegates won in primaries or picked off in convention contests are tightly pledged for at least one ballot. That prohibits the old evil of party leaders snatching delegates away from candidates who might have shown themselves “the people’s choice,” if not that of the leaders.
But it also makes reconsideration, negotiating, compromising, and maneuvering, in the classic presidential style, difficult if not impossible. A delegate pledged to Carter from the New Hampshire or the Florida primary, back in February and March, conceivably might have concluded later that the president’s economic policy was a disaster and his rescue effort in Iran a fiasco; but he would still be a pledged Carter delegate, his only option to resign in favor of another pledged Carter delegate.
Together with proportional selection, irrevocably pledged delegates make it unlikely that there will be late entries into the presidential race, or that any such entries can succeed. They tend to make the later primaries irrelevant, or at least less important than the earlier, even though the most populous states, as things now stand, come along in the later part of the program. That has a depressing effect, in the late primaries, on public participation and voter turnout—not what the reformers had in mind.
Finally, if a candidate like Jimmy Carter can win a majority of delegates, all tightly pledged to him on the first ballot, by early May or thereabouts, what remaining value do the national conventions have? Why call the roll, if the outcome is inevitable a month before the gavel falls? If in these traditional arenas of compromise and maneuver, there can be no compromise or maneuver—no consideration of changing circumstances or late developments—what is the purpose of holding a convention at all, other than for blather and ballyhoo?
If the consequences of delegate-selection and fund-raising reforms have been extensive in themselves, they have been magnified many times over by the rising dependence of presidential campaigners on television—a dependence which seems to me essentially unreformable in an era when the networks have become a sort of national nervous system.
When a candidate must compete in all or most of thirty-six state primaries, the home screen obviously is the most effective instrument with which to reach so many voters so widely dispersed. When the amount of money that may be spent is restricted by federal law, the high cost of television time dictates that most of the funds available will be spent on the ubiquitous “tube.”
Thus, what most voters in, say, Illinois knew of that state’s primary last March 18, they learned from what they saw on television—either in candidates’ paid advertisements, or in the news broadcasts and talk-show interviews that all candidates sought desperately to break into, or in the Republican candidates’ forum sponsored by the League of Women Voters. Personal appearances before live audiences were insignificant by comparison; and in fact most such appearances were staged in the hope of television news coverage, then immediately restaged in some other television market in the hope of further coverage.
There’s nothing inherently wrong with this; voters see more of candidates via television than they ever saw of them in person, in the old days. In practice, however, media politics tends to heighten the puffery, pretense, and downright deception that have always been part and parcel of politics—which is the art of persuading people to think and do what you want them to think and do. What the voters see of candidates, not how much, is the problem.
More than ever before—in my judgment, anyway—television campaigning puts the emphasis on a contrived image of the candidate—what he and his media whizzes can persuade the public to think about him. And what they want the public to think about the candidate often has less to do with what the candidate is or believes than with what public opinion polls have disclosed that the voters would like to think he is or believes. Armed with that kind of precise information, television campaign specialists—a thriving new industry—can design and produce a series of ads to create exactly the desired effect. And these ads, ranging from thirty-second spots to five-minute or thirty-minute productions and televised to enormous numbers of people, not only have the ability to convince for which TV is justly famed (who is more “real” in American life than J.R. Ewing of “Dallas,” and why did even the American Bar Association once invite Perry Mason/Raymond Burr to address its convention?) but also have the advantage of great flexibility and immediacy. Ad campaigns can be put together just for, say, Pennsylvania, where unemployment is high, while an entirely different series is presented in Texas, where oil and gas questions dominate. For candidates, television is a dial-an-image godsend, with which reporters—even when they try to get at the truth—can rarely cope.
Thus, candidate Carter bore down heavily, in post-Watergate 1976, on homely images of a peanut farmer in clodhopper shoes communing with the old values on a Georgia farm; this was a man who was not a lawyer, hardly knew where Washington was, and would never tell us a lie—now would he?
Thus, in 1980, bold John Anderson went before the gun buffs in New Hampshire and advocated gun control, damned if he didn’t—with the television cameras that recorded the dramatic moment for the national audience failing to explain that since there were few Anderson votes in that New Hampshire crowd anyway, the candidate was risking little locally to make a big score nationally.
Thus, too, Carroll O’Connor appeared in a series of ads for Edward Kennedy, not only throwing his own popularity behind the senator but implicitly suggesting that the blue-collar views of Archie Bunker were shared by the candidate, Chappaquiddick or no Chappaquiddick.
Politics, of course, always has been illusionist, to a certain extent. Television merely extends the possibilities and yields greater returns for the superior magic show. But the heavy modern reliance on television imagery, like procedural reforms, also tends toward unanticipated side effects—for example, a creeping disparity between the ability to get elected and the ability to serve well after election.
There is no necessary distinction between these abilities but obviously there may be quite a gap—perhaps more often than not. Divining what the public would like to hear and what kind of leader it thinks it wants in particular circumstances, then calculating a campaign to satisfy those desires, is no doubt an art and not a despicable one, either.
The problem is that this art does not have much to do with the ability to govern, as the Carter Administration demonstrates. In fact, the successful projection of an image in a campaign can bring on serious trouble later, if the image can’t be realized or sustained under the pressures of office—again, witness Jimmy Carter.
On the other hand, a successfully established image can sustain a candidate even when his performance in office might normally mandate a change. Carter—obviously the most prominent product of media politics so far—kept winning 1980 primaries with the votes of Democrats who deplored his record but who nevertheless still regarded him as the honest, moral family man of the 1976 peanut-farmer ads, in sharp contrast to the Chappaquiddick-laden Kennedy.
At the same time that presidential campaigning was changing, moreover, so was the presidency. After Vietnam and Watergate, the prestige and authority of the office declined, while congressional independence rose. The combination of single-interest politics, independent legislators, and sophisticated lobbying made it more difficult to put together effective coalitions. Even the major issues—energy and the economy—are more complex than they were a decade ago.
So if the arts of winning elections have less and less to do with governing, the converse holds as well: the qualities important for a chief executive trying to surmount such difficulties—experience in government, a solid background in party politics with its emphasis on alliances and compromises, an intricate network of associations with other political leaders, a deep sense of the way the system works or can be made to work—have little relevance to the problems of winning a presidential nomination or election in the media era.
Another “side effect” of media politics and electoral reforms has been the diminished functions of the American political party. Ever since Thomas Jefferson invented this strange beast, its main purpose has been to bring various factions and leaders together under what Lyndon Johnson called “one great tent,” and to unite them, however uneasily, around an issue or a personality for just long enough to register a national majority. To choose candidates who could unite such a majority, or articulate an issue that could do so, was a prime party responsibility.
When most voters could neither see nor hear candidates, moreover, the parties gave them their identities; an Arkansas farmer might not know much about a presidential hopeful, but if the Democrats nominated him he must be all right—or at least better than a bloody-shirt Republican. The parties also raised funds and financed campaigns and their platforms more or less defined the issues, if for election-day purposes only.
Television alone takes away or diminishes most of these party functions. Candidates now are identified more decisively by widely perceived television images than by party labels. Expensively appealing to mass audiences, they are unlikely to target Democrats or Republicans alone; instead, they present themselves in less partisan guise. All candidates, especially national candidates, it follows, are more nearly independents now than partisans, no matter what their ideologies and party labels; and voters, too, are more likely to take an independent attitude as against traditional party loyalty.
For all these reasons, the unifying factor in a momentary national majority now tends to be the candidate’s television image, rather than his party. Proliferating primaries, moreover, have handed to the general public—at least that part of it involved in the early primaries—the old party function of choosing candidates. Even the party’s fund-raising chores and its influence on campaign spending have been largely usurped in national elections by federal subsidies.
And when most of the money available to the presidential nominees has to be spent on waging the central, all-important television contest, those participatory functions which the parties once organized and supervised—store-fronts, canvassing, registration, and get-out-the-vote drives—take a back seat, and the parties along with them.
But all of these criticisms don’t seem to me to constitute an indictment of the new system; certainly, they don’t mean that the nation should return forthwith to the 1968 model. Despite the advantage conferred on the early frontrunner, for example, by proportional delegate selection, who would want to re-establish the unit rules, winner-take-all devices, smoke-filled rooms, wheeling and dealing, and outright skulduggery by which delegates used to be apportioned about as the Mayor Daleys and Boss Flynns and Mark Hannas decided?
There may well be too many primaries; thirty might make more sense and be easier on the health of candidates and reporters. But that’s essentially a matter for the states to decide and no method by which the parties or Congress might try to dictate a more orderly system appeals to me. Similarly, regional primaries—states in a particular region holding primaries at the same time—might be an improvement; but the states seem to me to be moving into this arrangement by usage. The New England, southern, and northwestern primaries were identifiably grouped in 1980, on one or consecutive Tuesdays.
The campaign subsidy law would be improved, I think, if the individual contribution limit were $5,000 instead of $1,000, and if the subsidies and spending limits were increased; in general, we spend too little, not too much, on political education, which, at its best, is what campaigning is. And the experience of John Anderson, like that of Eugene McCarthy in 1976, raises the real question whether a federal subsidy law ought to be used to shut out independents and third parties and build in the Democrats and Republicans as official parties just at the time when they have become less useful than ever.
As for television, nothing can or should be done about its reach and impact, or to prevent candidates from taking advantage of it. I have strong First Amendment reservations about regulating their use of television—by prohibiting their use of spot advertisements, for instance, or restricting them to blocks of free time extorted by government from the national networks.
One welcome development, however, is increased attention by the political press to the use candidates make of the home screen. Numerous newspapers now assign reporters to cover the television campaigns exclusively, reporting not only on what the public sees but on what messages the candidates are trying to get across, why, and by what means or trickery. That television documentary for which John Anderson was interviewed in February was about the impact of television on politics, a subject getting increased attention from self-conscious networks and local stations. Still, probably no news institution as yet pays anywhere near as much attention to the television campaign as it does to the candidates’ personal appearances.
Instead of designing reforms to meet such problems as those I’ve suggested—which would surely produce more unexpected side effects, whether or not they corrected the problems—I’d rather see a more general corrective. And I think it’s available in that unique and venerable American institution, now fallen on lean days—the national political convention.
The convention literally brings factional and state representatives together under the same roof—that “one great tent” imagined by LBJ. The original idea was to facilitate exchange and compromise, to test the acceptability of ideas and personalities, to develop a consensus on programs and candidates. If unanimity was seldom possible, the convention at its best could and did produce broad, winning coalitions.
But there has not been a convention that required more than one ballot since the Democrats nominated Adlai Stevenson in 1952; few since then have even been really contested on the first ballot. Now, with thirty-six state primaries and proportional selection offering the possibility of winning a convention majority of pledged delegates far in advance of the opening gavel, the quadrennial party gatherings have little real utility.
But that utility could be restored at a stroke if it were made far more difficult, but still not impossible, to win an unshakable majority of pledged delegates in the primaries and state conventions. At present, for example, 10 percent of Democratic delegates are not chosen by primary and convention votes but are reserved for party leaders, elected officials, and the like. Suppose that were expanded to, say, a third of the total, perhaps even 40 percent, with the vital proviso that this group of delegates could not go to the convention formally pledged to any candidate.
That would permit candidates to seek as many pledged delegates as possible, just as at present, in primaries and state conventions—but only from among 60 to 67 percent of the total. If any candidate could put together a preconvention pledged majority in such circumstances, there would be little doubt about his or her appeal and electability.
More likely, in most election years, some candidate would open a big lead over all rivals and come to the convention with every likelihood of being nominated. But at that point, the nominally unpledged 40 percent would be able to act as a sort of balance wheel, to force a reconsideration or encourage a compromise, or to put the final seal of approval on the primary leader. A rough analogy would be to the House and Senate—one group a representative body, the other a deliberative one, the approval of both necessary.
In the circumstances of 1980, this arrangement would permit the Democratic convention to weigh such questions as whether President Carter’s real strength might not have been exaggerated by the crises in Iran and Afghanistan; to take into account the extraordinary protest votes later cast against him (either for Senator Kennedy or for “none of the above”); perhaps even to put forward an alternative (say, Muskie) who might be less threatened by the Anderson independent candidacy.
On the Republican side, former President Ford’s reluctance to run in the primaries still would have put him at a disadvantage, but it would not necessarily have been fatal. With Ford a possible alternative at the convention, the candidacies of both George Bush and John Anderson—while he remained in the Republican race—would not have been mooted by the commanding lead Ronald Reagan so soon established.
It would always be difficult, of course, to unhorse someone who had dominated the primaries, particularly a sitting president; nor would it be likely, most of the time, that the 40 percent would be truly uncommitted. But their formally uncommitted status would at least make change possible, and their numbers would provide the necessary base of strength—but not the power arbitrarily and without reason to thwart the verdict of the pre-convention contests.
The distorting effects, if any, of early primary victories, proportional delegate selection, and spending limits, all magnified by television, might not in every instance be corrected. But with the convention more nearly able to perform its traditional function of facilitating compromise and consensus, neither a primary leader nor his opponents nor the voters could take the outcome for granted. Voters in late primary states, for example, would no longer feel that everything had been settled in the early ones, so that they had no reason even to go to the polls.
Who would make up that unpledged 40 percent, the “Senate” delegates? I envision, no doubt optimistically, not the cigar-chomping pros of myth and tradition but a group of politically active and responsible persons, local and county chairmen, elected officials, party elders perhaps no longer active day-to-day, augmented by interested political laymen—some businessmen, some academics, representatives of civic and community organizations, and just plain taxpayers.
Who serves, of course, is dependent on who chooses them and under what rules. I see no good alternative to leaving that power to the parties themselves—to state parties acting under rules and guidelines established by the national parties. That may seem at first glance a formula to produce the old-time cigar-chompers; but since the idea is to redeem the parties as well as the conventions from the tyranny of the primaries, with their vulnerability to television imagery, the results might be better than cynics would expect.
But please—before we tinker any more with our elections system, let’s take plenty of time to think through further changes before we’re stuck with them. Nothing we’re doing now is so bad that “reform” can’t wait on deliberation, thorough analysis, more experience. After all, as John F. Kennedy once told the Senate (quoting Viscount Falkland), if it’s not necessary to change, it’s necessary not to change.
August 14, 1980