Europeana—which already has offices in The Hague—is still in a formative phase, but its basic structure is well developed. Instead of accumulating collections of its own, it will function as an aggregator of aggregators. Information will be accumulated and coordinated at three levels: particular libraries will digitize their collections; national or regional centers will integrate them into central databases; and Europeana will transform those databases, from twenty-seven constituent countries, into a single, seamless network. To the users, all these currents of information will remain invisible. They will simply search for an item—a book, an image, a recording, or a video—and the system will direct them to a digitized version of it, wherever it may be, making it available for downloading on a personal computer or a handheld device.
To deliver such service, the system will require not only an effective technological architecture but also a way of coordinating the information required to locate the digitized items—“metadata,” as librarians call it. The staff of Europeana at The Hague has perfected a code to harmonize the metadata that will flow into it from every corner of Europe. Unlike Google, it will not store digital files in a single database or server farm. It will operate as a nerve center for what is known as a “distributed network,” leaving libraries, archives, and museums to digitize and preserve their own collections in the capillary system of the organic whole.
A digital library for America might well follow this model, although Europeana has not yet proven that it is workable. When a prototype went live on November 20, 2008, it was flooded with so many searches that the system crashed. But that failure can be taken as testimony to the demand for such a mega-library. Since then, Europeana has enlarged its capacity. It will resume functioning at full tilt in the near future; and by 2015 it expects to make thirty million items, a third of them books, available free of charge.
Who will pay for it? The European Union will do so, drawing on contributions from its member states. (Europeana’s current budget is €4,923,000, but most of the expenses fall on the institutions that create and preserve the digital files.) This financial model may not be suitable for the United States, but we Americans benefit from something that Europe lacks: a rich array of independent foundations dedicated to the public welfare. By combining forces, a few dozen foundations could provide enough money to get the DPLA up and running. It is impossible at this point to provide even ballpark estimates of the overall cost, but it should come to less than the €750 million that President Sarkozy pledged for the digitization of France’s “cultural patrimony.”
Moreover, in building up its basic collections, it could draw on the public-domain books that are currently stored in the digital archives of not-for-profit organizations like HathiTrust and the Internet Archive—or (why not?) in the servers of Google itself, Google willing.
Once its basic structure has been erected, the Digital Public Library of America could be enlarged incrementally. And after it has proven its capacity to provide services—for education at all levels, for the information needs of businesses, for research in every conceivable field—it might attract public funds. Long-term sustainability would remain a problem to be solved.
Other problems must be confronted in the near future. As the Google case demonstrated, nearly everything published since 1923, when copyright restrictions begin to apply, is now out of bounds for digitization and distribution. The DPLA must respect copyright. In order to succeed where Google failed, it will have to include several million orphan books; and it will not be able to do that unless Congress clears the way by appropriate legislation. Congress nearly passed bills concerning orphan books in 2006 and 2008. It failed in part because of the uncertainty surrounding Google Book Search. A not-for-profit digital library truly devoted to the public welfare could be of such benefit to their constituents that members of Congress might pass a new bill carefully designed to protect the DPLA from litigation should rightsholders of orphan books be located and bring suit for damages.
Even better, Congress could create a mechanism to compensate authors for the downloading of books that are out of print but covered by copyright. In addition, voluntary collective agreements among authors of in-print books, similar to those in Norway and the Netherlands, could make much contemporary literature accessible through the DPLA. The copyright problems connected with works produced outside the United States might be resolved by agreements between the DPLA and Europeana as well as by similar alliances with aggregators on other continents. Items that are born digital, in formats such as e-books, pose still more problems. But the noncommercial character of the DPLA and its commitment to the public good would make all such difficulties look less formidable than they seemed to be when they were confronted by a company intent on maximizing profit at the expense of the public and of its competitors.
In short, the collapse of the settlement has a great deal to teach us. It should help us emulate the positive aspects of Google Book Search and avoid the drawbacks that made Google’s enterprise flawed from the beginning. The best way to do so and to provide the American people with what they need in order to thrive in the new information age is to create a Digital Public Library of America.