In May 2018, a new data and privacy law will take effect in the European Union. The product of many years of negotiations, the General Data Protection Regulation is designed to give individuals the right to control their own information. The GDPR enshrines a “right to erasure,” also known as the “right to be forgotten,” as well as the right to transfer one’s personal data among social media companies, cloud storage providers, and others.
The European regulation also creates new protections against algorithms, including the “right to an explanation” of decisions made through automated processing. So when a European credit card issuer denies an application, the applicant will be able to learn the reason for the decision and challenge it. Customers can also invoke a right to human intervention. Companies found in violation are subject to fines rising into the billions of dollars.
Regulation has been moving in the opposite direction in the United States, where no federal legislation protects personal data. The American approach is largely the honor system, supplemented by laws that predate the Internet, such as the Fair Credit Reporting Act of 1970. In contrast to Europe’s Data Protection Authorities, the US Federal Trade Commission has only minimal authority to assess civil penalties against companies for privacy violations or data breaches. The Federal Communications Commission (FCC) recently repealed its net neutrality rules, which were among the few protections relating to digital technology.
These divergent approaches, one regulatory, the other deregulatory, follow the same pattern as antitrust enforcement, which faded in Washington and began flourishing in Brussels during the George W. Bush administration. But there is a convincing case that when it comes to overseeing the use and abuse of algorithms, neither the European nor the American approach has much to offer. Automated decision-making has revolutionized many sectors of the economy and it brings real gains to society. It also threatens privacy, autonomy, democratic practice, and ideals of social equality in ways we are only beginning to appreciate.
At the simplest level, an algorithm is a sequence of steps for solving a problem. The instructions for using a coffeemaker are an algorithm for converting inputs (grounds, filter, water) into an output (coffee). When people say they’re worried about the power of algorithms, however, they’re talking about the application of sophisticated, often opaque, software programs to enormous data sets. These programs employ advanced statistical methods and machine-learning techniques to pick out patterns and correlations, which they use to make predictions. The most advanced among them, including a subclass of machine-learning algorithms called “deep neural networks,” can infer complex, nonlinear relationships that they weren’t specifically programmed to find.
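The coffeemaker analogy above can be made concrete. The following sketch, with an illustrative function and thresholds not drawn from the article, renders "a sequence of steps for solving a problem" as a few lines of Python: fixed inputs go in, fixed rules apply, an output comes out.

```python
# A sketch of the coffeemaker analogy: an algorithm is a fixed
# sequence of steps converting inputs into an output.
# The function, steps, and thresholds here are illustrative only.

def brew_coffee(grounds_g: float, water_ml: float) -> str:
    """Convert inputs (grounds, water) into an output (coffee)."""
    # Step 1: check that the required inputs are present.
    if grounds_g <= 0 or water_ml <= 0:
        raise ValueError("need both grounds and water")
    # Step 2: compute the grounds-to-water ratio.
    ratio = grounds_g / water_ml
    # Step 3: apply a simple threshold rule to classify the result.
    if ratio >= 0.07:
        return "strong coffee"
    elif ratio >= 0.05:
        return "regular coffee"
    else:
        return "weak coffee"

print(brew_coffee(60, 1000))  # a 0.06 ratio -> "regular coffee"
```

The contrast the article draws is between rules like these, which a person can read and audit step by step, and machine-learning systems whose decision rules are inferred from data rather than written down in advance.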