Alan J. Devlin
Government of the United States of America - Federal Trade Commission
Michael S. Jacobs
DePaul University - College of Law
March 17, 2010
William & Mary Law Review, Vol. 52, p. 75, 2010
Fueled by economics, antitrust has evolved into a highly sophisticated body of law. Its malleable doctrine enables courts to tailor optimal standards to a wide variety of economic phenomena. Indeed, economic theory has been so revolutionary that modern U.S. competition law bears little resemblance to that which prevailed fifty years ago. Yet, for all the contributions of economics, its explanatory powers are subject to important limitations. Profound questions persist at the borders of contemporary antitrust enforcement, but answers remain elusive. It is because of the epistemological limitations of economic analysis that antitrust remains unusually vulnerable to error.
The fear of mistakenly ascribing anticompetitive labels to innocuous conduct is now pervasive. The Supreme Court has repeatedly framed its rulings in a manner that shows sensitivity to the unavoidability of error. In doing so, it has adopted the principle of decision theory that Type I errors are generally to be preferred over Type II errors, and it has crafted a pro-defendant body of jurisprudence accordingly. In 2008, the Justice Department took up the gauntlet and published the first definitive attempt at extrapolating optimal error rules. Yet, in 2009, the new administration promptly withdrew the report, opining that it could “separate the wheat from the chaff” and thus marginalizing the issue of error. Notwithstanding this confident proclamation, error remains as visible as ever. Intel’s behavior in offering rebates has been subject to wildly fluctuating analysis by the U.S. and E.U. enforcement agencies. In a marked departure from precedent, the DOJ is again viewing vertical mergers with consternation. And the agency has reversed course on the legality of exclusionary payments in the pharmaceutical industry. Antitrust divergence, both within and outside the United States, remains painfully apparent — demonstrable proof that vulnerability to error remains systemic. For this reason, error analysis may be the single most important unresolved issue facing modern competition policy.
This Article seeks to challenge the contemporary mode of error analysis in antitrust law. We explain the causes and consequences of antitrust error and articulate a variety of suggested cures. In doing so, we debunk the current presumption that false positives are necessarily to be preferred over false negatives. We highlight a variety of cases in which the contemporary bias in favor of underenforcement should be revisited.
Number of Pages in PDF File: 58
Date posted: March 18, 2010; Last revised: November 7, 2010