Cost-Benefit Analysis of Financial Regulation: A Reply
Let me begin by thanking Professors Posner, Weyl, and Sunstein, and Mr. Kraus, for their thoughtful and thought-provoking replies, and the editors of the Yale Law Journal for organizing this exchange. The comments are rich, and a full response would take on the size of another article—but in the interest of time and readers’ patience, I will limit this Reply to a few points.
Cost-benefit analysis (CBA) has become increasingly important to the way that regulations are proposed, analyzed, and implemented, in and out of the financial regulatory context. This trend is likely to continue. Many significant regulations required or authorized by post-crisis re-regulatory laws are still being finalized and put into practice. Even if legal requirements for CBA do not change, and even if courts take a lighter touch, the intellectual interest in, and potential payoff of, developing better methods to analyze and quantify costs and benefits of financial regulation will continue to hold the attention of anyone interested in how financial markets function.
Still, for reasons I try to illuminate in Cost-Benefit Analysis of Financial Regulation: Case Studies and Implications,1 efforts to quantify and monetize costs and benefits of significant financial regulations in precise and reliable ways face significant challenges. From the comments, I think this is a point on which many with different perspectives on CBA could agree.
Sunstein’s focus on alternatives to standard CBA (such as breakeven analysis), in my view, implicitly concedes this point.2 His recent book also makes clear that he accepts that CBA ought to play a limited role when externalities are common and willingness-to-pay surveys cannot easily be used to estimate regulatory benefits.3 This will generally be the case when those benefits consist of avoiding rare but extreme events, such as systemic financial crises or bubbles in which fraud encounters ideal conditions to latch, virus-like, onto envy, greed, and optimism.4
Kraus initially seems to resist accepting the unreliability of CBA for financial regulation (CBA/FR). He sketches my argument for CBA/FR as follows: (a) CBA proponents say CBA can be done, (b) Coates tried it and failed, and (c) hence, no one can.5 If that were all Case Studies was about, he would be right to be skeptical. However, the Article does more (as I am sure he knows). First, I not only tried to do my own CBA but also analyzed CBAs held up by CBA proponents as “gold standards” and showed that those, too, did not include reliable and precise quantification. Second, I posted Case Studies on the web, widely presented it to various academic and regulatory audiences, and invited anyone who could identify a good example of reliable, precise quantified CBA/FR to do so. So far, no one has identified a good example, or found major flaws in my specific case studies. Third, I chose examples to analyze that were intended to be representative of major types of financial regulations, using a range of instruments to achieve a range of goals with different kinds of costs. None of the comments, in sum, takes issue with the analysis in the specific case studies in Case Studies. Indeed, Kraus eventually concedes that if CBA must meet the “impossible standard of quantitative CBA/FR”—that is, precisely what policy advocates, the Government Accountability Office, members of Congress, and some judges on the D.C. Circuit have repeatedly criticized the independent agencies for failing to do6—the effort will “naturally fail.”7
A second point on which the commenters and I seem to agree is that court review of quantified CBA is unlikely to improve CBA significantly, at least not without adding costs that are of the same order as the benefits of such review. Kraus puts it most pungently: “[C]ourts should get out of the business of second-guessing CBA/FR.”8 As Sunstein has written elsewhere, it is “exactly correct” that “the law does not require agencies to measure the immeasurable [and that an agency’s] discussion of unquantifiable benefits fulfills its statutory obligation to consider and evaluate potential costs and benefits.”9 Or, more tentatively, as Posner and Weyl put it, “[W]e agree with Coates, albeit with less confidence, that judicial review is premature at the current time.”10
Agreement on these two core points is an advance over the optimism about judicially reviewed quantified CBA often reflected in congressional proposals to mandate CBA for independent agencies.11 If my Article and this exchange help make it more likely that those proposals will be significantly modified—to embrace retrospective review, for example, rather than to insist that agencies use unreliable, up-front quantified guesstimates to defend regulatory changes—then this will be a good outcome, even if it takes a Chicken Little (to which Posner and Weyl liken me12) to generate a consensus. With those preliminaries out of the way, I turn to the specifics.
Sunstein’s Response advances the technique of breakeven analysis to address the serious challenges financial regulators face in estimating the effects of major regulations.13 Breakeven analysis can be used where costs but not benefits, or, less often, benefits but not costs, can be quantified. By quantifying one half of the equation, breakeven analysis “can help to discipline the judgment about whether to proceed” with a rule.14 Sunstein notes that we all use breakeven analysis in ordinary life, offering the example of real estate investments.15 Another type of example familiar to everyone is the purchase of “experience” goods, such as concerts, theatre, and movies. We know what the cost is—the price of a ticket, the time to be spent—but we do not know the benefits in advance (because others’ opinions of the experience’s quality will at best imperfectly predict our own experience). But we can compare the known cost with a plausible, if not necessarily reliable, estimated range of benefits drawn from past experience, and that comparison is more informed than one in which both costs and benefits remain uncertain.
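The logic of breakeven analysis can be made concrete with a minimal numerical sketch. All figures below are hypothetical, chosen purely for illustration; nothing here is drawn from any actual rulemaking or from Sunstein’s examples:

```python
# A purely illustrative sketch of breakeven analysis (hypothetical
# figures; not drawn from any actual rulemaking). Costs are assumed
# to be quantifiable; benefits are not. The question then becomes:
# how large would annual benefits have to be to justify the known cost?

def breakeven_annual_benefit(upfront_cost: float, years: int,
                             discount_rate: float) -> float:
    """Smallest constant annual benefit whose present value over
    `years` equals the known up-front compliance cost."""
    pv_factor = sum(1 / (1 + discount_rate) ** t
                    for t in range(1, years + 1))
    return upfront_cost / pv_factor

# Suppose a rule's one-time compliance cost is reliably estimated at
# $400 million, evaluated over a 10-year horizon at a 3% discount rate.
threshold = breakeven_annual_benefit(400e6, 10, 0.03)
print(f"Breakeven annual benefit: ${threshold:,.0f}")

# The regulator then asks a judgmental question: is it plausible that
# the unquantified benefit -- say, a reduced probability of a costly
# crisis -- exceeds this threshold each year? If a $1 trillion crisis
# becomes even 0.1% per year less likely, the implied annual benefit
# is $1 billion, well above the breakeven threshold.
implied_benefit = 0.001 * 1e12
print("Implied benefit exceeds breakeven:", implied_benefit > threshold)
```

The quantified half of the ledger does not supply an answer by itself; it only tells the regulator what the unquantified half would have to be worth, which is exactly the sense in which breakeven analysis can “discipline the judgment about whether to proceed.”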
Although Sunstein does not make the point explicit, breakeven analysis can also be an important and valuable component of cost-effectiveness analysis, as opposed to cost-benefit analysis. As noted in Case Studies,16 such analysis compares reliably estimated costs of different regulatory alternatives while stipulating or assuming the benefits.17 I am fully supportive of the idea that cost-effectiveness analysis is a potentially important tool for regulators to use. While neither breakeven analysis nor cost-effectiveness analysis allows a regulator to rely on net quantified benefits to determine whether to proceed, both can help inform the regulator’s judgment, and both will push the regulator to choose the least costly means to accomplish a regulatory goal. Where possible, analysis of this kind should be pursued.
However, as Sunstein would acknowledge, I think, in some cases, agencies may know so little that they cannot even engage in breakeven analysis.18 He does not provide evidence on how common these cases are. The bottom line of Case Studies suggests that such situations not only “may” occur but that they will be fairly common in financial regulation. In none of the cases I analyzed were the gross costs or the gross benefits of the regulations capable of being reliably quantified. If these cases are representative of financial regulation generally, as I believe they are, then regulators will frequently be faced with the job of regulating—in fact, typically, re-regulating, given the pervasive regulation of finance already in place—without being able to rely on breakeven analysis. Instead, as Sunstein acknowledges, they will be faced with relying on some combination of presumptions, meta-principles, coin flipping, and judgment.19 In fact, since judgment properly includes the use of meta-principles (tie-breakers, presumptions, etc.) or necessarily arbitrary (but provisional) choices, all the alternatives really boil down to expert, informed judgment, fallible as that often is.
Judgment, as I acknowledge in Case Studies, is far from ideal as a basis for regulation. We should aspire to more. But the value of transparency, generally identified as a primary virtue of CBA, requires that we be clear about distinguishing aspirations from capabilities. The institutional framework for financial regulation should reflect reality, not aspiration, even as it creates incentives for and removes impediments to continuous progress towards those aspirations. The review of case law in Case Studies suggests that our current legal and institutional framework pushes agencies to hide weaknesses and suppress uncertainties in CBA. We should be doing just the opposite. We should not only be pushing agencies to do better CBA, but also to acknowledge the limits and weaknesses of the CBA that can be done. It should not be controversial that CBA pretending to greater precision or reliability than it can deliver is worse than rule releases that candidly admit the difficulties of estimation.
Posner and Weyl’s primary focus in their Response is not on the core of Case Studies, which is the analysis of and conclusions about the case studies themselves—that is, my effort to show that CBA/FR is hard. Rather, they focus on my suggestions—which I agree are speculative—about whether (and if so, why) financial regulation is different in kind as regards CBA than non-financial regulation.20 Any contrast between CBA of financial and non-financial regulation must be general and rough. Financial regulations exist whose costs (if not benefits) are comparatively simple to estimate. For example, a minor disclosure requirement may have as its sole cost the one-time compliance cost of adjustments in disclosure forms and may thus be susceptible to breakeven analysis even if it has the hard-to-quantify benefit of improving consumers’ ability to compare decision-relevant information. Meanwhile, non-financial regulations exist for which full quantified CBA is just as challenging as for any financial regulation. Climate change regulations are clearly an example, as the social cost of carbon is highly contestable and, according to well-respected economists who know more about this topic than I do, unreliable and imprecise.21
The interesting question, however, is not whether there are examples where financial and non-financial regulations are similar, but whether there is typically a difference, and if so, why. Here, I remain unconvinced by Posner and Weyl that there is none.22 While they point out correctly that financial modeling can usefully be employed to predict markets,23 they offer no examples where quantified CBA of major financial regulations is or could be reliable and precise. They argue that antitrust law is disciplined by CBA—but for this they cite to the horizontal merger guidelines,24 which is puzzling. Those guidelines are not cost-benefit analyses themselves, quantified or otherwise; nor was their adoption accompanied by CBA by the Department of Justice or the Federal Trade Commission.25 Institutionally, the guidelines are not (technically) regulations, but statements of enforcement policy, designed to give private parties insight into how the regulators analyze the effects of mergers and select enforcement strategies under the antitrust statutes.26 While it is fair to think about the guidelines functioning as regulations, understood in a loose sense, their promulgation and several revisions27 were not subject to notice-and-comment or judicial review under the Administrative Procedure Act. Furthermore, these guidelines function in practice much more loosely than standard regulations—that is, they are applied in highly fact-specific ways to particular mergers and require numerous applications of judgment28 to be taken to an actual enforcement decision. Moreover, in practice, the result is almost always negotiated between the deal parties and the relevant government agency, as I can attest from many deals I have personally handled. The European Union, meanwhile, follows similar but significantly different standards in its antitrust reviews,29 which suggests there is not a consensus approach to estimating the effects of a merger on competition.
Posner and Weyl note that the guidelines are well-accepted as net beneficial in their effects on mergers, citing to earlier work by Weyl.30 This may be true, but if so it is likely because academics and practitioners who have evaluated the guidelines think they are better than the purely ad hoc enforcement that preceded their adoption, not because the guidelines were subject to a pre-adoption cost-benefit analysis. Moving from pure discretion to constrained discretion is likely to be an improvement for most subject parties if only because the pre-existing statute created significant and uncertain enforcement risks. (That may be why few CBA advocates have critiqued the financial agencies for failing to conduct CBA in the course of adopting safe harbors or otherwise self-imposing constraints on their enforcement discretion.) But no one has ever engaged in formal conceptual CBA, much less a quantitative assessment, of how the antitrust laws would function in practice in a world of enforcement discretion compared to the way they function under the guidelines.
A second part of Posner and Weyl’s Response offers an argument that financial regulation is not differently related to other kinds of science than non-financial regulation is. Posner and Weyl correctly point out that social science is often relevant to non-financial regulation, since those subject to regulation will react, and regulators should anticipate those reactions, thereby complicating CBA. Posner and Weyl also argue that physical (hard) science is relevant to financial regulation, since “[p]hysical laws constrain financial transactions, which ultimately involve keystrokes, the movement of electronic impulses, and other physical manifestations, just as they constrain rocket ships.”31 Who could disagree?
But the fact that both natural and social sciences are relevant to both financial and non-financial regulations does not answer the question of whether financial regulations overall are qualitatively different from non-financial regulations. To see why, consider again the example I used in Case Studies—a regulation mandating rear-facing cameras in minivans.32 While human responses to a new mandatory safety feature should be a part of a full CBA of this rule, socially shaped responses are likely to be second-order (in the sense of “smaller”) in their effects compared to the immediate effects of increasing manufacturing costs and helping drivers avoid backing over children. Car sellers cannot easily evade the requirement to include the cameras, and car buyers cannot easily make their own cars or find alternative modes of transport. Yes, some perverse buyers angry at the government for forcing them to buy a feature they did not want might disable the camera. Yes, it is possible to imagine drivers getting even more reckless in backing up as a result of not having to crane their heads around anymore. But such potential consequences are almost certain to be minor, and the first-order (in the sense of “larger”) effects of the rule can be calculated by running experiments with representative individuals to see how often the cameras reduce accidents.33 These experiments have (to me, at least) strong external validity, and they are not likely to be seriously confounded by second-order reactions, which would be within the domain of social science. None of this is to suggest that the task of monetizing life and the emotional effect of killing one’s own child is easy, only that the science of estimating important first-order inputs (for example, will the cameras actually reduce deaths?) is typically tractable in the non-financial context. Posner and Weyl offer no reasons or examples to support their implicit claim otherwise.
By contrast, the largest effects of financial regulations are only rarely similarly capable of being studied in this way. Yes, finance is carried out on physical objects called computers. But financial regulations are rarely aimed at the computers, or telephones, or printers, or note pads, or pens. Instead, they are aimed at how the computers, telephones, etc. are used to represent, manipulate, and communicate the intangible contracts, “instruments,” and social constructs that constitute the financial markets (stocks, bonds, currency, funds, corporations, options, and so on). Experiments in financial regulation are possible34 and should be pursued more seriously by financial regulators, who should be given resources and legal and institutional support for doing so. But the external validity of these experiments will typically be far less clear than in the case of rear-facing minivan cameras. Consider experiments on how often people cheat or deceive. These behaviors can be (and have been) tested in laboratory-style experiments.35 But the artificiality of the experiments is evident to even a casual reader. In a lab, subjects have only a few opportunities for cheating; the experimenter may control how they can do so. In the real world, humans are remarkably adept at innovating in how they deceive.
It is true that some areas of knowledge relevant to non-financial regulation (for example, the medical properties of new drugs, the systemic effects of information technology, or, as noted above, the implications for climate change of carbon emissions) can evolve rapidly as science progresses. In this way, the non-stationary properties of financial economic theory and beliefs may have analogues in those other areas of knowledge. But if non-stationarity were generally true of non-financial regulation, this would call into question the reliability of CBA of non-financial regulation, rather than make CBA of financial regulation reliable. I am not the first person to note how much less reliable, and how much more primitive, social science is compared to the physical sciences.36 To the extent that the rate of change in the frontier of knowledge is greater overall in the former than the latter, economics will be less reliable as a base for CBA of financial regulation than, say, engineering is as a base for CBA of regulations about bridge design. Along with Posner and Weyl, I, too, hope this will change over time—that social science will improve, even if it is likely always to lag the natural sciences. But unrealistic hopes should no more guide policy than unfounded fears.
A final point to which I would like to reply is Kraus’s effort to showcase the ways in which economists at the Securities and Exchange Commission (SEC) have, of late, increasingly used numbers and quantities to inform rulemaking on money market funds, Regulation D, and swaps regulations. He is right to identify this role for quantification in CBA of financial regulation. I think of the potential for this type of quantification in terms of helping to “scope” the effects of a rule, including both its benefits and costs. These types of quantification can be helpful in informing agency priorities (and possibly the priorities of private commentators), and they may be able to guide judgmental rule design in some ways. For example, if a scoping exercise shows that only a small number of businesses are involved in a given market, a rule might be designed with the characteristics of those businesses in mind, using, for example, definitions or data that those businesses already use, to minimize compliance costs.
At the same time, none of the examples that Kraus lists reflects the kind of quantification and monetization sought by some CBA advocates. (Kraus seems to suggest I exaggerate what advocates seek, but he does not address the specific cites for my argument in Case Studies.37 If I am wrong—if advocates only seek conceptual (or, as Kraus calls it, “pragmatic”) CBA—then it would be useful for them to say so, and if they did, much of the charged political controversy over CBA would abate.) Nor do these scoping exercises approach the CBA conducted by executive agencies under the oversight of the Office of Information and Regulatory Affairs in the context of many types of non-financial regulation. These quantifications did not contribute in any way that Kraus makes clear, or that I can identify, to the SEC’s decisions about whether to adopt the rules in question, or how the rules were written, or the Commission’s assessment of how their net benefits compared to those of reasonably available alternatives or the pre-regulatory baseline. To be clear, I am not critiquing the use of this type of quantification, or suggesting that more could be done. I am simply pointing out the limits of what has been done.
On the one hand, then, nothing in Case Studies should be understood as suggesting that scoping exercises should not be done by financial regulatory agencies, when in the view of the staff economists (or, better, a combined team of economists and lawyers working on a given possible rulemaking project) such analyses will help them in their tasks. On the other hand, if such limited efforts to quantify are included in how a rule is presented to the public, then the limits of what they can tell us about the rule’s virtues and vices should be made clear. That means being clear that they are not going to accomplish what many analysts present as the goal of CBA: to discipline regulatory choices by generating precise and reliable estimates of the costs and benefits of regulation.38
Much more remains to be said about CBA of financial regulation. The topic is vast, and if I am right that the frontiers of the relevant science are constantly changing, the potential for valuable CBA of financial regulation is bound to change over time. The legal and political context in which regulations are adopted is also constantly changing. Case Studies should be read not as a general condemnation of CBA, economic analysis more broadly, or quantification. Rather, it is carefully and deliberately intended to be read as a picture of the current state of the art. Let us try to make that picture outdated as rapidly as possible.
John C. Coates IV is John F. Cogan, Jr. Professor of Law and Economics, Harvard Law School.
Preferred Citation: John C. Coates IV, Cost-Benefit Analysis of Financial Regulation: A Reply, 124 Yale L.J. F. 305 (2015), http://www.yalelawjournal.org/forum/cost-benefit-analysis-of-financial-regulation-a-reply.