To Presume Much: Federal agencies have a poor track record when estimating proposed regulations' costs and benefits.
By Sam Batkins
Countless rhetorical and political battles have been fought over
the merit or fault of particular federal regulations. Typically,
industry will muster ex ante estimates of a rule's costs during the
rulemaking process, and the agencies and their public interest allies
will respond with ex ante estimates of the public health and safety
benefits of the action.
This historical regulatory yin-and-yang often takes place without
ex post analysis of previous regulations or any assessment of
whether those rules had the effects that advocates or critics
claimed at the time they were adopted. Did the rule save the predicted
number of lives? Was pollution abated and can we attribute that
reduction to the regulation? Unfortunately, the number of truly
retrospective regulatory reviews is utterly dwarfed by the number of ex
ante regulatory fights. This needs to change, and thankfully
policymakers and analysts are beginning to recognize as much.
In recent years there has been a steady stream of retrospective
reviews from various sources offering new data on regulatory
performance. Generally, agency ex ante estimates of benefits have proven
inflated, sometimes wildly so. This shouldn't come as
a surprise, as agencies have heavy incentives to "sell" their
rules to the administration and the public. The research that currently
exists isn't enough to completely undermine the "omniscient
agency" narrative that regulatory proponents (and sometimes the
courts) profess, but sound retrospective review is steadily building the
case that agencies routinely rely on flawed assumptions and make
unreliable projections.
COST-BENEFIT ANALYSIS IN THE REGULATORY PROCESS
The genesis of cost-benefit analysis within the executive branch
dates to Presidents Lyndon Johnson and Richard Nixon. Allan Schmid, who
oversaw the Army Corps of Engineers at the time, argued that such
analysis should apply not only to public works projects, but also to
regulations. Nevertheless, neither president established a formal
cost-benefit analysis regime.
That changed under President Jimmy Carter's Executive Order
12044. The order established, among other things: a semiannual agenda
of regulations (known today as the Unified Agenda), a requirement that
significant regulations address new reporting and recordkeeping
requirements, an evaluation of the direct and indirect effects of a
rule, and the need to ensure that paperwork costs for the rule are
minimized.
Often unnoticed about EO 12044 is its retrospective review
component. The order called for a review of existing regulations to
analyze the continued need for certain rules and the burdens imposed.
As part of this new process to improve federal regulation, Carter
created the Office of Information and Regulatory Affairs (OIRA) in 1980
to conduct cost-benefit analysis. But it was President Ronald Reagan who
assigned cost-benefit analysis a central place in regulatory policy,
vowing that new regulations would not go forward unless "the
potential benefits to society for the regulation outweigh the potential
costs to society." This was one of the few times in history when
regulation actually declined.
Every succeeding president has upheld the importance of
cost-benefit analysis in regulation. Most recently, President Obama
reaffirmed a commitment to cost-benefit analysis when he issued
Executive Orders 13563, 13579, and 13610. In addition to common themes
from the past, Obama emphasized that some important values--equity,
fairness, and distributive effects--may be difficult to grasp
quantitatively. He also required executive agencies (and asked
independent agencies) to "engage in a periodic review of existing
significant regulations." Although the Carter administration also
promoted retrospective review, the Obama administration appeared to
actually commit to it, identifying more than 500 rules for review.
However, research suggests that some of those reviews merely provided
cover for new regulations, rather than the appraisal and reform of
existing rules.
Today, every cabinet agency engages in some form of cost-benefit
analysis for "economically significant" rules--measures with
an economic impact of $100 million or more. In addition, cabinet
agencies generally follow the Office of Management and Budget's
Circular A-4, which establishes guidelines for measuring benefits and
costs. For example, the U.S. Department of Energy (DOE) routinely
devotes countless pages of analysis to the net present value benefits
and costs of a rule, the annualized costs and benefits, and how the
regulation will affect consumer prices. For some large regulations, the
cost-benefit analysis in the preamble is accompanied by a separate
"Regulatory Impact Analysis" and several "Technical
Support Documents." Yet, these documents are almost always
non-existent for independent agencies, and agencies rarely carry out
their own retrospective studies. To date, the task of producing
regulatory lookbacks has largely fallen to third parties, and only a
handful exist.
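To make the arithmetic in these documents concrete, here is a minimal sketch of how a stream of annual costs and benefits is discounted to net present value and then annualized, using the 3 and 7 percent discount rates that Circular A-4 directs agencies to report. The dollar figures are hypothetical, chosen purely for illustration; they are not drawn from any actual rule.

# Illustrative only: hypothetical cost and benefit streams, not figures
# from any actual rulemaking. Circular A-4 asks agencies to report
# results at 3 and 7 percent discount rates.

def npv(stream, rate):
    # Discount a stream of annual values (years 1..N) to present value.
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream, start=1))

def annualize(present_value, rate, years):
    # Convert a present value into an equivalent constant annual amount.
    factor = rate / (1 - (1 + rate) ** -years)
    return present_value * factor

years = 20
costs = [120e6] * years      # hypothetical $120 million in annual costs
benefits = [150e6] * years   # hypothetical $150 million in annual benefits

for rate in (0.03, 0.07):
    net = npv(benefits, rate) - npv(costs, rate)
    print(f"{rate:.0%}: net benefits NPV = ${net / 1e6:,.0f} million, "
          f"annualized = ${annualize(net, rate, years) / 1e6:,.0f} million")

Under this hypothetical stream, net benefits are positive at both rates, but the point of the exercise is simply that every such bottom line depends on the projected streams that feed it; if the projections are wrong, the net present value is wrong with them.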
STUDIES IN ERROR
Although there have been several notable studies appraising
individual regulations, there have been few organized attempts to tackle
broad retrospective review. Thankfully, the public policy research group
Resources for the Future (RFF) launched a "Regulatory Performance
Initiative" aimed at documenting whether past regulation succeeded
and at what cost. In nine case studies, this undertaking has performed
34 retrospective reviews of the costs and benefits of environmental
rulemakings.
The findings? Regulators generally bungle their estimates. For
benefits, RFF found 10 of the 22 regulations or regulatory requirements
overestimated benefits by 25 percent or more; six others were
"relatively accurate," and six were underestimated. The
research found that the U.S. Environmental Protection Agency's air
toxics rules tended to exaggerate benefits more often than any other
policy area.
Even after this herculean research effort, RFF's Richard
Morgenstern stressed that good public policy needs comprehensive
retrospective review. He wrote, "The lack of funding for
retrospective assessments, both inside and outside of government, is
clearly a barrier to further progress." Despite that barrier, there
is now--thanks in part to RFF's work--a critical mass of
retrospective studies that paint an unflattering picture of agency error
on many levels.
Energy efficiency in Michigan / The analyses noted by RFF are not
the only federal cost-benefit studies to reach questionable results.
Consider the Obama administration's "Clean Power Plan,"
which proposes to reduce power plant emissions by 32 percent by 2030.
Buried in the "building blocks" of the rule is an ambitious
plan to increase energy efficiency. The more efficiently homes and
buildings use energy, the lower the demand for electricity generation
and the fewer the resulting greenhouse gas emissions.
But will the plan yield the desired fruit? In all its politicking
for the rule, the U.S. Environmental Protection Agency failed to cite a
recent National Bureau of Economic Research working paper on the utter
failure of a recent DOE energy efficiency program for consumers in
Michigan. According to the paper, the DOE's Weatherization Assistance
Program yielded an annual rate of return of roughly negative 9.5
percent. Although the program did reduce monthly energy consumption by
10 to 20 percent, the costs still trumped the benefits by 2.5-to-1.
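To see how an efficiency investment can carry a negative annual rate of return, consider a minimal sketch: an upfront expenditure followed by a stream of annual energy savings that never adds up to the investment. The numbers below are invented for illustration and are not the Michigan program's actual figures; the paper's own estimates rest on far richer household data.

# Hypothetical illustration of a negative return on an efficiency
# investment; these are invented numbers, not the Michigan program's data.

def npv(rate, upfront_cost, annual_saving, years):
    # Net present value of an upfront cost followed by annual savings.
    return -upfront_cost + sum(annual_saving / (1 + rate) ** t
                               for t in range(1, years + 1))

def irr(upfront_cost, annual_saving, years, lo=-0.99, hi=1.0, tol=1e-6):
    # Find the discount rate at which NPV = 0, by bisection.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, upfront_cost, annual_saving, years) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

cost = 4500    # hypothetical upfront weatherization cost ($)
saving = 180   # hypothetical annual energy savings ($)
life = 12      # hypothetical measure lifetime (years)

print(f"costs exceed undiscounted savings by {cost / (saving * life):.1f}-to-1")
print(f"implied annual rate of return: {irr(cost, saving, life):.1%}")

Even giving full credit for the measured energy savings, the return on this hypothetical investment is deeply negative, which is the general shape of the result the NBER authors report.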
This disparity might be explained by the "rebound effect"
of efficiency. That is, as efficiency improves and the cost of
consumption falls, consumers will be more inclined to increase their
energy consumption; e.g., setting their thermostats higher and heating
more rooms than they normally do. As the authors conclude,
The results are striking because Michigan's cold winters and
the likelihood that the weatherized homes were not in perfect
condition suggest that it may have been reasonable to expect
high returns in this setting. Regardless of one's priors, this
paper underscores that it is critical to develop a body of credible
evidence on the true, rather than projected, returns to energy
efficiency investments in the residential and other sectors.
Sadly, the EPA and other regulators tend to view efficiency rules
as free money. Don't expect this latest research to change that
view.
OSHA's missing fatalities / The EPA is hardly the only federal
agency to overestimate the benefits of its proposed regulations. An
assessment of six major Occupational Safety and Health Administration
(OSHA) regulations promulgated in the 1980s and 1990s found that each
rule overestimated the number of fatalities prevented. Using data from
the "Census on Fatal Occupational Injuries" and "National
Traumatic Occupational Fatality" figures, the analysts compared the
number of projected fatalities prevented from OSHA rules with actual ex
post fatalities. Their conclusions were stark: "In general, we
found little persuasive evidence provided to justify OSHA's
calculations."
In the first rule studied, "Electrical Work Practices for
General Industry," OSHA predicted a 41 percent reduction in
fatalities, or 97 lives annually. Part of the problem with retrospective
review is finding two numbers to compare. The authors of the study spent
considerable effort attempting to recreate and justify OSHA's
baseline. For the electrical work practices rule, they arrived at a
baseline annual fatality figure of 135, compared to 235 for OSHA.
Frequently, the authors found that OSHA would "correct" the
public figures on workplace fatalities by literally doubling the number
to account for underreporting. Despite the dispute over the baseline,
the authors note that a significant drop in deaths didn't occur
until seven years after the rule became effective. And even if they
attributed all of the decline to the rule, the mortality decline
"was also considerably lower than the 97 deaths projected by
OSHA."
For the remaining rules, the authors found that OSHA's
projections were "highly implausible," "unlikely,"
and "overoptimistic." In one rulemaking, "Electrical
Power Generation," the authors combed through three datasets on
occupational mortality and injuries and found that deaths slightly
declined the year that the standards went into effect, but then
increased during the next three years. No data source suggested an
overall decline because of the rule.
In the final rule examined, for workplace scaffolding protection,
OSHA projected roughly 42 fewer deaths because of the regulation.
However, even though the standards went into effect in 1997, the authors
found the number of fatalities declined by just four between 1996 and
2002. After consulting with an industry expert, they discovered no
intervening technological change that would have independently
increased or decreased scaffolding safety.
Fewer microwaves and air conditioners / The DOE is one of the most
prolific regulators in the federal government. That few people are aware
of this is perhaps just as shocking as the agency's regulatory tab:
more than $150 billion in net present value costs since 2007. Even
according to OIRA, the DOE is the third most burdensome regulator in the
federal government, behind the EPA and the Department of Transportation.
The DOE is active because policymakers believe that new energy
efficiency standards essentially act as "free money" for
consumers. A higher upfront purchase price for an efficient new product
supposedly will pay for itself over the coming years of reduced energy
use. (See "The Disappearing Benefits of Energy Efficiency," p.
4.) As discussed in the Michigan example above, these claims warrant
increased scrutiny.
A recent American Action Forum (AAF) paper I authored examines two
past DOE rules for microwaves and air conditioners, and the subsequent
new-unit shipment rates for those two goods. If the shipment rate drops
significantly below the agency's projections, then it's
unlikely the actual benefits would match the agency's estimates
because consumers end up using fewer of the more efficient products.
For both rules, the DOE's projected shipment rates ended up
being much higher than in reality. For air conditioners, the higher
purchase price likely led to a rush in orders the year before the
standards took effect. Then after the effective date, shipments fell
26.1 percent, compared to an agency estimate of 2.1 percent. For
perspective, this drop in orders occurred before the Great Recession, in
a time when unemployment hovered between 4.4 and 4.8 percent. Shipments
are now still below DOE projections; thus, Americans continue to operate
less efficient units, lowering potential benefits of the rule. I
conclude that the benefits of the rule, initially projected at $1.2
billion compared to $1.1 billion in costs, are now likely lower than the
annual burdens.
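A stylized way to see how the shipment shortfall erodes projected benefits: if per-unit savings were roughly as the DOE projected, realized benefits scale with the number of new, more efficient units actually sold. The sketch below applies that simple proportional assumption to the figures above; it is an illustration of the logic, not the DOE's model or the method used in the AAF paper.

# Stylized illustration: scale projected benefits by the ratio of actual
# to projected shipments. The proportional assumption is for illustration
# only; it is not the DOE's model or the AAF paper's method.

projected_benefits = 1.2e9          # DOE's projected benefits ($, from above)
projected_costs = 1.1e9             # DOE's projected costs ($, from above)

projected_shipment_change = -0.021  # DOE expected a 2.1 percent dip
actual_shipment_change = -0.261     # shipments actually fell 26.1 percent

# If benefits accrue only on new, efficient units actually shipped,
# realized benefits shrink roughly in proportion to the shortfall.
shipment_ratio = (1 + actual_shipment_change) / (1 + projected_shipment_change)
realized_benefits = projected_benefits * shipment_ratio

print(f"shipments relative to projection: {shipment_ratio:.0%}")
print(f"scaled benefits: ${realized_benefits / 1e9:.2f} billion "
      f"vs. ${projected_costs / 1e9:.2f} billion in projected costs")

On that crude scaling, the rule's benefits fall below its projected costs, consistent with the conclusion above, though the actual gap depends on modeling choices well beyond this back-of-the-envelope exercise.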
For microwaves, the DOE's erroneous projections are even more
pronounced. In the years before the new standard was implemented,
annual microwave shipments fell as a result of the 2007 financial crisis
and ensuing recession. Those lower shipment rates continued through the
end of the recession and the subsequent implementation of the rule, at
which time the actual shipment rate of 9.6 million was 33 percent lower
than the projected rate of 14.4 million. The lower shipment rate
persisted through 2014, the last year for which data are available.
Though it's difficult to argue that the new rule directly
contributed to the decline in air conditioner sales, the large
overestimate of shipment rates underscores the unreliability of agency
projections about the benefits of various regulations.
EPA and coal / In one of the most expensive regulations in recent
history, in 2012 the EPA finalized its Mercury and Air Toxics Standards
(MATS), ostensibly designed to regulate toxic gases and heavy metals
from coal-fired power plants. Although the cost-benefit balance tipped
in favor of benefits based on the cuts to particulate matter emissions,
the rulemaking naturally contained hundreds of assumptions about higher
IQs in children, reduced mortality and morbidity, and the future of the
coal industry.
If the past few years are any guide, the EPA has already missed the
mark on projecting the future of coal-fired generation capacity. The
agency estimated that coal-fired capacity would stand at 341,407
megawatts (MW) by 2013. Instead, capacity fell to 329,815 MW. Of course,
environmentalists would consider that decline a bonus benefit, but it
again underscores the unreliability of agency projections. And given the
Clean Power Plan, the gap between EPA predictions and reality will
likely continue to widen.
IMPROVING COST-BENEFIT ANALYSIS
The preceding case studies illustrate that even though there are
plenty of expert analysts at these agencies, they are not soothsayers.
They cannot see the future, and it is to be expected that their
cost-benefit analyses will not always prove accurate.
Does this mean the federal government should abandon cost-benefit
analysis? No; carefully weighing expected costs and benefits is a
fundamental part of policymaking, and if anything, cost-benefit analysis
should be expanded to the independent agencies, which are not bound by
executive order. But how can agencies improve their analysis?
Broadly speaking, even cabinet agencies that are required to
conduct cost-benefit analysis underperform on this task. The Mercatus
Center at George Mason University routinely tracks agency analyses and
finds them lacking in several respects. Its "Regulatory Report
Card," which attempts to measure the quality of agency analyses,
found that since 2012 the average cabinet regulation has scored just
13.3 out of a possible score of 30, easily an "F." Revising
executive orders to strengthen cost-benefit standards, providing the
public with advance notices of proposed rulemaking for billion-dollar
regulations, and improving OMB Circular A-4 could aid in
prospective analysis, but that's only half of the equation.
Engaging in a comprehensive system of retrospective analysis will
aid policymakers and regulators alike, allowing them to learn from each
new regulatory review. Here, the details matter. Under President
Obama's EOs 13563 and 13610, agencies are already supposed to
conduct retrospective reviews. However, agencies often recycle old
regulations and tighten regulatory requirements, rarely learning from
past attempts to regulate. An AAF study I conducted found that the
latest round of retrospective reports actually increased regulatory
costs by $14.7 billion and added 13.4 million paperwork burden hours. In
addition, for new rules, agencies should establish metrics for
measuring whether a regulation succeeds. Research
conducted by the George Washington University Regulatory Studies Center
indicates that cabinet agencies rarely do this.
What is likely required, although this would be unpopular in
conservative circles, is the creation of a new agency to conduct either
prospective analysis, retrospective review, or both. In a previous
Regulation article, Ike Brannon and I suggested that OIRA carry out this
work using economists reassigned from the regulatory agencies.
("Toward a New and Improved Regulatory Apparatus," Fall 2013.)
Other commentators favor the creation of a new independent agency to
manage the regulatory state. Another option is to place regulatory
economists and lawyers at a new branch of the Congressional Budget
Office, capitalizing on its strong reputation for forecasting budget and
tax data. Regardless of where this new analysis is conducted, it's
more than clear that the status quo should not continue.
Perhaps a final way to revamp cost-benefit analysis is to amend how
courts view agencies' current efforts. The Regulatory
Accountability Act would change the standard of judicial review of
agency cost-benefit analysis from the more deferential "arbitrary
and capricious" to "substantial evidence." The higher
standard would enable courts to find against agencies when their figures
lack sound evidence.
But the most important step in improving cost-benefit analysis is
conducting and learning from retrospective review. Whether such review
is conducted after five years or 10, any effort to discern the actual
effects of a rule would improve future rulemaking. If performed
correctly, it would inform regulators, the courts, Congress, and the
executive. As the Mercatus Center's Patrick McLaughlin has argued,
it would create a sort of positive feedback loop for better policy.
Given the agency missteps noted in this article (and many more that
could have been included), it's clear that regulatory
decisionmaking suffers from a lack of sound evidence. Any steps taken to
uncover more evidence about the effects of regulation would be most
welcome.
CONCLUSION
How correct are lofty agency presumptions about regulations having
large benefits and small costs? Generally, given the scant information
that exists now, the agencies tend to inflate the former while
low-balling the latter, though more evidence is needed before it can be
stated definitively that this failure is pervasive across the regulatory
state.
The federal government issues around 80 major rules each year, and
maybe two or three will draw the attention of scholars interested in the
actual effects of those rules. More research is needed to determine whether
regulators provide dependable analysis or just engage in bureaucratic
propaganda.
READINGS
* "Administration's July 2015 'Regulatory
Review' Add $14.7 Billion in Costs," by Sam Batkins. American
Action Forum, August 25,2015.
* "Assessing the Accuracy of OSHA's Projections of the
Benefits of New Safety Standards," by Si Kyung Seong and John
Mendeloff. American Journal of Industrial Medicine, Vol. 45, No. 4 (April 2004).
* "Do Energy Efficiency Investments Deliver? Evidence from the
Weatherization Assistance Program," by Meredith Fowlie, Michael
Greenstone, and Catherine Wolfram. National Bureau of Economic Research
Working Paper No. 21331, July 2015.
* "The Department of Energy: Under the Radar, Overly
Burdensome," by Sam Batkins. American Action Forum, Oct. 9,2015.
SAM BATKINS is director of regulatory policy at the American Action
Forum.