Policy analytical capacity and evidence-based policy-making: lessons from Canada.
Howlett, Michael
Policy analysis is a relatively recent movement, dating back to the
1960s and the U.S. experience with large-scale planning processes in
areas such as defence, urban redevelopment and budgeting (Behn 1981;
Garson 1986; Lindblom 1958; MacRae and Wilde 1985; Wildavsky 1969). Seen
as a social movement, it represents the efforts of actors inside and
outside formal political decision-making processes to improve policy
outcomes by applying systematic evaluative rationality to public
problems and concerns (Aberbach and Rockman 1989; Mintrom 2007). There
have been debates about whether policy analysis has improved on the
outcomes associated with processes such as bargaining, compromise,
negotiation and log-rolling that are less instrumental (Uhr 1996;
Colebatch 2006; Majone 1989). However, from within the policy analytical
community, there has been no fundamental challenge to the primary raison
d'etre of policy analysis: to improve policy outcomes by applying
systematic analytic methodologies to policy appraisal, assessment and
evaluation (MacRae 1991; Nilsson et al. 2008; Radin 2000).
Evidence-based or "evidence-informed" policy-making represents a recent effort, once again, to reform or restructure policy processes by prioritizing evidentiary decision-making criteria (Nutley, Walter, and Davies 2007; Pawson 2006; Sanderson 2006). This is being done in an
effort to avoid or minimize policy failures caused by a mismatch between
government expectations and actual, on-the-ground conditions. The
evidence-based policy movement is thus the latest in a series of efforts
undertaken by reformers in governments over the past half-century to
enhance the efficiency and effectiveness of public policy-making. In all
of these efforts, it is expected that through a process of theoretically
informed empirical analysis, governments can better learn from
experience and both avoid repeating the errors of the past and better apply new techniques to the resolution of old and new problems
(Sanderson 2002a; May 1992).
Exactly what constitutes "evidence-based policy-making"
and whether analytical efforts in this regard actually result in better
or improved policies, however, are topics that remain contentious in the
literature on the subject (Boaz et al. 2008; Jackson 2007; Packwood
2002; Pawson 2002). A spate of studies, for example, has questioned the
value of a renewed emphasis on the collection and analysis of large
amounts of data in policy-making circumstances (Tenbensel 2004). Among
the concerns raised about an increased emphasis on evidence in
contemporary policy-making are the following:
1. Evidence is only one factor involved in policy-making and is not
necessarily able to overcome other factors such as constitutional
divisions of powers or jurisdictions, which can arbitrarily assign
locations and responsibilities for particular issue areas to specific
levels or institutions of government and, by so doing, diminish the rationality of policy-making (Davies 2004; Radin and Boase 2000; Young
et al. 2002).
2. The data collection and analytical techniques employed by specially trained policy technicians may not necessarily be superior to the experiential judgments of politicians and other key policy decision-makers (Jackson 2007; Majone 1989).
3. The kinds of "high-quality" and universally
acknowledged evidence initially proposed when "evidence-based
policy-making" first entered the lexicon of policy analysts in the
health-care field--especially the "systematic review" of
clinical findings--often have no analogue in many policy sectors, where generating evidence using the "gold standard" of randomized clinical trial methodologies may not be possible (Innvaer et al. 2002;
Pawson et al. 2005).
4. An increased emphasis on evidence-based policy-making can
stretch the analytical resources of participating organizations, be they
governmental or non-governmental, to the breaking point (Hammersley
2005). That is, government efforts in this area may have adverse consequences both for governments themselves, which may be required to devote greater expenditures to analytical activities at the expense of operational ones, and for many nongovernmental policy actors, such as small-scale NGOs whose analytical resources may be non-existent and who may be forced to divert financial, personnel and other scarce resources from implementation activities to policy-making in order to meet increased government requests for more and better data on the merits and demerits of their proposed policy solutions and programs (Laforest and Orsini 2005).
Some of these concerns are misplaced or easily refuted and can be
seen to result from an overly rationalistic view of policy-making
(Howlett, Ramesh, and Perl 2009). For example, with respect to the first
concern about the adverse effects of constitutional and institutional
orders on policy-making, such orders can almost always be taken as a
meta-contextual "given" within which any form of analysis,
evidence-based or otherwise, must take place. Concerns in this area then
apply to all kinds of policy analysis and are not a function of any
increased emphasis on knowledge utilization. With respect to the second
concern--that the evidence-based movement represents a return to early
ideas about technocratic, expert-driven policy-making and that such
forms of policy analysis are not necessarily superior to political
judgments based on experience--it should be noted that unlike many
earlier efforts at improving policy through analysis that did rely on an
underlying apolitical, technocratic view of optimal policy-making,
evidence-based policy-making represents a compromise between political
and technocratic views of policy-making. That is, it relies on the
notion of policy-making not as a purely rational affair but as an
exercise in pragmatic judgment, whereby political, ideological or other
forms of "non-evidence-based" policy-making are tempered by an
effort on the part of policy specialists to "speak truth to
power"--to present evidence to policy-makers that supports or
refutes specific policy measures as appropriate to resolve identified
policy problems (Sanderson 2002b; Wildavsky 1979), but does not attempt
to replace policy-makers' judgment with their own (Head 2008; Tenbensel 2004).
The same is true with respect to the third concern--that systematic
reviews do not exist in many sectors and issue areas. This should be
seen more as a criticism of the lack of effort expended to date in
collecting more and better data in many policy sectors than as a
critique of the idea of the enhanced utilization of systematically
compiled and assessed evidence in public policy formulation and
decision-making (Qureshi 2004; Shaxson 2004; Warburton and Warburton
2004; Young et al. 2002).
The fourth concern--that many policy actors may not have the
ability or resources required to carry out evidence-based
policy-making--is much more serious. An increased emphasis on the use of
evidence in policy-making requires that policy actors, and especially
governmental ones, have the analytical capability required to collect
appropriate data and utilize it effectively in the course of
policy-making activities. As such, it highlights the fact that a
significant factor affecting the ability of policy-makers to engage at
all in evidence-based policy-making pertains to the level of both
governmental and non-governmental actors' "policy analytical
capacity."
Overall, Norman Riddell summarized the requisites of policy
analytical capacity as lying in "a recognized requirement or demand
for research; a supply of qualified researchers; ready availability of
quality data; policies and procedures to facilitate productive
interactions with other researchers; and a culture in which openness is
encouraged and risk taking is acceptable" (2007: 7). If
evidence-based policy-making is to be achieved, policy actors require
the ability to collect and aggregate information in order to effectively
develop medium- and long-term projections, proposals for, and
evaluations of future government activities. Organizations both inside
and outside of governments require a level of human, financial, network
and knowledge resources enabling them to perform the tasks associated
with managing and implementing an evidence-based policy process. Without this, they might marshal these resources only in particular areas, resulting in a "lumpy" set of departmental or agency competences in which some agencies are able to plan and prioritize over the long term while others focus on shorter-term issues; or, if these resources are evenly distributed, they may only be able to react to short- or medium-term political, economic or other challenges and imperatives occurring in their policy environments (Voyer 2007).
The evidence-based policy movement, thus, must confront the fact
that many recent studies suggest that the level of policy analytical
capacity found in many government and non-governmental organizations is
low, a fact that may contribute significantly to the failure of efforts
to enhance evidence-based policy-making and its ancillary goal of
improving policy-making processes and policy-making outcomes. The
theory, concepts and evidence lying behind the idea of policy analytical
capacity and its relationship to evidence-based policy-making are set
out below.
Evidence-based policy-making as an effort to avoid policy failures
and enhance the potential for policy success through policy learning
Evidence-based policy-making represents an attempt to enhance the
possibility of policy success by improving the amount and type of
information processed in public policy decision-making as well as the
methods used in its assessment (Morgan and Henrion 1990; Nilsson et al.
2008). Based on the idea that better decisions are those that
incorporate the most available information, it is expected that
enhancing the information basis of policy decisions will improve the
results flowing from their implementation, while iterative monitoring
and evaluation of results in the field will allow errors to be caught
and corrected. Through improved information processing and utilization,
it is expected that policy learning will be enhanced and result, at
minimum, in the avoidance of policy failures or a reduction in the
chances of their occurrence, with, accordingly, an increase in the
potential for additional or greater success in attaining policy goals
and anticipated or expected policy outcomes (Bennett and Howlett 1992;
March 1981, 1994).
Evaluating these claims is difficult, of course, not least due to
the fact that policies can succeed or fail in numerous ways, with
significant variations at different levels of severity and aggregation.
Sometimes an entire policy regime can fail, for example, while more
often specific programs within a policy field may be designated as
successful or unsuccessful. The most egregious cases are when an entire
policy regime substantively fails, a failure that is typically very
public and obvious to voters and the public at large. In such cases,
policy-makers may acknowledge that mistakes were made and attempt to
gather information in order to clarify the reasons why the failure
occurred and suggest alternative routes that would avoid repeating the
same or similar errors in future. This can involve the use of
commissions and other types of inquiries linked to an evidence-based
perspective (Aucoin 1990; Bulmer 1981) but can also often result in a
strictly partisan critique based more on ideological and electoral
considerations than on gathering and processing information (Hird 2005;
Schmidt 1996). More often, however, it is specific programs within a
policy field that are designated as unsuccessful. Such failures are
often much less visible to non-experts and do not entail the threat of a
general legitimation crisis requiring public intervention in the form of
a commission or inquiry (Schudson 2006). Efforts to deal with such
failures typically would include specialized legitimation-building
exercises within the relevant policy community, such as specialized
consultations with experts as part of efforts designed to correct such
failures (Peters and Barker 1993). These latter failures are typically
more amenable to the knowledge generation and utilization activities
associated with evidence-based policy-making than are the more
generalized and highly publicized regime failures.
Similarly, policies and programs can also succeed or fail either in
substantive terms--that is, as delivering or failing to deliver expected material outcomes, whether objectively or as perceived--or, in procedural
terms, as being legitimate or illegitimate, fair or unfair, just or
unjust in their formulation, implementation or results. Such judgments
are often themselves highly unsystematic and partisan in nature. In
fact, such judgments involve most of the key actors arrayed in policy
subsystems in a variety of formal and informal venues for assessing and
critiquing policy outcomes and processes. They almost always involve
officials and politicians within government dealing with the policy in
question but may also involve members of the public, who often will have
the ultimate say on a government's policy record when they vote at
elections, and members of relevant interest groups, political parties,
think tanks, the media, and other policy actors (Bovens, 't Hart,
and Peters 2001; Brandstrom and Kuipers 2003). The political nature of
judgments about policy success and failure also implies that such
assessments will rarely be unanimous. This is in part due to the fact
that political evaluations depend on the imputation of notions of
intentionality to government actors made by policy evaluators--so that
the results of policy-making can be assessed against expectations. This
is often a highly partisan, controversial and much less than neutral or
objective task because 1) government intentions themselves can be,
intentionally or otherwise, very vague and ambiguous, secret, or even
potentially contradictory or mutually exclusive; 2) labels such as
"success" and "failure" are inherently relative and
will be interpreted differently by different policy actors; and 3)
government policies take time to put into place and circumstances may
change in such a way as to render moot initial government assessments
of policy contexts and judgments of the severity of policy problems and
the appropriateness of particular policy tools for their solution.
Designations of policy success and failure are semantic tools themselves
used in public debate and policy contestation in order to seek political
advantage (Hood 2002; Sulitzeanu-Kenan and Hood 2005; Weaver 1986).
Policy evaluations bear on the assignment of blame and the taking of credit for government activities at all stages of the policy process, which can have electoral, administrative and other consequences for policy actors and affect the susceptibility of the evaluations to evidence-based criteria. Thus the
sites of judgments of policy success and failure are broader than often
suggested, and the resulting cacophony of judgments and evaluations can
make the analysis of success and failure quite difficult.
Despite these ontological and epistemological issues, however, as
proponents of enhanced evidence-based or evidence-informed policy-making
have observed, it is possible to make some headway in assessing policy
success and failure by examining the role evidence and knowledge play in
the specific types of policy failures identified in the different stages
of the policy-making process by the more systematic academic literature
on the subject.
Types of policy failures and the role played by evidence therein
Unlike popular commentators, who more often than not tend to blame policy failures on the personality quirks and psychological limitations of participants (such as stupidity, venality or corrupt behaviour on the part of politicians and administrators) or on associated innate organizational failings (the "bureaucratic mentality"), the
academic literature on policy failures has found that clearly
identifiable policy failures have tended to occur only in very specific
circumstances and have little to do with the psychological propensities
of policy participants. They include the following situations where
1. an overreaching government has attempted to address
"unaddressable" or "wicked" problems, where neither
the cause of a problem nor the solution to it is well known (Churchman
1967; Pressman and Wildavsky 1973);
2. governments have failed to properly anticipate the consequences
of their proposed courses of action or the general susceptibility of
their policy or administrative systems to catastrophic and other kinds
of collapse (Bovens and 't Hart 1995, 1996; Perrow 1984; Roots
2004);
3. a variety of "implementation failures" have occurred
in which the aims of decision-makers have failed to be properly or
accurately translated into practice (Ingram and Mann 1980; Kerr 1976),
often where there has been a lack of effective oversight over
implementers on the part of decision-makers (Ellig and Lavoie 1995;
McCubbins and Lupia 1994; McCubbins and Schwartz 1984); and
4. governments and policy-makers have failed to effectively
evaluate policy processes and outcomes and/or have failed to learn the
appropriate lessons from their own and other governments' previous
experiences (May 1992; Scharpf 1986).
Each of these sources of failure originates in a different stage of
the policy cycle (see Table 1 below).
Each source of failure is also amenable, at least in theory, to
improvement through better information management in policy-making, as
proposed by proponents of evidence-based policy-making. Solutions for
overreaching governments, for example, lie in better information being
provided to policy-makers on their capabilities; better research and
information on problem causes and policy effects can turn some
apparently wicked problems into more manageable ones; better risk
analysis can hedge against future consequences; better information
systems can be implemented to aid implementation and enhance oversight;
and more attention paid to policy-monitoring and feedback processes can
help ensure better evaluation of program and policy results and more
effective policy-learning (see Table 2). Policy-makers and managers
interested in avoiding these common sources of policy failure and
enhancing the potential for greater policy success can address some of
these causes of failure by insisting that government intentions be
clarified and made consistent with resource endowments and can, at the
same time, insist that criteria for measuring policy goals and their
rationales be clearly specified. And they can continually monitor
changing circumstances and alter some aspects of policies as these
circumstances unfold (Anderson 1996; Hawke 1993; Uhr and Mackay 1996;
Waller 1992).
Inspection of Table 2 shows that a significant factor affecting
policy failures and their management through enhanced evidence-based
policymaking is closely related to governmental and non-governmental
"policy analytical capacity." That is, each of the managerial
strategies set out in Table 2 involves improvement of some aspect of
information management for policy analysis. Enhancing policy analytical
capacity is, in fact, an essential precondition for the adoption of
evidence-based policy-making and the improvement of policy outcomes
through its application, an essential precondition that is often ignored
or downplayed in the literature.
Defining policy analytical capacity
Policy capacity can be defined as
a loose concept which covers the whole gamut of issues associated
with the government's arrangements to review, formulate and
implement policies within its jurisdiction. It obviously includes the
nature and quality of the resources available for these
purposes--whether in the public service or beyond--and the practices and
procedures by which these resources are mobilized and used (Fellegi
1996: 6).
While policy capacity can be thought of as extending beyond
analysis to include the actual administrative capacity of a government
to undertake the day-to-day activities involved in policy implementation
(Painter and Pierre 2005; Peters 1996), policy analytical capacity is a
more focused concept related to knowledge acquisition and utilization in
policy processes (Adams 2004; Leeuw 1991; Lynn 1978; MacRae 1991;
Radaelli 1995). It refers to the amount of basic research a government
can conduct or access, its ability to apply statistical methods, applied
research methods, and advanced modelling techniques to this data and
employ analytical techniques such as environmental scanning, trends
analysis, and forecasting methods in order to gauge broad public opinion
and attitudes, as well as those of interest groups and other major
policy players, and to anticipate future policy impacts (O'Connor,
Roos, and Vickers-Willis 2007; Preskill and Boyle 2008). It also
involves the ability to communicate policy-related messages to
interested parties and stakeholders and includes "a
department's capacity to articulate its medium- and long-term
priorities" (Fellegi 1996: 19) and to integrate information into
the decision-making stage of the policy process. (1) These fundamental
elements or components of policy analytical capacity are set out in
Table 3 below.
The policy functions outlined above require either a highly trained, and hence expensive, workforce, supported by far-seeing and future-oriented management, excellent information collection and data-processing capacities, and opportunities for employees to strengthen their skills and expertise (O'Connor, Roos, and Vickers-Willis 2007), or the ability to outsource policy research to
similarly qualified personnel in private or semi-public organizations
such as universities, think tanks, research institutes and consultancies
(Boston 1994). It also requires sufficient vertical and horizontal
coordination between participating organizations to ensure that research
being undertaken is relevant and timely. "Boundary-spanning"
links between governmental and non-governmental organizations are also
critical (Weible 2008). As George Anderson has noted, "a healthy
policy-research community outside government can play a vital role in
enriching public understanding and debate of policy issues, and it
serves as a natural complement to policy capacity within
government" (1996: 486).
Assessing policy analytical capacity in practice
Whether or not, and to what degree, government and non-governmental
policy actors in a policy analytical community have the capacity to
actually fulfil these tasks remains an important and largely unanswered
empirical question in the study of evidence-based policy-making
(Turnpenny et al. 2008; Wollmann 1989).
Studies of the actual behaviour and job performance of policy
analysts, for example, have constantly challenged the view often put
forward in academic texts that policy analysis is all about the neutral,
competent and objective performance of tasks associated with the
application and use of a small suite of technical policy analytical
tools on the part of governmental or non-governmentally based analysts
(Boardman et al. 2001; Boston 1994; Durning and Osama 1994; Patton and
Sawicki 1993). This brings to the fore the question, "What do policy analysts actually do in contemporary governmental and non-governmental organizations?" And, related to this, "Are their training and resources appropriate to allow them to meet the requisites of evidence-based policy-making?" (New Zealand, State Services
Commission 1999; Weller and Stevens 1998).
At present, only very weak and partial, usually anecdotal,
information exists on the situations found in different countries. Over thirty years ago, Arnold Meltsner (1976) observed in the case of the
U.S. that analysts undertook a number of roles in the policy-making
process, most of which did not involve neutral information processing
and analysis. Later observers, such as Beryl Radin (2000), Nancy Shulock
(1999) and Sean Gailmard and John Patty (2007) observed much the same
situation, along with a propensity for politicians to continually
re-enact the same failed policies in many problem areas (Schultz 2007).
In the U.K. and Germany, for example, contrary to the picture of
carefully recruited analysts trained in policy schools to undertake
specific types of microeconomic-inspired policy analysis (Weimer and
Vining 1999), investigators such as Edward Page and Bill Jenkins (2005)
and Julia Fleischer (2009) have provided some empirical evidence that
British and German policy-making typically features a group of
"policy process generalists" who rarely, if ever, deal with
policy matters in the substantive areas in which they were trained and
who have, in fact, very little training in formal policy analysis
techniques such as cost-benefit analysis or risk assessment. As Page and
Jenkins concluded,
The broad features of our characterization of UK policy bureaucracy
are that policy officials at relatively junior levels are given
substantial responsibility for developing and maintaining policy and
servicing other, formally superior officials or bodies, often by
offering technical advice and guidance. These people are not technical
specialists in the sense that they develop high levels of technical
expertise in one subject or stay in the same job for a long time. They
are often left with apparently substantial discretion to develop policy
because they often receive vague instructions about how to do their
jobs, are not closely supervised, and work in an environment that is in
most cases not overtly hierarchical (2005: 168).
Similar findings have been made in the cases of the Netherlands,
Australia and New Zealand, by Robert Hoppe and Margarita Jeliazkova
(2006), Patrick Weller and Bronwyn Stevens (1998) and Jonathan Boston
and his colleagues (1996), respectively.
Policy analytical capacity in Canada
How does Canada shape up with regard to this important indicator
(and predictor) of the successful application of enhanced evidence-based
policymaking and, ultimately, improved policy success through the
avoidance of policy failures? Little is known about the supply and
demand for policy analysis in Canada, although recent critiques of the
pedagogy of public administration and public policy programs suggest
there is good reason to suspect that a significant gap between pedagogy
and practice may exist in this country (Gow and Sutherland 2004).
Current evidence suggests that, with the possible exception of some
major Canadian business associations and corporations (Stritch 2007),
capacity in the non-governmental sector is very limited. This is true of
a majority of actors involved in the Canadian labour movement (Jackson
and Baldwin 2007), the voluntary sector (Laforest and Orsini 2005;
Phillips 2007), as well as the media (Murray 2007), think tanks (Abelson
2002, 2007), and political parties (Cross 2007), most of which have very
few, if any, permanent staff employed to conduct policy analysis of
any kind. In many cases, analysis is carried out by consultants rather
than paid staff, contributing to the transitory nature of much program
design and policy analysis in Canada. However, even less is known about
the training and activities of this "invisible public service"
(Bakvis 2000; Perl and White 2002; Saint-Martin 1998; Speers 2007).
This portrayal of a generally impoverished and low-capacity policy
analytical community pushes the emphasis for the prospects of enhanced
evidence-based policy-making back onto Canadian governments, which, in
theory at least, have access to the kinds of personnel, treasure and
organizational resources that would allow them to construct substantial
policy analytical capacity. What little is known about the actual work
of policy analysts in contemporary Canadian governments, however,
reveals a picture of a very "lumpy" or uneven distribution of
policy analytical capacity, varying by level of government and by
department or agency involved.
Early works in the late 1970s and early 1980s on the emerging
policy analysis professions provided little empirical evidence of what
analysts actually did in practice (Prince 1979; Prince and Chenier 1980)
but rather often simply assumed they would contribute to the increased
rationality of policy-making through the application of systematic
analytical techniques such as cost-benefit analysis to the evaluation of
policies and policy alternatives. Studies undertaken by federal
government analysts, however, raised doubts about this picture (French
1980; Hartle 1978). Later studies, in the 1990s, also noted the growth
and subsequent decline of employment of policy analysts in government
and their limited capacity for developing long-term strategic advice to
governments (Bennett and McPhail 1992; Hollander and Prince 1993). Work
since the early 1990s has suggested that the tasks of policy analysts may be
shifting, as in the U.K. and the other countries cited above, towards an
increased emphasis on policy process design and network management
activities and away from "formal" types of policy analysis
(Howlett and Lindquist 2004; Lindquist 1992).
This general pattern, however, varies greatly by level of
government and, within each level, by the agency or department involved
(Dobuzinskis, Howlett, and Laycock 2007). The current policy analytical
capacity of the Canadian federal government, for example, although
highly varied in terms of its distribution among departments and between
departments and central agencies (Bakvis 1997, 2000; Voyer 2007), is
reasonably high by historical and comparative standards (Prince 2007).
Resources were cut during the budgetary crises of the 1980s and 1990s,
setting back analytical capacity to levels not seen since the 1970s
(Bakvis 2000; Hollander and Prince 1993). However, the federal
government and several provinces eliminated their deficits in the late
1990s and began to revitalize their civil services in their new-found
surplus positions. Federal government policy capacity needed
re-energizing after the cuts of the 1980s and 1990s, particularly in key
departments tasked to assist in identifying new priorities and
strategies (Lindquist and Desveaux 1998), and efforts specifically
directed at enhancing policy capacity were undertaken, beginning with
Ivan Fellegi's 1996 Task Force on Strengthening Our Policy Capacity
(Bourgon 1996). In the late 1990s, the Policy Research Initiative (PRI)
promoted collaboration with an ever-expanding array of university
institutes and think tanks (Bakvis 2000; Voyer 2007) as a way to
re-build federal policy analytical capacity in the new era of
participatory governance. This is significant since the number and range
of players in policy areas such as climate change--governments, interest
groups, think tanks, aboriginal communities, NGOs, international
organizations and others--have expanded, along with the range of issues
with which analysts must now be concerned (Lindquist 1992). Federal
policy development processes now typically contain mandated criteria for
consultations, and political leaders and administrators routinely access
polling data and conduct focus groups as part of the standard process of
policy development. This has created a far more complicated
policy-making environment for governments, since different strategies
for building arguments and cases for policy initiatives and far more
consultation are required than in past eras (Howlett and Lindquist
2004). Ultimately, highly centralized and well-resourced decisionmaking
systems eventually took shape in the hands of the prime minister and the
minister of finance (Bernier, Brownsey, and Howlett 2005; Savoie 1999),
with a focus on policy performance management (Saint-Martin 1998). This
results orientation has led the federal government to increasingly
promote horizontal and holistic analyses of policy problems, such as
climate change adaptation, and to try to better align initiatives across
governments and sectors, including recruitment and retention of policy
analysts, in order to "even out" the uneven distribution of
capacities across departments and units (Aucoin and Bakvis 2005).
Whether or not the policy analytical capacity of the federal
government has grown sufficiently to deal with this increased scope,
range and complexity is uncertain, but there is little doubt that
analytical capacity has improved since its nadir in the late 1980s
(Wellstead, Stedman, and Lindquist 2007). However, evidence at the
provincial, territorial and local levels--although much less extensive
than at the federal level--suggests that policy analytical capacity at
these levels is much weaker (McArthur 2007; Rasmussen 1999; Stewart and
Smith 2007) and leads to a short-term focus in many policies and
programs adopted at these levels of government. Nevertheless, efforts--such
as the Policy Excellence Initiative in Nova Scotia, the Knowledge and
Information Services initiative in British Columbia, the Policy
Innovation and Leadership project in Ontario, as well as cabinet-level
initiatives in Yukon, Manitoba, Newfoundland and Labrador, and
Alberta--are underway in many jurisdictions to systematically grapple
with this issue (Ontario, Executive Research Group 1999; Hicks and
Watson 2007; Manitoba, Office of the Auditor General 2001; Nova Scotia,
Policy Excellence Initiative 2007).
Conclusion
In their 2006 study of the policy analytical activities undertaken
in the U.S., the U.K. and several other European countries, H.K.
Colebatch and Beryl Radin concluded that there are currently three areas
of priority for contemporary research work on policy analysis:
1. "We need more empirical research on the nature of policy
work in specific contexts: how policy workers (and which sort) get a
place at the table, how the question is framed, what discourse is
accepted as valid, and how this work relates to the outcome at any point
in time";
2. "What sort of activity do practitioners see as policy work,
and what sort of policy workers do they recognize"; and
3. "There are questions for teaching and professional
preparation" that will derive from these first two studies
(Colebatch and Radin 2006: 225).
These are all important observations, both for the evaluation of
the capacity of policy analytical communities to undertake high-level,
long-term policy analysis and for the possibility of enhancing
evidence-based policymaking processes and procedures in government. The
set of jobs and duties actually performed by policy analysts in both
government and non-governmental organizations is very closely tied to
the resources they have at their disposal in terms of personnel and
funding, the demand they face from clients and managers for high-quality
results, and the availability of high-quality data and information on
future trends.
In Canada, recent work provides some evidence of the activities of
analysts in a wider range of situations, both inside and outside of
government, than has usually been considered or investigated in the
past. The basic "sociology" of policy analysis in Canada--who policy analysts are and what they actually do--and its pedagogy--how they are trained and how well their training fits their jobs--suggest the existence of a generally government-dominated policy
analytical community but also a very mixed pattern of policy analytical
capacity by jurisdiction and administrative unit, with some central and
departmental-level units in the federal government displaying the
highest capacity and some provincial and local government agencies the
lowest (Dobuzinskis, Howlett, and Laycock 2007).
The weak policy capacity found among most of the major actors
involved in policy analysis, even in rich countries like Canada, is very problematic for efforts to improve policy-making through the adoption of evidence-based techniques for dealing with complex contemporary policy problems. The short-term
focus it often promotes, for example, is very ill-suited for the
development of the ongoing and long-term solutions required to deal with
large multifaceted contemporary problems like climate change mitigation
and adaptation (Adamowicz 2007).
Ultimately, both governments and, increasingly, non-governmental
actors in Canada and elsewhere are being asked to design effective
long-term policy measures to deal with such problems without necessarily
having the kinds of resources they require to successfully avoid common
policy failures through the use of enhanced evidence-based analytical
techniques. Without prior or at least concurrent efforts to enhance
policy analytical capacity, unfortunately, "failure may be the only
option" available to governments in their efforts to deal with
critical contemporary policy challenges.
References
Abelson, Donald E. 2007. "Any ideas? Think tanks and policy
analysis in Canada." In Policy Analysis in Canada: The State of the
Art, edited by Laurent Dobuzinskis, Michael Howlett, and David Laycock.
Toronto: University of Toronto Press.
--. 2002. Do Think Tanks Matter? Assessing the Impact of Public
Policy Institutes. Kingston and Montreal: McGill-Queen's University
Press.
Aberbach, Joel D., and Bert A. Rockman. 1989. "On the rise,
transformation, and decline of analysis in the US government."
Governance 2 (3) July: 293-314.
Adams, D. 2004. "Usable knowledge in public
policy." Australian Journal of Public Administration 63 (1) March:
29-42.
Adamowicz, Wiktor. 2007. "Reflections on environmental policy
in Canada." Canadian Journal of Agricultural Economics 55 (1)
March: 1-13.
Anderson, George. 1996. "The new focus on the policy
capacity of the federal government." Canadian Public Administration
39 (4) Winter: 469-88.
Aucoin, Peter. 1990. "Contribution of commissions of inquiry
to policy analysis: An evaluation." In Commissions of Inquiry,
edited by A.P. Pross, I. Christie, and J.A. Yogis. Toronto: Carswell.
Aucoin, Peter, and Herman Bakvis. 2005. "Public service reform
and policy capacity: Recruiting and retaining the best and the
brightest." In Challenges to State Policy Capacity: Global Trends
and Comparative Perspectives, edited by M. Painter and J. Pierre.
London: Palgrave Macmillan.
Bakvis, Herman. 1997. "Advising the executive: Think
tanks, consultants, political staff and kitchen cabinets." In The
Hollow Crown: Countervailing Trends in Core Executives, edited by P.
Weller, H. Bakvis, and R.A.W. Rhodes. New York: St. Martin's Press.
--. 2000. "Rebuilding policy capacity in the era of the fiscal
dividend: A report from Canada." Governance 13 (1) January: 71-103.
Behn, Robert D. 1981. "Policy analysis and policy
politics." Policy Analysis 7 (2): 199-226.
Bennett, C.J., and M. Howlett. 1992. "The lessons of learning:
Reconciling theories of policy learning and policy change." Policy
Sciences 25 (3) September: 275-94.
Bennett, Scott, and Margaret McPhail. 1992. "Policy process
perceptions of senior Canadian federal civil servants: A view of the
state and its environment." Canadian Public Administration 35 (3)
Autumn: 299-316.
Bernier, Luc, Keith Brownsey, and Michael Howlett (eds.). 2005.
Executive Styles in Canada: Cabinet Structures and Leadership Practices
in Canadian Government. Toronto: University of Toronto Press.
Boardman, Anthony E., David Greenberg, Aidan Vining, and David
Weimer (eds.). 2001. Cost-Benefit Analysis: Concepts and Practice. Upper
Saddle River, N.J.: Prentice Hall.
Boaz, Annette, Lesley Grayson, Ruth Levitt, and William Solesbury.
2008. "Does evidence-based policy work? Learning from the UK
experience." Evidence and Policy 4 (2): 233-53.
Boston, Jonathan. 1994. "Purchasing policy advice: The limits
of contracting out." Governance 7 (1) January: 1-30.
Boston, Jonathan, John Martin, June Pallot, and Pat Walsh. 1996.
Public Management: The New Zealand Model. Auckland: Oxford University
Press.
Bourgon, Jocelyn. 1996. "Strengthening our policy
capacity." In Rethinking Policy: Strengthening Policy Capacity,
Conference Proceedings. Canadian Centre for Management Development.
Ottawa: Supply and Services Canada.
Bovens, M., and P. 't Hart. 1995. "Frame multiplicity and
policy fiascos: Limits to explanation." Knowledge and Policy 8 (4):
61-83.
--. 1996. Understanding Policy Fiascos. New Brunswick, N.J.:
Transaction Press.
Bovens, M., P. 't Hart, and B.G. Peters. 2001. "Analysing
governance success and failure in six European states." In Success
and Failure in Public Governance: A Comparative Analysis, edited by M.
Bovens, P. 't Hart, and B.G. Peters. Cheltenham, U.K.: Edward
Elgar.
Brandstrom, Annika, and Sanneke Kuipers. 2003. "From
'normal incidents' to 'political crises':
Understanding the selective politicization of policy failures."
Government and Opposition 38 (3) Summer: 279-305.
Bulmer, M. 1981. "Applied social research? The use and non-use
of empirical social inquiry by British and American governmental
commissions." Journal of Public Policy 1 (3): 353-80.
Churchman, C.W. 1967. "Wicked problems." Management
Science 14 (4) December: B141-B142.
Colebatch, H.K. (ed.). 2006. The Work of Policy: An International
Survey. Lanham, Maryland: Rowman & Littlefield.
Colebatch, H.K., and Beryl A. Radin. 2006. "Mapping the work
of policy." In The Work of Policy: An International Survey, edited
by H.K. Colebatch. Lanham, Maryland: Rowman & Littlefield: 217-26.
Cross, William. 2007. "Policy study and development in
Canada's political parties." In Policy Analysis in Canada: The
State of the Art, edited by Laurent Dobuzinskis, Michael Howlett, and
David Laycock. Toronto: University of Toronto Press.
Davies, P. 2004. "Is evidence-based government possible?" Jerry Lee Lecture presented to the 4th Annual Campbell Collaboration
Colloquium, 19 February 2004, Washington, D.C.
Dobuzinskis, Laurent, Michael Howlett, and David Laycock (eds.).
2007. Policy Analysis in Canada: The State of the Art. Toronto:
University of Toronto Press.
Durning, Dan, and Will Osama. 1994. "Policy analysts'
roles and value orientations: An empirical investigation using Q
methodology." Journal of Policy Analysis and Management 13 (4)
Fall: 629-57.
Ellig, J., and D. Lavoie. 1995. "The principal-agent relationship in organizations." In Economic Approaches to Organizations and Institutions: An Introduction, edited by P. Foss.
Aldershot: Dartmouth.
Fellegi, Ivan. 1996. Strengthening our Policy Capacity. Report of
the Deputy Ministers Task Force. Ottawa: Supply and Services Canada.
Fleischer, Julia. 2009. "Power resources of parliamentary
executives: Policy advice in the UK and Germany." West European
Politics 32 (1) January: 196-214.
French, R. 1980. How Ottawa Decides: Planning and Industrial
Policy-Making 1968-1980. Toronto: Lorimer.
Gailmard, Sean, and John W. Patty. 2007. "Slackers and
zealots: Civil service, policy discretion, and bureaucratic
expertise." American Journal of Political Science 51 (4) October:
873-89.
Garson, G. David. 1986. "From policy science to policy
analysis: A quarter century of progress." In Policy Analysis:
Perspectives, Concepts, and Methods, edited by William N. Dunn.
Greenwich, Conn.: JAI Press.
Gow, James Iain, and Sharon L. Sutherland. 2004. "Comparison
of Canadian masters programs in public administration, public management
and public policy." Canadian Public Administration 47 (3) Autumn:
379-405.
Hammersley, M. 2005. "Is the evidence-based practice movement
doing more good than harm? Reflections on Iain Chalmers' case for
research-based policy making and practice." Evidence and Policy 1
(1): 85-100.
Hartle, D.G. 1978. The Expenditure Budget Process in the Government
of Canada. Toronto and Montreal: Canadian Tax Foundation.
Hawke, G.R. 1993. Improving Policy Advice. Wellington, N.Z.:
Victoria University Institute of Policy Studies.
Head, Brian W. 2008. "Three lenses of evidence-based
policy." The Australian Journal of Public Administration 67 (1)
March: 1-11.
Hicks, Ron, and Peter Watson. 2007. Policy Capacity: Strengthening
the Public Service's Support to Elected Officials. Edmonton:
Queen's Printer.
Hird, John A. 2005. "Policy analysis for what? The
effectiveness of nonpartisan policy research organizations." Policy
Studies Journal 33 (1) February: 83-105.
Hollander, Marcus J., and Michael J. Prince. 1993. "Analytical
units in federal and provincial governments: Origins, functions and
suggestions for effectiveness." Canadian Public Administration 36
(2) Summer: 190-224.
Hood, Christopher. 2002. "The risk game and the blame
game." Government and Opposition 37 (1) Winter: 15-54.
Hoppe, Robert, and Margarita Jeliazkova. 2006. "How policy
workers define their job: A Netherlands case study." In The Work of
Policy: An International Survey, edited by H.K. Colebatch. Lanham,
Maryland: Rowman & Littlefield.
Howlett, Michael, and Evert Lindquist. 2004. "Policy analysis
and governance: Analytical and policy styles in Canada." Journal of
Comparative Policy Analysis 6 (3) December: 225-49.
Howlett, Michael, M. Ramesh, and Anthony Perl. 2009. Studying
Public Policy: Policy Cycles and Policy Subsystems (3rd ed.). Toronto:
Oxford University Press.
Ingram, H.M., and D.E. Mann. 1980. "Policy failure: An issue
deserving analysis." In Why Policies Succeed or Fail, edited by
H.M. Ingram and D.E. Mann. Beverly Hills: Sage Publications.
Innvaer, S., G. Vist, M. Trommald, and A. Oxman. 2002. "Health
policy-makers' perceptions of their use of evidence: A systematic
review." Journal of Health Services Research and Policy 7 (4):
239-45.
Jackson, Andrew, and Bob Baldwin. 2007. "Policy analysis by
the labour movement in a hostile environment." In Policy Analysis
in Canada: The State of the Art, edited by Laurent Dobuzinskis, Michael
Howlett, and David Laycock. Toronto: University of Toronto Press.
Jackson, Peter M. 2007. "Making sense of policy advice."
Public Money and Management 27 (4): 257-64.
Kerr, D.H. 1976. "The logic of 'policy' and
successful policies." Policy Sciences 7 (3) September: 351-63.
Laforest, R., and M. Orsini. 2005. "Evidence-based engagement
in the voluntary sector: Lessons from Canada." Social Policy and
Administration 39 (5): 481-97.
Landry, R., M. Lamari, and N. Amara. 2003. "The extent and
determinants of the utilization of university research in government
agencies." Public Administration Review 63 (2) March/April:
192-205.
Leeuw, F.L. 1991. "Policy theories, knowledge utilization, and
evaluation." Knowledge and Policy 4 (3): 73-91.
Lindblom, Charles E. 1958. "Policy analysis." American
Economic Review 48 (3) June: 298-312.
Lindquist, Evert A. 1992. "Public managers and policy
communities: Learning to meet new challenges." Canadian Public
Administration 35 (2) Summer: 127-59.
Lindquist, Evert, and James Desveaux. 1998. Recruitment and Policy
Capacity in Government. Ottawa: Public Policy Forum.
Lynn Jr., L. 1978. Knowledge and Policy: The Uncertain Connection.
Washington, D.C.: National Academy of Sciences.
MacRae Jr., D. 1991. "Policy analysis and knowledge use."
Knowledge and Policy 4 (3): 27-40.
MacRae Jr., Duncan, and James A. Wilde. 1985. Policy Analysis for
Public Decisions. Lanham, Maryland: University Press of America.
Majone, Giandomenico. 1989. Evidence, Argument, and Persuasion in
the Policy Process. New Haven, Conn.: Yale University Press.
Manitoba. Office of the Auditor General [Jon Singleton]. 2001. A
Review of the Policy Capacity between Departments. Winnipeg:
Queen's Printer.
March, J.G. 1981. "Decision making perspective: Decisions in
organizations and theories of choice." In Perspectives on
Organization Design and Behaviour, edited by A.H. van de Ven and W.F.
Joyce. New York: Wiley.
--. 1994. A Primer on Decision-Making: How Decisions Happen. New
York: Free Press.
May, P.J. 1992. "Policy learning and failure." Journal of
Public Policy 12 (4): 331-54.
McArthur, Doug. 2007. "Policy analysis in provincial
governments in Canada: From PPBS to network management." In Policy
Analysis in Canada: The State of the Art, edited by Laurent Dobuzinskis,
Michael Howlett, and David Laycock. Toronto: University of Toronto Press.
McCubbins, M.D., and A. Lupia. 1994. "Learning from oversight:
Fire alarms and police patrols reconstructed." Journal of Law,
Economics and Organization 10 (1): 96-125.
McCubbins, M.D., and T. Schwartz. 1984. "Congressional
oversight overlooked: Police patrols versus fire alarms." American
Journal of Political Science 28 (1) January: 165-79.
Meltsner, Arnold J. 1976. Policy Analysts in the Bureaucracy.
Berkeley: University of California Press.
Mintrom, Michael. 2007. "The policy analysis movement."
In Policy Analysis in Canada: The State of the Art, edited by Laurent
Dobuzinskis, Michael Howlett, and David Laycock. Toronto: University of
Toronto Press.
Morgan, M.G., and M. Henrion. 1990. Uncertainty: A Guide to Dealing
with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge:
Cambridge University Press.
Murray, Catherine. 2007. "The media." In Policy Analysis
in Canada: The State of the Art, edited by Laurent Dobuzinskis, Michael
Howlett, and David Laycock. Toronto: University of Toronto Press.
New Zealand. State Services Commission. 1999. Essential
Ingredients: Improving the Quality of Policy Advice. Wellington: Crown
Copyright.
Nilsson, Mans, Andrew Jordan, John Turnpenny, Julia Hertin, Bjorn
Nykvist, and Duncan Russel. 2008. "The use and non-use of policy
appraisal tools in public policy making: An analysis of three European
countries and the European Union." Policy Sciences 41 (4) December:
335-55.
Nova Scotia. Policy Excellence Initiative. 2007. Policy Excellence
and the Nova Scotia Public Service. Halifax: Policy Advisory Council and
Treasury and Policy Board.
Nutley, Sandra M., Isabel Walter, and Huw T.O. Davies. 2007. Using
Evidence: How Research Can Inform Public Services. Bristol, U.K.: Policy
Press.
O'Connor, Alan, Goran Roos, and Tony Vickers-Willis. 2007.
"Evaluating an Australian public policy organization's
innovation capacity." European Journal of Innovation Management 10
(4): 532-58.
Ontario. Executive Research Group. 1999. Investing in Policy:
Report on Other Jurisdictions and Organizations. Toronto: Ministry of
the Environment.
Packwood, A. 2002. "Evidence-based policy: Rhetoric and
reality." Social Policy and Society I (3): 267-72.
Page, Edward C., and Bill Jenkins. 2005. Policy Bureaucracy:
Governing with a Cast of Thousands. Oxford: Oxford University Press.
Painter, M., and J. Pierre. 2005. Challenges to State Policy
Capacity: Global Trends and Comparative Perspectives. London: Palgrave
Macmillan.
Patton, Carl V., and David S. Sawicki. 1993. Basic Methods of
Policy Analysis and Planning. Englewood Cliffs, N.J.: Prentice Hall.
Pawson, Ray. 2006. Evidence-Based Policy: A Realist Perspective.
London: Sage Publications.
--. 2002. "Evidence-based policy: In search of a method?"
Evaluation 8 (2): 157-81.
Pawson, Ray, T. Greenhalgh, G. Harvey, and K. Walshe. 2005.
"Realist review--a new method of systematic review designed for
complex policy interventions." Journal of Health Services Research
Policy 10 (Supplement 1): S1:21-S1:34.
Perl, Anthony, and Donald J. White. 2002. "The changing role
of consultants in Canadian policy analysis." Policy and Society 21
(1): 49-73.
Perrow, C. 1984. Normal Accidents: Living with High Risk Technologies. New York: Basic Books.
Peters, B. Guy. 1996. The Policy Capacity of Government. Ottawa: Canadian Centre for Management Development.
Peters, B. Guy, and A. Barker. 1993. Advising West European
Governments: Inquiries, Expertise and Public Policy. Edinburgh:
Edinburgh University Press.
Phillips, Susan D. 2007. "Policy analysis and the voluntary
sector: Evolving policy styles." In Policy Analysis in Canada: The
State of the Art, edited by Laurent Dobuzinskis, Michael Howlett, and
David Laycock. Toronto: University of Toronto Press.
Preskill, Hallie, and Shanelle Boyle. 2008. "A
multidisciplinary model of evaluation capacity building." American
Journal of Evaluation 29 (4): 443-59.
Pressman, J.L., and A.B. Wildavsky. 1973. Implementation: How Great
Expectations in Washington are Dashed in Oakland. Berkeley: University
of California Press.
Prince, Michael J. 1979. "Policy advisory groups in government
departments." In Public Policy in Canada: Organization, Process,
Management, edited by G. Bruce Doern and Peter Aucoin. Toronto: Gage.
--. 2007. "Soft craft, hard choices, altered context:
Reflections on 25 years of policy advice in Canada." In Policy
Analysis in Canada: The State of the Art, edited by Laurent Dobuzinskis,
Michael Howlett, and David Laycock. Toronto: University of Toronto
Press.
Prince, Michael J., and John Chenier. 1980. "The rise and fall
of policy planning and research units." Canadian Public
Administration 22 (4) Winter: 536-50.
Qureshi, H. 2004. "Evidence in policy and practice: What kinds
of research designs?" Journal of Social Work 4 (1): 7-23.
Radaelli, C.M. 1995. "The role of knowledge in the policy
process." Journal of European Public Policy 2 (2): 159-83.
Radin, Beryl A. 2000. Beyond Machiavelli: Policy Analysis Comes of
Age. Washington, D.C.: Georgetown University Press.
Radin, Beryl A., and Joan P. Boase. 2000. "Federalism,
political structure, and public policy in the United States and
Canada." Journal of Comparative Policy Analysis 2 (1) March: 65-90.
Rasmussen, Ken. 1999. "Policy capacity in Saskatchewan:
Strengthening the equilibrium." Canadian Public Administration 42
(3) Autumn: 331-48.
Riddell, Norman. 2007. Policy Research Capacity in the Federal
Government. Ottawa: Policy Research Initiative.
Roots, R.I. 2004. "When laws backfire: Unintended consequences
of public policy." American Behavioural Scientist 47 (11):
1376-394.
Saint-Martin, Denis. 1998. "The new managerialism and the
policy influence of consultants in government: An
historical-institutionalist analysis of Britain, Canada and
France." Governance 11 (3) July: 319-56.
Sanderson, I. 2006. "Complexity, 'practical
rationality' and evidence-based policy making." Policy and
Politics 34 (1): 115-32.
--. 2002a. "Evaluation, policy learning and evidence-based
policy making." Public Administration 80 (1): 1-22.
--. 2002b. "Making sense of 'what works': Evidence
based policymaking as instrumental rationality?" Public Policy and
Administration 17 (3): 61-75.
Savoie, Donald J. 1999. Governing from the Centre: The
Concentration of Power in Canadian Politics. Toronto: University of
Toronto Press.
Scharpf, F.W. 1986. "Policy failure and institutional reform:
Why should form follow function?" International Social Science
Journal 38 (2) May: 179-90.
Schmidt, M.G. 1996. "When parties matter: A review of the
possibilities and limits of partisan influence on public policy."
European Journal of Political Research 30 (2): 155-83.
Schudson, Michael. 2006. "The trouble with experts--and why
democracies need them." Theory and Society 35 (5/6): 491-506.
Schultz, David. 2007. "Stupid public policy ideas and other
political myths." Paper presented to the American Political Science
Association, Chicago.
Shaxson, L. 2004. "Is your evidence robust enough?
Questions for policy makers and practitioners." Evidence and Policy
1 (1): 101-111.
Shulock, N. 1999. "The paradox of policy analysis: If it is
not used, why do we produce so much of it?" Journal of Policy
Analysis and Management 18 (2) Spring: 226-44.
Speers, Kimberly. 2007. "The invisible public service:
Consultants and public policy in Canada." In Policy Analysis in
Canada: The State of the Art, edited by Laurent Dobuzinskis, Michael
Howlett, and David Laycock. Toronto: University of Toronto Press.
Stewart, Kennedy, and Patrick J. Smith. 2007. "Immature policy
analysis: Building capacity in eight major Canadian cities." In
Policy Analysis in Canada: The State of the Art, edited by Laurent
Dobuzinskis, Michael Howlett, and David Laycock. Toronto: University of
Toronto Press.
Stritch, Andrew. 2007. "Business associations and policy
analysis in Canada." In Policy Analysis in Canada: The State of the
Art, edited by Laurent Dobuzinskis, Michael Howlett, and David Laycock.
Toronto: University of Toronto Press.
Sulitzeanu-Kenan, R., and C. Hood. 2005. "Blame avoidance with
adjectives? Motivation, opportunity, activity and outcome." Paper
for ECPR Joint Sessions, Blame Avoidance and Blame Management Workshop,
14-20 April, Granada, Spain.
Tenbensel, T. 2004. "Does more evidence lead to better policy?
The implications of explicit priority setting in New Zealand's
health policy for evidence-based policy." Policy Studies 25 (3)
September: 190-207.
Turnpenny, John, Mans Nilsson, Duncan Russel, Andrew Jordan, Julia
Hertin, and Bjorn Nykvist. 2008. "Why is integrating policy
assessment so hard? A comparative analysis of the institutional capacity
and constraints." Journal of Environmental Planning and Management
51 (6): 759-75.
Uhr, John. 1996. "Testing the policy capacities of budgetary
agencies: Lessons from finance." Australian Journal of Public
Administration 55 (4) December: 124-34.
Uhr, John, and Keith Mackay (eds.). 1996. Evaluating Policy Advice:
Learning from Commonwealth Experience. Canberra: Federalism Research
Centre--ANU.
Voyer, Jean-Pierre. 2007. "Policy analysis in the federal
government: Building the forward-looking policy research capacity."
In Policy Analysis in Canada: The State of the Art, edited by Laurent
Dobuzinskis, Michael Howlett, and David Laycock. Toronto: University of
Toronto Press.
Waller, Mike. 1992. "Evaluating policy advice."
Australian Journal of Public Administration 51 (4) December: 440-49.
Warburton, R.N., and W.P. Warburton. 2004. "Canada needs
better data for evidence-based policy: Inconsistencies between
administrative and survey data on welfare dependence and
education." Canadian Public Policy 30 (3) September: 241-55.
Weaver, R.K. 1986. "The politics of blame avoidance."
Journal of Public Policy 6 (4): 371-98.
Weible, Christopher M. 2008. "Expert-based information and
policy subsystems: A review and synthesis." Policy Studies Journal 36 (4) November: 615-35.
Weimer, David L., and Aidan R. Vining. 1999. Policy Analysis:
Concepts and Practice. New Jersey: Prentice Hall.
Weller, Patrick, and Bronwyn Stevens. 1998. "Evaluating policy
advice: The Australian experience." Public Administration 76 (3)
Autumn: 579-89.
Wellstead, A., R. Stedman, and E. Lindquist. 2007. "Beyond the
National Capital Region: Federal regional policy capacity." Report prepared for the Treasury Board Secretariat of Canada. Ottawa:
Public Works and Government Services Canada.
Whiteman, D. 1985. "The fate of policy analysis in
congressional decision making: Three types of use in committees."
Western Political Quarterly 38 (2): 294-311.
Wildavsky, A.B. 1979. Speaking Truth to Power: The Art and Craft of
Policy Analysis. Boston: Little, Brown.
Wildavsky, Aaron. 1969. "Rescuing policy analysis from
PPBS." Public Administration Review 29 (2) March/April: 189-202.
Wollmann, Hellmut. 1989. "Policy analysis in West
Germany's federal government: A case of unfinished governmental and
administrative modernization?" Governance 2 (3) July: 233-66.
Young, K., D. Ashby, A. Boaz, and L. Grayson. 2002. "Social
science and the evidence-based policy movement." Social Policy and
Society 1 (3): 215-24.
Note
(1) The willingness of policy-makers to use the information
generated in the way it was intended to be used is not always present.
On the "strategic" and "argumentative" versus
"evaluative" uses of research and analysis, see D. Whiteman
(1985) and R. Landry, M. Lamari, and N. Amara (2003).
The author is Burnaby Mountain Professor, Department of Political
Science, Simon Fraser University. He would like to thank Brian Head and
Tim Tenbensel and the participants of the International Research Society
for Public Management meetings in Brisbane, Australia, in April 2008,
for their comments and suggestions for this article. He also
acknowledges the helpful comments made by the Journal's four
anonymous reviewers.
Table 1. Stages of the Policy Process and Associated Policy Failures

Agenda-setting: Overreaching governments establishing or agreeing to establish overburdened or unattainable policy agendas

Policy formulation: Attempting to deal with wicked problems without appropriately investigating or researching problem causes or the probable effects of policy alternatives

Decision-making: Failing to anticipate adverse and other policy consequences or risk of system failures

Policy implementation: Failing to deal with implementation problems, including lack of funding, legitimacy issues, principal-agent problems, oversight failures and others

Policy evaluation: Lack of learning due to lack of, ineffective or inappropriate policy monitoring and/or feedback processes and structures
Table 2. Policy Failures and Management Strategies by Stage of the Policy Cycle

Agenda-setting -- Problem: Overreaching governments. Solution: Better clarification and precise articulation of government goals and resource capabilities.

Policy formulation -- Problem: Attempting to deal with wicked problems. Solution: Provision of better data and research on policy problem causation and alternative solutions.

Decision-making -- Problem: Failing to anticipate policy consequences or risk of system structure failure. Solution: Better risk analysis and assessment and its integration into decision-making processes.

Policy implementation -- Problem: Principal-agent problems, oversight failures, etc. Solution: More careful matching of administrative resources to policy goals and better design of monitoring and inspection systems.

Policy evaluation -- Problem: Lack of learning. Solution: Development of improved benchmarking and performance measurement systems and integration of this information into future policy deliberations.
Table 3. Aspects of Policy Analytical Capacity
Components
Environmental scanning, trends analysis and forecasting methods
Theoretical research
Statistics, applied research and modelling
Evaluation of the means of meeting targets/goals
Consultation and managing relations
Program design, implementation monitoring and evaluation
Department's capacity to articulate its medium- and long-term
priorities
Policy analytical resources--quantity and quality of employees;
budgets; access to external sources of expertise
Source: Riddell 2007