Developing indicators for a new ERA: should we measure the policy impact of education research?
Watson, Louise
The Australian government has announced its intention to measure
the quality of research in Australian universities under the Excellence
in Research for Australia (ERA) initiative. The process for measuring
research quality through the ERA is still being developed but the
government has proposed that 'a suite of indicators appropriate to
different disciplines' (Carr, 2008b) is likely to be used, and it is
seeking public input through a consultation process. Using Weiss's
(1979) seven models of research utilisation, and Husen's (1994)
constraints on policy-makers, this article identifies the many ways in
which education research influences policy. The author proposes that the
uses of education research should be acknowledged in a research quality
assessment process such as the ERA. If the policy influence of education
research is not recognised under the new ERA, there is a risk that a
narrow suite of indicators will be developed that does not capture the
breadth and complexity of education research's impact on policy.
Keywords
educational finance; resource allocation; educational policy; higher
education; policy analysis; promotion (occupational)
Introduction
The Australian government has announced its intention to measure
the quality of research in Australian universities under the Excellence
in Research for Australia (ERA) initiative. The process for measuring
research quality through the ERA is still being developed but the
government has proposed that 'a suite of indicators appropriate to
different disciplines' is likely to be used, and it is seeking public
input through a consultation process.
This article examines the ERA initiative and discusses the possible
threats and opportunities it offers to education researchers. In the
interests of contributing to public debate at the consultation stage of
the development of the ERA, the author explores the ways in which
education research influences policy and discusses how the policy impact
of education research might be captured in the new ERA. The article
describes the theories of Weiss and Husen in order to illustrate the
different ways in which education research influences policy and
concludes with a discussion of the ERA initiative and the challenges it
presents for the education research community.
How does education research influence policy?
The major end-users of education research are government
policymakers and professionals employed in schools and educational
institutions. Yet most research in the social sciences, including
education, can, at best, have an indirect influence on public policy
(Wiltshire, 1993). Research in the social sciences is rarely aligned
with government policy priorities and is often dismissed as inconclusive
or irrelevant to the requirements of decision-makers. A common view is
that the value of social science research is realised over the long
term, as the findings of a body of work 'percolate' through
policy communities rather than influence policy development directly
(Weiss, 1979). Under the 'percolation model', the major public
contribution of research in the social sciences is indirect in that it
provides theoretical breakthroughs and paradigm shifts that ultimately
influence the context of policy development, rather than driving
specific policy decisions.
Weiss's models of research utilisation
When we judge the impact of education research on public policy
using traditional concepts of research utilisation based on the natural
sciences, it is easy to conclude that educational research is
under-utilised in the policy development process. To counter this view,
Carol Weiss (1979) identifies seven ways in which research influences
public policy, taking into account the unique characteristics of
research in the social sciences. Weiss proposes seven models of research
utilisation:
1 knowledge-driven
2 problem-solving
3 interactive
4 political
5 tactical
6 enlightenment ('percolation')
7 intellectual enterprise.
The knowledge-driven model derives from the natural sciences and
assumes the following sequence of events:
basic research → applied research → development → application
The model assumes that the mere existence of new knowledge presses
it towards development and utilisation. This is often the case in
biomedical and natural sciences, particularly when there are commercial
applications. Weiss points out that this model has limited relevance to
the social sciences for three reasons. First, social science knowledge
is rarely so compelling or authoritative as to drive inevitably towards
implementation. Social science research tends to be contradictory in the
sense that many studies do not come up with one single answer to a
policy problem and, when they do, the findings can be
disputed--sometimes by the same author in a subsequent publication.
Second, social science knowledge does not readily lend itself to
conversion into replicable technologies, either material or social. And,
third, the processes of policy development work against the direct
adaptation of social science research.
... unless a social condition has been consensually defined as a
pressing social problem, and unless the condition has become fully
politicised and debated, and the parameters of potential action
agreed upon, there is little likelihood that policy-making bodies
will be receptive to the results of social science research. (Weiss,
1979, p. 427)
The problem-solving model of research utilisation involves the
direct application of the results of a specific social science study to
a pending problem. The assumption is that the research provides
empirical evidence and conclusions that help to solve a policy problem.
In other words, the policy decision drives the research. The expected
sequence of events is:
problem identification → decision required → information lacking to
make decision → research provides missing knowledge → policy decision made
This model suggests two possible ways for social science research
to enter the policy-making arena. The first, less direct way, is for
existing research findings to be drawn on by policy-makers, who are
aware of the research through their own efforts or its presentation in
the media or through stakeholders. There is an element of chance in this
process and research utilisation depends heavily on the effectiveness of
the communication between researchers and policy-makers. The alternative
route is the purposeful commissioning of research to fill a knowledge
gap in the expectation that it will have direct and immediate
applicability to the policy problem. The Coleman report (1966) on
educational opportunity in America, which was used to justify the
promotion of racially balanced schools through bussing, is an example of
this type of research (Dye, 1992, pp. 7-9). Australian examples could
include the Karmel report (1973) into Commonwealth schools funding,
and Dr Bruce Chapman's work on income-contingent loans
commissioned by the Commonwealth government prior to the introduction of
the Higher Education Contribution Scheme (HECS) in 1988.
Under the problem-solving model, government agencies are closely
involved in the sponsoring and supervising of research projects that are
intended to tackle defined policy problems. This involvement tends to
enhance the research's relevance to, and perceived impact on,
government policy (Van de Vall & Bolas, 1979). One limitation of
undertaking commissioned research is that the research will usually need
to be conducted according to explicit terms of reference and the outputs
will be required in a very short time. It can be difficult to conduct
original, high-quality research in the time frames required by
commissioning agencies. A thoughtful analysis of previous research
findings is usually the best that can be achieved. A second limitation
is the necessity for an 'alignment of perspectives' between
policy-makers and researchers engaged in the commissioned research.
Writing in 1979, Weiss declares the expectation of policy alignment
between government and social science researchers to be 'wildly
optimistic' but, over the past two decades, the problem-solving
model has become very common in education research in Australia.
Education departments regularly commission academics and consultants to
conduct policy-related research with clear terms of reference and
specific time frames for delivery. Weiss accurately predicts that the
consequence of this type of research activity is to 'increase
government control over both the specification of requested research and
its conduct in the field' (1979, p. 428). In Australia, this type
of research continues to create tension over issues of intellectual
property rights, when the commissioning agency tries to suppress the
publication of research that does not align with the government
position. Examples from the 1990s are the ACTU's refusal to publish
a commissioned history of the union movement, and the Commonwealth
Department of Education's reluctance to publish commissioned
research that implied criticism of government policies.
The interactive model of research utilisation is one in which
social science researchers enter the decision-making arena as part of an
interactive search for knowledge. It implies that those developing
policy invite input from researchers on a regular basis through expert
committees, consultation and networking. The input of researchers is
simply one among many inputs from a range of sources and it is not
presumed that they have conclusions available or a body of convergent
evidence. While not as direct as the problem-solving model, Weiss
describes the interactive model as 'a familiar process by which
decision-makers inform themselves of the range of knowledge and opinion
in a policy area' (1979, p. 429). This model assumes that the
process of policy development is orderly and well planned, which may not
always be the case.
The political model applies when the opinions of decision makers
are so hardened--for reasons of ideology or interest--that they are not
receptive to new evidence from research. Under this model, research can
only be used as 'ammunition for the side that finds its conclusions
congenial' (Weiss, 1979, p. 429). The use of research for partisan
political purposes often means that findings are reported out of context
and conflicting evidence is ignored or suppressed. Weiss views this
model as a legitimate use of social science research, provided that the
research is available to all participants (so that misrepresentations of
findings can be countered by the opposite side). She also assumes that
social scientists would never willingly produce research to support
partisan political objectives, or to deliberately provide
'ammunition' for one side of a political debate. In Australia
there is much evidence to the contrary over recent decades.
The Australian experience suggests that many researchers and
research communities are ready and willing to 'tailor' their
research findings to support partisan political positions, particularly
when undertaking commissioned research. Privately funded policy
'think tanks', such as the Centre for Independent Studies and
the Australia Institute, are examples of research communities that appear
to have a direct impact on policy development under the political model,
because they publish research that is used as ammunition in political
debates.
The tactical model applies to situations where policy-makers use
the fact that research is being done to justify delaying action
('we're doing research on this important issue right
now') or to deflect criticism of unpopular policy outcomes
('we were acting on the recommendations of the research').
Other tactical moves involve the provision of funding to a research
agency or researcher for the purposes of being allied with social
scientists of high repute or to build a constituency of supportive
academics. Weiss says these tactics are 'illustrations of uses of
research', while acknowledging that the conclusions of the research
may have no impact on policy (1979, p. 429).
Under the enlightenment (or 'percolation') model, no one
piece of research or even a body of research ever directly influences
policy. Rather, the concepts and theoretical perspectives engendered by
social science research permeate the policymaking process over time.
The imagery is that of social science generalisations and
orientations percolating through informed publics and coming to
shape the way in which people think about social issues. Social
science research diffuses circuitously through manifold
channels--professional journals, the mass media, conversations with
colleagues--and over time the variables it deals with and the
generalisations it offers provide decision makers with ways of
making sense of the world. (Weiss, 1979, p. 429)
Inevitably, policy-makers influenced by this model of research
utilisation will never be able to cite the findings of a specific study
that influenced their decisions. At best, they may have a sense that
social science research has contributed ideas and orientations that have
influenced the policy agenda. The role of research under this model is
to 'sensitise' decision makers to new issues and help to
'turn what were non-problems into policy problems'. It
'helps to change the parameters within which policy solutions are
sought' and 'in the long run, along with other influences, it
often redefines the policy agenda' (Weiss, 1979, p. 430). Examples
of policy issues influenced by the percolation of research from the
social sciences might include the removal of corporal punishment in
schools or the pursuit of equity as a policy goal in education.
In contrast to the previous five models, the percolation model does
not expect decision-makers to be receptive to, or aware of, any research
findings from the social sciences. Research findings do not have to be
compatible with decision-makers' values and goals in order to be
useful. It is assumed that through the process of percolation the
powerful 'truths' revealed by social science research will
eventually overturn accustomed values and patterns of thought. While the
model has the inherent inefficiencies of an indirect and unguided
process, Weiss concedes that it is 'perhaps the way in which social
science research most frequently enters the policy arena' (1979, p.
429).
But Weiss also points out that the percolation model is an
extremely unreliable method of disseminating research outcomes because
the public interpretation of research findings is largely beyond the
researchers' control.
When research diffuses to the policy sphere through indirect and
unguided channels, it dispenses invalid as well as valid
generalizations. Many of the social science understandings that
gain currency are partial, oversimplified, inadequate, or wrong.
There are no procedures for screening out the shoddy and obsolete.
Sometimes unexpected or sensational research results, however
incomplete or inadequately supported by data, take the limelight.
(Weiss, 1979, p. 430)
Weiss's seventh model of research utilisation is to view
research as part of the intellectual enterprise of the society. This
model portrays social science research as an intellectual pursuit that
is not context free but that responds to the currents of thought and the
fads and fancies of the period. In this sense, 'social science and
policy interact, influencing each other and being influenced by the
larger fashions of social thought'. Weiss points out that it is
often an emerging policy interest in a social issue that leads to the
appropriation of funds for social science research and that 'both
the policy and research colloquies may respond, consciously or
unconsciously, to concerns sweeping through intellectual and popular
thought' (1979, p. 430).
Weiss concludes with a plea to social science researchers to use
the models of research utilisation to 'pay attention to the
imperatives of policy making systems' and consider what they can do
to 'improve the contribution that research makes to the wisdom of
policy' (1979, p. 431).
Constraints on policy-makers
Weiss's categorisation of research utilisation provides
insight into the potential for education research to influence policy
and may offer comfort to those who lament that education research is
under-appreciated in the policy-making process. Education research
influences policy in at least four ways under Weiss's typology:
through the problem-solving, interactive, political and percolation
models of research utilisation. But the imperative to have an impact on
policy places limitations on education policy researchers that should be
acknowledged in an assessment of research quality. Education research
outputs that should have an influence on policy may not appear to do so
simply because of the many constraints on policymakers in terms of their
capacity to utilise research. Husen (1994) identifies five key
constraints within which policy-makers work that might influence their
capacity to utilise research, summarised in Table 1.
The first constraint on policy-makers is that they are primarily or
even exclusively interested in research output that deals with problems
on their agenda. Policy agendas are largely determined by the political
platforms or election promises of governments. For example, research on
education vouchers (both for and against) flourished during the Reagan
era in the USA, and issues of civics and citizenship have dominated the
Howard government's agenda in Australia. The dominant policy agenda will
inevitably 'spawn' research studies and a key strategy for all
researchers is to frame their research in terms of current policy issues
requiring a solution.
The second constraint on policy-makers in using research is party
political bias. Research of a very high quality can be dismissed or
demonised by politicians if they think it is less than unanimous in
supporting their view. Unfortunately, increasing numbers of politicians
do not appreciate the value of 'frank and fearless' debate on
controversial topics. In responding to overt political bias, some
researchers publish work that shores up ideological positions (usually
with government financial support) while those critical of government
will have to rely on financial support from other sources. Inevitably,
the researchers writing pro-government policy reports and receiving
government funding receive greater public recognition through the mass
media.
Third, policy-makers have very limited time horizons, depending on
the circumstances of the day. They can require information for next
week's budget or a ministerial council meeting in a few months.
Their willingness to consider new issues varies according to the
electoral cycle. In its first year, a newly elected government is
usually receptive to new ideas but, by its third year in office, the
government of the day will be 'playing it safe' and the number
of new policies under consideration will diminish. The short
policy-horizon of policy-makers may explain why very few high-quality
longitudinal studies are funded by government. The research sponsored by
government usually requires an output within a few months, fuelling the
production of short-term research projects based on a weak methodology
and producing limited findings. Researchers seeking to influence
government policy development must work within these constraints, by
drawing on published research, or ready-made data sets. Whatever
strategies researchers employ, the short time-constraints of
policy-makers can result in limited research findings and poor-quality
research.
A fourth constraint on policy-makers is their tendency to be
concerned only with research relevant to their particular portfolio
interests. They are usually not interested in, nor aware of, research
that is more broadly based, so research that suggests a solution
involving more than one portfolio or more than one level of government
is likely to be relegated to the 'too hard' basket. While
governments are now attempting to tackle this limitation in some areas
of program delivery--for example, through Indigenous policy
coordination--progress remains slow. Researchers seeking a more direct
influence on government would be wise to propose policy solutions that
fall within the scope of one government department.
Finally, policy-makers are generally not familiar with the
discourse of research in the social sciences. Academic discourse that
strives for precision is usually dismissed as jargon by those outside
the field. Research findings should therefore be presented publicly in a
way that facilitates understanding in the general population.
Even if education researchers endeavour to work within the
constraints identified by Husen, education policy decisions are rarely
taken in an orderly or rational way by governments. In a federal system
of government such as Australia's, rational policy development is
also hampered by jurisdictional issues. The field of education policy,
in particular, is highly contested. Education policy development occurs
in the context of complex and dynamic interactions between interest
groups and government; decisions are usually the product of a negotiated
compromise between disparate interests rather than a consensus or
alignment of opinion. The policies that emerge from this process may
then be thwarted by strategic cost-shifting, administrative inertia or
the sheer size and scale of education systems. This complex, contested
and nuanced process of policy development has been described as
'decision accretion' (Husen, 1994, p. 1862) or
'incrementalism' in the public policy literature.
Education researchers should expect a high degree of 'hit or
miss' when they aim to influence policy. Given the complexity of
the policy development process and the fact that researchers have other
priorities and responsibilities (such as teaching), it will always be
difficult for researchers to influence policy development, regardless of
the strategies they use. As Weiss says:
It probably takes an extraordinary concatenation of circumstances
for research to influence policy decisions directly: a well-defined
decision situation, a set of policy actors who have responsibility
and jurisdiction for making the decision, an issue whose resolution
depends at least to some extent on information, identification of
the requisite informational need, research that provides the
information in terms that match the circumstances within which
choices will be made, research findings that are clear-cut,
unambiguous, firmly supported, and powerful, that reach
decision-makers at the time they are wrestling with the issues,
that are comprehensible and understood, and that do not run counter
to strong political interests. (Weiss, 1979, p. 428)
In summary, whether their research is disseminated through models
of percolation, interaction, problem-solving or politicisation,
education policy researchers need strategies to communicate their
findings beyond their immediate research communities. Yet regardless of
the strategies researchers adopt, it remains difficult for
education researchers to influence policy, for reasons outside their
control. Any attempts to measure the impact of education research on
policy should acknowledge the extent to which policy-makers are
constrained in using educational research, independently of the quality
or significance of the research outputs.
Excellence in Research for Australia (ERA)
Within months of attaining office in 2007, the new federal Labor
government fulfilled its election promise to abolish the Research
Quality Framework (RQF) due to be implemented in 2008 (Carr, 2007) and
replace it with the Excellence in Research for Australia (ERA)
initiative (Carr, 2008a). The RQF had been controversial for its
intention to measure the impact or use of original research outside the
peer community by assessing the level of 'recognition by qualified
end-users that methodologically sound and rigorous research has been
successfully applied to achieve social, economic, environmental and/or
cultural outcomes' (Development Advisory Group, 2006, p. 10). The
process proposed for measuring research impact through the RQF was based
on a narrow concept of research utilisation most appropriate to the
natural sciences. In fields of research that did not have a direct
commercial application, or identifiable end-users, such as education,
impact would have been very difficult to assess. It seemed that the type
of education research most likely to have had a demonstrable impact
under the RQF process would have been research commissioned by
policy-makers and research that provided political ammunition for
governments, consistent with Weiss's problem-solving and political
models of research utilisation. A great deal of education policy
research, particularly that which 'percolated' through policy
communities, would not have been 'counted' under the RQF
because it does not have an identifiable end-user and its policy
influence is indirect (Watson, 2007).
The Excellence in Research for Australia (ERA) initiative, scheduled
to commence in 2009, differs from the RQF in that it will be administered
by the Australian Research Council (ARC) rather than a federal
government department. The new Minister for Industry, Innovation,
Science and Research, has described the RQF as 'flawed'
because it 'lacked transparency and did not reflect world's
best practice' (Carr, 2008b).The minister states that under the new
ERA, 'metrics will be used as a measure in disciplines where they
enjoy established confidence', such as the physical and biological
sciences, which are due to be assessed first in 2009. For other
disciplines, the government is consulting with researchers 'to
establish alternative metrics, or proxies for metrics, that will work
effectively and that will have credibility' (Carr, 2008b).
Should the impact of education research be measured?
The federal government has not ruled out the possibility of
measuring research impact under the new ERA. Rather, it states that
'there is a firm commitment to reaching a commonly agreed approach
for each discipline cluster--starting with existing and proposed
citation metrics and journal rankings, and other measures and proxies as
appropriate to each discipline'. The field of education is placed
within a discipline cluster called 'social, behavioural and
economic sciences' (Carr, 2008b). While the way in which research
impact was to be measured under the RQF was extremely narrow, the new
ERA provides an opportunity for members of the education research
community to re-examine how their research
influences policy and to debate the ways in which the impact of
education research might be measured.
Research quality metrics such as citation rates and publication
rates in journals of high esteem are of limited relevance in education
where much research is action based, context bound, specialised in its
focus and local or national rather than international in orientation
(Wright & Gale, 2008). Even in the natural sciences, where
bibliometrics are more commonly respected, they are a narrow and limited
measure of research quality and impact (Gillies, 2005). Reliance on
expert peer review is similarly problematic, yet the government has
announced its intention to set up panels of experts 'in each
discipline who have the necessary background to ensure that anomalies
and discipline-specific issues in compiled data are identified and
addressed' (Carr, 2008b). The proposed base ERA indicators of
citation rates and journal standing supplemented by the input of peer
review panels may not be sufficient to capture the depth and breadth of
education research or its impact on the wider community.
As Wright and Gale (2008) point out, measuring the impact of
education research has the potential to highlight the many uses of
education research outputs beyond the academy. If researchers shy away
from this question, there is a risk that education research will be
undervalued in the ERA through a reliance on traditional indicators of
research quality. An alternative approach would be to develop indicators
or models of research impact specific to education that could be used in
the ERA. These models of research impact could highlight the scope and
complexity of education research and capture its utilisation by schools,
education systems and policy communities in both the immediate and
longer term. The direct and indirect uses of education policy research
could be represented through Weiss's models of research
utilisation, particularly the problem-solving, interactive, political
and percolation models (Weiss, 1979). The limitations identified by
Husen (1994) on policy-makers in using research could also be taken into
account in assessing the impact of education research.
The education research community is well placed to explore the
complexity of the relationship between education research and policy and
to identify the many ways in which education research influences social
and economic life. Through the ERA initiative, the government has issued
an invitation to education researchers to identify how education
research differs from other research areas and to suggest how its
quality should best be measured. This opportunity to debate the many
purposes of education research in terms of both quality and impact
should not be ignored. If education researchers do not contribute
actively to the ERA process, there is a risk that the quality of
education research will be measured only through the traditional
indicators of bibliometrics and peer review. These measures may not be
sufficient to convey the significance and wider impact of education
research in Australia.
Conclusion
The experience of the RQF process is a useful starting point for
examining our assumptions about the quality, impact and influence (both
direct and indirect) of education research in Australia. During the
consultation phase of the Excellence in Research for Australia (ERA)
initiative, the education research community is being asked to define
what is meant by excellence in educational research. Education
researchers need to debate this issue extensively and should not shy
away from identifying the many and varied ways in which education
research influences policy. While Weiss (1979) and Husen (1994) provide
insights into the impact of the social sciences on public policy,
education researchers in Australia need to define education-specific
indicators of quality and impact to ensure that the unique influence of
education research is acknowledged and measured appropriately in the new
ERA.
Acknowledgements
This is a revised version of a paper prepared for the Australian
Association for Research in Education (AARE) Focus Conference,
University of Canberra, 13-14 June 2007. The author is grateful to Dr
Ron Murnain, former Director of the University of Canberra's
Research Office, for comments on an early draft.
References
Boyd, W. L., & Plank, D. N. (1994). Educational policy studies:
Overview. In T. Husen and T. N. Postlethwaite (Eds), The international
encyclopedia of education, 1835-1841. UK: Elsevier Science.
Carr, Kim (2007). Cancellation of research quality framework. Media
release, 21 December. Retrieved 24 May, 2008 from
http://minister.industry.gov.au/SenatortheHonKimCarr/Pages/CANCELLATIONOFRESEARCHQUALITYFRAMEWORKIMPLEMENTATION.aspx
Carr, Kim (2008a). New ERA for research quality. Announcement of
the Excellence in Research for Australia initiative. Media release, 26
February. Retrieved 24 May, 2008 from
http://minister.industry.gov.au/SenatortheHonKimCarr/Pages/NEWERAFORRESEARCHQUALITY.aspx
Carr, Kim (2008b). A New ERA for Australian research quality
assessment. Campus Review, 18(9), 5.
Coleman, J. S. (1966). Equality of educational opportunity.
Washington, DC: Government Printing Office.
Development Advisory Group (2006, October). Research quality
framework: Assessing the quality and impact of research in Australia.
The recommended RQF. Endorsed by the Development Advisory Group for the
RQF. Retrieved 5 June, 2007 from
http://www.dest.gov.au/Ministers/Media/Bishop/2006/11/B002141106.asp
Dye, T. R. (1992). Understanding public policy (7th ed.).
New Jersey: Prentice Hall.
Gillies, D. (2005). Lessons from the history and philosophy of
science regarding the research assessment exercise. Paper read at the
Royal Institute of Philosophy in London on 18 November 2005. Retrieved
24 May, 2008 from http://www.ucl.ac.uk/sts/gillies/
Husen, T. (1994). Educational research and policy making. In T.
Husen and T. N. Postlethwaite (Eds), The international encyclopedia of
education, 1857-1864. UK: Elsevier Science.
Karmel, P. H. (1973). Schools in Australia: Report of the interim
committee for the Australian Schools Commission. Canberra: Australian
Government Printing Service.
Tertiary Education Commission (2006). PBRF--Quality Evaluation
2006. Retrieved 10 June, 2008 from
http://www.tec.govt.nz/templates/StandardSummary.aspx?id=1206
Van de Vall, M., & Bolas, C. (1979). The utilisation of social
policy research: An empirical analysis of its structure and functions.
Paper presented to the 74th Annual meeting of the American Sociological
Association, Boston, MA, 27-31 August.
Watson, Louise (2007). Percolated or espresso? The ways in which
education research influences policy development in Australia. Paper
presented to the Australian Association for Research in Education (AARE)
Focus Conference, University of Canberra, 13-14 June.
Weiss, C. H. (1979). The many meanings of research utilization.
Public Administration Review, 39(5), 426-431.
Wildavsky, A. (1979). Speaking the truth to power: The art and
craft of policy analysis. Boston, MA: Little, Brown.
Wiltshire, K. (1993). The role of research in policy making.
Unicorn, 19(4), 34-41.
Wright, J., & Gale, T. (2008). Where to for quality education
research without community impact? Paper presented to the American
Educational Research Association Annual Conference, New York, March.
Louise Watson
University of Canberra
Dr Louise Watson is Principal Researcher in Lifelong Learning in
the Australian Institute of Sustainable Communities at the University of
Canberra. Email: Louise.Watson@canberra.edu.au
Table 1 Constraints on policy-makers

Constraint: Dominance of current policy agendas
Strategies for researchers seeking policy influence: Emphasise the way in which research relates to contemporary policy issues; frame research debates in terms of issues requiring a solution.

Constraint: Party political bias
Strategies for researchers seeking policy influence: Sacrifice intellectual independence and academic rigour to support a dominant political position, or seek research funding from other sources.

Constraint: Limited time horizons
Strategies for researchers seeking policy influence: Draw on existing research and data; plan exit points within long-term studies that enable preliminary findings to be disseminated along the way.

Constraint: Narrow portfolio interests
Strategies for researchers seeking policy influence: Propose solutions that fall within the scope of one government department.

Constraint: Lack of familiarity with academic discourse
Strategies for researchers seeking policy influence: Present research findings in a way that facilitates understanding among the general population.

Source: Husen (1994)