
Article Information

  • Title: PERFORMANCE MANAGEMENT: CONFRONTING THE CHALLENGES FOR LOCAL GOVERNMENT.
  • Author: Hall, Jeremy L.
  • Journal: Public Administration Quarterly
  • Print ISSN: 0734-9149
  • Year: 2017
  • Issue: March
  • Publisher: Southern Public Administration Education Foundation, Inc.

PERFORMANCE MANAGEMENT: CONFRONTING THE CHALLENGES FOR LOCAL GOVERNMENT.


Hall, Jeremy L.


INTRODUCTION

Local government managers face considerable accountability demands on a daily basis, as do their state and federal counterparts. But local government administration offers distinct challenges to accountability through responsiveness (Koppell 2005), or what we might call performance-based accountability. The tasks of local government are routine, problems are often small in scale, and government organizations are often small, facilitating communication and information sharing through traditional means. But routinization of tasks should also facilitate the use of performance measurement to a greater degree than an environment characterized by task uncertainty. With respect to size, Zheng (2015) found, for example, that the use of e-participation in New Jersey local governments increased among cities with greater populations. This suggests that as cities become larger, linkages with citizens become more difficult to maintain, and as agencies grow, the information burden is difficult to bear without systematic approaches. This article uses the capacity/performance paradigm as a framework to build a theoretical synthesis of the obstacles to local government use of performance management; that is, it elucidates the barriers and impediments local government administrators--both municipal and county--face in developing a culture of performance management. By way of definition, it is helpful to begin by distinguishing performance measurement, which is the collection, analysis, and reporting of performance information, from performance management, which is the use of such information by managers with adequate discretion in daily decision making (Moynihan 2008).

BACKGROUND: SYSTEMATIC USE OF PERFORMANCE MEASUREMENT AND MANAGEMENT

State governments, such as Virginia, led the way in producing government-wide performance management systems, and most states have such systems in place today (Moynihan 2008). The federal government formally entered the performance measurement and management arena with the passage of the Government Performance and Results Act of 1993, following the years-long National Performance Review. With the adoption and implementation of CompStat by New York City Police Commissioner Bill Bratton and Mayor Giuliani in 1994, performance management began to enter the nomenclature of local government management. The performance movement, as a phenomenon, has emerged quite recently in practical terms, and it has evolved and expanded with ardor. Citistat, implemented in Baltimore, exemplifies a city-wide performance management system based on the principles found in CompStat.

The advent of these new approaches to management has been accompanied by a series of trends that shape their application, demand, and value. The rise of the IT sector has provided more and more powerful technology to collect, process, analyze, and report performance information to citizens and stakeholders directly through the world wide web. The new generation of consumers has grown up with technology and social media as a central part of their lives, and the demands for government accountability and citizen engagement have taken on new forms as a result. The internet has changed information sharing not just in terms of availability; it has also drastically reduced transmission times, making information immediately available and creating expectations for reduced time from production to reporting. The absence of information is now seen as suspicious, and contrary to the foundational expectation of transparency. Finally, the effects of globalization in the new economy have been widespread. Goods and services flow freely from place to place, but people, as individuals, are also more mobile than ever before. They are not place-bound consumers of public services, but mobile mavens with increased expectations who increasingly exercise choice. We are now more aware of what is going on in other places, and more readily able to observe differences in public service quality from place to place. This encourages governments to compete not only with proximate neighbors, but with global ones. Managing expectations is considerably more difficult when information flows freely.

Other societal trends have also influenced local government dynamics. With the growth in government, rational approaches offered considerable appeal as means to organize information, process it, and report findings in an effort to enhance accountability. The rationality movement, in general, spawned increased use of various accountability-oriented efforts (Stone 2001), including strategic planning, program evaluation, performance measurement, and more recently, evidence-based practice. Objectivity has displaced subjective politics as the preferred decision making strategy, though the magnitude of such shifts will no doubt be tempered by the nature of individual positions. Of course, as Jennings and Hall (2012) have noted, such objective approaches are valuable when questions are instrumental--how to do something that is already agreed upon--and not so valuable in addressing questions about what should be done, as is often the case with the values-laden questions that are more common in state and federal government.

The performance literature started off primarily as a descriptive and prescriptive one (Behn 2003; Newcomer 1997; Wholey & Hatry 1992). Over time, a number of studies have sought to describe the extent to which performance measurement is being used as a management tool in various governments (Moynihan 2008; Berman & Wang 2000). Others have sought out the factors that explain the adoption and use of these systems, or factors influencing their success (Moynihan & Pandey 2010; Julnes & Holzer 2001). Ammons (1999) sought to reveal the attributes that prepared cities for successful comparison or benchmarking. Ammons & Rivenbark (2008), for example, report on the experiences of North Carolina cities in benchmarking, and offer insight into the factors that influence successful use of benchmarking of municipal services.

Moynihan and Pandey (2010) examine a series of factors associated with information use by managers, including individual and organizational characteristics. More recent attention has turned to determining whether performance measurement makes a difference in actual results. Sanger (2013) compared the performance measurement efforts of cities, finding that only 27 out of 190 were using exemplary performance measurement systems. Of those, only 7 were using data for management purposes, and the results showed only a weak correlation between city demographics and characteristics and the adoption of strong performance measurement systems. Yang and Holzer (2006) considered the linkage between performance and citizen trust in government--a non-mission-oriented value of considerable importance.

Thought and practice continue to evolve, with greater emphasis now on strategic management and the central role of performance measurement in that process. Poister (2010) has suggested that we ought to see greater transition toward strategic management, performance management, and their joint use. LeRoux and Wright (2010), studying nonprofit organizations, sought to determine if performance measurement improved strategic decision making. We are increasingly curious to know: is the headache worth the hype? In spite of sound theory and extensive practice, we still know very little about the performance phenomenon's prevalence, its contributions to effectiveness, efficiency, or equity, and whether those contributions justify continued use. The remainder of this essay examines a number of factors that influence the expected use, success, and support for performance management systems in local government.

THE CAPACITY-PERFORMANCE PARADIGM

A body of recent research has examined the relationships among various forms of capacity and performance outcomes. Setting aside the details of efforts that take place within the black box of specific policies and programs, such findings reveal that the key to solid performance is a solid base of relevant capacity in key areas. The precise definition of capacity depends on the outcome it seeks to bring about (Gargan 1981), and it is not a singular concept, but rather one composed of multiple dimensions (Bowman & Kearney 1988). It takes different types of capacity to respond to a train derailment than to collect and dispose of solid waste. But almost any outcome depends on two major capacity categories--human and financial.

Meier and O'Toole (2002, 2003) have specifically examined the effect of managerial quality and networking on organizational performance. Hall has examined the relationship of capacity to performance in state innovation outcomes (2007a, 2007b) as part of economic development in the new economy, as well as local capacity to leverage federal grant awards (2008a, 2008b). In that research, human and financial factors figured prominently among the dimensions of capacity considered. In other research, Hall (2008c, 2010) has shown that mitigating factors can influence the role of capacity in explaining the flow of grants. So it is not just capacity, but the way capacity interacts with other factors, that explains performance. Capacity is necessary, but not sufficient, to produce performance results. All of this research ignores the central role of what goes on within the black box--how are local governments delivering programs? What steps are they taking, and which strategies are they using? Certainly these factors matter, particularly in the manner in which they are integrated with performance measurement.

The capacity-performance paradigm explains performance outcomes, but there is another related area where it can influence performance. Local governments, if expected to manage their performance outcomes, must possess the necessary capacity to engage in the act of performance measurement and management. What constitutes capacity for performance measurement? It takes both time and money to successfully implement performance measurement, and some have lamented the tradeoff between administrative endeavors, such as performance measurement and reporting, and direct program or service delivery. Administrative requirements are seen as detracting from time or money that could be applied directly to the problem or the people in need.

Without detouring into a discussion of the proper balance between these uses of resources, we can identify the necessary components of capacity that will facilitate performance measurement. First, management support and commitment are key, for without them, no performance measurement endeavor is likely to succeed, or to be taken seriously within the organization. Second, a strategic plan with a clear mission, goals, and objectives is necessary to determine which outcomes are important to the organization. Third, personnel or personnel time are necessary to collect performance data in a routine manner, compile it, analyze it, and report it to relevant consumers. While staff time is important, expertise in data collection and analysis, particularly statistical analysis, will be beneficial. Fourth, capacity requires technology in the form of a performance management system into which data is collected, organized, and stored for future analysis, and from which reports can be generated. At the simplest level, this might constitute a computer with a spreadsheet or database program. But large, integrated systems exist to match the size and sophistication of various performance efforts. Fifth, it is necessary for data users--decision makers--at all levels to possess the ability to interpret performance data. And, of course, none of this would be relevant without the discretion to act in response to performance reports in a double-loop learning approach.
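To make the simplest level concrete, the sketch below shows what such a minimal, spreadsheet-level system might look like in code: a small store of indicators into which observations are routinely recorded, and from which a basic report for decision makers can be generated. It is an illustration only; the indicator name, target, and figures are hypothetical and not drawn from any actual system.

```python
# A minimal sketch, assuming hypothetical indicator names and figures: a
# spreadsheet-level performance record with routine collection and a simple
# report, as described above. Not any particular city's actual system.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Indicator:
    name: str                    # key performance indicator from the strategic plan
    unit: str                    # unit of measure, for interpretability
    target: float                # the objective the measure is tracked against
    observations: list[float] = field(default_factory=list)

    def record(self, value: float) -> None:
        """Routine data collection: append one reporting period's observation."""
        self.observations.append(value)

    def report_line(self) -> str:
        """Reporting for decision makers: latest value, average, and target."""
        latest = self.observations[-1]
        return (f"{self.name}: latest {latest:.1f} {self.unit}, "
                f"average {mean(self.observations):.1f}, target {self.target:.1f}")

# Hypothetical municipal indicator, for illustration only.
pothole_repair = Indicator("Average days to repair a pothole", "days", target=5.0)
for quarterly_value in [9.0, 7.5, 6.0, 5.5]:
    pothole_repair.record(quarterly_value)

print(pothole_repair.report_line())
# Average days to repair a pothole: latest 5.5 days, average 7.0, target 5.0
```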

Bartels and Hall (2012) and Hall and Bartels (2014, 2015) have proposed an administrative theory of performance to explain why some Tax Increment Finance (TIF) districts outperform others in the same metropolitan area. They find evidence that the use of particular management tools--performance measurement and risk assessment--leads to improved performance in the primary outcome of interest--property tax revenue. In their most recent study, they utilized quantitative analysis to isolate particular tools and approaches associated with performance measurement, reporting, and strategic planning, with promising findings associating outcomes with performance management--the use of performance information to make ongoing decisions. They also find some evidence suggesting that the number of TIF districts a city operates increases performance. What does this mean for local government performance management? There are two key points to take away: first, evidence is mounting that performance measurement, when used properly, can facilitate performance improvements; second, the better the capacity on hand to engage in appropriate performance measurement efforts, the greater the likelihood of success.

Capacity is necessary, but not sufficient, to bring about effective performance measurement, and therefore also not sufficient to bring about performance improvements. Performance measurement and management are not uniform in their use across agencies or places; they exist on a spectrum from symbolic efforts that adhere to minimum standards to substantive efforts with a strong emphasis on learning and improving. Capacity may permit more substantive efforts, but substantive efforts may also stimulate the development of necessary capacity to implement performance measurement.

EXPLORING THE COMPONENTS OF CAPACITY, AND POTENTIAL IMPEDIMENTS, FOR LOCAL GOVERNMENT PERFORMANCE MEASUREMENT

Seeing that capacity provides the wherewithal for success in both 1) substantive mission-central outcomes and 2) process-oriented efforts such as performance measurement, we can examine the various components of capacity individually to provide insight into the limitations that deficiencies in each could be expected to produce. Capacity for performance measurement can be conceptualized along a number of key dimensions; in some cases, the directional relationship is clear, but in others, additional research is necessary to better determine the influence each capacity dimension has on performance measurement utilization and success.

Financial Capacity

Financial capacity drives all other forms of capacity. The budget is the lever through which policy priorities are implemented, including performance measurement. Without budgetary resources it is not possible to provide employees, technology, supplies, or any of the other things that are necessary to implement and utilize performance measurement. Most studies have explored the effects of performance information on budgetary decision making rather than the reverse (Melkers & Willoughby 2005, for example).

Administrative Capacity

Meier and O'Toole have examined the effect of managerial quality (2002) and networking (2003) on organizational performance. Hall's (2007a, 2007b) research concentrated on the role of particular types of human resources in economic development success. When looking to the capacity that is necessary to achieve an outcome, human resources of one type or another almost always enter the equation. What about the capacity to engage in performance measurement? Local governments--cities and counties in particular, but also special districts--vary considerably in size and scope. Many local governments have no more than one or two employees, and those that do have a number of employees rarely have a substantial central administrative staff. Large cities and counties, obviously, are the exception. Smaller governments will typically face pressure to keep overhead costs low and service delivery high. This challenge sets the stage for failure. There is, of course, potential to offset such pressure through citizen education and outreach; notably, Heikkila and Isett recommend three strategies: "(1) garner citizen input on performance criteria, (2) improve performance management processes to allow more citizen involvement, and (3) communicate performance information with citizens more effectively" (2007, 245).

Local government employees in the line departments--police, fire, sanitation, parks, and so on--will likely be pressured by the mission-oriented demands of their jobs. These employees may find it challenging to carve out the time, or to muster the motivation, to devote part of their busy schedules to measuring performance. Existing administrative staff may also have a full plate dealing with citizen inquiries and problems, managing finances, and so on. Asking these employees to take on the responsibility of performance measurement may also be met with resistance. This is all to say that performance measurement is work, and it requires someone to perform it; it will not complete itself. We expect that performance measurement will be more effective when it is implemented by a staff with proper motivation, management support, and analytic skills.

Proper motivation implies that the staff believe that it is in the best interest of the agency to collect data to monitor its performance, that doing so supports accountability objectives, and that it can influence decisions that will lead to improved service delivery; in other words, they believe that the information obtained from the data will lead to improvement. Staff skepticism has notoriously been an obstacle to performance measurement implementation (Berman & Wang 2000). Offsetting such skepticism, and encouraging motivation, may require amendments and adjustments to existing systems such as annual reviews, or the placement of incentives for performance achievements; long-term changes to staff motivation must stem from the resulting evolution of organizational culture. Management support implies that managers share this belief, and that they encourage the performance measurement process, support it with time and resources, and make employees aware that it is an important part of their work. It has recently been shown that leadership capacity is essential to bring about positive performance results (Andrews & Boyne 2010). Finally, analytic skills are an essential part of administrative capacity. Individuals engaged in data collection, analysis, and reporting need a good understanding of quantitative methods, as well as the essentials of research design. In other words, they need to understand the implications of threats to validity, the importance of reliability, and how to conduct statistical analysis in a way that maintains the integrity of the data but is also interpretable to stakeholders, policymakers, and citizens.

Administrative capacity is more than just the number of individuals working on a project. It is more than the hours they spend per week or per year engaged in performance measurement. Productivity is enhanced with training, skill, and experience, and more qualified individuals will be able to achieve much more in less time than their less qualified counterparts. When we think about the challenges administrative capacity poses to local government use of performance measurement, there are a few. First, local governments--especially small ones--are less professionalized. This means the culture is less likely to encourage performance measurement. It also means that it will be difficult to recruit individuals with the kind of skills necessary to conduct such work because their skills command higher wages that those governments may not be able to pay. This challenge is present whether the application of such expertise occurs in a centralized fashion or within line departments. Technical skills necessary to deliver local services do not often include quantitative analysis. This is why generalist administrative professionals have become so important as federal, state, and local governments have professionalized in recent decades. The value of a graduate degree in public administration, in particular, has continued to rise.

People constitute the core of a performance measurement strategy, and the qualitative dimensions of capacity identified here constitute a better measure of capacity than simple quantity.

Economy of Scale and Efficiency

"We don't do a lot. We have a small budget and a small staff. We really don't have the resources to entertain a substantive performance measurement effort..."--statements like this are not uncommon. But there is a sound basis for the argument. Local governments, because they are small, may find it to be more expensive to implement a government-wide performance measurement system. The argument is the same as for serving a population with low population density in a rural area. A road of equal length and width costs no more to build in either place, but it costs more per resident in a town of 10,000 than in a town of 1,000. Technology required for performance measurement may not be perfectly scalable, and the minimum investment may result in higher costs per person, or per budgetary dollar spent in smaller areas. The argument is simple: the economy of scale that we find in federal, state, and large metropolitan governments may not exist in smaller local governments. Implementation of performance measurement may result in greater overhead expenses in these smaller settings, which could strongly discourage its use.

Is it possible to overcome scale economy challenges in smaller governments? There are ways to do so without disrupting either normal service delivery or agency budgets. One key is to start small and simple, with an eye toward increasing sophistication to better serve decision making needs as experience and comfort with the effort grow. Another approach would be to centralize administration of performance measurement roles in local governments so that effort need not be duplicated within each of the line agencies. And it is always possible to pilot test new administrative techniques on a subset of programs. One additional mechanism that can help offset these scale-oriented challenges is collaboration. Hall and Jennings (2012) identify a series of factors that influence collaboration on evidence-based practice in an interstate collaborative. The basis for collaboration here is simple: a benchmarking collaborative might be able to overcome some of the challenges of scale by assigning the organization of performance measurement, analysis, and reporting activities to a task force or third party with stronger expertise. This also raises the possibility of outsourcing performance measurement to a third party with considerable skill and experience, as has been done with a wide variety of local government services in recent years. Economy of scale provides greater capacity for performance measurement because the marginal cost will be lower, whereas the lack thereof can constitute an obstacle.

Inadequate Strategic Planning

It is difficult to know what to measure if there are not clear goals and objectives in place. Without a good understanding of performance measurement, a practitioner might wrongly assume that it is something they can simply start doing. A manager observes the things that they think are important, and decides to begin collecting data. While this approach may not be wrong, it lacks the guidance that can be obtained from a more strategic analysis and plan. Local governments have been slower to take on strategic planning than their larger state and federal counterparts. The size of small local governments may provide an illusion of control, and proximity to the problems may provide good familiarity with needs and problems. However, there are still areas of disagreement among local factions, and it may not be the case that goals and missions in local agencies are clear. They may be assumed without the justification that strategic planning can provide.

There is a clear linkage between strategic planning and performance measurement. Poister (2010) argued for better alignment of the two practices in the future. Strategic planning processes organizational and environmental information in a way that helps to determine an organization's mission and mandates, its opportunities and challenges, and from those, its strategic goals and objectives. Performance measurement begins with the identification of key performance indicators, and the definition of key is, essentially, strategic. So a good strategic plan will provide the foundation for performance measurement. In winnowing down lists of potential measures that might be collected and reported, performance measurement efforts can benefit by focusing on those that are defined in the strategic plan--they are integral to achieving the organization's mission.

If we accept that a strategic plan provides capacity to facilitate performance measurement, then conversely, the absence of a plan implies a challenge to successful implementation of performance measurement. While performance measures can be determined independently, doing so outside a strategic planning process offers two problems: first, any performance measures selected run the risk of addressing things that are not central to the organization's mission--things that do not matter; and second, engaging in an ad hoc effort to select key performance indicators is duplicative of any effort that would be performed in a de facto strategic planning process. This redundancy would be inefficient if strategic planning were conducted separately, and ineffective if not performed with the same rigor as an earnest strategic planning effort.

Task Complexity and Simplicity

When thinking about performance measurement in any environment, one of the challenges is often that we really cannot measure the results of our effort. Take NASA, for example: we can track shuttle or rocket launches, or new satellites put into commission on an annual basis, but these outputs do not reflect the agency's core mission. Outcomes for NASA are not measurable on a monthly basis, and where they are, the change is too negligible to be meaningful. In mid-2015, for example, we began to receive photographs of Pluto from the New Horizons mission, launched January 19, 2006; the analysis of those images will continue for many years.

The following are NASA's strategic goals, taken from its 2014 strategic plan:

1. Expand the frontiers of knowledge, capability, and opportunity in space.

2. Advance understanding of Earth and develop technologies to improve the quality of life on our home planet.

3. Serve the American public and accomplish our Mission by effectively managing our people, technical capabilities, and infrastructure. (NASA 2014, iv).

Similarly complex outcomes and goals characterize many agencies. The Environmental Protection Agency (EPA) monitors environmental conditions that contribute to climate change. Economic development agencies strive to reshape the character of their communities. Some agencies face tasks that are more challenging, complex, and ambiguous than others. Sanitation departments, for example, face relatively simple goals: removing waste, and reducing waste directed to landfills. Transit departments seek to make transit available to those who need it, and to see that the buses run on time. Task complexity varies within and across agencies, and it offers a challenge to anyone seeking to implement a performance measurement system.

It is a classic square peg/round hole problem when we define the parameters of a performance measurement system and then apply it consistently across departments. Valid comparison requires such consistency, but conforming to the requirements that work for some units will be difficult and stressful for others. Hall and Handley (2011) found this to be the case with the implementation of performance measurement in the federal CDBG program in local governments. The strategic planning goals and performance reporting requirements HUD had put in place were constraining for local governments with divergent priorities.

Task simplicity will facilitate performance measurement, as will goal clarity and consistency; these characteristics enhance capacity to use performance management. Complex tasks, by definition, require greater front-line discretion and less structure to implement; they will be difficult to measure, and proposed measures will likely generate resistance from employees. Task complexity and goal ambiguity reduce the capacity to utilize performance measurement.

Another dimension of this issue is the difference in the types of information we seek from one policy question to another. Demand for information is likely to bring about calls for the implementation of performance measurement to inform decision making. And demand for information is likely to occur when there is a problem, or when something significant creates a policy window and subsequent political discourse on the agency and its efforts. Jones and Baumgartner (2005) have shown us that the demand for information varies over time, and that the information search is influenced by punctuations in the agency's equilibrium. Unfortunately, we cannot simply turn performance measurement on when demand is high, and off when it is not. A steady and systematic stream of performance data is necessary to make ongoing decisions, and to offer comparisons that demonstrate success or failure.

Performance information, like evidence-based practice and other forms of objective data, is useful only in addressing questions that are instrumental in nature--how to do something that has been agreed upon (Jennings & Hall 2012). Political questions--about what should be done--are beyond the purview of data because they are informed by values and beliefs at a deeper level. Performance measurement will never be useful for determining political questions; only instrumental ones.

Simple tasks, clear goals, and instrumental questions are a suitable medium for developing a successful performance measurement system. Complex tasks, goal complexity (Koppell 2005), and political questions are a recipe for confusion, and may detract from any effort to implement performance measurement. Differences in these characteristics across agencies or divisions will necessarily generate strife or confusion within any centralized performance measurement system.

Complex Implementation Environments

What does government do? Increasingly, public services are contracted out to third parties, including nonprofit organizations and private firms. At higher orders of government, this is more prominent; agencies have long been out of the business of direct service delivery, and instead provide accounting and oversight for competitive grants and contracts. With each step removed from the payer, control becomes more and more difficult as the familiar struggles of the principal/agent problem play out (Frederickson & Frederickson 2006). Information asymmetry, slacking, goal displacement, and other problems can be expected to characterize contractual relationships we see among entities just as they do relationships within bureaucratic agencies.

What becomes of performance objectives in these settings? On the one hand, it becomes more difficult to control outcomes when they are handed off to an external provider. Profit will be the principal motivator for individual firms or agencies fulfilling government contracts, whereas the strategic goals of the public agency may be neglected. Accountability relationships are challenged as goals become muddled (Koppell 2005). The challenges of intergovernmental administration have been well documented, and there has been some interest in the role of performance measurement in the grantmaking and contracting context (Hall & Handley 2011; Heinrich 2002; Amirkhanyan 2009). Hall and Jennings (2011) and Jennings, Hall, and Zhang (2012) emphasized the accountability struggles that manifested in the American Recovery and Reinvestment Act. But they also highlighted the potential of performance measurement to provide a solution to the challenges associated with third-party accountability through performance contracts. Performance contracts can provide the specificity necessary to ensure that both quantity and quality are sound, and that a series of goals are considered. Gaming is possible in any contractual relationship, and we can fully expect loopholes to be exploited, but learning over time offers the potential to develop better contracts that close such loopholes and help to facilitate accountability for public goals and values. This suggests the need for increased administrative capacity, particularly technical capability focused on grant and contract management. In other words, implementation complexity reduces capacity to engage in performance management, but that deficit could be offset by increases in administrative and technical capacity to manage it.

In collaborative or cooperative settings, goal conformity is associated with greater success. For a collaborative to succeed, its members must share a common goal or objective; when there is consistency of mission, it will be much easier to agree upon measures of collaborative success for performance measurement purposes.

Grantmaking poses a particularly difficult challenge for local governments. Since they are most often on the receiving end of federal or state grants, they are also on the receiving end of a set of strings and conditions that must be fulfilled. Conditions on grant awards exist to achieve some separate policy objective that may be unrelated to the core mission of the grant program itself. To a local government organization, the existence of conditions and strings constitutes burden. Additional burden adds to implementation complexity and often detracts from the core mission of program delivery. With the advent of the American Recovery and Reinvestment Act, the set of strings and conditions has grown to include performance measurement requirements. If local governments are required to collect, analyze, and report performance on measures determined by the federal agency, they may be forced to implement new performance measurement systems dictated by external goals and objectives. If they have existing local systems, the requirements of the federal agency may overlap, duplicate, add to, or otherwise amend the existing local measurement system. This added complexity can have the effect of shifting goals away from local ones toward federal ones, because what gets measured gets done.

Modern organizational strategies and implementation techniques pose new challenges and complexities that may deter the use of performance measurement vis-a-vis traditional direct service delivery.

Lack of Fodder for Comparison

What do we do with performance information once it is collected? Sure, we can analyze it and report it, but in order to convert data into information, we have to give it meaning. One way to do this is through comparison to past performance (history), or to benchmark organizations. Without data against which to compare performance, the results will be less meaningful, and consequently more difficult to justify. The presence of past performance information, or comparable information from peer organizations, constitutes capacity around which a performance measurement system may be built. Entities with a history of performance measurement will have greater capacity to continue and to improve performance management practice than those without.
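A short sketch can illustrate the two comparisons just described: change against an organization's own history, and position against peer benchmarks. The city names and cost figures here are hypothetical, invented for illustration.

```python
# A minimal sketch of converting data to information through comparison, as
# discussed above. City names and cost figures are hypothetical.
cost_per_ton = {         # solid-waste collection cost, dollars per ton, this year
    "Our City": 96.0,
    "Peer A": 88.0,
    "Peer B": 104.0,
    "Peer C": 91.0,
}
our_history = [110.0, 103.0, 96.0]  # our last three annual observations

# Comparison 1: our own past performance as the baseline.
change = our_history[-1] - our_history[0]
print(f"Three-year change: {change:+.1f} dollars per ton")  # -14.0

# Comparison 2: peer (benchmark) organizations as the baseline.
peer_costs = [cost for city, cost in cost_per_ton.items() if city != "Our City"]
peer_average = sum(peer_costs) / len(peer_costs)
print(f"Our cost: {cost_per_ton['Our City']:.1f}; peer average: {peer_average:.1f}")
# Our cost: 96.0; peer average: 94.3
```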

PERCEPTUAL BARRIERS TO SUCCESSFUL PERFORMANCE MANAGEMENT

In addition to the various forms and facets of capacity that may influence the adoption and use of performance measurement and management, there are also perceptual barriers that may influence local government interest in its use. These are explored next.

The Art of Distraction: Managing the Cacophony of Reform Expectations

Local governments face pressure to remain as accountable as possible to their citizens and stakeholders. In the rational age, governments are seemingly inundated with rational reform programs. Strategic planning is widely used and accepted, but its contributions to performance measurement are rarely considered. Performance measurement is building a head of steam, but it is not used in local governments as widely as it is in state and federal agencies. Program evaluation and evidence-based practice add fuel to the fire of potential reform strategies. As useful as each of these strategies is, the impetus to adopt reforms for the sake of symbolic accountability is strong. When adopted in such a symbolic fashion, without the substantive attention or devotion ascribed to a committed endeavor, they will not produce the desired effects. Moreover, these reforms can be implemented in an iterative cycle, where the outputs of one process become inputs to another. Local governments will do well to avoid the hype and concentrate on implementing performance measurement in earnest before adopting the reform strategy of the month.

The Illusion of Control

Proximity of decision makers to the agencies and to citizens is highest in local governments, which are smaller and serve smaller numbers of constituents. However, the illusion of control may shape attention to performance measurement (Bazerman & Moore 2012). Overconfidence leads humans to conclude that they can control things that are beyond their control. This bias in decision making may be important to the use of performance measurement because it may lead administrators to believe that they are able to control government without the kind of active, systematic intervention that performance management provides. The proximity of local government to its citizens, and of elected officials to agency staff, likely fosters such illusions, and may bias decision makers against adopting performance measurement, dismissing it as an unnecessary luxury.

Operational Challenges

How many measures is the right number? How should we prioritize goals when there are many, or when they compete? Which types of measures should be used? Boyne (2002) identified sixteen different dimensions of performance that could be monitored, grouped into five themes: outputs, efficiency, effectiveness, responsiveness, and democratic outcomes. Which one is most important? Which of them are important at all? We could measure dozens of specific indicators in each dimension, and with them obtain a very clear picture of organizational performance. However, data collection, analysis, and reporting take time, resources, and money, and each of these is scarce. One of the most difficult challenges of performance measurement is deciding on a list of key measures to be followed. Constraints limit an agency to only a few key indicators. Most will focus on a few workload measures for internal management purposes. Some will focus on outputs, because they represent intermediate progress toward meeting agency goals. And a few will focus on outcomes, because they are the organization's strategic goals--its raison d'etre.
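The winnowing problem can be made concrete with a brief sketch: candidate indicators grouped under Boyne's five themes, filtered down to a short list of mission-critical measures. The indicator names and priority rankings are hypothetical examples, not drawn from the article.

```python
# A sketch of winnowing candidate indicators, grouped by Boyne's (2002) five
# themes, down to a short list of key measures. The indicators and priority
# rankings here are hypothetical examples, not drawn from the article.
candidates = [
    # (theme, indicator, priority: 1 = mission-critical ... 3 = nice to have)
    ("outputs",             "tons of waste collected",            2),
    ("outputs",             "lane-miles of road resurfaced",      3),
    ("efficiency",          "cost per ton of waste collected",    1),
    ("effectiveness",       "% of waste diverted from landfill",  1),
    ("responsiveness",      "average days to close a complaint",  2),
    ("democratic outcomes", "% of residents rating service good", 1),
]

# Scarce time and money limit an agency to a few key indicators, so keep
# only those ranked mission-critical.
key_measures = [(theme, indicator) for theme, indicator, priority in candidates
                if priority == 1]
for theme, indicator in key_measures:
    print(f"{theme}: {indicator}")
```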

Operational concerns can produce a cacophony of frustration and resistance: what to measure, when, how, to whom should it be reported? These questions must all be addressed and a clear and systematic approach developed for performance measurement to be successful.

CONCLUSION

This essay has presented the capacity-performance paradigm as a framework for synthesizing theoretical expectations about the various factors that seem to influence the adoption and use of performance management in local government. Various dimensions of capacity have been identified and discussed as they pertain to local government utilization of performance measurement as a management strategy. Perceptual obstacles were also presented as additional stumbling blocks that may impede performance management use. By deliberately reviewing these key capacity components, local officials can select performance measurement strategies that will have the highest likelihood of success. Failure to identify a capacity deficiency may reduce the probability of success with any performance management system or strategy.

As local government performance management use increases, there will be increasing demand for research to explain the observed differences among local governments, and to validate or invalidate the influence of each of these factors as determinants of performance management adoption, utilization, and success. There is thus a growing need for research that examines--quantitatively and qualitatively, cross-sectionally and in time series analysis--how these factors influence local government performance management endeavors.

REFERENCES

Amirkhanyan, A. A. (2009). Collaborative performance measurement: Examining and explaining the prevalence of collaboration in state and local government contracts. Journal of Public Administration Research and Theory, 19(3), 523-554.

Ammons, D. N. (1999). A proper mentality for benchmarking. Public Administration Review, 105-109.

Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve municipal services: Evidence from the North Carolina benchmarking project. Public Administration Review, 68(2), 304-318.

Andrews, Rhys, & George A. Boyne. 2010. Capacity, leadership, and organizational performance: Testing the black box model of public management. Public Administration Review 70, 3: 443-454.

Bartels, C. E. & J. L. Hall. 2012. Exploring Management Practice Variation in Tax Increment Financing Districts: Toward an Administrative Theory of Performance. Economic Development Quarterly, 26(1): 13-33.

Bazerman, Max, and Don A. Moore. 2012. Judgment in Managerial Decision Making, 8th ed. Wiley & Sons.

Behn, R. D. (2003). Why measure performance? Different purposes require different measures. Public Administration Review, 63(5), 586-606.

Berman, E., & Wang, X. (2000). Performance measurement in US counties: Capacity for reform. Public Administration Review, 60(5), 409-420.

Bowman, A. O. M., & Kearney, R. C. (1988). Dimensions of state government capability. The Western Political Quarterly, 341-362.

Boyne, G. A. 2002. Concepts and indicators of local authority performance: An evaluation of the statutory framework in England and Wales. Public Money and Management 22(2): 17-24.

Frederickson, D. G. and H.G. Frederickson (2006). Measuring the performance of the hollow state. Georgetown University Press.

Gargan, J. J. (1981). Consideration of local government capacity. Public Administration Review, 649-658.

Hall, J. L. 2007a. "Developing Historical Fifty-State Indices of Innovation Capacity and Commercialization Capacity" Economic Development Quarterly 20(2): 107-123.

Hall, J. L. 2007b. "Understanding State Economic Development Policy in the New Economy: A Theoretical Foundation and Empirical Examination of State Innovation in the U.S." Public Administration Review, 67(4), 630-646.

Hall, J. L. 2008a. "The Forgotten Regional Organizations: Creating Capacity for Economic Development" Public Administration Review, 68:1, 110-125.

Hall, J. L. 2008b. "Assessing Local Capacity for Federal Grant-Getting" American Review of Public Administration, 38(4): 463-479.

Hall, J. L. 2008c. "Moderating Local Capacity: Exploring the E.O. 12372's Intergovernmental Review Effects on Federal Grant Awards." Policy Studies Journal, 36(4): 593-613.

Hall, J. L. 2010. "Giving and Taking Away: Exploring Federal Grants' Differential Burden on Metropolitan and Non-Metropolitan Regions" Publius, the Journal of Federalism, 40 (2): 257-274.

Hall, J. L. & C. E. Bartels. 2014. Management Practice Variation in Tax Increment Financing Districts: An Empirical Examination of the Administrative Theory of Performance. Economic Development Quarterly, 28 (3): 270-282.

Hall, J. L. & C. E. Bartels. 2015. To boldly go where no one has gone before: Measuring the effect of performance management in economic development practice. Paper presented at the American Society for Public Administration Annual Meeting, Chicago, IL.

Hall, J. L. and D. M. Handley. 2011. "City Adoption of Federal Performance Measurement Requirements: Perspectives from Community Development Block Grant Program Administrators." Public Performance and Management Review 34(4): 443-467.

Hall, J. L. and E. T. Jennings. 2011. The American Recovery and Reinvestment Act (ARRA): A Critical Examination of Accountability and Transparency in the Obama Administration. Public Performance and Management Review, 35(1): 202-226.

Hall, Jeremy L. and Edward T. Jennings, Jr. 2012. Administrators' perspectives on successful interstate collaboration: The Drug Effectiveness Review Project. State & Local Government Review, 44 (2).

Heikkila, Tanya & Kimberley Isett. 2007. Citizen involvement and performance management in special-purpose governments. Public Administration Review 67,2: 238-248.

Heinrich, C. J. (2002). Outcomes-based performance management in the public sector: Implications for government accountability and effectiveness. Public Administration Review, 62(6), 712-725.

Jennings, E. T., & Hall, J. L. (2012). Evidence-based practice and the use of information in state agency decision making. Journal of Public Administration Research and Theory, 22(2), 245-266.

Jennings, E. T., J. L. Hall, & Zhiwei Zhang. 2012. The American Recovery and Reinvestment Act and State Accountability. Public Performance and Management Review, 35 (3): 527-549.

Jones, B. D., & Baumgartner, F. R. (2005). The politics of attention: How government prioritizes problems. University of Chicago Press.

Julnes, P. D. L., & Holzer, M. (2001). Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation. Public Administration Review, 61(6), 693-708.

Koppell, J. G. (2005). Pathologies of accountability: ICANN and the challenge of "multiple accountabilities disorder". Public Administration Review, 65(1), 94-108.

LeRoux, K., & Wright, N. S. (2010). Does performance measurement improve strategic decision making? Findings from a national survey of nonprofit social service agencies. Nonprofit and Voluntary Sector Quarterly.

Meier, K. J., & O'Toole, L. J. (2002). Public management and organizational performance: The effect of managerial quality. Journal of Policy Analysis and Management, 21(4), 629-643.

Meier, K. J., & O'Toole, L. J. (2003). Public management and educational performance: The impact of managerial networking. Public Administration Review, 63(6), 689-699.

Melkers, J., & Willoughby, K. 2005. Models of performance-measurement use in local governments: Understanding budgeting, communication, and lasting effects. Public Administration Review, 180-190.

Moynihan, D. P. (2008). The dynamics of performance management: Constructing information and reform. Georgetown University Press.

Moynihan, D. P., & Pandey, S. K. (2010). The big question for performance management: why do managers use performance information?. Journal of public administration research and theory, muq004.

NASA. 2014. NASA Strategic Plan 2014. Retrieved May 20, 2015 from: http://www.nasa.gov/sites/default/files/files/FY2014_NASA_SP_508c.pdf.

Newcomer, K. E. (1997). Using performance measurement to improve programs. New Directions for Evaluation, 1997(75), 5-14.

Poister, T. H. (2010). The future of strategic planning in the public sector: Linking strategic management and performance. Public Administration Review, 70(s1), s246-s254.

Sanger, M. B. (2013). Does measuring performance lead to better performance? Journal of Policy Analysis and Management, 32(1), 185-203.

Stone, D. A. (2001). Policy paradox: The art of political decision making. New York: W.W. Norton.

Wholey, J. S., & Hatry, H. P. (1992). The case for performance monitoring. Public Administration Review, 604-610.

Zheng, Yueping. 2015. Explaining Government Performance on E-Participation in New Jersey: Government Capacity and Willingness. (Doctoral Dissertation).

JEREMY L. HALL

University of Central Florida