Article Information

  • Title: Performance management in a benchmarking regime: Quebec's municipal management indicators.
  • Authors: Charbonneau, Etienne; Bellavance, Francois
  • Journal: Canadian Public Administration
  • Print ISSN: 0008-4840
  • Publication year: 2015
  • Issue: March
  • Publisher: Institute of Public Administration of Canada

Performance management in a benchmarking regime: Quebec's municipal management indicators.


Charbonneau, Etienne; Bellavance, Francois


It has long been taken for granted by researchers that the presence of performance data would lead to (better) informed decisions (for a recent review, see Aubert and Bourdeau 2012). As a result, "while the production of performance information has received considerable attention in the public sector performance measurement and management literature, actual use of this information has traditionally not been very high on the research agenda" (Van De Walle and Van Dooren 2008: 2). Until recently, most of the large body of research on performance measurement in the public sector examined the determinants of its existence (Streib and Poister 1999; Holzer and Yang 2004; Chung 2005), its implementation (Palmer 1993; Johnsen 1999; de Lancer-Julnes and Holzer 2001; Ho and Chan 2002; Jordan and Hackbart 2005), its perceived benefits (McGowan and Poister 1985; Berman and Wang 2000; Rogers 2006) and its shortcomings (Radin 1998; Hood 2007a). There is now a mounting number of studies on the determinants of the use of performance information at the local level by managers (de Lancer-Julnes and Holzer 2001; Moynihan and Pandey 2010; Kwon and Jang 2011; Moynihan and Hawes 2012; Moynihan, Pandey and Wright 2012b) and by elected officials (Ho 2006; Askim 2009). This is the first Canadian study on the determinants of performance management at the local level.

This study answers the same research question formulated by this recent stream of research on the statistical determinants of performance management: What are the factors, whether controllable or uncontrollable, that account for the uses of performance measurement by municipal managers? After all, performance measurement is an information-based managerial tool; it is valuable if it is utilized. This study investigates not only the general use of performance information, but also the management, budgeting and reporting uses of performance information. Moynihan, Pandey and Wright (2012a: 470) would label management and budgeting uses as "purposeful" and reporting uses as "political." In step with de Lancer-Julnes and Holzer (2001), Rogers (2006), and Moynihan and Pandey (2010), we try to identify the determinants of the use of performance information.

The Indicateurs de gestion municipaux [Municipal Management Indicators] initiative, a mandatory municipal benchmarking regime in operation in the province of Quebec, constitutes the setting of this study. Quebec is an interesting case for studying performance management at the local level. First, it is one of only three mandatory municipal performance regimes in North America, along with Ontario and Nova Scotia. Second, the regime is mandatory only with respect to the collection and transmission of standardized indicators. It is not a rigid performance measurement regime like England's defunct Comprehensive Area Assessment. Rather, it is a compulsory and publicly funded version of the performance consortiums in operation in the United States, such as the North Carolina Benchmarking Network, the Southeastern Results Network, and the Florida Benchmarking Consortium.

The present study addresses at least three limitations of the existing literature on the use of performance information in the public sector. First, the dependent and independent variables of the regression models come from survey instruments that were previously used in other governmental settings. Second, the information carried by the performance indicators themselves is controlled for in the analyses, as suggested by Boyne (2010: 216, 218). Third, and more importantly, the present study encompasses many small and very small municipalities. Most studies on the use of municipal performance information are targeted at medium to large municipalities: between 10,000 and 100,000 residents (Ho 2003), more than 25,000 residents (Melkers and Willoughby 2005; Rogers 2006), more than 50,000 residents (Moynihan and Pandey 2010; Moynihan, Pandey and Wright 2012b), between 25,000 and 250,000 residents (Chung 2005). To our knowledge, only two studies include smaller municipalities. In Johansson and Siverbo's (2009) study of Swedish municipalities, the smallest municipality in Sweden, Bjurholm, has 2,500 residents (Statistics Sweden 2009). Rivenbark and Kelly's (2003) study of smaller municipalities focuses on localities with between 2,500 and 24,999 residents. In comparison, two-thirds of the 1,113 municipalities in Quebec have fewer than 2,000 residents. Rural municipalities make up a large share of municipalities in Canada. Patterns of use of performance information in small and very small municipalities are close to unknown.

The article proceeds as follows. The next section covers in more depth the previous research on the use of performance information, followed by a description of the data and methods used to test our hypotheses. Lastly, we present the results from our regression analyses and conclude with a discussion of the importance of the findings.

Previous research

The general use of performance information by local managers (Poister and Streib 1999; Wang and Berman 2001; Chung 2005; Moynihan and Pandey 2010) and local politicians (Askim 2007 and 2008) has been studied. Nevertheless, studies assessing the extent of the use of performance measurement seldom take into account subtle symbolic uses (de Lancer-Julnes 2006: 227) or attitudes towards activities (Pollitt 2013: 349).

In past research, when the use of performance measurement was addressed, it was often referred to in a general way. The proportion of local or subnational agencies that used certain kinds of measures was analyzed. The typical findings are that output measures were more prevalent than outcome measures (Usher and Cornia 1981: 233; McGowan and Poister 1985: 534; Palmer 1993: 32; Berman and Wang 2000: 413; de Lancer-Julnes and Holzer 2001: 699; Wang and Berman 2001: 414). Typically, the use of performance measurement was operationalized by researchers in simple, straightforward ways, such as asking managers whether "we use performance measures in this municipality" or "we do not use performance measures in this municipality" (emphasis in the original survey used for Poister and Streib 1999, and Streib and Poister 1999). The Government Finance Officers Association of the United States and Canada (GFOA) asked jurisdictions whether performance measurement was being used in their governments in 'some way' (Kinney 2008: 47). The National Administrative Studies Project (NASP-IV) asked local managers the extent to which they "regularly use performance information to make decisions" (Moynihan and Pandey 2010: 857). Swedish local managers who had adopted the Relative Performance Evaluations system were asked the extent to which "one makes use of ratio comparisons" in their municipality (Johansson and Siverbo 2009: 206). On other occasions, the general use of performance measurement was operationalized as the proportion of municipalities reporting consumption of performance information in general terms like decision making (Fountain 1997) or use "in selected departments or program areas" (Streib and Poister 1999: 111).

Berman and Wang (2000) sought to bridge the "input/output/outcomes" and "general use" studies of performance information with the "task relying on performance information" use studies. Their study of U.S. counties with populations over 50,000 examined whether organizations saying they use performance information have the capacity to include this information in their operations. The authors found that among those using performance measurement, about one-third had what the authors regarded as an "adequate" level of capacity to actually use the information (Berman and Wang 2000: 417). Recent studies look at actual, or purposeful (Moynihan, Pandey and Wright 2012b), uses of performance information instead of a more passive general use.

Performance management

In the "task relying on performance information" use studies, the use of performance measurement information concentrated less on the proportion of input, output or outcome measures or on sweeping generalities. Instead, these studies looked more at the activities and functionalities where performance information was used. Using Berman and Wang's (2000) data, Wang and Berman (2001) assessed the link between what they called the "deployment" and "purposes" of output and outcome measures in county government. The authors asked county managers whether their jurisdiction uses performance measurement to accomplish nine tasks, including "communicating between public officials and resident," "monitor the efficiency/effectiveness of services," and "determine funding levels for individual programs" (Wang and Berman 2001: 415). The proclaimed use of performance information was quite high. It spanned from 53 percent for "determine funding priorities across programs" to 82 percent for "communicate between managers and commission." A task like "communicating between public officials and resident" in the survey did not encompass "actual use" of performance measurement, but reporting (Ammons and Rivenbark 2008: 305). The actual use of performance information excludes "(...) simply reporting measures or somewhat vaguely considering measures when monitoring operations" and implies "(...) evidence of an impact on decisions at some level of the organization" (Ammons and Rivenbark 2008: 305). According to that definition, only three out of six of the specific uses of de Lancer-Julnes and Holzer (2001: 700) would qualify as "actual uses" of performance. The result of de Lancer-Julnes and Holzer (2001: 708) is that few state, county and municipal managers reported using performance information for specific activities "sometimes" or "always."

At the municipal level, slightly different results than for county-level government use of performance information were obtained using ICMA data from managers in U.S. municipalities with populations between 25,000 and 250,000 (Chung 2005). The results from 173 managers who reported having a performance measurement initiative show that, depending on the function, performance information was reported to be used by approximately 25 percent of municipalities (for strategic planning) to just under 50 percent (for managing/evaluating programs) (Chung 2005: 116). Rogers' (2006) study used 277 GASB-generated surveys to assess the use of performance measurement in diverse local initiatives in the United States. The unit of analysis here is different from Chung's (2005). "Use" in Rogers' (2006) study refers to the proportion of departments within a municipality that reportedly use performance information, whereas Chung (2005) looks at municipalities as a whole. Once more, the results indicate that the reported actual use of performance information in municipal departments is rarely higher than 50 percent.

Use of comparative performance information

One of the earliest studies covering performance management behavior in general, and the use of benchmarking tangentially, focused on local authorities in England (Palmer 1993), before the implementation of the Best Value system and the Comprehensive Performance Assessment regime. Managers identified whether they used certain indicators as referents, or benchmarks. Her results on the use of benchmarking do not specify what is meant by use, how frequently benchmarks were used, the percentage of decisions for which they were used, or whether they affected decisions. Still, it is informative to see that 63 percent of managers expressed that they used internal (historical) benchmarking and 56 percent indicated that they used external benchmarking (comparisons with other local authorities) (Palmer 1993: 33).

In 1999, postal surveys of General Managers and management accountants based in the UK showed that only a third of all respondents, when identifying the reasons for participating in benchmarking activities, "saw benchmarking as a source of new ideas, or route to improvement building on observed best practice" (Holloway, Francis and Hinton 1999: 355). Later, Boyne and colleagues (2002) performed content analysis of "performance plans" in Wales, under the Best Value system. They paid specific attention to the presence of benchmarking information contained in these "performance plans." The authors found that:

The percentage of plans including comparisons of performance is extremely low. This limited use of comparisons is surprising because benchmarking was one of the key elements of the review. Some plans contained comparative data gained through benchmarking, but not all pilots who were members of the same benchmarking club included the data. Some PPs utilized the Citizen's Charter indicators published by the Audit Commission. Only a few pilots produced extensive comparative data in the PP. In some cases comparative data are provided, but are difficult to interpret as there is little or no information on the comparator organizations (Boyne et al. 2002: 703).

One of the few studies that specifically present results on managers' use of benchmarking information is Johansson and Siverbo's (2009) study of 207 (out of 290) Swedish municipalities. They find that 40 percent of Swedish municipal managers reported using relative performance evaluations, that is, comparative benchmarking information, "to a great extent" (Johansson and Siverbo 2009: 207).

Establishing from the literature the proportion of local government managers claiming that they use performance information is difficult to do in a precise manner. The expressions "use of performance measurement" and "use of benchmarking information" take on different meanings in different studies. The samples and the questionnaires vary between studies. If the reader can tolerate some discrepancies between the few different perception surveys that have studied the role of performance information for local managers and politicians, and the inherent bias for respondents to overstate socially desirable management behaviors, an overall picture emerges: performance indicators are not widely used.

Data

The present research is based on municipalities in Quebec. The use of performance information by managers is analysed. Since 2004, all municipalities in that province have been mandated to collect and report a set of standardized performance indicators. Quebec is the third and most recent province to have implemented a provincially mandated municipal performance regime.

The e-mail addresses of General Managers were transmitted to our research team by the Ministere des Affaires municipales, Regions et Occupation du territoire (MAMROT). An electronic survey (1) was sent to all General Managers in the 1,113 municipalities in Quebec in the winter of 2009-2010. The survey instrument was pretested in the fall of 2009 with twelve active members of the Partners on Municipal Management Indicators Committee, many of whom are current or former municipal General Managers and/or Chief Financial Officers.

The data for this study cover the whole population of municipalities in Quebec. The fact that MAMROT had electronic contact information for every municipality dissipates some of the concerns about a digital divide that could potentially be present in the province of Quebec, given the proportion of municipalities with fewer than 5,000 residents. Table 1 summarizes the distribution of these municipalities according to population size. It also includes the population size of survey respondents.

As we observe in Table 1, population-stratified reminders contributed to receiving a response from 391 municipalities out of 1,113, with at least 30% of municipalities in each stratum. This is very similar to the 33% response rate registered by Schatteman's (2010) study of municipal performance reports in Ontario. The unit of analysis for this study is the municipality. The survey was addressed to the General Manager, the highest-ranking municipal administrative employee. Most of the returned surveys, 312 out of 391, were filled out by the General Manager. The rest were filled out by someone else in the organization in charge of the indicators, often the Assistant General Manager or the Chief Financial Officer. The data used in this research come from two sources: a self-administered survey on the behaviors and perceptions of managers toward the management indicators, and the values of the so-called "hard" performance information (that is, the mandatory indicators). An identification code on the surveys made it possible to join the performance information from MAMROT's dataset with the values of the dependent and independent variables obtained from the survey. Due to missing values on one or more independent variables for 70 respondents, the data from 321 survey questionnaires were used in the statistical analyses (last column of Table 1).
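
For readers who want to picture this merging step, a minimal pandas sketch is given below; the file names, the identification key (id_code) and the variable names are hypothetical placeholders rather than the names used in the actual dataset.

```python
import pandas as pd

# Hypothetical file and column names, for illustration only.
survey = pd.read_csv("survey_responses.csv")        # self-administered survey answers
indicators = pd.read_csv("mamrot_indicators.csv")   # MAMROT's mandatory indicator values

# The identification code printed on each questionnaire allows the two sources to be joined.
merged = survey.merge(indicators, on="id_code", how="inner")

# Keep only respondents with complete data on the independent variables
# (321 of the 391 returned questionnaires in this study).
independent_vars = ["unwilling", "unable", "prevented", "ext_perf", "int_perf"]
analysis_sample = merged.dropna(subset=independent_vars)
print(len(analysis_sample))
```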

Dependent variable: performance management

Use of performance information

At least four different kinds of uses can be derived from the survey: general use, management use, budgeting use and reporting use. In this study, similarly to Moynihan and Pandey (2010), General Managers were asked how often the Municipal Management Indicators are used in their municipality: "According to your observations, what is the utilization level in your municipality?" This question differentiates between managers who do not use performance information at all and those who use it occasionally, often, or very often. It does not differentiate between symbolic or passive uses and actual or purposeful uses of performance information (1). Managers who do not use the management indicators skipped the subsequent questions on more specific uses of performance information.

The descriptive statistics at the top of Table 2 reveal that the performance information carried through the management indicators is not widely used. Of the 321 respondents with complete survey data, 51.1% said that the performance indicators are never used in their municipalities. The rest of the managers say that performance information is used: 44.2% are from municipalities where the information is occasionally used, 4.7% are from municipalities where this information is used often, and none use it very often. The distribution is very similar in the sample of 391 respondents, with one manager reporting that the management indicators are used very often.

A number of existing survey instruments measuring the use of performance information were presented in the literature review. Rogers (2006) differentiates between different uses: use for management, use for budgeting and use for reporting. This differentiation of uses makes this Governmental Accounting Standards Board (GASB) instrument more precise than other survey instruments measuring managerial use of performance information (for example, NASP-IV). For this reason, the GASB instrument was adapted to the reality of Quebec's municipal environment. In a similar way to Rogers (2006), survey takers were asked: "From what you have observed, indicate what are the reasons for which management indicators are used in your municipality," and "Indicate if since their implementation, mandatory management indicators for the different functions and activities have been explicitly mentioned [appeared] in the preparation of the budget and the annual report on the financial situation in your municipality." In Rogers' (2006) dissertation, there were seven items for management uses, six items for budgeting uses, and three items for reporting uses. For each question item, respondents were offered five options about the proportion of departments using them, from no department at all to all departments. An additive index was set up for each use.

To keep true to the realities of Quebec's benchmarking regime, and after suggestions from the twelve people who pretested the survey questionnaire, the number of items for each use was reduced. In the current survey, there are four items for management uses, three items for budgeting uses, and three items for reporting uses. Also, to reflect the fact that the survey was sent to many small and very small municipalities, survey takers only had to indicate whether items for specific uses were indeed used. Because the majority of municipalities in Quebec are very small, it did not make sense to ask about the proportion of departments that were using indicators for specific functions. For each of the three specific use indices, a binary variable was computed to indicate whether the Municipal Management Indicators are used for at least one item or not. What can be noticed in Table 2 is that uses related to reporting, with 38.9% of respondents indicating at least one reporting use, have a slightly higher occurrence than budgeting (35.2%) and especially management (23.4%) uses. In total, there are four dependent binary variables in this study, one for every type of use.
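
As an illustration of how the three binary specific-use variables can be derived from the item answers, a short sketch follows; the item column names are hypothetical, and the answers are assumed to be coded 1 (used) and 0 (not used).

```python
import pandas as pd

# Hypothetical item names for the adapted GASB instrument.
management_items = ["mgmt_1", "mgmt_2", "mgmt_3", "mgmt_4"]   # four management-use items
budgeting_items = ["budg_1", "budg_2", "budg_3"]              # three budgeting-use items
reporting_items = ["rept_1", "rept_2", "rept_3"]              # three reporting-use items

def any_use(df: pd.DataFrame, items: list) -> pd.Series:
    """1 if the municipality reports at least one of the items, 0 otherwise."""
    return (df[items] == 1).any(axis=1).astype(int)

df = pd.read_csv("survey_responses.csv")  # assumed to hold the 0/1 item answers
df["use_management"] = any_use(df, management_items)
df["use_budgeting"] = any_use(df, budgeting_items)
df["use_reporting"] = any_use(df, reporting_items)
```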

Independent variables: performance, barriers, and internal, socio-demographic and political characteristics of the municipality

Performance of the municipality

Fourteen mandatory indicators form the basis on which the performance of municipalities can be assessed. Two points should be clarified about the measurement of performance in Quebec. First, the fourteen mandatory indicators do not cover all municipal services performed by municipalities in Quebec. Public libraries, fire services and police services were not included in the list of mandatory indicators at the time of data collection. Second, performance should be understood as both internal and external performance. Internal benchmarking is akin to historical trends within a municipality. The portal where managers are invited to assess themselves consists of comparisons of the municipality's indicator values to the appropriate quartile values for municipalities of its size. The comparative information contained in this portal constitutes external benchmarking. Beside a passing remark to the effect that performance usually translates as effectiveness and efficiency (MAMSL 2004: 3), no judgment is offered by MAMROT in defining what performance is, and what differentiates good from bad, or even better from worse performance. This ambiguity goes against recommendations for performance management in complex settings (Van Dooren 2010: 430).

In the online portal, where the comparative data are presented to municipal managers, the fourth quartile represents municipalities with the highest values for the indicators. For example, higher plowing costs and more frequent water-boiling notices are in the fourth quartile. In the context of this research, it is hypothesized that costs should be lower rather than higher; higher cost of snow removal per km and higher average length of health-related leaves of absence should be minimized; the debt should represent a lower percentage of assets rather than a higher percentage. Thus, better performance can be defined as being in the first quartile (lower cost of snow removal per km); worse performance can be defined as being in the fourth quartile (higher percentage of debt compared to assets) (2). This evaluation by quartiles for municipalities of similar characteristics is precisely how Zafra-Gomez, Lopez-Hernandez and Hernandez-Bastida (2009: 157) evaluated performance in a study on Spanish municipalities. It should be noted that only the 2008 quartile values were available to municipal managers for external benchmarking when they could use and had to report their 2009 performance data to MAMROT. Therefore, the quartile of each management indicator in 2009 for a given municipality was obtained by comparing its 2009 value to the 2008 quartile values of the municipalities of the same population size group. The quartiles of the fourteen indicators were then averaged to obtain the externally benchmarked performance of each municipality in the sample (3). The average quartile for the sample of survey participants is 2.61 (Table 3). The difference in performance for each available indicator from 2008 to 2009 was also computed and coded 0 if performance was stable or better in 2009 (that is, an equal or lower indicator value in 2009 compared to 2008) (4), and coded 1 if performance was in decline (that is, a higher indicator value in 2009). Internally benchmarked performance is defined in this study as the proportion of management indicators with a declining performance between 2008 and 2009. The municipalities had on average 49% of their management indicators in decline (Table 3).
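
A schematic version of these two performance measures is sketched below, under the assumption that the 2009 indicator values and the 2008 quartile cut-offs for the relevant population size group are available as pandas objects with hypothetical names; lower indicator values are treated as better, as in the text.

```python
import pandas as pd

indicators = [f"ind_{i:02d}" for i in range(1, 15)]  # the fourteen mandatory indicators

def quartile_against_cutoffs(value: float, q1: float, q2: float, q3: float) -> int:
    """Place a 2009 value in a quartile defined by the 2008 cut-offs of the same size group."""
    if value <= q1:
        return 1
    elif value <= q2:
        return 2
    elif value <= q3:
        return 3
    return 4

def external_performance(values_2009: pd.Series, cutoffs_2008: pd.DataFrame) -> float:
    """Externally benchmarked performance: the average quartile over the fourteen indicators
    (1 = best, 4 = worst, since lower indicator values are treated as better)."""
    quartiles = [
        quartile_against_cutoffs(values_2009[ind], *cutoffs_2008.loc[ind, ["q1", "q2", "q3"]])
        for ind in indicators
    ]
    return sum(quartiles) / len(quartiles)

def internal_performance(values_2009: pd.Series, values_2008: pd.Series) -> float:
    """Internally benchmarked performance: share of indicators whose value rose (worsened) in 2009."""
    declining = (values_2009[indicators] > values_2008[indicators]).sum()
    return declining / len(indicators)
```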

Because previous research offers no guidance on the influence of performance itself on performance management, it is unclear to us how performance should influence the use of performance indicators. Thus, although externally and internally defined performance are main independent variables, we do not have formal hypotheses on their expected impacts.

Barriers to uses of performance information

Even in voluntary initiatives in Canada, the importance of understanding barriers to the use of performance information has been recognized (Hildebrand and McDavid 2011: 57, 60, 62). In the context of the mandatory and systematic benchmarking regime in Quebec, municipalities are bound and constrained, at the very least, to collect and transmit information on standardized performance indicators. Although it is not impossible that managers who use performance information do so because of perceived benefits, it is more likely that they practice performance management because they do not encounter barriers. The level of government mandating the collection and transmission often defines the benefits that are expected from the benchmarking regime. This was the case for Quebec's benchmarking regime from the beginning (MAMSL 2004). Siverbo and Johansson (2006: 283-284), in their study of the voluntary Relative Performance Evaluations municipal system in Sweden, used a survey instrument to measure the perceived barriers to performance measurement implementation and use. Siverbo and Johansson (2006) established three categories of barriers to use: those related to one's unwillingness to use performance information, those related to one's inability, and those related to one being prevented from using performance information. Each barrier is constituted of four items. After the pretest, at the suggestion of the pretest survey takers, one item measuring "being prevented from using performance information," namely "the municipality has an explicit or implicit policy against the Municipal Management Indicators," was dropped from our survey. The remaining eleven items from Siverbo and Johansson's (2006: 283-284) instrument were barely altered. Two more items were added to the list. In the survey, four items constitute the "unwilling" portion of barriers, six items constitute the "unable" portion, and three items constitute the "prevented" portion. For every item, surveyed managers could indicate whether the statements described the reality in their municipality by expressing whether they "agree," "somewhat agree," "somewhat disagree," or "disagree" with the statement. A full disagreement with the perception that the barrier applied in their municipality was coded "1"; a full agreement was coded "4." Additive indices akin to the ones Rogers (2006) used for the dependent variables were developed. Therefore, all three barriers are coded by the average score, from one to four, of their items. The descriptive data on the barriers, performance and the control variables discussed in this section and the following ones are presented in Table 3.
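
A minimal sketch of the barrier indices follows; the item column names are hypothetical, and each item is assumed to be coded from 1 ("disagree") to 4 ("agree"), so a higher score means that the barrier is perceived more strongly.

```python
import pandas as pd

# Hypothetical item names for the three perceived barriers.
unwilling_items = [f"barr_unwilling_{i}" for i in range(1, 5)]   # four items
unable_items = [f"barr_unable_{i}" for i in range(1, 7)]         # six items
prevented_items = [f"barr_prevented_{i}" for i in range(1, 4)]   # three items

def barrier_score(df: pd.DataFrame, items: list) -> pd.Series:
    """Additive index reported as the mean item score, bounded between 1 and 4."""
    return df[items].mean(axis=1)

df = pd.read_csv("survey_responses.csv")  # assumed to hold the 1-4 item answers
df["unwilling"] = barrier_score(df, unwilling_items)
df["unable"] = barrier_score(df, unable_items)
df["prevented"] = barrier_score(df, prevented_items)
```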

The research hypotheses about the barriers are:

H1. It is expected that managers who express their unwillingness to use performance indicators will indeed use performance measurement less than managers that do not perceive this barrier.

H2. It is expected that managers who express their inability to use performance indicators will indeed use performance measurement less than managers that do not perceive this barrier.

H3. It is expected that managers who express being prevented from using performance indicators will indeed use performance measurement less than managers that do not perceive this barrier.

Internal characteristics of the municipality

Managerial practices can help explain the use of performance information. The instrument used here was initially developed to understand the characteristics of local authorities in the U.K. under the Best Value system (Enticott et al. 2002) and was later used by the Audit Commission and academics. Academic researchers sampled the lengthy survey to circumscribe the instrument to five dimensions that had been identified by the Audit Commission (2002: 3-4, in Boyne and Enticott 2004: 12). In their study of local authorities' performance, Boyne and Enticott (2004) used twenty-five questions from the instrument related to the five dimensions identified by the Audit Commission to measure the internal characteristics of local authorities. From the same initial instrument, Andrews, Boyne and Enticott (2006) developed a different instrument to explain poor performance in English local authorities.

The current survey instrument measuring municipalities' internal characteristics is based on the items used by Boyne and Enticott (2004) and Andrews, Boyne and Enticott (2006). An exploratory factor analysis resulted in eight items constituting three factors: strategic planning, citizen outreach, and leadership. We forced the presence of three factors in the analyses. The factor loadings can be consulted in Appendix 1. On every item, survey takers could indicate whether the statements described their municipality by agreeing with the statement on a four-point Likert scale. A full disagreement with the perception that the management characteristic applied in their municipality was coded "1"; a full agreement was coded "4." Hence, each characteristic is coded by the average score, from one to four, of the items it contains.
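
The exploratory factor analysis can be reproduced along the following lines; this is a sketch with hypothetical column names that uses scikit-learn's varimax rotation, and the mapping of the three factors to the strategic planning, citizen outreach and leadership labels must be read off the loadings rather than assumed.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = [f"internal_{i}" for i in range(1, 9)]   # the eight internal-characteristic items (1-4 scale)
df = pd.read_csv("survey_responses.csv")

# Force three factors, as in the article, and apply a varimax rotation.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(df[items].dropna())

loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor_1", "factor_2", "factor_3"])
print(loadings.round(3))
# Items loading above 0.4 on a factor are then averaged (on the 1-4 scale) to obtain
# the internal-characteristic scores used in the regressions.
```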

Socio-demographic and political characteristics of the municipality

To follow previous research on performance measurement (de Lancer-Julnes and Holzer 2001: 695; Askim, Johnsen and Christophersen 2008: 303; Moynihan and Pandey 2010: 858), socio-demographic variables are included in the regression models. In this research, three control variables that are expected to favour the use of performance information are included: the size of the population (Askim, Johnsen and Christophersen 2008; Andrews et al. 2010) and the size of the budget (Askim, Johnsen and Christophersen 2008; Schatteman 2010: 539; Zaltsman 2009: 464), both in logarithmic form, and the population density of municipalities (Williams 2005; Andrews et al. 2010). Three control variables are expected to have a diminishing effect on the use of performance information: devitalisation, a seven-item socio-economic index calculated by MAMROT that is akin to deprivation (Andrews 2004; Andrews et al. 2010); fiscal stress (Hendrick 2004; Askim, Johnsen and Christophersen 2008), measured as the proportion of the long-term net debt relative to the net value of assets; and the level of political competition (Askim, Johnsen and Christophersen 2008; Kwon and Jang 2011: 604), measured by whether the mayoral seat was contested in the 2009 general municipal elections or the mayor was elected by acclamation. In a recent article, Hildebrand and McDavid (2011: 68) suggested that local governments with a polarized political culture would be less likely to use performance information.

Results

As we have seen previously in Table 2, most managers are from municipalities where performance information is perceived as either not being used or only occasionally used. Before moving on to the results of the multiple logistic regression analyses for the four binary dependent variables, the differences in barrier perceptions and in performance between users and non-users of performance information are presented.

Managers using the management indicators systematically perceived fewer and less intense barriers on average than managers not using them. Users of performance information tend to have slightly lower externally benchmarked performance (that is, a higher average quartile) and fewer indicators with declining values between 2008 and 2009. The regression analyses will determine whether these differences are statistically significant, even when other possible influences such as internal, socio-demographic and political characteristics are taken into account. The results of the four multiple logistic regression models are presented in Table 4. The coefficients of the independent variables are raw regression coefficients.
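
The four models can be estimated as in the following statsmodels sketch; the dataset and variable names are hypothetical placeholders for the covariates described above, and Table 4 reports the resulting raw coefficients.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("analysis_dataset.csv")   # hypothetical merged dataset (321 complete cases)

covariates = [
    "ext_perf", "int_perf",                       # externally / internally benchmarked performance
    "unwilling", "unable", "prevented",           # perceived barriers (1-4)
    "strategic_planning", "citizen_outreach", "leadership",
    "log_population", "log_budget", "density",
    "devitalisation", "fiscal_stress", "political_competition",
]
X = sm.add_constant(df[covariates])

models = {}
for use in ["use_general", "use_management", "use_budgeting", "use_reporting"]:
    # One multiple logistic regression per binary dependent variable.
    models[use] = sm.Logit(df[use], X).fit(disp=False)
    print(use, models[use].params.round(3), sep="\n")
```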

Performance of the municipality

The internal and external performance of these municipalities is measured by the Municipal Management Indicators. Externally benchmarked performance, measured by the average quartile of the fourteen management indicators, proved to be statistically related only to the general use of performance information. Managers from municipalities where comparative performance is low (that is, a higher average quartile value) tend to use the indicators more than others for general use (p < 0.05). Internally benchmarked performance, measured by the proportion of indicators with a declining performance in a municipality between 2008 and 2009, is significantly associated, at the 10% level, with the general and budgeting uses of performance information. The predicted probabilities of using performance information for a municipality with an internally declining performance on 80% of its indicators, when all other independent variables are set to their mean values, are respectively 39% (95% C.I.: 27% to 52%) and 23% (95% C.I.: 15% to 35%) for general and budgeting uses, compared to 59% (95% C.I.: 47% to 70%) and 42% (95% C.I.: 31% to 54%) for a municipality with an internally declining performance on 20% of its indicators. Overall, the indicators are used more often as a source of information in municipalities where they can be seen as a vindication of encouraging results than where they suggest suboptimal results.

Barriers to uses of performance information

The hypotheses are related to the perceived presence of barriers hindering the use of performance information. The expectation was that more barriers would be associated with diminished uses of performance information. The results for our four types of use are that only one of the three barriers has a consistent, verifiable impact. The perceived barriers of being unable to use or prevented from using performance information only seem to affect the general use of the management indicators. On the other hand, the barrier reflecting an unwillingness to use the indicators is statistically significant for all four types of use at the 1% significance level. For H1, we can reject the null hypothesis, but the evidence to support hypotheses H2 and H3 is very modest. This is due for the most part to the relatively high correlation between the three perceived barriers (0.38 < r < 0.48), implying non-negligible collinearity in the regression models (5).

We observe that a marginally sterner attitude of unwillingness to use the indicators decreases the logit of the probability that a manager would report using the indicators in general by 1.18. The same pattern is found for the specific uses of performance information. The marginal effects of the unwillingness barrier on the management, budgeting and reporting uses of performance information are comparable. When all other independent variables are set at their mean values (see Table 3) (6), a manager who on average mostly agrees with the barriers related to unwillingness (that is, unwillingness = 3.8) has a predicted probability of 28% (95% C.I.: 20% to 37%) for general use of the indicators and 16% (95% C.I.: 11% to 24%) for budgeting purposes. When the unwillingness barrier score is at the average value of 3, which means that the typical manager, on average, answers "somewhat agree" to the four unwillingness barriers in his/her municipality, the predicted probabilities of use of performance information increase respectively to 50% (95% C.I.: 44% to 56%) and 33% (95% C.I.: 28% to 39%) for general and budgeting uses. For managers with an unwillingness score of 2, who on average "somewhat disagree" that the four unwillingness barriers are present, the predicted probabilities of use respectively jump to 76% (95% C.I.: 65% to 85%) and 61% (95% C.I.: 49% to 73%). The effect of this independent variable is substantial.
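
The predicted probabilities quoted in this paragraph can be approximated, leaving the confidence intervals aside, with a calculation like the one below (continuing the hypothetical models and X objects from the previous sketch): every covariate is held at its sample mean while the unwillingness score is set to the value of interest.

```python
import numpy as np
import pandas as pd

def predicted_probability(result, X: pd.DataFrame, unwillingness: float) -> float:
    """Probability of use with every covariate at its mean except the unwillingness score."""
    profile = X.mean()                      # column means, including the constant (1.0)
    profile["unwilling"] = unwillingness
    logit = float(np.dot(profile.values, result.params.values))
    return 1.0 / (1.0 + np.exp(-logit))

# For example (the values reported in the text are approximate):
# predicted_probability(models["use_general"], X, 3.8)  -> roughly 0.28
# predicted_probability(models["use_general"], X, 2.0)  -> roughly 0.76
# Confidence intervals around these probabilities require the delta method or simulation.
```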

Internal characteristics of the municipality

The relationship between the internal characteristics of municipalities and the different types of use is seldom conclusive. Contrary to what Moynihan, Pandey and Wright (2012b: 157) found for American local government jurisdictions with populations over 50,000, the positive influence of leadership is not felt for performance management in Quebec. Citizen outreach does not have an effect on performance management when performance, management, socio-demographic and political characteristics are taken into account. We do not find a discernible influence of strategic planning on the use of performance information.

Socio-demographic and political characteristics of the municipality

Six additional variables included in the regression models were not covered by hypotheses. These variables relate to the socio-demographic and political characteristics of municipalities.

The size of a municipality, as measured by population, is not statistically associated with the different uses of performance information. Neither is the size of the budget. Higher devitalisation is negatively correlated with the budgeting use of performance information, but only at the 10% level (p < 0.1). Other uses do not seem to be affected by devitalisation. More densely populated municipalities have a higher probability of using performance information in their budgeting (p < 0.05). Municipalities in more stringent fiscal situations are slightly less likely to use performance data for general use (p < 0.05). The presence of political competition in elections for the mayoral seat does not seem to affect the use of performance information by managers.

Discussion

Do local managers in the province of Quebec use performance information a lot in their operations? The question is difficult to answer, because "use" is measured in different ways in different studies. The finding from previous studies is that "use," however defined, usually does not go beyond 50%: less than half of the departments within a city, less than half of decisions, less than half of management functions integrate information from performance measures. In Quebec, less than half of General Managers say that the indicators are used at least sometimes. In addition, 41.1% identify at least one management or budgeting function (that is, actual use) in which performance information is incorporated. The proportions are somewhat comparable to those of large municipalities in national studies in the United States. The proportions are much lower than for Swedish municipalities in the voluntary Relative Performance Evaluations system. Siverbo and Johansson (2006: 278) classified nine out of ten Swedish municipalities as "high-intensity" users. The same could be said of about one municipality out of twenty in Quebec. The results of the analysis of uses are that there are two constant influences explaining the variations in performance management.

The first of these influences is performance itself. Municipalities that tend to feature better internal performance (that is, a lower proportion of declining indicators) report higher uses of the information. This is observed for general, budgeting and actual uses. Confirming recent findings on performance reporting by municipalities in Quebec (Charbonneau and Bellavance 2012), there seems to be behavior akin to blame avoidance when it comes to municipalities engaging in internal benchmarking. Blame avoidance behaviors are observed for declining internal performance. That is, managers whose municipalities have a higher number of indicators with declining performance are more prone to dismiss the information out of hand. Lastly, despite frequent claims from managers in Quebec (Charbonneau and Nayer 2012), it could not be demonstrated that the size of municipalities impacts the different uses of performance.

The second of these influences is the unwillingness of managers to use the indicators. The barriers related to the perception of being unable to use data or of being prevented from using data could not be identified as statistically significant forces, but it is important to mention that they are correlated with unwillingness. The case for pointing to the unwillingness barrier as a prime influence on the uses of performance information is strong. The manifestation of management behaviors stemming from an unwillingness to use the indicators is constant across the board. Once the discrete changes in unwillingness are taken into account, it becomes clear that this barrier is why certain managers choose not to use the indicators in their decision making and operations.

The analyses of performance management reveal that the factors associated with variation in uses are linked to managers. For the most part, uses are not affected by influences outside managers' control or by immutable characteristics, such as being a very small municipality or operating in a hotly disputed political environment. A sizable portion of the variation in use can be explained by two main factors: the internal benchmarking performance of municipal services over the years and the unwillingness of managers to use the indicators. It bears repeating that these two factors are not correlated with each other. In our sample, taking all factors into account, the lowest predicted probability of using the indicators in a general manner is 7.6%. It is observed for a municipality with the highest score for the unwillingness, inability and prevented barriers (that is, a score of 4) and 60% of indicators in decline between 2008 and 2009. The highest predicted probability, of 98%, is observed for a municipality with very low scores on the unwillingness, inability and prevented barriers (1.25, 1.40 and 1.33 respectively) and 33% of indicators in decline.

Limitations

There are four main limitations in the present study: two have to do with survey items, one with the survey itself, and one has to do with sampling. The substantive nuance between occasional users and non-users of performance information is thin. This can elevate the difficulty of finding differences between users and non-users of performance measures, as municipalities with no or low intensity of use can be similar. This constitutes the nuance problem. There is also a novelty problem. The nuance problem is compounded by the fact that only a handful of studies on the use of performance information utilize regression analyses. Many survey instruments used in the survey of Quebec's General Managers come from previous studies of managerial behavior related to performance measurement and management. The survey instruments developed for the realities and specificities of performance measurement regimes in other countries were adapted to fit the reality of Quebec's municipal benchmarking regime. However, there were no survey items to measure group-grid culture. Potential influences of egalitarian and fatalistic managerial cultures on non-use were not studied (Van Dooren, Bouckaert and Halligan 2010: 141-142). Additionally, newer measures of some variables surfaced after data collection. For example, we might have found leadership to be statistically correlated with performance management if Moynihan, Pandey and Wright's (2012b) instrument had been used.

This being said, the independent variables were not survey items fine-tuned for regression purposes. Previous studies related to use answered research questions that could be addressed with descriptive statistics or simple correlation tables. Nevertheless, the logistic regressions performed in this study reveal that there are significant differences between municipal non-users and users of performance information.

It has been recognized in past research on the use of performance measurement information that surveying a single manager per organization constitutes a limitation (Laegreid, Roness and Rubecksen 2008: 45). Municipal managers working in finance and budgeting do not report their activities according to the same criteria as other managers (Marcuccio and Steccolini 2009: 160). A significant difference also exists between finance and budgeting managers and other managers regarding perceived problems of performance measurement (Willoughby 2004: 36).

Despite its limitations, the present study follows the lead of a landmark performance measurement study on Norwegian local politicians' attitudes towards comparative evaluation of local bureaus' performance against other jurisdictions by juxtaposing "hard" performance and survey data (Revelli and Tovmo 2007). It also sheds light on performance management in small rural municipalities of fewer than 2,000 residents.

To deepen our understanding of the determinants of performance management at the local level, future research should find ways to reduce dependence on managers' perception data (Boyne 2010). Observing the direct, real use of performance information would enhance our knowledge of performance measurement.

Conclusion

The goal of this research was to uncover what influences the use of performance information by local managers in the province of Quebec. There are two main factors accounting for the use of performance measurement by municipal managers. The first factor influencing the use of performance information is performance itself, defined historically. Municipalities that perform worse than last year on many indicators tend to use performance indicators to a lesser extent. This is an interesting finding. It reiterates the fact that performance measurement is an information-based management tool. Information about how the organization is doing is but one source among many. Even when other factors are taken into account, if the performance information reflects badly on a manager and his/her municipality, he/she is less likely to include it in the municipality's operations. This is akin to blame avoidance strategies described by Hood (2002, 2007b and 2011). The second factor is related to the unwillingness of managers to use performance indicators. No correlations were found between the various uses of performance information and internal characteristics such as citizen outreach, strategic planning, and leadership. Contrary to the claims of managers found in a complementary qualitative study of comments and focus groups (Charbonneau and Nayer 2012), no link was established between the size of a municipality and the use of performance information.

Our findings draw attention to the fact that in an intelligence performance regime like Quebec's, which shares much with other North American municipal performance consortiums, unwillingness and blame avoidance can be relied on to explain the low occurrence of performance management.

Appendix 1. Results of the Factor Analyses

Barriers. Some managers identified barriers which would limit the use of management indicators in decision making. Indicate your level of agreement regarding the following statements on the management indicators (1 = disagree to 4 = agree). Entries give the item mean, standard deviation, and rotated factor loadings (varimax rotation) on Factors 1 to 3.

Management indicators are not considered useful: mean 3.11, SD 0.87; loadings 0.206, 0.734 (a), -0.022
Management indicators are not trustworthy: mean 2.61, SD 0.95; loadings 0.206, 0.737 (a), 0.084
Management indicators are felt to convey an incomplete picture of the organization: mean 3.08, SD 0.85; loadings 0.015, 0.758 (a), 0.084
We fear that management indicators are misunderstood and misinterpreted: mean 3.31, SD 0.78; loadings 0.315, 0.600 (a), 0.105
We do not know how to integrate management indicators into decision making: mean 2.92, SD 0.90; loadings 0.552 (a), 0.286, 0.162
We are not able to access data that would enable us to compare our results to similar municipalities: mean 2.99, SD 0.95; loadings 0.440 (a), 0.291, 0.098
We lack the time to use management indicators: mean 3.33, SD 0.85; loadings 0.662 (a), 0.130, 0.200
We lack the staff with the expertise to work with management indicators: mean 3.15, SD 0.98; loadings 0.809 (a), 0.133, 0.218
We lack the computerized tools to gather the detailed data on the management indicators: mean 2.85, SD 0.93; loadings 0.682 (a), 0.128, 0.143
We need additional information to use the management indicators: mean 2.98, SD 0.93; loadings 0.654 (a), 0.152, 0.102
Our officials are uninterested in the management indicators: mean 3.24, SD 0.85; loadings 0.216, 0.483 (a), 0.287 (b)
Management indicators are seen as a threat: mean 2.20, SD 0.93; loadings 0.193, 0.234, 0.737 (a)
Management indicators will expose our weaknesses: mean 2.21, SD 0.91; loadings 0.285, -0.009, 0.739 (a)

Internal characteristics. Indicate your level of agreement regarding the following statements on the current situation in your municipality (1 = disagree to 4 = agree). Entries give the item mean, standard deviation, and rotated factor loadings (varimax rotation) on Factors 1 to 3.

There are clear links between the objectives and priorities of our service and those for the municipality as a whole: mean 2.96, SD 0.82; loadings 0.126, 0.671 (a), 0.061
The municipality's objectives are clearly and widely communicated by managers of different services: mean 2.95, SD 0.78; loadings 0.319, 0.465 (a), 0.219
Co-ordination and joint working among the different municipal services is a major part of our approach to the organization of services: mean 3.13, SD 0.77; loadings 0.166, 0.814 (a), 0.187
The general manager and most managers place the needs of users first and foremost when planning and delivering services: mean 3.53, SD 0.63; loadings 0.525 (a), 0.312, 0.250
Working more closely with our citizens is a major part of our approach to service delivery: mean 3.29, SD 0.64; loadings 0.729 (a), 0.189, 0.182
Citizens' demands are important in driving service improvement: mean 3.44, SD 0.59; loadings 0.722 (a), 0.130, 0.391
Political leadership is important in driving performance improvement: mean 3.59, SD 0.57; loadings 0.293, 0.111, 0.848 (a)
The general manager is important in guiding decision making to drive performance improvement: mean 3.61, SD 0.57; loadings 0.270, 0.246, 0.584 (a)

(a) Loadings greater than 0.4 are marked; the associated item is used in the computation of the factor score.
(b) Although this item loads on the second factor, we left Johansson and Siverbo's (2009) instrument unaltered (that is, we used it in the computation of the third factor).

References

Ammons, D. N., and W. C. Rivenbark. 2008. "Factors influencing the use of performance data to improve municipal services: Evidence from the North Carolina benchmarking project." Public Administration Review 68 (2): 304-18.

Andrews, R. 2004. "Analysing deprivation and local authority performance: The implications for CPA." Public Money & Management 24 (1): 19-26.

Andrews, R., G. Boyne, and G. Enticott. 2006. "Performance failure in the public sector: Misfortune or mismanagement?" Public Management Review 8 (2): 273-96.

Andrews, R., G. A. Boyne, M. J. Moon, and R. M. Walker. 2010. "Assessing organizational performance: Exploring differences between internal and external measures." International Public Management Journal 13 (2): 105-29.

Askim, J. 2007. "How do politicians use performance information? An analysis of the Norwegian local government experience." International Review of Administrative Sciences 73 (3): 453-72.

--. 2008. "Determinants of Performance Information Utilization in Political Decision Making." Performance Information in the Public Sector: How It Is Used. W. Van Dooren, and S. Van De Walle. Houndmills, UK: Palgrave Macmillan: 125-39.

--. 2009. "The demand side of performance measurement: Explaining councillors' utilization of performance information in policymaking." International Public Management Journal 12 (1): 24-47.

Askim, J., A. Johnsen, and K.-A. Christophersen. 2008. "Factors behind organizational learning from benchmarking: Experiences from Norwegian municipal benchmarking networks." Journal of Public Administration Research and Theory 18 (2): 297-320.

Aubert, B. A., and S. Bourdeau. 2012. "Public sector performance and decentralization of decision rights." Canadian Public Administration 55 (4): 575-98.

Berman, E., and X. Wang. 2000. "Performance measurement in U.S. counties: Capacity for reform." Public Administration Review 60 (5): 409-20.

Boyne, G. A. 2010. "Performance Management: Does It Work?" Public Management and Performance: Research Directions, R. M. Walker, G. A. Boyne, and G. A. Brewer. Cambridge, UK: Cambridge University Press: 207-26.

Boyne, G., and G. Enticott. 2004. "Are the 'Poor' different? The internal characteristics of local authorities in the five comprehensive performance assessment groups." Public Money & Management 24 (1): 11-18.

Boyne, G. A., J. Gould-Williams, J. Law, and R. Walker. 2002. "Plans, performance information and accountability: The case of best value." Public Administration 80 (4): 691-710.

Charbonneau, E., and F. Bellavance. 2012. "Blame avoidance in public reporting: Evidence from a provincially-mandated municipal performance measurement regime." Public Performance & Management Review 35 (3): 399-421.

Charbonneau, E., and G. Nayer. 2012. "Barriers to the use of benchmarking information: Narratives from local government managers." Journal of Public Management & Social Policy 18 (2): 25-47.

Chung, Y. 2005. Factors Affecting Uses and Impacts of Performance Measures in Mid-Sized U.S. Cities. Department of Political Science. Knoxville, TN: University of Tennessee.

de Lancer-Julnes, P. 2006. "Performance measurement: An effective tool for government accountability? The debate goes on." Evaluation 12 (2): 219-35.

de Lancer-Julnes, P., and M. Holzer. 2001. "Promoting the utilization of performance measures in public organizations: An empirical study of factors affecting adoption and implementation." Public Administration Review 61 (6): 693-708.

Enticott, G., R. M. Walker, G. A. Boyne, S. Martin, and R. E. Ashworth. 2002. Best Value in English Local Government: Summary Results from the Census of Local Authorities in 2001. Cardiff, Wales: Local and Regional Government Research Unit.

Fountain, J. 1997. "Are state and local governments using performance measures?" PA Times 20 (1).

Johansson, T., and S. Siverbo. 2009. "Explaining the utilization of relative performance evaluation in local government: A multi-theoretical study using data from Sweden." Financial Accountability & Management 25 (2): 197-224.

Jordan, M. M., and M. Hackbart. 2005. "The goals and implementation success of state performance-based budgeting." Journal of Public Budgeting, Accounting & Financial Management 17 (4): 471-87.

Hendrick, R. 2004. "Assessing and measuring the fiscal health of local governments: Focus on Chicago suburban municipalities." Urban Affairs Review 40 (1): 78-114.

Hildebrand, R., and J. C. McDavid. 2011. "Joining public accountability and performance management: A case study of Lethbridge, Alberta." Canadian Public Administration 54 (1): 41-72.

Ho, A. T.-K. 2003. "Perceptions of performance measurement and the practice of performance reporting by small cities." State and Local Government Review 35 (2): 161-73.

--. 2006. "Accounting for the value of performance measurement from the perspective of Midwestern mayors." Journal of Public Administration Research and Theory 16 (2): 217-37.

Ho, S.-J. K., and Y.-C. L. Chan. 2002. "Performance measurement and the implementation of balanced scorecards in municipal governments." Journal of Government Financial Management 51 (4): 8-19.

Holloway, J., G. Francis, and M. Hinton. 1999. "A vehicle for change? A case study of performance improvement in the "New" public sector." International Journal of Public Sector Management 12 (4): 351-65.

Holzer, M., and K. Yang. 2004. "Performance measurement and improvement: An assessment of the state of the art." International Review of Administrative Sciences 70 (1): 15-31.

Hood, C. 2002. "The risk game and the blame game." Government and Opposition 37 (1): 15-37.

--. 2007a. "Public service management by numbers: Why does it vary? Where has it come from? What are the gaps and the puzzles?" Public Money & Management 27 (2): 95-102.

--. 2007b. "What happens when transparency meets blame-avoidance?" Public Management Review 9 (2): 191-210.

--. 2011. The Blame Game: Spin, Bureaucracy, and Self-Preservation in Government. Princeton, NJ: Princeton University Press.

Johnsen, A. 1999. "Implementation mode and local government performance measurement: A Norwegian experience." Financial Accountability & Management 15 (1): 41-66.

Kinney, A. S. 2008. "Current approaches to citizen involvement in performance measurement and questions they raise." National Civic Review 97 (1): 46-54.

Kwon, M., and H. S. Jang. 2011. "Motivations behind using performance measurement: City-wide vs. selective users." Local Government Studies 37 (6): 601-20.

Laegreid, P., P. G. Roness, and K. Rubecksen. 2008. "Performance Information and Performance Steering: Integrated System or Loose Coupling?" Performance Information in the Public Sector: How It Is Used. W. Van Dooren, and S. Van De Walle. Houndmills, UK: Palgrave Macmillan: 42-57.

Marcuccio, M., and I. Steccolini. 2009. "Patterns of voluntary extended performance reporting in Italian local authorities." International Journal of Public Sector Management 22 (2): 146-67.

McGowan, R. P., and T. H. Poister. 1985. "Impact of productivity measurement systems on municipal performance." Policy Studies Review 4 (3): 532-40.

Melkers, J., and K. Willoughby. 2005. "Models of performance-measurement use in local governments: Understanding budgeting, communication, and lasting effects." Public Administration Review 65 (2): 180-90.

Ministere des Affaires municipales, du Sport et du Loisir. 2004. Les Indicateurs de Gestion Municipaux: Un regard neuf sur notre municipalite. Quebec, QC: Gouvernement du Quebec.

Moynihan, D. P., and D. P. Hawes. 2012. "Responsiveness to reform values: The influence of the environment on performance information use." Public Administration Review 72 (s1): s95-s105.

Moynihan, D. P., and S. K. Pandey. 2010. "The big question for performance management: Why do managers use performance information?" Journal of Public Administration Research and Theory 20 (4): 849-66.

Moynihan, D. P., S. K. Pandey, and B. E. Wright. 2012a. "Prosocial values and performance management theory: Linking perceived social impact and performance information use." Governance 25 (3): 463-83.

--. 2012b. "Setting the table: How transformational leadership fosters performance information use." journal of Public Administration Research and Theory 22(1): 143-64.

Palmer, A. J. 1993. "Performance measurement in local government." Public Money & Management 13 (4): 31-36.

Pollitt, C. 2013. "The logics of performance management." Evaluation 19 (4): 346-63.

Poister, T. H., and G. Streib. 1999. "Performance measurement in municipal government: Assessing the state of the practice." Public Administration Review 59 (4): 325-35.

Radin, B. A. 1998. "The Government Performance and Results Act (GPRA): Hydra-headed monster or flexible management tool?" Public Administration Review 58 (4): 307-16.

Revelli, F., and P. Tovmo. 2007. "Revealed yardstick competition: Local government efficiency patterns in Norway." Journal of Urban Economics 62 (1): 121-34.

Rivenbark, W. C., and J. M. Kelly. 2003. "Management innovation in smaller municipal government." State and Local Government Review 35 (3): 196-205.

Rogers, M. K. 2006. Explaining Performance Measurement Utilization and Benefits: An Examination of Performance Measurement Practices in Local Governments, Doctoral dissertation, Department of Public Administration, Raleigh, NC: North Carolina State University.

Schatteman, A. 2010. "The state of Ontario's municipal performance reports: A critical analysis." Canadian Public Administration 53 (4): 531-50.

Siverbo, S., and T. Johansson. 2006. "Relative performance evaluation in Swedish local government." Financial Accountability & Management 22 (3): 271-90.

Streib, G., and T. H. Poister. 1999. "Assessing the validity, legitimacy, and functionality of performance measurement systems in municipal governments." American Review of Public Administration 29 (2): 107-23.

Statistics Sweden. 2009. "Population in the country, counties and municipalities by sex and age." Available at: http://www.scb.se/Pages/TableAndChart_159278.aspx. Accessed July 28, 2010.

Usher, C. L., and G. C. Cornia. 1981. "Goal setting and performance assessment in municipal budgeting." Public Administration Review 41 (2): 229-35.

Van De Walle, S., and W. Van Dooren. 2008. "Introduction: Using Public Performance Information." Performance Information in the Public Sector: How It Is Used. W. Van Dooren, and S. Van De Walle. Houndmills, UK: Palgrave Macmillan.

Van Dooren, W. 2010. "Better performance management: Some single- and double-loop strategies." Public Performance & Management Review 34 (3): 420-33.

Van Dooren, W., G. Bouckaert, and J. Halligan. 2010. Performance Management in the Public Sector. New York, NY: Routledge.

Wang, X., and E. Berman. 2001. "Hypotheses about performance measurement in counties: Findings from a survey." Journal of Public Administration Research and Theory 11 (3): 403-27.

Williams, M. C. 2005. Can Local Government Comparative Benchmarking Improve Efficiency?: Leveraging Multiple Analytical Techniques to Provide Definitive Answers and Guide Practical Action. Doctoral dissertation. Richmond, VA: Virginia Commonwealth University.

Willoughby, K. G. 2004. "Performance measurement and budget balancing: State government perspective." Public Budgeting & Finance 24 (2): 21-39.

Zafra-Gomez, J. L., A. M. Lopez-Hernandez, and A. Hernandez-Bastida. 2009. "Evaluating financial performance in local government: Maximizing the benchmarking value." International Review of Administrative Sciences 75 (1): 151-67.

Zaltsman, A. 2009. "The effects of performance information on public resource allocations: A study of Chile's performance-based budgeting system." International Public Management Journal 12 (4): 450-83.

Notes

(1) This choice was motivated by the fact that official correspondence from MAMROT is sent electronically; MAMROT routinely contacts municipalities by electronic means and sets up computer-based performance transition tools. The contact list of general managers provided by MAMROT consisted of e-mail addresses. The sheer number of municipalities in the province of Quebec makes mail surveys prohibitively costly, and it also rules out a large number of face-to-face interviews, especially since the distances between municipalities are great in Quebec, the largest Canadian province by area. An e-mail survey therefore seemed like the best compromise between an online survey and a mail survey.

(2) The quartile classification of two of the fourteen management indicators was reversed: training effort per employee and percentage of training cost compared to total payroll. A higher value for these two indicators was considered a better performance, because the intent behind them is to draw attention to training and foster more training, not less. The classification into quartiles was therefore reversed for these two indicators, so that a classification in the first (fourth) quartile represents a lower (better) performance.

(3) The coding was reversed for training effort per employee and percentage of training cost.

(4) The coding for declining performance was also reversed for the training effort per employee and percentage of training cost compared to total payroll management indicators. As for the quartiles, the number of indicators differs across municipalities.
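
To make the reverse coding described in notes 2 to 4 concrete, a minimal sketch follows. It is not the authors' code: the file name, the column names, and the averaging of quartiles into a single score are assumptions made for illustration.

```python
# A minimal sketch, not the authors' code: reversing the quartile classification of the two
# training indicators so that more training ranks as better performance (notes 2 to 4).
# The file name, column names, and the averaging into a single score are assumptions.
import pandas as pd

quartiles = pd.read_csv("indicator_quartiles.csv")  # hypothetical: one quartile (1-4) per indicator

# Hypothetical column names standing in for 'training effort per employee' and
# 'percentage of training cost compared to total payroll'
reversed_indicators = ["training_effort_per_employee", "training_cost_pct_of_payroll"]

for col in reversed_indicators:
    quartiles[col] = 5 - quartiles[col]  # swaps 1 with 4 and 2 with 3

# Assumption: the 'Quartile (external benchmarking)' variable is the mean quartile over the
# indicators each municipality reports; the number of indicators differs across municipalities.
indicator_cols = [c for c in quartiles.columns if c != "municipality_id"]
quartiles["quartile_score"] = quartiles[indicator_cols].mean(axis=1)
print(quartiles["quartile_score"].describe())
```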

(5) If we remove unwillingness and prevented (inability) from the regression models, inability (prevented) becomes statistically significant at the 1% level with a negative regression coefficient.
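
A brief sketch of the kind of re-estimation described in note 5 is given below, assuming hypothetical variable names and a binary logit fitted with statsmodels; it is not the authors' code.

```python
# A minimal sketch, not the authors' code: re-estimating a use model with only one of the
# three correlated barrier scales, as described in note 5. Variable names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_data.csv")  # hypothetical file holding all model variables
controls = ["quartile", "decline", "strategic_planning", "citizen_outreach", "leadership",
            "log_population", "density", "log_budget", "fiscal_stress",
            "devitalisation", "political_competition"]

def fit_logit(barrier_vars):
    """Binary logit of general use on the chosen barrier scales plus the controls."""
    X = sm.add_constant(df[barrier_vars + controls])
    return sm.Logit(df["general_use"], X).fit(disp=0)

full_model = fit_logit(["unwillingness", "inability", "prevented"])
reduced_model = fit_logit(["inability"])  # unwillingness and prevented removed

for name, model in [("full", full_model), ("inability only", reduced_model)]:
    print(name, round(model.params["inability"], 2), round(model.pvalues["inability"], 3))
```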

(6) Because all other independent variables have very low or no correlation with the three barrier variables, setting them to their respective means is representative of the observed data patterns in our sample. The predicted probabilities of use of performance information are therefore computed from the multiple logistic regression models at these means, for the different values of unwillingness considered.
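
The computation described in note 6 can be sketched as follows, again with hypothetical file and variable names: every covariate is held at its sample mean while unwillingness varies over its observed 1 to 4 range, and predicted probabilities are obtained from the fitted logit.

```python
# A minimal sketch, not the authors' code: predicted probability of general use across the
# observed 1-4 range of unwillingness, with every other covariate held at its sample mean.
# The file and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_data.csv")  # hypothetical file holding all model variables
predictors = ["quartile", "decline", "unwillingness", "inability", "prevented",
              "strategic_planning", "citizen_outreach", "leadership",
              "log_population", "density", "log_budget", "fiscal_stress",
              "devitalisation", "political_competition"]

model = sm.Logit(df["general_use"], sm.add_constant(df[predictors])).fit(disp=0)

# Profile: all covariates at their means, unwillingness varied over its scale
grid = pd.DataFrame([df[predictors].mean()] * 31)
grid["unwillingness"] = np.linspace(1, 4, 31)
probabilities = model.predict(sm.add_constant(grid, has_constant="add"))

print(pd.DataFrame({"unwillingness": grid["unwillingness"],
                    "predicted_probability": probabilities.round(3)}))
```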

Etienne Charbonneau is assistant professor at the Ecole nationale d'administration publique, and a member of the CREXE research center. Francois Bellavance is professor of management sciences at HEC Montreal. They would like to thank Gregg G. Van Ryzin and Donald P. Moynihan for their comments on a previous version of the paper.

Table 1. Quebec Municipalities by Population Size in 2009, and Survey Participation

Size of municipalities | Number of municipalities | Number of survey participants (%) | Number of observations in the statistical models, after missing values on independent variables (%)
0 to 499 | 206 | 84 (40.8%) | 67 (32.5%)
500 to 999 | 272 | 98 (36.0%) | 83 (30.5%)
1,000 to 1,999 | 261 | 82 (31.4%) | 67 (25.7%)
2,000 to 2,999 | 114 | 35 (30.7%) | 28 (24.6%)
3,000 to 4,999 | 91 | 30 (33.0%) | 23 (25.3%)
5,000 to 9,999 | 73 | 22 (30.1%) | 19 (26.0%)
10,000 to 24,999 | 55 | 19 (34.6%) | 16 (29.1%)
25,000 to 49,999 | 23 | 12 (52.2%) | 12 (52.2%)
50,000 to 99,999 | 9 | 5 (55.6%) | 4 (44.4%)
100,000+ | 9 | 4 (44.4%) | 2 (22.2%)
Total | 1,113 | 391 (35.1%) | 321 (28.8%)

Table 2. Descriptive Statistics of the Dependent Variables: General Use, Management Use, Budgeting Use, Reporting Use (n = 321)

Statement and response category | n | %
Data collection and reporting of the Municipal Management Indicators has been mandatory for all municipalities since 2003. According to your observations, what is the utilization level in your municipality?
Very often | 0 | 0
Often | 15 | 4.7
Occasionally | 142 | 44.2
Never | 164 | 51.1
General use: at least occasionally | 157 | 48.9
From what you have observed, indicate the reasons for which management indicators are used in your municipality:
In establishing contracts for services | 7 | 2.2
Managing operations or routine decisions | 17 | 5.3
Evaluation to establish underlying reasons for results | 49 | 15.3
Specific performance improvement initiatives | 30 | 9.3
Management use: at least one of the above | 75 | 23.4
From what you have observed, indicate the reasons for which management indicators are used in your municipality:
To prepare budgets, including resource allocations or discussion of resource reallocations | 27 | 8.4
Indicate whether at least one mandatory indicator was explicitly mentioned in:
Annual budget | 78 | 24.3
Annual report on the financial situation | 77 | 24.0
Budgeting use: at least one of the above | 113 | 35.2
From what you have observed, indicate the reasons for which management indicators are used in your municipality:
To provide feedback to managers and employees | 32 | 10.0
To report to elected officials | 97 | 30.2
To report to citizens, citizen groups or to inform the media | 48 | 14.9
Reporting use: at least one of the above | 125 | 38.9

Table 3. Descriptive Statistics of the Independent Variables (n = 321)

Variable | Mean | SD | Min | Max
1. Performance
Quartile (external benchmarking) | 2.61 | 0.41 | 1.33 | 3.50
Decline (internal benchmarking) | 0.49 | 0.17 | 0.00 | 0.90
2. Barriers
Unwillingness (Cronbach's alpha = 0.81) | 3.07 | 0.66 | 1 | 4
Inability (Cronbach's alpha = 0.83) | 3.04 | 0.69 | 1 | 4
Prevented (Cronbach's alpha = 0.68) | 2.20 | 0.82 | 1 | 4
3. Internal characteristics
Strategic planning (Cronbach's alpha = 0.72) | 3.02 | 0.63 | 1 | 4
Citizen outreach (Cronbach's alpha = 0.78) | 3.42 | 0.52 | 1 | 4
Political and administrative leadership (Cronbach's alpha = 0.73) | 3.60 | 0.51 | 1 | 4
4. Socio-demographic characteristics
Size of population (logarithm) | 7.31 | 1.38 | 4.62 | 12.84
Population density of municipality (per km²) | 0.12 | 0.36 | 0.00 | 3.36
Size of budget in 2009 (logarithm) | 14.45 | 1.42 | 12.31 | 20.25
Fiscal stress | 21.87 | 14.10 | 0.00 | 102.07
Devitalisation index | 0.30 | 5.12 | -16.76 | 25.99
5. Political characteristic
Presence of political competition for the mayoral seat in the 2009 election | 51%

Table 4. Results of the Logistic Regression Analysis of Performance Information (n = 321); cells show the regression coefficient with its standard error in parentheses

Variable | General use | Management use | Budgeting use | Reporting use
Intercept | 2.16 (3.56) | 1.79 (4.06) | 4.64 (3.63) | 3.58 (3.52)
1. Performance
Quartile (external benchmarking) | 0.77** (0.34) | 0.47 (0.40) | 0.13 (0.34) | 0.26 (0.33)
Decline (internal benchmarking) | -1.38* (0.76) | -0.98 (0.90) | -1.43* (0.78) | -0.58 (0.74)
2. Barriers
Unwillingness | -1.18*** (0.23) | -1.19*** (0.27) | -1.17*** (0.24) | -1.02*** (0.22)
Inability | -0.47** (0.23) | -0.19 (0.27) | -0.30 (0.24) | -0.31 (0.23)
Prevented | 0.01 (0.22) | -0.50* (0.27) | -0.34 (0.23) | -0.04 (0.21)
3. Internal characteristics
Strategic planning | 0.24 (0.24) | -0.22 (0.28) | -0.03 (0.25) | 0.13 (0.24)
Citizen outreach | 0.33 (0.32) | 0.31 (0.37) | 0.43 (0.33) | 0.33 (0.32)
Political and administrative leadership | -0.11 (0.32) | -0.23 (0.38) | 0.40 (0.33) | 0.13 (0.32)
4. Socio-demographic characteristics
Size of population (logarithm) | 0.25 (0.38) | 0.21 (0.46) | 0.27 (0.41) | 0.54 (0.39)
Population density of municipality | 0.33 (0.41) | 0.40 (0.42) | 0.84** (0.43) | 0.62 (0.40)
Size of budget in 2009 (logarithm) | -0.10 (0.38) | 0.05 (0.45) | -0.28 (0.40) | -0.44 (0.39)
Fiscal stress | -0.02** (0.01) | -0.02 (0.01) | -0.02 (0.01) | -0.01 (0.01)
Devitalisation index | -0.03 (0.03) | 0.01 (0.03) | -0.05* (0.03) | -0.04 (0.03)
5. Political characteristic
Political competition in 2009 election | 0.05 (0.27) | -0.12 (0.32) | -0.12 (0.28) | 0.41 (0.27)
Adjusted pseudo R-squared | 0.265 | 0.302 | 0.257 | 0.202
P-value of the Hosmer and Lemeshow goodness-of-fit test | 0.182 | 0.531 | 0.573 | 0.550
* p < .10; ** p < .05; *** p < .01.
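
For readers who wish to reproduce models of the kind reported in Table 4, the sketch below fits one binary logit and computes two diagnostics comparable to those reported: an adjusted McFadden pseudo R-squared and a Hosmer-Lemeshow goodness-of-fit test. The data file, the variable names, and the specific pseudo R-squared formula are assumptions; the original analysis may have relied on different software and definitions.

```python
# A minimal sketch, not the authors' code: fitting one of the four logit models and computing
# diagnostics comparable to those reported in Table 4. The data file and variable names are
# hypothetical, and the adjusted McFadden pseudo R-squared is only one of several definitions.
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("survey_data.csv")  # hypothetical file holding all model variables
predictors = ["quartile", "decline", "unwillingness", "inability", "prevented",
              "strategic_planning", "citizen_outreach", "leadership",
              "log_population", "density", "log_budget", "fiscal_stress",
              "devitalisation", "political_competition"]

X = sm.add_constant(df[predictors])
result = sm.Logit(df["general_use"], X).fit(disp=0)

# Adjusted McFadden pseudo R-squared: penalize the log-likelihood by the number of parameters
n_params = result.df_model + 1
adjusted_pseudo_r2 = 1 - (result.llf - n_params) / result.llnull

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over groups of predicted risk, with groups - 2 df."""
    table = pd.DataFrame({"y": y, "p": p})
    table["g"] = pd.qcut(table["p"], groups, duplicates="drop")
    observed = table.groupby("g", observed=True)["y"].agg(["sum", "count"])
    expected = table.groupby("g", observed=True)["p"].sum()
    chi2 = (((observed["sum"] - expected) ** 2)
            / (expected * (1 - expected / observed["count"]))).sum()
    return chi2, stats.chi2.sf(chi2, observed.shape[0] - 2)

chi2_stat, p_value = hosmer_lemeshow(df["general_use"], result.predict(X))
print(result.summary2())
print(f"Adjusted pseudo R-squared = {adjusted_pseudo_r2:.3f}, Hosmer-Lemeshow p = {p_value:.3f}")
```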