Crowdsourcing as a new instrument in the government's arsenal: explorations and considerations.
Dutil, Patrice
With the rapid deployment of Web 2.0, the interactive generation of the internet, and the advent of highly efficient computers and software, private sector companies have made progress in using "crowdsourcing," a concept coined by Jeff Howe (2006) to describe a new approach to finding talent and putting it to work. Typically, the label has been given to a strategy whereby a firm seeks services from anonymous members of society who, in turn, donate time and effort under certain conditions. A crowd can contribute software coding, content (text, pictures, and footage), and expertise on a myriad of issues. Crowdsourcing, however, can be a misleading concept as it has, in some quarters, come to be used as a synonym for simple fundraising, democratization and public consultation. That tendency can corrupt the dramatically innovative features of this new tool for government. It also points to the reality that the theoretical understanding of the range of policy and program instruments needs to be updated.
This article has three objectives. The first section describes the current understanding of crowdsourcing, distinguishes it from other state activities, and highlights its roots in traditional public administration. The second section outlines the major types of crowdsourcing and gives examples from the federal, provincial and municipal levels, mostly drawn from Canada. The third section explores the limits of crowdsourcing and some of the issues this new technique raises in terms of governance and administration, and argues that it provokes a rethinking of the classification of the state's instruments.
Selected cases reveal how this instrument can help government meet needs by activating crowds into accomplishing tasks. Experience shows that the use of crowds is more than a procedural novelty: it opens new venues for direct contact between the state and its citizens that can affect the force and direction of decision making and governance generally. It imposes itself as a new instrument in the state's policy and programming arsenal.
What is crowdsourcing?
Although it has been changed dramatically by technology, the idea of the state relying on citizens for services is not new, and this is a crucial (if often overlooked) theoretical consideration. Governments in all areas and at all levels have at times used this instrument to recruit soldiers, firefighters and searchers or to help informants report malfeasance anonymously (Meijer 2012). Police in Toronto, for instance, posted on the internet the pictures of unknown suspects wanted for criminal acts during the G20 riots of 2010 and were very successful in identifying culprits as a result of this public appeal. In the 1970s, "Neighbourhood Watch" projects across Canada were born of crowdsourcing impulses: anyone could volunteer to be a "Block Parent" and keep a watchful eye on the kids in the street (Bennett et al. 2009; Dumaine and Linden 2005). The literature on "co-production" of public goods, which typically highlights an alliance between a government department and a non-profit organization or a non-profit organization with its clients, has been enriched by theoretical insights and numerous case studies (Brandsen et al. 2012; Ostrom 1975, 1990; Vigoda-Gadot 2006; for an overview see Alford 2009; Pestoff et al. 2012). Indeed, it has been argued that NGOs have traditionally "co-produced" much of their "public value," relying on volunteers to contribute their time and energy (Bovaird and Loffler 2012).
There are a number of substantial differences, however, between co-production and crowdsourcing. First, almost all the examples of co-production involve a non-profit organization--not the state. Second, they typically involve "clients" of the service who will offer their energy and know-how in order to enhance an existing effort to help themselves, a family member, or a friend. Third, the vast majority work in alliance with professionals or para-professionals. Finally, they have made little or no use of internet technology.
Definitions matter, and even given its recent coinage, the term "crowdsourcing" has been used erroneously in ways that distort its significance. For instance, it has been used to describe the act of releasing data. The City of Vancouver considered it "crowdsourcing" when it made data publicly available (data.vancouver.ca), allowing software developers to create and offer a free app that reminds users when the city is scheduled to collect trash on their street. Yet this service has remained in private hands, so it was not a crowdsourcing venture. In a similar vein, the Canadian government and many of the provinces have launched "open data" websites (for example, www.open.canada.ca and www.ontario.ca/open-data). The United States government has gone much further in releasing data, with www.data.gov offering over 130,000 sets as of the summer of 2015. This passive activity, where a government simply allows access to its data, may allow anonymous crowds to use the data, but it is not in itself a "sourcing" activity.
Others have used "crowdsourcing" as a synonym for consultation (Cobo 2012). The New Zealand government used a "wiki" process to invite comments on its Police Act in 2007, a consultation effort deemed to have been highly successful (Sommer and Cullen 2009). The drafting of the Police Act, however, was done in the traditional way, involving both public servants and politicians. It was not crowdsourced.
The spectacular collapse of Iceland's finances in 2008 prompted a collective will to re-examine the country's governance. The country's leadership vowed to reinvent the consultation process, and with it was born the notion that Iceland's new constitution would be "crowdsourced." The process, which ran from 2009 to 2013, involved a rethinking of the entire document and was put to a popular vote. Though the new constitutional package won 67 percent of the ballots cast, the turnout was under 37 percent. The vote was non-binding, and (at the time of writing) the Icelandic parliament has not indicated that it will entrench the new version of the constitution.
In reality, the Iceland experiment was nothing more than a very impressive consultation effort. It first involved a conference of a thousand people chosen by lottery (the population of Iceland is roughly 320,000). Its wishes were then passed on to an elected constitutional assembly, which drafted the proposal that was submitted to a plebiscite. The Iceland experience provided a dazzling example of innovative consultation, but again the drafting was left to the traditional authorities, and progress has bogged down. It is misleading to consider this initiative an example of crowdsourcing.
Finally, "crowdsourcing" has been used to re-label fundraising through the use of the internet. Again, the only transformation in this instance is the relative ease of making a contribution, not the act itself: the contributors are known, the sums they have contributed are known, but their choices have not affected the outcome beyond the size of their final "gift."
Crowdsourcing as a governing/policy instrument
While many have applied the concept to a host of activities, Brabham (2013) emphasized that crowdsourcing was a power-sharing relationship between an organization (he only considered private sector corporations) and the public. He identified four key ingredients: 1) an organization that "has a task it needs performed"; 2) a community willing to do the work; 3) an online environment that allows the work to take place; and 4) "mutual benefit for the organization and the community" (p. 3). Much of his definition can be applied to the public sector, but with some differences (p. xix). The public sector has shown that it can "crowdsource" both in an online environment and in a traditional manner--indeed the state, as pointed out earlier, has a long history in crowdsourcing. Brabham's interpretation even prompted him to reject peer-produced projects such as Wikipedia (often seen as the poster child of crowdsourcing) because they lack a clear lead, an actual commissioner of the work. Much of Brabham's restrictive use of the term "crowdsourcing" should be adapted to the public sector--the notion of a governing authority, the availability of an anonymous community willing to do work, and the notion of a "mutual benefit" in particular--in order to distinguish these activities from others and to define their place in the inventory of state instruments.
Like many other mechanisms, "crowdsourcing" is difficult to place in the broad array of identified policy instrument "types" because it combines different features. The literature in this field is important, but it must be updated. A comprehensive inventory of sixty-two types of policy instruments was first compiled by E.S. Kirschen and his colleagues in the mid-1960s. Canadians made an important contribution. In the early 1980s, Bruce Doern and Richard Phidd (1992) grouped the inventory into five broad categories: self-regulation, exhortation, expenditure, taxation and regulation, and public ownership (p. 97). S.H. Linder and B. Guy Peters (1989) contributed seven, more refined, classifications of their own: direct provision, subsidy, tax, contract, authority, regulation and exhortation. Lester Salamon (2001) and his collaborators examined fourteen broad instrument categories. None of them identified the use of anonymous masses of individuals in contributing to the state's ability to deliver a public good. In the 1980s, Christopher Hood grouped the instruments into four broad groups under his NATO acronym: Nodality (the property of being at the centre of social and information networks), Authority (the legal power to command or prohibit), Treasure (the granting or taxing of moneys) and Organization (the deployment of state resources) (Hood 1984). Hood noted that any policy solution could make use of any or all of these four broad categories of tools. More than two decades later, in The Tools of Government in the Digital Age, Hood revisited his work with Helen Z. Margetts to make the case that the NATO categories of instruments were as relevant in the 21st century as they had been before, but the book did not include the more precise concept of crowdsourcing.
Existing classifications of instruments no longer seem to accommodate easily the tools emerging from the new digital environment. Crowdsourcing could be seen as an application of the state's ability to make use of Hood's "family, community and voluntary organizations" to achieve "self-regulation," but it sits uncomfortably within that definition. The same could be said of "organization." Crowdsourcing, of course, depends on the state initiating it, but its features demand a much fuller description and a recognition of how it has already been used as a tool. Communication is key to this instrument in a way that procurement of services is not.
The growth of information and communication technologies is pushing the limits of state instrument categories, regardless of whether they are substantive or procedural (Howlett 2009; Howlett et al. 2011). The one exception was the highlighting of partnerships, particularly since the 1980s, but that instrument was subsumed by the broader label of "agency." Anne Schneider and Helen Ingram (1990) showed that policy tools were limited by perceptions of how citizens would likely react to each instrument, a set of assumptions that has now been tested with the possibilities offered by crowdsourcing (Meijer 2012). In this regard, I argue that crowdsourcing occupies a niche of its own in its creative use of sticks/carrots/sermons (crowdsourcing uses no sticks, offers small carrots, and delivers only short sermons). It is targeted to a natural audience of interested parties, but emphatically not to clients of the state, its agencies, or third sector organizations. In addition, crowdsourcing makes no assumptions as to the location or demography of the individuals who will choose to get involved.
The taxonomy of instruments thus needs to be updated systematically to take into account the transformative impact of technology in enlarging the instruments available to the state. Indeed, it is this technology that has allowed governments to tap into the energy of volunteers willing to help and has created a new class of instruments that has already been used successfully in the private sector (Brabham 2013; Howe 2008; Shirky 2008; Surowiecki 2004; Tapscott and Williams 2008). Some pioneering work in this direction has been done. Beth Simone Noveck's Wiki Government (2009) focused on the Peer-to-Patent project, a particularly successful venture in which government crowdsourced expertise to assess patent applications (see also Cobo 2012). Not least, technology has allowed government departments and agencies to interact directly with the citizenry, and, in this regard, Canada has piloted some innovative projects (Vaillancourt 2012).
Crowdsourcing can find its distinctive place in the lists of policy "tools" documented in the preceding paragraphs as an effector. It is not merely a mechanism for consultation on vague policy issues or for seeking opinions on government services; instead, it commits action on specific problems. As such, it represents a cross between information-seeking and procurement (Borins and Brown 2009). It places an open order for services and contributions in return for a promise of a tangible outcome in direct measure with what has been contributed. It does not build a permanent "capacity" or empower any particular group, although it is always subject to the threat that a singular pressure group could seek to distort the process by involving itself heavily, far beyond its actual weight in a policy subgroup. Crowdsourcing forces open the tool shed of the state with the promise of a distinct alternative that can not only assist government in executing projects but also open up vast new possibilities for the government's role and for new policy and project plans. If new tools have been found, they must be named so that they can be reused.
Regardless of the form of crowdsourcing, anonymous individuals have shown that they can provide a valuable service to the state. They collectively can bring an intelligence, skill or effort to a wide variety of tasks. They can juggle and test ideas, and they can draw attention to new sources of information.
Types of crowdsourcing
Books and countless articles in the business press have commonly identified three broad categories in crowdsourcing:
* "Crowdcontests" involve using the internet to create a competition to generate either a new idea or get people to test a product. Typically, the exercise ends with a winner (or a series of winners) who will receive some compensation for their effort.
* "Macrotasking" applies when the internet is used to attract and identify individuals with specialized skills and then contract with them to perform certain tasks, both physical and intellectual. In this case, as in the first, the internet was used to find (or "source") expertise; but no "winner" is named.
* "Crowdfunding" applies when the internet is used to raise small amounts of money from multitudes of donors for particular projects. As in the first two categories, the World Wide Web is used to access a broader range of like-minded individuals scattered across the territory (or indeed the globe) who likely would not be aware of the need. Crowdfunding is used to ease fundraising for charitable causes but also increasingly to direct priorities.
There are countless examples of crowdsourcing in the public sector, but to illustrate trends, a few case studies recognized for their originality and successes are examined below.

Contest: DARPA; Challenge.gov
Task: "Team Up to Clean Up" (Hamilton, Ontario); Lake Partner Program (Environment, Ontario)
Knowledge: Project Naming (Library and Archives Canada); Road Watch Program (Richmond Hill, Ontario)
Funding: International Aid (DFAIT)/Single-Window Disaster Appeal Fund (UK); Canada Culture Endowment Fund (Heritage Canada)/Placements Culture (Quebec)/The British Columbia Arts Renaissance Fund
Crowdsourcing "contests" in the public sector
Governments have been reticent to use crowdsourced contests, save on occasion to name public buildings and zoo animals. Certainly, they use open "bidding" for contracts, but that practice reaches only small samples of the population and does not constitute crowdsourcing. The best examples of crowdsourced contests are in the United States. The Defense Advanced Research Projects Agency (DARPA) of the Department of Defense, for instance, has made extensive use of crowd "challenge" contests to fuel research into strategic military needs (Defense Advanced Research Projects Agency 2013; Belfiore 2010).
In 2010, the federal Office of Management and Budget officially allowed the use of contests and prizes to improve government and encourage innovation (Zients 2010). Since then, the departments of the United States government have made extensive use of www.challenge.gov, which is administered by the U.S. General Services Administration in partnership with ChallengePost, a non-profit corporation that is subsidized by a wide range of private sector companies. Challenge.gov inventories all of the government's challenge and prize competitions. These include technical, scientific, ideation, and creative competitions through which the U.S. government seeks innovative solutions. An agency pays only for those solutions that meet the criteria and are chosen as winners. According to its website, 58 federal agencies ran 288 challenge competitions from 2010 to 2013 (www.challenge.gov; Mandl et al. 2009). It is unclear to what degree these contests are successful in delivering results, but they certainly hold promise and are ripe for more experimentation.
Crowdsourcing "tasks" in the public sector
"Team Up to Clean Up" (Hamilton, Ontario)
For many years, Hamilton, Toronto, Ottawa, Winnipeg, Edmonton, Montreal and London have made use of crowdsourcing to lead annual "clean up" initiatives to beautify the urban environment. The City of Hamilton in Ontario provides a particularly compelling case study, as it has been a pioneer in using this instrument with its "Team Up to Clean Up" event. What makes the exercise particularly remarkable is its success in nearly doubling the public response. Hamilton has also made use of crowdsourcing as a tactic in achieving broader objectives of community engagement. In this program, volunteers register for the event, committing to clean up specific parts of the city. The short-term benefits of these clean-up initiatives are quite obvious: litter is removed, media coverage lends an opportunity for positive publicity, and feelings of civic teamwork among citizens are generated for that day.
Hamilton had long sponsored an annual "Pitch-In Week," organized under the national umbrella of Pitch-In Canada. Pitch-In provided garbage bags and some promotional material to the city for the purpose of engaging and supporting volunteers. However, the logistics of local implementation, such as the coordination of garbage pick-up, stretched the capacities of a national organization. In 2008, Hamilton officials turned to the methods of Keep America Beautiful, an American NGO formed in New York City in 1953, to recast the annual clean-up event with a local focus. Since 2010, the City of Hamilton has also sought and secured sponsorships from the private sector to provide supplies and prizes. Through its Neighbourhood Clean Team Program, Hamilton has now turned this form of crowdsourced task force into a year-round activity.
The municipality has moved beyond the physical task of collecting litter to a measurement exercise of sorts through its Community Beautification Reports. The city's request for such "report cards" is evidence of a targeted strategy to go beyond menial help and to also crowdsource data collection. The return rates on the "report cards" have been low despite the availability of prizes, but they have nevertheless proven useful sources of qualitative data for the city. The data suggest that some designated clean-up areas are progressively less littered every year (Homerski 2011). As a condition of remaining a member in good standing with Keep America Beautiful, the city completes a "windshield survey" of visible litter that must cover at least twenty percent of the city's surface. The task is completed over many days by a three-person volunteer survey crew after the "Team Up to Clean Up" event. The volunteer initiative costs the City close to $50,000, but city officials value the work of the 16,000 people who "pitch in" at close to $1,000,000, a financial return on investment of approximately twenty-to-one (Homerski 2011).
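The arithmetic behind that estimate is straightforward. A minimal sketch follows, using only the totals reported by Homerski (2011); the per-volunteer figure is derived for illustration and is not a reported number:

```python
# Rough return-on-investment arithmetic for a crowdsourced clean-up,
# using the totals reported for Hamilton (Homerski 2011).
# The per-volunteer value is a derived illustration, not a reported figure.

city_cost = 50_000            # dollars the city spends on the event
volunteers = 16_000           # residents who "pitch in"
estimated_value = 1_000_000   # city's valuation of the volunteer work

value_per_volunteer = estimated_value / volunteers   # ~$62.50 each
roi = estimated_value / city_cost                    # ~20-to-1

print(f"Implied value per volunteer: ${value_per_volunteer:,.2f}")
print(f"Return on investment: {roi:.0f}-to-1")
```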
Lake Partner Program (Ministry of Environment, Ontario)
The Ontario government has used crowdsourcing to secure data on the health of the inland lakes in its territory for almost twenty years. Before then, the monitoring was a task assigned to two dozen employees of the Ministry of the Environment located in the regional offices of Dorset and Sudbury. At most, thirty lakes were regularly monitored for clarity, nutrients and phosphorus. Inspired by previous short-lived "self-help" programs that had encouraged citizens to monitor lakes on their own, ministry employees created the Lake Partner Program in 1995 and set about seeking the help of citizens (Scheider 2013). The manager responsible for monitoring approached the Federation of Ontario Cottagers' Associations to explore whether their members would be interested in helping with monitoring tasks. The response was hesitant at first, but positive. Soon, a pilot project was born: simple kits were sent to local associations, which in turn educated citizens about the process.
With the advent of faster computers, and as the program grew to be better known, the ministry succeeded in enlisting a growing number of individuals who each spring took lake measurements and sent samples to the ministry's laboratory. By 2010, the government was receiving data for over a thousand lakes in the province, with enough accumulated data to build historical profiles on the health of at least six hundred lakes (Scheider 2013). The costs of securing this information have been minimal compared to hiring public servants to carry out the tasks. In 2011, the project received the Technical Merit Award for Outstanding Volunteer Actions from the North American Lake Management Society. As it continues to enlist amateur "citizen scientists" of this sort, the program is expected to expand considerably.
Crowdsourcing "knowledge" in the public sector
Project Naming (Library and Archives Canada)
At the federal level, Library and Archives Canada (LAC) opted to "crowdsource" the personal identification of the thousands of northern individuals--almost entirely Inuit--depicted in its collection of photographs dating from the late 1800s to the mid-20th century. Because the Elders in a position to identify these individuals were aging, Murray Angus, an instructor with the Nunavut Sivuniksavut Training Program (NSTP), proposed to approach them systematically. "Project Naming" was initiated in the winter of 2001, when a partnership was established between Nunavut Sivuniksavut, Nunavut's Department of Culture, Language, Elders and Youth, and LAC.
Project Naming began with the digitization of approximately five hundred photographs taken by Richard Harrington in the 1920s, 1930s and 1940s. These were transferred to disks and given to youth volunteers from Inuit communities, whose task was to meet with Elders and discuss the photographs. By LAC's calculation, three quarters of the individuals were identified in this pilot project, as Elders identified parents, friends, family members and even themselves.
In 2003, Elders from the communities of Pangnirtung (Pangnirtuuq) and Pond Inlet (Mittimatalik/Tununiq) named individuals in almost one hundred images, some dating from as early as 1922. In ensuing years, the collection expanded to 1,300 images, which were uploaded to the internet in 2004. Since then, LAC has continued to enrich its database by integrating photos from other collections: the Department of Northern Affairs, the RCMP, Health and Welfare Canada and the National Film Board. The online collection now counts more than 4,500 photographs and 800 identified individuals (Library and Archives Canada 2013).
The Road Watch Program (Richmond Hill, Ontario)
Road Watch programs had been established in Ontario's smaller cities and towns since 1995, but in March 2002 the Public Works Department recommended a crowdsourcing option to the Chief Administrative Officer of Richmond Hill. The following year, the town inaugurated its Road Watch program in order "to reduce automotive collisions and reduce fatalities through awareness, education and enforcement" (Town of Richmond Hill 2003). The Road Watch program has used the strategy of crowdsourcing in two ways: first, by having citizens formally report unsafe driving to York Regional Police through a "citizen report form"; and second, by securing volunteers to serve on a Road Watch advisory committee charged with the tasks of advertising, promoting, and educating. The committee is composed of fifteen citizens (two of whom represent local business interests), the Mayor, two other council members and one representative of the local police. The committee is thus decisively driven by the community's will.
More recently, sponsored by local businesses and a small grant from the provincial ministry of transportation, the program has featured three components: awareness, education and enforcement. "Awareness" takes full advantage of volunteers, who are present at all major community events to promote the program. Crowdsourcing has also been effective in the enforcement of the program through the Citizen Report Forms submitted by volunteers. The reports serve several functions. First, suspected offenders may receive letters from the police, and repeat offenders may be required to meet with police officers in person. The purpose in both cases has been to educate drivers, not to instigate charges. Another important function of the reports has been to help the police identify "hot spots": areas where a high number of concerns have been reported. This allows police to target areas for enforcement measures more effectively and to build data on the conditions that will optimally reduce vehicle accidents (Chau 2011).
The Town of Richmond Hill has taken advantage of an opportunity both to make its roads safer and to nurture a culture of civic engagement. The use of crowdsourcing in this case is consistent with engaging citizens, sparking a dialogue on troublesome driving, and accumulating data that can be used for driver and road analysis. In June 2009, the Road Watch Committee was presented with the Road Safety Achievement Award by the Ontario Ministry of Transportation (Town of Richmond Hill 2009).
Crowdsourcing "donations" in the public sector
International aid
The Government of Canada has used crowdsourcing to direct the amount of money it sends to disaster-stricken countries as grants-in-aid by "matching" private citizens' donations to registered charitable organizations working in specific countries. This is not an exercise in merely raising money, but in using crowdsourcing to shape policy. The first instance was the response to the December 2004 tsunami that ravaged South Asia, gravely affecting the infrastructure and economies of eleven countries and claiming almost 200,000 lives. The government of the United Kingdom had done something similar with its Single-Window Disaster Appeal Fund.
In early 2005, Prime Minister Paul Martin assembled an ad hoc inter-ministerial team to oversee relief efforts, and the federal government acted quickly to assess the impact of those efforts in tsunami-affected regions. The Government quickly announced that it had put together a $425 million aid package that included $150 million set aside "to match the generous contributions to eligible organizations by individual and groups of Canadians" (Canada News Centre 2005). The program proved popular, and $230 million was raised by May 2005, prompting the government to raise its match to $200 million (Foreign Affairs and International Trade Canada 2005).
An official Foreign Affairs and International Trade Ministry review of the response to the South Asian tsunami recognized the clear policy choice that had been made. "Channeling matching funds to NGOs creates such problems as rewarding fund raising capability instead of aid capability, potentially distorting the allocation of funds, and limiting CIDA's ability to determine the most appropriate use of Canada's aid monies," it noted. "Informants at aid agencies report that the Matching Fund rules restricted their flexibility in providing aid. Some agencies were confused about the Matching Fund and some simply did not bother to apply" (Ibid.).
This review would set the stage for a shift in the way CIDA would distribute matched funds when the policy was invoked again three years later, after two separate disasters struck Asia in May 2008: the earthquake in China's Sichuan Province, which claimed 70,000 victims and affected over fifteen million residents, and cyclone Nargis in Burma, which killed over 100,000 people and displaced another 2.4 million. In this instance, the minister announced that Canada had matched nearly $30 million in private donations to assist victims of the Sichuan earthquake and over $11.6 million for Burma (Canadian International Development Agency 2008). The response to these disasters marked a significant shift in policy in comparison to the 2005 tsunami response. This time, CIDA matched funds given to any registered charity in the region but directed the matching funds to humanitarian organizations with which the Government of Canada had formally partnered. The change addressed the problematic issue raised in the 2005 review of the tsunami response, which had observed that the organizations with the greatest fundraising capacities were receiving the most funds regardless of their capacity to actually deliver aid. The policy shift also had the effect of siphoning some decision-making power over the direction of funds away from citizens and recentralizing it in the hands of the government.
In 2010, the dollar-matching instrument was triggered twice: first, following the January earthquake in Haiti (Canadian International Development Agency 2010a), for $50 million, and again in the summer, when Pakistan endured flooding that left over six million people in need of humanitarian aid. Interestingly, the government seemed to take greater care in communicating the details of the funding in its official communications: "Please note that the Pakistan Flood Relief Fund is separate from the funds raised by charities and will be administered separately by the Government of Canada," the CIDA website indicated: "This means that an organization declaring donations raised will not receive equivalent funds from the Government" (Canadian International Development Agency 2010b). Other conditions for donations eligible for dollar matching (see the sketch after the list) included that the donation be:
* monetary, up to $100,000
* made by an individual Canadian
* made to a registered Canadian charity that is receiving donations in response to the July-August flooding in Pakistan
* specifically earmarked by such organizations for the purpose of responding to the floods
* made between August 2 and September 12, 2010 (Ibid.).
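Taken together, these conditions amount to a simple eligibility test. A minimal sketch of that logic follows; the function and field names are hypothetical illustrations, since CIDA published the rules, not code:

```python
from datetime import date

# Hypothetical sketch of the 2010 Pakistan Flood Relief Fund matching rules.
# Field names are illustrative; CIDA published conditions, not a data schema.
WINDOW_START, WINDOW_END = date(2010, 8, 2), date(2010, 9, 12)
CAP = 100_000  # per-donation ceiling, in dollars

def eligible_for_matching(donation: dict) -> bool:
    return (
        donation["monetary"]                          # cash, not in-kind
        and donation["amount"] <= CAP
        and donation["donor_is_individual_canadian"]
        and donation["charity_registered_in_canada"]
        and donation["earmarked_for_pakistan_floods"]
        and WINDOW_START <= donation["date"] <= WINDOW_END
    )

# Example: a $500 donation made on 15 August 2010 would qualify.
sample = {"monetary": True, "amount": 500,
          "donor_is_individual_canadian": True,
          "charity_registered_in_canada": True,
          "earmarked_for_pakistan_floods": True,
          "date": date(2010, 8, 15)}
print(eligible_for_matching(sample))  # True
```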
The Pakistan flooding marked the fifth time in five years that the federal government had employed dollar matching as a foreign aid strategy. The details contained in these communications highlighted the evolution of this policy instrument from an ad hoc experiment to a strategy entrenched in a clearer, broader policy framework addressing Canada's role in humanitarian crises. The devastation of parts of northern Japan in March 2011, interestingly, did not trigger Canada's dollar-matching mechanism--likely because there were relatively few Japanese donors in Canada.
The Canadian International Development Agency's efforts were not designed to crowdsource funds, but to use crowdsourcing to determine the level of the Government of Canada's contribution. As such, it allowed the "crowd" to contribute significantly in shaping the funding level.
The Canada Cultural Investment Fund (originally the Canadian Arts and Heritage Sustainability Program)
The Canada Cultural Investment Fund provides support to arts organizations. One of the fund's four components, Endowment Incentives, matches private donations to arts projects made through foundations and organizations (Canadian Heritage n.d.). Endowment Incentives allows organizations to apply to participate in a dollar-matching program. Unlike the foreign aid dollar matching, this matching is not dollar-for-dollar but follows a proportional formula largely dependent on the number of successful applications. The rate varies, but averages seventy cents on the dollar (LeBlanc 2011). A total of 132 arts organizations have participated in Endowment Incentives since its inception in 2001. From 2001 to 2009, the government invested over $73 million, which "leveraged" over $104 million in private donations (Office of the Chief Audit and Evaluation Executive Evaluation Services Directorate 2009) on a wide variety of projects, ranging from matched funds of $1.4 million for the Orchestre symphonique de Montreal to $710 for Prince Edward Island's Indian River Festival Association Inc. (Canada Heritage--Endowment Incentives).
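The contrast with dollar-for-dollar matching can be made concrete. A minimal sketch follows, assuming the program splits a fixed envelope in proportion to each successful applicant's eligible donations; the mechanics are inferred from the description above for illustration and are not the program's published formula:

```python
# Hypothetical sketch of proportional matching under a fixed envelope.
# Illustrates why the matching rate falls as successful applications rise;
# this is not the Endowment Incentives program's actual published formula.

def proportional_match(envelope: float, donations: dict,
                       cap_rate: float = 1.0) -> dict:
    """Split a fixed envelope across applicants in proportion to their
    eligible donations, never exceeding dollar-for-dollar matching."""
    total = sum(donations.values())
    rate = min(cap_rate, envelope / total)  # e.g. ~0.70 in a typical year
    return {org: amount * rate for org, amount in donations.items()}

# Illustrative applicants and amounts (hypothetical):
donations = {"Orchestra": 1_000_000, "Festival": 50_000, "Theatre": 375_000}
print(proportional_match(envelope=1_000_000, donations=donations))
# rate = 1,000,000 / 1,425,000 ~= 0.70 -- roughly seventy cents on the dollar
```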
Two of Canada's provinces, Quebec and British Columbia, have established similar dollar-matching programs for the arts. Arts organizations in these provinces have the opportunity to apply to both the provincial and federal programs, thereby leveraging nearly triple the donation dollars. In Quebec, Placements Culture has matched donations to arts projects up to $250,000 (Government of Quebec and community foundations launch generous matching grants program), while the British Columbia Arts Renaissance Fund has matched donations up to a limit of $350,000. As with CIDA's contributions abroad, the departments have demonstrated a willingness to share with the "crowd" the decision on the final level of contributions.
Characteristics of successful crowdsourcing
Crowdsourcing differs from other government instruments in that, while the state initiates the project, its success depends on the quality of the contributions from a segment of the population. Importantly, crowdsourcing does not necessarily require that the tasks be accomplished electronically, as the cases of Hamilton and Richmond Hill showed. The key feature of a crowdsourced activity is that government is yielding important parts of the policy cycle. While it may set the agenda (though most often it does not) and will set the parameters of a policy or program, the implementation of the projects and their outputs are dependent on the crowd in this governance-sharing mechanism. In effect, the state is also committing itself to follow the lead of citizens in allocating government contributions to international aid or to cultural/artistic enterprises. This is potentially transformative of the relationship between the government and the citizen, who, through this process, sees himself not only as an agent but as a partner in the direction the government will take.
Certainly, crowdsourcing has offered clear benefits. In addition to tangible help (representing the savings of incalculable sums of money), crowdsourcing has also opened for government a new form of community building. While crowdsourcing could apply to many aspects of government activity, it would not apply to all. It yields many goods and has the merit of not being coercive--only volunteers will join the crowd--but its output is entirely dependent on non-governmental actors. To ensure success, the state must be diligent, and the public service must deploy inventiveness and imagination.
First, a crowdsourcing project must be sufficiently challenging or problematic to attract people who will volunteer their time, energy or money. All the successful crowdsourcing projects were demonstrably able to attract help and support by appealing to the self-interest of individuals and communities. The United States National Archives has crowdsourced the indexing of the 1940 census, the search for hand signatures, and the captioning of videos; the Smithsonian Institution maintains an open call for "citizen transcribers" to bring historical manuscripts to life (transcription.si.edu). Those who annually volunteer to clean up public space in Hamilton, who donate to certain relief or artistic efforts, or who take samples of a lake have an interest in the outcome.
Crowdsourcing projects requiring special expertise may well benefit from contest prizes, but these should be substantial, as volunteers risk both time and money to formulate a solution to the government-named problem. Still, this may not be sufficient: for crowdsourcing to be successful, it must appeal to a benefit larger than personal or communal accomplishment. The experience with rewards has been mixed. Hamilton certainly benefitted by being able to work with sponsors, but few other governments have gone as far. Self-interest, however writ large, is a crucial component of crowdsourcing, and it can be catered to in several ways. The first reward is a job well done: an accomplished mission will do more to sell the crowd on a future project than anything else.
Crowdsourcing works especially well when problems are very clearly defined and where solutions are indicated. Cleaning up litter in a clearly delineated area and within a clear time frame encourages community action; there is a solidarity built into the project. Ideally, the project should lend itself to data accumulation. Crowdsourcing lends itself to massive applications where progress should be measurable. Some projects have involved crowds in reporting on phenomena: water quality, litter, traffic, donations.
Crowdsourcing does not replace the public service. If anything, crowdsourcing, particularly through the use of Web 2.0 technologies, enhances the work of public servants by amplifying, by significant factors, the work of the state (Dutil et al. 2010: chap. 6; McNutt 2014). But state functionaries must be clear about what they are looking for and must be prepared to define with some clarity the crowd they expect to respond. This is true of all the cases. In each instance, government managers displayed remarkable innovation and creativity in seeking solutions to their issues. Equally important has been patience. The task required a different kind of leadership: a willingness to "go public" with a demand, an ability to act as ambassador to new communities, a connoisseurship in finding the right kind of "crowds," and, most importantly, a willingness to allow the crowd to see, in real time, the progression and the fruits of its efforts. The online/community manager--the person who heads the team that conceives projects, studies and recruits "crowds," ensures quality control and feedback loops and, ultimately, provides full accountability to government and the community--will inevitably become a more important player in the bureaucracy of a wide range of government departments.
Accountability in crowdsourcing exercises is essential; indeed, performance dashboards and regular updates serve to improve the trust between citizens and the state. The government should also be mindful to recognize notable citizen-champions of crowdsourcing initiatives. Government, as a trigger of knowledge accumulation and value creation, can play an important role in affirming its partnership with crowds--innumerable individuals, really--who wish to contribute. Government must be open and willing to share responsibility, but it must also be willing to recognize the efforts of the crowd and be fully accountable to it.
Conclusion
The opportunities for crowdsourcing are substantial and have the potential to help the government do better work in implementing policies and programs and in opening entirely new fields of intervention. Crowdsourcing, however, represents a dilemma in terms of its place in the taxonomy of governmental instruments. Certainly, it has a history that is rooted deeply in past practices, although the use of mass communications and the advent of Web 2.0 have brought the state to an entirely new level of motivational ability. The case is made that crowdsourcing can find its distinct place among the categories of government instruments in that it serves to resolve problems and to shape, implement and evaluate policies and programs. Crowdsourcing can be applied to myriad situations--including detection and implementation--and at this point it seems that only the imagination of the state limits it.
But there are limits. Intellectual property issues have already stalled many initiatives, and liability for work being done by volunteers must be a concern (Brabham 2013, pp. 81-97). Crowdsourcing sits on the partnership continuum and in many ways must be managed as such. Like any successful partnership, crowdsourcing requires expert leadership. It makes heavy use of the enablement skills that Salamon (2001), Howe (2008) and Noveck (2009) have called for, but which apply to all volunteer efforts: talents in activation, orchestration, and modulation. Crowdsourcing is not a panacea, but it certainly can be a vital part of the New Governance. Crowdsourcing can contribute to a state's legitimacy, but it is not a guarantee of equity. It can be efficient, cost-effective and non-coercive (and therefore less likely to be politically complicated). Above all, public servants must ensure that it is fully accountable.
Governments have been experimenting with various forms of crowdsourcing, and a number of lessons can be drawn from their early experience. First, crowdsourcing has worked in that it has allowed tasks to be performed at a much lower cost than if the state had to pay for them. Governments have proven that they can circumscribe the tasks with clarity and make use of labour offered voluntarily. States have also learned from the crowd and improved their efforts over repeated attempts. Crowdsourcing will not replace the state. Crowds have little sense of mission beyond the task at hand, and they cannot be relied upon to strategize on behalf of the state. Moreover, crowds can be unruly, uncooperative and plainly incompetent. Their use (and usefulness) will depend on the state's ability to marshal them in the right way for the right tasks at the right time. The tasks of motivating (by making the work fun or challenging, or by pointing to a public good) and recognizing/rewarding (through prizes and distinctions) will assume a greater part of the state's concerns.
Crowdsourcing's future is not a given. The government's ability to detect by remote sensing, for instance, is inexorably improved by technological advancements, and it would be tempting to limit human intervention--even in some of the cases cited in this study (such as garbage or traffic hot spots). This would be regrettable, particularly as a gulf deepens between the population and the state while, at the same time, the need to innovate becomes more evident. Crowdsourcing, especially as social media and the continued expansion of the internet and computer literacy lower the threshold for access to new communities of volunteers, may be the best strategic tool available to braid new ties to the community. As such, crowdsourcing is not so much an instrument to build a new governance as one to recapture an old one.
Advancements in practice resulting from internet developments and social media have posed a challenge to the theoretical categories of government instruments. The use of crowds is more than a procedural novelty in that it opens new venues for direct contact between the state and its citizens that can affect the force and direction of decision making. Indeed, crowdsourcing reopens every tool to the point where it becomes an instrument itself, a choice the state must consider in its pursuit of objectives.
References
Anonymous. "Lake Partner Program." Ontario Ministry of the Environment. Available at http://www.ene.gov.on.ca/environment/en/local/lake_partner_program/index.htm. Accessed 30 July 2011.
Alford, John. 2009. Engaging Public Sector Clients: From Service-Delivery to Co-Production. New York: Palgrave Macmillan.
Andreoni, J. and A.A. Payne. 2003. "Do Government Grants to Private Charities Crowd out Giving or Fund Raising?" The American Economic Review 93 (3): 792-812.
British Columbia Arts Renaissance Fund Endowment Matching Program. n.d. Available at www.vancouverfoundation.ca/documents/grants/BCART.pdf. Accessed 2 July 2011.
Belfiore, Michael. 2010. The Department of Mad Scientists: How DARPA is Remaking our World, from the Internet to Artificial Limbs. New York: HarperCollins.
Bennett, Trevor, Katy Holloway, and David P. Farrington. 2009. "A Review of the Effectiveness of Neighbourhood Watch." Security Journal 22: 143-155.
Bovaird, Tony and Elke Loffler. 2012. "From Engagement to Co-Production: How Users and Communities Contribute to Public Services." In New Public Governance, the Third Sector and Co-Production, edited by Victor Pestoff, Taco Brandsen, and Bram Verschuere. New York: Routledge, pp. 35-60.
Borins, Sandford and David Brown. 2009. "E-Consultation: Technology at the Interface between Civil Society and Government." In Digital State at the Leading Edge, edited by Sandford Borins et al. Toronto: University of Toronto Press.
Brabham, Daren. 2009. "Crowdsourcing the Public Participation Process for Planning Projects." Planning Theory 8 (3): 242-262.
--. 2013. Crowdsourcing. Cambridge, MA: MIT Press.
Brandsen, Taco, Victor Pestoff, and Bram Verschuere. 2012. "Co-Production as a Maturing Concept." In New Public Governance, the Third Sector and Co-Production, edited by Victor Pestoff, Taco Brandsen, and Bram Verschuere. New York: Routledge, pp. 1-12.
Canadian Heritage. n.d. "Canada Cultural Investment Fund (formerly the Canadian Arts and Heritage Sustainability Program)."
Canadian International Development Agency. 2008. "Government of Canada Helps Victims of China Earthquake and Burma Cyclone."
--. 2010a. "Government of Canada to Match Generosity of Canadians in Response to the Devastating Earthquake in Haiti."
--. 2010b. "Government of Canada Announces Matching Fund for Pakistan Flood Relief."
Canada News Centre. 2005. "Canada Announces Comprehensive Tsunami Disaster Relief, Rehabilitation and Reconstruction Assistance."
Defense Advanced Research Projects Agency. 2013. "Challenge Contest." Available at www.darpa.mil/NewsEvents/Events/Challenge_Contest.aspx. Accessed 1 November 2013.
Cobo, Cristobal. 2012. "Networks for Citizen Consultation and Citizen Sourcing of Expertise." Contemporary Social Science 7 (3): 283-304.
Ding, Li et al. 2010. "TWC data-gov corpus: incrementally generating linked government data from data.gov." Proceedings of the 19th International Conference on World Wide Web. New York, NY, pp. 1383-1386.
Doern, Bruce and Richard W. Phidd. 1992. Canadian Public Policy: Ideas, Structure, Process. Toronto: Nelson. (First edition published by Methuen in 1983.)
Dumaine, Francois and Rick Linden. 2005. "Future Directions in Community Policing: Evaluation of the Ottawa Police Service Community Police Centres." The Canadian Review of Policing Research, 1.
Dutil, Patrice, Cosmo Howard, John Langford, and Jeffrey Roy. 2010. The Service State: Rhetoric, Reality and Promise. Ottawa: University of Ottawa Press.
Estelles-Arolas, Enrique and Fernando Gonzalez-Ladron-de-Guevara. 2012. "Towards an Integrated Crowdsourcing Definition." Journal of Information Science 38 (2): 189-200.
Eaves, David. 2011. "Using the Tools of the 21st century: open data and Wikis." In Approaching Public Administration: Core Debates and Emerging Issues, edited by Robert P. Leone and Frank L. K. Ohemeng. Toronto: Emond Montgomery Publications.
Foreign Affairs and International Trade Canada. 2005. "Lessons from the Tsunami: Review of the Response of Foreign Affairs and International Trade Canada to the 2004 Indian Ocean Tsunami Crisis."
Hilgers, Dennis and Christoph Ihl. 2010. "Citizensourcing: Applying the Concept of Open Innovation to the Public Sector." International Journal of Public Participation 4 (1): 67-88.
Hood, Christopher. 1984. The Tools of Government. London: Macmillan.
Hood, Christopher and Helen Z. Margetts. 2007. The Tools of Government in the Digital Age. New York: Palgrave Macmillan.
Howe, Jeff. 2006. "The Rise of Crowdsourcing." Wired, June. Available at http://archive.wired.com/wired/archive/14.06/crowds.html?pg=3&topic=crowds&topic_set=. Accessed 20 January 2014.
--. 2008. Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business. New York: Crown Business.
Howlett, Michael. 2009. "Government Communication as a Policy Tool: A Framework for Analysis." Canadian Political Science Review 3 (2): 23-37.
Howlett, Michael, M. Ramesh, and Anthony Perl. 2009. Studying Public Policy: Policy Cycles & Policy Subsystems. Toronto: Oxford University Press.
--. 2011. Designing Public Policies: Principles and Instruments. New York: Routledge.
Koch, Giordano, Johann Fuller, and Sabine Brunswicker. 2011. "Online Crowdsourcing in the Public Sector: How to Design Open Government Platforms." In Online Communities, edited by A.A. Ozok and P. Zaphiris. Berlin: Springer-Verlag, pp. 203-12.
LeBlanc, Jean-Rene. Program Officer, Canada Cultural Investment Fund. Interview with Jordan Goldman, 6 July 2011.
Library and Archives Canada. 2013. "Project Naming." Available at http://www.collectionscanada.gc.ca/inuit/indexe.html. Accessed 2 October 2013.
Linder, S.H. and B. Guy Peters. 1989. "Instruments of Government: Perceptions and Contexts." Journal of Public Policy 9: 35-58.
Macmillan, Paul. n.d. Unlocking Government: How Data Transforms Democracy. Deloitte Inc.
Mandl, Kenneth et al. 2009. "The SMART Platform: Early Experience Enabling Substitutable Applications for Electronic Health Records." Journal of the American Medical Informatics Association 19 (4): 597-603.
McNutt, Kathleen. 2014. "Public Engagement in the Web 2.0 Era: Social Collaborative Technologies in a Public Sector Context." Canadian Public Administration 57 (1): 49-70.
Meijer, Albert. 2012. "Co-Production in an Information Age." In New Public Governance, the Third Sector and Co-Production, edited by Victor Pestoff, Taco Brandsen and Bram Verschuere. New York: Routledge, pp. 192-208.
Noveck, Beth Simone. 2009. Wiki Government. Washington, D.C.: Brookings Institution Press.
Office of the Chief Audit and Evaluation Executive, Evaluation Services Directorate. 2009. Evaluation of the Canadian Arts and Heritage Sustainability Program. Ottawa: Government of Canada.
Ostrom, Elinor. 1975. The Delivery of Urban Services: Outcomes of Change. Beverly Hills, CA: Sage.
-- 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge, UK: Cambridge University Press.
Parvanta, C, Y. Roth, and H. Keller. 2013. "Crowdsourcing 101: A Few Basics to Make You the Leader of the Pack." Health Promotion Practice 14 (2): 163-167.
Pestoff, Victor. 2009. A Democratic Architecture for the Welfare State. London: Routledge.
Pestoff, Victor, Taco Brandsen, and Bram Verschuere, eds. 2012. New Public Governance, the Third Sector and Co-Production. New York: Routledge.
Roy, Jeffrey. 2011. "The promise (and pitfalls) of digital transformation." In Approaching Public Administration: Core Debates and Emerging Issues, edited by Robert P. Leone and Frank L.K. Ohemeng. Toronto: Emond Montgomery Publications.
Salamon, Lester. 2001. "The New Governance and the Tools of Public Action: An Introduction." Fordham Urban Law Journal, 2000-2001. This is a slight update of Salamon's introduction to The Tools of Government: A Guide to the New Governance (New York: 2001).
Schneider, Anne and Helen Ingram. 1990. "Behavioural Assumptions of Policy Tools." Journal of Politics 52 (2): 510-529.
Seltzer, Ethan and Dillon Mahmoudi. 2013. "Citizen Participation, Open Innovation and Crowdsourcing: Challenges and Opportunities for Planning." Journal of Planning Literature 28: 3-18.
Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing without Organizations. New York: Penguin.
Sommer, L. and R. Cullen. 2009. "Participation 2.0: A Case Study of e- Participation within the New Zealand Government." Systems Sciences.
Surowiecki, James. 2004. The Wisdom of Crowds. New York: Little, Brown.
Tapscott, Don and A.D. Williams. 2008. Wikinomics: How Mass Collaboration Changes Everything. New York: Penguin.
Town of Richmond Hill. 2003. "Town of Richmond Hill Road Watch Kick Off." 13 April 2003. Available at http://www.richmondhill.ca/subpage.asp?pageid=news_road_watch_kick_off. Accessed 30 July 2011.
--. 2009. "Safety First in Richmond Hill ... Safety First that is!" 5 June 2009. Available at http://www.richmondhill.ca/subpage.asp?pageid=news_releases_06_05_2009. Accessed 30 July 2011.
Vaillancourt, Yves. 2012. "Third Sector and the Co-Construction of Canadian Public Policy." In New Public Governance, the Third Sector and Co-Production, edited by Victor Pestoff, Taco Brandsen and Bram Verschuere. New York: Routledge, pp. 79-100.
Vigoda-Gadot, Eran. 2006. Managing Collaboration in Public Administration: The Promise of Alliance among Governance, Citizens and Business. Westport, Conn.: Praeger.
Zients, Jeffrey D. 2010. "Memorandum for the Heads of Executive Departments and Agencies: Guidance on the Use of Challenges and Prizes to Promote Open Government." March 8, 2010. Available at http://www.whitehouse.gov/sites/default/files/omb/assets/memoranda_2010/m10-11.pdf. Accessed 30 July 2011.
Interviews
Moroz, A. Community Liaison Coordinator, City of Hamilton Public Works Department. Interview with Jordan Goldman, 28 June 2011.
Scheider, Wolfgang (retired). Founder and manager of the Lake Partner Program from 1995 to 2012. Interview with Patrice Dutil, 27 November 2013.
Homerski, P. Staff Liaison/Information Officer, City of Hamilton, Ont. Interview with Jordan Goldman, 7 July 2011.
Chau, George. Traffic Engineer, Town of Richmond Hill, interviewed by Jordan Goldman, 8 July 2011.
Patrice Dutil is Professor in the Department of Politics and Public Administration, Ryerson University, Toronto (http://patricedutil.com). The author thanks Mr. Jordan Goldman and Ms. Katie Flood for their research assistance and the Journal's reviewers and editor for their insights and commentary.