Article Information

  • Title: Will large-scale assessments raise literacy standards in Australian schools?
  • Author: Mills, Kathy A.
  • Journal: Australian Journal of Language and Literacy
  • Print ISSN: 1038-1562
  • Year of publication: 2008
  • Issue: October
  • Language: English
  • Publisher: Australian Literacy Educators' Association
  • Abstract: The 2007 Australian Government Minister for Education, Julie Bishop (1) recently made the claim: "There is evidence that standards have declined, particularly in the teaching of the fundamental areas of literacy and numeracy" (Davis, 2007, p. 2). Cloaked in the political rhetoric of "standards decline", the Australian Government announced its Realising Our Potential school package. This initiative requires that from 2009, government and non-government education authorities should improve school standards through measures that include "external assessment", reporting "literacy and numeracy" performance against "national benchmarks", and "introducing performance-based pay for teachers" (Department of Education, Science and Training, 2007, p. 1-7). How are educators to respond to calls for national standards and testing? Will accountability programs raise literacy standards in Australian schools?
  • Keywords: Literacy

Will large-scale assessments raise literacy standards in Australian schools?


Mills, Kathy A.


There is a vigorous national and international debate regarding standards of education and student performance. Expectations for literacy achievement and accountability testing are rising in preparation for a strong workforce in the global information economy (Welch & Freebody, 1993). In Australia, the debate about commonwealth and state curriculum standards is dominating the educational landscape. In an era of educational accountability, it has become increasingly common to hear calls for large-scale testing that not only measures, but improves student learning (Chudowsky & Pellegrino, 2003).

The 2007 Australian Government Minister for Education, Julie Bishop (1) recently made the claim: "There is evidence that standards have declined, particularly in the teaching of the fundamental areas of literacy and numeracy" (Davis, 2007, p. 2). Cloaked in the political rhetoric of "standards decline", the Australian Government announced its Realising Our Potential school package. This initiative requires that from 2009, government and non-government education authorities should improve school standards through measures that include "external assessment", reporting "literacy and numeracy" performance against "national benchmarks", and "introducing performance-based pay for teachers" (Department of Education, Science and Training, 2007, p. 1-7). How are educators to respond to calls for national standards and testing? Will accountability programs raise literacy standards in Australian schools?

In May, 2008, approximately one million students in Years 3, 5, 7 and 9 across 9,000 Australian schools participated in the new National Assessment Program--Literacy and Numeracy (Curriculum Corporation, 2007; Queensland Studies Authority, 2008b). This program raises the stakes for schools, teachers, and students, because the results of externally imposed testing in literacy and numeracy will influence bureaucratic decision-making.

This paper addresses the consequences of large-scale assessments, that is, assessments that are administered at the direction of users external to the classroom, such as policy-makers, as opposed to assessment used by teachers in their own classrooms (Chudowsky & Pellegrino, 2003). Frequently, large-scale assessments are tied to consequences for schools, teachers, and students (Russell & Adams, 2004, p. 1340). For example, large-scale assessments can be used to determine ability groups, student promotion to the next grade, teacher performance pay, or school funding allocations.

In recent years, the effects of externally-imposed testing on teaching and learning have been researched extensively in countries that have experienced the pressure of accountability regimes across all levels of schooling. These countries include Singapore (Gregory & Clarke, 2003), China (Liu & Teddlie, 2005), the United States of America (Firestone, Monfils, & Schorr, 2004; Lamb, 2007; Madaus & Clarke, 2001), and the United Kingdom (Gregory & Clarke, 2003; Sloane & Kelly, 2003). International research can be used to anticipate the problems that might ensue as the accountability movement intensifies in the Australian educational context. Research concerning the effects of external testing is examined in relation to "teaching to the test", curriculum narrowing, pedagogies, multimodal texts, cultural diversity, and curricular validity. This paper also provides recommendations for the empowerment of educators in their professional journey in the face of a prescriptive, national testing regime in literacy and numeracy.

Philosophical assumptions

Efficiency, economy, precision, and objectivity are central value considerations underlying large-scale, standardised testing. These values are fostered by the objectives of the business sector and the accountability movement. Behavioural techniques have been applied to business practices, such as systems management, advertising, and sales promotion, with a great deal of success. In education, however, the accountability movement has placed the responsibility for instructional outcomes unfairly on teachers, seeking to apply these business management techniques and performance-based measures to complex educational contexts (Knight, 1989).

In the context of assessing literacy, the accountability movement ignores the multiplicity of factors identified in research that are involved in language learning, such as cultural and linguistic diversity among students, differing home literacies, pedagogy, sociocultural theory, and Vygotsky's notions of learning and development (Cope & Kalantzis, 2000; Hammond, 2001; Luke & Freebody, 1997; Street, 1999; Vygotsky, 1962, 1978).

Current research illuminates important constructs that should be targets of literacy assessment. However, many of these remain untapped by standardised literacy tests. These include how students organise, process, and make meaning from texts, how they self-monitor their reading (metacognition), and how they engage in multimodal textual practices (National Academy of Education, 1997). Frequently, large-scale assessments do not assess how students generate representations to make meaning from texts, and, more importantly, how they engage in critical literacy practices that require them to analyse and interrogate texts with respect to their social construction and cultural assumptions (Comber & Simpson, 2001; Luke, Comber, & Grant, 2003; Mills, 2005).

Instead, large-scale assessments typically aim to measure basic skills that are independent of time and place, and which are separate from real life literacy experiences (Crebbin, 1992). For example, literacy levels are often measured by multiple choice comprehension questions that narrow comprehension by focusing on trivial facts (Anstey & Bull, 2004; Conteh, 2000). In contrast, literacy assessment tools should have authenticity, resembling the way literacy is used in genuine communication (Lowe & Bintz, 1992, p. 19).

Teaching to the test

Almost a century ago, in 1916, William Hatfield admonished: "An education that focuses on memorising information to ensure reaching a single benchmark is an inadequate measure of success." He expressed his concern that schools were not teaching students how to think: "Twelve years of school life has made them [students] adept at memorising, but many of them are novices in thinking ..." (Beers, 2006, p. 7).

Today, the accountability movement still places inordinate value on test scores, rather than on the achievement that the scores are intended to represent (Gunzenhauser, 2003). This frequently leads to the practice of "teaching to the test", in which the content of the test is taught to the exclusion of valuable, but untested, content (Martin-Kneip, 2000). The externally imposed test becomes the implicit or explicit "blueprint for curriculum" (Garcia & Pearson, 1991, p. 269).

The insidious practice of "teaching to the test" invalidates the test results, narrows the curriculum, and replaces intellectually challenging instruction and critical thinking with rote learning (Groves, 2002; Gunzenhauser, 2003). Teachers are rewarded for "playing the game" because the results of externally mandated tests are valued by policy makers, and communicated to the principal, community, parents, and students (Gipps & Murphy, 1994). Externally imposed testing frequently creates a value conflict for teachers, who are compelled to devote large amounts of time to test-preparation practices that are inconsistent with their philosophy of education (Groves, 2002; Gunzenhauser, 2003).

This conflict of values is demonstrated in research on teachers in China under the new curriculum reforms implemented since 2004. Students' test scores contribute sixty-one percent of the teacher evaluation used for accountability, which is frequently tied to rewards and punishments for teachers (Liu & Teddlie, 2005). For example, some sections of the Chinese educational system currently use the "Elimination of the Last Persons Method" of teacher evaluation, in which teachers ranked in the bottom 5-10% are dismissed by the school. Teachers reported that, on the one hand, they are required to stimulate their students' creativity under the new curriculum guidelines; on the other hand, they are pressured to "teach to the test" to obtain higher student test scores, since the education system and teacher evaluation remain strongly exam-driven (Liu & Teddlie, 2005).

Similarly, a three-year study of mathematics testing of nine-year-olds in New Jersey concluded that the state's testing policies increased the pressure on teachers, who responded with short-term test preparation and didactic teaching methods rather than inquiry-oriented approaches that embrace the full scope of the state and national curricula (Firestone, Monfils, & Schorr, 2004). In Denver, research on the new standards-based approach demonstrated that test-taking skills and rote memorisation were emphasised, while recess, lunch, and other activities critical to the social, emotional, and physical growth of children were curtailed to make more room for drill and practice (Gratz, 2005).

Educators in Australia have also witnessed the effects of externally imposed testing as a corollary of the Literacy for All Policy (Department of Employment Education Training and Youth Affairs, 1998). Assessment in this policy is primarily directed to establishing students' control of basic skills in reading and writing. Hammond (2001) reports that school funding and prizes are tied to demonstrated outcomes, pressuring teachers to "teach to the test". Schools that do not demonstrate improved results must submit plans to address their "deficiencies". Teachers restrict their literacy programmes exclusively to basic skills in order to meet the reductive notion of literacy that underpins this agenda (Hammond, 2001, p. 172). These stakes for teachers will be raised further if the Realising Our Potential package is implemented in 2009, particularly in relation to "performance-based pay for teachers" (Department of Education Science and Training, 2007).

Research consistently demonstrates that externally imposed testing of basic literacy and numeracy marginalises other subjects mandated in state curricula. For example, this narrowing of the curriculum was demonstrated in a North Carolina elementary school in which the principal adopted a policy that teachers called "90/60/60". Instruction each day included 90 minutes of reading, 60 minutes of mathematics, and 60 minutes of writing--the only three areas addressed in the state's accountability testing. Marginalised subjects--including science, social studies, physical education, and the arts--competed for the remaining time in the school day or were not taught at all (Gunzenhauser & Noblit, 2001). Similarly, elementary school teachers surveyed in a Californian school district reported devoting 70-100% of instructional time to literacy and mathematics, and one third reported that they had taught no social studies or science because of basic skills test preparation (Sandholtz, Ogawa, & Scribner, 2004).

Without question, large-scale literacy assessments result in the unhelpful practice of "teaching to the test" and subsequent narrowing of the curriculum. Rather than raise literacy standards, such assessments reduce learning outcomes to a sample of monolingual, monocultural, and rule-governed forms of language (New London Group, 1996).

Pedagogy

An unintended outcome of large-scale assessments is the effect on teachers' range and use of pedagogies. Case studies of teachers who are preparing students for accountability testing demonstrate the reduction of pedagogies to direct instruction, drill and practice, and increased use of sample test items and worksheets (Beckner, 2003; Lamb, 2007; Sandholtz et al., 2004). Sandholtz, Ogawa, and Scribner (2004, pp. 1192-1193) reported a survey of elementary teachers in California who admitted minimising the use of "hands-on learning" and "thematic units", and increasing "drill and skill" and "routinisation", because of pressure to meet the district standards. Similarly, studies by Lamb (2007) and Costigan (2002) found that instead of exposing students to varied, open-ended, engaging, and stimulating teaching strategies, instructional repertoires were limited to timesaving methods, such as direct instruction and drill-and-practice worksheets.

There are several arguments against the use of direct instruction as the exclusive basis for pedagogy, despite its apparent utility in the context of external test preparation. First, learners are positioned as passive recipients--blank slates to be etched, mugs to be filled, sponges to absorb, or computers to program--rather than as active and constructive learners, individuals situated in social and cultural contexts who learn in community (New London Group, 1996).

Second, cognitive research demonstrates that the mind is not merely a processor of generalisations and decontextualised abstractions. Rather, the learning of knowledge and skills is principally situated in sociocultural settings, and contextualised in particular domains and practices (Barsalou, 1992; Eiser, 1994; Gee, 1992; Harre & Gillett, 1994; Lave & Wenger, 1991; Street, 1984; Wertsch, 1985). Literacy skills, such as reading and writing, are partly acquired experientially, since the required patterns are dependent upon, and attuned to, context (New London Group, 1996).

Third, direct instruction does not create learners who can critique both "what" and "how" they are taught in terms of historical, economic, cultural, political or value-centred interrelations. Students need to learn how to analyse knowledge and texts critically in relation to whose interests are served by their meanings. Direct instruction does not allow students to think for themselves, nor to interpret how texts work within social and cultural contexts (Kalantzis & Cope, 2000; Kalantzis & Cope, 2005). Theorists have long emphasised the need for critical pedagogies to supplement other forms of instruction (see, for example, Bigum & Green, 1993; Burbules & Callister, 1996; Peters & Lankshear, 1995; Soloway, 2000; Unsworth, 1981). However, there is now a heightened concern associated with the internet, as students have access to texts from powerful, unrestrained, and potentially harmful sources (Luke, 2000). Students need critical literacy skills to challenge and evaluate biased and distorted viewpoints, and to identify who benefits from electronic sites. With the enormous growth in the volume of textual materials, literacy skills involving selecting, reducing, organising, and evaluating reliable information are also paramount. In contrast, direct instruction encourages students to receive and recall information, rather than to interrogate information and the workings of power (Fairclough, 1992).

Fourth, direct instruction does not allow students to put knowledge into action--to transfer meanings from one context to another (New London Group, 1996). Students may be able to articulate certain facts, but may be unable to enact this knowledge reflexively in practice. For example, many students can recite phonics generalisations, but fail to apply this knowledge to accurate spelling when constructing a text for a real audience. Knowledge received through direct instruction becomes inert unless applied to a variety of authentic, natural, or real life functions in a reflective manner.

Therefore, limiting teaching strategies to direct instruction in response to externally imposed testing disregards a substantial body of research in cognitive science, social cognition, and sociocultural approaches to language learning (Barsalou, 1992; Eiser, 1994; Gee, 1992; Harre & Gillett, 1994; Lave & Wenger, 1991; Street, 1984; Wertsch, 1985). A combination of pedagogies is required for meaningful learning across a range of skills and domains of knowledge (New London Group, 1996).

Multimodal texts

Large-scale assessments have an adverse impact on the teaching and learning of digitally mediated textual practices in literacy programmes. Russell and Adams (2004) report a national survey of teachers across forty-nine states in the United States with formal testing programs, which found that teachers had decreased their instructional use of computers for writing because the state tests are paper-based.

The accountability movement has largely failed to recognise how the "twin touchstones of reading and writing" are changing with the new realities of information and communication technologies (Leu, Mallette, & Karchmer, 2001, p. 265). Revolutionary changes in iconographic, textual, and screen-based modes of language, such as electronic mail, online discussions, and Voice over Internet Protocol, require new models of literacy and assessment. Basic literacy skills confined to pencil and paper are the minimum standards of the past, not the present, and certainly not the future. Previous conceptions of literacy and the basic skills were tightly confined to writing. Those boundaries have collapsed, replaced by a multiplicity of hybrid forms of communication, including audio, visual, gestural, spatial, and linguistic modes (New London Group, 1996; Mills, 2005, 2005a, 2006, 2006a, 2006b, 2006c, 2007, 2007a, 2008).

Global trends in communication call for new forms of assessment to account for an ever-broadening range of hybrid literacies, and multimodal forms of communication. Students today will enter universities and a labour market that are fast becoming globalised. These changes point to the need for fresh approaches to assessment that incorporate multiliteracies; that is, the broadening range of multimodal texts, and increasing cultural and linguistic diversity (Kalantzis, Cope, & Fehring, 2002).

The argument against standardised pencil and paper testing is gaining momentum in the light of policies that instruct Australian teachers to address multimodal literacies. For example, in April 1999, the Ministerial Council for Education, Employment, Training, and Youth Affairs met to formulate a federal policy entitled the Adelaide Declaration on National Goals for Schooling in the Twenty-first Century (Ministerial Council for Education Employment Training and Youth Affairs, 1999). The importance of digitally mediated textual practices is embedded in these goals for Australian school students. Statement 1.6 requires that students "be confident, creative and productive users of new technologies, particularly information and communication technologies, and understand the impact of those technologies on society" (Ministerial Council for Education, Employment, Training, and Youth Affairs, 1999, p. 20).

Furthermore, syllabi across all states and territories in Australia address the need for students to design multimodal texts for a variety of social purposes (ACT Department of Education and Training, 2001; Board of Studies New South Wales, 1998; Department of Education and Training Western Australia, 2005; Department of Education and Training Tasmania, 2004; Department of Employment Education and Training Northern Territory, 2005; South Australian Department of Education and Children's Services, 2004; Victorian Curriculum and Assessment Authority, 2005; Queensland Studies Authority, 2008a). The extension of literacies to include multimodal textual practices cannot be ignored as a curricular and professional development issue for Australian teachers, providing political impetus for classroom-based literacy assessments that allow students to demonstrate their confidence in the use of new technologies for communication. Clearly, there is a need for models of literacy assessment for such new literacy contexts and policies (Leu et al., 2001).

Cultural and linguistic diversity

Accountability testing has significant, negative consequences for students who are not of the dominant, middle-class, Anglo-Saxon culture. Research in Western countries indicates that large-scale assessments increase grade retention and dropout rates. Furthermore, the self-efficacy and motivation of minority groups, including bilingual, multilingual, indigenous, and economically marginalised students, are lowered (Gipps & Murphy, 1994; Haney, 2000; Horn, 2003; LoBianco & Freebody, 1997; Madaus & Clarke, 2001). These students are often placed in low expectation tracks or ability groups based on test scores. This results in students' alienation from school, lowered expectations by school personnel, and the students' subsequent inability to enter higher-education institutions (Cummins, 1996). Literacy tests cannot benefit students from these marginalised groups unless support systems and structures are provided to ensure equity (Martin-Kneip, 2000).

Compulsory, standardised tests are generally constructed by educators from the dominant culture, with the incorrect assumption of universality; namely, that test scores have essentially the same meaning for all individuals (Gipps & Murphy, 1994). Government and education agencies represent the interests of the dominant socioeconomic groups in policies and decision-making, and their interests usually lie in the direction of the mono-lingual, one-nation-one-language model (Ricento & Burnaby, 2001).

In Australia, literacy benchmarks obscure rather than illuminate the problems of the ethnically marginalised, indigenous, and ESL students (Welch & Freebody, 1993). The British and Australian curriculum frameworks and literacy benchmarking activities have frequently failed to accommodate linguistic differences (LoBianco, 2001).

For example, while the Literacy for All Policy (Department of Employment Education Training and Youth Affairs, 1998, p. 43) claims to recognise diversity and include an equity dimension, it does not address the unique needs of ESL (English as a Second Language) students, which differ from those of native English-speaking students who require "early intervention". The 1997 Australian school literacy surveys, Mapping Literacy Achievement (Department of Employment Education Training and Youth Affairs, 1997b) and Literacy Standards in Australia (Department of Employment Education Training and Youth Affairs, 1997a), demonstrated that many ESL students performed poorly on standardised tests that were intended for the dominant, English-speaking population (Hammond, 2001). Labelling these outcomes as 'literacy failure' because ESL students have not met the national benchmarks undermines the dramatic success of many second language learners who enter the country with minimal oral and written English skills (Hammond, 2001).

Schools have played an historical role in the reproduction of social inequity, both allowing and preventing access to literacies and their associated power to confer social mobility, wealth, and professional status (Bourdieu, 1977). Literacies have been distributed unequally, based on gender, class, ethnicity, geographical isolation, disability, and combinations of these social characteristics (Kress, 1993). Extant sociocultural research indicates that the values and practices of the dominant culture are reflected in school literacy practices, while those of minority groups are silenced (Lankshear & McLaren, 1993; Luke et al., 2003). Consequently, there is an urgent need to transform the inequitable distribution of literacies through assessment programs that do not reproduce or legitimise inequities in the educational system.

Curricular validity

A significant feature of externally imposed testing is the lack of curricular validity, since these tests do not resemble the form and manner in which literacy is used within the curriculum (Garcia & Pearson, 1991). Research consistently demonstrates a disparity between the aims of compulsory, standardised assessment, and the aims of models underpinning high quality literacy curricula. This pattern is being played out in the new Australian National Literacy and Numeracy Testing. The Curriculum Corporation (2007) has been commissioned to administer the tests on behalf of all Ministers, and the Australian Education Systems Officials Committee. Strangely, this "drop-out-of-the-sky" testing program has been authorised in the absence of national English and Mathematics curricula. Consequently, students will be administered tasks that are targeted to a grade level, but which fail to connect with students' specific educational experiences. The Curriculum Corporation (2007, p. 2) website stated:
 For students, teachers, and schools, there will be little change
 from current arrangements. The main difference being [will be] that
 students will be sitting identical tests for each year level over
 the same week right across Australia.


Assessment should not be an isolated entity. Rather, it should be integral to instruction (Chudowsky & Pellegrino, 2003, p. 81). The national testing program reflects the common mismatch between the quality, research-based outcomes that are described in policy documents such as state syllabi, and the inadequate learning that results from standardised testing (Willis, 1994). The national literacy and numeracy tests will not have curricular validity with syllabi in each state, yet the results will receive unwarranted attention at all levels of the system:
 The results from these national literacy and numeracy tests will
 provide an important measure of how Australian schools and students
 are performing in the areas of reading, writing, spelling, and
 numeracy. The results from the assessment program will be used for
 individual student reporting to parents, school reporting to their
 communities, and aggregate reporting by States and Territories
 against national standards (Curriculum Corporation, 2007).


These standards-based assessments will be used across the nation to make poorly substantiated judgements about students, teachers, schools, and States and Territories. These decisions undermine legitimate improvements that could have been realised by rich assessment programs that triangulate multiple indicators and are consistent with the philosophy and aims of the curriculum (Gratz, 2005).

With respect to curricular validity, the national testing program marks a regression from existing state testing programs that are tied to the English and Mathematics syllabi. For example, data from the Queensland Years 3, 5, and 7 Tests in Aspects of Literacy, implemented from 2003-2007 (Queensland Studies Authority, 2002), were used to inform revisions to the Queensland Years 1-10 English Syllabus, both of which are governed by the same curriculum body (Queensland Studies Authority, 2007). Although externally imposed, this testing program retains some instructional relevance, since the tests are intended to support the learning process (Soodak, 2000, p. 263). Assessment and learning should occur simultaneously, in a dialectical relationship in which learning informs assessment measures and vice versa.

Conclusion and recommendations

Will large-scale assessments raise literacy standards in Australian schools? Clearly, research demonstrates that this is not the case. Rather, externally imposed assessments are tied to a range of adverse outcomes for the lives of those with the most at stake in the multifaceted process of teaching and learning.

In the current Australian context, it is impossible for teachers to ignore the pressures of standardised literacy and numeracy testing. State and commonwealth policy-makers have the primary responsibility to take into account international research that highlights the negative effects of accountability testing (Gunzenhauser, 2003). In addition, educators must not leave compulsory testing by the bureaucratic centre of the system unopposed. Quantitative, top-down models of assessment should not replace qualitative, "bottom-up" models of assessment to make instructional decisions (Fehring, 2006).

Teachers are in the "contact zone" with students and have the power to negotiate the challenges of externally controlled accountability systems (Adelman, Reyna, Collins, & Taylor, 1999). Educators who are confronted with compulsory, standardised tests can avoid "teaching to the test" and "curriculum narrowing", while ensuring that they implement a wide range of pedagogies, including critical literacy and multiliteracies. Teachers can use qualitative, curriculum-embedded measures produced during the course of instruction, and designed to support student learning (Anstey & Bull, 2004; Soodak, 2000). Valid assessment programs should always involve the triangulation of multiple methods that occur in a variety of authentic literacy contexts, using clearly defined rubrics (Horn, 2003; Martin-Kneip, 2000; Soodak, 2000).

Most importantly, educators and researchers need to take an active part in the political struggle to define the Australian future of literacy teaching and learning. Principals are key figures in maintaining this dialogue, and in many contexts, still have the decision-making power to lower the stakes of standardised testing for teachers and students. For example, principals can oppose performance pay for teachers and grade retention policies for under-performing students, while emphasising assessment strategies that support the values and philosophy of the school.

Ultimately, decision-making about what constitutes appropriate literacy teaching and learning outcomes needs to be transferred from political, external judges to literacy teachers and scholarly researchers (LoBianco, 2001). Despite the pressures from externally imposed testing, it is imperative that teachers and school administrators remain engaged in the standards debate. Through the voices of educators and researchers, national accountability and assessment programs will be exposed as political panaceas for "literacy deficits" that attempt to use teachers as scapegoats for historically inequitable educational systems.

References

ACT Department of Education and Training (2000). English Language Curriculum Framework. Retrieved 18th June 2008, from http://www.curriculum.edu.au/ccsite/cc_curriculum_frameworks_and_syllabuses,20029.html

Adelman, H., Reyna, C., Collins, R.O.J., & Taylor, L. (1999). Fundamental concerns about policy for addressing barriers to student learning. Reading and Writing Quarterly, 15, 327-349.

Anstey, M., & Bull, G. (2004). The literacy labyrinth (2nd Ed.). Frenchs Forest, NSW: Pearson.

Barsalou, L. (1992). Cognitive psychology: An overview for cognitive scientists. Hillsdale: Lawrence Erlbaum.

Beckner, G. (2003). AAE teachers weigh in on NCLB. Educational Matters: Promoting New Standards of Professionalism and Educational Enrichment, 9(4), 1-8.

Beers, K. (2006). The legacy of leadership. Voices from the Middle, 14(1), 7.

Bigum, C., & Green, B. (1993). New kids on the chopping block? Media culture and educational practice in new times. The Professional Reading Guide, 15(1), 1-4.

Board of Studies New South Wales (2007). K-6 English Syllabus. Retrieved 18th June 2008, from http://k6.boardofstudies.nsw.edu.au/english/english_index.html

Bourdieu, P. (1977). Outline of a theory of practice. Cambridge: Cambridge Uni Press.

Burbules, N., & Callister, T. (1996). Knowledge at the cross-roads: Some alternative futures of hypertext learning environments. Educational Theory, 46(1), 23-50.

Chudowsky, N., & Pellegrino, J. W. (2003). Large-scale assessments that support learning: What will it take? Theory into Practice: The Impact of High-Stakes Testing, 42(1), 75-83.

Comber, B., & Simpson, A. (Eds.). (2001). Negotiating critical literacies in classrooms. Mahwah, NJ: Erlbaum Associates.

Conteh, J. (2000). Multilingual classrooms, standards and quality. Language Education, 14(1), 1-17.

Cope, B., & Kalantzis, M. (2000). Multiliteracies: Literacy learning and the design of social futures. South Yarra, Vic: Macmillan.

Costigan, A. (2002). Teaching the culture of high-stakes testing: Listening to new teachers. The Journal of the Association of Teacher Educators, 23(4), 28-34.

Crebbin, W. (1992). A political issue. In C. Bouffler (Ed.), Literacy evaluation: Issues and practicalities (pp. 7-11). Newtown, NSW: PETA.

Cummins, J. (1996). Negotiating identities: Education for empowerment in a diverse society. Ontario, CA: California Association for Bilingual Education.

Curriculum Corporation (2007). National literacy and numeracy testing 2008. Retrieved 18 June 2008, from http://www.curriculum.edu.au/ccsite/default.asp?id=19875

Davis, M. (2007, February 7). Bishop shows hand on teachers' pay. Sydney Morning Herald. Retrieved 19th September 2007, from http://www.smh.com.au

Department of Education and Training Western Australia (2005). Outcomes and standards framework: English. Retrieved 18th June 2008, from http://www.eddept.wa.edu.au/curriculum/CIP2/docs/English%20OSF.pdf

Department of Education Science and Training (2007). New reforms to improve school standards. Retrieved 18 June 2008, from http://www.dest.gov.au/ministers/bishop/budget07/bud20_07.htm

Department of Education and Training Tasmania (2007). English-literacy K-10 syllabus and support materials. Retrieved 18 June, 2008, from http://resources.education.tas.gov.au/item/edres/e0c82c3e-a59b-b515-3a46-70b1aa032234/1/syl-elall.pdf

Department of Employment Education and Training Northern Territory (2005). Northern territory curriculum framework: English learning area. Retrieved 18 June 2006, from http://www.deet.nt.gov.au/education/teaching_and_learning/curriculum/ntcf/docs/learning_areas_english.pdf

Department of Employment Education Training and Youth Affairs (1997a). Literacy standards in Australia: Commonwealth of Australia. Canberra: DEETYA.

Department of Employment Education Training and Youth Affairs (1997b). Mapping literacy achievement results of the 1996 national school English literacy survey. Canberra: DEETYA.

Department of Employment Education Training and Youth Affairs (1998). Literacy for all: The challenge for Australian schools, Commonwealth literacy policies for Australian schools (Australian school monograph series 1/1998). Canberra: DEETYA.

Eiser, J. (1994). Attitudes, chaos, and the connectionist mind. Oxford: Basil Blackwell.

Fairclough, N. (1992). Critical language awareness. London: Longman.

Fehring, H. (2006). How do we know what we know? Understanding literacy achievement in action. Practically Primary, 11(2), 4-11.

Firestone, W. A., Monfils, L., & Schorr, R.Y. (2004). Test preparation in New Jersey: Inquiry-oriented and didactic responses. Assessment in Education, 11(1), 67.

Garcia, G., & Pearson, P. (1991). The role of assessment in a diverse society. In E. Hiebert (Ed.), Literacy for a diverse society (pp. 253-278). New York: Teachers College Press.

Gipps, C. (1994). Beyond testing: Towards a theory of educational assessment. London: Falmer Press.

Gipps, C., & Murphy, P. (1994). A fair test? Assessment, achievement and equity. Philadelphia: Open University Press.

Gratz, D.B. (2005). Lessons from Denver: The pay for performance pilot. Phi Delta Kappan, 86(8), 569.

Gregory, K., & Clarke, M. (2003). High-stakes assessment in England and Singapore. Theory into Practice, 42(1), 66-74.

Groves, P. (2002). "Doesn't it feel morbid here?" High stakes testing and the widening of the equity gap. Educational Foundations, 16(2), 15-31.

Gunzenhauser, M. (2003). High-stakes testing and the default philosophy of education. Theory into Practice, 42(1), 51-58.

Gunzenhauser, M., & Noblit, G. (2001). Reforming with the arts: Creativity in A+ classrooms and schools. Winston-Salem, NC: Thomas S Kenan Institute for the Arts.

Hammond, J. (2001). Literacies in school education in Australia: Disjunctions between policy and research. Language and Education, 15(2 & 3), 162-173.

Haney, W. (2000). The myth of the Texas miracle. Educational Policy Analysis Archives, 8(41). Retrieved 18 June 2008, from http://epaa.asu.edu/epaa/v8n41/

Harre, R., & Gillett, G. (1994). The discursive mind. Beverly Hills: Sage.

Horn, C. (2003). High-stakes testing and students: Stopping or perpetuating a cycle of failure? Theory into Practice, 42(1), 30-42.

Kalantzis, M., & Cope, B. (2000). A multiliteracies pedagogy: A pedagogical supplement. In B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures (pp. 239-248). South Yarra, Vic: Macmillan.

Kalantzis, M., & Cope, B. (Eds.) (2005). Learning by design. Melbourne, Vic: Victorian Schools Innovation Commission and Common Ground.

Kalantzis, M., Cope, B., & Fehring, H. (2002). Multiliteracies: Teaching and learning in the new communications environment. Primary English Notes, 133, 1-8.

Knight, G. (1989). Philosophy of education (2nd Ed.). Berrien Springs, Michigan: Andrews University Press.

Kress, G. (1993). Language as social practice. In G. Kress (Ed.), Communication and culture (pp. 71-130). Kensington, NSW: New South Wales University Press.

Lamb, J. (2007). The testing culture in one rural Mississippi school. The High School Journal, 90(4), 32-43.

Lankshear, C., & McLaren, P. (1993). Introduction. In C. Lankshear, P. Mclaren & M. Greene (Eds.), Critical literacy: Politics, praxis, and the postmodern (pp. 1-56). New York: State University of New York Press.

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge: Cambridge University Press.

Leu, D.J., Mallette, M.H., & Karchmer, R.A. (2001). New realities, new literacies, and new technologies: Redefining the agenda for literacy research. Reading Research and Instruction, 40(3), 265-272.

Liu, S., & Teddlie, C. (2005). A follow-up study on teacher evaluation in China: Historical analysis and latest trends. Journal of Personnel Evaluation in Education, 18(4), 253.

LoBianco, J. (2001). Policy literacy. Language and Education, 15(2 & 3), 212-227.

LoBianco, J. & Freebody, P. (1997). Australian literacies: Informing National policy on literacy education. Belconnen, ACT: Language Australia.

Lowe, K., & Bintz, B. (1992). Understanding evaluation. In C. Bouffler (Ed.), Literacy evaluation (pp. 28-40). Newtown, NSW: PETA.

Luke, A., Comber, B., & Grant, H. (2003). Critical literacies and cultural studies. In M. Anstey & G. Bull (Eds.), The literacy lexicon (2nd Ed., pp. 15-35). Frenchs Forest, NSW: Pearson.

Luke, A., & Freebody, P. (1997). Shaping the social practices of reading. In S. Muspratt, A. Luke & P. Freebody (Eds.), Constructing critical literacies: Teaching and learning textual practice (pp. 185-225). Sydney: Allen & Unwin.

Luke, C. (2000). Cyber-schooling and technological change. In B. Cope & M. Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures (pp. 69-91). South Yarra, Vic: Macmillan.

Madaus, G., & Clarke, M. (2001). The impact of high-stakes testing on minority students. In M. Kornhaber & G. Orfield (Eds.), Raising standards or raising barriers: Inequality and high stakes testing in public education (pp. 85-106). Chicago: University of Chicago Press.

Martin-Kneip, G. (2000). Standards, feedback, and diversified assessment: Addressing equity issues at the classroom level. Reading and Writing Quarterly, 16, 239-256.

Mills, K. (2005). Deconstructing binary oppositions in literacy discourse and pedagogy. Australian Journal of Language and Literacy, 34(2), 300-302.

Mills, K. (2005a). Multiliteracies: Remnant discourses and pedagogies. Paper presented at the Australian Literacy Educator's Association/ Australian Association of the Teaching of English National Conference 2005: Pleasure, Passion, Provocation, Broadbeach, July.

Mills, K. (2006). Critical framing in multiliteracies. Paper presented at the Australian Literacy Educator's Association/ Australian Association of the Teaching of English National Conference 2006: Voices, Vibes, Visions, Darwin, July.

Mills, K. (2006a). Mr. Travelling-at-will Ted Doyle: Discourses in a multiliteracies classroom. Australian Journal of Language and Literacy, 28(2), 132-149.

Mills, K. (2006b). We've been wastin' a whole million watchin' her doin' her shoes: Situated practice within a pedagogy of multiliteracies. The Australian Educational Researcher, 33(3), 13-34.

Mills, K. (2006c). Discovering design possibilities through a pedagogy of multiliteracies. Journal of Learning Design, 1(3), 61-72.

Mills, K. (2007). Have you seen Lord of the Rings? Power, pedagogy and discourses in a multiliteracies classroom. Journal of Language and Identity in Education, 6(3), 221-241.

Mills, K. (2007a). Access to multiliteracies: A critical ethnography. Ethnography and Education, 2(3), 305-325.

Mills, K. (2008). Transformed practice in a pedagogy of multiliteracies. Pedagogies: An International Journal, 3(2), 109-128.

Ministerial Council for Education Employment Training and Youth Affairs (1999, 28th July). The Adelaide declaration on the national goals for schooling in the twenty-first century. Retrieved 29th December, 2007, from http://www.mceetya.edu.au/mceetya/nationalgoals/natgoals.htm

National Academy of Education (1997). Assessment in transition: Monitoring the nation's educational progress. Stanford, CA: National Academy of Education.

New London Group (1996). A pedagogy of multiliteracies: Designing social futures. Harvard Educational Review, 66(1), 60-92.

Peters, M., & Lankshear, C. (1995). Critical literacy and digital texts. Educational Theory, 45(4), 51-70.

Queensland Studies Authority (2002). Queensland Year 3, 5, and 7 Tests in Aspects of Literacy and Numeracy. Brisbane: Queensland Government.

Queensland Studies Authority (2007). Years 1-10 English Syllabus. Brisbane: Queensland Government.

Queensland Studies Authority. (2008a). English Essential Learnings. Retrieved 18th June, 2008, from http://www.qsa.qld.edu.au/assessment/3160.html

Queensland Studies Authority (2008b). Year 3, 5, 7 and 9 Tests: General information. Retrieved 18th June 2008, from http://www.qsa.qld.edu.au/assessment/585.html

Ricento, R., & Burnaby, B. (2001). Language and politics in the United States and Canada: Myths and realities. Language and Education, 15(1), 71-76.

Russell, M., & Adams, L. (2004). Instructional uses of computers for writing: The effect of state testing programs. Teachers College Record, 106(6), 1332-1357.

Sandholtz, J., Ogawa, R., & Scribner, S. (2004). Standards gaps: Unintended consequence of local standards-based reform. Teachers College Record, 106(6), 1177-1202.

Sloane, F.C., & Kelly, A.E. (2003). Issues in high-stakes testing programs. Theory into Practice, 42(1), 12.

Soloway, E. (2000). Teachers and technology: Easing the way. Association for Computing Machinery. Communications of the ACM, 43(6), 23.

Soodak, L. (2000). Performance assessment: Exploring issues of equity and fairness. Reading and Writing Quarterly, 16, 175-178.

South Australian Department of Education and Children's Services (2004). South Australian Curriculum Standards and Accountability Framework: Essential Learnings: Literacy. Retrieved 18th June, 2008, from http://www.sacsa.sa.edu.au/index_fsrc.asp?t=EL

Street, B. (1984). Literacy in theory and practice. Cambridge: Cambridge University Press.

Street, B. (1999). The meaning of literacy. In D. Wagner, R. Venezky & B. Street (Eds.), Literacy: An international handbook (pp. 34-40). Boulder, Colorado: Westview Press.

Unsworth, L. (1981). Tests, assessment and evaluation in reading. Paper presented at the Proceedings of the second Milperra AAE reading centre symposium, Sydney: Milperra College of Advanced Education.

Victorian Curriculum and Assessment Authority (2005). P-10 Curriculum Standards: English. Retrieved 18th June 2008, from http://vels.vcaa.vic.edu.au/essential/discipline/english/index.html

Vygotsky, L. (1962). Thought and language. Cambridge, MA: MIT Press.

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. London: Harvard University Press.

Welch, A., & Freebody, P. (1993). Individualisation and debates in Australia in current literacy. In A. Welch & P. Freebody (Eds.), Knowledge, culture and power: International perspectives on literacy as policy and practice (pp. 209-232). London: The Falmer Press.

Wertsch, J. (1985). Culture, communication, and cognition: Vygotskian perspectives. Cambridge: Cambridge University Press.

Willis, D. (1994). Learning and assessment: Exposing the inconsistencies of theory and practice. In C. Bouffler (Ed.), Literacy evaluation (pp. 28-40). NewTown: PETA.

Kathy A. Mills

Christian Heritage College, Coorparoo, Australia

(1) In December 2007, there was a change of federal government from the Liberal-National Coalition to the Australian Labor Party. Julia Gillard became the new Minister for Education and Deputy Prime Minister.