NAPLAN data on writing: a picture of accelerating negative change.
Wyatt-Smith, Claire ; Jackson, Christine
Introduction
Australia's NAPLAN has become what White (2013) has
characterised as 'a political football' (p. 1) and a
multimillion dollar investment in education. More than a decade after
the country's first move to large scale standardised testing,
NAPLAN investment has generated national longitudinal data across Years
3-9 which offers a wide range of opportunities to examine educational
outcomes across successive phases of education. The measurement goal of
NAPLAN has come to the fore in schooling policy, research, and classroom
practice, gaining widespread public attention through popular media.
Currently, however, it remains a largely untapped educational resource
for informing improvement at classroom level. Indeed, little is known
about how teachers stitch together writing instruction for NAPLAN
assessment purposes and their wider program of writing instruction
within and across year levels. Similarly, there are no large scale
research studies of the consistencies (and inconsistencies) between
individual students' NAPLAN reported literacy results in writing
and school reports of achievement. The challenge is to explore and
optimise the potential of NAPLAN assessment instruments and the data,
both within a year and longitudinally, for informing improvement
strategies. To this end, this paper takes a concentrated focus on
reported NAPLAN writing results and reveals that concerning percentages
of students are falling below the benchmark.
The emerging interest in data
A hallmark of education in the past three decades has been the
strengthening commitment to census or whole cohort standardised testing,
as distinct from sample testing. In this period successive governments
at state and federal levels have demonstrated insatiable appetites for
data as part of moves towards evidence-based education policy. This
development had been evident earlier in several countries including the
United States, the United Kingdom, Japan, Singapore and China. At the
time of writing, NAPLAN results across Australia are widely understood
in policy, schools and the wider community as central to public
accountability. NAPLAN has been normalised to the point where it is
accepted as a fixed feature of schooling policy, even though critical
discussions about its impact on, and more specifically, benefits to
students and teachers have been limited. For example, it is not uncommon
to hear teachers talking about 'teaching The NAPLAN' as though
the assessment had its own prescribed curriculum content, knowledge and
skills.
While debates continue as to whether NAPLAN assessments in Years 3,
5, 7 and 9 are properly characterised as low or high stakes, it is clear
that they have generated high levels of public engagement and discussion
about intra- and inter-state and inter-school performance. In part, this
engagement can be traced back to the ambitious goal for literacy and
numeracy that all Australian Education Ministers agreed to at a meeting
of the Ministerial Council on Education, Employment, Training and Youth
Affairs (MCEETYA): 'That every child leaving primary school should
be numerate and be able to read, write and spell at an appropriate
level' and the related sub-goal:
'That every child commencing school from 1998 will achieve a
minimum acceptable literacy and numeracy standard within four
years' (Department of Employment, Education, Training and Youth
Affairs, 1998a).
This initiative was consistent with the belief that children in
Australian schools should be entitled to learn to write and read and
further, that the teaching of writing and reading is core business in
schooling. This is not a new claim. While debates about effective
strategies for teaching literacy have spanned several decades, there is
widespread recognition in research, policy and practice that, 'The
teaching of writing, like that of reading, is a fundamental
responsibility of schooling, and what teachers do is very important in
ensuring that children learn to write, as well as read' (Christie
& Derewianka, 2008, p. 214).
Looking beyond the field of literacy education to the field of
assessment and evaluation, there is an already strong and growing body
of assessment research on standards-driven education reform including
large-scale standardised testing (Lingard, Thompson & Sellar, 2016).
Internationally, the research shows how this combination--standards and
standardised testing--applied to various knowledge domains, and in the
case of Australia, literacy and numeracy, has provided powerful policy
levers for reform (Broadfoot, 2014; Klenowski & Wyatt-Smith, 2012).
Further, the inevitable competitive dimension of testing and reporting
has fuelled an increasingly strong move in Australia and elsewhere to the
marketisation of education. This is evident, for example, in how schools
and education sectors showcase test scores in various ways, including on
websites, billboards and other marketing avenues, promoting literacy and
numeracy results as proxies for school quality.
Interestingly, the strengthened testing movement has occurred at
the same time as attention has turned to issues of teacher quality and
quality teaching. In the fields of educational assessment and literacy,
the teacher has long been recognised as playing the central role in
quality learning and assessment practice (Black, 2014). However,
classroom assessment under the control of the teacher has typically been
regarded as falling in the domain of formative assessment or assessment
for improving learning (Earl, 2013). Learning and achievement data that
teachers routinely collect have long been distinguished from the
achievement data generated from standardised testing. Broadly speaking,
the latter is typically used by education systems and sectors for
measurement and accountability purposes. This longstanding distinction
reflects system needs for high reliability in publicly reported
grades.
In what follows we propose that it is timely to go beyond the
traditional distinction between the goals of measurement and improvement
to see the two, in the context of NAPLAN, as potentially coming together
through a sharpened focus on standards inclusive of benchmarks.
Consideration of these matters is timely: in 2015, a second cohort of
Year 9 students had experienced four episodes of testing across Years 3,
5, 7 and 9. We can now look to reported data to examine what they
reveal about the patterns of literacy capabilities of young
Australians.
To this end, in the following section we look to the genesis of
standardised literacy testing in education policy in Australia as a
backdrop for considering later in the paper the data on writing in
particular. Underpinning the discussion is the view that educational
outcomes in literacy are of high priority, connected to the broader
issues of students' equity of opportunity in schooling and long
term educational trajectories into employment and civic contribution.
NAPLAN in historical perspective
In the early 1990s, the historical precursor of NAPLAN took the
form of state-based tests. The original, stated purposes of the
state-based testing program were twofold: early diagnosis of students at
educational risk of not making satisfactory progress, and timely
intervention for improvement. National goals and standards were
developed in Australia through two significant agreements between the
federal Minister and all state and territory Ministers for Education:
the Hobart Declaration on Schooling (MCEETYA, 1989) and the Adelaide
Declaration on National Goals for the Twenty-First Century (MCEETYA,
1999). To support the achievement of these goals, the Ministers endorsed
the country's first National Literacy and Numeracy Plan which
included:
1. Assessment of all students by their teachers as early as
possible in the first years of schooling;
2. Early intervention strategies for those students identified as
having difficulty;
3. The development of agreed benchmarks for Years 3, 5, 7, and 9
against which all children's achievement in these years can be
measured;
4. The measurement of students' progress against these
benchmarks using rigorous state-based assessment procedures, with all
Year 3 students being assessed against the benchmarks from 1998 onwards,
and against the Year 5 benchmark as soon as possible;
5. Progress towards national reporting on student achievement
against the benchmarks, with reporting commencing in 1999 within the
framework of the annual National Report on Schooling in Australia; and
6. Professional development for teachers to support the key
elements of the Plan. (Department of Employment, Education, Training and
Youth Affairs, 1998b, p. 10).
The emphases of the Declarations and the National Plan were firmly
on learning and equity of opportunity through early diagnosis, ongoing
monitoring, improvement of student learning and teachers'
professional development. This reflected the policy recognition of
literacy and numeracy as central to student learning in the curriculum,
with the early years focusing on oracy and learning to read and write,
and in the later years, reading and writing to learn. Literacy and
numeracy were understood as the means by which young people accessed the
curriculum; the corollary being that literacy and numeracy could present
powerful barriers to young people's academic success. Further, the
promise of the National Plan, expressed in Literacy for All: The
Challenge for Australian Schools (Department of Employment, Education,
Training and Youth Affairs [DEETYA], 1998a) was one of harnessing the
power of assessment for diagnosing learning needs and informing learning
improvement approaches. Teachers' assessment capabilities and in
particular, their abilities or 'know-how' in using classroom
evidence and assessment data were identified as central to this and as
such, areas for professional development.
Another focus of the National Plan was the endorsement of
Australian literacy benchmarks by the Ministerial Council on Education,
Employment, Training and Youth Affairs (MCEETYA) in two stages, April
1998 and April 2000, for Years 3, 5 and 7. This was the result of an
agreement in July 1996 by the then Ministers for Education in the
States, the Territories and the Commonwealth that the development of
national benchmarks was for 'use in reporting minimum acceptable
standards of literacy and numeracy achievement in support of the
national goal' (Wilson, 2000, p. iii). The intended utility of the
benchmarks was to inform teachers about 'the portable outcomes that
should be available from their pedagogies' (Freebody, in
Wyatt-Smith, 1998, p. 26).
DEETYA (1998a) claimed that 'the benchmark at each year level
is intended to set a minimum acceptable standard: a critical level of
literacy and numeracy without which a student will have difficulty in
making sufficient progress at school' (p. 23). Officially, the
literacy benchmarks for Years 3, 5, and 7, published in two stages in
1998 and 2000, have informed the NAPLAN testing benchmarks or
'National Minimum Standards' (NMS). Undoubtedly, these are
influential with parents, schools and students today. At the time of
writing, however, a Year 9 benchmark has not been published. As
discussed later in this paper, this missing benchmark means that the
teaching workforce and parents and students do not have an external
reference point for considering quality at this key transition in
schooling.
The move away from state-based cohort literacy and numeracy
standardised testing to national cohort testing in 2008 occurred without
a concurrent transfer of the focus on professional development. The
preparedness of the teaching profession to use reported test results
data for informing teaching, diagnosing learning needs and developing
improvement strategies continues to be problematic. Some Australian
researchers have pointed to the negative impact of NAPLAN. This includes
concentrated periods of instructional time being given to teaching to
the test and over-practice of tests and items (Cumming, Wyatt-Smith
& Colbert, 2016; Comber & Cormack, 2013). According to Comber
(2012), the impact can be intensified in 'low-socioeconomic and
culturally diverse primary school communities' (p. 133). However,
the prospect was there from the earliest days of state-based testing for
the tests to model quality assessment for the teaching workforce and
further, be a basis for generating exemplars to inform how teachers
assess and score writing. This has been the approach taken in New
Zealand, for example, in sample testing in the National Education
Monitoring Program (NEMP) where schools and teachers are strongly
engaged with test administration as professional learning through to
marking and using the tests for informing learning and teaching
improvement.
In the last decade Australian Territories and States have developed
data analytic systems to support teachers' understanding of the NAPLAN
data (for example, SMART--New South Wales; SunLANDA--Queensland;
EARS--Western Australia). At the time of writing, however, there are no
available exemplars showing features of writing at or above the NMS to
illustrate writing benchmarks. This is the case even though assessment
research points to the utility of exemplars to illustrate standards
(Sadler, 1989; Wyatt-Smith & Klenowski, 2014). The provision of
exemplars, and especially those drawn directly from student samples,
could be useful as part of a strategy to drive improvement by showing
teachers, students and parents the range of writing standards
across year levels.
The following discussion considers the promised potential of the
country's first National Plan, National Minimum Standards and
NAPLAN more generally and examines the reported data on writing to
determine if the potential has been realised.
The NAPLAN Writing domain
It is important to acknowledge the value and uniqueness of the
Australian writing assessment that is NAPLAN. Currently there is no
recognised international standardised writing assessment against which
Australia can benchmark students' levels of writing competence. As
acknowledged by Hamp-Lyons (2004) and Christie and Derewianka (2008),
the Programme for International Student Assessment (PISA), Trends in
International Mathematics and Science Study (TIMSS), and Progress in
International Reading Literacy Study (PIRLS) assess the domains of
reading literacy, mathematics and science literacy from a global
perspective, but 'writing', understood from a basic skills and
genre-based perspective, is not assessed by PISA, PIRLS or TIMSS. In
terms of an important diagnostic tool, the NAPLAN writing assessment
provides one of the few large-scale system opportunities to explore
Australian students' capabilities in this domain.
Various countries including the United States grapple with how to
streamline the testing of writing. Jeffery's (2009)
comment on a call to action, based on persistent discrepancies between
state and national assessment results, highlights the difficulty of
streamlining a national writing assessment that benchmarks standards of
writing proficiency. Jeffery's study was based on the need for
greater uniformity in US academic standards. The concern was that the
variability and complexity across state writing assessments resulted in
opaque explanations of how descriptions of quality (for example, in the
form of standards or rubrics) relate to writing proficiency definitions.
This variability and complexity made it difficult to 'determine
exactly what a student has to do to produce a 'proficient'
response' (Jeffery, 2009, p. 16). As suggested earlier, Australia
faces the same challenges in formulating and promulgating clear
expectations of quality.
In addressing challenges in communicating expectations of quality,
Australia has sought to streamline the national writing assessment
through adopting what can be described as an analytic approach to
formulating assessment criteria (Diederich, 1974). Broadly speaking, a
main focus in this approach is the identification of specific criteria,
with appropriate weighting or scores attached, so that ultimately in the
scoring, the total equals the sum of the parts. However, the current
lack of transparency when it comes to benchmarks hampers teachers'
efforts to link benchmark standards, curriculum and pedagogy, and
achievement standards. This has led some states and territories to
map NAPLAN results against A-E (or other five-point) scales for school
reporting, to examine patterns between the two. Anecdotal evidence
suggests a need for systematic investigation of these patterns between
the two sets of reported results and at a conceptual level, the two
reporting frameworks (A-E and benchmark standards). This is vital given
that achievement standards in the Australian Curriculum are not defined
beyond the year level expectation generally accepted as constituting
'C'.
NAPLAN writing assessment has two important features. First, it is
informed by an internationally respected genre-based language theory
(Halliday & Hasan, 1985). Central to this theory is an interest in
language as integrally connected to culture and, specifically, language
in use. From this theory followed a functional approach to literacy,
including the work led by Australian applied linguistics scholars such
as Martin, Rothery and Christie (Cope & Kalantzis, 1993; Cope et al.,
1993) and more recently applied in classrooms (Love & Humphrey,
2012). Second, its design is criterion-referenced: every mark a student
scores has a particular meaning on a marking scale, as
mentioned above. This approach to NAPLAN design through to scoring means
that students, teachers and parents can have access to information about
individual scores with considerable diagnostic potential.
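By way of illustration, the short sketch below shows the analytic logic in which the total equals the sum of the parts. It is a minimal illustration only: the criterion names and maximum marks are our assumptions, loosely modelled on the ten-criterion rubric discussed later in this paper, and are not ACARA's official marking specification.

```python
# Minimal sketch of analytic, criterion-referenced scoring:
# the total score is the sum of the separate criterion marks.
# Criterion names and maximum marks below are illustrative
# assumptions, not ACARA's official specification.

CRITERIA_MAX = {
    'audience': 6, 'text_structure': 4, 'ideas': 5,
    'persuasive_devices': 4, 'vocabulary': 5, 'cohesion': 4,
    'paragraphing': 2, 'sentence_structure': 6,
    'punctuation': 5, 'spelling': 6,
}

def total_score(marks: dict) -> int:
    """Check each criterion mark against its maximum, then sum."""
    for criterion, mark in marks.items():
        maximum = CRITERIA_MAX[criterion]
        if not 0 <= mark <= maximum:
            raise ValueError(f'{criterion}: {mark} outside 0-{maximum}')
    return sum(marks.values())

hypothetical_script = {c: 2 for c in CRITERIA_MAX}  # invented marks
print(total_score(hypothetical_script), 'of', sum(CRITERIA_MAX.values()))
```

Because every mark carries a criterion-specific meaning, a score profile of this kind, rather than the total alone, is what carries the diagnostic information referred to above.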
It is well documented that historically, one of the significant
barriers to realising the diagnostic potential of NAPLAN results in
general, and writing results in particular, is the return of results
late in the school year. The limited opportunity for schools to use the
results has been acknowledged by the Australian Government in response
to the Senate Education and Employment References Committee Report
(Department of Education and Training, 2014). The Government agreed in
principle that 'the quick turnaround of test results should receive
the highest priority in the design of NAPLAN Online' (p. 3) and
accordingly, it is to be implemented from 2017 over a two-to-three year
period. The government focus on the changed reporting schedule is
arguably critical if there are to be opportunities for schools to
utilise returned data to inform student learning at individual, group
and cohort levels. The change to earlier reporting brings potential
benefit to schools, teachers, parents and students though its impact
currently remains unknown.
The following discussion draws directly on the 2015 NAPLAN National
Report published by ACARA of the results for the writing domain across
all Australian states and territories. The focus starts with the 2015
results across Years 3, 5, 7 and 9, and then attention turns to the 2015
writing results against the reading results. Finally, the analysis
extends to the 2009 cohort starting with their Year 3 writing results
through to their 2015 Year 9 writing results, to consider the
performance pattern in writing over time for this cohort.
What the data reveal about writing
The report of the national analysis of NAPLAN results is released
via the National Assessment Program (NAP) arm of the ACARA website,
http://www.nap.edu.au/results-and-reports/national-reports.html. The
first stage of the NAPLAN summary information is released in August each
year prior to the distribution of reports to parents. The full National
Report is released later in the year. It supersedes the preliminary
report as it contains granular detail such as: gender, Indigenous
status, language background other than English, parental occupation,
parental education, and location (metropolitan, provincial, remote and
very remote), and shows results at each year level and for each domain
of the test. The National Report is derived from data on a significant
proportion of the Australian student population.
Table 1 below presents, for each state and territory and for
Australia, the percentage of participating students at each reported
level in the 2015 NAPLAN writing test, drawn from the 2015 National Report.
Of those students who participated in the NAPLAN writing test, the
percentages are presented in relation to three levels or national
standards: 'Below National Minimum Standard', 'At
National Minimum Standard' or 'Above National Minimum
Standard'. The official stance of ACARA (2015) in its reporting
of results is that 'exempt students are deemed not to have met the
national minimum standard' and therefore these students contribute
to the percentage below NMS.
The information in Table 1 and Table 2 is for 2015 only. It does
not represent a longitudinal data set tracking a cohort. Readers are
also advised that the percentages are represented against band levels
related to the targeted testing years. As Table 1 shows, the NMS is
Band 2 for Year 3 (with Band 1 below it), Band 4 for Year 5 (Band 3 and
below), Band 5 for Year 7 (Band 4 and below) and Band 6 for Year 9
(Band 5 and below). The
approach taken in the following discussion is to focus exclusively on
those students who sat the NAPLAN writing test in 2015 in these year
levels. That is, we removed the percentage of students who were reported
to be officially exempt in our representation of the writing data to
avoid the possible distorting effect on the writing results.
Close examination of the writing data in Table 1 reveals a growing
trend of students performing below NMS. This pattern of
accelerating negative change starts in Year 3, with 2.6% of Australian
students below the NMS for writing, and increases by more than 15
percentage points to 17.7% by Year 9. This means that nearly one in
every five Australian students is performing below NMS by the time they
are in Year 9.
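The arithmetic behind this pattern can be reproduced directly from the Australian row of Table 1. The sketch below is a minimal illustration using the published percentages; notably, the step between successive tested year levels grows each time, which is the sense in which the negative change is accelerating.

```python
# Published below-NMS percentages for writing, Australia, 2015 (Table 1).
below_nms_writing = {3: 2.6, 5: 5.9, 7: 11.0, 9: 17.7}

# Percentage-point change between successive tested year levels.
years = sorted(below_nms_writing)
for earlier, later in zip(years, years[1:]):
    step = below_nms_writing[later] - below_nms_writing[earlier]
    print(f'Year {earlier} -> Year {later}: +{step:.1f} points')
# +3.3, then +5.1, then +6.7: each step is larger than the last.

overall = below_nms_writing[9] - below_nms_writing[3]
print(f'Year 3 -> Year 9 overall: +{overall:.1f} points')  # +15.1
```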
Examining the Northern Territory results as an illustrative case,
the pattern of accelerating negative change for writing starts with 25.7%
of Year 3 students below NMS and peaks at 52.6% of Year 9 students below
NMS. This means that over half of fourteen- to fifteen-year-old students
in the Northern Territory do not have the requisite skills to meet the
NMS for writing.
While the test itself could be subject to scrutiny from various
perspectives, the reported national data point strongly to the need for
inquiry into the teaching of writing in all Australian states and
territories. Further, they raise the serious issue of young people's
employment readiness and employability, inevitably impacted by low
levels of basic skills in writing.
Writing and reading in perspective
As indicated earlier, 'literacy' is a continuing focus
for policymakers and has remained a high priority in education policy
for some decades. The definition of 'literacy' has
traditionally targeted reading and reading comprehension, and for
Australian students this is undeniably an area that needs improvement.
The Australian PISA results in 2000 and 2009 in the major domain of
reading literacy showed a decline equating to 4½ months of
schooling for Australian fifteen year olds (Thomson, Hillman & De
Bortoli, 2013, p. 15). It is indisputable that improving
'reading' is a significant issue, especially as this extends
to young people's engagement with and enjoyment of reading.
However, current NAPLAN data in writing show a more
concerning national pattern.
To illustrate, both reading and writing results are considered.
Table 2 shows the Australian Reading results from the 2015 NAPLAN
testing in Years 3, 5, 7, and 9. In regard to the below-NMS percentages,
overall the Australian reading results demonstrate relative stability:
the greatest gap in results is between Year 7 and Year 9, with 2.9% of
students in Year 7 below NMS increasing to 5.9% by Year 9. This
contrasts with the 15.1 percentage point increase in students falling
below NMS between Year 3 and Year 9 for writing.
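This contrast can be computed from the Australian rows of Table 2; the sketch below is illustrative only, using the reported national percentages.

```python
# Australian below-NMS percentages, 2015 (Table 2).
below_nms = {
    'reading': {3: 3.6, 5: 4.9, 7: 2.9, 9: 5.9},
    'writing': {3: 2.6, 5: 5.9, 7: 11.0, 9: 17.7},
}

for domain, by_year in below_nms.items():
    change = by_year[9] - by_year[3]
    print(f'{domain}: Year 3 = {by_year[3]}%, Year 9 = {by_year[9]}%, '
          f'change = {change:+.1f} points')
# reading stays within a few points of its Year 3 level (+2.3);
# writing grows by +15.1 points over the same span.
```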
Longitudinal Tracking of students
The data for the longitudinal tracking in Table 3 were taken from
the National Report of NAPLAN for 2015 results released via the National
Assessment Program (NAP) arm of the ACARA website,
http://www.nap.edu.au/results-and-reports/national-reports.html. Once
again, the results for this cohort demonstrate the trend of accelerating
negative change when we track the students who started their first year
of testing in Year 3 in 2009, then sat Year 5 in 2011, Year 7 in 2013
and Year 9 in 2015. When reviewing the longitudinal snapshot for this
cohort an
important caveat is that the genre for the 2009 test was the narrative
genre and in the subsequent years, a persuasive genre was chosen. Based
on the 10 criteria for both tests, there are four criteria (Audience,
Text Structure, Ideas, and Character and Setting) that differ between
the genres. The other six criteria, which relate to language and
grammar, remain the same; it is therefore arguable that they create a
sufficiently robust basis for tracking whether students have improved in
these areas.
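The cohort trajectory in Table 3 can be examined with the same simple arithmetic. The sketch below is a minimal illustration using the published Australian percentages for this cohort; the final step shows the near-doubling discussed next.

```python
# Below-NMS writing percentages, Australia, for the cohort tested in
# Year 3 (2009), Year 5 (2011), Year 7 (2013), Year 9 (2015) -- Table 3.
cohort = {2009: 2.4, 2011: 5.6, 2013: 9.1, 2015: 17.7}

years = sorted(cohort)
for earlier, later in zip(years, years[1:]):
    ratio = cohort[later] / cohort[earlier]
    print(f'{earlier} -> {later}: {cohort[earlier]}% -> {cohort[later]}% '
          f'(x{ratio:.1f})')
# The 2013 -> 2015 step (Year 7 to Year 9) goes from 9.1% to 17.7%,
# a near-doubling (x1.9).
```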
If we once again turn our focus to the below-NMS category, there is a
decline in the 2009 cohort's results across the years of testing.
The most significant statistic is that across all states and the ACT the
percentage of students in the below-NMS category nearly doubles from
Year 7 to Year 9. In 2014 the commentary on the decline in writing
results included reported 'prompt issues' (1), referring to the
suitability of the choice of writing task in the test. Alternatively,
the below NMS percentage could reflect the oft reported middle years
slump (Lingard et al., 2001) associated with disengagement from school.
These possible explanations are worthy of investigation. However, the
key point is that if the data are accepted as accurate, then they should
serve to trigger inquiry in policy, research and practice: Why is this
happening and what catalysts or changes are necessary to prevent further
decline for these students fast headed to un/employment (Goss et al.,
2015)? Additionally, what action is needed for improving writing
outcomes and turning the trend of decline for future cohorts? The
following section begins this exploration.
The Australian Curriculum and Writing Standards
The call for a greater focus on the teaching of writing has been
acknowledged internationally with the National Commission on Writing
(NCoW) in 2003 calling for reform in the United States, arguing that
improvement was needed, 'if students are to succeed in college and
life' (National Commission on Writing, 2003, p. 7). In 2010, based
on the recommendations of the NCoW, the Council of Chief State School
Officers released the Common Core State Standards (CCSS). The grade-level
standards in CCSS were formulated to 'provide a road map for the
writing skills students need to acquire, emphasise the use of writing to
support content learning and the comprehension of text, and assess
students' mastery of CCSS objectives through the use of written
responses' (Graham & Harris, 2015, p. 459).
Prioritising writing for all subjects acknowledges from a policy
perspective the critical attention given to writing and 'the
importance of teaching it effectively at every grade level' (Graham
& Harris, 2015, p. 461). The CCSS 'also provides benchmarks for
what students need to master along the way' (p. 459), giving
teachers and students a more tangible opportunity to see what the
standards are at various developmental stages, which supports
understanding of, and engagement with, teaching the writing process. A
related policy position has been adopted in Australia, with the official
stance being that all teachers are responsible for teaching literacy,
though the absence of a literacy curriculum, comprehensive literacy
standards and literacy benchmarks tends to undermine the impact of this
position. Also at play in the story of Australia's efforts to
improve writing is that currently, the connections between standards for
subject English and literacy standards are not made clear, including as
the latter relate to NAPLAN.
Currently, there are no large-scale data on the teaching of writing
in Australian classrooms. Similarly, there are no longitudinal data about
teachers' pedagogical practices to support all students in their
development of writing skills. This gap in knowledge is noteworthy given
the oft reported concerns over poor academic writing skills of
undergraduate students in teacher education programs,
recognised in research and more widely (Teacher Education Ministerial
Advisory Group, 2014; Louden et al., 2005). Teacher understanding of
language features and teachers' confidence in teaching students writing
skills, particularly in the language strand of the Australian
Curriculum, have been raised as concerns by several authors (Harper &
Rennie, 2009; Jones & Chen, 2012).
In this curriculum, writing is predominantly addressed through the
domain of English. In subject English, writing is presented to teachers
in three strands: Language, Literacy and Literature. Writing across the
other curriculum subjects is woven through domains such as Languages
(see Systems of Language, Communicating and Socialising); Mathematics
(see Number and Place Value); Digital Technologies (see Processes and
Production Skills); Music, Drama, Visual Arts, Humanities and Social
Sciences (see Historical Knowledge and Understanding). In the main, the
domain of writing is included in the curriculum as part of the
'Content Description' or as a bullet point in the
'Elaborations'. The effect of this is that writing
instruction--the 'how' of teaching writing--is left to the
teacher.
Writing is a challenging skill to master. There has
been much debate around the best course of action to support
teachers in the teaching of writing. Applied linguistic scholars
including Christie and Martin (2007), Hasan (2005) and Humphrey, Droga
& Feez (2012) have pointed to the critical contribution of
teachers' 'Knowledge About Language' from the level of
the word to the whole text, and the implications this has for
teachers' confidence and their ability to assist students in the
classroom to achieve acceptable standards of writing. In Unsworth's
(2014a, 2014b, 2014c) recent work, this argument has been extended beyond
print to multimodal texts. Myhill (in Christie & Derewianka, 2008,
p. 1) reiterated this concern particularly at the secondary level where
there is 'a dearth of systematic exploration of the linguistic
characteristics of children's writing'.
Recent research has also questioned how prepared pre-service
teachers are to teach language and how confidently they understand how
language works in the context of writing (Harper & Rennie, 2009;
Jones & Chen, 2012; Christie & Derewianka, 2008). Love,
Sandiford, Macken-Horarik, and Unsworth (2014) have argued that teachers
need a new kind of professional knowledge base about language. They
discussed the need for a metalanguage or 'grammatics' to
support the analysis of the traditional and emerging forms of text. This
work builds on the earlier work addressing the literacy demands of
curriculum and the related conceptualisation of curriculum literacies
(Cumming & Wyatt-Smith, 2001), with implications for assessing
achievement in all subject areas (Wyatt-Smith & Cumming, 2003).
As discussed in Love, Macken-Horarik and Horarik's (2015)
study, teachers have diverse starting points for the teaching of
language, making 'a common knowledge base much harder to
establish' (p. 173). As suggested earlier, leading language
theorists and scholars including Halliday (1979a, 1979b, 1985a, 1985b;
Halliday & Hasan, 1985) and Christie (1995) have provided a critical
foundation for the widely recognised language-in-use framework for the
study of language and, specifically, the teaching of writing. Broadly
speaking, drawing on the work of Halliday, genre theory encompasses two
main approaches: the text-type model and the process model. Depending on the genre
or language process required for a writing task, these approaches allow
identifiable and recognisable generic, structural and language features
to be employed. Genre theory has been influential in the framing of
subject English and literacy as one of the seven General Capabilities.
A challenge facing teachers in writing instruction can be traced
back to this influence. The acceptance of genres in the Australian
Curriculum involves a tacit acceptance of the related underpinning
theorising of language and related instructional approaches to teaching
writing. While the authors recognise that an official curriculum cannot
be expected to comprehensively address how to teach writing, the
preceding discussion of concerning numbers of students falling below
benchmark opens the opportunity to reconsider the extent of teacher
knowledge about writing instruction in curriculum areas. At issue is the
potential benefit of deep professional engagement with pedagogical
frameworks and related assessment approaches consistent with the
orientation to writing taken in the Australian Curriculum.
Vygotsky's (1986) work around the Zone of Proximal Development
described the gap between a child's actual development and his or
her potential development that can be achieved when assisted. Vygotsky
subscribed to the notion of deliberate supported or scaffolded learning
under the guidance of a more 'expert' other. This
consideration of expertise or connoisseurship, certainly from an
assessment perspective, supports the need for a sharpened focus on
standards and illustrative exemplars that provide a fine-grained
analysis of genre and a common knowledge base of grammar to support
teacher understanding of how to teach writing. A developmental
focus on what writing standards look like across the years is needed to
avoid an appearance of 'robotic posturing of genre' (Exley,
Woods & Dooley, 2013, p. 60) in rigid applications of content
descriptions and elaborations in the Australian Curriculum: English
(AC:E).
In an initial move towards progressing standards, the AC:E includes
various samples, categorised as 'Above Satisfactory', 'Satisfactory'
and 'Below Satisfactory', with accompanying annotations. These
samples go some way towards supporting teachers' knowledge of
standards and expected features of quality. The extent of the challenge
of developing such knowledge should not be underestimated, however. The
Grattan Institute's report (Goss et al., 2015) identified that many
Australian teachers 'struggle to accurately interpret curriculum
standards and use them to evaluate their students' learning'
(p. 15). The report also found that teachers' ability to grade
against the national A-E standards is very weak, citing
examples of teachers norming student assessment to their own
classes rather than to external standards (Goss et al., 2015). In light of
these observations we identify a critical disconnect between the
annotated samples of quality in the Australian Curriculum, the NAPLAN
assessment criteria, the A-E reporting standards, and the benchmarks
more generally. Teachers are expected to make connections across
frameworks that have not been formally connected at system levels.
These disconnects hamper national efforts to develop with the teaching
profession clear understandings of quality and in turn, impact efforts
to realise the potential of NAPLAN to secure real improvement.
Conclusion
This paper has presented and discussed NAPLAN writing data within
2015 year cohorts, and explored the pattern of data across Years 3, 5, 7
and 9 for the 2015 Year 9 cohort. The discussion has brought to light
negative change in writing across the years of schooling for students in
Years 3, 5, 7 and 9 in the 2015 cohorts, and further, that the
percentages of students assessed as below benchmark are increasing across
these years. It has also brought to light the critical need to focus
attention on the connections teachers need to make in assessing writing.
These include connections among Australian Curriculum achievement
standards, NAPLAN assessment criteria, and mandated five-point scale
reporting (for example, A-E).
At the time of writing there are no published Year 9 benchmarks for
writing; we propose an empirical approach to address this omission. This
includes selecting actual student writing samples at the current band
level cut scores to develop Year 9 descriptors for the benchmarks. This
work would entail undertaking pairwise comparison. Additionally, it is
now timely to revisit the Years 3, 5 and 7 benchmarks in
reference to marked student scripts. Taken together, this work would
provide an opportunity for empirical data to clarify benchmark standards
and offer a test of reasonableness for both the benchmarks and current
cut scores.
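To indicate what the pairwise comparison work might involve, the sketch below fits a simple Bradley-Terry model to judges' 'which script is better?' decisions using the classic minorisation-maximisation updates. This is offered as one standard approach and an assumption on our part, not a specified methodology; the judgement data are invented, and in practice many judgements across large script samples would be scaled, with scripts sitting near the estimated cut scores annotated as benchmark exemplars.

```python
# Minimal sketch: scaling writing scripts from pairwise judgements
# with a Bradley-Terry model fitted by minorisation-maximisation (MM).
# The judgement data below are invented purely for illustration.
from collections import defaultdict

# (winner, loser) pairs from hypothetical judge decisions.
judgements = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'B'),
              ('C', 'D'), ('B', 'D'), ('A', 'D'), ('B', 'C')]

scripts = sorted({s for pair in judgements for s in pair})
wins = defaultdict(int)          # total wins per script
pair_counts = defaultdict(int)   # comparisons per unordered pair
for winner, loser in judgements:
    wins[winner] += 1
    pair_counts[frozenset((winner, loser))] += 1

strength = {s: 1.0 for s in scripts}   # start from equal strengths
for _ in range(200):                   # MM updates converge quickly here
    new = {}
    for s in scripts:
        denom = 0.0
        for t in scripts:
            n = pair_counts[frozenset((s, t))] if s != t else 0
            if n:
                denom += n / (strength[s] + strength[t])
        new[s] = wins[s] / denom if denom else strength[s]
    total = sum(new.values())
    strength = {s: v / total for s, v in new.items()}  # normalise

for s in sorted(scripts, key=strength.get, reverse=True):
    print(f'script {s}: strength {strength[s]:.3f}')
```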
We call for a national conversation about what NAPLAN is really
delivering for students, parents, the wider community, and for education
systems and whether we are utilising the national scale and longitudinal
data sets across Years 3-9 efficiently. A key question facing the
education community is whether accountability has become so powerful as
a political priority that the equity purpose, and indeed, the learning
and teaching purposes, have become subsumed by and made subservient to
testing for accountability reporting as the primary goal in and of
itself. Instead there is a need to go beyond the traditional distinction
between measurement and improvement, mentioned in the introduction, to
see both in the context of NAPLAN as potentially coming together through
a sharpened focus on the meaning of the reports and an alignment with
progressing student achievement. This effort would be supported by the
development of a distinctively Australian survey of the teaching of
writing. Currently we are developing such a survey to investigate
writing instruction in Australian classrooms and also how teachers
assess writing using benchmark criteria.
Finally we revisit the visionary MCEETYA priorities for Australian
students. Had the vision been realised, the discussion of the data in
this paper would have been necessarily different. It is not the case
that every child leaving primary school is able to write at an
appropriate level and further, many children who commenced school in the
period 1998 to the present have not achieved a minimum acceptable level
of writing. Indeed, the paper has revealed a concerning picture of
accelerating negative change across the years of schooling in writing.
However, it would be too easy to attribute this change to issues of
teacher quality and quality teaching. The actions we have called for
reflect our understanding that realising improvement in writing will
require a multi-pronged approach as we have outlined. This includes a
systematic review of messages about quality and standards available to
the profession and more fundamentally, clarity around what benchmarks
look like in student work samples.
References
Australian Curriculum, Assessment and Reporting Authority. (2015).
NAPLAN Achievement in Reading, Persuasive Writing, Language Conventions
and Numeracy: National Report for 2015. Retrieved from
http://www.nap.edu.au/results-and-reports/national-reports.html
Australian Education Council. (1994). A statement on English for
Australian schools. Carlton: Curriculum Corporation.
Australian Education Council. (1994). Statement & Profile on Mathematics
for Australian Schools. Victoria: Curriculum Corporation.
Black, P. (2014). Foreword. In V. Klenowski & C.M. Wyatt-Smith
(Eds.), Assessment for Education: Standards, Judgement & Moderation
(pp. vii-ix). London: Sage.
Broadfoot, P. (2014). Educational assessment evaluation and
research: the selected works of Mary E. James, The Curriculum Journal,
25(3), 461-463.
Christie, F. (1995). The teaching of story writing in the junior
secondary school. Report 2 of a Research Study into the pedagogic
discourse of secondary Subject English. A study funded by the ARC.
Melbourne, Vic.: University of Melbourne.
Christie, F. & Derewianka, B. (2008). School Discourse:
Learning to Write Across the Years of Schooling. Continuum Discourse.
London: Continuum International Publishing.
Christie, F. & Martin, J.R. (2007). Language, Knowledge and
Pedagogy: Functional Linguistic and Sociological Perspectives. London:
Continuum.
Comber, B. & Cormack, P. (2013). High stakes literacy tests and
local effects in a rural school. Australian Journal of Language and
Literacy, 36(2), 78-89.
Comber, B. (2012). Mandated literacy assessment and the
reorganisation of teachers' work: federal policy, local effects.
Critical Studies in Education, 53 (2), 119-136.
Common Core State Standards. (2016). Retrieved from
http://www.corestandards.org/
Cope, B. & Kalantzis, M. (1993). The Powers of Literacy: A
Genre Approach to Teaching Writing. London: The Falmer Press.
Cope, B., Kalantzis, M., Kress, G., Martin, J. & Murphy, L.
(1993). Bibliographical Essay: Developing the Theory and Practice of
Genre-based Literacy. In B. Cope & M. Kalantzis (Eds.), The Powers of
Literacy: A Genre Approach to Teaching Writing (pp. 231-247).
London: The Falmer Press.
Cumming, J.J. & Wyatt-Smith, C.M. (2001). Literacy and the
Curriculum: Success in Senior Secondary Schooling. Melbourne: ACER.
Cumming, J.J., Wyatt-Smith, C. & Colbert, P. (2016). Students
at risk and NAPLAN: the collateral damage. In Lingard, B., Thompson, G.
& Sellar, S. (Eds.), National Testing in Schools. An Australian
assessment. Abingdon, Oxon: Routledge.
Department of Education and Training. (2014). Australian Government
response to the Senate Education and Employment References Committee
Report Effectiveness of the National Assessment Program--Literacy and
Numeracy. Retrieved from
https://docs.education.gov.au/system/files/doc/other/response_to_naplan_senate_report_0.pdf
Department of Employment, Education, Training and Youth Affairs.
(1998a). Literacy for all: The challenge for Australian schools.
Commonwealth literacy policies for Australian schools. Canberra:
Department of Employment, Education, Training and Youth Affairs.
Department of Employment, Education, Training and Youth Affairs.
(1998b). National Report on Schooling in Australia. Retrieved from
http://scseec.edu.au/archive/Publications/Publications-archive/National-Report-on-Schooling-in-Australia/ANR-1998.aspx
Diederich, P.B. (1974). Measuring Growth in English. Urbana,
Illinois: National Council of Teachers of English.
Earl, L. (2013). Assessment as learning: using classroom assessment
to maximize student learning. Thousand Oaks, California: Corwin Press.
Exley, B.E., Woods, A.F. & Dooley, K.T. (2013).
Thinking Critically in the Land of Princesses and Giants: The
Affordances and Challenges of Critical Approaches in the Early Years. In
J. Pandya & J. Avila (Eds.), Moving Critical Literacies
Forward: A New Look at Praxis Across Contexts (pp. 59-70). London:
Routledge.
Goss, P., Hunter, J., Romanes, D. & Parsonage, H. (2015).
Targeted teaching: How better use of data can improve student learning.
Grattan Institute. Retrieved from
http://grattan.edu.au/report/targeted-teaching-how-better-use-of-data-can-improve-student-learning/
Graham, S. & Harris, K. (2015). Common Core State Standards and
Writing: Introduction to the Special Issue. The Elementary School
Journal, 115(4), 457-463.
Halliday, M.A.K. (1979a). A Grammar for Schools? In M.A.K. Halliday
(Ed.), Working Conference on Language in Education: Report to
Participants (pp. 184-186). Sydney: Sydney University Extension
Programme.
Halliday, M.A.K. (1979b). Differences between Spoken and Written
Language: Some Implications for Literacy Teaching. In G. Page, J. Elkins
& B. O'Connor (Eds.), Communication through Reading:
Proceedings of the Fourth Australian Reading Conference (pp. 37-52).
Adelaide: Australian Reading Association.
Halliday, M.A.K. (1985a). An Introduction to Functional Grammar.
London: Edward Arnold.
Halliday, M.A.K. (1985b). Spoken and Written Language. Geelong:
Deakin University Press.
Halliday, M.A.K. & Hasan, R. (1985). Language, Context and
Text: Aspects of Language in a Social Semiotic Perspective. Geelong:
Deakin University Press.
Hamp-Lyons, L. (2004). Writing assessment in the world. Assessing
Writing, 9(1), 1-3.
Harper, H. & Rennie, J. (2009). 'I had to go and get
myself a book on grammar': A study of pre-service teachers'
knowledge about language. Australian Journal of Language and Literacy,
32 (1), 22-37.
Hasan, R. (2005). Language, Society and Consciousness. London:
Equinox.
Humphrey, S., Droga, L. & Feez, S. (2012). Grammar and Meaning.
Newtown, NSW: Primary English Teachers' Association Australia.
Jeffery, J.V. (2009). Constructs of writing proficiency in US state
and national writing assessments: Exploring variability. Assessing
Writing, 14(1), 3-24.
Jones, P. & Chen, H. (2012). Teachers' knowledge about
language: Issues of pedagogy and expertise. Australian Journal of
Language and Literacy, 35 (1), 147-168.
Klenowski, V. & Wyatt-Smith, C.M. (2012). The impact of high
stakes testing on learning: The Australian story. Assessment in
Education: Principles, Policy & Practice, 19(1), 65-79.
Lingard, B., Ladwig, J., Mills, M., Bahr, M., Chant, D. &
Warry, M. (2001). The Queensland school reform longitudinal study.
Brisbane: The State of Queensland (Department of Education).
Lingard, B., Thompson, G. & Sellar, S. (2016). National Testing
in Schools. An Australian assessment. Abingdon, Oxon: Routledge.
Louden, W., Rohl, M., Gore, J., McIntosh, A., Greaves, D., Wright,
R., Siemon, D., & House, H. (2005). Prepared to Teach: An
investigation into the preparation of teachers to teach literacy and
numeracy. Mount Lawley, Western Australia: Edith Cowan University.
Love, K. & Humphrey, S. (2012). A multi-level language toolkit
for the Australian curriculum: English. Australian Journal of Language
and Literacy, 35(2), 173-191.
Love, K., Macken-Horarik, M. & Horarik, S. (2015). Language
knowledge and its application: A snapshot of Australian teachers'
views. Australian Journal of Language and Literacy, 38(3), 171-182.
Love, K., Sandiford, C., Macken-Horarik, M. & Unsworth, L.
(2014). From 'Bored Witless' to 'Rhetorical Nous':
Teacher Orientation to Knowledge about Language and Strengthening
Student Persuasive Writing. English in Australia, 49(3), 43-56.
Ministerial Council on Education, Employment, Training and Youth
Affairs. (MCEETYA). (1989). The Hobart declaration on schooling.
Retrieved from http://www.scseec.edu.au/EC-Publications/EC-Publications-archive/EC-The-Hobart-Declaration-on-Schooling-1989.aspx
Ministerial Council on Education, Employment, Training and Youth
Affairs. (MCEETYA). (1999). The Adelaide Declaration on National Goals
for Schooling in the 21st Century. Retrieved from
http://www.scseec.edu.au/archive/Publications/Publications-archive/The-Adelaide-Declaration.aspx
National Commission on Writing. (2003). The neglected R: The need for a
writing revolution. New York: College Board. Retrieved from
http://www.collegeboard.com/prod_downloads/writingcom/neglectedr.pdf
Sadler, D.R. (1989). Formative assessment and the design of
instructional systems. Instructional Science, 18(2), 119-144.
Teacher Education Ministerial Advisory Group (TEMAG). (2014).
Action Now: Classroom Ready Teachers (pp. 1-118). Canberra: Department
of Education and Training. Retrieved from http://nla.gov.au/nla.arc-150576
Thomson, S., Hillman, K. & De Bortoli, L. (2013). A Teacher's
Guide to PISA Reading Literacy. Melbourne: ACER.
Unsworth, L. (2014a). Point of view in picture books
and animated film adaptations: Informing critical multimodal
comprehension and composition pedagogy. In E. Djonov & S. Zhao
(Eds.), Critical multimodal studies of popular culture (pp. 201-216).
London: Routledge.
Unsworth, L. (2014b). Interfacing Visual and Verbal Narrative Art
in Paper and Digital Media: Recontextualizing Literature and Literacies.
In G. Barton (Ed.), Literacy in the Arts: Retheorising Learning and
Teaching (pp. 55-76). New York, NY: Springer.
Unsworth, L. (2014c). Multimodal Reading Comprehension: Curriculum
Expectations and Large-scale Literacy Testing Practices. Pedagogies: An
International Journal, 9(1), 26-44.
Vygotsky, L.S. (1986). Thought and Language (A. Kozulin, Ed. &
Trans.). Cambridge, Massachusetts: MIT Press.
White, L. (2013). NAPLAN under fire. Education Review. Retrieved from
http://www.educationreview.com.au/2013/02/naplan-under-fire/
Wilson, B. (2000). Foreword. In Curriculum Corporation, Literacy
Benchmarks Years 3, 5 & 7 Writing, Spelling and Reading (p. iii).
Victoria: Curriculum Corporation.
Wyatt-Smith, C.M. (1998). Interrogating the benchmarks. English in
Australia, 123, 20-28.
Wyatt-Smith, C.M. & Cumming, J.J. (2003). Curriculum
literacies: Expanding domains of assessment. Assessment in Education:
Principles, policy and practice, 10(1), 47-59.
Wyatt-Smith, C.M. & Klenowski, V. (2014). Elements of Better
Assessment for the Improvement of Learning: A Focus on Quality,
Professional Judgement and Social Moderation. In C. Wyatt-Smith, V.
Klenowski & P. Colbert (Eds.), The Enabling Power of Assessment:
Quality, Ethics and Equity. Dordrecht, The Netherlands: Springer
International.
Note
(1.) The 2015 writing assessment was criticised for the selection
of the writing question. The media referred to the concern as
'prompt issues': http://www.theaustralian.com.au/national-affairs/education/marked-down-how-one-tough-question-skewed-the-naplan-results/news-story/48274cc875b227cf39b0ea3255454802
Claire Wyatt-Smith and Christine Jackson
Australian Catholic University
Claire Wyatt-Smith is a Professor of Educational Assessment and
Literacy and Institute Director of the Learning Sciences Institute
Australia. Her research focuses on professional judgment, standards and
moderation, with an aligned focus on curriculum and literacy education.
Her publications address teachers' assessment identities;
large-scale standardised testing and its impact on learning; assessment
adaptations for students with disabilities and assessment and new
technologies. Claire's research has attracted funding from the
Australian Research Council and she has undertaken numerous
government-funded large-scale longitudinal projects.
Christine Jackson works for the Learning Sciences Institute
Australia, ACU. She has worked in schools and the education sector for
over 15 years. She has taught English in Australia and overseas and has
worked in policy and project management roles for the Government and
Independent Sectors. Christine has worked as an Assessment and Reporting
Manager in all aspects of test development and the reporting of writing
assessments. Christine is currently undertaking postgraduate research in
literacy assessment and educational measurement.
Table 1. 2015 NAPLAN writing results--percentage of students by state,
territory and Australia in each 'standard'
State Below National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 1 Band 3 and below Band 4 and below Band 5 and below
NSW 2.0 4.7 10.9 18.9
VIC 0.7 2.4 6.9 12.2
QLD 3.4 8.0 13.3 20.8
WA 4.2 7.7 12.7 15.8
SA 3.8 8.1 10.5 19.6
TAS 2.8 7.9 13.6 20.3
ACT 2.1 4.2 7.7 13.9
NT 25.7 38.4 46.0 52.6
AUSTRALIA 2.6 5.9 11.0 17.7
State At National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 2 Band 4 Band 5 Band 6
NSW 4.1 11.0 18.5 21.9
VIC 2.6 8.2 16.0 20.2
QLD 6.7 14.5 19.5 23.1
WA 5.6 12.8 18.0 19.2
SA 6.8 15.2 18.8 22.5
TAS 6.4 13.7 20.3 23.7
ACT 4.2 10.1 17.0 19.3
NT 12.2 15.4 16.1 16.1
AUSTRALIA 4.8 11.7 18.1 21.5
State Above National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 3-6+ Band 5-8+ Band 6-9+ Band 7-10
NSW 92.3 82.6 69.2 57.7
VIC 93.9 86.5 74.9 65.1
QLD 88.4 76.1 65.6 54.6
WA 89.0 78.3 68.0 63.8
SA 87.2 74.5 68.8 55.9
TAS 89.0 77.0 64.6 54.7
ACT 91.7 83.7 73.5 64.2
NT 60.3 44.2 35.5 29.1
AUSTRALIA 90.7 80.6 69.2 59.0
Table 2. Comparison of 2015 NAPLAN reading and writing: percentages of
students by state, territory and Australia in each 'standard'
State Below National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 1 Band 3 and below Band 4 and below Band 5 and below
NSW-Reading 3.0 4.6 2.6 5.9
NSW-Writing 2.0 4.7 10.9 18.9
Vic-Reading 1.7 2.8 1.9 4.0
Vic-Writing 0.7 2.4 6.9 12.2
Qld-Reading 3.6 4.9 2.6 6.7
Qld-Writing 3.4 8.0 13.3 20.8
WA-Reading 5.8 6.9 4.0 5.6
WA-Writing 4.2 7.7 12.7 15.8
SA-Reading 4.3 6.1 3.1 6.3
SA-Writing 3.8 8.1 10.5 19.6
Tas-Reading 5.4 6.5 3.9 7.3
Tas-Writing 2.8 7.9 13.6 20.3
ACT-Reading 2.8 2.8 1.5 3.4
ACT-Writing 2.1 4.2 7.7 13.9
NT-Reading 27.0 30.3 25.2 31.6
NT-Writing 25.7 38.4 46.0 52.6
AUSTRALIA-Reading 3.6 4.9 2.9 5.9
AUSTRALIA-Writing 2.6 5.9 11.0 17.7
State At National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 2 Band 4 Band 5 Band 6
NSW-Reading 7.1 13.3 12.5 17.3
NSW-Writing 4.1 11.1 18.5 21.9
Vic-Reading 5.3 10.8 10.7 15.4
Vic-Writing 2.6 8.2 16.0 20.2
Qld-Reading 8.4 14.0 13.0 19.5
Qld-Writing 6.7 14.5 19.5 23.1
WA-Reading 9.3 14.8 13.7 15.2
WA-Writing 5.6 12.8 18.0 19.2
SA-Reading 8.4 15.1 12.9 18.2
SA-Writing 6.8 15.2 18.8 22.5
Tas-Reading 9.4 16.0 14.9 19.3
Tas-Writing 6.4 13.7 20.3 23.7
ACT-Reading 5.9 9.4 8.4 11.8
ACT-Writing 4.2 10.1 17.0 19.3
NT-Reading 14.3 18.1 19.5 19.0
NT-Writing 12.2 15.4 16.1 16.1
AUSTRALIA-Reading 7.4 13.2 12.4 17.1
AUSTRALIA-Writing 4.8 11.7 18.1 21.5
State Above National Minimum Standard (%)
Year 3 Year 5 Year 7 Year 9
Band 3-6+ Band 5-8+ Band 6-9+ Band 7-10
NSW-Reading 88.2 80.5 83.4 75.3
NSW-Writing 92.3 82.6 69.2 57.7
Vic-Reading 90.2 83.5 85.2 78.1
Vic-Writing 93.9 86.5 74.9 65.1
Qld-Reading 86.6 79.8 82.8 72.3
Qld-Writing 88.4 76.1 65.6 54.6
WA-Reading 83.7 77.1 81.0 78.0
WA-Writing 89.0 78.3 68.0 63.8
SA-Reading 85.0 76.6 82.1 73.4
SA-Writing 87.2 74.5 68.8 55.9
Tas-Reading 83.5 76.1 79.8 72.1
Tas-Writing 89.0 77.0 64.6 54.7
ACT-Reading 89.3 85.8 88.4 82.2
ACT-Writing 91.7 83.7 73.5 64.2
NT-Reading 56.8 49.6 52.9 47.2
NT-Writing 60.3 44.2 35.5 29.1
AUSTRALIA-Reading 87.2 80.1 83.0 75.2
AUSTRALIA-Writing 90.7 80.6 69.2 59.0
Table 3. Longitudinal Tracking of 2009-2015 cohort Year 3, 5, 7, 9:
percentages of students by state, territory and Australia in each
'standard'
State Below National Minimum Standard (%)
Year 3 (2009) Year 5 (2011) Year 7 (2013) Year 9 (2015)
Band 1 Band 3 and below Band 4 and below Band 5 and below
NSW 1.5 3.5 9.4 → 18.9
VIC 0.7 3.0 6.6 → 12.2
QLD 4.2 8.2 9.5 → 20.8
WA 3.6 8.2 8.8 → 15.8
SA 2.3 7.7 8.8 → 19.6
TAS 2.2 8.4 12.0 → 20.3
ACT 1.2 3.4 6.9 → 13.9
NT 24.3 36.2 41.9 → 52.6
AUSTRALIA 2.4 5.6 9.1 → 17.7
State At National Minimum Standard (%)
Year 3 (2009) Year 5 (2011) Year 7 (2013) Year 9 (2015)
Band 2 Band 4 Band 5 Band 6
NSW 5.0 8.8 17.9 21.9
VIC 3.8 10.1 15.6 20.2
QLD 9.7 14.9 17.9 23.1
WA 7.8 14.2 16.8 19.2
SA 6.9 15.6 17.3 22.5
TAS 7.6 17.9 20.1 23.7
ACT 4.9 10.6 14.8 19.3
NT 13.1 15.2 16.8 16.1
AUSTRALIA 6.3 11.8 17.1 21.5
State Above National Minimum Standard (%)
Year 3 (2009) Year 5 (2011) Year 7 (2013) Year 9 (2015)
Band 3-6+ Band 5-8+ Band 6-9+ Band 7-10
NSW 92.2 86.2 71.3 57.7
VIC 92.5 84.3 75.7 65.1
QLD 84.2 75.3 70.9 54.6
WA 87.3 76.3 73.1 63.8
SA 89.1 74.7 72.3 55.9
TAS 88.9 72.4 66.6 54.7
ACT 91.0 83.1 75.9 64.2
NT 60.9 46.5 38.8 29.1
AUSTRALIA 89.4 80.7 72.2 59.0