
Article Information

  • Title: Student motivation for NAPLAN tests.
  • Authors: Belcastro, Lauren; Boon, Helen
  • Journal: Australian and International Journal of Rural Education
  • Print ISSN: 1839-7387
  • Year: 2012
  • Issue: May
  • Language: English
  • Publisher: Society for the Provision of Education in Rural Australia Inc. (SPERA)
  • Keywords: Academic achievement; Mandated achievement tests; Motivation in education; Student motivation; Students; Teachers

Student motivation for NAPLAN tests.


Belcastro, Lauren ; Boon, Helen


NATIONAL TESTING IN AUSTRALIA

The National Assessment Program in Australia, the NAP, consists of all testing endorsed by the Ministerial Council for Education, Early Childhood Development and Youth Affairs (MCEECDYA), including NAPLAN testing (National Assessment Program - Literacy and Numeracy). Since its inception in 2008, the purpose, value and results of NAPLAN testing have come under scrutiny by people from many walks of life, including teachers, parents, ministers for education and politicians (Lingard, 2009; Thrupp, 2009). MCEECDYA functions as the policy-making body for primary and secondary education within Australia, through which the council endorses Australian participation in international assessments (MCEECDYA, 2011). These international assessments are governed by the Organisation for Economic Cooperation and Development (OECD). MCEECDYA justifies participation in these tests for "reporting key performance measures" in the National Report on Schooling in Australia (MCEECDYA, 2011), which outlines the Measurement Framework for National Key Performance Measures. As Klenowski (2010) explains,

Educational changes in curriculum and assessment in Australia are influenced by global factors, as apparent in politicians and policy makers' responses to the international comparisons of student achievement data of the Programme for International Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). The National Assessment Program--Literacy and Numeracy (NAPLAN) tests in Australia for students in years 3, 5, 7 and 9 have emerged as a consequence of such international emphases. (Klenowski, 2010, p. 11).

This ongoing cycle of international assessment informing national priorities, which in turn foster state and territory education initiatives, is mirrored in other nations (Marsh, 2010).

National testing regimes in Britain and the United States have been implemented as part of national education policy reforms for a number of years in both locations. In Britain the average graduating 18-year-old will have completed approximately 60 national examinations since beginning school (Stowe-Linder, 2009). The tests are implemented in Britain to serve a number of purposes, beginning with transparency in teacher accountability. This transparency refers generally to who is teaching what, as set out by the standard levels of achievement of the national curriculum (Directgov, 2011). In the case of the United States, national testing became a part of the No Child Left Behind Act (NCLB) of 2001. The four pillars of NCLB include stronger accountability for results, more freedom for states and communities, proven education methods, and more choices for parents (US Department of Education, 2004). The overall objective of the NCLB Act was to close the gap between those with educational advantage and those lacking this advantage; as then President Bush put it, "too many of our neediest children are being left behind" (Bush, 2001). Ultimately, national testing in the US context can be linked to teacher accountability for providing equitable education outcomes for all students.

In relation to national testing in Australia, it is quite clear why so many continue to question the place of tests such as NAPLAN. A number of criticisms of standardised testing continue to arise (Caldwell, 2010). These criticisms include teaching to the test, narrowed pedagogy, the de-skilling of teachers, anxiety amongst test participants, questions about the distribution of test results, and the interpretation of these results (Klenowski, 2010; Lingard, 2009; Masters, 2010; Thomson, 2008; Thrupp, 2009). Specifically with regard to national testing in Britain, Klenowski (2010) suggests that with "the introduction of standards-driven reform and standardisation, technical and rationalist approaches will generalise and make superficial the assessment process" (p. 11) for which "the implications are that teachers will need to see beyond the raw scores and understand the related equity issues" (p. 11). As a geographically widespread nation, Australia also needs to keep this in mind so as to ensure the appropriateness of policy across urban and rural centres. In addition, Luke and Woods (2008) undertook a critique of the assessment and outcomes set out by the NCLB, concluding that "increases in 'accountability pressure' ratings such as those prescribed by this 'fix' have not led to improved quality or equity in national testing outcomes in the United States. This is a consideration for Australian Policy Makers" (p. 16). 'Fix' here refers directly to the issues of equity and inclusion involved with the notion that no child shall be left behind. The move towards NAPLAN testing in Australia can be viewed as the implementation of accountability measures beneath the cloak of providing assessable and equitable education for all: a combination of both the British and American experience of national standardised testing.

NAPLAN AS HIGH STAKES

Arguably, NAPLAN testing has quickly become 'high-stakes' (Lingard, 2009; Thrupp, 2009; Shaw, 2009) due to the publication of student results on the My School website (ACARA, 2011). ACARA (2010) claims that "by providing extensive information on Australian schools, the My School website introduces a new level of transparency and accountability to the Australian school system." The website also offers comparable data between 'like' schools so that the public are able to compare one school to another. With the publication of comparable data sets came accusations that schools are teaching to the test, along with concerns about the subjective nature of de-contextualised data sets and school league tables (Boston, 2009; Graham, 2010; Lingard, 2009). As evident from experiences of national testing regimes in Britain and the United States, a clear purpose for employing such 'high-stakes' tests must first be agreed before presenting the public with any data (Boston, 2009; Graham, 2010). Here, Klinger and Luce-Kapler (2005) suggest that "High-stakes testing must be accompanied by explicit efforts to ensure the tests either support relevant educational goals, or at least, do not limit the educational domains being taught." Recent media attention suggests that such common ground was never reached, resulting in a range of interpretations of the data presented on the My School website. Consequently, what were initially indicators of student academic achievement have now become sources of social comparison and potential scrutiny (Boston, 2009; Lingard, 2009).

The 'high-stakes' status of NAPLAN testing not only intensifies the pressure placed on schools, administration and teachers to ensure that students are performing at the desired level, particularly for schools in rural areas with less access to resources and staffing, but also places increased pressure on those who are actually participating in these tests. Numerous studies have outlined a wide range of factors contributing to student anxiety and stress, one of which is examinations (Killen, 2006; Martin & Marsh, 2007; Thompson, 2003). It is important here to acknowledge the work of Albert Bandura in relation to self-esteem and self-efficacy, which will be addressed in greater detail below. In regard to examinations, achieving anywhere on the scale from failure to success contributes to one's perception of one's competence and self: one's self-efficacy. Here Bandura (1986) emphasises that "people function as anticipative, purposive, and self-evaluating proactive regulators of their motivation and actions" (p. 87). Test anxiety, in conjunction with the perceived self-efficacy of a young person, can in turn trigger negative attitudes towards testing, particularly high-stakes testing such as NAPLAN.

STUDENTS AND NAPLAN

The turn towards high-stakes testing brings forth the need to better understand how Australian students from all systems and locations respond to such 'high-stakes' testing environments. Better understanding can also lead to strategies to help students to improve their outcomes, especially if this understanding is based on students' own views and perspectives. This research project aims to investigate students' motivation for participating in the NAPLAN testing program. Since the implementation of NAPLAN, the "Education Revolution" in Australia, the term former Prime Minister Kevin Rudd gave to his vision of Australian education, has come to be dominated by a myriad of stakeholders including politicians, administration, academics, the media, teachers and parents. We are well aware of how each of these groups responds to NAPLAN testing, and yet we continue to skim over the most important group involved in the whole process: the students.

Masters (2009) stated that "the NAPLAN tests were introduced to provide a new level of diagnostic information, not only for teachers and schools, but also for education systems and governments" (p. 23). Where do we hear the voices of our students? Until recently, much research has been devoted to providing teachers and administrators with strategies and approaches for using NAPLAN results to better plan for teaching, or for improving outcomes based on test results (Davidson, 2009; Jensen, 2010; Masters, 2009; Smeed, 2010; Thian, 2010). Implementing testing as an assessment tool also brings with it a range of consequences and related issues that are at the forefront of NAPLAN concerns within Australia. While O'Keefe (2011) has highlighted a number of effects NAPLAN testing has on student mental health and well-being, the culminating argument here is that little effort is being made to better understand how students feel about NAPLAN testing. We are aware of the possible uses of the data; the ways teachers, schools and administration respond to NAPLAN; the expectations of parents; NAPLAN as a media hot topic; the concern attached to standardised testing; and the social and psychological issues related to testing environments. Ian Whitehead (2010) deconstructed this cycle perfectly:

It is understandable why governments promote these tests; federal education politics in Australia are characterised by activity rather than achievement. NAPLAN and My School fit the bill perfectly. But the further you travel down the totem pole the less useful these tests become. Principals will be more concerned about the effect of the results on morale, rather than as an assessment of their usefulness for future planning. For parents they are but a snapshot and must be taken in context. Competent teachers should have much richer assessment profiles than those provided by NAPLAN. Finally and ironically we reach the child. Judging from the media and politicians they are irrelevant: they have barely rated a mention in the current debate. (p. 6)

STUDY AIMS

This study aims to offer a greater understanding of how students think about these 'high-stakes' tests, and how motivational and social goals influence their participation. The research project draws on elements of Social Learning Theory and Goal Theory in relation to student perceptions of, and attitudes towards, NAPLAN testing, as well as their motivation to participate in this testing program. Overall, the merit of the project lies in addressing the lack of research undertaken into student perceptions of NAPLAN testing.

SOCIAL LEARNING THEORY

Social Learning Theory was developed by Albert Bandura (1977) and marks a shift from theories centred on the needs, drives and impulses that operate below the level of consciousness towards one that incorporates the external factors influencing motivation and behaviour (Bandura, 1977; 1986). Of most relevance to this research project is self-efficacy. Bandura (1977) states that "an efficacy expectation is the conviction that one can successfully execute the behaviour required to produce the outcomes." Furthermore, "in social, intellectual...pursuits, those who judge themselves highly efficacious will expect favourable outcomes, self-doubters will expect mediocre performances of themselves and thus negative outcomes" (Bandura, 1986, p. 392). In Social Learning Theory, one's perception of one's ability to perform is most readily altered and developed by mastery performances, whereby one has experienced a previously successful performance (Bandura, 1977; 1986).

GOAL THEORY

Closely linked to Social Learning Theory, Goal Theory will also be explored in relation to students' motivational goals for participating in NAPLAN testing. Here, the three most common goals will be investigated: mastery goals, performance goals and avoidance goals (Ames, 1992; Brophy, 1998). When students adopt mastery goals, they view learning itself as the goal: completing a task, solving a problem or employing specific strategies in order to master a skill. For them, learning is intrinsically interesting and engaging. Such students work hard because they want to reach full understanding and mastery of a given task. Students who are inclined to take up these sorts of goals are often self-monitoring, self-regulated learners, are able to tolerate failure by changing strategy, and are also more likely to seek help when faced with a particularly challenging task (Mansfield, 2010; Ames, 1992; Archer, 2008).

On the other hand, performance goals might indicate a somewhat shallower learning experience for the students who adopt them. Performance goal orientated students often work hard because they want others to acknowledge their competence (Mansfield, 2010; Mansfield & Wosnitza, 2010). Learning in this situation is often dominated by superficial or short-term learning strategies such as memorising and rehearsing. Performance goals often lead students to avoid asking for help, and also to avoid challenging tasks (Barker, Dowson & McInerney, 2003). Goal theory developed further by acknowledging avoidance goals. Students inclined to take up avoidance goals are not cognitively engaged in the learning process, but rather do the least possible to meet minimum standards, or avoid engaging altogether for fear of performing poorly. Mansfield (2010) explains that "avoidance goals have a negative influence on achievement related behaviours, resulting in shallow processing, poor retention and self-handicapping strategies such as procrastination and reluctance to seek help" (p. 45).

The link between Social Learning Theory and Goal Theory becomes evident. One's self-efficacy informs the achievement goals they take up in the classroom, which is then also influenced by the social pressures experienced by adolescents in a school environment. This research project will call upon aspects of both theories in order to better understand the relationship between students' motivation and their participation in NAPLAN testing.

RESEARCH DESIGN

The research project encompassed a mixed method research design. Greene, Caracelli and Graham (1989) developed a conceptual framework that encompasses five purposes for implementing mixed method designs. The third purpose is development, whereby the results from one method are used to help develop and inform the other method (Greene, Caracelli & Graham, 1989). This was the case for this research project: an initial focus group was employed to generate a greater understanding of current student perceptions of NAPLAN testing before the main research instrument was developed. In this case, the main instruments were sequential surveys: a pre-NAPLAN survey and a post-NAPLAN survey. The purpose of employing pre and post surveys was to acknowledge that 'high-stakes' tests may cause distress among participants in the weeks leading up to NAPLAN testing. Though this is only an assumption, gathering evidence of a change in attitude may help to further illuminate students' motivational patterns for engaging with or disengaging from the testing.

As with any social research project, the scope of the project determines the validity of the research results and conclusions. The study was conducted with one high school in the North Queensland context, with which come certain limitations to the research findings. First, the data collected are representative only of students in one setting, this being a large, public high school in a regional location. This had to be kept in mind when analysing the data and developing implications based on the research. Second, as with any survey, all participants were different and in turn had the potential to interpret survey questions in various ways. For this reason, the data collation and analysis were approached as objectively as possible, though the possibility of human error must be acknowledged. Furthermore, the study did not aim to raise the number of students participating in NAPLAN testing, but rather to better understand why they are or are not fully engaged when participating in the tests. This too must be acknowledged when developing implications based on the research.

Methods

The study focused on high school students under the age of 18, and therefore followed a strict process of ethics approval before any research was carried out. First, an Application for Ethics Approval was lodged with the James Cook University Ethics Committee. The application was initially granted conditional approval; after the required alterations were made, full approval followed. Also, to adhere to ethical guidelines within the Queensland Department of Education and Training, permission was granted by the deputy principal of the school in which the study took place.

A high school enrolling over 2000 students across years 8 to 12 was approached to be involved in the study. The school is located in a regional town centre in North Queensland. Through an initial letter to the Principal and then a follow-up meeting with the year 9 Deputy Principal and the school NAPLAN Coordinator, the school agreed to participate in the study. At this meeting, several suggestions were made in regard to recruiting participants for the focus group, as well as ways of increasing participation in the pre-test and post-test surveys.

For the first phase of research, the focus group, a number of students were invited to participate. All students were provided with an information and consent form that was to be returned on the day of the focus group session, with signed permission from a parent or carer. For the focus group session, students were invited to a learning annexe on the school campus. In this space, students were provided with comfortable furniture, as well as refreshments for the duration of the session.

The second phase of research was open to all students in year 9 who attended this high school. Information and consent forms were distributed to each individual student; parents or carers who did not wish their student(s) to participate could indicate this on a consent form to be returned to the school prior to the date of the pre-test surveys. Students who were not permitted to participate were invited to leave the classroom during the time allocated for both the pre-test and post-test surveys; otherwise, all students were invited to participate in both surveys during allocated class time. The school NAPLAN coordinator distributed the pre-test surveys to all year 9 English teachers one week before the 2011 NAPLAN test. Teachers were instructed to have students complete the survey by the close of the week, with surveys returned soon after. The post-test surveys were distributed to year 9 English teachers in the week following the close of the 2011 NAPLAN testing, for students to complete by the end of that week, again with a return date soon after. Both surveys were completed during class time in the assigned timetabled classrooms at the school.

Results

Qualitative Phase

The qualitative phase encompassed a focus group session in which a total of seven students from the target group participated. The students represented a range of social groups, cultural backgrounds and levels of academic achievement. The group consisted of six female students and one male student, all of whom had participated in the year 7 NAPLAN test. The aim of the session was to gain insight into the students' current thinking around NAPLAN testing and their overall perceptions of the test. Throughout the session the students were asked to respond to a series of questions and ideas, some of which were:

* Tell me about NAPLAN.

* Tell me about your biggest memory of NAPLAN in year 7.

* Does NAPLAN have a value or a purpose?

* Do you know about the My School website?

* Do you think NAPLAN is important for you personally?

* What do you hear about NAPLAN at home/school?

* Do you do anything at home or at school to prepare for NAPLAN?

* Do you worry about NAPLAN?

* What do you expect from yourself in NAPLAN tests?

* Who do you think gets the most out of NAPLAN?

The discussion concluded after approximately 40 minutes, after which a recording of the session was transcribed. Both researchers then worked in consultation to analyse the content of the discussion. This analysis illuminated recurring themes surrounding Social Learning Theory and Goal Theory. Burns (2000) describes this process of content analysis in Grounded Theory, whereby themes and concepts are identified and their meaning is confirmed or refined. These themes and other general observations of student attitudes were used to guide and develop the quantitative phase of data collection, the surveys. For example, three students responded to the question 'What do you expect from yourself in NAPLAN tests?' in three distinctly different ways:

* Student x--"To be prepared."

* Student y--"To try my hardest."

* Student z--"To get a good mark."

Respectively, the responses can be linked to test preparation, mastery goals and performance achievement goals. The surveys were designed so as to incorporate appropriate questions as shaped by the recurring themes of the focus group responses.

Quantitative Phase

A total of 159 students participated in the quantitative phase of the research, which equates to 39.75% of the year 9 cohort at this high school. The pre and post surveys each had a total of 27 corresponding questions, to be answered on a scale from strongly agree to strongly disagree. The completed surveys were collated and all statistical procedures were carried out using the PASW Statistics 19 computer program. To obtain values for self-efficacy, mastery goals, performance goals, avoidance goals and preparedness, the responses to the questions coded for each construct were added together and then divided by the number of questions. For example, questions 1, 5 and 15 on each survey were based on mastery achievement goals, so the responses were added and then divided by 3 to obtain a value for mastery achievement goals. This procedure was used for both the pre and post surveys. Tests examining normality assumptions were satisfactory.
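The scale-scoring step described above can be sketched in a few lines of Python. Only the mastery-goal items (questions 1, 5 and 15) come from the text; the other item numbers, the construct mapping and the response values are invented for illustration.

```python
# One student's responses on a 1-5 scale (strongly disagree .. strongly agree),
# keyed by survey question number. Values are invented for illustration.
responses = {1: 4, 5: 5, 15: 3, 2: 2, 7: 1, 9: 2}

# Which questions load on which construct. The paper states only that items
# 1, 5 and 15 tapped mastery goals; the avoidance items are assumed here.
constructs = {
    "mastery": [1, 5, 15],
    "avoidance": [2, 7, 9],
}

def construct_score(responses, items):
    """Average the responses to the items belonging to one construct."""
    return sum(responses[q] for q in items) / len(items)

scores = {name: construct_score(responses, items)
          for name, items in constructs.items()}
print(scores)  # mastery = (4 + 5 + 3) / 3 = 4.0
```

The same scoring would be applied once per student, per survey (pre and post), giving each student one value per construct.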

First, paired samples statistics showed differences between pre and post responses. Table 1 indicates significant changes in performance goals and preparedness.

A paired samples t test (N = 159) was performed to evaluate whether responses to the survey questions altered after test participation. Again, significant changes in mean responses to the survey questions occurred in relation to performance achievement goals (t = -6.02, df = 151, p < 0.001) and preparedness (t = 2.92, df = 144, p < 0.004) as shown in Table 2.
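As a hedged illustration of the paired-samples t test reported above, the same procedure can be run with SciPy. The data below are simulated, not the study's; the sample size, means and the direction of the shift are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated construct scores for 150 students on the two occasions
pre = rng.normal(loc=3.0, scale=0.6, size=150)         # pre-NAPLAN survey
post = pre + rng.normal(loc=0.4, scale=0.5, size=150)  # post-NAPLAN survey

# Paired test: the same students answered both surveys, so the test is
# performed on the within-student differences rather than two independent groups.
t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, df = {len(pre) - 1}, p = {p:.3g}")
```

A paired design like this is more sensitive than an independent-samples comparison because each student serves as their own control.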

Next, achievement goals, self-efficacy and preparedness were compared with one another. Pearson correlations between each pair of constructs were computed for both the pre survey and the post survey. Table 3 indicates that mastery goal orientations before the NAPLAN test were highly correlated with self-efficacy expectations. Additionally, the data show a significant correlation between mastery goal orientations and performance goal orientations.
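A Pearson correlation of this kind can be illustrated with SciPy on simulated data; the sample values and the strength of the relationship below are invented, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated pre-test construct scores for 150 students
self_efficacy = rng.normal(loc=3.5, scale=0.7, size=150)
# Mastery scores built to co-vary with self-efficacy (an assumed relationship,
# standing in for the association Table 3 reports)
mastery = 0.8 * self_efficacy + rng.normal(loc=0.0, scale=0.4, size=150)

# Pearson's r measures the strength of the linear association between
# the two construct scores; p tests it against the null of no association.
r, p = stats.pearsonr(self_efficacy, mastery)
print(f"r = {r:.2f}, p = {p:.3g}")
```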

Correlations between constructs in the post survey are shown in Table 4. There are significant correlations between student self-efficacy and mastery goals; preparedness and mastery goals; and preparedness and performance goals. Following the analysis of constructs within each survey, an analysis of students with low performance goal orientations was performed. Table 5 shows the means and standard deviations (N = 159) of responses of students whose mean responses to the performance achievement survey questions were equal to or less than 1.25. The data show that students with low performance goal orientations also had low self-efficacy expectations, as well as low mastery orientations. In addition, these students had strong avoidance goal orientations.

Finally, analyses of variance (ANOVAs) were performed to examine differences in self-efficacy before the NAPLAN test (self-efficacy pre) between students with low and average performance goal orientations. Pre-test self-efficacy differed significantly between the two groups (F (1,148) = 31.9, p < .001), with means of 1.25 and 1.79 for the low and average performance orientated groups respectively. ANOVAs were also significant for the two groups' avowed preparedness (F (1,147) = 11.3, p < .001), their mastery goals (F (1,150) = 36.5, p < .001) and their avoidance goals (F (1,149) = 3.8, p < .05).
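With only two groups, each ANOVA above amounts to comparing the two group means. A SciPy sketch on invented data: only the two self-efficacy means (1.25 and 1.79) come from the text, while the group sizes and spread are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Invented pre-test self-efficacy scores for the two groups; the group means
# follow the text, but the group sizes and standard deviation do not.
low_perf = rng.normal(loc=1.25, scale=0.4, size=30)   # low performance-goal group
avg_perf = rng.normal(loc=1.79, scale=0.4, size=120)  # average performance-goal group

# One-way ANOVA across the two groups (equivalent to a squared t test here)
f, p = stats.f_oneway(low_perf, avg_perf)
df_between = 1
df_within = len(low_perf) + len(avg_perf) - 2
print(f"F({df_between}, {df_within}) = {f:.1f}, p = {p:.3g}")
```

Note that the within-groups degrees of freedom, 30 + 120 - 2 = 148, matches the F (1,148) form reported in the text.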

DISCUSSION

This research project aimed to advance our understanding of student perceptions of NAPLAN testing by exploring the motivational and social factors that influence participation in, and attitudes towards, NAPLAN testing. The main question guiding the research was: what motivates students to participate in NAPLAN?

The generalisability of the data gathered in this project depends on a number of methodological issues. The participants were drawn from a single school context in a regional area; this has a number of implications. Because of the large size of the school, there is access to staff and resources that may not be available to students in smaller, more remote schools. As a result, other students may not have the same access to such rigorous NAPLAN preparation programs. In addition, the design of the research instrument must also be scrutinised with respect to the interpretation of the survey questions. Survey questions 5, 9 and 11 could each be split into two questions, which may have resulted in different student responses. Because of the nature of these questions, the results for the motivational goals might have been altered. Future research encompassing a wider range of schools in diverse locations, using a more refined research instrument, must be conducted to confirm the patterns and results gathered in this study about students' thinking and motivation to participate in NAPLAN testing.

Despite these limitations, some interesting and hitherto unexplored issues emerged. It was found that students with high mastery goal orientations also had high self-efficacy, as well as high performance goal orientations. The connection between the three motivations shows that students are not only taking part in order to master skills, but also with self-belief in their ability and with the drive to look competent.

The study also indicated that students with high self-efficacy were more likely to be prepared for the test. On the other hand, students with low performance achievement goals also had low self-efficacy, while students with average performance goal orientations had greater self-efficacy. Further, students with low mastery goals were more likely to have avoidance goal orientations. It appears that students who are disengaging from the test are also indicating their lack of motivation to master the skills required by the NAPLAN test.

Results of the post NAPLAN testing survey show that students felt less prepared for NAPLAN after participating in the test than they had thought pre-test, and also that they were more concerned with achieving results to fulfil performance achievement goals after they had participated in the test.

Bandura (1977; 1986) continues to dominate the field of Social Learning Theory with his theory of self-efficacy. Self-efficacy encompasses one's perception of one's capacity to successfully complete a given task. Subsequently, this self-belief dominates one's motivation and achievement on future tasks (Dowson & Martin, 2009). This study has shown that students with high self-efficacy expectations also had high mastery goal orientations. Mastery goals refer to learning for the value of learning. For example, completing a task to gain new knowledge and skills, or mastering a task in order to reach full understanding (Ames, 1992; Archer, 2008; Mansfield, 2010). As the literature suggests, the two form a combination for deeper motivation and achievement. In terms of NAPLAN, this suggests that those students who have experienced previously positive experiences of testing, or schooling more generally, may be more inclined to display high self-efficacy expectations. Additionally, these students have displayed high mastery goal orientations. The combination of the two reinforces what is already known about the link between self-efficacy expectations and mastery achievement goals.

In addition, this research has shown a significant correlation between student self-efficacy expectations and preparedness for the test. Ultimately, students perceived themselves to be prepared for the test in the week prior to participating in NAPLAN. The connection that can be drawn here is that students entered the tests with high expectations of how they would execute successful responses to the test, based on the fact that they felt prepared for NAPLAN. One study undertaken in the senior school context in Western Australia indicates that students with higher expectations are more likely to experience test anxiety, which is "worry and emotional reactions, in response to an achievement situation that is perceived by the individual to be threatening in nature" (Thompson, 2003, p. 4). In this context, NAPLAN had the potential to be 'threatening' for participants by way of either successful or poor performances. Conversely, the post NAPLAN test survey indicates that the level of preparedness felt by students dropped after participation. One train of thought that can be followed here is that students initially felt they would be able to perform well on the test based on their preparation, yet after participation felt they were not as prepared as they had initially thought. The Western Australian study (Thompson, 2003), taken together with Social Learning Theory (Bandura, 1977; 1986), suggests that students who perceive themselves as having been through a negative testing experience, such as going into a test with high expectations of their own performance based on preparedness and leaving with a feeling of poor preparedness, may be more likely to show indicators of lower self-efficacy expectations.
The data from the study did not show a significant drop in self-efficacy after students had participated in NAPLAN, but rather, there was an increase in students' want to fulfil performance achievement goals post NAPLAN.

The above link between poor preparedness and increased performance achievement goals indicates that NAPLAN has become high-stakes for students. While the media and government continue to emphasise the importance of NAPLAN and its place within Australian education (Lingard, 2009; Marsh, 2010; Thrupp, 2009), the data show that students are also beginning to acknowledge its significance: they want to achieve well either to be acknowledged as competent or simply to avoid looking 'dumber' than the student beside them. The increased desire to fulfil performance goals can be associated not only with feeling underprepared, as the data suggest, but also with students feeling greater pressure to achieve high results once the test has been taken. Here the motivation to participate is based on a "desire not to demonstrate lack of ability" (Dowson & Martin, 2009).

The relationship between self-efficacy and performance achievement orientations was also evident in the study. The data suggested that students in this context with low performance achievement goals also had low self-efficacy expectations regarding the NAPLAN test, while students with average to high performance achievement goals had higher self-efficacy expectations. This correlation suggests that students who perceived themselves as unable, or poorly able, to achieve on the test also had little desire actually to perform well on it, even for the acknowledgement of being competent, and vice versa.

Finally, moving away from performance goal orientations, the study has shown that students with low mastery goal orientations were more likely to adopt avoidance goal orientations. While many have acknowledged that students are able to take up multiple goals in any one learning context (Mansfield, 2010; Barker et al., 2003), this study has shown that the combination of these two goals is unlikely in the context of NAPLAN testing. Students with avoidance goal orientations often display self-handicapping strategies, such as reluctance to seek help or to become cognitively engaged in the learning experience (Mansfield, 2010). For NAPLAN testing in schools this is a concern, as the data are used not only for planning and benchmarking purposes, but also on the wider stage of the My School website, with its attendant cross-school comparisons.

IMPLICATIONS AND CONCLUSION

The study has raised awareness in three main areas of student motivation towards participating in NAPLAN testing. First, student perceptions of self-efficacy appear to mould broader motivational goals such as mastery and performance goals. What can be taken from this relationship is that interventions focused on raising student self-efficacy may in turn alter students' motivations for participating in high-stakes testing.

Second, the study indicated that students were strongly motivated by performance achievement goals both before and after participating in the test. This suggests not only that NAPLAN is valued only shallowly in general, but also that the skills and strategies used to complete the test are retained only at a superficial level, in order simply to receive a 'good' result. The implication for teachers is to ensure that these skills are taught and incorporated into learning experiences that students hold in higher regard, so that the skills do cross over from a standardised test to useful life skills.

Finally, the students participating in this study showed the significance of their experience of preparation for the NAPLAN test. While they initially indicated that they felt prepared, this feeling of preparedness dropped significantly after participation. While it is acknowledged that preparing students for a standardised test is not an easy job, teachers could spend time reflecting on and revising their preparation strategies, in terms of both what is said and what is done, for future test participants, particularly for students who need their confidence and perceived self-efficacy raised. Strategies known to raise self-efficacy include opportunities to attain mastery of the skills examined by NAPLAN.

Overall, the study gives an insight into the way students are approaching NAPLAN testing in terms of their motivation towards participation. It has shown that student self-efficacy and test preparation are vital factors in the time leading up to the NAPLAN test. Acknowledging these psychological features and preparatory elements may be the first step towards genuinely understanding and influencing the way students think about participating in NAPLAN in both rural and urban school settings.

Appendix
Part 1 Motivation to Test--Students and NAPLAN

The following statements tell us about what students think about
NAPLAN testing. Think about each one and put a tick (✓) in the
column with the most correct response (Strongly Agree / Agree /
Neutral / Disagree / Strongly Disagree).

Please print your student number:

1. I want to participate in the NAPLAN test to challenge my knowledge and skills.
2. I feel confident that I can do well on the NAPLAN test.
3. I want to do well on the NAPLAN test to impress my teachers.
4. I want to participate in the NAPLAN test to gain new knowledge and skills.
5. I only participate in NAPLAN testing because I have to and I do not want to look stupid.
6. I feel prepared for the NAPLAN test.
7. I want to do well on the NAPLAN test to get a good report.
8. I want to do well on the NAPLAN test this year because I do not want to be placed in low level classes.
9. I want to do well on the NAPLAN test this year because I want to know if I have improved in my skills and knowledge since Yr 7.
10. I think I will struggle with some of the questions on the NAPLAN test.
11. The NAPLAN test is not a true reflection of my knowledge and skills but I do not want to look worse than other students.
12. I want to do well on the NAPLAN test so I can be seen as competent.
13. I have practised for the NAPLAN test.
14. I think I am able to achieve good results on the NAPLAN test.
15. I did poorly in Year 7 on NAPLAN so I do not want to do poorly again.

Part 2 Motivation to Test--Students and NAPLAN

The following statements tell us about what students think about
NAPLAN testing. Think about each one and put a tick (✓) in the
column with the most correct response (Strongly Agree / Agree /
Neutral / Disagree / Strongly Disagree).

Please print your student number:

1. I participated in the NAPLAN test to challenge my knowledge and skills.
2. I feel confident that I did well on the NAPLAN test.
3. I wanted to do well on the NAPLAN test to impress my teachers.
4. I participated in the NAPLAN test to gain new knowledge and skills.
5. I only participated in NAPLAN testing because I had to and I did not want to look stupid.
6. I felt prepared for the NAPLAN test.
7. I wanted to do well on the NAPLAN test to get a good report.
8. I wanted to do well on the NAPLAN test this year because I did not want to be placed in low level classes.
9. I wanted to do well on the NAPLAN test this year because I wanted to know if I have improved in my skills and knowledge since Yr 7.
10. I think I struggled with some of the questions on the NAPLAN test.
11. The NAPLAN test was not a true reflection of my knowledge and skills but I did not want to look worse than other students.
12. I wanted to do well on the NAPLAN test so I can be seen as competent.
13. I had practised for the NAPLAN test.
14. I think I achieved good results on the NAPLAN test.
15. I did poorly in Year 7 on NAPLAN so I did not want to do poorly again.


REFERENCES

ACARA. (2010). Assessment: National Assessment Program. Retrieved from http://www.acara.edu.au/assessment/assessment.html. (Accessed 8th March 2011)

ACARA. (2011). My School. Retrieved from http://www.myschool.edu.au. (Accessed 23rd March 2011)

Ames, C. (1992). Classrooms: goals, structures and student motivation. Journal of Educational Psychology. 84(3), 261-271.

Archer, J. (2008). Students' reason for working or not working in class: Aligning academic and social motivation. Paper presented at the annual meeting of the AARE, Brisbane, December, 2008.

Bandura, A. (1977). Social learning theory. Englewood Cliffs, New Jersey: Prentice Hall Inc.

Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, New Jersey: Prentice-Hall Inc.

Barker, K., Dowson, M., & McInerney, D. (2003). Conceptualising students' goals as multidimensional and hierarchically structured. NZARE Conference, November. Retrieved from http://www.aare.edu.au/03pap/bar03775.pdf. (Accessed 2nd April 2011)

Boston, K. (2009). League tables. Teacher. (205), 36-42.

Brophy, J. (1998). Motivating Students to Learn. USA: McGraw Hill Companies.

Burns, R. (2000). Introduction to Research Methods (4th ed.). Frenchs Forest, NSW: Pearson Education Australia.

Bush, G.W. (2001). Executive Summary of the No Child Left Behind Act of 2001. Retrieved from www2.ed.gov/nclb/overview/intro/execsumm.html. (Accessed 5th March 2011)

Caldwell, B. (2010). The impact of high stakes test driven accountability. Professional Voice. 8(1), 49-54.

Greene, J., Caracelli, V., & Graham, W. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 11(3), 255-274.

Davidson, J. (2009). NAPLAN or napalm? Using the new national tests to personalise learning. Teacher Learning Network. 16(1), 3-5.

Directgov. (2011). National Curriculum Teacher Assessments and Key Stage Tests. Retrieved from http://www.direct.gov.uk/en/Parents/Schoolslearninganddevelopment/ExamsTestsAndTheCurriculum/DG_10013041. (Accessed 5th March 2011)

Dowson, M., & Martin, A. (2009). Interpersonal relationships, motivation, engagement, and achievement: yields for theory, current issues and education practice. Review of Educational Research. 79(1), 327-366.

Graham, J. (2010). The trouble with My School. Professional Voice. 8(1), 7-12.

Jensen, B. (2010). Value added measures of school performance. Independence, 35(1), 1-6.

Killen, D. (2006). At the heart of education. EQ Australia. Spring, 38-39.

Klenowski, V. (2010). Are Australian assessment reforms fit for purpose: Lessons from home and abroad. QTU Professional Magazine. 25, 10-15.

Klinger, D., & Luce-Kapler, R. (2005). Uneasy writing: the defining moments of high stakes literacy testing. Assessing Writing 1, 157-173.

Lingard, B. (2009). Testing times: The need for new intelligent accountabilities for schooling. Queensland Teachers Professional Magazine. 24, 13-19.

Luke, A., & Woods, A. (2008). Literacy learning: The Middle Years. 16(3), 11-19.

Mansfield, C., & Wosnitza, M. (2010). Motivation goals during adolescence: a cross sectional perspective. Issues in Educational Research. 20(2), 149-165.

Mansfield, C. (2007). Academic and social goals in adolescence: developments and directions. Paper presented at the annual meeting of AARE, Fremantle, November, 2007.

Mansfield, C. (2010). Motivating adolescents: goals for Australian students in secondary schools. Australian Journal of Educational and Developmental Psychology. 10, 44-55.

Marsh, C. (2010). The education revolution: national curriculum and equity. In C. Marsh, Becoming a Teacher. (p. 10-36). Pearson Australia.

Martin, A.J., & Marsh, H.W. (2008). Academic buoyancy: towards an understanding of students everyday academic resilience. Journal of School Psychology. 46(1), 53-83.

Masters, G. (2009). The Shared Challenge: Improving literacy, numeracy and science learning in Queensland primary schools. Victoria: ACER.

MCEECDYA (2011). National Assessment Program. Retrieved from http://www.mceecdya.edu.au/mceecdya/about_mceecdya,11318.html. (Accessed 14th March 2011)

O'Keefe, D. (2011). Mental health program boosts NAPLAN results. Education Review. 11th August 2011.

O'Keefe, D. (2011). NAPLAN nightmares. Education Review. 11th August 2011.

Shaw, A. (2009). School performance reporting. Independence. 34(1), 13-15.

Smeed, J. (2010). Accountability through high stakes testing and curriculum change. Leading and Managing. 16(2), 1-15.

Stowe-Linder, J. (2009). A national curriculum: opportunities and threats. Principal Matters. Winter 2009, 4-8.

Thian, D. (2010). Making the most of NAPLAN test data. Independence. 35(1), 42-43.

Thompson, K. (2003). The impact of a cognitive-behavioural program on test anxiety symptoms. Retrieved from http://www.researchrepository.murdoch.edu.au/360/. (Accessed 3rd April 2011)

Thomson, P. (2008). Lessons for Australia? Learning from England's curriculum 'black box'. English in Australia. 43(3), 13-20.

Thrupp, M. (2009). Teachers, social contexts and the politics of blame. Queensland Teachers Professional Magazine. 24, 6-12.

US Department of Education. (2004). No Child Left Behind. Retrieved from www2.ed.gov/nclb/landing.jhtml. (Accessed 14th March 2011)

Whitehead, I. (2010). The murky waters of national testing. Leadership in focus. Winter (18), 46-49.

Lauren Belcastro and Helen Boon

James Cook University

Queensland
Table 1. Means and Standard Deviations (S.D.)
of all motivational and preparedness
constructs pre and post NAPLAN

Construct Mean N S.D

Mastery pre 1.69 151 0.48
Mastery post 1.79 151 0.93
Self-efficacy pre 1.52 152 0.60
Self-efficacy post 1.54 152 0.60
Performance pre 1.36 152 0.77
Performance post 1.67 152 0.82
Avoidance pre 1.88 153 0.80
Avoidance post 1.99 153 0.75
Preparedness pre 2.01 145 0.93
Preparedness post 1.82 145 0.87

Table 2. Comparisons of constructs before and after
NAPLAN testing (Paired T-test)

Construct                 Mean    S.D.   95% CI Lower  95% CI Upper     t     df   Sig.
Mastery pre-post         -0.10    0.82      -0.23          0.03       -1.53   150  0.13
Self-efficacy pre-post   -0.02    0.66      -0.13          0.09       -0.37   151  0.71
Performance pre-post     -0.31    0.64      -0.42         -0.21       -6.02   151  0.001
Avoid pre-post           -0.11    0.87      -0.25          0.03       -1.57   152  0.12
Preparedness pre-post     0.19    0.77       0.06          0.31        2.91   144  0.001
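The pre/post comparisons in Table 2 are paired (repeated-measures) t-tests. As an illustration only, a minimal sketch of how such a statistic is computed follows; the scores are invented for the example and are not the study's data.

```python
import math
from statistics import mean, stdev

# Illustrative pre/post construct scores for the SAME eight students
# (made-up values, NOT the study's data).
pre  = [2.0, 1.5, 1.0, 2.5, 1.5, 2.0, 1.0, 2.5]
post = [1.5, 1.0, 1.0, 2.0, 1.5, 1.5, 0.5, 2.0]

diffs = [a - b for a, b in zip(pre, post)]       # per-student change
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # paired t statistic, df = n - 1
print(f"t = {t:.2f}, df = {n - 1}")
```

The sign convention matches the table: a negative mean difference (pre minus post) indicates the construct increased after NAPLAN.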

Table 3. Correlations between Constructs Pre NAPLAN test

                     Mastery    Self-eff.   Perform.   Avoidance   Prepared.
                     pre        pre         pre        pre         pre

Mastery pre          1           .358 **     .594 **    0.147       .261 **
Self-efficacy pre     .358 **   1            .508 **   -0.05        .404 **
Performance pre       .594 **    .508 **    1           .235 **     .410 **
Avoidance pre        0.147      -0.05        .235 **   1           -0.039
Preparedness pre      .261 **    .404 **     .410 **   -0.039      1

**. Correlation is significant at the 0.01 level (2-tailed).

Table 4. Correlations between Constructs Post NAPLAN test

                     Mastery    Self-eff.   Perform.   Avoidance   Prepared.
                     post       post        post       post        post

Mastery post         1           .395 **     .710 **   0.093        .431 **
Self-efficacy post    .395 **   1            .376 **   0.089        .493 **
Performance post      .710 **    .376 **    1          0.103        .450 **
Avoidance post       0.093      0.089       0.103      1            0.093
Preparedness post     .431 **    .493 **     .450 **   0.093       1

**. Correlation is significant at the 0.01 level (2-tailed).
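The coefficients in Tables 3 and 4 are Pearson correlations between construct scores. A minimal sketch of the computation follows, using invented scores (not the study's data) for two hypothetical constructs:

```python
from statistics import mean

# Made-up per-student construct scores for illustration only.
x = [1.0, 1.5, 2.0, 2.0, 2.5, 1.0, 3.0, 2.5]   # e.g. a mastery-type score
y = [1.5, 1.0, 2.0, 2.5, 2.0, 1.5, 2.5, 3.0]   # e.g. a preparedness-type score

mx, my = mean(x), mean(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation sum
var_x = sum((a - mx) ** 2 for a in x)
var_y = sum((b - my) ** 2 for b in y)
r = cov / (var_x * var_y) ** 0.5                       # Pearson r, in [-1, 1]
print(f"r = {r:.3f}")
```

A value near 1 or -1 indicates a strong linear relationship; the tables additionally flag coefficients significant at the 0.01 level with **.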

Table 5. Low and Average Performance Goal Orientation
Means and Standard Deviations (S.D.) (N = 159)

 N Mean S.D Minimum

Self-Efficacy pre Average 70 1.79 0.59 0.00
 Low 80 1.25 0.51 0.33
Avoidance pre Average 71 2.02 0.83 0.33
 Low 80 1.77 0.75 0.00
Preparedness pre Average 68 2.27 0.87 0.00
 Low 80 1.76 0.95 0.00
Mastery pre Average 71 1.92 0.46 0.33
 Low 81 1.49 0.41 0.67