
Article Information

  • Title: Classroom-based assessment: and the issue of continuity between primary and secondary school languages programs.
  • Author: Hill, Kathryn
  • Journal: Babel
  • Print ISSN: 0005-3503
  • Year of publication: 2010
  • Issue: November
  • Language: English
  • Publisher: Australian Federation of Modern Language Teachers Associations
  • Keywords: Ethnography; Language instruction

Classroom-based assessment: and the issue of continuity between primary and secondary school languages programs.


Hill, Kathryn



Abstract

This article presents selected findings from an ethnographic study of classroom-based assessment practices in languages classrooms (Indonesian) in the final year of primary (Year 6) and the first year of secondary (Year 7), respectively. In particular, the paper focuses on differences between the respective year levels in how learning was assessed as well as what was assessed, and considers the implications of these differences for continuity between primary and secondary school languages programs.

Keywords

languages, transition, continuity, Indonesian, assessment

Introduction

This article presents selected findings from an ethnographic study of classroom-based assessment practices in languages classrooms in Victoria in the final year of primary (Year 6) and the first year of secondary (Year 7), respectively (note that these year levels vary across different Australian States and Territories).

The context for the study is the push, both in Australia and internationally, to commence the study of languages in primary school. Notwithstanding the range of benefits cited for an early start to languages learning, the primary languages (PL) policy is clearly motivated by a desire to effect higher proficiency outcomes in the longer term (Sharpe, 2001; Lo Bianco, 2001). However, while the immediate outcomes of PL programs may be positive (e.g., Clyne, Jenkins, Chen, Tsokalidou, & Wallner, 1995; Blondin, Candelier, Edelenbos, Johnstone, Kubanek-German, & Taeschner, 1998), the evidence suggests that they do not necessarily result in any long-term proficiency advantage (Johnstone, 1999).

Research has identified 'discontinuity' as a key factor responsible for undermining any of the advantages gained from an early start to languages learning (Balandier-Brown, Bolster, & Rea-Dickins, 2003; Blondin et al., 1998; Burstall, Jamieson, Cohen, & Hargreaves, 1974; Hill, Davies, Oldfield, & Watson, 1997; Hill, 2001; Kubanek-German, 1998). The National Statement for Languages Education in Australian Schools (MCEETYA, 2005), for example, identifies continuity within programs as well as between primary and secondary school levels as one of the major challenges to the success of languages education. However, despite increasing interest in the issue of transition and continuity between primary and secondary school languages programs, this issue has remained relatively under-researched (Rea-Dickins, 2009).

It is important to acknowledge that problems with continuity are not exclusive to languages education programs. Edelenbos and Koster (1993), for example, also identified discontinuity as a problem in other learning areas such as Dutch (first language) and mathematics. In Australia, there is also a growing recognition of the need to improve continuity in schooling, especially during the so-called 'middle years' of schooling (Years 5-9) (DET, 2003). In practice, however, policy-makers have been preoccupied with the areas of literacy, numeracy, and science, reflecting educational priorities both nationally and around the world. Unfortunately, this preoccupation seems to have contributed to a perception that studying languages takes time away from these 'more important' learning areas (e.g., Lo Bianco, 2003). Hence, as Sharpe (2001) wryly notes, languages is the only area of the curriculum where 'anyone seriously pursues the argument that it is not worth primary schools teaching anything because pupils can catch up later' (Sharpe, 2001, p.37).

There are a number of factors that may contribute to discontinuity. These include a tendency for high school teachers to ignore any prior learning and treat all students as 'beginners' (Edelenbos & Koster, 1993; Edelenbos & Suhre, 1995; Hill, 2003; Low, 1999; Oostdam & Van Toorenburg, 2002). Furthermore, differences have been found in primary and secondary school languages teachers' commitment to proficiency (as distinct from 'enjoyment') as a goal for their programs (Crawford, 2001) as well as in their pedagogical approach (Hill, 2003; Low, Duffield, Brown, & Johnstone, 1993; Low, Brown, Johnstone, & Pirrie, 1995; Sharpe, 2001).

Classroom-based assessment

Purvis and Ranaldo (2003) compared primary and secondary school teachers' judgements of learners' performance on a jointly designed summative assessment task and concluded that teachers at the respective levels applied different notions of quality and standard. The research reported on here used a study of classroom-based assessment (CBA) as a lens to investigate the continuity issue. A distinction is often made between summative assessment ('assessment of learning') and formative assessment ('assessment for learning' and 'assessment as learning') (e.g., VCAA, 2008); CBA, as it is defined here, encompasses all three of these aspects of assessment.

It is generally agreed that CBA comprises three key components: evidence, interpretation, and use. However, a review of the literature reveals a diverse range of views about the nature of each component (Table 1).

In line with the exploratory approach used in the study, the definition of CBA adopted here attempts to incorporate all of the dimensions presented in Table 1. That is,
   any reflection by teachers (and/or by
   learners) on the qualities of a learner's
   (or group of learners') work, and the
   use of that information by teachers
   (and/or learners) for teaching, learning,
   reporting, management (teaching/
   behaviour), or socialisation purposes.


A problem encountered early in the data collection period was that assessment processes are not always observable. This problem is demonstrated in the following interaction with the Year 6 teacher:

T: It's all like you've got antennae sticking out of your ears and it all comes in.

R: Well, that's right.

T: You're constantly processing it, you're constantly building up, I mean, I just know, just sitting in class, you know, you become aware of who's got the answer or who's gonna have a go at it. Like Arthur will keep trying 'til the cows come home. You know he won't get it straight away but, you know? So but, there's that but there's also, there's their identity in the class and there's all sorts of things ...

In an attempt to overcome the problem of 'observing' the essentially intuitive forms of assessment alluded to in this exchange, the unit of analysis chosen for the study was the 'assessment opportunity'. Rea-Dickins (2006) used the term 'assessment opportunity' to refer to activities actually identified as 'for assessment' by the teacher. In contrast, this study focuses on the opportunities for assessment afforded by the respective classrooms (i.e., regardless of whether the teacher has identified them as such). This is defined as
   any actions, interactions, or products
   (planned or unplanned, deliberate or
   unconscious, explicit or embedded)
   with the potential to provide
   information on the qualities of a
   learner's (or group of learners') work.

Figure 1: Design and data collection

Year 6: Term 4, 2005 (10 weeks)
[down arrow]
Year 7: Terms 1 & 2, 2006 (10 weeks)


Design and data collection

The study used ethnographic methods (including participant observation and case study) to investigate classroom-based assessment practices in Indonesian language classrooms at two successive levels of schooling: Year 6 (the final year of primary school) and Year 7 (the first year of secondary school in Victoria; see Figure 1).

There were a number of reasons for selecting Indonesian for this study. First of all, at the time data were collected, Indonesian, after Italian, was the most widely studied language in Victorian primary and secondary schools (DOE, 2006). Secondly, the researcher's qualifications in Indonesian enhanced her credibility and made her a potential resource from the teachers' perspective. Competence in Indonesian also afforded the researcher greater access to classroom activities than if a language less familiar to the researcher had been chosen.

Data collection for the study took place in Term 4, 2005 (Year 6) and Terms 1 and 2, 2006 (Year 7), comprising a total of ten weeks in each classroom. The study was longitudinal in the sense that the Year 7 cohort included a group of students who had originally participated as Year 6 students.

Sampling was motivated by the need to identify cases that were likely to provide the best opportunity to investigate the issue of transition and continuity (Eisenhardt, 2002; Patton, 2002). For example, it was important to identify sites where the languages programs were stable and well-established and where students had the opportunity to study the same language at both primary and high school level. It was also considered important for the participating teachers to have a genuine interest in the research as well as a high level of competence in language teaching (Duff & Uchida, 1997).

One high school and one 'feeder' primary school from the same metropolitan area were recruited for the study. The languages programs at the selected schools were typical in terms of contact hours, resourcing, and student populations. Two part-time teachers shared responsibility for the Year 7 Indonesian class whereas a single teacher was responsible for teaching the Year 6 class. Data included classroom observation and field notes, audio recordings, and documents. Table 2 lists the range of data identified as potential sources of assessment-related information.

Research questions

The study investigated five main research questions. The first four questions have been identified by Leung (2005) and Rea-Dickins (2006) as areas in need of further research. The final question relates the results of the previous questions to the issue of continuity.

1 What do languages teachers do when they carry out classroom-based assessment? (Leung, 2005)

2 What do they look for when they are assessing learners? (Leung, 2005)

3 What theory or 'standards' do they use? (Leung, 2005)

4 Do learners share the same understandings? (Rea-Dickins, 2006)

5 What are the implications of these results for continuity between primary and secondary school languages programs?

Data were analysed using the framework presented in Figure 2. While the five main research questions provide the overall structure for the framework, the categories are derived from themes and patterns arising from the data. The discussion that follows will focus on each of the four sections of the framework in turn.

What do teachers do?

Four distinct CBA processes were identified in the data. These were 'planning' assessment, 'framing' assessment to learners, 'conducting' assessment, and 'using' assessment-related data. As Table 3 shows, the main differences found between the two levels related to how assessment was framed and conducted.

While assessment activities had been planned in advance and in some detail in Year 6, they were embedded in normal classroom activities and essentially not 'visible' as assessment to learners. In contrast, the assessment process in Year 7 was highly transparent, featuring detailed written instructions, explicit advance information about scoring and criteria, as well as information about how the outcomes of assessment would be used. Assessment in Year 7 formed a routine and predictable part of the teaching cycle with most activities associated with some type of formal assessment activity.

The following examples from the Year 7 data relate to a role-play, or 'dialogue' task, and demonstrate the different ways the 'assessment intention' of the activity was flagged to learners (example 1).

Students were subsequently provided with detailed written instructions (example 2) and a copy of the assessment rubric to put in their workbooks (example 3).
Example 1: Teacher's description of 'assessment intention'

   Ok we need to do some oral work in the next couple of days because we need to prepare you for your first assessment. It's based on a lot of the work we've been doing today and yesterday.

Example 2: Task specifications

DIALOGUE--script/conversation (2 people)

TASK - Oral presentation

In pairs you are to prepare a dialogue in Indonesian. You are to pretend that you are meeting someone for the first time.

Each person to speak at least 12 times.

You must include

* greetings

* age

* name

* address (name and number)

* telephone

* hobbies/interests

* family

* pets

* birthday

* goodbye


Another key difference between the two year levels was that assessment tasks in Year 6 were usually completed in small groups, whereas in Year 7 students more commonly completed assessment tasks either individually or in pairs. While the preference for group work is consistent with the emphasis in the current curriculum and standards framework (VCAA, 2005) on the development of interpersonal skills ('Interpersonal Learning: Working in Teams'), it is potentially problematic when it comes to reporting on individual achievement.

A number of researchers have noted the importance of understanding peer relationships in the context of classroom-based research. Torrance and Pryor (1998), for example, concluded that collaboration on assessment tasks both reflected and embedded the social relationships of the participants in their study while Toohey (2001) documented how patterns of domination and subordination impacted individual students' opportunities for learning. In the present study, it was observed that certain group dynamics (such as systematic exclusion or disengagement) limited the opportunity for some members to participate in collaborative assessment activities thus reducing the value of using the product of group work (e.g., through inspection of workbooks) to assess individual progress. While this problem can be overcome to some extent by observing students during completion of collaborative tasks, the experience in this study was that many students only appeared 'on task' when the teacher was nearby.

What do teachers look for?

This question was investigated through an analysis of written and verbal instructions, assessment rubrics, written and verbal feedback to learners, and discussions in reporting meetings in addition to written reports.

Table 4 provides a summary of the 'valued enterprises', 'valued qualities' (criteria) and 'standards' operating in the respective classrooms. This shows, firstly, that although there was broad agreement in the type of qualities valued in the respective classrooms, a wider range of linguistic criteria was applied in Year 7. Secondly, comparison of the content of assessment activities suggested there was a greater emphasis on Indonesian history and culture in Year 6 when compared to Year 7, where the focus was more or less exclusively linguistic. Finally, the concept of 'standard' in the respective classrooms was found to differ in two respects. The first was the use of descriptive assessment scales and rubrics in Year 7 compared to a tendency in Year 6 to define 'standard' in terms of quantity (e.g., number of correctly formed sentences). The second was that the Year 6 teacher appeared focused on exposing learners to linguistically and culturally rich content, while in Year 7 the focus was on mastery of a relatively narrow range of linguistic content.

What theories or standards do they use?

A number of researchers have postulated a close relationship between representations of the discipline (Indonesian language education), pedagogic principles, and teachers' assessment practices (Leung, 2005; James, 2006; Wiliam, 2001). This question was investigated through analysis of internal policy, planning and reporting documents, as well as teacher interview data. The differences found between the two levels are summarised in Table 5.

Differences in the relative emphasis given to the cultural component of Indonesian, noted earlier, are consistent with differences in the respective teachers' commitment to the relevant external frameworks (i.e. CSF II and VELS). The Victorian Essential Learning Standards (VELS; VCAA, 2005) replaced the Curriculum and Standards Framework II (CSF II, 2000). A key difference between the two frameworks is that the VELS places an increased emphasis on the intercultural aspects of language learning relative to the CSF II. VELS also emphasises 'interdisciplinary' and 'interpersonal', in addition to discipline-specific, learning. Both frameworks are linked to year levels (or 'stages of learning') based on a recommended minimum of 150 minutes of instruction per week, and specify a separate trajectory (or 'pathway') for students who begin learning the language in primary school and for those who begin the language for the first time in high school. Both are intended as a set of guidelines: neither is intended to be prescriptive in relation to programs or resourcing. While schools are obliged to use these frameworks for reporting, this is not considered 'high stakes' as is the case, for example, with assessment and reporting for the UK National Curriculum. Rather, there is an 'understanding' that Government schools will make provision for their students to meet the appropriate standard for their year level.

[FIGURE 3 OMITTED]

In Year 6, the teacher-authored planning and assessment document was explicitly designed to comply with VELS (Level 4), which places a greater emphasis on the cultural component of the discipline (or 'Intercultural Understanding', ICU) than its predecessor (CSF II).

[FIGURE 4 OMITTED]

This is in contrast to Year 7, where the external frameworks appeared, at best, to have been fitted around existing practice. This is evidenced in the following discussion with the two Year 7 teachers.

R: How does the stuff you're doing tie in with the VELS stuff? Do you think it needs to be, or does it?

T1: I don't think it ties in very well at all. I just get a confused with all this VELS stuff I mean we've only done one reporting on it anyway and we've found it ...

T2: I found it very similar to what I used to write anyway.

T1: Yeah.

R: With the, did you use the CSF before, or you didn't bother?

T1: Ohh, not really. It's a hard one ...

T2: In all honesty, no.

T1: Yeah hmm. It's a hard one, yeah.

While it is important to recognise that data collection took place in the first year of VELS implementation (and, hence, the program was still in a period of transition to the new assessment and reporting framework), this finding is consistent with research which identified a tendency for Australian (adult ESL) teachers to use their professional judgment and knowledge of what students 'typically' achieve rather than rely on published criteria and standards (Arkoudis & O'Loughlin, 2004; Davison, 2004).

Finally, when reporting achievement the Year 7 teachers displayed a greater reliance on 'hard' evidence (based on formal assessment) when compared to the Year 6 teacher who also relied on intuition and knowledge about students accumulated over time (as evidenced by the 'antennae' comment quoted earlier). However, these differences may be, at least partly, explained by the fact that the Year 6 teacher had taught the same students for four years in contrast to Year 7, where teachers were meeting students for the first time.

Do learners share the same understandings?

The fourth research question investigated learner understandings about assessment processes, criteria, and standards. An analysis of how one pair of Year 7 learners interpreted the feedback on the 'dialogue' task (presented earlier) is illustrative.

Figure 3 shows the completed assessment rubrics for the two students (Dan and Adam) and Figure 4 shows written feedback on Dan's copy of the script for their dialogue. Note that Adam had not bothered to copy their jointly composed script into his workbook as requested. Note also that Adam had scribbled something in Dan's workbook that Dan had attempted to cross out (see heavy black marks in the middle of the page).

In the following interaction, Dan, Adam, and a third unidentified student are discussing the boys' results. Adam starts by reading the teacher's comment aloud (text in capitals).

1 A: Oh my god! (*) I got 25 out of, I got 25 for this!

2 S?: What did you get?

3 A: I still beat him.

4 S?: Did you get a B? What did you get?

5 A: Dan you writ* the thing in the book and I didn't write it in the book and I still got a higher mark.

6 S?: You beat me too.

7 A: But he, he writ* the thing in his book, I didn't and I still beat him.

8 D: Yeah. Look I got a bad marks cause of that crap ((points to scribble))

9 A: ((laughs))

10 D: '... the last time I let someone borrow my book.

11 A: So Dan. '... you like, disgusted?

12 D: I don't know.

13 A: I got 25, I got an A.

14 S?: (*)

15 A: I got an A. I shouldn't of gotten an A.


In summary, despite a high level of transparency in terms of task specifications and criteria, the pair seems to have completely misunderstood the basis for their results. For example, although the marked assessment rubric is clearly labelled 'oral presentation', the boys interpret their marks in relation to the written component (i.e., the script). Hence, Adam believed that he didn't 'deserve' the higher score on the grounds that he had not copied the script into his book.

It is also interesting that Dan chose to focus exclusively on the teachers' feedback about 'presentation' (due to the crossing-out) at the expense of other aspects of the feedback provided (e.g., spelling and punctuation). As the 'dialogue' task was their first assessment in Year 7, it could be argued that this unorthodox interpretation of the feedback provides insight into the pre-existing understandings of assessment that the learners have brought with them from primary school. Perry (1998), for example, found that primary school students perceived 'neatness', 'correctness', and 'completeness' as the main criteria used by their teachers. Finally, despite the provision of explicit criteria (on the assessment rubric) the boys have adopted a comparative (or competitive) frame of reference ('I beat him') to interpret their scores.

In short, it appears that these students were on a steep learning curve in terms of their understanding of 'what counts' in the Year 7 Indonesian classroom. Specifically, they needed to understand that their scores were calculated with reference to specific qualities of their performance (criterion-referenced), rather than by comparing their performance to those of other students (norm-referenced). They also needed to learn how to use the stated criteria (rather than notions of 'completeness' and 'presentation') to interpret their results.

The students' attitude toward collaborative versus individual work was also quite illuminating.

1 Jade: Um, I don't like working on my own sometimes. Cause it's all right sometimes cause then you can't say anyone copied. Well, yyyeah. But when you're working in a group, say I don't understand colours and I need colours, she can use her knowledge of colours.

2 Sim: Everyone has their speciality but then there might be something else that you're really bad at that someone's really good at, so.

3 R: Ok so you think learning in pairs or in groups is the way to go?

4 Jade: Yeah, because I think you learn a bit more in pairs or groups cause some people know more than you. Say they know a bit more about colours you learn off them. But then if you work by yourself (*) you won't understand a bit.

5 R: Mm

6 Jade: When the teacher's like, 'time' and stuff, sometimes you don't understand it but other people don't understand it, but other people do understand it. So if there's someone there with you, you can understand more because one of you has probably understanded* it.

The type of 'distributed' competence discussed in this interaction equates to what Wenger (1998) has termed 'knowing in practice'. This notion of competence supports a communal memory that allows individuals to do their work without needing to know everything, and helps newcomers join the community by participating in its practice (Wenger, 1998, p. 46).

However, this conceptualisation of competence runs counter to the need for teachers to assess individual progress. This tension is illustrated in the following interaction, where the Year 7 students were asked about a recent vocabulary test.

1 R: I noticed [the teacher] she really made you space out.

2 Tam: Yeah.

3 Jess: That was a bit annoying.

4 Tam: We didn't really copy anyway.

5 Jess: She made us cover our work kinda thing.

6 R: So in [T2's] class, when you did the test do you think anybody ...

7 Tam: Copy? Yes.

8 Jess: They did copy but they were sort of helping each other, were sort of helping each other not, like, copying.

9 Tam: [The teacher] did that, separate us, to see where we were. So in a way that's good, but yeah.

10 Jess: If you don't know the answer to, like, loads of them you don't know what to do.

However, the students appeared to apply a different notion of 'standard' when it came to Maths.

1 R: You just said before that you don't work in pairs in Maths cause that's the 'main one'. What do you mean?

2 Jade: Oh well, as in, sort of, we will use Indonesian if we go to like, other countries and stuff where they speak that language but with Maths and English they're like ones we would use in everyday life other than with Indonesia, with Indonesia when we're in Australia.

3 R: Mm, but why does that mean you don't do it in pairs?

4 Jade: So that you can do it by yourself and you know everything you're doing.

5 Sim: They can determine our own intelligence of the subject.

6 Jade: Yeah (*)

7 Sim: Yeah, yeah cause, cause, like, it's Maths and they want you to learn that stuff for your own like.

In other words, a distributed form of competence (or 'knowing in practice') was considered sufficient standard for Indonesian, whereas 'individual mastery' was required for Maths, a key component of the so-called 'core' curriculum.

Implications for continuity

These results have a number of implications for continuity between the two year levels. Table 6 summarises the ways in which 'competence' appeared to differ across the respective year levels. Firstly, in the Year 6 classroom competence was characterised as a property of the group ('knowing in practice') rather than of individual students. Secondly, there appeared to be differences in the content ('valued enterprises') which was assessed, and in the standard expected. Specifically, competence in the Year 7 classroom appeared to entail mastery of a relatively narrow range of linguistic input compared to Year 6, where there appeared to be a greater focus on exposing students to rich cultural and linguistic input (without necessarily requiring mastery of it). Finally, while they shared much in common, competence in the Year 7 class appeared to be defined in terms of a broader range of criteria ('valued qualities') than in Year 6.

Discussion

This article presents some findings from an ethnographic study of CBA practices at two successive levels of schooling. The results suggest that the Year 6 and 7 Indonesian classrooms represent two distinct assessment cultures. This has clear implications for continuity at this critical juncture of schooling.

The study also identified a number of tensions in CBA that need to be recognised and managed. The first is the need to balance a desire to expose learners to culturally and linguistically rich input (as intended by current curriculum documents) against the need for students to demonstrate mastery of that content. The other key tension is between the emphasis on generic skills such as team work (emphasised in the current curriculum and standards framework, VELS) and the requirement to report on individual performance.

The intention was not to make judgements about teachers but rather to better understand their assessment practices. Nonetheless, it is hoped that the findings of this study will encourage languages teachers to reflect more deeply on their own assessment practices. For example, teachers need to be aware that differences in how learning is assessed in the respective classrooms may be as important as what is assessed in determining 'what counts' in the languages classroom. Teachers may also consider the benefit of greater transparency in assessment processes as well as increased learner involvement in assessment more generally. Finally, the demonstrated capacity for students to misunderstand feedback underscores the importance of training learners in the use of assessment rubrics in addition to providing explicit information about task, quality, and standard.


References

Arkoudis, S. & O'Loughlin, K. 2004. Tensions between validity and outcomes: teacher assessment of written work of recently arrived immigrant ESL students. Language Testing, 21, 3, 284-304.

Balandier-Brown, C., Bolster, A., & Rea-Dickins, P. 2003. Investigating issues of transition between primary and secondary in modern foreign languages (MFL). Bristol: University of Bristol.

Blondin, C., Candelier, M., Edelenbos, P., Johnstone, R., Kubanek-German, A., & Taeschner, T. 1998. Foreign languages in primary and pre-school education: context and outcomes. London: CILT.

Burstall, C., Jamieson, M., Cohen, S., & Hargreaves, M. 1974. Primary French in the balance. Windsor: NFER Publishing Company.

Clyne, M., Jenkins, C., Chen, I.Y., Tsokalidou, R., & Wallner, T. 1995. Developing second language from primary school. Deakin: NLLIA.

Crawford, J. 2001. Teachers' attitudes to proficiency as a goal in primary language programs. Paper presented at the Applied Linguistics Association of Australia Congress, Canberra, 5-8 July 2001.

CSF II, 2000. Curriculum and Standards Framework II. Retrieved 9 August 2010 from http://vels.vcaa.vic.edu.au/support/csf.html

Davison, C. 2004. The contradictory culture of teacher-based assessment: ESL teacher assessment practices in Australian and Hong Kong secondary schools. Language Testing, 21, 3, 305-334.

DET [Department of Education & Training]. 2003. Blueprint for Government schools. Retrieved 5 August 2005 from www.education.vic.gov.au/about/directions/blueprint2008

DOE [Department of Education]. 2006. Languages Other Than English in Government Schools 2006. Retrieved 24 August 2010 from www.education.vic.gov.au/studentlearning/teachingresources/lote

Duff, P.A. & Uchida, Y. 1997. The negotiation of teachers' sociocultural identities and practices in postsecondary EFL classrooms. TESOL Quarterly, 31, 3, 451-486.

Edelenbos, P. & Koster, C.J. (Eds). 1993. Engels in het basisonderwijs. Bussum: Coutinho.

Edelenbos, P. & Suhre, C.J. 1995. English in Dutch primary education. In P. Edelenbos & R. Johnstone (Eds), Researching languages at primary school: some European perspectives, 47-58. Stirling: Scottish CILT.

Eisenhardt, K.M. 2002. Building theories from case study research. In A.M. Huberman & M.B. Miles (Eds), The Qualitative Researcher's Companion, 5-35. Thousand Oaks: Sage.

Hill, K. 2003. Assessment in transition. Babel, 38, 1, 19-24 & 30.

Hill, K. 2001. Between the cracks: the transition from primary to secondary school foreign language study. Asia Pacific Applied Linguistics: The Next 25 Years. Proceedings of the 2001 ALAA National Congress. ACT: University of Canberra, Centre for Research in Professional Education.

Hill, K., Davies, A., Oldfield, J., & Watson, N. 1997. Questioning an early start: the transition from primary to secondary foreign language learning. Melbourne Papers in Language Testing, 6, 2, 21-36.

James, M. 2006. Assessment, teaching and theories of learning. In J. Gardner (Ed.), Assessment and learning, 47-60. London: Sage.

Johnstone, R. 1999. Research agenda for modern languages. In P. Driscoll & D. Frost (Eds), The Teaching of Modern Foreign Languages in the Primary School, 197-209. London: Routledge.

Kubanek-German, A. 1998. Primary foreign language teaching in Europe: trends and issues. Language Teaching, 31, 4, 193-205.

Leung, C. 2005. Classroom teacher assessment of second language development. In E. Hinkel (Ed.), Handbook of Research in Second Language Teaching and Learning, 869-888. Mahwah, N.J.: Lawrence Erlbaum Associates.

Lo Bianco, J. 2001. Calling in the big guns! Australian Language Matters, 9, 1, 6-10.

Lo Bianco, J. 2003. A letter to ALAA. ALAA Newsletter, 23 June 2003.

Low, L. 1999. Policy issues for primary modern languages. In P. Driscoll & D. Frost (Eds), The Teaching of Modern Foreign Languages in the Primary School, 50-63. London: Routledge.

Low, L., Brown, S., Johnstone, R., & Pirrie, A. 1995. Foreign languages in Scottish primary schools--Evaluation of the Scottish pilot projects: Final Report to Scottish Office. Stirling: Scottish CILT.

Low, L., Duffield, J., Brown, S., & Johnstone, R. 1993. Evaluating foreign languages in Scottish primary schools. Report to Scottish Office. Stirling: Scottish CILT.

MCEETYA. 2005. National statement for languages education in Australian schools: National plan for languages education in Australian Schools 2005-2008. Hindmarsh, SA: DECS Publishing.

Oostdam, R. & Van Toorenburg, H. 2002. 'Leuk is not enough': Het vraagstuk van de positionering van Engels in het basisonderwijs en de aansluiting met het voortgezet onderwijs. Levende Talen Tijdschrift, 3, 4, 3-17.

Patton, M.Q. 2002. Qualitative Research and Evaluation Methods. Thousand Oaks: Sage.

Perry, N. 1998. Young children's self-regulated learning and contexts that support it. Journal of Educational Psychology, 90, 715-729.

Purvis, K. & Ranaldo, T. 2003. Providing continuity in language teaching and learning from primary to secondary. Babel, 38, 1, 13-18.

Rea-Dickins, P. 2006. Currents and eddies in the discourse of assessment: a learning focused interpretation. International Journal of Applied Linguistics, 16, 2, 164-188.

Rea-Dickins, P. 2009. Personal communication. February 1 2009.

Sharpe, K. 2001. Modern foreign languages in the primary school. London: Kogan Page Ltd.

Toohey, K. 2001. Disputes in child L2 learning. TESOL Quarterly, 35, 2, 257-278.

Torrance, H. & Pryor, J. 1998. Investigating formative assessment: teaching, learning and assessment in the classroom. Buckingham: Open University Press.

VCAA [Victorian Curriculum & Assessment Authority]. 2008. Framework of essential learnings. Languages other than English. Melbourne: Victorian Curriculum and Assessment Authority.

VCAA [Victorian Curriculum & Assessment Authority]. 2005. Victorian essential learning standards (VELS). Melbourne: Victorian Curriculum and Assessment Authority.

Wenger, E. 1998. Communities of practice: learning, meaning, and identity. Cambridge: Cambridge University Press.

Wiliam, D. 2001. An overview of the relationship between assessment and the curriculum. In D. Scott (Ed.), Curriculum and Assessment, 1, 165-185. Westport: Ablex Publishing.

Kathryn Hill has a B.A. (Indonesian, Politics, & Arabic), Dip.Ed. (Indonesian/TESL), M.A., and PhD (Applied Linguistics) from The University of Melbourne. She spent fifteen years as Research Fellow in the Language Testing Research Centre, University of Melbourne, and the Australian Council for Educational Research, where she was involved in a number of projects related to school-based languages learning. During that time, she also taught postgraduate courses in applied linguistics at the University. She currently lectures in clinical communication in the Faculty of Medicine, Dentistry and Health Sciences. Kathryn is a regular presenter at local and international conferences and has numerous publications, including the 2004 International Language Testing Association (ILTA) best paper in the field of language testing. Her research interests include language testing and language program evaluation, and Languages for Specific Purposes (LSP), including English for health professionals. Her email address is kmhill@unimelb.edu.au.
Table 1: Parameters of classroom-based assessment.

EVIDENCE

Input        What is assessed?            Valued enterprises/behaviours

Approach     How is evidence collected?   * Planned/Incidental
                                          * Visible/Embedded

Target       Who is assessed?             * Individuals
                                          * Groups/Whole class

Agent        By whom?                     * Teachers
                                          * Students
                                          * Teacher/Student collaboration

INTERPRETATION

Reflection   Level of attention           * Sustained/Fleeting

Criteria     Values guiding assessment    * Explicit/Unconscious criteria
                                          * Official/Idiosyncratic criteria

USE

Purpose      How is evidence used?        Summative
                                          * Assign level (reporting)
                                          Formative
                                          * Teaching (plan/modify)
                                          * Learning
                                          Management
                                          * Managing lesson
                                          * Managing behaviour
                                          * Socialisation (into classroom culture)

User         By whom?                     * Teachers
                                          * Learners
                                          * School

Table 2: Potential sources of assessment-related information

Teacher Interactions

* planning sessions

* reporting sessions

* discussions with researcher

Teacher-Student Interactions

* written or verbal instructions

* requests for explanation/clarification

* oral or written feedback

Student Interactions

* discussion of task requirements

* discussion of feedback/results

* self- and peer evaluations

Documents

* assessment tasks

* assessment rubrics

* course outlines, work requirements

* student workbooks

* statements of aims (policy)

* teachers' notes and 'running records'

* summative reports

Table 3: Comparison of what the Year 6 and 7 teachers do

                 Year 6       Year 7

1.1 Planning     + detail     - detail
1.2 Framing      + implicit   + explicit
1.3 Conducting   + embedded   + formal, transparent
                 + group      + individual (pair)

Table 4: Comparison of what the Year 6 and 7 teachers look for

Valued qualities

(Inter)personal
   Year 6: --
   Both: Capable, Confident, Bilingual, Behaviour
   Year 7: 'Fragility'

Work habits
   Year 6: --
   Both: Group work, Effort, Organisation; Task completion/Submission of work
   Year 7: --

Linguistic
   Year 6: Phrasing (reading aloud); Variety (question types)
   Both: Accuracy (grammar/word order, spelling, punctuation, meaning); Pronunciation
   Year 7: Accent, Intonation, Body Language, Fluency, Comprehensibility, Vocabulary, Credibility, Performance

Other
   Year 6: 'Winning' (finishing first)
   Both: --
   Year 7: --

Valued enterprises
   Year 6: Culture (ICU) & Language
   Both: Listening, Speaking, Reading, Writing
   Year 7: Language

Standard
   Year 6: Quantity + Exposure (rich)
   Both: --
   Year 7: Assessment rubrics/scales + Mastery (narrow)

Table 5: Comparison of theories or standards used

              Year 6               Year 7

Discipline    + cultural content   + linguistic content
Pedagogy      + exposure           + mastery
Assessment    + intuition          + evidence

Table 6: Notions of 'competence' in Years 6 & 7

                         Year 6                  Year 7

Location of competence   + Distributed (group)   + Individual (?)
Valued enterprises       + ICU                   + Language
Valued qualities         Presentation,           Broader range of
                           accuracy,               linguistic
                           completeness            criteria
Notion of standard       + Exposure (rich)       + Mastery (narrow)
                         Quantitative             Descriptive

Example 3: Assessment rubric

ORAL PRESENTATION: Year 7 Dialogue (Term 1, 2006)
Nama Saya:
Kelas:
                                      Guru Saya:

                                      Word    Body       Participation
Accent   Intonation   Pronunciation   Order   Language   & group work

  5          5              5           5        5             5
  4          4              4           4        4             4
  3          3              3           3        3             3
  2          2              2           2        2             2
  1          1              1           1        1             1

OVERALL ASSESSMENT:

Figure 2: Framework for data analysis

1. What do teachers do?

1.1 Planning Assessment         * Task specifications

                                * Focus of assessment

                                * Relationship to instruction

                                * Relationship to external frameworks

1.2 Framing Assessment          * Explicit/Implicit

1.3 Conducting Assessment       * Formal/Instruction-embedded

                                * Planned/Unplanned

                                * Assessment of group/individuals

1.4 Using Assessment Data       * Teaching

                                * Learning (feedback)

                                * Reporting

                                * (Management, Socialisation)

2. What do teachers look for?

2.1 In Advance                  * Written/Verbal instructions

                                * Assessment rubrics

2.2 In Feedback                 * Verbal feedback (during/after performance)

                                * Written feedback

2.3 In Reporting                * Reporting decisions

                                * Written reports

3. What theory or 'standards' do they use?

3. Teacher Theories & Beliefs   * The discipline (Indonesian)

                                * Language, second language learning/teaching

                                * Assessment

4. Do learners share the same understandings?

4. Learner Theories & Beliefs   * The discipline (Indonesian)

                                * Language, second language learning

                                * Assessment (criteria)