
Article Information

  • Title: Reconciling DIBELS and OSELA: what every childhood educator should know.
  • Authors: Li, Xiaoping; Zhang, Mingyuan
  • Journal: Journal of Research in Childhood Education
  • Print ISSN: 0256-8543
  • Publication Year: 2008
  • Issue: September
  • Language: English
  • Publisher: Association for Childhood Education International
  • Keywords: Early childhood education; Early literacy; Education law; Educational assessment; Educational evaluation; Literacy; Teachers

Reconciling DIBELS and OSELA: what every childhood educator should know.


Li, Xiaoping ; Zhang, Mingyuan


Abstract. Since the No Child Left Behind Act of 2001, teachers and parents have witnessed a series of heated debates on the most basic issues of literacy assessment--what to assess and how to assess it. In particular, the controversy rages over DIBELS and OSELA, two popular early literacy assessment instruments. The purpose of this article is to address these early literacy assessment issues from a balanced literacy perspective. In particular, the authors will: 1) introduce DIBELS and OSELA assessments; 2) examine DIBELS and OSELA from educational philosophy, educational psychology, and literacy development theory bases; 3) analyze the DIBELS and OSELA debate from a balanced literacy perspective; and 4) discuss implications and make recommendations for reconciling the DIBELS and OSELA debate.

**********

Nicole, a 1st-grade teacher, was trained to use OSELA to monitor her students' literacy development. Recently, she was informed that OSELA is not scientific and she should use DIBELS instead. She is confused.

Kristen, a reading specialist in a K-2 building, is having a hard time persuading her building teachers to implement DIBELS because they only have training in OSELA.

Linda, a principal in a K-2 building, is trying to convince the school district curriculum director to allow her building to continue to use OSELA while implementing DIBELS.

Literacy assessment dilemmas like the ones above are not unusual, especially within the current climate of school reform initiatives. Balanced literacy perspectivists (Afflerbach, 2007; Cowen, 2003; Gipps, 1999; Hoffman, Paris, Sala, Patterson, & Assaf, 2003; Johnston & Costello, 2005; Pearson, 2004; Reutzel & Cooter, 2004) believe that the fundamental purpose of school-based literacy assessment should be to use the many and varied forms of literacy assessment to identify students' abilities in the pursuit of a broad range of personal and social interests. To this end, any literacy assessment must be embraced as both a social and a cognitive act (Afflerbach, 2007; Gipps, 1999; Johnston & Costello, 2005). In other words, this view places different assessment tools at different points on a continuum--each has its strengths and each has its limitations.

Thoughtful literacy assessment tools are essential to help teachers think analytically about teaching; however, teachers are encouraged to see assessments as tools to be adapted, not as panaceas to be adopted (Cowen, 2003; Hoffman et al., 2003; Johnston & Costello, 2005; Pearson, 2004). Teachers need to establish a theory-based philosophy, study the research, and make informed decisions about literacy assessment tools. Therefore, before taking a stance, the authors recommend that teachers ask themselves three questions: 1) What are the philosophical, psychological, and developmental theories behind each literacy assessment tool? 2) When, for whom, and under what conditions should each assessment tool be used? and 3) What are the strengths and limitations of each assessment tool?

These questions will be addressed in this article from a balanced literacy perspective. In particular, the authors will: 1) introduce DIBELS and OSELA assessments; 2) examine DIBELS and OSELA from educational philosophy, educational psychology, and literacy development theory bases; 3) analyze the DIBELS/OSELA debate from a balanced literacy perspective; and 4) discuss implications and make recommendations for reconciling the DIBELS and OSELA debate.

DIBELS versus OSELA: What Are They?

Since the No Child Left Behind Act of 2001, teachers and parents have witnessed a series of heated debates on the most basic issues of literacy assessment--what to assess and how to assess it (Farr & Beck, 2003; Goodman, 2006; Hoffman et al., 2003; Johnston & Costello, 2005). In particular, the controversy rages over DIBELS and OSELA, especially at the primary grade levels (Denton, Ciancio, & Fletcher, 2006; Goodman, 2006; Kamii & Manning, 2005; Manzo, 2005; Riedel, 2007; Samuels, 2007).

DIBELS: History, Components, and Uses

The Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002) are a set of six individually administered, standardized measures of early literacy development. They are quickly administered (each timed at one minute). According to the DIBELS Official Home Page (http://dibels.uoregon.edu), for the 2004-05 school year, 6,292 schools across 1,940 districts in 49 states and Canada were using the DIBELS data system, representing over 1.35 million students. The rise of DIBELS came with the Reading First initiative of the No Child Left Behind Act of 2001, when it became one of the early literacy assessment tools recommended by the Reading Assessment Committee (Afflerbach, 2007; Goodman, 2006). However, controversy arose because four of the committee members were from the University of Oregon, where DIBELS is housed (U.S. Department of Education, 2006). Furthermore, it has been reported that some states (e.g., Kentucky and Maine) had to change their original assessment plans to DIBELS in order to secure federal grants (Afflerbach, 2007; Goodman, 2006). DIBELS became the only early literacy assessment tool for Reading First schools, and some states went so far as to have all schools, even those without Reading First, adopt DIBELS as the official early literacy assessment (Goodman, 2006). To further understand DIBELS, the following six individually administered subtests will be introduced: Initial Sound Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency, Oral Reading Fluency, Letter Naming Fluency, and Word Use Fluency.

Initial Sound Fluency is designed to measure a student's phonological awareness. The examiner would show four pictures to the child and name each of them orally. For example, the examiner would say, "This is mouse, flowers, pillow, letters (point to each picture while saying its name). Mouse (point to mouse) begins with the sound /m/. Which one begins with the sound /fl/?" The score is the number of correct responses per minute.

Phoneme Segmentation Fluency is designed to measure a student's ability to break apart a word by pronouncing each phoneme in isolation. For example, the examiner would say the word "Sam" and the student would respond with /s/ /a/ /m/. The score for this measure represents the number of phonemes said aloud in a 1-minute period.

Nonsense Word Fluency is designed to measure decoding skills, in which the student reads short vowel nonsense words, either sound-by-sound or whole-word. As an example, the examiner would present the nonsense word sim and the student would either read each individual letter sound, /s/ /i/ /m/, or the whole word sim. The score for this measure represents the number of letter sounds read in a 1-minute period.

Oral Reading Fluency is designed to measure oral reading fluency on a 1-minute timed measure. The student is presented with a grade-level passage and reads continuously for a 1-minute period. The score represents the number of correct words read per minute.

Letter Naming Fluency is designed to measure rapid letter naming, a skill thought to be highly predictive of later reading outcomes. The student is given a page with rows of letters, both upper- and lowercase, and is asked to name as many letters as possible within a 1-minute time frame.

Word Use Fluency is designed to measure the ability to generate a complete or partial sentence orally when given a word and to use it in the correct context. As an example, the examiner would say the word "carrot" and the student might respond with the sentence, "The rabbit is eating a carrot." The number of correct words produced in one minute is tallied at the end of the test.
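
For readers who want to see the scoring logic concretely, each DIBELS subtest reduces to a count of correct responses within a one-minute window. The minimal Python sketch below illustrates this; the names, and the rate normalization for cases where examiner talk shortens the child's response window, are illustrative assumptions, not part of the official DIBELS materials.

```python
# Illustrative sketch only; not the official DIBELS scoring software.
from dataclasses import dataclass

@dataclass
class SubtestResult:
    correct: int          # correct responses observed
    seconds_elapsed: int  # normally 60 for a standard 1-minute administration

def per_minute_score(result: SubtestResult) -> float:
    """Convert a raw count into a correct-per-minute rate.

    For a standard 60-second administration this is simply the raw count;
    the rate form is an assumed extension covering subtests where examiner
    talk leaves the child less than a full minute of response time.
    """
    return result.correct * 60.0 / result.seconds_elapsed

# Example: 12 correct initial sounds in a 45-second response window.
print(per_minute_score(SubtestResult(correct=12, seconds_elapsed=45)))  # 16.0
```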

As discussed earlier, DIBELS is a mandatory early literacy assessment for Reading First programs. It is administered to all Reading First students, kindergarten through grade 3, in the fall, winter, and spring. The students' test scores are entered into a data system that is run through the DIBELS website at the University of Oregon. These student data have been used to create decision rules and instructional recommendations. For example, kindergarten students with a score of 4 on Initial Sound Fluency would be considered at risk and in need of intensive instructional support. The DIBELS assessment is relatively easy to administer; however, as Pearson (2006) claims, it pays too much attention to separate reading skills and focuses on bits and pieces rather than on the orchestrated whole of reading as a skilled human process.
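
Such decision rules amount to mapping a subtest score onto an instructional-support category. As a hedged illustration, the sketch below encodes the one cut point mentioned above (a kindergarten Initial Sound Fluency score of 4 signaling a need for intensive support); the other threshold is an invented placeholder, not a published DIBELS benchmark.

```python
# Hedged illustration of benchmark-style decision rules.
# Only the "score of 4 = at risk" cut comes from the article; the
# other threshold here is an invented placeholder.
def support_recommendation(isf_score: int) -> str:
    if isf_score <= 4:       # value cited in the article
        return "at risk: intensive instructional support"
    elif isf_score <= 8:     # placeholder threshold
        return "some risk: strategic support"
    else:
        return "low risk: benchmark instruction"

print(support_recommendation(4))  # at risk: intensive instructional support
```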

OSELA: History, Components, and Uses

The Observation Survey of Early Literacy Achievement (OSELA; Clay, 2002) is a set of six individually administered, informal, untimed measures of early literacy development. The popularity of OSELA came with Reading Recovery, since it was the primary assessment and evaluation instrument for that program (Clay, 2002). Reading Recovery was developed in the 1970s by Marie Clay, a New Zealand early literacy educator, and was introduced to the United States in 1984 through the Ohio State University. Reading Recovery has been one of the "most widespread teacher-implemented, one-to-one intervention[s] currently in use in schools in the U.S." (Elbaum, Vaughn, Hughes, & Moody, 2000, p. 606). In addition to schools in its country of origin, New Zealand, Reading Recovery was implemented in 9,901 schools in the United States in 2002-03, with about 1,000 U.S. schools offering the program in Spanish (Gomez-Bellenge & Rodgers, 2004; Reynolds & Wheldall, 2007). Many more schools used OSELA in classroom programs independent of Reading Recovery (Fountas & Pinnell, 1996). With the rise of DIBELS, however, the Reading Recovery program was no longer favored by federal government grant distributors. Under the threat of losing funding, the Reading Recovery Council, together with other literacy organizations such as the Success for All Foundation, filed complaints with the U.S. Department of Education, requesting access to information on how grants were awarded (Lewis, 2006). In response to the mounting criticisms, the U.S. Department of Education (2006) released an Inspector General report, which charged that the panels of Reading First grant reviewers appeared to be stacked with people who favored certain reading assessments, such as DIBELS. To further understand the debate, the six subtests of OSELA--Running Record of Text Reading, Letter Identification, Concepts About Print, Word Reading, Writing Vocabulary, and Hearing and Recording Sounds in Words--are reviewed below.

In the Running Record task, the student would be presented with a sample of text, and the examiner would apply conventions to record: 1) correctly read words, 2) miscues, 3) repetitions, 4) self-corrections, 5) appeals from the child for help, and 6) words told by the tester. These records can be analyzed to identify patterns in the student's reading behaviors that provide clues to the teacher regarding the kinds of reading skills and strategies the student applies when reading connected text. The test is scored according to the percentage of words read accurately. The percentage of errors the student self-corrects is also recorded.
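
The Running Record quantities described above reduce to two simple ratios. The sketch below computes them under the conventional assumptions that self-corrected miscues count as self-corrections rather than errors and that the self-correction rate is reported as 1:N; treat it as an illustration, not Clay's published procedure.

```python
def running_record_scores(total_words: int, errors: int, self_corrections: int):
    """Accuracy percentage and self-correction rate for a running record.

    Assumes the common conventions: a miscue that the child later
    self-corrects is counted as a self-correction, not as an error.
    """
    accuracy = 100.0 * (total_words - errors) / total_words
    # Self-correction rate is conventionally reported as a 1:N ratio.
    sc_rate = (errors + self_corrections) / self_corrections if self_corrections else None
    return accuracy, sc_rate

acc, sc = running_record_scores(total_words=100, errors=5, self_corrections=5)
print(f"accuracy {acc:.0f}%, self-correction ratio 1:{sc:.0f}")  # accuracy 95%, ratio 1:2
```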

In the Letter Identification task, children would be asked to identify all uppercase and lowercase letters in random order. The directions for standard administration of this test indicate that students may identify a letter in any one of three ways: by the letter name, by the letter sound, or by the keyword containing the letter. When the test is scored, credit is given for each letter that the child identifies in any of the three ways.

In the Concepts About Print task, after the examiner reads a specially designed book, the student would be asked to show the examiner the first and last part of the story and where to begin reading on a page, etc., to find out the student's understanding of concepts such as 1) locating the front of the book; 2) knowing that the print, rather than pictures, carries the message; and 3) having one-to-one correspondence between the printed and spoken word. The test questions are scripted. There are 24 items on the test, scored as correct or incorrect.

In the Word Reading task, Clay (2002) provided three versions: 1) the Ready to Read word list, widely used in New Zealand; 2) the Duncan Word Test; and 3) the Ohio Word Test, which was constructed from the Dolch word list and is typically used in the United States. The New Zealand version has 15 words in each list, the Duncan has 23, and the Ohio version has 20. Each version of the Word Reading task has three forms that can be administered at different times of the school year.

In the Writing Vocabulary task, students would be asked to write all the words that they can within a 10-minute period. The student would be given a blank piece of paper and a pencil, and as an example, the examiner would say, "I want to see how many words you can write. Can you write your name?" (Clay, 2002, p. 104). The student would write words on his or her own but can be prompted in various ways to write other words. The task is scored by assigning one point for every word that is correctly spelled.

In the Hearing and Recording Sounds in Words task, the examiner would read a sentence to the student and then repeat each word in the sentence one at a time, instructing the student to say the words slowly and write them. The examiner may prompt the student if needed, asking, "How would you start to write it? What can you hear? What else can you hear?" (Clay, 2002, p. 113). In scoring the task, one point is awarded for each phoneme the student records in a way that is acceptable in English. The maximum possible total score is 37.
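
As a rough illustration of this scoring rule, the sketch below credits one point per target phoneme recorded in an acceptable way. Pairing each phoneme with a set of acceptable spellings is a simplification invented for the example; it is not Clay's scoring guide.

```python
# Illustrative sketch only; the acceptable-spellings data are invented.
def score_hrsiw(child_letters: list[str], acceptable: list[set[str]]) -> int:
    """One point per target phoneme recorded in an acceptable way."""
    return sum(1 for letter, okay in zip(child_letters, acceptable)
               if letter in okay)

# Target word "ship": phonemes /sh/ /i/ /p/.
acceptable = [{"sh"}, {"i"}, {"p"}]
print(score_hrsiw(["sh", "e", "p"], acceptable))  # 2 of 3 phonemes credited
```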

According to Clay (2002), OSELA was designed for classroom teachers who wanted to be careful observers of how young children learned to read and write. It was also for teachers who worked individually with children having temporary difficulties with literacy learning. The systematic observation of student performance allows teachers to closely identify their students' emerging skills, monitor their progress, and target specific weaknesses for intervention. However, the implementation of OSELA requires a rather extensive understanding of the reading process and the writing process, as well as careful study of Clay's theory of early literacy development and its practice (Clay, 2002).

Examining DIBELS and OSELA From Philosophical, Psychological, and Literacy Development Theory Bases

From the above description of both assessments, it is evident that both DIBELS and OSELA are individually administered measures of early literacy development and that both assess early literacy components ranging from letter identification to reading fluency; however, one is favored by the Reading First initiative of the No Child Left Behind Act of 2001, and the other is not. From what has been reported, the debate between DIBELS and OSELA seems to be another political battle in Washington. However, the authors of this article believe that the root of the conflict is not just the struggle for financial gain and/or political power, but rather is fundamentally rooted in three theoretical bases: educational philosophy, educational psychology, and literacy development theory (Afflerbach, 2007; Clay, 2002; Denton, Ciancio, & Fletcher, 2006; Good & Kaminski, 2002; Goodman, 2006).

Educational Philosophy Bases

Different approaches to resolving educational issues have given rise to different schools of thought in the philosophy of education (e.g., essentialism, progressivism) (Ornstein & Hunkins, 2004). Research indicates that DIBELS reflects essentialism, while OSELA is rooted in progressivism (Goodman, 2006; Groff, 2004; Moats, 2007; Pearson, 2006).

Essentialism is an educational theory based on the positivist philosophical tradition (Ornstein & Hunkins, 2004). Essentialists assume that an objective reality exists that can be measured and modeled. In essentialist classrooms, teachers are considered dispensers of knowledge and students are knowledge recipients (Ornstein & Hunkins, 2004). As a result, evaluation usually takes the form of standardized tests or other "objective" evaluations, such as DIBELS.

Progressivism is an anti-traditional theory associated with child-centered learning through activities, problem-solving, and projects (Ornstein & Hunkins, 2004). Progressivists believe that reality is organized and experienced by the individual. In progressivist classrooms, teachers are considered knowledge facilitators and students are active learners; thus, evaluation is a collaborative effort by both students and teachers, such as in OSELA (Gipps, 1999; Ornstein & Hunkins, 2004).

Educational Psychology Bases

From a psychological point of view, DIBELS reflects a behaviorist perspective that has dominated U.S. education for decades, while OSELA stems from contemporary cognitive psychology (Clay, 2002; Goodman, 2006; Groff, 2004; Moats, 2007; Pearson, 2006).

The behaviorist perspective explains learning in terms of observable (or inferred) stimulus and response events. Behaviorists view mental activities, such as thinking, like any other behavior, and attempt to break behavior into its molecular elements, as in DIBELS (Good & Kaminski, 2002; Goodman, 2006; Hamilton & Ghatala, 1994). Conversely, cognitive theorists assert that behavior cannot be understood by analyzing its components alone. Cognitivists emphasize central processes and are interested in behavior primarily as a means of inferring mental activity, as in OSELA (Clay, 2002; Goodman, 2006; Hamilton & Ghatala, 1994).

Literacy Development Theory Bases

"Reading readiness" and "emergent literacy" are the two most common literacy development theories (Sulzby, 1991; Teale, 1994). DIBELS is rooted in reading readiness theory, while OSELA bases the practice on emergent literacy (Goodman, 2006; Moats, 2007; Sulzby, 1991; Teale, 1994). Advocates of reading readiness theory argue that learning to read begins only after a set of prerequisite perceptual and prereading skills have been mastered; all children, say these theorists, pass through the same sequence of skill development in learning to read.

In contrast, advocates of emergent literacy theory maintain that literacy development can begin very early in life for virtually all children, and that children move into conventional literacy in different ways and at different rates (Teale, 1994). For example, advocates of emergent literacy theory view scribbling and recognizing a McDonald's logo as signs of emergent literacy. They emphasize meaningful bases of literacy development and believe that assessment should arise out of such contexts (Clay, 2002).

In summary, DIBELS advocates and OSELA advocates hold very different perspectives on literacy assessment--on children's literacy development, on the roles of students and teachers, and on the role of reading instruction and assessment--grounded in their respective philosophical, psychological, and literacy development theory bases.

Analyzing the DIBELS and OSELA Debate From a Balanced Literacy Perspective

According to Cowen (2003), balanced literacy instruction respects and addresses the needs of all learners, views teachers as informed decision-makers, is flexible, and is research-based. A balanced literacy approach to instruction provides students with daily opportunities to engage in various reading/writing activities to help them communicate more effectively. In a balanced literacy framework, students participate in read-alouds, shared reading, guided reading, independent reading, reader's workshop, and systematic word study. In addition, they engage in modeled writing, shared writing, interactive writing, guided writing, writer's workshop, and independent writing. Balanced literacy crystallizes decades of reading research, reflects the entire range of research on reading, and respects the wisdom of practice by exemplary teachers with a balanced repertoire of instructional strategies. "Teachers who are faced with the variations in achievement, experience, and aptitude found in today's classes need, and deserve, a full toolbox of pedagogical practices" (Pearson, 2004, p. 245).

Limitations of DIBELS From a Balanced Literacy Perspective

The major strength of DIBELS is that it provides teachers a quick overview of students' decoding skills. However, from a balanced literacy perspective, DIBELS has its limitations. First, a balanced perspective emphasizes the role of literate behavior embedded within cultural activity, whereas DIBELS focuses on cognitive aspects of individual literate behavior and practice, with little regard for the cultural contexts of literate practice (Afflerbach, 2007; Goodman, 2006; Pearson, 2006).

Tierney (1999) suggests that the processes involved in literacy assessment may vary depending on the nature of the assessment task, goals, and purposes. These processes also may vary depending on the assessment context and the student's own history and knowledge base. DIBELS tends to overlook such important features as the social and cultural context in which literacy assessment occurs. Denying the importance of such factors may lead to decontextualized assessment of subskills. "People outside those groups who are subject to different values and experiences will be disadvantaged by assessment based on such perspectives" (Gipps, 1999, p. 366). According to Linn (1992), "Multiple indicators are essential so that those who are disadvantaged on one assessment have an opportunity to offer alternative evidence of their expertise" (p. 44).

Second, balanced literacy approaches are grounded in social constructivism, characterized by interactive and reciprocal flows of information and knowledge between students and teachers (Gipps, 1999). In contrast, literacy assessment in DIBELS is a technological activity based on psychometric theory, with its emphasis on replicability and generalizability. Measurement of this kind does not permit engagement with the individual being tested or an understanding of the cultural nature of tests (Afflerbach, 2007; Goodman, 2006; Pearson, 2006).

Third, from a balanced literacy perspective, the purpose of literacy assessment is to inform and improve teaching by indicating where students are in their reading development and thereby help teachers know what each child needs to learn next (Cooter, Flynt, & Cooter, 2007). However, several reports suggest that DIBELS perpetuates a teaching-to-the-test approach and not much more (Afflerbach, 2007; Goodman, 2006; Samuels, 2007). As Afflerbach (2007) points out, because DIBELS focuses on speed and neglects the goal of reading--comprehension--it sends a very wrong message to teachers, parents, and students that reading equals "barking at print," and it forces teachers to gear their instruction towards meaningless fast reading without comprehension.

Limitations of OSELA From a Balanced Literacy Perspective

The major strength of OSELA is its focus on the teacher as a facilitator and instructor rather than as a knowledge dispenser, and on the child as a dynamic and active learner rather than as a passive knowledge recipient. However, the OSELA perspective is not without limitations (Denton, Ciancio, & Fletcher, 2006; Glynn, Bethune, Crooks, Ballard, & Smith, 1992; Groff, 2004; Moats, 2007).

First, OSELA advocates take for granted that the teacher knowledge needed to conduct the assessment is widely and richly distributed. For example, in the Writing Vocabulary task, students are asked to write all the words that they can within a 10-minute period. The student is given a blank piece of paper and a pencil, and the examiner says, "I want to see how many words you can write. Can you write your name?" (Clay, 2002, p. 104). The student may continue to write words on his or her own but can be prompted in various ways to write other words. However, because of varying levels of teachers' knowledge of and experience with the assessment, data drawn from such assessment tasks are very difficult to use in reports to other stakeholders, such as parents, policymakers, and the public. Assessment, from the balanced literacy perspective, is not only a social and cultural act, but also a political and economic act that involves all stakeholders, such as parents and the public. They have the right to know where our schools stand in comparison with schools in other districts or states.

Second, OSELA advocates (Clay, 2002; Goodman, 2006) believe that assessment should be classroom-based and individualized in nature, and they are reluctant to use standardized tests, state assessments, and other external measures of student accomplishment, such as DIBELS. However, from a balanced literacy perspective, both an internal assessment such as OSELA and an external one such as DIBELS are needed in the age of high expectations, explicit standards, and school- and class-level accountability.

Third, OSELA appears to inform, but not to constrain, a fuller approach to literacy development and instruction (Denton, Ciancio, & Fletcher, 2006; Groff, 2004; Moats, 2007). The National Reading Panel (National Institute of Child Health and Human Development, 2000) found a strong correlation between students' phonological awareness and later reading development. However, some researchers have indicated that OSELA is not sensitive to students' development in that area. For example, it was found that students who entered the program typically had poor phonological processing skills and were least likely to benefit from the program and its assessment (Center, Wheldall, Freeman, Outhred, & McNaught, 1995; Chapman, Tunmer, & Prochnow, 2001). In addition, Chapman et al. found that, when Reading Recovery students were tested after the intervention, Reading Recovery "did not eliminate deficiencies in phonological processing" (2001, p. 158).

In summary, both DIBELS and OSELA are partial responses to the challenges stemming from an increasingly diverse society. Neither instrument alone can provide a whole picture of student achievement and meet society's accountability requirements.

Implications of Assessment From a Balanced Literacy Perspective

The purpose of assessment in a balanced literacy classroom is to support the identification of a student's strengths as well as to determine what teaching needs to occur next. In that environment, the role of the teacher is to understand that assessment has two forms: summative assessment, which occurs after instruction and reveals what the student has learned, and formative assessment, which occurs during instruction and tells the teacher what the student is ready to learn next.

A balanced literacy assessment perspective has at least two implications for literacy assessment. First, assessment procedures should help teachers discover what children can do, not only what they cannot do. Once teachers understand student abilities, it becomes much easier to decide which new learning experiences should be offered to help them develop further. In other words, the balanced literacy perspective views literacy assessment as a tool to inform and improve teaching (Reutzel & Cooter, 2004). Both DIBELS and OSELA contribute to this process.

Second, literacy assessment should be linked to accountability standards and provide insights into the process of reading (Hoffman et al., 1999; Reutzel & Cooter, 2004). With the passage of the No Child Left Behind Act and other state and federal legislation in reading education, an even greater emphasis has been placed on classroom assessment. Both external and classroom-based assessments are required to evaluate teaching and student learning. For external assessment at the system level and for high-stakes purposes, forms of standardization that lead to high reliability are of key importance. For assessment used in the classroom for diagnostic or formative purposes, such attention to reliability is less relevant; the main focus is instead on construct validity and the use of results at the class/school level. Thus, it is important to consider the most appropriate balance between reliability and validity in assessment for different purposes, as in DIBELS and OSELA.

Towards a Balanced Approach to Literacy Assessment: Recommendations

From a balanced literacy perspective, the challenges facing the educators mentioned at the beginning of this article are not only to decide which assessment tools to choose but also to decide why and how to balance the assessment requirements of all stakeholders. The following four recommendations address reconciling the DIBELS and OSELA debate from a balanced literacy perspective.

Examining Different Literacy Assessment Models

Views of learning influence how we structure classroom environments, select instructional methods, and define the role of the assessment. Therefore, before we use each assessment tool, it is important to ask the following questions:

1) What is the purpose of the assessment?

2) What are its basic assumptions and elements?

3) What are the strengths and weaknesses of each tool?

4) When, for whom, and under what conditions should each assessment tool be used?

5) What particular role does the teacher, as the examiner, play in the assessment process?

Taking Into Account the Social, Cultural, and Political Contexts of the Assessment

Sutherland (1996) argued that assessment is one of the most significant disciplinary mechanisms of society and that we need to take into account the social, cultural, and political contexts in which assessment operates. Assessment is best thought of as gathering a variety of information, at diverse times, and under differing conditions. Too many educators now see assessment as being of two types--that which teachers do to help students, such as OSELA, and that which is forced on teachers and students by the administration, such as DIBELS. Such differences should not exist. DIBELS could be strengthened considerably through the additional information gathered by OSELA. On the other hand, OSELA conducted by teachers could be enhanced by greater attention to reliability and validity, and with an eye toward using the information to communicate students' achievement to administrators and to the public.

Developing and Implementing Multiple Measures of Early Literacy Assessments

The National Council of Teachers of English Legislative Platform (2008) proposes a shift from single, high-stakes measures to multiple measures to produce a more accurate portrait of yearly progress toward academic achievement. These should include a set of parallel assessments that measure reading and writing in a more holistic manner--including running records with comprehension and response to literature, regular writing samples, and some index of spelling progress, such as those found in OSELA. If the results are consistent, teachers should continue to use multiple methods to ensure a more complete picture of student achievement and other accomplishments. If the results are not consistent, teachers should look for reasons; these may yield insights into student behavior and/or the quality of the measures used, and lay the foundation for improvement (Pearson, 2006).
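
One hedged way to operationalize this consistency check is to compare each child's risk classification across instruments and flag disagreements for closer inspection. In the sketch below, the mapping of both assessments onto a common at-risk coding, and all names, are hypothetical.

```python
# Hypothetical sketch: flag students whose DIBELS-style and OSELA-style
# classifications disagree, so teachers know where to look more closely.
def inconsistent_students(dibels_at_risk: dict[str, bool],
                          osela_at_risk: dict[str, bool]) -> list[str]:
    """Return students classified differently by the two measures."""
    return [name for name in dibels_at_risk
            if name in osela_at_risk and dibels_at_risk[name] != osela_at_risk[name]]

dibels = {"Ana": True, "Ben": False, "Cai": True}
osela = {"Ana": True, "Ben": True, "Cai": True}
print(inconsistent_students(dibels, osela))  # ['Ben']
```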

Enhancing the Role of Classroom Assessment in Teaching and Learning

Historically, because of their technical requirements, educational tests of any importance were seen as the province of statisticians rather than of teachers or subject specialists (Shepard, 2001). From a balanced literacy perspective, assessment plays an integral role in teaching and learning; it should be part of the ongoing learning process. Therefore, teachers should enhance the role of classroom assessment in teaching and learning by using a variety of assessment methods to present a more complete picture of student achievement and to meet the other accountability requirements of society.

Conclusion

Literacy learning is a complex process, as is literacy assessment. Different views of philosophy, psychology, and literacy development theories may result in different assessment models, as in the case of DIBELS and OSELA. From a balanced literacy perspective, all literacy assessments, whether DIBELS or OSELA, are points on a continuum. They all have their own places in literacy assessment; however, they also all have their own limitations.

From a balanced literacy perspective, the key literacy assessment decisions do not hinge on which of the available assessments is "best." The important concern is the relevance of a particular approach given the assessment situation and teachers' understanding of the strengths and limits of the chosen assessment tool.

The debate over whether literacy is best assessed by DIBELS or by OSELA will rage on for the foreseeable future. Therefore, school teachers, reading specialists, and principals may have questions as diverse as those raised by Nicole, Kristen, or Linda at the beginning of this article. Nevertheless, when teachers begin to 1) examine different literacy assessment models critically; 2) take into account the social, cultural, and political contexts in which assessment operates; 3) develop and implement multiple measures of early literacy assessment; and 4) enhance the role of classroom assessment in teaching and learning, they will find that DIBELS and OSELA can be reconciled to produce a more accurate picture of their students' early literacy development.

References

Afflerbach, P. (2007). Understanding and using reading assessment K-12. Newark, DE: International Reading Association.

Center, Y., Wheldall, K., Freeman, L., Outhred, L., & McNaught, M. (1995). An evaluation of Reading Recovery. Reading Research Quarterly, 30, 240-263.

Chapman, J. W., Tunmer, W. E., & Prochnow, J. E. (2001). Does success in the Reading Recovery program depend on developing proficiency in phonological-processing skills? A longitudinal study in a whole language instructional context. Scientific Studies of Reading, 5, 141-176.

Clay, M. M. (2002). An observation survey of early literacy achievement (2nd ed.). Portsmouth, NH: Heinemann.

Cooter, R. B., Flynt, E. S., & Cooter, K. S. (2007). Comprehensive reading inventory: Measuring reading development in regular and special education classrooms. Upper Saddle River, NJ: Prentice Hall.

Cowen, J. E. (2003). A balanced approach to beginning instruction: A synthesis of six major U.S. research studies. Newark, DE: International Reading Association.

Denton, C. A., Ciancio, D. J., & Fletcher, J. M. (2006). Validity, reliability, and utility of the observation survey of early literacy achievement. Reading Research Quarterly, 41(1), 8-34.

Elbaum, B., Vaughn, S., Hughes, M. T., & Moody, S. W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.

Farr, R., & Beck, M. D. (2003). Evaluating language development. In J. Flood, D. Lapp, J. R. Squire, and J. M. Jensen (Eds.), Handbook of research on teaching the English language arts (2nd ed., pp. 590-599). Mahwah, NJ: Lawrence Erlbaum.

Fountas, I. C., & Pinnell, G. S. (1996). Guided reading. Portsmouth, NH: Heinemann.

Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355-392.

Glynn, T., Bethune, N., Crooks, T., Ballard, K., & Smith, J. (1992). Reading recovery in context: Implementation and outcomes. Educational Psychology: An International Journal of Experimental Education Psychology, 12, 249-261.

Gomez-Bellenge, F. X., & Rodgers, E. M. (2004). Reading Recovery and Descubriendo la Lectura National Report 2002-2003. Columbus, OH: Ohio State University National Data Evaluation Center. Retrieved December 16, 2004, from www.ndec.us

Good, R. H., & Kaminski, R.A. (Eds.). (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for the Development of Educational Achievement.

Goodman, K. S. (2006). A critical review of DIBELS. In K. S. Goodman (Ed.), The truth about DIBELS: What it is, what it does (pp. 1-39). Portsmouth, NH: Heinemann.

Groff, P. (2004). A critical analysis of the sources of Reading Recovery: An empiricist perspective. Interchange, 35(1), 31-58.

Hamilton, R., & Ghatala, E. (1994). Learning and instruction. New York: McGraw-Hill.

Hoffman, J. V., Au, K. H., Harrison, C., Paris, S. G., Pearson, P. D., Santa, C. M., Silver, S. H., & Valencia, S. W. (1999). High-stakes assessment in reading: Consequences, concerns, and common sense. Newark, DE: International Reading Association.

Hoffman, J. V., Paris, S. G., Sala, R., Patterson, E., & Assaf, L. (2003). High-stakes assessment in the language arts: The piper plays, the players dance, but who pays the price? In J. Flood, D. Lapp, J. R. Squire, & J. M. Jensen (Eds.), Handbook of research on teaching the English language arts (2nd ed., pp. 619-630). Mahwah, NJ: Lawrence Erlbaum.

Johnston, P., & Costello, P. (2005). Principles for literacy assessment. Reading Research Quarterly, 40(2), 256-267.

Kamii, C., & Manning, M. (2005). Dynamic indicators of basic early literacy skills (DIBELS): A tool for evaluating student learning? Journal of Research in Childhood Education, 20(2), 75-90.

Lewis, A. C. (2006). Washington scene. Education Digest, 72(3), 70-72.

Linn, M. C. (1992). Gender differences in educational achievement. In J. Pfleiderer (Ed.), Sex equity in educational opportunity, achievement, and testing (Proceedings of the 1991 Educational Testing Service Invitational Conference, pp. 11-50). Princeton, NJ: Educational Testing Service.

Manzo, K. K. (2005). National clout of DIBELS test draws scrutiny. Education Week, 25(5), 1, 12.

Moats, L. (2007). Whole-language high jinks: How to tell when "scientifically-based reading instruction" isn't. Washington, DC: The Thomas B. Fordham Foundation. Retrieved April 22, 2008, from www.edexcellence.net/doc/Moats2007.pdf

National Institute of Child Health and Human Development. (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.

National Council of Teachers of English. (2008). 2008 NCTE legislative platform. Retrieved March 10, 2008, from www.ncte.org/about/over/positions/category/gov/129125.htm

No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).

Ornstein, A. C., & Hunkins, F. P. (2004). Curriculum: Foundations, principles, and issues. Boston: Allyn and Bacon.

Pearson, P. D. (2004). The reading wars. Educational Policy, 18, 216-252.

Pearson, P. D. (2006). Foreword. In K. S. Goodman (Ed.), The truth about DIBELS: What it is, what it does (pp. v-xix). Portsmouth, NH: Heinemann.

Reutzel, D. R., & Cooter, R. B. (2004). Teaching children to read: Putting the pieces together. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.

Reynolds, M., & Wheldall, K. (2007). Reading Recovery 20 years down the track: Looking forward, looking back. International Journal of Disability, Development and Education, 54(2), 199-223.

Riedel, B. W. (2007). The relation between DIBELS, reading comprehension, and vocabulary in urban first-grade students. Reading Research Quarterly, 42(4), 546-562.

Samuels, S. J. (2007). The DIBELS tests: Is speed of barking at print what we mean by reading fluency? Reading Research Quarterly, 42(4), 563-566.

Shepard, L. A. (2001). The role of classroom assessment in teaching and learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1066-1101). Washington, DC: American Educational Research Association.

Sulzby, E. (1991). The development of the young child and the emergence of literacy. In J. Flood, J. M. Jensen, D. Lapp, & J. R. Squire (Eds.), Handbook of research on teaching the English language arts (pp. 273-285). New York: Macmillan.

Sutherland, G. (1996). Assessment: Some historical perspectives. In H. Goldstein & T. Lewis (Eds.), Assessment: Problems, developments and statistical issues. Chichester, England: Wiley.

Teale, W. (1994). Emergent literacy. In Encyclopedia of English studies and language arts: A project of the National Council of Teachers of English (Vol. 1, pp. 424-426). New York: Scholastic.

Tierney, R. (1999). Literacy assessment reform: Shifting beliefs, principled possibilities, and emerging practices. In S. J. Barrentine (Ed.), Reading assessment: Principles and practices for elementary teachers (pp. 10-29). Newark, DE: International Reading Association.

U.S. Department of Education, Office of the Inspector General. (2006). The Reading First program grant application process. Final Inspection Report. ED-OIG/I13-F0017. Washington, DC: Author. Retrieved April 22, 2008, from www.ed.gov/about/offices/list/oig/aireports/i13f0017.pdf

Xiaoping Li

Mingyuan Zhang

Central Michigan University