
Article Information

  • Title: Effects of targeted intervention on early literacy skills of at-risk students
  • Authors: Wang, Chuang; Algozzine, Bob
  • Journal: Journal of Research in Childhood Education
  • Print ISSN: 0256-8543
  • Year: 2008
  • Issue: June
  • Language: English
  • Publisher: Association for Childhood Education International
  • Keywords: High risk students; Intervention (Psychology); Literacy; Students

Effects of targeted intervention on early literacy skills of at-risk students.


Wang, Chuang ; Algozzine, Bob


Abstract. Reading problems are among the most prevalent concerns in schools; poor readers in elementary school who do not receive special assistance are particularly at risk for dismal academic careers. In a large-scale project, children with serious reading problems received targeted intervention to address critical early literacy skills. The assistance combined focused practice and frequent monitoring to provide instruction needed to improve reading skills. Participating students achieved significant gains in reading performance compared to a control group not receiving intervention. The outcomes of the study relate to continuing efforts to reduce the very large numbers of children failing to achieve early literacy skills in U.S. schools.

**********

For success in any society, children need to know how to read, and improving reading ability is consistently at the center of federal, state, and local initiatives to improve U.S. education (Frechtling, Zhang, & Silverstein, 2006; International Reading Association, 2001; Kamil, Mosenthal, Pearson, & Barr, 2000; National Institute of Child Health and Human Development [NICHD], 2000a, 2000b, 2000c; National Reading Panel, 2000; National Research Council, 1998). The latest emphasis on the importance of literacy success is clearly established in the No Child Left Behind Act of 2001 (NCLB), which requires schools to take steps to ensure that all students are reading at grade level by the end of 4th grade. Schools across the country are searching for ways to make this happen, especially for the large numbers of students at risk of continuing literacy failure.

When questions arise about how best to teach reading skills, all signs point in the direction of a few fundamental factors. For example, hearing, recognizing, and manipulating individual phonemes in spoken words (i.e., phonemic awareness) is a powerful predictor of young children's later reading development. Consequently, systematic teaching of phonics is widely accepted as a critical area for early reading instruction (Adams, 2001; Goodman, 2006; NICHD, 2000a, 2000b, 2000c; National Reading Panel, 2000; National Research Council, 1998; Newman & Dickinson, 2001). According to the Committee on the Prevention of Reading Difficulties in Young Children (Snow, Burns, & Griffin, 1998), adequate initial reading instruction includes: 1) using reading to obtain meaning from print, 2) having frequent and intensive opportunities to read, 3) being exposed to frequent, regular spelling-sound relationships, 4) learning about the nature of the alphabetic writing system, and 5) understanding the structure of spoken words. Further, this group pointed out that adequate progress in learning to read beyond initial levels depends on:

* A working understanding of how sounds are represented alphabetically

* Sufficient practice in reading to achieve fluency with different kinds of text

* Sufficient background knowledge and vocabulary to render written texts meaningful and interesting

* Control over procedures for monitoring comprehension and repairing misunderstandings

* Continued interest and motivation to read for a variety of purposes (pp. 3-4).

Efforts to improve reading and literacy skills also must avoid some pitfalls to be effective:

There are three potential stumbling blocks that are known to throw children off course on the journey to skilled reading. The first obstacle, which arises at the outset of reading acquisition, is difficulty understanding and using the alphabetic principle--the idea that written spellings systematically represent spoken words. It is hard to comprehend connected text if word recognition is inaccurate or laborious. The second obstacle is a failure to transfer the comprehension skills of spoken language to reading and to acquire new strategies that may be specifically needed for reading. The third obstacle to reading will magnify the first two: the absence or loss of an initial motivation to read or failure to develop a mature appreciation of the rewards of reading. (Snow et al., 1998, pp. 4-5)

These critical factors, directions, and conclusions are supported by "blue ribbon" panels (cf. NICHD, 2000a, 2000b, 2000c; National Research Council, 1998), and most literacy scholars agree that the majority of reading problems faced by children, adolescents, and young adults are the result of stumbling blocks, obstacles, and problems that should have been addressed during the early elementary school years (Allington & McGill-Franzen, 2003; Baker, 2002; Goodman, 2006; Guthrie & Davis, 2003; Ivey, 1999; King-Sears, Boudah, Goodwin, Raskind, & Swanson, 2004; Lerner & Kline, 2006; McCray, 2001; NICHD, 2000a, 2000b, 2000c; Tivnan & Hemphill, 2005). Clearly, focusing on a few fundamental factors while avoiding challenges inherent in and/or created by faulty literacy instruction makes the most sense as a method for overcoming reading problems of struggling readers, especially those from culturally and ethnically diverse backgrounds (Allington, 2002; Ehri, Nunes, Willows, Schuster, & Yaghoub-Zadeh, 2001; Haager & Windmueller, 2001; Lerner & Kline, 2006; Winzer & Mazurek, 1998).

O'Connor (2000) and Coyne, Kame'enui, and Simmons (2001) defined the context and shared "big ideas" for prevention and intervention in beginning reading. In this regard, the effects of "tiers of intervention" have become increasingly popular as attention has turned to response to intervention (RTI) and its role in identifying students with disabilities (Marston, 2005, p. 539). In a series of reports, Vaughn and her colleagues have documented and discussed the value of effective interventions for students at risk for reading difficulties (cf. Vaughn & Fuchs, 2003; Vaughn, Gersten, & Chard, 2000; Vaughn, Mathes, Linan-Thompson, & Francis, 2005). Kamps and Greenwood (2005) reported first-year 1st-grade findings for students participating in secondary-level interventions (i.e., small-group reading instruction) in a large-scale project addressing reading and behavior interventions. Although growth was evident in the scores of experimental- and comparison-group students, the researchers reported a continuing need for targeted interventions. In general, studies that focused on tiers of intervention report positive outcomes (Marston, 2005) and suggest "that there will continue to be a need for secondary interventions for more students in high-risk schools" (Kamps & Greenwood, 2005, p. 506). Further, continued research addressing improvement in early literacy skills of at-risk students is warranted (cf. Marston, 2005; NICHD, 2000a, 2000b, 2000c; O'Connor, 2003; Vaughn, 2003; Vaughn, Linan-Thompson, & Hickman, 2003).

As educators focused attention on enhancing early literacy skills, "DIBELS has become a catchphrase" in efforts to use "test data to inform instruction, to identify children at risk of failure in reading, and to hold schools accountable for student achievement" (Manzo, 2005, p. 1). Although not universally accepted as a benefit to the field of reading (cf. Goodman, 2006), DIBELS (Dynamic Indicators of Basic Early Literacy Skills; Good & Kaminski, 2002, 2003) assessments are widely used for identifying students likely to experience reading failure and are sensitive, time-efficient measures with multiple forms that allow for repeated assessments of value in evaluating intervention efforts with those children (Elliott, Huai, & Roach, 2007).

In this research, we used DIBELS benchmark assessments to identify 1st-graders at risk of reading failure. We provided intensive interventions for these children and compared their progress using DIBELS and other literacy measures to that of their peers who received only district guided reading instruction. We were interested in adding to the body of knowledge on the benefits of direct instruction on the skills of children with low trajectories of early literacy success in schools.

Method

Building on the successful work of others, our purpose was to document the effects of targeted intervention on the reading performance of students at risk of continuing reading failure. The program was designed using principles grounded in best practices for effective literacy instruction; it was delivered by trained teaching assistants participating in the efforts of a large-scale research center and monitored by project personnel. The study was a randomized trial of reading interventions conducted in the southeastern United States. The design was quasi-experimental in the sense that, while we randomly assigned schools to treatment and control conditions, we selected participants meeting a widely accepted criterion for being at risk of continuing reading failure.

Participants and Setting

The Behavior and Reading Improvement Center (BRIC) provided services in six public elementary schools in an urban, integrated school system enrolling more than 120,000 students each year. The ethnic backgrounds of students in the district were rich and diverse, including African American (43 percent), American Indian/multiracial (3 percent), Asian (4 percent), white (40 percent), and Hispanic (10 percent) groups. Students participating in BRIC interventions attended schools similar to those included in other large-scale prevention and reform projects (e.g., Texas Reading Initiative is the flagship school for Effective Early Reading Intervention). Working closely with district personnel, we used the following criteria in selecting schools as partners in our research:

* Recognized performance accountability ratings ("At Standard" or above) on national, state, or local assessments

* Above 40 percent participation in the federal free and reduced-price lunch program

* Evidence of effective implementation of the district's early reading intervention program and full allocation (at least 120 minutes) of literacy block instructional time

* Willingness to use project-identified measures as evidence of reading and behavior improvement

* Support from the superintendent, senior staff, principal, and campus site-based decision-making team, faculty, and staff

* Willingness to serve as a demonstration site and to collaborate in efforts to mentor other schools.

Our school selection procedure involved several steps: From a pool that met our selection criteria, project staff and school district advisory council members identified six BRIC schools for the study. We randomly selected two of the six BRIC schools to serve as the control group; the other four BRIC schools received the intervention.

Three of the BRIC schools enrolled more than 500 students and three had enrollments below 500; our smallest school had 398 students and our largest school enrolled 794 children in kindergarten through 5th grade. In general, we provided BRIC interventions in schools enrolling more children from ethnic minority backgrounds than did other schools in the district. African American children represent 43 percent of the overall student population in this school system; their representation was considerably higher in 5 of the 6 project schools. While constituting 40 percent of students in the district, white children represented less than 15 percent of the students attending all but one of the BRIC schools. Percentages of Hispanic children were above the school system average in four of the BRIC schools. The ethnic distributions with regard to the percentage of non-white students within each school population were not statistically different across the schools, χ²(5, N = 536) = 4.72, p = .45. The number of 1st-grade students, the total number of exceptional children, and the number of children receiving free or reduced-price lunch were all comparable across schools (see Table 1), suggesting that the six schools served comparable student populations. As a result, the unit of analysis was the student rather than the school.

We used the sixth edition of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) to identify children at risk for reading failure. We included 1st-graders whose scores on the combination of the Letter Naming, Phoneme Segmentation, and Nonsense Word assessments fell in the "Emerging" or "Some Risk" instructional range, based on the decision rules developed by the authors of the measures (see http://dibels.uoregon.edu/benchmarkgoals.pdf). From these children in need of targeted intervention, 139 1st-grade students were included in the treatment (n = 101) and control (n = 38) schools. The sample size of the treatment group was limited by the available resources (e.g., the number of teaching assistants). Of the treatment group, 59 (58.42 percent) were male and 42 (41.58 percent) were female. Of the control group, 26 (68.42 percent) were male and 12 (31.58 percent) were female. The distributions of male and female students within the treatment and control groups were not statistically different, χ²(1, N = 139) = 1.16, p = .28. With respect to ethnicity, 57 (58.76 percent) of the treatment group were African American and 40 (41.24 percent) were Hispanic/Latino, Asian, or white. Only 25 of the 38 students in the control group provided ethnicity information; of these, 18 (72.00 percent) were African American and 7 (28.00 percent) were Hispanic, Asian, or white. The distribution of student ethnicity was also independent of group assignment, χ²(1, N = 122) = 1.47, p = .23. Therefore, the treatment and control groups were comparable with respect to ethnicity and gender, reducing the likelihood that these variables confounded the following analyses.
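The gender and ethnicity comparisons above are ordinary chi-square tests of independence on 2 x 2 contingency tables. A minimal sketch of that check, assuming Python with scipy is available and using the counts reported in this section (the published values were computed by the authors; this sketch is only illustrative):

```python
# Sketch: chi-square tests of independence used to check that gender and
# ethnicity were distributed similarly across treatment and control groups.
# Counts are the ones reported above; scipy is assumed to be available.
from scipy.stats import chi2_contingency

# Rows: treatment, control; columns: male, female
gender = [[59, 42],
          [26, 12]]
chi2, p, df, _ = chi2_contingency(gender, correction=False)
print(f"Gender:    chi2({df}) = {chi2:.2f}, p = {p:.2f}")   # ~1.16, p ~ .28

# Rows: treatment, control; columns: African American, other ethnic groups
ethnicity = [[57, 40],
             [18, 7]]
chi2, p, df, _ = chi2_contingency(ethnicity, correction=False)
print(f"Ethnicity: chi2({df}) = {chi2:.2f}, p = {p:.2f}")   # ~1.47, p ~ .23
```

Both statistics land near the reported values of 1.16 and 1.47, with p-values well above .05, consistent with the conclusion that group membership was independent of gender and ethnicity.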

Procedure

The BRIC, in collaboration with professionals in the local school district, supports the implementation and evaluation of a program to prevent the development and/or persistence of serious behavior and reading failure. Open Court (SRA/McGraw-Hill) is the core reading program in all elementary schools. Consultants and literacy facilitators support district-wide use of the program. All students participated in a 120-minute literacy block of instruction that focused on scripted lessons for all students and independent work time (IWT) for practicing skills, enriching content, and remedial assistance, as needed and appropriate for individuals and groups of students. All students who failed to make adequate progress in the core curriculum received replacement supplemental instruction during the IWT portion of their literacy instruction. During this time, students in the control group participated in activities that were part of the core curriculum provided by their teachers, who received initial and continuing professional development from consultants and literacy facilitators. Students in the treatment group participated in the project-sponsored lessons presented by teaching assistants, who received initial and continuing professional development and supervision from project personnel.

Supplemental, Targeted Intervention Program. We implemented a project-developed treatment to increase phonemic awareness, alphabetic understanding, decoding skills, and fluency of targeted students who were not progressing at the expected rate for their grade level, or were shown through screening to be at risk for failure in reading. The scripted lessons followed formats and a sequence of skills recommended in Direct Instruction Reading (Carnine, Silbert, Kame'enui, & Tarver, 2004); and they incorporated six principles of instructional design as described by Simmons and Kame'enui (1998). Students participating in the targeted intervention received from 10 to 15 minutes a day of explicit instruction using lessons incorporating the following areas: 1) onset-rime, auditory skills of blending and segmenting (i.e., phonemic awareness); 2) letter-sound correspondences (i.e., alphabetic understanding); 3) reading phonetically regular words (i.e., decoding); 4) fluency building with connected text; and 5) sight word practice. Sight words practiced as part of the intervention were the most frequently used ones in the 1st-grade materials of Open Court. The 110 lessons in the program address all of the decoding skills included in the 1st-grade core literacy curriculum.

Dependent Measures. We used two measures to assess reading skills: the Woodcock Reading Mastery Test-Revised (WRMT-R; Woodcock, 1987) and the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). We selected the WRMT-R because it is widely used as a valid instrument to measure elementary school students' early literacy competence. The district used the DIBELS for K-2 benchmark, program monitoring, and end-of-year assessments.

We used the Word Attack (WA) subtest of WRMT-R as an index of children's decoding skills. It requires students to read nonsense words in isolation. The Word Identification (WI) subtest requires the student to read real words in isolation. The Passage Comprehension (PC) subtest requires the student to read a sentence (or sentences) and supply one word that has been deleted. These measures have been widely used in research and evaluation studies focused on intervention programs with characteristics that were similar to those present in this and similar research projects (cf. Kamps & Greenwood, 2005). The grade-based standardized scores for these measures were used in this study. The national standardized values range from 10 to 175, with a mean of 100 and standard deviation of 15.

In the Examiners Manual, Woodcock (1987) indicated that the validity of the WRMT-R is largely based on content and concurrent validity studies and that the structure of the measure follows the same scope and sequence as other global assessments of reading ability. Outside experts, curriculum specialists, and experienced teachers contributed to the development of the test items. Concurrent validity studies illustrate that the WRMT-R correlated well with other instruments known to measure reading (e.g., mean correlation coefficients of .76, .69, and .63 for WI, WA, and PC, respectively), when compared to similar tests on the Woodcock-Johnson Psycho-Educational Test Battery (Woodcock & Johnson, 1977). The American Guidance Service examined the correlation between the WRMT-R Form G and the Kaufman Assessment Battery for Children (K-ABC; Kaufman & Kaufman, 1983) and found that the median validity coefficient between Total Reading Full-Scale score and the K-ABC Reading score was .85 (Williams & Eaves, 2001). Woodcock (1987) reported split-half reliabilities, with the Spearman-Brown correction, using members of the original norm group for the calculations. The median reliability coefficients were high: WA (r = .89); WI (r = .97); and PC (r = .92).
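The split-half reliabilities above were adjusted with the Spearman-Brown correction, which projects full-length test reliability from the correlation between two half-tests. A minimal sketch of that formula (the .80 input is an illustrative value, not a coefficient reported by Woodcock):

```python
# Sketch of the Spearman-Brown correction applied to a split-half correlation,
# the adjustment Woodcock (1987) reports for the WRMT-R reliabilities.
def spearman_brown(split_half_r: float) -> float:
    """Project the reliability of the full-length test from the
    correlation between its two halves: r_sb = 2r / (1 + r)."""
    return 2 * split_half_r / (1 + split_half_r)

# Example: an illustrative half-test correlation of .80 projects to a
# full-test reliability of about .89.
print(round(spearman_brown(0.80), 2))  # 0.89
```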

In general, the DIBELS assessments have excellent technical adequacy as well (cf. Elliott, Lee, & Tollefson, 2001; Fuchs, Fuchs, & Compton, 2004; Good, Kaminski, Simmons, & Kame'enui, 2001; Hintze, Ryan, & Stoner, 2003; Kaminski & Good, 1996; Speece, Mills, Ritchey, & Hill, 2003; Vadasy, Sanders, & Peyton, 2005). In this research, two of the four subtests of DIBELS were administered: phoneme segmentation fluency (PSF) was assessed to measure students' phonological awareness, such as the ability to identify and produce the individual sounds of a given word; and nonsense word fluency (NWF) was assessed to measure students' mastery of the alphabetic principle, such as knowledge of letter-sound correspondences and the ability to blend letters together to read unfamiliar nonsense words.

PSF assesses the ability to segment words into individual phonemes. The examiner presents three- and four-phoneme words orally, and the student is asked to say each word as its individual phonemes. The number of correct phonemes per minute is the recorded score. Alternate-form reliability for PSF is .88 for kindergarten children (Kaminski & Good, 1996). Concurrent, criterion-related validity of PSF with the readiness cluster score of the Woodcock-Johnson Psycho-Educational Battery was .54 in the spring of kindergarten (Good et al., 2001). Concurrent validity estimates ranged from .43 to .65 on other measures of cognitive ability and school readiness (Kaminski & Good, 1996). Predictive validity of spring kindergarten PSF with spring 1st-grade Woodcock-Johnson Psycho-Educational Battery was .68, and .62 with curriculum-based measurement Oral Reading Fluency (Good et al., 2001).

For NWF, students have 1 minute to read consonant-vowel-consonant pseudo-words. The score is the number of sounds produced correctly, with credit earned either by saying individual sounds in the pseudo-words or by phonologically recoding the pseudo-words (with three sounds awarded for each correctly read pseudo-word). Good et al. (2001) reported concurrent validity with the Woodcock-Johnson readiness cluster score (i.e., visual auditory learning and letter identification) ranging from .35 in May to .59 in February (median coefficient = .52) with samples of 70 to 242 children. The predictive validity coefficients from October of 1st grade to May of 1st grade were .71 with respect to passage reading fluency and .52 with respect to the Woodcock-Johnson reading cluster score.
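The NWF scoring rule just described lends itself to a simple tally: one point per correct letter sound, with all three points granted when a pseudo-word is read correctly as a whole. The sketch below illustrates that rule; the response structure and example words are hypothetical, since actual DIBELS scoring is done live by an examiner on standardized probes.

```python
# Sketch of the Nonsense Word Fluency scoring rule described above: a student
# earns one point per correct letter sound, or all three points when a
# consonant-vowel-consonant pseudo-word is recoded (read) correctly as a whole.
from dataclasses import dataclass

@dataclass
class NwfResponse:
    word: str                 # the CVC pseudo-word presented, e.g., "vaj" (hypothetical)
    recoded_correctly: bool   # student read the whole word correctly
    correct_sounds: int       # individual sounds produced correctly (0-3)

def nwf_score(responses: list[NwfResponse]) -> int:
    """Total correct letter sounds produced in the one-minute sample."""
    total = 0
    for r in responses:
        total += 3 if r.recoded_correctly else r.correct_sounds
    return total

# Example one-minute sample: two words read whole, one partially sounded out.
sample = [NwfResponse("vaj", True, 3),
          NwfResponse("sim", True, 3),
          NwfResponse("lut", False, 2)]
print(nwf_score(sample))  # 8
```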

Treatment Fidelity. To ensure that teaching assistants implemented the intervention as intended, we regularly observed them during entire instructional sessions. We used a rating scale to evaluate the fidelity of implementation of each activity or section of a lesson across critical categories: 1) appropriate pacing, 2) implementation of prescribed procedures, 3) error correction with appropriate scaffolding, and 4) student engagement and attentiveness. A score of 3 indicated that the instructor implemented the category exactly as intended. A score of 2 indicated an acceptable manner of implementation for a category, but with some error. A score of 1 represented a poorly implemented category. Likewise, we included a global checklist (1 = acceptable, 0 = unacceptable) for readiness of instructional materials, appropriate student seating arrangement, and instructor warmth and enthusiasm. As an indication of overall fidelity for each instructional session observed, we calculated the percent of categories with acceptable or higher ratings. On average across 415 observations, teaching assistants conducted their respective interventions with high levels of fidelity (M = 92.53%, range = 83.41-99.59).
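The overall fidelity index was simply the percentage of observed categories rated acceptable or better in a session. A small sketch of that computation, with illustrative category names and an assumed acceptability threshold of 2 on the 1-3 scale items and 1 on the checklist items:

```python
# Sketch of the overall-fidelity computation described above: the percent of
# observed categories rated acceptable or better in a session. Category names
# and ratings here are illustrative, not taken from the project instruments.
def session_fidelity(scale_ratings: dict[str, int],
                     checklist_ratings: dict[str, int]) -> float:
    """Percent of categories meeting the acceptable threshold
    (assumed >= 2 on the 1-3 scale items, 1 on the 0/1 checklist items)."""
    acceptable = sum(1 for v in scale_ratings.values() if v >= 2)
    acceptable += sum(1 for v in checklist_ratings.values() if v == 1)
    total = len(scale_ratings) + len(checklist_ratings)
    return 100.0 * acceptable / total

scale = {"pacing": 3, "prescribed procedures": 2,
         "error correction": 3, "student engagement": 2}
checklist = {"materials ready": 1, "seating arrangement": 1, "warmth": 0}
print(round(session_fidelity(scale, checklist), 2))  # 85.71
```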

Data Analysis

A doubly multivariate analysis of variance (MANOVA) was employed because the noncommensurate dependent variables (WI, WA, PC, NWF, and PSF) were measured repeatedly for both the treatment and control groups (Tabachnick & Fidell, 2007). Wilks' Lambda was used as the test statistic for the MANOVA, and the overall alpha level was set at .05 for all statistical analyses in this study.
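With only two occasions (fall and spring), one common way to carry out the time-by-group portion of such a doubly multivariate design is to run a one-way MANOVA on the vector of spring-minus-fall difference scores. The sketch below shows that approach with statsmodels; the data file and column names are hypothetical, and this is not the authors' actual analysis code.

```python
# Minimal sketch: test the time-by-group effect in a two-occasion doubly
# multivariate design by running a one-way MANOVA on spring-minus-fall
# difference scores for the five measures. File and column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("literacy_scores.csv")  # one row per student, fall/spring columns

for m in ["wi", "wa", "pc", "psf", "nwf"]:
    df[f"gain_{m}"] = df[f"spring_{m}"] - df[f"fall_{m}"]

mv = MANOVA.from_formula(
    "gain_wi + gain_wa + gain_pc + gain_psf + gain_nwf ~ group", data=df
)
print(mv.mv_test())  # the Wilks' Lambda row for 'group' addresses the time x group interaction
```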

Plots of residuals against the order of observations were scattered without clear patterns, indicating that the assumption of independent observations was tenable. We checked multivariate normality by examining the marginal (univariate) normality of each variable, using normal probability plots of the residuals and histograms. Square root transformations corrected the positively skewed distributions of fall NWF and PSF, but the statistical results did not differ between the original and transformed NWF and PSF scores. Therefore, we used the original NWF and PSF scores for all analyses to simplify the interpretation of outcomes. Box's M tests for the equality of covariance matrices were significant (p < .05) for both the independent measures MANOVA and the repeated measures MANOVA. Because the group sizes were unequal (101 for the treatment group and 38 for the control group), the generalized variances (the determinants of the covariance matrices) of the two groups were compared to determine whether the MANOVA results would be liberal or conservative. To examine students' growth in reading skills within the treatment and control groups visually, we also developed figures illustrating the changes in the participants' reading skills.
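The generalized-variance check mentioned above compares the determinants of the two groups' covariance matrices when Box's M is significant and group sizes are unequal. A minimal sketch, again with hypothetical column names:

```python
# Sketch of the generalized-variance comparison described above: with unequal
# group sizes and a significant Box's M test, compare the (log) determinants of
# the two groups' covariance matrices to judge whether the MANOVA is liberal or
# conservative. Column names and the data file are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("literacy_scores.csv")
measures = ["fall_wi", "fall_wa", "fall_pc", "fall_psf", "fall_nwf"]

for label, grp in df.groupby("group"):
    cov = np.cov(grp[measures].to_numpy(), rowvar=False)
    sign, logdet = np.linalg.slogdet(cov)
    print(f"{label}: log|S| = {logdet:.2f}")
# If the larger group also has the larger log determinant (as reported in the
# Results: 23.61 for treatment vs. 21.81 for control), the test is conservative.
```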

Results

Descriptive statistics for the participants' reading skills are presented in Table 2 for both the treatment and control groups. The MANOVA for students' reading skills in the fall suggested that the treatment and control groups differed significantly on the combination of the five variables, Wilks' Lambda = 0.88, F(5, 131) = 3.61, p = .004, partial η² = .12, with an observed power of .92. The log of the determinant of the covariance matrix of the dependent variables was 21.81 for the control group and 23.61 for the treatment group, which means that the larger generalized variance was associated with the larger group. Therefore, the results could only be conservative, indicating no concern about Type I error (Stevens, 2002). Nevertheless, none of the tests of between-subjects effects was statistically significant: F(1, 135) = 0.05, p = .82 for PSF; F(1, 135) = 3.66, p = .06 for NWF; F(1, 135) = 0.14, p = .71 for WI; F(1, 135) = 1.36, p = .25 for WA; and F(1, 135) = 2.15, p = .15 for PC. None of the effect sizes measured by partial η² was larger than .03.

The differences between fall and spring reading skills, however, were statistically significant, Wilks' Lambda = 0.31, F(5, 111) = 49.00, p < .001, partial η² = .69. The results indicated no significant overall differences between the treatment and control groups' average fall and spring reading skills, but all students gained significantly from fall to spring. The interaction between time (fall and spring) and status (treatment and control) was significant, Wilks' Lambda = 0.67, F(5, 111) = 3.43, p < .01, partial η² = .13, with an observed power of .90. The significant interaction suggested that, on average, the treatment group gained more than the control group in reading skills (see Figures 1-3 for WI, WA, and PC).

Tests of within-subjects contrasts showed non-significant interactions between time and status for NWF, F(1, 115) = 0.024, p = .878, and PC, F(1, 115) = 0.021, p = .884. As a result, we interpreted the main effects for these two variables. Both the treatment and control groups gained significantly from fall to spring on NWF, F(1, 115) = 146.35, p < .001, partial η² = .56, and PC, F(1, 115) = 47.49, p < .001, partial η² = .29. Because of the significant time-by-status interactions for PSF, F(1, 115) = 8.20, p = .01; WI, F(1, 115) = 4.67, p = .03; and WA, F(1, 115) = 5.79, p = .02, the gains in student reading skills on these measures were examined separately for the treatment and control groups with t-tests. Cohen's d (Cohen, 1988) was used to report the effect size of the differences.

For PSF, both the treatment and control groups made statistically significant gains: t(83) = -11.68, p < .001, d = 1.28, for the treatment group and t(34) = -5.78, p < .001, d = .98, for the control group. For WI, the treatment group made significant gains, t(87) = -6.37, p < .001, d = .68, but the control group did not, t(33) = -1.22, p > .05, d = .21. For WA, the treatment group made significant gains, t(87) = -3.68, p < .001, d = .39, whereas the control group scored lower in the spring than in the fall, although this loss was not statistically significant, t(33) = .70, p > .05, d = .12. Even where both the treatment and control groups made statistically significant gains, the treatment group gained more than the control group, as indicated by the significant time-by-status interactions.
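The follow-up tests reported here are paired (fall vs. spring) t-tests within each group, with Cohen's d describing the size of the gain. A sketch of that computation using simulated placeholder scores rather than the study data; note that d for a paired design can be scaled in more than one way, and the version below divides the mean gain by the standard deviation of the gain scores.

```python
# Sketch of the follow-up analysis reported above: a paired t-test on fall vs.
# spring scores within one group, with an effect size for the gain.
# Arrays are simulated placeholders, not the study data.
import numpy as np
from scipy.stats import ttest_rel

def paired_gain(fall: np.ndarray, spring: np.ndarray):
    t, p = ttest_rel(fall, spring)          # negative t means spring > fall, as in the paper
    diff = spring - fall
    d = diff.mean() / diff.std(ddof=1)      # one common way to scale a paired gain
    return t, p, d

rng = np.random.default_rng(0)
fall = rng.normal(94, 14, size=88)           # illustrative standard-score-like values
spring = fall + rng.normal(7, 10, size=88)   # simulated fall-to-spring gains
print(paired_gain(fall, spring))
```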

Discussion and Conclusions

The ability to read is highly valued and essential for academic, social, and economic advancement. Despite the United States having a mostly effective educational system, many children fail to read adequately by the end of 3rd grade. Large numbers of young people continue to struggle with reading and remain at risk in middle school, rarely performing at the same level as their peers (cf. Allington & McGill-Franzen, 2003; Baker, 2002; Bintz, 1997; Bishop, 2003; Bishop & League, 2006; Guthrie & Davis, 2003; International Reading Association, 2001; Ivey, 1999; Ivey & Broaddus, 2000; Marston, 2005; McCray, 2001; McCray, Vaughn, & Neal, 2001; Tivnan & Hemphill, 2005; Vaughn & Fuchs, 2003). Culturally and ethnically diverse learners who are struggling readers are also more likely to experience continuous failure, to be referred and placed in special education, to experience life in the lower track in school, and to enter the world after school as a high school dropout (Haager & Windmueller, 2001; King-Sears et al., 2004; McCray, 2001; McGill-Franzen, 2005; Tivnan & Hemphill, 2005; Vaughn et al., 2005; Winzer & Mazurek, 1998).

In this article, we are concerned with young people whose educational careers are in danger because they do not read well enough to succeed in school, including about 80 percent of students with learning disabilities and related literacy-based disorders who have difficulty reading and writing (Frankenberger & Franzaglio, 1991; Kirk & Elkins, 1975; Lerner & Kline, 2006; Lyon, 1985; Snow et al., 1998; Ysseldyke & Algozzine, 1995; Ysseldyke, Algozzine, & Thurlow, 2000). Low reading performance in early grades and potential academic failure and dropout after 8th grade signal the continuing need for immediate, explicit, and effective reading interventions for at-risk students, especially those with reading disabilities (Haager & Windmueller, 2001; Kamps & Greenwood, 2005; Lerner & Kline, 2006; McCray, 2001; McGill-Franzen, 2005; Tivnan & Hemphill, 2005).

In a recent listing of "must reads," the Research Committee of the Council for Learning Disabilities (CLD) identified three topics "receiving widespread attention in the field of LD" (King-Sears et al., 2004, p. 77):

1. Focusing attention on increasing implementation of research-based practices in schools.

2. Exploring specialized, individualized, and responsive practices for meeting the needs of students with different types of learning disabilities.

3. Examining the implications of applying new identification and eligibility criteria within the context of response to intervention (RTI) models.

The commentaries provide a context for the importance of each topic, offer reasons for using the information, and point to its future value in the field of learning disabilities. Each area has relevance for our research.

Articles related to the first two topics addressed needs and difficulties associated with "scaling up," or increasing the use of, evidence-based interventions and practices in U.S. schools. The review of the scholarly work of others pointed out the importance of administrative support, evidence-based core reading instruction in general education classrooms, ongoing professional development, flexibility, individualized attention, and continuous monitoring to the success of efforts to improve instruction for students at risk of failure in school. In our study, we provided intensive and continuing professional development to assistants delivering targeted, small-group, and individual interventions to 1st-grade students. We used direct, curriculum-based measures to identify students in need of intervention, and we used ongoing data-based decision making to inform the special instruction and to alter the progress and process of providing the assistance. Administrator support was evident in overall acceptance of the model in the schools and in flexible personnel and scheduling arrangements that allowed more students needing supplementary instruction to receive it, or allowed those receiving it to get more time and practice. The study took place in schools implementing an evidence-based core reading program with considerable district-level support. Our evidence supports the "must reads" conclusion "that struggling readers benefit from an intensive, explicit approach to reading instruction" (King-Sears et al., 2004, p. 82).

Though less developed, likely because it is closely related to the other areas, the third topic area paid homage to the growing interest in RTI and its "intended benefits" in efforts to improve decision making for students likely to experience continuing academic difficulties, including:

* Students who are at risk for school failure are availed of [a] "bridge" intended to close the gap between their performance and that of their same-age peers.

* All students receive effective instructional procedures in which their progress is monitored and responsive instruction occurs.

* Fewer students end up in a program for students with LD, because their learning issues are addressed quickly and effectively.

* More students who are found eligible for special education services are more likely to have "true" learning disabilities. (King-Sears et al., 2004, p. 78)

Although germane to RTI, these clearly are general outcomes with relevance to efforts to improve reading instruction and schooling for all children. We did not initiate our study to illustrate the presumed or actual impact of RTI, but our work bears witness to these global benefits of early intervention and multi-leveled literacy instruction. The treatment group students made statistically significant progress on all three grade-based standard scores of the WRMT-R, but the control group students made statistically significant progress on only one of these three measures, and the effect sizes for the two non-significant differences are both small (d = .21 and d = .12, respectively). This suggests that those differences would not likely be statistically significant even with a larger sample, because Cohen's d (Cohen, 1988) represents the standardized difference between the two means (the mean difference divided by the pooled standard deviation). Furthermore, the treatment group's effect sizes for the difference between fall and spring scores on WI and WA were both larger than those of the control group students (.68 versus .21 and .39 versus .12), indicating that the students in the treatment group made greater improvements in their literacy skills during the first year than did the students in the control group. When students' literacy skills were measured by PSF, the treatment group students scored lower than the control group students in the fall (18.77 versus 19.50) but higher in the spring (37.35 versus 30.83). The difference was not statistically significant in the fall but was significant in the spring, indicating that the treatment group students not only caught up with the control group students but also surpassed them. The outcome for NWF was different, with both groups making significant progress during the school year. This makes sense in light of the focus of early reading lessons on letter-sound correspondences (i.e., alphabetic understanding) and auditory skills of blending and segmenting (i.e., phonemic awareness)--the targets of intervention for children in the treatment group.
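For reference, the between-group version of Cohen's d described in this paragraph divides the difference between two group means by their pooled standard deviation. A minimal sketch with illustrative numbers (not values from Table 2):

```python
# Sketch of Cohen's d for two independent groups: the mean difference divided
# by the pooled standard deviation (Cohen, 1988). Inputs are illustrative only.
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Example: group means half a standard deviation apart yield d = 0.50.
print(round(cohens_d(105.0, 10.0, 100, 100.0, 10.0, 40), 2))  # 0.5
```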

Continuing concern for low-performing students, especially those in urban, high-risk environments, caused many districts to adopt school-wide models for the reform of literacy instruction, and that trend intensified under No Child Left Behind and the Reading First Act (cf. Tivnan & Hemphill, 2005; U.S. Department of Education, 2004). The model evaluated in this research is grounded in the accumulated knowledge in general and special education on effective reading instruction and tutorial programs (cf. Allington, 2002; Bintz, 1997; Bishop, 2003; Clay, 1985; Coyne et al., 2001; Goodman, 2006; Guthrie & Davis, 2003; Ivey, 1999; Kamps & Greenwood, 2005; King-Sears et al., 2004; NICHD, 2000a, 2000b, 2000c; Slavin, Madden, Dolan, & Wasik, 1996; Snow, et al., 1998; Torgesen, 1995; Vadasy, Jenkins, Antil, Wayne, & O'Connor, 1997; Vaughn et al., 2005; Wasik & Slavin, 1993). Its primary goal was to provide a basic understanding of reading to nonreaders and those with pronounced reading difficulty by employing age-appropriate materials, promoting independence in reading, and using a direct instructional approach with repetition and immediate performance feedback.

Limitations and Suggestions for Future Studies

The students in the treatment group made greater gains on the literacy measures than did the students in the control group, but the independent measures MANOVA failed to detect statistically significant differences between the treatment and control groups on these same measures. This indicates that, while the growth rates of the students in the treatment group were higher than those of the students in the control group, the differences between the treatment and control groups may take more time to become statistically significant. Therefore, longitudinal work appears warranted so that students' growth rates can be measured over consecutive years, when the gap between the treatment and control group students might be large enough to be detected by an independent measures MANOVA. With respect to methodology, this study also revealed the limitations of an independent measures MANOVA, which considers only two measurement points. With a longitudinal design, hierarchical linear modeling (HLM) could be used to examine the trajectory of students' growth in literacy through growth-curve analysis.
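As one concrete way to pursue the growth-curve direction suggested above, repeated scores could be stacked in long format and fit with a two-level mixed model (occasions nested within students), for example with statsmodels' MixedLM. The sketch below is only illustrative; the file and column names are hypothetical, and a dedicated HLM package would express the same random-slope model.

```python
# Sketch of a growth-curve model in the spirit of the HLM suggestion above:
# repeated literacy scores nested within students, with a random intercept and
# slope per student. Long-format columns ('score', 'time', 'treatment',
# 'student_id') are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("literacy_scores_long.csv")  # one row per student per occasion

model = smf.mixedlm(
    "score ~ time * treatment",      # does the growth rate differ by group?
    data=long_df,
    groups=long_df["student_id"],
    re_formula="~time",              # random intercept and slope per student
)
result = model.fit()
print(result.summary())
```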

The knowledge base about how to improve early literacy is richer today than ever before, and the field of learning disabilities is facing "a desirable dilemma" relative to "how to ensure that those techniques become more widespread and universal practices" (cf. Goodman, 2006; King-Sears et al., 2004, p. 87). The need is clear for continuing research demonstrating that teachers and other professionals can implement these methods with positive effects in typical schools and educational contexts, rather than only in tightly controlled, small-scale experimental settings. These studies must provide teachers access to evidence-based practices that initiate, support, and sustain early and intensive intervention.

Authors' Note: Support for this research was provided in part by Grant No. H237F40012 and H238X00001 from the U.S. Department of Education, Office of Special Education Programs, awarded to the University of North Carolina at Charlotte. The opinions expressed do not necessarily reflect the position or policy of the Department of Education, and no official endorsement should be inferred. Correspondence concerning this article should be addressed to Chuang Wang, BRIC/EDLD/COED, University of North Carolina at Charlotte, Charlotte, NC 28223 [cwangl5@uncc.edu].

References

Adams, M. J. (2001). Alphabetic anxiety and explicit, systematic phonics instruction: A cognitive science perspective. In S. B. Newman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 66-80). New York: Guilford.

Allington, R. L. (2002). What I've learned about effective reading from a decade of studying exemplary elementary classroom teachers. Phi Delta Kappan, 83, 740-747.

Allington, R., & McGill-Franzen, A. (2003). The impact of summer loss on the reading achievement gap. Phi Delta Kappan, 85, 68-75.

Baker, M. I. (2002). Reading resistance in middle school. Journal of Adolescent and Adult Literacy, 45, 364-366.

Bintz, W. P. (1997). Exploring reading nightmares of middle and secondary school teachers. Journal of Adolescent and Adult Literacy, 41, 12-24.

Bishop, A. G. (2003). Prediction of first-grade reading achievement: A comparison of fall and winter kindergarten screenings. Learning Disability Quarterly, 26, 189-200.

Bishop, A. G., & League, M. B. (2003). Identifying a multivariate screening model to predict reading difficulties at the onset of kindergarten: A longitudinal study. Learning Disability Quarterly, 26, 235-252.

Carnine, D. W., Silbert, J., Kame'enui, E. J., & Tarver, S. (2004). Direct instruction reading (4th ed.). Upper Saddle River, NJ: Prentice-Hall.

Clay, M. M. (1985). The early detection of reading difficulties (3rd ed.). Portsmouth, NH: Heinemann.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.

Coyne, M. D., Kame'enui, E. J., & Simmons, D. C. (2001). Prevention and intervention in beginning reading: Two complex systems. Learning Disabilities Research & Practice, 16, 62-73.

Ehri, L. C., Nunes, S. R., Willows, D. M., Schuster, B. V., & Yaghoub-Zadeh, Z. (2001). Phonemic awareness instruction helps children learn to read: Evidence from the National Reading Panel's meta-analysis. Reading Research Quarterly, 36, 250-287.

Elliott, J., Lee, S. W., & Tollefson, N. (2001). A reliability and validity study of the Dynamic Indicators of Basic Early Literacy Skills-Modified. School Psychology Review, 30, 33-49.

Elliott, S. N., Huai, N., & Roach, A. T. (2007). Universal and early screening for educational difficulties: Current and future approaches. Journal of School Psychology, 45, 137-161.

Frankenberger, W., & Fronzaglio, J. (1991). A review of states' criteria and procedures for identifying children with learning disabilities. Journal of Learning Disabilities, 24, 495-500.

Frechtling, J. A., Zhang, X., & Silverstein, G. (2006). The Voyager Universal Literacy System: Results from a study of kindergarten students in inner-city schools. Journal of Education for Students Placed at Risk, 11, 75-95.

Fuchs, L. S., Fuchs, D., & Compton, D. L. (2004). Monitoring early reading development in first grade: Word identification versus nonsense word fluency. Exceptional Children, 71, 7-21.

Good, R. H., & Kaminski, R. A. (2002). Dynamic indicators of basic early literacy skills (6th ed.). Longmont, CO: Sopris West.

Good, R. H., & Kaminski, R.A. (Eds.). (2003). Dynamic indicators of basic early literacy skills (6th ed.). Retrieved August 12, 2007, from The University of Oregon, Center on Teaching and Learning Web site: http://dibels.uoregon.edu/

Good, R. H., III, Kaminski, R. A., Simmons, D., & Kame'enui, E.J. (2001). Using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model: Steps to reading outcomes. Unpublished manuscript, University of Oregon at Eugene.

Goodman, K. S. (2006). The truth about DIBELS. Portsmouth, NH: Heinemann.

Guthrie, J. T., & Davis, M. H. (2003). Motivating struggling readers in middle school through an engagement model of classroom practice. Reading and Writing Quarterly, 19, 59-85.

Haager, D., & Windmueller, M. P. (2001). Early reading intervention for English Language Learners at-risk for learning disabilities: Student and teacher outcomes in an urban school. Learning Disability Quarterly, 24, 235-250.

Hintze, J. M., Ryan, A. L., & Stoner, G. (2003). Concurrent validity and diagnostic accuracy of the Dynamic Indicators of Basic Early Literacy Skills and the Comprehensive Test of Phonological Processing. School Psychology Review, 32, 541-556.

International Reading Association. (2001). Supporting young adolescents' literacy learning: A joint position paper of the International Reading Association and National Middle School Association. Retrieved August 1, 2003, from www.ira.org/positions/supporting_young_adolesc.html

Ivey, G. (1999). Reflections on teaching struggling middle school readers. Journal of Adolescent and Adult Literacy, 42, 372-381.

Ivey, G., & Broaddus, K. (2000). Tailoring the fit: Reading instruction and middle school readers. The Reading Teacher, 54, 68-78.

Kamil, M., Mosenthal, P., Pearson, P. D., & Barr, R. (2000). Handbook of reading research. Mahwah, NJ: Lawrence Erlbaum.

Kaminski, R. A., & Good, R. H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.

Kamps, D. M., & Greenwood, C. R. (2005). Formulating secondary-level reading interventions. Journal of Learning Disabilities, 38, 500-509.

Kaufman, A. S., & Kaufman, N. L. (1983). Kaufman Assessment Battery for Children. Circle Pines, MN: American Guidance Service.

King-Sears, M. E., Boudah, D. J., Goodwin, M. W., Raskind, M. H., & Swanson, H. L. (2004). Timely and compelling research for the field of learning disabilities: Implications for the future. Learning Disability Quarterly, 27, 77-88.

Kirk, S. A., & Elkins, J. (1975). Characteristics of children enrolled in child service demonstration centers. Journal of Learning Disabilities, 8, 630-637.

Lerner, J. W., & Kline, J. (2006). Learning disabilities and related disorders (10th ed.). Boston: Houghton Mifflin.

Lyon, R. (1985). Educational validation studies of learning disability subtypes. In B. P. Rourke (Ed.), Neuropsychology of learning disabilities: Essentials of subtype analysis (pp. 228-253). New York: Guilford Press.

Manzo, K. K. (2005, September 28). National clout of DIBELS test draws scrutiny. Education Week, 25(5), 1, 12.

Marston, D. (2005). Tiers of intervention in responsiveness to intervention: Prevention outcomes and learning disabilities identification patterns. Journal of Learning Disabilities, 38, 539-544.

McCray, A. D. (2001). Middle school students with reading disabilities. Reading Teacher, 55, 298-300.

McCray, A. D., Vaughn, S., & Neal, L. V. I. (2001). Not all students learn to read by third grade: Middle school students speak out about their reading disabilities. The Journal of Special Education, 35, 17-30.

McGill-Franzen, A. (2005). In the press to scale up, what is the risk? Reading Research Quarterly, 40, 367-370.

National Institute of Child Health and Human Development. (2000a). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). Washington, DC: U.S. Government Printing Office.

National Institute of Child Health and Human Development. (2000b). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction [Online]. Retrieved June 26, 2006, from www.nichd.nih.gov/publications/nrp/smallbook.htm

National Institute of Child Health and Human Development. (2000c). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction: Reports of the subgroups (NIH Publication No. 00-4754). Washington, DC: U.S. Government Printing Office.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: NICHD.

National Research Council. (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press. [Online]. Retrieved June 26, 2006, from http://books.nap.edu/books/030906418X/html/index.html

Newman, S. B., & Dickinson, D. K. (2001). Handbook of early literacy research. New York: Guilford.

No Child Left Behind Act of 2001. (2002). Retrieved June 26, 2006, from www.ed.gov/legislation/ESEA02/107-110.pdf

O'Connor, R. E. (2000). Increasing the intensity of intervention in kindergarten and first grade. Learning Disabilities Research & Practice, 15, 43-54.

O'Connor, R. (2003, December). Tiers of intervention in kindergarten through third grade. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.

Simmons, D. C., & Kame'enui, E. J. (1998). What reading research tells us about children with diverse learning needs: Bases and basics. Mahwah, NJ: Lawrence Erlbaum.

Slavin, R. E., Madden, N. A., Dolan, L. J., & Wasik, B. A. (1996). Every child, every school: Success for all. Newbury Park, CA: Corwin.

Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.

Speece, D. L., Mills, C., Ritchey, K. D., & Hill, E. (2003). Initial evidence that letter fluency tasks are valid indicators of early reading skill. The Journal of Special Education, 36, 223-233.

Stevens, J. (2002). Applied multivariate statistics for the social sciences. Mahwah, NJ: Lawrence Erlbaum.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. New York: Pearson.

Tivnan, T., & Hemphill, L. (2005). Comparing four literacy reform models in high-poverty schools: Patterns of first-grade achievement. Elementary School Journal, 105, 419-442.

Torgesen, J. (1995). Instruction for reading disabled children: Questions about knowledge into practice. Issues in Education, 1, 91-96.

U.S. Department of Education. (2004). Charting the course: States decide major provisions under No Child Left Behind. Retrieved December 12, 2006, from www.ed.gov/news/pressreleases/2004/01/01142004.html

Vadasy, P. F., Jenkins, J. R., Antil, L. R., Wayne, S. K., & O'Connor, R. E. (1997). Community-based early reading intervention for at-risk first graders. Learning Disabilities Research and Practice, 12, 29-39.

Vadasy, P. F., Sanders, E. A., & Peyton, J. A. (2005). Relative effectiveness of reading practice or word-level instruction in supplemental tutoring: How text matters. Journal of Learning Disabilities, 38, 364-380.

Vaughn, S. (2003, December). How many tiers are needed for response to intervention to achieve acceptable prevention outcomes? Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.

Vaughn, S., & Fuchs, L. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137-146.

Vaughn, S., Gersten, R., & Chard, D. J. (2000). The underlying message in LD intervention research: Findings from research syntheses. Exceptional Children, 67, 99-114.

Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to treatment as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.

Vaughn, S., Mathes, P. G., Linan-Thompson, S., & Francis, D. J. (2005). Teaching English language learners at risk for reading difficulties to read: Putting research into practice. Learning Disabilities Research & Practice, 20, 58-67.

Wasik, B. A., & Slavin, R. E. (1993). Preventing early reading failure with one-to-one tutoring: A review of five programs. Reading Research Quarterly, 28, 179-200.

Williams, T. O., Jr., & Eaves, R. C. (2001). Exploratory and confirmatory factor analyses of the Woodcock Reading Mastery Tests-Revised with special education students. Psychology in the Schools, 38, 561-567.

Winzer, M. A., & Mazurek, K. (1998). Special education in multicultural contexts. Upper Saddle River, NJ: Prentice Hall.

Woodcock, R. W. (1987). Woodcock Reading Mastery Tests--Revised [Examiners Manual]. Circle Pines, MN: American Guidance Service.

Woodcock, R. W., & Johnson, M. B. (1977). Woodcock-Johnson Psycho-Educational Test Battery. Allen, TX: DLM/Teaching Resources.

Ysseldyke, J. E., & Algozzine, B. (1995). Special education: A practical approach for teachers. Boston: Houghton Mifflin.

Ysseldyke, J. E., Algozzine, B., & Thurlow, M. L. (2000). Critical issues in special education. Boston: Houghton Mifflin.

Chuang Wang

Bob Algozzine

University of North Carolina at Charlotte
Table 1
Background Information of Participating Schools in Percentages of the School Population

School    Non-White         1st Grade        Exceptional Children    Free/Reduced-price Lunch
01        90                17               11                      77
02        99                19               19                      94
03        88                17               11                      73
04        89                16               12                      86
05        73                17               12                      59
06        97                17               18                      92
χ²        4.72 (p = .45)    0.28 (p = .99)   4.83 (p = .44)          10.91 (p = .05)

Note. The degrees of freedom for all chi-square tests were 5.

Table 2
Means and Standard Deviations of the Literacy Skills Measured

 Treatment Control

Literacy Skill Fall Spring Fall Spring

Word Identification 93.92 100.73 94.70 95.66
 (13.89) (13.25) (15.99) (18.37)
Word Attack 95.52 100.95 98.00 95.83
 (14.55) (15.31) (13.66) (20.52)
Passage Comprehension 84.49 93.21 82.05 88.57
 (12.50) (12.76) (12.64) (14.38)
Phoneme Segmentation Fluency 18.77 37.35 19.50 30.83
 (14.46) (13.07) (14.49) (16.59)
Nonsense Word Fluency 12.96 42.24 9.00 39.03
 (11.94) (21.98) (6.44) (29.54)

Note. Values enclosed in parentheses are standard deviations.

Figure 1. Students' grade-based standard scores for Word Identification (treatment: 93.92 fall, 100.73 spring; control: 94.70 fall, 95.66 spring).

Figure 2. Students' grade-based standard scores for Word Attack (treatment: 95.52 fall, 100.95 spring; control: 98.00 fall, 95.83 spring).

Figure 3. Students' grade-based standard scores for Passage Comprehension (treatment: 84.49 fall, 93.21 spring; control: 82.05 fall, 88.57 spring).