An assessment of the North Carolina school-age child care accreditation initiative.
Cassidy, Deborah J.
Abstract
The primary purpose of this study was to determine if participation
in the North Carolina Quality Enhancement Initiative (NC QEI) improved
the overall quality of the 26 participating school-age child care
programs in three North Carolina communities. A paired t-test showed a
positive and significant increase in the quality of school-age child
care environments and teacher/child interactions over the 9-month
period, from pre-initiative to post-initiative. Pre-initiative and pre- to post-initiative difference scores on each dependent measure were
grouped together using a cluster analysis. Each cluster of programs was
compared to determine relationships between these program variables and
the program clusters. A state license and a smaller group size were
related to higher quality programs at pre-initiative. Director
education, teacher salary, state license, and program size were related
to greatest program improvement from pre-initiative to post-initiative.
In addition, six of the 10 programs that applied were awarded accreditation
by the National School-Age Care Alliance (NSACA). These findings suggest that
participation in program improvement initiatives, like the NC QEI, is a
viable means of improving the quality of school-age child care programs.
**********
Continuing increases in the number of mothers in the workforce have
created a phenomenon called "out-of-school time" care or
"school-age child care," which is care for school-age children
during the hours when school is not in session. While the term
"school-age child care" typically includes children ages 5 to
12, in elementary school, programs for middle-school students (ages
12-14) are also increasing in number (National Institute on
Out-of-School Time, 1998). In 1991, a national study found that 1.7
million children, kindergarten through 8th grade, were enrolled in
49,000 formal before- and after-school programs (Seppanen et al., 1993).
A more recent study estimated that 39% of kindergartners through
3rd-graders receive some form of non-parental care before and/or after
school on a weekly basis (Brimhall & Reaney, 1999). This figure
translates to a total of 6.1 million primary grade children, who spend
an average of 14 hours per week in out-of-school time child care
arrangements. Due to the increasing number of children enrolled in
school-age child care programs, policymakers, child advocates,
businesses, and educators are strategizing how to effectively improve
the quality of care available to families and children. One professional
business organization, the American Business Collaboration for Quality
Dependent Care (ABC), funded a pilot Quality Enhancement Initiative (NC
QEI) in the state of North Carolina to aid in the implementation of a
national system developed by the National School-Age Care Alliance
(NSACA) to accredit school-age child care programs.
Prior to the recent concern about the quality of school-age care
programs, policymakers, educators, and parents were concerned about the
quality of infant, toddler, and preschool care and education programs.
In the early 1980s, the National Association for the Education of Young
Children (NAEYC) began an accreditation program that outlined the
criteria necessary to support the physical, cognitive, and
socio-emotional development of young children birth to age 8. NAEYC
defines accreditation as a process in which a program's director,
staff, and parents voluntarily work with representatives of the
association to determine whether the program meets nationally recognized
criteria for high quality. Programs achieving accreditation have
demonstrated a commitment to providing the highest quality care and
education.
The NAEYC accreditation system also includes school-age child care
classrooms if the majority of children are 8 years old and younger. For
example, 31% of all NAEYC-accredited programs also serve school-age
children (age 6-8), not just infants, toddlers, and preschoolers.
Furthermore, 3% of accredited programs serve school-age children
exclusively (Bredekamp & Glowacki, 1996). Programs are awarded
accreditation based on their ability to provide developmentally
appropriate environments, as defined by NAEYC's position statement
on developmentally appropriate practices for children birth to age 8.
Since the NAEYC accreditation initiative was created and implemented
almost 10 years prior to the NSACA accreditation system, a review of
NAEYC accreditation findings provides valuable information in
understanding the effectiveness of an accreditation system for child
care centers in improving program quality.
Unfortunately, rather limited research has been conducted to
measure the level of quality achieved and maintained by NAEYC-accredited
programs. Whitebook (1996) argues that there is consensus within the
early childhood field, and among policymakers, that NAEYC accreditation
standards represent a level of quality that exceeds the licensing
standards and current level of care in most states. Because of this
consensus, initiatives that help centers achieve NAEYC accreditation
attract a wide range of funders (i.e., corporations, foundations,
unions, community groups, and governments). While many supporters have
committed funds to help programs through the self-study and validation process fundamental to NAEYC accreditation, few have funded research
into the success of the NAEYC accreditation system.
Accreditation As a Mechanism To Improve Quality
Although the accreditation process was not their primary focus, two
national preschool (birth to age 4) child care studies--the Cost, Quality,
and Child Outcomes (CQCO) study (Helburn, 1995) and the National Child Care
Staffing Study (NCCSS) (Whitebook, Howes, & Phillips,
1989)--included NAEYC-accredited programs in their investigation.
Findings from the NCCSS study indicate that accredited programs provided
better than average quality of care. The 14 accredited centers in the
sample differed from non-accredited centers on all dimensions of
quality. In fact, accredited centers paid higher wages, provided better
benefits and working conditions, and had lower turnover rates.
Furthermore, teachers in accredited programs were better educated and
had more early childhood training. The accredited programs also provided
more developmentally appropriate activities and had better staff/child
ratios. Teachers in accredited programs were rated as providing more
appropriate care than teachers in non-accredited programs. Overall, the
centers accredited by NAEYC provided high-quality care to children in
the NCCSS study.
Findings from the CQCO study showed that NAEYC-accredited centers
had higher than average quality; as a group, however, the accredited
centers (31 out of 401) did not provide the highest quality of care.
When the accredited centers were compared with three other types of
centers that also provided higher than average quality care (i.e.,
publicly operated, work-site, and publicly funded), accredited programs
did not provide as high a quality of services as some of the other types
of programs. However, there was some overlap among types. For example, a
higher quality work-site program also may be accredited. The staff/child
ratios in accredited programs were similar to those in publicly funded
programs, but not as high as those in work-site or publicly operated
programs. Accredited programs also employed more teachers with at least
a college degree and were more likely to offer health insurance and pay
somewhat higher wages for teachers and assistants in comparison to the
other centers offering higher quality care. However, two indicators of
quality--staff/child ratios and turnover (called "tenure" in
the report)--showed no differences between accredited and non-accredited
centers. Therefore, the CQCO study findings suggest that
NAEYC-accredited programs are better than average but not necessarily
the highest quality centers in a community. While these studies provide
valuable information about NAEYC-accredited centers, the NCCSS and CQCO
studies did not investigate the impact that participation in the NAEYC
accreditation process had on program quality.
Whitebook, Sakai, and Howes (1997) have conducted the only
large-scale investigation of the NAEYC accreditation process by
assessing 92 child care centers in three California communities. The
researchers began tracking centers when they initiated the accreditation
process, followed their progress over time, and compared them to other
centers in their communities. Interviews with teaching staff and center
directors also were conducted. Findings suggest that centers that
achieved accreditation were of higher quality when they began the
accreditation process and showed greater improvement in overall quality
scores, staff/child ratios, and staff/child interaction scores
than did programs that sought, but did not achieve, accreditation. In
fact, centers that began the accreditation self-study but did not
complete the process demonstrated no improvement in classroom quality,
staff/child ratios, and staff/child interactions. However, almost 40% of
the centers were rated as mediocre in quality, in spite of improvements
they had made while undergoing the accreditation process. Nonprofit status, higher wages, and retention of skilled staff, in combination
with NAEYC accreditation, were predictors of high quality care in the
participating child care centers.
Given the NAEYC accreditation findings, an accreditation initiative
exclusively for school-age child care may be a viable strategy for
improving school-age child care programs. The goal of this present study
was to determine the effectiveness of the North Carolina initiative.
Specifically, the study assessed whether participation in the North
Carolina Quality Enhancement Initiative improved the quality of the
school-age child care environment and teacher/child interactions, by
observing programs before and after participation in the initiative. In
addition, the relationships between structural and process variables and
program quality in school-age child care programs were examined.
Method
Participants
The school-age child care programs in the study were selected by
the funders of the initiative--IBM, AT&T, and GE Capital--from among
program applicants in three target communities in North Carolina.
Participation priority was given to programs that served children of
employees at the three funding companies. The goal of the project was to
select 10 programs from each community. However, not all of the
communities were able to recruit 10 programs; therefore, some
communities recruited more than 10, and some recruited fewer. Twenty-eight out
of the 30 selected programs agreed to participate in the evaluation.
Procedures
Pre-initiative observation and program demographic survey data were
collected on a total of 28 programs (N=28). Post-initiative data were
collected on 26 programs (N=26) in May of 1998: 7 in Greensboro, 8 in
Raleigh, and 11 in Charlotte. Over the course of the project, four
programs dropped out of the initiative, two of which were included in
the evaluation.
The program improvement initiative began with a two-day training
event planned and conducted by two National Institute on Out-of-School
Time (NIOST) training associates. Prior to
the two-day training, program administrators were asked to complete a
newly developed questionnaire, Readiness Scale for Program Improvement
and Accreditation in School-Age Child Care Programs (O'Connor,
1997), to place each program in one of two groups: First Steps or Team
Works. The Team Works programs were on a faster paced track designed to
have them ready to apply for NSACA accreditation in the spring of 1998.
The First Steps programs were on a more leisurely paced track focusing
on targeted program improvements during the first year, with the hope of
being ready to apply for NSACA accreditation in 1999. The two groups,
First Steps and Team Works, had different training agendas during their
two-day training event. Each program was also assigned an adviser, who
provided on-site consultation, telephone consultation, and/or resource
development for approximately nine months (September 1997 to May 1998).
The nine advisers (three from each community) were trained for two days
in May of 1997 and were allotted a maximum of 10 hours of technical
assistance per program and 10 hours of bi-monthly peer support meetings.
In addition, each participating program received two sources of
information to assist them in program improvement: the NSACA Pilot
Standards (Sisson, 1995) and the Assessing School-Age Quality (ASQ) Kit.
The post-initiative observation visits were conducted in May of
1998, prior to the NSACA accreditation endorser visits. The same age
group visited in the fall of 1997 was observed post-initiative. The data
collectors conducting the observations were blind to which programs had
applied for accreditation.
Measures
Information on school-age child care program structural or
regulatory features was collected via a questionnaire, which requested
information about the staff/child ratio, group size, director's
level of education, and number of children served. Pre-initiative and
post-initiative observations were conducted using three observation
measures. First, the School-age Care Environment Rating Scale (SACERS)
(Harms, Jacobs, & White, 1996) was used to assess the school-age
child care environment. The SACERS assesses the developmental
appropriateness of the school-age child care program, focusing on 43
items covering six sub-scales: space and furnishings, health and safety,
activities, interactions, program structure, and staff development.
There is a seventh sub-scale, of six items, for programs that include
children with special needs. Each item is rated on a 7-point scale with
the score of 1 signifying inadequate, 3 minimal, 5 good, and 7
excellent. An average score on the 43 items is then calculated. The
SACERS was chosen for measuring the program environment because it is a
comprehensive "best practice" rating scale for school-age
child care programs. The SACERS and the NSACA Pilot Standards (used in
the accreditation process) assess similar areas: indoor and outdoor
environment, health and safety, activities, interactions, and
administration.
Reliability and validity of the SACERS have been evaluated in
several ways. Reliability assessments include Cronbach's alpha =
.95, inter-rater agreement weighted Kappa = .83, and intraclass
correlation r = .96. The validity of the SACERS has been established
through high agreement between SACERS scores and expert evaluations of
quality. In the present study, inter-rater reliability was established
at 73%, and maintained at 80% on 20% of the total programs visited (two
in each community). SACERS reliability was re-established among all data
collectors after 6 months at 79% and maintained at 80% again on 20% of
the total programs visited (two in each community). Inter-rater
reliabilities were established to a criterion of 75% exact agreements
for all observational measures.
The quality of adult-child interactions was measured using the
Caregiver Interaction Scale (CIS) (Arnett, 1989) and the "Human
Relationships Keys of Quality" observation section from the Pilot
Standards for Quality School-age Child Care (Sisson, 1995). The CIS was
selected to measure the quality of teacher-child interactions because of
its previous use in national child care studies (Helburn, 1995;
Whitebook, Sakai, & Howes, 1997). The CIS is a 26-item scale used to
rate a single teacher. A score of 1 indicates that a given behavior is
"never true," while a score of 4 indicates that the behavior
is "often observed." For example, the CIS measure includes
statements such as "speaks warmly to the children" and
"doesn't supervise the children very closely." A
criterion level of 80% agreement between observers was established in
the study for which this measure was developed (Arnett, 1989).
Reliability was established at 89%, maintained on 20% of the programs at
92%, re-established at 84% for post-initiative, and maintained at 97%.
In addition, teacher-child interactions were assessed using the
"Human Relationships Keys of Quality" from the NSACA pilot
accreditation standards ASQ program observation (O'Connor, Gannett,
Heenen, & Mattenson, 1996). The human relationships (HR) category
consists of 9 keys, as well as standards specific to each key, for a
total of 36 items. A score of 0 indicates "no evidence" or
"not met," while a score of 3 indicates the standard is fully
met. "Staff relate to children in positive ways" and
"Staff use positive techniques to guide children's
behavior" are two examples from the nine human relationship keys.
In order for a program to achieve accreditation, NSACA guidelines state
that the program must score at least 10 on each human relationship key
and at least 2 on each standard. Reliability studies on the entire
ASQ Program Observation Instrument (O'Connor, Wheeler, Harms, &
Cryer, 1994) indicate an overall instrument intra-class correlation
coefficient of .84, a test-retest Kappa coefficient of .85, and an
overall Cronbach's alpha of .89. Inter-rater reliability was
established at 78%, maintained at 93%, re-established at 78%, and
maintained at 95% throughout 20% of the total observations.
The data collector arrived about 45 minutes before the children,
met with the site director, received a tour of the program, and
observed until most of the children were picked up by their parents. The
observation lasted approximately 3 hours.
Results
Program and director demographic information is
summarized in Table 1. Tests for normality were computed on each of the
three observational scores (CIS, SACERS, and HR), pre-initiative and
post-initiative, using the Shapiro-Wilk test for normality. All
distributions were normal, except for the HR post-initiative score
distribution (p=.0094). This distribution was skewed toward the higher
scores (ranging from 4.67 to 10.67, M = 8.24), which was noted. An alpha
level of .05 was used to determine significance for all statistical
tests.
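To illustrate this screening step, the following is a minimal sketch of
a Shapiro-Wilk check, assuming SciPy is available; the scores listed
are hypothetical placeholders, not the study data.

    from scipy import stats

    # Hypothetical post-initiative HR scores for a handful of programs
    hr_post = [8.33, 9.00, 10.67, 7.67, 8.00, 6.33, 9.67, 10.33, 7.33, 5.00]

    w, p = stats.shapiro(hr_post)   # Shapiro-Wilk test for normality
    alpha = 0.05
    if p < alpha:
        print(f"W = {w:.3f}, p = {p:.4f}: distribution departs from normality")
    else:
        print(f"W = {w:.3f}, p = {p:.4f}: no evidence against normality")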
On the SACERS, pre-initiative scores differed significantly from
post-initiative, t (25) = 3.73, p = .0005. In addition, on the HR,
pre-initiative scores differed significantly from post-initiative, t
(25) = 2.64, p = .0070; on the CIS, pre-initiative scores differed
significantly from post-initiative, t (25) = 1.70, p = .0511 (see Table
2). On the SACERS, 21 individual programs improved their scores; on the
HR, 17 programs improved; and on the CIS, 18 programs improved. Overall,
there was a positive and significant increase on each observational
measure pre-initiative to post-initiative.
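As an illustration of the paired comparison reported above, the sketch
below (assuming SciPy and NumPy, with hypothetical matched pre/post
scores rather than the study data) shows how the paired t-test and the
count of improved programs could be obtained.

    import numpy as np
    from scipy import stats

    # Hypothetical matched pre- and post-initiative SACERS means for the same programs
    pre  = np.array([3.1, 2.8, 4.0, 3.5, 3.3, 2.9, 4.2, 3.0])
    post = np.array([3.8, 3.0, 4.6, 4.1, 3.2, 3.5, 4.9, 3.6])

    t, p = stats.ttest_rel(post, pre)   # paired t-test on the same programs
    print(f"t({len(pre) - 1}) = {t:.2f}, two-sided p = {p:.4f}")
    print(f"programs that improved: {int(np.sum(post > pre))} of {len(pre)}")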
Profile Variables Related to Pre-Initiative Scores
Because of the small sample of programs and even smaller cell sizes
of profile variables, further tests of statistical significance
were not appropriate. Rather, it was deemed more appropriate to
cluster the dependent variable scores by program and then study the
cluster relationships by program characteristics, or by the profile
variables, such as group size. A cluster analysis is a multivariate technique that groups programs into clusters so that the programs in the
same cluster are more similar to one another than they are to programs
in the other clusters by some predetermined selection criteria. The
intent is to maximize the homogeneity of programs within the clusters
while also maximizing the heterogeneity among the groups (Hair,
Anderson, Tatham, & Black, 1998). The predetermined selection
criteria for this cluster analysis were the mean scores on the three
dependent measures.
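A minimal sketch of this grouping step is shown below, assuming SciPy
and a hypothetical program-by-measure score matrix; the pre-initiative
clustering reported next uses average linkage, and the later
difference-score clustering would substitute Ward's method.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Rows are programs; columns are mean SACERS, HR, and CIS scores (hypothetical)
    scores = np.array([
        [2.9, 3.5, 2.8],
        [3.2, 5.1, 2.3],
        [4.0, 8.6, 3.2],
        [2.7, 3.9, 2.9],
        [4.1, 9.0, 3.1],
        [3.3, 5.4, 2.2],
    ])

    tree = linkage(scores, method="average")              # average-linkage hierarchy
    clusters = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three clusters
    print(clusters)                                       # cluster label for each program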
The pre-initiative scores on all three dependent measures were
cluster analyzed using the average method of hierarchical clustering,
resulting in three clusters of programs. The means and standard
deviations of all three measures by cluster are summarized in Table 3.
Interpreting what types of programs these clusters represent involves
using the selection criterion variables to name or assign a label
accurately describing the clusters (Hair et al., 1998). Cluster One
(n=7) had the programs with the lowest mean scores, below the average
of the participating programs on all measures except the CIS (see Table 2).
Cluster Two (n=5) had the programs at about the overall mean scores of
participating programs on all measures, as reported in Table 2. Cluster
Three (n=14) had the programs with the higher than average mean scores
on all three measures. Clustering the programs by all three dependent
measures indicates that about half (12) of the programs were about
average or below average, and about half (14) were above average. These
clusters were compared with profile variables associated with
high-quality child care programs.
To create the profile variables (i.e., director education, director
salary, program size, auspice, state license, and staff/child ratio)
used in this analysis, the frequencies and percentages were grouped to
reduce the data (or cell sizes) to no more than three groups for each
profile variable. Director education was divided into high school
education (n=2), some college (n=5), and four years of college or more
(n=19). The director salary variable was split into two fairly even
groups: $10,000 to $20,000 per year (n=10) and $20,000 or higher (n=11).
Program size was grouped by 1 to 30 children (n=7), 31 to 70 children
(n=14), and 71 plus children (n=5). By using classifications made by the
National Study of Before- and After-School Programs (Seppanen et al.,
1993), the programs were divided into small programs, medium programs,
and large programs. The program auspice was categorized as for-profit (n=9) or non-profit (n=17). Participating programs either had a state
license (n=19) or they did not (n=6). The staff/child ratios were split
into three groups: 1:8-1:12 (n=10), 1:14-1:15 (n=11), and 1:18-1:24
(n=5). These ratio groupings represent low teacher/child ratios, average
teacher/child ratios, and high teacher/child ratios. For accreditation,
NSACA requires 1:8-1:12 for children age 6 and below and 1:10-1:15 for
children age 6 and above (low teacher/child ratios). North
Carolina's "A" License ratios are 1:20 for age 5 and
below and 1:25 for age 5 and older (high teacher/child ratios).
To examine the relationship between the profile variables (i.e.,
director education, director salary, staff/child ratios, program size,
auspice, and state license) and program clusters--all categorical variables--a measure of association for a contingency table was used.
Due to the small sizes of many cells, interpretation of the chi square test statistic would be suspect. Therefore, Pearson's measure of
association (or Pearson's P) for a contingency table was computed.
The computed associations between the clusters of programs and the
profile variables are as follows: staff/child ratios P = .24, director
education P = .31, director salary P = .14, program auspice P = .08,
program size P = .38, and state license P = .58. The P coefficient can
be interpreted like a correlation coefficient; therefore, the
associations of .35 or higher warranted closer examination (Bishop,
Feinberg, & Holland, 1975). In this case, those variables were
program size and state license. The cross-tabs contingency table (see
Table 4) reports the distribution of profile variables by cluster.
Cluster Three, the highest quality cluster, consisted entirely (100%) of
licensed centers and had a slightly larger percentage (38%) of small programs.
Cluster One, the lowest quality cluster, had a higher percentage (72%)
of medium-size programs, and 71% of its programs were not licensed.
Overall, a state license and smaller program size were related to the
higher quality program clusters pre-initiative, but lower staff/child
ratios, higher director education, higher director salary, and
non-profit program status were not associated with the higher quality
program clusters pre-initiative.
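To make the association measure concrete, the sketch below (assuming
SciPy) recovers Pearson's contingency coefficient from the
license-by-cluster counts reported in Table 4; the result is
approximately the .58 reported above.

    import numpy as np
    from scipy.stats import chi2_contingency

    # License status (rows) by program cluster (columns), counts from Table 4
    table = np.array([
        [5, 1, 0],    # not licensed
        [2, 4, 13],   # licensed
    ])

    chi2, _, _, _ = chi2_contingency(table)   # no Yates correction applied to a 2x3 table
    n = table.sum()
    pearson_p = np.sqrt(chi2 / (chi2 + n))    # Pearson's P = sqrt(chi2 / (chi2 + n))
    print(f"Pearson's P = {pearson_p:.2f}")   # approximately .58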
Profile Variables Related to Pre-Initiative to Post-Initiative Scores
The pre-initiative to post-initiative difference scores on all
three dependent measures were clustered using Ward's method of
hierarchical clustering, resulting in three clusters of programs. The
means and standard deviations of all three difference scores by cluster
are summarized in Table 5. Cluster One (n=9) had programs with negative
change scores, while Cluster Two (n=9) had the programs with small,
positive changes in scores. Cluster Three (n=8) had the programs with
large, positive change scores.
To examine the relationship between the profile variables (i.e.,
director education, director salary, staff/child ratios, program size,
auspice, and state license) and program clusters--all categorical
variables--a Pearson P measure of association for a contingency table
was calculated. The computed associations between the clusters of
programs and the profile variables are as follows: staff/child ratios P
= .26, director education P = .43, director salary P = .24, program
auspice P = .04, program size P = .47, and state license P = .39.
Associations that warranted closer examination were those greater than
.35, which included program size, state license, and director
education. The cross-tabs contingency table (see Table 6) summarizes the
distribution of program size, state license, and director education
variables by cluster.
Cluster Three, the programs that increased the most, had more
directors with a high school education (25%) and the lowest percentage
of directors with four years of education or more. Also, Cluster Three
had an equal number of non-state licensed programs (50%) as well as
licensed programs (50%). Cluster One, programs that declined in scores,
had a larger percentage (44%) of large-size centers (with 71+ children),
yet were predominantly licensed. Cluster Two, programs that improved
slightly, were all licensed centers except for one, and had a larger
percentage of medium-size programs (66%) and no large programs. Cluster
Two also had a higher percentage (89%) of directors with four years or
more of education. Programs that had directors with a higher level of
education, were larger, and were licensed by the state showed the least
improvement. The programs that showed the greatest improvement had
directors with a lower level of education and fewer licensed programs.
Large-size programs were more likely to show a decline in scores.
Overall, a state license, smaller program size, and director education
were related to program improvement from pre-initiative to post-initiative,
but lower staff/child ratios, higher director salary, and non-profit
program status were not associated with program quality improvement from
pre-initiative to post-initiative.
Discussion
Participation in the North Carolina Quality Enhancement Initiative
(NC QEI) improved the quality of the participating programs'
school-age child care environment and teacher/child interactions.
Overall, there was a positive and significant increase on each measure
from pre-initiative to post-initiative. Twenty-one of the 26 programs
improved the overall quality of their school-age child care
environments, as assessed by the SACERS. Likewise, 18 programs had
teachers who improved the quality of their interactions with children,
as assessed by the Caregiver Interaction Scale, and 17 programs improved
interactions, as assessed by the Human Relationship section of the
accreditation observation. Furthermore, six of the 10 programs that
applied to be accredited by the National School-Age Care Alliance
(NSACA) were awarded accreditation. Four were deferred, but only one
applied for a deferral accreditation visit and was subsequently awarded
NSACA accreditation in the fall of 1998. Since then, another NC QEI
program was accredited in January of 2001. In the absence of a
comparison group, it is impossible to conclude that participation in the
NC QEI was the sole cause of improvement in the quality of the programs.
Nonetheless, programs did make significant gains in overall program
quality and teacher/child interactions during the nine months they
participated in the NC QEI, suggesting that the involvement of
school-age child care programs in program improvement initiatives is a
viable means of improving the quality of school-age child care programs.
The NC QEI findings are consistent with the results of the
evaluation of the first pilot of the NSACA National Improvement and
Accreditation System for school-age child care programs, which reported
a positive mean score change on all but one of the 21 program
observation keys (Miller, 1997). The initial pilot of the NSACA
accreditation system reported summary information on 75 programs across
the United States. Pre- and post-data were collected on 26 of the 32 ABC
program sites in California, Colorado, Georgia, and New Jersey, using
the Pilot Standards as the observation measure. The evaluator of the
initial pilot acknowledges as confounds the small sample size, as well
as biases in the observation reliabilities (the ASQ advisers were the
pre- and post-test observers). In the NC QEI evaluation, observers had
high reliabilities and were blind to program groupings and accreditation
applicants. Furthermore, in the present study the program participants
were not aware of the evaluation measure being used to assess their
improvement. For these reasons, the more rigorous design of the NC QEI
evaluation provides stronger empirical evidence in support of the NSACA
accreditation self-study process than the first pilot did.
It is important to note that the programs awarded NSACA
accreditation were above minimal (3.0) in quality, but were not
considered of good (5.0) quality according to the SACERS instrument. The
overall mean SACERS score for accredited programs was 4.87 at
post-initiative, indicating that many of these programs did not quite
achieve the level of good quality. At pre-initiative, the overall mean
score for programs that later achieved accreditation was 3.81
(between minimal, 3.0, and good, 5.0). These findings are consistent with
NAEYC accreditation study findings (Howes & Galinsky, 1996;
Whitebook, Sakai, & Howes, 1997; Zellman & Johansen, 1996),
indicating that the accreditation process does improve the quality of
preschool child care programs. Specifically, accredited programs in the
Whitebook, Sakai, and Howes (1997) study had a mean score on the ECERS
(Harms & Clifford, 1980) of 4.58 at pre-test and 5.22 at post-test.
The programs in the NC QEI study showed greater improvement (+1.06) than
the programs in the NAEYC study (+.64) but were of lower quality (4.87
compared to 5.22) at post-test, as measured by the ECERS and SACERS. The
NAEYC accreditation study (Whitebook, Sakai, & Howes, 1997) also
found that 39% of NAEYC-accredited programs were rated as mediocre in
quality. In the NC QEI study, the scores of the six accredited programs
on the SACERS ranged from 4.16 to 5.61. Due to these mediocre scores,
concern has been raised about whether accredited centers truly reflect
high quality, and whether these centers can sustain high quality. The
level of quality and sustainability of quality are issues that need to
be addressed by NSACA as more programs around the country apply for
accreditation.
Although the two pilot NSACA accreditation initiatives have been
successful in improving quality, many school-age child care programs
still may not seek accreditation. Bredekamp (1999) estimates that
without sufficient incentives or mandates, only 5% of programs will
voluntarily seek accreditation, and those 5% are probably already
providing higher quality care without accreditation. Another issue
related to the accreditation process is that parents may not be able to
afford the higher cost of quality required by the NSACA standards;
therefore, the cost may need to be subsidized. Thus, the school-age
child care accreditation system is not a panacea for improving child
care quality.
Profile Variables Related to Pre-Initiative Scores
A state license and smaller program size were related to the higher
quality program clusters at pre-initiative. This finding is consistent
with the preschool child care research studies that have found licensure and group size to be related to program quality (Helburn, 1995; NICHD,
1996; Whitebook et al., 1989). In addition, school-age child care
studies have found the number of children enrolled (Rosenthal &
Vandell, 1996) and licensing to be associated with higher quality
programs (Miller, 1997). Intuitively, it is not surprising that a state
license would be related to higher quality. However, it is important to
consider the stringency of the licensing standards in a state as it
relates to the overall quality of school-age child care programs.
Certainly, states with more stringent standards would ensure higher
quality programs. Less than adequate standards, as in North Carolina,
would not ensure higher quality. It is once again important to note that
although a state license was related to higher quality mean scores on
the SACERS, the scores still reflect less than good quality.
A smaller program size was also found to be related to the cluster
of higher quality school-age child care programs. Rosenthal and Vandell
(1996) also found that larger program size in school-age programs is
related to poorer quality. Smaller group sizes have been shown to be
related to higher quality centers in the preschool child care research
(Helburn, 1995; NICHD, 1996; Whitebook et al., 1989). Preschool research
is based on group size, which is analogous to program size in school-age
programs, where children are encouraged to group themselves in activities
by individual choice and interest rather than by age group or classroom.
The relationship between program size and class size is
similar, in that if teachers have smaller numbers of children with whom
to work, they are more likely to know the children as individuals and
engage in more frequent staff/child interactions, which would result in
more positive staff/child relationships.
Profile Variables Related to Improvement Scores
A state license, smaller program size, and director education were
related to program improvement from pre-initiative to post-initiative.
Programs with a state license and smaller enrollments showed less
improvement than programs that were larger and had no state license.
One obvious reason for this finding may be that programs with a license
already are meeting a minimal level of quality and therefore had fewer
program changes to make than programs without a license. In addition,
based on the findings of this study, programs with smaller enrollments
were also higher quality to begin with and therefore had fewer changes
to make than larger-size programs. It also may be that changes are
easier to make in smaller programs because there are fewer environmental
changes needed and fewer staff with whom to work.
The programs that showed the least amount of improvement had
directors with higher levels of education, while programs with directors
with a minimal education showed the most improvement. This NC QEI
finding seems contradictory to the preschool child care research, but it
may be that directors with more education were already facilitating
higher quality programs and therefore had fewer changes to make.
Likewise, the programs with directors with less formal education may
have shown more improvement because they had more to learn and,
therefore, could gain more knowledge by participating in the training
and consultation offered by the NC QEI. This finding suggests that staff
with less formal education, as well as those with more, can make program
improvements if they are given tools and assisted with the process of
program improvement.
Barriers to Program Improvement
Of some concern is the fact that a cluster of nine programs in the
initiative actually declined in quality over the course of the project.
Hypothesizing about what may have accounted for the decline in program
quality, when the programs were actually focused on improving overall
quality, is important in understanding the process. According to program
directors in the first pilot, the greatest barriers to program
improvement were staff turnover, finding time to coordinate the
improvement effort, getting parents on the ASQ team, and getting
questionnaires returned. ASQ advisers in the first NSACA pilot also
mentioned having problems building a relationship with some programs,
which affected their ability to provide assistance. The quality of the
relationships between the ASQ adviser and the program staff was a key
factor in the effectiveness of the technical assistance provided and
utilized by a program. Staff turnover also was a major barrier to
program improvement, because programs that had turnover at the site
director or leadership level, or that had high levels of teacher
turnover during the course of the pilot, were less likely to improve in
quality (Miller, 1997).
Another barrier to programs achieving accreditation may have been
the tremendous variability in program quality at pre-initiative. For
example, SACERS scores at pre-initiative ranged from 2.21 to 4.53. Some
programs were prepared to attempt the full accreditation process, while
others may have been better off focusing on a few selected areas of
improvement. The findings from the first pilot (Miller, 1997) also
illustrate that not all programs were ready for the accreditation
process that is aimed at programs with a "measure of stability,
quality and internal leadership" (p. 10).
Overall, the positive results from the evaluation of the NC QEI
project indicate that working toward NSACA accreditation may be one
mechanism that can improve the quality of school-age child care
programs. However, it is not a panacea for improving school-age child
care quality. The NC QEI-accredited programs that achieved NSACA
accreditation status were still not of high quality, as measured by the
SACERS. Also, the accreditation process is an ambitious undertaking
requiring significant motivation and hard work on the part of the site
director as well as members of the ASQ team. Many school-age child care
programs will not voluntarily seek NSACA accreditation; therefore,
licensing standards may continue to be the only standards guiding
school-age child care programs. Clearly, the relationships identified in
this study need further examination with a larger and more nationally
representative sample. Such research will not only provide additional
information about high quality school-age child care programs, but also
aid in improving the NSACA accreditation system.
Table 1
Program and Director Demographic Information Summary
Characteristic n f % M SD Range
Total Enrollment 28 54.5 27.5 10-109
State License 25
Yes 19 76%
No 6 24%
Auspice 28
Community College 1 4%
YMCA 5 18%
YWCA 1 4%
Church 5 18%
Private/Profit 9 32%
Public School 7 25%
Teacher/Child Ratios 28
1:8 - 1:12 11 40%
1:14 - 1:15 12 42%
1:18 - 1:25 5 18%
Director Education 28
HS 4 14%
Two Years' Higher Ed. 4 14%
Three Years' Higher Ed. 1 4%
Bachelor's 15 54%
Master's 4 14%
Major 24
Education 9 38%
Child Development 4 17%
Psychology 4 17%
Other (unrelated) 7 28%
Yearly Salary (dollars) 23 22,473 7,856 5-39,000
Benefits 28
Health Insurance 22 79%
Retirement 17 61%
Annual Leave 23 82%
Sick Leave 24 86%
Table 2
Mean Scores on Observation Measures, Pre-Initiative to Post-Initiative

              Pre-Initiative                 Post-Initiative
Measure     M     SD    Range            M     SD    Range          t (25)
SACERS      3.41  .72   2.21-4.53        4.09  .90   2.09-5.61      3.73 ***
HR          6.58  3.3   0-12             8.24  1.82  4.67-10.68     2.64 **
CIS         2.92  .40   2.04-3.50        3.09  .48   2.12-3.81      1.70 *

Note: * p < .05. ** p < .01. *** p < .001.
Table 3
Pre-Initiative Cluster Relationships
Cluster One Cluster Two Cluster Three
(below average) (average) (above average)
n = 7 n = 5 n = 14
M SD M SD M SD
SACERS 2.87 .57 3.23 .32 4.01 .38
HR 3.57 2.07 5.20 2.59 8.57 2.65
CIS 2.85 .19 2.25 .16 3.18 .21
Table 4
License and Program Size by Cluster Contingency Tables

                   Cluster One        Cluster Two         Cluster Three
                   (Low Quality)      (Medium Quality)    (High Quality)
                   Count   Col%       Count   Col%        Count   Col%
Not Licensed       5       71%        1       20%         0       0%
Licensed           2       29%        4       80%         13      100%
Small Programs     1       14%        1       20%         5       38%
Medium Programs    5       72%        4       80%         5       36%
Large Programs     1       14%        0       0%          4       29%
Table 5
Pre-Initiative to Post-Initiative Cluster Relationships

Difference Scores     Cluster One          Cluster Two           Cluster Three
(Pre to Post)         (Negative Change)    (Small Positive)      (Large Positive)
                      n = 9                n = 9                 n = 8
                      M       SD           M       SD            M       SD
SACERS                -0.08   .53          .58     .48           1.18    .53
HR                    -1.14   1.73         1.8     3.07          4.64    1.44
CIS                   -.21    .43          .14     .41           .65     .38
Table 6
License, Program Size, and Director Education by Cluster Contingency Tables

                   Cluster One        Cluster Two          Cluster Three
                   (Least Change)     (Moderate Change)    (Greatest Change)
Variable           Count   Col%       Count   Col%         Count   Col%
Not Licensed       1       12%        1       11%          4       50%
Licensed           7       83%        8       89%          4       50%
Small Programs     1       11%        3       33%          3       38%
Medium Programs    4       44%        6       66%          4       55%
Large Programs     4       44%        0       0%           1       12%
HS Education       0       0%         0       0%           2       25%
Some Higher Ed     2       22%        1       11%          2       25%
4 Years or More    7       78%        8       89%          4       50%
References
Arnett, J. (1989). Caregivers in day-care centers: Does training
matter? Journal of Applied Developmental Psychology, 10, 541-552.
Bishop, Y. M. M., Feinberg, S. E., & Holland, P. W. (1975).
Discrete multivariate analysis: Theory and practice. Cambridge, MA: MIT Press.
Bredekamp, S. (1999). When new solutions create new problems:
Lessons learned from NAEYC accreditation. Young Children, 54(1), 58-63.
Bredekamp, S., & Glowacki, S. (1996). The first decade of NAEYC
Accreditation: Growth and impact on the field. In S. Bredekamp & B.
Willer (Eds.), NAEYC accreditation: A decade of learning and the years
ahead (pp. 110). Washington, DC: National Association for the Education
of Young Children.
Brimhall, D. W., & Reaney, L. M. (1999). Participation of
kindergartners through third-graders in before- and after-school care.
Washington, DC: U.S. Department of Education, Office of Educational
Research and Improvement, National Center for Education Statistics.
Hair, J. E., Anderson, R. E., Tatham, R. L., & Black, W. C.
(1998). Multivariate data analysis (5th ed.). Englewood Cliffs, NJ:
Prentice Hall.
Harms, T., & Clifford, R. M. (1980). Early childhood
environment rating scale. New York: Teachers College Press.
Harms, T., Jacobs, E. V., & White, D. R. (1996). School-age
care environment rating scale. New York: Teachers College Press.
Helburn, S. W. (Ed.). (1995). Cost, quality, and child outcomes in
child care centers, technical report. Denver, CO: University of
Colorado.
Howes, C., & Galinsky, E. (1996). Accreditation of Johnson and
Johnson's child development center. In S. Bredekamp & B. Willer
(Eds.), NAEYC accreditation: A decade of learning and the years ahead
(pp. 47-60). Washington, DC: National Association for the Education of
Young Children.
Miller, B. M. (1997). Final report and evaluation: Pilot of the
national improvement and accreditation system. Wellesley, MA: School-age
Child Care Project.
National Institute of Child Health and Human Development (NICHD)
Early Childhood Research Network. (1996). Characteristics of infant
child care: Factors contributing to positive caregiving. Early Childhood
Research Quarterly, 12, 281-303.
National Institute on Out-of-School Time. (1998). Fact sheet on
school-age children. Retrieved from
www.wellesley.edu/WCW/CRW/SAC/factsht/html
O'Connor, S. (1997). Readiness scale for program improvement
and accreditation in school-age child care programs. Wellesley, MA:
School-Age Child Care Project.
O'Connor, S., Gannett, E., Heenen, C., & Mattenson, P. T.
(1996). Assessing school-age child care quality. Wellesley, MA:
School-Age Child Care Project.
O'Connor, S., Wheeler, K., Harms, T., & Cryer, D. (1994).
The revision and redevelopment of the ASQ program observation.
Wellesley, MA: School-Age Child Care Project.
Rosenthal, R., & Vandell, D. L. (1996). Quality of care at
school-aged child-care programs: Regulatable features, observed
experiences, child perspectives, and parent perspectives. Child
Development, 67, 2434-2445.
Seppanen, P. S., Love, J. M., DeVries, D. K., Bernstein, L.,
Seligson, M., Marx, F., & Kisker, E. E. (1993). National study of
before and after school programs. Final Report. Washington, DC: U.S.
Department of Education Office of Policy and Planning.
Sisson, L. (1995). Pilot standards for quality school-age child
care. Wellesley, MA: National School-Age Care Alliance.
Whitebook, M. (1996). NAEYC accreditation as an indicator of
program quality: What research tells us. In S. Bredekamp & B. Willer
(Eds.), NAEYC accreditation: A decade of learning and the years ahead
(pp. 31-46). Washington, DC: National Association for the Education of
Young Children.
Whitebook, M., Howes, C., & Phillips, D. (1989). Who cares?
Child care teachers and the quality of care in America (final report of
the National Child Care Staffing Study). Oakland, CA: Child Care Employee
Project.
Whitebook, M., Sakai, L., & Howes, C. (1997). NAEYC
accreditation as a strategy for improving child care quality: An
assessment. Washington, DC: National Center for the Early Childhood Work
Force.
Zellman, G. L., & Johansen, A. S. (1996). The effects of
accreditation on care in military child development centers. In S.
Bredekamp & B. Willer (Eds.), NAEYC accreditation: A decade of
learning and the years ahead (pp. 25-30). Washington, DC: National
Association for the Education of Young Children.