
PEER releases study examining the association of kindergarten performance with student, teacher, and classroom factors

December 18, 2018
by Joanna Meyer

The Partnership for Early Education Research (PEER) has released a new brief entitled Kindergarten performance in early literacy: student, teacher, and classroom factors. This publication is the last in a series of four briefs based on research studies funded by the U.S. Department of Education’s Institute of Education Sciences through the Researcher-Practitioner Partnerships in Education Research grant program.

When PEER was founded in 2014, stakeholders wanted to know more about factors that might affect the academic performance of young children, such as class size, teacher experience, and teacher level of education. While no common assessments were used across PEER communities, Norwalk Public Schools (NPS) has administered the Dynamic Indicators of Basic Early Literacy Skills Next (DIBELS Next)[i] to all students in grades K-3 since the 2014-2015 school year. PEER partnered with NPS in 2016 to explore three research questions:

  1. How do student factors such as English learner status, special education status, free or reduced-price lunch status, gender, and race/ethnicity predict kindergarten performance in literacy?
  2. How do teacher factors such as level of education, teacher certification endorsement area, and years of teaching experience predict kindergarten performance in literacy?
  3. How do classroom factors such as class size, percentage of students eligible for free or reduced-price lunch, and percentage of English learner students predict kindergarten performance in literacy?

In Norwalk, K-3 students complete the DIBELS Next assessment online at the beginning, middle, and end of the year. At each timepoint, every student receives a composite score that serves as an overall estimate of the student’s early literacy and reading skills. DIBELS composite scores can be compared to established benchmark goals and cut-points for risk, allowing teachers to identify which students are at or above benchmark and which are below it, and to individualize instruction accordingly.[ii] This assessment approach allowed PEER to examine how student-, teacher-, and classroom-level factors were associated with the likelihood of students reaching DIBELS benchmark at the end of the 2014-2015 and 2015-2016 school years.
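To make the benchmark comparison concrete, here is a minimal sketch in Python of how a composite score might be placed into a benchmark category. The cut-points shown are hypothetical placeholders: actual DIBELS Next benchmark goals and cut-points for risk vary by grade and timepoint and are published by Dynamic Measurement Group.

```python
# Minimal sketch of benchmark classification from a composite score.
# The thresholds are hypothetical placeholders; actual DIBELS Next
# benchmark goals and cut-points for risk vary by grade and timepoint.

BENCHMARK_GOAL = 119     # hypothetical "at or above benchmark" threshold
CUT_POINT_FOR_RISK = 89  # hypothetical "well below benchmark" threshold

def classify(composite_score: float) -> str:
    """Map a composite score to a benchmark category."""
    if composite_score >= BENCHMARK_GOAL:
        return "at or above benchmark"
    if composite_score >= CUT_POINT_FOR_RISK:
        return "below benchmark"
    return "well below benchmark"

for score in (130, 100, 75):
    print(score, "->", classify(score))
```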

After signing a data sharing agreement with NPS and receiving the requested data, PEER matched de-identified individual-level DIBELS data with student demographic data and teacher data. To account for the fact that students’ scores on a given measure may be partially dependent upon their teacher, classroom grouping, or school, PEER used a multi-level modeling approach to analyze the data. For each of the two study years, PEER fit several statistical models to examine the relationships among student factors, teacher factors, classroom factors, and student-level DIBELS scores. A detailed description of the analytical approach and the findings can be found in the brief and its appendices.
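To illustrate what such a model can look like, the sketch below specifies a random-intercept logistic model of end-of-year benchmark status in Python using statsmodels. All variable names (at_benchmark, ell, sped, frl, classroom) and the input file are hypothetical; the models PEER actually fit are documented in the brief and its appendices.

```python
# Illustrative multilevel (random-intercept) logistic model of benchmark
# status. Variable names and the input file are hypothetical; they do not
# reflect PEER's actual coding.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("deidentified_kindergarten.csv")  # hypothetical file

# Fixed effects: student-level predictors. Random intercept: classroom,
# which accounts for the dependence of students' outcomes on their
# classroom grouping.
model = BinomialBayesMixedGLM.from_formula(
    "at_benchmark ~ ell + sped + frl + female + C(race_ethnicity)",
    {"classroom": "0 + C(classroom)"},
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```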

Three consistent findings emerged from the study. First, DIBELS data for this sample showed unusually low variation among classrooms in both 2014-2015 and 2015-2016, raising questions about the quality of the data during the study years. Teachers may have produced such similar scores across classrooms because they were very well trained in administering the DIBELS measures. Alternatively, it is possible that teachers did not accurately record variation among students when administering the DIBELS, but instead scored them according to some other common expectation, such as how they “should” score students at any given timepoint. Second, special education status was negatively associated with DIBELS benchmark status in both years: students in special education were substantially less likely to reach benchmark than their general education counterparts. Third, gender, race, and ethnicity were not associated with reaching DIBELS benchmark status in either year.
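A standard way to quantify how much of the score variation lies between classrooms is the intraclass correlation (ICC) from an unconditional (intercept-only) multilevel model; a near-zero ICC would be consistent with the unusually low between-classroom variation described above. A minimal sketch, again with hypothetical variable names:

```python
# Sketch: intraclass correlation (ICC) from an intercept-only multilevel
# model of composite scores. Variable names and the file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("deidentified_kindergarten.csv")  # hypothetical file

null_model = smf.mixedlm("composite ~ 1", df, groups=df["classroom"]).fit()

between = float(null_model.cov_re.iloc[0, 0])  # between-classroom variance
within = null_model.scale                      # residual (within) variance
icc = between / (between + within)
print(f"Share of variance between classrooms: {icc:.3f}")
```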

When PEER consulted with NPS about the study findings, district personnel expressed some previously held concerns about the quality of the DIBELS data from 2014-2015 and 2015-2016, which might explain the unusually low variation in DIBELS scores. Since the second study year, the district has implemented a rigorous DIBELS training program for all teachers who administer the assessment. Given the new training approach, it might be informative to repeat these analyses for the 2016-2017 and 2017-2018 school years.

The study found that no teacher- or classroom-level variables consistently predicted DIBELS benchmark status at the end of the year. However, the relatively small number of NPS kindergarten teachers limited the statistical power of the study’s multi-level models, so small effects may have gone undetected. In addition, because most NPS teachers hold similar degrees and certifications, there was little variability in certification status and education level, making it difficult to assess the association of these factors with DIBELS benchmark status. A larger pool of districts or schools that use the DIBELS would make it possible to conduct the study at a larger scale, increasing both statistical power and variation among teachers.

PEER’s initial research studies have been instrumental in the development of PEER from an idea to an active partnership. Each of the studies proposed in 2014 has evolved as PEER’s knowledge about available data has grown and the interests of partner organizations have shifted. PEER’s current work demonstrates increasing rigor and relevance as PEER deepens its collaborative research skills and its relationships with partner organizations. Stay tuned for the results of PEER’s dual language learner project in 2019!

[i] Dynamic Measurement Group, the developer of the DIBELS Next® assessment, announced in October 2018 that DIBELS Next has been renamed Acadience™ Reading. For more information, see https://acadiencelearning.org/ann_acadience.html.

[ii] Dynamic Measurement Group. (2018). DIBELS Next Information Sheet. https://acadiencelearning.org/DIBELS_Next_Info.pdf.
