As I write this blog during a time of changes and uncertainties in national educational policies, my goal is to advocate for practices that we as a community of researchers and practitioners (regardless of political influences) can directly control to help schools select proven programs that best meet their needs. Congruent with the title of the blog, I’ll start with a vacuum metaphor, although one dealing with the type most commonly found in our home utility closets. A few years ago, my family’s elderly vacuum cleaner bit the dust (i.e., ceased removing it) and we went shopping for a new one. The salesman showed us a model that he swore was absolutely the best on the market—the strongest suction and greatest durability of any available. And, after we tried it out, we never doubted that he was right. The problem was that it figuratively weighed a ton and literally required extensive mechanical skill and time for setting up attachments and replacing the bags. We later discovered the following review of that model in a popular consumer magazine—“…highly sturdy and strong but very cumbersome to use and maintain.”
In education practice, unfortunately, the prime evidence consumers (district leaders, principals, and teachers) encounter a different type of vacuum—nowhere to go to find out what programs are sturdy, strong, AND easy to use. Hopes may be buoyed initially by the discovery of a plethora of existing educational research clearinghouses and published literature reviews. Clearinghouses typically try to collect all studies on a given program and synthesize the findings into ratings of effectiveness. A notable example is the What Works Clearinghouse (WWC), created in 2002 by the federal Institute of Education Sciences to review completed research studies and disseminate results to practitioners, researchers, and policy makers[1]. In a recent study, Wadhwa et al. (2023)[2] examined the WWC and 11 other clearinghouses and found only a moderate rate of consistency in their ratings of individual programs’ effectiveness. This result led them to question the construct validity of the label “evidence-based” as we use it today. Clearinghouses can point consumers in helpful directions, but the determination of which programs have the best potential and fit for them must be made locally. How about using scholarly journals for making those decisions? Silver et al. (2024)[3] identified 20,245 systematic reviews of educational research published between 2001 and 2023. The vast majority dealt with methodological issues and technical research findings that primarily served the interests and vocabulary of academic researchers. They found little in language or content that addressed the practical needs of educators.
Even if we accept the consistency limitations of program reviews (after all, different consumer reviews of vacuum cleaners may not select the same favorites), and even if systematic research reviews did a better job of reporting their findings in a consumer-friendly style, a critical void would still remain. In addition to knowing about a program’s impacts on achievement, teachers and principals are fundamentally interested in its implementation properties. What activities and resources are needed to use it effectively? How much does it cost per student? Is it reasonably easy to use? And perhaps most vitally for classroom teachers, do fellow educators who participated in the reported studies feel that it’s beneficial for students, not only in raising test scores in a given year, but on other educational outcomes (e.g., motivation, self-efficacy, engagement) and for the long run?
Encouragingly, most of the rigorous research studies that we see in journals and in submissions to Evidence for ESSA do report findings about the fidelity of program implementation at the participating schools. Many, in addition, conduct usage analyses, indicating how instructional time and lessons completed correlate with achievement. Such information can reinforce the importance for schools of ensuring that programs are implemented with high fidelity, as well as suggest optimal levels of usage. Finally, some studies, though far fewer, collect data on users’ (e.g., teachers’ and students’) perceptions of their experiences with a program and its impacts on them and their students. My strong suggestion is that our community of program providers, consumers, and evaluators avidly encourage future efficacy studies to include rigorous evidence pertaining to these three domains: program impacts, implementation needs and processes, and user experiences and satisfaction. We are already seeing progress in this area, for example, in Accelerate’s recent inclusion of a standard teacher reaction survey in the tutoring studies it sponsors and E4E’s addition of a qualitative summary for accepted studies that conducted participant surveys or interviews.
Knowing only the effect size or what ESSA evidence tier a program has attained still leaves most teachers and principals uncertain about many critical implementation factors. Consequently, as we found in a prior study on school districts’ procurement of educational programs, their reliance on less trustworthy information obtained through word of mouth or marketing promotions increases[4]. For educational program consumers and product consumers in general, to the extent that choices are made in a vacuum, there is increased danger of being sucked in.
[1] As I write this blog, the future of the WWC, at least for continuing to review and post studies, is in serious doubt under new federal initiatives regarding the U.S. Department of Education.
[2] Wadhwa, M., Zheng, J., & Cook, T. D. (2023). How consistent are meanings of “evidence-based”? A comparative review of 12 clearinghouses that rate the effectiveness of educational programs. Review of Educational Research, 94(1), 3–32. https://doi.org/10.3102/00346543231152262
[3] Silver, R. E., Kumar, V., Fengyi, D. C., Thye, M. T., & Aziz, J. A. (2024). For what and for whom? Expanding the role of research syntheses for diverse stakeholders. Educational Researcher, 53(8), 464–471. https://doi.org/10.3102/0013189X241285414
[4] Morrison, J. R., Ross, S. M., & Cheung, A. (2019). From the market to the classroom: How school districts acquire ed-tech products. Educational Technology Research and Development, 67, 389–421.
