Category Voices
Author Nathan Storey, Center for Research and Reform in Education

Maybe you can relate to this. You’ve got receipts from two different grocery stores, one for $50 and one for $150. At first glance, you may tell yourself that you want to go back to the first grocery store because it was so much cheaper. But then you look at the items on each receipt. On the $50 receipt, there are three high-quality luxury items that you really wanted, plus eggs. On the $150 receipt, there are 45 items, all staples of your diet and household, important items that you can use for a variety of recipes and purposes. So now you have to ask which was really the best and most effective use of money.

The Cost of Education

In education practice and policy, it is becoming increasingly important to ask these questions in relation to the many different education programs used to address student literacy or math skills or attendance. There are a lot of choices out there that, with varying degrees of evidence of impact, seek to improve student outcomes in different ways and through different models. In some cases, two literacy tutoring programs may show similar effects on achievement but have wildly different costs. Perhaps one entails small-group tutoring by paraprofessionals, while another offers one-on-one tutoring by certified teachers and uses proprietary software programs. While both may produce student learning gains, one may be a better fit for a school based on its existing budget, the number of students it wants to support, and other programmatic and contextual factors (how many students can be served at a time vs. how many need support in the district, what type of tutor or trainer is required and whether there is a sufficient market for those individuals in the setting, etc.).

And so, just as in our grocery example, it is not enough for districts or school leaders to just look at the average effect size of a program (or some other measure of program impact). They need to know more about the costs associated with the programs they are considering implementing and which program or programs will provide the biggest bang for their buck. Researchers and implementers are increasingly considering how to provide this information and analysis in a clear and understandable format.

Measuring Ingredients

At the Center for Research and Reform in Education, we have recently been working with several tutoring organizations to conduct cost analyses of tutoring interventions using the “ingredients method” and a web-based tool created by Accelerate. While the Accelerate tool is designed specifically for evaluating or planning tutoring programs, cost analysis can be and is conducted for all sorts of education interventions (graduation rate interventions, school-wide reform, socio-emotional learning interventions, or literacy tutoring, to name just a few).

By quantifying implementation details (number of students receiving services, duration and dosage of tutoring, setting) as well as financial information related to implementation (the ingredients such as personnel, training and support, equipment and materials, facilities expenses, HR or travel costs, incentives), researchers and implementers can examine programmatic cost-effectiveness and efficiency.

Calculating efficiency allows us to estimate the number of hours needed to improve learning by a predetermined amount (say one month or one year of learning) or the number of hours of intervention students receive for a certain amount of money per pupil (e.g., it costs $1,000 per student in a particular intervention where students receive 25 hours of tutoring over a semester). Efficiency estimates can also form the basis for an analysis of cost-effectiveness by determining the ratio between an intervention’s total costs and the effects of an intervention. Cost-effectiveness analysis is particularly useful in allowing the comparison of programs against one another and consideration of the return on investment for a school or district.
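The two ratios described above can be sketched in a few lines of Python. This is an illustrative calculation only: the formulas are simplified, the two-month learning gain is a placeholder assumption, and none of it reflects the Accelerate tool's actual methodology.

```python
# Illustrative cost-analysis ratios. The formulas and the assumed
# learning gain are simplifying assumptions for demonstration, not
# the Accelerate tool's actual calculations.

def efficiency_hours_per_month(tutoring_hours, months_of_learning_gained):
    """Hours of tutoring needed to produce one month of learning."""
    return tutoring_hours / months_of_learning_gained

def months_per_thousand(months_of_learning_gained, cost_per_pupil):
    """Additional months of learning per $1,000 invested per pupil."""
    return months_of_learning_gained / (cost_per_pupil / 1000)

# Hypothetical program from the text: 25 hours of tutoring over a
# semester at $1,000 per pupil, assumed to yield 2 months of learning.
print(efficiency_hours_per_month(25, 2))   # 12.5 hours per month gained
print(months_per_thousand(2, 1000))        # 2.0 months per $1,000
```

Framing both quantities as ratios is what makes programs with very different prices and dosages comparable on a single scale.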

A program’s overall cost and its impact on achievement as shown in high-quality research are popular considerations when comparing programs, particularly when it comes to early literacy tutoring models, where there are a multitude of proven options. As stewards of their finite dollars, education decision-makers look to choose the program that is both affordable and effective. Comparing key elements of each model, like the implementation details discussed above, beyond the total cost (i.e., sticker price) and outcomes provides a truer picture of which program offers the greatest value.

Consider two recently studied virtual literacy tutoring models that serve first graders. Both are one-to-one tutoring models and cost $2,500 per seat. In recent studies, they produced relatively similar effect sizes of +0.21 and +0.18. The dosage and duration of tutoring are also similar: Model A is delivered 5 times per week for 15 minutes per session (75 minutes per week), and Model B is delivered 4 times a week for 20 minutes (80 minutes per week). Both models recruit and train tutors who may not be certified teachers and who work with the same students for the duration of the school year. However, a deeper look into the study of each model provides valuable information regarding additional costs that went into program implementation and contributed to the overall impact on achievement. Beyond the $2,500 per seat, the funder of the Model A study provided a program manager to oversee implementation; required districts to designate a champion who would dedicate time to the tutoring implementation; paid a stipend to a school champion for providing several hours a week of on-the-ground support; and planned and hosted convenings for school participants to share best practices. Model B did not have these additional supports. Factoring these costly implementation supports into the cost of Model A is crucial for understanding the program’s cost-effectiveness.

Based on these ingredients or inputs, we may determine (hypothetically) that Model A has a tutoring efficiency coefficient of 24.8 and Model B 15.3, suggesting that it takes Model B fewer hours of tutoring to improve student learning by one month, making it the more efficient model. Model B may also be more cost-efficient, as we saw that there were fewer expensive added inputs, so that students receive more hours of tutoring per $1,000 per pupil. Finally, a cost-effectiveness analysis would show that the model with the higher value coefficient would be the most cost-effective, providing more additional months of learning per $1,000 invested per pupil.
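The comparison above can be made concrete in a short sketch. The per-seat price, weekly minutes, and efficiency coefficients come from the hypothetical example in the text; the 36-week program length and the $500 figure for Model A's added supports are placeholder assumptions introduced purely for illustration.

```python
# Side-by-side sketch of the two hypothetical tutoring models.
# "efficiency" = hours of tutoring per additional month of learning
# (lower is better). The 36-week year and Model A's $500 of added
# supports are assumptions for illustration only.

WEEKS = 36

models = {
    "A": {"minutes_per_week": 75, "efficiency": 24.8, "cost": 2500 + 500},
    "B": {"minutes_per_week": 80, "efficiency": 15.3, "cost": 2500},
}

def compare(models, weeks):
    results = {}
    for name, m in models.items():
        hours = m["minutes_per_week"] * weeks / 60
        months = hours / m["efficiency"]
        results[name] = {
            "hours": hours,
            "months_gained": months,
            "months_per_1000": months / (m["cost"] / 1000),
        }
    return results

results = compare(models, WEEKS)
for name, r in results.items():
    print(f"Model {name}: {r['hours']:.0f} tutoring hours -> "
          f"{r['months_gained']:.2f} months gained; "
          f"{r['months_per_1000']:.2f} months per $1,000 per pupil")
```

Under these placeholder assumptions, Model B comes out ahead on both ratios, which is the same conclusion the hypothetical coefficients point to: a cheaper total ingredient list and fewer hours per month of learning compound into a better value per $1,000.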

Meeting the Moment with Cost Evidence

These types of analyses have generally been the realm and responsibility of trained economists and have rarely informed educational decision-making, whether due to lack of training or because it is more common to fall back on marketing, bottom-line cost totals, peer recommendations, or educational theory to inform program selection. They also come with their share of limitations. For one, education interventions are not as predictable as some economists might imagine, and costs or processes can differ greatly from the study implementation to scale-up, or from one district to another, making cost-effectiveness estimates subject to change. Given the variability in study conditions, measurement error in program effects, and differences in program costs across contexts, the findings from cost analyses may best be considered suggestive rather than absolute.

They also should be considered alongside local constraints. A cost analysis may determine that a tutoring intervention using certified teachers is the best option. However, if a district is facing a teacher shortage, that may not be a realistic option. As with the weekly grocery store trip, sometimes the more expensive Instacart option is necessary when there is a shortage of time or the household’s designated shopper is sick.

Given the recent termination of Elementary and Secondary School Emergency Relief (ESSER) funds and decreased Department of Education funding for school districts, which will likely affect student access to learning, school meals, and healthcare, education leaders must be especially thoughtful and strategic with available funds to support students’ needs. Education researchers can help equip them to make informed decisions by providing evidence on cost-effectiveness. In addition, researchers can work to ensure that this research is not only available for initial programmatic decisions but also part of ongoing evaluations that observe how programs are operating in the district and support justification of continued expenditures (or highlight areas where approaches should be changed).
