Action Research: A Protocol to Improve Student Learning
Linda S. Adamson, Ed.D.
Assistant Professor and Program Coordinator, Department of Teacher Preparation
Johns Hopkins University School of Education, Baltimore, MD

When I tiptoed my tentative way into teaching in the mid-1980s as a career changer, I never encountered the term “action research.” After I had been teaching in public schools for a few years, I started reading about action research in school settings and realized that, without having a name for it or a framework to guide me, I had been doing a version of action research on my own, just because I wasn’t satisfied that “good enough” academic results for my students were all that good. I needed a problem-solving way of thinking about my teaching and its impact on my students’ learning. I lacked a particular model to guide me, so I muddled on as best I could. It would have been a real asset to me to have a protocol for action research that addressed my need for getting to “better than good enough” without winging it.

We all have access to a plethora of strategies, programs, materials, templates, etc., to improve some aspect of student learning. The best of these have been researched over time and have credible evidence of their effectiveness (see, for example, Marzano, Pickering, & Pollock, 2001, and Slavin, 2010, for collections of research-supported “best practices”). But every teacher knows that what works in one school doesn’t necessarily translate intact and with comparable results to a very different setting. If one particular way of implementing Readers’ Theater is effective for your suburban fourth graders, will it be just as helpful to my urban eighth graders? What about my full-inclusion classes or my English language learners? As we continue to focus on the persistent academic achievement gaps that bedevil schools and limit the success of too many children and young people, can we get a handle on who does and who doesn’t actually benefit in a particular setting from a supposed improvement before we scale it up? Teachers need to try out a new approach in ways that give them a chance to analyze results before integrating it into their standard repertoire. They are hungry for approaches that work better, but how can they be confident that putting something new in place will actually produce results? A way of carrying out action research that has a likelihood of producing useful insights and benefits to students’ learning could be of great value.

Action Research: The “What? Who? Why?”
Even though there is no agreed-upon definition of action research as used in education, there is consensus on the basics: “systematic and intentional inquiry carried out by teachers” (Cochran-Smith & Lytle, 1999, p. 3) as a means to build teachers’ reflective capabilities (Noffke & Zeichner, 1987; Zeichner & Klehr, 1999) in ways that can help improve some specific aspect of educational practice in their own school settings (Caro-Bruce, 2000; Sagor, 2005). It is often collaborative (Armstrong & Moore, 2004; James, Milenkiewicz, & Bucknam, 2008), especially when the goal is to improve some aspect of practice at the school level rather than in a single teacher’s classroom. In Maryland, action research is virtually hard-wired into teacher preparation programs as part of the national and state accreditation process (see Maryland State Department of Education, MSDE, 2003) and holds a special place in the state’s standards for school-university partnerships (Professional Development Schools; see MSDE, 2002, October 23). These days, action research is becoming one of the “givens” in the professional literature about improving teaching and learning, even when that means different things to different educators. Are there some components that appear to be especially important to include?

Action Research: The “How?”
There are various ways of describing the process of action research. They all have a few phases/components in common, most often expressed as some version of “diagnosing, action planning, taking action, evaluating, and specifying learning” (see Susman, 1983, cited in O’Brien, 2001). There are considerable variations in the nitty-gritty of how each of these phases is operationalized. But there is an even more important aspect missing.
One of the biggest gaps in the professional literature on action research to date is that the question of target levels of quality remains unresolved. It is an important gap to address, since the “subjects” in view are real live children. We need to pay attention to two questions: What is “high quality” when engaging in action research, and (the brass ring) does higher quality action research predict more positive results for the learners?

There is currently no consensus around what “quality” looks like operationally in action research or how to measure it. A proposal from the United Kingdom (Furlong & Oancea, 2005) recommended that “quality” action research be defined by: (a) the level of rigor or “robustness” of methodology and theoretical underpinnings, including level of trustworthiness; (b) the potential value for having an impact on practice; (c) the potential for building capacity and developing “practical wisdom” grounded in an ethical framework; and (d) the economic dimension of providing cost-effectiveness. Excellent ideas, but there hasn’t been a test yet to clarify what it means to address all four priorities at a “high quality” level, nor do we yet know if doing so is more likely to yield improved learning than doing so at a lower level of quality.

Douglas Reeves has made an important contribution to this professional discussion (Reeves, 2008). In a study he led, he developed and used a rubric to evaluate the quality of three of the four key components of action research proposals; those proposals guided the implementation of changes in professional practice and, as a by-product, enhanced the development of teacher leaders. Reeves’ model included: (a) the statement of a research question addressing a topic of “vital importance and clear relevance to district needs;” (b) a description of the target student population to identify “demographic and educational factors that might influence the research findings;” (c) student achievement data from classroom-level assessments and observations, collected over time; and (d) observations, made over time, of the changed professional practice (Reeves, 2010, pp. 80-81). Reeves identifies the observations as “the missing link in most action research projects” (Reeves, 2010, p. 81). This is the component of actually focusing on how the particular intervention or change in practice is delivered during the course of the action research project. Reeves notes that, of action research projects carried out using his protocol in 81 schools in one of the nation’s largest school districts, over two-thirds showed positive impact on student achievement (Reeves, 2010, p. 80).

My own study of a protocol for action research (Adamson, 2008) has resulted in the initial validation of a rubric to score action research projects. The study found a positive correlation between higher overall rubric scores and improved student achievement. The rubric, which drew both from a variety of action research practices and from the knowledge base on effective teaching, includes the following components (note: all examples here are based on an unpublished action research project report submitted by a teaching intern in a Master of Arts in Teaching program; see Alexander, 2009; see also Figure 1, Action Research Model):

Figure 1. Action Research Model.

(a) The context, needs assessment, and alignment with school goals describes the individual school setting and the target students, key areas for improvement in achievement, and how the student needs align with a larger school goal. (A below-level fifth grade class in a moderately large suburban school of mixed ethnicities shows both low levels of reading proficiency on state tests and low reading comprehension grades on school district assessments.)

(b) The question or topic and a testable hypothesis based on the action researcher’s “theory of causation” focuses the project. (Can improving students’ reading motivation lead to improved comprehension? Participating in a reading “lunch bunch” for students to talk about self-selected books, coupled with targeted instruction in reading strategies, will correlate positively with improved reading comprehension.)

(c) The collection and analysis of baseline data may provide insights into the actual need. (Reading homework is not being completed by the majority of the class, and most comprehension grades are C or lower. Males and females perform comparably; European Americans scored somewhat higher than minority students in both homework completion and reading comprehension.)

(d) Professional sources for an intervention or change in practice are drawn from a sampling of the relatively recent professional knowledge base in the area of interest, with attention to research that showed positive effects with comparable student populations as well as drawing from expert practitioners such as, for example, a school system reading resource teacher. (Research studies and acknowledged experts showed a relationship between motivation, strategy instruction, and reading achievement in intermediate grades and documented the effectiveness of book clubs in increasing motivation and engagement.)

(e) Implementation of an intervention or change in practice, based on integrating recommendations drawn from the professional sources, describes how the intervention/change is integrated into the ongoing teaching activities. While not as complete a depiction as an observation or review of a video of instruction, the documentation gives a clear picture of what the intervention consisted of and how it was carried out. (A lunch book group, based on voluntary participation and use of student-selected books, began along with integration of reading strategy instruction into regular whole class and reading group lessons.)

(f) The collection and examination of formative data permits monitoring the evolving progress of students receiving the intervention and informs dynamic changes in aspects of the intervention that may appear to be unproductive. (Both qualitative notes documenting the students’ enthusiasm to get to the Lunch Bunch every week and their self-ratings of levels of enjoyment enabled the intern to keep her finger on the pulse of how students were responding.)

(g) Summative data collection and analysis provides a point of comparison with the baseline data. In addition, it often adds in results of class observations to note changes in student behavior. (Comprehension grades pre- and post-intervention showed a dramatic reversal, from majority C and below to majority A and B grades; reading homework completion rates soared; participation in reading instruction became enthusiastic.)

(h) Interpretation of results and next steps represent the meaning-making stage, the reflective point when the action researcher answers the big question: So what? So I made this change in my professional practice, and I documented these particular changes. What does it mean for my students, for me in the future, and for my colleagues? (Students in the lunch bunch group transferred increased engagement and self-confidence to class participation and reading comprehension of expository texts. The intern is going to continue to expand her repertoire of motivational strategies and to encourage as much student choice in learning activities as possible.)

Action Research: The “How Well?”
The protocol for carrying out action research that I just described does what other models do: it lists a set of steps or components based on having examined the professional knowledge base. The analytical rubric also evaluates different levels of quality for each of the eight components on a four-point scale, the two top levels being associated with higher quality. The rubric itself was studied to determine if there was a correlation between a sub-set of the eight components and student achievement changes over the course of action research projects. A set of three of the eight components predicted positive changes in student achievement: (a) the context, needs assessment, and goal alignment; (e) the implementation of the change in practice; and (f) the examination of formative data during the implementation. These results indicate that there is great value in investing time articulating what we know about the setting, the learners and their needs, and how to relate the learners’ needs to the priorities in the larger setting. It matters to make changes in professional practice with attention to doing a high-quality job of the particular change. (Every teacher knows that cooperative learning, despite decades of research supporting its value, is much more easily done badly than done well.) Gathering and using evidence of progress to shape further progress has real value.

Action Research: What Next?
There is much more to study in order to help educators make good decisions about how they invest their time in action research. Two areas in particular emerged as unexpected insights from the results of my study: the importance of the duration of the change in practice, and the difference in the incidence of positive impact on student achievement between elementary and secondary settings. Both of these points are worth exploring separately, because they could be of real importance in how teachers and schools make use of action research. If there are statistically significant differences in student outcomes based on the time invested in a well-designed, research-supported change in practice, educators need to be on the lookout for areas to be improved early in the year and get moving. (I, for one, have moved the action research requirement for my own teaching intern students from spring to the fall semester for this reason.) The difference in evidence of impact between elementary and secondary levels, if it holds up with more extensive study, could help explain why it has generally been more problematic to make major improvements in student achievement in secondary settings than in elementary schools.

Apart from studying particular aspects of how educators carry out action research, it is also intriguing to think of how this validated action research protocol could be tested in other settings and possibly shown to be useful in other contexts. Action research, after all, is not limited to educational settings. There are many directions to explore.

New learning, after all, begets new questions – and the cycle begins again.

Adamson, L. S. (2008). Development and evaluation of an instrument to assess data-informed instructional practice. (Unpublished doctoral dissertation). Johns Hopkins University School of Education, Baltimore, MD.

Alexander, J. (2009). Action research: Motivational book club. Unpublished presentation, Department of Teacher Preparation, Johns Hopkins University School of Education, Baltimore, MD.

Armstrong, F., & Moore, M. (2004). Action research: Developing inclusive practice and transforming cultures. In F. Armstrong & M. Moore (Eds.), Action research for inclusive education: Changing places, changing practices, changing minds (pp. 1-16). London: RoutledgeFalmer.

Caro-Bruce, C. (2000). Action research facilitator’s handbook. Wichita Falls, TX: National Staff Development Council.

Ferrance, E. (2000). Themes in action research. Providence, RI: Brown University. Retrieved April 27, 2010, from

Furlong, J., & Oancea, A. (2005). Assessing quality in applied and practice-based educational research: A framework for discussion. Oxford, UK: Oxford University Department of Educational Studies. Retrieved February 2, 2008, from

James, E. A., Milenkiewicz, M. T., & Bucknam, A. (2008). Participatory action research for educational leadership: Using data-driven decision making to improve schools. Los Angeles: Sage.

Maryland State Department of Education. (2002, October 23). Standards for Maryland Professional Development Schools. (document). Baltimore: Author. Available at

Maryland State Department of Education. (2003). Protocol for Continuing Accreditation. Baltimore: Author. Retrieved March 17, 2006, from

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

Noffke, S. E., & Zeichner, K. M. (1987, April). Action research and teacher thinking: The first phase of the Action Research on Action Research Project at the University of Wisconsin-Madison. Paper presented at the annual meeting of the American Educational Research Association (AERA). Washington, DC: AERA. ERIC Document ED295939.

O’Brien, R. (2001). Um exame da abordagem metodológica da pesquisa ação [An overview of the methodological approach of action research]. In R. Richardson (Ed.), Teoria e prática da pesquisa ação [Theory and practice of action research]. João Pessoa, Brazil: Universidade Federal da Paraíba. (English version available; accessed January 20, 2002)

Reeves, D. B. (2008). Reframing teacher leadership to improve your school. Alexandria, VA: Association for Supervision and Curriculum Development.

Reeves, D. B. (2010). Transforming professional development into student results. Alexandria, VA: Association for Supervision and Curriculum Development.

Sagor, R. (2005). The action research guidebook: A four-step process for educators and school teams. Thousand Oaks, CA: Corwin Press.

Slavin, R. (Ed.). (2010). Better: Evidence-based education, 2(2). Retrieved from

Susman, G. I. (1983). Action research: A sociotechnical systems perspective. In G. Morgan (Ed.), Beyond method: Strategies for social science research (pp. 95-113). London: Sage Publications.

Zeichner, K., & Klehr, M. (1999, November). Teacher research as professional development for p-12 educators. College Park, MD: National Partnership for Excellence and Accountability in Teaching. ERIC Document ED448156.

Additional Resources
A sampling of websites:

Action Research Journal, Sage Publications:

Action Research: What is a Living Educational Theory Approach to Action Research and a Human Existence?

Center for Collaborative Action Research at Pepperdine University:

Dick, B. (2000) of Southern Cross University, Australia:

International Journal of Action Research:

Madison (WI) Metropolitan School District:

Northeast Florida Science, Technology, and Mathematics Center for Education:

University of Toronto (Canada):


Glanz, J. (1998). Action research: An educational leader’s guide to school improvement. Norwood, MA: Christopher-Gordon.

Hubbard, R. S. & Power, B. M. (1999). Living the questions: A guide for teacher-researchers. York, ME: Stenhouse.

Meyers, E. & Rust, F. (2003). Taking action with teacher research. Portsmouth, NH: Heinemann.

Phillips, D. K. & Carr, K. (2006). Becoming a teacher through action research: Process, context, and self-study. New York: Routledge.

Reason, P. & Bradbury, H. (Eds.) (2001). Handbook of action research. London: Sage.

Stringer, E. (2008). Action research in education (2nd ed.). Upper Saddle River, NJ: Pearson.

Stringer, E. T., Christensen, L. M., & Baldwin, S. C. (2010). Integrating teacher, learning, and action research: Enhancing instruction in the k-12 classroom. Los Angeles: Sage.

Whitehead, J. & McNiff, J. (2006). Action research: Living theory. London: Sage.

©May 2010 The Johns Hopkins University New Horizons for Learning
