Assessment as Feedback

by Grant Wiggins

Years ago, Thomas Gilbert summed up the principles of good feedback in his delightful and informative book Human Competence. In it, he catalogued the requirements of any information system "designed to give maximum support to performance." The requirements involved eight steps:

 

1. Identify the expected accomplishments.

2. State the requirements of each accomplishment. If there is any doubt that people understand the reason why an accomplishment and its requirements are important, explain this.

3. Describe how performance will be measured and why.

4. Set exemplary standards, preferably in measurement terms.

5. Identify exemplary performers and any available resources that people can use to become exemplary performers.

6. Provide frequent and unequivocal feedback about how well each person is performing. This confirmation should be expressed as a comparison with an exemplary standard. Consequences of good and poor performance should also be made clear.

7. Supply as much backup information as needed to help people troubleshoot their own performance.

8. Relate various aspects of poor performance to specific remedial actions.

 

Gilbert sardonically adds that "these steps are far too simple to be called a 'technology,' but it may be that their simplicity helps explain why they are so rarely followed." He elaborates, "In years of looking at schools and jobs, I have almost never seen an ideal [feedback] system. Managers, teachers, employees, and students seldom have adequate information about how well they are performing." A key question to ponder is: why are such "simple" steps so "rarely followed"? What views and practices in schools cause us to ignore or violate such commonsense principles of performance?

One reason we rarely follow such simple steps is that fundamental misconceptions about assessment generally, and feedback in particular, persist among educators. As I have argued, far too many educators treat assessment as something one does after teaching and learning are over, instead of seeing assessment as central to learning. (If I say instead that learning requires feedback, the proposition immediately seems more obvious.) And in terms of feedback, many teachers mistakenly think, for example, that general praise such as "Good job!" is feedback. But such praise only keeps you interested; it cannot improve your performance, which is what feedback can do.

So, let us begin at the beginning and ask: What is feedback? How does it differ from other forms of performance-related information? And what must assessment be to provide more of it?

What is feedback? Feedback is information about how we did in light of some goal. We hit the tennis ball and see where it lands; we give a speech and hear (as well as witness) audience reaction as we speak; we design an experiment and check the results for margin of error; we use the word processor and the spell checker underlines misspellings – feedback. Though we use the word more loosely in day-to-day talk to encompass many kinds of effects or reactions, here we narrow the meaning of feedback to its more technical sense: information about what was and was not accomplished, given a specific goal.

This definition and these examples enable us to see what feedback is and what it isn't. Feedback is useful information about what happened. It thus is not guidance (advice based on feedback) or evaluation (a value judgment about the meaning of the results). We profit, then, from pondering our current bad habit of defining assessment as testing and the result as merely a score. How would the tennis player improve if all the coach did was shout out letter grades or stanines? How would the public speaker become skilled and poised if there were never a real audience, and experts merely wrote back with their scores a few weeks later? Our challenge as educators, in other words, is to think of assessment as first and foremost educative. Our aim must therefore be to create assessments that provide better feedback by design, not merely to make evaluation more accurate. Indeed, without better feedback (and guidance based on that feedback) in student assessment, there is little point to precise scores and value judgments.

Feedback is not a labor-intensive, impractical strategy for school reform. Notice that none of the above examples involves a person giving a grade or an evaluative comment. A common misconception about feedback in schools is that it is impossible to provide enough of it, because good feedback seemingly requires intensive one-on-one tutorials. But much important feedback is derived from situational information, in response to trying to accomplish a task. The challenge of designing learning, in fact, is to make it possible for students to self-assess and self-adjust effectively, with minimal intervention by the teacher. Put another way, instructional design is the art of maximizing self-directed learning and useful information from the situation, thereby freeing the teacher to provide personal feedback and guidance when needed.

When we ponder the constant use of year-end tests (be they state-imposed or locally designed), we better see how far we are from making feedback central to learning. A one-shot "secure" test at the end of the year is as unlikely to improve student performance as being given a single letter grade at season's end (and no other information) by a tennis coach, after being tested on drills you had never seen before test day. If our aim is to improve student performance, not just measure it, we must ensure that students know the performances expected of them and the standards against which they will be judged, and that they have opportunities to apply what they learn from one assessment in the next.

What, then, must assessment be to be educative? What are the elements of an effective feedback and learning system?

As the above comments suggest, educative assessment requires a known set of measurable goals; standards and criteria that make the goals real and specific (via models and specifications); descriptive feedback against those standards; honest yet tactful evaluation; and useful guidance. Elaborations of these elements follow:

Elements of an educative assessment system:

1. Standards

· specifications (e.g. 80 wpm with 0 mistakes)
· models (exemplars of each point on the scale – e.g. anchor papers)
· criteria: conditions to be met to achieve goals – e.g. "persuasive and clear" writing

2. Feedback

· Facts: what events/behavior happened, related to goal
· Impact: a description of the effects of the facts (results and/or reactions)
· Commentary: the facts and impact explained in the context of the goal; an explanation of all confirmation and disconfirmation concerning the results

3. Evaluation

· Evaluation: value judgments made about the facts and their impact
· Praise / Blame: appraisal of individual's performance in light of expectations for that performer

4. Guidance

· Advice about what to do in light of the feedback
· Re-direction of current practice in light of results

Feedback vs. Evaluation

1. Facts: provide the evidence without interpretation or evaluation

· What did or did not happen, exactly? Describe the action/performance/product using only specific, concrete, non-judgmental language.

· Specify context and goal, as needed: what/who/where/when/how.

· Commentary:

Describe what happened in terms of the explicit or implicit goal/intent/standard/model. Confirm what was on-target, where effect matched intent, to reinforce it; and note where actions were off-target, where effect did not match intent, to underscore the need for re-direction.

Avoid or downplay language that stresses what the coach/judge liked or didn't like. Liking has nothing to do with it: how did the behavior meet or not meet the criteria and standards?

· Impact:

Describe the effects that occurred as an immediate result of the facts. (E.g. fact: the batter swung late and used his arms in swinging, not his body and legs. Impact: the batter hit a soft ground ball to the second baseman, which was not his aim.)

An audience or reactor's response: a description of the particular thoughts and feelings without using value language or authoritative generalizations. Examples: the audience applauded enthusiastically, many people looked bored, the questions afterward suggested key points were not understood, many audience members stayed afterward to talk and ask further questions, etc. (Note: it is a fact, not a value judgment, to say: "The ending of your story really bothered me because I felt like you had built up a completely different mood." A value judgment would be to go beyond the facts of your personal reaction to a blanket judgment about merit: "The ending is poor.")

2. Evaluation: the use of specific criterial language (unpersuasive, organized, unclear, polished, etc.) in relation to the goals and standards appropriate to the performance, not just general words of approval or disapproval, like or dislike.

Praise/blame, based on criteria:

· Note that phrases like "Good job!" are useful only when followed or preceded by specific feedback and evaluation justifying the praise or blame. Otherwise the only "feedback" transmitted is that the person was pleased or not, for whatever reason.

Feedback and guidance. Feedback is information about what happened, the result or effect of our actions. The environment or other people "feed back" to us the impact of our behavior, be that upshot intended or unintended. Guidance, on the other hand, gives future direction: what should I do, in light of what just happened? And evaluation, finally, judges my overall performance against a standard. Feedback tells me whether I am on course. Guidance tells me the most likely ways to achieve my goal. Evaluation tells me whether I am or have been sufficiently on course to be deemed competent or successful.

As this brief analysis makes clear, feedback is value-neutral: it merely reports what did and did not happen. Peter Elbow described the difference between feedback and evaluation in writing, for example, in terms of "criterion-based feedback" and "reader-based feedback." The former in effect asks "What is its quality?" while the latter asks "How does it work?" The mixing up of the two ideas "tends to keep people from noticing that they could get by with far less measurement. . . . The unspoken premise that permeates much of education is that every performance must be measured and that the most important response to a performance is to measure it. The claim need only be stated to be seen through. . . . When an individual teacher, a department, or a whole faculty sits down and asks, 'At what point and for what purposes do we need measurement?' they will invariably see that they engage in too much of it."

As this analysis also suggests, performance and assessment form a series of continuous and iterative steps – the so-called feedback loop. A deliberate system of feedback "loops," in which I constantly confirm or disconfirm the results of my actions (by attending to the visible effects of prior feedback and acting on that information), is how all successful performance develops and eventually occurs. This analysis underscores what is so often wrong with what passes for feedback in schools, for both students and adults. As Peter Senge put it in his well-known book on management, to get feedback is not to "gather opinions about an act we have undertaken. . . . [Rather] in systems thinking, feedback is a broader concept. It means any reciprocal flow of influence." In education, that means a "learning system" is one in which I not only receive enough data to get the task done properly, but also have opportunities to reveal my learning via self-adjustment in later, deliberately repeated assessments.

Concurrent Feedback. Perhaps the greatest indication of our failure to understand the "loop" nature of feedback, and of the poor feedback in current testing and student assessment, can be found by looking once again at the examples noted at the outset. In the public speaking, tennis, word-processing, and science examples, the key feedback occurs during performance, not after it. Concurrent feedback is information that is "fed back" to us as we perform, serving as the basis for learning and intelligent self-adjustment en route. (Even when real-world feedback occurs after performance, it is typically far more timely than the feedback from local, state, and national testing.)

We often judge competence in the real world, in fact, by a person's ability to adjust to circumstances in light of feedback. Mastery, in other words, is not the answering of simplistic and discrete questions correctly, but the solving of complex challenges, which requires responding to the feedback provided as we problem-solve or perform. "You know the trouble with kids today?" one woman in a workshop offered. "They don't know what to do when they don't know what to do." That is primarily because our testing system never tests for it. Yet almost all complex real-world performance requires numerous "trials" (and thus the self-correcting of many "errors" en route through feedback) if standards are to be met.

Here again, then, we must puzzle over our opening question: How did we lose sight of this obvious idea? Though a seemingly radical move for test construction, the idea of concurrent feedback is hardly opaque or new: Thorndike noted almost a century ago that good educational design involves "the law of effect, which holds essentially that learning is enhanced when people see the effects from what they try." William James, even earlier, wrote that effective education requires that we "receive sensible news of our behavior and its results. We hear the words we have spoken, feel our own blow as we give it, or read in the bystander's eyes the success or failure of our conduct. Now this return wave . . . pertains to the completeness of the whole experience." Haney's recent literature reviews only underscore the point: "a meta-analysis of forty previous studies on the instructional effects of feedback in test-like events showed that relatively rapid feedback (i.e. immediately after a test was completed) is more effective than feedback after a day or more. Also, feedback providing guidance to, or identification of, correct answers is more instructionally effective than feedback that simply tells learners whether their answers are right or wrong."

What, then, should we make of modern testing methodologies that give students no feedback as they proceed, or of the practice of providing scores and grades on a May test or June exam after school is out? What of instruction that assumes that "coverage" causes learning – as opposed to the learner's attempts to learn? Without being taught what excellent performance is, and without being taught how to self-adjust, students find that achievement becomes more a matter of lucky talent and savvy guesswork than of self-directed and long-lasting learning. And if instruction provides only teacher guidance (but little in the way of feedback against standards to justify or clarify the meaning of that guidance), then students must perpetually ask – as they do! – "Is this right? Is this what you want?" The development of autonomy and competence is undermined when students are reduced to guessing what will be on the test, puzzling over scores, and getting what little feedback they receive many days after performance (in a curriculum that moves on, irrespective of results).

While it is unclear what has caused us to lose sight of these truths about learning, one ironic observation about adults seems obvious: what is obvious to us is not obvious to students. Indeed, we might define "student" as a person who does not yet know or see what is obvious to the expert. The constant challenge of teaching is to escape adult egocentrism about what is and isn't obvious. This point was brought home to me recently in coaching my 9-year-old son's baseball team. What is painfully obvious to all three adult coaches about "backing up the play" (i.e. getting behind another player who is trying to catch the ball in the event that he fails to catch it) is not at all obvious to the kids. They have not developed the habit of anticipating where the ball is headed and where they must head in support of one another.

That skill, like all complex performance learning, becomes instinctive only through instruction and constant feedback in attempts to use it; teaching the idea of backing up – the guidance – makes little or no difference in the kids' behavior unless they see, many times over, the consequences of backing up and of not backing up. And we are talking here about something far simpler than almost all key learnings in school (yet coaches, like teachers, get impatient and upset when kids don't "see it" and do it properly). Guidance and evaluation make little difference unless there is prior clarity about goals, means, and feedback.

Important performances are never mastered the first or fortieth time. We therefore need less teaching and summative testing, and more feedback in schools. When and where should you start to regain control of learning via educative assessment?

About the author

Grant Wiggins, President of Grant Wiggins & Associates, earned his Ed.D. from Harvard University and his B.A. from St. John's College in Annapolis. Grant consults with schools, districts, and state education departments on a variety of reform matters; organizes conferences and workshops; and develops print materials and Web-based resources on curricular change. Grant's work has been supported by the Pew Charitable Trusts, the Geraldine R. Dodge Foundation, the National Science Foundation, and the Education Commission of the States, and he has recently completed a one-year appointment as Scholar-in-Residence on educational design issues at The College of New Jersey.

Over the past fifteen years, Grant has worked on some of the most influential reform initiatives in the country, including Vermont's portfolio system and the Coalition of Essential Schools. He has established a statewide Consortium devoted to assessment reform, and designed a performance-based and teacher-run portfolio assessment prototype for the states of North Carolina and New Jersey.

Perhaps best known for being the co-author, with Jay McTighe, of Understanding By Design and The Understanding By Design Handbook, the award-winning and highly successful materials on curriculum published by ASCD, Grant is also the author of Educative Assessment and Assessing Student Performance, both published by Jossey-Bass. His many articles have appeared in such journals as Educational Leadership and Phi Delta Kappan.

Grant's work is grounded in 14 years of secondary school teaching and coaching. Grant taught English and electives in philosophy, and coached varsity soccer, cross-country, JV baseball, and track & field. More recently Grant has been coaching his two sons in soccer and baseball. He also plays guitar and sings in a rock band called the Hazbins. You may contact him at grant@grantwiggins.org.

© March 2004
