When I first started using a problem-based curriculum in science, I admit that I had no idea what to expect. Moreover, I had only a vague idea of how I was going to assess my students. As an academic teacher, I am required to give my students a letter grade twice a year. While I am moving more strongly toward the use of portfolios and self-assessment in my classes, I still work within a system that strives to have letter grades accurately reflect a student’s level of understanding and/or effort in a discipline, in my case 5th and 6th grade science. I work within a system (pre-school through graduate school) that still values grades as a way to rank children. Ideally, this ranking is used so that students can be better served, classified and counseled toward the goal of attending college and possibly future career choices. In such a system, the discrete, easy-to-mass-produce-and-analyze quality of tests makes them a more valued form of assessment. As a result, 5% of the letter grade that I award my students is still the result of paper-style tests and quizzes, or what I refer to as “check-ins.”
Beginning with the role of peer assessment, I hope to describe the alternative forms of assessment (the other 95%) that I have been using in my problem-based approach to science. The other forms of assessment that I use include self-assessment and assessment by a mentor or adult expert. A fourth form of assessment that I hope to learn more about this year (one to which I was first introduced by Dave Otten of the Athenian School) is authentic assessment in the form of published, or open-source, sharing of work. These forms of assessment may be used in conjunction with assigning letter grades, as any of them is easily adaptable to a rubric, or they can be used in a less formal, grade-less setting. Regardless, they stand alone in value, as they bring a rare opportunity for the following student skills to evolve over time:
- leadership, through setting higher quality standards for doing and presenting work, and risk-taking by taking on hard problems
- collaboration, through the sharing of ideas and constructive criticism
- the ability to defend an argument
- the ability to describe a problem
- developing self-awareness as a learner
- practicing informed iteration while working towards a solution
Why peer assessment?
Can you trust a 10-14 year old to guide another 10-14 year old?
Forgetting for a minute that my students are ages 10 and 11 (as I must, to begin to learn their strengths), I researched forms of assessment, seeking out those that would be most authentic to a maker classroom. For me, that looked like behaviors (assessment tools) that led to methods (feedback) for offering new ideas and collaborating on the growth process of designing a product. Peer critique was something I began last year, and I felt it was working toward the goals stated above, but I had no measure to back up my claim. At the beginning of the year I had the fear, as most teachers would, of the blind leading the blind over a cliff of failure. I also knew that peer review is crucial in science and that it works in various design fields, so why not in a classroom? Using peer critique to give rapid feedback on the design process seemed better than trying to filter all student work through the lens of one teacher. Peer feedback was not only useful; it was necessary in an open-ended project scenario.
Having taught middle school for fourteen years, I also knew that the role of peer opinion, as it affects some beliefs and some behaviors, begins to supersede the role of parents and other adults at this developmental age (Berndt 1979). Lastly, due to the democratic nature in which knowledge will be accessed in this century, as well as our location in the heart of the technology world, many of my students come to the classroom with valuable insights, experiences and opinions that could inform the whole group. Why not capitalize on these last two assets?
In the end, the blind leading the blind is often how we all embark on an adventure. Every year we have to learn, as a class or team, how to critique the work of others by doing it a few times. It takes modeling comments and questions in the first few attempts at peer critique to get students to make more thoughtful and insightful criticisms of their classmates’ work. Students, too, will inform the group over time as to what “quality” looks like. The quality of observations made by the audience also increases over time, which in turn leads to higher quality feedback for the presenter. Presentation styles can also be informed through critiquing the quality of a presentation. Students soon learn from each other two key elements of sharing work: the importance of a “good story” about your work, and that a great data visualization is worth a thousand words.
Formal versus Informal Critiques
Students can receive feedback from their peers in two different ways in class: formally and informally. When we first began the year, all presentations of work for peer feedback were given formally, that is, one or two students giving a slideshow-aided presentation of the current progress of their work. These formal “crits” were modeled on those from the first year we used product design in the 6th grade curriculum (2). I soon noticed that the quality of peer feedback grew over several weeks, and I began to trust my students to give the key feedback I would otherwise have been dishing out as the adult in the room. With that role covered by my most ardent student critics, I now reserve my comments for offering clues to a solution or for direct suggestions to deepen their knowledge, as any literacy guru would do.
The problem with formal critiques is that they are formal. Adolescents hate public speaking (at least some do), and formal critiques take days to do properly if you allow time for feedback. That is a lot of conference/old-classroom/lecture-style information to sit through, no matter how interesting it may be. On top of that, the process of active listening for critical feedback is exhausting. I have to remind myself often that these students are only eleven. We brainstormed as a class ways to improve the system, and two ideas emerged. Students almost unanimously agreed that peer critiques were valuable, but rather than have every share of work be done formally, they decided to do informal-style critiquing, where students share their work science-fair style. For informal critiques, several tables are set up gallery style throughout the iLab. Students can then design their work display using whiteboard tables, rolling whiteboards, markers, standing their iPads up as displays, and displaying their prototypes in an analog timeline.
The second idea to make formal crits less painful came recently, when we needed a series of formal critiques for students to share the results of their testing for product development. This critique needed to be formal, as students presented their authentic questions about their work up to that point. It is more like what you would see at a scientific conference, only with audience critique of the work afterwards. (This may in fact happen at real scientific conferences, but not at any educational ones I have attended.) To get through the process more easily and effectively, we made sure to schedule only five presentations a day, over a series of days, and always indulged in an intermission that required no brain cells (any YouTube video with kittens getting stuck in things will do). The key was to keep the learning process fun, even if it was still formal.
As peers take in the description of work from those presenting, they know that a valuable part of the process is to give real criticism to the presenter. This feedback can be verbal and interactive, such as that given at the end of a formal presentation of work, or it can be more passive: brightly colored comments written with post-its and sharpie marker, which we have deemed “love notes” (see below, students leaving comments for their peers using the sticky note and sharpie model). Love notes can affect a student’s project on different layers, emotional as well as cognitive. The sheer act of getting a paper covered in love notes still brings a bright-eyed glow of relief to the face of a student who has survived a presentation. My students seem to genuinely feel rewarded for their intellect and work by the simplest of notes, such as those scribbled with the words “very cool!” or “I liked your ideas.” What adult wouldn’t want to get that kind of encouragement for their work on a regular basis? The key to using peers to critique student work is that feedback is immediate and expected by the student presenters, which can be a very powerful motivator to do well (Kettle and Häubl 2010).
Love notes can offer key steps to academic growth as well. As research into effective peer assessment for MOOCs has shown, peer assessment can be as effective as assessment done by a single adult or teacher (Koller 2012; Sadler and Good 2006). While a maker classroom is not a MOOC, it is a place where student-driven work can seem overwhelming for a single teacher to assess. Using peer assessment allows for deeper differentiation in the learning process for our students, something we strive for at Hillbrook.
See below one student’s collection of feedback from the formal peer critique of her scientific testing. Her question was whether she could prevent bananas from turning brown in her ice-cream recipe using one of two recipe changes. Her ice-cream was designed to combat depression, the problem she chose to investigate for the year. Once she decided to make a food-related solution, she researched micro-nutrients that aid in the relief of depression and invented an ice-cream. Looking at her critique or “love note” form, can you tell which comments came from an adult and which came from an 11-year-old?
Can we measure the worth of peer critique?
It is one thing to have an intuition that something is valuable in your classroom. It is another to share that value outside your classroom with only anecdotal evidence. Isolated in the iLab, I could see growth happening in my students due to the peer critique system we had been using. Still, I struggled to find a method of measuring that value so that I could explain it to others. After deliberation with Hillbrook’s science teacher Ilsa Dohman (also our Center for Teaching Excellence research design guru beginning in 2014), I began asking students to reflect on the peer critique process. I asked them to dwell on the process for a moment while they focused on the following topics:
- What was the goal of this presentation of your work?
- Self-assess your presentation in terms of quality (see image below)
- Tweeze out constructive criticism from the love notes to decide on a plan of action for your next iteration
I wondered if students could keep a better track record of how the comments and feedback they got from their peers were reflected in their iteration process or growth as students. That way, we could all see the value of the process. The analysis of this attempt to measure growth is still pending. Ilsa and I hope to design an experimental assessment that allows students to more actively map the connection between peer feedback and growth (either as a student, or of the design or scientific process). In the meantime, we continue to collect data in the form of reflections, as seen below.
- Berndt, Thomas J. “Developmental Changes in Conformity to Peers and Parents.” Developmental Psychology 15.6 (Nov 1979): 608-616.
- Flores, Christa. “Authentic Learning and Assessment in the Self-Directed Environment of a Middle School Maker Space.” Paper submission for IDC 2013.
- Kettle, Keri, and Gerald Häubl. “Motivation by Anticipation: Expecting Rapid Feedback Enhances Performance.” Psychological Science 21.4 (April 2010): 545-547.
- Koller, Daphne. “What We’re Learning from Online Education.” TEDGlobal talk, filmed June 2012.
- Sadler, Philip M., and Eddie Good. “The Impact of Self- and Peer-Grading on Student Learning.” Educational Assessment 11.1 (2006): 1-31.
- Senger, Jenna-Lynn. “Student Evaluations: Synchronous Tripod of Learning Portfolio Assessment—Self-Assessment, Peer-Assessment, Instructor-Assessment.” Creative Education 3.1 (2012): 155-163.