Wednesday, September 9, 2009

Q & A with Henry Jenkins' New Media Literacies Seminar

New media scholar Henry Jenkins is teaching a graduate seminar on new media literacies at the University of Southern California's Annenberg School for Communication. The participants had raised the issues of assessment and evaluation, especially as they relate to educational applications of new media. Henry invited Dan Hickey to Skype into their class to field questions about this topic. They perused some of the previous posts here at re-mediating assessment and proceeded to ask some great questions. Over the next few weeks, Dan and other members of the participatory assessment team will respond to these and seek input and feedback from others.

The first question was one they should have answered months ago:

Your blog post on what is not participatory assessment critiqued prevailing assessment and testing practices. So what is participatory assessment?

The answer to this question has both theoretical and practical elements. Theoretically, participatory assessment is about reframing all assessment and testing practices as different forms of communal participation, embracing the views of knowledgeable activity outlined by media scholars like Henry Jenkins, linguists like Jim Gee, and cognitive scientists like Jim Greeno. We will elaborate on that in subsequent posts, hopefully in response to questions about this post. But this first post will focus more on the practical answer.

Our work in participatory assessment takes inspiration from the definition of participatory culture in the 2006 white paper by Project New Media Literacies:
not every member must contribute, but all must believe they are free to contribute when ready and that what they contribute will be appropriately valued.

As Henry, Mimi Ito, and others have pointed out, such cultures define the friendship-driven and interest-driven digital social networks that most of our youth are now immersed in. This culture fosters tremendous levels of individual and communal engagement and learning. Schools have long dreamed of attaining such levels but have never even come close. Of course, creating (or even allowing) such a culture in compulsory school settings requires new kinds of collaborative activities for students. Students like those in Henry’s class, and students in our Learning Sciences graduate program are at the forefront of creating such activities. Participatory assessment is about creating guidelines to help students and teachers use those activities to foster both conventional and new literacy practices. Importantly, these guidelines are also intended to produce more conventional evidence of the impact of these practices on understanding and achievement that will always be necessary in any formal educational context. Such evidence will also always be necessary if there is to be any sort of credentialing offered for learning that takes place in less formal contexts.

Because successful engagement with participatory cultures depends as much on ethical participation (knowing how) as it does on information proficiency (knowing what), participatory assessment is, at the most basic practical level, intended to foster both types of know-how. More specifically, participatory assessment involves creating and refining informal discourse guidelines that students and teachers use to foster productive communal participation in collaborative educational activities, and then in the artifacts that are produced in those activities. Our basic idea is that before we assess whether or not individual students understand X (whatever we are trying to teach them), they must first be invited to collectively “try on” the identities of the knowledge practices associated with X. We do this by giving ample opportunities to “try out” discourse about X, by aggressively focusing classroom discourse towards communal engagement in X, and by discouraging a premature focus on individual students’ understanding of X (or even their ability to articulate the concept of X). A premature focus on individual understanding leaves the students who are struggling (or have perhaps not even been trying) self-conscious and resistant to engagement. This will make them resist talking about X. Even more problematically, they will resist even listening to their classmates talk about X. Whatever the reason an individual is not engaging, educators must help all students engage with increased meaningfulness.
To do participatory assessment for activity A, we first define the relevant big ideas (RBIs) of the activity (i.e., X, Y, and perhaps Z). We then create two simple sets of Discourse Guidelines to ensure that all students enlist (i.e., use) X, Y, and Z in the discourse that defines the enactment of that activity. Event Reflections encourage classrooms to reflect on and critique their particular enactment of the activity. These are informal prompts that are seamlessly embedded in the activities. A paper we just wrote for the recent meeting of the European Association for Research on Learning and Instruction in Amsterdam discussed examples from our implementation of Reading in a Participatory Culture, developed by Project New Media Literacies. That activity, Remixing and Appropriation, used new media contexts to explore conventional literary notions like genre and allusion. One of the Event Reflection prompts was:

How is the way we are doing this activity helping reveal the role of genre in the practice of appropriation?

Given that the students had only just begun to see how this notion related to this practice, they struggled to make sense of such questions. But it set the classroom up to better appreciate how genre was just as crucial to Melville’s appropriation of the Old Testament in Moby-Dick as it was to the music video "Ahab" by nerdcore pioneer MC Lars. The questions are also worded to introduce important nuances that will help foster more sophisticated discourse (such as the subtle distinction between a concept like genre and a practice like appropriation).
Crucially, the Event Reflections were aligned to slightly more formal Activity Reflections. These come at the end of the activity and ask students to reflect on and critique the way the particular activities were designed, in light of the RBIs:

How did the way that the designers at Project New Media Literacies made this activity help reveal the role of genre in the practice of appropriation?

Note that the focus of the reflection and critique has shifted from the highly contextualized enactment of the activity to the more fixed design of the activity. But we are still resisting the quite natural tendency to begin asking ourselves whether each student can articulate the role of genre in appropriation. Rather than ramping up individual accountability, we first ramp up the level of communal discourse by moving from the rather routine conceptual engagement in the question above into more sophisticated consequential and critical engagement. While these are not the exact questions we used, they capture the idea nicely:

Consequential Reflection: How did the decision to focus on both genre and appropriation impact the way this activity was designed?

Critical Reflection: Can you think of a different or better activity than Moby-Dick or Ahab to illustrate genre and appropriation?

We are still struggling to clarify the nature of these prompts, but have found a lot of inspiration in the work of our IU Learning Sciences colleagues Melissa Gresalfi and Sasha Barab, who have been writing about consequential engagement relative to educational video games.

The discourse fostered by these reflections should leave even the most ill-prepared (or recalcitrant) participant ready to meaningfully reflect on their own understanding of the RBIs. And yet, we still resist directly interrogating that understanding, in order to continue fostering discourse. Before jumping to assess the individual, we first focus on the artifacts that the individual is producing in the activity. This is done with Reflective Rubrics that ask the students to elaborate on how the artifact they are creating in the activity (or activities) reflects consequential and critical engagement with the RBI. As will be elaborated in a subsequent post, these are aligned to formal Assessment Rubrics of the sort that teachers would use to formally assess and (typically) grade the artifacts.

Ultimately, participatory assessment is not about the specific reflections or rubrics, but about the alignment across these increasingly formal assessments. By asking increasingly sophisticated versions of the same questions, we can set remarkably high standards for the level of classroom discourse and the quality of student artifacts. In contrast to conventional ways of thinking about how assessment drives curriculum, former doctoral student Steven Zuiker helped us realize that we have to think about the impact of these practices using the anthropological notion of prolepsis. It helps us realize that anticipation of the more formal assessments motivates communal engagement in the less formal reflective process. By carefully refining the prompts and rubrics over time, we can attain such high standards for both that any sort of conventional assessment of individual understanding or measure of aggregated achievement just seems… well… ridiculously trivial.
So the relevant big idea here is that we should first focus away from individual understanding and achievement if we ultimately want to attain it confidently with the kinds of participatory collaborative activities that so many of us are busily trying to bring into classrooms.
