Showing posts with label digital media and learning.

Thursday, October 23, 2014

Open Digital Badges: Recognizing, Assessing, and Motivating Learning in the DPD Project

by Katerina Schenke, James Willis, and Dan Hickey

As the Design Principles Documentation team analyzes the project data from 30 digital badge projects awarded funding in the 2012 DML 4: Badges for Lifelong Learning Competition, certain principles and practices are emerging as particularly successful or particularly challenging. This post highlights the three most successful and challenging principles and practices in recognition, assessment, and motivation.



Tuesday, September 17, 2013

In Theory and In Practice: Digital Badges in Education and the Challenges that Arise.

This post features an article by Roshni Verghese about badges, built around an interview with Cliff Manning and Lucy Neale of DigitalMe. The article describes the possibilities and challenges of incorporating digital badges into Supporter to Reporter (S2R), a program designed to introduce young sports enthusiasts in the UK to sports reporting.

Sunday, July 21, 2013

Purdue Veterinary Medicine Digital Badges Aim to Excite Youth and Expand Their Knowledge



by Rebecca Itow

Purdue Veterinary Medicine has designed a digital badge system that challenges kindergarten through high school students to earn digital badges as they learn about veterinary medicine. The PVM Digital Badge system is open to any K-12 student. Youth engage with veterinary medical content online or at PVM events, and then take a short (usually multiple choice) quiz for the chance to earn their badge. This looks like the start of a second wave of new projects using digital badges beyond the DML Badges for Lifelong Learning competition. I am particularly curious about their assessment practices in light of what I learned studying the assessment practices across the 30 DML awardees. Plus, with its campus-wide Passport badging system, Purdue really seems out in front of other universities when it comes to digital badges.

Thursday, April 4, 2013

Digital Badges Meeting at the NSF Headquarters Hosted by NYSCI


by Katerina Schenke
This post describes a meeting at the National Science Foundation where sixty leaders in education and research from around the country gathered to discuss digital badges and education. Three of us presented the initial set of design principles from the Design Principles Documentation Project.

Wednesday, September 9, 2009

Q & A with Henry Jenkins' New Media Literacies Seminar

New media scholar Henry Jenkins is teaching a graduate seminar on new media literacies at the University of Southern California's Annenberg School for Communication. The participants had raised the issues of assessment and evaluation, especially as they relate to educational applications of new media. Henry invited Dan Hickey to Skype into their class to field questions on this topic. They perused some of the previous posts here at Re-Mediating Assessment and proceeded to ask some great questions. Over the next few weeks, Dan and other members of the participatory assessment team will respond to these questions and seek input and feedback from others.


The first question was one they should have answered months ago:



Your blog post on what is not participatory assessment critiqued prevailing assessment and testing practices. So what is participatory assessment?

The answer to this question has both theoretical and practical elements. Theoretically, participatory assessment is about reframing all assessment and testing practices as different forms of communal participation, embracing the views of knowledgeable activity outlined by media scholars like Henry Jenkins, linguists like Jim Gee, and cognitive scientists like Jim Greeno. We will elaborate on that in subsequent posts, hopefully in response to questions about this post. But this first post will focus more on the practical answer.

Our work in participatory assessment takes inspiration from the definition of participatory culture in the 2006 white paper by Project New Media Literacies:
not every member must contribute, but all must believe they are free to contribute when ready and that what they contribute will be appropriately valued.

As Henry, Mimi Ito, and others have pointed out, such cultures define the friendship-driven and interest-driven digital social networks that most of our youth are now immersed in. This culture fosters tremendous levels of individual and communal engagement and learning. Schools have long dreamed of attaining such levels but have never even come close. Of course, creating (or even allowing) such a culture in compulsory school settings requires new kinds of collaborative activities for students. Students like those in Henry's class and in our Learning Sciences graduate program are at the forefront of creating such activities. Participatory assessment is about creating guidelines to help students and teachers use those activities to foster both conventional and new literacy practices. Importantly, these guidelines are also intended to produce the more conventional evidence of impact on understanding and achievement that will always be necessary in any formal educational context. Such evidence will also always be necessary if there is to be any sort of credentialing offered for learning that takes place in less formal contexts.


Because successful engagement with participatory cultures depends as much on ethical participation (knowing how) as it does on information proficiency (knowing what), at the most basic practical level participatory assessment is intended to foster both types of know-how. More specifically, participatory assessment involves creating and refining informal discourse guidelines that students and teachers use to foster productive communal participation in collaborative educational activities, and then in the artifacts that are produced in those activities. Our basic idea is that before we assess whether or not individual students understand X (whatever we are trying to teach them), they must first be invited to collectively “try on” the identities associated with the knowledge practices of X. We do this by giving ample opportunities to “try out” discourse about X, by aggressively focusing classroom discourse on communal engagement with X, and by discouraging a premature focus on individual students’ understanding of X (or even their ability to articulate the concept of X). A premature focus on individual understanding leaves students who are struggling (or have perhaps not even been trying) self-conscious and resistant to engagement: they will resist talking about X and, even more problematically, resist listening to their classmates talk about X. Whatever the reason an individual is not engaging, educators must help all students engage with increased meaningfulness.
To do participatory assessment for an activity, we first define the relevant big ideas (RBIs) of the activity (i.e., X, Y, and perhaps Z). We then create two simple sets of Discourse Guidelines to ensure that all students enlist (i.e., use) X, Y, and Z in the discourse that defines the enactment of that activity. Event Reflections encourage classrooms to reflect on and critique their particular enactment of the activity; these are informal prompts that are seamlessly embedded in the activities. A paper we just wrote for the recent meeting of the European Association for Research on Learning and Instruction in Amsterdam discussed examples from our implementation of Reading in a Participatory Culture, developed by Project New Media Literacies. That activity, Remixing and Appropriation, used new media contexts to explore conventional literary notions like genre and allusion. One of the Event Reflection prompts was:

How is the way we are doing this activity helping reveal the role of genre in the practice of appropriation?


Given that the students had only just begun to see how this notion related to this practice, they struggled to make sense of such questions. But it set the classroom up to better appreciate how genre was just as crucial to Melville’s appropriation of the Old Testament in Moby-Dick as it was to the music video "Ahab" by nerdcore pioneer MC Lars. The questions are also worded to introduce important nuances that will help foster more sophisticated discourse (such as the subtle distinction between a concept like genre and a practice like appropriation).
Crucially, the Event Reflections were aligned with slightly more formal Activity Reflections. These come at the end of the activity and ask students to reflect on and critique the way the particular activities were designed, in light of the RBIs:

How did the way that the designers at Project New Media Literacies made this activity help reveal the role of genre in the practice of appropriation?


Note that the focus of the reflection and critique has shifted from the highly contextualized enactment of the activity to the more fixed design of the activity. But we are still resisting the quite natural tendency to begin asking whether each student can articulate the role of genre in appropriation. Rather than ramping up individual accountability, we first ramp up the level of communal discourse, moving from the rather routine conceptual engagement in the question above to more sophisticated consequential and critical engagement. While these are not the exact questions we used, they capture the idea nicely:

Consequential Reflection: How did the decision to focus on both genre and appropriation impact the way this activity was designed?

Critical Reflection: Can you think of a different or better activity than Moby-Dick or Ahab to illustrate genre and appropriation?


We are still struggling to clarify the nature of these prompts, but have found a lot of inspiration in the work of our IU Learning Sciences colleagues Melissa Gresalfi and Sasha Barab, who have been writing about consequential engagement relative to educational video games.


The discourse fostered by these reflections should leave even the most ill-prepared (or recalcitrant) participant ready to meaningfully reflect on their own understanding of the RBIs. And yet, we still resist directly interrogating that understanding, in order to continue fostering discourse. Before jumping to assess the individual, we first focus on the artifacts that the individual is producing in the activity. This is done with Reflective Rubrics that ask students to elaborate on how the artifact they are creating in the activity (or activities) reflects consequential and critical engagement with the RBIs. As will be elaborated in a subsequent post, these are aligned to formal Assessment Rubrics of the sort that teachers would use to formally assess and (typically) grade the artifacts.

Ultimately, participatory assessment is not about the specific reflections or rubrics, but about the alignment across these increasingly formal assessments. By asking increasingly sophisticated versions of the same questions, we can set remarkably high standards for the level of classroom discourse and the quality of student artifacts. In contrast to conventional ways of thinking about how assessment drives curriculum, former doctoral student Steven Zuiker helped us realize that we have to think about the impact of these practices using the anthropological notion of prolepsis. It helps us realize that anticipation of the more formal assessments motivates communal engagement in the less formal reflective process. By carefully refining the prompts and rubrics over time, we can attain such high standards for both that any sort of conventional assessment of individual understanding or measure of aggregated achievement just seems… well… ridiculously trivial.
So the relevant big idea here is that we should first focus away from individual understanding and achievement if we want to confidently attain them with the kinds of participatory collaborative activities that so many of us are busily trying to bring into classrooms.