Sunday, June 10, 2012

Digital Badges as “Transformative Assessment”

                                                            By Dan Hickey
               The MacArthur Foundation's Badges for Lifelong Learning competition generated immense
interest in using digital badges to motivate and acknowledge informal and formal learning. The
366 proposals submitted in the first round presented a diverse array of functions for digital
badges. As elaborated in a prior post, the various proposals used badges to accomplish one or
more of the following assessment functions:

                Traditional summative functions. This is where badges are used to indicate that the
                earner previously did something or knows something. This is what the educational
                assessment community calls assessment of learning.

               Newer formative functions. This is where badges are used to enhance motivation,
               feedback, and discourse for individual badge earners and broader communities of earners.
               This is what is often labeled assessment for learning.

               Groundbreaking transformative functions. This is where badges transform existing
               learning ecosystems or allow new ones to be created. These assessment functions impact
               both badge earners and badge issuers, and may be intentional or incidental. I believe we
               should label this assessment as learning.

This diversity of assessment functions was maintained in the 22 badge content awardees who were
ultimately funded to develop content and issue badges, as well as the various entities associated with Hive collectives in New York and Chicago, which were funded outside of the competition to help their members develop and issue badges.  These awardees will work with one of the three badging platform awardees who are responsible for creating open (i.e., freely available) systems for issuing digital badges.
            Along the way, the Badges competition attracted a lot of attention.  It certainly raised some eyebrows that the modestly funded program (initially just $2M) was announced by a cabinet-level official at a kickoff meeting attended by heads of numerous other federal agencies.  The competition and the idea of digital badges were mentioned in articles in the Wall Street Journal, New York Times, and The Chronicle of Higher Education.  This attention in turn led to additional interest and helped rekindle the simmering debate over extrinsic incentives.  This attention also led many observers to ask the obvious question: “Will it work?” 
This post reviews the reasons why I think the various awardees are going to succeed in their stated goals for using digital badges to assess learning.  In doing so I want to unpack what “success” means and suggest that the initiative will provide a useful new definition of “success” for learning initiatives.  I will conclude by suggesting that the initiative has already succeeded because it has fostered broader appreciation of the transformative functions of assessment.



Success Factors for the DML Badges Initiative
The deep pool of submissions, a rigorous review process, and the infusion of additional resources from the Gates Foundation make it likely that most of the funded initiatives will succeed at their stated goals.  Other success factors include the following:

Networked learning communities at HASTAC and Mozilla that have been set up to support badging communities;

The open source ethos of sharing and collaborating that permeates the initiative, and particularly the placement of the Mozilla Foundation at the center of the efforts to define the Open Badges Infrastructure (OBI) protocols;

The inclusion of entrepreneurs, with awards to start-ups both for issuing badges and for building platforms for issuing them;

New educational frameworks to inform and justify awardees' efforts, including Henry Jenkins’ ideas about participatory learning and Mimi Ito’s ideas about connected learning;

New collaborative topologies for badges, including work on badge roles led by Philipp Schmidt and Erin Knight at P2PU, on badging systems led by Barry Joseph from Global Kids, and on the badge lexicon led by Carla Casilli at Mozilla[1].

In sum, these factors should allow most of the awardees to succeed in using digital badges to assess learning.  More broadly speaking, their shared success should leave behind a community that is bound together by an emerging set of principles and examples for issuing and awarding badges.

What Will It Really Mean to “Succeed”?
The Digital Media and Learning 2012 request for proposals for badge content and programs included no requirement for documenting learning outcomes. Since I have spent much of my career helping educational innovators meet the stringent expectations for program evaluation at the NSF, DOE, and most other agencies and foundations, this omission was noteworthy.  Proposers were only asked to describe outcomes, including  “learning content, programs, or activities that will be supported by badges,” “skills, competencies, and achievements badges will validate,” “specific identities or roles” that learners might assume, and “opportunities or privileges” that the badges would confer. 
While the RFP asked proposers to describe “existing assessments, if any” that could be used for “tracking or measuring performance,” there was no requirement that formal outcome assessments be included.  Examination of the funded proposals [2] reveals little in the way of formally articulated plans for documenting (a) the learning associated with earning a badge, (b) the learning enhanced by offering badges, or (c) the transformations caused by the introduction of badges.  And none of the proposals even began to consider the evidential, consequential, and systemic validity issues associated with any of the evidence that might be used to document summative, formative, and transformative impact. 
While some observers must have been surprised by the lack of formal evaluation requirements, it made a lot of sense to me.  The actual grants were quite small, and the initiative was groundbreaking and practically oriented.  More importantly, stipulations for formal outcome evaluations would likely have focused on the more salient and measurable summative outcomes, and the corresponding evidential validity issues.  From my perspective, this would have come at the expense of the relatively elusive formative outcomes. This is partly because documenting formative outcomes raises complex issues of consequential validity that assessment scholars like Pamela Moss, Lorrie Shepard, and Samuel Messick have struggled with for decades. Because it is so hard to document formative impact, awardees who tried to evaluate those outcomes would likely have fallen short. What little anecdotal evidence they might have obtained would end up being dismissed by skeptics, allowing critics to argue that their program, and by extension, the larger initiative, had failed.
My point here is that requiring formal evaluation of learning outcomes might have squelched the badges movement before it actually got underway. As everyone agrees, there are currently no “best practices” for issuing and awarding digital badges for learning.  While we all point to initial examples like Global Kids, Stackoverflow.com, and Wikipedia Barnstars, there is currently little actual evidence of learning outcomes associated with these examples.  While some of the DML 2012 Research awardees are likely to generate new evidence of the impact of badges, it is simply too early for most of the DML 2012 Badge Content awardees to formally document summative and formative outcomes, or to grapple with the corresponding evidential and consequential validity issues. 
Some content awardees will soon enough be required to formally document learning outcomes when they seek subsequent additional funding from more conventional agencies and programs.  I hope the assessment community can help prepare them for that time.  In the meantime, I am concerned that some content awardees will be pushed to evaluate formal outcomes earlier than they expected or should.  For example, it seems possible that the introduction of digital badges by more established awardees (e.g., Girl Scouts or 4-H) may cause those organizations to wrestle with some enduring tensions in their own learning ecosystems.  This might lead some stakeholders in those organizations to ask for early evidence that digital badges are worth the trouble.  The obvious concern is that premature efforts to formally document summative and formative learning outcomes will draw resources away from or interfere with efforts to obtain those same outcomes.
This raises an even broader concern: that rigorous evaluations of learning outcomes will be carried out in the badging contexts that are, by virtue of those evaluations, least likely to obtain those outcomes.  This effect is rooted in the tensions between formative and summative assessment functions.  Think of it as an inversion of the well-known "Hawthorne Effect," where outcomes are generated by virtue of efforts to document them.  Essentially what happens is that the effort to formally document summative assessment functions transforms the learning ecosystems in ways that are hard to anticipate or even recognize.  This interferes with the formative functions that are needed to increase the learning outcomes associated with badges, while undermining the validity of the very evidence that might be obtained to document those outcomes.

Evaluating Transformative Outcomes of Badges
            To keep this discussion from getting too complicated, I have so far focused only on evaluating summative and formative learning outcomes.  Things get a lot more complicated when considering how assessing learning with badges might be used to transform existing ecosystems or create new ones.  This is because the “learning” associated with transformative assessment defies conventional characterizations of learning.  The learning associated with transformative outcomes is really systemic change.  Such learning is highly contextualized within the social and technological practices that collectively define a specific learning ecosystem.  It is certainly possible to use interpretive methods like ethnography and discourse analysis to obtain rigorous evidence of these transformations.  It is also difficult.  In my experience, the naturalistic orientation of most interpretive educational researchers can obscure the more interventionist goals of transformative assessment functions.  Even when systemic transformative outcomes are documented, one is still left with the challenge of linking them to individual formative and summative outcomes.[3]
This challenge of documenting the systemic impact of assessment practices and linking it to individual outcomes is what Allan Collins and John Frederiksen began exploring in a groundbreaking 1989 paper that introduced the notion of systemic validity.  In the ensuing two decades, the primary response to this challenge has consisted of what proponents labeled design-based research (DBR); most recently William Penuel, Barry Fishman, et al. have introduced a helpful new variant called design-based implementation research (DBIR).  In subsequent posts, I will argue why DBR and particularly DBIR are ideal for both documenting and enhancing the entire range of learning outcomes associated with digital badges.  In short, I believe that design-based educational research methods can provide individual awardees and the broader community with a framework for documenting general and specific principles for designing and awarding badges, while gradually increasing broad learning outcomes and evidence of those outcomes.  But doing so will have to wait until the infrastructure for issuing and awarding badges is established.  In the meantime, I hope that badge developers and issuers will consider positive ways in which their emerging practices might be transforming their learning ecosystems, and search for potentially unexpected (and possibly undesirable) transformations as well.

New Appreciation of Transformative Assessment
As badge developers eventually seek additional support from foundations or investors, they will likely need to craft well-articulated plans for formally documenting the impact of that investment on learning outcomes.  Thanks to continuing efforts of others in the DML community (most notably Penuel and Fishman), it is possible that these funders will be more sensitive to the challenges and risks associated with formal evaluation of learning outcomes.  More fundamentally, I predict that the collective efforts of the awardees will result in a coherent set of principles and practices for issuing and awarding digital badges.  Accompanying these principles and practices will be self-evident examples of learning ecosystems that have been fundamentally transformed or entirely created by digital badges. This should provide a broadly meaningful way of helping others appreciate the transformative functions of learning assessment.  Specific examples are likely to help others appreciate that not all transformative functions can be anticipated, and that some transformations are undesirable. Hopefully other examples will emerge in which DBR methods have been systematically applied from the very start.
In conclusion, I believe that the Badges initiative has already succeeded, because it has provided a broadly meaningful context for appreciating the transformative functions of assessment.  While I don’t expect that all badge developers will find this information immediately useful, I believe that this post illustrates how the DML initiative has moved this issue beyond the small subset of assessment and measurement scholars who have been grappling with it.  I hope that this information will help awardees who are not specialists in assessment or validity (or even learning) to anticipate and appreciate the complex issues that they will encounter as they go forward, and begin addressing them.  I also hope that readers will send along suggestions for doing so.

Next Up:  Before writing about DBR and DBIR, I plan another post about the likely impact of the Badges initiative beyond the awardees, which will introduce the notion of transcendent assessment.




[1] Case in point: Carla Casilli’s deliberations over the badging lexicon led to the helpful distinction between learners and earners, which both raises and addresses a huge issue in assessment nomenclature.  The more agnostic term earner is more precise; whether an earner has actually learned something is bound up in the social contract between the earner and the issuer.
[2] The winners are listed in this announcement; their original proposals are linked to the summaries of the larger set of Stage 2 winners.
[3] With all due respect, I acknowledge the 2008 book Transformative Assessment by the esteemed assessment scholar Jim Popham.  I have found it useful for thinking about the broader consequences of summative and formative assessment, and I think many awardees will as well.  But the vision of transformative assessment I have in mind goes quite a bit farther than the one outlined in this book.

4 comments:

  1. Dan, Very useful analysis, as always. Thank you.

    I was particularly struck by "Things get a lot more complicated when considering how assessing learning with badges might be used to transform existing ecosystems..." This is of special interest to us right now (at Global Kids) as we work with the Hive Learning Networks in Chicago and New York as they launch their own badging systems. Most recent badging projects fund one organization (albeit some with many sites) to launch a system; but the Hives are networks of dozens of learning institutions, each of which will use the system for its own purposes, yet do so in collaboration with one another throughout their city. Definitely something to watch.

    And within Global Kids, we are already seeing hints of that transformation. We need more than just a good source of content to badge, and a strong badging ecology; we need more than a robust badging technology, and more than effective outreach to and engagement with our youth to get them to use it and take ownership over it. We need to be ready to live up to the promise badges present to the users within our system.

    If we tell youth on Day 1 "These are the badges you can earn in this program," we sure better make sure we teach them over the year what they need to earn them. Our learning objectives, previously invisible within our lesson plans, are now made visible, empowering the youth to hold us accountable.

    If we tell youth at any of our dozen or so sites that they can earn, say, a public speaking badge, the trainers at those sites better be using a similar rubric, and applying it in a similar way, or the youth will find the earning of badges inconsistent at best and capricious at worst. Our previously invisible or isolated assessments, when offered at all, become visible as well, and need to be coordinated.

    I can go on and on. Developing a badging system is not just for our youth but for our entire organization, forcing us to "up our game," standardize our practices, and eliminate the slack that was only available when our practices and policies were invisible to our youth.

    Badges force the wizard out from behind the curtain. And that's a transformation for which we'll all need to be prepared.

  2. David Wiley responded to this post in a series of exchanges. David is correct in stating that badges are ultimately a few lines of code: http://opencontent.org/blog/archives/2397. My references to "badges AS" aim to get at the broader social practices that are enacted around badges.

    I think that this document from Educause will likely serve as the final word for now. I understand that this wording was negotiated by a bunch of folks who are involved in the initiative. Their definition is quite consistent with David's:

    Badges are digital tokens that appear as icons or logos on a web page or other online venue. Awarded by institutions, organizations, groups, or individuals, badges signify accomplishments such as completion of a project, mastery of a skill, or marks of experience. Proponents suggest that these credentials herald a fundamental change in the way society recognizes learning and achievement.

    More at http://net.educause.edu/ir/library/pdf/ELI7085.pdf
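
    To make the "few lines of code" point concrete, here is a minimal sketch (written in Python, with field names that are illustrative rather than the exact OBI specification) of the kind of structured data a badge assertion carries:

    import json

    # Illustrative sketch only: field names here are hypothetical and simplified,
    # not the exact Open Badges specification.
    badge_assertion = {
        "recipient": "earner@example.org",
        "badge": {
            "name": "Public Speaking",
            "description": "Delivered three presentations assessed with a shared rubric",
            "image": "https://example.org/badges/public-speaking.png",
            "criteria": "https://example.org/badges/public-speaking/criteria",
            "issuer": {
                "name": "Example Learning Network",
                "origin": "https://example.org",
            },
        },
        "evidence": "https://example.org/earners/123/portfolio",
        "issued_on": "2012-06-10",
    }

    # The badge itself is just a few lines of structured data that an issuer
    # publishes and a displayer can retrieve and show.
    print(json.dumps(badge_assertion, indent=2))

    Everything else that matters, including the criteria, the evidence, and the social contract between earner and issuer, lives in the practices enacted around that data.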

    Dan

  3. Appreciate the reference to DBR [Dede et al.'s seminal 2009 article discussing methodology for online PD] and DBIR as guideposts, and as a way to tackle nascent but highly promising learning endeavors. At this stage in badge development the research methodology at the 30,000-foot level might be more inductive, iterative, and formatively focused versus deductive/summative. Agreed!

    I remember watching a YouTube or Vimeo video of a Chris Hughes keynote to several thousand association/business leaders in which he said, "Do you want to know the secret to creating a successful community regardless of the content? …I’ll give it to you in one word (which then appeared on the immense screen behind him)…ITERATE." He said (paraphrasing) that they were not sure which ideas would take off at Facebook, but they tried many, monitored them, and iterated toward success. I heard the same from a prior lead designer at Facebook (in a CNBC special on The Facebook Phenomena), who said that they kept analyzing web traffic data (the kind of analysis now done with the widely used and free Google Analytics tool) to see how they could increase the frequency and duration of visits (stickiness).

    David’s comments above remind me of these same sentiments. Let’s not “constrict and suffocate” the learning badge effort at this stage of growth or be too critical too early. David does a great job, though, of identifying the competing tensions that need to be considered. In the physical sciences a good metaphor might be Heisenberg’s Uncertainty Principle, where in quantum mechanics the more precisely you measure the position of a particle, the less certain you are of its momentum. A little uncertainty is OK at this stage, as we wrestle with determining what the important variables are that we seek to measure. Kyle Peck at Penn State also supports this sentiment in a post discussing the potential value of badges for learning: http://www.evolllution.com/opinions/will-badging-overcome-or-feed-the-frankenstein-effect/

  4. Thank you for this post -- I am reading this almost exactly two years AFTER it was written, and am delighted to find this historical reference. Having been heavily engaged in educational technology for more than 20 years, I am myself transformed by the potential that digital badges have to bring to the online learning community. I very recently attended a session at a K-12 education conference on using digital badges, but was particularly excited about their potential for all kinds of different learning environments, formal AND informal. It's the application to "informal" environments that really lights my fire and that I believe has the power to transform the activity of learning from process-based to competency-based, and to transform the very notion of what it means to "be educated."
