Wednesday, June 13, 2012

Three Firsts: Bloomington’s First Hackjam, ForAllBadges’ App, and Participatory Assessment + Hackasaurus


Dan Hickey and Rebecca Itow
On Thursday, June 7, 2012, the Center for Research on Learning and Technology at Indiana University, in conjunction with the Monroe County Public Library (MCPL) in Bloomington, IN, put on a Hackjam for resident youth. The six-hour event was a huge success. Students were excited and engaged throughout the day as they used Hackasaurus’ web editing tool X-Ray Goggles to “hack” Bloomington’s Herald Times. The hackers learned some HTML and CSS, developed some web literacies, and learned about writing in different new media contexts. We did some cool new stuff that we think others will find useful and interesting. We summarize what we did in this post, will elaborate on some of these features in subsequent posts, and will try to keep this one short and readable.
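X-Ray Goggles itself runs in the browser, letting a learner click any element on a live page and edit its HTML in place. As a rough stand-alone analogue of the kind of “hack” participants made, here is a minimal Python sketch that swaps a headline inside an HTML snippet; the page content and headline text are invented for illustration, not taken from the Herald Times.

```python
# A rough offline analogue of what learners did with X-Ray Goggles:
# select an element on a page and replace its contents.
# The HTML below and both headlines are hypothetical examples.

page = """
<article>
  <h1 class="headline">City Council Approves New Budget</h1>
  <p>Story text here...</p>
</article>
"""

# "Hack" the headline, as a Hackjam participant might.
hacked = page.replace(
    "City Council Approves New Budget",
    "Local Teens Hack the News at Library Hackjam",
)

print(hacked)
```

The point of the exercise is not the string replacement itself but seeing that a web page is editable text, which opens the door to learning how HTML tags and CSS rules shape what readers see.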

WHY DID WE DO A HACKJAM?
We agreed to do a Hackjam with the library many months ago. MCPL Director Sara Laughlin had contacted us in 2011 about partnering with them on a MacArthur/IMLS proposal to bring some of Nicole Pinkard’s YouMedia programming to Bloomington. We concluded that a more modest collaboration (like a Hackjam) was needed to lay the groundwork for something as ambitious as YouMedia.

Our ideas for extending Mozilla’s existing Hacktivity Kit were first drafted in a proposal to the MacArthur Foundation’s Badges for Lifelong Learning initiative. Hackasaurus promised to be a good context in which to continue our efforts to combine badges and participatory assessment methods. While our proposal was not funded, we decided to do it anyway. MCPL initially considered making the Hackjam part of the summer reading program sponsored by the local school system. Even though we were planning to remix the curriculum to make it more “school friendly,” some school officials could not get past the term “hacking.”


Sunday, June 10, 2012

Digital Badges as “Transformative Assessment”

By Dan Hickey

The MacArthur Foundation's Badges for Lifelong Learning competition generated immense interest in using digital badges to motivate and acknowledge informal and formal learning. The 366 proposals submitted in the first round presented a diverse array of functions for digital badges. As elaborated in a prior post, the various proposals used badges to accomplish one or more of the following assessment functions:

Traditional summative functions. This is using badges to indicate that the earner previously did something or knows something. This is what the educational assessment community calls assessment of learning.

Newer formative functions. This is where badges are used to enhance motivation, feedback, and discourse for individual badge earners and broader communities of earners. This is what is often labeled assessment for learning.

Groundbreaking transformative functions. This is where badges transform existing learning ecosystems or allow new ones to be created. These assessment functions impact both badge earners and badge issuers, and may be intentional or incidental. I believe we should label this assessment as learning.

This diversity of assessment functions was maintained in the 22 badge content awardees who were ultimately funded to develop content and issue badges, as well as the various entities associated with HIVE collectives in New York and Chicago, who were funded outside of the competition to help their members develop and issue badges. These awardees will work with one of the three badging platform awardees, who are responsible for creating open (i.e., freely available) systems for issuing digital badges.
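At the heart of any such open issuing system is badge metadata that travels with the badge: who earned it, what it certifies, and who vouches for it. The following is a hypothetical sketch of that shape in Python; the field names loosely echo the general structure of Mozilla's early Open Badges metadata (recipient, badge, criteria, issuer), but the exact schema here is an assumption for illustration, not the official specification, and every name and URL is invented.

```python
# Hypothetical sketch of badge metadata an open issuing platform might
# publish. All field names, values, and URLs are illustrative only;
# consult the actual Open Badges specification for the real schema.
import json

assertion = {
    "recipient": "learner@example.org",  # hypothetical badge earner
    "issuedOn": "2012-06-07",
    "badge": {
        "name": "HTML Hacker",  # hypothetical badge
        "description": "Remixed a live web page using HTML and CSS.",
        "criteria": "https://example.org/badges/html-hacker/criteria",
        "issuer": {
            "name": "Example Library",
            "url": "https://example.org",
        },
    },
}

# Serializing to JSON is what makes the badge portable across systems.
print(json.dumps(assertion, indent=2))
```

The design point this illustrates is why the platforms matter for all three assessment functions above: because the criteria and issuer are carried with the badge as open, machine-readable data, anyone who encounters the badge can trace what it claims and who stands behind the claim.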
Along the way, the Badges competition attracted a lot of attention. It certainly raised some eyebrows that the modestly funded program (initially just $2M) was announced by a cabinet-level official at a kickoff meeting attended by the heads of numerous other federal agencies. The competition and the idea of digital badges were mentioned in articles in the Wall Street Journal, the New York Times, and The Chronicle of Higher Education. This attention in turn led to additional interest and helped rekindle the simmering debate over extrinsic incentives. It also led many observers to ask the obvious question: “Will it work?”
This post reviews the reasons why I think the various awardees are going to succeed in their stated goals for using digital badges to assess learning.  In doing so I want to unpack what “success” means and suggest that the initiative will provide a useful new definition of “success” for learning initiatives.  I will conclude by suggesting that the initiative has already succeeded because it has fostered broader appreciation of the transformative functions of assessment.

Thursday, March 29, 2012

Encouraging reflection on practice while grading an artifact: A thought on badges


When I started teaching, I thought back to all of those teachers who made me write meaningless papers on which I put little effort yet received stellar grades, and I vowed not to be that teacher. I promised myself and my future students that we – as equals – would discuss the literature as relevant historical artifacts that are still being read because their authors still have something to say about today’s society.
But then I stepped into the classroom and faced opposition from colleagues who thought my methods would not provide students with the opportunities to master the knowledge in the standards. Worst of all, some teachers actually punished students who came from my class because they “knew” the students had not learned how to write or analyze, since I did not give traditional tests or grade in a traditional way.

Wednesday, March 21, 2012

Flipping Classrooms or Transforming Education?

Dan Hickey and John Walsh
Surely you have heard about it by now.  Find (or make) the perfect online video lecture for teaching particular concepts and have students watch it before class.  Then use the class for more interactive discussion.  In advance of presenting at Ben Motz’ Pedagogy Seminar at Indiana University on March 22, we are going to raise some questions about this practice.  We will then describe a comprehensive alternative that leads to a rather different way of using online videos, while still accommodating prevailing expectations for coverage, class structure, and accountability.

Compared to What?
A March 21 webinar by Jonathan Bergmann, hosted by eSchool News (and sponsored by Camtasia screen-capture software), described flipped classrooms as places where “educators are actively transferring the responsibility and ownership of learning from the teacher to the students.” That sounds pretty appealing when Bergmann compares it to “teachers as dispensers of facts” and students as “receptacles of information.”



Sunday, March 18, 2012

Some Things about Assessment that Badge Developers Might Find Helpful

Erin Knight, Director of Learning at the Mozilla Foundation, was kind enough to introduce me to Greg Wilson, the founder of the non-profit Software Carpentry. Mozilla is supporting their efforts to teach basic computer skills to scientists to help them manage their data and be more productive. Greg and I discussed the challenges and opportunities in assessing the impact of their hybrid mix of face-to-face workshops and online courses. More about that later.
Greg is as passionate about education as he is about programming. We discussed Audrey Watters’ recent tweet regarding “things every techie should know about education.” But the subject of “education” seemed too vast for me right now. Watching the debate unfold around the DML badges competition suggested something more modest and tentative. I have been trying to figure out how the existing research literature on assessment, accountability, and validity is (and is not) relevant to the funded and unfunded badge development proposals. In particular, I want to explore whether distinctions that are widely held in the assessment community can help clarify some of the concerns that people have raised about badges (nicely captured in David Theo Goldberg’s “Threading the Needle…” DML post). Greg’s inspiration resulted in six pages, which I managed to trim (!) back to the following, with a focus on badges. (An abbreviated version is posted at the HASTAC blog.)




Sunday, March 11, 2012

Initial Consequences of the DML 2012 Badges for Lifelong Learning Competition

Daniel T. Hickey

The announcement of the final awards in MacArthur’s Badges for Lifelong Learning competition on March 2 was quite exciting. It concluded one of the most innovative (and complicated) research competitions ever seen in education-related research. Of course there was some grumbling about the complexity and the reviewing process. And of course the finalists who did not come away with awards were disappointed. But has there ever been a competition without grumbling about the process or the outcome?

A Complicated Competition
The competition was complicated. There were over 300 initial submissions a few months back; a Teacher Mastery category was added at the last minute. Dozens of winners of Stage 1 (Content and Program) and Stage 2 (Design and Tech) went to San Francisco before the DML conference to pitch their ideas to a panel of esteemed judges.

Thursday, March 1, 2012

Open Badges and the Future of Assessment

Of course I followed the rollout of MacArthur’s Badges for Lifelong Learning competition quite closely. I have studied participatory approaches to assessment and motivation for many years.

EXCITEMENT OVER BADGES
While the Digital Media and Learning program committed a relatively modest sum (initially $2M), it generated massive attention and energy. I was not the only one who was surprised by the scope of the Badges initiative. In September 2011, one week before the launch of the competition, I was meeting with an education program officer at the National Science Foundation. I asked her if she had heard about the upcoming press conference/webinar. It turned out she had been reading the press release just before our meeting. She indicated that the NSF had learned about the competition and that many of the program officers were asking about it. Like me, many of them were impressed that Education Secretary Duncan and the heads of several other federal agencies were scheduled to speak at the launch event at the Hirshhorn Museum.

THE DEBATE OVER BADGES AND REWARDS
As the competition unfolded, I followed the inevitable debate over the consequences of “extrinsic rewards” like badges on student motivation. Thanks in part to Daniel Pink’s widely read book Drive, many worried that badges would trivialize deep learning and leave learners with decreased intrinsic motivation to learn. The debate played out nicely (and objectively) at the HASTAC blog via posts from Mitch Resnick and Cathy Davidson. I have been arguing in obscure academic journals for years that sociocultural views of learning call for an agnostic stance toward incentives. In particular, I believe that the negative impact of rewards and competition says more about the lack of feedback and opportunity to improve in traditional classrooms than about the rewards themselves. There is a brief summary of these issues in a chapter on sociocultural and situative theories of motivation that Education.com commissioned me to write a few years ago. One of the things I tried to do in that article and the other articles it references is show why rewards like badges are fundamentally problematic for constructionists like Mitch, and how newer situative theories of motivation promise to resolve that tension. One of the things that has been overlooked in the debate is that situative theories reveal the value of rewards without resorting to simplistic behaviorist theories of reinforcing desired behaviors and punishing undesired ones.

Saturday, February 4, 2012

School Creativity Indices: Measurement Folly or Overdue Response to Test-Based Accountability?


Daniel T. Hickey
A February 2 article in Education Week surveyed efforts in California, Oklahoma, and other states to gauge the opportunities for creative and innovative work. One of our main targets here at Remediating Assessment is pointing out the folly of efforts to standardize and measure “21st Century Skills.” So of course this caught our attention.
What might come of Oklahoma Gov. Mary Fallin’s search for a “public measurement of the opportunities for our students to engage in innovative work” or California’s proposed Creativity and Innovative Education index?
Mercifully, they don’t appear to be pushing the inclusion of standardized measures of creativity within high-stakes tests. Promisingly, proponents argue for a focus on “inputs” such as arts education, science fairs, and film clubs, rather than “outputs” like test scores, and for voluntary frameworks instead of punitive indices. Indeed, many of these efforts are described as a necessary response to the crush of high-stakes testing. Given the looming train wreck of “value-added” merit pay under Race to the Top, we predict that these efforts are not going to get very far. We will watch them closely and hope some good comes from them.
What is most discouraging is what the article never mentioned. The words “digital,” “network,” and “writing” don’t appear in it, and there is no consideration of the need to look at the contexts in which creativity is fostered. Schools continue to filter any website with user-generated content and obstruct the pioneering educators who appreciate that digital knowledge networks are an easy and important context for creative and knowledgeable engagement.

Thursday, February 2, 2012

Finnish Lessons: Start a Conversation


Rebecca C. Itow and Daniel T. Hickey
In the world of education, we often talk of holding ourselves to, and adhering to, “high standards,” and in order to ensure we are meeting these high standards, students take carefully written standardized exams at the state and national levels. These tests are then used to determine the efficacy of our schools, curricula, and teachers. Now, with more and more states tying these scores to value-added teacher evaluations, the tests are having more impact than ever. But being so tied to the standards can be a detriment to classroom learning and to national educational success.
Dr. Pasi Sahlberg of Finland spoke at Indiana University on January 20, 2012 to discuss accounts of Finnish educational excellence in publications like The Atlantic and the New York Times, and to promote his new book, Finnish Lessons: What Can the World Learn from Educational Change in Finland? One of his main points was that the constant testing and accountability to which U.S. students and teachers are subjected do not raise scores. He argued that frequent testing actually lowers scores, because teachers must focus on tests that sample numerous little things rather than delving more deeply into a smaller number of topics.

Saturday, December 17, 2011

Another Misuse of Standardized Tests: Color Coded ID Cards?


An October 4, 2011 Orange County Register article reporting a California high school’s policy of color-coding student ID cards based on their performance on state exams raises several real concerns, including student privacy. Anthony Cody, in his blog post “Color Coded High School ID Cards Sort Students By Test Performance,” published on October 6, 2011 in Education Week Teacher, writes that “[s]tudents [at a La Palma, CA high school] who perform at the highest levels in all subjects receive a black or platinum ID card, while those who score a mix of proficient and advanced receive a gold card. Students who score ‘basic’ or below receive a white ID card.” These cards come with privileges and are meant to increase motivation to perform well on state standardized exams. Followers’ comments and concerns posted to the blog address “fixing identity” and the idea conveyed by such testing that “learning and achievement isn't reward in itself. … You're not worth anything unless WE tell you are based on this one metric.” These are valid concerns, but the larger issue being highlighted here is the misuse and misapplication of the standardized tests themselves.