Wednesday, June 20, 2012

Summer 2012 Hackjam: The Wiki


Rebecca Itow and Dan Hickey
In the fall of 2011, we decided to put on a Hackjam in conjunction with the Monroe County Public Library. We adapted the curriculum outlined in the Hacktivity Kit to fit our needs, and partnered with ForAllSystems to implement a badging system for the event. You can read an earlier post giving an overall account of the event here. We were particularly interested in aligning the Hackjam with a Common Core English standard on multimodal writing. We also wanted to make sure that all of the hackers learned how to discuss coding and writing for the web in networked spaces, since those were the places where they would want to go for help in the future.

Why Use a Wiki?
In adapting and designing the curriculum, it became readily apparent that, if we were going to have the participants hacking pages and reflecting on their learning, they would need a central place to do this. We began thinking that the best space would be a wiki, because a wiki is meant to be edited by multiple users while each page can be customized to individual participants' personalities and needs. Rebecca had successfully used Wikispaces with her 9th and 11th grade English students in the past. Her experience in her own classroom, combined with her participation in Dan's online classes, where "wikifolios" housed work and promoted discussion, convinced us that wikis were the right space for the type of engagement we wanted to foster.
Rebecca made a simple wiki on Wikispaces, using the homepage as the place to access general information such as links to the tools and websites that would be used throughout the Hackjam.

Wednesday, June 13, 2012

Three Firsts: Bloomington’s First Hackjam, ForAllBadges’ App, and Participatory Assessment + Hackasaurus


Dan Hickey and Rebecca Itow
On Thursday, June 7, 2012, the Center for Research on Learning and Technology at Indiana University, in conjunction with the Monroe County Public Library (MCPL) in Bloomington, IN, put on a Hackjam for local youth. The six-hour event was a huge success. Students were excited and engaged throughout the day as they used Hackasaurus' web editing tool X-Ray Goggles to "hack" Bloomington's Herald Times. The hackers learned some HTML & CSS, developed some web literacies, and learned about writing in different new media contexts. We did some cool new stuff that we think others will find useful and interesting. We are going to summarize what we did in this post. We will elaborate on some of these features in subsequent posts, and try to keep this one short and readable.
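
For readers who have not seen X-Ray Goggles, here is a minimal illustration (not from the event itself) of the kind of remix the hackers were making. The tool lets you select an element on a live page, such as a headline, and edit its markup before publishing the remixed page. The headline text, class name, and styling below are invented for the example rather than taken from the actual event:

    <!-- Original headline element, roughly as it might appear on the newspaper's page -->
    <h1 class="headline">City Council Approves Annual Budget</h1>

    <!-- A participant's remixed version: new text plus a small inline CSS tweak -->
    <h1 class="headline" style="color: darkred;">Local Teens Rewrite the Front Page at Library Hackjam</h1>

Edits like this give an immediate, visible payoff while introducing the basic anatomy of HTML tags, attributes, and CSS properties.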

WHY DID WE DO A HACKJAM?
We agreed to do a Hackjam with the library many months ago. MCPL Director Sara Laughlin had contacted us in 2011 about partnering with them on a MacArthur/IMLS proposal to bring some of Nicole Pinkard’s YouMedia programming to Bloomington. We concluded that a more modest collaboration (like a Hackjam) was needed to lay the groundwork for something as ambitious as YouMedia.

Our ideas for extending Mozilla's existing Hacktivity Kit were first drafted in a proposal to the MacArthur Foundation's Badges for Lifelong Learning initiative. Hackasaurus promised to be a good context to continue our efforts to combine badges and participatory assessment methods. While our proposal was not funded, we decided to do it anyway. MCPL initially considered making the Hackjam part of the summer reading program sponsored by the local school system. Even though we were planning to remix the curriculum to make it more "school friendly," some school officials could not get past the term "hacking."


Sunday, June 10, 2012

Digital Badges as “Transformative Assessment”

By Dan Hickey
The MacArthur Foundation's Badges for Lifelong Learning competition generated immense interest in using digital badges to motivate and acknowledge informal and formal learning. The 366 proposals submitted in the first round presented a diverse array of functions for digital badges. As elaborated in a prior post, the various proposals used badges to accomplish one or more of the following assessment functions:

Traditional summative functions. This is using badges to indicate that the earner previously did something or knows something. This is what the educational assessment community calls assessment of learning.

Newer formative functions. This is where badges are used to enhance motivation, feedback, and discourse for individual badge earners and broader communities of earners. This is what is often labeled assessment for learning.

Groundbreaking transformative functions. This is where badges transform existing learning ecosystems or allow new ones to be created. These assessment functions impact both badge earners and badge issuers, and may be intentional or incidental. I believe we should label this assessment as learning.

This diversity of assessment functions was maintained among the 22 badge content awardees who were ultimately funded to develop content and issue badges, as well as the various entities associated with the HIVE collectives in New York and Chicago, who were funded outside of the competition to help their members develop and issue badges. These awardees will work with one of the three badging platform awardees, who are responsible for creating open (i.e., freely available) systems for issuing digital badges.
Along the way, the Badges competition attracted a lot of attention. It certainly raised some eyebrows that the modestly funded program (initially just $2M) was announced by a cabinet-level official at a kickoff meeting attended by the heads of numerous other federal agencies. The competition and the idea of digital badges were mentioned in articles in the Wall Street Journal, New York Times, and The Chronicle of Higher Education. This attention in turn generated additional interest and helped rekindle the simmering debate over extrinsic incentives. It also led many observers to ask the obvious question: "Will it work?"
This post reviews the reasons why I think the various awardees are going to succeed in their stated goals for using digital badges to assess learning.  In doing so I want to unpack what “success” means and suggest that the initiative will provide a useful new definition of “success” for learning initiatives.  I will conclude by suggesting that the initiative has already succeeded because it has fostered broader appreciation of the transformative functions of assessment.

Thursday, March 29, 2012

Encouraging reflection on practice while grading an artifact: A thought on badges


When I started teaching, I thought back to all of those teachers who made me write meaningless papers into which I put little effort and for which I received stellar grades, and I vowed not to be that teacher. I promised myself and my future students that we – as equals – would discuss the literature as relevant historical artifacts that are still being read because their authors still have something to say about today's society.
But then I stepped into the classroom and faced opposition from colleagues who thought my methods would not provide students with the opportunities to master the knowledge in the standards. Worst of all, some teachers actually punished students who came from my class because they "knew" the students had not learned how to write or analyze, since I did not give traditional tests or grade in a traditional way.

Wednesday, March 21, 2012

Flipping Classrooms or Transforming Education?

Dan Hickey and John Walsh
Surely you have heard about it by now.  Find (or make) the perfect online video lecture for teaching particular concepts and have students watch it before class.  Then use the class for more interactive discussion.  In advance of presenting at Ben Motz’ Pedagogy Seminar at Indiana University on March 22, we are going to raise some questions about this practice.  We will then describe a comprehensive alternative that leads to a rather different way of using online videos, while still accommodating prevailing expectations for coverage, class structure, and accountability.

Compared to What?
A March 21 webinar by Jonathan Bergman, hosted by e-School News (and sponsored by Camtasia web-capture software), described flipped classrooms as places where "educators are actively transferring the responsibility and ownership of learning from the teacher to the students." That sounds pretty appealing when Bergman compares it to "teachers as dispensers of facts" and students as "receptacles of information."



Sunday, March 18, 2012

Some Things about Assessment that Badge Developers Might Find Helpful

Erin Knight, Director of Learning at the Mozilla Foundation, was kind enough to introduce me to Greg Wilson, the founder of the non-profit Software Carpentry. Mozilla is supporting their efforts to teach basic computer skills to scientists to help them manage their data and be more productive. Greg and I discussed the challenges and opportunities in assessing the impact of their hybrid mix of face-to-face workshops and online courses. More about that later.
Greg is as passionate about education as he is about programming. We discussed Audrey Watters' recent tweet regarding "things every techie should know about education." But the subject of "education" seemed too vast for me right now. Watching the debate unfold around the DML badges competition suggested something more modest and tentative. I have been trying to figure out how the existing research literature on assessment, accountability, and validity is (and is not) relevant to the funded and unfunded badge development proposals. In particular, I want to explore whether distinctions that are widely held in the assessment community can help clarify some of the concerns that people have raised about badges (nicely captured in David Theo Goldberg's "Threading the Needle…" DML post). Greg's inspiration resulted in six pages, which I managed to trim (!) back to the following, with a focus on badges. (An abbreviated version is posted at the HASTAC blog.)




Sunday, March 11, 2012

Initial Consequences of the DML 2012 Badges for Lifelong Learning Competition

Daniel T. Hickey

The announcement of the final awards in MacArthur’s Badges for Lifelong Learning competition on March 2 was quite exciting. It concluded one of the most innovative (and complicated) research competitions ever seen in education-related research. Of course there was some grumbling about the complexity and the reviewing process. And of course the finalists who did not come away with awards were disappointed. But has there ever been a competition without grumbling about the process or the outcome?

A Complicated Competition
The competition was complicated. There were over 300 initial submissions a few months back; a Teacher Mastery category was added at the last minute. Dozens of winners of Stage 1 (Content and Program) and Stage 2 (Design and Tech) went to San Francisco before the DML conference to pitch their ideas to a panel of esteemed judges.

Thursday, March 1, 2012

Open Badges and the Future of Assessment

Of course I followed the roll out of MacArthur’s Badges for Lifelong Learning competition quite closely. I have studied participatory approaches to assessment and motivation for many years.  

EXCITEMENT OVER BADGES
While the Digital Media and Learning program committed a relatively modest sum (initially $2M), it generated massive attention and energy. I was not the only one who was surprised by the scope of the Badges initiative. In September 2011, one week before the launch of the competition, I was meeting with an education program officer at the National Science Foundation. I asked her if she had heard about the upcoming press conference/webinar. Turns out she had been reading the press release just before our meeting. She indicated that the NSF had learned about the competition and many of the program officers were asking about it. Like me, many of them were impressed that Education Secretary Duncan and the heads of several other federal agencies were scheduled to speak at the launch event at the Hirshhorn Museum.

THE DEBATE OVER BADGES AND REWARDS
As the competition unfolded, I followed the inevitable debate over the consequences of "extrinsic rewards" like badges on student motivation. Thanks in part to Daniel Pink's widely read book Drive, many worried that badges would trivialize deep learning and leave learners with decreased intrinsic motivation to learn. The debate played out nicely (and objectively) at the HASTAC blog via posts from Mitch Resnick and Cathy Davidson. I have been arguing in obscure academic journals for years that sociocultural views of learning call for an agnostic stance towards incentives. In particular, I believe that the negative impact of rewards and competition says more about the lack of feedback and opportunity to improve in traditional classrooms than about the rewards themselves. There is a brief summary of these issues in a chapter on sociocultural and situative theories of motivation that Education.com commissioned me to write a few years ago. One of the things I tried to do in that article and the other articles it references is show why rewards like badges are fundamentally problematic for constructionists like Mitch, and how newer situative theories of motivation promise to resolve that tension. One of the things that has been overlooked in the debate is that situative theories reveal the value of rewards without resorting to simplistic behaviorist theories of reinforcing and punishing desired behaviors.

Saturday, February 4, 2012

School Creativity Indices: Measurement Folly or Overdue Response to Test-Based Accountability?


Daniel T. Hickey
A February 2 article in Education Week surveyed efforts in California, Oklahoma, and other states to gauge the opportunities for creative and innovative work. One of our main targets here at Remediating Assessment is pointing out the folly of efforts to standardize and measure “21st Century Skills.” So of course this caught our attention.
What might come of Oklahoma Gov. Mary Smith’s search for a “public measurement of the opportunities for our students to engage in innovative work” or California’s proposed Creativity and Innovative Education index?
Mercifully, they don’t appear to be pushing the inclusion of standardized measures of creativity within high-stakes tests. Promisingly, proponents argue for a focus on “inputs” such as arts education, science fairs, and film clubs, rather than “outputs” like test scores, and for voluntary frameworks instead of punitive indexes. Indeed, many of these efforts are described as a necessary response to the crush of high-stakes testing. Given the looming train wreck of “value-added” merit pay under Race to the Top, we predict that these efforts are not going to get very far. We will watch them closely and hope some good comes from them.
What is most discouraging is what the article never mentioned. The words “digital,” “network,” and “writing” don’t appear in the article, and there is no consideration of the need to look at the contexts in which creativity is fostered. Schools continue to filter any website with user-generated content, and they obstruct the pioneering educators who appreciate that digital knowledge networks are an easy and important context for creative and knowledgeable engagement.

Thursday, February 2, 2012

Finnish Lessons: Start a Conversation


Rebecca C. Itow and Daniel T. Hickey
In the world of education, we often talk of holding ourselves to “high standards,” and to ensure that we are meeting these high standards, students take carefully written standardized exams at the state and national levels. These tests are then used to determine the efficacy of our schools, curriculum, and teachers. Now, with more and more states tying these scores to value-added evaluations of teaching, these tests are having more impact than ever. But being so tied to the standards can be a detriment to classroom learning and national educational success.
Dr. Pasi Sahlberg of Finland spoke at Indiana University on January 20, 2012 to discuss accounts of Finnish educational excellence in publications like The Atlantic and the New York Times, and to promote his new book, Finnish Lessons: What Can the World Learn from Educational Change in Finland? One of his main points was that the constant testing and accountability to which U.S. students and teachers are subjected do not raise scores. He argued that frequent testing lowers scores because teachers must focus on a test that captures numerous little things, rather than delving more deeply into a smaller number of topics.

Saturday, December 17, 2011

Another Misuse of Standardized Tests: Color Coded ID Cards?


An October 4, 2011 Orange County Register article reporting a California high school’s policy of color-coding student ID cards based on performance on state exams raises several real concerns, including student privacy. In his blog post “Color Coded High School ID Cards Sort Students By Test Performance,” published on October 6, 2011 in Education Week Teacher, Anthony Cody writes that “[s]tudents [at a La Palma, CA high school] who perform at the highest levels in all subjects receive a black or platinum ID card, while those who score a mix of proficient and advanced receive a gold card. Students who score ‘basic’ or below receive a white ID card.” These cards come with privileges and are meant to increase motivation to perform well on state standardized exams. Commenters on the blog raise concerns about “fixing identity” and about how such testing conveys the idea that “learning and achievement isn't reward in itself. … You're not worth anything unless WE tell you are based on this one metric.” These are valid concerns, but the larger issue being highlighted here is the misuse and misapplication of the standardized tests themselves.

Tuesday, December 13, 2011

Introducing Rebecca

It has been just about six months since I closed up my classroom in sunny Southern California, picked up my life, and moved to Bloomington, Indiana to pursue my PhD in Learning Sciences. I can say that a year ago I certainly did not think I would be posting on a blog about Re-Mediating Assessment. I didn't think I would be writing up my research or helping teachers develop and discuss curriculum that fosters more participation and learning in their classrooms. But here I am.

In fact, a year ago I was celebrating Banned Books Week with my AP Language and Composition and Honors 9 English classes, preparing my Mock Trial team for another year of success, starting a competitive forensics team, chairing the AP department, and generally trying to convince my colleagues that my lack of “traditional” tests and use of technology in my almost-paperless classroom were not only good ideas, but actually enhanced learning. A year ago I was living life in sunny Southern California as normal ... then I decided to take the GRE. And I am so glad I did. It has been an interesting journey getting to this moment.

I never thought I would become a teacher. I have an AA in Dance, an AA in Liberal Arts, and a BA in Theatre Directing, but I found that working in the top 99-seat theatre in Los Angeles left me wanting more. When I went back for my MAEd and teaching credential, I was the only one who was surprised. Teaching students, I learned, is very much like directing actors: we want them to come to conclusions, but they need to come to them in their own way in order for the outcome to be authentic.

I have worked as a choreographer, director, and actor. I have taught 10-minute playwriting and directed festivals, as well as developed curriculum around this theme. I studied Tourette Syndrome under Dr. David Commings at the Beckman Research Institute at City of Hope, and informally counseled TS students at the high school. I am a classical dancer and recently picked up circus arts as a hobby.

Each of these very different interests contributed to my teaching. We explored literature through discussion, and often took on the roles of the characters to discuss what a piece said about the society in which it was written and its relevance today. Quite often administrators walked in while students were debating the ethics of the latest redaction of Adventures of Huckleberry Finn or discussing Fitzgerald’s symbolism while dressed to the nines at a Gatsby picnic. Still, I came up against resistance when presenting my methods and ideas to my colleagues; they didn't think that I was teaching if I wasn't giving traditional tests. I had too many A's and too few F's. I knew that I could effect greater change, but I wasn't sure how. Then the opportunity to come to Indiana University and work with Dan Hickey arose, and I had to take it.

Now I am in Bloomington, reflecting on a semester of writing, learning, studying, and creating curriculum. I have immersed myself in the school and culture and work here, and have found smaller networks of people with whom I can engage, play, think, debate, and grow. I am excited and encouraged by the adventures that await in this chapter, and am looking forward.

Monday, December 12, 2011

RMA is back!

After an extended hiatus, Re-Mediating Assessment is back.  In the meantime, lots has happened.  Michelle Honeyford completed her PhD and joined the faculty at the University of Manitoba in Winnipeg.  Jenna McWilliams has moved on to Joshua Danish's lab and is focusing more directly on critical theory in new media contexts.  She renamed her blog too.

Lots of other things have happened that my students and I will be writing about.  I promise to write shorter posts and focus more on commentary regarding assessment-related events.  I have a bunch of awesome new doctoral students and collaborators who are lined up to start posting regularly about assessment-related issues.


For now I want to let everybody know that today is the official release day of a new volume on formative assessment that Penny Noyce and I edited.  It has some great chapters.  On the Harvard Education Press website announcing the book, my assessment hero Dylan Wiliam said:
"This is an extraordinary book. The chapters cover practical applications of formative assessment in mathematics, science, and language arts, including the roles of technology and teachers’ professional learning. I found my own thinking about formative assessment constantly being stretched and challenged. Anyone who is involved in education will find something of value in this book."
Lorrie Shepard's foreword is a nice update on the state of assessment.  David Foster writes about using the tools from Mathematics Assessment Resources Services in the Silicon Valley Mathematics Initiative.  Dan Damelin and Kimberle Koile from the Concord Consortium write about using formative assessment with cutting-edge technology.  (And we appreciate that the Concord Consortium is featuring the book on their website.)

For me the best part was the chapter from Paul Horwitz of the Concord Consortium.  Paul wrote a nice review of his work with ThinkerTools and GenScope and the implications of that work for assessment.  Paul's chapter provided a nice context for me to summarize my ten-year collaboration with him around GenScope.  That chapter is perhaps the most readable description of participatory assessment that I have managed to write.  A much more detailed account of our collaboration was just accepted for publication by the Journal of the Learning Sciences and will appear in 2012.

I promise you will be hearing from us regularly starting in the new year.  We hope you will comment and share this with others.  And if you have posts or links that you think we should comment on, please let us know.  I will let the rest of the team introduce themselves and add their bios to the blog as they start posting.

Thursday, April 15, 2010

short-sighted and socially destructive: thoughts on Ning's decision to cut free services

Lord knows I'm not a huge fan of Ning, the social networking tool that allows users to create and manage online networks. I find the design bulky and fairly counterintuitive; modifying a network to meet your group's needs is extremely challenging; and Ning has made it extremely difficult or impossible for users to control, modify, or move network content. Despite the popularity of Ning's free, ad-supported social networks among K-16 educators, the ads that go along with the free service have tended toward the racy or age-inappropriate.

But given the Ning trifecta (it's free, getting students signed up is fast and fairly easy, and lots of teachers are using it), I've been using Ning with researchers and teachers for the last two years. So the recent news that Ning will be switching to paid-only membership is obnoxious for two reasons.

The first reason is the obvious: I don't want to pay--and I don't want the teachers who use Ning to have to pay, either. One of the neat things about Ning is the ability to build multiple social networks--maybe a separate one for each class, or a new one each semester, or even multiple networks for a single group of students. In the future, each network will require a monthly payment, which means that most teachers who do decide to pay will stick to a much smaller number of networks. This means they'll probably erase content and delete members, starting fresh each time. The enormous professional development potential of having persistent networks filled with content, conversations, and student work suddenly disappears.

Which brings me to my second point: anyone who's currently using Ning's free services will be forced to either pay for an upgrade or move all of their material off of Ning. This is tough for teachers who have layers upon layers of material posted on various Ning sites, and it's incredibly problematic for any researcher who's working with Ning's free resources. If we decide to leave Ning for another free network, we'll have to figure out some systematic way of capturing every single thing that currently lives on Ning, lest it disappear forever.

Ning's decision to phase out free services amounts to a paywall, pure and simple. Instead of putting limits on information, as paywalls for news services do, this paywall puts limits on participation. In many ways, this is potentially far worse, far more disruptive and destructive, far more short-sighted than any information paywall could be.

If Ning was smart, it would think a little more creatively about payment structures. What about offering unlimited access to all members of a school district, for a set fee paid at the district level? What about offering an educator account that provides unlimited network creation for a set (and much lower) fee? What about improving the services Ning provides to make it feel like you'd be getting what you paid for?

More information on Ning's decision to go paid-only will be released tomorrow. For now, I'm working up a list of free social networking tools for use by educators. If you have any suggestions, I'd love to hear them.

Update, 4/15/10, 6:48 p.m.: Never one to sit on the sidelines in the first place, Alec Couros has spearheaded a gigantic, collaborative googledoc called "Alternatives to Ning." As of this update, the doc keeps crashing because of the number of collaborators trying to help build this thing (the last time I got into it, I was one of 303 collaborators), so if it doesn't load right away, keep trying.

Friday, April 2, 2010

Diane Ravitch Editorial on the Failure of NCLB

I have long admired Diane Ravitch. While I have disagreed with her on fundamental philosophical grounds, her arguments have always been grounded in the realities of schooling--even if those were the realities of conservative parents and stakeholders.

Now the evidence has shown what some of us predicted and what many of us have known for years: that external tests of basic skills and punitive sanctions were just going to lead to illusory gains (if any) and undermine other valued outcomes. Her editorial in today's (April 2) Washington Post is very direct. While I disagree with her on where to go from here, I applaud her for using her audience and her reputation to help convince a lot of stakeholders who have found one reason or another to ignore the considerable evidence against continuing NCLB. As Jim Popham has been saying for years, all of the improvement schools could make on test scores already happened between 1990 and 2000, once newspapers began publishing those scores.



Certainly this will factor into the pending NCLB reauthorization. Perhaps Indiana's Republican leadership will read this and think twice about going forward with the two core ideas from their Race to the Top reform proposal, even though it was not funded. The twin shells in their reform shotgun are "Pay for Performance" merit pay for Indiana teachers based on basic skills test scores, and "Value Added" growth modeling that ranks teachers based on how much "achievement" they instilled in their kids. For reasons Ravitch summarizes, and other concerns outlined in a recent letter and report by the National Academy, the recoil from pulling these two triggers at once might be just enough to blow our schools and our children pretty far back into the 20th century.

Tuesday, March 9, 2010

Video of Barry McGaw on Assessment Strategies for 21st Century Skills (Measurement Working Group)

I just came across a video of a keynote by Barry McGaw at last month’s Learning and Technology World Forum. McGaw heads the Intel/Microsoft/Cisco initiative known as Assessment and Teaching of 21st Century Skills. This high-powered group is aiming to transform the tests used in large-scale national comparisons, and education more broadly. Their recent white papers are a must-read for anyone interested in assessment or new proficiencies. McGaw’s video highlights aspects of this effort that challenge conventional wisdom about assessment. In this post I focus on McGaw’s comments on the efforts of the Measurement Working Group. In particular, they point to (1) the need to iteratively define assessments and the constructs they aim to capture, and (2) the challenge of defining developmental models of these new skills.


Iterative Design of Assessments and Constructs
McGaw highlighted that the Measurement Working Group (led by Mark Wilson) emphasized the need for iterative refinement in the development of new measures. Various groups spent much of the first decade of the 21st century debating how these proficiencies should be defined and organized. In such an abstract context, this definition process could easily consume the second decade as well. Wilson’s group argues that the underlying constructs being assessed must be defined and redefined in the context of the assessment development process. Of this, McGaw said:

You think about it first, you have a theory about what you want those performances to measure. You then begin to develop ways of capturing information about those skills. But the data themselves give you information about the definition, and you refine the definition. This is the important point of pilot work with these assessment devices. And not just giving the tests to students, but giving them to students and seeing what their responses are, and discovering why they gave that response. And not just in the case where it is the wrong response but in the case where it is the correct response, so that you get a better sense of the cognitive processes underlying the solution to the task.

In other words, when dealing with these new proficiencies, you can’t just have one group define standards and definitions and then pitch them to the measurement group. Because of their highly contextualized nature, we can’t just pitch standards to testing companies as has been the case with hard skills for years. This has always nagged at me about previous efforts (e.g., the Partnership for 21st Century Skills), which seemed to overlook both the issue and the challenge it presents. Maybe now we can officially decide to stop trying to define what assessment scholar Lorrie Shepard so aptly labeled “21st Century Bla Bla Bla.”

The Lack of Learning Progression Models
McGaw also reiterated the concerns of the Measurement Working Group over the lack of consensus about the way these new proficiencies develop. There is a strong consensus about the development of many of the hard skills in math, science, and literacy, and these insights are crucial for developing worthwhile assessments. I learned about this firsthand while developing a performance assessment for introductory genetics with Ann Kindfield at ETS. Ann taught me the difference between the easier cause-to-effect reasoning (e.g., completing the Punnett square) and the more challenging effect-to-cause reasoning (e.g., using a pedigree chart to infer mode of inheritance). We used these and other distinctions she uncovered in her doctoral studies to create a tool that supported tons of useful studies on teaching inheritance in biology classes. Other well-known work on “learning progressions” includes Ravit Duncan’s work in molecular genetics and Doug Clements’ work in algebra. In each case it took multiple research teams many years to reach consensus about the way that knowledge typically developed.

Wilson and McGaw are to be commended for reminding us how difficult it is going to be to agree on the development of these much softer 21st century proficiencies. They are by their very definition situated in more elusive social and technological contexts. And those contexts are evolving. Quickly. Take, for example, judging the credibility of information on the Internet. In the 90s this meant websites. In the past decade it came to mean blogs. Now I guess it includes Twitter. (There is a great post about this at MacArthur’s Spotlight Blog, as well as a recent CBC interview about fostering new media literacies, featuring my student Jenna McWilliams.)

Consider that I taught my 11-year-old son to look at the history page on Wikipedia to help distinguish between contested and uncontested information in a given entry. He figured out on his own how to verify the credibility of suggestions for modding his Nerf guns at nerfhaven.com and YouTube. Now imagine you are ETS, where it inevitably takes a long time and buckets of money to produce each new test. They already had to replace their original iSkills test with the iCritical Thinking test. From what I can tell, it is still a straightforward test of information from a website. Lots of firms are starting to market such tests. Some places (like Scholastic’s Expert21) will also sell you curriculum and classroom assessments that will teach students to pass the test without ever actually going on the Internet. Of course ETS knows that it can’t sell curriculum if it wants to maintain its credibility. But I am confident that as soon as organizations start attaching meaningful consequences to the test, social networks will spring up telling students exactly how to answer the questions.

There is lots of other great stuff in the Measurement white paper. Much of it is quite technical. But I applaud their sobering recognition of the many challenges that these new proficiencies pose for large-scale measurement. And those challenges only get harder when these new tests are used for accountability purposes.

Next up: McGaw’s comments about the Classroom Environments and Formative Evaluation working group.

Friday, January 22, 2010

Can We Really Measure "21st Century" Skills?

The members of the 21st Century Assessment Project were asked a while ago to respond to four pressing questions regarding assessment of “21st Century Skills.” These questions had come via program officers at leading foundations, including Connie Yowell at MacArthur’s Digital Media and Learning Initiative, which funds our Project. I am going to launch my efforts to blog more during my much-needed sabbatical by answering the first question, with some help from my doctoral student Jenna McWilliams.

Question One: Can critical thinking, problem solving, collaboration, communication and "learning to learn" be reliably and validly measured?

As Dan Koretz nicely illustrated in the introduction to his 2008 book, Measuring Up: What Educational Testing Really Tells Us, the answers to questions about educational testing are never simple. We embrace a strongly situative and participatory view of knowing and learning, which is complicated to explain to those who do not embrace it. But I have training in psychometrics (and completed a postdoc at ETS) and have spent most of my career refining a more pragmatic stance that treats educational accountability as inevitable. When it comes to assessment, I am sort of a born-again situativity theorist. Like folks who have newly found religion and want to tell everybody how Jesus helped them solve all of the problems they used to struggle with, I am on a mission to tell everyone how situative approaches to measurement can solve some nagging problems that they have long struggled with.

In short, no, we don’t believe we can measure these things in ways that are reliable and yield scores that are valid evidence of what individuals are capable of in this regard. These are actually “practices” that can most accurately be interpreted using methods that account for the social and technological contexts in which they occur. In this sense, we agree with skeptics like Jim Greeno and Melissa Gresalfi, who argued that we can never really know what students know. This point riffs on the title of the widely cited National Research Council report of the same name that Jim Pellegrino (my doctoral advisor) led. And as Val Shute just reminded me, Messick has reminded us forever that measurement never really gets directly at what somebody knows, but instead provides evidence about what they seem to know. My larger point here is my concern about what happens with these new proficiencies in schools and in tests when we treat them as individual skills rather than social practices. In particular, I worry about what happens to both education and evidence when students, teachers, and schools are judged according to tests of these new skills.

However, there are lots of really smart folks who have a lot of resources at their disposal who think you can measure them. This includes most of my colleagues in the 21st Century Assessment Project. For example, check out Val Shute’s great article in the International Journal of Learning and Media. Shute also has an edited volume on 21st Century Assessment coming out shortly. Likewise, Dan Schwartz has a tremendous program of research building on his earlier work with John Bransford on assessments as preparation for future learning. Perhaps the most far-reaching is Bob Mislevy’s work on evidence-centered design. And of course there is the new Intel-Microsoft-Cisco partnership, which is out to change the face of national assessments and therefore the basis of international comparisons. I will elaborate on these examples in my next post, as that is actually the second question we were asked to answer. But first let me elaborate on why I believe that both the assessment (of what individuals understand) and the measurement (of what groups of individuals have achieved) of 21st Century skills are improved if we assume that we can never really know what students know.

To reiterate, from the perspective of contemporary situated views of cognition, all knowledge and skills are primarily located in the social context. This is easy to ignore when focusing on traditional skills like reading and math that can be more meaningfully represented as skills that individuals carry from context to context. This assumption is harder to ignore with the newer proficiencies that everyone is so concerned with. This is especially the case with explicitly social practices like collaborating and communicating, since these can't even be practiced in isolated contexts. As we argued in our chapter in Val’s book, we believe it is dangerously misleading to even use the term skills in this regard. We elected to use the term proficiencies because that term is broad enough to capture the different ways that we think about them. As 21st Century Assessment Project leader Jim Gee once put it:
Abstract representations of knowledge, if they exist at all, reside at the end of long chains of situated activity.
However, we are also confident that some of the mental “residue” that gets left behind when people engage meaningfully in socially situated practices can certainly be assessed reliably and used to make valid interpretations about what individuals know. While we think these proficiencies are primarily social practices, that does not exclude recognizing the secondary “echoes” of participating in those practices. This can be done with performance assessments and other extended activities that provide some of that context and then ask individuals or groups to reason, collaborate, communicate, and learn. If such assessments are created carefully, and individuals have not been directly trained to solve the problems on the assessments, it is possible to obtain reliable scores that are valid predictions of how well individuals can solve problems, communicate, collaborate, and learn in new social and technological contexts. But this continues to be difficult, and the actual use of such measures raises serious validity issues. Because of these issues (as elaborated below), we think this work might best be characterized as “guessing what students know.”

More to the point of the question, we believe that only a tiny fraction of the residue from these practices can be measured using conventional standardized multiple-choice tests that provide little or no context. For reasons of economy and reliability, such tests are likely to remain the mainstay of educational accountability for years to come. Of course, when coupled with modern psychometrics, such tests can be extremely reliable, with little score variation across testing times or versions. But there are serious limitations in what sorts of interpretations can be validly drawn from the resulting scores. In our opinion, scores on any standardized test of these new skills are only valid evidence of proficiency when they are
a) used to make claims about aggregated proficiencies across groups of individuals;
b) used to make claims about changes over longer times scales, such as comparing the consequences of large scale policy decisions over years; and
c) isolated from the educational environment which they are being used to evaluate.
Hence, we are pretty sure that national and international assessments like NAEP and PISA should start incorporating such proficiencies. But we have serious concerns about using these measures to evaluate individual proficiencies in high-stakes sorts of ways. If such tests are going to continue to be used for any high-stakes decisions, they may best be left to more conventional literacies, numeracies, and knowledge of conventional content domains, which are less likely to be compromised.

I will say that I am less skeptical about standardized measures of writing. But they are about the only standardized assessments left in wide use that actually require students to produce something. Such tests will continue to be expensive, and standardized scoring (by humans or machines) requires very peculiar writing formats. But I think the scores that result are valid for making inferences about individual proficiency in written communication more broadly, as was implied by the original question. They are actually performance assessments, and as such can bring in elements of different contexts. This is particularly true if we can relax some of the demands for reliability (which require very narrowly defined prompts and typically get compromised when writers get creative and original). Given that my response to the fourth question will elaborate on my belief that written communication is probably the single most important “new proficiency” needed for economic, civic, and intellectual engagement, I think that improved testing of written communication will be the one focus of assessment research that yields the most impact on learning and equity.

To elaborate on the issue of validity, it is worth reiterating that validity is a property of the way scores are interpreted. Unlike reliability, validity is never a property of the measure. In other words, validity always references the claims that are being supported by the evidence. As Messick argued in the 90s, the validity of any interpretation of scores also depends on the similarity between prior education and training contexts and the assessment/measurement context. This is where things get messy very quickly. As Kate Anderson and I argued in a chapter in an NSSE Yearbook on Evidence and Decision Making edited by Pam Moss, once we attach serious consequences to assessments or tests for teachers or students, the validity of the resulting scores will get compromised very quickly. This is actually less of a risk with traditional proficiencies and traditional multiple-choice tests, because these tests can draw from massive pools of items that are aligned to targeted standards. In these cases, the test can be isolated from any preparation empirically, by randomly sampling from a huge pool of items. As we move to newer performance measures of more extended problem solving and collaboration, there necessarily are fewer and fewer items, and the items become more and more expensive to develop and validate. If teachers are directly teaching students to solve the problems, then it becomes harder and harder to determine how much of an individual score reflects real proficiency and how much reflects familiarity with the assessment format (what Messick called construct-irrelevant variance). The problem is that it is impossible to ever know how much of the proficiency is “real.” Even in closely studied contexts, different observers are sure to differ in their judgments of validity, a point made most cogently in Michael Kane’s discussions of validity as interpretive argument.

Because of these validity concerns, we are terrified that the publishers of these tests of “21st Century Skills” are starting to market curricula and test preparation materials for those same proficiencies. Because of the nature of these new proficiencies, these new “integrated” systems raise even more validity issues than the ones that emerged under NCLB for traditional skills. Another big validity issue we raised in our chapter concerns the emergence of socially networked cheating. Once these new tests are used for high-stakes decisions (especially for college entrance), social networks will emerge to tell students how to solve the kinds of problems that are included on the tests. (This has already begun to happen, as in the "This is SPARTA!" prank on the English Advanced Placement test that we wrote about in our chapter, and in a more recent "topic breach" wherein students in Winnipeg leaked the essay topic for their school's 12th grade English exam.)

Of course, proponents of these new tests will argue that learning how to solve the kinds of problems that appear on their tests is precisely what they want students to be doing. And as long as you adopt a relatively narrow view of cognition and learning, there is some truth to that assumption. Our real concern is that this unbalanced focus, along with new standards and new tests, will distract from the more important challenge of fostering equitable, ethical, and consequential participation in these new skills in schools.

That is it for now. We will be posting my responses to the three remaining questions over the next week or so. We would love to hear back from folks about their responses to the first question.


Questions remaining:
2) Which are the most promising research initiatives?
3) Is it or will it be possible to measure these things in ways that they can be scored by computer? If so, how long would it take and what sorts of resources would be needed?
4) If we had to narrow our focus to the proficiencies most associated with economic opportunity and civic participation, which ones do we recommend? Is there any evidence/research specifically linking these proficiencies to these two outcomes? If we further narrowed our focus to only students from underserved communities, would this be the same list?