
Wednesday, December 21, 2016

Competencies in Context #5: Increasing the Value of Certificates from LinkedIn Learning

By Daniel Hickey and Chris Andrews

The previous post in this series explored LinkedIn endorsements and recommendations. We used those features to locate a potential consultant with particular skills, and we considered recent refinements to those features. We also explored the new LinkedIn Learning site made possible by the acquisition of Lynda.com. In this post, we explore how endorsements and recommendations might help LinkedIn earn back the roughly $300,000 that it paid for each of Lynda.com's 5,000 courses.

Friday, January 22, 2010

Can We Really Measure "21st Century" Skills?

The members of the 21st Century Assessment Project were asked a while ago to respond to four pressing questions regarding assessment of “21st Century Skills.” These questions had come via program officers at leading foundations, including Connie Yowell at MacArthur’s Digital Media and Learning Initiative, which funds our Project. I am going to launch my efforts to blog more during my much-needed sabbatical by answering the first question, with some help from my doctoral student Jenna McWilliams.

Question One: Can critical thinking, problem solving, collaboration, communication and "learning to learn" be reliably and validly measured?

As Dan Koretz nicely illustrated in the introduction to his 2008 book, Measuring Up: What Educational Testing Really Tells Us, the answers to questions about educational testing are never simple. We embrace a strongly situative and participatory view of knowing and learning, which is complicated to explain to those who do not share it. But I have training in psychometrics (and completed a postdoc at ETS) and have spent most of my career refining a more pragmatic stance that treats educational accountability as inevitable. When it comes to assessment, I am sort of a born-again situativity theorist. Like folks who have newly found religion and want to tell everybody how Jesus helped them solve all of the problems they used to struggle with, I am on a mission to tell everyone how situative approaches to measurement can solve some nagging problems that they have long struggled with.

In short, no, we don’t believe we can measure these things in ways that are reliable and yield scores that are valid evidence of what individuals are capable of in this regard. These are actually “practices” that can most accurately be interpreted using methods that account for the social and technological contexts in which they occur. In this sense, we agree with skeptics like Jim Greeno and Melissa Gresalfi, who argued that we can never really know what students know. This point riffs on the title of the widely cited National Research Council report Knowing What Students Know, which Jim Pellegrino (my doctoral advisor) led. And as Val Shute just reminded me, Messick reminded us all along that measurement never really gets directly at what somebody knows, but instead provides evidence about what they seem to know. My larger point here is my concern about what happens to these new proficiencies in schools and in tests when we treat them as individual skills rather than social practices. In particular, I worry about what happens to both education and evidence when students, teachers, and schools are judged according to tests of these new skills.

However, there are lots of really smart folks with a lot of resources at their disposal who think you can measure them. This includes most of my colleagues in the 21st Century Assessment Project. For example, check out Val Shute’s great article in the International Journal of Learning and Media. Shute also has an edited volume on 21st Century Assessment coming out shortly. Likewise, Dan Schwartz has a tremendous program of research building on his earlier work with John Bransford on assessments as preparation for future learning. Perhaps the most far-reaching is Bob Mislevy’s work on evidence-centered design. And of course there is the new Intel-Microsoft-Cisco partnership, which is out to change the face of national assessments and therefore the basis of international comparisons. I will elaborate on these examples in my next post, as that is actually the second question we were asked to answer. But first let me elaborate on why I believe that the assessment (of what individuals understand) and the measurement (of what groups of individuals have achieved) of 21st Century skills are improved if we assume that we can never really know what students know.

To reiterate, from the perspective of contemporary situated views of cognition, all knowledge and skills are primarily located in the social context. This is easy to ignore when focusing on traditional skills like reading and math that can be more meaningfully represented as skills that individuals carry from context to context. The assumption is harder to ignore with the newer proficiencies that everyone is so concerned with. This is especially the case with explicitly social practices like collaborating and communicating, since these can't even be practiced in isolated contexts. As we argued in our chapter in Val’s book, we believe it is dangerously misleading to even use the term skills in this regard. We elected to use the term proficiencies because that term is broad enough to capture the different ways that we think about them. As 21st Century Assessment Project leader Jim Gee once put it:
Abstract representations of knowledge, if they exist at all, reside at the end of long chains of situated activity.
However, we are also confident that some of the mental “residue” that gets left behind when people engage meaningfully in socially situated practices can be assessed reliably and used to make valid interpretations about what individuals know. While we think these proficiencies are primarily social practices, that does not exclude recognizing the secondary “echoes” of participating in those practices. This can be done with performance assessments and other extended activities that provide some of that context and then ask individuals or groups to reason, collaborate, communicate, and learn. If such assessments are created carefully, and individuals have not been directly trained to solve the problems on the assessments, it is possible to obtain reliable scores that are valid predictions of how well individuals can solve problems, communicate, collaborate, and learn in new social and technological contexts. But this continues to be difficult, and the actual use of such measures raises serious validity issues. Because of these issues (as elaborated below), we think this work might best be characterized as “guessing what students know.”

More to the point of the question, we believe that only a tiny fraction of the residue from these practices can be measured using conventional standardized multiple-choice tests that provide little or no context. For reasons of economy and reliability, such tests are likely to remain the mainstay of educational accountability for years to come. Of course, when coupled with modern psychometrics, such tests can be extremely reliable, with little score variation across testing time or version. But there are serious limitations on what sorts of interpretations can be validly drawn from the resulting scores. In our opinion, scores on any standardized test of these new skills are only valid evidence of proficiency when they are
a) used to make claims about aggregated proficiencies across groups of individuals;
b) used to make claims about changes over longer time scales, such as comparing the consequences of large-scale policy decisions over years; and
c) isolated from the educational environment which they are being used to evaluate.
Hence, we are pretty sure that national and international assessments like NAEP and PISA should start incorporating such proficiencies. But we have serious concerns about using these measures to evaluate individual proficiencies in high-stakes sorts of ways. If such tests are going to continue to be used for high-stakes decisions, they may be best left to more conventional literacies, numeracies, and knowledge of conventional content domains, which are less likely to be compromised.

I will say that I am less skeptical about standardized measures of writing. But they are about the only standardized assessments left in wide use that actually require students to produce something. Such tests will continue to be expensive, and standardized scoring (by humans or machines) requires very peculiar writing formats. But I think the scores that result are valid for making inferences about individual proficiency in written communication more broadly, as was implied by the original question. They are actually performance assessments, and as such they can bring in elements of different contexts. This is particularly true if we can relax some of the demands for reliability (which require very narrowly defined prompts and typically get compromised when writers get creative and original). My response to the fourth question will elaborate on my belief that written communication is probably the single most important “new proficiency” needed for economic, civic, and intellectual engagement; given that belief, I think improved testing of written communication is the focus of assessment research that will yield the most impact on learning and equity.

To elaborate on the issue of validity, it is worth reiterating that validity is a property of the way scores are interpreted. Unlike reliability, validity is never a property of the measure itself. In other words, validity always references the claims that are being supported by the evidence. As Messick argued in the 90s, the validity of any interpretation of scores also depends on the similarity between prior education and training contexts and the assessment/measurement context. This is where things get messy very quickly. As Kate Anderson and I argued in a chapter in an NSSE Yearbook on Evidence and Decision Making edited by Pam Moss, once we attach serious consequences to assessments or tests for teachers or students, the validity of the resulting scores will get compromised very quickly. This is actually less of a risk with traditional proficiencies and traditional multiple-choice tests, because those tests can draw from massive pools of items that are aligned to targeted standards. In these cases, the test can be isolated from any preparation empirically, by randomly sampling from a huge pool of items. As we move to newer performance measures of more extended problem solving and collaboration, there necessarily are fewer and fewer items, and the items become more and more expensive to develop and validate. If teachers are directly teaching students to solve the problems, then it becomes harder and harder to determine how much of an individual score reflects real proficiency and how much reflects familiarity with the assessment format (what Messick called construct-irrelevant variance). The problem is that it is impossible to ever know how much of the proficiency is “real.” Even in closely studied contexts, different observers are sure to differ in the validity they ascribe to an interpretation—a point made most cogently in Michael Kane’s discussions of validity as interpretive argument.

Because of these validity concerns, we are terrified that the publishers of these tests of “21st Century Skills” are starting to market curricula and test-preparation materials for those same proficiencies. Because of the nature of these new proficiencies, these new “integrated” systems raise even more validity issues than the ones that emerged under NCLB for traditional skills. Another big validity issue we raised in our chapter concerns the emergence of socially networked cheating. Once these new tests are used for high-stakes decisions (especially for college entrance), social networks will emerge to tell students how to solve the kinds of problems that are included on the tests. (This has already begun to happen, as in the "This is SPARTA!" prank on the English Advanced Placement test that we wrote about in our chapter, and in a more recent "topic breach" wherein students in Winnipeg leaked the essay topic for their school's 12th-grade English exam.)

Of course, proponents of these new tests will argue that learning how to solve the kinds of problems that appear on their tests is precisely what they want students to be doing. And as long as you adopt a relatively narrow view of cognition and learning, there is some truth to that assumption. Our real concern is that this unbalanced focus, along with new standards and new tests, will distract from the more important challenge of fostering equitable, ethical, and consequential participation in these new practices in schools.

That is it for now. We will be posting our responses to the three remaining questions over the next week or so. We would love to hear back from folks about their responses to the first question.


Questions remaining:
2) Which are the most promising research initiatives?
3) Is it or will it be possible to measure these things in ways that can be scored by computer? If so, how long would it take and what sorts of resources would be needed?
4) If we had to narrow our focus to the proficiencies most associated with economic opportunity and civic participation, which ones do we recommend? Is there any evidence/research specifically linking these proficiencies to these two outcomes? If we further narrowed our focus to only students from underserved communities, would this be the same list?

Tuesday, July 7, 2009

Five tips for seeding and feeding your educational community

Dan Hickey's recent post on seeding, feeding, and weeding educators' networks got me thinking, for lots of reasons--not least of which being that I will most likely be one of the research assistants who, he explains, will “work with lead educators to identify interesting and engaging online activities for their students.”

This got me a-planning. I started thinking about how I would seed, feed, and weed a social network if (when) given the chance to do so. As David Armano, the author of "Debunking Social Media Myths," the article that suggests the seeding, feeding, and weeding metaphor, points out, building a social media network is more difficult than people think—this is not an “if we build it, they will come” sort of thing. Designing, promoting, and growing a community takes a lot of work. People will, given the right motives, participate in the community for love and for free, but you have to start out on the right foot. This means offering them the right motivations for giving up time they would otherwise be spending on something else.

A caveat
First, know that I am a True Believer. I have deep faith in the transformative potential of participatory media, not because I see it as a panacea for all of our problems but because participatory media supports disruption of the status quo. A public that primarily consumes media gets the world that media producers decide to offer. A public that produces and circulates media expressions gets to help decide what world it wants.

Social media, because of its disruptive and transformative potential, is both essential and nigh on impossible to get into the classroom. This is precisely why it needs to happen, and the sooner it happens, the better.

But integrating participatory media and the participatory practices they support into the field of education is not a simple matter. Too often people push for the introduction of new technologies or practices (blogging, wikis, chatrooms and forums) without considering the dispositions required to use them in participatory ways. A blog can easily be used as an online paper submission tool; leveraging its neatest affordances--access to a broad, engaged public, a place in a web of interconnected arguments and ideas, entrance into a community of bloggers--takes more effort and different, often more time-consuming, approaches.

Additionally, while social networks for educators hold a great deal of promise for supporting the spread of educational practices, designing, building, and supporting a vibrant community of educators requires thinking beyond the chosen technology itself.

Five Tips for Seeding and Feeding your Community

With these points in mind, I offer my first shot at strategies for seeding and beginning to feed a participatory educational community. (Weeding, the best part of the endeavor, comes later, once my tactics have proven to work.)

1. Think beyond the classroom setting.
In the recently published National Writing Project book, Teaching the New Writing, the editors point out that for teachers to integrate new media technologies into their classrooms, they "need to be given time to investigate and use technology themselves, personally and professionally, so that they can themselves assess the ways that these tools can enhance a given curricular unit."

The emerging new media landscape offers more than just teaching tools--it offers a new way of thinking about communication, expression, and the circulation of ideas. We would do well to remember this as we devise strategies for getting teachers involved in educational communities online. After all, asking a teacher who's never engaged with social media to use it in the classroom is like asking a teacher who's never used the quadratic formula to teach algebra.

Anyone who knows me knows what a fan of blogging I am. I proselytize, prod, and shame people into blogging--though, again, not because I think blogging is the best new practice or even necessarily the most enjoyable one. Blogging is just one practice among a constellation of tools and practices being adopted by cutting-edge educators, scholars, and Big Thinkers across all disciplines. Blogging was, for me, a way into these practices and tools, and I do think blogging is one of the most accessible new practices for teacherly / writerly types. The immediacy and publicness of a blog post are nice preparation for increased engagement with what Clay Shirky calls the “publish, then filter” model of participatory media. This is a chaotic, disconcerting, and confusing model in comparison to the traditional “filter, then publish” model, but getting in sync with this key element of participatory culture is absolutely essential for engaging with features like hyperlinking, directing traffic, and identifying and writing for a public. In a larger sense, connecting with the publish, then filter approach prepares participants to join the larger social networking community.

2. Cover all your bases--and stop thinking locally.
One of the neatest things about an increasingly networked global community is that we're no longer limited to the experts or expertises of the people who are within our physical reach. Increasingly, we can tap into the knowledge and interests of like-minded folks as we work to seed a new community.

Backing up a step: It helps, in the beginning for sure but even more so as a tiny community grows into a small, then medium-sized, group, to consider all of the knowledge, experience, and expertises you would like to see represented in your educational community. This may include expertise with a variety of social media platforms, experience in subject areas or in fields outside of teaching, and various amounts of experience within the field of education.

3. In covering your bases, make sure there's something for everyone to do.
Especially in the beginning, people participate when they a) have something they think is worth saying, b) feel that their contributions matter to others, and c) can easily see how and where to contribute. I have been a member of forums where everybody has basically the same background and areas of expertise; these forums usually start out vibrant, then descend into one or two heavily populated discussion groups (usually complaining or commiserating about one issue that sticks in everyone's craw) before petering out.

Now imagine you have two teachers who have decided to introduce a Wikipedia-editing exercise into their classrooms by focusing on the Wikipedia entry for Moby-Dick. Imagine you have a couple of Wikipedians in your network who have extensive experience working with the formatting code required for editing; and you have a scholar who has published a book on Moby-Dick. This community has the potential for a rich dialogue that supports increasing the expertise of everybody involved. Everybody feels valued, everybody feels enriched, and everybody feels interested in contributing and learning.

4. Use the tool yourself, and interact with absolutely everybody.
Caterina Fake, a co-founder of Flickr, says that she decided to greet the first ten thousand Flickr users personally. Assuming ten thousand users is several thousand more than you want in your community, you might have the time to imitate Fake's example. It also helps to join in on forums and other discussions, especially ones that emerge from the users themselves. Students are not the only people who respond well to feeling like someone's listening.

Use the tool. Use the tool. Use the tool. I can't emphasize enough how important this is. You should use it for at least one purpose other than seeding and feeding your community. You should be familiar enough with it to be able to answer most questions and do some troubleshooting when necessary. You should be able to integrate new features when they become available and relevant, and you should offer a means for other users to do the same.


5. Pick a tool that supports the needs of your intended community, and then use the technology's features as they were designed to be used.

Though I put this point last, it's the most important of all. You can't--you cannot--build the right community with the wrong tools. Too often, community designers home in on a tool they have some familiarity with or, even worse, a tool that they've heard a lot about. This is the wrong tack.

What you need to do is figure out what you want your community to do first, then seek out a tool that supports those practices. If you want your community to refine an already-established set of definitions, approaches, or pedagogical tenets, then what you're looking for is a wiki. If you want the community to discuss key issues that come up in the classroom, you want a forum or chat function. If you want them to share and comment on lesson plans, you need a blog or similar text editing function.

Once you've decided on the functions you want, you need to stick with using them as god intended. Do not use a wiki to post information that doesn't need community input. Don't use a forum as a calendar. And don't use a blog for forum discussions.

It's not easy to start and build a community, offline or online. It takes time and energy and a high resistance to disappointment and exhaustion. But as anybody who's ever tried and failed (or succeeded) to start up a community knows, we wouldn't bother if we didn't think it was worth the effort.

Sunday, June 14, 2009

the harrison bergeron approach to education: how university rankings stunt the social revolution

I've been thinking some lately about the odd and confusing practice of comparing undergraduate and graduate programs at American colleges and universities and producing a set of rankings that show how the programs stack up against each other.

One of the most widely cited sets of rankings comes from U.S. News and World Report, which offers rankings in dozens of categories for both undergraduate and graduate-level programs. Here, the magazine offers its altruistic rationale for producing these rankings:
A college education is one of the most important—and one of the most costly—investments that prospective students will ever make. For this reason, the editors of U.S. News believe that students and their families should have as much information as possible about the comparative merits of the educational programs at America's colleges and universities. The data we gather on America's colleges—and the rankings of the schools that arise from these data—serve as an objective guide by which students and their parents can compare the academic quality of schools. When consumers purchase a car or a computer, this sort of information is readily available. We think it's even more important that comparative data help people make informed decisions about an education that at some private universities is now approaching a total cost of more than $200,000 including tuition, room, board, required fees, books, transportation, and other personal expenses.

(To access the entire rankings, developed and produced selflessly by U.S. News and World Report, you need to pay. Click here to purchase the Premium Online Edition, which is the only way to get complete rankings, for $14.95.)

The 2009 rankings, released in April, have been in the news lately because of questions about how the magazine gathers data from colleges. As Carl Bialik points out in a recent post at the Wall Street Journal, concerns over how Clemson University set about increasing its rank point to deeper questions about the influence of rankings numbers on university operations. Clemson President James F. Barker reportedly shot for cracking the top 20 (the university was ranked 38th nationally in 2001) by targeting all of the ranking indicators used by U.S. News. Bialik writes:
While the truth about Clemson’s approach to the rankings remains elusive, the episode does call into question the utility of a ranking that schools can seek to manipulate. “Colleges have been ‘rank-steering’ — driving under the influence of the rankings,” Lloyd Thacker, executive director of the Education Conservancy and a critic of rankings, told the Associated Press. “We’ve seen over the years a shifting of resources to influence ranks.”

Setting aside questions of the rankings' influence on university operations and on recruiting (both for prospective students and prospective faculty), and setting aside too the question of how accurate any numbers collected from university officials themselves could possibly be when the stakes are so high, one wonders how these rankings limit schools' ability to embrace what appear to be key tenets emerging out of the social revolution. A key feature of some of the most vibrant, energetic, and active online communities is what Clay Shirky labels the "failure for free" model. As I explained in a previous post on the open source movement, the open source software (OSS) movement embraces this tenet:
It's not, after all, that most open source projects present a legitimate threat to the corporate status quo; that's not what scares companies like Microsoft. What scares Microsoft is the fact that OSS can afford a thousand GNOME Bulgarias on the way to its Linux. Microsoft certainly can't afford that rate of failure, but the OSS movement can, because, as Shirky explains,
open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.

Anyone who's worked for a company of reasonable size understands the push to keep the risk of failure low. "More people," Shirky writes, "will remember you saying yes to a failure than saying no to a radical but promising idea." The higher up the organizational chart you go, the harder the push will be for safe choices. Innovation, it seems, is both a product of and oppositional to the social contract.

The U.S. News rankings, and the methodology behind them, run completely counter to the notion of innovation. Indeed, a full 25 percent of the ranking system is based on what U.S. News calls "peer assessment," which comes from "the top academics we consult--presidents, provosts, and deans of admissions" and, ostensibly at least, allows these consultants
to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools' academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who don't know enough about a school to evaluate it fairly are asked to mark "don't know." Synovate, an opinion-research firm based near Chicago, in spring 2008 collected the data; of the 4,272 people who were sent questionnaires, 46 percent responded.

Who becomes "distinguished" in the ivory-tower world of academia? Those who play by the long-established rules of tradition, polity, and networking, of course. The people who most want to effect change at the institutional level are often the most outraged, the most unwilling to play by the rules established by administrators and rankings systems, and therefore the least likely to make it into the top echelons of academia. Indeed, failure is rarely free in the high-stakes world of academics; it's safer to say no to "a radical but promising idea" and yes to any number of boring but safe ideas.

So what do you do if you are, say, a prospective doctoral student who wants to tear wide the gates of academic institutions? What do you do if you want to go as far in your chosen field as your little legs will carry you, leaving a swath of destruction in your wake? What do you do if you want to bring the social revolution to the ivory tower, instead of waiting for the ivory tower to come to the social revolution?

You rely on the U.S. News rankings, of course. It's what I did when I made decisions about which schools to apply to (the University of Wisconsin-Madison [ranked 7th overall in graduate education programs, first in Curriculum & Instruction, first in Educational Psychology], the University of Texas-Austin [tied at 7th overall, 10th in Curriculum & Instruction], the University of Washington [12th overall, 9th in Curriculum & Instruction], the University of Michigan [14th overall, 7th in Curriculum & Instruction, and 3rd in Educational Psychology], Indiana University [19th overall, out of the top 10 in individual categories], and Arizona State University [24th overall, out of the top 10 in individual categories]). Interestingly, though, the decision to turn down offers from schools ranked higher than Indiana (go hoosiers) wasn't all that difficult. I knew that I belonged at IU (go hoosiers) almost before I visited, and a recruitment weekend sealed the deal.

But I had an inside track to information about IU (go hoosiers) via my work with Dan Hickey and Michelle Honeyford. I also happen to be a highly resourceful learner with a relatively clear sense of what I want to study, and with whom, and why. Other learners--especially undergraduates--aren't necessarily in such a cushy position. They are likely to rely heavily on rankings in making decisions about where to apply and which offer to accept. This not only serves to reify the arbitrary and esoteric rankings system (highest-ranked schools get highest-ranked students), but also serves to stunt the social revolution in an institution that needs revolution, and desperately.

In this matter, it's turtles all the way down. High-stakes standardized testing practices, and teacher evaluations based on achievement on those tests, limit innovation--from teachers as well as from students--at the secondary and, increasingly, the elementary level. But the world that surrounds schools is increasingly ruled by those who know how to innovate, how to say yes to a radical but promising idea, how to work within a "failure for free" model. If schools can't learn how to embrace the increasingly valued and valuable mindsets afforded by participatory practices, they are failing to prepare their students for the world at large. The rankings system is just another set of hobbles added to a system of clamps, tethers, and chains already set up to fail the very people it purports to serve.

Friday, May 29, 2009

figuring out "how to go on"

In his paper "Human Action and Social Groups as the Natural Home of Assessment: Thoughts on 21st Century Learning and Assessment," Jim Gee describes what at first glance appears to be two opposing uses of assessment in informal online spaces. As Gee explains,

Assessment for most social groups is both a form of mentoring and policing. These two are, however, not as opposed to each other as it might at first seem (and as they often are in school). Newcomers want to “live up to” their new identity and, since this is an identity they value, they want that identity “policed” so that it remains worth having by the time they gain it more fully. They buy into the “standards.” Surely this is how SWAT team members, scientists, and Yu-Gi-Oh fanatics feel.


At its best, assessment in formal education serves the same dual role; yet something is most assuredly different. What's different is not the degree to which students "buy into" the value of assessment; they see assessment in school as important, just as they would argue that the mentoring and policing of the online spaces they inhabit are essential to keeping those spaces alive.

What's different is not the degree of investment; what's different is the degree of relevance. The Yu-Gi-Oh fanatics Gee references want--sometimes desperately--to be accepted into the groups they join, and so they agree to the terms of this belonging, even if it requires being held to at times impossibly high standards of participation. The same is true of novice SWAT team members and scientists; it's less true of 11th graders reading Moby-Dick in an English classroom. To what purpose? they might ask, and they would be right to do so. Until we can align the goals, roles, and assessment practices of the formal classroom--until, that is, we can transform the domain to meet the needs of a participatory culture--investment exists without relevance. Students want A's to get into college; they want A's because that's what their parents tell them equals success; they want A's (or D's, or F's) because their friends tell them they should.

We know that practices are mediated by the tools we use to engage in those practices. We know that writing with a pencil is different from writing with a computer is different from writing with a Blackberry. The notion of "re-mediation" is intended to point to another level of mediation: that the tools that mediate traditional literacy practices get re-mediated by new media, which then re-mediate the practices that we bring to the tools. It's all very meta.

All of this is by way of introduction to re-mediating assessment, a new blog emerging out of a clever little partnership between a plucky crew of assessment-oriented researchers out of Indiana University and MIT. The plucky researchers include:

Daniel T. Hickey, Associate Professor of Learning Sciences at Indiana University, and our intrepid leader. Dan's research focuses on participatory approaches to assessment and motivation, design-based educational research, and program evaluation. He is particularly interested in how new participatory approaches can advance nagging educational debates over things like assessment formats and the use of extrinsic incentives. His work is funded by the National Science Foundation, NASA, and the MacArthur Foundation, and has mostly been conducted in the context of digital social networks and videogames. He teaches graduate courses on cognition & instruction, assessment, and motivation, and undergraduate classes on educational psychology.

Michelle Honeyford, a Ph.D. Candidate in the Literacy, Culture, and Language Education Department at Indiana University and the cool head behind this operation. Michelle is a Graduate Research Assistant on the MacArthur-funded 21st Century Assessment Project for Situated and Sociocultural Approaches to Learning, working on a participatory assessment model for new media literacies. Her broader research interests include identity, cultural citizenship, and new literacy studies. Michelle is a former middle and high school English Language Arts teacher, and has taught courses in the teaching of writing at IU.

Jenna McWilliams, the little engine that could. Jenna is a prolific blogger who is working on mastering the art of being both smart and lucky, sometimes simultaneously. She recently got picked up by the Guardian's online site, Comment is Free, and was interviewed about the future of newspapers on the BBC's News Hour program. She currently works as an educational researcher for Project New Media Literacies, a MacArthur-funded research project based at MIT; prior to that, she taught English composition, literature, and creative writing at Suffolk University, Bridgewater State College, Newbury College, and Colorado State University, where she earned her MFA in Creative Writing. In Fall 2009, she will begin doctoral study in the Learning Sciences Program at Indiana University, under the tutelage of Dan Hickey, who is her sensei.

Together, this merry band will start working out the simple matter of "how to go on" and how to align classroom practices with the proficiencies called for--indeed, demanded--by a participatory culture.

We're bringing the smart. Wish us luck.