Thursday, April 15, 2010

short-sighted and socially destructive: thoughts on Ning's decision to cut free services

Lord knows I'm not a huge fan of Ning, the social networking tool that allows users to create and manage online networks. I find the design bulky and fairly counterintuitive; modifying a network to meet your group's needs is a serious challenge; and Ning has made it difficult or impossible for users to control, modify, or move network content. And despite the popularity of Ning's free, ad-supported social networks among K-16 educators, the ads that come with the free service have tended toward the racy or age-inappropriate.

But given the Ning trifecta--it's free, getting students signed up is fast and fairly easy, and lots of teachers are using it--I've been using Ning in my work with researchers and teachers for the last two years. So the recent news that Ning will be switching to paid-only membership is obnoxious for two reasons.

The first reason is the obvious one: I don't want to pay--and I don't want the teachers who use Ning to have to pay, either. One of the neat things about Ning is the ability to build multiple social networks--maybe a separate one for each class, or a new one each semester, or even multiple networks for a single group of students. In the future, each network will require a monthly payment, which means that most teachers who do decide to pay will stick to a much smaller number of networks. This means they'll probably erase content and delete members, starting fresh each time. The enormous professional development potential of having persistent networks filled with content, conversations, and student work suddenly disappears.

Which brings me to my second point: That anyone who's currently using Ning's free services will be forced to either pay for an upgrade or move all of their material off of Ning. This is tough for teachers who have layers upon layers of material posted on various Ning sites, and it's incredibly problematic for any researcher who's working with Ning's free resources. If we decide to leave Ning for another free network, we'll have to figure out some systematic way of capturing every single thing that currently lives on Ning, lest it disappear forever.
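Absent a real export tool from Ning, "capturing every single thing" means scripting it yourself. Here's a minimal sketch of what that might look like--the URLs and network name are placeholders, not real endpoints, and a real backup would also need to handle login, pagination, and attachments:

```python
# Hypothetical sketch: save a list of pages from a (placeholder) Ning
# network to local HTML files before the content disappears.
import os
import re
import urllib.request


def url_to_filename(url):
    """Turn a page URL into a filesystem-safe local filename."""
    stripped = re.sub(r"^https?://", "", url)
    return re.sub(r"[^A-Za-z0-9._-]", "_", stripped) + ".html"


def archive_pages(urls, out_dir="ning_backup"):
    """Download each URL and write it under out_dir; return saved paths."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for url in urls:
        path = os.path.join(out_dir, url_to_filename(url))
        with urllib.request.urlopen(url) as resp:
            with open(path, "wb") as f:
                f.write(resp.read())
        saved.append(path)
    return saved


# Example (placeholder URL, illustration only):
# archive_pages(["http://ourclass.ning.com/forum/topics/week-one-discussion"])
```

Even a crude mirror like this only captures the HTML, not the underlying membership and discussion structure--which is exactly why losing free hosting is so costly.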

Ning's decision to phase out free services amounts to a paywall, pure and simple. Instead of putting limits on information, as paywalls for news services do, this paywall puts limits on participation. In many ways, this is potentially far worse, far more disruptive and destructive, far more short-sighted than any information paywall could be.

If Ning were smart, it would think a little more creatively about payment structures. What about offering unlimited access to all members of a school district, for a set fee paid at the district level? What about offering an educator account that provides unlimited network creation for a set (and much lower) fee? What about improving the services Ning provides to make it feel like you'd be getting what you paid for?

More information on Ning's decision to go paid-only will be released tomorrow. For now, I'm working up a list of free social networking tools for use by educators. If you have any suggestions, I'd love to hear them.

Update, 4/15/10, 6:48 p.m.: Never one to sit on the sidelines in the first place, Alec Couros has spearheaded a gigantic, collaborative googledoc called "Alternatives to Ning." As of this update, the doc keeps crashing because of the number of collaborators trying to help build this thing (the last time I got into it, I was one of 303 collaborators), so if it doesn't load right away, keep trying.

Friday, April 2, 2010

Diane Ravitch Editorial on the Failure of NCLB

I have long admired Diane Ravitch. While I have disagreed with her on fundamental philosophical grounds, her arguments have always been grounded in the realities of schooling--even if those were the realities of conservative parents and stakeholders.

Now the evidence has shown what some of us predicted and what many of us have known for years: that external tests of basic skills and punitive sanctions were bound to produce illusory gains (if any) and undermine other valued outcomes. Her editorial in today's (April 2) Washington Post is very direct. While I disagree with her on where to go from here, I applaud her for using her audience and her reputation to help convince the many stakeholders who have found one reason or another to ignore the considerable evidence against continuing NCLB. As Jim Popham has been saying for years, all of the improvement schools could make on test scores already happened between 1990 and 2000, once newspapers began publishing those scores.



Certainly this will factor into the pending NCLB reauthorization. Perhaps Indiana's Republican leadership will read this and think twice about going forward with the two core ideas from their Race to the Top reform proposal, even though it was not funded. The twin shells in their reform shotgun are "Pay for Performance" merit pay for Indiana teachers based on basic skills test scores and "Value Added" growth modeling that ranks teachers by how much "achievement" they instilled in their kids. For the reasons Ravitch summarizes, and given other concerns outlined in a recent letter and report by the National Academy, the recoil from pulling these two triggers at once might be just enough to blow our schools and our children pretty far back into the 20th century.

Tuesday, March 9, 2010

Video of Barry McGaw on Assessment Strategies for 21st Century Skills (Measurement Working Group)

I just came across a video of a keynote by Barry McGaw at last month’s Learning and World Technology Forum. McGaw heads the Intel/Microsoft/Cisco initiative known as Assessment and Teaching of 21st Century Skills. This high-powered group is aiming to transform the tests used in large-scale national comparisons and education more broadly. Their recent white papers are a must-read for anyone interested in assessment or new proficiencies. McGaw’s video highlights aspects of this effort that challenge conventional wisdom about assessment. In this post I focus on McGaw’s comments on the efforts of the Measurement Working Group. In particular, his comments point to (1) the need to iteratively define assessments and the constructs they aim to capture, and (2) the challenge of defining developmental models of these new skills.


Iterative Design of Assessments and Constructs
McGaw highlighted that the Measurement Working Group (led by Mark Wilson) emphasized the need for iterative refinement in the development of new measures. Various groups spent much of the first decade of the 21st century debating how these proficiencies should be defined and organized. In this abstract context, this definition process could easily consume the second decade as well. Wilson’s group argues that the underlying constructs being assessed must be defined and redefined in the context of the assessment development process. Of this, McGaw said:

You think about it first, you have a theory about what you want those performances to measure. You then begin to develop ways of capturing information about those skills. But the data themselves give you information about the definition, and you refine the definition. This is the important point of pilot work with these assessment devices. And not just giving the tests to students, but giving them to students and seeing what their responses are, and discovering why they gave that response. And not just in the case where it is the wrong response but in the case where it is the correct response, so that you get a better sense of the cognitive processes underlying the solution to the task.

In other words, with these new proficiencies you can’t just have one group define standards and definitions and then pitch them to the measurement group. Because of their highly contextualized nature, we can’t just hand standards to testing companies, as has been done with hard skills for years. This has always nagged at me about previous efforts (e.g., the Partnership for 21st Century Skills), which seemed to overlook both the issue and the challenge it presents. Maybe now we can officially decide to stop trying to define what assessment scholar Lorrie Shepard so aptly labeled “21st Century Bla Bla Bla.”

The Lack of Learning Progression Models
McGaw also reiterated the concerns of the Measurement Working Group over the lack of consensus about the way these new proficiencies develop. There is strong consensus about the development of many of the hard skills in math, science, and literacy, and these insights are crucial for developing worthwhile assessments. I learned about this firsthand while developing a performance assessment for introductory genetics with Ann Kindfield at ETS. Ann taught me the difference between the easier cause-to-effect reasoning (e.g., completing a Punnett square) and the more challenging effect-to-cause reasoning (e.g., using a pedigree chart to infer mode of inheritance). We used these and other distinctions she uncovered in her doctoral studies to create a tool that supported tons of useful studies on teaching inheritance in biology classes. Other better-known work on “learning progressions” includes Ravit Duncan’s work in molecular genetics and Doug Clements’ work in algebra. In each case it took multiple research teams many years to reach consensus about the way that knowledge typically develops.

Wilson and McGaw are to be commended for reminding us how difficult it is going to be to agree on the development of these much softer 21st century proficiencies. They are by their very definition situated in more elusive social and technological contexts. And those contexts are evolving. Quickly. Take for example judging credibility of information on the Internet. In the 90s this meant websites. In the past decade it came to mean blogs. Now I guess it includes Twitter. (There is a great post about this at MacArthur’s Spotlight Blog, as well as a recent CBC interview about fostering new media literacies, featuring my student Jenna McWilliams.)

Consider that I taught my 11-year-old son to look at the history page on Wikipedia to help distinguish between contested and uncontested information in a given entry. He figured out on his own how to verify the credibility of suggestions for modding his Nerf guns at nerfhaven.com and YouTube. Now imagine you are ETS, where it inevitably takes a long time and buckets of money to produce each new test. They already had to replace their original iSkills test with the iCritical Thinking test. From what I can tell, it is still a straightforward test of information from a website. Lots of firms are starting to market such tests. Some places (like Scholastic’s Expert21) will even sell you curriculum and classroom assessments that teach students to pass the test--without ever actually going on the Internet. Of course ETS knows it can’t sell curriculum if it wants to maintain its credibility. But I am confident that as soon as organizations start attaching meaningful consequences to the test, social networks will spring up telling students exactly how to answer the questions.
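Incidentally, the revision history my son checks by hand is also exposed through Wikipedia’s public MediaWiki API, so the same credibility check could be done programmatically. A minimal sketch (the article title is just an example; this only builds the query URL, since actually fetching it requires network access):

```python
# Sketch: build a MediaWiki API query for an article's recent revision
# history. A burst of recent edits can flag a contested entry.
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"


def revision_history_url(title, limit=20):
    """Return the API URL listing the last `limit` revisions of `title`."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp|user|comment",
        "rvlimit": limit,
        "format": "json",
    }
    return API + "?" + urlencode(params)


# Example: revision_history_url("Nerf") yields a URL you can open in a
# browser to see who edited the entry, when, and why.
```

The point isn’t the code; it’s that the raw material for judging credibility is sitting in the open, while the tests meant to measure that judgment stay locked to static websites.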

There is lots of other great stuff in the Measurement white paper. Much of it is quite technical. But I applaud their sobering recognition of the many challenges these new proficiencies pose for large-scale measurement. And those challenges only get harder when these new tests are used for accountability purposes.

Next up: McGaw’s comments about the Classroom Environments and Formative Evaluation working group.