Showing posts with label open access. Show all posts

Tuesday, October 30, 2012

Introducing Digital Badges Within and Around Universities

Dan Hickey
Sheryl Grant from HASTAC recently posted a detailed summary of resources about uses of digital badges in higher education.[1] It was a very timely post for me, as I had been asked to draft just such a brief by an administrator at Indiana University, where I work. Sheryl is the director of social networking for the MacArthur/Gates Badges for Lifelong Learning initiative. Her job makes her uniquely knowledgeable about the explosive growth of digital badges in many settings, including colleges and universities. In this post, I want to explore one of the issues that Sheryl raised about the ways badges are being introduced in higher education, particularly as it relates to Indiana’s universities.

Tuesday, October 27, 2009

The Void Between Colleges of Education and University Teaching and Learning

In this post, I consider the tremendous advances in educational research I am seeing outside of colleges of education and ponder the relevance of mainstream educational research in light of the transformation of learning made possible by new digital social networks.

This weekend, the annual conference of the International Society for the Scholarship of Teaching and Learning took place at Indiana University. ISSOTL is the home of folks who are committed to studying and advancing teaching and learning in university settings. I saw several presentations that are directly relevant to what we care about here at Re-Mediating Assessment. These included a workshop on social pedagogies organized by Randy Bass, the Assistant Provost for Teaching and Learning at Georgetown, and several sessions on open education, including one by Randy and Toru Iiyoshi, who heads the Knowledge Media Lab at the Carnegie Foundation. Toru co-edited the groundbreaking volume Opening up Education, of which we here at RMA are huge fans. (I liked it so much I bought the book, but you can download all of the articles for free—ignore the line at the MIT press about sample chapters).

I presented at a session about e-Portfolios with John Gosney (Faculty Liaison for Learning Technologies at IUPUI) and Stacy Morrone (Associate Dean for Learning Technologies at IU). John talked about the e-Portfolio efforts within the Sakai open source collaboration and courseware platform; Stacy talked about e-Portfolio as it has been implemented in OnCourse, IU’s instantiation of Sakai. I presented about our efforts to advance participatory assessment in my classroom assessment course using newly available wikis and e-Portfolio tools in OnCourse (earlier deliberations on those efforts are here; more will be posted here soon). I was flattered that Maggie Ricci of IU’s Office of Instructional Consulting interviewed me about my post on positioning assessment for participation and promised to post the video this week (I will update here when I find out).

I am going to post about these presentations and how they intersect with participatory assessment as time permits over the next week or so. In the meantime, I want to stir up some overdue discussion over the void between the SOTL community and my colleagues in colleges of education at IU and elsewhere. In an unabashed effort to direct traffic to RMA and build interest in past and forthcoming posts, I am going to first write about this issue. I think it raises issues about the relevance of colleges of education and suggests a need for more interdisciplinary approaches to education research.

I should point out that I am new to the SOTL community. I have focused on technology-supported K-12 education for most of my career (most recently within the Quest Atlantis videogaming environment). I have only recently begun studying my own teaching, in the context of developing new core courses for the doctoral program in Learning Sciences and of trying to develop online courses that take full advantage of new digital social networking practices (initial deliberations over my classroom assessment course are here). I feel sheepish about my late arrival, and embarrassed that the tremendous innovations I found in the SOTL community have mostly been ignored by educational researchers. My departmental colleagues Tom Duffy, who has long been active in SOTL here at IU, and Melissa Gresalfi have recently gotten seriously involved as well. The conference was awash with IU faculty, but I only saw a few colleagues from the School of Education. One notable exception was Melissa’s involvement on a panel on IU’s Interdisciplinary Teagle Colloquium on Inquiry in Action. I could not go because it conflicted with my own session, but this panel described just the sort of cross-campus collaboration I am aiming to promote here. I also ran into Luise McCarty from the Educational Policy program, who heads the school’s Carnegie Initiative on the Doctorate.

My search of the program for other folks from colleges of education revealed another session that was scheduled against mine and that focused on the issue I am raising in this post. Karen Swanson of Mercer University and Mary Kayler of George Mason reported on the findings of their meta-analysis of the literature on the tensions between colleges of education and SOTL. The fact that there is enough literature on this topic to meta-analyze points out that this issue has been around for a while (and suggests that I should probably read up before doing anything more than blogging about it). From the abstract, it looks like they focused on the issue of tenure, which I presume refers to a core issue in the broader SOTL community: that SOTL researchers outside of schools of education risk being treated as interlopers by educational researchers, while being treated as dilettantes by their own disciplinary communities. This same issue was mentioned in other sessions I attended as well. But significantly from my perspective, it looks like Swanson and Kayler examined this issue from the perspective of education faculty, which is what I want to focus on here. I have tenure, but I certainly wonder how my increased foray into the SOTL community will be viewed when I try to get promoted to full professor.

I will start by exploring my own observations about educational researchers who study their own university teaching practices. I am not in teacher education, but I know of a lot of respected education faculty who seem to be conducting high-quality, published research about their teacher education practices. However, there is clearly a good deal of pretty mediocre self-study taking place as well. I review for a number of educational research journals and conferences. When I am asked to review manuscripts or proposals for educational research carried out in classrooms in the college of education, I grow quite suspicious. Because I have expertise in motivation and in formative assessment, I get stacks of submissions of studies of college of education teaching that seem utterly pointless to me. For example, folks love to study whether self______ is correlated with some other education-relevant variables. The answer is always yes (unless their measures are unreliable), and then there is some post hoc explanation of the relationships with some tenuous suggestions for practice. Likewise, I review lots of submissions that examine whether students who get feedback while learning to solve some class of problems learn to solve those problems better than students from whom feedback is withheld. Here the answer should be yes, since this is essentially a test of educational malpractice. But the studies often ignore the assessment maxim that feedback must be useful and used, and instead focus on complex random assignment so that the study can be more “scientific.” I understand the appeal, because such studies are so easy to conduct and there are enough examples of them actually getting published to provide some inspiration (while dragging down the overall effect size of feedback in meta-analytic studies). While it is sometimes hard to tell, these “convenience” studies usually appear to be conducted in the author’s own course or academic program.
So, yes, I admit that when that looks to be the case, I do not expect to be impressed. I wonder if other folks feel the same way, or if perhaps I am being overly harsh.

Much of my interest in SOTL follows from my efforts to help my college take better advantage of new online instructional tools and to take better advantage of social networking tools in my K-12 research. While my colleagues at IU Bloomington and IUPUI are making progress, I am afraid that we are well behind the curve. Though I managed to attend only a few SOTL sessions, I saw tremendous evidence of success that I will write about in subsequent posts. Randy Bass and Heidi Elmendorf (also of Georgetown) showed evidence of deep engagement on live discussion forums that simply can’t be faked; here at IU, Phillip Quirk showed some very convincing self-report data about student engagement in our new interdisciplinary Human Biology Program, which looks like a great model of practice for team-teaching courses. These initial observations reminded me of the opinion of James Paul Gee, who leads the MacArthur Foundation’s 21st Century Assessment Project (which partly sponsors my work as well). He has stated on several occasions that “the best educational research is no longer being conducted in colleges of education.” That is a pretty bold statement, and my education colleagues and I initially took offense at it. Obviously, it depends on your perspective; but in terms of taking advantage of new digital social networking tools and the movement toward open education and open-source curriculum, it seems like it may already be true.

One concern I had with SOTL was the sense that the excesses of “evidence-based practice” that have infected educational research were occurring in SOTL as well. But I did not see many of the randomized experimental studies that set out to “prove” that new instructional technology “works.” I have some very strong opinions about this that I will elaborate on in future posts; for now I will just say that I worry that SOTL researchers might get so caught up in doing controlled comparison studies of conventional and online courses that they completely miss the point that online courses offer an entirely new realm of possibilities for teaching and learning. The “objective” measures of learning normally used in such studies are often biased in favor of traditional lecture/text/practice models that train students to memorize numerous specific associations; as long as enough of those associations appear on a targeted multiple-choice exam, scores will go up. The problem is that such designs can’t capture the important aspects of individual learning, or any aspects of the social learning, that are possible in these new educational contexts. Educational researchers seem unwilling to seriously begin looking at the potential of these new environments that they have “proven” to work. So, networked computers and online courses end up being used for very expensive test preparation…and that is a shame.

Here at RMA, we are exploring how participatory assessment models can foster and document all of the tremendous new opportunities for teaching and learning made possible by new digital social networks, while also producing convincing evidence on these “scientific” measures. I will close this post with a comment that Heidi Elmendorf made in the social pedagogies workshop. I asked her why she and the other presenters were embracing the distinction between “process” and “product.” In my opinion, this distinction is based on outdated individual models of learning; it dismisses the relevance of substantive communal engagement in powerful forms of learning, while privileging individual tests as the only “scientific” evidence of learning. I don’t recall Heidi’s exact response, but she immediately pointed out that her disciplinary colleagues in Biology leave her no choice. I was struck by the vigorous nods of agreement from her colleagues and the audience. Her response really brought me back down to earth and reminded me how much work we have to do in this regard. In my subsequent posts, I will try to illustrate how participatory assessment can address precisely the issue that Heidi raised.

Sunday, July 5, 2009

opening up scholarship: generosity among grinches

why academic research and open exchange of ideas are like that bottle of raspberry vinaigrette salad dressing you've had in the back of your fridge since last summer


The folks over at Good Magazine are tossing up a series of blogposts under the heading "We Like to Share."

The articles are actually a series of interviews with creative types in a variety of fields who share one characteristic: they believe that sharing of ideas and content is valuable and important. The edited interviews are being posted by Eric Steuer, the Creative Director of Creative Commons--a project which, though I admittedly don't fully understand it, I find deeply ethical and innovative with respect to offering new approaches to sharing and community.

So far, two posts have gone up, the first with Chris Hughes, a co-founder of Facebook and the former online strategist for the Obama presidential campaign, and the second with Flickr founder Caterina Fake. Talking about how much we've changed in our attitudes toward sharing, Fake explains that
[i]f you go online today you will see stories about Obama sharing his private Flickr photos. So this is how far the world has come: our president is sharing photos of his life and experiences with the rest of the world, online. Our acceptance of public sharing has evolved a lot over the course of the past 15 years. And as people became increasingly comfortable sharing with each other—and the world—that lead to things that we didn’t even anticipate: the smart mob phenomenon, people cracking crimes, participatory media, subverting oppressive governments. We didn’t know these things were going to happen when we created the website, but that one decision—to make things public and sharable—had significant consequences.


Hughes' interview is less overtly about sharing as we typically think of the term, but he points out that the Obama campaign was successful because it focused on offering useful communications tools that lowered barriers to access and then
getting out of the way of the grassroots supporters and organizers who were already out there making technology the most efficient vehicle possible for them to be able to organize. That was a huge emphasis of our program: with people all over the place online—Facebook, MySpace, and a lot of other different networks—we worked hard to make sure anyone who was energized by the campaign and inspired by Barack Obama could share that enthusiasm with their friends, get involved, and do tangible things to help us get closer to victory. The Obama campaign was in many ways a good end to the grassroots energy that was out there.


Both interviews, as far as they go, offer interesting insights into how sharing is approached by innovators within their respective spheres. But though these posts present their subjects as bold in their embrace of sharing and community, their ideas about what sharing means and how it matters are woefully...limited. Fake uses the Obama example to point out how far we've come; but really, does Obama's decision to make public photos of his adorable family mean much more than that he knows how to maintain his image as the handsome, open President who loves his family almost to a fault? I don't imagine we'd be very surprised to learn that Obama's advisors counseled him to make these photos widely available.

Indeed, the Flickr approach, in general, is this: These photos are mine and I will let you see them, but you have to give them back when you're done. It's a version of sharing, yes, but only along the lines of the sharing we learned to do as children.

The same is true of the picture Hughes paints of a campaign that successfully leveraged social networking technologies. The Obama campaign's decision to use participatory technologies was a calculated move: Everybody knows that a.) more young, wired, and tech-savvy people supported Obama than McCain; and b.) those supporters required a little extra outreach in order to line up at the polls on election day. You can bet that if Republicans had outnumbered Democrats on Facebook, Obama's managers would have been a little less quick to embrace these barrier-dropping communication tools.

What we're not seeing so far among these innovators is an innovative approach to sharing--one that opens up copyright-able and patent-able and, therefore, economically valuable ideas and content to the larger community.

I've been thinking about this lately because of my obsession with open education and open access. In particular, educational researchers--even those who embrace open educational resources--struggle with the prospect of making their work available to other interested researchers.

This makes sense to anyone who's undertaken ed research--prestige, funding, and plum faculty positions (what little there is of any of these things) are secured through the generation of innovative, unique scholarship and ideas, and ideas made readily available are ideas made readily stealable. As a fairly new addition to the field, even I have been a victim of intellectual property theft. It's enough to give a person pause, even if, like me, you're on open education like Joss Whedon on strong, feminist-type leading ladies.

But, come on, we all know there's no point to hiding good research from the public. As Kevin Smith writes in a recent blogpost on a San Jose State University professor who accused a student of copyright violation for posting assigned work online,

[t]here are many reasons to share scholarship, and very few reasons to keep it secret. Scholarship that is not shared has very little value, and the default position for scholars at all levels ought to be as much openness as is possible. There are a few situations in which it is appropriate to withhold scholarship from public view, but they should be carefully defined and circumscribed. After all, the point of our institutions is to increase public knowledge and to put learning at the service of society. And there are several ways in which scholars benefit personally by sharing their work widely.


Smith is right, of course, and the only real issue is figuring out strategies for getting everybody on board with the pro-sharing approach to scholarship. The "I made this and you can see it but you have to give it back when you're done" model is nice in theory but, in practice, limits innovation and progress in educational research. A more useful approach might be along the lines of: "I made this and you can feel free to appropriate the parts that are valuable to you, but please make sure you credit my work as your source material." This is a key principle at the core of the open education approach and of what media scholar Henry Jenkins calls "spreadability."

The problem is that there are enough academics who subscribe to the "share your toys but take them back when you're done playing" approach to research that anybody who embraces the free-appropriation model of scholarship ends up getting every toy stolen and has to go home with an empty bag. This is why the open education movement holds so much promise for all of academia: Adherents to the core values of open education agree that while we may not have a common vocabulary for the practice of sharing scholarship, we absolutely need to work to develop one. For all my criticisms of the OpenCourseWare projects at MIT and elsewhere, one essential aspect of this work is that it opens up a space to talk about how to share materials, and why, and when, and in what context. The content of these projects may be conservative, but the approach is wildly radical.