Monday, September 2, 2013

On MOOCs, BOOCs, and DOCCs: Innovation in Open Courses

This post examines the features of Anne Balsamo's DOCC (distributed open collaborative course) in light of current issues in open courses. It discusses the pros and cons of a distributed approach to curriculum in light of the BOOC (big open online course) on educational assessment that Indiana University is offering in Fall 2013.

My friend and neighbor Beth Plale sent me a link to this article about Anne Balsamo's distributed open collaborative course, or DOCC (pronounced "dock"), on Feminism and Technology. Beth is a Professor in the School of Informatics and Computing. Beth studies "big data," and I have learned a lot from her about computing.

I first heard about Balsamo's DOCC from Jenna McWilliams.  Jenna is a doctoral student in the IU Learning Sciences program who helped me expand my learning sciences research into social media back in 2007.  Jenna teases me about coining the term BOOC (big open online course) for the course on educational assessment that we are offering for up to 500 learners starting this September.  But Jenna was quite excited about the content and the approach of Balsamo's course.  We both laughed about this proliferation of acronyms.  My favorite is the small private online course (SPOC).  As Jenna pointed out, this is simply another name for an "online course."

Some Serious Innovation in Open Online Courses
Since I began drafting this post, Balsamo's DOCC has already gotten a lot of press.  Mark Guzdial has written about it on the Computer Educator's Blog.  Cathy Davidson's blog post at HASTAC pointed out that while some find this "acronymic tendency" humorous, it highlights a much-needed search for innovation in instructional models in open courses (hear, hear!).  The title of the September 18 article about the DOCC ("Feminist Anti-MOOC") suggested yet another rant about the dubious engagement in many massive open courses. But Balsamo's description of the course suggests some serious innovation in learning:
A DOCC is different from a MOOC in that it doesn't deliver a centralized singular syllabus to all the participants. Rather it organizes around a central topic. It recognizes that, based on deep feminist pedagogical commitments, expertise is distributed throughout all the participants in a learning activity, and does not just reside with one or two individuals.
Like many MOOCs, this course will feature weekly videos.  In this case the videos will feature interviews and discussions with leading thinkers about feminism and technology.  At more than a dozen participating colleges, professors will offer their own courses for credit and create their own assignments and assessments.  
I really like this idea, because it supports the kind of deep disciplinary engagement and local learning community that seems so lacking in most MOOCs.  Balsamo pointed out:
Who you learn with is as important as what you learn.  Learning is a relationship, not just something that can be measured by outcomes or formal metrics.
I could not agree more.  Many of these same ideas are represented in the approach known as Connectivism, as advanced by George Siemens and Stephen Downes. They take the notion of distributed knowledge even further, allowing the majority of the content to be constructed in the interaction among online participants. A particularly noteworthy development in this regard is the History and the Future of Education project that Cathy Davidson and others at HASTAC are heading up.  This is a "coordinated multi-university initiative dedicated to thinking together about the future of the university."  

But Is Distributed Content Always Best?
However, there are some online course contexts where one does not want all of the core course knowledge to be constructed by participants.  This is certainly the case in our upcoming BOOC. There is a core body of knowledge about educational assessment that I want to use as a starting point for creating communities around productive forms of disciplinary engagement. We are using a widely respected textbook that Jim Popham has done a great job keeping up to date.  

But just because we are using a textbook does not mean that Popham gets the last word on all things assessment.  Quite to the contrary.  I disagree strongly with some of his arguments.  For example, he says that grades should be based entirely on things that can be objectively measured; I think that doing so forces teachers and students to focus only on forms of knowledge that can be captured by classroom assessments and external tests. My colleague Cassandra Guarino will be leading the part of the BOOC that focuses on controversial new "value-added modeling" teacher accountability policies. When she and I covered that topic in my previous for-credit version of the course, she felt that Popham's text steered the class to focus on the criticisms of value-added modeling.  She felt they needed a more objective consideration of how these models work.  And I agree. I personally think that introducing VAM alongside new standards and new tests will be a disaster (because increased stakes demand that tests be even more "objective" and therefore more removed from each teacher's actual classroom practice).  But value-added teacher evaluation is going to happen.  Someone who takes my course should understand VAM and be prepared to deal with its consequences.

In short, it seems like a distributed approach to a topic like this might reproduce what happened in our class previously--lots of criticism and dismissal, but not enough engagement with the actual disciplinary knowledge up front for that discussion to be truly productive.

Complex Concepts Need Concrete Contexts
But in these and many other examples, starting with a core text allowed for a focused discussion of these very issues.  By having the students first rank and discuss the relative relevance of the key concepts in the textbook for their own practice, the three professional networking groups in the class established a sufficiently "personalized" understanding of the disciplinary issues so that they were able to understand and discuss them.  This in turn prepared them to understand how the disciplinary ideas in the text took on new meaning and relevance in different contexts of practice.  For example, the students in my class came to appreciate why I think purely objective grading might make more sense in an Algebra class (where most of the knowledge is likely to be very procedural) than in a Composition class (where most of the knowledge is contextual and therefore consequential for practice).

In the previous course and the BOOC, students' personal interpretation (in weekly wikifolios) and shared discussion (in threaded comments on the wikifolios) are used to contextualize the nuanced and more advanced course content. For example, Dr. Guarino initially suggested that we have the students instead read the Gates Foundation report on Measures of Effective Teaching.  I agree that this is a great discussion of the evaluation of teaching (particularly because it focuses on evaluating teaching and not teachers). But after a decade of teaching this course, I am confident that most of the grad students who take it (mostly MEd students, with a mix of doctoral students, researchers, etc.) would simply not be able to make sense of it as a formal course assignment.  It is simply too dense and disconnected from the practice of running a classroom or a school.

By participating in the threaded discussions of the student wikifolios where they discussed value-added modeling, Cassie and I were able to insert these more abstract and difficult concepts and links to external resources directly into the contextual interactions of the students.  Importantly, we also learned ways to connect these concepts to the students' experiences.  For example, the future administrators in the educational leadership group looked at VAM differently than the students in the other groups.  We were able to use this knowledge to craft a highly contextualized weekly summary in which we raised some of these more complex concepts and pointed the students to the links.  Interviews with students after the class was over suggested that most of them read and understood the weekly summary and that some (but not all) eventually read the executive summary of the Gates MET report. Most importantly, I am confident that every one of the students remembers that there is a report and other resources out there and will know that they can turn to them next year when VAM gets introduced and starts turning our schools on their heads.

I am curious what others will say about this.  In particular I wonder if people think that the content of the educational assessment BOOC should be more distributed in the future.  If so what would that look like?


  1. Hello, I understand your point about distributed assessment being a problem, particularly if you are concerned about everyone acquiring core knowledge in a particular field. I am one of the instructors teaching a Distributed Open Collaborative Course (DOCC), in my case, a graduate seminar at the University of Illinois, Urbana-Champaign. For our course, we are writing contracts with each student to align with their goals and needs, rather than having a set of standard assignments. That is part of the DOCC idea: using thematic materials, we have designed 16 different courses at 16 different institutions--from small liberal arts colleges to large research universities, from "Hispanic-serving institutions" to community-based courses. While Anne Balsamo has been a key leader for the DOCC and for FemTechNet, it isn't accurate to say "Balsamo's DOCC." Follow us on, or on Twitter @DOCC2013. Thanks!
    Sharon Irish

  2. Sharon--
    Thanks for your comment and suggestions. I (sheepishly) revised the post to reflect your point about the broader organization behind this effort.

    As for the issue about assessment: I did not really get into the issue of assessing outcomes, but you are right in showing how they loom. I think that your suggested assessment practice makes perfect sense. It would be pretty silly to standardize the assessment for something that is so distributed. That actually is a pretty good basis for evaluating whether a particular set of instructional goals can be distributed. In some cases, like my assessment course, it really is possible to define a particular set of knowledge that can be meaningfully assessed and possibly even measured. But in the case of FemTechNet (and in some of the other courses I teach), the learning goals are fundamentally about learning to participate in the practices of a community. And in the case of a community like FemTechNet, both the community and the practices are nascent and emergent. So contracts, portfolios, and papers all make sense because they provide evidence of participation. Perhaps most importantly, they can reflect the fact that participation means different things to different participants, even at the same site.

    Good luck with your endeavor. I look forward to seeing how things go!

  3. For sure, "participation" means many different things, as you note. But still, for every learner, it would be helpful to know if the DOCC is really contributing to the following goals, and how--acknowledging the roles of place, institutional cultures and student backgrounds. How do we know if we have:
    -developed ethical and equitable practices for more socially just global communities [we have to define these phrases, of course]
    -developed innovative uses for digital technologies that serve important cultural and social needs, anywhere and everywhere [this seems somewhat measurable through portfolios and projects]
    -identified and preserved the history of feminist contribution to technological innovation [I guess we count the quality and numbers of contributions to Wikipedia for example]
    -advanced feminist principles of social justice in creating educational models and pedagogies for the future [I guess here we need to specify what outcomes we think align with these principles]

    In any case, I am encouraged that you think portfolios and contracts make sense!

  4. Sharon--
    My theoretical orientation always points me to search for evidence of productive forms of disciplinary engagement. I define productive primarily in terms of engagement in discourse that connects the disciplinary concepts of the course with the varied contexts and experiences of the participants. You can interpret that by looking at discourse, but it is rather hard to formally assess whether or not that discourse has occurred by looking at portfolios or other artifacts. We use portfolios (actually wikifolios) extensively in my assessment BOOC, but we will also have formal assessments because they are helpful for ensuring (and motivating) transfer of individual understanding from those interactions. My main point is that you have to be very careful assessing artifacts like portfolios, as you can really screw up the discourse that they can support. If the discussion around portfolios is not disciplinary but rather about what you have to do to get a good grade, that is a bad sign.

  5. I was excited about this model of online course when I saw the announcement. As you noted, Dan, it does come a lot closer to Siemens' and Downes' connectivist model than the common pattern of 2012-13's institutional MOOC. As someone who usually instigates trying to "connectivize" my experience of centralized MOOCs that I'm in by spreading the discussion out from the closed forum of each MOOC site, I was excited to see a university-sponsored effort try a move away from centralization as well.

    As far as the distribution of content curation goes, I sit on both sides of the fence. I like giving individual learners and guides the leeway to decide whether a particular resource should be included in the course, but I also recognize the value of an expert's curation of a course of study through the literature of a field. As a learner, I don't trust myself to know in advance what the important readings are in a field, and I know that I will never get to that level of expertise in most of the fields I enter. A well-tailored reading list is my friend in these contexts. The connectivist model of an online course does not completely do away with an instructor-curated syllabus; it just admits that participants will naturally bring in their own networks, resources, and examples to the discussion. For the Assessment BOOC, decentralizing the reading list wouldn't mean abandoning a textbook; it just might mean bringing in student-identified examples of blog posts about assessment techniques in the wild.

    I do have a separate curiosity about the Distributed Open Collaborative Course. How is this model open? When I saw the announcement, I looked around for signup links or spaces where discussion would be occurring, and it seems that there are no open spaces associated with the course. There is also value in closed, protected discussion spaces, but it leaves me wondering what sense of "open" the organizers were using when they chose it. Stephen Downes frequently writes about how the American institutional model of the MOOC (which he calls an xMOOC to distinguish it from his connectivist cMOOCs) has tended to break down the MOOC acronym one word at a time. In reaction to the MOOC, some providers are delivering experiences that are either not free (open), massive (the BOOC, perhaps), online (with in-person components), or even courses (see Harvard's new thinking about breaking down courses into modules: ). So it's interesting to see a breakdown of the MOOC actually move closer to the original connectivist model, even though the connectivists would criticize this DOCC for lack of openness.

  6. Nate--
    I don't think you have to choose one side of the fence or the other! The point is that open content curation will be more appropriate in some contexts while instructor-defined content will be more appropriate in others. The real question is when and for what?

    In the Assessment BOOC, we will open up the content significantly when we get to assessment policies because they differ by state and role, and because there is a lot of controversy. We can't even scratch the surface of the policies about standardized testing, for example, so there is no point in even trying. Rather, we want students to begin engaging with professional peers. And we really want non-peers like teachers and administrators learning how each other think about policies.

    This is all very different than, say, the guidelines for making good performance assessments. Our text has a nice straightforward set of guidelines that students should practice using before we open things up.

  7. Daniel wrote, "At more than a dozen participating colleges, professors will offer their own courses for credit and create their own assignments and assessments."

    I'm wondering how fractured this could become and whether, in some cases, it amounts to duplicated and/or wasted effort in the area of assessment. It would make sense for professors to share their assessments with one another, with the understanding that they are all free to use what's useful to them.

    I also wonder what happens when it comes to yet another term in the proliferation of acronyms: OLAs, or "online learning activities." If we start taking bits and pieces of MOOCs (or other forms of online learning) and fitting them Lego-like into personalized courses, then how do we devise proper assessments? Is there a way to pull them together so that bits and pieces can be used for customization? It feels as if the hard part is finding the right balance of centrifugal and centripetal forces in all this.

    1. Mark--
      Great question. Mary-- Yup. It gets pretty messy pretty quickly. I am actually not a very big fan of standardization because I think learning and knowing are highly contextual. Case in point: somebody on the Facebook page for the Educational Assessment BOOC questioned whether somebody who completed the course (which is a trimmed-down version of a pretty comprehensive grad-level course) could be deemed an "expert." I suspect this question came from an assessment researcher or scholar. Our course is intended to make someone an expert in their specific context. In the past, students who completed my course have told me that, even before the course was over, they were deemed the resident assessment expert at their school or program. This is because they know more about assessment than anybody else in that particular setting or domain. Decades of failed assessment reforms have convinced me that most of the assessment solutions designed and promoted by external "experts" fail because they ignore local expertise.