Thursday, January 5, 2023

What Does the Media Say about ChatGPT and Education?

Daniel Hickey & Qianxu Morgan Luo

Like millions of others, we have been quite impressed by the power of ChatGPT. Numerous media accounts argue that education will never be the same. It is remarkably capable of generating original prose that is not detectable by the current generation of plagiarism detectors like Turnitin. Many have noted that ChatGPT is particularly good at writing "compare and contrast" essays, which many educators presumed were difficult or impossible to hack by rewriting information located on the web.

ChatGPT really exploded in December 2022.  We suspect that many educators saw a massive improvement in the depth and quality of take-home exams and end-of-semester essays at that time.  We predict that many of us are going to find our existing approaches to instruction and assessment upended once the new semester begins.

What Does the Media Say So Far?

We are systematically analyzing the accounts as they come out. As of today, we are up to 27, which include both objective reports and editorials. Here is what we have found so far:

  • Eight of them were classified as "worried" or "very worried." These included Stephen Marche's prescient 2021 article in The New Yorker that examined an earlier writing bot.  Recognizing the rapid massive improvement, Marche wrote "the undergraduate essay, the basic pedagogical mode of all humanities, will soon be under severe pressure."
  • Fifteen were classified as "mixed." Many of these were media accounts that aimed to be objective. New York Times technology columnist Kevin Roose pointed out that ChatGPT is "ominously good at answering the types of open-ended questions that frequently appear on school assignments" but is "prone to giving wrong answers." Many suggested in-class handwritten essay exams or asking students to give impromptu presentations on assignments they ostensibly authored.
  • Four were classified as "positive" or "very positive." Most reminded readers of similar concerns with prior technologies like spell checkers and blamed the risk on shallow instruction. For example, English professor Blaine Greteman's editorial in Newsweek argued that it is "time for a new final exam, one that demands students find out something about themselves, and to tell you in a voice that is their own."

Here are our observations of the media coverage so far:

  • As proponents and scholars of online learning, we were surprised by the lack of discussion of the specific consequences for online education. Such settings are likely to preclude in-class essays or impromptu presentations as a short-term response.
  • Given all the suggestions that educators will need to assign in-class handwritten essays, we are surprised that no one has mentioned that many young people are incapable of writing legibly at the speed that would be needed for this to be practical.
  • As learning scientists, we worry that there has not been enough attention to the crucial role that writing plays in learning.  As Marlene Scardamalia and Carl Bereiter convinced us in the late 1980s, skilled writers engage in knowledge construction where they use text to overcome the limits of short-term memory.  In contrast to more novice knowledge-telling writers, skilled writers typically know a lot more when they complete an essay, article, or assignment.
  • To the advocates who liken ChatGPT to other innovations (from the slide rule to graphing calculators to Google Translate), a week of experimentation has convinced us that ChatGPT is already more powerful than all of the other technologies combined.  And it is only going to get more powerful.
Where Are We Going Next?

This is the first of several posts exploring the implications of ChatGPT. The next post will share a paper that ChatGPT wrote for Dan's online graduate course on Learning and Cognition.

P.S. We missed the excellent article in The Chronicle of Higher Education by Beth McMurtrie. The title captures its insight: "AI and the Future of Undergraduate Writing: Teaching Experts are Concerned, but not for the Reasons You Think." It makes several strong points:
  • Typical high school English and the five-paragraph essay are responsible for training a generation of knowledge tellers.
  • Many of the suggestions for thwarting ChatGPT are very labor-intensive.  We are currently writing another blog post that will dig more deeply into this issue.
  • It linked to the public page from Anna Mills compiling suggestions for essay prompts that might thwart chatbots.


Thursday, August 26, 2021

New Article about Situative Assessment

The awesome Diane Conrad of Athabasca University guest-edited a special issue of Distance Education on assessment and was kind enough to accept our proposal to present our situative approach to online grading, assessment, and testing:

Hickey, D., & Harris, T. (2021). Reimagining online grading, assessment, and testing using situated cognition. Distance Education, 42(2), 290–309.

The first part of the paper reframes, for online settings, the "multilevel" model of assessment introduced in a 2012 article in the Journal of Research in Science Teaching and a 2013 article in the Journal of the Learning Sciences:

  1. Immediate-Level Ungraded Assessment of Online Discourse via Instructor Comments
  2. Close-Level Graded Assessment of Engagement via Informal Reflections
  3. Proximal Formative Self-Assessments
  4. Automated Distal Summative Achievement Tests

The second part of the article introduces ten new assessment design principles:
  1. Embrace Situative Reconciliation over Aggregative Reconciliation
  2. Focus on Assessment Functions Rather than Purposes
  3. Synergize Multiple Complementary Types of Interaction
  4. Use Increasingly Formal Assessments that Capture Longer Timescales of Learning
  5. Embrace Transformative Functions and Systemic Validity
  6. Position Learners as Accountable Authors
  7. Reposition Minoritized Learners for Equitable Engagement
  8. Enhance Validity of Evidence for Designers, Evaluators, and Researchers
  9. Enhance Credibility of Scores and Efficiency for Educators
  10. Enhance Credibility of Assessments and Grades for Learners
I was particularly pleased with the new ideas under the seventh principle. We were able to use Agarwal and Sengupta-Irving's (2019) critique of Engle and Conant's (2002) Productive Disciplinary Engagement framework and their new Connective and Productive Disciplinary Engagement framework. It forms the core of our Culturally Sustaining Classroom Assessment framework, which we will present for the first time at the Culturally Relevant Evaluation and Assessment conference in Chicago in late September 2021.

Thursday, May 13, 2021

Articles, Chapters, and Reports about Open Badges

by Daniel Hickey

Thanks to Connie Yowell and Mimi Ito at the MacArthur Foundation's Digital Media and Learning initiative, I had the pleasure of being deeply involved with digital badges and micro-credentials starting in 2010. While we no longer have any funding for this work, my colleagues and I continue to engage with the community. I am thrilled to see the continued growth and the wide recognition that micro-credentials offer new career pathways to non-traditional learners.

I get occasional requests for copies of the chapters, articles, and reports that we produced, as well as some general "where do we begin" queries. Given that we were funded to provide broad guidance from 2012 to 2017, we created resources that both beginners and advanced innovators have found quite useful. We continued to publish after MacArthur ended the DML initiative and the funding ran out. Here is an annotated list of resources. We hope you find them useful!

Getting Started

If you are new to badges and microcredentials, this might be a good place to get some basic background:

Where Badges Work Better

We studied the 30 badge systems that MacArthur funded in 2012 to uncover badge system design principles that might guide the efforts of innovators. These included general principles as well as principles for recognizing, assessing, motivating, and studying learning. The findings were collected in a short report at EDUCAUSE and in our longer report:

We also did a follow-up study two years later to determine which systems resulted in a "thriving" badge-based ecosystem. Most of the constructivist "completion-badge" systems and associationist "competency-badge" systems failed to thrive; many never got past piloting, and some never issued any badges. It turned out that wildly optimistic plans for assessing competency or completion undermined those projects. In contrast, most of the sociocultural "participation-badge" systems were still thriving, in part because they relied on peer assessment and because they assessed social learning rather than individual completion or competency:

Endorsement 2.0 and Badges in the Assessment BOOC

An important development is "endorsement" in the Open Badges 2.0 standard. It allows a "BadgeClass" to carry an endorsement (e.g., from an organization, after reviewing the standards) and each "assertion" of that badge class to carry an endorsement (e.g., from a member of that organization, after reviewing the evidence in the badge). Nate Otto and I summarized this feature in EDUCAUSE Review and predicted its impact in the Chronicle:
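For readers curious what this distinction looks like under the hood, here is a minimal sketch of the two kinds of endorsement objects, written as Python dictionaries that mirror the spec's JSON-LD shape. All URLs, issuer profiles, and comment text below are hypothetical, and this is an illustration of the general structure rather than a complete, validated implementation:

```python
# Sketch of Open Badges 2.0 endorsement objects (all URLs hypothetical).
# An Endorsement is a standalone JSON-LD object whose "claim" points at the
# thing being endorsed: either a BadgeClass (the badge definition) or an
# Assertion (one learner's earned instance of that badge).

badgeclass_endorsement = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Endorsement",
    "id": "https://example.org/endorsements/1",
    "issuer": "https://example.org/org/profile.json",  # the endorsing organization
    "claim": {
        "id": "https://example.org/badges/hipaa-awareness.json",  # a BadgeClass
        "endorsementComment": "This badge's criteria align with our standards.",
    },
    "issuedOn": "2017-06-01T00:00:00+00:00",
}

assertion_endorsement = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Endorsement",
    "id": "https://example.org/endorsements/2",
    "issuer": "https://example.org/members/42/profile.json",  # an individual member
    "claim": {
        "id": "https://example.org/assertions/abc123.json",  # one learner's badge
        "endorsementComment": "I reviewed the evidence attached to this badge.",
    },
    "issuedOn": "2017-06-15T00:00:00+00:00",
}

# Both endorsements share the same shape; only what the claim points at differs.
for e in (badgeclass_endorsement, assertion_endorsement):
    print(e["type"], "->", e["claim"]["id"])
```

The key design point is that an organization endorses the badge definition once, while individual reviewers can endorse each learner's evidence separately.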

This chapter describes the Google-funded "Big Open Online Course" ("BOOC"), which really pushed the limits of open badges, including one of the first examples of "peer endorsement" and "peer promotion." It also showed that our asynchronous model of participatory learning and assessment (PLA) could be used at scale to support highly interactive learning with almost no instructor engagement with open learners:

The Varied Functions of Open Badges

This chapter used the BOOC badges to illustrate the range of functions of open badges. It shows how badges support the shift (a) from measuring achievement to capturing learning, (b) from credentialing graduates to recognizing learning, (c) from compelling achievement to motivating learning, and (d) from accrediting schools and programs to endorsing learning:

This chapter used example badges from sustainability education to similarly illustrate these four functions of digital badges. The badges came from Ilona Buchem's EU-funded Open Virtual Mobility project and the e-Learning Academy of the UN's Food and Agriculture Organization (FAO). Incidentally, the e-Learning Academy features some of the best self-paced open courses I have ever seen. The assessments are great, and you really can't prank them: if it says the course will take two hours, it is really impossible to earn the badges without spending two hours learning (I tried!):

This 2017 chapter presents, in the context of open badges, the situative model of assessment that was first published in Hickey (2003). It is my response to people like Mitch Resnick who claim that open badges will undermine intrinsic motivation. I agree with him that they will if you use them as meaningless tokens. So don't do that, Mitch! Instead, take advantage of the fact that badges contain meaningful information and can circulate in social networks and gain more meaning, which has consistently been shown to enhance free-choice engagement:

Validity vs. Credibility

Early in my journey with digital badges, Carla Casilli blew my mind when her early blog post explained how the "open" nature of open badges forces us to rethink validity in assessment and testing. The ability of a viewer to interrogate the evidence contained in a badge or micro-credential means that the credibility of that evidence is more important than the validity of the credential in the traditional sense. So I was happy to write with her about this important issue:

Wednesday, May 12, 2021

New articles on Participatory Learning and Assessment (including inclusion)

Yikes, it has been a long time since we have posted. Part of what happened is that we redirected our energy from blogging to publishing. Starting in 2019, we began translating our theory-laden design principles into practical steps for readers who may or may not be grounded in sociocultural theories. This turned out to be serendipitous in light of the pandemic and the explosion of interest in asynchronous online learning.

In contrast to our earlier articles, these new articles reflect the influence of current research on power and privilege in the learning sciences. Each includes design principles and/or steps that are intended to "reposition" minoritized learners. In particular, the changes reflect the influence of papers by Priyanka Agarwal and Tesha Sengupta-Irving on Connective and Productive Disciplinary Engagement (CPDE, 2019). Each of the descriptions below is hotlinked to a copy of the article.

This first article is a very gentle introduction to online participatory learning and assessment (PLA). It was written for educators who have no experience teaching online and who are not grounded in any particular theory of learning.

This article describes how we translated the PLA principles into fourteen steps, focusing on engagement routines. It was written for instructional designers and others who are grounded in more conventional cognitive-associationist and cognitive-constructivist theories of learning.

This one introduces ten new situative assessment design principles, building on the "multilevel" assessment model in Hickey and Zuiker (2012). While it includes the theoretical grounding, it was written for readers who might not be grounded in situative theory.

Wednesday, December 21, 2016

Competencies in Context #5: Increasing the Value of Certificates from LinkedIn Learning

By Daniel Hickey and Chris Andrews

The previous post in this series explored LinkedIn endorsements and recommendations. Chris and Dan used those features to locate a potential consultant with particular skills and considered recent refinements to those features. We also explored the new LinkedIn Learning site made possible by the acquisition of Lynda.com. In this post, we explore how endorsements and recommendations might help LinkedIn earn back the roughly $300,000 that it paid for each of Lynda.com's 5,000 courses.

Tuesday, December 13, 2016

Competencies in Context #4: eCredentialing via LinkedIn Recommendations and Endorsements

by Daniel Hickey and Chris Andrews

In the second post in this series on eCredentialing, Dan discussed how new digital Learning Recognition Networks (LRNs) can simultaneously support the goals of learners, educators, schools, recruiters, and admissions officers. A reader posted a question on that post about how the endorsement practices afforded by these new LRNs build on the existing endorsement practices, like those at LinkedIn. Since its launch in 2002, LinkedIn has grown into the largest digital LRN in existence. So, this is a great question. Dan did some digging using his own network to hunt for someone with very specific competencies, while Chris dug into the recent research and improvements to LinkedIn Endorsements. We also peeked into the new LinkedIn Learning made possible by the acquisition of Lynda.com.

Thursday, November 24, 2016

Competencies in Context #3: Open Endorsement 2.0 is Coming

By Daniel Hickey and Nate Otto

In the third post of this series, we discuss the Open Badge Specification and its shift from the Badge Alliance to the IMS Global Learning Consortium in 2017. We then discuss the crucial Endorsement features that will be supported in the forthcoming 2.0 Specifications. We will use the example of Luis Lopez's HIPAA badge described in the first post in this series to consider how these new features might operate. This illustrates how Endorsement 2.0 will be crucial in the new Learning Recognition Networks that Dan described in the second post in this series.

Monday, November 21, 2016

Competencies in Context #2: LRNs for Micro-Masters and eCertificates

By Daniel Hickey

In this detailed post, I discuss the announced release date of the MyMantl Learning Recognition Network (LRN) from Chalk & Wire and argue that such digital LRNs can add value to online career and professional education programs. This includes more conventional continuing education programs and newer MOOC-based "micromasters" programs. Both types of programs promise inexpensive short-term solutions for career entry, change, and advancement, but they introduce serious challenges for assessment and accountability. New digital LRNs can help.

Monday, November 14, 2016

Competencies in Context #1: New Developments at Portfolium

By Dan Hickey
In this detailed post, I illustrate how the Portfolium ePortfolio platform is breaking new ground with digital badges and new networking features that readily connect learners and potential employers. In particular, I highlight my own interaction with a student in LA around one of the badges he earned in his coursework. I presented this example in talks at ePIC in Bologna and MozFest in London, and lots of people had questions about it. What I find particularly exciting about these developments is the healthy competition around the most effective communication of competencies, and evidence of competencies, among educators, learners, and employers. This communication is crucial because it provides information about the context in which students' competencies were developed and (therefore) the range of contexts where those competencies will be most readily deployed in the future.

Monday, August 29, 2016

Badges + ePortfolios = Helping Turn Artifacts into Open Learning Recognition Networks

by Dan Hickey

This post summarizes a meeting between representatives of six leading ePortfolio providers, four digital badge providers, and four professional associations on August 2 in Boston at the Annual Meeting of the Association for Authentic, Experiential, and Evidence-Based Learning (AAEEBL).
We searched for and found synergy between these two crucial technologies that are helping innovators re-imagine how learning can be represented in the Internet era. They are starting to come together to create what some are calling Learning Recognition Networks (LRNs).

This meeting also brings to a close the two-year Open Badges in Higher Education (OBHE) project, carried out with the support of the MacArthur Foundation. We will discuss the Boston meeting and future directions for LRNs in the next Open Badges Community Call hosted by the Badge Alliance. The call is at 12:00 noon EST on August 31, and all are welcome and encouraged to join (meeting at this Uberconference link).