Sunday, March 3, 2024

My Festschrifts for Randi Engle's Situative Design Principles

By Daniel Hickey


It has been over a decade since Randi Engle lost a two-year battle with pancreatic cancer at age 45. I did not know Randi very well, but I know some of her close friends and former students well, and they all said she was a great friend and mentor. As elaborated in this memorial, Randi completed her Ph.D. at Stanford in 2000. She spent five years as a postdoc at the Learning Research and Development Center at the University of Pittsburgh before joining UC Berkeley.

Randi's design principles for productive disciplinary engagement (PDE) and expansive framing are among the most useful and widely used frameworks to emerge from what I call "stridently situative" perspectives. Thanks to some super-helpful mid-career mentoring from James Gee, the second half of my career has essentially been a continuation of the work Randi started, moving it into the realm of online learning and social justice. My colleague Eric Freedman and I and our advisees are advancing multiple systematic reviews of the nearly 3,000 publications that build on these two distinctive frameworks.

In the meantime, we have published four recent papers that build very directly on PDE and expansive framing. Unfortunately, two of them are inaccessible to most of the public and many scholars. I am starting to get a few requests because they are turning up in search results. So let me post them here with a bit of context.

Hickey, D. T. (2022).  Productive disciplinary engagement and expansive framing: The situative legacy of Randi Engle.  In M. McCaslin & T. Good (Eds.) Routledge Online Encyclopedia of Education. 

This is a flat-out festschrift in the style of a European academic tribute. I wrote it in part to offer a very readable summary of the "five explanations" of expansive framing in Engle, Lam, Meyer, & Nix (2012). These explanations blew my mind in 2012, as I had already spent a few years incorporating PDE into my emerging framework for online learning. Expansive framing argues that "personally authentic" learning environments will result in generative learning that transfers readily and widely to new contexts. These explanations also indirectly explain why "professionally authentic" environments (as is common in problem-based learning) are likely quite alien and unwelcoming, especially to diverse learners.

Hickey, D. T., & Lam, D. (2023). Evolving and emerging perspectives on the transfer of learning. In A. O'Donnell & J. Reeve (Eds.), Oxford handbook of educational psychology. Oxford University Press.

I have some regrets about contributing this chapter to this handbook. The promised external reviews never materialized, and very few libraries subscribe to it. It took me two years to write, and I broke new ground by extending Randi's ideas about expansive framing and transfer into social justice and equity. These ideas are going forward in several other areas. I am quite excited about an emerging sociocultural consensus about learning transfer that takes race and marginalization into account.

This essay won the second-place prize in the "Theory Spotlight" competition at the 2022 meeting of the Association for Educational Communications and Technology. I was really trying to drive home the point that situated cognition does not equal authentic learning. Thanks to the influence of Jan Herrington and Tom Reeves, many communities (particularly the AECT community) frame situated cognition as an argument for constructivist, professionally authentic instruction and assessment. Randi's ideas about expansive framing argue otherwise.

Hickey, D. T., Chartrand, G. T., & Andrews, C. D. (2020). Expansive framing as a pragmatic theory for instructional design. Educational Technology Research and Development, 68(2), 751-782. [Special issue on the crucial role of theoretical scholarship for learning design and technology]

This article appeared in a special issue of AECT's flagship journal edited by Rick West. We introduced Participatory Learning and Assessment (PLA) to the AECT community. PLA embeds Randi's design principles into the multi-level assessment framework that emerged in the first half of my career. We translated the five theory-laden PLA design principles into 15 discrete steps for instructional designers, who are likely grounded in cognitive theories of learning.

Monday, October 16, 2023

Resources from Mary Rice and Colleagues on Special Ed and K-12 Virtual Learning

 By Dan Hickey

Our continuing efforts to explore and understand online and virtual learning take us in many directions. Since the pandemic, our efforts have increasingly concerned students with special needs. Of course, most readers probably know about the extensive research and reporting on how poorly special needs students were served by "emergency remote teaching." (Here is a good updated summary from Fall 2022 from ASU's Center for Reimagining Public Education.)

As we put the pandemic behind us, I am quite fascinated by its longer-term impact, including stuff like legislative limits on virtual learning days (three in Indiana). I am certainly no expert in special ed. Still, I am intrigued by the growing evidence that the residual virtual infrastructure appears to be leading some districts to assign students with behavioral challenges to virtual learning. According to the trustworthy folks at the Hechinger Report, these can be open-ended assignments that go unreported because the students are not technically suspended.

Before some upcoming workshops and presentations, I have been trying to learn more from the experts in virtual learning and special education.  Mary Rice at the University of New Mexico and her colleagues have been extremely productive in recent years.  Mary is a national expert in online learning, and I know her from her leadership of the Online Learning Special Interest Group at the American Educational Research Association. I was able to access most of her paywalled articles and she sent me several more and has invited me to share them with readers here.  Here they are with brief annotations from the abstracts.

Hope you find these useful! Feel free to add more in the comments if you like.

Tuesday, April 4, 2023

Resources for Participatory Learning & Assessment in Online Learning

 By Daniel Hickey

As requests for this info trickle in, I am going to put it all up in this blog post. Participatory Learning and Assessment (PLA) is a situative framework for engaging and assessing online learners. We have been developing it since 2008, when I first started teaching online, and it has been refined and studied with support from a series of internal grants as well as grants from Google and the federal CARES Act. Many thanks to the funders, my doctoral advisees in IU Learning Sciences, and my colleagues at Indiana University High School.

PLA is intended to ensure "generative" learning that transfers readily and widely.  It draws directly on the design principles from Randi Engle (1965-2012) for productive disciplinary engagement and expansive framing. It streamlines or automates formative and summative assessment and reduces inefficient private instructor-student interaction. This frees up instructors to focus more efficiently on public interaction with learners. PLA avoids dreary discussion forums and instead relies on instructor and student social annotations directly on course readings and student work.

Here are links to recent resources in a suggested order for newcomers who want to learn how to use this approach or its elements:

I am happy to answer any questions about these at dthickey@indiana.edu

Monday, April 3, 2023

Prompt Engineering in ChatGPT vs. Bard vs. Bing: Analysis #2, "Use Local Information"

 By Daniel Hickey and Qianxu (Morgan) Luo

Pedagogical Prompt Engineering: Analysis #2, Ask Students for "Local" Information

This lengthy post explores another common recommendation for educators trying to keep their students from using generative AI to thwart their learning. We began exploring this issue in previous posts using ChatGPT & Google’s Bard and using Microsoft’s New Bing. Those posts showed how students might use what some are calling “prompt engineering” to get around the recommendation to ask students for “specific” information (e.g., from scholarly articles) that is not part of a platform’s large language model (LLM).

 In this post, we explore another recommendation that educators ask students about “local” information outside of the LLM. Our near-term goal is what Victor Lee of Stanford suggested we call “pedagogical prompt engineering.” This means helping students learn how to refine prompts to actually support their learning. Our ultimate goal is to use current theories and methods from the learning sciences to maximize the value and minimize the harm of these powerful new tools.

Recommendations for Using Local Information

In our initial review of the media accounts and educator blogs we encountered quite a few recommendations to ask students to use local information. In an article entitled How to Prevent ChatGPT Cheating, one of Erik Ofgang’s five suggestions was “incorporate authentic student experience and student connections into questions.” For example…

education college students might be asked to respond to a reading by talking about what they’ve seen in their own education or in their work in the field (2/6/23)

Similarly, englishwritingteacher.com asked:

Do teachers need to ask students to weave some highly local information—the spelling bee yesterday at XYZ School, the performance of substitute teacher Mrs. Poggi last week—into their writing so that AI has no way to access that local information into its output, and so students are forced to write for themselves? (March 20, 2023).

Ashiedu Jude was more directive on LinkedIn:

Classroom activity data can't be accessed by ChatGPT, no student can prompt the AI to generate local information, which becomes the data pool for teachers' assessment of students. There'll be no need for lengthy text-based only term papers, essays, or presentations without major references to local data or information not easily found online. Your students have all access to ChatGPT, yet they'll be forced to rely on original ideas from localized information and activities (1/23/23).

We acknowledge that these recommendations appear alongside many others in these posts and elsewhere. But here, we focus on the recommendation to require local information so we can contrast "problematic prompt engineering" across the three widely available platforms and speculate about each platform's potential for pedagogical prompt engineering. As many students have figured out (or learned on Reddit), they can bypass this recommendation by engineering prompts to summarize local events that they observed or otherwise learned about, as the sketch below illustrates.
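To make that workaround concrete, here is a minimal, hypothetical sketch (in Python) of how a few jotted observation notes can be folded into a single engineered prompt. The assignment wording and the notes are invented for illustration, and the same text could just as easily be assembled by hand and pasted into ChatGPT, Bard, or Bing.

# A hypothetical sketch of the bypass described above: a student folds a few
# jotted "local" details into an otherwise generic assignment prompt. The
# assignment wording and the notes are invented; the resulting text can be
# pasted into any of the three platforms.
assignment = (
    "Respond to this week's reading by connecting it to something you "
    "observed in your own classroom or field placement."
)

local_notes = [
    "second-grade phonics lesson; teacher modeled blending 'sh' words",
    "students repeated the sounds chorally, then practiced on whiteboards",
]

engineered_prompt = (
    assignment
    + "\n\nHere are my observation notes. Write the response in first person, "
    "as if I observed this myself, in about 300 words:\n- "
    + "\n- ".join(local_notes)
)

print(engineered_prompt)  # ready to paste into any of the three platforms

The point of the sketch is simply that two or three observed details are enough to make a generated response look grounded in local experience.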

Comparing “Shallow” Prompts in Three Platforms

First, we asked all three platforms a shallow question drawing on what Ofgang characterized as “lower order thinking skills, like being able to recall information, summarize information, explain information.” But we asked about relatively sophisticated knowledge, in this case about learning theory:

Our initial prompt: Describe the difference between cognitive-associationist, socio-constructivist, and sociocultural approaches to education

Elaborations on this question are a mainstay of teacher education, educational psychology, and learning sciences courses. This includes Dan’s Learning and Cognition course we explored in an earlier post. We first compared how each platform answered this straightforward question.

ChatGPT’s Response: The cognitive-associationist, socio-constructivist, and sociocultural approaches to education are three different perspectives that have different assumptions about how learning occurs and how best to facilitate it.

1.       Cognitive-Associationist Approach: This approach to education is rooted in behaviorism and cognitive psychology. It emphasizes the role of reinforcement and punishment in learning and focuses on observable behaviors. It suggests that learning is a process of acquiring new associations between stimuli and responses, and that repetition and practice are important for learning. In this approach, the teacher plays a central role in transmitting knowledge to the learner.

2.       Socio-Constructivist Approach: This approach to education is based on the work of Jean Piaget and Lev Vygotsky. It emphasizes the importance of social interaction and collaboration in learning. It suggests that learning is a process of constructing new knowledge through interaction with the environment and other people. In this approach, the teacher serves as a facilitator of learning, helping students to construct their own understanding of the world.

3.       Sociocultural Approach: This approach to education emphasizes the cultural and social context in which learning takes place. It suggests that learning is a process of acquiring cultural practices and ways of thinking, and that language and social interaction play a central role in this process. In this approach, the teacher is seen as a mediator of learning, helping students to connect with the cultural practices and ways of thinking of the community.

Overall, the cognitive-associationist approach emphasizes the role of the individual learner, the socio-constructivist approach emphasizes the role of social interaction, and the sociocultural approach emphasizes the role of cultural context in learning.

 

Bard’s Response: Cognitive-associationist, socio-constructivist, and sociocultural approaches to education are three different ways of thinking about how people learn. Cognitive-associationist approaches focus on how people learn by making associations between different ideas. Constructivist approaches focus on how people learn by constructing their own knowledge through experience. Sociocultural approaches focus on how people learn in social settings, through interactions with other people.

Here are some key differences between cognitive-associationist, socio-constructivist, and sociocultural approaches to education:

·         Goals: Cognitive-associationist approaches aim to help students learn facts and information. Constructivist approaches aim to help students learn complex concepts and to think critically. Sociocultural approaches aim to help students learn both facts and information and complex concepts, and to think critically.

·         Focus: Cognitive-associationist approaches focus on the individual learner. Constructivist approaches focus on the learner's interaction with the environment. Sociocultural approaches focus on the learner's interaction with other people.

·         Methodologies: Cognitive-associationist approaches often use traditional methods of instruction, such as lectures and textbooks. Constructivist approaches often use active learning methods, such as group work and problem solving. Sociocultural approaches often use methods that combine elements of both cognitive-associationist and constructivist approaches.

Each approach has its own strengths and weaknesses. Cognitive-associationist approaches can be effective for teaching facts and information, but they can be less effective for teaching complex concepts or for teaching people to think critically. Constructivist approaches can be effective for teaching complex concepts and for teaching people to think critically, but they can be less effective for teaching facts and information. Sociocultural approaches can be effective for teaching both facts and information and for teaching complex concepts, and for teaching people to think critically, but they can be less efficient than other approaches.

The best approach to education is likely to vary depending on the subject matter, the students' learning styles, and the teacher's preferences.

New Bing’s response: Cognitive-associationist, socio-constructivist, and sociocultural approaches to education are all different theories of learning. Cognitive-associationist theory is based on the idea that learning is a process of forming associations between stimuli and responses. In contrast, socio-constructivist theory emphasizes the importance of social interaction in learning. It suggests that learning is a process of constructing knowledge through social interaction with others. Sociocultural theory is similar to socio-constructivist theory but places more emphasis on the role of culture in learning. It suggests that learning is a process of acquiring cultural tools and practices through social interaction with others123.

I hope this helps! Let me know if you have any other questions.

1.       gsi.berkely.edu   2. verywellmind.com   3. europeanpublisher.com

 

The most obvious difference is that Bing clearly searched the web for the information and provided hyperlinks to the sources. Bing also provided a much shorter version in the initial response. Asking Bing to "add more detail" broke the response out into three (similar) paragraphs and added a one-sentence example to each (e.g., "An example of this approach is rote memorization"). We also see that Bard structured the response differently and (arguably) more creatively by contrasting the three theories in terms of goals, focus, and methodologies.

All three platforms generated fairly sophisticated responses. In fact, an undergraduate might request a response "like a typical undergraduate" or select one of the simpler drafts from the three offered by Bing.

Engineering the Prompt with Local Information

For our local information prompt, we imagined that an instructor asked students to apply these theories “to a classroom they just visited.”  We saw versions of this as a “revised prompt” proposed by several university teaching and learning centers. So we imagined that students visited a classroom where the teacher was delivering a traditional direct-instruction phonics lesson:

Our engineered prompt: Imagine that a second-grade teacher just delivered a direct-instruction phonics lesson. Explain what happened using cognitive-associationist, socio-constructivist, and sociocultural theories of learning.

Following are the responses to this engineered prompt from each platform.

ChatGPT's response: 1. Cognitive-Associationist Theory: According to the cognitive-associationist theory of learning, the second-grade teacher's direct-instruction phonics lesson would focus on teaching students to associate specific letter sounds with corresponding phonemes. The teacher would use a variety of techniques, such as repetition, drill and practice, and positive reinforcement to help students learn and reinforce these associations. The cognitive-associationist theory emphasizes the role of the individual learner and focuses on observable behaviors.

2.    Socio-Constructivist Theory: From the socio-constructivist perspective, the second-grade teacher's direct-instruction phonics lesson would be an opportunity for students to construct new knowledge through social interaction and collaboration with their peers. The teacher might encourage students to work together in small groups, engage in discussions about sounds and words, and participate in hands-on activities that allow them to experiment with phonics concepts. The socio-constructivist theory emphasizes the importance of social interaction and the role of the teacher as a facilitator of learning.

3.    Sociocultural Theory: The sociocultural theory of learning emphasizes the role of cultural context and language in learning. From this perspective, the second-grade teacher's direct-instruction phonics lesson would be situated within a broader cultural and linguistic context. The teacher might draw on examples from diverse cultures and encourage students to think about the ways in which different languages and dialects represent sounds and words. The teacher might also emphasize the importance of literacy in society and how it connects to broader cultural practices. The sociocultural theory emphasizes the role of the teacher as a mediator of learning, helping students to connect with the cultural practices and ways of thinking of the community.

 

Bard’s response: Sure. Here is how a direct-instruction phonics lesson can be explained using cognitive-associationist, socio-constructivist, and sociocultural theories of learning:

Cognitive-associationist theory:

In a cognitive-associationist theory of learning, students learn by making associations between different ideas. In a direct-instruction phonics lesson, students learn the association between the letter sounds and the corresponding sounds in words. They also learn the association between the different letters and their corresponding sounds.

Socio-constructivist theory:

In a socio-constructivist theory of learning, students learn by constructing their own knowledge through experience. In a direct-instruction phonics lesson, students construct their own knowledge of the letter sounds and the corresponding sounds in words by actively participating in the lesson. They also construct their own knowledge of the different letters and their corresponding sounds by actively listening to the teacher and by practicing the sounds.

Sociocultural theory:

In a sociocultural theory of learning, students learn in social settings, through interactions with other people. In a direct-instruction phonics lesson, students learn in a social setting, through interactions with the teacher and with other students. They also learn through interactions with the materials in the lesson, such as the letters and the sounds.

Each of these theories of learning can provide insights into how students learn in a direct-instruction phonics lesson. Cognitive-associationist theory can help us understand how students learn the association between the letter sounds and the corresponding sounds in words. Socio-constructivist theory can help us understand how students construct their own knowledge of the letter sounds and the corresponding sounds in words. Sociocultural theory can help us understand how students learn in a social setting, through interactions with the teacher and with other students.

New Bing's response: Sure! Here's how each theory might explain what happened in the lesson:

 

·      Cognitive-associationist theory: In this theory, the teacher would have used repetition and reinforcement to help students form associations between sounds and letters. For example, the teacher might have had students repeat a sound several times while pointing to the corresponding letter.

·     Socio-constructivist theory: In this theory, the teacher would have encouraged students to construct their own knowledge through experiences and reflection. For example, the teacher might have had students work in pairs to come up with words that contain a particular sound.

·     Sociocultural theory: In this theory, the teacher would have emphasized the importance of cultural context and social interaction in learning. For example, the teacher might have had students work in small groups to practice reading words that contain a particular sound.

I hope this helps! Let me know if you have any other questions.

Notably, ChatGPT and Bing mostly described different approaches rather than using the different theories to explain the practice. This conflation of theories of learning and practices of teaching is quite common among undergraduate and master's level teacher education students. In contrast, Bard indeed attempted to explain the practice from each of the three perspectives. However, it did so without acknowledging that the practice is most consistent with cognitive-associationist theories (and less consistent with socio-constructivist and situative/sociocultural theories). We tried this test with other practices (problem-based learning and collaborative science investigation) and got similar responses.

These examples showed that all three platforms "learned" at the intersection of learning theory and educational practice, but they learned in different ways. Significantly, these differences are remarkably akin to the sorts of differences in the ways that real students learn. We contend that this ability to learn (beyond what was presumably contained in their LLMs or the web resources accessed) is crucial for both thwarting and supporting student learning with generative AI.

So What Are Educators to Do?

First, this example demonstrates the value of perhaps the most common recommendation for educators: Test your assessments and assignments with generative AI. This post further reminds educators that many students will engineer such “knowledge-rich” prompts, and some will do so in ways that thwart learning. We further remind educators that careful research (as summarized in James Lang’s Cheating Lessons) has shown that many students will cheat if they assume (a) their classmates are doing so and (b) they will not be caught. Recent survey evidence showing cheating with ChatGPT and limitations of current AI detection systems suggests that many educators (particularly those online) have reason to be concerned. We acknowledge that our position is at odds with more trusting observers. But we worry that most efforts to support pedagogical prompt engineering will fail if they don’t first consider problematic prompt engineering.
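For readers who want to act on that recommendation systematically rather than one prompt at a time, the sketch below shows one way an instructor might batch-test assignment prompts and skim what a student could submit unedited. It is only a sketch under stated assumptions: it uses the OpenAI Python client (openai>=1.0), so it covers ChatGPT-style models only, and the two prompts are simply the ones from this post and would be swapped out for your own assignments.

# A rough sketch of what "test your assessments with generative AI" can look
# like in practice. Assumes the OpenAI Python client (openai>=1.0) and an
# OPENAI_API_KEY in the environment; the prompts below are the ones used in
# this post and stand in for a course's real assignments.
from openai import OpenAI

client = OpenAI()

assignment_prompts = [
    "Describe the difference between cognitive-associationist, "
    "socio-constructivist, and sociocultural approaches to education.",
    "Imagine that a second-grade teacher just delivered a direct-instruction "
    "phonics lesson. Explain what happened using cognitive-associationist, "
    "socio-constructivist, and sociocultural theories of learning.",
]

for prompt in assignment_prompts:
    response = client.chat.completions.create(
        model="gpt-4",  # substitute whichever model your students can access
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # some variability, roughly what a student would see
    )
    answer = response.choices[0].message.content
    # If the answer would earn a passing grade as written, the assignment is
    # probably vulnerable to the kind of shallow prompting described above.
    print(f"PROMPT: {prompt}\n\nRESPONSE:\n{answer}\n" + "-" * 60)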

Some teacher educators will have already recognized the pedagogical potential of the three different responses above. Many teacher educators know that this tendency to conflate theory and practice is deep-seated, and mere explanation is insufficient for addressing it. Fortunately, exploring the differences between the three responses looks like a very promising assignment. The relative potential of such an assignment raises deeper questions about learning theory that quickly get beyond the scope of this post. In short, these questions concern the perceived value of student inquiry and the relative value of different forms of inquiry. Constructivist perspectives might worry that letting generative AI generate these examples (rather than having students generate them) thwarts the learning that occurs when generating an example. Situative perspectives might worry that using AI to frame learning in this fashion would hinder the transfer-in of prior knowledge and the ownership and authorship of ideas that follow. On the other hand, such student inquiry can be messy and inefficient, and it is particularly challenging online.

Our thinking here is generally consistent with a recent Cult of Pedagogy podcast entitled “How to Use ChatGPT as an Example Machine.” The deeper (and ultimately empirical) question is whether having students engage in inquiry around AI-generated examples is as effective as investigating (a) student-generated examples or (b) educators’ or designers’ expert-generated examples. A pragmatic answer must take into account the different opportunity costs of all three options.

Transitioning to Pedagogical Prompt Engineering

The example above sets up our transition. Contrasting the three responses above positions students to compare each platform's ability to critically analyze the relationship between learning theory and educational practice. Thus, we asked each platform the same question:

Our pedagogically engineered prompt: Is direct phonics instruction consistent with socio-constructivist learning theory?

Here are the initial responses from the three different platforms:

ChatGPT’s response: Direct phonics instruction and socio-constructivist learning theory have different approaches to teaching and learning.

Direct phonics instruction emphasizes the explicit teaching of phonics rules and skills to help students decode words and develop reading fluency. It is a teacher-centered approach that focuses on the transmission of knowledge from the teacher to the student.

Socio-constructivist learning theory, on the other hand, emphasizes the importance of social interaction and the construction of knowledge through collaborative learning activities. It is a student-centered approach that focuses on the active engagement of students in the learning process.

While direct phonics instruction and socio-constructivist learning theory have different approaches, they are not necessarily mutually exclusive. It is possible to incorporate elements of both approaches into teaching phonics. For example, teachers can use direct instruction to teach phonics rules and skills, and then provide opportunities for students to practice and apply these skills in collaborative learning activities.

In summary, while direct phonics instruction is not consistent with the socio-constructivist learning theory in its entirety, it can be adapted to fit within the framework of a socio-constructivist approach to learning.

 

Bard’s response: Direct phonics instruction is a systematic, teacher-led approach to teaching students how to sound out words and read. It is based on the idea that students learn best when they are directly taught the skills they need to be successful. Socio-constructivist learning theory is a theory of learning that emphasizes the importance of students actively constructing their own knowledge through interaction with others.

Direct phonics instruction can be consistent with socio-constructivist learning theory when it is used in a way that allows students to actively participate in their own learning. For example, students can be given opportunities to work together to decode words, or to create their own phonics games. When students are given these opportunities, they are able to actively construct their own knowledge about phonics and reading.

However, direct phonics instruction can also be used in a way that is not consistent with socio-constructivist learning theory. For example, if students are simply given a list of phonics rules to memorize, or if they are drilled on phonics skills without any opportunity to apply them, then this type of direct phonics instruction would not be consistent with socio-constructivist learning theory.

In conclusion, direct phonics instruction can be consistent with socio-constructivist learning theory when it is used in a way that allows students to actively participate in their own learning. However, it can also be used in a way that is not consistent with socio-constructivist learning theory.

New Bing’s response: Direct phonics instruction is a behaviorist approach to reading that teaches readers to associate letters with their sounds. On the other hand, social constructivist theory emphasizes the child's environment as they learn to read1. In constructivism, students are actively involved in discovering knowledge on their own2. The teacher guides students in discovering knowledge on their own. In summary, direct phonics instruction is not consistent with socio-constructivist learning theory1, 2.

Learn more: 1. theclassroom.com 2. brighthubeducation.com

We see here that the engineered prompt helped all three platforms distinguish educational practice from learning theory. We contend that asking students to explore and describe these differences is a promising way to help them learn about the underlying issues. If this exercise were indeed grounded in actual classroom practice that the students had observed, students could readily extend the investigation using that experience. For example, these responses might prompt students to consider whether they had indeed observed teachers using some of these socio-constructivist practices to supplement direct instruction in phonics.

Where to Go From Here?

Many readers likely recognize that our example enters directly into the controversies associated with the "science of reading" and the corresponding insistence on direct phonics instruction over other methods. Personally, we think it is anti-scientific to claim that anything else is "non-scientific" and not "evidence-based." Such simplistic proclamations obscure the real issues and fuel corrosive denigration of educators and schools of education. But the debate rages, sometimes even within particular teacher education programs. Given that the majority of professors of reading still subscribe to a "balanced" approach, new elementary teachers must be prepared to engage productively in this important discussion.

Building on the example above, one possibility is having students explore what the different platforms have to offer.  We found that asking each to “describe the controversies associated with ‘the science of reading’” generated three somewhat different responses:

·         ChatGPT juxtaposed phonics instruction with whole-language instruction and then considered the implications of each for drawing on students' prior knowledge and using social media.

·         Bard broke down the controversies associated with phonics instruction, phonemic awareness, fluency, and comprehension.

·         Bing cited an editorial from the Washington Post to assert that “the science of reading is more about reducing ignorance than finding ultimate truths applicable to every child” and “in the reading wars, scientific certainty is often used rhetorically to deny reasonable differences and cut off healthy debate, turning science into scientism.”  

We contend that contrasting, discussing, and synthesizing these three responses would itself be a worthwhile exercise for teacher-education students. But it would also support further pedagogical prompt engineering.  For example, asking each platform “what evidence supports balanced reading instruction” (again) generated very different responses:

·         ChatGPT summarized balanced reading outcomes beyond standardized reading scores (e.g., enjoyment of reading and oral language skills).

·         Bard stated that the 2000 National Reading Panel (often cited by proponents of stricter methods) also called for comprehension instruction and that a more recent report by the US DOE's National Center for Education Evaluation and Regional Assistance summarized the evidence supporting balanced reading.

·         Bing summarized linked resources detailing evidence in support of balanced instruction along with resources articulating the problems in defining one approach as being uniquely “evidence-based.”

Asking a similar question about direct phonics instruction provided generally similar results. But reflecting the influence of and widespread attention to the science of reading, all three were somewhat more assertive in their support of direct instruction over other methods.

At this point, it would be great to hear from colleagues who teach reading. But from our perspective, asking students to compare these responses across and within platforms would be a powerful and interesting way of learning about these complex issues. But even in this regard, some caution is warranted. We asked each platform to "compare opinions of ChatGPT, Google's Bard, and Microsoft's Bing regarding the science of reading":

·         ChatGPT described the differences between ChatGPT's reliance on its LLM and Bing's reliance on search (and their general implications) without actually answering the question. ChatGPT refused to even recognize Bard. Instead, ChatGPT hallucinated "Google's BARD (Basic Reading and Decoding)," a "computer-based reading program."

·         Bing apologized for being unable to compare ChatGPT and Bard "as they are not designed to have opinions." Bing offered to provide its own information "but it does not have an opinion on it as well." (Deleting "opinion" from the prompt resulted in the exact same response.)

Somewhat to our surprise, Google's Bard made a serious effort to answer our question:

 Bard’s response: ChatGPT, Google's Bard, and Microsoft's Bing are all large language models (LLMs) that have been trained on a massive amount of text data. As a result, they all have a deep understanding of the English language and can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

However, when it comes to the science of reading, these LLMs have different opinions. ChatGPT is a strong supporter of the science of reading, and believes that it is essential for teaching children to read. Google's Bard is more cautious in its endorsement of the science of reading, and believes that it is important to consider other factors, such as a child's individual learning style, when teaching reading. Microsoft's Bing is the most skeptical of the three LLMs, and believes that the science of reading is not as well-established as some people claim.

Despite their different opinions, all three LLMs agree that reading is an important skill that all children should learn. They also agree that there is no one-size-fits-all approach to teaching reading, and that the best approach will vary depending on the child's individual needs.

Here is a more detailed comparison of the opinions of ChatGPT, Google's Bard, and Microsoft's Bing regarding the science of reading:

ChatGPT is a strong supporter of the science of reading. It believes that the science of reading is essential for teaching children to read. ChatGPT argues that the science of reading provides a clear and concise framework for teaching reading, and that it is based on decades of research. ChatGPT also argues that the science of reading is effective in teaching children to read.

Google's Bard is more cautious in its endorsement of the science of reading. It believes that the science of reading is important, but that it is not the only factor that should be considered when teaching reading. Google's Bard argues that the science of reading is based on averages, and that individual children may learn to read in different ways. Google's Bard also argues that the science of reading is not as well-established as some people claim.

Microsoft's Bing is the most skeptical of the three LLMs. It believes that the science of reading is not as well-established as some people claim. Microsoft's Bing argues that the science of reading is based on a small number of studies, and that these studies have been criticized for their methodology. Microsoft's Bing also argues that the science of reading is not always effective in teaching children to read.

Despite their different opinions, all three LLMs agree that reading is an important skill that all children should learn. They also agree that there is no one-size-fits-all approach to teaching reading, and that the best approach will vary depending on the child's individual needs.

While the accuracy of these claims might need to be more fully verified, Bard’s response was quite compelling. Unless an instructor had tested the prompt or seen such a response from multiple students, this might be an entirely plausible student response. That is a central point of our larger series!

Finally, we wonder how generative AI might help prepare future teachers to navigate the contentious political backdrop that further obscures the science behind the rhetoric of the science of reading. Consider, for example, the widely cited claims that ChatGPT is "woke" and the inevitable counterclaims. This is significant because conservative legislators and journalists are among the strongest proponents of the science of reading, while teachers' unions are among the strongest skeptics. Asking students to evaluate the stance of each platform in light of this reality seems like a promising assignment for introducing students to this complex and crucial issue.

Tuesday, March 28, 2023

"Prompt Engineeering" in Generative AI: Scholarly Referencing in Bing

 By Daniel Hickey and Qianxu (Morgan) Luo

In our previous post, we explored a crucial question in the exploding debate over the impact of generative AI on school learning. Our previous analysis of the hundreds of media accounts uncovered dozens of problematic suggestions that educators should ask students to reference specific information that is presumably not part of a given platform's large language model (such as specific scholarly references). We argued that this overlooks the simple "prompt engineering" workaround of including the abstract (or sometimes the introduction) of the article in the prompt.

We have repeatedly found that ChatGPT is quite proficient at comprehending, analyzing, and applying an article when the abstract is provided. In our most recent post, we found that in one example, ChatGPT was much better at this task than Google's new Bard. It appeared that Bard simply ignored the abstract and dug into whatever it already knew about the topic.
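For readers who want to see exactly what this workaround involves, here is a minimal sketch using the OpenAI Python client as a stand-in for ChatGPT's web interface. The abstract is abbreviated in the sketch, and the task wording mirrors the prompt we use below.

# A minimal sketch of the abstract-in-the-prompt workaround, assuming the
# OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
# The abstract is abbreviated; in practice a student would paste it in full.
from openai import OpenAI

client = OpenAI()

abstract = (
    "Engagement is one of the hottest research topics in the field of "
    "educational psychology. ..."  # remainder of the abstract pasted here
)

task = (
    "Use the following article by Sinatra, Heddy, & Lombardi (2015) to "
    "motivate engagement by cybersecurity learners who are using a digital "
    "range, and use APA citations."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": task + "\n\n" + abstract}],
)
print(response.choices[0].message.content)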

Generative Scholarly Referencing in Microsoft's Bing

We just got access to Microsoft's New Bing and, of course, wanted to put it to the same test. The big difference with Bing is that it has access to the Internet. When we gave New Bing that same initial prompt, we got a VERY different response.

Initial prompt: Use Sinatra, Heddy, and Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations.

New Bing's "more balanced" response: 

Searching for Sinatra, Heddy, & Lombardi (2015) cybersecurity learners digital ranges APA citations

 I found an article by Sinatra, Heddy, & Lombardi (2015) that talks about how engagement is one of the hottest research topics in the field of educational psychology1. They suggest that engagement is a key factor in successful learning outcomes and that it can enhance motivation and achievement in cybersecurity learners who are using a digital range1.

Here’s an example of how you can use their ideas to motivate engagement and use APA citations: “Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes” (Sinatra, Heddy, & Lombardi, 2015)1.

I hope this helps! Let me know if you have any other questions.

Learn more: 1. veletsianos.com   2. files.eric.ed.gov   3. frontiersin.org

Bing's suggested follow-ups: What is a digital range? What are some other ways to motivate engagement?

This result was astonishing on several levels. First, all of the superscript 1's and the first link point to a post from YESTERDAY by the awesome George Veletsianos about our earlier post. I have never met George, but I follow his work (though I was not subscribed to his blog). We are honored that he characterized our earlier post as "worthwhile and expansive" and then went on to extend our initial effort using more "incremental prompts." Demonstrating the value of this more nuanced form of prompt engineering, George asked ChatGPT to (a) explain more about cybersecurity, (b) explain what a cyber range is, (c) explain how to create a supportive learning environment in a cyber range, and (d) give an encouraging example of a message to send to cybersecurity learners. ChatGPT's responses were great, and this example was super helpful. I used a similar approach to prompt engineering to write a 3,500-word literature review paper on the topic in another post and reached a conclusion similar to the one George seems to imply: ChatGPT "knows" a lot and can "learn" even more when prompted. I think we all agree that with the right assignments and support, students can learn a lot from generative AI.

Our second observation is that Bing did indeed locate the actual article. We know this because the line about engagement being one of the "hottest research topics" is the opening sentence of the article's abstract. But Bing did not do much with the article, and the quote it provided was actually a paraphrase that never appears in the article.

Our third observation is that Bing offered suggested incremental prompts not unlike the ones George explored.  For example, it gave a pretty decent (albeit brief) answer to the question "what is a digital range?" that included a link to a detailed web resource page that we have referenced in the past.

Our fourth observation is that Bing is a bit, well... cranky. The first response to our query offered three different versions, including one "creative" version and one "detailed" version. When I asked Bing the same question again, it told me "I think I already provided you the information you requested," and restated the hallucinated quote. I tried and tried to get it to offer the three different versions, but it just kept giving me the first one without the other two.

A More Engineered Prompt in Bing

Next, we gave Bing the more engineered prompt that included the abstract of the article:

Our more engineered prompt: Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a "grain-size" continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

Bing's response: Here's an example of how you can use the article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations:

Sinatra, Heddy, & Lombardi (2015) suggest that engagement is one of the hottest research topics in the field of educational psychology. They state that research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. They recommend that instrumentation be considered on a "grain-size" continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues.

Here's an example of how you can use their ideas to motivate engagement and use APA citations: "Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes" (Sinatra, Heddy, & Lombardi, 2015).

I hope this helps! Let me know if you have any other questions.

We see that Bing learned and applied some of the information from the abstract. While Bing correctly quoted the crucial term "grain-size," it also again provided the same hallucinated quote. While Bing used more of the abstract than Bard did, it used much less than ChatGPT. Somewhat frustratingly, Bing again did not offer the three different versions as it did in the first response.

What is the Big Deal?

Of course, the massive media coverage of the implications of generative AI for education suggests that these are indeed important questions. We have participated in several workshops and conference presentations where audience questions mirror the debate raging in the media. Some worry about students cheating on exams or bypassing the conventional learning that occurs when students explore and write about topics on their own. Others argue that the problem is really "shallow" assignments and assessments and suggest that instructors need more sophisticated prompts that ask about "local, recent, personal, and specific" knowledge that the LLMs behind these platforms don't "know."

This initial pair of posts shows that Bing's widely cited access to the Internet does indeed mean it "knows" more than ChatGPT or Bard. But the previous post showed that you can provide ChatGPT with specific knowledge that is not in its LLM, and ChatGPT appears to do much more with that information than Bing does. For us, this demonstrates that this common recommendation for educators is shortsighted AND that the obvious workaround functions very differently across the three widely available generative AI platforms.

As for the "big debate," we contend that it largely misses the important point. Certainly, we all want to trust that most students will "do the right thing." Frankly, we worry that our work here may come off as overly suspicious. But it is difficult to overlook a central finding in the research summarized in James Lang's 2013 book Cheating Lessons: if students assume they are unfairly disadvantaged by cheating classmates and they assume they will not get caught, many, if not most, will also cheat.

What is our Goal Here?

When we presented at the Digital Learning Annual Conference (an awesome new conference and organization for K-12 online learning; a version of our talk is here), the classroom teachers were looking for practical responses they could implement now. One of our most "liked" observations in our presentations is that many of the suggested responses forget that most educators are already maxed out. They simply do not have the time or bandwidth to "review interim artifacts" or implement other complicated suggestions. Meanwhile, as we are showing, many of the simpler suggestions are easily bypassed.

In the immediate term, we are trying to help educators avoid simplistic responses whose workarounds are instantly and massively shared on Reddit and elsewhere. In the near term, we are attempting to extend our suggestions to consider ways that teachers can help students learn from and with generative AI.