By Dan Hickey
This is the third post about my collaboration with
Iridescent Inc., a science education non-profit in LA. This new post describes how
a key assumption in newer "situative" theories of motivation can
resolve the tensions between prior empiricist and constructivist approaches. When
combined with Design Based Research methods, this assumption can result in a
coherent "roadmap" towards synergy across the three approaches. I
contend that such a roadmap can help Iridescent and other informal STEM innovators find a route from current levels of engagement to much higher levels, in both quantity and quality.
In my first post in this series, I described Iridescent's two primary programs
and my evaluation of one of the programs involving Engineering Design
Challenges at their Curiosity Machine website. Iridescent staff hosted weekly Curiosity
Classes for children and parents and encouraged families to continue working on
and submitting design challenges from home. We found that only about a third of
the families reported submitting from home, and fewer still began working on
new challenges from home.
In my second post in this series, I elaborated on three different approaches
that Iridescent might take to increase family engagement with the Design
Challenges during, between, and after the weekly Curiosity Classes. These approaches
follow from three "grand theories" of learning that many educational researchers
embrace. While different researchers use different labels, I will call them empiricism, constructivism, and situativity. The second post showed how
the first two strategies were "orthogonal." Specifically, empiricist
and constructivist perspectives prescribe very different guidelines for
motivating engagement. These approaches are appealing because they provide educators
and innovators a clear path forward, and promise to work in most any learning
context. Unfortunately, these two paths go in opposite directions. This is
because empiricist perspectives characterize engagement in terms of the ways
individuals behave, while constructivist perspectives characterize engagement
in terms of the way typical humans think and process information. Thus, the
incentives, rewards, and competition suggested by empiricist perspectives are
likely to undermine the intrinsically-motivated learning that constructivist
perspectives assume is needed to support meaningful cognitive engagement in the
short term and create self-determined learners in the long term.
The Search for Synergy
I have been fascinated by this debate for my entire career. In
articles published in 1997 and 2003, I explored whether newer situative
approaches to motivation might offer a way around this tension and reveal new
sources of synergy. To reiterate, situative approaches characterize engagement as engaged participation: increasingly successful participation in the social and cultural practices that define "communities of practice." In the second post, I summarized the implications of situative perspectives for helping Iridescent motivate extended family engagement. This
revealed that situative perspectives do not lead to the more straightforward
and general paths that follow naturally from empiricist and constructivist
perspectives.
In this new post, I explore how a specific aspect of situativity
and "design-based" research methods from the Learning Sciences can
provide a practical road map that Iridescent could use to find the synergy
needed to draw from all three approaches. In addition to supporting their own programs and families, I believe that following this roadmap can also yield design principles useful for motivating engagement in museums, afterschool programs, websites, and other entities that support informal science education.
Combining Individual and Social Activity
Before I can create this roadmap for synergy across
motivational practice, I have to make a big
decision. It turns out that there are two very different ways of combining
the activity of individuals with the activity of social groups. To extend the
roadmap metaphor, if we do not address this issue from the outset, it will be
difficult or impossible to follow the map. More specifically, the evidence of
different types of engagement will point in different directions. So guidance
is needed to inform smaller decisions along the way.
For researchers working in the empiricist and constructivist perspectives, reconciling individual and social activity has not been particularly problematic. As I argued in my 2003 paper, researchers in these traditions typically embrace an "aggregative" reconciliation, in which social activity is simply characterized using aggregated notions and evidence from individual activity. Hence, behavioral theorists like Sigrid Glenn working in the empiricist tradition characterize social activity using "metacontingencies" that represent the aggregated effects of the incentives for individual activity. In contrast, researchers working in the constructivist tradition like Albert Bandura characterize social activity as "collective efficacy," which represents the sum of the efficacy of individuals.
Unfortunately, this actually exacerbates the problem of having to choose one
perspective over the other. Obviously, the two resulting characterizations of
social activity are incompatible. And from a situative perspective they are
both highly incomplete. As such, an aggregative reconciliation works against
synergy.
A very different kind of reconciliation follows from an under-appreciated assumption of situative theories. Jim Greeno introduced this issue in a 1993 chapter and then developed it further in his 1998 presidential address to the American Psychological Association. His so-called "situative synthesis" argued that a situative approach treats social and cultural activity as "primary" and treats both the behavior of individuals (in empiricist perspectives) and the way that humans typically process information (in constructivist perspectives) as "secondary." In Iridescent's case, this means that they can treat the way that children and families respond to motivational practices that follow from empiricist and constructivist perspectives as "special cases" of engaged participation.
What this means in practice is that innovators are free to try out different motivational practices while looking at the consequences of those practices across the entire range of engagement. Importantly, this approach
provides guidance for resolving the many questions that will come up along the
way when the evidence is unclear or contradictory. Returning to the roadmap metaphor, the
situative synthesis helps guide the decision making process when several
different routes towards the same goal are apparent. By treating social and
cultural practices as primary, innovators can focus first on engaged social participation
and then second on individual behavior and cognition.
An Example of the Situative Synthesis with a Leaderboard
As I explored in the second post, many working in the
empiricist tradition would suggest that Iridescent explore some sort of competition
or gamification to motivate family engagement. Even though Skinner and other
strict behaviorists argued against competition, it is an increasingly popular
strategy for motivating engagement, particularly with the rise of
market-oriented educational reforms. Indeed, some sort of leaderboard for each
Design Challenge is one of the motivational strategies that Iridescent has
considered. Meanwhile, most working in the constructivist tradition would
vehemently oppose competition or gamification. An alternative approach would first gather some baseline evidence of all three forms of engagement. The team would then design some sort of leaderboard that they believe is most likely to motivate desirable forms of engagement without causing negative consequences. They would then introduce the new practice and see what happens; specifically, they would look at how the leaderboard works and begin gathering evidence of engagement.
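To make this concrete, here is a minimal sketch, in Python, of what such a leaderboard might compute. The record format and the scoring rule (ranking families by the number of Design Challenges submitted) are my own illustrative assumptions, not anything Iridescent has actually built.

```python
from collections import Counter
from datetime import datetime

# Hypothetical submission records: (family_id, challenge_id, submitted_at)
submissions = [
    ("family_01", "spinning_machine", datetime(2016, 3, 1)),
    ("family_02", "spinning_machine", datetime(2016, 3, 2)),
    ("family_01", "gliding_device", datetime(2016, 3, 8)),
]

def leaderboard(records):
    """Rank families by number of submitted Design Challenges (one possible scoring rule)."""
    counts = Counter(family for family, _, _ in records)
    return counts.most_common()  # [(family_id, n_submissions), ...], highest first

for rank, (family, n) in enumerate(leaderboard(submissions), start=1):
    print(f"{rank}. {family}: {n} challenge(s) submitted")
```

Even a sketch this small makes design questions visible, such as whether to rank families by raw counts or by mentor-evaluated quality.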
After the first implementation, it is certain that
additional refinements would be needed to make a leaderboard work in this
context as envisioned. These refinements would be informed by further consideration of the suggestions from each of the three perspectives (summarized in the previous post). As such, it makes sense to postpone any formal study of engagement until the innovation is working well. In the meantime, informal assessments of behavior, intrinsically motivated learning, and engaged participation can be made along the way to help inform those refinements. In doing so, the larger project learns how to gather valid evidence of engagement. This prepares the project for the formal evaluation needed to validate the innovation and the program as a whole.
As long as the team keeps track of the logic behind the design of the leaderboard, it is nicely poised to examine the engagement that results, make refinements accordingly, and try again. In doing so, they can develop motivational design principles that will support coherence
for their team going forward. Importantly, this will also result in
motivational design principles that can be used by other innovators facing
similar challenges. This principled iterative refinement is a signature
research method for the Learning Sciences known as Design Based Research. DBR
dates back to the early 1990s when Ann Brown began conducting "design
experiments." DBR has evolved significantly over the years and has been
somewhat controversial (because of concerns over generalizability). From my perspective, the debate has been settled by the specific references to DBR in recent educational RFPs from major federal agencies, including the National Science Foundation (particularly the Cyberlearning initiative) and the Institute of Museum and Library Services (particularly the STEM Expert Facilitation of Family Learning in Libraries and Museums initiative).
The Challenges of Assessing Engagement
As introduced in the previous post, each type of engagement is
typically assessed with a different method and studied using a different unit
of analysis. Each method and unit has advantages and disadvantages; synergy means taking advantage of the strengths and minimizing the weaknesses of each, as follows:
Measuring behavioral engagement is rather straightforward: In
our example, Iridescent would examine whether children and families completed,
started, and submitted more Design Challenges from home after the leaderboard
was introduced. They might also look for other behavioral indicators of potential negative consequences, such as a drop-off in engagement as the leader's score became impossibly high for most families to reach. Because the unit of analysis is typically an individual person, behavioral engagement can be studied with relatively few participants. Thus, it might be possible to introduce the leaderboard in the middle of a program and compare each family's engagement in the first half with the second half.
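As a rough illustration of how simple this before-and-after comparison could be, the sketch below counts each family's home submissions on either side of a hypothetical leaderboard launch date; the log format and the cut-off date are assumptions made for the sake of the example.

```python
from collections import defaultdict
from datetime import date

# Hypothetical log of home submissions: (family_id, submission_date)
home_submissions = [
    ("family_01", date(2016, 2, 10)),
    ("family_01", date(2016, 3, 20)),
    ("family_02", date(2016, 3, 25)),
]

LEADERBOARD_LAUNCH = date(2016, 3, 1)  # assumed mid-program introduction date

def before_after_counts(log, cut):
    """Count each family's home submissions before vs. after the leaderboard was introduced."""
    counts = defaultdict(lambda: {"before": 0, "after": 0})
    for family, when in log:
        counts[family]["before" if when < cut else "after"] += 1
    return dict(counts)

print(before_after_counts(home_submissions, LEADERBOARD_LAUNCH))
# {'family_01': {'before': 1, 'after': 1}, 'family_02': {'before': 0, 'after': 1}}
```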
The measurement of behavior can be better understood using the metaphor that follows from the first of three "worldviews" advanced by the philosopher Stephen Pepper. These worldviews are consistent with our three "grand theories" of learning. Empiricist perspectives are consistent with Pepper's "mechanistic" worldview. This leads to the metaphor of the learner as a machine. When
designing machines, inventors and engineers can make changes to a single
machine and see what happens. The results are typically easily measurable and
often speak for themselves.
Assessing self-determination is more challenging. This is because a given behavior might be the result of the range of forces arrayed along this table (borrowed from the Learning Snippets blog). Returning to the example from the previous post, imagine if Iridescent offered families a $20 Amazon credit for every Design Challenge they submitted. If there were some sort of quality criteria and evaluation, it is quite possible that the incentivized submissions would appear reasonably similar to the ones that families submitted without the incentive. However, if we administered a survey of self-determination (examples here, all based on the table below), we would almost certainly find that the cognitive engagement in the incentivized families falls to the left of cognitive engagement in the non-incentivized families. Indeed, it is easy to imagine some
children in the incentivized families reporting "amotivated"
engagement (all the way to the left of the table) as they crank out yet another
Design Challenge for their share of the incentive. Similarly, it is difficult
to imagine children in the non-incentivized families demonstrating external or
introjected regulation (because they would just say they did not want to do it
and in most families that would be fine).
This example illustrates why self-report assessment
dominates studies of self-determination. This also illustrates why the typical
unit of analysis is groups of
individuals. Constructivist perspectives are consistent with Pepper's second, "organismic" worldview. This leads to the metaphor of learners as growing plants. Comparing self-determination under different conditions is a lot like comparing the effectiveness of two fertilizers: the agronomist applies each treatment to one of two similar plots of plants and then compares the average yield of the two plots.
Following this example, our hypothesis about incentives could
be tested by having two groups of families complete design challenges with and
without incentives, and having parents and children complete an online
self-report survey with each challenge. Unfortunately, such a study would take
a lot of time and require enough families (e.g., 30 per group) to allow
statistical tests to distinguish between random variation and systematic
differences. Particularly with smaller samples, random assignment and other procedures are needed to control for potential confounds. Furthermore, by the time the data are gathered and analyzed, it might be too late to act on the results. This makes self-report measures difficult to use in the kind of iterative design-based refinements described above. This is one of the reasons I stopped using self-report measures in my own research once I completed my dissertation in 1995. However, I returned to self-report surveys in my studies of the Quest Atlantis educational videogame once I figured out they could be reframed using the situative synthesis, as described below.
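For readers who want to see the shape of the group comparison described above, here is a minimal sketch using an independent-samples t-test on hypothetical per-challenge survey scores. The scores, the 1-7 scale, and the scipy-based analysis are my illustration, not the instrument or analysis Iridescent would necessarily use.

```python
from scipy import stats

# Hypothetical self-determination scores (1-7 scale), averaged per family from a per-challenge survey
incentivized = [3.2, 4.1, 2.8, 3.5, 4.0, 3.1, 3.7, 2.9]
non_incentivized = [4.8, 5.2, 4.5, 5.0, 4.7, 5.4, 4.9, 5.1]

# Independent-samples t-test: is the difference larger than random variation alone would produce?
t_stat, p_value = stats.ttest_ind(incentivized, non_incentivized, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# With groups this small, only a fairly large difference reaches p < .05, which is
# why something like 30 families per group (and random assignment) is typically needed.
```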
Observing engaged participation is both harder and easier. Because participation and the resulting identities are highly contextual, it is impossible to measure or assess them accurately. They can only be understood using interpretive methods like discourse analysis that take into account the specific social and cultural context in which that participation takes place and in which identities are negotiated. This requires a great deal of training to do well. I know this because I was not trained in this manner and so I really struggle to do it well. The reason it is so difficult is that the unit of analysis for engaged participation is activity in context. Situative theories are consistent with Pepper's third worldview, contextualism. This worldview uses a historical event as the organizing metaphor for learners. There is no "accurate"
or even "complete" understanding of a given historical event. Rather,
one's interpretation of an event must take into account the context in which
the event occurred. Perhaps more importantly, a "full" interpretation
of a historical event must also take into account the context from which it is
being interpreted.
Fortunately, in other ways, it can be quite easy to observe
engaged participation. To reiterate from the first post, we are concerned here
with disciplinary engagement. Specifically
we are concerned with whether or not children and parents are participating
successfully (and with increasing success) in the engineering design practices
(curiosity, creativity, and persistence) at
the core of Iridescent's mission. There are lots of indicators of this
engagement. Persistence is probably the easiest, as indicated by the number of times a child or a family redesigns their device to meet and exceed the specific goal of the challenge. All other things being equal, more redesigns = more persistence. Similarly, creativity is displayed by the use of novel solutions that were not presented or suggested in the instructional materials. Thus, more novel solutions = more creativity. The practice of curiosity might simply be indicated by the number of Challenges the family pursues at home. One idea for going further would be to add links to additional disciplinary resources, such as more advanced videos or devices. Families that explore those links after submitting a challenge (and perhaps while waiting for feedback from the online mentor) are clearly practicing curiosity more than families who do not.
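To show how these three indicators might be tallied from submission records, here is a minimal sketch. The record fields (redesign_count, used_novel_solution, followed_resource_links) are hypothetical stand-ins for whatever the Curiosity Machine actually logs.

```python
from dataclasses import dataclass

@dataclass
class ChallengeRecord:
    family_id: str
    redesign_count: int            # persistence: how many times the device was redesigned
    used_novel_solution: bool      # creativity: a solution not suggested in the materials
    followed_resource_links: bool  # curiosity: explored extra resources after submitting

def participation_summary(records):
    """Aggregate rough indicators of persistence, creativity, and curiosity per family."""
    summary = {}
    for r in records:
        s = summary.setdefault(r.family_id,
                               {"challenges": 0, "redesigns": 0, "novel": 0, "explored": 0})
        s["challenges"] += 1
        s["redesigns"] += r.redesign_count
        s["novel"] += int(r.used_novel_solution)
        s["explored"] += int(r.followed_resource_links)
    return summary

records = [
    ChallengeRecord("family_01", redesign_count=4, used_novel_solution=True, followed_resource_links=False),
    ChallengeRecord("family_01", redesign_count=2, used_novel_solution=False, followed_resource_links=True),
]
print(participation_summary(records))
```

Counts like these are only a starting point; as argued below, they become meaningful only when interpreted against each family's particular context.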
As elaborated in the first post, the other key aspect of disciplinary
engagement concerns disciplinary concepts.
This is the stuff that disciplinary experts know,
independent of context. In the design challenges, these are concepts like load and tension that are introduced in the inspirational and instructional
videos. Engaged participation is about using disciplinary concepts to
participate more successfully in disciplinary practices. This is what I described in my first post from my visit to one of the earlier family evening programs. I watched a dad gently help his daughters use the concepts of thrust and angle as they persistently experimented with the number and direction of exhaust straws in the Spinning Machine design challenge. While the
girls were "discovering" these concepts in their devices, they were
learning how to use them to participate more successfully in the engineering
design practices. Importantly, they were also learning the technically correct
labels for those concepts.
A key point about engaged participation is that experienced educators and designers (who are knowledgeable about a particular educational context) are most likely to "know it when they see it." This is because they are in the best position to consider all of the contextual factors that influence any interpretation.
This knowledge is needed to interpret observations with enough confidence to
inform curricular designs. For example, a particular level of engaged
participation in an advantaged family with professional parents is not nearly
as convincing evidence of success as that same level of engagement in a
disadvantaged single-parent family. Furthermore, the same incentive is likely to have a different impact on an advantaged family than on a disadvantaged one.
Focusing primarily on such contextual observations is paramount in designing effective curricula for particular contexts. I believe that this is what
Iridescent has been doing all along, and explains why their programs are
already so engaging for their targeted learners.
Applying the Situative Synthesis to Iridescent's Design Challenges
To reiterate, the situative synthesis treats individual
behavior and individual cognition as "special cases" of engaged
participation. This means that further refinements of the Design Challenges and
the Curiosity Machine should focus primarily on engaged participation and only
secondarily on behavioral and cognitive engagement. This has several big
advantages. The first is that it allows innovators to use engaged participation
to "settle" the debates. Consider for example, if a leaderboard increases
measured behavioral engagement while decreasing assessed cognitive engagement. Given
the prior research on incentives, this actually seems quite likely. If the introduction
of the leaderboard clearly increases engaged participation as well, this would
presumably trump a small decrease in cognitive engagement.
Perhaps the biggest advantage of the situative synthesis is that
it overcomes the methodological equivalent of "teaching to the test."
When one targets behavioral or cognitive engagement directly, it is hard to
know if the resulting behavior or cognition will "transfer" to other
settings and contexts. This is particularly an issue with behavioral
interventions such as incentives. Hundreds of studies have shown that
free-choice engagement is diminished when incentives are withdrawn. Going back to our example of a $20 incentive for each submitted challenge, the prior research convinces me that we would see a dramatic increase in behavioral engagement while the incentive was available and an equally dramatic decrease when the incentive was withdrawn. While not as dramatic, a similar "test-prep" phenomenon is possible with practices for supporting self-determination and intrinsic motivation. Providing optimal challenge, offering useful performance feedback, and helping learners relate to others are all likely strategies for fostering
self-determination and intrinsic motivation during
the design challenges. Whether or not these desirable forms of cognitive
engagement generalize to other (perhaps less supportive) STEM opportunities is
far less certain.
Proponents of empiricist and constructivist interventions have conducted studies showing generalizability of increases in behavioral and cognitive engagement. But these studies were carried out in controlled laboratory contexts, and I find them unconvincing regarding effects in more particular contexts. More importantly, I am convinced that evidence of behavioral and
cognitive engagement that follows from interventions targeting engaged participation
is by definition more likely to transfer to other contexts. This is because
these changes are indirect "residual" effects of the intervention. In
other words, evidence of behavioral or cognitive engagement from situative
refinements is itself evidence of generalizability and transfer.
An Example of the Situative Synthesis with Educational Videogames
As I mentioned above, I stopped using self-report
assessments of motivation after I completed my dissertation. My evaluation
of The Adventures of Jasper Woodbury math
problem solving videos uncovered some interesting effects on self-reported
interest and intrinsic motivation. Yet I was left frustrated that I could not
do anything with those results to improve Jasper Woodbury or the way teachers
used it. From 1995 to 2005, I explored the situative synthesis described above as
it related to educational assessment in studies of educational multimedia (including
the GenScope genetics program and three programs
from the NASA-funded Classroom
of the Future). From 2005 to 2010, I extended that program of research to assessment studies of the Quest Atlantis educational video game with Sasha Barab.
In 2009 and 2010, I extended the situative assessment research
in Quest Atlantis to include a quasi-experimental study of incentives and their
impact on engaged participation and cognitive and behavioral engagement. Studies
carried out with Eun Ju Kwon and Michael Filsecker compared 100 sixth-graders
playing two different versions of the Quest Atlantis Taiga game. The game took
place in a virtual national park where students served as apprentice rangers examining
water quality for the park ranger (played by their teacher). The inquiry challenge
in this game was figuring out who and what was responsible for declining fish stocks. One version of the game included a public "leaderboard" that tracked players' progress through the game stages. The leaderboard also displayed status according to the ranger's evaluations of the "field reports" that players submitted at each stage. The incentivized game also included digital badges awarded for higher levels of player status; the badges promised the earner special powers in the game. The other version of the game did away with both the leaderboard and the badges and instead appealed to intrinsic forms of motivation for excelling in the game (e.g., satisfying curiosity, saving the park), consistent with Malone and Lepper's (1987) guidelines.
As reported in the 2014 paper with Michael, we studied the engaged participation of the students in their role as apprentices to the park ranger by coding the content of each student's field reports. Specifically, we examined (a) how many of a dozen targeted
disciplinary concepts students used and (b) whether they used those concepts
appropriately. It turned out that players in the incentivized condition used
more of the concepts and did so more accurately. We also studied self-reported
intrinsic motivation as students progressed through each of the four game
stages, and studied changes in personal interest in scientific inquiry and
water quality with pre and post surveys. Students in the incentivized condition
reported higher levels of intrinsic motivation and relatively larger gains in
personal interest than the non-incentivized students. Some, but not all, of the differences were statistically significant. Unfortunately, a software problem precluded a planned comparison of behavioral engagement based on total time-on-task and logins from home. Given that all of the prior research suggested increased behavioral engagement in the incentivized condition, we were able to convince the reviewers that we had sufficiently searched for the hypothesized negative consequences of the incentive practices and did not find them.
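For a concrete picture of what that coding implies, here is a minimal sketch of how the two counts could be tabulated for a single field report. The concept list and the per-report code format are simplified assumptions, not the actual coding scheme from the 2014 paper.

```python
# Abbreviated, illustrative list standing in for the dozen targeted disciplinary concepts
TARGET_CONCEPTS = {"turbidity", "dissolved_oxygen", "erosion", "eutrophication", "ph", "runoff"}

# Hypothetical coded field report: each concept a student used, with a coder's judgment of appropriateness
coded_report = [
    ("turbidity", True),
    ("erosion", True),
    ("ph", False),              # targeted concept mentioned but used inappropriately
    ("photosynthesis", True),   # not one of the targeted concepts, so not counted
]

def score_report(codes):
    """Return (number of targeted concepts used, number of those used appropriately)."""
    targeted = [(concept, ok) for concept, ok in codes if concept in TARGET_CONCEPTS]
    used = len({concept for concept, _ in targeted})
    appropriate = len({concept for concept, ok in targeted if ok})
    return used, appropriate

print(score_report(coded_report))  # (3, 2)
```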
Naturalistic vs. Interventionist Research Goals
One of the big takeaways for me from the Quest Atlantis studies was the fundamental tension between naturalistic studies of the way the world is and interventionist studies focused on changing the world. It took two annual cycles with significant federal funding to refine the experimental design and instrumentation. For example, we struggled to maximize the motivational appeal of the incentives without confounding the experimental design. Case in point: one of the most motivating features of the in-game badges was giving players new tools that would help them solve the STEM challenge, but giving those players an actual advantage would have introduced a serious confound in the experimental design. We also found that the peer reviewers were reluctant to let us report interactions in the changes in interest from pre-test to post-test that were on the margins of statistical significance, even though all of the interactions pointed to increases (or smaller decreases) in interest in the incentivized condition. More generally, many of the resources that might have gone into further refining the incentive practice using informal indicators of engagement were consumed by the laborious efforts to more formally observe, assess, and measure engagement.
In retrospect, I view the challenges I encountered as particularly
worrisome for the field. This is because I believe I was well-prepared to deal
with them. I had encountered similar tensions with the situative assessment
studies, and had been engaged in a long running debate with the Educational
Psychologist Richard Walker about the importance of "what is" versus
"what might be" in terms of situative theories of motivation. In a
number of papers, Richard took issue with my argument for the pragmatic value
of locating goals and values primarily in the social context and only secondarily
as internalized by individuals. Ultimately it is a rather philosophical
difference. I agree with Richard that goals and values are transformed as they
are internalized and then again as they are externalized in achievement
contexts. My point was simply that, because those transformations are carried out using socially constructed knowledge, we are justified in focusing primarily (if not entirely) on situated models of engagement when attempting to transform educational practices.
This tension between naturalistic experiments and observations and interventionist design-based research is not going away any time soon. But as mentioned above, many federal agencies are making explicit reference to DBR in their requests for educational research proposals. Particularly promising is the definition of DBIR (design-based implementation research) by Bill Penuel, Barry Fishman, and colleagues, which extends these ideas to broader systemic transformation. What is emerging across the Learning Sciences and
beyond are numerous "road maps" that include design principles that
appear more productive for engaging and assessing learning than what came
before.
The Challenge of Program Evaluation
Ultimately the challenge may lie in whether or not the field
can define credible models for evaluating informal and formal educational programs
and curricula. Most funding agencies, and particularly federal agencies, are
still under enormous pressure to eventually "prove" that
interventions "work" in rigorous randomized clinical trials. Such
studies are very expensive to carry out. True experimental designs also present a fundamental challenge to generalizability. Very few schools are led and structured in ways that make it possible to randomly assign students to experimental and control conditions. And in many cases, the most appropriate control condition requires withholding an obviously valuable treatment (such as feedback).
A 2003 paper with Steve Zuiker argued that conventional "objective" approaches to program evaluation are very problematic, particularly when carried out by external partners. They tend not to provide evidence and information that is useful for advancing projects and informing the field. Rather, we argued that program evaluation should be a natural extension of the design-based iterative refinements of educational innovations. Specifically, we argued that what many innovators do can be characterized as design-based iterative refinements that align immediate and close-in engagement with proximal indicators of engagement and learning. The impact of those refinements can then be evaluated objectively with distal (and possibly "remote") indicators of engagement and learning. I am still trying to figure out just how this would map to Iridescent's Family Science program. Clearly the completion and submission of Design Challenges from home represents a distal indicator of engagement. While remote-level indicators of engagement are hard to obtain in informal settings, continued work on Design Challenges after the Family Science program is over might serve. At proximal, distal, and remote levels, indicators of transferable learning can likely be obtained by assessing the amount and precision with which families use the disciplinary concepts in their descriptions of their devices and their reflections on their Design Challenges. Having these levels of indicators can provide the objectivity necessary to validate claims regarding efficacy, regardless of whether one is working with external program evaluators.
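One way to picture that proximal/distal/remote structure for the Family Science program is as a simple mapping from levels to candidate indicators, as in the sketch below. The specific indicators are my provisional reading of this post rather than an established evaluation instrument.

```python
# A provisional mapping of evaluation levels to candidate indicators for Family Science
ENGAGEMENT_INDICATORS = {
    "proximal": "Engaged participation during the weekly Curiosity Classes",
    "distal": "Completion and submission of Design Challenges from home",
    "remote": "Continued work on Design Challenges after the program ends",
}

# A learning indicator that can be assessed at every level: disciplinary-concept use
LEARNING_INDICATOR = "Amount and precision of concept use in device descriptions and reflections"

for level in ("proximal", "distal", "remote"):
    print(f"{level:>8} | engagement: {ENGAGEMENT_INDICATORS[level]}")
    print(f"{'':>8} | learning:   {LEARNING_INDICATOR}")
```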
In closing, I will mention a new alternative that looks worth exploring: the Value Creation framework recently advanced by Etienne Wenger and his wife Beverly Trayner. Given that it is an explicit program evaluation framework that builds directly on Wenger's foundational work on learning communities, it might have quite a bit to offer. I hope to explore this potential in a future post.
References
Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in Psychological Science, 9(3), 75-78.
Glenn, S. S. (1988). Contingencies and metacontingencies:
Toward a synthesis of behavior analysis and cultural materialism. The
Behavior Analyst, 11(2), 161.
Greeno, J. G., Smith, D. R., & Moore, J. L. (1992).
Transfer of situated learning. In D. Detterman & R. J. Sternberg (Eds.), Transfer
on trial: Intelligence, cognition, and instruction (pp. 99-167). Norwood,
NJ: Ablex.
Greeno, J. G. (1998). The situativity of knowing, learning,
and research. American Psychologist, 53(1), 5-25.
Filsecker, M., &
Hickey, D. T. (2014). A multilevel analysis of the effects of external
rewards on elementary students' motivation, engagement and learning in
an educational game. Computers & Education, 75, 136-148.
Hickey, D. T. (2003). Engaged participation versus marginal nonparticipation: A stridently sociocultural approach to achievement motivation. The Elementary School Journal, 401-429.
Hickey, D. T., & Zuiker, S. J. (2003). A new perspective for evaluating innovative science programs. Science Education, 87(4), 539-563.
Malone, T. & Lepper, M. (1987). Making learning fun: A
taxonomy of intrinsic motivations of learning. In R. E. Snow and M. J. Farr
(Eds.) Aptitude, learning, and instruction: Vol. 3. Cognition and affective
process analyses (pp. 223-253). Hillsdale, NJ: Lawrence Erlbaum.
Penuel, W. R., Fishman,
B. J., Cheng, B. H., & Sabelli, N. (2011). Organizing research and
development at the intersection of learning, implementation, and design.
Educational Researcher, 40(7), 331-337.
Wenger, E., Trayner, B., & de Laat, M. (2011). Promoting
and assessing value creation in communities and networks: A conceptual
framework. The Netherlands: Ruud de Moor Centrum.