In mid-April, Stanford University hosted the "Learning Summit 2016: Inventing the Future of Higher Education." For those of us who study how the newer processes and protocols of using student data carry ethical and legal consequences, one session in particular should be of interest: Marco Molinaro (UC Davis) moderated a panel on the "Responsible Use of Student Data for Individual and Organizational Improvement," with speakers Martin Kurzweil (ITHAKA S+R), Mitchell Stevens (Stanford), and Kent Wada (UCLA). Kurzweil recently published a blog post summarizing the panel discussion, and it raises some important points.
Kurzweil parses the distinction between what Randy Bass calls an "integrative model" and a "disaggregative model" of innovation.
Kurzweil also asks the important question, "[D]o students benefit more from being sorted based on predicted outcomes, or from being allowed the freedom to explore and to fail?" This question becomes increasingly important when examining what motivates learning, what happens to students who cannot cope with failure, and what obligation institutions bear to provide support for their students. The difference between what Bass calls an "integrative model" and a "disaggregative model" is, I would like to suggest, less about how innovation occurs and more about who controls the influence of that innovation.

To draw on our work for an example, open digital badges are igniting a deep conversation about the "value" and "currency" of microcredentials as they begin to circulate in social media and as they lead to new career and educational opportunities. A formal example of this discussion is the American Council on Education's recent publication, "Quality Dimensions for Connected Credentials"; an informal example is the set of notes taken during a recent Open Badges in Higher Education talk on badges in the humanities (see lines 164-199; chat). Badges re-route curricular control away from a centralized authority of "knowledge acquisition" and shift that power into the hands of those issuing the badges. My co-authors and I recently explored this in more detail, but I will say here that "disaggregative models" of innovation threaten entrenched, institutional control of credentialing, knowledge, and skills.

Ethically, I would say a further question deserves attention: if the ends (individuals getting a job or additional educational opportunity) justify the means (microcredentialing via disaggregated systems of learning), is autonomy threatened? In other words, do "disaggregative models" threaten the autonomy of learners, or do they encourage individuals to seek out credentialing systems that have local or national value, build their own curricular pathways to achieve future goals, and pursue jobs or additional education that can help them build a better, freer life? This is all to say that perhaps the ethical question raised by such "disaggregative models" lies not so much in the individual technology as in the promise or peril that such innovation brings to individuals.
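To make the point about where control sits a bit more concrete, here is a rough sketch of the kind of self-describing metadata a badge carries. The field names and URLs below are illustrative assumptions loosely modeled on the Open Badges idea, not quoted from the specification; what matters is that the issuer, the criteria, and the evidence all travel with the badge itself rather than residing with a central registrar.

```python
import json

# Illustrative sketch only: these field names approximate the idea behind an
# open digital badge (an assertion bundling issuer, criteria, and evidence);
# they are not copied from the Open Badges specification.
badge_class = {
    "name": "Digital Humanities Methods",  # hypothetical badge
    "description": "Applied text-mining methods to an archival corpus.",
    "criteria": "https://example.edu/badges/dh-methods/criteria",  # defined by the issuer
    "issuer": {
        "name": "Example University Writing Program",  # the issuer, not a central registrar
        "url": "https://example.edu/writing",
    },
}

assertion = {
    "recipient": "student@example.edu",  # who earned the badge
    "badge": badge_class,                # what is certified, and by whom
    "evidence": "https://example.edu/portfolios/student/project",  # the earner's work
    "issuedOn": "2016-05-01",
}

# The credential is self-describing and portable: anyone who reads it can see
# who vouched for what, without consulting a centralized curricular authority.
print(json.dumps(assertion, indent=2))
```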
In an additional thread regarding what Kurzweil calls a "responsibility to use [data]," he describes what some of us in the learning analytics and ethics field call an obligation to act. Several years ago, my colleagues and I argued that institutions become responsible for acting when the data they examine predict student difficulty or failure. When institutions begin modeling their data to predict student outcomes, they take on the ethical obligation to help students who may be at risk. This, of course, raises many more questions of legal and ethical consequence, which are, thankfully, taking root in the literature, especially in the learning analytics and educational data mining fields.
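To ground what "modeling their data to predict student outcomes" can look like in practice, here is a minimal, hypothetical sketch using scikit-learn's logistic regression. The features, the toy training data, the 0.7 risk threshold, and the advising referral are all assumptions made for illustration, not a description of any institution's actual system; the point is simply that once a model flags students, the institution holds a list of names it could act on, which is exactly where the obligation to act takes hold.

```python
# Minimal, hypothetical sketch of an "at-risk" prediction; the features and the
# 0.7 threshold are illustrative assumptions, not a real institutional model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy historical data: [LMS logins per week, assignments submitted, midterm %]
X_past = np.array([
    [12, 8, 85],
    [10, 7, 78],
    [2, 3, 55],
    [1, 2, 40],
    [8, 6, 70],
    [0, 1, 35],
])
y_past = np.array([0, 0, 1, 1, 0, 1])  # 1 = did not complete the course

model = LogisticRegression(max_iter=1000).fit(X_past, y_past)

# Current students the model has never seen
current_students = {"Student A": [11, 8, 88], "Student B": [1, 2, 42]}
for name, features in current_students.items():
    risk = model.predict_proba([features])[0, 1]  # predicted probability of difficulty
    if risk > 0.7:
        # Crossing the threshold is where the ethical weight lands: the
        # institution now holds actionable knowledge about this student.
        print(f"{name}: predicted risk {risk:.2f}; refer to advising and support services")
```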
This is an exciting time to parse the ethical aspects of examining, modeling, and intervening with student data!