What happens when we use data to make decisions about our learners?
Does data drive us to make progress or to continue doing the same thing? (Spoiler alert: I don't know).
An ouroboros is an ancient symbol that shows a snake or dragon eating its own tail (if you’re watching season 2 of Loki, it’s also the name of the character who wrote the TVA handbook, but that’s not the point here). It typically represents the cycle of life and death, infinity, or wholeness. Reading that description suggests positivity to me, but I’ve always seen the ouroboros symbol itself as somewhat threatening or ominous. I’m not sure if that’s intended or not. But it’s what I think of when I consider the larger implications of data-driven decision making, especially as it becomes driven by big data and algorithms.
This is also what I was thinking about when I read the paper, “AI, Algorithms, and Awful Humans” by Daniel J. Solove and Hideyuki Matsumi. What is the impact of relying on data to make decisions about learners? Are we destined to repeat a lot of the same mistakes we have made in the past and essentially eat our own tails?
Quantifying learning
Solove and Matsumi discuss the opportunities and limitations of quantification overall:
“Quantification can certainly lead to insights we might not otherwise recognize. But the fact that we can see certain things through quantification doesn’t mean that quantification is a superior way of knowing or that it should be the only way of examining things.” They continue: “Certainly, statistics can be quite useful, and particular attempts to rank, score, or infer based on aggregated standardized data can be valuable. But these practices can be fraught with danger because algorithmic systems do not simply see the world; they simplify it” (p. 9).
We do a lot of quantifying when it comes to looking at outputs, both in our traditional education systems and in customer education practices. For instance, this NPR article uses test scores to measure math and reading progress in US students. On the customer ed side, “success” is often defined by proving ROI to the business; a number of articles reference broader metrics like Net Promoter Score (NPS), change or decrease in support tickets, and churn rate, in addition to learning-specific metrics like course completion rates.
I’m not saying that these data are worthless, but looking at them in such a basic manner leaves a lot of questions unanswered. Are students developing the skills they need for the everyday tasks, problem-solving, and critical thinking that informed decision-making will require of them? Are the people using a product actually able to address the problems they’re hoping to solve?
Does relying too heavily on data, and on AI-driven data in particular, lead us to focus too narrowly on outputs and conflate them with outcomes?
AI is already being used in educational contexts to increase retention. I’m sure it is being (or soon will be) used in businesses for similar purposes.
It makes me wonder: what are we missing when we base reactive decision-making on AI-produced outputs? Or even just on larger datasets that don’t rely on AI at all? And how does the way we classify data, in order to make decisions and design algorithms, change how we think about the individuals behind those classifications?
Outputs and the ouroboros
I’ve asked a lot of questions above, and as the spoiler alert on the post states, I don’t have the answers. I do think it’s worth taking time to consider, though, how a focus on outputs can lead us to look like the ouroboros: a snake eating its tail. Is that what progress really looks like?
Focusing on outputs and whether or not learners achieve those educational outputs isn’t inherently a negative thing. I’m more concerned about when we become tunnel-visioned on making sure learners achieve these outputs without ever taking a step back to assess whether or not these outputs are leading to valuable outcomes: changed behaviors that have positive results for the learner. We could make a change to our instruction that increases course retention without having any impact on learner behavior. Is that a valuable change, in the end?
It’s not a direct parallel, but this over-simplification reminds me of articles I read about AI autophagy (Model Autophagy Disorder, or MAD) over the summer. Scientists at Stanford and Rice have theorized that training AI models only on AI-generated content creates an eventual breakdown of the models: the data outputs are less accurate and less diverse once MAD takes effect. From a July 2023 article:
The data at the edges of the spectrum (that which has fewer variations and is less represented) essentially disappears. Because of that, the data that remains in the model is now less varied and regresses towards the mean. According to the results, it takes around five of these rounds until the tails of the original distribution disappear - that's the moment MAD sets in.
What happens to the learners represented on the tail ends of our data? Are they being served?
Read more about AI autophagy here if you’re as fascinated (or horrified?) by the topic as I am.
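To see the tail-loss effect from that quote more concretely, here’s a deterministic toy sketch (my own simplification, not the researchers’ actual experiment): each “generation” of a model keeps only the data within 1.5 standard deviations of the mean, mimicking how underrepresented tail data drops out of each successive training set.

```python
import statistics

def collapse_rounds(data, rounds=5, width=1.5):
    """Toy model of MAD-style collapse: each generation 'retrains'
    on only the values within `width` standard deviations of the
    mean, so the tails of the distribution vanish round by round."""
    history = [data]
    for _ in range(rounds):
        mean = statistics.fmean(data)
        std = statistics.pstdev(data)
        # Drop the tail values underrepresented in this generation.
        data = [x for x in data if abs(x - mean) <= width * std]
        history.append(data)
    return history

# Start with a uniform spread of values from 0 to 99.
history = collapse_rounds(list(range(100)))
for gen, d in enumerate(history):
    print(f"gen {gen}: n={len(d)}, range={min(d)}..{max(d)}")
# The spread narrows every generation: 100 → 86 → 74 → 64 → 56 → 48 values.
```

Real model collapse is messier than this (it involves sampling noise, not a hard cutoff), but the direction is the same: the edges disappear first, and what remains regresses toward the mean.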
The Solove and Matsumi article concludes by stating, “In the end, good decisions depend upon good humans” (p. 19).
We’re living in a world where we are expected to use data to drive decision-making, whether it’s a human or an algorithm making the decision (or a mix of the two).
What I’m encouraging for learning designers here is a step back: a reminder to look past our outputs and periodically re-evaluate our outcomes.
What changes are our learners seeing after they complete our instruction? Is that valuable to them? Why or why not?
Perhaps the answers to those questions change our learning goals and thus our outputs. Or maybe they don’t. But that’s still data-driven decision-making, right?
Here, I’ve mused about the focus on outputs leading to an ouroboros in learning contexts. Next week, I’m going to go a little more practical and review some tools and resources I’ve found helpful as an instructional designer.
I did ask a lot of questions in this post. Do you have any thoughts about how to respond? Let me know in the comments below!
Thanks for reading and see you next week!