You don’t have to be extremely online to have noticed the flood of articles about using AI in the past year. Chatbot comparisons, prompt engineering, doom-screaming-into-the-void. Or maybe you’ve just encountered stuff that you are pretty sure AI generated. That LinkedIn post? Or maybe even an email from your manager? We are drowning in content about AI, and here I am, treading water and considering: why am I avoiding it, anyway?1 And what about instructional design in particular makes me want to avoid it?
In my last post, I promised to return with a post sharing the tools and resources that help me the most. I started drafting that post, but part of it involved me commenting on my avoidance of AI. Suffice it to say: my tools and resources post is coming, but this isn’t it.
Using AI to complete a task versus to create
As the title implies, I’ve mostly been avoiding generative AI tools like ChatGPT. But I haven’t fully avoided them. I have tinkered. I have had some successes. I’ve had many failures.
ChatGPT has especially helped me as a co-pilot of sorts when I am trying to figure out how to get something done in Excel or Google Sheets. My spreadsheet skills are decent, but I don’t live in them every day. I’m often hunting for the precise search phrase that will lead me to the formulas I need, which has been difficult in the past. ChatGPT’s conversational abilities make it a bit easier because I can simply ask, “That didn’t work, what should I try next?” Based on these experiences, AI is most helpful to me when I need to get a task done and I’m not sure what the best first (or next) steps are. But there could still be problems lurking there: do I know why I’m doing what I’m doing in my spreadsheet? Could I replicate it? I’d probably have to return to AI.
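To make that concrete (a hypothetical exchange, not a transcript of one of my actual chats): say I want to total a spending column by category. ChatGPT might point me toward something like

=SUMIFS(B2:B100, A2:A100, "Travel")

which sums column B wherever column A says “Travel.” And if it errors out in my particular sheet, I can just reply “that didn’t work” and keep iterating until it does.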
Where I’ve struggled to find generative AI useful is in launching more creative endeavors. For assistance in drafting a presentation outline, or even coming up with topics for this newsletter, ChatGPT has left me yawning. Drafting these things from a blank page has value to me. The process of creating lets me know something deeply, improve on each of its facets, and better understand my own thoughts.
The trouble with an AI copilot
Writing is thinking. Doing is learning. What parts are being stripped away by AI?
Over the past few months, I’ve struggled to increase the weight I can back squat (I promise this is relevant). I was finally able to make some gains with a few tweaks to how I thought about the process: noticing which points of my feet were in contact with the floor, imagining driving the floor away rather than lifting the weight up, and driving my knees outward to engage the correct muscles. Realistically, I think I had the strength a few months ago to back squat the weight I did today, but I didn’t quite know how. I had the tools (my body, some weight plates) but not the skills. Doing is learning.
This is important in teaching because AI is going to change how we interact with the subjects we teach and how we expect our learners to learn. How do we make sure our learners have not only the tools, but the skills? How do we demonstrate to them the value of learning by doing, rather than a Matrix-style download of information?
Back to awful humans
There are certainly a lot of conversations happening about how to teach folks how to use generative AI in responsible and useful ways. But even armed with that knowledge, can we trust ourselves to not rely on the robots a bit too much? To skip some steps and sacrifice a fuller understanding as a result?
The Solove and Matsumi article “AI, Algorithms, and Awful Humans,” which I pondered last week, considers what happens when we mix human and AI decision-making. On page 13:
“Empirical studies show that people readily defer to automated systems, overlook errors in algorithms, and deviate from algorithmic output in ways that render a less accurate result. Moreover, Green notes, ‘people cannot reliably balance an algorithm’s advice with other factors, as they often overrely on automated advice and place greater weight on the factors that algorithms emphasize.’ Studies show that [a]utomation can also create a diminished sense of control, responsibility, and moral agency among human operators.”
How much of my learning design do I want to turn over to AI? From Solove and Matsumi again: “Ben Green points out an even more fundamental conflict between algorithmic and human decision-making – algorithms offer ‘consistency and rule-following’ whereas humans offer ‘flexibility and discretion’” (p. 12).2
Instructional design, to me, is about that flexibility and discretion: deeply knowing my learners, the greater context, and the goals they need to reach for success. What am I missing when I hand over part of that design process?
Here, I’ve thought about the impact of AI on the learning process. Next week, I’m going to (actually this time) go a little more practical and review some tools and resources I’ve found helpful as an instructional designer.
What have your experiences been with generative AI? Where have you found it most or least helpful? I’m interested to hear what other folks have experienced, so comment below with your thoughts!
Thanks for reading, see you next week!
1. Besides all the social justice reasons, of course. Like the fact that facial recognition software often discriminates against Black people and voice recognition in cars has a harder time understanding women. Or that AI can perpetuate racial bias in determining insurance rates. Or excessively tags photos of women’s bodies as sexually suggestive. Really, I could go on for a while listing examples of how AI can perpetuate racism and sexism, but that’s not the point of this post.
2. The Ben Green article Solove and Matsumi cite is: Ben Green, The Flaws of Policies Requiring Human Oversight of Government Algorithms, 45 Computer Law & Security Rev. 1, 7 (2022).