Have robots posing as students taken over my asynchronous online class?
Spoiler: No. At least, I don't think so.
Since ChatGPT’s release, educators have been grappling with student use of generative AI in their courses. Questions abound: Should generative AI be banned? Should we incorporate it into our assignments? Should we use AI detectors? Is using AI plagiarism? Does AI help students learn? Does it stifle critical thinking?
Is there one answer to any of these?
My approach
Spoiler alert: I don’t think there’s any one answer to these questions. I’m always iterating on my teaching philosophy and the activities in my course, and just like anything else, my approach to AI will change and adapt as time goes on.
Jason Gulya posted about generative AI in his courses recently (actually, he posts about it all the time with a lot of nuance, so I definitely recommend a follow). It was his post that made me consider putting my approach into words. In it, he discusses why he doesn’t use AI detectors or progress tracking. Note: Some instructors require students to submit assignments in something like a Google Doc, where progress tracking is an option, so they can more easily tell from the revision history whether a student has simply copied and pasted full paragraphs from generative AI.
Putting on my instructional designer hat
Okay, my instructional designer hat and my teacher hat are pretty much the same hat at this point. But, from an ID perspective, I like to think about AI as I’d think about anything else in my course: how is it helping or hurting students in achieving the course objectives?
Here’s what I said in response to Jason’s post:
I've been trying to approach content I think is AI-generated by students from the angle of "how could this content be improved for this particular objective" instead of attacking from the AI angle.
Most of my students that seem to be using AI are submitting work that is oftentimes shallow or is not critically examining course materials like I think they should be–so I focus on that, because it's the end goal anyway. I'd rather them take my feedback and use it to better implement AI or revise how they are using AI vs. extinguish its use completely.
I legit just copied and pasted my comment there, so please excuse my off-the-cuff, stream-of-consciousness writing that was not touched by generative AI tools.
Notice that I said my students “that seem to be using AI.” I tend to think I have a good sense of when a submission doesn’t sound like the student’s own voice, but research has shown that humans are actually pretty bad at detecting AI-generated writing. I’d rather try to steer my students toward meeting the course learning objectives with the assistance of AI than risk falsely accusing a student of copying and pasting from ChatGPT.
How I actually discussed AI with my students in my course syllabus
Just because my approach above isn’t explicitly anti-AI doesn’t mean I just told my students to go ham with Claude, or pretended it didn’t exist in my syllabus. For greater context around the below points, I teach an asynchronous online course.
If you’re interested, you can read the syllabus statement I included this semester here.
Here’s the first paragraph as a preview, where I get real with my students:
I’m going to be real with you here: when I’m ‘grading’ your work, I’m really just trying to provide you with feedback on how you’ve thought about an idea. I don’t use AI to provide this feedback, and I work really hard to try and put this course together so that you can think about the world through a critical lens. I’m not saying you can’t do this with the assistance of AI, but it’s not very fun to provide feedback on something that’s clearly just been generated by a robot. The idea of courses turning into robots talking to robots makes me deeply sad. In this first module, we’ll also be reading about how while AI can be helpful in times of stress or cognitive overload, it can also hinder our critical thinking skills.
The article I mention there is called AI Eases Our Mental Load at the Expense of Critical Thinking. As with the syllabus and all of our other readings, I have students annotate the article using the Hypothesis social annotation tool so we can have a conversation around it.
My semester mid-point reflections
I’m about halfway through the semester. Do I think I’m still grading robots sometimes? Yeah, definitely. Is it the majority of my students, or even close? No.
Even when I do think I’m grading robots, it gives me a moment to pause and reflect. The students who seem to be submitting copied and pasted text from generative AI don’t seem to do it all the time.
So where are they doing it, and why? Is it because they’re too busy and are simply trying to keep up with the work? I think that’s sometimes the case; some of my students have admitted they’re working full time while carrying a full course load. I’m pretty flexible about late work, but the education machine may have trained them well, and the threat of zeroes and late penalties may still hover in their minds.
Or is it because they don’t understand the course material or the assignment? If so, how can I better scaffold that content?
It seems I’m ending with more questions than answers. I hope this gave you some considerations to shape your own approach to generative AI with learners.