How narratives about AI in the workforce are problematic for critical thinking in teaching and learning
Why wouldn't students turn to AI as a tool for efficiency and productivity when that's the dominant narrative around its potential?
Much of the discourse around AI¹ in the workplace centers on increasing the efficiency and productivity of both employees and companies as a whole. “Experts see AI-powered intelligent content streamlining tasks, improving productivity, and freeing up employees for higher-level work,” proclaims a CIO article from February. Or, consider this February 2025 article from Forbes titled “How AI Can Maximize Productivity in an Organization.”
So what happens when we take these narratives about the benefits of AI and apply them to teaching and learning? Whether in a formal classroom or in on-the-job training, emphasizing efficiency and productivity runs counter to the conditions required for learning new skills.
“Freeing up humans for higher-level work”
To be fair, the CIO article I quoted above does state that more streamlined everyday tasks could free up employees for work that requires more intensive thinking or collaborating.
Thinking about the long term, though, something about this doesn’t quite sit right with me. Our discussions about AI are limited to considering the current workforce, which did not have access to AI tools while completing their education. We assume they have the critical thinking and creative skills necessary to focus on higher-level work, while AI takes over some of the less intensive tasks.
But is widespread access to AI going to prepare the future workforce in the same way?
The transactional model of US education
I’m going to make some broad generalizations here, and I know these statements don’t apply to every single student in every single classroom across the US.
The traditional model of schooling, where students receive grades for their work,² encourages a learning environment that values efficiency and productivity over the trial, error, and growth that’s required for learning.
Learning is inherently messy. It involves being wrong about things, figuring out how or why you were wrong, and making corrections. It involves uncomfortable thinking that I refer to as a “tangly feeling in my brain” when concepts just aren’t clicking into place.
In schools, we assign grades for final products. Students are rewarded (or punished) for their end result, and the process of learning itself is generally not recognized or examined. Because of this, the product of learning is valued much more than the process. I’m hardly the first person to talk about promoting process over product in learning, but I think trying to incorporate process-based objectives into a system that inherently rewards products is difficult to do well.
Why wouldn’t students use AI for efficiency and productivity?
Students are behaving rationally given the conditions under which they are asked to operate. They need good grades for assignments. They’re often time-strapped, juggling extracurricular activities, part-time jobs, and family obligations. Using AI seems like the natural solution: it’s being touted for increasing efficiency and productivity in the workplace. Why wouldn’t students turn to it for the same reasons?
When the emphasis in schools is on completing an assignment and earning a grade rather than on the messy process of learning, the efficiency with which ChatGPT provides responses makes it an obvious choice.
This brings me to wonder: What “higher-level work” are we leaving for students? And what will they see as “higher-level work” to tackle in the workforce after years of simplistic AI transactions?
Yes, there are discussions happening about strategies to redesign assignments, processes for collaborating with AI, and AI literacy overall. But these discussions are moving more slowly than the technology is developing (that’s not a critique of the conversations themselves so much as of society’s constant need to forge ahead at light speed with new technologies). And they’re happening within the confines of a system that still rewards learning products over critical thinking and the learning itself.
Conflicting narratives
So how can we continue to have conversations with students about using AI in ways that encourage brainstorming, critical thinking, and innovation when the dominant conversation about AI in the workforce is about streamlining work?
Even beyond the formal classroom, how can we encourage people to sit in the discomfort and uncertainty of learning new knowledge and skills before immediately turning to their preferred LLM for answers?
If I had the answers to these questions, I probably wouldn’t be writing this post. But it’s something that’s been marinating in the back of my mind as I consider designing learning experiences for a wide range of audiences: college students, colleagues, new product users.
Thinking about ways to inject, here and there, little bits of that mind stretch required to grow. Making the discomfort of learning part of the conversation. Humanizing not knowing. That’s something we’re markedly better at than the LLMs, right? Being able to state when we don’t know something. How can we find more value in that?
Thanks again for reading, friends, and have a great week 👋🏻
1. I’m really talking about large language models, or LLMs, in this post. I know AI means something much broader, but colloquially most people have been using AI and LLMs interchangeably. I don’t think that’s right, but I’m just rolling with it for the purposes of this post.
2. And eventually, at a larger scale, a job for their degree.