A colleague and friend who teaches at a liberal arts college in California recently told me that she spent roughly 40 hours last summer redesigning the research paper assignment in her fall seminar to be “AI-proof.” At the end of spring semester, her students will graduate into jobs that require them to use AI daily. We are teaching students to hide the very literacy their future demands.
The call to “AI-proof” our assignments is reverberating through higher education — from faculty lounges to academic technology roundtables. It’s urgent, it’s sincere, and it’s fueled by genuine pedagogical care. And honestly? Much of it is brilliant. Faculty are innovating at a rapid pace, reimagining assignments to protect authenticity and preserve intellectual honesty in a world where generative AI writes, summarizes, and simulates with astonishing speed. These AI mitigation strategies aren’t born of paranoia. They’re born of craft — an earnest desire to keep learning human.
The Pedagogical Craft of Mitigation
Educators have been remarkably inventive in this space, creating multi-layered strategies that do more than block AI — they build discernment. Here’s a look at some of them, their power, their limits, and their impact on student learning:
[Table comparing eleven AI mitigation strategies in higher education.]
These approaches certainly work. Some work better than others, and combining them can be powerfully effective. They slow down the learning process, foreground reflection, and reward originality. They represent the best instincts of liberal arts education — the impulse to protect learning as a relational act, not a transaction.
But here’s the paradox: the more time we spend perfecting these defenses, the further we drift from the future we’re supposed to be preparing students for.
The Problem with Perfecting Resistance
We’re designing moats when we should be building bridges.
As faculty, we’ve become experts in preventing students from using the very tools they’ll need to thrive beyond graduation. While we construct sophisticated “AI-proof” systems — password-protected PDFs, multi-phase proctoring, closed platforms — the professional world is racing ahead, expecting graduates who can think with AI ethically, effectively, and creatively.
We are, unintentionally, teaching students a skill that has no future application: how to learn without the tools they’ll be required to use for the rest of their lives.
The deeper problem extends beyond individual assignments. When faculty independently prune “AI-able” work without departmental coordination, the result is curricular chaos:
- Duplicated efforts across courses
- Gaps in skill progression
- Students experiencing five different AI policies across five courses in their major
This work cannot be done in isolation. It requires departmental conversation about shared outcomes, scaffolded skill development, and coherent AI policies across the major.
When “Integrity” Turns Into Invasion
Surveillance is not pedagogy.
There’s a line we shouldn’t cross: installing surveillance software on student computers. That doesn’t teach integrity; it broadcasts distrust. It says, “Your laptop — your window to creativity, exploration, and daily necessity — is now a controlled asset I can monitor at will.” If that’s our definition of academic integrity, we’ve already surrendered the idea of education as a partnership.
And we know where this logic can spiral: from “protect the test” to “police the person.” History shows how quickly tool-level monitoring becomes life-level monitoring. It’s a short walk from integrity to intrusion.
Some respond, “Fine — go back to pen and paper.” Honestly, that’s less dystopian than spyware. But let’s not romanticize blue books. For many students, English (or any academic register) isn’t their first expressive mode — and most of us actually learned to write by typing. Picture the blue-book era: you discover your real argument halfway through, realize the perfect sentence belongs three paragraphs up, and you’re frozen in ink. No cursor. No drag. No re-ordering. No thinking-through-revision — the essence of writing itself. You start drawing arrows, crossing out paragraphs, performing neatness while throttling cognition.
And outside a surgical theater, almost no profession rewards “compose your best argument by hand in 40 minutes while a clock screams at you.”
So yes — if the choice is creepy spyware or smudged ink, I’ll take ink over intrusion. But both miss the point. Neither reflects how people actually think, write, collaborate, verify, or revise in 2025. Both are control systems — one analog, one algorithmic — aimed at the wrong target.
Liberal Arts, Not Lie Detectors
The moral center of teaching is trust.
At bottom, the surveillance classroom sends one message: we don’t trust our students. The liberal arts should do better than that. We’re meant to be the standard-bearers of inquiry, dialogue, and moral imagination. If the best we can offer is dashboards and suspicion, we’ve traded away our pedagogical soul.
I say this with deep respect for colleagues doing heroic work — often under pressure, often while fielding understandable anxiety from administrators, parents, and even their own instincts to protect what matters most about education. These concerns are real. The fear is legitimate. We’re witnessing a paradigm shift unfolding before our eyes and in our classrooms, and we desperately need every perspective at the table to navigate it thoughtfully. The AI-resistant strategies you’ve built are evidence of care, craft, and commitment to authenticity. That work matters. Your voice matters.
But panic is not a strategy. And control is not pedagogy.
The way forward isn’t to out-police our students; it’s to out-design the problem. If our energy goes into designing around distrust, we’ll starve the very habits of mind we claim to teach. Design for evidence of learning, not evidence of catching. Trust as default, transparency as practice, rigor as design. That’s how the liberal arts lead.
The Shift: Performance as Pedagogy
From catching to coaching.
Here’s what changed my thinking: watching professionals in action — including myself.
In my work, I don’t submit written reports to prove what I know. I present. I facilitate. I respond to questions I didn’t anticipate. I think on my feet, synthesize in real time, and demonstrate understanding through dialogue and improvisation. That’s how the professional world actually assesses expertise — not through pristine documents composed in isolation, but through performance: the ability to explain, adapt, defend, and collaborate under pressure.
Why aren’t we teaching that?
If we want integrity without surveillance and rigor without nostalgia, change the mode of evidence. Make learning observable, human, and grounded in the communication skills the world actually values.
Design for performance, not policing:
- Oral assessments — brief, coached defenses that make reasoning visible
- Video essays — planned, revised, reflective storytelling with sources and documented process
- Live presentations with Q&A — synthesis under light pressure, supported by artifacts
- Recorded demonstrations — show the build, the test, the fix; narrate the decisions
These aren’t just “AI-proof”; they’re future-proof. They develop the soft skills employers actually demand: clear communication, adaptive thinking, grace under uncertainty. They teach improvisational, situational leadership — the ability to demonstrate what you know when someone asks a question you didn’t prepare for.
And here’s the bonus: in designing these performance-based assessments, we’re also teaching technological literacy. Students learn video editing, audio production, visual storytelling, digital composition — the multimodal fluencies that define 21st-century communication. Each iteration gives them practice. Each presentation builds confidence.
This is how I learned to show what I know. This is how your students will need to show what they know.
Want inspiration? Talk to your campus teaching, learning, and technology center. They’re already piloting these approaches. They have tools, templates, rubrics, and pedagogical frameworks ready to support you. You don’t have to reinvent this alone.
From here, the path forward becomes clear: make learning too specific, too process-visible, too human to fake.
The Transdisciplinary Turn: From Resistance to Responsiveness
The question isn’t “Should students use AI?” The question is “How do we teach them to use it well, critically, and humanely — within our disciplines?”
That’s the transdisciplinary challenge now facing every liberal arts curriculum. It’s not just a question for computer science or writing programs — it’s a shared design problem spanning philosophy, biology, studio art, and sociology alike.
An AI-responsive curriculum embraces both sides of the coin:
- AI Resistance ensures cognitive integrity — the ability to think unaided, to wrestle with ideas, to claim one’s voice
- AI Integration ensures cognitive fluency — the ability to think with tools, to discern when to trust them, and to synthesize machine assistance with human judgment
Neither is optional. Together, they form the new liberal art: technological self-awareness — the capacity to understand not just what we know, but how we come to know it alongside intelligent systems, and what remains distinctly, necessarily human in that process.
What AI Literacy Looks Like in Practice
A responsive curriculum asks students to:
- Document their AI use as part of their process — showing how the tool informed, shaped, or misled their work.
  - Biology example: Generate a preliminary literature scan with AI; verify each citation; identify misrepresentations; reflect on what the AI got wrong about recent research methodology.
- Reflect on the ethics of automation within their discipline — what’s lost, what’s gained, what must remain human.
  - Philosophy example: Prompt AI to construct an ethical argument; use course readings to identify logical gaps, hidden assumptions, or misapplied concepts; turn the AI’s output into the object of analysis itself.
- Evaluate AI outputs for accuracy, bias, and context — building critical reading and synthesis skills across modalities.
- Integrate multimodal expression — text, image, sound, video, data — to demonstrate learning that transcends the written word and develops the communication fluencies their futures demand.
- Engage in meta-learning — understanding not just what they know, but how they came to know it alongside intelligent systems.
This is what AI literacy in the liberal arts should look like: a blend of philosophical questioning, technological discernment, creative practice, and performative demonstration.
A Call to the Faculty
The hard work of AI literacy doesn’t fall on students. It falls on us.
We’re the ones who must rethink assessment, let go of some control, and reimagine academic integrity not as suspicion but as shared inquiry. We can’t expect students to navigate this complexity ethically if we aren’t modeling how.
I’m sensitive to the constraints. I see the pressures — departmental, institutional, accreditation-driven. Many of you are teaching overloads, navigating budget cuts, fielding impossible demands. I know some of you are skeptical, exhausted, or both. That’s valid. This is hard work, and it requires support, time, and institutional commitment that isn’t always there.
But I also believe this: the liberal arts — with their long tradition of self-reflection, interdisciplinarity, and humanistic questioning — are exactly where this reimagining must begin. We’ve always been the ones asking not just what to teach, but why and how. That’s our strength. That’s our calling.
The Future We Should Be Building
All those AI-resistant strategies? Keep them. They’re valuable. They’re proof that faculty care deeply about authenticity and intellectual honesty. But don’t stop there.
Pair them with the equally essential work of AI fluency — teaching students to engage, critique, and co-create with intelligent systems. Add performance-based assessments that make learning visible, human, and grounded in the communication skills the world actually demands.
Because the future of education won’t belong to those who can simply resist AI. It will belong to those who can work wisely with it — and demonstrate that wisdom through voice, presence, and adaptive thinking.
So here’s a challenge to us:
This semester, design one AI-resistant assignment. Next semester, design one that teaches AI fluency and requires students to perform their learning — through presentation, video essay, oral defense, or live demonstration. Compare what you learn from each. Share your findings with colleagues. Coordinate as a department. Connect with your teaching and learning center. Experiment together. Build coherence.
Because the real work isn’t deciding whether AI belongs in our courses — it’s deciding what kind of intelligence we’re teaching students to cultivate, and what kinds of humans we’re helping them become.
When our assignments are too human to fake and our learning too authentic to outsource, we will have done more than “AI-proof” education.
We’ll have future-proofed it.