Beyond AI-Proofing: Designing for Integrity, Fluency, and the Future of Liberal Arts Learning

A colleague and friend who teaches at a liberal arts college in California recently shared with me that she spent roughly 40 hours last summer redesigning a research paper assignment to be “AI-proof” in her fall seminar. At the end of spring semester, her students will graduate into jobs that require them to use AI daily. We are teaching students to hide the very literacy their future demands.

The call to “AI-proof” our assignments is reverberating through higher education — from faculty lounges to academic technology roundtables. It’s urgent, it’s sincere, and it’s fueled by genuine pedagogical care. And honestly? Much of it is brilliant. Faculty are innovating at a rapid pace, reimagining assignments to protect authenticity and preserve intellectual honesty in a world where generative AI writes, summarizes, and simulates with astonishing speed. These AI mitigation strategies aren’t born of paranoia. They’re born of craft — an earnest desire to keep learning human.

The Pedagogical Craft of Mitigation

Educators have been remarkably inventive in this space, creating multi-layered strategies that do more than block AI — they build discernment. Here’s a look at some of them: their power, their limits, and their impact on student learning:

Table comparing eleven AI mitigation strategies in higher education.


These approaches certainly work — some better than others, and combining them can be powerfully effective. They slow down the learning process, foreground reflection, and reward originality. They represent the best instincts of liberal arts education — the impulse to protect learning as a relational act, not a transaction.

But here’s the paradox: the more time we spend perfecting these defenses, the further we drift from the future we’re supposed to be preparing students for.

The Problem with Perfecting Resistance

We’re designing moats when we should be building bridges.

As faculty, we’ve become experts in preventing students from using the very tools they’ll need to thrive beyond graduation. While we construct sophisticated “AI-proof” systems — password-protected PDFs, multi-phase proctoring, closed platforms — the professional world is racing ahead, expecting graduates who can think with AI ethically, effectively, and creatively.

We are, unintentionally, teaching students a skill that has no future application: how to learn without the tools they’ll be required to use for the rest of their lives.

The deeper problem extends beyond individual assignments. When faculty independently prune “AI-able” work without departmental coordination, the result is curricular chaos:

  • Duplicated efforts across courses
  • Gaps in skill progression
  • Students experiencing five different AI policies across five courses in their major

This work cannot be done in isolation. It requires departmental conversation about shared outcomes, scaffolded skill development, and coherent AI policies across the major.

When “Integrity” Turns Into Invasion

Surveillance is not pedagogy.

There’s a line we shouldn’t cross: installing surveillance software on student computers. That doesn’t teach integrity; it broadcasts distrust. It says, “Your laptop — your window to creativity, exploration, and daily necessity — is now a controlled asset I can monitor at will.” If that’s our definition of academic integrity, we’ve already surrendered the idea of education as a partnership.

And we know where this logic can spiral: from “protect the test” to “police the person.” History shows how quickly tool-level monitoring becomes life-level monitoring. It’s a short walk from integrity to intrusion.

Some respond, “Fine — go back to pen and paper.” Honestly, that’s less dystopian than spyware. But let’s not romanticize blue books. For many students, English (or any academic register) isn’t their first expressive mode — and most of us actually learned to write by typing. Picture the blue-book era: you discover your real argument halfway through, realize the perfect sentence belongs three paragraphs up, and you’re frozen in ink. No cursor. No drag. No re-ordering. No thinking-through-revision — the essence of writing itself. You start drawing arrows, crossing out paragraphs, performing neatness while throttling cognition.

And outside a surgical theater, almost no profession rewards “compose your best argument by hand in 40 minutes while a clock screams at you.”

So yes — if the choice is creepy spyware or smudged ink, I’ll take ink over intrusion. But both miss the point. Neither reflects how people actually think, write, collaborate, verify, or revise in 2025. Both are control systems — one analog, one algorithmic — aimed at the wrong target.

Liberal Arts, Not Lie Detectors

The moral center of teaching is trust.

At bottom, the surveillance classroom sends one message: we don’t trust our students. The liberal arts should do better than that. We’re meant to be the standard-bearers of inquiry, dialogue, and moral imagination. If the best we can offer is dashboards and suspicion, we’ve traded away our pedagogical soul.

I say this with deep respect for colleagues doing heroic work — often under pressure, often while fielding understandable anxiety from administrators, parents, and even their own instincts to protect what matters most about education. These concerns are real. The fear is legitimate. We’re witnessing a paradigm shift unfolding before our eyes and in our classrooms, and we desperately need every perspective at the table to navigate it thoughtfully. The AI-resistant strategies you’ve built are evidence of care, craft, and commitment to authenticity. That work matters. Your voice matters.

But panic is not a strategy. And control is not pedagogy.

The way forward isn’t to out-police our students; it’s to out-design the problem. If our energy goes into designing around distrust, we’ll starve the very habits of mind we claim to teach. Design for evidence of learning, not evidence of catching. Trust as default, transparency as practice, rigor as design. That’s how the liberal arts lead.

The Shift: Performance as Pedagogy

From catching to coaching.

Here’s what changed my thinking: watching professionals in action — including myself.

In my work, I don’t submit written reports to prove what I know. I present. I facilitate. I respond to questions I didn’t anticipate. I think on my feet, synthesize in real time, and demonstrate understanding through dialogue and improvisation. That’s how the professional world actually assesses expertise — not through pristine documents composed in isolation, but through performance: the ability to explain, adapt, defend, and collaborate under pressure.

Why aren’t we teaching that?

If we want integrity without surveillance and rigor without nostalgia, change the mode of evidence. Make learning observable, human, and grounded in the communication skills the world actually values.

Design for performance, not policing:

  • Oral assessments — brief, coached defenses that make reasoning visible
  • Video essays — planned, revised, reflective storytelling with sources and documented process
  • Live presentations with Q&A — synthesis under light pressure, supported by artifacts
  • Recorded demonstrations — show the build, the test, the fix; narrate the decisions

These aren’t just “AI-proof”; they’re future-proof. They develop the soft skills employers actually demand: clear communication, adaptive thinking, grace under uncertainty. They teach improvisational, situational leadership — the ability to demonstrate what you know when someone asks a question you didn’t prepare for.

And here’s the bonus: in designing these performance-based assessments, we’re also teaching technological literacy. Students learn video editing, audio production, visual storytelling, digital composition — the multimodal fluencies that define 21st-century communication. Each iteration gives them practice. Each presentation builds confidence.

This is how I learned to show what I know. This is how your students will need to show what they know.

Want inspiration? Talk to your campus teaching, learning, and technology center. They’re already piloting these approaches. They have tools, templates, rubrics, and pedagogical frameworks ready to support you. You don’t have to reinvent this alone.

From here, the path forward becomes clear: make learning too specific, too process-visible, too human to fake.

The Transdisciplinary Turn: From Resistance to Responsiveness

The question isn’t “Should students use AI?” The question is “How do we teach them to use it well, critically, and humanely — within our disciplines?”

That’s the transdisciplinary challenge now facing every liberal arts curriculum. It’s not just a question for computer science or writing programs — it’s a shared design problem spanning philosophy, biology, studio art, and sociology alike.

An AI-responsive curriculum embraces both sides of the coin:

  • AI Resistance ensures cognitive integrity — the ability to think unaided, to wrestle with ideas, to claim one’s voice
  • AI Integration ensures cognitive fluency — the ability to think with tools, to discern when to trust them, and to synthesize machine assistance with human judgment

Neither is optional. Together, they form the new liberal art: technological self-awareness — the capacity to understand not just what we know, but how we come to know it alongside intelligent systems, and what remains distinctly, necessarily human in that process.

What AI Literacy Looks Like in Practice

A responsive curriculum asks students to:

Document their AI use as part of their process — showing how the tool informed, shaped, or misled their work.
Biology example: Generate a preliminary literature scan with AI; verify each citation; identify misrepresentations; reflect on what the AI got wrong about recent research methodology.

Reflect on the ethics of automation within their discipline — what’s lost, what’s gained, what must remain human.
Philosophy example: Prompt AI to construct an ethical argument; use course readings to identify logical gaps, hidden assumptions, or misapplied concepts; turn the AI’s output into the object of analysis itself.

Evaluate AI outputs for accuracy, bias, and context — building critical reading and synthesis skills across modalities.

Integrate multimodal expression — text, image, sound, video, data — to demonstrate learning that transcends the written word and develops the communication fluencies their futures demand.

Engage in meta-learning — understanding not just what they know, but how they came to know it alongside intelligent systems.

This is what AI literacy in the liberal arts should look like: a blend of philosophical questioning, technological discernment, creative practice, and performative demonstration.

A Call to the Faculty

The hard work of AI literacy doesn’t fall on students. It falls on us.

We’re the ones who must rethink assessment, let go of some control, and reimagine academic integrity not as suspicion but as shared inquiry. We can’t expect students to navigate this complexity ethically if we aren’t modeling how.

I’m sensitive to the constraints. I see the pressures — departmental, institutional, accreditation-driven. Many of you are teaching overloads, navigating budget cuts, fielding impossible demands. I know some of you are skeptical, exhausted, or both. That’s valid. This is hard work, and it requires support, time, and institutional commitment that isn’t always there.

But I also believe this: the liberal arts — with their long tradition of self-reflection, interdisciplinarity, and humanistic questioning — are exactly where this reimagining must begin. We’ve always been the ones asking not just what to teach, but why and how. That’s our strength. That’s our calling.

The Future We Should Be Building

All those AI-resistant strategies? Keep them. They’re valuable. They’re proof that faculty care deeply about authenticity and intellectual honesty. But don’t stop there.

Pair them with the equally essential work of AI fluency — teaching students to engage, critique, and co-create with intelligent systems. Add performance-based assessments that make learning visible, human, and grounded in the communication skills the world actually demands.

Because the future of education won’t belong to those who can simply resist AI. It will belong to those who can work wisely with it — and demonstrate that wisdom through voice, presence, and adaptive thinking.

So here’s a challenge to us:

This semester, design one AI-resistant assignment. Next semester, design one that teaches AI fluency and requires students to perform their learning — through presentation, video essay, oral defense, or live demonstration. Compare what you learn from each. Share your findings with colleagues. Coordinate as a department. Connect with your teaching and learning center. Experiment together. Build coherence.

Because the real work isn’t deciding whether AI belongs in our courses — it’s deciding what kind of intelligence we’re teaching students to cultivate, and what kinds of humans we’re helping them become.

When our assignments are too human to fake and our learning too authentic to outsource, we will have done more than “AI-proof” education.

We’ll have future-proofed it.

Polygnosis and the New Frontier: Finding Our Bearings

Continuing reflections from the Connecticut College AI & Liberal Arts Symposium

If the first post was about waking up, this one is about finding your bearings.

At the symposium, the mood shifted. The panic over AI—the moral fog, the productivity hype—gave way to something quieter and braver: curiosity. Once we accepted that AI is no longer a visitor but a roommate, the real questions emerged:

How do we live and learn beside it?
How do we create with a system that accelerates answers but doesn’t guarantee understanding?

Those questions led me back to a word that keeps earning its keep the more I use it: polygnosis—many ways of knowing.

Where the Term Came From (and Why That Matters)

Confession: I thought I coined polygnosis. It arrived during a late-night exchange with a generative model while I was trying to name the frontier beyond “inter-” and “trans-.” A neat ego moment—until I realized the idea has been with us for millennia in different clothing. The point isn’t originality; it’s precision. Polygnosis names what I couldn’t quite put my finger on: not a new discipline, but a way of composing knowledge across differences—human and non-human—without flattening them.

Polygnosis isn’t a theory; it’s a temperament for learning beside machines.


From Disciplines to Directions

Interdisciplinary work still starts with the disciplines—a chemist and a poet at the same table, each speaking their dialect.

Transdisciplinary work stretches the table, turning it from a fixed surface into a living workspace—one where ideas, methods, and even machines can join the dialogue.

Polygnosis begins there. It sets a stage where the conditions of learning can play out—less about blending fields, more about cultivating the stance that lets multiple knowledges coexist and collaborate. In a world where learning happens with other intelligences, that stance isn’t a luxury; it’s survival literacy.

And yes, that’s a liberal-arts value at its core: the capacity to survive—and thrive—through changing times by reading context, holding paradox, and practicing judgment.

The Mirror Problem (and Kranzberg’s Reminder)

A recent Harvard study asks a piercing question: Which humans are our models built on? The answer won’t shock you—mostly Western, English-speaking, educated, highly online adults. That narrow slice of humanity quietly became the template for what many models assume is “normal.”

We can’t call that neutrality. We can only call it inheritance.

Melvin Kranzberg’s First Law lands squarely here: technology is neither good nor bad; nor is it neutral. Models aren’t villains or saints; they’re mirrors tuned to the cultures that polished them. The risk isn’t that they’re “biased”—it’s pretending they’re not.

So the task isn’t to scrub away particularity; it’s to expand the story the machine can tell and we can interpret. That is polygnosis in practice.

From Bias to Balance

Every prompt is a vote for what counts as knowledge. If we want better answers, we need better questions:

Whose perspective might be missing here?

How would this read through another cultural lens?

What assumptions am I reinforcing by treating this response as universal?

This isn’t performance or politeness. It’s epistemic honesty—being clear about where knowledge stands when it speaks. We’re not optimizing for optics; we’re optimizing for accuracy with context.

The Liberal Arts as a Living Laboratory

The liberal arts have always practiced polygnosis—even if they never used the word. They train the muscles we need now: interpretation, comparison, translation, discernment.

Keep it simple and grounded:

In a composition class, a student uses an LLM to draft an intro. The work isn’t to grade the draft; it’s to ask how the model thinks and where the student’s voice should diverge.

In biology, image models help visualize ecosystems. The lesson isn’t “cool pictures”; it’s What does the model miss about living systems, and why?

In history, we compare a human summary and an AI summary of the same source, side-by-side, then mark what each foregrounds and erases.

Down-to-earth, doable, honest. Not spectacle—pedagogy.

Polygnosis as Ethos (Not Coursework)

To theorize polygnosis is to pin the butterfly. I’d rather see it fly.

Polygnosis is an ethos—a discipline of attention. It’s curiosity with a spine, method with humility, design as dialogue. In classrooms, studios, and labs, it looks like:

  • Co-creating a syllabus with an AI, then annotating its blind spots in the margins.
  • Asking a model to produce three interpretations from different cultural frames—and having students choose, remix, or reject them with reasons.
  • Building default prompts that say: “Answer without assuming age, ethnicity, or region; if an assumption is necessary, make it explicit.” Not for show. For truth.

What We Should Do Next (Practical and Small)

Set defaults that widen the lens. “Unless specified, respond for a general audience; flag any cultural or demographic assumption you made.”

Teach self-audit. Before accepting an answer, ask the model (and the student): What perspective did this come from? What would challenge it?

Diversify inputs. If we feed narrow corpora, we’ll get narrow mirrors. Bring in sources—texts, datasets, voices—that a general model is likely to underrepresent.

Reward interpretation. Grade the reasoning about outputs, not just the outputs themselves. We’re cultivating readers of intelligence, not just users.

None of this requires a new program or a moonshot grant. It requires habits, modeled consistently.
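For instructors comfortable with a little scripting, the “set defaults” and “self-audit” habits above can even be encoded directly into how prompts are assembled. The sketch below is purely illustrative — the helper name `widen_the_lens` and both default strings are my own hypothetical examples, not part of any particular AI tool or vendor API — but it shows the idea: every classroom prompt gets the lens-widening default prepended and the self-audit question appended, so the habit is the default rather than an afterthought.

```python
# A minimal, hypothetical sketch of "defaults that widen the lens."
# No vendor API is assumed; this just composes the text you would
# send to whatever model or interface your campus uses.

DEFAULT_LENS = (
    "Unless specified, respond for a general audience; "
    "flag any cultural or demographic assumption you made."
)

SELF_AUDIT = (
    "Before finishing, answer: What perspective did this come from? "
    "What would challenge it?"
)

def widen_the_lens(user_prompt: str, audit: bool = True) -> str:
    """Compose a prompt with the lens-widening default prepended and,
    optionally, the self-audit question appended."""
    parts = [DEFAULT_LENS, user_prompt]
    if audit:
        parts.append(SELF_AUDIT)
    return "\n\n".join(parts)

# Example: a history-class prompt with both habits applied by default.
print(widen_the_lens("Summarize the causes of the French Revolution."))
```

The point of the design is that the instructor (or department) owns the defaults in one place, and students inherit them automatically — habits, modeled consistently, rather than a checklist each student must remember.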

Polygnosis in Practice: Building Tools That Build Us

Theory becomes real when we put it to work. I’m thrilled to be teaching a class in the spring, Crafting Digital Identity, which aims to help learners create a professional web presence. This is where polygnosis isn’t an abstraction — it’s practice.

Students learn to prompt and code custom tools that help them craft portfolio narratives, articulate their professional voice, and position themselves strategically for graduate programs or the workforce. They build the tools, then use those tools to build their presence—websites, social media strategies, personal statements that sound like them, not like a template.

But here’s where polygnosis shifts from concept to practice: we don’t accept outputs at face value. We interrogate them using frameworks like Dakan and Feller’s 4Ds (Discover, Discern, Design, Deploy) and Mike Caulfield’s SIFT (Stop, Investigate the source, Find better coverage, Trace claims back to the original context). These aren’t just media literacy buzzwords—they’re diagnostic lenses. They help students ask: Where did this voice come from? What cultural assumptions are baked into this recommendation? How do I preserve my authenticity while leveraging algorithmic assistance?

This is polygnosis applied: using AI to amplify human agency while maintaining critical distance. Students aren’t consumers of AI-generated content; they’re collaborators who understand the system well enough to bend it toward their actual needs. They learn to see the seams, question the defaults, and design with intention.

The result? Graduates who can code a custom GPT to help draft a cover letter, then edit it with the discernment of a liberal arts thinker. Students who understand that personal branding isn’t about projecting an image—it’s about translating their complex, multifaceted selves into narratives that resonate across contexts. Professionals who can speak fluently to both recruiters and academics because they’ve learned to toggle between knowledge systems without losing coherence.

That’s the frontier we’re walking: not just teaching about AI, but teaching with and through AI in ways that make students more capable, more critical, and more themselves.

Finding Our Bearings

We’ve moved past the question of whether AI belongs in the academy. It’s already in the room, auditing everything. The better question is how we keep our humanity expansive, not defensive.

Polygnosis gives us a compass, not a map. It doesn’t dissolve disciplines; it stretches the table so more kinds of knowing can join the work. It asks us to prefer coherence over consensus, dialogue over default, and context over speed.

We don’t need a new field. We need a new fidelity—to curiosity, to complexity, to the courage of unknowing.

The frontier isn’t out there anymore. It’s between us. Within us. And, increasingly, beside us.


Next time: How the 4Ds and SIFT frameworks anchor practical AI pedagogy—and why critical making matters more than critical thinking alone.

Reflections on the 2/14/24 AI Pedagogy Workshop and Project: metaLAB (at) Harvard

Today, at the last minute, I was able to hop on a webinar over the noon hour with folks at the Harvard metaLAB, hosted by Sarah Newman. Maha Bali, who co-led the “Learn With AI: 10 Ways to Try AI in Your Classroom Right Now” webinar, helped facilitate and MC the event. I wasn’t familiar with the metaLAB, which describes itself as “an idea foundry, knowledge-design lab, and production studio experimenting in the networked arts and humanities.” One of my student assistants, a Computer Science major, got to sit in on part of it during his shift, so I’ll look forward to catching up with him when he’s back next time.

The purpose of the meeting was two-fold: a) introduce the AI Pedagogy Project website and the story about its development; b) provide breakout rooms for folks from all over the world to network and discuss some questions. Some screenshots will follow in this post.

Built mostly by Harvard students to deliver non-technical materials explaining generative AI, the AI Pedagogy Project is a curated collection of crowdsourced assignments that I highly recommend taking a look at. You can even upload your own! Sarah did a great job explaining the inclusive values at the heart of the project, which, paraphrasing her, aims to break down the gatekeeping that technical experts can sometimes exercise around AI. Sarah and her team want to make AI as accessible and transparent as possible and mitigate the alarmist narrative in the media.

In the breakout room to which I was randomly assigned, I met a librarian from USC and a professor from the University of Winnipeg. We were given these prompts:

We had only 7 minutes, but I think we got through the prompts pretty quickly because there were only 3 of us. There were 202 people on the call.

There was a poll, from which I’ll share some screenshot highlights.

Regarding the AI Pedagogy Project website,

What is a metaphor for AI you would use?

They read some of these out loud as they trickled in, and they got progressively wittier (at least the ones Sarah and Maha chose to read aloud). After Sarah read “Frankenstein” out loud, I simply couldn’t resist and typed in my answer, “Nice Narcissus.” OMG, Sarah read that one out loud as well, and I sure was glad it was anonymous! < digital blush >

Here are a few good ones. These crowdsourced polls are awesome! Nice to know I am not ALONE in pondering and making sense of this stuff!

I enjoyed the networking and learning about the AI Pedagogy Project and website. What a cool open educational resource!


Why we shouldn’t fear MOOCs

We are at a significant crossroads in higher education, in the liberal arts especially. A struggling economy for graduates, combined with public outcry over high tuition and student loans, is bringing the value of a liberal arts education into question: a perfect storm. What’s most disturbing is a lingering doubt about the return on investment, amplified by many media sources and occasionally prompting elected officials to poke fun at the arts and humanities. While many lament the advent of MOOCs, online learning has been around for nearly two decades. It’s yesterday’s news. But as Michael Roth, president of Wesleyan, has elegantly written, the liberal arts DO matter now more than ever. So the growing abundance of freely available content is a powerful incentive and opportunity to revisit and reinvigorate traditional entry-level curricula in fruitful new directions. History 101 can shed its first four weeks of material, mostly review content that’s easily flipped, and develop new, higher-quality in-class activities.

Beyond the hype, at the core of MOOCs, especially connectivist MOOCs, is a genuine community sharing of open resources, an extension of the historical mission of 20th century public libraries with print publications, to connect citizens with electronic access to assets of knowledge. The real value of open online learning is that it has solved the access issue for knowledge-thirsty netizens around the world. There’s a subtle efficiency at play here. What’s to come? Motivated self-directed learners will find ways to imbibe introductory level course materials, which will push faculty to design richer learning goals in first year seminars. These future students will be there, and we want to attract them. It’s an exciting time for ed tech folks, especially those of us just getting started in earnest with blended learning efforts, to revisit why we used technology in the first place, namely to assist and augment sound instructional design. Open online courses will not destabilize the foundation of higher education. It’s natural to be fearful at first, and it’s even healthy. The natural instinct of self-preservation brings out the best in everyone. But as we let go of this panic, there is much to reflect and build on. As George Siemens points out in a recent interview:

MOOCs are not replacement models. They don’t replace the existing university systems. They augment it and help those universities become more relevant in the digital space. We’ve known in online and distance learning for 20 years or more that students who are at risk, you can’t just give them access. There have to be support systems in place that help those students to succeed.

Not to fear: the ominous specter of online learning, free or otherwise, will not diminish the tremendous value of personalized, meaningful relationships between faculty expert guides and students in our classrooms. The abundance of open materials reveals that content is no longer king; relationships and networked connections between faculty and students matter much, much more. As Harold Jarche shared on Twitter:

Relationships cannot be automated; drones and droids won’t even come close. What’s certain is that some schools are going to close; but as history will show, institutions that collaborate around the sharing of knowledge and resources, with an eye toward distributing course redesign efforts, have much to gain.

ISD and Learning Theories revisited: re-discovering my Master’s work from ’04

Back in 2004, I graduated from Penn State with a Master’s degree in Instructional Systems Design. With the steady increase of educational technology tools in F2F and online learning, the College of Education has re-branded the department as Learning and Performance Systems. I’m really glad they kept the “Systems” piece in the title, as this is such a key part of systems thinking and the classic “systematic approach” to design (i.e., the ADDIE model and others). On Friday, I’ll be teaching an “introduction to instructional design” online module. It’s a primer for the instructional technology student apprentices and program (ITAP), an initiative of the NY6 Consortium. Here is the reading list I put together for this short class: RequiredReadingsTheoryandPractice. Below are two video clips I created last week, with yours truly starring in a one-man show as a cheerful talking head. You can also check out the site of one of our student technology assistants and apprentices here. Please make this budding blogger’s day by posting a comment!

It was a real treat for me to see that my design prototype is still “live” on a quiet test server at Penn State… Very cool, and kudos to my IT pals in the College of Agricultural Sciences for maintaining it over the years. You can check out the Invasive Species website and the ISD primer videos below.

While classic instructional design approaches provide an important and critical lens through which to view any design or redesign of instruction (I think of Carol Twigg’s work in particular when it comes to redesign), constructivist theories continue to influence and energize my thinking and work in academic technology, as they did back in 2004 when I was a graduate student and webmaster at Penn State. What a fun refresher to revisit this old stuff… it gets me thinking in new ways that hopefully benefit others, not to mention me. I guess it’s not that old after all! It might also be time to think about how I could use this work to further my research and even my studies… à la MOOC, or more formally, back in grad school?

Overview of Operation Invasive Species (Master’s thesis based on this prototype I designed and built)

An instructional design primer