Beyond AI-Proofing: Designing for Integrity, Fluency, and the Future of Liberal Arts Learning

A colleague and friend who teaches at a liberal arts college in California recently shared that she spent roughly 40 hours last summer redesigning a research paper assignment for her fall seminar to make it “AI-proof.” At the end of spring semester, her students will graduate into jobs that require them to use AI daily. We are teaching students to hide the very literacy their future demands.

The call to “AI-proof” our assignments is reverberating through higher education — from faculty lounges to academic technology roundtables. It’s urgent, it’s sincere, and it’s fueled by genuine pedagogical care. And honestly? Much of it is brilliant. Faculty are innovating at a rapid pace, reimagining assignments to protect authenticity and preserve intellectual honesty in a world where generative AI writes, summarizes, and simulates with astonishing speed. These AI mitigation strategies aren’t born of paranoia. They’re born of craft — an earnest desire to keep learning human.

The Pedagogical Craft of Mitigation

Educators have been remarkably inventive in this space, creating multi-layered strategies that do more than block AI — they build discernment. Here’s a look at some of these strategies, their power, their limits, and their impact on student learning:

[Table comparing eleven AI mitigation strategies in higher education.]

 

These approaches certainly work, some better than others, and combining them can be powerfully effective. They slow down the learning process, foreground reflection, and reward originality. They represent the best instincts of liberal arts education — the impulse to protect learning as a relational act, not a transaction.

But here’s the paradox: the more time we spend perfecting these defenses, the further we drift from the future we’re supposed to be preparing students for.

The Problem with Perfecting Resistance

We’re designing moats when we should be building bridges.

As faculty, we’ve become experts in preventing students from using the very tools they’ll need to thrive beyond graduation. While we construct sophisticated “AI-proof” systems — password-protected PDFs, multi-phase proctoring, closed platforms — the professional world is racing ahead, expecting graduates who can think with AI ethically, effectively, and creatively.

We are, unintentionally, teaching students a skill that has no future application: how to learn without the tools they’ll be required to use for the rest of their lives.

The deeper problem extends beyond individual assignments. When faculty independently prune “AI-able” work without departmental coordination, the result is curricular chaos:

  • Duplicated efforts across courses
  • Gaps in skill progression
  • Students experiencing five different AI policies across five courses in their major

This work cannot be done in isolation. It requires departmental conversation about shared outcomes, scaffolded skill development, and coherent AI policies across the major.

When “Integrity” Turns Into Invasion

Surveillance is not pedagogy.

There’s a line we shouldn’t cross: installing surveillance software on student computers. That doesn’t teach integrity; it broadcasts distrust. It says, “Your laptop — your window to creativity, exploration, and daily necessity — is now a controlled asset I can monitor at will.” If that’s our definition of academic integrity, we’ve already surrendered the idea of education as a partnership.

And we know where this logic can spiral: from “protect the test” to “police the person.” History shows how quickly tool-level monitoring becomes life-level monitoring. It’s a short walk from integrity to intrusion.

Some respond, “Fine — go back to pen and paper.” Honestly, that’s less dystopian than spyware. But let’s not romanticize blue books. For many students, English (or any academic register) isn’t their first expressive mode — and most of us actually learned to write by typing. Picture the blue-book era: you discover your real argument halfway through, realize the perfect sentence belongs three paragraphs up, and you’re frozen in ink. No cursor. No drag. No re-ordering. No thinking-through-revision — the essence of writing itself. You start drawing arrows, crossing out paragraphs, performing neatness while throttling cognition.

And outside a surgical theater, almost no profession rewards “compose your best argument by hand in 40 minutes while a clock screams at you.”

So yes — if the choice is creepy spyware or smudged ink, I’ll take ink over intrusion. But both miss the point. Neither reflects how people actually think, write, collaborate, verify, or revise in 2025. Both are control systems — one analog, one algorithmic — aimed at the wrong target.

Liberal Arts, Not Lie Detectors

The moral center of teaching is trust.

At bottom, the surveillance classroom sends one message: we don’t trust our students. The liberal arts should do better than that. We’re meant to be the standard-bearers of inquiry, dialogue, and moral imagination. If the best we can offer is dashboards and suspicion, we’ve traded away our pedagogical soul.

I say this with deep respect for colleagues doing heroic work — often under pressure, often while fielding understandable anxiety from administrators, parents, and even their own instincts to protect what matters most about education. These concerns are real. The fear is legitimate. We’re witnessing a paradigm shift unfolding before our eyes and in our classrooms, and we desperately need every perspective at the table to navigate it thoughtfully. The AI-resistant strategies you’ve built are evidence of care, craft, and commitment to authenticity. That work matters. Your voice matters.

But panic is not a strategy. And control is not pedagogy.

The way forward isn’t to out-police our students; it’s to out-design the problem. If our energy goes into designing around distrust, we’ll starve the very habits of mind we claim to teach. Design for evidence of learning, not evidence of catching. Trust as default, transparency as practice, rigor as design. That’s how the liberal arts lead.

The Shift: Performance as Pedagogy

From catching to coaching.

Here’s what changed my thinking: watching professionals in action — including myself.

In my work, I don’t submit written reports to prove what I know. I present. I facilitate. I respond to questions I didn’t anticipate. I think on my feet, synthesize in real time, and demonstrate understanding through dialogue and improvisation. That’s how the professional world actually assesses expertise — not through pristine documents composed in isolation, but through performance: the ability to explain, adapt, defend, and collaborate under pressure.

Why aren’t we teaching that?

If we want integrity without surveillance and rigor without nostalgia, change the mode of evidence. Make learning observable, human, and grounded in the communication skills the world actually values.

Design for performance, not policing:

  • Oral assessments — brief, coached defenses that make reasoning visible
  • Video essays — planned, revised, reflective storytelling with sources and documented process
  • Live presentations with Q&A — synthesis under light pressure, supported by artifacts
  • Recorded demonstrations — show the build, the test, the fix; narrate the decisions

These aren’t just “AI-proof”; they’re future-proof. They develop the soft skills employers actually demand: clear communication, adaptive thinking, grace under uncertainty. They teach improvisational, situational leadership — the ability to demonstrate what you know when someone asks a question you didn’t prepare for.

And here’s the bonus: in designing these performance-based assessments, we’re also teaching technological literacy. Students learn video editing, audio production, visual storytelling, digital composition — the multimodal fluencies that define 21st-century communication. Each iteration gives them practice. Each presentation builds confidence.

This is how I learned to show what I know. This is how your students will need to show what they know.

Want inspiration? Talk to your campus teaching, learning, and technology center. They’re already piloting these approaches. They have tools, templates, rubrics, and pedagogical frameworks ready to support you. You don’t have to reinvent this alone.

From here, the path forward becomes clear: make learning too specific, too process-visible, too human to fake.

The Transdisciplinary Turn: From Resistance to Responsiveness

The question isn’t “Should students use AI?” The question is “How do we teach them to use it well, critically, and humanely — within our disciplines?”

That’s the transdisciplinary challenge now facing every liberal arts curriculum. It’s not just a question for computer science or writing programs — it’s a shared design problem spanning philosophy, biology, studio art, and sociology alike.

An AI-responsive curriculum embraces both sides of the coin:

  • AI Resistance ensures cognitive integrity — the ability to think unaided, to wrestle with ideas, to claim one’s voice
  • AI Integration ensures cognitive fluency — the ability to think with tools, to discern when to trust them, and to synthesize machine assistance with human judgment

Neither is optional. Together, they form the new liberal art: technological self-awareness — the capacity to understand not just what we know, but how we come to know it alongside intelligent systems, and what remains distinctly, necessarily human in that process.

What AI Literacy Looks Like in Practice

A responsive curriculum asks students to:

Document their AI use as part of their process — showing how the tool informed, shaped, or misled their work.
Biology example: Generate a preliminary literature scan with AI; verify each citation; identify misrepresentations; reflect on what the AI got wrong about recent research methodology.

Reflect on the ethics of automation within their discipline — what’s lost, what’s gained, what must remain human.
Philosophy example: Prompt AI to construct an ethical argument; use course readings to identify logical gaps, hidden assumptions, or misapplied concepts; turn the AI’s output into the object of analysis itself.

Evaluate AI outputs for accuracy, bias, and context — building critical reading and synthesis skills across modalities.

Integrate multimodal expression — text, image, sound, video, data — to demonstrate learning that transcends the written word and develops the communication fluencies their futures demand.

Engage in meta-learning — understanding not just what they know, but how they came to know it alongside intelligent systems.

This is what AI literacy in the liberal arts should look like: a blend of philosophical questioning, technological discernment, creative practice, and performative demonstration.

A Call to the Faculty

The hard work of AI literacy doesn’t fall on students. It falls on us.

We’re the ones who must rethink assessment, let go of some control, and reimagine academic integrity not as suspicion but as shared inquiry. We can’t expect students to navigate this complexity ethically if we aren’t modeling how.

I’m sensitive to the constraints. I see the pressures — departmental, institutional, accreditation-driven. Many of you are teaching overloads, navigating budget cuts, fielding impossible demands. I know some of you are skeptical, exhausted, or both. That’s valid. This is hard work, and it requires support, time, and institutional commitment that isn’t always there.

But I also believe this: the liberal arts — with their long tradition of self-reflection, interdisciplinarity, and humanistic questioning — are exactly where this reimagining must begin. We’ve always been the ones asking not just what to teach, but why and how. That’s our strength. That’s our calling.

The Future We Should Be Building

All those AI-resistant strategies? Keep them. They’re valuable. They’re proof that faculty care deeply about authenticity and intellectual honesty. But don’t stop there.

Pair them with the equally essential work of AI fluency — teaching students to engage, critique, and co-create with intelligent systems. Add performance-based assessments that make learning visible, human, and grounded in the communication skills the world actually demands.

Because the future of education won’t belong to those who can simply resist AI. It will belong to those who can work wisely with it — and demonstrate that wisdom through voice, presence, and adaptive thinking.

So here’s a challenge to us:

This semester, design one AI-resistant assignment. Next semester, design one that teaches AI fluency and requires students to perform their learning — through presentation, video essay, oral defense, or live demonstration. Compare what you learn from each. Share your findings with colleagues. Coordinate as a department. Connect with your teaching and learning center. Experiment together. Build coherence.

Because the real work isn’t deciding whether AI belongs in our courses — it’s deciding what kind of intelligence we’re teaching students to cultivate, and what kinds of humans we’re helping them become.

When our assignments are too human to fake and our learning too authentic to outsource, we will have done more than “AI-proof” education.

We’ll have future-proofed it.

Polygnosis and the New Frontier: Finding Our Bearings

Continuing reflections from the Connecticut College AI & Liberal Arts Symposium

If the first post was about waking up, this one is about finding your bearings.

At the symposium, the mood shifted. The panic over AI—the moral fog, the productivity hype—gave way to something quieter and braver: curiosity. Once we accepted that AI is no longer a visitor but a roommate, the real questions emerged:

How do we live and learn beside it?
How do we create with a system that accelerates answers but doesn’t guarantee understanding?

Those questions led me back to a word that keeps earning its keep the more I use it: polygnosis—many ways of knowing.

Where the Term Came From (and Why That Matters)

Confession: I thought I coined polygnosis. It arrived during a late-night exchange with a generative model while I was trying to name the frontier beyond “inter-” and “trans-.” A neat ego moment—until I realized the idea has been with us for millennia in different clothing. The point isn’t originality; it’s precision. Polygnosis names what I couldn’t quite put my finger on: not a new discipline, but a way of composing knowledge across differences—human and non-human—without flattening them.

Polygnosis isn’t a theory; it’s a temperament for learning beside machines.

 

From Disciplines to Directions

Interdisciplinary work still starts with the disciplines—a chemist and a poet at the same table, each speaking their dialect.

Transdisciplinary work stretches the table, turning it from a fixed surface into a living workspace—one where ideas, methods, and even machines can join the dialogue.

Polygnosis begins there. It sets a stage where the conditions of learning can play out—less about blending fields, more about cultivating the stance that lets multiple knowledges coexist and collaborate. In a world where learning happens with other intelligences, that stance isn’t a luxury; it’s survival literacy.

And yes, that’s a liberal-arts value at its core: the capacity to survive—and thrive—through changing times by reading context, holding paradox, and practicing judgment.

The Mirror Problem (and Kranzberg’s Reminder)

A recent Harvard study asks a piercing question: Which humans are our models built on? The answer won’t shock you—mostly Western, English-speaking, educated, highly online adults. That narrow slice of humanity quietly became the template for what many models assume is “normal.”

We can’t call that neutrality. We can only call it inheritance.

Melvin Kranzberg’s First Law lands squarely here: technology is neither good nor bad; nor is it neutral. Models aren’t villains or saints; they’re mirrors tuned to the cultures that polished them. The risk isn’t that they’re “biased”—it’s pretending they’re not.

So the task isn’t to scrub away particularity; it’s to expand the story the machine can tell and we can interpret. That is polygnosis in practice.

From Bias to Balance

Every prompt is a vote for what counts as knowledge. If we want better answers, we need better questions:

Whose perspective might be missing here?

How would this read through another cultural lens?

What assumptions am I reinforcing by treating this response as universal?

This isn’t performance or politeness. It’s epistemic honesty—being clear about where knowledge stands when it speaks. We’re not optimizing for optics; we’re optimizing for accuracy with context.

The Liberal Arts as a Living Laboratory

The liberal arts have always practiced polygnosis—even if they never used the word. They train the muscles we need now: interpretation, comparison, translation, discernment.

Keep it simple and grounded:

In a composition class, a student uses an LLM to draft an intro. The work isn’t to grade the draft; it’s to ask how the model thinks and where the student’s voice should diverge.

In biology, image models help visualize ecosystems. The lesson isn’t “cool pictures”; it’s What does the model miss about living systems, and why?

In history, we compare a human summary and an AI summary of the same source, side-by-side, then mark what each foregrounds and erases.

Down-to-earth, doable, honest. Not spectacle—pedagogy.

Polygnosis as Ethos (Not Coursework)

To theorize polygnosis is to pin the butterfly. I’d rather see it fly.

Polygnosis is an ethos—a discipline of attention. It’s curiosity with a spine, method with humility, design as dialogue. In classrooms, studios, and labs, it looks like:

  • Co-creating a syllabus with an AI, then annotating its blind spots in the margins.
  • Asking a model to produce three interpretations from different cultural frames—and having students choose, remix, or reject them with reasons.
  • Building default prompts that say: “Answer without assuming age, ethnicity, or region; if an assumption is necessary, make it explicit.” Not for show. For truth.
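To make that last point concrete, here is a minimal sketch of how such a default might be baked into a small course tool so that every request carries the same “widen the lens” instruction. It assumes the OpenAI Python SDK; the model name, the helper function, and the exact wording of the system prompt are illustrative placeholders, not a prescription.

```python
# A minimal sketch: a course-wide default prompt applied to every request.
# Assumes the OpenAI Python SDK; model name and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEFAULT_LENS = (
    "Answer without assuming the reader's age, ethnicity, or region. "
    "If an assumption is necessary, state it explicitly. "
    "Flag any cultural or demographic assumption you made."
)

def ask(question: str, model: str = "gpt-4o-mini") -> str:
    """Send a question with the course-wide default prepended as a system prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DEFAULT_LENS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize the main critiques of standardized testing."))
```

The same pattern covers the “set defaults” habit in the next section: the point is not this particular API, but that the default travels with every prompt instead of depending on each student remembering to type it.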

What We Should Do Next (Practical and Small)

Set defaults that widen the lens. “Unless specified, respond for a general audience; flag any cultural or demographic assumption you made.”

Teach self-audit. Before accepting an answer, ask the model (and the student): What perspective did this come from? What would challenge it?

Diversify inputs. If we feed narrow corpora, we’ll get narrow mirrors. Bring in sources—texts, datasets, voices—that a general model is likely to underrepresent.

Reward interpretation. Grade the reasoning about outputs, not just the outputs themselves. We’re cultivating readers of intelligence, not just users.

None of this requires a new program or a moonshot grant. It requires habits, modeled consistently.

Polygnosis in Practice: Building Tools That Build Us

Theory becomes real when we put it to work. I’m thrilled to be teaching a class in the spring, Crafting Digital Identity, which aims to help learners create a professional web presence. This is where polygnosis isn’t an abstraction—it’s practice.

Students learn to prompt and code custom tools that help them craft portfolio narratives, articulate their professional voice, and position themselves strategically for graduate programs or the workforce. They build the tools, then use those tools to build their presence—websites, social media strategies, personal statements that sound like them, not like a template.

But here’s where polygnosis shifts from concept to practice: we don’t accept outputs at face value. We interrogate them using frameworks like Dakan and Feller’s 4Ds (Discover, Discern, Design, Deploy) and Mike Caulfield’s SIFT (Stop, Investigate the source, Find better coverage, Trace claims back to the original context). These aren’t just media literacy buzzwords—they’re diagnostic lenses. They help students ask: Where did this voice come from? What cultural assumptions are baked into this recommendation? How do I preserve my authenticity while leveraging algorithmic assistance?

This is polygnosis applied: using AI to amplify human agency while maintaining critical distance. Students aren’t consumers of AI-generated content; they’re collaborators who understand the system well enough to bend it toward their actual needs. They learn to see the seams, question the defaults, and design with intention.

The result? Graduates who can code a custom GPT to help draft a cover letter, then edit it with the discernment of a liberal arts thinker. Students who understand that personal branding isn’t about projecting an image—it’s about translating their complex, multifaceted selves into narratives that resonate across contexts. Professionals who can speak fluently to both recruiters and academics because they’ve learned to toggle between knowledge systems without losing coherence.

That’s the frontier we’re walking: not just teaching about AI, but teaching with and through AI in ways that make students more capable, more critical, and more themselves.

Finding Our Bearings

We’ve moved past the question of whether AI belongs in the academy. It’s already in the room, auditing everything. The better question is how we keep our humanity expansive, not defensive.

Polygnosis gives us a compass, not a map. It doesn’t dissolve disciplines; it stretches the table so more kinds of knowing can join the work. It asks us to prefer coherence over consensus, dialogue over default, and context over speed.

We don’t need a new field. We need a new fidelity—to curiosity, to complexity, to the courage of unknowing.

The frontier isn’t out there anymore. It’s between us. Within us. And, increasingly, beside us.


Next time: How the 4Ds and SIFT frameworks anchor practical AI pedagogy—and why critical making matters more than critical thinking alone.

Co-Creating with the Machine: What AI Reflects Back

A reflection on the 2025 Connecticut College AI & Liberal Arts Symposium, exploring how AI is reshaping liberal arts education through cross-disciplinary learning, human connection, and a shared sense of awakening.

Introduction

Across three autumn days at the AI and the Liberal Arts Symposium at Connecticut College, the conversations felt less like a conference and more like a collective act of awakening.

First, a huge thank-you to our colleagues and friends at Connecticut College for hosting such a fantastic event. This was, without question, the best professional development experience I’ve had since the pandemic.

I only found out about it a few weeks ago, so I wasn’t able to submit a proposal this time—but I highly recommend the symposium to anyone interested in the intersections of AI and the liberal arts. It’ll return next year, and you can bet I’ll be responding to the call for proposals to share the exciting work we’re doing at Skidmore that contributes to this dynamic and evolving space.

I arrived early and, after registering, joined a group tour of Connecticut College’s amazing arboretum. It’s open to the public, and I highly recommend a visit.

Photos from the visit: the College Center at Crozier-Williams, the welcome packet, a vista with weeping conifers, and a reflection in the pond.

AI isn’t just a new tool for the liberal arts — it’s a mirror.

Everywhere, that mirror reflected something back: our assumptions about knowledge, our fatigue with disciplinary boundaries, our uneasy faith in human judgment. Some framed AI as a pedagogical partner, others as a provocation. But beneath every debate ran a shared undercurrent — that the liberal arts must not retreat from AI, but reinterpret themselves through it.

Beyond Silos: Following the Phenomenon

One recurring theme was the generative convergence of disciplines, where boundaries became bridges. Panelists from across fields described how AI resists neat categorization: it writes like a humanist, reasons like a scientist, and fails like an artist.

A digital humanities panel explored how generative tools can help students see structure in story or bias in data. An environmental studies group used AI-generated imagery to visualize climate change as cultural narrative rather than scientific abstraction. A philosophy instructor co-taught a course with a data scientist, letting students interrogate both logic and ethics in the same breath.

These moments revealed a shift — not from one discipline to another, but beyond discipline entirely — into what several speakers called transdisciplinary learning: inquiry that follows the phenomenon, not the field.

It’s an approach that feels truer to the liberal arts than ever — dynamic, synthetic, and driven by wonder rather than walls.

The Liberal Arts Awakening

Across sessions, a pattern emerged — one that keynote speaker Lance Eaton gave a name to in his address, The Sleep of the Liberal Arts Produces AI. His metaphor caught fire throughout the symposium. In panels and workshops afterward, people kept returning to it: the idea that AI didn’t replace us — it revealed where we’d already fallen asleep.

“AI didn’t replace us — it revealed where we’d already fallen asleep.”

That sleep took many forms.

Dismissal — the academy’s habit of treating new media and emerging technologies as distractions rather than dialogues.
Fetishization — the way we mistake performance of intellect for presence of curiosity.
Externalization — the quiet outsourcing of our public mission to private systems and paywalled knowledge.

Panelists didn’t treat these as abstract critiques; they tied them to practice. A librarian showed how paywalled scholarship feeds commercial AI systems — what she called academic fracking. A literature professor confessed that she once told students to avoid ChatGPT, only to later use it with them to analyze power structures in Victorian novels. A group of students described AI as their learning partner, not a shortcut — proof that the boundaries between tool and teacher are already blurring.

“AI didn’t wake the liberal arts — it found them stirring.”

 

The Human Element: Productive Struggle, Rediscovery, and Redesign

What made the symposium electric wasn’t the technology — it was the humanity pulsing through every discussion. Faculty spoke less about how to control AI and more about how to stay human beside it.

One recurring idea was productive struggle — not as an obstacle to learning, but as its catalyst. AI tools created just enough uncertainty to be generative. Students found themselves asking new kinds of questions: What should I be doing less of? What does originality look like now? How do I make the best use of time with a professor, when the “expert” is increasingly a facilitator of knowledge, not its gatekeeper?

Faculty, too, found themselves in unfamiliar territory. Long-held routines were challenged by tools that could draft, translate, or simulate. The struggle wasn’t about obsolescence — it was about reorientation. What habits of mind are worth keeping? What does rigor mean when the machine can “write” an answer?

And in that discomfort, something vital reemerged: the shared space of learning. Office hours became less about solving and more about sense-making. Less about correctness and more about discernment. Students didn’t need someone to check their work — they needed someone to help them recognize what kind of thinking it was.

In that spirit, the liberal arts reasserted their enduring role — not as defenders of tradition, but as designers of discernment. When algorithms simulate knowledge, discernment becomes the highest art form.

AI may have accelerated this shift, but the liberal arts were always headed there. What emerged across the symposium was a deeper understanding: that growth comes from tension, that rediscovery often begins with unlearning, and that the future of learning may look less like mastery and more like a shared choreography of questioning.

Epilogue: Sora and the Mirror

After the symposium, that metaphor of the mirror stayed with me — especially as I experimented with Sora, a tool from OpenAI that turns words into video. I had received early access just before the conference began. By the time it ended, I had shared all six of my invite codes with colleagues who were curious, eager, and already dreaming up experimental test cases. Invite codes are a fascinating way for software companies to roll things out.

Watching that rollout unfold felt strangely familiar — like history rhyming. Back in 2002, when I was a webmaster in the College of Agricultural Sciences at Penn State, a computer science grad student forwarded me a link to something called Google Beta. “You should check this out,” he said. I did. I joined. And unknowingly, I stepped into something that would transform how the world searches and knows.

Before parting ways, a few of us made videos — short visual essays (you only get 10 or 15 seconds to try out your prompts): a philosopher in conversation with a clone of herself, a student’s dream rendered into shifting light and architecture, a reimagined classroom set in the year 2130. Each piece asked, in its own way: If AI can imagine with us, who decides the shape of the story?

Ultimately, Sora has that same quality — a shimmer of arrival. Something just beginning to shape the future of creation, while reflecting back the questions we haven’t stopped asking.

The technology isn’t just extending imagination. It’s echoing it. It’s reflecting it. And in that echo, it’s asking us what kind of storytellers we want to be.

Conclusion: The Liberal Arts, Awake

By the final plenary, the tone had shifted from anxiety to resolve. The liberal arts weren’t under threat — they were awakening.

The symposium closed not with consensus, but with a shared rhythm: a refusal to let automation define what it means to learn.

Across those three days, AI became less an existential threat and more an existential invitation — not to escape technology, but to wake beside it.

Curious. Critical. And still — profoundly human.

 



Why Work Still Feels Broken and What I’m Learning to Fix

Today, I tuned into an Alumni Learning Consortium webinar, Why Are We Here? Creating a Work Culture Everyone Wants, led by Jennifer Moss and based on her newest book of the same name. It was packed with insights about burnout, trust, flexibility, and the deeper reasons so many people feel disconnected from their work. While the lecture format delivered a lot of information quickly, it also surfaced something deeper: how much we’ve normalized exhaustion since the pandemic and how hard it is to imagine a different rhythm.

This post is part reflection, part practice. I’ve been thinking about what resonated with me, and how I can apply it—not just as an idea, but as a leader trying to do things differently.

We are still stuck in a mode of pandemic-induced productivity obsession. Yes, we innovated and adapted quickly. But that speed came with consequences: burnout, detachment, time poverty, and a cultural valorization of being “always on.” Moss made the case that it is time to relearn the basics of how to behave like healthy humans at work.

That starts with a few uncomfortable truths. First, burnout is not a personal failure. It is a design flaw. A system problem. Leaders often drive burnout without realizing it, setting the tone, modeling urgency, never taking time off themselves. It is no wonder the rest of us follow their lead, right into chronic exhaustion.

Second, the old idea of work-life balance is no longer useful. Moss proposes something better: work-life harmony. A flexible, personal, purpose-driven approach to how we spend our energy. Harmony does not mean equal time. It means intentionality. It means asking, Where does work fit into the goals of my life? not the other way around. After all, no one on their deathbed wishes they had checked more email at 6 a.m.

What stood out to me most was the framing of passion-driven burnout. Moss distinguished between harmonious passion, where work energizes and coexists with life, and obsessive passion, where work controls us, making everything else feel secondary. That framing hit close to home. For many of us in mission-driven roles, the danger is not disengagement. It’s over-engagement without boundaries.

So what do we do? Moss suggests starting small and starting local. Pick one of the six root causes of burnout: unsustainable workload, lack of control, unfairness, lack of recognition, mismatch of skills and values, or social disconnection. Then ask yourself, Where can I push for clarity or change? Talk with your manager. Identify inefficiencies. Suggest shorter meetings. Focus on reducing stressors by just 5 percent this month.

She also reminds us that trust is the foundation of all healthy teams, and that it must be built through consistency, not charisma. Checking in, not checking up. Asking what someone needs, not just what they are doing. Creating rituals of transparency and feedback. Replacing exit interviews with stay interviews.

Perhaps the most future-facing part of Moss’s talk was her handling of AI anxiety and ageism. With half of all workers expected to need re-skilling in the next two years, and intergenerational friction still strong in many organizations, she argued that peer mentorship and flattened learning hierarchies will be essential. Gen X, in particular, is often caught in a quiet struggle. They are promoted more slowly, shoulder more caregiving, and are the least likely to speak up. We need better models for leadership across generations, and clearer invitations to let everyone’s wisdom in.

This talk was not just about what we have lost. It was also a reminder of what we gained during the most surreal years of our lives. The moments of humor, of flexibility, of emotional honesty. Moss urged us not to forget what the pandemic revealed about our capacity to adapt, our hunger for meaning, and our ability to connect, even in chaos.

Workplace wellbeing will not be solved by perks or posters. It is built through trust, autonomy, and reflection—and by leaders and colleagues willing to ask (and re-ask), Why are we here?


Practicing What She Preached: Questions for a Leader (and Those Who Hire Them)

After sitting with Moss’s message, I began thinking not just about culture, but about leadership at every level. This is not just about CEOs or campus presidents. It is about department chairs, project leads, IT directors, and anyone trying to foster trust while navigating complexity, burnout, and the push toward hybrid work.

If we are serious about building the kind of workplace Moss describes (flexible, purpose-driven, humane), it has to start with questions.

Below are two sets of questions I have begun using:
– One for myself, as a check-in tool to stay aligned
– One for hiring or mentoring senior leaders, especially CIOs, who often sit at the heart of systems change


Five Self-Reflection Questions 

When have I modeled vulnerability and transparency? How did my team respond?
Trust starts with visibility and imperfection.

How am I giving my colleagues autonomy and clarity, not just assignments?
Checking in, not checking up. It matters.

Where might I unintentionally be contributing to burnout? How can I reduce friction this month by 5 percent?
One meeting canceled. One unclear process improved. Small moves add up.

What hidden stressors are affecting me and others around me, and how do I make space to hear them?
Ask better questions. Listen without solving.

Am I managing up with the same honesty I ask of my team?
It is not just about leading down—it is about naming inefficiencies and clarifying priorities upward too.


Five Questions to Ask Any Human-Centered Leader, Like a CIO

Can you share a specific example of how you have led with empathy and self-awareness during a high-pressure project, especially when your team was under-resourced or feeling burned out?
This reveals servant leadership under stress, not just tactical decision making.

How do you ensure that your IT strategy and operations are aligned with the broader goals and pressures faced by students, faculty, and staff—especially when everyone is being asked to do more with less?
Tests for vision and situational awareness.

Describe a time when you had to build up your team’s morale and sense of value in an environment where recognition, compensation, or resources were limited. What specific actions did you take?
Looks for creativity, values-based leadership, and retention thinking.

How do you foster collaboration and open communication across departments, especially when you need buy in from stakeholders who may not initially support IT initiatives?
Evaluates cross-silo influence and cultural fluency—core to 360-degree leadership.

When you feel insecure about your own knowledge or abilities, how do you model both vulnerability and assertiveness to your team?
This probes for humility, clarity, and the balance between transparency and guidance.


These two lists, one inward and one outward, are part of the same project: building workplaces that work for people.

Jennifer Moss reminded us that culture is not a product. It is a set of choices, repeated often, guided by questions like these. And the more we ask them—out loud, together—the closer we get to the kind of workplace everyone wants.

On the ephemera of web tools

So cool! Here we are in Feb 2024, and here’s a blog post that was never published. A rare find! Here’s the draft from whenever it was! Probably in the vicinity of 2012. 

A few years back, I came across a fabulous do-it-yourself animation tool called Xtranormal. I’d totally forgotten that I’d made this ages ago, for free, during the ‘MOOC hysteria,’ the specter of which is probably best captured by Alan Levine in one of his many posts titled just that.
In this animated MOOCup (mockup) interview I whipped up, Computer Science postdoc Clark Cable, a graduate of MOOC College, discusses his credentials and badges during an interview with hiring manager Jane Joplin. He is literally shown the door, with the limited gestures the freemium version of the software afforded at the time.

One of my favorite authoring tools, Xtranormal has put its freemium plan rest. Adieu, fine tool. May some semblance of your original code live on forever in other droids.