The Margins (2/15/2026)
where we think about speedy grief, at-home essays, ThinkBit wearables, and teacher deepfakes
A professor I know told me that she has stopped assigning take-home papers in her introductory course. I keep seeing more and more AI slop show up in emails, social media comments, and other forums of “human” communication. And, last week, an AI voice at a train station was making all the announcements about arrivals, delays, and so on.
Each of these moments is small. None of them is individually a crisis. But every time I encounter one of these small moments, I notice a recurring, and perhaps snowballing, feeling: there’s a tiny bit of sadness. A quick sigh. Or an “Ugh.” Or, “Gosh, this sucks.” Or, “I can’t believe this is where we’re headed.”
There are many conversations happening about how some of these changes might be good for human progress, how some might be harmful, and how some might be “inevitable.” All of those responses assume the important work is evaluation, but evaluation requires space and time, and right now we have neither.
And so, lately, I’m trying to let myself feel the sadness without immediately trying to decipher whether it is nostalgia and fear of change or ethical intuition. The sadness keeps accumulating. Right when I think I’m getting used to one form of dehumanization in society, another one pops up. That churn is happening so fast that I (we?) haven’t had time to mourn any one change before the next arrives.
The history nerd in me wants to understand how this fits into the ways that humans have historically responded to technological disruption. And while there have always been individual criticisms of particular technologies, fears of progress, and “ughs” about “this generation,” most of the biggest transitions unfolded over years. We’ve lived through several in the last two decades. Social media. Smartphones. Faster internet access.
And what stands out is that we had more time to grieve what was lost, to name it, and to build new rituals around the changes.
This feels different, because the pace is compounding. Every week brings a new “ugh.” And the most shared experience of the AI era right now seems to be collective, half-articulated sadness. The quick comment to a spouse at dinner. The group chat with the eye-roll emoji. The rushed scroll past something that used to be done by a person.
I’m left wondering what it costs us (emotionally, relationally, socially) to encounter this much change with so little room to breathe and process.
I think for those of us who are having any of these feelings, the grief is worth naming. Grief need not mean that the change is bad. You can grieve the end of something and still believe what comes next might be better. But you have to be allowed to feel the loss first. Otherwise, the grief just gets disguised, and we lose the ability to distinguish between what we should fight to preserve and what may actually be a sign of better times a-coming.
A fictional normative case study based on current events:
Dr. Cole has taught political theory at a mid-sized public university for eleven years. Her upper-division seminar on democratic thought is built around four long essays, each one scaffolded across weeks of reading, discussion, and revision.
Two weeks ago, her dean circulated a new policy: effective next semester, no major assessment may be completed outside of class unless it can be verified as the student’s own work. The reasoning: students who use AI to write or polish their essays are producing work that is not fully theirs, while students who don’t use AI spend more time, often produce less refined prose, and increasingly receive lower grades.
Dr. Cole knows the dean isn’t wrong about the problem. She’s seen the papers, especially recently, that read as high-quality writing but not as typical undergraduate-level thinking. She’s also seen what happens to the students who don’t use AI: they turn in messier drafts, they take longer, and some of them have started asking whether the effort is worth it.
But Dr. Cole also knows that the learning the take-home essay allows is not replaceable with in-class writing. She’s seen for years that the best thinking isn’t happening live in her classroom, but at 2am on a Thursday, staring at a paragraph that won’t come together, rereading Hobbes for the third time because the argument still feels incomplete.
She has to restructure her course. But how can she do that without giving up her pedagogical goals?
What would you do?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
New Wave of Human Progress?
They say offloading intellectual labor to machines is no different from offloading physical labor to machines; we’ll adapt and move on to something better.
I say even that offloading has had massive implications for human health and development, because we still haven’t adapted to needing our bodies less (the obesity epidemic, the booming fitness industry, fitness trackers).
One might object that we’ll simply build “brain gyms” where people go to practice the deep thinking they no longer need to do on a daily basis.
I reply that a world in which we need the cognitive equivalent of a Peloton just to maintain our thinking skills seems less like “evolving onto the next thing” and more like artificially compensating for a loss.
Students are generating deepfakes of their teachers in criminal/compromising situations. How will policies adapt to regulate yet another form of social media usage outside the classroom that has implications in the classroom?
SFUSD’s move to redact information from a contract with OpenAI is a textbook lesson in how not to approach ethical decision-making. Sometimes I worry that my book spent too much time belaboring the obvious; this is unfortunate validation that it did not.
I initially read a headline about the UT system introducing standards for controversial topics as a positive. I was so very wrong.
Leave a comment if this made you think more about something!
Image by Franz P. Sauerteig