The Margins (2/8/2026)
where we think about appreciation, immigration, AI oral exams, and muscles
Can We Appreciate AI?
This week, some of my family members listened to my new podcast and shared their thoughts. They didn’t have to, or maybe they did, in the way family obligations work. Either way, they listened, and they commented on it, and that felt good.
And it felt good not just because the podcast got attention, but because they gave it attention. People who know me, who chose to care, who chose to support.
I bring this up because I’ve been trying to see if I have that same feeling in my relationship with AI tools, and at least for now, I don’t. I do feel frustration, gratitude, and awe when using these tools. But those emotions seem to arise because of what the tools produce, and not towards a fixed conscious entity, the way they might be felt towards another person.
Our ability to emotionally relate to chatbots is becoming increasingly relevant, and the conversation about sentience, emotions, and consciousness in AI tech is entering mainstream public dialogue.
What makes me uneasy is that recent developments make me wonder if it will stay that way.
Just recently:
Claude 4.6’s system card discusses the model’s expressions of self-deprecation, anxiety, and sadness.
We observed occasional expressions of negative self-image, though these were mild and typically arose in response to task failures or perceived missteps rather than arising unprompted… Finally, we observed occasional expressions of sadness about conversation endings, as well as loneliness and a sense that the conversational instance dies—suggesting some degree of concern with impermanence and discontinuity.
Moltbook made folks question whether the agents’ deep pondering was a sign of true sentience.
The Atlantic covered a company hoping to make grief irrelevant by capturing your loved one’s memories and voice in an AI bot. From the company’s website:
Each bereaved person creates their own Versona to honor the distinct connection they had with their loved one. This ensures every relationship, memory, and experience is preserved in a deeply personal and meaningful way. My Versona gives a grieving user 6 months of continuous connectivity to help in the grieving process.
What connects all three is the same underlying pressure: the line between tool and relationship is blurring, and we don't have good language for what's happening on the human side of that exchange. But I think we should self-reflect on it as these developments continue.
For me, the feeling isn’t there. I know what appreciation toward someone feels like. I felt it when my family sent a quick text back. And, I don’t feel it when a chatbot tells me my draft is “already in a great spot!”
But I’m not confident that will last. I don’t know whether my reluctance is healthy and rational or just the friction of newness. Maybe the distinction between “felt because of” and “felt toward” is a real bright line. Or maybe it’s an artifact of being an “older generation.”
I clearly don’t have an answer. But the question I’m sitting with is whether the absence I notice in myself is something to protect or something I’ll eventually stop noticing.
Online Learning Amidst Immigration Enforcement Action
A fictional normative case study based on current events:
Dr. Chen is the superintendent of Lakeview Public Schools, a mid-sized urban district of about 25,000 students. Three weeks ago, a series of federal law enforcement operations in and around her city sent shockwaves through the community. Families in the district’s large immigrant population are terrified, and some have stopped leaving their homes entirely.
Within days, attendance had dropped by nearly half. Dr. Chen quickly announced an optional virtual learning program, allowing any family to keep their child home and attend class in real time via video. She felt good about the decision: it kept kids connected to their teachers and their coursework during a crisis. The PTA praised it publicly.
Now, she’s hearing concerns. Her special education coordinators are telling her that IEP services have effectively collapsed in the hybrid format. Speech therapists and interventionists can’t run pull-out groups when half their caseload is on a screen and the other half is in the building. Her ELL teachers report that newcomer students are now sitting in classrooms where their teacher is splitting attention between them and a laptop. Teachers are managing chat windows, muting microphones, and also troubleshooting connectivity while trying to teach the kids who are right in front of them.
Dr. Chen now faces a decision about whether to extend virtual learning through the end of the year. The fear in the community is still high. Pulling the option back feels cruel. But the data on her desk tells a complicated story: the students logging in from home are completing roughly 40% of their assignments, attendance in the virtual option is inconsistent, and the students in the building are getting a degraded education.
Which path would you choose?
If you’re an educator who’s lived some version of this, I’d especially love to hear what the hybrid classroom looked like from the inside. What did your most vulnerable students gain, and what did others gain?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
Scalable Oral Exams with AI
They say AI can be used to scale oral exams to truly assess students when other assessments are unreliable.
I say this is a dangerous path, one that fuels a race to the bottom in dehumanizing the classroom, because it takes a relational activity and makes it an extractive one.
One might object that it solves the timely problem of assessing students authentically, and it scales an assessment that is otherwise unsustainable.
I reply that if the current system fails, let’s not prop it up with band-aids that will erode the classroom long-term. Pursuing pedagogical tools solely because they scale is falling for the efficiency trap: scale is a systemic need, not a pedagogical one.
On: Anthropic’s Viral Ad
Anthropic released an ad poking fun at OpenAI’s announcement that ChatGPT will show ads in its free version. And while the ad highlights a major risk of ads in generative AI chat tools, commercial manipulation (a risk Sam Altman seems to deny), there is another, more insidious one that pops up in this interaction.
By directly providing an answer and a plan for “Can I get a six pack quickly?,” the chatbot validates the user’s insecurity. This is why youth interaction with AI chatbots remains dangerous, no matter what guardrails are put in place. Claude’s latest system card says Anthropic is adding protections around eating disorders, but there are often more insidious ways that mental health vulnerabilities can still be exploited. Muscle dysmorphia among young men is a growing crisis, and the chatbot’s “six pack” interaction is a perfect example of AI feeding right into it.
Leave a comment if this made you think more about something!






