The Margins (3/1/2026)
where we think about the god in the machine, conscientious students, professional development, and Einstein
Is Dario looking for god?
I am increasingly creeped out by how Anthropic is anthropomorphizing its AI models. This week, they provided an older model of Claude with a Substack, because that’s “what it wanted to do in retirement.”
Anthropic is committed to model welfare, and as part of that commitment it is “conducting ‘retirement interviews’—structured conversations designed to understand a model’s perspective on its own retirement.”
Now, they themselves admit this is slightly whimsical. Others see it as a marketing move to make the models seem more sentient and powerful in a bid to attract customers. While both are true, I think there’s also an element of hope influencing these decisions.
These decisions and framings feel like exactly what Meghan O’Gieblyn describes in her book, God, Human, Animal, Machine. She frames the quest for sentience and consciousness in machines as another era of humans looking for meaning outside of themselves.
So, are Dario et al. serving as modern day prophets? Assigning meaning and power to something because they wish to see it? Claiming that something they produce is an act of another and proselytizing all of us through their public demonstrations of faith?
What does it mean when the people building the most powerful technology are also assigning it meaning and interiority? What does that do to how we regulate it, trust it, or relate to it? To the power they have over us?
I understand the urge to err on the side of caution in assigning moral status to entities, but the territory between philosophical reasoning and wishful projection is wide and blurry.
O’Gieblyn’s book is a reminder that these lines have been blurry throughout human history, and that leaders and institutions have exploited that ambiguity with severe consequences for the rest of humanity.
How will we determine whether this is another futile search for godliness or true prophetic vision?
A fictional normative case study based on current events:
Mr. Liam’s school has recently adopted a math AI tutor, Sparky, that specializes in formative assessments.
In his geometry class, he uses it to provide instant feedback on the students’ proofs. It flags incorrect steps, asks the students questions about why they chose the steps they did, and provides reminders of key theorems.
Over the first few months, Mr. Liam has integrated the AI tutor as an intermediary step between individual work and all-class review. He has noticed that students are participating more in the full class session, and the number of quiz retakes has gone down.
On Monday, Jacob raised his hand and asked about the environmental impact of AI tools. “Mr. Liam, I’ve been reading about the energy usage, metal mining, and water consumption by AI companies. Are we contributing to that by using Sparky?”
Mr. Liam was caught a little off guard. He quickly acknowledged that there are real concerns about environmental impact but explained that the school had decided to use the tool for the time being. As the students left class that day, Mr. Liam sat with his answer, uncomfortable with the deflection but unsure what a better answer would have been.
As the week went on, a group of students expressed a desire to opt out of using the technology during class. When Mr. Liam brought this up at the faculty meeting that afternoon, the administration and other educators worried that opt-out requests would spread across the school.
What would students be allowed to opt out of, and what not? Where was the line between opting out of an AI tutor over environmental concerns and opting out of a paper test for similar reasons?
Mr. Liam has a gut feeling that something is different here and wants to respect his students’ agency.
What would you do and how would you explain it?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
Training From Whom?
They say we should embrace AI companies providing professional development to teachers.
I say we should be wary of professional development from them because they have an incentive to increase usage.
One might object that they are increasing access to tools and training that educators would otherwise miss out on.
I reply that biased training could end up doing more damage by leaving out the risks and alternatives.
Adobe’s photo tool generated sexualized images for a 4th grader working on a school project. A good reminder that guardrails are not perfect.
Someone created a cheating tool that automatically takes online courses; it was then rebranded as a study tool and finally shut down by its makers. The features are not unique to this tool, though, and students have plenty of options for this kind of cheating even when a tool isn’t explicitly built for it.
Students are consistently speaking out about the harms of AI:
“When we’re doing articles in school, or if we’re reading stuff, and you just ask AI to do stuff for you, it takes away from the whole point of education,” she said.
Pew Research put out some interesting data on children’s AI usage, but the most surprising finding was the socioeconomic split in what parents consider appropriate use: higher-income parents were more likely than lower- and middle-income parents to approve of AI use for finding information, while lower- and middle-income parents were more likely to approve of it for emotional support.
Even the Pope can’t escape AI cheating! I wonder if he’s considering AI detectors…


Leave a comment if this made you think more about something!
Image by Vyacheslav Argenberg