The Margins (2/22/2026)
where we think about my inability to vacation properly, private schooling, access rhetoric, and Zuckerberg
I’ve been traveling this week, and it’s been pleasant to be out of the AI bubble. But as I noticed that AI hadn’t come up once in conversation, I wondered whether this “obliviousness” would rapidly change if some advancement in AI spread around the world overnight, the way the pandemic did.
This analogy isn’t mine and is being used to send that exact warning: that AI changes are about to hit the world like the pandemic did.
However, I am reminded just how slow global technology adoption and adaptation are. For all the urgency that those of us working in the space feel, there’s a huge swath of people globally who couldn’t care less yet, and won’t no matter how capable the latest model is.
It’s tempting for some to adopt a savior mentality and see the lack of tech literacy as something to be solved. But most communities would see marginal, if any, benefit from AI adoption on an individual basis. The farmers, restaurants, shopkeepers, and service staff will not have “upended” careers. Their lives will shift through second-order effects (supply chains, pricing, labor markets), but the direct rapid-disruption narrative doesn’t map onto their reality.
This disconnect isn’t just anecdotal. When we see Altman’s recent comment that we are “unprepared” and the viral graphic about AI tool usage globally, it’s a reminder that for much of the tech industry the entire world is lived on and through a computer. That makes rapid improvements in AI technology feel astounding and earth-shattering, when in reality much of the world’s population lives outside these bubbles.
Now, I realize there are existential risks that AI brings that don’t depend on individual users (warfare, financial crisis, etc), and so these people can still become unwilling victims of any dangerous outcomes. But, some of the narrative around the urgency and pace implies that social and economic changes will happen overnight for humans, and I’m hesitant to believe that.
Change will be slowed down by human bottlenecks. New pharmaceutical discoveries will still require human trials. Better diagnosis algorithms will still depend on human care providers. Better AI models are not going to pluck fruit, sell it in their stores, and cook it into desserts for us by the end of this year. Or next year. Or the year after that.
That’s why the comparisons to pandemics fall apart. The pandemic hit every community with equal risk, and no one got to say “no, thank you.” With AI, most of the world is still saying exactly that, and will be for a long time.
A fictional normative case study based on current events:
Aniya is the mother of three children and has recently moved to a new neighborhood. She and her wife, Kara, chose the new town because it’s known for its school district, and their first child will be entering kindergarten next year.
Aniya and Kara have the resources to send their children to a private school, but both were committed to public schooling well before they became parents. For Aniya, it was a matter of philosophical commitment to public education and a desire to ensure their children encounter a diverse community of peers. For Kara, it was more personal; she credits much of her own growth to her experiences with her public school teachers and friends. Both of them want to support and invest in public schooling.
The district has all the right metrics: a diverse student body, high test scores, funded extracurricular programs, and a low student-teacher ratio. However, this year the school announces a massive investment in learning technologies intended to further improve math and reading scores in the district.
Aniya and Kara are concerned. The school plans to replace almost half the school day with screen time. Students will receive all primary instruction on devices, with additional time for some class activities, primarily in music and art. Both of them worry about what this means for their children’s socio-emotional development, community learning, and even data privacy.
The other parents in the community seem mostly supportive of or indifferent to the changes. There is a lot of trust in the school, earned over the years.
What would you do?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
Access to What?
They say we need to give every student access to AI.
I say we should slow down and consider the risks before we adopt huge AI plans for schools, because we still do not know the potential harms or have evidence of benefits.
One might object that some students already have access to the tools, so we are just leveling the playing field.
I reply that access to what matters, and if AI is net bad for students, then we don’t want equitable access.
The Harvard Gazette published a podcast on AI & education this week, and what stood out to me was this quote, which sounds far too similar to the comments we saw about cellphone usage. I really hope we’re not waiting over a decade to do something about it this time.
We did a survey with 7,000 high school students. We asked them, “Do you feel that you are relying on AI too much for your learning?” And almost half of those students said that, “Yes, I feel that I’m using AI a bit too much.” And then over 40 percent said, “I tried to limit my usage, but it was so difficult I failed.”
On that note, Zuckerberg’s argument that Meta has an interest in keeping communities safe dismisses the nature of human addiction. We don’t give up easily on things just because we know they make us unsafe or unhealthy, especially when the thing itself is engineered to be addictive.
Zuckerberg said there’s a misconception that the more attention the company captures, and the more time people spend on its apps, the better it is for Meta’s bottom line, regardless of harms they may encounter.
“But if people feel like they’re not having a good experience, why would they keep using the product?” Zuckerberg said.
For the first time in a while, I saw a school using AI in a way where I could appreciate the use case as a genuine immediate benefit. Albany County is using AI bots to follow up with absent students to triage interventions. Assuming the vendor was screened for security, this is a high-reward, low-risk adoption.
Google put out a feedback tool in Google Classroom. A new study shows people still prefer human empathy when they know the source…
This was an interesting interaction when I asked Claude to check my writing for mistakes:


Leave a comment if this made you think more about something!
Image by G.C.