The Margins (3/8/2026)
where we think about incentives, e-tests, capitalism, and polar opposites
How do we balance the hope of “can” with the reality of “will”?
Every day there is a new vision of how technology can improve learning and education: everything from AI tutors and metrics to online assessments and teacher tools. Every conference I go to has sessions on how to harness the power of AI to improve student learning and thinking (including ones that I’ve led). People are sharing anecdote after anecdote of how they are using AI to improve their own thinking. All of this establishes some notion of what the technology can do.
At the same time, teachers and parents are worried about the impact of the technology on students. There are students who are becoming emotionally attached to these tools. Others are using them to cheat on papers and online assessments. And there are students who are becoming dependent on the tools and offloading their thinking. This is all a picture of what cautious voices are worried the technology will do.
This is the distinction I’ve been sitting with the most: I see truth in the potential of the technology and yet remain very concerned about how it is actually used.
The problem largely lies in the reality that while the technology can be used to do many powerful and productive things, our current incentive structures in education are set up such that those are not, by and large, what the real use cases will be. Students who are time-pressed, stressed about future admissions and careers, and trying to maximize grades are going to be drawn to the shortcuts, not the positive possibilities.
Of course, those of us who have intrinsic motivations, developed appreciations for our own thinking, and a desire to push the possibilities of our work are approaching the question very differently than those students. What would it take for the “can” to become the “will” for our students too, and are we willing to make the structural changes that would require?
A fictional normative case study based on current events:
Ms. Lidol is on a state task force charged with deciding whether state testing will shift online next year. The state has been debating this proposal on and off for over two years, and the funding for test development will finally expire if it isn’t used this year.
Some task force members have shared stories from rural schools about the ease of not having to send students to testing centers. The saved cost, less travel time, and general ease of access is all appealing for resource-strapped school communities. Others point out that the accessibility options are far greater with online testing, including screen readers, different font options, and built-in custom timing.
On the other hand, the parents association is afraid that online testing will mean more screen time in schools. Some teachers have said that they will have to spend more time teaching students how to use the technology, which will detract from the actual curriculum. And the IT staff are scrambling to manage device and broadband equity.
Ms. Lidol has experienced the push to more screen time as more tests move online and “testing condition” practice sessions take over the classroom. But, she also has seen the difficulty of making accommodations for a diverse school population, which inevitably leaves some students shortchanged.
Ms. Lidol is the tie-breaking vote, and the task force has to submit a decision within a week.
What would you do and how would you explain it?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
Ethics by Capitalism?
They say Anthropic’s quick rise to fan-favorite AI system is a sign that market forces will favor responsible AI companies.
I say this actually reveals how easy ethics-washing is for the mass market because one flash battle is leading to summary judgments.
One might object that this was a major pivot point for the future of AI & government partnerships, and Anthropic set an example of standing their ground.
I reply that the presence of other market actors, and the lack of any way to enforce one company’s bright lines on the others, means consumer choice matters very little at this level.
Polar Opposites: Some schools are debating complete ed tech bans, just as other schools embrace more screen time with AI.
Baseline: Many states around the country are in various stages of passing chatbot safety laws, which will hopefully establish some baseline protections for children.
Future of Work: Anthropic put out a study this week on which occupations face the highest risk of AI automation. Think about which ones require higher education!


Leave a comment if this made you think more about something!
Image by Mario Hagen