The Margins (3/22/2026)
where we think about fatigue, big brother, rebranding personalization, and Mt. Everest
How much can I give in a day?
The beginning of the year has proven to be an extra busy stretch, and lately the fatigue has started to compound. I’ve been reflecting on why the massive year-over-year increase in AI capabilities doesn’t seem to reduce my workload or my fatigue, so I started to timebox my week.
These are the first four things that I’ve learned about how AI is affecting my workload:
More Multitasking: My normal schedule calls for a lot of multitasking, but with the option to have AI agents get work done for me, I notice myself shifting focus between my primary task and the secondary tasks that AI agents are making progress on. My flow is interrupted to re-prompt, grant permissions, make decisions, and monitor what got completed versus what I still need to do myself. This leads to inevitable context and skill switching, where within minutes I’m accessing my own memory of various projects and exercising different skillsets (design, coding, finance, etc.).
More Pressure: I’m holding myself to a different bar of productivity and quality than I did pre-AI. There’s a nagging dissatisfaction when I haven’t made the most of all the nifty new tools to speed through more in a day. And I’m working more intentionally to ensure that “I” come through in my work, whereas that was never a conscious thought in the past. That creates a layer of metacognitive effort: constantly checking that the most important work is being done not just by me, but by the best version of me.
Less Busywork: While this should be a boon, I’ve noticed that a lot of the fatigue is that more of my time is spent on higher-order tasks. This often means that I have a bunch of tasks that are half-started by AI, but that require my deepest attention to take to the finish line. I generally am spending a lot more time doing creative work, deep thinking, and making decisions than I did before. I can have the co-worker mine for the latest news on AI education related to my book, but the reactions need to be mine. I can have AI tools build out new features on my tools, but thinking of the feature, thinking through real use cases, and giving design feedback are all on me.
Longer To-Do Lists: Because I can quickly research more opportunities to apply to, papers to read, and folks to connect with, my to-do list often gets longer, rather than shorter, after working with AI tools. For example, whereas in the past I would have to research potential podcast outlets, look through past episodes and guests, and then draft an outreach email, now I can have Claude CoWork scan hundreds of podcast shows, compile recent topics, suggest possible intersections with my work, and put it all into a spreadsheet for me to work from. That means I now have a list of over a hundred podcasts for which I need to decide whether to reach out, draft my own email after judging whether the suggested topic connections hold up or I have better ones, and then, of course, correspond with all the shows.
Am I doing something wrong or will this get better at some point?
A fictional normative case study based on current events:
You’re the principal of a mid-sized middle school in Utah. This summer, a new law will take effect that will require you to have monitoring systems for device usage at your school. This will include the ability for parents to see how long their students spend on the device and which websites those students visit. That’s only the start. The portal will also allow parents to create a whitelist of websites that they will allow their child to visit and restrict access to all others.
Your school is a fully one-to-one device school, and teachers use various educational technology websites liberally. Google Classroom, Khan Academy, online research projects, and YouTube are the highest use cases, but there are plenty of classroom-specific examples throughout your building.
Since Covid, you and your staff have been working on adopting best practices in digital pedagogy and creating a culture of trust and innovation for your students.
During the last town hall, parents expressed excitement about the new insight into their children’s educational journey. One parent, for example, asked if they’d be able to see who contributes what during group projects in shared documents. Another asked if they could restrict websites to exclude “the liberal junk.” Finally, a parent asked whether that meant they could also override school decisions about AI tools and allow their child access.
You are still navigating all of these decisions, but your faculty meeting is also full of questions. Teachers are worried about having to differentiate beyond reason for students who all have different access to the internet. Others are worried about parents pushing back against even more of the small decisions that were normally left to teacher judgement before technology made the classroom so public. The guidance counselors are worried about parents penalizing students for interests or searches related to the child’s identity or to career paths the parents don’t approve of. The general consensus is that this level of detail risks micromanagement at a whole new level.
The law meets the legislature’s intent: it increases parental involvement and transparency. But what do you tell your teachers? What do you say to the parent who wants to see their child’s full browsing history? And what do you tell your students, who suddenly find their autonomy taken away and their trust questioned?
The law is coming regardless of what you decide. The question this time is how are you going to build the culture you want at your school when it goes into effect?
What would you do and how would you explain it?
For more on reasoning through ethical case studies like this, check out my upcoming book, Ethical Ed Tech: How Educators Can Lead on AI & Digital Safety in K-12.
Fiduciary Duties: The amount of behind-closed-doors decision-making that I’ve been hearing about in education is extremely frustrating. While I understand the need for confidentiality around personnel or student matters, for example, I cannot justify hiding larger decision-making from faculty and students.
Writing vs Writing: We keep hearing about how easily AI can take over writing tasks, but most evidence still points to the beauty of human writing, whose bit of chaos is what makes reading enjoyable.
Precision?: The 74 Million published an article on precision learning instead of personalized learning. I’m having trouble seeing how this isn’t just rebranding.
Boo: Finally, some consumer backlash and legal threats got us somewhere with Grammarly!


Leave a comment if this made you think more about something!
Image by Devraj Bajgain