




I’ve been thinking a lot about how we teach AI — not the big, dramatic stuff, but the tiny architectural nudges I give Claude Code every day. The ones that feel obvious to me but invisible to a model. This piece is about those moments of correction, and the larger shift they signal in how humans and AI actually learn from each other.
When the Teacher Becomes the Student
I spend a surprising amount of time teaching AI how I think.
Not in the grand, philosophical sense — more in the “no, please don’t query Shopify by SKU, use the variant ID” sense.
The small stuff. The stuff that feels too obvious to mention until the model does the wrong thing and I hear myself sigh, “Why am I still correcting this? Aren’t we past this stage?”
It’s a strange moment to be building with tools like Claude Code.
They’re strong enough to scaffold entire systems, but not always strong enough to choose the right branch within those systems. They can write an elegant Slack workflow, but still need me to steer them away from assumptions I would never make because I’ve lived inside my own architecture for years.
So the question I keep coming back to is this:
When does the teacher become the student?
When does the model stop needing my nudges and start learning the shape of my mind?
Why the “obvious” things aren’t obvious to a model
Here’s the part I have to keep reminding myself of:
LLMs don’t inherit context. They infer it.
When the model picks SKU instead of variant ID, it’s not making a mistake — it’s choosing the most statistically common interpretation of the request. That’s all. It doesn’t know that my data is arranged the way it is because of eight years of Wildwoven evolution, half-broken Shopify conventions, and my compulsive need to impose order on chaos.
LLMs don’t see that history.
They only see the text in front of them.
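For the curious, here’s roughly what that one correction boils down to. This is a minimal sketch, not my actual code: the shop domain, token, SKU, and variant GID are placeholders, and I’m assuming a plain call to Shopify’s GraphQL Admin API.

```python
import requests

# Placeholders only -- not real credentials or IDs.
SHOP = "example-shop.myshopify.com"
TOKEN = "shpat_placeholder"
ENDPOINT = f"https://{SHOP}/admin/api/2024-07/graphql.json"


def gql(query: str, variables: dict | None = None) -> dict:
    """POST a GraphQL document to the Shopify Admin API and return the JSON body."""
    resp = requests.post(
        ENDPOINT,
        json={"query": query, "variables": variables or {}},
        headers={"X-Shopify-Access-Token": TOKEN},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


# What the model reaches for by default: a SKU search.
# SKU uniqueness isn't enforced, so this can return several records, or the wrong one.
by_sku = gql(
    """
    query ($q: String!) {
      productVariants(first: 5, query: $q) {
        edges { node { id sku title } }
      }
    }
    """,
    {"q": "sku:EXAMPLE-SKU-001"},
)

# What the correction asks for: fetch the one variant by its stable GID.
by_id = gql(
    """
    query ($id: ID!) {
      productVariant(id: $id) {
        id
        sku
        price
      }
    }
    """,
    {"id": "gid://shopify/ProductVariant/1234567890"},
)
```

The difference is invisible in the prompt and obvious in the data model, which is exactly the kind of context a model can’t infer from the request alone.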
And prompting — this whole new craft we’re all learning — is meta-cognitive. It’s asking a model to reason about its own reasoning. Humans struggle with that. Of course models do too.
But something interesting is happening
When I correct Claude once, it gets dramatically better on the next try. Not perfect, not permanently “trained,” but noticeably more attuned to what I meant — not just what I asked.
It’s subtle, but you can feel the model leaning forward, picking up your rhythms, paying attention to what you correct and what you let pass.
It’s the beginning of a shift.
Not quite “the student becomes the master,” but something adjacent:
The feedback loop is compressing.
The gap between what I think and what the model produces is shrinking.
What it will take to cross the threshold
I don’t think the flip happens when models get “smarter.”
I think it happens when they get contextual.
When they can:
Remember how I build systems
Infer purpose instead of just syntax
Run small internal simulations before committing to a plan
Gently question an assumption that doesn’t line up with my past choices
Treat corrections not as events, but as signals
At that moment, I stop being the teacher in the tedious sense — the “no, not that ID, this one” sense — and start being the teacher in a much more interesting way: shaping the model’s taste, judgment, and heuristics.
The way a mentor teaches, not a taskmaster.
The last skill we’ll give up
Of all the things AI has already taken on — drafting, scaffolding, debugging, synthesizing — the final frontier seems to be the meta layer.
The work of teaching the model how to think about its own thinking.
And that’s still very human.
For now, the corrections are part of the collaboration.
Not a sign of failure, not a sign that the models aren’t “there” yet — just a reminder that thinking is contextual, and context takes time.
But the shift is coming. I can feel it every time the second draft lands closer than it has any right to.
One day soon, the model won’t just write the code.
It will understand why I prefer variant IDs in the first place.
And that’s when the teacher will finally (and happily) become the student.