Untrained Data

I Don't Want AI to Think for Me

Claude's "learning mode" invites a slower, more deliberate use of AI — one that might reshape how we think about education, agency, and the internet itself.

By Richard Udell

Published April 7, 2025

Just like everyone else, I’m feeling the weight of LLMs. The existential questions. The skill decay. The unease that maybe we’re automating away something essential about being human. Since AI entered the mainstream, I’ve felt a growing responsibility to approach technology with more intention — to create with it more than I consume through it. To shape my relationship with these tools instead of letting them shape me.

So when I saw Claude’s new “learning mode”, I didn’t just see a feature. I saw a signal.

Claude’s learning mode refuses to give direct answers. Instead, it prompts you to think. It asks questions. It behaves less like a magic box and more like a thoughtful teacher. That shift — from delivery to dialogue — feels both subtle and profound.

And yes, let’s be clear: you could recreate this mode right now in any LLM with a prompt like:

"You are an educator whose goal is to help me master [subject]. Do not give me direct answers. Use Socratic questioning. Prioritize long-term understanding over short-term correctness."

In fact, people have already been doing this for a while — Internet2, Instructure, Khan Academy, the Center for Humane Technology, and multiple AI-ed tutoring start-ups among them.
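As a minimal sketch of what "doing this yourself" looks like in practice, the snippet below assembles a chat request whose system prompt enforces the tutoring style above. The function name and model string are illustrative assumptions, not anything Anthropic ships; the actual network call (via the Anthropic Python SDK) is left commented out since it requires an API key.

```python
def build_learning_mode_request(subject: str, question: str) -> dict:
    """Assemble a chat request whose system prompt enforces a Socratic tutoring style."""
    system_prompt = (
        f"You are an educator whose goal is to help me master {subject}. "
        "Do not give me direct answers. Use Socratic questioning. "
        "Prioritize long-term understanding over short-term correctness."
    )
    return {
        "model": "claude-3-5-sonnet-latest",  # illustrative model name
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": question}],
    }

request = build_learning_mode_request("calculus", "What is the derivative of x squared?")
# To actually send it (requires an API key):
# import anthropic
# reply = anthropic.Anthropic().messages.create(**request)
```

The point isn't the code itself but how little of it there is: the entire "mode" lives in a few sentences of system prompt.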

But what makes Claude’s version exciting is that it reflects a culture shift, not a technical one. It makes learning the default. It assumes users want growth, not just output. And it points toward a future where AI helps us think — not think for us.

Where This Could Go

I’ve said before that using LLMs can feel like having a refreshing drink — something quick, easy, and satisfying. But Claude’s learning mode reminded me that sometimes, what we really need isn’t refreshment. It’s friction. A bit of resistance that forces us to think instead of outsource.

Now imagine students logging into school computers that only run this version of Claude. They can still access the full power of an LLM — but only through the lens of learning. No shortcuts. No essays written for you. Just a thoughtful, persistent tutor asking you to explain yourself.

That’s not automation. That’s amplification.

It’s a hopeful vision. But also, a complicated one. If every interaction with the internet gets funneled through an AI model — especially one tuned to our individual learning styles and personal goals — we risk losing something messier, harder to replicate: the way we used to learn by getting lost. The late-night research spirals, the confusing forum threads, the feeling of figuring things out the hard way. The open web supported those moments. And while that doesn’t have to be lost in an AI-first world, it’s a tension we need to hold with care.

Claude’s learning mode doesn’t have to erase those moments—it could actually support them. But only if we stay intentional. I want to help preserve the spirit of discovery that shaped my own messy, sometimes frustrating, but ultimately meaningful learning experiences — even as we embrace AI’s potential to deepen and personalize how we grow. The interface is changing. That’s exciting. But let’s make sure the values that got us here don’t quietly disappear along the way.

We’d also gain something in return: an interface that meets us where we are — not with rigid roles, but with a quiet responsiveness. Maybe it helps us focus when we need to, or gently reminds us to step away and take a walk. It’s less about toggling between modes and more about technology that bends around the rhythm of our lives — tuning into the different versions of ourselves that show up across a day, a week, a season.

That shift isn’t just about convenience — it touches something deeper. It changes how we relate to technology: not just as tools we control, but as systems that can respond and adapt — ready to step forward when we need them, and stay respectfully in the background when we don’t. The risk, of course, is that in making things too smooth, we erase the friction that helped us grow. But if we stay curious and intentional, there’s a path where AI doesn’t just help us do more — it helps us become more.

What I’m Doing Next

Over the next few weeks, I’m going to interview a handful of educators — teachers I know, people in my network — to get their take on all of this.

This is the first post in what I hope will be a short series. I’ll share what I learn as I go — not to make grand claims, but to think out loud with people who think deeply about how we learn.

Because the future of education might not come from a feature. It might come from a prompt.

And the best thing we can ask AI? Might just be: ask me something better.