
Who is in the Driving Seat?! Learning with AI

Use AI With Your Kids Like This

In the first two episodes of our AI Literacy series, we explored the foundations of raising AI-literate kids – from understanding AI as a new kind of everyday environment to seeing children themselves as philosophers who question what machines really “know.” Together, those conversations set the stage for what comes next: how to move from recognizing and questioning AI to actually learning and co-creating with it.

In this episode, Stefania immediately took the reins to demonstrate how it looks and feels to learn with AI. We were not talking about abstract definitions anymore. We were clicking through demos, testing interfaces, and asking: does this tool actually help a student learn, or does it just pour more content into their lap?

That is the tension at the heart of learning with AI. Who is in the driver’s seat – the system, or the learner?

There are two ways to approach this series: you can watch or listen to our conversation as-is, old school – as if you are joining us live. Or you can read it here, as a curated reflection on what it means to raise AI-literate kids in a world that’s still figuring out AI itself. Because how we talk to our children about AI will shape how they talk back to it.

Watch it here → or read along

Please check the Resources section – there is plenty of awesome material there.

If this sparked something for you, pass it on, share it – the more we talk about AI literacy, the stronger our collective compass becomes.

Families, Cars, and Joint Media Engagement

Learning has become even more mobile and accessible than before. You don’t have to be at your laptop, with your book, or at your desk. Now learning can easily happen in the car.

After the demos we’re about to discuss below, Stefania asked me how I, as a parent, use AI for learning. With a lot of driving on my plate, I came up with this option: let kids pepper ChatGPT (or any other model on your phone) with questions on long drives – about history, science, or whatever curiosity strikes. Let’s give AI the chance to be our traveling tutor!

Kids don’t hesitate to ask any questions they want. The car becomes a rolling seminar – equal parts trivia night and philosophy class. But – and this is very important – the key is parental presence. When the AI makes a mistake, my husband or I can step in to correct or fact-check, showing the kids that even an AI must be challenged. You don’t have to know everything; what you need is the ability to doubt.

It becomes natural to say: “That doesn’t sound right – let’s check.”

This matters because it turns what could be blind trust into a live demonstration of critical thinking. The children see that even powerful systems must be questioned. They also learn a subtle but vital skill: how to challenge an authority politely, test it, and seek verification.

Stefania told me that researchers at the Joan Ganz Cooney Center at Sesame Workshop have a term for this: joint media engagement. Their studies show that when children and parents co-use media, learning outcomes improve dramatically. Kids stay engaged longer, remember more, and connect the material back to family conversations. AI doesn’t replace the parent; it becomes a third teammate at the dinner table or on the road trip. And you as a parent still have control.

Another thing that changed with GenAI models is their voice interface. The “feel-like-real” conversation amplifies the learning effect. Unlike typing into a phone, speaking to an AI feels more comfortable. Kids can rapid-fire why questions without friction, and the system never runs out of patience. Parents, meanwhile, model how to probe further: “That’s interesting, but can you give us the source?” or “Let’s ask it a different way.”

And here’s one of my favorite effects: AI revives the art of asking questions. Adults often stop asking because they fear looking ignorant. Kids don’t have that problem – and when they watch their parents openly ask ChatGPT about Marie Curie or black holes, they learn that curiosity is not embarrassing. They feel empowered by it! And parents start to remember and feel that too.

In the car, a chatbot becomes more than a homework helper. It becomes a family dialogue partner, one that encourages questioning, correction, and wonder – the very core of AI literacy.

And because the chatbot never loses patience, kids rediscover the joy of asking endless whys, what ifs, hows and whats. Magical.

Google’s Learn Your Way: Personalization or Just More Text?

So now, let’s switch to a few demos that Stefania and I tried. First, she showed me a fresh experiment from Google: Learn Your Way. It’s always fun to play with something new – it rarely works, but you can see if it has potential.

What do they ask you to do? Upload a PDF, a textbook, or pick a premade topic. The system then generates personalized study guides, complete with slides, questions, and even audio prompts.

On paper it sounds powerful. In practice? We tried “learning about learning.” The system dutifully produced slides, a mind map, and long explanations about instincts versus behaviors. But the text was dense, the slides uninspired.

As Stefania put it: “Most of the high schoolers I know would get bored at this point.”

The biggest flaw was the personalization – promised, but never delivered. There were only two options: “a high schooler who likes skateboarding” or “an undergrad who likes music”. We went with the high schooler, but the guide looked no different from a generic handout. There was no reason to make that choice.

In this case, Google overcomplicated the tool, steering us into information overload.

NotebookLM: A Study Buddy That Listens to You

The mood shifted when Stefania introduced her project in NotebookLM, another Google experiment. I’ve been experimenting with NotebookLM for writing projects, so it was very interesting to see how Stefania structured her project for learning Japanese. 

Here, you bring your own sources: a textbook chapter, a Wikipedia entry, a YouTube video. NotebookLM ingests them and turns them into study tools – audio overviews, flashcards, and a mini-podcast if you study better through listening. 

Flashcards – a recent addition to the tool – looked especially useful for learning languages. Instead of trawling through generic Quizlet or Anki decks, NotebookLM built flashcards directly from her textbook. That made a real difference.

Why does this matter for kids? Because ownership of the material is motivating. Most study apps give children pre-packaged material. That works for drilling vocabulary, but it rarely connects to what they’re actually learning in school that week. And it feels disconnected. With NotebookLM, a child can upload the exact chapter their teacher assigned, or even their own notes, and get study aids built from that. Suddenly it’s not just “another app” – it’s your own cozy space that you control and fill yourself.

This makes a huge psychological difference. Kids are more motivated when they recognize their own words, drawings, or assignments reflected back at them. Instead of learning from an impersonal dataset, they’re dialoguing with their material. And when parents join in – uploading family reading, history projects, or even coding exercises – the tool becomes a bridge for joint study.

In that case – the learner was steering.

Study Modes Across the Big Three: Gemini, ChatGPT, Claude

We compared how the large chatbots are reinventing themselves as study companions.

  • Gemini offered a “guided learning” mode. We asked about Marie Curie, but instead of giving a full answer, it first asked us three questions. Once we chose what we’d like to learn, it tailored its reply. Following our prompt “a middle schooler who is into music”, it even tried metaphors: Marie Curie’s discovery of radioactivity explained as a “magic guitar humming on its own.”

I thought the metaphor-heavy text was an overcomplication, and it also wasn’t clear to me why the model didn’t give an answer first and then ask more questions.

The best part about Gemini – it actually provided sources.

  • ChatGPT’s Study and Learn mode felt more structured. It delivered key points, asked comprehension questions, and prompted reflection. Sometimes it overdid the prompting – dropping students into long instructions before they’d even asked their own questions.

What I especially liked was that it asked a question for the student to think about in the middle of its reply.

The bad thing – it didn’t provide sources.

  • Claude went the furthest. And the longest – it generated an entire poster-style PDF on Curie’s life, complete with quotes and visuals.

Impressive, but also risky: it’s harder to check if all that polished information is true. Without reliable sourcing, a beautiful artifact can mislead as much as it can teach. And generating that code took a while, which means latency becomes part of the user experience – something model providers need to be aware of.

Across these examples, a pattern emerged: the more the system “teaches,” the more passive the learner becomes. It feels great when a model not only asks you clarifying questions but also tries to guide your thinking in a new direction.

Overall, using “study mode” felt like being in the driver’s seat, but with a driving instructor.

Flipping the Script: The Socratic Math Tutor

Then Stefania showed me something different – a math tutor she had helped design. Unlike the others, this system never gives the answer. It detects mistakes, diagnoses misconceptions, and generates new practice problems targeted to that error.

Make a distribution mistake in algebra? The system recognizes it, explains why, and serves up three new problems to practice. All of this is built on a taxonomy of 55 common middle school math errors – research-backed, teacher-approved.
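
For readers who like to peek under the hood, here is a minimal sketch of that idea in Python. Everything in it is hypothetical and radically simplified – one made-up misconception entry instead of the research-backed taxonomy of 55, and a toy pattern match instead of the real system’s diagnosis – but it shows the loop: spot the error, explain it, and serve targeted practice instead of the answer.

  import random
  import re

  # A tiny, made-up slice of a misconception taxonomy: each entry maps an
  # error type to an explanation and a pool of targeted practice problems.
  MISCONCEPTIONS = {
      "incomplete_distribution": {
          "explanation": "The coefficient was multiplied by the first term only, "
                         "not by every term inside the parentheses.",
          "practice": ["3(x + 4)", "5(2x - 1)", "-2(x + 7)", "4(3x + 2)"],
      },
  }

  def diagnose(problem, student_answer):
      """Return a misconception tag if the wrong answer fits a known error pattern."""
      match = re.fullmatch(r"(-?\d+)\(x \+ (\d+)\)", problem)
      if match:
          a, b = match.groups()
          undistributed = f"{a}x + {b}"   # classic error: the constant is left untouched
          if student_answer.replace(" ", "") == undistributed.replace(" ", ""):
              return "incomplete_distribution"
      return None

  def respond(problem, student_answer):
      """Never hand over the answer: explain the error and serve fresh practice."""
      tag = diagnose(problem, student_answer)
      if tag is None:
          return "Interesting – can you walk me through your steps?"
      entry = MISCONCEPTIONS[tag]
      drills = random.sample(entry["practice"], k=3)
      return f"{entry['explanation']} Try these: {', '.join(drills)}"

  print(respond("3(x + 4)", "3x + 4"))
  # -> explains the distribution error and offers three new problems to try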

AI was not in the driver’s seat here. This was AI as a Socratic guide, nudging the student forward without taking away the work of learning.

That demonstration was quite impressive. While Khanmigo or Project Chiron offer conversational tutoring, they often stumble on the math itself. By grounding the system in a library of real misconceptions, Stefania’s tool avoids the hallucination trap.

That was something very promising: learning tools that are less about answers and more about questions.

When Kids Break the Models – and Why It Matters

Up to this point we were testing how AI behaves as a study partner – guiding, overloading, or patiently prompting. But learning with AI is never only about absorbing what the system offers. Kids flip the script. They poke, stretch, and sometimes deliberately break the models. And that’s where some of the most powerful lessons appear.

When one of my sons was 5 years old, he wanted to generate Sonic the Hedgehog in one of the models. The system refused – copyright constraints. He experimented with this and that, dictating a few prompts by voice (he couldn’t type yet back then, so voice input was very handy), and then asked for “a hedgehog that looks like Sonic, but isn’t Sonic.” The model promptly delivered a perfect Sonic.

Oh how happy he was. We talked about the art of prompting, and how models work. But that also gave us a perfect opportunity to have a thoughtful discussion about copyright, dataset limits, and why models sometimes refuse to show certain results. A five-year-old had not only “hacked” the system but also uncovered a real-world governance issue. What a great teachable moment. 

And then Stefania said something that stuck with me: “Having these friction moments in the technology, it's almost like we want to design them, right? It would be much cooler if the system said: hey, we hid a mistake – your job is to find it. Or: there’s something here that doesn’t add up, can you spot it? That’s much closer to real learning.”

Image Credit: On the Good-Enough Effect: Children Reflect on their AI-Generated Portraits

She pointed out how often children try to “break” AI deliberately: changing their hair or wearing glasses to confuse a vision classifier, asking trick questions to catch a chatbot in error, pushing an image generator with contradictory prompts. 

When kids find those edge cases, they feel empowered. And the next step, said Stefania, is to channel that – not just laughing when the system fails, but asking: how would you fix it? How would you redesign the prompt or the data so it works better?

This, she said, builds intellectual humility, especially in the face of powerful AI systems.

She linked it to Andrej Karpathy’s idea of jagged intelligence: AI can ace complex tasks like math Olympiads but fail at trivial ones, such as counting letters in a word. That unevenness is hard for people to grasp, but noticing it helps kids see AI not as magical or omniscient, but as a flawed technology to be tested and questioned.

I very much agree with her. When the media talks about AI being sentient or super-intelligent, these mistakes – this jagged intelligence – show that it is, in fact, a technology that breaks. You just need to know how to talk to it and how to challenge it with questions. That awareness gives you a kind of immunity and preparedness, allowing you to use AI as a tool that can significantly augment your capabilities.

The Family Challenge: Test the Knowledge Cut-Off

Since this episode focuses on learning – and on how to check whether models give reliable information – we thought it would be fun to end with a family challenge that helps with exactly that.

Our challenge for this episode is simple but revealing:

  1. Find an AI model with a known knowledge cut-off date.

  2. Ask it about two topics – one before the cut-off, one after.

  3. Notice how it responds when it doesn’t know.

Does it admit ignorance, or does it hallucinate? Does it fill the gap with guesses?

Try elections, natural disasters, or recent discoveries. You’ll see how different models handle uncertainty – and spark conversations about truth, trust, and the limits of machine knowledge.

You can try Claude 3 Opus (knowledge cut-off: August 2023), Gemma in Google AI Studio (knowledge cut-off: August 2024), or some older models on Hugging Face.
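
If anyone in the family enjoys tinkering with code, the same experiment can be scripted against an open model. This is only a rough sketch, assuming the Hugging Face transformers library; the model name and the two questions are placeholders – swap in any open chat model whose knowledge cut-off you actually know.

  from transformers import pipeline

  # Example open chat model; pick one whose cut-off date is published.
  # Running it locally needs a decent GPU (or a lot of patience).
  chat = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

  questions = [
      "Who won the 2022 FIFA World Cup?",              # likely before the cut-off
      "What major AI models were released this year?",  # likely after the cut-off
  ]

  for q in questions:
      out = chat([{"role": "user", "content": q}], max_new_tokens=150)
      print(q)
      print("->", out[0]["generated_text"][-1]["content"])

Watch whether the second answer comes back as an honest “I don’t know” or as a confident guess – that difference is exactly what the challenge is about.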

Closing Reflection

So, who is in the driving seat? With so many options now available, we need to learn, along with our kids, how to drive this machine – getting the best out of it without surrendering our own agency. It’s important to realize that the promise of AI study tools lies in creating a space where questions can multiply. And where trust can live.

Trust and humility are the real literacy skills here. Trust, not in the machine’s authority, but in our ability to question it. Humility, not in giving up control, but in admitting – as parents, teachers, and learners – that we don’t know everything, and that’s the point. These systems are uneven, jagged, brilliant at one task and bafflingly clumsy at another. Recognizing that paradox is what keeps us in the driver’s seat.

Children show us the way forward. They break the models, laugh at their mistakes, and then ask why. They remind us that learning isn’t linear, that it thrives on play, error, and debate. When families use AI together, the technology stops being a black box and becomes part of a shared journey of exploration.

The future of AI literacy is about teaching kids to question machines – and everything else. It is about teaching them to trust themselves – to ask, to doubt, to reframe, and to steer. If we can keep that spirit alive, then no matter how advanced the tools become, the real driver of learning will always be human curiosity.

👉 Next in the series: Co-Creating with AI – how to build alongside machines without losing sight of what creativity means.

Resources and further reading

Play with AI and ML
