How To Raise An AI Architect
Welcome to the 1st Episode of our AI Literacy Series
It’s one thing to track AI infrastructure trends from a desk stacked with research notes, livestreams, and release logs. It’s another to watch your own children – and I have five – slip so easily into ChatGPT, Midjourney, and whatever new tool lands in their feeds that you realize: fluency has already arrived.
I’m also a heavy user of generative AI in my own work, and it’s clear adoption has crossed a threshold. In the mid-2010s, “AI literacy” meant recognizing what a narrow system could and couldn’t do – an image classifier, a toy robot, a chatbot that knew only a handful of intents. Today’s generative models are multimodal, persistent, context-sensitive, and frighteningly good at outputs that read, sound, and look like ours. They don’t just answer prompts; they tilt opportunities and influence decisions.
Generative AI – and AI more broadly – is no longer something you “learn to use.” It’s the environment we all live in. It’s woven into search ranking, gameplay, homework feedback loops – even those shouted “Alexa, what is…?” questions across the kitchen. And as we hurtle toward AI-augmented cognition, literacy itself is being rewritten. Now it’s about communication: asking the right questions of the outputs, spotting when information retrieval went sideways, and collaborating with systems that update their knowledge in real time, under the hood, without asking permission or notifying the user.
Pause on that. These systems are continuously rewriting themselves as we use them. That’s the world our kids are growing up in. And they should be the architects of that, right?
That means that alongside learning to read and write, young people will need to learn to understand and use AI critically – another crucial literacy. Of course, AI literacy isn’t a brand-new idea; its foundations were laid over the past decade. But genAI has so dramatically amplified its scale, scope, and urgency that, for most people, it feels like a completely new concept.
Which is why we’re launching the AI Literacy Series. Not to water down AI for children, but to reimagine how we – as builders, parents, and interpreters – prepare the next generation to live, think, and create inside an always-on model ecosystem. And how we should build for them. It’s also about how to be an awesome parent with or without prior knowledge about AI and machine learning.
My partner in this is Stefania Druga, who was building AI literacy long before Attention Is All You Need hit arXiv. In 2012 she founded Hackidemia, an NGO that brought coding and robotics to kids in 73 countries, training over 400 local mentors. By 2016 she was at the MIT Media Lab Scratch team, where she launched Cognimates – a platform where kids could train image classifiers, script chatbots, and program robots to respond to the physical world. It was a playground that showed AI isn’t magic – it’s data, code, and design choices you can touch, tweak, and sometimes break.

Image Credit: created by ChatGPT
There are two ways to approach this series: you can watch or listen to our conversation as-is, old school – as if you are joining us live. Or you can read it here, as a curated reflection on what it means to raise AI-literate kids in a world that’s still figuring out AI itself. Because how we talk to our children about AI will shape how they talk back to it.
Also, please check the Resources section – there is plenty of awesome material there.
First of all, what is AI Literacy?
A big problem that everyone is insisting that we should hire people based on "AI literacy," teach "AI literacy," & develop skills for "AI literacy" yet not only is there no agreement on what AI literacy is, but also a lot of what people call AI literacy is already out-of-date.
— Ethan Mollick (@emollick), Jul 31, 2025
Everyone’s calling for “AI literacy” – in hiring, in education – yet there’s no shared definition, and much of what passes for literacy is already outdated. In 2021, it was enough to know how to trigger a skill on Alexa or train a Teachable Machine model. In 2025, with LLMs embedded in Office, Google Search, iOS, and classroom tools, the baseline is far higher.
In one of her first papers, Stefania defined AI literacy as the ability to read and write with AI. It has since become the ability to read, write, and create with AI. It breaks down into three linked competencies:
Reading AI – Critically consuming AI outputs and understanding the systems behind them.
Writing AI – Using AI to extend and refine your own thinking and expression.
Creating with AI – Designing and building new things in partnership with a system, from interactive games to custom-trained models.
“That means knowing how to ask questions about AI, understanding how it works, and being able to build with it – not just staying on the receiving end, and not just on the creator end either. Can a child, or anyone, use AI to co-author? Today that might mean generating text or code, but it could just as well mean prototyping a game, training a model, or designing an experience. It’s about all the ways we can understand and critically use the technology. And then – creation: how we use it to augment our workflows and truly co-create with it.
Implicit in all of this is critical thinking – knowing when to use AI, how to use it, and when to decide, ‘this isn’t the task or the moment for it.’”
AI literacy matters – that’s why all the articles in this series are free to read.
If you'd like to support this work, you can sponsor the series (just drop me a note at [email protected]), upgrade your subscription, or gift one to a friend.
And if nothing else – sharing this piece helps more than you know. Thank you.
But what about cheating?
When there is creation, there is often the challenge of plagiarism. Post-ChatGPT, plagiarism anxiety is front and center in schools. A recent conversation still plays in my head: a professor of economics, arms flung wide in frustration – “I can’t openly allow my students to use ChatGPT because I don’t know how to control it. But I also can’t stop them. What am I supposed to do?” Druga’s work with Harvard’s Project Zero at the Graduate School of Education offers a brilliant and practical solution. Instead of imposing rigid bans, they co-designed a framework with teens to make the process of using AI transparent and intentional. And it’s as simple as Post-it notes.
It’s called Graidients, and it is simple but effective:
Define the Task: A student is assigned a task, such as writing an essay on a novel.
Brainstorm Uses: On sticky notes, the student and their peers list all the possible ways AI could be used: checking grammar, generating a title, outlining bullet points, summarizing chapters, rewriting sentences, or even writing the entire essay.
Map the Spectrum: These notes are placed on a board with a spectrum ranging from "Definitely Okay" to "Gray Area" to "Not Okay."
Facilitate a Dialogue: This visual map sparks a crucial conversation. Students, guided by their teacher, collectively decide where each use-case falls. This "social contract" clarifies expectations and reveals a diversity of perspectives. A student might say, "I never thought of using it to critique my argument," transforming the tool from a potential shortcut into a Socratic partner.
Druga has taken this a step further by creating a digital tool that translates the final, agreed-upon "board" into a custom system prompt. When the student uses an LLM for their assignment, this prompt instructs the AI to adhere to the established rules, holding both the student and the AI accountable. This approach brilliantly reframes the "cheating" problem as an opportunity for metacognition and ethical reasoning.
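The details of that digital tool aren’t public here, so as an illustration only, here is a minimal sketch of how an agreed-upon board might be compiled into a system prompt. The function name, the wording of the rules, and every example use-case below are invented for the sketch, not taken from MyAI Compass:

```python
# Illustrative sketch only – NOT the actual MyAI Compass implementation.
# It shows the idea: compile the three agreed zones of a Graidients board
# into a system prompt that holds the LLM to the class's social contract.

def board_to_system_prompt(okay, gray, not_okay):
    """Turn the board's green / gray / red zones into usage rules for the AI."""
    return "\n".join([
        "You are helping a student with a class assignment.",
        f"You MAY help with: {', '.join(okay)}.",
        f"Before helping with these, ask the student to confirm with their teacher: {', '.join(gray)}.",
        f"You must politely REFUSE to: {', '.join(not_okay)}, and remind the student of the class agreement.",
    ])

# Hypothetical zones from an essay assignment
prompt = board_to_system_prompt(
    okay=["checking grammar", "generating title ideas"],
    gray=["summarizing chapters"],
    not_okay=["writing the essay"],
)
print(prompt)
```

The point of the design is that the rules travel with the conversation: whatever the class agreed on the board becomes the first thing the model reads.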

Image Credit: MyAI Compass by Stefania Druga
From P-Doom to Practical Empowerment
We also talked about how the discourse around AI is often shadowed by “P-Doom” (probability of doom) – the fear of existential risk from superintelligence. While these are important long-term considerations, Druga argues that dwelling on them can be paralyzing for young people. Confronted with abstract, massive problems like algorithmic bias or AI-driven surveillance, the natural response is helplessness.
Her solution is to bring these problems down to a tangible, human scale.
Make it Concrete and Actionable: She shares a touching anecdote of a boy in the UK who loved creating images with generative AI. After learning about the significant carbon footprint of training and running large models, he made a personal decision to limit himself to one creation per week, thinking carefully about what he wanted to make. This demonstrates a nuanced understanding of trade-offs and personal responsibility – a form of practical AI ethics.
Empower Through Creation: With her platform Cognimates, Druga teaches children to build and train their own simple AI models. In doing so, they inevitably encounter bias firsthand. When a model trained on a narrow, unrepresentative dataset makes mistakes, the project fails – and they learn they can fix it by gathering more diverse data. This transforms them from victims of biased systems into architects who know how to build better ones.
Advocate for a Better Architecture: The answer to the risks of centralized AI, she suggests, lies in a more decentralized future. The rise of powerful, open-source models that can run locally on a device points toward a world with greater privacy, less environmental impact, and more user control. Supporting this ecosystem is a constructive way to counter the dominance of a few large tech companies. It’s also important to discuss this with kids.
Do we all really need to know about AI?
Sometimes parents ask something like: “What if my child wants to be a veterinarian, or a singer, or a farmer? Why do they need AI literacy?”
Druga argues that we are witnessing a fundamental shift in the nature of work. The 20th century prized hyper-specialization. The AI-driven 21st century, however, may favor the generalist. As AI models become incredibly proficient at specialized tasks (like radiology or legal discovery), the premium on human value will shift to skills that AI struggles with:
Cross-Domain Thinking: Seeing connections that specialized AI agents cannot.
Creative Problem-Solving: Applying knowledge from one field to solve a novel problem in another.
Asking the Right Questions: The skill of effective prompting is, at its core, the skill of inquiry. As Druga puts it, “We think because it's a natural language that everyone knows how to do it, but that's not true. Learning how to ask the right questions is a skill that one has to learn. And it is part of AI literacy because knowing how to ask the right question also means you have the right mental model of how the technology works and what it can or cannot do.”
Lifelong Learning: The single most important skill will be the ability to learn continuously.
AI literacy becomes the meta-skill that enables this adaptability. It empowers the veterinarian to use AI for faster diagnostics, the musician to co-create novel melodies with a generative model, and the farmer to monitor crops and livestock, optimize water and fertilizer use, predict harvest times, control pests precisely, and forecast market prices for better profits. It’s not about replacing their passion – it’s about enriching and augmenting it.
Bill Gates recently advised new graduates to “embrace AI tools” but not expect stability in career paths – the coming decades may involve multiple career shifts and continuous upskilling. This underscores a reality: learning how to learn (often with AI’s help) will be the most important skill of all. In families, this might translate to encouraging kids to follow their curiosity and learn by doing. If your child is interested in cars, maybe they’ll explore how electric vehicles use AI for self-driving; if they love art, perhaps they’ll try an AI image generator to enhance their creativity (while also discussing the ethical issues around AI art). The specific knowledge might evolve, but an attitude of lifelong learning and adaptability will serve them in any field.
What can you do now, at home? 7 activities
Spot the AI – Keep a family log of every AI encounter during the week: Spotify recommendations, face recognition on your phone, Google Maps, Tesla’s self-driving – compare notes at dinner, guessing how each system works. Who has more AI clues? It’s a simple way to spark curiosity about the hidden algorithms shaping daily life. Then →
Draw What’s Inside (Stefania’s suggestion) – Challenge kids to draw what they think is inside Alexa, Google Home, or ChatGPT – or anything from their list of AI clues. “Where does the data come from? Where does it go? Is there data? Is there a person in a box talking to you, typing?” Share the drawings, then use ChatGPT or another LLM to explore the real process. This turns abstract systems into tangible ideas you can examine together.
Teach a Tiny Model – With platforms like Cognimates or Teachable Machine, train a simple image or text classifier. Start with biased data, see the poor results, then fix them with better examples. It’s hands-on proof that AI reflects – and can be improved by – its training data.
Green/Gray/Red Board – For a school project, use the Graidients/MyAI Compass idea and map possible AI uses into green (OK), red (not OK), and gray (needs discussion) zones. Export or write down your agreement and keep it visible as a “use policy” reminder.
Local vs. Cloud – Run a small local model (via llama.cpp or MLC Chat) alongside a cloud chatbot. Compare speed, accuracy, and privacy implications. Discuss when “good enough” local AI might be better than sending data to big servers.
Automate Together – Brainstorm small, safe tasks AI could automate for the family, then help your child “vibe-code” an app or script with Replit/Lovable/Claude/GPT-5 or anything you are used to. Even a simple chatbot or to-do list automation can make coding feel personal and purposeful.
Energy Budget Challenge – Give the household a weekly “compute budget” for AI use. Batch requests, swap images for text where possible, and favor local tools. Share one AI energy-impact fact at dinner and adjust your rules together. You can always use AI Energy Score on Hugging Face as reference.
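For the “Teach a Tiny Model” activity, you don’t even need a platform to see the core lesson. Here is a self-contained sketch – a nearest-centroid classifier over made-up (brightness, size) numbers, with invented labels and values chosen purely for illustration – showing how a biased training set fails and how more diverse examples fix it:

```python
# Toy sketch of the "Teach a Tiny Model" lesson: a nearest-centroid
# classifier on invented (brightness, size) features. All numbers and
# labels are made up for illustration.

def centroid(points):
    """Average the 2-D feature vectors of one class."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, centroids):
    """Pick the label whose centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Biased data: every "cat" example happens to be dark (low brightness).
data = {
    "cat": [(0.1, 0.3), (0.2, 0.35), (0.15, 0.3)],
    "dog": [(0.8, 0.7), (0.7, 0.75), (0.9, 0.8)],
}
centroids = {label: centroid(pts) for label, pts in data.items()}
light_cat = (0.85, 0.3)  # a light-coloured, small animal
print(classify(light_cat, centroids))  # misclassified as "dog"

# Fix: gather more diverse examples (light cats too), then retrain.
data["cat"] += [(0.8, 0.25), (0.9, 0.3)]
centroids = {label: centroid(pts) for label, pts in data.items()}
print(classify(light_cat, centroids))  # now "cat"
```

The model never sees a light cat, so brightness dominates its decision – exactly the failure kids hit on Cognimates or Teachable Machine, and exactly the one they can repair with better data.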
The Takeaway: We’re All Learning Together
AI literacy is not (necessarily) about turning every child into an ML engineer. It’s about turning them into AI engineers! Just kidding (but maybe not). It’s about:
Agency: Knowing how and when to use AI.
Critical thinking: Evaluating AI’s strengths and weaknesses.
Co-creation: Leveraging AI to enhance human work, not replace it.
Adaptability: Being ready for a job market that will change multiple times over a career.
The most comforting message from this conversation between me and Stefania is that we're all learning together. Experts, engineers, builders, entrepreneurs, researchers, artists, and parents are all navigating this rapid technological shift in real time.
We can’t have all the answers, but we can keep exploring, cultivating curiosity, critical thinking, and co-creation. That is the point and the opportunity. Treat AI as a subject to read, a medium to write with, and a system your kids can reshape.
If you feel overwhelmed and bombarded in this ocean of information, where we’re all desperately looking for wisdom and knowledge, you’re not alone. Think of this as your community, where we’re all learning together. And I think it’s very important to have these conversations.
Everyone is learning and the field is moving fast, but you know, people take time to adopt things. So we still have time to do the right thing.
And that’s the spirit of the AI Literacy Series.
If you try any of the activities, send us your boards, sketches, and home policies. We will fold the best into a living playbook as the series unfolds.
Resources and further reading
AI competency framework for students (UNESCO, Aug 2024)
AI4K12 – Five Big Ideas with grade-band charts and printable poster
Day of AI Curriculum – free MIT RAISE curriculum
Graidients (Jan 2025) by Harvard
This puzzle game shows kids how they’re smarter than AI by the University of Washington
Students Are Using AI Already. Here’s What They Think Adults Should Know (Sep 2024) by Harvard GSE
Kids teach AI a little humanity with Cognimates (YouTube, MIT Media Lab)
What are artificial intelligence literacy and competency? A comprehensive framework to support them (Jun 2024) by Chiu et al.
The 4As: Ask, Adapt, Author, Analyze – AI Literacy Framework for Families (Jun 2021) by Druga et al.
Stefania Druga’s publications
Bigger isn’t always better: how to choose the most efficient model for context-specific tasks (May 2025) by Sasha Luccioni
The AI for Education project (AI4ED) by Northeastern University
Play with AI and ML
