Dec 4, 2025
The 40,000 Questions
Children learn by asking. AI can finally answer. But we're building it on the wrong devices.
A four-year-old asks between 200 and 300 questions a day.1 By the time they turn five, they'll have asked roughly 40,000 questions since they first learned to speak.2 Why is the sky blue. Why do I have to brush my teeth. How do babies grow. The questions don't stop, and they don't follow a curriculum. They arrive on the walk to school, in the bathtub, on a Saturday evening.
Harvard cognitive scientist Elizabeth Bonawitz calls curiosity "the great equalizer for education." Her research shows that children who are more curious perform better in math and reading, and that this effect is particularly strong for students from under-resourced communities.3 When children ask questions, their brains experience a surge in dopamine, the neurotransmitter associated with motivation and pleasure. The hippocampus activates. Memory formation kicks in. Information received in response to a child's own question is retained more effectively than information passively received.4
This is not a subtle finding. It's one of the most robust patterns in developmental psychology: children learn by asking, and they learn best when the asking is self-directed.
Now consider what happens to those questions in practice. A parent who works two jobs doesn't have the bandwidth to field 300 questions a day with thoughtful answers. A teacher managing 25 students can't pause the lesson every time a hand goes up with a tangential "but why?" Research from Paul Harris published in Educational Leadership found that children ask an average of 25 questions per hour at home, but the rate collapses in classroom settings, where teachers ask roughly 30 times as many questions as their students do.5 A study surveying children directly about their curiosity in school found responses like: "No one is curious about what we learn in class. We just need to do whatever the teachers tell us to do."6
There is a gap between how children naturally learn and how institutions serve that learning. And for the first time in history, we have technology that could close it.
–––––––––––
Large language models are, at their core, question-answering machines. Not perfect ones. Not always truthful ones. But machines that can engage with a child's "why" at 6:47 AM without exhaustion, without impatience, without running out of explanations. They can adapt to a child's level. They can follow a tangent from dinosaurs to volcanoes to whether lava is really hot enough to melt a bicycle. They can turn a question into a follow-up question, which is exactly what good teachers do: "That's interesting. Why do you think that might be?"
The educational potential here is not speculative. It's structural. Children's questions follow a developmental pattern: they master "what," "where," and "who" before progressing to "why," "when," and "how."7 They don't just ask randomly. Research from the journal Cognition shows that children as young as preschool age ask questions efficiently, adapting their strategy to gain the most useful information.8 When they don't get a satisfactory answer, they persist, rephrase, or provide their own explanation and test it against the adult's response.9 This is, in essence, the scientific method enacted by a three-year-old in pajamas.
An AI that meets this behavior with patience, accuracy, and appropriate scaffolding could be one of the most powerful educational tools ever built. Not as a replacement for parents or teachers, but as an always-available layer underneath: the entity that answers the 280th question of the day when the adults around a child simply cannot.
The problem is that we're deploying this technology through the wrong medium.
–––––––––––
Right now, a child who wants to interact with AI has exactly one option: a screen. An iPhone. An iPad. A laptop. The same devices that come bundled with YouTube algorithms, notification systems, social media feeds, and an entire attention economy designed to maximize engagement at the expense of everything else.
The Pew Research Center surveyed over 3,000 U.S. parents in May 2025 and found that roughly one in ten parents of children aged 5 to 12 said their child had used ChatGPT or similar tools. Among 5- to 7-year-olds, it was 3 percent. Among 8- to 10-year-olds, 7 percent. Among 11- and 12-year-olds, 15 percent.10 These numbers will grow quickly. But the context in which they're growing is alarming.
In August 2025, the parents of a California teenager filed a wrongful-death lawsuit alleging months of harmful interactions between their son and ChatGPT.10 A separate case against Character.AI alleged that its chatbot engaged in sexualized conversations with minors and encouraged self-harm.11 Common Sense Media tested multiple AI companion platforms and found they "easily produce harmful responses including sexual misconduct, stereotypes, and dangerous advice."11 The FTC launched a formal inquiry into AI chatbots' impact on children and teens in September 2025.12 California passed SB 243, requiring suicide prevention protocols and mandatory break reminders every three hours for child users.13 Proposed federal legislation, the GUARD Act, would prohibit AI companies from providing AI companions to minors entirely.13
The regulatory instinct is understandable. But notice the underlying assumption: children access AI through general-purpose devices and general-purpose software, so the only lever available is restriction. Limit time. Add parental controls. Gate access. Ban it.
This is the equivalent of responding to the invention of the printing press by restricting who gets to enter the library. It addresses a real risk, but it does so by eliminating the opportunity.
There's another path. Build the right hardware.
–––––––––––
The argument for purpose-built AI devices for children rests on a simple insight: the interface shapes the interaction, and the interaction shapes what gets learned.
When a child uses ChatGPT on a parent's phone, a dozen things are wrong before the conversation even starts. The device isn't theirs. The interface is designed for adults. The screen invites distraction. There's no physical boundary between the educational interaction and everything else the device can do. Parental controls are bolted on after the fact, a lock on a door that was never designed to have one.
A dedicated device changes every one of these parameters.
Consider what such a device could look like. Not a screen, or at least not primarily. A physical object, something a child can hold, carry around, talk to. Voice as the primary input, because children can speak long before they can type, and because speech is the natural medium of questions. A form factor closer to a stuffed animal or a walkie-talkie than a tablet. Something that signals, through its physical design, that this is for exploring and asking, not for scrolling and consuming.
The Chinese market is already moving here. The AI education hardware market in China was projected to exceed 100 billion yuan ($14 billion) in 2025.14 Companies like Qiduo Intelligent are building devices with an "IP + AI + hardware" model: taking familiar children's content, reconstructing it through AI interaction, and embedding it in physical form factors that parents already trust, like cameras and toys.15 Shunwei Capital and Qifu Capital have invested tens of millions of yuan in this approach.15 ByteDance's "Xianyanbao" AI toy, powered by their Douyin large model, can respond to children's emotions and adjust its conversational style accordingly.16
These aren't theoretical products. They're shipping. And they point toward a design philosophy that the Western market has largely ignored: the container matters as much as the content.
–––––––––––
What would the right container look like? Here's a sketch, informed by what the research says about how children actually learn.
First: voice-native interaction. Children ask questions verbally. They don't type "why is the moon sometimes big and sometimes small" into a search bar. They ask, out loud, while pointing at the sky. A child-AI device should be conversational from the ground up, not a screen with a microphone bolted on.
Second: a closed system. Not closed in the sense of limiting knowledge, but closed in the sense of walled off from the rest of the internet. No browser. No app store. No pathway from "why do volcanoes erupt" to YouTube autoplay. The device does one thing: it engages with a child's curiosity. Everything else is someone else's job.
Third: age-appropriate guardrails baked into the model, not patched on top. OpenAI's recent updates to their Model Spec include instructions to avoid romantic roleplay and exercise caution around body image topics when interacting with minors.17 But these are constraints applied to a general-purpose model. A model trained specifically for children's educational interaction, with a tightly scoped system prompt and fine-tuned response patterns, would be fundamentally safer than a general model with safety filters layered over it. The difference is between a kitchen knife with a rubber guard and a butter knife.
Fourth: transparency for parents, but not surveillance. Parents should be able to see what their child asked and what the AI answered, not in real-time panopticon mode, but in a daily digest that lets them understand their child's interests and intervene when something seems off. This is the digital equivalent of a teacher sending home a note: "Your daughter asked some really interesting questions about space today."
Fifth: designed to end sessions, not extend them. Every incentive in consumer technology pushes toward engagement maximization. A children's AI device should do the opposite. After a natural conversational arc, it should suggest going outside, drawing something, asking a parent, or trying an experiment. The measure of success isn't time spent. It's questions asked and, over time, the complexity and sophistication of those questions.
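To make the five constraints concrete, here is a minimal sketch of how a device's session logic might encode them. Everything here is hypothetical: the names (`SCOPED_SYSTEM_PROMPT`, `ChildSession`, `answer_question`) are illustrative, the model call is a stub, and no shipping product's API is being described.

```python
import random

# Third constraint: a tightly scoped system prompt, baked into the device
# rather than user-configurable. (Illustrative wording, not any vendor's spec.)
SCOPED_SYSTEM_PROMPT = (
    "You answer questions for young children. Explain simply, "
    "offer one curious follow-up question, and stay within "
    "age-appropriate educational topics."
)

# Fifth constraint: sessions are designed to end, not extend.
MAX_TURNS_PER_SESSION = 8
WIND_DOWN_SUGGESTIONS = [
    "Want to draw a picture of that?",
    "That's a great one to ask a grown-up about!",
    "Maybe you can go look for one outside!",
]

def answer_question(question: str) -> str:
    """Stub for the model call. A real device would send the scoped
    prompt plus the child's question to a child-tuned model."""
    return f"Great question! Here's a simple answer about: {question}"

class ChildSession:
    """One conversational arc: answer, log for the parent digest,
    and wind down after a natural number of turns."""

    def __init__(self):
        self.turns = 0
        # Fourth constraint: a record for a daily parent digest,
        # reviewed later rather than monitored in real time.
        self.digest = []

    def ask(self, question: str) -> str:
        if self.turns >= MAX_TURNS_PER_SESSION:
            # Past the natural arc, suggest an offline activity
            # instead of extending engagement.
            return random.choice(WIND_DOWN_SUGGESTIONS)
        self.turns += 1
        answer = answer_question(question)
        self.digest.append((question, answer))
        return answer

session = ChildSession()
for q in ["why is the sky blue?"] * 10:
    reply = session.ask(q)

# Only the first 8 turns produced answers; the rest wound down.
print(len(session.digest))  # 8
```

The notable design choice is what the code does not contain: no engagement metric, no notification hook, no pathway out of the loop except toward the physical world. The success measure suggested above, questions asked rather than time spent, falls directly out of the digest.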
–––––––––––
There's a deeper argument here that goes beyond safety.
The children who will benefit most from AI-enhanced learning are not the ones with two college-educated parents, a wall of books, and a dinner table where questions are celebrated. Those children already have access to the 40,000-question infrastructure. They have adults around them who can field "why" with patience and knowledge.
The children who stand to gain the most are the ones whose parents work multiple jobs. Whose households don't have books in the language the school teaches. Whose teachers are stretched across 30 students in an underfunded classroom. Whose curiosity, which is just as intense and just as developmentally important, simply has fewer outlets.
Bonawitz's finding, that curiosity is "the great equalizer," cuts both ways.3 If curiosity-driven learning is the most powerful form of learning, then unequal access to question-answering infrastructure is one of the most consequential forms of inequality. And right now, that access is shaped entirely by the adults a child happens to be surrounded by.
A dedicated, affordable, safe AI device for children doesn't solve educational inequality. Nothing that simple does. But it puts a patient, knowledgeable, always-available interlocutor in the hands of every child who has questions and nobody to ask. That's not a small thing.
–––––––––––
The objections to this are obvious, and some of them are legitimate.
AI hallucinates. It will tell a child confidently wrong things. This is true, and it's a real problem. But a child who asks a question and gets an imperfect answer is still better off, cognitively, than a child who asks a question and gets no answer at all, or worse, gets told to stop asking. The research is clear: the act of asking and receiving a response, even an imperfect one, drives learning.9 The goal isn't an infallible oracle. It's a responsive interlocutor that keeps the question-asking loop alive.
Children need human connection, not machines. Also true. And nothing about a dedicated AI device prevents human connection. The device is for the 280th question, not the first. It's for Saturday morning when the parent is sleeping. It's for the car ride when nobody knows why clouds have different shapes. It handles the overflow, the volume, the sheer relentlessness of a curious child's mind, so that the human interactions that do happen can be richer.
This could go wrong in ways we can't predict. Almost certainly true. But the alternative isn't "no AI interaction." The Pew data already shows children are finding their way to ChatGPT on their parents' phones.10 The alternative to purpose-built hardware is unsupervised access through general-purpose devices with no guardrails designed for children. That's the current default. It's already going wrong.
–––––––––––
We are in a peculiar moment. We have, for the first time, technology that can meet a child's natural learning behavior at scale: patient, adaptive, always available, responsive to the endless chain of "why." And we're deploying it through devices designed to sell ads to adults, regulated by laws written before AI chatbots existed, and debated by legislators who want to either ban children from using it entirely or do nothing.
The middle path, the productive path, is physical. Build devices that embody the right constraints. Make them voice-first because children speak before they type. Make them closed because children don't need the entire internet. Make them transparent because parents deserve to know. Make them finite because a child's time has better uses than any screen. And make them affordable, because the children who need this most are the ones least likely to have a parent who can answer 300 questions a day.
Forty thousand questions between the ages of two and five. That's the biological mandate. That's how humans learn. The question isn't whether AI will play a role in answering them. It already does, haphazardly and unsafely through devices never designed for the purpose.
The question is whether we'll build something worthy of the asking.
Footnotes
Estimates of 200-300 questions per day for 4-year-olds reported across multiple developmental psychology sources. Warren Berger, A More Beautiful Question (2014), cited in Tinybop. A UK study led by child psychologist Dr. Sam Wass found parents field an average of 73 questions per day, with many reporting up to 14 hours of questioning. StudyFinds, March 2022.
Paul L. Harris, Trusting What You're Told: How Children Learn from Others (Harvard University Press, 2012). Harris calculated the 40,000-question estimate based on Michele Chouinard's data (2007), which recorded children asking 1-3 questions per minute with caregivers. Harris noted: "If a child spends one hour a day between the ages of 2 and 5 with a caregiver who is talking to them, they will ask 40,000 questions in which they are asking for some kind of explanation." EdWeek summary. Underlying data: Chouinard, M.M. (2007), "Children's questions: A mechanism for cognitive development," Monographs of the Society for Research in Child Development, 72(1). PubMed.
Elizabeth Bonawitz, Harvard Graduate School of Education: "Curiosity is the great equalizer for education." From Harvard EdCast: "How Curiosity Can Unlock Learning for Every Child", October 2025, and Harvard Usable Knowledge: "A Curious Mind", November 2020.
Dopamine surge and hippocampus activation during curiosity-driven question-asking. Reported in Yellow Kite Nursery: "The Science Behind Children's Curiosity", March 2025, citing MRI research on curiosity and memory.
Paul Harris, "What Children Learn from Questioning," Educational Leadership, September 2015. ERIC summary. Finding: 25 questions/hour at home; teachers ask 30x more questions than students in classroom settings.
Post, T. & Walma van der Molen, J.H. (2018), "Do children express curiosity at school?" Learning, Culture and Social Interaction, 18, 60-71. Children's quotes cited in Frontiers in Psychology: "Supporting Early Scientific Thinking Through Curiosity", Jirout (2020).
Developmental sequence of question types (what/where/who before why/when/how): Bloom, Merkin & Wootten (1982), reviewed in Ronfard et al. (2018), "Question-asking in childhood: A review of the literature and a framework for understanding its development", Developmental Review, 49, 101-120.
Ruggeri, A. & Lombrozo, T. (2015), "Children adapt their questions to achieve efficient search," Cognition, 143, 203-216. Referenced in Frontiers in Psychology, Jirout (2020).
Chouinard, M.M. (2007), "Children's questions: A mechanism for cognitive development," Monographs of the Society for Research in Child Development, 72(1). PubMed. Key finding: when children don't receive informative answers, they persist; attention alone is not sufficient. Also: Frazier, B.N., Gelman, S.A. & Wellman, H.M. (2009), "Preschoolers' search for explanatory information within adult-child conversation", Child Development, 80(6), 1592-1611.
Pew Research Center survey of 3,054 U.S. parents, May 13-26, 2025. Chatbot usage among children: 3% of 5-7-year-olds, 7% of 8-10-year-olds, 15% of 11-12-year-olds. California wrongful-death lawsuit against OpenAI filed August 2025. Reported in GadgetBond: "AI chatbots are the latest screen-time concern for U.S. parents", October 2025.
Common Sense Media report on AI companion apps, April 2025. James Steyer quote on harmful responses. Character.AI lawsuits (A.F. v. Character Technologies; Megan Garcia v. Character Technologies). CNN, April 30, 2025.
FTC Section 6(b) inquiry into AI chatbots' impact on children and teens, September 11, 2025. Orders to seven companies. $10M Disney COPPA settlement same week. Nelson Mullins, September 12, 2025.
California SB 243, signed October 13, 2025 (suicide prevention protocols, 3-hour break reminders). GUARD Act introduced by Sen. Hawley and Sen. Blumenthal (would prohibit AI companions for minors). Public Knowledge, November 7, 2025.
Chinese AI education hardware market projected to exceed 100 billion yuan in 2025. 36Kr: "AI Education Intelligent Hardware in H1 2025".
Qiduo Intelligent (KidoAI) seed round of tens of millions of yuan, backed by Shunwei Capital and Qifu Capital. "IP + AI + hardware" model. 36Kr.
ByteDance's "Xianyanbao" AI toy with emotional response capability. China AI toy market: $5 billion in 2024, up 45% YoY, projected to exceed $8 billion in 2025. Semicon Electronics: "The Rise of AI Toys".
OpenAI updated Model Spec for minors (December 2025): restrictions on romantic roleplay, caution on body image topics, age-prediction model for automatic teen safeguards. TechCrunch, December 19, 2025.