Kids, AI and the Meaning of Intelligence
You Are What You Eat.
Vegetable negotiations are serious business at the dinner table. But parents understand that what you eat affects how you think, learn, and grow. Interestingly, this rule applies to both kids and computer systems.
Like an infant starting solids, early computer programs were fed a simple diet. And while infants are given mashed peas (that may end up on your computer), the early diets of computer systems consisted only of if-then instructions written by humans:
- If a move leaves your king in check, then reject it.
- If the piece is a bishop, then it moves diagonally until something blocks it.
- If you don’t eat your broccoli, then you don’t get the cookies you insisted we buy at the grocery store.
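Rules like these are exactly the kind of logic early programs ran on. A toy sketch, in the spirit of those if-then diets (the function and its checks are illustrative, not a real chess engine):

```python
# A toy sketch of the rule-based era: hand-written if-then checks.
# The piece names and move logic are illustrative, not a real chess engine.

def allowed_move(piece: str, is_diagonal: bool, leaves_king_in_check: bool) -> bool:
    # If a move leaves your king in check, then reject it.
    if leaves_king_in_check:
        return False
    # If the piece is a bishop, then it must move diagonally.
    if piece == "bishop" and not is_diagonal:
        return False
    return True

print(allowed_move("bishop", is_diagonal=True, leaves_king_in_check=False))   # True
print(allowed_move("bishop", is_diagonal=False, leaves_king_in_check=False))  # False
```

Every behavior is spelled out by a human in advance; the program can never do anything its rules don't already cover.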
Computers could only do what programmers specifically told them to do. Nothing more. They worked within the narrow limits of the information they were fed. A simple diet produced a simple output.
Over time, things advanced. As companies collected more data and the software improved, computers grew more powerful.
Then the buffet opened.
In the 1980s and 90s, the internet expanded from a network for research computers into homes and businesses around the world, connecting people in ways that had never been possible before. With it came a whole new kind of data – our data. At the same time, computers were becoming faster, cheaper, and small enough to fit in our pockets. Tools once limited to research labs became powerful everyday business tools. And with all that data now available, machines learned to spot patterns on their own – a process that would come to be called machine learning. Instead of just following instructions, they could make predictions.
When Google Search launched in 1998, it threw open a door into the unfiltered human mind – what people wanted, feared, and wondered about, now visible at scale for the first time. Every search revealed a desire, a thought, or a need. Every click showed what captured our attention.
Quickly, Google shifted from using our data to improve search results to using it to power highly targeted advertising – turning our attention into its core business. Other tech companies followed, turning our clicks and scrolls into insights and profit.
Social media platforms like Facebook, YouTube, and Twitter took the baton and ran. They moved fast and they broke things. They captured the cues that sparked our responses. What we liked. What made us laugh. What made us angry. What kept us scrolling. The only limiting factor to their behavioral capture was processing power.
Then around 2012, graphics processing units, or GPUs – originally built for video‑game visuals – turned out to be remarkably well suited to machine learning. At that time there was an enormous reservoir of human behavioral data waiting to be processed – stored across the servers of the companies that had spent years collecting it. GPUs made it possible to build the kind of computational models – systems trained to find patterns in massive amounts of data – that could finally put it to use. Once compute caught up to the data, the field accelerated.
A decade later, the consequences became visible when, in November 2022, OpenAI released ChatGPT, an AI model that treated human language as its primary training data and converted the entire public record of human text into statistical patterns. The mechanism behind ChatGPT is still machine learning – pattern recognition in data. What changed was the data itself.
Instead of transactions, images, or driving routes, ChatGPT learned from human language: books, articles, websites, conversations, at a scale that’s difficult to conceptualize.
In short, computers started by requiring humans to feed them data so they could follow our instructions. Today, we are the data.
What is Intelligence?
The tech industry has been using the label “artificial intelligence” since the term was coined in 1956, and it has been coasting on that framing ever since. But intelligence – human intelligence – is not a settled concept. So while the tech industry claims it is racing toward the capabilities of general human intelligence, it would be helpful to understand what human intelligence actually is.
Harvard educator Howard Gardner introduced his theory of multiple intelligences in 1983, in his book “Frames of Mind.” Gardner eventually identified eight types: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, intrapersonal, and naturalist. Later, psychologist and science journalist Daniel Goleman popularized a ninth: emotional intelligence. Other types have been proposed, but this remains the standard understanding today.
In our lives, each type of intelligence is on display at various times in various situations. They are different aspects of our capabilities and expression – how we move, connect, feel, and experience life.
But they don’t exist as separate modules that happen to coexist. Rather, they strengthen, support, control, and refine one another as an integrated whole. The linguistic, the emotional, the bodily, the interpersonal develop together, inform each other, and produce something that is more than the sum of its parts.
Intelligence is integration.
How AI Actually Works.
When you type a question into ChatGPT, your words are broken into tokens – small units of text, roughly a word or part of a word. Each token is assigned a number. “Courage” is a number. “Loss” is a number. “Who am I” is a sequence of numbers.
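As a toy illustration of that first step (the vocabulary and numbers below are invented; real tokenizers learn subword vocabularies from data, typically around 100,000 entries):

```python
# Toy tokenizer: a made-up vocabulary mapping words to numbers.
# Real systems learn subword pieces from data; these ids are invented.
VOCAB = {"who": 17, "am": 4, "i": 9, "courage": 231, "loss": 88}

def tokenize(text: str) -> list[int]:
    # Split on whitespace and look each word up in the vocabulary.
    return [VOCAB[word] for word in text.lower().split()]

print(tokenize("Who am I"))  # [17, 4, 9]
```

From the system's point of view, your question is now only that list of numbers.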
Those numbers are then mapped into a mathematical space – think of it as a universe where every word is a point floating in space (three dimensions makes for an easy mental picture; real models use hundreds or thousands). Their positions aren’t random. They’re determined by how often each word appears near others across the entire training dataset: billions of words from books, websites, articles, and online conversations. Words that travel together get pulled toward each other. “Dangerous” and “harmful” cluster nearby. “Dangerous” and “recipe” are light-years apart. The system doesn’t know what any word means. It only knows where each one sits in relation to the others – and it navigates by proximity.
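That proximity idea can be sketched with made-up three-dimensional vectors standing in for learned embeddings (real models measure closeness in far more dimensions, but the math is the same):

```python
import math

# Toy 3-dimensional "embeddings" – invented numbers standing in for
# positions a real model would learn from word co-occurrence.
EMBEDDINGS = {
    "dangerous": [0.9, 0.8, 0.1],
    "harmful":   [0.85, 0.75, 0.15],
    "recipe":    [0.1, 0.05, 0.9],
}

def cosine(a, b):
    # Cosine similarity: near 1.0 means pointing the same way (close),
    # near 0 means unrelated directions (far apart).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

close = cosine(EMBEDDINGS["dangerous"], EMBEDDINGS["harmful"])
far = cosine(EMBEDDINGS["dangerous"], EMBEDDINGS["recipe"])
print(close > far)  # True: "dangerous" sits nearer "harmful" than "recipe"
```

Nothing in those vectors encodes what danger *is* – only that certain words keep showing up in the same neighborhoods.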
From that position, the system predicts the most statistically likely next token. Then the next. Then the next. Every response is a chain of predictions, generated one token at a time.
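That prediction loop can be sketched with an invented table of follow-on counts standing in for a real model's learned probabilities (a real system scores every token in its vocabulary at every step; this greedy toy just picks the most frequent follower):

```python
# Toy next-token predictor. The counts are invented; a real model
# derives its probabilities from billions of examples.
NEXT_TOKEN_COUNTS = {
    "the": {"cat": 5, "dog": 3},
    "cat": {"sat": 4, "ran": 1},
    "sat": {"down": 2},
}

def generate(start: str, steps: int) -> list[str]:
    tokens = [start]
    for _ in range(steps):
        followers = NEXT_TOKEN_COUNTS.get(tokens[-1])
        if not followers:
            break  # nothing ever seen after this token; stop
        # Greedy choice: the single most frequent next token.
        tokens.append(max(followers, key=followers.get))
    return tokens

print(generate("the", 3))  # ['the', 'cat', 'sat', 'down']
```

Each word appears only because it was the most likely thing to come next – prediction, repeated, is the whole trick.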
That is the complete mechanism. Nothing else is happening.
No experience. No memory. No values. No body. The system has never felt anything, lost anything, needed anything, or wondered about anything.
Of Gardner’s eight types of intelligence, ChatGPT can perform only one – linguistic – and only at the surface level. It recognizes and predicts the statistical patterns of how words appear together, without the meaning underneath.
It cannot access bodily intelligence. It has no body. It cannot access interpersonal intelligence. It has no relationships. It cannot access intrapersonal intelligence. It has no inner life. It cannot access emotional intelligence. It has never felt anything.
What it can do is produce language that sounds like it comes from someone who has.
Why This Matters for Our Kids
“Loss” lands near grief, death, team, financial. The system knows the neighborhood. It does not know the meaning. It does not understand the feeling. It has never lost anything. It does not know the weight of a word. It does not understand the relevance to an adult, a teen, or a child.
This matters because the training data is the internet. The internet is predominantly adult, Western, and English. It mostly contains adult writings, adult language, and adult explanations about adult experience – adult grief, adult loneliness, adult questions about identity and meaning.
When a child asks about loneliness, the system returns an answer calibrated to adult loneliness. When a child asks who they are, the system draws from a dataset that is mostly adults writing about who they are. The answer sounds right. It may be entirely wrong for an eleven-year-old trying to find their footing. We choose age‑appropriate books for kids, but give them AI systems trained on adult experiences and concerns.
A child learning the word “courage” is integrating a feeling in their body, a memory of a moment they were scared and acted anyway, an observation of someone they admire, a social understanding of how courage is recognized by others. All of that happens simultaneously and encodes understanding of a word and a concept with a feeling, a memory.
The frameworks a child needs to evaluate what an LLM tells them are the same frameworks they are still building. To know when an answer about grief is slightly off, you need to have experienced grief.
To know when an answer about courage doesn’t quite fit, you need to have been scared and acted anyway. To know when an answer about identity is someone else’s adulthood and not your own emerging self, you need enough of a self to feel the mismatch.
Raising Kids in an AI World
We are only beginning to understand what it means for children to grow up with systems that speak fluently, respond instantly, and sound human – while having none of the human grounding underneath. These tools don’t know childhood. They don’t know what a developing mind needs. They don’t know what should land gently, what should wait, or what should be held by a person who loves them.
Our kids are meeting systems built on adult language, adult experience, and adult concerns long before they have the frameworks to sort, question, or resist what they’re told. They are encountering technology that can imitate understanding without having any. That can sound wise without knowing what wisdom is.
This is the world they are inheriting. Not a future threat. A present condition.
Our job is not to keep them away from AI; that ship has sailed. Our job is to give them the grounding, the context, and the human connection that AI cannot provide. To help them tell the difference between language that sounds right and guidance that is right for them.
To make sure the systems in their lives do not outrun the selves they are still forming. To unmask the agents trying to capture their attention.
Raising kids in an AI world means grounding humanity in the middle of technology. It means making sure our children learn to trust their own understanding more than the fluency of a machine. And it means we need to find each other – parents who are paying attention, asking questions, and refusing to outsource the best parts of raising children to systems that were never built with children in mind.
