New Study Reveals AI’s Blind Spot: Children

Posted by Ron Schmelzer, Contributor


While regulators and educators scramble to make sense of generative AI’s impact on adults, a quieter, more urgent reality is emerging: children are already deep into using these tools. A new study released this week by The Alan Turing Institute, supported by the LEGO Group, finds that 22% of children aged 8 to 12 in the UK have already used tools like ChatGPT, Gemini, or Snapchat’s My AI, with some using them several times a week.

That figure alone is striking. But what’s more telling is that none of these tools were designed with children in mind.

“Children’s experiences with this technology are significantly different from those of adults, so it is crucial that we listen to their perspectives to understand their particular needs and interests,” said Dr. Mhairi Aitken, Senior Ethics Fellow at The Alan Turing Institute.

AI Is Already Dividing Schools and Futures

The research, combining survey data from over 1,700 children, parents, and teachers with in-depth school workshops, surfaces several surprising conclusions.

One of the starkest findings? A growing access gap between children in private schools and those in state schools.

In private schools, 52% of children had used generative AI, compared with just 18% in state schools. Children in private schools also used the tools more frequently, and teachers in those schools were significantly more aware of student use.

“This has the capacity to widen the digital divide with impacts for the competence of state school students in a key future technology,” the report warns.

Vulnerable Kids Are Diving into AI Without Guardrails

Children with additional learning needs stand out in the data. They’re not just using AI more often; they’re using it in different, deeper ways.

Among children in the survey who reported receiving additional learning support, 53% said they used AI to express thoughts they couldn’t easily communicate. Some 39% used it to seek personal advice, versus 16% of their neurotypical peers; 30% used it to play with friends (vs. 19%); and 37% used it for companionship (vs. 22%).

These are not just homework hacks. These are signs that AI is becoming an emotional and social tool for vulnerable children without corresponding safeguards or oversight.

Kids Are Asking: Why Doesn’t AI Look Like Me?

In creative workshops, children of color voiced a specific frustration: AI-generated images often didn’t reflect their appearance, language, or interests. Some became discouraged. Others simply stopped using the tools.

The issue isn’t theoretical. It’s personal. When a child asks AI to generate an image of “a kid like me” and gets a white, American stereotype in return, that moment registers.

“Children of colour often felt frustrated or upset that generative AI tools did not produce images that represented them,” the study found.

Representation, in this case, determines whether a child stays engaged or walks away. Many who found that the outputs didn’t match their interests went back to using other creative tools instead.

A Surprise Concern Among Children: AI’s Environmental Cost

In one of the more unexpected takeaways, children in the school workshops voiced concerns about the environmental footprint of generative AI, especially energy and water use.

After learning about AI’s energy demands and water usage, some children made a clear decision: they didn’t want to use it anymore.

According to the report, “When asked how they would like generative AI to be developed and used in the future, many of the children’s responses related to actions they felt need to be taken to address the environmental impacts of generative AI.”

That’s not a talking point from adults or activists. It’s pre-teens and young adults connecting cloud computing to climate impact and adjusting their behavior in response.

Parents and Teachers Are Less Worried About AI-Powered Cheating Than You Think

Mainstream concern around students using AI often focuses on cheating. But that’s not what parents or teachers seem most worried about, according to the report.

In the study and workshops, only 41% of parents listed cheating as a top concern, while 82% worried about exposure to inappropriate content and 77% were concerned about misinformation.

Teachers echoed those concerns. While many reported positive experiences using AI themselves, nearly half believed their students’ engagement was declining. Others worried about reduced creativity in student work.

In contrast, teachers were largely supportive of AI use among children with additional learning needs, reflecting what the kids themselves reported.

What the Report Wants Policymakers and Developers to Do Next

The report doesn’t just highlight risks, it also outlines clear, practical steps that AI technology platform developers and policymakers can take to reduce many of these risks. The study suggests that children should be part of AI tool development, especially for tools they’re already using. To fix issues of representation, outputs need to reflect all children, not just a narrow slice.

There remains a critical skills gap among educators and school administrators. The study suggests that AI literacy efforts should be expanded to reach state schools and under-resourced classrooms. Furthermore, these efforts should also find ways to explain the environmental costs of AI use.

The Real Takeaway: Children Are Already Shaping the Future of AI

This research lands at a critical moment. AI tools are evolving fast, but kids aren’t waiting for permission or policies. They’re already experimenting, adapting, and sometimes even coming to depend on these tools in their daily interactions.

The real danger isn’t that children are using AI. It’s that the tools weren’t built for them, and we’re not listening to what they need.

They’re the first generation to grow up with these systems. While Millennials were defined by the emergence of the Internet and Gen Z defined by the dominance of social media, this next Gen Alpha will see the rapid expansion of AI use in their daily lives, a movement perhaps even more fundamental than social media, mobile phones, and the Internet.

What they see and experience in their lives will shape how they trust, use, and challenge AI for decades to come.



