When Convenience Writes the Curriculum

One of the major limitations of Generative AI is that we don’t know how it thinks, or even if it truly ā€œunderstandsā€ anything at all. Its responses are based on patterns in vast amounts of data, not reasoning or lived experience. That means it can confidently generate information that is wrong, biased, or misleading. In school settings, especially with younger students, this is a real concern: lessons, explanations, or examples generated by AI may seem credible but could contain subtle errors or gaps in understanding. While AI could, in theory, help create practice exercises, summarize texts, or assist students with special needs, it cannot replace the critical guidance and nuanced judgment of a teacher. At my grade level, the risks of using AI outweigh the benefits, because students are still learning how to think independently, ask questions, and verify information for themselves.

AI is everywhere now. Google searches, social media, even music are overrun with the slop. We are aware of some of the costs, and it is certain that more will become apparent as we continue to rely on it. And the more I look into it, the more I realize how little we actually understand it.

AI models are trained through a process called gradient descent, which tunes billions of internal parameters. Tracing the calculations behind even a single response would take longer than a human lifetime. We don’t know how it arrives at most of its answers. Sure, we can double-check facts, but more and more of the content online is generated by AI. Every fact, image, or article we find is part of an echo chamber made by machines trained on humanity’s collective knowledge. And we just keep feeding it.
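To make the term concrete, here is a minimal sketch of gradient descent fitting a single number. The data and learning rate are made up for illustration; a real model repeats this same kind of update across billions of parameters at once, which is why no one can follow the full calculation.

```python
# Minimal sketch of gradient descent: find w so that w * x ≈ y.
# Toy data where the "right" answer is w = 2 (since y = 2 * x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0             # start from an arbitrary guess
learning_rate = 0.05

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # nudge w "downhill," reducing the error

print(round(w, 3))  # converges toward 2.0
```

With one parameter, every step is easy to inspect; scale this to billions of parameters and millions of steps, and the training process becomes impossible for any human to audit.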

We would never give untested foods to children. We would never put untested playground equipment in their playgrounds. We carefully consider what goes into their bodies and how they move. Yet we are already using untested lessons, untested words, and untested ideas in tools meant to teach their minds.

I’m not anti-AI. There are areas where AI is genuinely life changing. It can help detect cancer in scans, assist students with disabilities, and handle repetitive tasks that free humans to focus on creativity. But that’s not where most of the money, research, and development are. Capitalism is steering AI toward convenience and efficiency, not toward care or caution. The reality is that AI is a powerful, risky tool, and humans care more about convenience than about understanding or preservation.

Currently, we don’t understand enough about AI to know what all the risks are, so we can’t make an informed decision about whether AI should be used in education. That is why I think we should slow down development: right now, we’re letting AI write the future in a language we don’t understand.

Please check out https://ifanyonebuildsit.com/ for more information on the potential threats of AI.