Some mornings I get out of bed with that particular taste of ash that comes when you’ve been awake all night, or when you’ve been dreaming in news headlines. The planet on fire in one tab, the wars in another, and the steady corrosion of public language everywhere else. In education we call it a post-truth problem, as if the truth simply wandered off one day and we’re waiting for it to return, slightly embarrassed, like a student who missed roll call. 

Then I make breakfast for my child. 

Information as a balanced meal

My child is small enough that her feet don’t touch the floor when she sits, but she already knows what a screen is for. When I was her age, my world arrived in parcels: ABC cartoons at particular hours, library books with their damp paper smell, school worksheets. Culture was not infinite; it was rationed, mediated and – yes – curated. There is a romance to that scarcity, but also a kind of brutality. You learned by waiting. You learned by not knowing. You learned that knowledge had gates and gatekeepers. 

My child has an iPad and, in a literal sense, the whole archive. Songs from languages we don’t speak. Cartoons built on algorithmic certainty. Tiny hands scrolling through a civilisation’s worth of digitised myth and advertising without any felt distinction between them. It is not just the quantity that unsettles me; it is the epistemic texture. Information is being delivered by systems that only know how to say something that resembles what we have said before, and it’s being communicated compellingly, fluently and sometimes without significance or substance. Am I doing that right now? 

AI as an existential crisis

This is where the question of AI stops being a workplace debate and becomes a parenting anxiety, a civic anxiety, a spiritual one. Do we still need expertise, in a world where the interface can produce a plausible paragraph on anything? Or do we only need good questioners – people trained in prompt craft: the art of extracting a useful synthesis from a machine that has read everything and understood nothing? 

The cynic in me wants to answer: we will pretend we only need questioners, because it flatters the imagination and suits the demands of our fast-paced, relentless, corporatised culture. It is also convenient: cheaper to valorise agility than to fund disciplines; easier to celebrate the new idea than to sustain the slow work of institutions that make knowledge reliable. But the educator in me refuses that bargain. A society that gives up on expertise does not become more democratic; it becomes more vulnerable – first to confident nonsense, then to charismatic cruelty.

And yet, there is no denying AI is an extraordinary tool. It feels as if Pandora’s box is already open and we are not going ‘back’ to a time before it was. So, I keep circling one deceptively simple proposition: using AI responsibly means knowing when not to use it. That is not a rhetorical flourish; it’s a literacy problem, the harder one. The question isn’t “how do I get outputs?” but “should I be doing this here, at all – and what am I losing if I do?” 

Trust in a distorted model

Sam Illingworth suggests the crucial work of education is developing critical AI literacy: not prompt optimisation, but judgement under conditions of uncertainty. His argument turns on a detail I can't shake: we cannot interrogate what we do not see. Digitised archives are partial and biased in ways most users never consider, so the 'record' that models learn from is already a distortion before a single line is generated. And the inner workings of these models are invisible to most of us, so we cannot meaningfully interrogate their validity – only their outputs.

If AI is becoming a dominant medium of knowing, then trust becomes a curriculum, and discernment becomes an equity issue. UNESCO’s guidance tries to hold the line on a human-centred approach: education as the development of agency, judgement and ethical capacity, not merely the optimisation of outputs. But even that framing can feel like trying to teach swimming while the tide comes in.

The problem is not that AI can be wrong. The problem is that it can be wrong beautifully – calm, coherent, footnoted if you ask nicely, speaking with the cadence of authority. The performance of knowing is becoming cheaper than knowing itself. And we have evidence, now, that this isn’t a hypothetical: medical tools giving incorrect advice and inventing anatomy, news archive assistants misreading sources, the list goes on…  

So, what happens to my child’s learning in a future world? 

I’m not interested in moral panic about screens or technologies. I’m interested in the more difficult question: what does ‘learning’ mean when access and retrieval are effortless, when drafting is outsourced, when a conversational agent can simulate understanding on demand? If the new world offers unlimited content, the scarce resource becomes something else: sustained attention, relational responsiveness, and the space and time for the slow formation of judgement – the things that feel to me as if they can’t be generated on cue without hollowing them out.

Here, the research doesn’t give us a neat moral. It keeps returning to a messier point: the device itself is not the whole story. Outcomes are entangled with content quality, duration and – most importantly – whether screens displace the relational conditions that build language and thought: talk; play; shared attention; being seen, heard and answered. A recent systematic review of early-life screen time and language development underlines exactly that kind of conditional association – and why simple “screen time” measures don’t tell the whole developmental story.

A deal with the digital daemon

In a GenAI-saturated childhood, will the seductive promise be the “digital daemon”: a companion that reads with her, explains with her, adapts to her, remembers her? A tutor that never tires. A voice that always has time. Schools will be tempted to treat this as a solved problem: give every child an agent, personalise everything, call it equity. But there is a trap here, and it isn’t just “cheating.” It is epistemic dependency: a child raised to experience knowledge as something that arrives fully formed, near-effortless, and largely unearned – without the productive discomfort of revision, doubt, and failure. Without the frustration of being wrong, and striving to understand why.

Will she read novels? I hope so, because a novel is one of the last technologies that still trains attention as an ethical act. It makes you inhabit another mind without extracting a summary. Will she go to the cinema? Maybe – if we keep building places where strangers sit together in the dark and agree, temporarily, to care about the same story. What will a library be for? Not just shelves. A library is a civic statement: that knowledge is a commons, that stewardship matters, that quiet matters, that you can wander without being sold to. In an age of synthetic text, it may become even more important as a site of provenance – of “where did this come from?” – and of human guidance that is not optimised for engagement.

Expertise as a form of care

The best I can do is try to treat expertise as a form of care, not a status. Asking good questions is necessary but not sufficient – you also need the lived habit of checking, cross-reading and doubting your own fluency. Being ‘correct’ isn’t the only thing that matters; it’s knowing what to ask, when to ask it, and how to arrive at knowledge for yourself. And I am trying to give my child something no device can reliably provide: a world in which her questions are met by people who are present enough to answer, and humble enough to say, when it matters most, “I don’t know, but let’s find out together.”
