How Artificial Intelligence Is Impacting Wellbeing
- Dr. Emily Holliday

- Jan 16
Artificial intelligence is no longer a distant or abstract concept for children and young people. It is woven into their everyday lives, powering search engines, social media feeds, games, recommendation systems and, increasingly, conversational tools that respond instantly, adapt to personal preferences and feel almost human. Growing up alongside AI presents real opportunities for learning, creativity and inclusion. At the same time, it raises important questions about attention, emotional development, relationships and wellbeing that we are only just beginning to understand.
This article offers a high-level exploration of the emerging impact of AI on wellbeing, drawing on behavioural insights, current data and relational perspectives, with a particular focus on how education settings can support healthy, resilient engagement with technology.
AI, Attention and Cognitive Load
AI systems are designed to be efficient, personalised and engaging. For young people, this can mean faster access to information, tailored learning support and interactive experiences that spark curiosity. However, the same features can also shape attention in less helpful ways.
Algorithm-driven platforms often prioritise speed, novelty and continuous engagement. Behavioural research suggests that frequent interaction with highly responsive systems can reduce tolerance for delay, making sustained focus and deep thinking more challenging. When answers are instant and content is endlessly refreshed, young people may have fewer opportunities to practise patience, reflection and effortful problem-solving: skills closely linked to long-term wellbeing and academic success.
AI can also influence decision-making. Recommendation algorithms subtly guide choices about what to watch, read or buy, often without users being fully aware. For developing minds, this raises questions about autonomy and agency: how decisions are formed, and how much space remains for independent judgement.
Emotional Regulation and the “Always Available” Listener
One of the most striking developments is the rise of conversational AI that can feel empathetic, non-judgemental and constantly available. For some young people, these tools provide a space to explore ideas, practise communication or even articulate emotions they struggle to share elsewhere.
From a wellbeing perspective, this is complex. On one hand, AI can act as a low-stakes outlet—helping young people name feelings, rehearse conversations or feel less alone in moments of stress. On the other, emotional regulation is fundamentally relational. It develops through real human interactions where empathy, boundaries and mutual understanding are experienced, not simulated.
There is a risk that frequent reliance on AI for emotional support may reduce opportunities to build coping strategies through relationships with peers, teachers and caregivers. Unlike humans, AI does not truly understand context, cannot be accountable, and cannot replace the emotional depth of human connection.
Why Some Young People Turn to AI
Young people may be drawn to AI for several reasons:
Safety and anonymity: AI does not judge, gossip or disclose, which can feel safer when exploring sensitive thoughts.
Control: Interactions can be ended instantly, giving a sense of power and predictability.
Accessibility: AI is available 24/7, unlike most human support systems.
These factors are particularly relevant for young people who feel misunderstood, socially anxious or reluctant to burden others. While this highlights unmet emotional needs, it also underscores the importance of ensuring AI complements, rather than replaces, human support.
Risks of Over-Reliance and Blurred Boundaries
Over-reliance on AI carries several risks. Misinformation remains a significant concern; AI-generated responses can sound confident while being inaccurate or biased. Young people may lack the critical literacy skills to evaluate these outputs, especially when information is presented conversationally rather than as a source to be questioned.
Boundaries can also blur. When AI feels “friendly” or relational, it may be harder for young users to distinguish between tool and companion. This can affect expectations of real relationships, where responses are slower, less predictable and emotionally complex.
There are also concerns around data, privacy and commercial influence. AI systems learn from user interactions, and young people may not fully understand how their data is used or how content is shaped by underlying business models.
Deepfakes, Trust and Identity
The rise of deepfake technology adds another layer of complexity to wellbeing. AI-generated images, audio and video can convincingly mimic real people, making it harder to trust what we see and hear.
For young people, this has implications for identity, relationships and safety. Deepfakes can be used for humour and creativity, but they can also enable bullying, harassment or reputational harm. The psychological impact of seeing one’s likeness manipulated or not knowing whether content is real can undermine trust and increase anxiety.
At a developmental stage where identity formation is central, the erosion of visual and digital trust can be particularly destabilising.
The Role of Education in Building Digital Resilience
Education settings play a crucial role in supporting wellbeing in an AI-shaped world. Beyond teaching technical skills, schools and colleges can nurture digital resilience: the capacity to engage with technology thoughtfully, critically and in ways that support mental health.
Key approaches include:
Critical AI literacy: Helping students understand how AI works, its limitations, and how algorithms influence behaviour and choices.
Emotional awareness: Encouraging reflection on how technology makes them feel, not just what it enables them to do.
Healthy habits: Promoting balance, boundaries and intentional use, including when to step away.
Relational support: Ensuring strong, trusted human connections remain central, so AI does not become a substitute for care.
Ethical discussion: Creating space to explore questions of trust, identity, consent and responsibility in a digital age.
Looking Ahead
AI is not inherently good or bad for wellbeing; it is a powerful influence whose impact depends on how it is designed, used and supported. For children and young people, the challenge is not simply learning to use AI, but learning how to live alongside it while maintaining attention, emotional health and meaningful relationships.
By combining behavioural insights with relational, human-centred approaches, educators, families and policymakers can help young people navigate this evolving landscape with confidence, curiosity and care.