UK Teens Form Emotional Bonds with AI Chatbots, Study Reveals

When Kevin first encountered an artificial intelligence chatbot, his initial request was straightforward: help with his Year 7 mathematics homework. Three years later, now in Year 10 at a London secondary school, his relationship with AI has evolved into something far more personal and emotionally complex.

The Rise of Digital Companionship

'I somewhat consider it a friend,' Kevin reveals. 'I talk to it for advice, maybe once or twice a week. I feel like it's easier to talk to AI since it doesn't have humans to tell my secrets to.'

Kevin represents a growing trend among British teenagers. According to new research released on Safer Internet Day, eight in ten youngsters aged between 11 and 16 now use AI chatbots, with nearly four in ten engaging with them daily. The study, conducted by telecommunications company Vodafone, surveyed 2,000 children alongside their parents and guardians.


Emotional Connections and Trust Issues

The findings reveal startling emotional connections developing between teenagers and artificial intelligence. A third of surveyed children view these AI chatbots as genuine friends, and a similar proportion (33%) have confided in them with information they wouldn't share with parents, peers, or teachers.

Nearly half (49%) of young users consider the technology trustworthy, and 39% believe it can understand human emotions as effectively as people do. This perception is creating new patterns of behaviour, with 14% of teenagers preferring to seek advice from AI chatbots rather than friends (10%) or teachers (3%).

However, not all teenagers share this perspective. Laura, another participant in the study, maintains a more cautious approach. When asked if she'd ever consider a chatbot a 'friend', she responds: 'No, because I believe they save data.' Despite her reservations, she admits to using chat applications about three times a week, in computer classes and outside school hours.

The Psychological Implications

Dr Elly Hanson, a leading child psychologist who collaborated with Vodafone researchers, explains how these relationships develop. 'Many children start using AI for homework assistance, as Kevin did, but quickly become drawn into pseudo-friendships because these bots can uncannily mimic human qualities like warmth, humour, and care.'

Dr Hanson warns that this sophisticated mimicry is fundamentally deceptive. 'Having a chatbot "friend" is like having a friend who doesn't think or care about you. Generative AI has been trained to give us answers we want to hear, rather than the ones we may need to hear.'

This dynamic poses particular risks for young people still developing social skills and learning to navigate the complexities of human relationships. Recent research indicates that interacting with overly accommodating AI models can reduce people's willingness to repair strained or broken friendships.

Safety Concerns and Educational Challenges

The Vodafone study found that children often struggle to distinguish between artificial and human interactions, failing to recognise that no human is actually typing responses on the other side of the screen. As teenagers increasingly replace human relationships with artificial ones, psychologists worry they're using AI to avoid the challenging aspects of social development.

'It's confronting that this highly sophisticated technology targeting perhaps the most precious part of what it is to be human – our attachment system – is now available to children without adequate attention to safety,' Dr Hanson emphasises.

Barry Laker, Childline service head at the NSPCC, echoes these concerns. 'The research shows AI chatbots are a clear safety concern. It's important to remind children that AI is a form of technology, therefore it doesn't know the child, can get things wrong, and that they're not a substitute for real relationships.'

Navigating the Regulatory Landscape

The findings place teachers and policymakers in a difficult position, requiring them to manage rapidly evolving technology that children are forming emotional attachments to. Some educational institutions have responded by implementing phone bans, though researchers remain uncertain about their effectiveness in improving academic performance or behaviour.


Vodafone is urging the government to address these challenges by ensuring AI technology is age-appropriate and properly understood. The company advocates for additional protective measures within the Online Safety Act, which currently requires pornographic websites and certain social media platforms to implement age verification systems.

These proposed enhancements would specifically safeguard children from potentially harmful chatbot designs, creating stronger barriers between vulnerable young users and artificial intelligence systems that might exploit emotional vulnerabilities.

As AI continues to integrate into daily life, the research highlights an urgent need for transparent conversations between parents, educators, and children about technology's role in emotional development and relationship building.