Advanced Television

Ofcom: 33% UK kids have adult social media accounts

October 11, 2022

By Colin Mann

A third of UK children aged between 8 and 17 with a social media profile have an adult user age after signing up with a false date of birth, according to research commissioned by comms regulator Ofcom.

Yonder Consulting found that the majority of children aged between 8 and 17 (77 per cent) who use social media now have their own profile on at least one of the large platforms. And despite most platforms having a minimum age of 13, the research suggests that six in 10 (60 per cent) children aged 8 to 12 who use these platforms are signed up with their own profile.

Among this underage group (8 to 12s), up to half had set up at least one of their profiles themselves, while up to two-thirds had help from a parent or guardian.

Why does a child’s online age matter?

When a child self-declares a false age to gain access to social media or online games, that claimed user age increases with them as they grow older, so the gap between their real and declared age persists. This means they could be placed at greater risk of encountering age-inappropriate or harmful content online. For example, once a user’s declared age reaches 16 or 18, some platforms introduce features and functionalities not available to younger users – such as direct messaging and the ability to see adult content.
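To make that mechanism concrete, here is a minimal sketch in Python; the 16 and 18 thresholds and the feature names are illustrative assumptions rather than any specific platform’s rules. It shows how a falsely declared date of birth produces a ‘user age’ that stays inflated as the account ages and unlocks age-gated features early.

    from datetime import date

    def user_age(declared_dob: date, today: date) -> int:
        """Age a platform would infer from the self-declared date of birth."""
        years = today.year - declared_dob.year
        # Subtract a year if this year's birthday hasn't happened yet.
        if (today.month, today.day) < (declared_dob.month, declared_dob.day):
            years -= 1
        return years

    def unlocked_features(age: int) -> list:
        """Illustrative 16+/18+ feature gates (assumed thresholds; real platforms vary)."""
        features = []
        if age >= 16:
            features.append("direct messaging")
        if age >= 18:
            features.append("adult content")
        return features

    # Example: a child born in 2012 signs up claiming a 2004 date of birth.
    today = date(2022, 10, 11)
    print(user_age(date(2012, 6, 1), today))   # 10 – actual age
    print(user_age(date(2004, 6, 1), today))   # 18 – claimed user age
    print(unlocked_features(18))               # ['direct messaging', 'adult content']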

Yonder’s study sought to estimate the proportion of children that have social media profiles with ‘user ages’ that make them appear older than they actually are. The findings suggest that almost half (47 per cent) of children aged 8 to 15 with a social media profile have a user age of 16+, while 32 per cent of children aged 8 to 17 have a user age of 18+.

Among the younger, 8 to 12s age group, the study estimated that two in five (39 per cent) have a user age profile of a 16+ year old, while just under a quarter (23 per cent) have a user age of 18+.

Risk factors that can lead children to harm online

In line with its duty to promote and research media literacy, and as set out in its roadmap to online safety regulation, Ofcom is publishing a series of research reports designed to further build its evidence base as it prepares to implement the new online safety laws. Given that the protection of children sits at the core of the regime, this wave of research explores children’s experiences of harm online, as well as children’s and parents’ attitudes towards certain online protections.

Commissioned by Ofcom and carried out by Revealing Reality, a second, broader study into the risk factors that may lead children to harm online found that providing a false age was only one of many potential triggers.

The study identified a range of risk factors that potentially made children more vulnerable to online harm, especially when several of these factors coincided or frequently co-occurred with the harm experienced. These included:

  • a child’s pre-existing vulnerabilities such as any special educational needs or disabilities (SEND), existing mental health conditions and social isolation;
  • offline circumstances such as bullying or peer pressure, and feelings such as low self-esteem or poor body image;
  • design features of platforms which either encouraged and enabled children to build large networks of people – often people they didn’t know – or exposed them to content and connections they hadn’t proactively sought out; and
  • exposure to personally relevant, targeted, or peer-produced content, and material that was appealing as it was perceived as a solution to a problem or insecurity.

The study indicated that the severity of any impact can vary between children, ranging from minimal, transient emotional upset (such as confusion or anger), through temporary behaviour change or deeper emotional impact (such as physical aggression or short-term food restriction), to far-reaching, severe psychological and physical harm (such as social withdrawal or acts of self-harm).

Children’s and parents’ attitudes towards age assurance

A third research study – commissioned jointly by Ofcom and the Information Commissioner’s Office under their Digital Regulation Cooperation Forum (DRCF) programme of work – delves deeper into children’s and parents’ attitudes towards age assurance.

Age assurance is a broad term encompassing a range of techniques designed to prevent children from accessing adult, harmful, or otherwise inappropriate content, and to help platforms tailor content and features to provide an age-appropriate experience. It encapsulates measures such as self-declaration (as discussed above), hard identifiers such as passports, and AI- and biometric-based systems, among others.

It finds that parents and children are broadly supportive of the principle of age assurance, but also identifies that some methods raise concerns about privacy, parental control, children’s autonomy and usability.

Parents told Ofcom that they were concerned about keeping their children safe online, but equally wanted them to learn how to manage risks independently through experience. Many also didn’t want their children to be left out of online activities that their peers were allowed to take part in, and others felt that their children’s level of maturity, rather than simply their numerical age, was a primary consideration in the freedom they were given.

Parents felt that the effort required by an age assurance method should be proportionate to the perceived risk of the activity involved. Both parents and children leaned towards “hard identifiers”, such as passports, for traditionally age-restricted activities like gambling or accessing pornography.

Social media and gaming tended to be perceived as comparatively less risky. For these platforms and services, children tended to prefer a ‘self-declaration’ method of age assurance, because it is perceived as easy to circumvent and they want to use the services without restrictions.

Some parents felt that minimum age restrictions for social media and games were quite arbitrary, and, as Ofcom highlights above, facilitated their child’s access. When prompted with different age assurance methods for social media and games, parents often preferred “parental confirmation” as they considered it afforded them both control and flexibility.

The Online Safety Bill

The Online Safety Bill will require Ofcom to assess and publish findings about the risks to children of harmful content they may encounter online.

It will also require in-scope services that are likely to be accessed by children to assess the risks of harm to youngsters who use their service, and to put in place proportionate systems and processes to mitigate and manage these risks.

Ofcom already regulates UK-established video-sharing platforms and will shortly publish the key findings from its first full year of regulation. That report will focus on the measures platforms have in place to protect users, including children, from harmful material, and will set out Ofcom’s strategy for the year ahead.

 

Categories: Articles, Business, Consumer Behaviour, Policy, Regulation, Research, Social Media
