Protecting Minors Online: Can Age Verification Truly Make the Internet Safer?

The drive to protect minors online has been gaining momentum in recent years and is now making its mark in global policy circles. This shift, strongly supported by public sentiment, has also reached the European Union.

In a recent development, Members of the European Parliament, as part of the Internal Market and Consumer Protection Committee, approved a report raising serious concerns about the shortcomings of major online platforms in safeguarding minors. With 32 votes in favour, the Committee highlighted growing worries over issues such as online addiction, mental health impacts, and children’s exposure to illegal or harmful digital content.

What Is In The Report

The report discusses the creation of frameworks and systems to support age verification and protect children’s rights and privacy online. It calls for a significant push to incorporate safety measures as an integral part of platform design, within a social responsibility framework, to make the internet a safe environment for minors.

MEPs have proposed sixteen as the minimum age for accessing social media, video-sharing platforms, and AI-based chat companions, with children below sixteen able to access these platforms only with parental permission. A further proposal demands an absolute minimum age of thirteen, meaning children under 13 could not access or use social media platforms at all, even with parental permission.

In Short:

  • Under 13 years of age: Not allowed on social media
  • 13-15 years of age: Allowed with parents’ approval
  • 16 years and above: Can use freely, no consent required
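The brackets above amount to a simple decision rule. As an illustrative sketch only (the function name and return values are my own, not part of any proposal text), the rule could be expressed as:

```python
def social_media_access(age: int, parental_consent: bool = False) -> str:
    """Return the access level the report proposes for a given user age.

    Illustrative sketch of the proposed age brackets; the labels
    'blocked'/'allowed' are hypothetical.
    """
    if age < 13:
        return "blocked"  # under 13: no access, even with parental permission
    if age < 16:
        # 13-15: access only with parents' approval
        return "allowed" if parental_consent else "blocked"
    return "allowed"      # 16 and above: free use, no consent required
```

For example, under this rule a 14-year-old with parental consent would be allowed, while a 12-year-old would be blocked regardless of consent.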

MEPs recommended stricter action against non-compliance with the Digital Services Act (DSA), ranging from holding platforms’ senior executives personally responsible for breaches of security affecting minors to imposing substantial fines.

The recommendations include banning addictive design features and engagement-driven algorithms, removing gambling-style elements in games, and ending the monetisation of minors as influencers. They also call for tighter control over AI tools that create fake or explicit content and stronger rules against manipulative chatbots.

What Do Reports And Research Say?

The smoothness and convenience introduced by digital and technological advancements over the last two decades have changed how the world works and communicates. The internet provides a level field for everyone to connect, learn, and make an impact. However, the privacy of internet users, and the access to and control over their data, remain points of contention and a constant topic of debate. With an increasing percentage of minor users globally, the magnitude of risks has multiplied. Limited awareness of digital boundaries and the deceptive nature of the online environment make minors more susceptible to these dangers. Exposure to inappropriate content, cyberbullying, financial scams, identity theft, and manipulation through social media or gaming platforms are just a few of the risks. Their curiosity to explore beyond boundaries often makes minors easy targets for online predators.

Recent EU-relevant studies have made the following observations:

  • According to the Internet Watch Foundation’s Annual Data & Insights Report 2024 (published in 2025), record levels of child sexual abuse imagery were discovered in 2024: the IWF actioned 291,273 reports and found that 62% of identified child sexual abuse webpages were hosted in EU countries.
  • The WeProtect Global Alliance’s Global Threat Assessment 2023 reported an 87% increase in reports of child sexual abuse material since 2019. Rapid grooming on social gaming platforms and emerging threats from AI-generated sexual abuse material are among the new patterns of online exploitation.
  • According to the WHO/Europe HBSC report on bullying and peer violence (2024), one in six school-aged children (around 15-16%) experienced cyberbullying in 2022, a rise from previous survey rounds.

These reports indicate an alarming situation regarding minors’ safety and reflect the urgency with which the Committee is advancing its recommendations. A plenary vote is due on 23-24 November 2025.

While these reports underline the scale of the threat, they also raise an important question: are current solutions, like age verification, truly effective?

How Foolproof Is Age Verification As A Measure?

The primary concern in promoting age verification as a defence mechanism against cybercrime is the authenticity of the verification processes themselves and whether they are robust enough to eliminate unethical practices targeting users. For instance, if a user provides inaccurate information during the age verification process, are there any mechanisms in place to verify its accuracy?

Additionally, implementing age verification for children is next to impossible without violating adults’ rights to privacy and free speech, raising the question of who should have access to and control over users’ data: government bodies or big tech companies. Whether maintaining anonymity while providing such data has been given enough thought in drafting these policies remains a matter of concern.

According to EDRi, a leading European digital rights NGO, deploying age verification to tackle cybercrime against minors is not a new policy; social media platforms were reportedly made to adopt similar measures as early as 2009, yet the problem persists. Age verification as a countermeasure to cybercrime against minors is a superficial fix. Whether the Commission’s safety guidelines address the root cause of the problem, a toxic online environment, is an important question to answer.

EDRi’s key arguments:

  • Age verification is not a solution to problems of toxic platform design, such as addictive features and manipulative algorithms.
  • It restricts children’s rights to access information and express themselves, rather than empowering them.
  • It can exclude or discriminate against users without digital IDs or access to verification tools.
  • Lawmakers are focusing on exclusion instead of systemic reform — creating safer, fairer online spaces for everyone.
  • True protection lies in platform accountability and ethical design, not mass surveillance or one-size-fits-all age gates.

Read EDRi’s complete article here:
https://edri.org/our-work/age-verification-gains-traction-eu-risks-failing-to-address-the-root-causes-of-online-harm/

Before moving any policy toward implementation, weighing the positive and negative effects on users is pivotal, because a blanket policy based on age brackets may prove ineffective at mitigating the risks of an unsafe online space. Educating and empowering both parents and children through digital literacy can have a more profound and meaningful impact than simply regulating age brackets. Change always comes with informed choices.
