AI & Tech News

EU Sets Minimum Age Limit to Access AI Chatbots and Social Media

Europe Inspired By Australia's Policy on Social Media Ban

Key Highlights:

  • EU pushes a unified minimum age of 16 for accessing social media, video platforms, and AI assistants across the bloc.
  • Mandatory parental consent will be required for kids aged 13–16, supported by an EU-wide age verification and digital ID system.
  • Platforms face strict penalties, including bans and executive liability, for failing to protect minors from harmful content and manipulative design practices.

There has been growing chatter around child safety rules online this year, and rightly so. After Australia passed legislation banning social media for kids under 16, the European Parliament has also stepped up its advocacy for strict child safety rules online. In fact, it has been pushing for them for a long time now.

But this week it finally announced some firm measures to keep minors safe online. The European Parliament has backed a proposal to set a minimum age of 16 for accessing social media platforms, video-sharing sites, and AI assistants across the European Union. Teens aged 13 to 16 would still be allowed to use these services with verified parental consent. It's a welcome change.

The European Parliament has backed a proposal for stricter child safety rules

The proposal was passed this week with 483 votes in favor, 92 against, and 86 abstentions. Lawmakers and regulators have been actively scrutinizing video-sharing, social media, and AI platforms amid growing concerns about risks to kids' physical and mental health. With smartphones and the internet so easily accessible, social apps have become a go-to place for kids, where they encounter manipulative design such as infinite feeds and autoplay. These features reportedly interfere with kids' behavior, mental and physical well-being, and learning.

For that very reason, the European Parliament has also said that platform designs need to change, pointing out that many of these platforms rely on mechanisms that push kids toward prolonged use. To enforce the new age limit, lawmakers also support the rollout of an EU-wide age verification app and the European Digital Identity (eID) wallet.

The Parliament has also said that these tools must be accurate, secure, and privacy-centric. However, it warned that the existence of such tools does not reduce platforms' responsibility to make their services inherently safe and appropriate for kids. While implementing all the changes, compliance with the Digital Services Act (DSA) remains a central priority, according to the European Parliament.

If companies repeatedly fail to protect minors and ignore safety specification requirements, the senior executives of such platforms could face personal liability. The inclusion of personal accountability shows how seriously lawmakers are treating online safety violations these days. Lawmakers have also asked the Commission to target persuasive technologies such as influencer marketing, targeted advertising, dark patterns, and algorithmic nudging. Going forward, many of these practices will be addressed under the upcoming Digital Fairness Act (DFA), which is expected to bolster consumer protection standards for digital services.

Also read: FTC orders Google, OpenAI, Meta to report on AI chatbot safety for children & teens

Online platforms could face an EU-wide ban for violations

More importantly, if a platform fails to comply with the EU's online child safety rules, it could face an outright ban across the EU, which is no small measure. The Parliament also wants a ban on engagement-based recommendation systems for minors, arguing that algorithms designed to maximize watch time or interaction often expose younger users to harmful and inappropriate content.

In addition to social platforms, the European Parliament also calls for Digital Services Act protections to apply to online video platforms, and proposes a ban on loot boxes, fortune wheels, quick pay-to-progress systems, and other randomized gaming features. Concerns around "kid-fluencing" have also been taken into account, which is why the Parliament recommends restrictions on online platforms that allow children to generate content for financial incentives.

In an age where AI is increasingly woven into our lives, lawmakers want to ensure that minors are protected from deepfakes, AI companion chatbots, autonomous AI agents, and apps capable of creating synthetic nudity. They have also warned that the rapid advancement of AI tools is not only increasing risks but also exposing minors to misinformation and triggering privacy violations.

Also read: UK Government to Tackle AI-Generated Child Abuse Images With New Law

Member states have already started acting on the proposed rules

EU member states have already begun rolling out national responses, including smartphone restrictions in schools, cyberbullying regulations, helplines, and wider digital well-being campaigns. Policymakers are also reportedly preparing a coordinated inquiry into social media's impact on well-being and an EU-level action plan to combat cyberbullying.

Rishaj Upadhyay
Rishaj is a tech journalist with a passion for AI, Android, Windows, and all things tech. He enjoys breaking down complex topics into stories readers can relate to. When he's not hammering away at the keyboard, you can find him on his favorite subreddits or listening to music and podcasts.