The idea of restricting social media access for children under 16 would have sounded radical just a few years ago. Today, it’s becoming a mainstream policy discussion.
It sounded radical because social media was long framed as neutral infrastructure — a place for connection, creativity, and self-expression — rather than as a system optimized for engagement and advertising. Platforms assured parents, regulators, and the public that young users were safe and supported, and that they even benefitted from these environments.
That narrative has steadily eroded. Internal research, lawsuits, and independent studies have painted a different picture: Platforms generate billions in advertising revenue from teenage users while downplaying or obscuring evidence that links intensive social media use to negative mental health outcomes, even though reducing usage has been shown to improve confidence and well-being.
What was once defended as a social good is now increasingly scrutinized as an attention economy that monetizes vulnerability. Against that backdrop, age-based restrictions no longer appear radical; they look like a reasonable response.
Australia moved first, introducing age-based restrictions for social media platforms that took effect in December 2025.
While there is no single European rule yet, the EU is also signaling a clear direction: Child protection online is shifting from guidance to enforcement. Countries like Denmark, Norway, and France are already outlining national approaches, and EU-level initiatives increasingly point toward stronger age safeguards and design obligations. In the US, states like California have also passed legislation requiring parental consent and time restrictions on social media platforms.
At first glance, this looks like an age-gating debate. In reality, it’s something deeper: a question about whether the current digital model — which is built around engagement optimization, behavioral profiling, and persuasive design — can ever be considered fair for children.
This question is no longer theoretical, and what began as isolated policy debates has turned into a global trend. Policymakers are converging on the same conclusion: The existing digital model places too much responsibility on users, and too little accountability on platforms, especially when it comes to children.
Why children are changing the rules of the internet
European data protection law has long recognized that children require special protection. But in practice, many digital services have treated them like smaller adults.
Children have been navigating:
- Behavioral profiling systems they don’t understand
- Interfaces designed to maximize attention, not well-being
- Privacy choices that assume legal and cognitive maturity
For years, the industry relied on a fragile assumption: If users can technically give consent, the system is legitimate. But that assumption is now breaking down.
Recent EU initiatives make this tension explicit. The Digital Services Act (DSA) and the Digital Fairness Act (DFA), which support consumer trust and safety online, approach the problem from different angles, but they point in the same direction. They also don’t exist in isolation. For years, Europe has been building a layered framework for children’s online protection — from the GDPR’s special rules for children’s consent, to audiovisual media regulations that limit harmful content and advertising, to broader policy initiatives such as the EU Strategy on the Rights of the Child and the Better Internet for Kids agenda.
What’s changing now is not the goal, but the intensity: Expectations are shifting from high-level principles and guidance to enforceable obligations that directly shape platform behavior and design.
From risk management to fairness by design
The DSA focuses on outcomes. It makes platforms legally responsible for the systemic risks their services create — especially for minors. This includes how content is amplified, how engagement is engineered, and how personal data is used. One of its clearest signals is the prohibition of profiling-based advertising to minors, which forces platforms to become age-aware and rethink long-standing growth models.
In parallel, the DFA focuses on methods. It challenges the design practices that create those risks in the first place, such as dark patterns that exploit the power asymmetry between users and platforms, and interfaces that steer people toward choices they may not fully understand.
Neither the DSA nor the DFA is solely about children. But together, they expose where the current model fails most clearly.

In practice, this shift is already visible in parts of the digital ecosystem. Some platforms have moved away from treating age as a simple compliance check and instead use it to shape the experience itself: offering separate environments for younger users, limiting recommendation intensity, reducing data-driven personalization, and enforcing stricter defaults around sharing and communication.
Similar principles have long existed in gaming and operating system ecosystems, where younger users encounter restricted chat functions, time limits, reduced monetization pressure, and clearer guardrails by default.
These approaches do not rely on trust or disclosure alone. They embed restraint directly into the design, demonstrating how fairness by design can translate from principle into practice. Children are not the exception — they are the clearest proof that relying only on consent and self-regulation is no longer sufficient.
Age gating sits at the intersection of these two frameworks. It translates abstract obligations around risk, fairness, and proportionality into concrete design decisions about what is allowed — and what is not — for different users.
Age gating returns with a new purpose
Age gating is not new. For years, it was treated as a formal requirement with minimal practical impact: Enter a birthdate, move on.
The difference today lies in why age gating matters.

Australia’s approach shows a willingness to draw hard access boundaries. In Europe, the approach is more nuanced but equally consequential. National discussions, such as those underway in France and Denmark, combined with existing EU-level obligations under the DSA and long-standing child protection principles in EU law, point toward a future where platforms are expected to actively prevent harmful experiences for minors — not simply warn against them.
This fundamentally changes the role of age gating:
- As a policy trigger, not a checkbox
- As a design input, not just a restriction
- As a capability that determines which experiences are appropriate, proportional, and fair
Think of age gating like a wedding. When children attend, certain common-sense measures are taken, and nobody considers that exclusion.
For example, you’d serve children juice, not sparkling wine. You’d want to protect their ears from loud music. You might have them leave the venue to go to bed before midnight. Not because weddings are bad — but because the environment is adapted to who is present.
Digital services should work the same way. Age awareness is not about banning participation, but about adjusting the experience: what is shown, how it is optimized, and how much data is used. When age gating only exists on paper, but the experience stays unchanged, protection remains only theoretical.
In practice, age gating does not require radical new products. Imagine a social media app that knows a user is under 16. The experience could look materially different without excluding participation:
- Recommendations are less aggressive and less optimized for endless engagement.
- Personalization relies on contextual signals rather than behavioral profiling.
- Advertising, if present at all, is contextual and non-targeted.
- Default settings favor privacy, limited sharing, and reduced notifications.
The same platform can still exist — but the rules governing data use, amplification, and optimization change based on who is using it. That is what age-aware design looks like in practice.
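To make that concrete, here is a minimal sketch, in TypeScript, of age acting as a design input rather than a checkbox. The age bands, the `ExperiencePolicy` shape, and the `policyFor` function are illustrative assumptions for the sake of example, not a description of any platform’s actual system; the point is simply that one assured age band can resolve to different defaults for personalization, advertising, amplification, and sharing.

```typescript
// Illustrative sketch only: type names, age bands, and thresholds are
// assumptions for the example, not a reference implementation.

type AgeBand = "under13" | "13to15" | "16to17" | "adult";

interface ExperiencePolicy {
  personalization: "behavioral" | "contextual" | "none"; // which signals may drive recommendations
  advertising: "targeted" | "contextual" | "none";       // how, or whether, ads are delivered
  recommendationIntensity: "full" | "reduced";           // e.g. no infinite feeds or autoplay chains
  notificationsDefault: "on" | "quiet";                  // fewer re-engagement prompts by default
  sharingDefault: "public" | "friends" | "private";      // who can see and contact the user
}

// A single assured age band resolves to different defaults for data use,
// amplification, and optimization: the same platform, different rules.
function policyFor(band: AgeBand): ExperiencePolicy {
  switch (band) {
    case "under13":
      return {
        personalization: "none",
        advertising: "none",
        recommendationIntensity: "reduced",
        notificationsDefault: "quiet",
        sharingDefault: "private",
      };
    case "13to15":
      return {
        personalization: "contextual",
        advertising: "contextual",
        recommendationIntensity: "reduced",
        notificationsDefault: "quiet",
        sharingDefault: "friends",
      };
    case "16to17":
      return {
        personalization: "contextual",
        advertising: "contextual",
        recommendationIntensity: "full",
        notificationsDefault: "on",
        sharingDefault: "friends",
      };
    case "adult":
      return {
        personalization: "behavioral",
        advertising: "targeted",
        recommendationIntensity: "full",
        notificationsDefault: "on",
        sharingDefault: "public",
      };
  }
}

// Usage: a user assured to be under 16 never sees targeted advertising.
const teenPolicy = policyFor("13to15");
console.log(teenPolicy.advertising); // "contextual"
```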
The hidden trade-off: protection vs. data minimization
There is a paradox at the heart of the age-gating debate.
Stricter age controls often increase pressure to collect more personal data. Many proposed restrictions implicitly depend on new forms of age verification or age assurance — from requiring government-issued IDs to estimating a user’s age through biometric or behavioral signals.
In many cases, however, policymakers have not specified how these checks should be performed, how long data should be retained, or how sensitive information should be protected. This creates real risks: data breaches, misuse of identity information, and exclusion of users who lack formal identification. It also concentrates highly sensitive data in systems never designed to hold it.
Solving child protection by collecting more identity data is not a sustainable answer. It replaces one risk with another — and undermines trust in the process.
This is why the conversation is shifting away from “How do we verify age perfectly?” toward a more fundamental question: How do we design digital experiences that adapt responsibly to age, without defaulting to excessive data collection?
Age gating, in this sense, is not just about blocking access. It’s about enabling proportionality.
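As one illustration of what proportionality could mean technically, the hedged sketch below assumes a verifier — an ID provider, a parental attestation flow, or an on-device estimator — that returns only a coarse age band. The `AgeVerifier` interface and `obtainAgeClaim` function are hypothetical names used for this example; the design point is that the platform retains the claim, not the underlying evidence.

```typescript
// Hypothetical sketch of a data-minimizing age check. The verifier keeps the
// evidence (ID document, parental confirmation, estimation signals); the
// platform stores only a coarse claim and when it was made.

type AgeBandClaim = {
  band: "under16" | "16plus";
  verifiedAt: Date;
};

interface AgeVerifier {
  // How the band is established is the verifier's concern and is not exposed here.
  assertBand(): Promise<"under16" | "16plus">;
}

async function obtainAgeClaim(verifier: AgeVerifier): Promise<AgeBandClaim> {
  const band = await verifier.assertBand();
  // No date of birth, no ID scan, no biometric template is retained --
  // only the minimum needed to apply age-appropriate rules later on.
  return { band, verifiedAt: new Date() };
}
```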

Gating access is not the same as fixing harm
Many of the current proposals focus on who can access social media, rather than how those platforms operate. The underlying concern is children’s exposure to harmful content, addictive design patterns, and negative mental health impacts — but these harms are not limited to minors.
Infinite scroll, algorithmic amplification, persuasive interfaces, and behavioral targeting affect users across all age groups. They also disproportionately impact other vulnerable groups, including elderly users and people with disabilities, who may be more susceptible to manipulative design, cognitive overload, or misleading cues.
Restricting access based on age may reduce exposure for some, but it does not address the systemic mechanics that create harm in the first place.
This is precisely why recent EU efforts increasingly focus on platform behavior rather than user responsibility — from limits on profiling and targeting of minors to scrutiny of recommender systems, default settings, and persuasive design. By addressing these root causes, regulation aimed at protecting children has the potential to make digital environments safer, clearer, and fairer for everyone.
The EU’s regulatory direction reflects this: The DSA explicitly bans profiling-based advertising to minors, and the DFA challenges the manipulative design patterns behind it. Together, they point toward a broader shift: not just keeping children out, but changing what is allowed inside — especially when it comes to targeting, personalization, and data use.
What this means for brands and marketing leaders
It would be easy to see this as a problem for social media platforms and regulators alone. That would be a mistake.
The expectations emerging today will also apply to:
- Branded apps & loyalty programs
- Gaming and entertainment platforms
- Data-driven personalization experiences
For marketing leaders, the implications are strategic:
Age-aware experiences will become the standard
Not every user needs the same level of personalization, tracking, or optimization. Designing experiences that adjust based on age — with fewer data dependencies — will soon be expected. In practice, this means treating age not as a marketing attribute, but as a governance signal that determines what data use and engagement patterns are appropriate.
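One way to read “age as a governance signal” in practice is sketched below, using hypothetical names: consent remains necessary, but it only unlocks behavioral personalization for users assured to be adults, and restraint is the default whenever the age band is unknown. This is a sketch under those assumptions, not a prescribed implementation.

```typescript
// Hypothetical guard a branded app or loyalty program might apply before
// enabling data-driven features. Names and thresholds are illustrative.

interface UserContext {
  assuredAgeBand: "minor" | "adult" | "unknown";
  consent: {
    analytics: boolean;
    personalization: boolean;
  };
}

// Consent is necessary but not sufficient: behavioral personalization is only
// enabled for users assured to be adults who have also opted in.
function canUseBehavioralPersonalization(user: UserContext): boolean {
  if (user.assuredAgeBand !== "adult") {
    return false; // default to restraint for minors and unknown ages
  }
  return user.consent.personalization;
}
```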
Consent alone is no longer the finish line
Especially for minors, regulators are signaling that legal consent does not automatically mean fair processing. That logic will extend beyond children.
Trust will depend on restraint
The most sophisticated data strategy is useless if users perceive it as exploitative. But restraint cannot rely on promises alone. In an environment where platform behavior has repeatedly diverged from public commitments, trust increasingly depends on enforceable rules, technical safeguards, and the ability to verify what systems actually do. Fairness is becoming a performance factor — and accountability a prerequisite.
Children as the catalyst, not the exception
Australia may be leading with clear restrictions, while Europe advances through a broader set of frameworks, from data protection and media regulation to the DSA and the emerging digital fairness agenda. But the direction is unmistakable.
The era of “just put up a policy and hope for the best” is ending.
Age gating, child protection, and digital fairness are converging into a single expectation:
If you build digital experiences, you are responsible for how they affect the people who use them — especially those least able to protect themselves.
Children are forcing the digital ecosystem to confront its own limits, and the companies that learn how to design fairly for them will be the ones users trust tomorrow.
We’ve got you covered, from tracking to trust.