Age verification has moved from an optional safeguard to a legal obligation in a growing number of jurisdictions, and the cost of getting it wrong is rising fast.
From record-breaking Federal Trade Commission (FTC) enforcement actions to sweeping new laws in the U.S., UK, Australia, and beyond, regulators are demanding more than good intentions. They want documented processes, auditable records, and systems that work.
Businesses that cannot demonstrate compliance are finding themselves exposed to fines that run into the hundreds of millions, reputational damage that is difficult to recover from, and in some jurisdictions, criminal liability.
The pressure is coming from every direction. Legislators across party lines have made protecting kids online one of their few points of genuine agreement. Bills targeting children’s data, social media access, and harmful content have advanced with bipartisan support in U.S. federal and state legislatures, as well as in legislatures internationally.
Consumers — and parents in particular — are paying attention: trust in how companies handle children’s data has become a meaningful factor in purchasing decisions and brand loyalty.
But the compliance challenge is real. Age verification sounds straightforward until you encounter a patchwork of overlapping laws with different age thresholds, consent requirements, and enforcement regimes.
The technical reality compounds this: a gate placed on a website is not the same as a gate integrated into the systems that govern what data gets collected, shared, and sold.
This article explains what age verification compliance requires, which businesses are most at risk, what happens when companies fall short, and how to put a defensible solution in place.
At a glance
- Age verification is now a legal requirement under COPPA, CPRA, CAADC, and a growing number of U.S. state and international laws.
- Major enforcement actions demonstrate that regulators will pursue the largest platforms.
- Civil penalties, criminal liability, state attorney general investigations, and reputational damage are all real consequences of non-compliance.
- More than 19 U.S. states have passed laws requiring age verification for access to harmful content; Australia, the UK, and the EU have all introduced or are developing enforceable age-gating requirements.
- Effective age verification must be embedded into your consent flow and not deployed as a separate, standalone tool.
What Is Age Verification, and Why Does It Matter Now?
Age verification is the process of confirming that a person attempting to access digital content or services meets a defined age threshold before access is granted or any data is collected. It has historically been associated with regulated industries like alcohol, gambling, and adult content, but the regulatory landscape has shifted dramatically.
Children’s data is heavily regulated in U.S. and global privacy law. Laws governing how businesses collect, use, and share minors’ personal information have multiplied at both state and federal levels, and enforcement is intensifying. Businesses need to be sure that their age verification measures will be sufficient to withstand scrutiny.
Who Counts as a Minor?
The answer varies by jurisdiction and law, which creates complexity for businesses operating across multiple markets. For example:
- COPPA (U.S.): Protects children under 13
- COPPA 2.0 (passed Senate, March 2026): Extends coverage to under 17
- U.S. state privacy laws: Range from under 13 to under 18
- CAADC (California): Covers users under 18
- GDPR/GDPR-K (EU): Member State age of digital consent ranges from 13 to 16
- UK Online Safety Act: Applies protections to all children under 18
- Australia’s Online Safety Amendment: Targets under-16s on social media
For businesses with international audiences, designing for the most protective standard is increasingly the pragmatic choice.
Age Verification Laws: The U.S. and Global Regulatory Landscape
The regulatory frameworks governing age verification now span dozens of jurisdictions, from U.S. federal and state law to national legislation across Europe, Australia, and Asia. Understanding them begins with the law that set the template for children’s online privacy protection worldwide.
COPPA: The Foundation of U.S. Children’s Online Privacy
The Children’s Online Privacy Protection Act (COPPA) has been U.S. law since 1998, but it has never been more actively enforced. The FTC’s comprehensive amendments to the COPPA Rule, finalized in early 2025, represent the most significant overhaul since 2013 and reflect the scale of concern about how children’s data is being monetized.
COPPA requires operators of websites and online services directed at children under 13 — or those with “actual knowledge” that they are collecting data from minors — to obtain verifiable parental consent before collecting any personal information. Civil penalties reach up to USD 53,088 per violation.
The 2025 COPPA amendments introduced stricter requirements around:
- Data retention and deletion
- Third-party sharing for advertising and analytics purposes
- Age screening obligations for “mixed audience” sites
- Enhanced oversight for industry Safe Harbor programs
The FTC has also made clear that age verification is a priority. In February 2026, the Commission issued a policy statement confirming it will not pursue enforcement against operators that collect minimal information solely for the purpose of age verification. By removing a perceived COPPA obstacle to collecting the data an age check requires, the statement is a meaningful signal of where regulatory expectations are heading.
CPRA and CAADC: California Raises the Bar
California continues to set the pace on children’s privacy in the U.S. The California Privacy Rights Act (CPRA) imposes additional obligations on businesses that handle minors’ data, including opt-in requirements for the sale or sharing of personal information for those under 16.
The California Age-Appropriate Design Code Act (CAADC) goes further still. Modeled on the UK’s Children’s Code, the CAADC requires businesses to assess and mitigate privacy risks to users under 18, apply protective default settings, and limit the collection of data that is not necessary to deliver the service. Although legal challenges have temporarily delayed full enforcement, the direction of California’s regulatory intent is clear.
Learn more: Why protecting kids online is forcing a rethink of digital design
The State-Level Surge: 19 States and Counting
Federal law is not the only concern. By the end of 2025, roughly half of all U.S. states had enacted laws requiring age verification to access potentially harmful content — including pornography, gambling, alcohol, and social media platforms — with more legislation expected through 2026.
A U.S. Supreme Court ruling in June 2025 upholding Texas’s age verification statute has further reinforced legislative confidence. States including Florida, Georgia, Louisiana, New York, Tennessee, and Utah have all introduced requirements to verify ages, obtain parental consent, or both.
The following examples illustrate the range of requirements, as well as the active litigation that continues to test their enforceability:
- Florida: Prohibits minors under 14 from creating accounts on platforms with addictive design features, and requires parental consent for 14- and 15-year-olds. Currently enforceable following a November 2025 appellate ruling, though constitutional litigation continues.
- Georgia: Requires platforms to verify users’ ages and obtain parental consent for anyone under 16, though enforcement remains blocked by a preliminary injunction pending appeal.
- New York: The SAFE for Kids Act requires verifiable parental consent before providing algorithmically curated content feeds to users under 18.
- Louisiana: Required social media platforms with more than 5 million users to verify ages and obtain parental consent for users under 16, though a federal court struck the law down as unconstitutional in December 2025; the state is appealing.
This patchwork of requirements means that a business operating in multiple U.S. states cannot rely on meeting a single standard. A centralized, configurable age verification solution that can be adapted by jurisdiction is not a luxury; it is a compliance necessity.
International Age Verification Requirements
The U.S. is not acting alone. Age-gating requirements are now part of regulatory frameworks across multiple jurisdictions.
European Union
The Digital Services Act (DSA) prohibits profiling-based advertising to minors and requires all online platforms accessible to minors to protect their privacy, safety, and security. The European Commission published guidance in July 2025, and enforcement may already be under way.
Separately, GDPR-K — the child-specific provisions of the GDPR — requires age-appropriate consent and grants children and parents significant rights over data.
DSA violations can result in fines of up to six percent of global annual revenue; GDPR-K violations carry penalties of up to EUR 20 million or four percent of global annual turnover, whichever is higher.
United Kingdom
The Online Safety Act came into full force for child protection on 25 July 2025, requiring platforms to use highly effective age assurance to prevent children from accessing pornography, self-harm content, and other harmful material.
Regulator Ofcom had opened 21 investigations by October 2025 and has already issued fines for age check failures. Non-compliance carries penalties of up to GBP 18 million or 10 percent of global turnover, whichever is higher.
Australia
The Online Safety Amendment, which came into force on 10 December 2025, prohibits social media companies from allowing users under 16 to hold accounts. Non-compliant platforms face fines of up to AUD 50 million.
India
The Digital Personal Data Protection (DPDP) Act requires verifiable parental consent before processing the personal data of children under 18.
Brazil
A law passed in September 2025 requires age verification for social media access, linking accounts for users under 16 with a parent, and banning loot boxes in video games. Non-compliance carries fines of up to BRL 50 million.
Age Verification Failures: The Real Cost of Non-Compliance
Regulatory penalties are the most visible consequence of inadequate age verification, but they are not the only one. Businesses that fail to implement compliant age-gating face a range of financial, legal, reputational, and operational consequences.
Financial Penalties: The Numbers Are Large
Financial penalties for children’s data violations have escalated sharply over the past decade, and the trajectory is upward. The cases below are drawn from U.S. federal enforcement alone. They do not include state attorney general actions, international fines, or the mounting wave of civil litigation.
Taken together, they demonstrate that regulators are willing to pursue the largest platforms and impose penalties that reflect the scale of harm.
Epic Games (Fortnite)
- Year: 2022
- Authority: FTC / DOJ
- Regulation: COPPA
- Amount: USD 275 million (COPPA penalty); USD 520 million total
The largest civil penalty ever imposed for violating an FTC rule. The FTC and Department of Justice alleged that Epic collected personal data from children under 13 without parental consent, matched minors with adult strangers in real-time voice chat, and enabled default communication settings that exposed children to harassment.
A separate USD 245 million settlement covered dark pattern billing practices, bringing the total to USD 520 million.
Google and YouTube
- Year: 2019
- Authority: FTC / New York Attorney General
- Regulation: COPPA
- Amount: USD 170 million
The FTC and New York Attorney General settled allegations that YouTube had illegally collected personal information from children viewing child-directed content, without parental consent, in violation of COPPA.
The penalty, split between USD 136 million to the FTC and USD 34 million to New York, was the largest COPPA fine on record at the time.
TikTok / Musical.ly
- Year: 2019 settlement; 2024 lawsuit ongoing
- Authority: FTC / DOJ
- Regulation: COPPA
- Amount: USD 5.7 million (2019); penalty in 2024 action not yet determined
TikTok’s predecessor platform Musical.ly paid USD 5.7 million in 2019 to resolve FTC allegations of COPPA violations, which was the largest COPPA fine at that time. As part of that settlement, TikTok was subject to a court order requiring ongoing COPPA compliance.
In August 2024, the DOJ and FTC filed a new complaint alleging TikTok knowingly continued to allow children to create accounts and collected their data in violation of both COPPA and the 2019 consent order.
TikTok’s motion to dismiss was substantially denied in November 2025, and the case is proceeding. With approximately 37.5 million U.S. users under 18, the potential penalty exposure is substantial.
Disney
- Year: 2025
- Authority: FTC / DOJ
- Regulation: COPPA
- Amount: USD 10 million
A federal judge approved a USD 10 million settlement in December 2025, requiring Disney to pay a civil penalty over FTC allegations that it mislabeled child-directed videos on YouTube as general audience content, allowing third-party collection of personal data from children under 13 without parental notification or consent.
HoYoverse (Genshin Impact)
- Year: 2025
- Authority: FTC / DOJ
- Regulation: COPPA
- Amount: USD 20 million (proposed settlement)
In January 2025, the FTC announced a proposed settlement requiring HoYoverse to pay USD 20 million, delete improperly collected data from players under 13, and ban the sale of loot boxes to under-16s without parental consent. The settlement was pending court approval as of March 2026.
These figures represent only settled enforcement actions. COPPA currently permits civil penalties of up to USD 53,088 per violation. Given the millions of users affected in major platform cases, potential penalties before settlement are orders of magnitude larger: at that rate, violations affecting just one million children carry a theoretical maximum exposure of more than USD 53 billion.
Criminal Liability
Financial penalties are not the only exposure. In some jurisdictions, deliberate or reckless violations of children’s data privacy laws can give rise to criminal liability for company officers and directors — not merely for the corporate entity.
The UK’s Online Safety Act is the most explicit example currently in force. Senior managers of in-scope services can face criminal liability if they fail to comply with a confirmation decision issued by Ofcom in relation to children’s safety duties.
Criminal action can also be taken against senior managers who fail to ensure their companies respond to information requests from Ofcom. These are not theoretical provisions: Ofcom has already launched enforcement programmes and opened investigations, with fines issued and the criminal liability framework now active.
In the U.S., certain state laws similarly create personal liability for senior officers in cases of serious or repeated violations, independent of corporate liability.
The EU’s Digital Services Act allows national authorities to impose sanctions on individuals responsible for compliance failures at Very Large Online Platforms (VLOPs), though no individual criminal prosecutions under the DSA have been publicly confirmed to date.
Civil Litigation and State Attorney General Actions
In addition to federal enforcement, companies face civil litigation and action from state attorneys general. The scale of that litigation is now enormous. A coalition of 42 U.S. attorneys general filed actions against Meta Platforms over alleged harm to young people and youth mental health, citing knowing and deliberate design choices that exposed children to addictive features and unsuitable content.
That litigation has now produced its first jury verdict. On March 24, 2026, a New Mexico jury found Meta liable for violating state consumer protection law. This is the first time a state-led action has resulted in a jury holding a major platform accountable for child safety failures. The jury ordered the company to pay USD 375 million in civil penalties. Meta has said it will appeal.
Snapchat and TikTok, also named in the New Mexico action, reached settlements earlier in 2026. Further trials involving Meta, YouTube, and other platforms are scheduled for later in the year.
State AGs can act independently of the FTC under COPPA, which explicitly authorizes them to bring actions on behalf of state residents. The coordination between state and federal enforcement means that a single compliance failure can generate simultaneous actions in multiple jurisdictions. And as the New Mexico verdict demonstrates, those actions can now result in nine-figure jury awards.
Reputational Damage
Quantifying reputational harm is difficult, but the pattern is consistent. Enforcement actions attract media coverage, and that coverage concentrates on the most damaging allegations, such as a company knowingly exposing children to data collection, targeted advertising, or harmful content.
The association of a brand with child safety failures is difficult to reverse, and increasingly affects purchasing decisions among privacy-conscious consumers.
Usercentrics’ own State of Digital Trust Report 2025 found that consumers are increasingly aware of, and active about, their data privacy, a trend that makes the reputational consequences of children’s data failures more commercially significant than ever.
Emerging Risks Making Age Verification More Critical Than Ever
Age verification is not a problem that can be solved once and set aside. The landscape of risk is expanding socially, politically, and technologically.
AI-Generated Content and Synthetic Media
Generative AI has dramatically lowered the barrier to creating content at scale that is persuasive, personalized, and potentially harmful. Content that would previously have required significant production resources — including realistic depictions of people, manipulative messaging, and exploitative material — can now be generated cheaply and in volume.
Without robust age gating, platforms and digital services have limited ability to prevent AI-generated material from reaching minors.
Regulators are responding with concrete action, not merely signals. Three developments in the U.S. and EU are particularly significant for businesses operating in this space.
The updated COPPA Rule, which came into effect in June 2025, explicitly requires separate, verifiable parental consent before a child’s personal data can be used to train or develop AI systems.
This is not a general principle; it is a specific, enforceable obligation with a compliance deadline of April 22, 2026. Companies that collect data from children and feed it into AI models without separate parental consent are already in violation.
In September 2025, the FTC launched a formal inquiry under Section 6(b) of the FTC Act into the impacts of AI chatbot companions on children and teens. This is a direct signal that the agency views AI-mediated interactions with minors as a priority enforcement area, not merely a future concern.
The EU AI Act requires that AI-generated content, including deepfakes, must be clearly disclosed and labeled, ensuring that users — particularly minors — are aware of its artificial nature.
The Act also explicitly recognizes children as a distinct vulnerable group requiring specialized protection. The European Parliament has additionally proposed an EU-wide minimum age of 16 for access to AI companions, reflecting the speed at which AI-specific child protection obligations are developing.
Social Media, Mental Health, and Political Pressure
The link between social media use and harm to young people — including anxiety, depression, disordered eating, and exposure to self-harm content — has become one of the most politically charged topics in digital regulation.
Platforms have faced congressional hearings, state attorney general investigations, and a wave of civil litigation that explicitly frames their practices in terms of the tobacco industry’s history of concealing known harms. That comparison is no longer merely rhetorical, as the March 2026 New Mexico jury verdict against Meta demonstrated.
This political environment makes age verification a cross-party priority. As Usercentrics’ analysis of child protection online notes, protecting children online is one of the few policy areas that commands broad bipartisan support.
Legislative and regulatory responses have been sustained across successive governments and across jurisdictions. Given the pace of enforcement activity in 2025 and 2026, there is little indication that momentum is slowing.
The Attention Economy Under Scrutiny
The behavioral design practices that underpin much of the modern internet — including infinite scroll, push notifications, algorithmically curated content feeds, and dark patterns in consent and purchase flows — are now explicitly regulated in several jurisdictions.
- The DSA in the EU requires platforms to assess and mitigate the risks posed by addictive design features, including infinite scroll, autoplay, and push notifications, to the mental health and wellbeing of minor users.
- The UK Children’s Code requires protective defaults for all minor users.
- Connecticut prohibits features designed to significantly increase a minor’s engagement, such as infinite scroll.
- New York’s SAFE for Kids Act requires verifiable parental consent before serving algorithmically curated feeds to users under 18 and bans notifications to children between midnight and 6 a.m.
- Vermont’s Age-Appropriate Design Code Act, signed into law in June 2025, prohibits features that promote compulsive use, including autoplay and late-night push notifications.
- Arkansas has gone further still, prohibiting platforms from using designs or algorithms that the platform knows, or should know, causes a minor to develop an eating disorder, attempt suicide, or sustain a social media addiction.
The regulatory direction is clear, and the conversation has shifted from “How do we verify age perfectly?” to “How do we design digital experiences that adapt responsibly to age?”
Age verification is the gateway, but the obligations that follow identification of a minor are expanding rapidly, and the design of the experience itself is increasingly the subject of legal scrutiny.
Third-Party Data Ecosystems
Most digital services do not operate in isolation. They rely on networks of analytics providers, advertising technology, and data management tools, many of which collect their own data from site visitors.
Under COPPA, ad networks and third-party technology providers are required to comply with age-related restrictions when websites notify them that their service is directed at, or knowingly accessed by, children. Platforms cannot simply outsource responsibility for COPPA compliance to their technology vendors.
The 2025 COPPA amendments have made this more demanding. Companies must now obtain separate, verifiable parental consent before sharing a child’s personal data with third parties for advertising, analytics, or AI training purposes.
This is not bundled into a single general consent; it requires a distinct, specific authorization. For businesses running standard tag-based marketing and analytics stacks, the implication is significant: without controls that automatically block or restrict third-party data flows when a minor is identified, compliance is not achievable through an age gate alone.
It is not sufficient simply to place an age gate on a website. The gate must be connected to the systems that govern what data is collected, processed, and shared, which in practice means a consent management platform that integrates age verification with data collection controls. For minors, most of that processing must be blocked or restricted automatically, and the two functions need to operate from a single source of truth.
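To make that concrete, the sketch below shows the shape of such an integration. It is illustrative only (all identifiers are hypothetical; this is not the Usercentrics API), but it captures the core idea: the age gate and the data collection layer read from one shared state, so identifying a minor automatically shuts off downstream data flows.

```typescript
// Illustrative sketch only: an age gate wired into data collection controls.
// All identifiers are hypothetical; a real CMP integration replaces this.

type AgeStatus = "unknown" | "adult" | "minor";
type Purpose = "advertising" | "analytics" | "ai-training";

// Single source of truth: the gate writes here, everything else reads here.
let ageStatus: AgeStatus = "unknown";
const allowedPurposes = new Set<Purpose>();

function onAgeVerified(isMinor: boolean): void {
  ageStatus = isMinor ? "minor" : "adult";
  if (isMinor) {
    // Minor identified: every third-party purpose stays blocked, automatically.
    allowedPurposes.clear();
  }
  // For adults, purposes are enabled only later, through the consent banner.
}

// The tag loader consults the same state before any vendor call fires.
function mayFire(purpose: Purpose): boolean {
  return ageStatus === "adult" && allowedPurposes.has(purpose);
}

onAgeVerified(true);                  // a minor is identified
console.log(mayFire("advertising")); // false: nothing fires, nothing is shared
```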
Usercentrics Age Verification Gate addresses this directly. Built natively into the Usercentrics consent management platform, it ensures that the age check and the consent flow are operating as a single system and not as two separate tools that need to be connected.
Age thresholds, redirect behavior for underage users, data collection controls, and audit logging are all configured and managed from the same Admin Interface already in use for privacy compliance.
What Effective Age Verification Looks Like
Regulators have become increasingly clear about what they expect from age verification. Good intentions and a self-reported date of birth are not sufficient.
Ofcom, the UK’s online safety regulator, has explicitly stated that self-declaration of age does not meet the standard of “highly effective” age assurance required under the Online Safety Act.
The FTC and state regulators have taken a similar position in the U.S. context. An effective, compliant age verification process shares several characteristics:
It operates before any data collection begins
Nothing should be collected, processed, or shared before a user’s age status is confirmed. An age gate that loads tracking scripts before the check is complete does not satisfy regulatory requirements. Regulators have made clear they will look at what the technology is doing, not just whether a gate exists.
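A minimal sketch of this ordering is shown below, assuming a simple self-declaration gate with hypothetical "age-yes" and "age-no" buttons. Self-declaration alone will not satisfy stricter regimes, as Ofcom’s position makes clear, but the load-order principle is the same for stronger verification methods: no vendor script is injected until the check resolves.

```typescript
// Illustrative sketch: vendor scripts are injected only after the age
// check resolves. Script URLs and element IDs are hypothetical.

const TRACKING_SCRIPTS = [
  "https://analytics.example/tag.js",
  "https://ads.example/pixel.js",
];

function runAgeGate(): Promise<boolean> {
  return new Promise((resolve) => {
    document.getElementById("age-yes")
      ?.addEventListener("click", () => resolve(true), { once: true });
    document.getElementById("age-no")
      ?.addEventListener("click", () => resolve(false), { once: true });
  });
}

async function init(): Promise<void> {
  const isOfAge = await runAgeGate(); // nothing below runs until this settles

  if (!isOfAge) {
    window.location.assign("/underage"); // redirect; the scripts never load
    return;
  }

  // Only now, after a passed check, are vendor scripts added to the page.
  for (const src of TRACKING_SCRIPTS) {
    const script = document.createElement("script");
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
  }
}

init();
```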
It is configurable by jurisdiction and by obligation type
Given the patchwork of U.S. state laws and international requirements, age thresholds and associated consent obligations must be adaptable. What applies in California may differ from what applies in Florida, Georgia, or under GDPR-K in Germany.
Even within a single state, different thresholds can trigger different obligations. In California, for instance, COPPA’s threshold for children under 13 and CPRA’s threshold for children under 16 each carry distinct requirements.
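One way to manage this is configuration rather than code: a per-jurisdiction map of thresholds and the obligations they trigger. The sketch below is a hypothetical illustration using the thresholds cited in this article; verify every value against current law before relying on it.

```typescript
// Hypothetical structure: jurisdiction-specific age thresholds and the
// obligations they trigger, kept in one configurable map.

interface AgeRule {
  threshold: number; // the rule applies to users below this age
  obligation: string;
}

const RULES: Record<string, AgeRule[]> = {
  "US-federal": [
    { threshold: 13, obligation: "verifiable parental consent (COPPA)" },
  ],
  "US-CA": [
    { threshold: 13, obligation: "verifiable parental consent (COPPA)" },
    { threshold: 16, obligation: "opt-in before sale/sharing of data (CPRA)" },
    { threshold: 18, obligation: "age-appropriate design duties (CAADC)" },
  ],
  DE: [
    { threshold: 16, obligation: "parental consent for processing (GDPR-K)" },
  ],
  UK: [
    { threshold: 18, obligation: "highly effective age assurance (Online Safety Act)" },
  ],
};

function obligationsFor(jurisdiction: string, age: number): string[] {
  return (RULES[jurisdiction] ?? RULES["US-federal"])
    .filter((rule) => age < rule.threshold)
    .map((rule) => rule.obligation);
}

console.log(obligationsFor("US-CA", 15));
// -> ["opt-in before sale/sharing of data (CPRA)",
//     "age-appropriate design duties (CAADC)"]
```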
It is auditable
Regulators expect documentation. Every age verification event should be logged, timestamped, and available on request. In an enforcement investigation, the ability to demonstrate that a system was functioning correctly, and that minors were identified and protected, can be the difference between a manageable outcome and a significant penalty.
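The sketch below illustrates the kind of record that supports this, with hypothetical field names. Note the design constraint: the log should capture the event and its outcome without storing more personal data than the audit trail requires.

```typescript
// Hypothetical audit record for a single age verification event.
// Log the event and outcome, not extra personal data.

interface AgeVerificationEvent {
  timestamp: string;            // ISO 8601
  sessionId: string;            // pseudonymous reference, not an identity
  jurisdiction: string;
  method: string;               // e.g. "self-declaration", "document-check"
  outcome: "passed" | "failed";
  protectionsApplied: string[]; // what the system did as a result
}

function logAgeCheck(
  event: Omit<AgeVerificationEvent, "timestamp">,
): AgeVerificationEvent {
  const record: AgeVerificationEvent = {
    timestamp: new Date().toISOString(),
    ...event,
  };
  // In production this would go to append-only, tamper-evident storage.
  console.log(JSON.stringify(record));
  return record;
}

logAgeCheck({
  sessionId: "a1b2c3",
  jurisdiction: "US-CA",
  method: "self-declaration",
  outcome: "failed",
  protectionsApplied: ["block-data-collection", "do-not-sell-or-share"],
});
```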
It automatically applies appropriate protections
Identifying a minor is only the first step. Once a minor is detected, data collection must be blocked, sale and sharing of data must be prevented, and applicable rights — including “Do Not Sell or Share” under CPRA — must be applied automatically.
Under the 2025 COPPA amendments, this includes blocking the sharing of children’s data with third-party advertising, analytics, and AI systems without separate parental consent.
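A hypothetical sketch of these defaults, reflecting the separate-consent requirement described above: each sharing purpose starts blocked and can only be unblocked by its own verifiable parental consent, never by one bundled agreement.

```typescript
// Hypothetical defaults applied the moment a minor is identified.
// Each sharing purpose needs its own verifiable parental consent;
// a single bundled "I agree" does not satisfy the 2025 COPPA rule.

type SharingPurpose = "advertising" | "analytics" | "ai-training";

interface MinorProtections {
  dataCollectionBlocked: boolean;
  doNotSellOrShare: boolean; // CPRA right, applied without being requested
  parentalConsentByPurpose: Record<SharingPurpose, boolean>;
}

function protectionsForMinor(): MinorProtections {
  return {
    dataCollectionBlocked: true,
    doNotSellOrShare: true,
    parentalConsentByPurpose: {
      advertising: false,
      analytics: false,
      "ai-training": false,
    },
  };
}

function mayShare(p: MinorProtections, purpose: SharingPurpose): boolean {
  // Sharing is allowed only with separate, purpose-specific consent.
  return p.parentalConsentByPurpose[purpose];
}

const protections = protectionsForMinor();
console.log(mayShare(protections, "advertising")); // false
```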
It is integrated, not bolted on
A standalone age verification tool that operates separately from a business’s consent management infrastructure creates gaps. Age status and consent must be managed from a single source of truth. Otherwise, the age gate and the data collection layer can fall out of sync, leaving the business exposed even when the gate itself is functioning.
How Usercentrics Age Verification Gate Supports Compliance
Usercentrics Age Verification Gate is built directly into the Usercentrics Consent Management Platform, which already enables compliance with federal regulations like COPPA and state laws like California’s CPRA, along with those of many other states.
When a visitor arrives at a site, the Age Verification Gate appears before any content is accessed or data collected. Visitors who confirm they meet the age threshold proceed to the standard consent banner as usual.
Visitors identified as minors are automatically redirected to a page configured by the business, with data collection blocked and applicable rights — including opt-out and deletion rights under CPRA and COPPA — applied immediately.
Everything is managed from within the existing Admin Interface: age thresholds, redirect behavior, branding, and audit logs. No separate tools, no additional integrations, no engineering resources required for configuration.
