In November 2024, Australia’s Parliament passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024. The new law has drawn international attention as a serious attempt to address the harms children face from using social media platforms. We look at the Amendment’s scope, which social media platforms are affected, how companies need to comply, and what the potential penalties are.
What is the Australian Online Safety Amendment?
The Online Safety Amendment amends the Online Safety Act 2021. It creates specific requirements for children’s access to social media platforms, the most notable being a ban on children in Australia under age 16 holding accounts on these platforms. Companies operating affected social media platforms will be expected to introduce and enforce age gating to prevent children from using them.
A number of international laws already restrict access to social platforms for children under age 13, with age verification required, though it is often circumvented. Thirteen is also the age at which many data privacy laws begin to treat individuals as adults, or place them in an intermediate band covering ages 13 to 16. Typically, from that age onward, where consent is required it must be obtained directly from the individual rather than from a parent or guardian.
The Online Safety Amendment dovetails with broader data privacy law in that it includes specific privacy protections, including limits on the use and retention of personal data collected from children. Noncompliance penalties are also substantial.
Why was the Australian Online Safety Amendment introduced?
The bill was introduced to address ongoing concerns about the impacts of children’s access to social media platforms. The impacts of exposure to social media at a critical developmental stage are not yet fully understood, so the full potential for harm is not yet known. Studies have already shown negative impacts on children’s and teens’ mental health, and there have been criminal cases involving predators accessing and manipulating children through social platforms.
Children’s activities online also aren’t always well monitored or carefully limited, and the prevalence of Wi-Fi and mobile devices makes access to social platforms ever easier.
What has the reaction been to the Australian social media ban for children?
The law has been controversial in some circles. Not unexpectedly, reviews have been mixed, with debate over whether the law goes too far, does not go far enough, or misses the mark in its intent. Critics make a variety of claims, including:
- the law may introduce new risks and cause more harm
- the law’s scope and exclusions are insufficient or incorrectly targeted
- children’s autonomy is compromised
- children are digitally savvy and will easily find ways around the ban
- opportunities for learning and growth will be stifled
- significant burdens will be levied on social media platforms to create and manage age restrictions
Perhaps ironically, observers have noted that requiring social media platforms to collect and use potentially sensitive personal information from children to verify age and enforce the law’s requirements may itself create greater risks to children’s privacy and safety, as well as to compliance with other privacy laws.
To accompany the new law and its requirements, Australia’s eSafety Commissioner has provided content and services for educators, parents, and others, targeting the topic of children’s online safety, and how to support children’s safe activities online. This touches on an important point: that it will take a variety of measures, from legal to educational to parental, to safely manage children’s use of digital social spaces.
Who has to comply with the Australian Social Media Minimum Age law?
Certain social media platforms, designated “age-restricted social media platforms”, are required to self-regulate under the law. They must take “reasonable steps” to prevent Australian children under age 16 (“age-restricted users”) from creating or using accounts or other profiles where potential harms are considered likely to occur.
Children under the age of 13 are required to be explicitly excluded in the platforms’ terms of service to remove any ambiguity about at what age it is appropriate to start using social media.
The eSafety Commissioner will be responsible for writing guidelines on the “reasonable steps” that the affected age-restricted social media platforms are required to take. The new law does not include in its text specifics like what age estimation or verification technology may be used or what the reasonable steps guidelines will include.
The law’s text does not explicitly reference any current social platforms, as popular ones tend to change over time. However, in the explanatory memorandum, the government noted that the law is intended to apply to companies like Snapchat and Facebook (parent company Meta), rather than companies offering services like messaging, online gaming, or services primarily aimed at education or health support, with Google Classroom or YouTube given as examples.
However, such distinctions can be tricky, as a number of social media platforms that would likely be included do also enable functions like messaging and gaming, for example. There are legislative rules that can set out additional coverage conditions or specific electronic services that the law includes or exempts.
Businesses have one year from the passage of the Social Media Minimum Age bill to comply, so enforcement will likely begin in or after November 2025.
What measures do companies need to take to comply with the Online Safety Amendment?
Social media platforms that meet the Amendment’s inclusion requirements will need to implement or bolster functions on their platforms to verify user age and prevent children under 16 from creating or maintaining accounts. Presumably, the platforms will also need to purge existing accounts belonging to children. The law does not specify what technology should be used or how age should be verified, as this changes over time.
The definition of a user on these social media platforms involves being an account holder who is logged in, so children who are not logged in to accounts can continue to access content or products on these platforms if available. As platforms cannot collect nearly as much user data from those who are not logged in, many significantly limit functionality for individuals who are not logged-in account holders.
Other existing privacy law requirements dovetail with the Online Safety Amendment’s requirements, particularly those of the Privacy Act 1988. For example, covered platforms can only use collected personal data for the purpose of compliance unless its use is explicitly permitted under the Privacy Act or informed and voluntary user consent is obtained. This information must then be de-identified or destroyed after its use for that specific purpose.
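In engineering terms, this purpose limitation suggests a data-minimization pattern: retain only the outcome of an age check and destroy the sensitive evidence once the check completes. The sketch below is a hypothetical illustration of that pattern; the record structure and function names are not drawn from the law or any real system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgeCheckRecord:
    """What the platform retains after an age check.

    Only the outcome and a timestamp are kept; the evidence used for
    verification (e.g. a document scan) is destroyed once the check
    completes, reflecting a purpose-limitation approach.
    """
    user_id: str
    over_16: bool
    checked_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def complete_age_check(user_id: str, over_16: bool,
                       raw_evidence: dict) -> AgeCheckRecord:
    """Record the verification outcome, then destroy the raw input."""
    record = AgeCheckRecord(user_id=user_id, over_16=over_16)
    raw_evidence.clear()  # destroy the sensitive input once its purpose is served
    return record
```

The design choice here is that downstream systems only ever see a boolean outcome, never the underlying identity document, which limits both breach exposure and the scope of what must later be destroyed.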
What other legal actions address children’s use of social media?
Around the world there are a number of laws that address children’s privacy and online activities, though they are broader and don’t explicitly target social media use; some of them predate the relevant platforms’ existence.
Additionally, broader data privacy laws, like Australia’s Privacy Act 1988, are relevant, and many include specific and stringent requirements for accessing and handling children’s data, as well as consent requirements.
In the United Kingdom there is the U.K. Online Safety Act, which has a section dedicated to “Age-appropriate experiences for children online.”
In the EU, there is the Digital Services Act (DSA), which covers a wide range of digital intermediary services. Its strictest obligations are aimed at “very large online platforms” (VLOPs) and very large online search engines (VLOSEs), and the list of designated VLOPs includes social media platforms. The DSA imposes strict requirements to address the risks these platforms’ operation poses to consumers, and aims to protect and enhance individuals’ rights, particularly relating to data privacy, including those of minors.
In the United States, the Children’s Online Privacy Protection Act (COPPA) has been in effect since 2000, though revised several times by the Federal Trade Commission, and aims to protect children under age 13 and their personal information. COPPA is broader, however, and not focused only on social media platforms, though they are covered under its requirements.
There are efforts to introduce substantial legislative updates, referred to as “COPPA 2.0”, which would further modernize the law, including raising the compliance age from 13 to 16 to protect more children. It would also include more stringent requirements for operators of social platforms if there are reasonable expectations that children under 16 use the platform.
At present, compliance is only required if there are known children under 13 using the services. Insisting that they don’t know for sure if children use the platforms is a common excuse to avoid compliance requirements, though children’s presence on social media platforms is widely known.
A number of companies have been charged with COPPA violations, including Epic Games, maker of the popular video game Fortnite, and the video app TikTok (parent company ByteDance). Interestingly, in early November 2024, the Canadian government ordered that TikTok’s Canadian operations be shut down due to security risks, which the company is appealing. The order will not likely affect consumer use of the app, however.
What are the penalties for violating the Australian Online Safety Amendment?
The Privacy Act applies to compliance and penalties as well, as violations of the Amendment will be considered “an interference with the privacy of the individual” for the purposes of the Privacy Act. The Information Commissioner will manage enforcement of the Social Media Minimum Age law, and noncompliance fines will be up to 30,000 “penalty units”. At the penalty unit value of AUD 330 that took effect in late 2024, this equals AUD 9.9 million, and with the multiplier applied to bodies corporate, maximum fines can reach AUD 49.5 million.
A penalty unit is a standardized way to calculate fines: the current value of a single penalty unit, which is set by Commonwealth legislation and regularly updated to reflect inflation, is multiplied by the number of penalty units assigned to the offence.
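The calculation itself is simple multiplication. This sketch assumes a penalty unit value of AUD 330 (the Commonwealth value from late 2024; the current figure should be confirmed against the gazetted value) and a fivefold multiplier for bodies corporate, which is an assumption about how maximum corporate fines are derived:

```python
PENALTY_UNIT_AUD = 330    # assumed Commonwealth penalty unit value, late 2024
MAX_UNITS = 30_000        # maximum penalty units under the Amendment
CORPORATE_MULTIPLIER = 5  # assumed multiplier for bodies corporate

# fine = penalty unit value x number of units assigned to the offence
base_fine = MAX_UNITS * PENALTY_UNIT_AUD
corporate_fine = base_fine * CORPORATE_MULTIPLIER

print(f"Base maximum: AUD {base_fine:,}")            # AUD 9,900,000
print(f"Corporate maximum: AUD {corporate_fine:,}")  # AUD 49,500,000
```

Because the unit value is indexed, the dollar maximum rises over time even though the 30,000-unit cap in the law stays fixed.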
The Information Commissioner will also hold additional powers for information gathering and the ability to notify a social media platform and publicly release information if it’s determined the platform has violated the law.
Independent review of the law is required within two years of it coming into effect, so by November 2026.
The future of online data privacy on social media platforms
Australia’s Online Safety Amendment has significant implications for data privacy and children’s autonomy as governments, educators, and parents — in that country and around the world — struggle to balance children’s use of social media to enable connection, education, and entertainment while keeping them safe from misinformation and abuse.
The Social Media Minimum Age law places strict requirements on relevant platforms to implement age verification, prevent children from creating or holding accounts, and ensure the security of the sensitive information required for these verifications. The penalties for failing to adequately enforce this ban are steep, and compliance won’t be easy given how fast technologies change and how savvy many children are online. The amendment may well require its own amendments in a relatively short period of time.
There will be a lot of attention over the next two years on how this law rolls out and what works and doesn’t to fulfill requirements. The required report after the first two years should also prove illuminating, and provide guidance for other countries considering similar measures, or looking to update existing data privacy legislation to better protect children.
Companies implementing best practices for data privacy compliance and protection of users of websites, apps, social media platforms, and more should ensure they are well versed in relevant (and overlapping) laws, including specific requirements for special groups like children.
They should consult qualified legal counsel about obligations, and IT specialists about the latest technologies to meet their needs. They should also invest in well integrated tools, like a consent management platform, to collect valid consent for data use where relevant and inform users about data handling and their rights.