Australia’s online watchdog has accused the world’s biggest social platforms of failing to adequately implement the country’s ban on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting prohibited users to make repeated attempts at age verification and insufficient safeguards to stop new account creation. In its first compliance assessment since the ban came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Exposed in First Large-scale Review
Australia’s eSafety Commissioner has outlined a concerning pattern of non-compliance amongst the world’s most prominent social media platforms in her first formal review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively neglected to establish sufficient safeguards to stop minors from using their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification systems, highlighting that some platforms have permitted children who originally declared themselves to be under 16 to subsequently claim they were older, effectively circumventing the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has stressed that simply showing some children still maintain accounts is inadequate; platforms must instead furnish substantive proof that they have put in place comprehensive systems and procedures designed to stop under-16s from creating accounts in the first place. This shift reflects the government’s commitment to holding tech giants accountable, with potential penalties looming for companies that fail to meet their statutory obligations.
- Enabling formerly prohibited users to re-verify their age and restore account access
- Permitting multiple tries at the identical verification process with no repercussions
- Insufficient safeguards to prevent accounts for under-16s from being established
- Inadequate notification systems for parents and the general public
- Lack of transparent data about regulatory measures and account deletions
The Scope of the Challenge
The sheer scale of social media usage amongst young Australians underscores the regulatory challenge confronting both the authorities and the platforms themselves. With millions of accounts already restricted or removed since the ban’s implementation, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s conclusions suggest that the technical and procedural obstacles to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from fraudulent ones. This complexity has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are fit for purpose.
Beyond the technical obstacles lies a wider question about the readiness of companies to prioritise compliance over user growth. Social media companies have consistently opposed strict identity verification requirements, citing data protection concerns and the real difficulty of verifying age digitally. However, the Commissioner’s report suggests that some platforms may not be investing adequately in the infrastructure the law mandates. The shift towards active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance infrastructure, or they risk substantial fines that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Numbers Reveal
In the first month after the ban’s introduction, Australian regulators reported that 4.7 million accounts had been suspended or deleted. Whilst this number initially appeared to demonstrate enforcement effectiveness, later review reveals a more layered picture. The sheer volume of account takedowns indicates that many under-16s had managed to establish accounts in the first place, demonstrating that preventive controls were inadequate. Additionally, the data raises questions about whether suspended accounts represent genuine enforcement or merely users closing their accounts voluntarily in light of the new restrictions.
The minimal transparency concerning these figures has troubled independent observers trying to determine the ban’s genuine effectiveness. Platforms have revealed scant detail about their implementation approaches, effectiveness metrics, or the profile of deleted accounts. This opacity makes it challenging for regulators and the wider public to determine whether the ban is working as intended or whether younger users are simply finding alternative ways to access social media. The Commissioner’s insistence on comprehensive proof of consistent enforcement practices reflects growing frustration with platforms’ reluctance to provide full information.
Sector Reaction and Pushback
The social media giants have addressed the regulator’s enforcement action with a combination of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, stressed its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination continues to be a significant industry-wide challenge. The company has called for a different approach, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This position demonstrates broader industry concerns that the existing regulatory system puts an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, stating that it had suspended 450,000 accounts following the ban’s implementation and that it continues to lock more daily. However, industry observers question whether such figures reflect genuine compliance or merely reactive account management. The core conflict between platforms’ business models, which historically relied on maximising user engagement and growth, and the regulatory requirement to actively exclude an entire age demographic remains unresolved. Companies have long resisted stringent age verification, pointing to privacy issues and technical constraints, creating a standoff between authorities and platforms over who carries responsibility for enforcement.
- Meta maintains age verification ought to take place at app store level instead of on individual platforms
- Snap says it has locked 450,000 user accounts following the ban’s implementation in December
- Industry groups point to privacy concerns and technical challenges as impediments to effective age verification
- Platforms maintain they are making their best effort whilst challenging the ban’s general effectiveness
Wider Questions Regarding the Prohibition’s Impact
As Australia’s under-16 online platform ban moves into its implementation stage, key concerns persist about whether the law will accomplish its intended goals or merely push young users towards unregulated platforms. The regulatory authority’s initial compliance assessment reveals that following implementation, substantial gaps remain—children continue finding ways to bypass age verification mechanisms, and platforms have struggled to prevent new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory vigilance but on whether young people will truly leave major social networks or simply shift towards alternative services, secure messaging apps, or virtual private networks designed to mask their age and location.
The ban’s worldwide implications add to the complexity of assessing its success. Countries such as the United Kingdom, Canada, and several European nations are observing Australia’s initiative closely, exploring similar regulatory measures for their own citizens. If the ban proves ineffective at reducing children’s online activity or cannot protect them from damaging material, it could undermine the case for similar measures elsewhere. Conversely, if enforcement becomes sufficiently rigorous to genuinely restrict underage usage, it may inspire other governments to pursue similar approaches. The outcome will potentially determine international regulatory direction for the foreseeable future, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health advocates and organisations focused on child safety have endorsed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms built to maximise engagement could reduce anxiety, enhance sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also removes legitimate uses of social media for young people: maintaining friendships, obtaining educational material, and participating in online communities around shared interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s practical impact extends beyond individual users to influence content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have followed creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing are cut off from younger demographic audiences. Community groups, charities, and educational organisations find it difficult to engage young people through channels they previously employed effectively. Meanwhile, the ban inadvertently advantages large technology companies with resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects go well past the simple goal of child protection.
What Follows for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a significant shift from passive monitoring to active enforcement, marking a critical turning point in the implementation of the age restriction. The regulator will now compile information to determine whether platforms have taken “reasonable steps” to restrict access by children, a regulatory requirement that extends beyond simply noting that young people remain on these systems. This approach requires demonstrable proof that companies have introduced suitable mechanisms and procedures designed to exclude minors. The enforcement team has signalled it will pursue investigations methodically, building cases that could trigger significant fines for non-compliance. This transition from oversight to action reflects increasing dissatisfaction with the companies’ present approach and signals that voluntary engagement alone is insufficient.
The enforcement stage raises critical questions about the sufficiency of sanctions and the concrete procedures for maintaining corporate accountability. Australia’s statutory provisions offer enforcement instruments, but their efficacy hinges on the eSafety Commissioner’s readiness to undertake formal action and the platforms’ capacity to respond substantively. International observers, notably regulators in Britain and Europe, will closely monitor Australia’s enforcement strategy and outcomes. A robust enforcement effort could create a template for other countries contemplating comparable restrictions, whilst failure might undermine the overall legislative structure. The next phase will determine whether Australia’s pioneering statutory framework delivers genuine protection for under-16s or remains largely symbolic in its effect.
