Australia’s internet regulator has accused the world’s biggest social platforms of failing to properly enforce the country’s prohibition on under-16s accessing their services, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to prevent new accounts being created. In its first compliance assessment since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Uncovered in First Major Review
Australia’s eSafety Commissioner has detailed a troubling pattern of non-compliance among the world’s biggest social media platforms in her inaugural review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively neglected to establish appropriate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about structural gaps in age verification systems, noting that some platforms have permitted children who originally declared themselves to be under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings represent a significant escalation in regulatory action, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has made clear that simply showing some children still hold accounts is insufficient; platforms must instead furnish substantive proof that they have established robust systems and processes designed to prevent under-16s from creating accounts in the first place. This shift reflects the government’s commitment to holding tech giants accountable, with potential penalties looming for companies that fail to meet the legal requirements.
- Allowing previously banned users to re-verify their age and regain account access
- Allowing repeated attempts at the same verification process without consequence
- Weak mechanisms to prevent under-16s from creating new accounts
- Insufficient notification systems for parents and members of the public
- Absence of publicly available information about enforcement efforts and user account terminations
The Magnitude of the Problem
The substantial scale of social media activity amongst Australian young people highlights the compliance challenge facing both the government and the platforms in question. With millions of accounts already removed or restricted since the implementation of the ban, the figures paint a picture of extensive early non-compliance. The eSafety Commissioner’s conclusions indicate that the operational and technical barriers to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to distinguish genuine age declarations from false claims. This complexity has left enforcement authorities grappling with the core question of whether current age verification technologies are fit for purpose.
Beyond the technical obstacles lies a broader concern about the readiness of companies to prioritise compliance over user growth. Social media companies have consistently opposed stringent age verification measures, citing privacy concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be investing adequately in the legally mandated infrastructure. The move to active enforcement represents a critical juncture: either platforms will substantially upgrade their compliance infrastructure, or they stand to incur significant penalties that could transform their operations in Australia and potentially influence compliance frameworks internationally.
What the Statistics Demonstrate
In the opening month after the ban’s introduction, Australian regulators reported that 4.7 million accounts had been limited or deleted. Whilst this figure initially appeared to show regulatory success, further investigation reveals a more complex picture. The substantial number of account takedowns suggests that many under-16s had managed to establish accounts in the initial stages, indicating that preventative measures were insufficient. Moreover, the data casts doubt on whether suspended accounts constitute authentic compliance or simply reflect users voluntarily closing their accounts in response to the new restrictions.
The minimal transparency regarding these figures has disappointed independent observers attempting to evaluate the ban’s true effectiveness. Platforms have revealed scant details about their enforcement methodologies, success rates, or the profile of suspended accounts. This opacity makes it challenging for regulators and the wider public to assess whether the ban is working as intended or whether young people are merely discovering other methods to access social media. The Commissioner’s insistence on detailed evidence of structured adherence protocols reflects mounting dissatisfaction with platforms’ reluctance to provide comprehensive data.
Sector Reaction and Opposition
The social media giants have responded to the regulatory enforcement measures with a mixture of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which runs Facebook and Instagram, emphasised its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination remains a significant industry-wide challenge. The company has called for a different approach, proposing that robust age verification and parental approval mechanisms put in place at the app store level would be more effective than platform-level enforcement. This stance reflects wider concerns across the industry that the existing regulatory system puts an impractical burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, stating that it had suspended 450,000 accounts since the ban took effect and asserting it continues to suspend additional accounts each day. However, sector analysts dispute whether such figures reflect authentic adherence or merely reactive account management. The core conflict between platforms’ business models—which historically relied on maximising user engagement and growth—and the regulatory requirement to systematically remove an entire age demographic remains unresolved. Companies have long resisted stringent age verification, citing privacy issues and technical constraints, creating an impasse between regulators and platforms over who bears responsibility for enforcement.
- Meta contends age verification should occur at app store level rather than on individual platforms
- Snap claims to have suspended 450,000 accounts since the ban’s implementation in December
- Industry groups cite privacy concerns and technical obstacles as impediments to effective age verification
- Platforms contend they are making their best effort whilst questioning the ban’s general effectiveness
Wider Considerations About the Ban’s Effectiveness
As Australia’s under-16 online platform ban enters its implementation stage, fundamental questions remain about whether the legislation will accomplish its intended goals or merely push young users towards less regulated platforms. The regulator’s first compliance report reveals that despite months of implementation, significant loopholes exist—children continue finding ways to circumvent age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards other platforms, encrypted messaging applications, or VPNs that conceal their location.
The ban’s international ramifications add to the complexity of assessing its impact. Countries including the United Kingdom, Canada, and multiple European nations are watching Australia’s approach closely, evaluating similar regulatory measures for their own populations. If the ban fails to reduce children’s digital engagement or does not protect them from dangerous online content, it could undermine the case for comparable regulations elsewhere. Conversely, if enforcement becomes sufficiently rigorous to truly restrict underage participation, it may inspire other nations to implement similar strategies. The outcome will probably shape global regulatory trends for years to come, ensuring Australia’s implementation efforts will be scrutinised far beyond its borders.
Who Gains and Who Loses
Mental health campaigners and child safety organisations have endorsed the ban as an essential measure to counter algorithmic manipulation and exposure to harmful content. Parents and educators argue that removing young Australians from platforms designed to maximise engagement could reduce anxiety, improve sleep patterns, and limit exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also eliminates legitimate uses of social media for young people—maintaining friendships, accessing educational content, and participating in online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families challenge.
The ban’s practical impact extends beyond individual users to content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now encounter legal barriers to participation. Small Australian businesses that rely on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously employed effectively. Meanwhile, the ban inadvertently benefits large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Happens Next for Enforcement
Australia’s eSafety Commissioner has signalled a notable transition from passive monitoring to direct intervention, marking a critical turning point in the enforcement of the youth access prohibition. The regulator will now compile information to determine whether platforms have neglected to implement “reasonable steps” to block minors from using their services, a legal standard that goes further than simply recording that young people remain on these systems. This approach demands demonstrable proof that companies have implemented suitable mechanisms and protocols meant to keep out minors. The enforcement team has stated it will launch investigations methodically, building cases that could result in significant fines for breaches of the requirements. This move from observation to action reflects growing frustration with the platforms’ current efforts and signals that voluntary cooperation by itself is insufficient.
The enforcement phase raises significant questions about the adequacy of fines and the practical mechanisms for ensuring platform accountability. Australia’s statutory provisions provide compliance mechanisms, but their success depends on the eSafety Commissioner’s willingness to pursue formal action and the platforms’ ability to adapt substantively. International observers, particularly regulators in Britain and Europe, will closely monitor Australia’s regulatory approach and its results. A robust enforcement effort could create a blueprint for other countries considering comparable restrictions, whilst failure might compromise the entire regulatory framework. The forthcoming period will prove crucial in determining whether Australia’s innovative statutory framework translates into substantive protection for teenagers or becomes largely performative in its impact.
