Australia To Block Under-16s From Social Media
The Australian government argues that social media’s design is dragging kids into unhealthy screen time
Ever wondered how long you could survive without Instagram or Facebook? Tough to picture, right? For many teens, and plenty of adults too, social media isn't just a hobby; it's part of everyday life. Hours vanish as we scroll through endless videos and posts. Now, Australia is stepping in for under-16s, rolling out new rules to help young people spend less time glued to their screens. From December 10, social media platforms in Australia will be required to block kids under 16 from creating accounts and to deactivate any that already exist.
The Australian government argues that social media’s design is dragging kids into unhealthy screen time and exposing them to content that can affect their wellbeing. That concern is driving this unprecedented ban, a step that’s won over a lot of parents. A government-backed study from earlier this year revealed just how deep the problem runs: Nearly every kid aged 10 to 15 is on social media, and most of them have already run into disturbing content. That includes everything from misogynistic posts and violent videos to material encouraging eating disorders and even suicide.
Australia has already named the platforms that will fall under the ban. Apart from Instagram and Facebook, the list includes other big names such as TikTok, Threads, X, Snapchat, YouTube, and Reddit. Australia says this list isn't final, and more platforms could be added. The regulator will judge each one by a few key factors: Is the app built primarily for social interaction? Does it let users engage with one another? And can people upload their own posts? Some platforms are exempt, such as YouTube Kids, Google Classroom, and WhatsApp.
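To make those three factors concrete, here is a minimal Python sketch of how such a scope test could be expressed. The `Platform` fields and the `likely_in_scope` function are illustrative assumptions, not the regulator's actual assessment process.

```python
# Hypothetical sketch of the three factors described above.
# The field names and decision logic are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    built_for_social_interaction: bool   # is social interaction the app's core purpose?
    users_can_engage_with_others: bool   # e.g. comments, messaging, follows
    users_can_upload_own_posts: bool     # users publish their own content

def likely_in_scope(p: Platform) -> bool:
    """Return True only if the platform meets all three factors named in the rules."""
    return (
        p.built_for_social_interaction
        and p.users_can_engage_with_others
        and p.users_can_upload_own_posts
    )

# Example: a classroom tool fails the "core purpose" test, so it would stay out of scope.
print(likely_in_scope(Platform("ExampleSocialApp", True, True, True)))   # True
print(likely_in_scope(Platform("ExampleClassroom", False, True, True)))  # False
```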
So how will this ban actually work? The rules won't target kids or their parents; the pressure is squarely on the tech giants. If platforms fail to keep under-16s out, they could be hit with penalties of up to A$49.5 million. The government says it's up to these companies to take practical steps to block young users with robust age-verification tools, even though it isn't mandating which ones. Possible tools include ID verification, facial or voice recognition, and systems that estimate age automatically. The government says platforms should layer multiple checks, and that self-declared ages or parental sign-offs alone will no longer be acceptable.
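As a rough illustration of what "layering multiple checks" might look like, here is a short, hypothetical Python sketch. The signal names, the two-year margin on estimated age, and the `passes_age_gate` function are assumptions for illustration; no platform's actual verification pipeline is described in the article.

```python
# Hypothetical sketch of combining several age signals instead of trusting
# a self-declared birthday alone. Thresholds and signal names are illustrative.
from typing import Optional

def passes_age_gate(
    id_verified_age: Optional[int],     # age from a verified ID document, if provided
    estimated_age: Optional[float],     # age inferred by an estimation model, if run
    self_declared_age: Optional[int],   # what the user typed in at sign-up
    minimum_age: int = 16,
) -> bool:
    """Allow the account only when a strong signal clears the minimum age.

    A self-declared age on its own is never sufficient in this sketch,
    mirroring the stated position that self-declaration alone won't count.
    """
    if id_verified_age is not None:
        return id_verified_age >= minimum_age
    if estimated_age is not None:
        # Require a margin above the threshold to absorb estimation error.
        return estimated_age >= minimum_age + 2
    # Only a self-declared age is available: not enough on its own.
    return False

print(passes_age_gate(id_verified_age=17, estimated_age=None, self_declared_age=17))   # True
print(passes_age_gate(id_verified_age=None, estimated_age=None, self_declared_age=18)) # False
```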
It's currently difficult to tell whether this ban will be effective. With no clarity on which age-checking tools platforms will adopt, critics worry the system might backfire, locking out the wrong people while missing the underage users it's meant to stop. They also worry about how much data these checks will require platforms to collect and store, and what could happen if that information is mishandled.