Policing childhood online: Will age-gated social media actually protect the next generation?

As Australia implements its social media age restriction law, which requires platforms like TikTok, Instagram, and YouTube to take “reasonable steps” to block Australians under 16 from creating or keeping accounts, the world is watching to see how it will play out. The ban aims to protect youth from harms like cyberbullying and mental health risks, though children can still view public content and use messaging apps, and platforms face hefty fines for non-compliance.

Is age-gating social media the answer to protect the next generation?

Australia isn’t the only one. In May, Texas signed into law a bill that will require Apple and Alphabet’s Google to verify the age of users of their app stores, highlighting the debate over whether and how to regulate smartphone use by children and teenagers. The law requires parental consent for users under 18 to download apps or make in-app purchases. The US state of Utah passed a similar law last year, and US lawmakers have also introduced a federal bill.

Meta platforms Facebook and Instagram, TikTok, and YouTube are facing courtroom scrutiny over allegations that their platforms are fueling a youth mental health crisis.

Read more: Smart but depressed or dumb but happy: The Internet offers the red pill-blue pill choice

TikTok already faces regulatory pressure in Europe to better identify and remove accounts belonging to children under 13. Consequently, the ByteDance-owned platform will start rolling out new age-detection technology across Europe in the coming weeks.

Britain, too, is considering tightening rules on children’s use of social media. Prime Minister Keir Starmer has warned that children are at risk of being pulled into “a world of endless scrolling, anxiety and comparison.”

And now, Spain has banned social media for under-16s.

Social media, while connecting the world, also harbors hidden dangers. From spreading misinformation and fueling mental health issues to enabling cyberbullying and data exploitation, its dark side impacts individuals and society alike. Understanding these risks is crucial to using digital platforms responsibly and safeguarding personal well-being and public discourse.

Mental health is one of the crucial areas impacted by social media. Last year, seven French families filed a lawsuit against social media giant TikTok, accusing the platform of exposing their adolescent children to harmful content that led to two of them taking their own lives at 15.

Is it Working?

In the case of the Australian ban, the responsibility lies entirely with the social media platforms to detect and deactivate underage accounts, with fines of up to A$49.5 million if they fail to take reasonable steps to block underage users. This is probably why platforms like Google’s YouTube, Instagram owner Meta, and other social media firms have agreed to comply, even as Reddit filed a lawsuit in Australia’s highest court seeking to overturn the ban, arguing that it interferes with the constitutionally protected freedom of political communication.

But perhaps this is the only way it would work: force social media giants to check themselves, or hurt them financially. In California, a jury recently heard allegations that Meta Platforms and YouTube deliberately designed products they knew would be addictive for children, made by a lawyer for a woman suing the two companies at a trial that will test whether Big Tech platforms can be held liable for their app design.

Many platforms say they don’t mean to expose children to harmful content. In July last year, OnlyFans, an internet content subscription service widely known for being popular with sex workers who produce pornography, vowed that it vets every user and all content to keep children off its porn-driven platform. Yet, according to a Reuters investigation, hundreds of sexually explicit videos and images of minors, from toddlers to teens, appeared on the website. If we want such websites to do a better job, perhaps we have to compel them.

Enter AI

With AI entering the mix, teen minds face even more risk. According to one study, AI-generated cyberbullying messages can be just as harmful as human-written ones, or even more so. And even though platforms like Gemini ostensibly have age-appropriate protections in place, it doesn’t take much to bypass them and get a chatbot talking dirty to you.

There has been a growing chorus of official voices across the world condemning the surge in nonconsensual imagery on Elon Musk’s social media site X. This month, the European Commission said that the images of undressed women and children being shared across the platform were unlawful and appalling.

How Real is the Safety of this Ban?

Australia’s sweeping social media age restriction law marks a turning point in how governments are willing to intervene in the digital lives of young people. As similar measures emerge across the US and Europe, the global debate is not just about whether children should be protected online, but how and at what cost to privacy, free expression, and platform accountability.

Read more: Social media, the Frankenstein we must learn to live with

Yet regulation alone is not a silver bullet. The repeated gap between platform promises and real-world enforcement, the rise of AI-amplified harm, and the persistence of abusive and non-consensual content all point to a deeper problem: safety cannot be outsourced entirely to algorithms or fines. As the world watches Australia’s experiment unfold, the real test will be whether governments, tech companies, and society can move beyond reactive bans toward building digital spaces where child safety is designed in, not patched on after harm has already occurred.

Navanwita Bora Sachdev

Navanwita is the editor of The Tech Panda who also frequently publishes stories in news outlets such as The Indian Express, Entrepreneur India, and The Business Standard
