The Australian government has begun talks this week with executives from major social media platforms to discuss the implementation of its upcoming “under-16 social media ban.” Communications Minister Anika Wells said the government is meeting with representatives from Meta, Snapchat, YouTube, and TikTok to outline how the world-first age restriction policy will be enforced. Wells emphasized that platforms must “actively cooperate with the government” to ensure their systems comply with the new law.
The ban, set to take effect on December 10, aims to protect children from harmful and manipulative algorithms. Wells stated that while social media has its benefits, “harm to children should never be tolerated,” urging platforms to work with eSafety on both technical and administrative adjustments. Although Elon Musk’s X (formerly Twitter) is not part of this week’s meetings, it is scheduled to hold discussions with the government in November.
Reactions from social media companies have been mixed. Many expressed concern that the policy is being implemented too hastily, arguing there has not been enough time to develop practical solutions. However, the government's Age Assurance Technology Trial report found that Australia already has multiple privacy-preserving and effective methods for verifying users' ages. It acknowledged, though, that no single method can address every situation, meaning platforms may need to combine several tools to achieve compliance. Violators could face fines of up to AUD 49.5 million.
Under the new rules, platforms will be required to take "reasonable steps" to detect and deactivate underage accounts and to prevent re-registration. They cannot rely solely on self-declared ages and must continuously monitor and improve their verification systems. However, they are not required to verify every user's age, nor may they demand government ID as the only accepted form of proof. Platforms will not retain personal verification data; required records will instead cover their systems and processes.
The government stressed that the core goal of the law is to protect children’s online safety while avoiding unnecessary burdens on platforms and users.