What is presented as child protection is, in practice, the foundation of a nationwide identity and surveillance infrastructure that creates new risks for the very children it claims to protect.
Reassuring Language With Concerning Implications
Australia’s new social media minimum age law comes into effect on 10 December 2025. Platforms will be required to take “reasonable steps” to prevent anyone under 16 from holding an account. On paper this is framed as a child-safety measure. In practice it establishes the foundations for identity checks, biometric scanning and verification systems that extend far beyond teenagers and far beyond social media.
Government fact sheets claim that Australians will not be forced to use a government ID or Digital ID to prove their age. The wording sounds reassuring. Yet the law still requires platforms to verify age, and every practical way of doing so (document checks, face scans, or inference from account data) involves handling identity or biometric information, making identity verification unavoidable in practice. And like all verification infrastructure, once it exists, it seldom remains limited to its initial purpose.
The pattern is familiar: a system presented as a narrow safety measure becomes the entry point for broader monitoring infrastructure. Whether applied to vehicle movement or online identity, the justification is the same, and the long-term consequences reach far beyond the stated goal.
A Disturbing Pattern Is Undeniable
When we connect the elements, a clear pattern emerges:
- 📱 Social media age verification laws.
- 📸 Biometric age estimation.
- 🧾 Digital ID legislation and identity wallets.
- 🚦 City-wide ANPR tracking networks.
- 🗂️ Gradual integration of government and private verification systems.
None of these systems stand alone. Together they form a lattice that can be tightened over time. Optional today. Required tomorrow. We have seen this pattern before, where rules marketed as voluntary become mandatory in practice.
Perth’s Sudden Spike in ANPR Cameras: A Case Study in Quiet Infrastructure Expansion
Across Perth, hundreds of new Automatic Number Plate Recognition (ANPR) cameras have appeared with little public explanation. Each device logs a passing vehicle’s plate, location and timestamp. The primary justification offered in older reporting was that the cameras help protect children or locate “people at risk”. That explanation simply does not match the scale of the network being deployed today.
Mass ANPR deployment is the type of foundation needed for:
- 🚗 “15-minute city” style movement boundaries.
- 💸 ULEZ-style zone charging and automatic fines.
- 🛰️ Real-time behavioural tracking across an entire metropolitan area (sketched after this list).
- 📡 Future integration with identity-linked travel permissions or compliance systems.
When the stated purpose does not match the scale of the infrastructure, public scrutiny becomes not only reasonable but necessary.
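To see why, consider how little processing separates raw camera reads from a full movement profile. The sketch below is illustrative only: the record format, camera locations and plate are invented, but plate, location and timestamp are the minimum every ANPR read carries.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative only: a simplified ANPR read. Real systems log more fields,
# but plate, camera location and timestamp are the minimum every read carries.
@dataclass
class PlateRead:
    plate: str
    camera_location: str
    seen_at: datetime

def movement_profile(reads: list[PlateRead], plate: str) -> list[tuple[str, datetime]]:
    """Reconstruct one vehicle's day by sorting its reads into time order."""
    hits = sorted((r for r in reads if r.plate == plate), key=lambda r: r.seen_at)
    return [(r.camera_location, r.seen_at) for r in hits]

# Hypothetical reads from three cameras across one morning:
reads = [
    PlateRead("1ABC123", "Kwinana Fwy / Canning Hwy", datetime(2025, 11, 3, 8, 41)),
    PlateRead("1ABC123", "Mitchell Fwy / Hutton St", datetime(2025, 11, 3, 8, 12)),
    PlateRead("1ABC123", "School zone, Applecross", datetime(2025, 11, 3, 8, 55)),
]
for location, seen_at in movement_profile(reads, "1ABC123"):
    print(f"{seen_at:%H:%M}  {location}")
```

Once reads from hundreds of cameras flow into a single database, a handful of lines like these is all it takes to turn “child safety” hardware into a behavioural tracking system.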
Biometric Age Assurance: A Concerning Shift in Online Norms
The emerging global trend for age assurance includes:
- 📸 Real-time face scans to estimate age.
- 🔍 Frequent re-checks for anyone who appears close to the age threshold.
- 🔗 Possible future matching against government databases.
- 🗄️ A shift toward storing face images for verification history.
This means that forums, gaming communities and hobby sites may eventually be required to:
- 👤 Request photos from users before they can post.
- 🛂 Run automated AI checks to decide whether someone “looks under 16”.
- 💾 Store biometric evidence to prove compliance.
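To make that concrete, here is a hypothetical sketch of such a compliance flow. Every name in it is invented: `estimate_age()` stands in for a third-party vendor’s model, and no real platform’s API is referenced.

```python
import hashlib
from datetime import datetime, timezone

AGE_THRESHOLD = 16   # the Australian minimum
RECHECK_MARGIN = 2   # re-scan anyone the model places near the threshold

def estimate_age(face_image: bytes) -> float:
    # Stand-in for a vendor's neural network. This dummy derives a number
    # from the image bytes purely so the sketch runs end to end.
    return float(10 + hashlib.sha256(face_image).digest()[0] % 40)

def gate_user(face_image: bytes, evidence_store: dict) -> str:
    estimated = estimate_age(face_image)
    # The part that rarely makes the marketing copy: to prove compliance,
    # the platform retains evidence of the check, meaning something derived
    # from (or simply being) the user's face.
    evidence_store[hashlib.sha256(face_image).hexdigest()] = {
        "estimated_age": estimated,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "image_retained": True,  # "deleted after verification" is a policy, not a guarantee
    }
    if estimated < AGE_THRESHOLD:
        return "blocked"
    if estimated < AGE_THRESHOLD + RECHECK_MARGIN:
        return "allowed_pending_recheck"  # looks "borderline", so scan again later
    return "allowed"

evidence: dict = {}
print(gate_user(b"raw camera frame bytes", evidence))
print(evidence)
```

Every branch of that function requires the platform to have seen the user’s face first, and the evidence store is what quietly turns a one-off check into a biometric database.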
Normalising AI face scanning and photo submission as a condition of participating online is a major erosion of civil liberties. It undermines anonymity, suppresses vulnerable voices, and concentrates biometric data in systems that are difficult to oversee.
Digital ID Is Emerging Everywhere, But the Justifications Never Match
Digital identity systems are being introduced across multiple countries in lockstep. Each country frames Digital ID as the solution to a different problem, but the systems being built are structurally identical.
- 🇦🇺 Australia says it is needed to protect children online.
- 🇬🇧 The United Kingdom says it is to stop illegal immigration.
- 🇨🇦 Canada says it will improve digital government services.
- 🇪🇺 The European Union says it will support secure digital wallets.
- 🌍 The World Health Organisation frames it as necessary for global health certification.
- 💼 The World Economic Forum calls it part of trusted digital ecosystems.
Different narratives. Same infrastructure.
When identical systems are introduced for entirely different reasons, the stated reasons are unlikely to be the real drivers.
Child Safety as a Justification for Expanding Surveillance Infrastructure
Families already have the tools to manage what their children can access online. Every major operating system includes parental controls. Routers can enforce site blocks. Parents can configure devices directly and set reasonable boundaries.
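As one small example of how simple these existing controls are: every major operating system can block sites at the device level through the hosts file, and most routers offer the same idea through DNS filtering. The sketch below is illustrative; the blocked domains are placeholders, and applying the output requires administrator rights.

```python
# Generate hosts-file entries that send blocked domains nowhere.
# The hosts file lives at /etc/hosts on macOS/Linux and
# C:\Windows\System32\drivers\etc\hosts on Windows.
BLOCKLIST = [
    "social-site.example",      # placeholder domains: substitute the
    "www.social-site.example",  # services your family has decided to block
]

def hosts_entries(domains: list[str]) -> str:
    """Map each blocked domain to 0.0.0.0 so lookups resolve to nothing."""
    return "\n".join(f"0.0.0.0 {d}" for d in domains)

if __name__ == "__main__":
    # Print rather than write, so a parent can review before applying.
    print(hosts_entries(BLOCKLIST))
```

None of this requires anyone to hand over an ID or a face scan.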
I recently published a guide on this exact topic. It shows how effective practical measures can be when parents are empowered instead of sidelined.
If the true motivation were child safety, we would focus on:
- 👪 Digital literacy for parents and children.
- 🔒 Device and network controls that already work.
- 🛡️ Better moderation and safer design by platforms.
Instead we are normalising identity checks, face scans, and nationwide surveillance infrastructure.
A Better Way to Respond When Platforms Demand ID or Face Scans
Cybersecurity educator and parent Family I.T. Guy makes a critical point: when a platform demands ID or a facial scan, most people treat it as a hurdle to get past so their child can keep using the service. It feels like an inconvenience to solve, when in reality it is a decision point.
Here is his walkthrough explaining how these verification demands expose families to real-world risk:
As he explains, these systems create large databases of passports, driver’s licences, and children’s biometric data, and those databases are repeatedly breached. Discord’s age-verification vendor was hacked, exposing 70,000 IDs, including children’s documents. Roblox is now requiring facial age estimation or government ID for chat, sending that data to a third-party processor. Promises that these images are deleted have already been broken.
This friction point is not something to bypass. It is a moment to stop and ask:
- ❓ Why does this platform require this level of access?
- ⚖️ Is the service worth trading biometric data or government ID?
- 🧩 What does this normalise for my child?
- 🚫 Is ‘no’ the healthier and safer choice?
You are allowed to say no. No platform is entitled to your child’s face or your family’s identity documents. Safety features that require biometric surrender are not safety; they are data collection pipelines. Your family’s safety does not come from trusting platforms or governments with permanent sensitive data. It comes from deliberate choices you make at home about what risks you are willing to accept.
The request for ID or a face scan is not a step in the signup process. It is a red flag. Treat it as a pause point to reassess whether the service is worth the cost.
A Child-Safety System That Puts Children at Risk
Australia’s social media ban for under 16s may be presented as a child protection measure. But the mechanisms required to enforce it create the architecture of a broad identity and surveillance system that applies to every adult in the country. And once these systems exist, they seldom remain limited in scope.
More importantly, they do not make children safer. They create large, permanent databases of passports, driver’s licences, and biometric data that will be breached, copied, sold, or misused. A child whose face or identity document is leaked cannot simply reset their identity. Systems built in the name of child safety are, in practice, exposing children to long-term, irreversible risk.
The question is not whether we should keep children safe online. The question is whether the price should be a society where constant verification, face scanning, and mass tracking become normal parts of everyday life, and where the data we are told will protect children may be the very thing that puts them in danger.