Age Verification in the Data Protection Landscape: A Major Problem for Protecting Minors?
Tuesday 16th May 2023
Protecting children online is at the forefront of regulatory agendas across the globe.
In 2023 alone we have seen:
- TikTok fined £12.7m for illegally processing child personal data,
- ChatGPT being temporarily banned in Italy for (amongst other data privacy issues) child safety concerns, and
- this month, the Federal Trade Commission accusing Meta of failing to implement appropriate parental controls on Instagram and Facebook.
We are seeing more regulatory scrutiny when it comes to protecting children online, particularly in the UK, with the adoption of the Children’s Code (sometimes known as the age appropriate design code), which explains how the UK GDPR applies where children under 18 use digital services. In addition, the Online Safety Bill, when it becomes law, will place stronger obligations on online service providers to protect children from illegal content.
It is worth noting that the Children’s Code applies both to services offered directly to children and to services that children are likely to access, even where children are not the provider’s target audience.
Age verification is an important part of data protection compliance. Although there are multiple ways to verify the age of users of online services, compliance rates in this area remain low.
It is a legal requirement in the UK that those offering services take steps to verify the age of users with a level of certainty appropriate to the risks posed by the provider’s processing activities. The law recognises that children need special protection when their data is collected, because they are less aware of the risks associated with companies processing their data. Verification can also prevent children from accessing content that is not age appropriate.
Furthermore, there are myriad issues with processing children’s personal data to fulfil a contract, or on the basis of consent, given that the age of digital consent across EU member states ranges from 13 to 16.
Methods of age verification range in rigour, including:
- self-declaration (usually confirming date of birth),
- access to mobile phone records,
- third-party verification partners,
- checking of ID documents, and
- biometric data processing.
In summary, where online services are directed at children, age appropriate design, privacy notices and consent (or parental consent) must be considered at the point of design. Where services are not aimed at children, providers must take reasonable steps to verify the age of users so as not to expose children to inappropriate content.
Any business operating in the technology sector should carefully consider age appropriate design and regulations surrounding the protection of children before making their services available.