Instagram tests AI and other age verification tools
Instagram is testing new ways to verify the age of people using the service, including an artificial intelligence tool for facial scanning, having mutual friends verify their age, or uploading an ID.
But the tools won’t be used, at least not yet, to block kids from the popular photo- and video-sharing app. The current test only checks whether a user is 18 or older.
The use of face-scanning AI, especially on teens, set off some alarm bells on Thursday, given Instagram parent Meta’s checkered history in protecting user privacy. Meta stressed that the technology used to verify people’s age cannot recognize a person’s identity, only their age. Meta said both it and Yoti, the AI contractor it partnered with to perform the scans, will delete the video once the age verification is complete.
Meta, which owns both Facebook and Instagram, said that beginning Thursday, anyone who tries to edit their date of birth on Instagram from under 18 to 18 or older will be required to verify their age using one of these methods.
Meta continues to face questions about the negative effects of its products, especially Instagram, on some teens.
Children must technically be at least 13 to use Instagram, similar to other social media platforms. But some get around this by lying about their age or having a parent do it. Meanwhile, teens ages 13 to 17 have additional restrictions on their accounts — for example, adults they’re not connected to can’t message them — until they turn 18.
Using uploaded IDs is not new, but the other two options are. “We give people several options to verify their age and see what works best,” said Erica Finkle, Meta’s director of data governance and public policy.
To use the face scan option, a user must upload a video selfie. That video is sent to Yoti, a London-based startup that uses people’s facial features to estimate their age. Finkle said Meta isn’t yet trying to use the technology to find children under 13 because it doesn’t keep data on that age group, which would be needed to train the AI system properly. But if Yoti estimates that a user is too young for Instagram, they will be asked to prove their age or have their account deleted, she said.
“It never uniquely recognizes anyone,” said Julie Dawson, Yoti’s chief policy and regulatory officer. “And the image will be deleted immediately once we’ve done it.”
Yoti is one of several biometric companies benefiting from a push in the UK and Europe for stronger age-verification technology to prevent children from accessing pornography, dating apps, and other internet content intended for adults, not to mention bottles of alcohol and other off-limits items in brick-and-mortar stores.
Yoti has worked with several major UK supermarkets on facial scanning cameras at self-scan checkouts. It has also started verifying the ages of users of Yubo, a youth-oriented French video chat room app.
While Instagram is likely to deliver on its promise to delete an applicant’s facial images and not use them to recognize individual faces, the normalization of facial scanning raises other societal concerns, said Daragh Murray, a senior lecturer at the University of Essex School of Law.
“It’s problematic because there are a lot of known biases when trying to identify by things like age or gender,” Murray said. “You’re essentially looking at a stereotype; people are just different.”
A 2019 study by a US agency, the National Institute of Standards and Technology, found that facial recognition technology often performs unevenly based on a person’s race, gender, or age, with higher error rates for the youngest and oldest people. There is no such benchmark yet for facial analysis that estimates age. Still, Yoti’s published analysis of its results reveals a similar trend, with slightly higher error rates for women and people with darker skin.
Meta’s facial scanning move differs from what some of its tech competitors are doing. Microsoft said Tuesday it would stop providing its customers with facial analysis tools that “purport to infer” emotional states and identity attributes such as age or gender, citing concerns about “stereotyping, discrimination, or unfair denial of services.”
Meta itself announced last year that it would shut down Facebook’s facial recognition system and delete the faceprints of more than 1 billion people after years of scrutiny from courts and regulators. But it indicated then that it would not completely abandon facial analysis, shifting away from the broad tagging of social media photos that popularized the commercial use of facial recognition toward “narrower forms of personal authentication.”