Meta’s Opening Statement — Joint Select Committee on Social Media and Australian Society (4th September, 2024)

Meta Policy ANZ
Meta Australia Policy Blog
Sep 11, 2024

On 4 September, Meta’s Global Head of Safety Antigone Davis and Regional Director of Policy (ANZ/Korea/Japan/Pacific Islands) Mia Garlick appeared before the Australian Parliament’s Joint Select Committee on Social Media and Australian Society. See the full opening statement from Antigone Davis below.

Opening statement

Thank you, Chair and Deputy Chair, for the opportunity to appear before you again. At the end of our last hearing, you asked that our next appearance be in person, and I’m grateful for the Committee’s flexibility in accommodating my travel schedule so I could be here to answer your questions today.

As the Committee considers recommendations regarding online safety and well-being, I’d like to provide context on the ongoing global conversations surrounding youth online protection, particularly age assurance, that may be informative to your work.

Before turning to these, I want to recap the approach we take on Facebook and Instagram to promote a safe and age-appropriate experience on our services.

Investing in safety is critical to our long-term sustainability. People will not use our services, and advertisers will not spend their money on our platforms, if we fail in this regard.

Over the years we have steadily increased our investment in proactive detection technology to find and remove problematic content before people report it to us. Today, for categories of content like nudity, terrorism, or child exploitation, well over 95% of the content we remove is taken down proactively.

In addition to these scaled solutions, we have also steadily built tools that allow people to customise their experience to match their individual needs and preferences, such as our comment filtering tools and our “see less of this content” feature.

We want people, especially young people, to navigate the online world safely and confidently when they use our services. To do this and create an environment where young people can engage safely, all of this work is informed by consultations with experts in child development, mental health and safety.

As part of our effort to provide age-appropriate experiences we take a multi-layered approach to understanding someone’s age:

  • We require users to provide their date of birth when they register a new account, using a tool called an age screen. Those who enter an age under 13 are not allowed to sign up. The age screen is age-neutral; in other words, the options offered do not assume that someone is old enough to use our service, and we restrict people who repeatedly try to enter different birthdays into the age screen.
  • We’ve been investing in and developing AI age estimation tools. While this technology is evolving and is far from perfect, it plays an important role in the safeguards we provide, and we continue to work to improve its efficacy.
  • In addition to AI, we have teams reviewing reported accounts that appear to be used by people who are underage, and our reviewers can also place checks on accounts that appear underage in the course of content review. If these people are unable to prove they meet our minimum age requirements, we delete their accounts.
  • Where we require age verification (for example, if a teen tries to age up once they are on our apps), we’ve developed an industry-first menu of options to verify age. It allows users to do so by submitting ID documents or by uploading a video selfie for face-based age prediction through Yoti, a third-party vendor based in the UK that provides privacy-preserving age estimation services.

Our focus on age assurance enables us to provide specific protections for young users (13–18). These safeguards include:

  • Defaulting new teen accounts to private.
  • Preventing adults over 18 from starting a private conversation on Instagram DM and Messenger with a teen who is not connected with them.
  • Limiting the discoverability of a teen’s account.
  • Age gating sensitive content that we may allow on our platform for its public interest, newsworthiness or free expression value, but that may be disturbing or sensitive for younger users.
  • In addition to removing content that violates our policies, we also aim to make it harder for teens to see potentially sensitive content. For example, we have a sensitive content control that’s defaulted to the most restrictive setting for people under the age of 18.
  • We also limit advertisers’ ability to reach young people. We only allow advertisers to target ads to people under 18 based on their age and geography.

We regularly consult with experts in adolescent development, psychology and mental health to help make our platforms safe and age-appropriate for young people, including improving our understanding of which types of content may be less appropriate for teens.

We understand the importance of getting local feedback and insights, which is why we partner with a range of child safety and mental health organisations in Australia. For example, in November 2023, we partnered with the Australian Federal Police-led Australian Centre to Counter Child Exploitation, Kids Helpline, and US-based organisation NoFiltr (Thorn) to inform young people about sextortion.

This country is blessed with some of the best youth safety and mental health organisations, which is why we partner with many of them not just on projects here in Australia but also globally. For example, PROJECT ROCKIT is one of 11 organisations globally on our Safety Advisory Council, and we’ve just engaged Orygen to update our global suicide and self-injury checkpoint.

We also know that teens move interchangeably between many websites and apps. The average teenager uses dozens of applications on their phone, in some cases 40 apps or more. Many of these apps have different standards or safety features, which are constantly changing or gaining new features, making it challenging for parents and guardians to keep up. Only by creating industry-wide protections will teens actually be safer.

We believe there is a better way to implement legislation, one that will create simple, efficient ways for parents to oversee their teens’ online experiences: an OS and app store approach to age verification.

Parents should be able to approve their teen’s app downloads, and we support legislation that requires app stores to get parents’ approval whenever a teen under 16 downloads an app. With this solution, when a teen wants to download an app, app stores would be required to notify their parents, much as parents are notified when their teen attempts to make a purchase. Parents can decide if they want to approve the download. They can also verify the age of their teen when setting up their phone, negating the need for everyone to verify their age multiple times across multiple apps.

This way, parents can oversee and approve their teen’s online activity in one place. They can ensure their teens are not accessing adult content or apps, or apps they just don’t want their teens to use. And where apps like ours offer age-appropriate features, settings and parental controls, we can better ensure parents and teens use them. This also reduces the privacy concerns some may have about each individual app requesting an ID to verify a user’s age.

Snap, TikTok and Match have all made public statements supporting this kind of approach — as have we. Additionally, the International Centre for Missing and Exploited Children is leading a campaign in support of this approach, has offered model legislation, and has been urging industry in this direction.

We believe the best way to help support parents and young people is a simple, industry-wide solution where all apps are held to the same, consistent standard.

But we aren’t waiting: we continue to launch new tools on Instagram and Facebook that provide greater controls for parents. These include allowing parents to set time limits, see who their teen is following and who is following their teen, and giving them the option to see when their teen reports something to us.

We are actively exploring how to ensure these tools are used and to advance age-appropriate experiences on our services, and we welcome the opportunity to engage constructively in the online safety reforms and discussions happening in Australia at present.

We’re committed to protecting young people from abuse on our services. We’ll continue working with parents, experts and industry peers across the world, and with the Australian Government, to try to improve child safety, not just on our services but across the internet as a whole.

I look forward to your questions.
