The New York SAFE for Kids Act Faces a Legal Minefield

Jess Miers
Chamber of Progress
6 min read · Mar 15, 2024

The New York SAFE for Kids Act (S7694/A8148) emerges as the latest legislative endeavor aimed at safeguarding minors from the purported perils of social media. Predicated on the unfounded belief that platforms with content curation mechanisms are intrinsically addictive, the bill overlooks the sophisticated and protective role of algorithms in moderating the user experience.

However, beneath its protective veneer lies a deeper issue: the bill’s provisions not only challenge the essence of innovation but also raise significant constitutional questions, setting the stage for yet another legal battle that could render it obsolete.

The Bill Runs Head First Into the First Amendment

The New York SAFE for Kids Act ventures into contentious legal territory by privileging chronological feeds over personalized algorithmic curation of content. This stance rests on the flawed assumption that non-curated feeds are inherently safer, overlooking the proven benefits of algorithmic curation in tailoring online experiences to individual users. Such a directive fundamentally challenges the essence of editorial discretion protected under the First Amendment.

Indeed, the courts have consistently held that any legislation affecting the design or operation of platforms — thus influencing the nature and dissemination of speech — demands rigorous constitutional scrutiny.

As highlighted by recent decisions in NetChoice v. Bonta, NetChoice v. Griffin, and NetChoice v. Yost, laws imposing governmental control over speech access, particularly under the pretext of protecting minors, encounter a formidable obstacle in justifying their constitutionality under strict scrutiny.

For instance, the NetChoice v. Bonta decision critically noted:

“[T]he Act’s restrictions on the functionality of the services limit the availability and use of information by certain speakers and for certain purposes and thus regulate[s] protected speech.”

Echoing this sentiment, the NetChoice v. Griffin court observed:

“If the State’s purpose is to restrict access to constitutionally protected speech based on the State’s belief that such speech is harmful to minors, then arguably Act 689 would be subject to strict scrutiny.”

Similarly, in NetChoice v. Yost:

“As the [Supreme] Court explained, ‘[s]uch laws do not enforce parental authority over children’s speech and religion; they impose governmental authority, subject only to a parental veto.’ The Act appears to be exactly that sort of law. And like other content-based regulations, these sorts of laws are subject to strict scrutiny.”

Despite lawmakers' attempts to dodge First Amendment challenges, these recent decisions underscore a fundamental principle: algorithmic curation, as a conduit for speech delivery and user engagement, is safeguarded by the First Amendment. Legislation attempting to prescribe specific content delivery methods, by favoring one type of feed over another, intrudes upon the domain of constitutionally protected speech.

The Bill’s Age Verification Requirement is Likely Unconstitutional

Age verification mandates also introduce significant legal and practical challenges, mirroring the issues we previously flagged in similar legislative efforts in Connecticut and New Jersey. This requirement obligates social media platforms to implement “commercially reasonable methods” for age verification, effectively forcing them to amass a trove of sensitive user data. This not only heightens the risk of security breaches but also encroaches upon users’ rights to receive information.

The collection of detailed personal information necessary for age verification exposes users and platforms alike to heightened risks of cyberattacks, with personal and biometric data becoming prime targets for malicious actors. Additionally, it opens the door to potential misuse of this information by government agencies in jurisdictions with stringent regulatory frameworks, further compromising user privacy.

The constitutionality of such age verification requirements has also been consistently questioned in the recent NetChoice litigation, which highlights the role of age verification in restricting access to information that users are legally entitled to receive.

The reluctance of adults and teenagers to comply with these invasive measures is well-founded, as demonstrated by the legal pushback against California’s Age Appropriate Design Code. The Court’s injunction against the Code criticized it for exacerbating privacy and security issues rather than mitigating them, through the compelled collection of additional personal data:

“Based on the materials before the Court, the CAADCA’s age estimation provision appears not only unlikely to materially alleviate the harm of insufficient data and privacy protections for children, but actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information.” — NetChoice, LLC v. Bonta, No. 22-cv-08861-BLF (N.D. Cal. Sep. 18, 2023).

In a similar vein, the Arkansas Court underscored how age verification significantly impedes adults’ access to constitutionally protected speech, deterring engagement with online platforms:

“Requiring adult users to produce state-approved documentation to prove their age and/or submit to biometric age-verification testing imposes significant burdens on adult access to constitutionally protected speech and ‘discourage[s] users from accessing [the regulated] sites.’” (citing Reno v. ACLU).

These findings reveal a profound clash between age verification mandates and the core tenets of free speech and privacy, presenting yet another formidable legal obstacle for New York’s proposal, which follows in the footsteps of failed attempts in other states to implement similar measures.

Teens Have First Amendment Rights Too

The bill increases access barriers for users under 18, undermining their critical need for online support and community. FTC Commissioner Alvaro Bedoya spotlighted the significant chilling effect this legislation could have on teenagers seeking essential support for mental or physical health issues through social media communities like r/stopdrinking, r/mentalhealth, r/EatingDisorders, and r/AuntieNetwork. These platforms often serve as vital lifelines, and for New York’s youth, the bill risks cutting off these crucial connections.

This concern is not hypothetical but grounded in legal findings. The Court in NetChoice v. Bonta acknowledged these risks, particularly for LGBTQ+ youth in hostile environments, emphasizing the challenge they may face in accessing vital information online:

“LGBTQ+ youth — especially those in more hostile environments who turn to the internet for community and information — may have a more difficult time finding resources regarding their personal health, gender identity, and sexual orientation.”

The bill also fails to recognize the established rights of minors. As reinforced by the Supreme Court in Brown v. Entertainment Merchants Association, minors possess a First Amendment right to access legally protected information, independent of parental or governmental oversight. This principle is echoed by the NetChoice rulings, which challenge the constitutionality of mandating parental consent for accessing such information. For instance, in NetChoice v. Yost, the court observed:

“Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children.”

And in NetChoice v. Griffin, the Court highlighted the practical difficulties parental consent mandates impose:

“[I]t is likely that once Act 689 goes into effect, the companies will err on the side of caution and require detailed proof of the parental relationship. As a result, parents and guardians who otherwise would have freely given consent to open an account will be dissuaded by the red tape and refuse consent — which will unnecessarily burden minors’ access to constitutionally protected speech.”

These recent decisions also demonstrate that trade associations like NetChoice are not only authorized to represent their member companies but also possess the legal standing to advocate for the rights of minors using these platforms.

This development serves as a potent warning to New York lawmakers: if they fail to safeguard the rights of younger users, others are prepared and legally empowered to step in.

In sum, New York’s SAFE for Kids Act faces both legal and practical challenges.

In its zeal to shield young New Yorkers from the digital boogeyman, the bill is set to bulldoze the very advancements and safeguards that have rendered the online sphere safer for kids. What emerges is not protection, but a perplexing scenario where the Legislature snatches away vital digital lifelines and educational tools, leaving its youth digitally stranded.

With that, New York is about to join the hall of fame for legislative folly alongside Arkansas, California, and Ohio. Their approach isn’t just misguided; it’s an exercise in futility, with a touch of legislative arrogance that presumes to know better than the very individuals it purports to protect.

Jess Miers
Senior Counsel, Legal Advocacy at Chamber of Progress