State of the Net 2023: Takeaways on Trust & Safety, AI, Cybersecurity, Privacy and Infrastructure Investments

The Foundry
Mar 21, 2023 · 9 min read


By Daniela Guzman Peña, Ekene Chuks-Okeke, Khristal Thomas

Disclaimer: Any opinions expressed in this publication are those of the authors. They do not purport to reflect the opinions or views of the authors’ employers or affiliated organizations.

On March 6, 2023, industry and government stakeholders attended the 19th annual State of the Net conference, the largest annual information tech policy conference in the U.S. If you have been having a hard time keeping up with the fast pace of recent developments in internet law and policy, read our recap to quickly get up to speed!

Topics covered in this post: Transparency and Trust & Safety, Cybersecurity, AI Governance, Children’s Privacy, and Infrastructure Investments.

CONTENT MODERATION AND TRUST & SAFETY

The Trust & Safety industry is facing emerging government legislation, a wave of court proceedings, and an increasingly loud public debate over how internet platforms should make content moderation decisions. Social media companies have absorbed powers that were once held only by governments: private platforms that host user-generated content can now promote free speech or limit access to information.

In a discussion moderated by Ashkhen Kazaryan, Senior Fellow at Stand Together, experts from across the industry discussed how transparency reporting for online platforms could create a more accountable internet for users, but not without its limitations. The panel also discussed the efficacy of independent watchdogs on influencing platform policies, the way internet regulations and company policies are integrating an international human rights framework, and the potential and pitfalls of leveraging AI solutions for content moderation.

Transparency is a means to an end

As the EU rolls out transparency reporting requirements for all online intermediaries through the Digital Services Act, the question is whether this kind of regulation will actually create a more accountable digital public sphere. Rebecca MacKinnon, Vice President of Global Advocacy at the Wikimedia Foundation, emphasized that transparency is a means to an end, where the end should be protecting human rights. She highlighted that transparency reports should convey why platforms limit certain information, why certain content is amplified, and what companies do with user data. She also noted that transparency reporting shouldn’t just hold companies accountable to users; it should also hold governments accountable to citizens. Reporting on government requests for content take-downs and user data helps make the internet a more democratic place.

The “Supreme Court” of Content Decisions

David Ryan Polgar, founder of the non-profit organization All Tech Is Human, highlighted the evolution of society’s perception of social media from “tech company” to “public square,” a place where users can express their opinions, make a livelihood, start a revolution, and anything in between. Just as in a well-functioning democracy, users of large tech platforms have come to expect corporate transparency, accountability, and oversight. Facebook’s Oversight Board was created in response to these expectations, to help the company answer thorny content moderation questions by using independent judgment to uphold or reverse the company’s content decisions.

Suzanne Nossel, CEO of PEN America and a member of the Oversight Board, underscored the value of the independent appeals process, which allows users to challenge Facebook’s content decisions. She highlighted that beyond offering users clarity and predictability on content decisions, the Board also integrates an international human rights framework into its rulings, which has a reverberating global impact on how stakeholders, including investors, hold platforms accountable. While the Board has the authority to make recommendations to Facebook and Instagram on content decisions, Suzanne explained that there are limitations: because the Board operates outside the company, it depends on company cooperation for its investigations and has limited ability to drive sweeping content policy changes.

“Not enough people on the planet to moderate content”

David from All Tech Is Human underlined that despite the thoughtfulness behind the Oversight Board, social media has an “Achilles’ heel”: a start-up ethos of high growth and scalability that is at odds with the nuance and complexity of speech. All panelists acknowledged the paradox of making thoughtful but swift content moderation decisions at scale while limiting human and automated error. Tomer Poran, VP of Solution Strategy at ActiveFence, a Trust & Safety platform solutions vendor, highlighted AI’s potential to surface data to content moderators so they can make more informed decisions. But he also recognized that whether humans or machines are in the loop, bad actors are constantly probing content policies and algorithms to get around moderation systems. Trust & Safety teams understand that they are facing an adversary and are increasingly hiring former counterintelligence workers.
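To illustrate the “human in the loop” idea the panel described, here is a minimal, hypothetical sketch of AI-assisted moderation triage: a model score and account signals travel with each report so that only the clearest cases are handled automatically and everything ambiguous goes to a human reviewer. The thresholds, field names, and scoring model are illustrative assumptions, not any platform’s actual system.

```python
# Hypothetical sketch of "AI assisting human moderators": a model score and
# context signals are attached to each report so a reviewer can make a more
# informed call. Thresholds and fields are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Report:
    content_id: str
    text: str
    model_score: float   # estimated probability the content violates policy (0.0-1.0)
    prior_strikes: int   # number of past violations by the posting account

def triage(report: Report) -> str:
    """Route a report: auto-action only the clearest cases, send the rest to humans."""
    if report.model_score >= 0.98:
        return "auto-remove"      # near-certain violations handled automatically
    if report.model_score <= 0.02 and report.prior_strikes == 0:
        return "auto-dismiss"     # near-certain benign content is closed out
    return "human review"         # ambiguous cases go to a moderator, signals attached

print(triage(Report("abc123", "example post", model_score=0.61, prior_strikes=2)))
```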

AI GOVERNANCE

Government, industry, and public interest groups agree that there should be some regulation of artificial intelligence to establish a duty of care, prevent discrimination, and protect privacy and other rights. However, regulating AI is complex: because the technology is constantly evolving, not every risk can be anticipated in a single piece of legislation, and setting standards across different industries will be difficult.

So what should guide the use of, and innovation with, AI? As Bertram Lee, Jr. of the Future of Privacy Forum stated, all existing laws and regulations apply to the use of AI, but it is still unclear how to test AI tools for compliance with those laws. Matters to consider include how to account for different communities’ expectations about how their data is used, how to make AI compliance practical and doable for firms, and how to establish standards for the use of AI in HR. Some states, like California and Connecticut, are leading the way with proposals on AI governance.

The National Institute of Standards and Technology (NIST) released its AI Risk Management Framework on January 26, 2023, offering voluntary guidelines for a risk-based approach to developing trustworthy AI systems. While the framework is non-binding, because NIST is a standards body rather than a regulator, it is still a valuable tool for businesses seeking AI governance guidance. The framework recommends conducting impact assessments on AI tools before they are launched, to test whether they uphold transparency, privacy, security, and other values.
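To make the idea of a pre-launch impact assessment concrete, here is a minimal, hypothetical sketch of how a team might track one in code. The four function names (Govern, Map, Measure, Manage) are the core functions of the NIST AI Risk Management Framework; the specific questions, data structure, and launch gate are illustrative assumptions, not requirements of the framework.

```python
# Hypothetical sketch of a pre-launch AI impact-assessment checklist.
# The function names come from the NIST AI Risk Management Framework;
# the questions and the launch gate are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AssessmentItem:
    function: str      # NIST AI RMF core function this item falls under
    question: str      # what the reviewing team is asked to document
    completed: bool = False
    notes: str = ""

CHECKLIST = [
    AssessmentItem("Govern", "Who is accountable for this system's risks?"),
    AssessmentItem("Map", "What is the intended context of use, and who could be harmed?"),
    AssessmentItem("Measure", "Have we tested for bias, privacy leakage, and security gaps?"),
    AssessmentItem("Manage", "What is the plan for monitoring and rollback after launch?"),
]

def ready_for_launch(items: list[AssessmentItem]) -> bool:
    """Return True only if every checklist item has been completed and documented."""
    return all(item.completed and item.notes for item in items)

if __name__ == "__main__":
    # In this sketch, an unfinished checklist blocks launch.
    print("Ready to launch:", ready_for_launch(CHECKLIST))
```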

The European Union is considering a draft AI Act. Given the similarity between protections in the European Convention on Human Rights and civil rights statutes in the US, the EU’s and the US’s regulatory perspectives on AI may end up well aligned. On this point, Elham Tabassi (Chief of Staff, Information Technology Laboratory, NIST) noted that international standards that prioritize risk and trustworthiness can provide a backbone and common ground for advancing these discussions.

CYBERSECURITY

The White House released the highly anticipated U.S. National Cybersecurity Strategy in March 2023, which seeks to build and enhance collaboration around five key pillars: (1) Defend Critical Infrastructure; (2) Disrupt and Dismantle Threat Actors; (3) Shape Market Forces to Drive Security and Resilience; (4) Invest in a Resilient Future; and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar is broken down into strategic objectives, but here is a brief summary of what each entails:

  1. Defend Critical Infrastructure — Build confidence in the resilience of U.S. critical infrastructure; regulatory frameworks will establish minimum cybersecurity requirements for critical sectors.
  2. Disrupt and Dismantle Threat Actors — Increase collaboration between the private sector and international partners to disrupt malicious actors (e.g., ransomware groups).
  3. Shape Market Forces to Drive Security and Resilience — Embed cybersecurity requirements into federal grant programs to incentivize companies to improve their security.
  4. Invest in A Resilient Future — Continue to advocate for the diversification of the cyber workforce and prioritize cybersecurity research and development.
  5. Forge International Partnerships to Pursue Shared Goals — Work with international allies and partners to counter cyberthreats and create reliable supply chains for information sharing.

Against the backdrop of this release, the Internet Education Foundation had the pleasure of hosting a fireside chat at the State of the Net Conference with the Acting National Cyber Director, Kemba Walden. During this discussion she highlighted a key element of the new strategy that deviates from the 2018 National Cyber Strategy: shifting the burden of cybersecurity away from individuals and placing responsibility in the hands of software developers and other institutions with the resources and expertise to address the threat. “The President’s strategy fundamentally reimagines America’s cyber social contract,” Walden said. “It will rebalance the responsibility for managing cyber risk onto those who are most equipped to bear it.”

Within the strategy, the White House proposes legislation that would establish liability for software makers that fail to take reasonable precautions to secure their products and services. The administration plans to work with the private sector and Congress to draft such a bill, which would include an adaptable safe harbor framework to protect companies that already follow “security by design” principles.

Much of the national cybersecurity strategy builds on existing work being conducted throughout the Biden Administration. The Biden administration anticipates it will publicly release the implementation plan for the strategy in the coming months. The Office of the National Cyber Director (ONCD) will lead implementation of the strategy and plans to submit an annual effectiveness report to the President, Congress, and the Assistant to the President for National Security Affairs.

PRIVACY

Several laws to protect children’s online privacy and safety are currently advancing. California’s Age-Appropriate Design Code Act (AADC) goes into effect in July 2024, and the Kids Online Safety Act (KOSA) bill is being finalized at the committee level. An amendment to the Children’s Online Privacy Protection Act (COPPA) is also underway to expand protections, currently available only to children under 13, to teenagers under 18.

Because these laws require providers of online services or products that are likely to be accessed by children and teenagers to take additional security and privacy measures, businesses now have the implied responsibility of confirming the ages of users of their sites. If signed into law by the governor, the Utah Social Media Bill will require parental consent for teenagers to sign up for social media sites.

While increased privacy supports a secure, trustworthy internet, age verification can undermine privacy because of the additional data users must supply, potentially copies of ID or even biometric data from face scans. Requiring parental consent for teens could also adversely affect LGBTQ and other vulnerable youth. Natalie Dunleavy-Campbell of the Internet Society advocated for the ideal of an “open, globally connected, secure and trustworthy internet” to guide the consideration of privacy and safety measures.

The consensus of the panel was that “more privacy for everyone” benefits kids as well, and that this is achievable with a comprehensive omnibus federal privacy law. Jamie Susskind (Tech Advisor to Senator Marsha Blackburn) added that comprehensive privacy legislation and children’s privacy legislation are not mutually exclusive.

In a fireside chat, Travis LeBlanc, a member of the Privacy and Civil Liberties Oversight Board (PCLOB), highlighted some priorities of the five-member bipartisan board, which reviews executive branch national security programs to ensure they are balanced against privacy and civil liberties concerns. These include: navigating the privacy and civil liberties implications of America’s focus on the threat of domestic terrorism; privacy in biometrics, particularly at borders and airports; the impact of warrantless surveillance under Section 702 of the Foreign Intelligence Surveillance Act (FISA) on US persons’ privacy; and balancing necessity and proportionality requirements in the implementation of Executive Order 14086 on signals intelligence activities.

INFRASTRUCTURE INVESTMENTS

The National Telecommunications and Information Administration (NTIA) is on a mission to expand broadband connectivity in the US so that every American has access to high-speed internet. The effort is backed by roughly $50 billion in funding from the Bipartisan Infrastructure Law.

TikTok shared its $1.5 billion proposal for a new US data security regime, called “Project Texas,” to quell national security concerns about the social media app. The plan includes measures such as submitting all code used in the TikTok app for review by representatives of the US government and Oracle before it is deployed, and setting up a new entity to manage security and controls for US users’ data, led by independent directors with no affiliation with TikTok or ByteDance. Whether this plan will give the US government enough comfort remains to be seen.

The National AI Research Resource (NAIRR) Task Force recently released its roadmap for developing national infrastructure for AI. Currently, most AI research in the US is conducted at well-funded universities and big tech companies. NAIRR plans to increase diversity in the AI research environment by making resources (such as high-quality datasets and research funding) available to more academic institutions. It will also advocate for the development of trustworthy AI that upholds privacy, civil rights, and civil liberties by requiring participating AI researchers to observe guidelines adopted by NAIRR, such as conducting the impact assessments recommended by the NIST AI Risk Management Framework.

THERE’S MORE:

To dive deeper into the conversation about what’s going on with the web, listen to the latest episode of the Foundry podcast, “Tech Policy Grind,” on Apple Podcasts and everywhere you get your podcasts! You can also watch all the sessions from the conference on the State of the Net YouTube channel.
