The Difficulties of Regulating AI

Wendy Turner-Williams
5 min read · Sep 18, 2023


AI Regulation Requires Data Regulation

As AI expands its reach into diverse sectors, regulating it becomes not just a technical challenge, but also a data-centric one. This complexity, combined with a rapidly evolving landscape, necessitates the intervention of genuine experts — those actively working in AI, data, ethics, privacy, and security.

Complexities of AI Technology and Data

As if the technology that enables AI were not complicated enough, the data that feeds our business and social interactions also fuels AI. Data is every bit as complex as the algorithms it powers, and sadly very few people understand the data world, for many reasons, including:

  • Variability of Data Sources: AI algorithms require diverse datasets. “90% of the world’s data was generated in the last two years” (Forbes, 2018), underscoring how rapidly data sources have grown and diversified.
  • Inherent Biases in Data: Biased data leads AI models astray. An MIT study (2018) found that gender classification systems had error rates as high as 34.7% for darker-skinned women, a stark demonstration of the consequences of data bias (a toy illustration follows this list).
  • Data Fragmentation: With data dispersed everywhere, consistent regulation is challenging. “By 2025, 80% of data will be unstructured” (IDC, 2019), highlighting fragmentation concerns.
  • Lack of Transparency: Deep learning models, for all their potential, remain largely black boxes. According to OpenAI (2019), understanding these models is “one of the biggest challenges in AI today”.
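
To make the bias point concrete, here is a minimal Python sketch on purely synthetic data (a hypothetical toy, not the MIT study itself). When one group dominates the training set, a model fit to minimize overall error can serve that group well while quietly failing the under-represented one:

```python
# Hypothetical, synthetic illustration of data bias: the under-represented
# group B has its decision boundary in a different place than dominant group A,
# so a threshold "trained" on the pooled data serves A and fails B.
import random

random.seed(0)

def make_group(n_neg, n_pos, neg_mean, pos_mean):
    """Synthetic 1-D (feature, label) samples for one demographic group."""
    neg = [(random.gauss(neg_mean, 1.0), 0) for _ in range(n_neg)]
    pos = [(random.gauss(pos_mean, 1.0), 1) for _ in range(n_pos)]
    return neg + pos

# Group A dominates training; group B is scarce, and its ideal decision
# boundary sits near 2 rather than near 0.
train = make_group(950, 950, neg_mean=-1.0, pos_mean=+1.0)   # group A
train += make_group(25, 25, neg_mean=+1.0, pos_mean=+3.0)    # group B (scarce)

# "Train" a single decision threshold by minimizing overall training error.
candidates = sorted(x for x, _ in train)
threshold = min(candidates,
                key=lambda t: sum((x > t) != y for x, y in train))

def error_rate(samples, t):
    return sum((x > t) != y for x, y in samples) / len(samples)

test_a = make_group(500, 500, -1.0, +1.0)
test_b = make_group(500, 500, +1.0, +3.0)
print(f"learned threshold: {threshold:+.2f}")                 # near 0
print(f"group A error: {error_rate(test_a, threshold):.1%}")  # low (~16%)
print(f"group B error: {error_rate(test_b, threshold):.1%}")  # much higher (~40%)
```

The model here is “optimal” on its training data, yet its error rate for the scarce group is several times higher. That is exactly how unexamined data bias slips into deployed systems.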

Government’s Challenges in AI Regulation

There is no AI regulation without data regulation and data-usability regulation. The reality is that the world of data is far more complicated than using a metric or reading a report. Collecting, storing, conforming, moving, and visualizing data, or applying advanced analytics like data science and machine learning, all require expertise. No single technology enables this; it takes sets of technologies and years of experience. Data is a discipline (a small illustrative sketch follows the list below). A few of the government challenges are:

  • Lack of Experience: Governments often lack AI-specific knowledge. A GAO report (2018) indicates “only 10% of federal agencies reported using AI”, hinting at limited familiarity.
  • Misunderstandings about Data Control: While big tech companies dominate the AI space, their control over user data isn’t absolute. A PwC survey (2019) revealed “68% of business leaders believe tech companies should be responsible for data they don’t own but process”.
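
To illustrate what “data is a discipline” means in practice, here is a deliberately tiny, hypothetical Python sketch (all names and values invented). Even this toy pipeline spans four distinct stages, and at production scale each stage is its own specialty with its own stack of technologies:

```python
# A toy data pipeline: collect -> conform -> store -> analyze.
# Hypothetical data; each stage here is trivial, but each is a career at scale.
import csv, io, sqlite3, statistics

RAW = """user_id,age,country
1,34,us
2,,US
3,29,usa
"""

# Collect: parse raw records from an upstream source.
rows = list(csv.DictReader(io.StringIO(RAW)))

# Conform: normalize codes and make missing values explicit.
COUNTRY_MAP = {"us": "US", "usa": "US", "US": "US"}
clean = [
    {"user_id": int(r["user_id"]),
     "age": int(r["age"]) if r["age"] else None,
     "country": COUNTRY_MAP.get(r["country"], "UNKNOWN")}
    for r in rows
]

# Store: land the conformed records in a queryable system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (user_id INT, age INT, country TEXT)")
db.executemany("INSERT INTO users VALUES (:user_id, :age, :country)", clean)

# Analyze: only now is the data usable for metrics or modeling.
ages = [a for (a,) in db.execute("SELECT age FROM users WHERE age IS NOT NULL")]
print("median age:", statistics.median(ages))
```

Regulating AI without understanding every one of these stages, and how errors and biases enter at each, is regulating the tip of the iceberg.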

Focusing regulation solely on Big Tech is misguided and only underscores the gaps in government knowledge. First, ethics is a philosophy, one on which each company and each person may hold very different views. Second, you must clearly define what ethics means and how to implement and monitor it. Third, big tech companies produce tools, like instruments, but it is the users of an instrument who write and then play their own music. There are millions of such users, and they often build their own tools (instruments) or use those of the thousands of other tool providers in the world. Focusing on the big tech companies is like focusing on the 0.00001%, when what we really need to do is focus literacy, fluency, and self-regulation efforts on the 99.99999%.

Look at the Conundrum of Social Media Regulation

One of the most prominent examples highlighting the challenges governments and tech companies face in regulating AI-driven systems is social media. These platforms, powered by complex AI algorithms, have emerged as primary influencers of public opinion and societal behavior. The technology company provides a platform; its users then decide how to leverage it, with their own data and their own intent. This has led to unprecedented concerns related to misinformation.

The challenges of misinformation were evident in recent political upheavals. The Pew Research Center (2020) found “64% of Americans believe fake news has caused ‘a lot of confusion’ about basic facts of current events”.

  • The Platform vs. Content Dilemma: Big tech companies primarily provide platforms for users to share and consume content. However, these platforms do not inherently own the vast amount of data their users produce. As Facebook’s Mark Zuckerberg stated in a 2018 Senate hearing, “We provide tools for users to share their content, but we do not own that content.”
  • User Intent and Control: Beyond the data itself, the intent behind the content shared on these platforms is even more challenging to manage. “In a Pew Research study (2019), 55% of adults reported that they sometimes or often see made-up news on social media”, which hints at the vast diversity of user intent, from genuine information sharing to deliberate misinformation.
  • Governments’ Inadequate Grasp: Governments’ attempts to regulate content on social media often falter due to a lack of deep understanding of the technology and its implications. A common criticism of the 2020 Senate hearings on social media was that lawmakers often appeared unfamiliar with the platforms’ basic functionalities, as noted by multiple tech journalists.
  • Beyond Social Media: While social media remains the most visible aspect, it’s merely one application of data. There are countless other sectors — from healthcare to finance — where data drives decisions daily. Each sector comes with its unique set of challenges, making the broader picture of data regulation even more complex.

Implications

The challenges of regulating social media underscore the broader issue: technology’s rapid pace often outstrips regulatory understanding and response. Moreover, responsibility cannot solely rest on the shoulders of platform providers. It requires a collaborative approach, involving governments, platform providers, content creators, and, most importantly, the users themselves.

As with AI and data at large, experts actively working within the intersections of technology, ethics, privacy, and content creation are crucial to shaping meaningful, informed regulatory frameworks for social media.

The Solution: Empower the Experts

Professionals actively working in AI, data, ethics, privacy, robotics, and security understand AI’s intricacies. They are the key, because their hands are literally on the keyboards: they are the experts who acquire the data, conform it, store it, model it, govern it, and protect it. These experts need to shift left and do the following:

  • Unified Voices: This community must collaborate for a cohesive voice, influencing lobbyists, lawmakers, and tech giants.
  • Crafting Frameworks: These experts, having hands-on experience, can best create practical guides and monitoring methodologies (a small monitoring sketch follows this list).
  • Promoting Interdisciplinary Collaboration: AI transcends mere tech. An EY report (2020) emphasizes “the need for AI to be tackled from a multi-faceted perspective”.
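
As one hypothetical example of the kind of practical monitoring methodology these experts could codify (names, data, and thresholds invented for illustration), here is a minimal Python drift check that flags a deployed model when live inputs no longer resemble the training baseline:

```python
# Illustrative drift check: alert when live input data has shifted away from
# the training-time baseline. All numbers and thresholds are hypothetical.
import statistics

def drift_alert(baseline, live, z_threshold=3.0):
    """Return True if the live mean sits more than z_threshold standard
    errors from the baseline mean (a crude but auditable drift signal)."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    live_mean = statistics.mean(live)
    stderr = base_sd / len(live) ** 0.5
    return abs(live_mean - base_mean) / stderr > z_threshold

baseline_ages = [31, 35, 29, 40, 33, 38, 30, 36]   # seen at training time
live_ages = [52, 58, 49, 61, 55, 57, 60, 54]       # seen in production
if drift_alert(baseline_ages, live_ages):
    print("ALERT: input distribution drifted; review model before trusting it")
```

Simple, auditable checks like this, written by the people who actually handle the data, are far more enforceable than abstract ethics principles handed down from outside.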

A Call to Arms

AI’s responsible evolution depends on the informed efforts of those who truly understand it. Harnessing this expertise will ensure a balanced, ethical, and effective AI-driven landscape. We urge professionals in these domains to unite and to join TheAssociation to guide quality, ethical AI into the future.
