Considerations for the Future of AI Governance

What existing guidelines, if any, steer AI development? And what might future regulation look like?

Justin Yi
Impact Labs
6 min read · Feb 10, 2021


Image credit: Araya Peralta

Background

Artificial Intelligence (AI) has, in recent years, skyrocketed in popularity due to its astounding capacity to perform tasks that would otherwise require human intelligence. The adoption of AI within applications that seemingly touch every aspect of everyday life should come as no surprise. Like the makings of a perfect storm, AI, fueled by the advent of immense computational power and vast data warehouses, has undoubtedly found its way into your favorite e-commerce retailer, social media outlet, and even a certain digital publication platform, where it leverages data to make better-informed recommendations and targeting decisions in the name of ease of use and a better end product.

Not a bad deal, right? Well, not so fast: AI development has not come without its fair share of hiccups and oversights, especially when considering the implications for historically marginalized groups¹²³. And while it is tempting to dismiss issues such as racial discrimination as technological growing pains, the pervasive use cases and sheer scale of these data-informed decisions create a glaring obligation to be especially skeptical and critical of these glittering systems. Moreover, as with any other shiny new toy, it is the industry-leading companies that monopolize the resources and compute needed to research and develop such systems. With no real incentive to slow the ever-growing production of these data-driven models, they can potentially wipe out competitors along with any substantive discussion of how AI ought to be built and used.

As a result, the public is understandably moving away from the view of AI as a magical cure-all and assuming a more pragmatic outlook on AI systems, which still leave much to be desired in both explainability and trustworthiness. Despite well-documented use cases and tales of accuracy rivaling human counterparts, the inner workings of AI systems are quite difficult to explain; they are largely regarded as black boxes, even by those who play major roles in building them.

For the sake of argument, let's assume there exist (and there do exist) altruistic organizations committed to bringing the potential pitfalls of AI systems to light. What sort of obstacles might impede their progress?

AI regulation in the US is still in its infancy. Existing discussion has offered only high-level guiding principles and recommendations for future work; it has yet to establish the practical regulatory responses and protocols needed to guide this rapidly developing field. That has proven difficult for two reasons:

  1. The sluggish response of policy to the ever-changing tech sphere.
  2. The enigmatic, black-box models that even AI developers have difficulty understanding.

We find ourselves at a standoff in which AI advancements are becoming both increasingly impactful and increasingly obscured from users and affected groups. This necessitates thoughtful consideration of how AI development should be guided to promote the fairness and integrity of existing systems. That foundation can then be built upon to address further-reaching ethical questions surrounding AI and its future use.

Image credit: Araya Peralta

Mechanisms for Verifiable Claims

To better enable the discussion of trustworthy AI within policy and industry, it is necessary to invest in mechanisms against which claims about AI systems can be tested, allowing technologists and policymakers to constructively discuss the failures that currently plague AI systems. Such mechanisms are detailed in Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, authored by Miles Brundage et al. The report is the brainchild of AI researchers, industry figures, and policy experts alike, offering insights from institutions pioneering AI governance such as OpenAI, Google Brain, and the Future of Humanity Institute.

Brundage and colleagues seek to give AI accountability the teeth to verify claims made by the designers and developers of AI systems, so that those claims can be demonstrated concretely to regulators, the public, and one another. The paper proposes a three-part "toolbox" of institutional, software, and hardware mechanisms; for a comprehensive outline of all the proposed concepts, we refer the reader to the paper itself.

Institutional Mechanisms

  • Third-party Auditing: An idea borrowed from other industries (think banking): third-party entities can be organized to provide an alternative to self-reported claims. This raises many implementation questions, such as what is audited (data, learning algorithms, outcomes, objective functions) and to what extent (how audits are conducted, how code bases are prepared for auditing, who audits the auditors).
  • Red Teaming Exercises: Self-deployed attacks on systems under development, through which developers demonstrate and build awareness of the ways their own systems can be misused.
  • Bias and Safety Bounties: Incentivize crowd-sourced criticism of AI systems for bias and inequity from knowledgeable members of the AI community, a practice already well established in other facets of software and cybersecurity. This process can also broaden the testing data corpus on which vulnerable AI systems are evaluated (a sketch of the kind of disparity check a bounty submission might include follows this list).
  • AI Incident Sharing: Dismantle traditional views on AI ownership and foster collaboration between AI entities by sharing information and constructing an archive of documented incidents (e.g., the Chicago Convention required that airlines share crash data, dramatically reducing the number of future incidents).
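
To make the bounty idea concrete, here is a minimal sketch of the kind of evidence a bias bounty submission might include: per-group error rates for a classifier. The group labels, predictions, and what counts as a reportable gap are hypothetical placeholders, not anything specified in the Brundage et al. paper.

```python
# A minimal sketch of evidence a bias bounty submission might contain:
# per-group misclassification rates. All data below is hypothetical.
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Return the misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit data: ground-truth labels, model predictions, group membership.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "B", "B", "B", "B", "A", "A"]

print(error_rates_by_group(y_true, y_pred, groups))
# {'A': 0.25, 'B': 0.5} -- a gap like this is the kind of finding a bounty could reward
```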

Software Mechanisms

  • Audit Trail: Standardize the content of audit records so that they are useful in determining the trustworthiness and fairness of systems. There is still work to be done to distill large audit logs into meaningful and relevant historical data.
  • Interpretability: Motivate a concerted push for interpretable AI systems; however, what counts as a satisfying explanation remains subjective to the interpreter. A balance must be struck so that complex models are not reduced to simplistic interpretable ones and so that laypeople can still understand how complex models arrive at their predictions.
  • Privacy-preserving Machine Learning: Push for standard procedures and tools for handling sensitive data and models, such as federated learning (a machine learning paradigm in which data remains decentralized and private while model updates are aggregated; see the toy sketch after this list) and encryption.
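
To illustrate the federated learning paradigm mentioned above, here is a toy sketch of federated averaging: each client takes gradient steps on its own private data, and only the resulting model parameters are averaged by a central server. The linear model, client data, and hyperparameters are illustrative assumptions, not part of the paper; real deployments add secure aggregation, differential privacy, and more.

```python
# Toy federated averaging: clients train locally, the server averages weights.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on its private data (never shared)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(weights, clients):
    """Server step: average the locally updated weights, not the raw data."""
    updates = [local_update(weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with its own private dataset
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(10):  # communication rounds
    w = federated_average(w, clients)
print(w)  # approaches true_w without any client's data leaving its owner
```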

Hardware Mechanisms

  • Hardware Security Features: The specialization of AI hardware calls for secure enclaves that are likewise specialized for machine learning contexts.
  • High-precision Compute Measurement: Standardize how compute usage is measured and reported so that claims about resource usage can be effectively verified (a back-of-the-envelope sketch follows this list).
  • Funding of Computing Power Resources in Academia: Call for governing bodies to provide academic researchers with greater compute support, so that they can hold industry leaders accountable and conduct their own research.
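
As a rough illustration of why standardized compute measurement matters, the sketch below uses a common back-of-the-envelope heuristic of roughly six floating-point operations per parameter per training token to estimate training compute. The heuristic and the model sizes are illustrative assumptions, not figures from the Brundage et al. paper; high-precision measurement would replace such estimates with verified numbers.

```python
# Back-of-the-envelope training-compute estimates. The 6-FLOPs-per-parameter-
# per-token rule of thumb and the model sizes below are illustrative only.
def training_flops(parameters, tokens, flops_per_param_per_token=6):
    """Approximate total training compute for a dense model."""
    return flops_per_param_per_token * parameters * tokens

for params, tokens in [(125e6, 300e9), (1.5e9, 300e9), (175e9, 300e9)]:
    flops = training_flops(params, tokens)
    # Convert to petaflop/s-days: 1e15 FLOP/s sustained, 86,400 seconds per day.
    pfs_days = flops / 1e15 / 86400
    print(f"{params:>10.3g} params: {flops:.2e} FLOPs (~{pfs_days:,.0f} petaflop/s-days)")
```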

Closing Remarks

As AI continues to terraform the landscape of modern society's present and future, it is becoming increasingly important to ground AI governance efforts in standards backed by concrete mechanisms. However, in establishing rigid guidelines there is a risk of constricting the scope of AI safety to a checklist of narrow hoops to jump through that may never materialize in practical applications. While Brundage et al. offer a basis on which to discuss and evaluate model fairness, much more work remains to keep AI policymaking from devolving into a shouting match between legislators and technology pundits, each in their respective ivory towers. Incremental improvements to the state of AI development must be made while acknowledging the asymmetries of power and the existing social infrastructure that impede progress toward trustworthy AI. The paper itself acknowledges the benefit that greater diversity among its contributors would have brought: gender, racial, and socio-economic diversity, to name a few.

If you found this content interesting (or concerning), please seek out further resources, some of which can be found below, to engage with the ongoing research and conversation around AI governance, and keep an eye out for publications to follow from ACM at UCLA's social impact arm.
