The Role of Professional Norms in AI Governance: Some Observations and Outline of a Framework
--
by Urs Gasser & Carolyn Schmitt
It is hard to keep up with the various efforts underway to develop ethical principles and other norms aimed at governing a broad range of AI-based technologies (we use the shortcut “AI” for readability only). Current initiatives involve governments, international organizations, standard-setting organizations, tech companies, and civil society organizations, to name just some of the drivers. In addition to the development of ethical norms and governance principles, the quest for adequate accountability schemes and enforcement mechanisms continues, with mixed success and serious complications, as the recent cancellation of Google’s external AI Ethics Council suggests. These developments and struggles can be interpreted in different ways: efforts by companies to establish ethical norms and boards, for instance, can be dismissed as mere marketing or PR strategies, or “ethics washing,” rather than effective attempts at (self-)regulating AI; and codes of ethics developed by professional associations might be criticized as largely ineffective.
In a forthcoming chapter for the Oxford Handbook of Ethics of AI (edited by Markus Dubber, Frank Pasquale, and Sunit Das), we take two steps back to reflect on the different types of professional norms that emerge around the use and development of AI (with a focus on the latter) and to discuss the possible governance effects of such norms. In a nutshell, our discussion acknowledges that professional norms can play only a limited role in governing AI and are certainly not a sufficient response (some of our earlier reflections on layered governance here), but it also points (we’re still optimists, after all) to some interesting “bottom-up” dynamics that demonstrate their potential as a reservoir of contextual norms that might fuel a range of accountability mechanisms for AI.
Here are some of the key observations and hypotheses from our paper, which we hope makes a small contribution to the amazingly rich body of scholarship at the intersection of AI, norms, and professions/professionalism (please refer to the paper for the more nuanced discussion, including examples, references, etc.):
- The three central concepts interacting in our paper (AI, the profession, and professional norms) are ambiguous and evolving, each with an extensive history, theory, and practice. Despite the complex dynamics of de- and reprofessionalization in general and the uncertainty of AI norms in particular, our discussion leads to the hypothesis that we might be witnessing the advent of what might be called “AI professions,” with a corresponding nucleus of professional norms. Future research is needed to validate this hypothesis (a starting point could be an in-depth analysis of the various norms, methodologically not unlike what inspired the work on digital constitutionalism in the Internet realm).
- Mirroring larger dynamics, our paper suggests that AI and the proliferation of ethical codes and principles challenge “traditional” aspects of professions and professional norms while opening windows for new movements toward professionalization. For instance, professional norms are no longer predominantly stipulated and administered by professional associations (although organizations such as IEEE and ACM still play a vital role, as discussed in the paper); they also emerge from particularly powerful corporations, from NGOs, and from assemblages of normative statements by employees, for example. These networks of norms add new layers to the stack of professional norms, which interact with other modes of governance (later in the paper, we point toward the design challenge of how to organize the interplay among these different norms and modes of governance, building on our previous interop work).
- One area where we hope our paper adds to the current debate about professional and ethical norms around AI is by mapping and briefly discussing the different types of governance effects professional norms might have. Specifically, we examine how professional norms interface with different types of accountability mechanisms. In addition to observing how some of these interfaces are currently implemented in practice, we also discuss interfaces between professional norms and other accountability mechanisms, including the court of public opinion and the legal system, using the GDPR as a case in point.
Overall, our research for this paper highlights the complexity and rapid pace of development in this area: since beginning this project in mid-2018, we have witnessed the emergence (and dissolution) of new principles and initiatives from a variety of sources, as well as critical public discourse about the development of professional and ethical norms and (the lack of) accountability mechanisms. In the paper, we don’t attempt to encapsulate every aspect of the current landscape. Rather, we offer a (work-in-progress) framework for conceptualizing and hypothesizing about the role of professional norms in the governance of AI.
A number of components of the proposed (tentative) framework are worth highlighting.
The framework reflects our point, shared above, that the current landscape of professional norms that are likely candidates to display governing effects on AI-based technologies, broadly defined, is largely disjointed and in flux. The body of relevant norms does not present itself as a coherent normative structure, such as a single code of ethics, but rather as a patchwork of existing and emerging norms. These norms can be more general or (AI-)specific in nature and emerge, as in the case of traditional norms of the profession, from membership associations, but also from non-traditional norm-setting organizations and individual enterprises. Professional norms also interact in various ways with other norm types at different places in the norm hierarchy, including legal requirements.
Professional norms in the context of technologies that embrace a rich set of subdisciplines, methods, and tools under the umbrella of the catchword AI may emerge along the full lifecycle of AI-based systems, including their design, development, testing, deployment, support, and phase-out. Future work could analyze the role of professional norms in each phase of system development, which could be further broken down into more specific activities depending on the AI method invoked, such as data preprocessing, application of algorithms, and model selection in the case of machine learning techniques.
We argue in the paper that governing effects might stem from professional norms that are input-oriented, in that they address the circumstances under which AI-based technologies are created, or output-oriented, in that they address the use of such technologies. In some instances, these two ideal-type categories might overlap or interact with each other. Examples of the former are general professional norms that apply to computer scientists, engineers, and research and data ethicists involved in the creation of AI-based systems, as well as the comprehensive emerging ethical norm-sets aimed at governing AI currently being developed by various actors. Examples of output- or application-oriented norms are the norms of the legal, medical, educational, and other professions that might deploy AI-based systems in professional contexts governed by professional norms. An example of the interplay between the two types is a situation in which a professional who is generally bound by one set of norms (e.g., a physician) enters the “sphere” of the other (e.g., by advising on the development of an AI medical expert system).
With respect to the governing effects of the various norms applicable in the different professional contexts in which AI-based technology is involved, we suggest roughly distinguishing between direct and indirect governance effects. The main example of a direct governance effect is a situation in which the behavior of a professional in charge of developing or using AI is immediately guided in the intended direction by a particular norm set forth in the applicable body of professional norms. A case in which a company whose employees might have violated norms of the profession is held liable in a court of law, or in the court of public opinion, with reference to professional best practices might serve as an example of an indirect governing effect of professional norms. Whether a certain norm exhibits direct or indirect effects depends on various contextual factors, including the specificity of the norm, the professional culture, the existence of robust accountability mechanisms, doctrinal approaches to professional liability, and the existence of strong reputation markets.
Again, the paper is cautious when it comes to the governance effects of professional norms, especially given the current lack of robust accountability mechanisms, but also in light of various conceptual, normative, and empirical uncertainties. That said, based on our review of the past, present, and possible future of professional norm dynamics as applied to AI, we recommend an open and nuanced approach when debating the promise and limits of professional norms as part of the governance toolkit.
Skepticism is, of course, appropriate in light of the political economy and the massive power asymmetries and resulting inequalities, highlighted most recently by the AI Now report “Discriminating Systems: Gender, Race, and Power in AI,” and the apparent hostility from corporations in response to employee activism. Within the context of the paper and professional norms, we anticipate that such discussions of diversity and inclusion will impact the evolution of the professional norms of AI professions and shape the respective accountability mechanisms moving forward.
Through this paper, we offer a view of professional norms (including the ethical norms of the emerging AI profession) as one context-sensitive element in the toolbox, one that might serve a productive role when embedded in a blended governance and accountability framework for AI, which needs to include robust legal safeguards and enforcement mechanisms. We hope the paper and proposed framework contribute to such a design project and offer at least some “food for thought.”