A Plan for Developing Global AI Policy

AnthroPunk, Ph.D.
3 min read · Aug 16, 2018

--

S. A. Applin

Image: “Robots” ©ricardodiaz11

AI-related policy is as much a human communication and negotiation problem as it is a matter of prioritizing which AI technologies policy must address first. While key policy issues will arise that greatly impact the landscape for setting general Artificial Intelligence policy and priorities, a critical goal is to increase our capacity to cooperate globally with other nations to achieve sensible and ethical general AI policy. A precondition for this is to assess the world's likely political trajectory over the next five years, and to use the results to determine how this will affect what we are able to do, both internally and cooperatively, to direct, monitor, and manage AI policy within a global context.

First, the current trend towards nationalism and withdrawal inside borders indicates a need to better understand what each culture or country may require. This could manifest as a range of policies structured to be ethically dynamic and able to adjust to cultural differences in ethical framing (an ongoing concern; see Applin 2017: Autonomous Vehicle Ethics, Stock or Custom?). Second, the willingness or unwillingness of different national actors to collaborate on AI policy must be established, together with the range of meanings and processes ascribed to "ethics" by the various parties involved in creating policy. Third, the heterogeneity that exists between and within nations must be better documented and understood: each nation contains multiple actors, with varying degrees of ethical orientation and varying degrees of technical competence within its borders.

As we imagine these same issues extended to other nations and their agendas, the human relations of determining policy will become as much an internal educational process as an external and global conversation. Furthermore, as countries form factions and alliances, the capacity to safeguard humanity against unforeseen and potentially dangerous AI outcomes will hinge directly on our capacity for education, diplomacy, and cooperation with regard to technological concerns.

Policy must be developed to promote strategies for effective collaborative communication and education among the actors and nations in a position to champion it. We need to increase awareness of broader global outcomes, much more so than at present. This will be increasingly difficult should the worldwide trend towards nationalism continue. We would like to assume that all would unite for the good of ethics and humanity overall, but cultural differences with respect to ethics, together with political and cultural trajectories, must be considered in order to develop cooperative policy capable of influencing processes that impact nearly every aspect of human trade, interaction, and, perhaps, human survival.

If we ask specifically which aspect of AI technology would have the most impact at a policy level, I would argue strongly that it is the biases and assumptions within present AI development, and its current design, that preclude supporting human agency. Ideally, people need to be able to make choices from the options available at any point in a process. Automation, particularly AI automation derived from machine learning, is not currently created with the dynamic adaptation of humans in mind, much less co-adaptation. Going forward, policies to prioritize and protect meaningful human choice are the overarching issue of critical importance to developing AI policy. (See also Applin and Fischer 2015: New Technologies and Mixed-Use Surveillance: How Humans and Algorithms are Adapting to Each Other.)


(S.A. Applin, Ph.D.) AnthroPunk looks at how people promote, manage, resist, and endure change, and how people hack their lives (and those of others). http://www.posr.org