The AI Policy Landscape

May update: translating a table of contents from Google Docs to Medium requires hard coding, so further updates will be published dynamically through this Google Docs link. Please visit that link for the latest updates to the document below.

This is an evolving collection of information and links about who is doing what in the realm of AI policy, laws and ethics. This list is perpetually under construction. I am furiously going through my research notes to extract topics, people, organizations and resources, but this is an ongoing and time-consuming effort. Rather than let the perfect be the enemy of the good, I wanted to post this unfinished resource now and update it continuously.

If you add suggestions in the comments I will integrate them.

For basic definitions of AI for policy-makers see this article.

The table of contents below links to the sub-sections in the document. You may also find a browser search (command-f) useful as the document is very long.

Latest changes from April 8 — reorganized the topics, added table of contents linking, and updated a number of topics

Contents

Ethics, Values, Rights, Transparency, Bias, Norms and Trust

Broad Safety and Security Issues with AI, AGI, ASI and Malicious Use

Personal Privacy, Information Security, Individual Access Control and the Future of Trust

Law Enforcement, Security and Autonomous Weapons Systems

Economic and Humanitarian Issues

AI and the Law

AI, Government and Regulation

AI, Human Interactions, Society and Humanity

AI Policy events

AI Policy Organizations

AI News Sites

AI Policy People

Ethics, Values, Rights, Transparency, Bias, Norms and Trust

In his book Machines of Loving Grace, John Markoff writes, ‘The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.’

AI ethics and bias

Ethics and Governance of Artificial Intelligence Fund: The goal of the Ethics and Governance of Artificial Intelligence project is to support work around the world that advances the development of ethical AI in the public interest, with an emphasis on applied research and education. It works with the Berkman Klein Center at Harvard and the MIT Media Lab.

Moral Machine Platform: crowdsourced expectations of how an autonomous vehicle should make moral decisions.

OpenEth is founded on the principle that ethics and morality are measurable, definable and computable, across cultures, time, and geography.

Council for Big Data, Ethics, and Society brings together researchers from diverse disciplines — from anthropology and philosophy to economics and law — to address issues such as security, privacy, equality, and access in order to help guard against the repetition of known mistakes and inadequate preparation.

AI ethics and autonomous vehicles

The modern version of the Trolley Problem. Should an autonomous vehicle protect its passenger at all costs, even if it means swerving into a crowd of pedestrians? Or should the vehicle perform utilitarian calculations to cause minimal loss of life even if that means killing its passenger(s)? Surveys indicate humans want the latter in the abstract, but they want their personal vehicle to protect them and not apply a utilitarian approach. Mercedes has decided its duty is to protect the passenger. Regulation could help, but may be detrimental in the long run if it delays adoption of autonomous vehicles, which are highly likely to save lives overall. This may not be a frequent problem if autonomous cars are prevalent and radically reduce the number of accidents.

These issues will likely be resolved through traditional negligence law, although this is complicated by issues of agency where AIs are not explicitly programmed to take particular actions. If a vehicle is fully automated, with a human driver no longer actively steering, the question arises as to whether damage can still be attributed to the driver or the owner of the car, or whether only the manufacturer of the system can be held liable. Policy-makers need to determine tradeoffs between cost, convenience, and safety.

Human experimentation and manipulation

Human experimentation has been closely scrutinized in the psychology field for generations. What responsibility do organizations owe to people with regard to subtly manipulative tools like nudging (software that prompts you with reminders)? Do we need codes of conduct around these types of technological experiments, which may promote technology addiction?

Encoding fairness, policy, laws and values in AI

Michael Kearns, Aaron Roth, Shahin Jabbari, Matthew Joseph, and Jamie Morgenstern (UPenn) conduct research on how to encode different concepts of fairness from law and philosophy into machine learning.

Fairness, Accountability and Transparency in Machine Learning Workshop: Bringing together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning. They have an extensive bibliography on AI ethics.
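
As a concrete illustration of what "encoding fairness" can mean in practice, here is a minimal Python sketch of one such concept, demographic parity: checking whether a model's favorable decisions are distributed equally across two groups. The data and variable names are hypothetical and for illustration only, not any of these researchers' actual methods.

```python
# Minimal sketch: measuring one formal notion of fairness (demographic parity).
# All data and names here are hypothetical illustrations, not a real system.
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """Difference in positive-outcome rates between two groups (0 and 1)."""
    predictions = np.asarray(predictions)
    group_labels = np.asarray(group_labels)
    rate_a = predictions[group_labels == 0].mean()
    rate_b = predictions[group_labels == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = favorable decision) and group membership.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```

Other legal and philosophical notions of fairness (for example, equalized error rates across groups) translate into different checks, and they can mathematically conflict with one another, which is part of what makes this research area difficult.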

Gender bias

Microsoft Cortana is built to push back against abuse and harassment.

Heather Roff: Moral AI discusses how representations of gender are becoming embedded in technology and expressed through it.

The NY Times Explores AI’s White Guy Problem: most AI researchers are white men, job descriptions tend to favor male applicants, and some AI systems don’t work well for minority groups.

Racial Bias

Machine learning systems trained with biased data will produce biased results. Image recognition software has categorized black people as gorillas, misread images of Asians as people blinking, and had difficulty recognizing people with dark skin. More seriously, an AI tool used to assess the risk of recidivism was found to be biased against black defendants and in favor of white defendants. Biased predictive policing tools could also perpetuate stereotypes.

Income bias

AIs may disproportionately benefit high-income communities. Also, the negative effects of AI — like social media or gaming addictions — may disproportionately impact lower-income communities. There are also concerns that many new technologies deemed too dangerous or unproven for advanced economies are tested in developing economies where liability concerns are lower. AI and automation are predicted to disproportionately impact low-skill jobs first.

Human Centric Methodologies to Guide Ethical Research and Design

Matt Chessen wrote an article advocating that public-policy professionals should collaborate with AI technologists in training machine learning systems to encode values and eliminate bias. Additionally, the public policy profession needs a new specialty in big data and AI ethics.

Broad Safety and Security Issues with AI, AGI, ASI and Malicious Use

Artificial General Intelligence and Artificial Super-Intelligence (sentient AIs and the Singularity) are worth considering, but they are still science fiction. The bigger immediate concern is the misuse of AI, either through negligence or malice.

AI Doomsday Planning

CSER and ASU held an event “Envisioning and Addressing Adverse AI Outcomes” that looked at multiple future scenarios where AIs could spawn social, economic and political disasters.

Matt Chessen foresees AI-enabled virtual reality becoming so addictive that much of humanity will give up on reality and stop breeding.

If you don’t want a Terminator scenario, don’t build Skynet.

Malicious Use of AI

Matt Chessen wrote about how an authoritarian regime might use an optimization algorithm and social scoring to control a population, and how it could get out of hand.

Personal Privacy, Information Security, Individual Access Control and the Future of Trust

Machine and Deep Learning systems require large amounts of data. Some of that data may be collected in private spaces like our homes, and we may reveal very intimate information to these systems. Emerging information fiduciary concepts — similar to restrictions on doctors and lawyers using client information for their own benefit — could be applied to AI and tech generally.

Data collection and use

Chatbots like Xiaoice are considered by many users to be a real-time, always available friend. Users tell these bots their intimate secrets and even proclaim love for them. Humans also tend to let their guard down when talking to AI personal assistants. Veterans are more likely to reveal sensitive information to a virtual therapist. AI interfaces will likely become popular in medicine and education, where sensitive information may be revealed and collected. This raises questions about how this very intimate data might be used by corporations or governments.

Companies may benefit from maintaining private data-sets, but citizens may benefit from public data-sets.

Excellent summary of threats and positive uses of big data tech.

Children’s privacy

Toys are increasingly integrating artificial intelligence systems. Parents may not understand that these systems are collecting data. Children likely do not have the sophistication to understand what they should and should not say to these systems and may disclose PII or very private information.

Trusting AI decision-making

How do we create trust in AI systems as we increasingly automate every aspect of our lives, including very personal communications like email? And what are the norms and liability when AI systems violate that trust?

Machine learning systems are more probabilistic than algorithmic and may not have auditable decision-trees. How can we trust the AI systems we use? What happens when systems — perhaps those that filter fake news — are in fact filtering out news with a certain point of view, enclosing us in an ideological bubble?
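
One partial answer to the auditability question is post-hoc explanation: probing a trained model to see which inputs actually drive its decisions. The sketch below is a generic illustration using scikit-learn's permutation importance on synthetic data; it is an assumed example, not a description of any particular deployed system.

```python
# Minimal sketch of one post-hoc auditing technique: permutation importance.
# The synthetic dataset and model are illustrative assumptions only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Explanations like this do not make a model's internals transparent, but they give auditors and policy-makers something concrete to inspect.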

IBM: learning to trust AI and robotics

AI and its integrity, availability and reliability

How do we prevent AI from being hacked, spoofed or fooled?

Evolving AI Lab research indicates deep learning image recognition tools are easily fooled.
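
To see why such fooling is possible, the sketch below illustrates one well-known attack, the fast gradient sign method: nudging each input value slightly in the direction that most increases the model's loss. The toy model, random "image" and epsilon value are stand-in assumptions for illustration; this is not the specific technique used in the Evolving AI Lab research.

```python
# Minimal sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial input. The tiny model and random "image" are stand-ins for
# illustration only; real attacks target trained image classifiers.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier standing in for an image-recognition model.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(model, x, true_label, eps=0.05):
    """Return x nudged by eps in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), true_label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)   # stand-in "image"
y = torch.tensor([3])          # its (assumed) correct class

x_adv = fgsm_perturb(model, x, y)
print("original prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

Against a trained classifier, perturbations of this kind can be imperceptible to humans yet still flip the model's prediction.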

AI, propaganda and disinformation

Much like high-frequency trading has transformed stock markets, high-frequency messaging may dramatically alter public opinion. Over the long term, stock prices are still considered measures of value, but over the short-to-medium term they are heavily influenced by algorithms that seek only to extract value through manipulative trading. Similarly, AI-driven bot networks may heavily manipulate public opinion on issues, muddling the truth, undermining democratic speech and drowning out civil discussions online. These tools will be precisely targeted to individuals based on their specific personality profiles.

AI and psychometrics

There are concerns that AI psychometric systems could be weaponized for political purposes.

Computational Propaganda

Politicalbots.org is a team of researchers investigating the impact of automated computer scripts (computational propaganda) on public life. This work includes analysis of how tools like social media bots are used to manipulate public opinion by megaphoning or repressing political content in various forms: disinformation, hate speech, fake news, political harassment, etc.

BotWatch is an online publication built to generate multidisciplinary discussion around bots. As bots become commonplace in our lives, BotWatch raises very basic human questions about meaning, creativity, language, and expression.

AI-enabled, machine-driven communication

Matt Chessen published an article describing how AI-enabled machine-driven communications tools (MADCOMs) will radically enhance computational propaganda. Machine-driven speech may drown out human speech online.

AI and countering trolling and fake news

Conversational AI: A project by the NY Times and Jigsaw designed to identify online harassment in comment sections. Conversational AI on GitHub.

Fake News Challenge is a grassroots effort of over 100 volunteers and 71 teams from academia and industry around the world. Our goal is to address the problem of fake news by organizing a competition to foster development of tools that help human fact checkers identify hoaxes and deliberate misinformation in news stories using machine learning, natural language processing and artificial intelligence.

Students at West Virginia University are working on AI tools to detect and combat fake news.

Law Enforcement, Security and Autonomous Weapons Systems

AI and autonomous weapons systems

AIs have been used in weapons like the Tomahawk missile for decades, but these systems are improving dramatically. Lethal autonomous weapons systems (LAWS) have the potential to operate — and choose to kill — fully autonomously once deployed. The power of these weapons raises the possibility of a new arms race and their autonomy raises international human rights and humanitarian law concerns. Opponents argue that LAWS will lack human judgement and context, and will be unable to judge proportionality — traits necessary to satisfy the law of war. Since these weapons can wait passively to strike, they also raise issues — similar to landmines — about inadvertently targeting civilians. Some opponents argue that LAWS armies will make it easier for advanced countries to fight wars since LAWS can reduce the risk of death to their own forces. Some proponents argue LAWS will enable highly precise targeting, reducing both the lethal force needed and civilian collateral damage. They also argue that human soldiers frequently fire on friendly forces, inadvertently target civilians, and use disproportionate force, and effective LAWS systems may act with more precision and discretion.

The Convention on Certain Conventional Weapons (CCW) in Geneva discusses LAWS issues.

The International Committee for Robot Arms Control — or ICRAC (pronounced “aikræk”) for short — is an international not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons.

The Campaign to Stop Killer Robots: NGO collective working to preemptively ban fully autonomous weapons.

UN Office for Disarmament Affairs has background on LAWS issues.

Dr. Heather Roff, researcher at ASU’s Global Security Initiative and New America writes about LAWS issues.

Defense Science Board Summer Study on Autonomy

AI and policing

AI enables police surveillance at scale, “matching thousands of photos from social media against photos from drivers’ license databases, passport databases, and other sources, then taking the results and crossing them with other kinds of records.”

The ACLU revealed and The Verge reported that police in Baltimore used a social media monitoring tool called Geofeedia, together with facial recognition and photographs shared on Instagram, Facebook, and Twitter, to identify and arrest protesters. The ACLU believes this tool is marketed to target people of color.

The ACLU suggests social media companies adopt “clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance.”

We need to develop standards for what is acceptable for law enforcement use of big data and AI, and how they will be held accountable for abuse.

Pre-crime

Companies like Hitachi are launching crime-prediction software, and there are concerns this could be used to arrest people before they have acted. Colorado-based Intrado sells police a service that instantly scans legal, business and social-media records for information about persons and circumstances that officers may encounter when responding to a 911 call at a specific address. AI can also predict suicide attempts and may eventually be used to intervene.

Economic and Humanitarian Issues

AI, automation and jobs

Some argue that AI-enabled automation is different from past industrial revolutions and will result in mass blue- and white-collar unemployment. Others argue that the nature of work will change but the number of jobs will not. Some argue AI will help mid-skill workers succeed in now-unfilled high-skill jobs. Most argue that economies need improved education and skill-building programs and enhanced job transition programs for people displaced by new technologies. Governments, industry and society will need to create new programs, regulations and standards to adjust for disruptions.

AI and economic inequality

AIs will likely create outsized economic gains for their creators. This could push additional income to capital rather than labor and result in increasing economic inequality. Rising incomes and the emergence of the global middle class over the last thirty years have been correlated with increasing economic and political liberalization. Increased inequality threatens these trends and could promote populist backlashes.

AI and disincentives to innovation

Standards could benefit large first movers and stifle innovation. Standards could also promote interoperability and ‘safe’ AI systems.

Open-source AI systems could stifle innovation. For example, TensorFlow is very useful but could create a homogenized group of practitioners. Or the availability of these systems could promote innovation. The balance is unknown.

In the United States, AI technologists are getting huge private sector offers out of college. This disincentivizes them from pursuing academia, where they may struggle to repay student loans, or startups, where the risk and uncertainty are much higher. Populist immigration restrictions may also inhibit the availability of AI talent that is in short supply.

AI for Development

Ai-d.org is a non-profit organization established to support research on AI for Development (AI-D). A focal point of current AI-D efforts is the coalescence and distribution of data sets in support of research.

Patrick Meier regularly blogs about drones for humanitarian activities.

AI and the Law

The International Association for AI and the Law is a nonprofit association devoted to promoting research and development in the field of AI and Law, with members throughout the world. IAAIL organizes a biennial conference (ICAIL), which provides a forum for the presentation and discussion of the latest research results and practical applications and stimulates interdisciplinary and international collaboration.

The International Bar Association issued a report detailing the gap between current legislation and new laws necessary for an emerging workplace reality. The IBA Global Employment Institute report assesses the law at different points in the automation cycle — from the developmental stage, when computerisation of an industry begins, to what workers may experience as AI becomes more prevalent, through to issues of responsibility when things go wrong.

Due process

Machine Bias in criminal sentencing: COMPAS, an AI risk-assessment tool used in sentencing, consistently scores black defendants as greater risks for re-offending than white defendants who committed similar or more serious crimes.

Legal liability

Reasonable foreseeability is a key factor for negligence. How do you determine whether an AI’s actions were reasonably foreseeable when machine-learning systems learn and adapt, and could produce results the developer didn’t anticipate? There may also be multiple AIs interacting in unexpected ways.

The EU Parliament asked the EC to propose liability rules on AI and robotics, and recommended a code of ethical conduct.

AI, Government and Regulation

There is a growing sense that some governments are not able to cope with today’s challenges. AI could help governments manage the growing difficulty of analysis and decision-making in an increasingly complex world. Or AI advances in the private sector could expose government’s shortcomings if it isn’t able to adapt to the future.

The White House released a report, ‘Preparing for the Future of Artificial Intelligence,’ and a companion “National Artificial Intelligence Research and Development Strategic Plan,” in 2016. The White House also co-hosted public workshops on AI policy areas and requested information from the public on AI issues.

Japan has pushed for basic rules on AI at the G7 meetings in 2016.

South Korea is developing a robot ethics charter.

AI, Human Interactions, Society and Humanity

Instead of humans programming software, AI bots may shape culture and thereby program human beings through the manipulation of our information space.

AI and affective computing

Machines are becoming effective at both portraying realistic human emotions and detecting human emotions in video, text and speech. This could enable better human-computer interactions, but could also be used to manipulate people. Also, some people may not like emotional machines. Building morality and values into AI systems will be critical if we want their decisions to reflect our laws, policies, and virtues.

Soul Machines works on humanizing the interface between man and machines.

Affectiva is leading the effort to emotion-enable technology.

AI and love

Dr. Julia Mossbridge, IONS Innovations Lab, leads work on developing AIs that have a loving, caring outlook towards human beings.

Matt McMullen’s quest for AI-enabled sex robots.

Matt Chessen explores whether AI partners will be preferable to humans and will contribute to human extinction.

AIs and education

Computing: The Human Experience, a project by Grady Booch and others on how computing has changed humanity.

Human-Computer Interactions

ArticuLab, Human-Computer Interaction Institute at Carnegie Mellon University: focuses on human-computer interactions, including AI.

AIs and human dignity

The concern is that people may simply take orders from their AI system that is directing an enterprise. How do we preserve human dignity so humans and AIs work together, and workers are not simply minions for AIs that make extremely complex business decisions?

What are the ethics of hiring human beings to work jobs specifically so AIs can learn how to do the job and replace them? Are these silent workers protected? Are US companies utilizing fair labor practices when outsourcing these services from abroad?

Humans may not treat their human-like AIs well. How does this negative behavior translate over into interactions with human beings? And what are the implications when a human is in the loop with the AI system and must face hidden abuse? Is abuse to AI systems a marker for mental health issues or potential abuse elsewhere?

Some principles: Treat bots like you would treat a human being. There may be a human being on the other end of the bot curating its behavior. Your speech may also be training the bot how to interact with other people. (See Tay for how this can go wrong).

What are the implications when AI virtual agents further shield us from interactions with other human beings?

Rights for AI systems

The issue is whether sentient or human-like AI systems deserve any rights. This is rather speculative, since AIs are nowhere near this level of capability.

AIs transforming what it means to be human

Elon Musk says humans need to become cyborgs to stay relevant.

Anupam Rastogi argues that what we call AI is actually ‘Intelligence Augmentation’ for humans and true AI is still in the future.

AI and addiction

AIs could make things like social media and video games more addictive due to psychometric personalization and machine learning.

AI Policy events

2016

Overview of the US White House AI workshops:

February 22, 2016, San Francisco, California, Workshop on the Ethics of Online Experimentation: This workshop aims to draw together researchers from inside and outside of the computer science community to jointly identify and discuss the ethical issues raised by the specific kinds of experiments that are a routine part of running a production online service.

May 24, 2016: Legal and Governance Implications of Artificial Intelligence in Seattle, WA

June 7, 2016: Artificial Intelligence for Social Good in Washington, DC

June 28, 2016: Safety and Control for Artificial Intelligence in Pittsburgh, PA

July 7: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term in New York City

“CCITT/ITU-T 60th Anniversary Talks on Artificial Intelligence (AI)” ITU WTSA-16 (26 October 2016 — Hammamet, Tunisia)

“AI: is the future finally here?” ITU Telecom World 2016 (16 September 2016 — Bangkok, Thailand)

“Artificial Intelligence for a sustainable future: friendly companion or threatening conqueror?” ITU Kaleidoscope 2016 Jules Verne’s corner (14–16 September 2016 — Bangkok, Thailand)

November 16–19 Data Transparency Lab 2016: The DTL is an inter-institutional collaboration, seeking to create a global community of technologists, researchers, policymakers and industry representatives working to advance online personal data transparency through scientific research and design.

November 18, New York, NY 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning: FAT/ML Co-located with Data Transparency Lab 2016. This workshop aims to bring together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning.

November 19, 2016, New York University Law School, Workshop on Data and Algorithmic Transparency (DAT’16): a forum for academics, industry practitioners, regulators, and policy makers to come together and discuss issues related to the increasing role that “big data” algorithms play in our society.

Princeton Envision Conference: A student-run conference to bring together current and future leaders in harnessing technology for a brighter future; sub-section on AI. Dec 2–4, 2016 Princeton University

8 December, 2016 Machine Learning and the Law NIPS Symposium Barcelona, Spain: This symposium will explore the key themes of privacy, transparency, accountability and fairness specifically as they relate to the legal treatment and regulation of algorithms and data. Our primary goals are (i) to inform our community about important current and ongoing legislation (e.g. the EU’s General Data Protection Regulation); and (ii) to bring together the legal and technical communities to help form better policy in the future.

December 12, 2016 — Barcelona The 1st IEEE ICDM International Workshop on Privacy and Discrimination in Data Mining

2017

January 5–8, Asilomar, CA, Beneficial AI: conference hosted by the Future of Life Institute. The 2017 conference produced the Asilomar Principles, which range from research strategies to data rights to future issues including potential super-intelligence. (Summary version)

January 19–20, 2017, Philadelphia Fairness for Digital Infrastructure at UPenn.

4th February 2017, 3rd International Workshop on AI, Ethics and Society, San Francisco, USA: The focus of this workshop is on the ethical and societal implications of building AI systems.

Feb 19–20, Oxford. Bad Actors and AI Workshop: FHI hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence

April 4, Valencia, Spain: the Ethics in NLP workshop at EACL 2017 focuses on ethical issues surrounding natural language processing.

April 4, Perth, Australia, FAT/WEB: Workshop on Fairness, Accountability, and Transparency on the Web. The objective of this full-day workshop is to study and discuss the problems and solutions with algorithmic fairness, accountability, and transparency of models in the context of web-based services.

May 17–19, 2017, Phoenix, AZ The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics.

June 7–9 in Geneva, AI for Good Global Summit: an event organized by ITU and XPRIZE for industry, academia and civil society to work together to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity. The Summit aims to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and others.

AI Policy Organizations

The World Economic Forum’s Council on the Future of AI and Robotics will explore how developments in Artificial Intelligence and Robotics could impact industry, governments and society in the future, and design innovative governance models that ensure that their benefits are maximized and the associated risks kept under control.

Data & Society’s Intelligence and Autonomy Initiative develops policy research connecting the dots between robots, algorithms and automation. Our goal is to reframe debates around the rise of machine intelligence.

AI Now Initiative: Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI’s social impacts. The AI Now Report provides recommendations that can help ensure AI is more fair and equitable.

The USC Center for Artificial Intelligence in Society’s mission is to conduct research in Artificial Intelligence to help solve the most difficult social problems facing our world.

Berkman Klein Center for Internet and Society at Harvard University: The Berkman Klein Center and the MIT Media Lab will act as anchor academic institutions for the Ethics and Governance of Artificial Intelligence Fund and develop a range of activities, research, tools, and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities. We will work together to strengthen existing and form new interdisciplinary human networks and institutional collaborations, and serve as a collaborative platform where stakeholders working across disciplines, sectors, and geographies can meet, engage, learn, and share.

The Stanford One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.

The MIT Media Lab’s AI, Ethics and Governance Project will support social scientists, philosophers, and policy and legal scholars who undertake research that aims to impact how artificial intelligence technologies are designed, implemented, understood, and held accountable.

The MIT Laboratory for Social Machines develops data science methods — primarily based on natural language processing, network science, and machine learning — to map and analyze social systems, and designs tools that enable new forms of human networks for positive change.

MIT Solid (derived from “social linked data”) is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols.

The Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. Partners include Apple, Amazon, Facebook, Google, Microsoft, IBM, ACLU and OpenAI.

OpenAI is a non-profit artificial intelligence research company. Their mission is to build safe AI and ensure AI’s benefits are as widely and evenly distributed as possible, advancing digital intelligence in the way that is most likely to benefit humanity as a whole.

University of Wyoming Evolving AI Lab: focuses on evolution in AI and other bio-inspired techniques

The Future of Life Institute’s mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Hosts the Beneficial AI Conference.

The Machine Intelligence Research Institute is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: The purpose of this Initiative is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.

AI Now is researching the social impacts of artificial intelligence now to ensure a more equitable future. The workshop produced a report that summarizes many of the key social and economic issues with AI.

Fairness, Accountability and Transparency in Machine Learning community.

UNICRI is creating a center for AI and Robotics in The Hague, headed by Irakli Beridze.

The Allen Institute for Artificial Intelligence: AI2, founded by Paul Allen and led by Dr. Oren Etzioni, conducts high-impact research and engineering to tackle key problems in artificial intelligence.

The Future of Humanity Institute (FHI) houses the Strategic AI Research Centre, a joint Oxford-Cambridge initiative developing strategies and tools to ensure artificial intelligence (AI) remains safe and beneficial.

The Cambridge Center for the Study of Existential Risk: goals are to significantly advance the state of research on AI safety protocol and risk, and to inform industry leaders and policy makers on appropriate strategies and regulations to allow the benefits of AI advances to be safely realised.

The Alan Turing Institute Data Ethics Group: The group will work in collaboration with the broader data science community and will support public dialogue on relevant topics; there will be open calls for participation in workshops later this year, as well as public events.

Leverhulme Centre for the Future of Intelligence: Our mission at the Leverhulme Centre for the Future of Intelligence (CFI) is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades.

AI Austin: Encouraging practical and responsible design, development and use of Artificial Intelligence to expand the opportunities and minimize harm in both local and global communities.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: An incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies

UC Berkeley Center for Human-Compatible AI: The goal of CHAI is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.

Industry organizations

The Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence.

Major Corporate Researchers

Facebook Artificial Intelligence Researchers (FAIR) seek to understand and develop systems with human level intelligence by advancing the longer-term academic problems surrounding AI.

Google: TensorFlow: An open-source software library for Machine Intelligence. DeepMind: We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

IBM: Cognitive Computing and IBM Watson technologies: Watson products and APIs can understand all forms of data to reveal business-critical insights, and bring the power of cognitive computing to your organization.

Microsoft: AI research group and a major focus on chatbot technology and frameworks. CNTK is an open source deep learning framework.

Amazon: Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies within the reach of every developer.

AI News Sites

Scout.ai combines science fiction and journalism to bring you frequent online dispatches on the future of technology.

Kurzweilai.net covers AI and emerging technologies.

Import AI is curated by Jack Clark (now with OpenAI) and is a weekly newsletter of AI tech and policy developments.

Law and AI: A Blog Devoted to Examining the Law of Artificial Intelligence, AI in Law, and AI Policy

Singularity Hub AI Archives: news about technology and policy

AI Policy People

Generally

Eric Horvitz, veteran AI scientist, former president of the Association for the Advancement of Artificial Intelligence

Russ Altman, a professor of bioengineering and computer science at Stanford

Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;

Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;

Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;

Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;

Alan Mackworth, a professor of computer science at the University of British Columbia and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

Tenzin Priyadarshi, Director, Ethics Initiative, MIT Media Lab

Iyad Rahwan, Scalable Cooperation, Ethics, MIT Media Lab

Kate Crawford (Microsoft Research & New York University)

Meredith Whittaker (Google Open Research)

Madeleine Clare Elish (Columbia University)

Solon Barocas (Microsoft Research)

Aaron Plasek (Columbia University)

Kadija Ferryman (The New School)

Phil Howard, Politicalbots.org, Oxford

Sam Woolley, Politicalbots.org, Univ Washington, Jigsaw

Nathan Benaich (@NathanBenaich) — Tech investor with expertise in AI, previously a Partner at Playfair Capital and research scientist at the University of Cambridge. Organizes the London AI meet up and the annual Research and Applied AI Summit.

Joanna Bryson (@j2bryson) — Computer scientist at the University of Bath, and an affiliate at the Princeton Center for Information Technology Policy. She has expertise in designing intelligent systems and uses working AI systems to help understand natural intelligence.

Alex Champandard (@alexjc) — Senior gaming AI programmer, and founder of Creative.AI, a project that explores how AI can help perform creative tasks. Also directs the nucl.ai conference that helps creative industries use AI.

The CyberCode Twins (@cybercodetwins) — #mitCCbc Alumni, #AngelHackHACKcelerator. America and Penelope Lopez are known as “The CyberCode Twins.” The twin sisters are on a mission to make communities safer through wearable tech and mobile apps. Follow their journey on Twitter and on their YouTube channel.

Oren Etzioni (@etzioni) — CEO at the Allen Institute for AI, which conducts AI research and engineering to contribute to the good of humanity. Also a professor at the University of Washington’s computer science department, and a partner at investment firm Madrona.

John C. Havens (@johnchavens) — Executive director of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. He is also the author of Heartificial Intelligence, a book about building humanity into machines.

Patrick Lin: director of the Ethics + Emerging Sciences Group at California Polytechnic State University

Ethics

Paula Boddington, Oxford

Miles Brundage, Oxford

Joanna Bryson, Bath

Judy Goldsmith, Kentucky

Ben Kuipers, Michigan

Toby Walsh (chair), UNSW Australia | Data61 | TU Berlin

Data Ethics

Prof Luciano Floridi, Professor of Philosophy and Ethics of Information, Oxford Internet Institute, University of Oxford

Dr Jonathan Cave, Economist Member, UK Regulatory Policy Committee; Senior Fellow in Economics, University of Warwick

Dr Jennifer S. Davis, Faculty of Law and Centre for Intellectual Property and Information Law, University of Cambridge

Dr Phyllis Illari, Senior Lecturer in Philosophy, Department of Science and Technology Studies, UCL

Charles Raab, Professorial Fellow in Politics and International Relations, School of Social and Political Science, University of Edinburgh

Prof Burkhard Schafer, Professor of Computational Legal Theory, Law School, University of Edinburgh

ML and Law

Solon Barocas Microsoft Research

Kay Firth-Butterfield University of Texas, Lucid AI. First public AI Ethics Advisory Panel, Professor, Author. Law and Ethics of AI. AI and Social Justice

Krishna P. Gummadi Max Planck Institute for Software Systems

Sara Hajian Eurecat-Tech Centre Catalonia

Mireille Hildebrandt Vrije Universiteit Brussel

Ian Kerr University of Ottawa

Neil Lawrence University of Sheffield

Deirdre Mulligan UC Berkeley

Aaron Roth University of Pennsylvania

Yair Zick National University of Singapore

— —

About the author:

I am a State Department Foreign Service Officer on a fellowship at the George Washington University where I am studying artificial intelligence. Any opinions in this document have been collated from other sources or are personal views and do not represent the opinions of the U.S. Government, Department of State or any other organization.
