Why We’re Hosting Assembly
Taking on the Challenges to Social Equality Posed by AI
Technologies based on artificial intelligence (AI) are increasingly embedded in our daily lives and society. AI moderates content on social networking platforms, powers our virtual assistants, and recognizes our faces in photographs and video feeds. In the coming decades, it will likely drive our cars, steer our justice system, and make decisions about our health, our education, and our employment status.
The pervasive use of AI-based technologies creates the potential for dramatic social impact on both local and global scales. To ensure that AI brings more benefit than harm, policymakers, engineers, and other relevant experts need to work together. Unfortunately, true collaboration across these sectors is difficult. Most policymakers don’t understand AI-based technologies, and engineers often fail to identify the potential negative social consequences of their projects. To prepare our global society for the rise of AI, we must find a way to bridge these disciplinary gaps and make smarter decisions.
Assembly Gathers an Interdisciplinary Cohort to Confront Emerging AI Ethics Problems
At the Berkman Klein Center, we’re encouraging interdisciplinary collaboration and knowledge transfer through Assembly, an annual program that gathers technologists, policymakers, developers, managers, and other professionals to confront emerging problems related to the ethics and governance of artificial intelligence. Assembly is a collaboration with the MIT Media Lab and a part of our broader joint Ethics and Governance of Artificial Intelligence Initiative.
Each year, the cohort and program are designed to bring together participants from a range of backgrounds. In 2018, the nineteen-person Assembly cohort included academic AI researchers, product managers, communications experts, a creative researcher, machine learning engineers, public policy experts, data scientists, and a historian.
Assembly Participants Grapple With Societally Pressing Questions in the Field of Artificial Intelligence
The four-month Assembly program has three major components: 1) an ideation process, 2) a short course led by Jonathan Zittrain and Joi Ito, and 3) a twelve-week collaborative development period, where participants divide into teams to develop concrete solutions to real problems.
In 2018, the cohort explored issues and questions such as:
- How can local policymakers better understand the problems, opportunities, and questions that are raised by the use of AI within their communities?
- How can we improve the accuracy and fairness of AI algorithms?
- What choices do we want to make, as individuals and as a society, regarding how our data, images, and facial recognition will be used?
The cohort approached these and many other questions from different disciplines and with varying methodologies. By the end of four months, participants had created six projects, including “EqualAIs,” a privacy tool that circumvents facial recognition systems using adversarial attacks, and the “Dataset Nutrition Label Project,” a diagnostic label for datasets that aims to drive higher data standards.
After the program ends, teams are encouraged to move forward with their projects. For example, the Dataset Nutrition Label Project team has continued to collaborate: writing a paper, presenting the project at conferences, and exploring international partnerships.
Join Assembly 2019’s Cohort
We look forward to bringing a third Assembly cohort to Cambridge in 2019 to develop solutions to artificial intelligence ethics and governance problems. Next year’s program will run from March 11 to June 15, 2019. If you’re a professional with an interest in artificial intelligence, ethics, and governance, learn more and apply: http://bkmla.org/apply.html. Applications close Sunday, September 2, 2018.