Assembly: Building An Interdisciplinary Project From The Ground Up
by Hilary Ross
From March to June 2019, the Berkman Klein Center and MIT Media Lab are co-hosting the third iteration of the Assembly program. Assembly brings together a small cohort of technologists, managers, policymakers, and other professionals to confront some of tech’s biggest problems. This year, Assembly is focused on the ethics and governance of AI.
During Assembly, the cohort learns together and creates projects that offer ways to move forward on thorny AI ethics problems. After arriving at Harvard from their respective homes, the program kicks off with two weeks of intensive team building, learning, and project ideation. The remaining twelve weeks are focused on project development. During these weeks, participants return to their jobs and work together remotely.
Since we’re halfway through this year’s program, we thought we’d share what we’ve been doing — for readers interested in AI ethics, participating in the next version of Assembly, or even hosting a similar program. Over the past three years, we’ve iterated on Assembly’s model and structure. This post discusses how we set up participants to succeed in this year’s program, and highlights nascent Assembly project ideas.
The First Two Weeks
Days 1 and 2: Team Building
The opening days of the program emphasized introductory and team-building activities, so the cohort would start to feel comfortable learning and working together. For example, in one icebreaker activity, the cohort had a chance to ask one question of each participant. B Cavello shared, “As a project manager, I was keenly interested in finding out what topics interested people. What surprised and delighted me were the other types of philosophical and personal questions people asked — that I had not only not thought to ask, but caused me to learn about myself.”
Given that Assembly brings participants together across disciplines, it was critical to make space to surface differing assumptions and language. Two facilitators from Harvard’s Bok Center for Teaching and Learning led a session highlighting how differences in academic training influence how we identify and solve problems. The participants read an AI ethics paper together, in pairs, and narrated how they interpreted it. What did they find most important? How did they read citations? Did they dig into charts, or ignore them? The exercise made these disciplinary differences concrete.
Days 3–5: Learning and Ideation
During the first week, the group learned from numerous speakers. Given that AI ethics is an expansive and complicated problem space, we wanted the cohort to hear from researchers and practitioners with different viewpoints and expertise.
To kick things off, Professors Jonathan Zittrain and Joi Ito, the program’s faculty leads, led a mini-course on the overall state of the AI field, the design and training of AI systems, and the deployment and governance of AI systems.
“The lectures from JZ and Joi were really essential because they brought the ethical issues in AI into focus,” participant Hong Qu told us. “They also clearly laid out the big unanswered questions, such as loci of responsibility and agency, how AI can perpetuate and amplify harm to vulnerable groups, and why ideals like fairness, interpretability, and explainability are elusive.”
Other speakers included Professor Krzysztof Gajos from the Harvard School of Engineering and Applied Sciences, Kade Crockford from the ACLU of Massachusetts, and Miranda Bogen from Upturn. The breadth of topics covered gave the project teams multiple points of entry to their specific problems.
The remaining time during the first week was focused on project ideation, facilitated by our staff team. First, the cohort brainstormed specific problem spaces where they wanted to work. From there, they identified potential intervention ideas. By the end of week one, Assembly staff helped the cohort divide into four project teams, based on shared interest in problem spaces. As participant Iason Gabriel put it, “We went through a lot of Post-its!”
Days 6–10: Scoping and Refining
The second week was entirely devoted to project development, starting with a creative workshop facilitated by Sarah Newman. The teams built initial prototypes of their ideas using a variety of materials, like wire, colored paper, string, and grommets.
Over the next four days — through many discussions, post-it exercises, whiteboard drawings, and feedback sessions — each group explored their problem space and began to generate more specific ideas for a prototype, provocation, or intervention.
During this formative project development week, teams had multiple opportunities for feedback, including from program staff, Berkman Klein Center Fellows, IDEO CoLab designers, and Assembly’s expert advisors. In addition, many teams also set up individual meetings with Berkman Klein Center and MIT Media Lab researchers, practitioners, and experts.
By the end of the second week, the teams were focusing on four problem areas:
- Bias in natural language processing (NLP) systems
- Understanding and contextualizing the positionality of machine learning systems
- Promoting the equitable distribution of wealth from data and its use in AI, particularly focusing on underrepresented groups
- Educating military and government decision-makers about the challenges, limitations, and considerations for implementing AI in the contexts of governance and surveillance
At the end of the two weeks, most of the cohort returned home to their full-time jobs. During the project development weeks, they spend roughly twenty hours a week working remotely with their Assembly teams.
We’re five weeks into the project development period, with seven more to go. During these weeks, teams regularly meet to collaborate remotely, narrow their project question, further research the problem space, and develop their project output.
Currently, teams are exploring:
- Creating an architecture diagram that maps the chatbot landscape, to better understand how NLP systems are created and where harm might be introduced into these systems.
- Crafting a workshop for machine learning (ML) practitioners to highlight how ML systems hold positionality, explain classification caveats, and illustrate the resulting scaled implications for ML/AI systems.
- Designing a website about the general issues of misrepresentation and underrepresentation in datasets, focusing on the implications for diversity and equity, particularly related to hiring.
- Mapping the networks that contribute to or build AI surveillance tools.
This is a deeply iterative process, so teams continue to get structured feedback from their fellow cohort members, Assembly staff, and program advisors.
We’re so glad to be on this journey with Assemblers, and we look forward to learning from their final projects.