PAIR Symposium 2020: Recap

People + AI Research @ Google
People + AI Research
3 min read · Nov 20, 2020

On Wednesday, November 18, PAIR hosted a virtual symposium focused on boundary objects for participatory machine learning.

In three sessions, hosted from London, Boston, and Seattle, we featured perspectives from artists, researchers, policy makers, data scientists, designers, engineers, and more on what boundary objects for machine learning can be, and what it looks like to put them into practice.

London

The London session, hosted by PAIR co-founder Fernanda Viégas, explored boundary objects in technology and included the following talks, as well as a panel discussion:

Boundary Objects and Coordination for Creativity and Innovation

Charlotte Lee (Associate Professor, Human Centered Design & Engineering, University of Washington) discussed the past and future of boundary objects research and how boundary objects can help us think about participatory ML in support of coordination for innovation and creativity.

Ethics Parallel Research

Annelien Bredenoord (Professor of Ethics of Biomedical Innovation, University Medical Center Utrecht) discussed ethics parallel research, which is an approach for the early ethical evaluation of biomedical technologies. You can find more of her work on this topic here.

Boundary Object: Counterdata Sets

Catherine D’Ignazio (Assistant Professor of Urban Science and Planning at MIT) discussed counterdata sets, arguing that they challenge hegemonic, oppressive definitions of social phenomena as they have been described (or ignored) in mainstream datasets. You can find more of Catherine’s work on this topic at datafeminism.io.

Human Data Interaction in AI: “Everyone wants to do the model work, not the data work”

Nithya Sambasivan (Staff HCI Researcher, Google) discussed human-data interaction in AI, arguing that data is the critical infrastructure of AI, and yet data work and workers are under-valued relative to novel model building, leading to data cascades and downstream harms.

Boston

The Boston session, hosted by PAIR co-founder Martin Wattenberg, explored objective functions as boundary objects and featured the following talks as well as a panel discussion:

A twist on loss functions as boundary objects

Ed Chi (Principal Scientist, Google Brain) discussed neural architectures as boundary objects — and learning objects.

Intentional ignorance is a value-laden choice

Margaret Mitchell (Research Scientist, Ethical AI, Google) discussed the effect of intentional decisions — or a lack thereof — in machine learning model outcomes.

Algorithms and Interpretations: Two Stories

Sendhil Mullainathan (Roman Family University Professor of Computation and Behavioral Science at University of Chicago) discussed algorithm interpretability for discovery.

Seattle

The Seattle session, hosted by PAIR co-founder Jess Holbrook, explored participatory ML boundary objects in the field, and featured the following talks as well as a panel discussion:

Causal maps as boundary objects for understanding AI development “territories”

Donald Martin (Social Impact Technology Strategist, Google) discussed causal maps as boundary objects for understanding AI development “territories”, arguing that to navigate the perils and promises of AI together, the Tech/Research and User/Stakeholder communities need shared understandings of their respective “theories of action.” You can find more of his research here.

Mosaic Virus: using datasets as part of a creative practise

Anna Ridler (Independent Artist & Researcher) discussed the creation of her work Mosaic Virus, for which she photographed and hand classified a dataset of ten thousand tulips.

Simulating Intelligence: Prototyping for ML

Mindy DelliCarpini (Head of UX Engineering, Google Search & Assistant) discussed prototyping principles and approaches for machine learning. You can read more about Mindy’s work here.


People + AI Research (PAIR) is a multidisciplinary team at Google that explores the human side of AI.