On #AINow: Beyond Transparency, what do design and ethics mean in algorithms and artificial intelligence?

Are we giving AI engineers and creators the right tools to be ethical and create ethical products, algorithms, and software?

caroline sinders
7 min read · Jul 14, 2016

Last Friday, at NYU’s Skirball Center, the White House hosted a symposium on artificial intelligence, ethics, health, and machine learning, led by Kate Crawford, a principal researcher at Microsoft Research, and Meredith Whittaker, lead of the Google Open Source Research Group. The daytime events (invitation only) consisted of lightning talks from researchers at IBM Watson and Microsoft, as well as policy makers, lawyers, artists, and data visualizers such as Jer Thorp (blprnt). It was an incredibly diverse crowd, in career, gender, and race, and that was something the organizers had intended and carefully curated for the event itself. To germinate better discussions around AI, and to make better artificial intelligence, the group itself had better be diverse, and #AINow more than succeeded at that. The day broke off into whiteboarding and post-it note sessions and culminated in two later sessions open to the public, featuring everyone from White House tech liaisons to the head of Google DeepMind, as well as well-known academics and Intel’s Genevieve Bell. But the provocations throughout the day seemed to be: what do ethics in big data and algorithms mean? And how do we create these systems ethically as well as transparently?

Deputy Mayor Alicia Glenn opened the public event by focusing on machine learning. “Machine learning and automation are shaping the marketplaces, institutions and consumers of today…At best, it can promote equity…[but] it can also discriminate, intended or unintended…” She raised an incredibly important point: “What are the prospects of ensuring that the people who are the developers of AI look like the real people? We don’t want a disparity between the developers and the people. Can a diverse workforce enforce this? We are at a critical moment where we can look at these questions…” Glenn stated. #AINow focused intensely on these issues: diversity and its relationship to algorithmic creation and intervention in users’ lives. Jacky Alciné’s discovery of the Google Image algorithm misidentifying black people as gorillas was mentioned, specifically to highlight the trauma, mistakes, and pain that happen when there is a dearth of diversity among the creators of algorithms. Glenn’s statements highlight just how deeply diversity affects creation.

Nicole Wong, White House Deputy Chief Technology Officer, brought up a 90-day study she led, the question of what differentiates big data from big analytics, and the collection of large volumes of data about users. Where was the users’ consent in that data collection, and did users understand it? She went further, highlighting that part of the problem with machine learning is having data corpuses full of bias, and those data sets reinforcing those biases. Data isn’t perfect, nor is it neutral. I took that to mean data sets that are missing key data, i.e. data sets that are training machine learning algorithms but are not actually very diverse to begin with. An example of those missing data sets in practice would be training image recognition software only on white faces (cough, cough, Google). As a side note, the conference seemed to actually cover algorithms and machine learning more than artificial intelligence.
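To make that “missing data” point concrete, here is a minimal sketch of the kind of audit that could surface a skewed corpus before it ever trains a model. Everything in it is my own illustrative assumption, not anything presented at the symposium: the manifest format, the demographic labels, and the 20% threshold are all hypothetical.

```python
from collections import Counter

# Hypothetical training manifest: one (image_path, demographic_label) pair per record.
# The labels and threshold are illustrative assumptions, not a real dataset.
training_records = [
    ("img_0001.jpg", "lighter_skin"),
    ("img_0002.jpg", "lighter_skin"),
    ("img_0003.jpg", "lighter_skin"),
    ("img_0004.jpg", "darker_skin"),
    # ...thousands more records in practice
]

def audit_balance(records, min_share=0.2):
    """Flag demographic groups that fall below a minimum share of the corpus."""
    counts = Counter(label for _, label in records)
    total = sum(counts.values())
    for label, count in sorted(counts.items()):
        share = count / total
        status = "UNDER-REPRESENTED" if share < min_share else "ok"
        print(f"{label}: {count}/{total} ({share:.1%}) {status}")

audit_balance(training_records)
```

A check this crude obviously can’t fix a biased corpus, but it makes the imbalance visible to the people building with it, which is exactly the kind of transparency the symposium kept circling around.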

R. David Edelman, Special Assistant to the President for Economic and Technology Policy, brought up police bots and chat bots contesting tickets. He used this as an example of a move towards automation that is not a threat to the labor industry but rather a sign of how American culture and society need to radically shift their idea of continuing education. “It’s telling us something about our education system: we live in a world right now, with the education we gain…that is going to carry us for 40 years, and that’s an outdated notion. We need to reconcile that. The reality is we need to de-stigmatize career education,” Edelman stated. The emerging and rapid adoption of artificial intelligence and automation is creating a new realization that careers are much more flexible than we have ever conceived, and that where we start our careers and education will radically change what we do, and how we do it, over time. That is, if you start off in one industry, you’ll most likely move to another a decade later and will need new skills to enter that industry, as opposed to gaining skills in your early 20s with the expectation that those skills will last until retirement age, whenever that will be. Artificial intelligence ethics is a labor problem and an educational problem, as well as a code problem.

Yann LeCun, from Facebook’s AI group, talked about the rise of interest in artificial intelligence and attributed it to the general knowledge and fanfare around DeepMind, an attribution I agree with. LeCun highlighted the fundamental issues we face as machine learning practitioners: there is still a long way to go in making better machine learning algorithms and, perhaps, in mimicking human thought. “What I’m excited about from the technical side is the idea that machines can observe the world and learn by observing the world, and learn by being taught explicitly what to do. What makes machines much more useful to us is to understand the world and fill in the blanks…we want machines to have a deep knowledge of the world but we don’t have a solution…” LeCun stated. I wonder, how do you teach the world, or culture, to a machine? Can culture be distilled into a database, a series of queries, or into code?
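LeCun’s “fill in the blanks” framing maps onto what machine learning practitioners call predictive or self-supervised learning: the training signal is simply what the data itself says comes next. Here is a toy sketch of that idea under my own assumptions, using nothing but word co-occurrence counts from a three-sentence corpus; it is an illustration of the concept, not anything LeCun presented.

```python
from collections import Counter, defaultdict

# Toy corpus; "learning by observing" here is just co-occurrence statistics.
corpus = [
    "machines learn by observing the world",
    "machines learn by filling in the blanks",
    "people learn by observing the world",
]

# Learn which word tends to follow each pair of preceding words.
context_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(2, len(words)):
        context_counts[(words[i - 2], words[i - 1])][words[i]] += 1

def fill_in_the_blank(w1, w2):
    """Predict the most likely next word given two context words."""
    candidates = context_counts.get((w1, w2))
    return candidates.most_common(1)[0][0] if candidates else None

print(fill_in_the_blank("learn", "by"))  # -> "observing" (seen twice vs. once)
```

Even this tiny example hints at LeCun’s caveat: the machine only “knows” what its corpus shows it, which is why deep knowledge of the world remains an unsolved problem, and why the makeup of the corpus matters so much.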

But my favorite quote was this one: “I want my thinking machine, I want everything technology can do…I want us to have a fantastic technical today, tomorrow, and the future. I want every success out of technology. The problem isn’t the technology side; the problem is that it’s out of pace with society and we live in a technocracy. Every arbitrary decision made by a designer in Silicon Valley affects what we do…” said Latanya Sweeney, professor of government and technology at Harvard University.

and this one:

“Machine learning [is] described as sophisticated, iterative adjustments of parameters. It would be worked out in context and relationship to…and informed by the knowledge, skill, and practices of those into whose lives it becomes a part,” said Lucy Suchman, a leading HCI and design thinker and professor at Lancaster University.

As a designer, I took Sweeney’s quote and Suchman’s provocations as a call to arms, and they were the first speakers to mention design specifically at the symposium. Numerous speakers expounded upon the need for transparency and ethical parameters around algorithms, artificial intelligence, and machine learning. But what does that mean in practice? How do you make algorithms transparent? Is it open source code? What about those who can’t read code? When I hear the word ‘transparency,’ I immediately start to think: how do we do that, what does it look like, and what does it feel like to interact with? What would transparency around an algorithm be? Nicole Wong mentioned a need for user consent: what does consent look like for algorithms? Is it a checkbox agreeing to data collection, or a pop-up window saying “hi, we are collecting this X thing, turn back now or we are taking your dataz”?

The need for transparency in algorithms is a design problem. Design is the very specific and literal explanation that articulates to users what a thing is doing, how it is being done, what data is being captured, and when. Design can obfuscate or reveal; design can make things transparent or opaque. That articulation may be hyperbolic, a white lie, or an incredibly bald-faced lie about what the product/algorithm/thing may or may not do. However, design is the thing every user interacts with in every product, and design is the manifestation of code into a thing. Design is inherently political, because it takes the ideas created in code and makes them into an understandable thing for people.
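As a thought experiment on what that transparency might look like in practice, here is a minimal sketch of a machine-readable data-collection disclosure, the kind of structure a design could render as plain-language UI instead of burying in a terms-of-service page. Every field and string in it is my own hypothetical assumption, not a standard or anything proposed at #AINow.

```python
from dataclasses import dataclass

# Hypothetical disclosure record: each field is an illustrative assumption,
# meant to be rendered as plain-language UI, not buried in legal text.
@dataclass
class DataCollectionDisclosure:
    what: str        # what is collected, in plain language
    why: str         # what the algorithm does with it
    retention: str   # how long it is kept
    opt_out: bool    # whether the user can decline and still use the product

disclosures = [
    DataCollectionDisclosure(
        what="The text of your messages",
        why="Trains the suggestion model that autocompletes your replies",
        retention="90 days",
        opt_out=True,
    ),
]

# A design could surface this as a consent prompt rather than a checkbox wall.
for d in disclosures:
    print(f"We collect: {d.what}\nBecause: {d.why}\nKept for: {d.retention}\n"
          f"You can opt out: {'yes' if d.opt_out else 'no'}\n")
```

The point isn’t the data structure; it’s that once the disclosure exists as a first-class object in the code, the designer can choose to reveal it honestly, which is exactly the political power of design described above.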

#AINow had my head spinning with ideas and provocations, but I also wondered: how many designers versus technologists versus policy makers participated in the event? How many designers, in general, were like me, attending this event, open to the public, after work? How many designers are ethically minded, working in machine learning, thinking about flexible interfaces, growing data corpuses, and transparency of data collection, and curious about the parameters of product design when it intersects with algorithms? Should we have guidelines, awareness, or our own ethical parameters for when an algorithm plus a data set probably surveils our users? I know I’m sometimes the only designer in the room saying “hey, X Y Z is probably not a good idea, because that algorithm does this, that form field collects this kind of data, and it’s creating false patterns and a misconstrued greater data set.”

Genevieve Bell noted at the conference, “For AI to be successful, it’s not just engineers and computer scientists talking to each other; it involves policy, design, art, psychology, philosophy. There is something amazing about imagining that confluence of conversations.” A diversity of conversations in setting parameters around AI will create better guidelines for understanding what AI does well, what it does poorly, and what it does dangerously. To Bell’s statement, I ask: what can we, the designers, the makers of the products that use machine learning to intervene in people’s daily lives, do? How do we design ethically and transparently for people? How do we take hard technical concepts and distill them for a variety, a majority, of users without hurting, subverting, or deceiving our users with their data? Mustafa Suleyman mentioned at #AINow, “we should avoid talking about AI as a person. These are machine learning systems, which we designed, control, and direct…” Machine learning is person-made, person-led, person-taught, and person-designed. There are dark patterns in UI, but what are the dark patterns we create in design for machine learning? This is a concept I am working towards: what does it mean to create transparent and ethical design in machine learning, and what does that look like, and what does it feel like?


caroline sinders

Machine Learning Designer and Researcher | Artist | Instigator | Online harassment researcher | Fellow, digital Harvard Kennedy School