Role Models in AI: Mira Lane

Mira Lane at the Microsoft Teams product launch in 2016 / photo credit: Microsoft

Meet Mira Lane, the Director of Design & Ethics in Artificial Intelligence for Business at Microsoft. Mira’s background as an artist and computer scientist gives her a unique perspective on how new technologies are shaping our world. We interviewed Mira as part of AI4ALL’s Role Models in AI series, where we feature the perspectives of people working in AI. Check back here on Wednesdays this winter for new interviews.

As told to Nicole Halmi of AI4ALL by Mira Lane

NH: As the Director of Design & Ethics in the AI for Business organization at Microsoft, what does a typical day at work look like for you?

ML: My group is responsible for all of the design and ethics work that happens in the AI for Business organization at Microsoft. We look outward from Microsoft to see what AI can unlock and to see what new business opportunities we can explore using AI. My team is responsible for everything from design collateral and visual artifacts, to thinking about the ethical implications of technology and how it might impact the people and societies that are using it, to engaging in very experimental and divergent thinking about creative AI applications.

Why is it important to consider ethics when developing product strategy and design for products that use AI?

AI has the potential to do a lot of good in the world. To make that possible, we need to make sure these technologies are aligned with our moral values and our ethical principles. We need to really deeply understand the data that we’re using to create the foundation of our system, and how the resulting algorithms might impact people and communities that come into contact with them.

The key question is, how do you bring transparency into the AI design, development, and training process? I think it requires a shift in how you would traditionally design and experience technology.

Where do you see AI making the biggest impact in the next 5 years? What are some of the important things we should be doing now to create an ethical future for AI?

I believe that AI will transform every major industry. I think we need to be engaging in conversations around the impact of these technologies. I know there are many conversations happening around these technologies within the tech industry, but the conversations need to happen more broadly and publicly.

Two conversations in particular need to happen more. First, we need to shine a light on how these technologies are being used, where they’re being deployed, the strengths, the limitations. This needs to be a really honest conversation about the goals, who’s “winning,” who’s not, where there’s manipulation, and where there’s exploitation. Second, there needs to be a broader discussion around our collective moral values and how technology should fit into our societies and our lives. These conversations require brutal honesty. There has to be a clear path for addressing areas where we don’t agree or where there’s a strong disconnect between moral values.

These conversations can happen in a variety of places. I think most companies should start thinking about a broad ethics board within their company. We’ve done that within Microsoft. There are also a number of initiatives driven in the industry by groups like Partnership on AI and IEEE that probably need to have an expanded role outside of the industry.

How did you get interested in the work that you’re doing now? And how did you get interested in AI?

I was actually initially interested in how AI might affect the creative community. About a year ago, I wrote an internal paper [at Microsoft] about the intersection of technology and artists, and I shared it with a few vice presidents around the company. One VP in particular, who was starting a new business group within the Microsoft AI and research group, proposed that I run his new design team. I agreed to take the job, but only if it included the ethics group as well as the design group. I really wanted to create a discipline around ethics that was dedicated to thinking through the implications of AI and helping the organization be more thoughtful about and accountable for what we’re designing and building. I’m really happy we’ve done that, because it could transform the way we build products around here.

You have a practice as an artist. How does your art influence your thinking about technology and vice versa?

I think artists tend to look at the world differently than technologists do — maybe even a bit more humanistically. Artists play with technology to see what it can do and how it can be exploited. When I look at technology, I like to see where the edges are.

A compilation of 3 of Mira’s videos for the “Search for Meaning” exhibition at Seattle University in 2017

For example, there’s a fairly new technique in AI called generative adversarial networks (or GANs for short). The technique consists of two neural networks competing against one another: one generates content and the other acts as a judge. The generator produces something (an image or text, for example), and the judge decides whether that content is similar to the training set or not. The two networks train in tandem, so that over a period of time, the generator gets very good at fooling the judge into thinking it’s creating things that are authentically close to, or part of, the training set.
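The adversarial loop Mira describes can be sketched with a toy example: a one-parameter “generator” learns to mimic samples from a Gaussian distribution, while a logistic “judge” tries to tell real samples from fake ones. This is a minimal illustration, not Microsoft’s system — the 1-D data, model forms, and learning rate are all assumptions chosen to keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator: g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Judge (discriminator): D(x) = sigmoid(w*x + c), probability that x is real.
w, c = 0.1, 0.0

lr = 0.02
for step in range(3000):
    n = 64
    z = rng.normal(0.0, 1.0, n)
    fake = a * z + b
    real = sample_real(n)

    # --- Judge update: push D(real) toward 1 and D(fake) toward 0 ---
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    g_real = d_real - 1.0   # gradient of -log D(real) at the logit
    g_fake = d_fake         # gradient of -log(1 - D(fake)) at the logit
    grad_w = np.mean(g_real * real + g_fake * fake)
    grad_c = np.mean(g_real + g_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: fool the judge, push D(fake) toward 1 ---
    d_fake = sigmoid(w * fake + c)
    g_logit = d_fake - 1.0        # gradient of -log D(fake) at the logit
    grad_fake = g_logit * w       # chain rule back through the judge
    a -= lr * np.mean(grad_fake * z)
    b -= lr * np.mean(grad_fake)

# After training in tandem, generated samples should drift toward the
# real distribution's mean of 4.0.
fake = a * rng.normal(0.0, 1.0, 1000) + b
```

In practice GANs use deep networks and image data rather than a line fit to a Gaussian, but the core dynamic is the same: each network’s improvement forces the other to improve.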

Examples of the synthetic artwork generated by GANs trained on Mira’s artwork

When I heard about this technique, I immediately thought about how I could exploit this to augment my creative work. I convinced some researchers here to collaborate with me to create a creative AI system. The synthetic artwork it creates is completely novel but feels like my work. I would never use it as a final product, but it makes for a very good companion and starting point to build on top of.

You have a degree in math and computer science. Did you always know that you wanted to focus on those areas when you were growing up?

I’ve always been a math person. My mother has a master’s in mathematics. My father is what I would call a mad scientist/engineer. He pushed me into computer science, because he could see this was where the future was heading. I grew up in Canada, and the University of Waterloo [where Mira received a bachelor’s degree] has a world-class computer science program. Interestingly, it’s taught out of the Mathematics department, so I ended up with a very strong foundation in mathematics and computer science.

Who were your role models growing up?

My father has always been a huge role model for me. He’s a lateral thinker and is insanely inquisitive and creative.

Aside from my father, I’ve always been deeply fascinated with Einstein’s brain — the way he was able to look at the universe from such a completely different perspective and utterly transform how we think about our place in it. I also admire his sense of social justice and his deep need to respect human rights. I keep one of his books, “Ideas and Opinions,” next to my bed and I read through the essays once in a while. It’s a set of his most profound essays, and the topics range from atomic energy and relativity to human rights and religion. It’s an incredible book.

Mira Lane is the Director of Design & Ethics in Artificial Intelligence for Business at Microsoft. Her team is responsible for design strategy and ethical frameworks for new products leveraging AI. Mira’s background as an artist and computer scientist gives her a unique perspective on how new technologies are shaping our world. Mira holds numerous patents across platforms and collaborative interfaces. Her art has been featured in film festivals and galleries.

Check out Mira’s art here and find her on LinkedIn here.

Follow along with AI4ALL’s Role Models in AI series on Twitter and Facebook at #rolemodelsinAI. We’ll be publishing a new interview with an AI expert on Wednesdays this winter. The experts we feature are working in AI in a variety of roles and have taken a variety of paths to get there. They bring to life the importance of including a diversity of voices in the development and use of AI.