Why Women Should Lead our A.I. Future

Carlos E. Perez · Published in Intuition Machine · Dec 4, 2017 · 7 min read


Photo by Mike Wilson on Unsplash

The usual argument about women and AI is that women are grossly underrepresented in this field and that we need more contributions and involvement from women. Diversity of perspective is one of the motivations for this. There are plenty of examples of AI designed without considering one half of the human population. Megan Alzner writes about this:

If we all want to make AI-driven products that solve real problems and are sustainable businesses, we need the best. This is going to require a variety of minds on projects, and that means increasing the number of women on engineering teams.

I will argue here, however, for something beyond the need for diversity. I will argue that our A.I. future should be led by women, not by men. The reason is that women have a greater intuitive understanding of what makes us all human. Women have a natural inclination to focus on the important things that make us human. To maximize the benefit of AI technology we must focus on how AI improves our humanity, and therefore we need to understand, at the very least, what makes us human rather than what makes us machines.

Women's brains are wired very differently from men's. A 2013 study offered some empirical evidence for this. It concluded that the number of connections between the left and right hemispheres of the brain differs between men and women: women's brains are tuned for “interhemispheric communication,” while men's brains are tuned for “intrahemispheric communication.” As a consequence of this wiring, men are “optimized” for tasks requiring perception and coordination. Women, in contrast, are “optimized” for tasks integrating analytic and intuitive modes (see my other post on the coordination of rational and intuitive intelligence).

I was reading through the AI Index the other day. The AI Index is a collective effort to summarize and track progress in the AI field. It was created and launched as a project of the One Hundred Year Study on AI at Stanford University. The AI Index is meant to be an open-source report, with the aim of facilitating informed conversations about AI that are grounded in data. The initial report encourages others to contribute to the endeavor by providing more data, analyzing data, and recommending other data sources.

The report also introduces a new index, the AI Vibrancy Index which:

aggregates the measurements from academia and industry (publishing, enrollment and VC investment) to quantify the liveliness of AI as a field.
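The report does not spell out exactly how these measurements are combined, so here is a minimal sketch, purely my own assumption rather than the report's actual method, of how such an aggregate index could be computed: normalize each metric to its first year and take a weighted average. The metric values and the equal weights below are hypothetical.

```python
# Toy sketch of a "vibrancy"-style index (hypothetical aggregation, not the
# AI Index's actual formula). Metric values and weights are made up.

def normalize(series):
    """Scale a list of yearly values so the first year equals 1.0."""
    base = series[0]
    return [value / base for value in series]

def vibrancy_index(publishing, enrollment, vc_investment, weights=(1/3, 1/3, 1/3)):
    """Weighted average of normalized academic and industry metrics, per year."""
    normalized = [normalize(publishing), normalize(enrollment), normalize(vc_investment)]
    return [
        sum(w * metric[year] for w, metric in zip(weights, normalized))
        for year in range(len(publishing))
    ]

# Example with made-up yearly figures:
print(vibrancy_index(
    publishing=[10_000, 14_000, 18_000],      # papers per year (hypothetical)
    enrollment=[1_000, 1_800, 3_000],         # course enrollment (hypothetical)
    vc_investment=[2.0, 3.5, 5.2],            # billions of USD (hypothetical)
))
```

The point of such an index is simply that growth in any one of the underlying measures lifts the aggregate, so a single number can summarize the field's overall liveliness.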

The AI Index contains several ‘hockey stick’ charts that reveal the exponential growth of Deep Learning. I collected a few of these charts myself a year ago. It's always effective to show some exponential-growth charts to get your audience interested (nothing motivates better than greed).

The report does not yet address one of the better motivations for tracking AI: issues regarding societal risk are absent from this initial edition. The plan is to introduce metrics related to AI safety, predictability, fairness of algorithms, privacy, and ethical implications later.

A lot of unique and valuable insight can be found in the expert remarks that accompany the report. Barbara Grosz makes an insightful comment about some missing metrics; specifically, she talks about the need to track the quality of AI interactions with people:

the quality of an AI technology’s interactions with people or of the ways in which AI enabled systems affect people, both as individuals and in societies.

She continues:

Parsing requires no consideration of the mental state of the producer of the utterance being parsed, and for many situations in which machine translation and question answering have been tested it is also possible to ignore mental state and in particular the purposes crucial to an utterance’s meaning. Not so with dialogue.

Daniela Rus describes many of the benefits of AI in tackling our biggest challenges:

On a global scale, AI will help us generate better insights into addressing some of our biggest challenges: understanding climate change by collecting and analyzing data from vast wireless sensor networks that monitor the oceans, the greenhouse climate, and the plant condition; improving governance by data-driven decision making; eliminating hunger by monitoring, matching and re-routing supply and demand, and predicting and responding to natural disasters using cyber-physical sensors. It will help us democratize education through MOOC offerings that are adaptive to student progress, and ensure that every child gets access to the skills needed to get a good job and build a great life.

In short, she highlights one of the blind spots of AI research: one should work on AI not for its own sake, or for automation's sake, but rather to solve real human problems. You simply are not going to hear this perspective articulated in the general AI research community. We do know that AI will solve big problems, but we don't articulate specifically what those problems are. Almost every time, these big problems are big human problems.

Rus comments further:

On a local scale, AI will offer opportunities to make our lives safer, more convenient, and more satisfying. That means automated cars that can drive us to and from work, or prevent life-threatening accidents when our teenagers are at the wheel. It means customized healthcare, built using knowledge gleaned from enormous amounts of data. And counter to common knowledge, it means more satisfying jobs, not less, as the productivity gains from AI and robotics free us up from monotonous tasks and let us focus on the creative, social, and high-end tasks that computers are incapable of.

You don’t get this kind of perspective from men. Men look at cars from the perspective of the coolness factor. Automated self-driving cars are cool, something that will let us make the “cannonball run” in record time, or watch a movie (or catch a nap) while “driving.” Women, however, focus on what truly is more important: the health of our children and our own well-being.

When it comes to jobs, it’s not about doing more with less, but rather having jobs that are “satisfying”. Said differently, jobs that are meaningful and not “bullshit jobs”. Nursing and teaching are two jobs held by a majority of women. Both are jobs where one’s contributions can be extremely meaningful. The job of a nurse is both analytic and intuitive: a nurse must be able to grasp a complex medical field while also acting as the advocate for a patient’s needs. To build advanced AI interfaces, one will need a similar mix of talents.

Let’s examine the job of teachers. It requires not only mastery of a subject, but exemplary communication skills and empathy for one’s students. I wrote earlier that the sexiest job of the future would be the teaching of machines. The same kind of talent found in teachers may in fact be the prized talent required of AI developers.

You will notice that these two comments in the AI Index report are from women. Go read the other comments in the report: the comments by male researchers rarely discuss our humanity in relationship with AI technology. Yet any serious discussion of the future of AI, whether about the near-term effects of job loss due to narrow AI automation or a far-future existential threat of a “superintelligence” becoming self-aware, demands that we address our own humanity in relationship with this technology. The people best equipped to understand this (as a consequence of evolution) are women, not men. Therefore, women should not merely be minority participants; women should lead the AI revolution.

It is up to all of us to enable and encourage greater participation by women in the AI revolution. It is not just a matter of the need for greater diversity; it is also a matter of our own health and well-being. It is ultimately a matter of our survival as a species.

In relation to this, here’s a recent talk about an AI system that tracks people fleeing from a drone strike: https://theintercept.com/2017/12/05/drone-strikes-israel-us-military-isvis/ . Yes, some doctoral student didn’t think much about the morality of his entire study.

I leave you with an interesting study that I discovered on Harold Jarche’s blog, in a post arguing that “Our future is networked and feminine”:

Source: https://www.inc.com/magazine/201306/leigh-buchanan/traits-of-true-leaders.html

Jarche argues that feminine traits are advantages in social networks and that for radical innovation we need to be able to leverage these networks:

http://jarche.com/2017/01/innovation-in-perpetual-beta/

There is enough group-think, even in the world of Deep Learning, that ideas from disciplines like cognitive psychology, neuroscience, physics, game theory, and biology are critically important to future research. The most innovative ideas for Deep Learning will emerge from left field. In fact, I will point out that two of 2017's most innovative ideas in Deep Learning, “Capsule Networks” (Sara Sabour) and “MAML” (Chelsea Finn), had women as lead contributors.

Exploit Deep Learning: The Deep Learning AI Playbook
