Ethical AI: Products that Love People

Love people — it’s our number one value at integrate.ai. Each member of our team interprets and lives it differently in her behaviour towards colleagues, customers, and communities. For our product and engineering teams, we embody this value by flipping the design paradigm from building products that people love to building products that love people.

What on earth does this mean?

Algorithms neither love nor suffer (despite all the hype about super-intelligent sentient robots). The products that algorithms power neither love nor suffer. But people build products and people use products. And a product that loves is one that, as Tyler Schnoebelen laid out in his recent Wrangle Talk, anticipates and respects the goals of the people it impacts. For us, this means building models with evaluation metrics beyond just precision, recall, or accuracy. It means setting objective functions that maximize not only profits, but the mutual benefit between company and consumer. It means helping businesses appreciate the miraculous nuances of people so they can provide contextual experiences and offer relevant products that may just make a consumer experience enjoyable.

Tyler presenting at Wrangle

It also means keeping our team up to date on the latest techniques to develop ethical algorithms, so we can anticipate where things may go wrong and take proactive measures to prevent that. For example, data scientists often encounter class imbalances, where a majority group may be well represented in a data set but a minority group is not. This makes it hard to train an algorithm to accurately represent minority interests. But, as Moritz Hardt et al. have convincingly shown, we can do some engineering gymnastics when training or applying a supervised learning algorithm to make sure it treats minority populations fairly, almost like giving subgroups a head start in a race to achieve a fair outcome.
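To make that "head start" concrete, here is a minimal sketch of the post-processing idea from Hardt et al.'s "Equality of Opportunity in Supervised Learning": instead of one global decision threshold, choose a per-group threshold so that every group reaches the same true positive rate. The synthetic data and target rate below are hypothetical; this is a sketch of the general technique, not our implementation.

```python
# Minimal sketch: per-group thresholds that equalize the true positive rate,
# in the spirit of Hardt et al. (assumed setup and toy data, not production code).
import numpy as np

def threshold_for_tpr(scores, labels, target_tpr):
    """Score cutoff at which roughly `target_tpr` of the true positives are accepted."""
    positive_scores = scores[labels == 1]
    # Classifying "score >= cutoff" as positive, a cutoff at the (1 - target_tpr)
    # quantile of positive scores accepts about target_tpr of the true positives.
    return np.quantile(positive_scores, 1.0 - target_tpr)

def equal_opportunity_thresholds(scores, labels, groups, target_tpr=0.8):
    """One threshold per group, so every group hits the same true positive rate."""
    return {
        g: threshold_for_tpr(scores[groups == g], labels[groups == g], target_tpr)
        for g in np.unique(groups)
    }

# Toy usage with synthetic data; group "b" plays the under-represented minority.
rng = np.random.default_rng(0)
groups = np.array(["a"] * 900 + ["b"] * 100)
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.3 * labels + rng.normal(0.4, 0.2, size=1000), 0.0, 1.0)
print(equal_opportunity_thresholds(scores, labels, groups))
```

The point of the sketch is that fairness here is a post-processing choice: the model's scores stay the same, and the decision rule is what gets adjusted per group.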

We care deeply about these and other issues related to ethics, policy, and regulation of AI. Kathryn Hume from our team recently explored the foundations of AI ethics in greater depth at a wonderful event organized by Annick Dufort and Graham Taylor from NextAI.

Her slides explain:

  • What supervised learning is and why it creates ethical traps
  • Examples of ethical issues from deep learning and sentiment analysis
  • Different challenges with bias in deep learning and traditional machine learning
  • A survey of regulatory, conceptual, and technical solutions

Her slides end with the four product development maxims we espouse at integrate.ai (well, at least for now; we're a startup, so this is all a work in progress).

Some commentary:

  • By maximum mutual lifetime value, we mean we want to find the sweet spot where both businesses and consumers win. Consider this example. If a bank applies AI to identify changes in spending behaviour, e.g., someone shifting from eating out at fancy restaurants every Friday night to staying home and eating beans and rice, that could signal either a risk the bank wants to protect itself from or an opportunity to help a consumer protect their future financial health and reach their goals. Our perspective has implications for what we do once our models identify this change. We want to help people achieve financial stability, not strip people of their ability to have credit.
  • Contextual integrity is a theory of privacy developed by Helen Nissenbaum, one of our advisors. The essence is that privacy is violated when we share data across social contexts that should be kept separate (e.g., our employers learning about confidential information we share with our doctor). There is immense promise in integrating data about people as they behave in different contexts: this is our core business premise. And we want to do this while respecting people's rights to privacy. We're applying differential privacy to make sure we keep data safe (the first sketch after this list shows the basic mechanism) and are excited to set a standard for doing privacy right in the age of AI.
  • Finally, we think about how consumers are represented by and may be impacted by our models. We've been inspired by the work of researchers like Rich Zemel et al., who achieve group fairness (the proportion of members in a protected group receiving positive classification is identical to the proportion in the population as a whole) and individual fairness (similar individuals should be treated similarly) with "intermediate" representations of people that are mindful of protected attributes; the second sketch after this list shows the group fairness check in its simplest form. We look forward to researching this further with the Vector Institute, MILA, and our friends at Element AI.
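On the differential privacy point, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy, applied to a simple count query. It illustrates the general technique only; the function, toy data, and epsilon value are hypothetical, not a description of our production system.

```python
# Minimal sketch of the Laplace mechanism: add noise scaled to sensitivity / epsilon
# so that any single person's presence changes the released answer only slightly.
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Release a differentially private count of the rows satisfying `predicate`."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding or removing one person changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Toy usage: how many customers cut back on restaurant spending this month?
spending_changes = [-120.0, -5.0, 30.0, -250.0, 10.0, -40.0]
print(private_count(spending_changes, lambda change: change < 0))
```

A smaller epsilon means more noise and stronger privacy; choosing it is as much a policy decision as a technical one.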
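And on group fairness, here is a minimal sketch of the simplest version of the check: compare the rate of positive classifications inside a protected group to the rate in the population as a whole. The toy data and variable names are hypothetical, and Zemel et al.'s actual method learns fair intermediate representations rather than just measuring this gap.

```python
# Minimal sketch of a group fairness (demographic parity) check on model outputs.
import numpy as np

def group_fairness_gap(predictions, protected):
    """Gap between the protected group's positive-classification rate and the overall rate."""
    overall_rate = predictions.mean()
    protected_rate = predictions[protected].mean()
    return protected_rate - overall_rate

# Toy usage: 1 = offered the product; `protected` marks membership in the protected group.
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
protected = np.array([True, True, False, False, True, False, False, True, False, False])
gap = group_fairness_gap(predictions, protected)
print(f"fairness gap: {gap:+.2f}")  # a large negative gap means the protected group is under-served
```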

Again, these maxims are our first draft as we start building our product and business. We look forward to broadening the conversation and receiving feedback from others working to create the AI-enabled world in which we want to live. We're planning to post a debate between a few members of our data science and product teams later this week and look forward to hearing your thoughts, comments, and feedback at the next AI in the 6ix on September 27!