Agile Ethics in AI — why we need a new design process for responsible AI

Catalina Butnaru
10 min read · May 21, 2018


Website | Agile Ethics Trello Board | Twitter

AI companies are expected to add $15 trillion to the global economy, based on recent estimates from PwC, and are attracting 77% more VC investment than five years ago.

As a result, AI developers, cognitive designers, and product owners are pressed by deadlines and expectations to deliver market-ready products before anyone else.

No surprise, then, that we’re seeing ethically misaligned products bubble to the surface and face reputation-damaging criticism across industries.

  • Joy Buolamwini demonstrated that facial recognition systems reliably recognised only white, Caucasian faces, because of bias embedded in the databases these systems had been trained on;
  • A startup that used deep learning to analyse voice calls between investment managers and clients “discovered” a direct correlation between being polite and increased stock value. After further investigation, the old adage held: correlation does not mean causation;
  • Google Duplex wowed the world with a smart agent casually making a call to book a haircut. But the loudest conversations were not about the likelihood of this conversational agent passing the Turing test (although it might, if optimised to act like humans), but about the appropriate level of self-disclosure, which was not clarified at Google I/O and was only addressed days later.
  • Cambridge Analytica crossed the thin line between propaganda and mass attitude manipulation, using machine learning to serve content to people based on their personality type and political preferences;
  • In the light of these events, Facebook pledged to reduce AI and data-mining misuse with more AI. Alas, that was a bit late: many users had already switched off their Facebook, Instagram, and WhatsApp accounts.

Trust is fickle and in limited supply. Once broken, it affects all players in the tech ecosystem, not just one.

With AI still being confused with self-aware intelligent systems, broken trust could spiral down into technophobic attitudes that limit society’s ability to make the most of science.

Some of us may be of the opinion that regulation is the way to go. In many ways, this is a delegative-passive approach, and regulatory bodies may at times propose measures that stifle innovation or change its pace. The Red Flag Act of 1865 required car drivers to ensure that a red flag was waved in front of the car, and that the car could be readily disassembled and hidden behind a bush to avoid startling passing livestock.

What is the equivalent of the Red Flag Act today? Is it GDPR, or the AI ethics guidelines soon to be published by the European Commission?

Others amongst us may be of the opinion that regulation stifles innovation and R&D. Goodbye Agile, welcome back Waterfall, with the added stress of ensuring compliance with existing policies before the product becomes mainstream!

With incumbents and established companies competing to snatch a piece of the $15 trillion pie in this AI arms race, is it possible to find a balance between fast-paced design and development methodologies and the fuzzy realm of AI ethics?

The AI Ethics landscape is vast. Ethical guidelines for AI are still under development. There are over 25 different principles or rules in AI Ethics, coming from public, academic and non-profit organisations.

Here’s a list of 8 of the most important ones:

  1. Ethically Aligned Design, version 2, IEEE
  2. The Revised Laws of Robotics, EPSRC
  3. The Asilomar Principles, Future of Life Institute
  4. Meticulous Transparency Analysis, David Benrimoh, MD, CM
  5. Robot Ethics IEEE-RAS, IEEE
  6. Four High Level Recommendations Against Malicious AI
  7. Ethical Principles and Democratic Prerequisites for AI/AS, European Group on Ethics in Science and New Technologies
  8. Machine Ethics, Machine Intelligence Research Institute

This year at WeAreDevelopers I introduced HAI, the Agile ethics process designed to combine the best of both worlds: responsible, ethical technology and agile product development.

So, hi… HAI. It’s Agile meets Design Sprints, meets AI Ethics. In Trello.

But at its core, HAI stands for the profoundly transformative idea of humans and AI working together to enhance wellbeing.

Let me clear the air: AI does not have agency, awareness, or agendas. The idea of AI working “with humans” is a metaphor for the future of work, where augmentation is the rule and automation is an optional enhancement.

With HAI we bake in the ingredients of trust in cognitive technologies, by answering these questions before the product is publicly released:

  1. Who is AI for?
  2. What are the ethical considerations underpinning your AI?
  3. How will it be adopted, used and monitored?
  4. What are the Ethical Principles you need to be aware of during the development, training, and R&D stage?

HAI is a three-factor framework. Not only does the alliteration sound great, it also fills in the gaps that C-level managers and investors ask about after due diligence on financial projections and go-to-market strategy:

Adoption

Adoption is a crucial factor in opening access to AI for the benefit of many instead of a few. It is also a key enabler of profitability gains in the workplace.

Augmentation

Technological unemployment is a long-term effect of AI substituting for human workers in routine and predictive tasks without an adequate level of support for re-skilling, job creation, or workforce re-assimilation.

HAI factors in this risk and encourages designing AI systems with built-in levers for re-imagining jobs and up-skilling.

Ethics

67% of CEOs think that AI and automation will have a negative impact on stakeholder trust in their industry over the next five years.

HAI integrates IEEE’s Standards for Ethically Aligned Design and EPSRC’s revised Principles of Robotics. It also includes meticulous transparency analysis to ensure the ethical development of AI when general ethical frameworks are not specific enough.

In 8 steps, HAI takes product managers, cognitive designers, engineers and scrum masters through 5 fundamental AI Ethical Principles, as outlined by IEEE. It gives them tools like the Technology Acceptance Model, Wizard of Oz Experiments and Skill Mapping.

It’s very flexible as well — make it yours by adding resources specific to your project, and by merging it with a hybrid framework combining best practices in team management, software development and AI training.

1 — Scope

“Scope” really needs to happen before the Meet and Plan stage in Agile. It’s where you go over what will make or break your product from a regulatory standpoint, and where you add a new team member: the Ethical Lead.

Your team will very likely overestimate how familiar your stakeholders, users, and co-workers actually are with narrow AI. Hence, the first principle to apply early on is the Education and Awareness Principle.

2 — Data Audit

“Data Audit” is the stage when your Chief Data Officer walks the team through each step of collecting and working with the right data.

This covers everything from GDPR compliance and database completeness to ensuring that the team is prepared to handle worst-case scenarios, usually caused by bias, poor data quality, and mis-categorisation.

A useful tool for the entire team is the Data Ethics Canvas, by ODI’s Amanda Smith and Peter Wells. It’s simple, yet thorough enough to cover compliance, data relevance, privacy, and security. It is a framework, though, and should precede, not replace, a thorough data audit.

Data Audit overlaps the “Meet and plan” stage in Agile.
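
To make the audit concrete, here is a minimal sketch of one automated slice of it; the dataset, column names, and checks are all hypothetical, and a real audit would cover far more ground:

```python
import pandas as pd

def audit_data(df: pd.DataFrame, label_col: str, sensitive_cols: list) -> dict:
    """A few basic completeness and balance checks to run before any training."""
    report = {
        # Columns with many missing values point to completeness problems.
        "missing_ratio": df.isna().mean().to_dict(),
        # A heavily skewed label distribution is an early warning sign of bias.
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
    }
    # Under-represented groups are where mis-categorisation tends to hide.
    for col in sensitive_cols:
        report[f"representation_{col}"] = df[col].value_counts(normalize=True).to_dict()
    return report

# Hypothetical loan-approval dataset with a sensitive attribute.
df = pd.DataFrame({
    "age_band": ["18-25", "26-40", "26-40", "41-65", "26-40"],
    "income":   [21_000, 48_000, None, 52_000, 30_000],
    "approved": [0, 1, 1, 1, 0],
})
print(audit_data(df, label_col="approved", sensitive_cols=["age_band"]))
```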

3 — Train / Build

The next step, Train, is straightforward, and overlaps the Build stage in Agile. The core principle you want to take into consideration at this stage is the principle of algorithmic accountability and transparency. In this case, transparency refers to traceability.

“Stated simply, transparent A/IS are ones in which it is possible to discover how and why a system made a particular decision, or in the case of a robot, acted the way it did.” — EAD, IEEE, 2017
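
In engineering terms, traceability can start as something as simple as an append-only decision log. A minimal sketch, assuming an sklearn-style `model.predict` interface; the model identifier, fields, and file path are hypothetical:

```python
import json
import time
import uuid

MODEL_VERSION = "credit-risk-0.3.1"  # hypothetical version identifier

def predict_with_trace(model, features: dict, log_path: str = "decisions.jsonl"):
    """Make a prediction and record enough context to reconstruct it later."""
    score = model.predict([list(features.values())])[0]
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,
        "inputs": features,      # what the system saw
        "output": str(score),    # what it decided
    }
    # Append-only log: every decision stays discoverable and explainable.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return score
```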

4 — Analysis

In the fourth stage, Analyse, you want to pay attention to benchmarks. R&D-ready benchmarks, such as error rate and operational efficiency, are the reasons clients will buy AI. There are other benchmarks you need to clarify with your team ahead of time, such as bias-free algorithms, impact on talent and the workforce, and wellbeing.

The Principle of Wellbeing is a good rule of thumb for this stage. Wellbeing metrics are partially outdated, but IEEE’s working group on Wellbeing provided plenty of resources to help designers think about supporting individual wellbeing and economic prosperity equally.

Another principle that can be applied at this stage is the Principle of Responsibility — as in algorithmic accountability. How do you make sure that you have the right settings in place to avoid death-by-black-box-training?

“Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.” — EAD, IEEE, 2017
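
One way to turn “bias-free” from an aspiration into a benchmark is to report error rates per user group rather than only in aggregate. A minimal sketch with hypothetical numbers, echoing the facial-recognition example earlier:

```python
import pandas as pd

def error_rate_by_group(y_true, y_pred, groups) -> pd.Series:
    """The aggregate error rate hides bias; break it down per user group."""
    df = pd.DataFrame({"true": y_true, "pred": y_pred, "group": groups})
    return (df["true"] != df["pred"]).groupby(df["group"]).mean()

# Hypothetical face-recognition benchmark results.
y_true = [1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 0, 1]
groups = ["lighter", "lighter", "lighter", "darker", "darker", "darker"]
print(error_rate_by_group(y_true, y_pred, groups))
# lighter 0.00 vs darker 0.67: a gap the overall error rate (0.33) would mask.
```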

5 — Feedback

Similar to the Review stage in Agile, the fifth stage, “Feedback”, is a form of sandboxed AI training: it is when you apply the Principle of Responsibility. This means your team makes sure the model is not over-fitted to the problem, and that the results are reliable.
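
A quick, standard check here is to compare training and validation performance; a large gap means the model has memorised rather than learned. A minimal sketch using scikit-learn and synthetic stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for your real training data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

scores = cross_validate(
    RandomForestClassifier(random_state=0), X, y,
    cv=5, return_train_score=True,
)
train_acc = scores["train_score"].mean()
val_acc = scores["test_score"].mean()
# A large train/validation gap is the classic signature of over-fitting.
print(f"train {train_acc:.2f} | validation {val_acc:.2f} | gap {train_acc - val_acc:.2f}")
```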

One also needs to take responsibility for the impact of AI on people’s lives. It may look like a trivial matter, but automating credit scores with AI does influence quality of life: it can get people into debt, or make it impossible for those who fall outside the model’s assumptions to access credit they could repay.

6 — Calibration

“Calibration” is a fascinating opportunity for design thinking evangelists to put their UX/HCI/H-AI hats on. It’s a fun stage in the project as well. Your best tools are Wizard of Oz experiments, TAM, user testing, and conversation design.

The technology acceptance model (TAM) is an information systems theory that models how users come to accept and use a technology. Perceived usefulness and perceived ease of use are the two most notable factors deciding the fate of innovative products in the hands of people.
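
In practice, a TAM study boils down to a short questionnaire scored per construct. A minimal sketch with hypothetical Likert-scale items and participants:

```python
# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly
# agree) to the two core TAM constructs, one dict per test participant.
responses = [
    {"usefulness": [6, 7, 6], "ease_of_use": [3, 4, 2]},
    {"usefulness": [5, 6, 7], "ease_of_use": [4, 3, 3]},
]

def construct_score(construct: str) -> float:
    """Average each participant's item scores, then average across people."""
    per_person = [sum(r[construct]) / len(r[construct]) for r in responses]
    return sum(per_person) / len(per_person)

pu = construct_score("usefulness")
peou = construct_score("ease_of_use")
print(f"perceived usefulness {pu:.1f}, perceived ease of use {peou:.1f}")
# High usefulness but low ease of use points to interaction design work,
# not model work: a calibration signal rather than an accuracy one.
```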

I believe we are past the stage when HCI principles are the safest and most comprehensive set of guidelines for human-friendly AI design. Usefulness and ease of use are very different in human-like systems, and I dare to challenge the overarching belief that human likeness is what makes people comfortable with AI.

Another challenge is to overcome complacency bias and build in interaction levers that calibrate human reliance on smart systems to just the right level.

The next stage, Augmentation, cannot possibly be successful without calibration.

You want the system to be adopted right away, and you want people to be comfortable overcoming technophobic tendencies formed by past experiences.

Calibration lays the foundation for trust between humans and AI.

7 — Augmentation

“Information and Computer Technologies have a morally problematic aspect, because they dis-enhance more people than they enhance.” — Michele Loi

There are three ways you can bake augmentation levers into AI (a minimal sketch of the first two follows the list):

  1. Augmentation as supervision — create intuitive ways for workers to train, correct, supervise, and improve AI, especially in cognitive automation jobs where it’s more resource intensive to train humans than to train AI.
  2. Augmentation as safe delegation — radiologists stand to be replaced by smart diagnosis systems, but the error rate for combined human and AI diagnostic decision-making is lower than for AI alone. There will always be situations where human intervention is required for legal, safety, and moral reasons. For this type of augmentation to occur, however, one must design human-AI UIs that work across industries and allow non-technical workers to take over control.
  3. Augmentation as job re-writing should start with a short assessment of what can and cannot be “taken over” by AI, regardless of error rates and performance metrics. With skill mapping, both workers and designers are able to map out the specific skills which cannot be replaced by machines, and build in levers that further support growing those abilities.
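
A minimal sketch of the first two levers, assuming an sklearn-style classifier with `predict_proba` and a hypothetical review queue: cases below a confidence threshold are routed to a human, whose corrections can later retrain the model:

```python
def decide(model, case, threshold: float = 0.85):
    """Let the AI decide only when it is confident; otherwise defer to a human."""
    confidence = max(model.predict_proba([case])[0])  # sklearn-style confidence
    if confidence >= threshold:
        return {"decision": model.predict([case])[0], "by": "ai",
                "confidence": confidence}
    # Low-confidence cases land in a review queue where an expert decides,
    # and their corrections become new training data (augmentation as
    # supervision feeding augmentation as safe delegation).
    return {"decision": None, "by": "human_review_queue",
            "confidence": confidence}
```

Tuning the threshold is itself a calibration decision: set it too high and the AI never helps; set it too low and complacency bias creeps back in.
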
8 — People and Environment

AI, not AGI, has a 1% chance of destroying the best of what the world has to offer. Not in SkyNet style, but in a far more insidious way: over-optimising for one outcome until the economy is destabilised and cyber-wars break out, or turning most of the human population into more or less algorithm-dependent, algorithmically-programmed humans.

People & Environment is the stage when your team should pledge to address that 1% chance of malevolent AI with accountability and vigilance.

It’s also the stage to remember what OpenAI wrote: if AGI is created, everyone should stop what they’re doing and work together to make sure it is used for good.

If you’d like to try this process, and share some of your insights with me, I’d be over the moon. But most importantly, you could be contributing to the first open-source framework guiding small innovative teams through more ethical applications of AI.

Website: humansinai.com

Twitter: Catalina Butnaru

And Trello board.


Catalina Butnaru

City AI London and Women in AI Ambassador | Product Marketing | AI Ethics | INFJ