AI and the Algorithmic Accountability Act: 3 things you can do right now to avoid costly mistakes

Laura Kornhauser
6 min read · May 2, 2019


Companies around the world are facing growing pressure to increase transparency around the algorithms whose predictions drive their business decisions. In Washington, recently proposed legislation makes meaningful strides in the regulatory catch-up game around AI and machine learning. This article provides an overview of the good, the bad, and the ugly of AI, and how you can get rid of the ugly and reduce the bad.

The secrecy and complexity that surround emerging technologies such as artificial intelligence have concerned regulators and consumers for decades. Recently, these concerns have focused on AI models, which are both gaining popularity and producing unfortunate examples of discrimination and bias. You do not have to look far into the headlines to find examples of the costly risks that “black box” AI models introduce. Amazon allegedly built an AI recruiting tool that discriminated against women and, just a few weeks ago, shortly after settling a $5M lawsuit, Facebook was charged by the Department of Housing and Urban Development over alleged unfair targeting of housing ads.

In order to address these concerns, three Senators recently proposed the Algorithmic Accountability Act (“the AAA”), which seeks to regulate the bias, discrimination, privacy and security risks posed by the algorithms now employed by a wide variety of businesses.

Today, many AI models possess little of the ‘interrogatability’ necessary to truly understand how they function. In order to trust a machine learning model, rigorous and frequent testing of that model is essential. Yet many companies do not perform this kind of ‘on-demand’ examination, let alone have a process in place to conduct this type of analysis in a reliable and repeatable fashion. Making these models more transparent after the fact will be difficult and costly.

While there is no guarantee that the AAA will become law, there is no doubt that lawmakers and regulators will keep taking steps toward algorithmic regulation. Simultaneously, consumers are demanding more control over their data. This momentum has already produced regulations across the globe, such as the UK’s Open Banking and the EU’s PSD2. With all of these drivers at play, the time for companies to take action is now.

Think “wait and see” is the right approach? Be sure to consider the costs. Facebook announced in its earnings release last week that it had estimated and reserved $3 billion “in connection with the inquiry of the FTC into our platform and user data practices…We estimate that the range of loss in this matter is $3.0 billion to $5.0 billion.”

What is actually being proposed in this bill?

In short, the AAA would apply to companies that:

  • Make more than $50 million in revenue per year;
  • Possess data for at least 1 million people or devices;
  • Are commercial companies that mainly act as data brokers.

The AAA would empower the U.S. Federal Trade Commission (FTC) to require that companies “conduct automated decision system impact assessments and data protection impact assessments” for algorithmic decision-making systems. This would force companies to evaluate algorithms in terms of their “accuracy, fairness, bias, discrimination, privacy and security.” These assessments would apply to both existing and new decision-making systems and models.

While details are currently lacking on what these assessments entail, the AAA does describe what it will ask companies to provide: a general description of their AI models, the costs and benefits of those models, and a thorough assessment of the risks those models pose to the privacy and security of personal information.

This could mean, for example, that companies need to show complete transparency in their model development, prove how they address bias, show how they communicate results to consumers, and demonstrate the extent to which consumers can correct or object to those results.

Should the AAA become law, enforcement will not be easy. Regulators and businesses will need new tools to address bias, and to develop, understand, and monitor models. For existing “black box” machine learning models, where the process between inputs and outputs is opaque, complying with these potential new regulations will be a big challenge.

How would this impact your business?

The answer to this question will depend entirely on how you’ve built and operated your models in the past. Companies that have a strong AI model development protocol and have already incorporated concepts such as “interrogatability” and “explainability” will have much less difficulty in ensuring and proving their compliance.

For companies whose machine learning models lack interrogatability and explainability, and whose training data is only partially understood, retrofitting transparency and bias analysis onto existing AI models will be costly.

One thing is clear: for all companies, the AAA would increase the regulatory burden and will most likely require extra resources, people who bring not only a technological perspective but also a human, legal, and behavioral science point of view.

Why you cannot afford to wait and need to start preparing now

“The future is already here; it’s just not very evenly distributed.” — William Gibson

Even though this bill may not pass, there are many signs that both regulators and consumers want greater oversight of predictive models. Given the focus on these issues, here are a few of the risks of the “wait and see” approach:

  1. Cost. It will be expensive to rebuild existing models for increased transparency.
  2. Lost Revenue. Avoiding the issue altogether by failing to integrate AI into your business will put you at a competitive disadvantage.
  3. Consumer Credibility. One doesn’t need to look beyond the examples already given to see the impact that undetected bias or discrimination can have on your reputation.
  4. Stakeholder Credibility. Board members, investors, and partners may soon begin to question your compliance plans.
  5. Being wrong. Bias and discrimination can produce incorrect risk assessments, which result in poor decision-making.

For these reasons (and many more), companies need to take an active role in finding and addressing bias issues. So even without formal regulations in place yet, companies should start preparing now.

Here are 3 things companies can start doing today to proactively address these risks:

  1. Check for Transparency. Review your current AI development process and evaluate it from a transparency and bias perspective. Determine where the weak points are in your model development process, and begin implementing changes that result in greater transparency and understanding.
  2. Test models and training data for bias. Test how your model performs across populations; a minimal sketch of one such check follows this list. This includes developing a thorough understanding of the model’s training data and the biases in that data that need to be addressed. Documenting these test findings and your commensurate responses will be critical in proving that you take transparency seriously.
  3. Consider appointing AI Accountability staff. Just as companies have financial or quality controllers, it is only natural to have people checking software and AI models for side effects and other negative impacts, including discrimination and bias. Realizing full transparency takes a multi-disciplinary approach; it goes beyond the responsibilities of data scientists and will require additional staff.
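
To make the bias testing in step 2 concrete, here is a minimal sketch of one common check: comparing a model’s approval rates across groups and applying the “four-fifths” disparate impact rule of thumb. The column names, data, and threshold are illustrative assumptions; the protected attributes and fairness criteria that matter will depend on your use case and applicable law.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive outcomes per group, e.g. the share of
    applicants a credit model approves in each population."""
    return df.groupby(group_col)[outcome_col].mean()

# Hypothetical scored data: one row per applicant, with the model's
# binary decision and a protected attribute recorded for testing only.
scored = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
    "approved": [1,    0,   0,   1,   1,   0,   1,   1],
})

rates = selection_rates(scored, "gender", "approved")
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")

# A ratio below 0.8 (the EEOC "four-fifths" rule of thumb) is a common
# signal that the model's outcomes warrant closer investigation.
if ratio < 0.8:
    print("Warning: potential disparate impact; document and investigate.")
```

In practice, you would run this kind of check on held-out data for every protected attribute you monitor, and log the results each time as part of the documentation trail described above.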

How can companies like Stratyfy help?

Stratyfy’s solutions leverage AI/machine learning in a fully explainable way and are transparent by design due to the inherent nature of our underlying proprietary algorithms. These algorithms allow your people to impart their knowledge and expertise into models to compensate for missing, incorrect, or biased data. Users can robustly test, adjust, and audit models to address bias, with actions tracked automatically for easy one-click reporting. Our software is perfectly positioned to help companies with their AI accountability challenges. For example:

  • Stratyfy allows you to supplement your data with human knowledge, proactively addressing places where your data is skewed, sparse, or missing (a simplified, generic illustration of this idea follows this list). The combination of data and human input can be fairer, less biased, and possibly even more accurate than either alone, greatly improving the quality of your models.
  • Stratyfy’s bias analysis module allows companies to identify model bias early and proactively correct it by selectively adjusting the model. This functionality can be applied to monitor and detect bias in any model.
  • Models built using Stratyfy’s algorithms can then be adjusted to address bias. This type of selective tuning, like whispering into the ear of the AI to correct for model bias, is not possible with other off-the-shelf machine learning approaches. Most of our clients are able to address bias with minimal impact on the profitability of their models.
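
To illustrate the first bullet above in the simplest possible terms, the sketch below blends a hypothetical data-driven score with a human-encoded rule. This is not Stratyfy’s proprietary algorithm; every function, field name, and weight here is an assumption made purely for illustration.

```python
def model_score(applicant: dict) -> float:
    """Stand-in for a data-driven model's probability-like score (hypothetical)."""
    return 0.30 + 0.05 * applicant.get("years_employed", 0)

def expert_rule(applicant: dict) -> float:
    """Human-encoded domain knowledge, e.g. an underwriter's heuristic
    for thin-file applicants the training data under-represents."""
    if applicant.get("on_time_rent_payments", 0) >= 24:
        return 0.8  # strong positive signal the data alone would miss
    return 0.5      # neutral: defer to the model

def blended_score(applicant: dict, rule_weight: float = 0.3) -> float:
    """Convex combination of model output and expert rule; the weight
    controls how much human knowledge can compensate for sparse data."""
    return (1 - rule_weight) * model_score(applicant) + rule_weight * expert_rule(applicant)

applicant = {"years_employed": 1, "on_time_rent_payments": 30}
print(f"model only: {model_score(applicant):.2f}")    # 0.35
print(f"blended:    {blended_score(applicant):.2f}")  # 0.49
```

One appeal of this kind of structure is auditability: the weight makes explicit exactly how much human judgment influenced each decision, which is the sort of record an impact assessment would ask for.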

CONTACT INFO: If you’d like to learn more about how Stratyfy can help you proactively address bias in your models, please reach out to me or email info@stratyfy.com.
