How to implement fair and unbiased technology with IBM Watson OpenScale

Manish Bhide
Published in Trusted AI · 3 min read · Sep 1, 2020

The movement to eliminate bias touches every corner of society, and information technology — especially artificial intelligence — is no exception. In his June 8, 2020 Letter to Congress on Racial Justice Reform, IBM CEO Arvind Krishna reaffirmed IBM’s commitment to advancing racial equality, asserting the company’s historic commitments to equal opportunity and justice. He quoted the 1953 statement of IBM President Thomas J. Watson, who said:

“. . . Each of the citizens of this country has an equal right to live and work in America. It is the policy of this organization to hire people who have the personality, talent and background necessary to fill a given job, regardless of race, color or creed.”

Of the three recommendations Arvind Krishna offered to Congress — police reform, responsible technology policies, and expanding opportunity — IBM has the power to influence two directly. From the design of products and services to the development and validation of best practices for their implementation and use, IBM is committed to helping clients ensure that their technology and offerings support the pursuit of justice and racial equity. This blog outlines how IBM helps organizations achieve these goals using IBM Watson OpenScale.

Using AI to prevent bias in hiring

AI models may be used to improve equity in the hiring process. One retail client of model monitoring employed an AI model to create a short list of eligible candidates for interviews. Retail companies receive thousands of applications, and AI models can help narrow applicant lists to identify the best candidates for each position. It is critical that these models not perpetuate bias in hiring. How can bias compromise fairness in hiring? A model trained to prefer a candidate who “has her own transport” may inadvertently eliminate a qualified person of color. Why? The National Equity Atlas reported in 2015 that nearly 15 percent of people of color neither own nor have access to a car, more than double the rate for whites, at 6.5 percent. A model that privileges car ownership may therefore bias hiring away from people of color. To detect and avoid this bias, the client used IBM Watson OpenScale to ensure not only that the model was bias-free at design time but also that it remained bias-free at runtime. Read more about how explainable AI helps enterprises monitor models for biased decision-making.
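To make the idea concrete, here is a minimal sketch of the kind of fairness metric such monitoring tracks: the disparate impact ratio between an unprivileged and a privileged group of applicants. This is a generic illustration, not the OpenScale API; the column names, group labels, and data are hypothetical.

```python
# Hypothetical disparate impact check on shortlist outcomes.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged group vs. privileged group.
    Values below ~0.8 (the "four-fifths rule") are a common warning sign of bias."""
    priv_rate = df.loc[df[group_col] == privileged, outcome_col].mean()
    unpriv_rate = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    return unpriv_rate / priv_rate

# Toy shortlist data: 1 = shortlisted, 0 = rejected
applicants = pd.DataFrame({
    "group":       ["privileged"] * 6 + ["unprivileged"] * 6,
    "shortlisted": [1, 1, 1, 1, 0, 1,   1, 0, 0, 1, 0, 0],
})

ratio = disparate_impact(applicants, "group", "shortlisted",
                         privileged="privileged", unprivileged="unprivileged")
print(f"Disparate impact ratio: {ratio:.2f}")  # flag for review if below 0.80
```

Computing this ratio on live scoring data, rather than only on the training set, is what allows bias introduced after deployment to be caught.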

Using AI to prevent bias in training

Training and education for in-demand skills are key to expanding economic opportunity for disadvantaged communities, and AI can help expand that opportunity by playing a more constructive role in employee training. For instance, an IBM client uses an AI model to recommend a set of trainings for in-demand, high-paying skills in cloud or cybersecurity. Historically, these trainings would have been offered only to employees with advanced degrees. But with the advent of specialized training for “new collar” jobs, many employees now possess skills that were once attainable only through four-year college degrees. When models exhibit preferences toward employees with college or graduate-school degrees, they may inadvertently express bias against groups of people whose qualifications do not fit a stereotypical educational model.
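One simple way to surface this kind of credential bias is a perturbation test: toggle the education attribute on each record and measure how often the recommendation flips. The sketch below is a simplified illustration of that idea, not the OpenScale implementation; the toy model, feature names, and data are hypothetical.

```python
# Simplified perturbation check: flip the "degree" feature for each employee
# and count how often the training recommendation changes.
from typing import Any, Callable, Dict, List

def education_flip_rate(records: List[Dict[str, Any]],
                        predict: Callable[[Dict[str, Any]], int]) -> float:
    """Fraction of employees whose recommendation flips when 'degree' is toggled.
    A high rate suggests the model leans heavily on the credential itself."""
    flips = 0
    for rec in records:
        original = predict(rec)
        perturbed = {**rec, "degree": "bachelors" if rec["degree"] == "none" else "none"}
        if predict(perturbed) != original:
            flips += 1
    return flips / len(records)

# Toy model that recommends training only to degree holders with some tenure
def toy_model(rec: Dict[str, Any]) -> int:
    return int(rec["degree"] != "none" and rec["years_experience"] >= 1)

employees = [
    {"degree": "none",      "years_experience": 4},
    {"degree": "bachelors", "years_experience": 2},
    {"degree": "none",      "years_experience": 7},
]
print(f"Flip rate: {education_flip_rate(employees, toy_model):.2f}")
```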

Using AI to audit and report model behavior

Whenever AI models are used, businesses must maintain a historical record of the decisions based on those models. This is especially critical for regulated industries like banking. Governance best practice includes maintaining an audit trail of AI model bias, model quality and model drift. IBM Watson OpenScale helps address these requirements by keeping a historical record of all model inputs and outputs, along with documentation of all model metrics, so that clients can explain the behavior of an AI model and prove to regulators that the models they use have not shown bias in the past.
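As a rough sketch of what such an audit trail can look like, the snippet below stores each scoring request with its inputs, output, fairness metrics, and a timestamp, and can export the full history for review. It uses an assumed, simplified schema for illustration and is not the OpenScale payload-logging format.

```python
# Minimal audit-trail sketch: every scoring request is stored with its inputs,
# output, and metrics so decisions can be explained and reviewed later.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Dict, List

@dataclass
class ScoringRecord:
    model_id: str
    features: Dict[str, Any]                 # model input
    prediction: Any                          # model output
    fairness_metrics: Dict[str, float] = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    def __init__(self) -> None:
        self._records: List[ScoringRecord] = []

    def log(self, record: ScoringRecord) -> None:
        self._records.append(record)

    def export(self) -> str:
        """Serialize the full decision history, e.g. for a regulator's review."""
        return json.dumps([asdict(r) for r in self._records], indent=2)

log = AuditLog()
log.log(ScoringRecord(
    model_id="candidate-shortlist-v3",
    features={"years_experience": 5, "has_own_transport": False},
    prediction="shortlist",
    fairness_metrics={"disparate_impact": 0.91},
))
print(log.export())
```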

Fairness in hiring and training is not limited to human behavior. Technology can play an equally important role in promoting fairness and preventing bias, and with AI that role begins with monitoring models to ensure they behave properly. Using IBM Watson OpenScale for model monitoring helps enterprises catch hidden biases in AI models before they become a problem, a must-have for all enterprises employing AI models in decisions regarding human performance.
