Responsible AI and why your organization needs it

Cognizant AI
Sep 10, 2021

By Dr. Jillian Powers


Responsible AI, in full, covers governance, ethics, and transparency.

A Responsible AI program provides guidelines and direction for data analytics and machine learning (ML) development within organizations and institutions, with a focus on harm reduction, oversight, and understanding.

As legislation, expectations, public opinion, and capabilities shift, Responsible AI allows organizations to realize the transformative potential of AI while demonstrating regulatory compliance and ethical standards/best practices.

However, most organizations currently have only a limited idea of what Responsible AI entails. Responsible AI is more than audit readiness, data security, or regulatory compliance. As outlined in the Universal Declaration of Human Rights (UDHR), and as articulated for Cognizant by our Office of Ethics and Sustainability, the human and civil rights of people, as well as their individual and collective well-being, must be considered when developing AI systems.

AI development, like AI models themselves, is complicated, and we have reason for concern: AI can cause harm at scale. A decision could send a worker onto an active track for repair and cause injury, death, or significant material or immaterial damage. An AI model or system could suffer a security breach, be co-opted for other ends, or be used in a manner that infringes on privacy. Without proper transparency and oversight, an AI-enabled system could lead to repeated discrimination against protected or historically excluded populations, exacerbate existing inequalities, and bar people from opportunity and a dignified human life. In addition, the development of AI systems strains resources and contributes to climate change.

Since the deployment of an AI-enabled solution is just one small piece of the socio-technical matrix, its impact resonates and expands outward. To account for this, the approach we take at Cognizant considers the larger social and human implications of our technology and is purposeful about the impact our solutions have on our clients and on people, so that together with our associates we can realize a more sustainable, equitable, participatory, and enjoyable world. By taking a data-driven, full-context, people-first approach, we can identify not only risk but also expansive and sustainable forms of value.

For example:

- Responsible AI delivers value beyond short-term profits, efficiency, or cost reduction. By benefiting users, customers, employees, and society at large, it elevates the customer and user experience.

- It improves long-term futures and ensures regulation readiness, because it takes privacy and security to data sustainability by applying a human-centered lens to the PII of data subjects and the sensitive information within enterprises.

- It builds trust and broadens revenue streams by centering the power of design, because understanding AI systems requires silo-breaking collaboration and accessible communication.

- It strengthens stakeholder relationships and improves service and solution quality because it's honest. Bias exists in our data, our models, and our world; a responsible approach to AI systems seeks to ensure AI is fair, unbiased, and representative from end to end and within teams.

- It improves retention and talent acquisition because it gives people a way to participate; clear processes and incentives for engagement create a culture where every individual is empowered to protect people, minimize risk, and discover spaces of humane value.

Responsible AI can be a crucial investment. Spending on AI systems is expected to exceed $77 billion by 2022, yet currently only about 10% of models actually make it into production. Poorly defined or executed programs; issues of data quality, model development, and model shelf life; and a lack of standards, good practices, and transparency all impede the scaling and adoption of AI.

Scaling successfully requires responsibility. A data-driven, full-context, people-first approach to AI governance, ethics, and transparency treats the process itself as the competitive advantage. It's a path toward sustainable data-driven innovation because it deliberately sits in the messy intersection of corporate and social value.

There are many necessary and messy human parts to technology development and data enablement; it requires intention, organizational change, ethical and clearly documented data science practices, and expansive forms of collaboration. This isn't easy, and if anyone tells you otherwise, they're selling you magic beans.

What are you doing to bring responsibility to the table? What's your approach? Where can we begin, and how can we support you?

Let’s work together.

About the author:

Dr. Jillian Powers is the Global Head of Responsible AI at Cognizant.
