On bias, black-boxes and the quest for transparency in Artificial Intelligence

Virginia Dignum
Jan 26, 2018

An increasing number of researchers, practitioners and policy makers are realizing that much needs to be done to deal with bias in data and algorithms, and to promote transparency of AI models. Only in this way can the proper use of AI be ensured, benefiting people's lives and supporting fundamental human rights.

Opacity in Machine Learning, the so-called black-box effect, is often cited as one of the main impediments to transparency in Artificial Intelligence. Machine Learning algorithms are developed with the main goal of improving functional performance. This leads to complex models that are optimised to provide the best possible answer to the question at hand (e.g. recognising pictures, analysing x-ray images, classifying text…), but they do so by fine-tuning outputs to the specific inputs: they approximate a function without giving any insight into the structure of the function being approximated.
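This black-box character can be seen even in a toy setting. The sketch below (my own illustrative example, not from the original article) trains a tiny one-hidden-layer network to approximate a target function it only ever observes through input/output samples. The fit improves, but the learned weights are just a list of numbers that reveal nothing about the structure of the function they approximate:

```python
import math
import random

random.seed(0)

# The target function stands in for some real-world relation; the model
# never "sees" this formula, only sampled input/output pairs.
def target(x):
    return math.sin(x)

# Training samples drawn from the target on [-3.1, 3.1].
data = [(i / 10.0, target(i / 10.0)) for i in range(-31, 32)]

# A tiny one-hidden-layer network: y = sum_j v[j] * tanh(w[j]*x + b[j])
H = 8
w = [random.uniform(-1, 1) for _ in range(H)]
b = [random.uniform(-1, 1) for _ in range(H)]
v = [random.uniform(-1, 1) for _ in range(H)]

def predict(x):
    return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H))

def mse():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

mse_before = mse()

# Full-batch gradient descent on the squared error.
lr = 0.1
for step in range(5000):
    gw, gb, gv = [0.0] * H, [0.0] * H, [0.0] * H
    for x, y in data:
        h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
        err = sum(v[j] * h[j] for j in range(H)) - y
        for j in range(H):
            d = err * v[j] * (1 - h[j] ** 2)  # backprop through tanh
            gv[j] += err * h[j]
            gw[j] += d * x
            gb[j] += d
    n = len(data)
    for j in range(H):
        v[j] -= lr * gv[j] / n
        w[j] -= lr * gw[j] / n
        b[j] -= lr * gb[j] / n

mse_after = mse()

# The error drops, yet the 24 fitted numbers below say nothing about
# "sine" -- the structure of the approximated function stays hidden.
print("MSE before/after training:", round(mse_before, 3), round(mse_after, 3))
print("learned hidden weights:", [round(x, 2) for x in w])
```

Inspecting the printed weights makes the article's point concrete: performance is achieved by adjusting parameters, not by recovering an interpretable description of the underlying function.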

On the other hand, machine learning algorithms are trained with, and reason about, data generated by people, with all its shortcomings and mistakes. We all use heuristics to form judgements and make decisions. Heuristics are simple rules that enable fast processing of inputs and usually yield an appropriate reaction. Heuristics…
