The case for innovation to make more space for ethics.
When Uber and Lyft drivers went on strike to protest for better working conditions, it didn’t look or feel like anything extraordinary. There was, however, something else brewing. At the surface level we see a group of people angry about benefits and working conditions, but in the larger context we are experiencing the first wave of workers protesting the eventuality of being replaced by machines.
Generally, job functions are either being moved to locations with ‘more relevance’ to drive efficiency, or people are being replaced by other people in locations that offer the employer lower taxes or access to better infrastructure and education, to name a few reasons. Uber is relying on self-driving cars becoming a reality one day. In 10 or 20 years’ time, worker protests will be different because humans will have become irrelevant to employers, replaced by machines. Or so goes the argument.
What happens when a group of people becomes irrelevant? When people’s voices do not matter because they are simply not ‘needed’ in a specific context to meet a need in a society, an economy, an organization, or a sector? What happens to the people in the margins when the very fabric of society changes? What does it look like when the nature of work changes? And within the UN Refugee Agency (UNHCR), how does that impact our efforts to integrate refugees and others into that morphed fabric?
Although a future in which machines take over all of our jobs is distant, we should start to understand what irrelevance and being replaced mean in the age of intelligent machines. At UNHCR we often speak about staying relevant in the future, but do we really mean not being replaced by others? Surely our sector will remain relevant; we will just have different actors delivering the services we are in charge of at the moment. Right?
A year ago we started to work on a project together with the Division of Human Resources using artificial intelligence and machine learning (AI/ML) to support recruiters in screening talent for UNHCR jobs. We heard corridor murmurs about how machines may one day replace UNHCR jobs, and how we can’t trust a machine to make the right decision. We also heard people saying they have been waiting for the day a machine does their job so that they can focus on more relevant things.
To make sure we communicate a realistic picture of AI/ML in UNHCR, we point out that humans are (still) making the ‘final decision’ and reviewing each step of the recruitment process, particularly on who we view as ‘talented’ and who is ‘screened out’. We are also careful that the language we use around this project does not imply that machines are ‘more’ intelligent than humans. There is fear, caution, and disengagement around this project, as well as excitement and eagerness to move faster.
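To make the human-in-the-loop idea concrete, here is a purely illustrative sketch in Python. The names, weights, and cut-off are hypothetical, not UNHCR’s actual system; the point is only that the model recommends while a human makes every final decision:

```python
# Hypothetical human-in-the-loop screening sketch (not a real system).
# The model only *recommends*; a human reviewer has the final say.

from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    years_experience: int
    languages: int


def model_score(c: Candidate) -> float:
    """Toy stand-in for an ML ranking model (weights are made up)."""
    return 0.1 * c.years_experience + 0.05 * c.languages


def screen(candidates, reviewer):
    """Rank candidates by model score, but route every recommendation
    through the human reviewer, who may accept or override it."""
    decisions = {}
    for c in sorted(candidates, key=model_score, reverse=True):
        recommendation = model_score(c) >= 0.5  # hypothetical cut-off
        decisions[c.name] = reviewer(c, recommendation)  # human decides
    return decisions


# Example: a reviewer who happens to agree with the model.
candidates = [Candidate("A", 8, 2), Candidate("B", 1, 1)]
result = screen(candidates, lambda c, rec: rec)
```

The design choice worth noticing is that `screen` cannot return a decision that no human has seen; the reviewer callback sits between the model output and the outcome.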
We have learned that we are in fact in the early days of machines replacing us in various work processes, and that this project is, in a way, the inaugural piece of a larger puzzle: the growing tension between being replaced and staying relevant.
How can we work through the tension between humans and machines in the context of UNHCR? We found that one way to address this tension is through ethics, for two principal reasons:
- Work in AI/ML provides us with insights that bring more humanity into our processes, not less. We see an opportunity to have conversations that elevate the importance of putting people first in our work processes, and that show ethical thinking and action are in fact our greatest value added;
- Through teaching machines to replicate colleagues’ cognitive work, we unearth meaningful conversations about ethical decisions in AI/ML, and beyond. Looking at recruitment bias is just the tip of the (melting) iceberg.
UNHCR’s Innovation Service is in love with questions and conversations that shift mindsets from narrow to broad, from gaps to bridges. So whilst working on the project, these key questions emerged:
- How do we create innovation processes that are ethical, and how do we supervise machine learning processes in UNHCR?
- How do we systematically unveil human bias when working in an AI/ML context?
- When we start to deploy machines in more areas of our work, who decides what is ethical at an institutional level, and who is accountable?
- Are we prepared to examine the ethics of machines that know more ways to ‘operate in the most efficient way’ than we do? Do we trust machine recommendations and decisions?
Having worked in this space, many of us at the Innovation Service believe ethics will become the cornerstone of our work in innovation. Much like climate change and the automation of our existing processes, ethics, and how we can practically apply its principles across the organization, is something we are excited about.
We hope to see more investment in gaining an understanding of how ethics and disruptive innovation will intersect and affect organizational and personal values, teamwork and our ability to foster diverse leadership in UNHCR.
Until then, don’t worry. There’s a human verifying the machine’s decisions, making sure you are not being screened out by a machine. Or is there?
JK. All good. There is.
No, but really…is there a human behind the machine making the final decisions?