The Void: Building trust for Artificial Intelligence in the humanitarian context

UNHCR Innovation Service
14 min read · Apr 8, 2019

By Sofia Kyriazi, Artificial Intelligence Engineer

Diagram by Hans Park.

Do first impressions matter?

When I first arrived at UNHCR, snuggled in a small office in the basement, situated at the heart of a new team focused on the European refugee situation, I didn’t necessarily believe that there was space for programmers to innovate within the organisation. While the team was rushing to collect the number of arrivals for the day through dozens of emails, amongst printed Excel sheets of data and coloured highlighters strewn across tables, I was perhaps naive in the face of the challenges at UNHCR.

My impression was that even if this team was ready to integrate new ways of thinking about technology, how could we possibly change the mindsets of the dozens of colleagues on the frontline of an emergency collecting this data? We had colleagues in another team sending unstructured data through emails, partly because they were so overwhelmed, and partly because they didn’t have time to rethink data processing. In those first few moments, I felt like a fish (okay, maybe a whale) out of water.

Luckily, one of the immediate lessons I learned is that first impressions don’t really matter. Through a small nudge, we were quickly able to move this data from heavy emails to a new automated approach for data collection and processing. When you’re trying to change behaviours and ignite trust in something new, even the smallest win can be the start of something big.

As an Artificial Intelligence (AI) Engineer, I’ve realised that one of the primary needs in creating change, specifically as it relates to technical challenges, is making those challenges, and the technology behind the possible solutions, accessible and trusted. So what exactly is the difference between automation and AI? When you’re creating a computer-oriented solution, the initial step is to automate the collection and transformation of data, but that is just the beginning. You can also have your solution execute a series of actions to perform tasks that people used to perform; that, too, is automation. When the actions are not trivial and the tasks require more thinking (i.e. the detection of patterns), machine learning can assist by processing large amounts of data and by weighing and combining models. In turn, you are modelling a real-world environment for the machine and teaching it how to make decisions. And that is Artificial Intelligence.
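For the technically curious, here is a minimal sketch of that distinction in Python. The arrival figures, the escalation label and the model choice are all invented for illustration; the first function applies a rule fixed in advance (automation), while the classifier learns its rule from examples (machine learning).

```python
# A rule that is fixed in advance: this is automation.
def total_arrivals(daily_counts):
    return sum(daily_counts)

# A rule that is learned from examples: this is machine learning.
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [arrivals_today, arrivals_yesterday],
# labelled 1 when the situation later escalated, 0 otherwise.
X = [[120, 80], [40, 35], [300, 150], [25, 30], [210, 90]]
y = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)

print(total_arrivals([120, 80]))   # automation: the same answer every time
print(model.predict([[180, 60]]))  # learning: a pattern inferred from the data
```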

The first step in experimenting with data and testing an Artificial Intelligence hypothesis is having data at all. At UNHCR, everyone is using data in their day-to-day work — even if they don’t recognise it as “data” per se. The number of arrivals I mentioned previously? Data. Free text within surveys conducted with refugee communities? Data. Traditional humanitarian focus group recordings? Data. But for this data to be interpretable, it needs to be structured in a way that allows for basic visualisation. We need to be able to play with the data, but in many instances, data isn’t accessible enough to allow for this type of experimentation. More importantly for AI, the data needs to be structured so that it can be processed and analysed.
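As a small illustration of what “structured” means in practice, the sketch below turns an invented free-text arrivals email, the kind described earlier, into records a chart or a model could consume. The email wording and the field names are assumptions made purely for the example.

```python
import re

# An invented free-text email, the kind the team used to receive.
email_body = "Lesvos: 120 arrivals today. Samos reported 45 arrivals."

# Structured records are what make visualisation and analysis possible.
records = [
    {"location": place, "arrivals": int(count)}
    for place, count in re.findall(
        r"(\w+)[:\s]+(?:reported\s+)?(\d+)\s+arrivals", email_body
    )
]
print(records)
# [{'location': 'Lesvos', 'arrivals': 120}, {'location': 'Samos', 'arrivals': 45}]
```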

In the end, it’s not just about having the data (because there is a lot of it), but about truly understanding how people treat their “data”: how they go about their everyday tasks, what absorbs the most time in their processes, and what it is they wish they knew, but don’t. Any kind of activity that follows certain steps to achieve a goal — whether we recognise it or not — is a cognitive process. We process information, combine it with our knowledge, identify a pattern and make a decision, based on our reasoning.

There are different sides to this. There is potential in fully automating such processes, which is commonly associated with losing control over them; or in semi-automating, keeping ourselves in the loop to supervise the flow of the work, adjusting our role to the process, or even letting the process change to adapt to our needs. A fully automated process is, for example, the redirection of some of the emails we receive to the “SPAM” folder of our email client. A semi-automated process would be the “writing assistants” that detect possible improvements in anything we type, where it is up to the human to approve the suggestions or not. You could potentially change the process, add to it, evaluate historic decisions, and detect additional information needs that would advance and optimise your workflow.
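A rough sketch of the two modes, assuming a trivial stand-in classify() rule in place of a real trained model:

```python
# A stand-in rule in place of a real trained model.
def classify(email):
    return "spam" if "lottery" in email.lower() else "inbox"

def fully_automated(email):
    # The machine decides and acts on its own, like a spam filter.
    return classify(email)

def semi_automated(email):
    # The machine only suggests; the human approves or rejects,
    # like a writing assistant.
    suggestion = classify(email)
    approved = input(f"File this message under '{suggestion}'? [y/n] ")
    return suggestion if approved.strip().lower() == "y" else "inbox"

print(fully_automated("You won the lottery!"))  # 'spam', no human involved
```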

All this requires something more powerful for success than the data itself: trust. People usually associate trust in AI with trust in the results, but what we are really talking about is trust in developing the AI solution together with the engineers: trust in sharing the cognitive process and participating in the design of the solution.

The alarm bell of false positives

Science fiction has spent years preparing the world for the unofficial takeover of Earth by robots. From movies to graphic novels to radio productions nearly a hundred years ago, there has been an association of the near-apocalypse with future technology. In general, people do not fear the true positives of artificial intelligence. A true positive in the case of cancer identification, for example, is when a patient has cancer and the AI algorithm detects that this is the case: the true fact matches the machine’s classification.
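For readers newer to this vocabulary, the four possible outcomes of such a binary classifier can simply be counted. The labels below are invented for illustration, with 1 meaning “has cancer” and 0 meaning “does not”:

```python
# 1 = "has cancer", 0 = "does not"; labels invented for illustration.
actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # true positives: correctly detected
fn = sum(a == 1 and p == 0 for a, p in pairs)  # false negatives: missed cases
fp = sum(a == 0 and p == 1 for a, p in pairs)  # false positives: false alarms
tn = sum(a == 0 and p == 0 for a, p in pairs)  # true negatives: correctly cleared

print(tp, fn, fp, tn)  # 2 1 1 2
```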

Whether it’s the humanitarian or the private sector, people will readily welcome the accuracy and the predictions that enable them to do their work better and more efficiently. And yet despite the endless true positives that artificial intelligence can bring, the one false positive will overwhelm any pragmatic feelings one initially had towards the technology. As Peter Haas, a robotics researcher who is afraid of robots, explains, “The machine never fails gracefully, and that is what is scary.”

Recently, we held the first Artificial Intelligence Workshop at UNHCR, where the team presented an AI solution to better screen applicants for our Human Resources Department. It did not take long for the fear of the false positive to manifest itself within the room. Almost instantly a colleague raised their hand eagerly, as if they had engineered this thought for the first time, and stated that we “needed to be careful.” With their hand still waving in the air, our colleague’s reaction quickly skipped to two main points: artificial intelligence will cost people their jobs, and if there were ever to be a mistake in the screening process, UNHCR could end up paying large amounts of money to account for the machine’s error. Despite the overwhelming evidence that the machine would not cost anyone their job, the trust that it wouldn’t still needed to be built strategically. And the possibility of a single false negative (the machine inaccurately screening out one candidate) outweighed the thousands upon thousands of successfully screened candidates that would save UNHCR staff time and money.

One missing piece of the puzzle in building trust around artificial intelligence is that behind every machine there are humans. If we look at a typical classification exercise where a machine is distinguishing dogs from wolves, and the machine accidentally classifies a husky as a wolf, people will argue, “Well, a human could make that mistake — they look so similar!” But in reality, we place much more blame on the machine for this misclassification than we would on a human. The machine is not human; it is playing the role of an expert, and we trust experts to simply not make mistakes. The wolf example is harmless enough, and maybe the misclassification won’t affect anyone’s life, but what if a human gets misclassified by a machine?

So, how does this translate to real life? The most severe case we cite in artificial intelligence theory is the cancer example mentioned previously. Suppose a machine misclassifies the type of cancer a patient has, and because of this false negative, their life is lost. This would be an absolute worst-case scenario.

What people miss in this process, though, is that the human expert — i.e. the doctor — is still included and has the final say in the classification. AI can assist, not just in classifying the patient, but in unravelling the way the decision has been made; the decision, in the end, is up to the doctor to make. That is their role, and the machine is there simply to make their job a bit easier. The same is true for the solution we are developing with our Human Resources Department: the machine is merely acting as an assistant to the expert, and there is a marriage between the human and the technical approach.

Everyone is afraid of mistakes

We don’t want machines to make mistakes. And we also don’t want humans to make mistakes. But even when your solution produces a false positive, we should not completely disregard the project because trust has been dented. You expect and trust your car to get you from place A to place B, yet if your car breaks down on the side of the road, you repair it — and this approach should be applied to AI solutions as well. We have to fine-tune our solutions, and that requires humans in the process to detect mistakes, working hand-in-hand with AI Engineers and System Developers to “unravel the black box” of new technology and rebuild the trust.

What we have discovered at the Innovation Service is also the important distinction between recognising the potential of artificial intelligence and trusting it. Ultimately, we cannot win people over within the organisation if the AI solutions hold no interest for the users they were created for, and are not strategically positioned — and communicated — for their needs. How can people identify the need for an innovative solution if they don’t know what artificial intelligence can do for them, and if they cannot recognise the need for a change in their processes? If we frame our investment in the future of AI around recognising its potential, rather than full confidence in the machine itself, there is more opportunity to change mindsets and create value for users. This framing is the bridge that turns potential into trust.

Investing in the potential of AI

Most people are fearful of change. Combine that with new technology and we have a recipe for doubt. In reality, much of the change related to artificial intelligence has already been experimented with in other sectors and in the humanitarian context. And what’s great about the work currently being done is that collaboration lies at the heart of it.

If we turn our heads to academia, there is an immense amount of work being done across university labs and research groups. Academia has already started to create diverse cohorts of professors, associates, and students from fields that have not previously worked together. In my Masters in Human Media Interaction, students from cognitive science, computer science, psychology, aeronautical engineering, design, and many more fields came together to define challenges and work with interdisciplinary concepts. These included projects such as emotion detection for storytelling, where conversational agents adapt to the user’s perception of the story and change the flow of information given to the user; or a simulation for fire training during a flight, where augmented reality tools (engineering) go hand-in-hand with image recognition (computer science). For artificial intelligence to be successful, we need to work across disciplines to tackle the difficult questions surrounding ethics, moral philosophy, and prejudice in how we build our machines.

An example of how academia and the humanitarian sector are currently collaborating is a project from the Airbel Center at the International Rescue Committee (IRC) and Stanford University’s Immigration Policy Lab. This project is a combined effort to optimise the resettlement of people in a new country, according to the market needs of the country of destination. The algorithm uses historical data on refugee demographics, local market conditions, individual preferences and outcomes to generate predictions that suggest an ideal location for resettled refugees. This actionable information can then be harnessed to better inform decisions about where refugees are settled in the United States.

It is an extremely complicated project technically, but the concept itself is really simple. It derives from a need: the need to conduct resettlement in the most productive way, so that society benefits from new arrivals being integrated into the job market, and so that the quality of life and dignity of resettled people improve. The IRC describes the algorithm as “part of a larger enterprise to revolutionise refugee resettlement, by harnessing private capital, data and volunteers to change the calculus for host countries in determining whether they resettle — and enable many more refugees to start a new life in a welcoming country.”
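From the published descriptions, the core idea is to predict an outcome, such as the probability of finding employment, for every person-location pair and then assign placements accordingly. The toy sketch below captures only that idea, with invented scores and a greedy assignment; it is in no way the IRC and Stanford implementation, which handles far richer data and constraints.

```python
# Invented outcome scores for each (person, location) pair, standing in
# for predictions learned from historical resettlement data.
predicted_employment = {
    ("family_A", "city_1"): 0.62, ("family_A", "city_2"): 0.35,
    ("family_B", "city_1"): 0.41, ("family_B", "city_2"): 0.58,
}
capacity = {"city_1": 1, "city_2": 1}
placements = {}

# Greedy assignment: take the best-scoring pairs first, respecting capacity.
for (person, city), score in sorted(
    predicted_employment.items(), key=lambda kv: -kv[1]
):
    if person not in placements and capacity[city] > 0:
        placements[person] = city
        capacity[city] -= 1

print(placements)  # {'family_A': 'city_1', 'family_B': 'city_2'}
```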

Another example, housed within the Innovation Service, is the project I mentioned before for screening external candidates. We call this collaboration Project Nero. Nero has an AI solution running in the background, but in simple terms it is a screening tool that allows UNHCR’s recruiters to reduce the time they spend screening an applicant for a specific talent pool. Once the machine has analysed an applicant’s characteristics, it labels the applicant and gives some indicators as to why they were classified as a good or poor fit. The concept, again, is very simple, and the need was there because of the time and capacity limitations of UNHCR staff. But the real innovation here comes from people identifying the need for such a solution (UNHCR’s Human Resources Department), to break the cycle of repetitive processes to which recruiters are accustomed.
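To give a feel for what such a tool can look like, here is a hedged sketch, emphatically not Project Nero itself: a simple linear model labels an applicant, and its learned weights double as rough indicators of which characteristics drove the label. The features, the historical decisions and the candidate are all invented for illustration.

```python
from sklearn.linear_model import LogisticRegression

# Invented features and past screening decisions, for illustration only.
features = ["years_experience", "languages", "field_missions"]
X = [[2, 1, 0], [8, 3, 4], [5, 2, 1], [1, 1, 0], [10, 2, 5]]
y = [0, 1, 1, 0, 1]  # 1 = previously screened in, 0 = screened out

model = LogisticRegression().fit(X, y)

candidate = [6, 2, 2]
label = model.predict([candidate])[0]

# The learned weights double as rough indicators of what drove the label.
indicators = sorted(zip(features, model.coef_[0]), key=lambda fw: -abs(fw[1]))

print("recommend" if label == 1 else "deprioritise")
print("strongest indicators:", indicators)
```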

Create a pathway for harnessing the potential of AI in humanitarian innovation

We believe that resistance is often a lack of clarity. When it comes to the adoption of new technology, change lends itself to uncertainty. So what are the first steps to embracing the potential of AI in humanitarian innovation? Here are a few actions you can already start taking to put out the flames of fear around artificial intelligence:

Collaborate with academia: There are already groups of people looking for a higher motivation and purpose for what they are researching, and most academic projects find applications when combined with real-world challenges. There is a massive opportunity not only to collaborate but to drive and influence research in humanitarian contexts. Allowing academia to enter the humanitarian field more strategically would be a union of forces.

Build an understanding that data is an asset: Often people don’t even know they are dealing with data, because in our case the data represents people; and within UNHCR there is often the belief that the processes being followed are so unique that out-of-the-box solutions would not work properly. In the humanitarian context, we must not only attempt to bring these technologies into our organisation but make these concepts accessible and jargon-free for people working across other departments. Once people understand the story that the data is telling, they can translate that into knowledge and, furthermore, detect opportunities rather than problems in the challenges they face.

Hire more data-driven people: I often get asked if the humanitarian sector is even ready for AI Engineers. Maybe it is a big step to jump straight to AI Engineers on your team, but there is an obvious need to bring in people who combine knowledge from other fields and introduce new expertise. We need people who are at ease with experimentation and with testing theories using the data. This expertise can then be combined with that of people in-house who know how to interpret what the data means. By combining both strengths, new ways of making decisions and measuring impact can be tested.

Bring in more data-driven processes: When it comes to decision-making processes, data is essential. The more informed a decision is, the better the impact it can have. Since hundreds of decisions are made every day, wouldn’t it be better to at least have data to justify why a decision was made, and measurements of the decision’s impact, so that in the future we know what to expect? Experts know what to expect because they have experimented with decisions in the past, and that is knowledge that should be documented, along with the matching data and process knowledge that can be transferred to an AI solution.

Slowly build the trust through bright spots: Above, I highlighted a few bright spots of AI solutions that are already being tested in the humanitarian context. People don’t necessarily want to be the first to bring a completely new technology into their organisation — and luckily, it’s likely you aren’t. Even if you can’t find examples in your own organisation, you can look to lessons learned from academia, the private sector, the public sector, and dozens of other fields that have already taken the first step. These are the stories you can use to build trust in experimenting with an AI solution. When we received the initial request to build an AI solution with our HR Department, one of the catalysts in this collaboration was the market research they had already done on the private sector, where these potential solutions were thriving. Find your bright spots and tell those stories to influence others.

I would be surprised to see UNHCR survive as an organisation without integrating expertise from unconventional fields of science and the arts. This is not to say that UNHCR is not important, or that it can simply be replaced. But UNHCR today is not what it once was, just as the phenomenon of population flows is not what it once was. Societies are technologically oriented: the power of receiving instant updates from the news and the ability to connect with each other virtually have changed our communication. Humans may not have evolved much biologically in the past 100 years, but their needs have changed, and technology has helped drive that change.

At UNHCR, leaders can help integrate new and technical expertise. Management plays a great role in this: in how they shape and accept change in the teams they are leading, a slow and steady restructuring for the future. And since most managers would jump at finding people with experience in these technologies to help them better deliver their services, I’d also urge you to think about diversity when seeking out this talent. Bear in mind that it is a female AI engineer writing you this story, and we have a lot of ideas on how you can bring diversity into your AI solutions. Stay tuned, we’re just getting started.

A quick note to say that we have deliberately left the discussion of ethics and bias in artificial intelligence out of this article. In forthcoming editorials, we will address the complexity surrounding these issues, how they interrelate, and some of the main challenges they present for how we collect and process data in the humanitarian context.

This essay was originally posted in the recently released publication — UNHCR Innovation Service: “Orbit 2018–2019”. The publication is a collection of insights and inspiration, exploring the most recent innovations in the humanitarian sector and the thinking that is shaping the future of how we respond to complex challenges. From building trust for artificial intelligence, to creating a culture for innovating in bureaucratic institutions, to using stories to explore the future of displacement — we offer a glance at the current state of innovation in the humanitarian sector. You can download the full publication here. And if you have a story about innovation you want to tell (the good, the bad, and everything in between) — email: innovation@unhcr.org.

UNHCR Innovation Service

The UN Refugee Agency's Innovation Service supports new and creative approaches to address the growing humanitarian needs of today and the future.