Decolonial Humanitarian Digital Governance
“Before we start, I’d like to acknowledge that the decisions we make in this room today may have implications into the future and far beyond the lifetime of this project, team or organisation. We make those decisions with that in mind.” — The Long Time Project
This is the first public ‘writing out loud’ of my dual fellowship with the Berkman Klein Center and the Carr Center for Human Rights Policy, both at Harvard University. In the spirit of true exploration, rather than starting from a point of ‘expertise’ and homogeneity, I offer this as an emergent thought process and an invitation to participate. I must also point out that though this blog is written through the lens of my journey, my learning isn’t mine alone — it is greatly informed by many people, disciplines and bodies of research, and I am grateful to the countless people who have gracefully shared their knowledge and wisdom so that we might collectively grow together.
“Who and what gets fixed in place to enable progress? What social groups are classified, corralled, coerced and capitalized upon so others are free to tinker, experiment and engineer the future?” — Ruha Benjamin
I started off my exploration with this hypothesis: Can humanitarian digital policy be decolonized?
I started here because I’ve often argued that the humanitarian aid system perpetuates hierarchical, patriarchal, hegemonic views of what ‘development’ and ‘progress’ look like, ignoring other world-views and the underlying systemic and structural pillars of inequality and bias. As the humanitarian aid system increasingly intersects with technology systems developed in the context of Western capitalism and within small pockets of privileged power, I have not been alone in raising concerns about the implications of the collision of two fundamentally patriarchal and hegemonic systems for those who are minoritized in the Global South.
To put this in context, the range of digital or technology systems that humanitarian actors engage with is incredibly wide. As a snapshot, this can include:
- Systems used to support the coordination of aid efforts — for example, biometric digital identities in refugee assistance programs or earth observation technology in disaster relief
- Digital platforms or digital transformation with partners — for example, e-government services that link social protection and citizen welfare
- Technology innovations to support affected populations’ access to aid — for example, ICTs, apps, digital ledgers, hardware and many others
Even what we constitute as ‘humanitarian’ has shades of grey — from the pointy end of a crisis to support during peacetime. The wide array of aid provision does not sit easily within the carefully designed terminology found in textbooks and political resolutions.
Additionally, who works on technology or digital innovations within humanitarian institutions is wildly divergent — mostly within institutions’ innovation labs, or within digital or thematic teams — with solutions then deployed to the Global South for implementation or testing (the notion of testing on vulnerable groups is already contentious and has its own set of critiques).
The diversity of use cases, contexts, stakeholders and owners does, however, have one common thread. As humanitarian organisations do not traditionally have the requisite digital or technology expertise in-house, they partner externally to achieve their aims. And this is the sticking point — they easily fall back on expertise and partnerships that predominantly come from small communities of public-private, technology and academic institutions in the Global North. What this does (whether implicitly or unconsciously) is reinforce a dominant, hegemonic narrative that assumes:
- The experiences of global civil society and its actors are homogeneous
- The singular set of ‘Silicon Valley’ values that underpins such digital policies is the one all people aspire to, regardless of where they live or their cultural, societal, economic or geographic bearings
- Power dynamics will continue to be affirmed in the hands of those that currently hold power, without considering the cascading impacts of those policy decisions on the people most affected by them
The appropriateness and impacts of digital technologies and AI within aid systems are still being discovered and understood. An incredible amount of work has gone into establishing data governance guidance and data protection protocols, but not as much into broader policy governing the deployment and use of these technologies sector-wide. Often these efforts are siloed into communities or thematic areas of work, or perhaps result in codes of conduct — but they do not systematically take a sector-wide approach that interrogates whether the use of these technologies paradoxically exposes, mitigates or expands harm for often marginalised, minoritised constituents AND whether the digital futures we are chasing merely replicate or reinforce existing or past inequalities.
“..the relationship between tech industries and those populations who are outside their ambit of power — women, populations in the Global South, including black, Indigenous and Latinx communities in North America, immigrants in Europe — is a colonial one” — Sareeta Amrute, Data & Society
So, could humanitarian actors play a more intentional role in designing just and equitable digital futures? Could we, in fact, unshackle ourselves from our neo-colonial humanitarian mental models and push back against the hierarchies of techno-chauvinism and meritocracy? Could we use this moment in time to design worlds that don’t imagine some groups — especially populations in the Global South — as merely passive beneficiaries, outside the borders of the expertise we seek? Could we invert the pathways of tech colonialism in the aid sector?
As I started my exploration, I realised that my original hypothesis needed more rigour. There were tensions inherent in my starting point. Firstly, what in fact constitutes digital policy? For whom? For what purpose? What influence does policy actually have over systems? Different actors have their own policy-making instruments, resting on different leverage points that serve multiple agendas. Considering the multiplicity of use cases, actors and contexts, what I was softly heading towards was the governance of the deployment and use of digital systems and technologies within the aid sector. Notwithstanding the work in data governance and data protection in aid, there are still gaps in governance around how and when we deploy tech innovation, the digital systems we create and support others to create, the supply chains we are part of, due diligence processes, accountability and risk ownership, and many other elements. Added to this are questions of immunity (traditionally, humanitarian organisations do not go beyond individual institutional governance mechanisms, which are often bordered by the institution’s immunity) and of the appropriateness of the technology innovations we deploy.
How then do we design digital governance systems that speak to these complex, intertwined issues? Instead of looking at digital governance merely in terms of control, could weaving in feminist and decolonial approaches help us liberate our digital futures so that they become spaces of safety and of humanity for those whom we are meant to support? Could these approaches be ways to design new forms of digital humanism?
“We could build systems for durability, but instead some dipshits told you we needed to move fast and break things” — Audrey Watters
Could a digital governance approach consider questions like the following:
- How do we go beyond what we are merely legally required to do, to what is right to do? And importantly, are humanitarian actors willing to go beyond their immunity?
- How do we extend digital governance to go beyond the fortresses of individual institutions to a multi-actor, sector wide approach that is emergent and iterative?
- Can and should governance systems help users realize and/or amplify their rights, and in fact serve as a way to hold humanitarian actors to account?
- How might governance systems actually flatten power in decision making?
- Can governance systems help us monitor our accountability to our promises to affected populations?
- Who has the power to draw the conclusions from assessments done?
- Just because we *can* deploy a specific asset or tech, *should* we?
- How do we increase the proportion of risk/harm that is absorbed by humanitarian institutions, rather than pushed further down the chain to affected populations?
- How might we incentivise governance systems to do the right thing?
Digital technologies and AI mask ideologies of power and are wed to a market ideology of dominance. To intentionally carve out a different type of ideology would require governance systems that bring different knowledge sources into decision making — systems that prioritise those who are most impacted by, or on the receiving end of, an initiative, rather than centring the privileges of donor or aid institutions. Good digital governance, in the vein this research is pursuing, must disrupt the idea of ‘solutionism’; it must also critique the systems in which a technology is being deployed and the impacts of that deployment — now and into the future.
And this is when my hypothesis started to fork. Where I originally thought to use elements of futures methodologies to analyse future states and imaginaries of what we might collectively desire, I realised this was not enough for the rigour of what needed to be achieved. What I have learned in my practice of strategic foresight within systems and institutional transformation is that facilitating the method does not automatically result in a change in policy and/or strategic decision making — because the application of insights isn’t woven into how change actually occurs. Often in humanitarian/development work, we are fire-fighting the now — the problem right in front of us. We design solutions and interventions aimed at solving what is immediately in front of us without necessarily assessing:
- The complexity of the system in which that issue lives
- Just as rights are not static, neither is harm: what current and future theories of harm might arise out of that solution/intervention?
- What might be the impact of the solution/intervention on future generations and on our planet?
- Who holds the fiduciary duty to future digital selves of affected populations?
I now radically pursue the idea of foresight within ethics systems to inform governance. Can we consider the ethics of intervention through the lens of enabling future good and mitigating future bad? Might this in fact be where foresight can add rigour and value?
“Our radical imagination is a tool for decolonization, for reclaiming our right to shape our lived reality.” ― Adrienne Maree Brown
Lastly, I realised that my focus on decolonisation was, honestly, somewhat skewed.
Decolonisation is the process of undoing and giving up social and economic power, and restoring what has been taken away in the past — which arguably includes reparations. As Eve Tuck and K. Wayne Yang argue, decolonisation is not a metaphor for diversity and inclusion, nor is it a replacement for social justice efforts. Would the efforts of humanitarian aid ever include reparations? Could we ethically profess to even do so? We often argue that for the humanitarian system to change, it would have to give up the status quo on which it has a stranglehold. Could this ever be achieved through this one approach/framework alone? Would I be authentic, would I be honest, if I were to declare it so? The answer was no.
What I was unpacking, in actuality, was the notion of decoloniality — an aspiration to restore, renew, elevate, rediscover, acknowledge and validate the multiplicity of lives, lived experiences, cultures and knowledge of Indigenous people, people of colour and colonised people, as well as to decentre hetero/cis-normativity, gender hierarchies and racial privilege. I think of this as how we exist in plurality — in a multiverse, so to speak. And to include this in governance not as a tokenistic or virtue-signalling flag, but to help us consider different lenses, perspectives and sources of truth in even how we think about what is right, what is fair and what is just. How might decolonial and feminist approaches help us reframe our starting points and in fact influence governance design? This isn’t just about getting different under-represented groups around the table, but about shifting the knowledge and experiences we draw from in the very design and decision making of policy and governance frames. It is to ensure that we consider the multiplicity of ways in which issues of rights, privacy and agency are understood and experienced the world over, rather than imposing one narrow value judgement on these issues.
Through applying a decolonial lens to governance, might we in fact be able to intentionally design for equity rather than for privilege?
“Decolonisation, expressed by your lips, differs from the decolonisation that comes from within, as a revolutionary concept that speaks about rehumanization — a fundamental planetary project” — Sabelo Ndlovu-Gatsheni
Thus, I am gently landing on Decolonial Humanitarian Digital Governance: an emergent process grounded in the following question: how do we avoid locking people into future harm, indebtedness or inequity? It seeks to start answering the questions unearthed through the journey thus far. Importantly, its primary aim is to shift the focus of current humanitarian digital efforts from solving the problems of now to mitigating future harm and inequity. It aims not to bind or narrow the governance actions of humanitarian actors to their institutional legal liabilities and privileges, but rather to enable self-regulation in service of a shared responsibility for our collective futures across all actors and constituents in the aid sector.
Decolonial Humanitarian Digital Governance is a framework that:
- Is multi-dimensional, intentionally including a plurality of perspectives: affected populations, minoritized groups, humanitarian and tech governance actors, activists, and ethicists
- Rigorously analyses future harms and impacts on future generations
- Interrogates who absorbs future harm
- Is grounded in the rights and equity of impacted minoritized people
- Is emergent, acting as a compass rather than a checkbox
I see it visualised in three weaves.
And how might we judge this? Though not exhaustive, the criteria we might weave in could include:
1. The improvements it makes relative to what it replaces
2. Its understanding and active management of unintended consequences
3. Its mitigation and absorption of current and future harm
4. Its ability to cultivate an evolved awareness of rights, accountability and collective humanity
And that is where I am up to. This is a journey that isn’t complete by any measure. In fact, this framework must never be complete. It must never be static. The complexities we are dealing with are continuously evolving, and our commons have irrevocably shifted. We must unpack decades of mental models and behaviours, because the range of choices about the types of futures we want to inhabit has expanded exponentially — and the choices we make now will decide our collective fates.
To design more flourishing and liberated futures for all, we must uncover the plurality that is available to all of us, to frame how we see the world.