Asylum Bot — what’s not to like?

Well this is annoying and rubbish. The Guardian and Mashable both feature this cute and trendy app solution to “help refugees”. In fact, it is a sort of social vanity project, which (along with many others) gratifies techy social justice enthusiasts without any scrutiny of the end product, virtue signalling for all through plentiful likes and RTs, with no need to genuinely involve refugees.

Disclosure: I’ve been working with asylum seekers and refugees in the UK for over 20 years. I am also a legal professional and a techy, with experience of running online autonomous “bots”. Furthermore, I have been closely involved in setting up an online refugee project over the last 18 months that has been really successful (featured in tech press such as MIT Review and The Verge, as well as mainstream media: BBC, Times, Independent), primarily due to an enormous amount of offline time contributed behind the scenes by incredibly dedicated volunteers. So this is in good faith. Grumpy and affronted, and a bit ranty, bringing in a few other issues that are on my mind, but in good faith nonetheless. Online anonymity is a bit of a fantasy, but I’m gonna use a pseudonym.

Facebook comments on the Guardian article quickly degenerate into the usual poisoned arguments about immigration. Even writing this puts me in a very tricky spot: I am not criticising social do-gooders from the usual rightist standpoint (snowflakes, etc), and that seems to make it almost impossible to get a substantive response. I got blocked on Twitter :( by the creator, who’s styled as the “Robin Hood of the internet”; he also seems to have deleted the Guardian article from donotpaybot’s Twitter. He has been over-hyped, and it’s gonna be uncomfortable to reel it back in.

The bot seems to be down right now, but it has clearly been ill thought through. It’s a classic example of the Silicon Valley mentality where enough tech can solve any problem. Unfortunately this is naive, and most often seems to serve PR and career progression while treating potential beneficiaries in a paternalistic and disempowering way. In this particular case, a main selling point of the bot seems to be that no human needs to deal with these migrants, which is dehumanising for everyone involved.

There are also issues around the automation of work and the impact on people whose jobs are taken over by robots, but I don’t think those need to be discussed in depth here, as there are more immediate problems with the bot. I was very impressed by the bot’s success with parking fines: it’s an ingenious and effective way to help people pursue justice where legal assistance is not cost-effective. Asylum, however, is another matter.

Asylum law is a notoriously complex and dynamic field, with new primary legislation coming out almost every year, and people face severe consequences if their asylum or asylum support application is botched. The gap in legal provision is not caused by the triviality of the cases (as with parking fines); it is created by a much more specific and tougher set of circumstances.

In the UK, the OISC was set up in 2000 as a regulatory organisation specifically to address the problem of sub-standard immigration advice, and that year’s round of immigration legislation criminalised “bogus” (unregistered) immigration advisers. It may be slightly tricky to apply the law to a bot, but presumably its creator, and possibly even the platform (Facebook Messenger), could be liable. Sanctions can include jail time, as the OISC never tires of gleefully pointing out. It’s bizarre (or maybe just a bit crap) that this hasn’t been taken into account and addressed explicitly, but then as the Mashable article points out:

“So many lawyers are charging hundreds of pounds simply for copying and pasting documents, so I hope to one day replace them.”

The bot is trying to solve an artificial problem, one which is in fact symptomatic of deeper, more intractable conditions, so even if it succeeds in the short term it will be undermined by the same factors that created the situation. If there is a gap to fill, it is for legal advisers or immigration caseworkers, who should be available to guide migrants through the process of seeking asylum and asylum support. There’s no denying that a drastic shortage of decent advice for asylum seekers exists, but that shortage has been created by the withdrawal of government funding from legal services and from the UK Home Office (where asylum cases get decided). While this can be seen as part of the UK “austerity” programme, these areas have been targeted in order to demonstrate a commitment to cutting the state resources spent on migrants. So if the bot short-circuits (or “disrupts”) the difficulties asylum seekers face when making claims, the same factors that created those difficulties will ensure that any benefit is short-lived. Refugees are pawns in this game, and by failing to challenge the deeper structural problems the bot just perpetuates the game rather than creating actual improvement.

Asylum seekers in the UK, or people who might try to use the app from abroad, will be submitting highly sensitive personal information to the bot. I assume that the tech community can work through the many potential data protection pitfalls, such as failing to explain which of the data being gathered is subject to the Act or how it is being handled, failing to make a declaration or register with the UK ICO, or storing data outside the EU. Since these are well-known tech issues, I will focus on the issues that relate to how this could play out for asylum seekers.
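I’m not in a position to audit anyone’s code from the outside, but for illustration, here is a minimal sketch (in Python, with hypothetical names of my own; nothing in it is taken from the actual bot) of the sort of baseline handling I would want to see before a chatbot touches this kind of material: sensitive answers encrypted at rest and kept out of logs.

```python
# Minimal sketch only: hypothetical names, not DoNotPay's implementation.
# Requires the third-party "cryptography" package (pip install cryptography).
import json
import logging

from cryptography.fernet import Fernet

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("intake")

# Fields whose values should never appear in logs or plain-text storage.
SENSITIVE_FIELDS = {"full_name", "date_of_birth", "country_of_origin", "persecution_account"}


class EncryptedIntakeStore:
    """Encrypts sensitive answers at rest and keeps them out of log output."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)
        self._records: dict[str, bytes] = {}  # stand-in for an EU-hosted datastore

    def save(self, user_id: str, answers: dict) -> None:
        # Encrypt the whole record before it touches storage.
        blob = self._fernet.encrypt(json.dumps(answers).encode("utf-8"))
        self._records[user_id] = blob
        # Log only non-sensitive metadata, never the answers themselves.
        redacted = {k: ("<redacted>" if k in SENSITIVE_FIELDS else v) for k, v in answers.items()}
        log.info("stored intake for %s: %s", user_id, redacted)

    def load(self, user_id: str) -> dict:
        return json.loads(self._fernet.decrypt(self._records[user_id]))


if __name__ == "__main__":
    store = EncryptedIntakeStore(Fernet.generate_key())
    store.save("user-123", {"full_name": "A. Example", "country_of_origin": "X", "preferred_language": "en"})
    print(store.load("user-123"))
```

None of this is exotic, which is rather the point: it just needs to be thought about, and stated, up front.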

It gets worse. That was all quite theoretical; there are much more immediate risks for people using the bot. The major hazard for potential claimants in the UK is that they fit their situation into the script that the bot uses, complete their application form (which seems to have been downgraded from an asylum application to an asylum support application, apparently as an afterthought by the creator, who might also have benefited from some timely advice from an expert), and submit it to the UK government. There are no second chances in this process: everything submitted to the Home Office will be scrutinised for credibility, and anything in the story that can be shown to be conflicting or contradictory will be held against the claimant.
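To make the “script” problem concrete, here is a purely illustrative fragment (again with made-up fields and options of my own; I have no sight of the real bot’s code) of what a rigid intake flow looks like, and how it flattens anything it did not anticipate:

```python
# Illustrative sketch only (assumed for this post, not taken from the actual bot):
# a rigid question script forces a messy, particular life story into a handful of
# pre-written categories, and whatever comes out is what ends up in front of the
# Home Office.
SCRIPT = {
    # field: (question asked by the bot, options the script understands)
    "reason": ("Why did you leave your country?",
               ["war", "political persecution", "religion", "other"]),
    "housing": ("Do you have anywhere to live in the UK?",
                ["yes", "no", "other"]),
}


def fill_form(answers: dict) -> dict:
    """Map free-form circumstances onto the script's fixed options."""
    form = {}
    for field, (_question, options) in SCRIPT.items():
        answer = answers.get(field, "").strip().lower()
        # Anything the script did not anticipate collapses into "other",
        # exactly the kind of flattening that can later look inconsistent
        # next to a fuller account of the same events.
        form[field] = answer if answer in options else "other"
    return form


if __name__ == "__main__":
    print(fill_form({
        "reason": "fled after my family was targeted by a militia",
        "housing": "staying with a cousin, but only for two weeks",
    }))
    # -> {'reason': 'other', 'housing': 'other'}
```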

The UK context is Theresa May’s “Hostile Environment”, a set of government policies and practices deliberately designed to make life hard for immigrants (initiated by the current PM in her former role as Home Secretary, with responsibility for immigration). Any asylum claimant who submits incorrect information may face a number of very negative consequences: they may be thrown out of or disqualified from their accommodation and have their support stopped (since the ’90s, asylum seekers’ support has been handled by a separate state agency where it is “ghettoised”, with even worse conditions than mainstream state benefits and housing).

Worse still, since the asylum support application is directly connected to the asylum application, they may be thrown into indefinite detention, or, worse even than that, deported back to the country they are claiming asylum from, where they may face ill-treatment or even death.

This may all sound hyperbolic, but it is an extremely harsh, callous and unsympathetic system, complemented by shocking levels of indifference and inefficiency. The “culture of disbelief” in the treatment of asylum claims has been well documented for many years. I won’t go into that either, since refugee advocates can do so ably.

As someone who’s been around for a while, I find it frustrating that, over a year after the shocking spectacle of poor Aylan Kurdi’s body on the beach galvanised startup culture to work on refugee issues, so little seems to have been achieved. There are some examples of grassroots tech projects that have had a real effect (such as the brilliant phone cards for refugees in Calais Facebook group), but these seem to have come from refugee activists, not tech people (this is where it gets a bit rantier, and critical of the sector more broadly).

Instead there seems to have been a massive amount of wasted energy, enthusiasm and goodwill on planning projects that never got going, and endless duplication of obvious bright ideas (e.g. multiple failed “Airbnb for refugees” hosting projects), but no coordination or focus. This reflects a lack of leadership from the refugee sector too, which has shown a characteristic lack of agility in seizing opportunities to help on the ground or take the initiative in setting the public agenda. It’s sickening to see tweets of “NGO professionals” showing off their logos and hashtags at demonstrations, or worse still in front of crowds of refugees, and it really reveals the lack of viability of a lot of online activism. I would be glad to be proved wrong on this, but I’ve not been able to find reports or stats that dispel this negative impression, despite masses of online activity: campaigns, hackathons, conferences and meetups.

I have some simple questions about the bot project, some of which are the absolute basic criteria for projects in the voluntary sector that assist refugees.

  • Have any refugees contributed or been consulted?
  • Have any refugee agencies advised you?
  • What about the OISC? Is this “legal advice” or not?
  • Who is the data controller for the purposes of data protection?
  • Is there a plan, or just a load of tech??
  • Why on earth did you block me on twitter???

If you can’t or would rather not answer these questions, then I beg you to take the bot down.

Just to reiterate: the parking fine bot is great; the asylum bot is a potential disaster. You’re welcome to give me a shout, Joshua, if you’d like to talk it over.