Innovator Q&A: Grassroot’s Luke Jordan
With personal and often sensitive user information on hand, how should crowdsourcing intermediaries, communication platforms, and other civic tech organisations be approaching the issue of user privacy?
Luke Jordan is the executive director of Grassroot, a civic tech organisation that describes itself as a platform for community organisers, activists, and social movements to organise their neighbours, working towards the vision of “a nation self-organising from the ground-up”. To support this, they create and deploy mobile communication tools that let people create groups, call meetings, take votes, and record decisions, even from a basic phone.
They have a total user base of 35 000, of whom over 4 000 are core users with deep and continuous engagement, all of them involved in social activism and, in some cases, protest, making privacy a primary concern. We spoke to Luke about the development of their privacy policies, as well as the strategy and thinking behind their approach to managing user privacy.
Q. What did you set out to do? How has that shifted since your start?
Grassroot deploys mobile tools for community organisers in marginalised communities. Our constituency is field-based or community-based organisations. They are our users, and we build tools that make it easier for them to do the routine drudge work of organising: calling meetings, recording decisions. Community leaders and local activists use our app to create groups, recruit people to those groups, send out meeting notices, record decisions and take votes. That’s what we set out to do. Obviously, along the way we’ve tweaked a lot. We’ve really tried to do user-focused design and iteration. We deploy a revision of the application about once every two weeks, including small tweaks and changes in wording; it is pretty much continuous. We are very sensitive to what users are concerned about.
Q. At what stage did you start thinking about privacy?
Privacy came up right at the beginning. Someone put it to us — and I think this is a good articulation of the issue — that we hold the social graph of a lot of community-based activists in the application, and that is exactly what the application requires in order to work. So obviously we have an obligation.
We were very sensitive to this from the beginning. It drove two major strategic decisions before we got to the policy, and I think those are more consequential. Anybody can write a policy, but it is when it impacts big strategies that it actually shows that it matters. One is that we are a non-profit company, rather than a for-profit, because we made a decision never to do ads; ads would require us to violate users’ privacy. The only way we could ensure that we would never do ads was to make sure we had no fiduciary duty as a board to do so, which meant we had to be a non-profit. The second major decision was to blind ourselves to any kind of logs or activity going through the platform.
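Jordan does not describe how this blinding is implemented. One common way to keep operational logs while hiding who did what is keyed one-way pseudonymisation of identifiers before they are written to a log. The sketch below is purely illustrative, not Grassroot’s actual code; the `blind` helper, the `LOG_PEPPER` secret, and the use of Python are all assumptions:

```python
import hashlib
import hmac
import logging

# Hypothetical secret key ("pepper"); in practice this would live in a
# secrets store, never in the logs themselves.
LOG_PEPPER = b"example-pepper-value"

def blind(identifier: str) -> str:
    """Return a one-way pseudonym for an identifier such as a phone number,
    so activity can be traced for debugging without revealing who acted."""
    digest = hmac.new(LOG_PEPPER, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("platform.activity")

def log_meeting_called(organiser_phone: str, group_id: str) -> None:
    # Operators can count events and correlate repeated activity, but cannot
    # recover the phone number or group from the log line alone.
    logger.info("meeting_called organiser=%s group=%s",
                blind(organiser_phone), blind(group_id))

log_meeting_called("+27821234567", "group-1234")
```

The point of keying the hash is that the same identifier always maps to the same pseudonym (so events can be correlated for debugging), while reversing a log line back to a real phone number is infeasible as long as the key stays out of the logs.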
Q. Why do you think more people in civic technology aren’t incorporating privacy thinking into strategy from the beginning?
I think most of the time people care about what matters in the moment. They are therefore reacting to things that are a pressure upon them. If you think about having six competing top priorities, I understand why privacy sometimes gets pushed back until something comes along that forces you to address it. Facebook’s development of a policy around when they disclose information to law enforcement is a good example of this. They had no policy until law enforcement came and said ‘we want your data’. They came up with an ad hoc policy in response, and their users pushed back, forcing them to come up with a proper policy.
The reality is that thinking about privacy is often reactive, and I don’t think that is confined to civic tech. If you look across government and the corporate world too, people most often come up with privacy policies on a reactive basis. Until it becomes embedded in the culture, it will remain a reactive thing.
Q. How demanding are your users about their own data and information privacy?
To be honest, our users often ask us to “violate” (in a sense) their privacy. Occasionally somebody will accidentally quit a group and call us up asking to be added back. According to our policy we can’t interfere in a group, because then we would see the membership roster of that group, and we intend not to do that. If we get a call from the group organisers asking us to facilitate this, we have to ask them to give us explicit permission to do so. Over time we have built ways to respond to requests like that without seeing anything. Users often assume we see more than we do. They will say something like ‘oh, you will have seen we had a meeting on the weekend’, but we actually don’t see any of that.
Having said that, we do get asked about privacy by users when they first start using the platform. It is important that we have a good answer at that point — and we do. They do ask ‘how do we know we can trust you, aren’t you looking at all of our data?’. Once we explain the policy, encryption and so on, they are satisfied.
Q. Is there an awareness or concern about privacy and tracking from users?
I do think there is a clear understanding from users that you get tracked through your data. I think with our users, there is less of a feeling of safety as a default. Rather than thinking ‘I am safe unless someone violates my privacy’, they might be thinking ‘I’m not safe’.
Take the example of Abahlali baseMjondolo, a major shack dwellers’ movement in Durban: some of the local activists who work for them have been assassinated. They understand the threats they face, and they believe that if someone wants to find them, they will. People like this exist in near-permanent insecurity. So they are concerned, but more about being betrayed than about what is happening on their phones.
The one thing we do tell them is that SMSs are insecure. We also tell them that if they need to call a meeting or take a decision that could put them at risk, it is best not to use us. And that is part of the trust we have built with them: being honest about what the platform does and what it doesn’t. Use it up to this point; beyond that, don’t.
Q. How do you handle language policy in relation to privacy, within the context of South Africa’s eleven official languages?
That’s an example of reactivity within our own organisation. We have not yet been asked for the privacy policy in languages other than English. All of our field work is conducted in whatever the local language is. So typically, if someone asks about privacy, it will be explained by the field facilitator in the language of the engagement.
Part of the reasoning here is that our website is not typically the point of engagement with our users, beyond the blog or news section where we post a lot of community stories. The About page, for example, is often the fourth or fifth channel through which people find out about us. Our users are not randomly surfing the website. They are finding out about us from their neighbours and from our field workers. They are accessing us through USSD, or through Facebook and Twitter. The day someone says ‘I really want to read your privacy policy in isiZulu’, we will translate it that day.