At Doteveryone, we believe there’s great value in responsible technology development — both for individual technology users and for society overall. And since we started thinking about how to make creating and maintaining responsible technology easier, we’ve heard from organisations of all kinds who agree with us. (If you’re from one of them, see below for more about our new community Slack.)
Today we’re launching our Trustworthy Tech Partners prototyping programme with cohorts from Bethnal Green Ventures, B Corps and Civic Hall. Together we’ll explore how our 10 aspects of responsible technology fit with their organisations’ real experiences and plans, and work out how easy it is for companies to provide the kind of evidence that would demonstrate trustworthiness.
To start with, the prototype will focus on ways organisations can demonstrate their responsibility and ethics to current and potential customers. Through a series of weekly exercises, our Trustworthy Tech Partners will gather evidence of their current practices (and future plans!) which can be shared to demonstrate how honest, reliable and competent they are. They’ll also reflect on how well they’re doing as providers of ethical and responsible technology, and whether there are aspects of responsibility they (or we!) haven’t yet thought about.
Once the exercises are complete, Doteveryone will review the evidence contributed by each partner. We’ll look at:
1. The level of ethical and trustworthy performance overall
2. The evidence to support this performance, and how useful it is for demonstrating reliability, honesty and competence to someone outside the organisation
3. The effort taken to produce the evidence, and what the experience of gathering it was like (as a prototype of what taking part in a future trustmark process might involve).
We think that the big picture, cutting across both technical and business choices, will let us see more clearly how the partners are being responsible and ethical (or not!), as well as how they are making choices and considering risks (in other words, how reliable and competent they are).
This isn’t just about specific issues like what personal information is gathered by a service and how it is used. It’s about how all the small choices add up, affecting what happens now and in the future. Who gets to see that personal information? Are those people appropriately trained? Are they paid a reasonable wage? Could the company be bought tomorrow, totally changing the business purpose and how information is used? If we can answer questions like this, can we build up a picture of an organisation and the products and services it makes — and would that picture help customers make tech choices?
In particular we’re interested in the tough tradeoffs organisations often have to make, and how they can explain these to show their practices and intent to those who care. One example: balancing how to meet regulations that require users to be shown a lot of legal conditions with offering a usable and accessible experience to mobile users on a small screen. We want to understand more of these tradeoffs, and to spot challenges that stop organisations acting as responsibly as they would like to, to see if there might be ways we could help unblock things.
We want to find out:
- Do the 10 aspects of responsible technology we’ve outlined align with organisations’ own thinking about ethics and trustworthiness? What about their experiences of what their customers expect?
- Is producing documentation of this sort useful for companies in demonstrating their trustworthiness to current or potential consumers?
- Does this process help them learn about their ethical performance, or spot ways to act more responsibly?
- Does it take a reasonable and proportionate effort to document things like this? What support is needed?
- How does this idea for a trustmark process compare to other marks, standards, certifications and pledges which are used in the technology sector, in ethical business or in related areas? Is there overlap with other standards?
As a result of this work, we’ll start to figure out whether this sort of open evidence base might be part of a more accountable future for technology. What happens when companies set out what they do and explain their decisions? What might be possible if others can see what is happening ‘under the hood’? (Of course, there are many more considerations than we’re able to explore in a single prototype — for example, how to interpret vast amounts of data or how to validate the information people submit.)
It’s not always easy to make ethical choices when you’re designing, delivering and maintaining technology. (Paying good wages can cost more; not spamming your users with requests to share can limit virality.) But just because it’s hard doesn’t mean it isn’t worth doing. Quite the opposite: society depends on key technology being trusted and trustworthy. We believe that being responsible can unlock value in other ways too. Doteveryone is going to learn more about this with our Trustworthy Tech Partners in the coming weeks, as we explore our ideas for how organisations can demonstrate their responsible and ethical practices.
If you are working on trustworthy tech too, let us know so we can work together. Doteveryone has set up a Slack online chat space to help folks support each other in developing responsible technology — if you’d like to be part of this community, join the Trustworthy Tech Slack. (And if Slack doesn’t suit you, let us know what else might work instead.)