Legal Robot’s Commitment to Algorithmic Transparency

Dan Rubins
Legal Robot
Jan 13, 2017

Earlier today, we were glad to see the Association for Computing Machinery (ACM) release a statement on Algorithmic Transparency and Accountability. As a young AI company, we feel very strongly about this issue and believe it will only become more important as algorithms grow in their influence. ACM’s statement outlines seven principles: Awareness, Access and Redress, Accountability, Explanation, Data Provenance, Auditability, and Validation & Testing.

Algorithms of all kinds already impact our daily lives. We all know that credit scores are calculated with an algorithm, and thanks to recent news cycles, many more people know that social media feeds use algorithms to surface the posts you will have the strongest emotional response to. However, as algorithms become more advanced and more complex, they also become harder to explain. Is it easy to explain (without using calculus) how Legal Robot uses backpropagation to train its neural networks? No, but we don’t need to get all high and mighty about it either.
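
That difficulty is no accident: backpropagation is calculus, the chain rule applied over and over. For a rough flavor of the idea, here is a minimal, generic sketch of backpropagation on a single neuron. This is a textbook toy with made-up numbers, not Legal Robot’s production code or actual model:

```python
# A toy illustration of backpropagation: one neuron learning to output 1.0.
# Generic textbook sketch with illustrative values, not production code.
import math

w, b = 0.5, 0.0          # weight and bias, initialized arbitrarily
x, y_true = 2.0, 1.0     # a single made-up training example
lr = 0.1                 # learning rate

for step in range(100):
    # Forward pass: weighted input through a sigmoid activation.
    z = w * x + b
    y_pred = 1.0 / (1.0 + math.exp(-z))

    # Backward pass: the chain rule gives the gradient of the squared-error
    # loss with respect to each parameter (this is the "calculus" part).
    dloss_dy = 2.0 * (y_pred - y_true)
    dy_dz = y_pred * (1.0 - y_pred)       # derivative of the sigmoid
    dloss_dw = dloss_dy * dy_dz * x
    dloss_db = dloss_dy * dy_dz

    # Update: nudge each parameter downhill along its gradient.
    w -= lr * dloss_dw
    b -= lr * dloss_db

print(f"final prediction: {1.0 / (1.0 + math.exp(-(w * x + b))):.3f}")
```

A real network repeats this chain-rule bookkeeping across millions of parameters and many layers, which is exactly why a plain-language explanation takes real work.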

For us, committing to transparency and accountability in the algorithms we create means providing explanations of the algorithms themselves, but also of the inputs to those algorithms, the checks and balances on them, a way to question the results, and a way to make things right again.

Legal Robot now publicly makes these commitments, which we believe uphold ACM’s well-crafted principles. Furthermore, in each quarter’s transparency report, we will document our progress towards implementation. This will not be an overnight change; in fact, we are committing to a whole lot of effort here, but it is important. We will also share our experiences with other companies and institutions so they may do the same. Finally, we call on our fellow machine learning and legal tech companies to consider these principles and take action of their own.

Principles

1. Awareness: we will make the owners, designers, builders, users, and other stakeholders of our analytic systems aware of the possible biases involved in their design, implementation, and use, and of the potential harm that those biases can cause to individuals and society.

2. Access and Redress: we will adopt mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions.

3. Accountability: we will remain accountable to our users for decisions made by the algorithms they use, and demonstrate how those decisions are made, even when it is not feasible to explain in detail how the algorithms produce their results.

4. Explanation: we will produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made.

5. Data Provenance: we will provide a description of the way in which the training data was collected, along with an exploration of the potential biases induced by the human or algorithmic data-gathering process.

6. Auditability: we will record all models, algorithms, training and test data, and decisions, and keep them for a reasonable amount of time so they can be audited in cases where harm is suspected. However, we will not provide sensitive user information, such as decisions or other algorithmic output derived from private legal documents, to anyone but its owner (doing so would violate our privacy policy, terms of service, and ethics).

7. Validation and Testing: we will use rigorous methods to validate our models and document those methods and results. In particular, we will explore ways to conduct routine tests to assess whether a model generates discriminatory harm (a sketch of one such test follows below). We will publish a description of the methods and the results of such tests in each quarter’s transparency report.
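
To make that last commitment concrete, here is a minimal sketch of one routine check for discriminatory harm: the so-called 80% (disparate impact) rule, which compares a model’s favorable-outcome rates across groups. The groups, predictions, and threshold below are illustrative assumptions, not our actual test suite or data:

```python
# A minimal sketch of a routine discriminatory-harm check: compare a model's
# favorable-outcome rate across groups (the "80% rule" for disparate impact).
# Groups, predictions, and threshold here are illustrative assumptions only.
from collections import defaultdict

def disparate_impact(predictions, groups, threshold=0.8):
    """Return per-group favorable rates, the ratio of the lowest rate to
    the highest, and whether that ratio falls below the threshold."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred          # pred is 1 for a favorable outcome
        counts[group][1] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio < threshold

# Example: hypothetical model outputs for two demographic groups.
preds  = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, ratio, flagged = disparate_impact(preds, groups)
print(rates, f"ratio={ratio:.2f}", "FLAG" if flagged else "ok")
```

A check like this is deliberately simple; real validation would combine several fairness metrics with held-out test sets, but even a simple ratio run on every release can catch a model drifting toward unequal outcomes.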

We’re always happy to talk. If you have any questions or comments, let us know at hello@legalrobot.com.
