It was late on a warm Thursday night in February. I was stuck. Dim light glanced off six sets of hunched shoulders evenly spaced around the small kitchen table. We waited. They waited. I… well, I desperately tried to turn seven vowels into a viable play.
It was Scrabble. And it was life.
My Mom and I ended up winning that game, much to my sister’s chagrin, and I thought of it today after once again reading “The Extended Mind”, a 1998 paper by Andy Clark and David Chalmers. The authors present an idea that’s simultaneously strange and obvious: we use the environment around us to help ourselves think. In fact, the savvy thinker can actually enhance their mental performance using the environment. Anyone who’s ever used scrap paper for a math test or doodled on a napkin is familiar with the phenomenon. This is the central tenet of what Clark and Chalmers call “active externalism”. It’s why I pull out my phone with its handy calculator app instead of attempting long division in my head. So, too, the reason I shuffled my Scrabble tiles over, and over, and over, imploring those little wooden letters to reveal their secrets and help me crush my family.
A more recent piece by Joichi Ito of the MIT Media Lab builds on the ideas of extended cognition by describing an “Extended Intelligence”, which embeds human cognition in a massive, restless fabric of objects, entities and the relationships between them. What does that mean? It means that while we think of ourselves as special snowflakes, we must acknowledge the power of the storm. As we act in the network of the world we leverage other people and things to consume, process and relay information. An individual is capable of a lot, but many hands make light work, and those of us who understand that power can put it to use. Conceiving of intelligence as distributed, more a chorus than a solo, is manifesting changes in the technologies we produce. The much-hyped world of Artificial Intelligence is no exception.
There are two dominant schools of thought in the field. Artificial Intelligence, or AI, is the discipline most are familiar with. It pursues machines that replicate the workings of the brain. Intelligence Augmentation, on the other hand, is less of a household name. It argues for the effective use of technology to augment human intelligence. Should we design better computers that work like people? Or design them to work better for people? My own opinions fall in line with the latter. I want the things we create to help us be better at being ourselves, in a myriad of contexts, instead of helping us avoid those contexts. Rather than try to remove the human element, the systems and experiences I admire most facilitate, enhance and inspire their inhabitants.
Computing and Law
I work as a product designer at a startup building research tools for attorneys. Legal tech, as our space is called, is not a vacuum. There are dozens of companies trying to change legal practice in various ways. I chose to join Ravel because I think we share a uniquely humanistic perspective when it comes to the practical applications of machine learning and data. As the concept of AI has gone mainstream, a growing fear has emerged in conversations about the future of law. The concern is that we will soon reach a time when trials are decided entirely by AIs. Products on the market are already working toward a world of robot judges and digital courtrooms. For lawyers it’s a threat to their livelihood, akin to the soul-searching other industries are doing in the face of the tech war drums. For us, though, the would-be litigants, it has subtler implications.
Our legal system isn’t perfect, but it’s built around a core task of interpretation: a laying out of facts which are then considered by humans, who bring to bear the full sum of their life experiences in determining guilt or the lack thereof. It’s likely inevitable that certain legal matters will be good candidates for automation. Many more will not. The most important cases in law are those that set “precedent”, or guidelines for what to do the next time this happens. They are, by definition, a response to unprecedented events. And it is exactly this, the unprecedented, that our most advanced simulations of human intelligence struggle with.
Pioneering work is happening the world over to develop computers that can think and reason with the rapidity and fluidity we possess, but many practical and theoretical barriers stand between the state of the art and a nuanced thinking machine. Perhaps one day AI will be able to recognize and reason about the soup of values, exigencies and circumstances that make up a lawsuit, but in the meantime, here at Ravel, we’re working to extend the intelligence of the most powerful analysis-synthesis machine ever made: the human mind. Much of the way law is practiced, and has long been practiced, is based on anecdote and subjectivity. Our mission as an organization, and mine as a designer, is to build tools that lend professionals an objective, data-driven perspective without ignoring the humanity that is critical to legal analysis and advice.
How do we do it?
Doing legal research is hard work. I wrote a post about the work here. Lawyers are tasked with finding very specific pieces of language in a collection of text that’s longer than you could read in a lifetime. Not only that, but they have to make sure the information they find is relevant to the particulars of their own case and hasn’t been overturned by someone else. And then there’s the whole matter of determining how and when to present that information to their judge. I used to imagine that most lawyering happened as heated arguments in high-walled courtrooms. It turns out there’s a lot of homework, too, and designing for extended intelligence can make it less painful.
At Ravel we focus on three components of the research process. The first is finding things. We’re digitizing the Harvard Law Library’s collection of case law dating back to before the Constitution was signed. These texts are indexed and we apply machine learning techniques like topic modeling and named entity recognition to extract relevant information from them. Once the cases are sliced, diced and added to the system we give lawyers the ability to search and filter by a wide variety of attributes, making it easier and faster to find the material they’re looking for. We also include these extracts in the presentation of the case, making it easier to tell if it’s actually relevant to the work at hand.
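As a rough illustration of the first component, attribute-based filtering over extracted metadata can be sketched in a few lines of Python. The case records, attribute names and `search` helper below are hypothetical stand-ins for what topic modeling and entity extraction would produce, not Ravel’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified records: in practice these attributes would be
# produced by topic modeling and named entity recognition over full opinions.
@dataclass
class Case:
    name: str
    year: int
    court: str
    topics: set = field(default_factory=set)

cases = [
    Case("Case A", 1998, "9th Cir.", {"copyright"}),
    Case("Case B", 2004, "S.D.N.Y.", {"copyright", "fair use"}),
    Case("Case C", 2011, "9th Cir.", {"patents"}),
]

def search(cases, topic=None, court=None, after=None):
    """Filter cases by any combination of extracted attributes."""
    results = cases
    if topic is not None:
        results = [c for c in results if topic in c.topics]
    if court is not None:
        results = [c for c in results if c.court == court]
    if after is not None:
        results = [c for c in results if c.year >= after]
    return results

print([c.name for c in search(cases, topic="copyright")])
# → ['Case A', 'Case B']
```

The point isn’t the code; it’s that once the system has sliced a case into attributes, every attribute becomes a handle the researcher can grab.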
Second, we employ data visualization to help attorneys gain unique perspective on the issues they face. Cases cite one another to support and illustrate the arguments they make. This creates a very literal network of related documents. By presenting this network visually an attorney can rapidly identify important cases in unfamiliar domains and find less popular cases that would have gone unnoticed several pages down a traditional list. Visualization affords the ability to see, and viscerally experience, the relationships of legal opinions to one another, giving lawyers new means of identifying important material.
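In miniature, the intuition behind that network view can be captured by ranking cases on in-degree, that is, how many other opinions cite them. The edges below are toy data invented for the sketch; the real graph is built from citations parsed out of the opinions themselves.

```python
from collections import Counter

# Toy citation edges: (citing_case, cited_case).
citations = [
    ("B", "A"), ("C", "A"), ("D", "A"),
    ("C", "B"), ("D", "B"),
    ("D", "C"),
]

def rank_by_incoming_citations(edges):
    """Rank cases by in-degree: how often other cases cite them."""
    counts = Counter(cited for _, cited in edges)
    return counts.most_common()

print(rank_by_incoming_citations(citations))
# → [('A', 3), ('B', 2), ('C', 1)]
```

A production ranking would weight far more than raw counts (recency, depth of treatment, jurisdiction), but even this crude signal surfaces the cases a several-pages-deep list would bury.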
Third, we build analytic tools on top of these texts. We design and develop novel mechanisms for demonstrating the hidden patterns in citations, language, and the behaviors of the judiciary. By surfacing these patterns we create new windows into the world of law that inspire researchers to lines of inquiry they might never have found otherwise. Sometimes a slight nudge from data can spark a flurry of new questions. Isaac Asimov put it nicely: “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’”
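To make the judiciary-behavior idea concrete, here is one hedged sketch of the simplest possible pattern: how often each judge grants a given kind of motion. The records and the `grant_rates` helper are invented for illustration; real inputs would be parsed from dockets and opinions.

```python
from collections import defaultdict

# Hypothetical motion outcomes, one record per ruling.
rulings = [
    ("Judge X", "granted"), ("Judge X", "granted"), ("Judge X", "denied"),
    ("Judge Y", "granted"), ("Judge Y", "denied"), ("Judge Y", "denied"),
    ("Judge Y", "denied"),
]

def grant_rates(rulings):
    """Fraction of motions each judge granted."""
    totals, grants = defaultdict(int), defaultdict(int)
    for judge, outcome in rulings:
        totals[judge] += 1
        if outcome == "granted":
            grants[judge] += 1
    return {judge: grants[judge] / totals[judge] for judge in totals}

print(grant_rates(rulings))
```

An attorney who sees that one judge grants a motion at twice the rate of another doesn’t get an answer from that number; they get a “That’s funny…”, which is exactly the point.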
The design challenge at the heart of data products like ours is understanding what kind of work they should help people do. What are the various parts of a decision process? Which of those parts are well-suited to the strengths of computers, and which require the human touch? How can we design to facilitate a healthy relationship between the two? In the case of our work this means a reading experience that’s as important as our data quality, filters that are as robust as they are forgiving, and a hundred other balancing acts in service of the user experience. It also means knowing not to make decisions for our users when we don’t need to.
The best way to help people do research is to help them ask better questions. In a time when “insights” are everywhere, the role of knowledge workers is increasingly to separate signal from noise. Even today our data cups runneth over, and the tide is rising fast. I don’t know exactly what the future lawyer will look like, but smart money says the person who has learned to extend their intelligence, adapted to the great network and mastered their relationship with their tools will be most successful. Success will mean knowing when to shuffle your Scrabble tiles.