(Artificial) intelligence from the Legal Marketing Association conference

Artificial intelligence begins with natural intelligence…

I just returned from the annual LMA conference in Las Vegas where I joined a panel of industry experts to discuss artificial intelligence in law. It was a great event overall, and I have to thank my fellow panelists Elonide Semmes, Mark Greene, Craig Courter and Ryan McLead for their expertise and support.

AI in the legal industry is buzzing more than the racks of flashing CPUs in an IBM Watson data center. It’s not the first time I’ve seen this industry distracted by shiny things, but this time it’s different. We’re still figuring out what “the new normal” will be post-recession, new competition (from alternative legal service providers to technology startups) seems to emerge daily, buyers of legal services are becoming smarter and more informed, and the whole legal ecosystem is looking for “innovation”. Add in the amazing stories of recent AI achievements, coupled with the threat of robot lawyers, and we may have a perfect storm of technical proportions fueled by marketing-hype gusts of hot air.

So, what do you need to know to weather this storm? That was the question we set out to answer, and my fellow panelists and I broke the discussion down into topics: Mark defined AI, I discussed the vendor landscape, Craig shared a global law firm perspective, Ryan discussed expert systems and business models, and Elonide covered AI’s potential for legal marketers. I’m hoping LMA makes the session video available (I’m confirming that and will share if possible), but for now you can download our presentation materials here if interested.

It was a great discussion that explored many areas, and the audience asked some very good questions, including (and I summarize):

  1. Business model conflict? Ryan described two potential business models: the traditional firm model, where lawyers continue billing by the hour, and a new “software” model, where legal expertise is scaled beyond billable hours and “packaged” using technology such as expert systems. There were questions about this new model and how realistic it is for existing firms. The consensus was that the two models may be, for the most part, incompatible. While a select few firms (and, more frequently, alternative legal service providers) have tested new models and systems, the push-back from those firmly embedded in the traditional model should not be underestimated, and overcoming it requires new leadership, culture…and compensation structures.
  2. Putting the expert in expert systems. The panel was asked how much input and cooperation is required from subject matter experts (i.e., lawyers) to build legal expert systems. The panel agreed that this is a crucial step and that the amount of input depends on the complexity of the system being built. Perhaps a non-answer, but it did start an interesting follow-on discussion about the approaches legal process improvement (LPI) professionals use to analyze and optimize certain areas of legal work, and how similar they are to expert system construction. It’s clear to me that expert systems require “legal process automation” approaches that mirror current LPI or Lean Six Sigma methodologies, though the overall process and result are more technology driven. (For a rough sense of what an expert system looks like in practice, see the short sketch after this list.)
  3. Does size matter? There were interesting questions and comments regarding firm size vs. ability to use AI and similar technologies. Mark quoted “It’s no longer the big that eat the small, but rather the fast that eat the slow” and Craig commented that small firms may actually have an advantage when testing new data-driven technologies as they don’t face the same regulatory and privacy concerns as large, global firms such as Baker McKenzie. New technologies have leveled the playing field before, but I think AI is a catalyst for deeper change and reflection on business models and approaches that, frankly, have needed change for many years.
  4. One app or many? Craig suggested that firms may well end up with many different AI “apps” in use, each with very different functions. An audience member asked about the dangers of “siloed” applications. The panel’s response: separate, perhaps disconnected apps aren’t necessarily a problem, but disconnected data is. This was a theme repeated throughout the session: AI applications often rely on large amounts of quality data, and without it, their value is severely limited. Someone suggested that “AI is 90% data and 10% technology”. Good point.
  5. The legal AI landscape. I shared a “market map” of current AI vendors in the legal industry during my talk and received great feedback and questions following the session. So much so that I think it’s worth expanding on this in its own post. Stay tuned.
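
For readers wondering what “packaging expertise” as an expert system might actually look like, here is a minimal, purely illustrative sketch in Python. The questions, rules, and guidance below are invented for illustration; they are not drawn from the panel, any vendor’s product, or real legal advice.

```python
# A toy rule-based "intake" expert system (hypothetical example).
# Each Rule encodes a question a lawyer might ask and the guidance
# that follows from a "yes" answer; the engine simply walks the rules.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Rule:
    question: str                                # what the system asks the user
    applies: Callable[[Dict[str, bool]], bool]   # is the rule relevant, given answers so far?
    conclusion: str                              # guidance triggered by a "yes" answer


# Illustrative rules for a made-up NDA triage tool. A real system would be
# built with far more lawyer input, as the panel noted.
RULES: List[Rule] = [
    Rule(
        question="Does the deal involve sharing confidential information?",
        applies=lambda answers: True,
        conclusion="An NDA is likely needed.",
    ),
    Rule(
        question="Will confidential information flow in both directions?",
        applies=lambda answers: answers.get(
            "Does the deal involve sharing confidential information?", False
        ),
        conclusion="Consider a mutual NDA rather than a one-way NDA.",
    ),
]


def run_intake(answers: Dict[str, bool]) -> List[str]:
    """Apply each relevant rule and collect the guidance it triggers."""
    guidance = []
    for rule in RULES:
        if rule.applies(answers) and answers.get(rule.question, False):
            guidance.append(rule.conclusion)
    return guidance


if __name__ == "__main__":
    sample_answers = {
        "Does the deal involve sharing confidential information?": True,
        "Will confidential information flow in both directions?": True,
    }
    for line in run_intake(sample_answers):
        print(line)
```

Real systems are far richer than this, of course, but the core idea is the same: a lawyer’s questions and judgments are encoded as explicit rules that software can apply at scale, which is exactly the “software” model Ryan described.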