AI is the hot topic in digital. It has driven the rise of chatbots in everyday life, from contesting parking tickets and finding new recipes, to the announcement that IBM Watson can diagnose cancers better than any human doctor. Some argue that all this excitement around Artificial Intelligence is just another hype cycle, but I’m confident that the current surge of interest is more than hype and will only grow. Why? Because we are seeing AI deliver tangible, reliable benefits to business that it simply has not delivered before.
I have seen unprecedented interest in and demand for AI services as businesses realise the value and efficiencies they can bring. Businesses are investing more money than ever before in AI technologies, and this investment will only increase as the technologies evolve and learn.
The key reason for this surge in interest is that AI has finally reached the point where it can be applied to a wide range of use cases, and companies want to get ahead of the game so they can reap the benefits as soon as possible. Whether it’s easing the burden of governance and compliance obligations with AI decision-support tools modelled on the knowledge of experts, or developing a chatbot to answer complex customer questions, Artificial Intelligence can be applied across all industries, and the legal profession is no exception. Using AI to determine whether a case should be taken on is now commonplace in large law firms, and it was recently announced that AI has been used to predict decisions of the European Court of Human Rights with 79% accuracy.
I went to a talk on the impact that Artificial Intelligence will have across industries, and the majority of attendees were surprised at how far AI has come and at its range of uses: from beating a human at the ancient game of Go and winning Jeopardy, to analysing enormous amounts of information to make recommendations and harnessing sentiment analysis to understand users’ reactions. Whether this technology is being used to win intellectual games or to recommend treatments to doctors, there is a need to regulate it and to set the parameters for how we should work with Artificial Intelligence. This is why I see law playing a much larger part in the development and implementation of Artificial Intelligence over the next year.
A key concern about the use of technology in law is the assumption that judges understand the software that is increasingly crucial to court cases. Historically, however, judges have relied predominantly on the outcomes of external reviews and audits as evidence, rather than on understanding and analysing how technologies were developed and the impact they can have once implemented. This highlights a real need for the legal system to be brought up to date with the use and regulation of technology, and AI is no exception. How prepared is our legal system for such advanced technology?
When we talk about Artificial Intelligence, the topic of driverless cars is unavoidable. They’re the most publicised use of AI in the media, and they hold a certain fascination in part because they are seen as the ultimate test of the limitations of AI. You will have heard that the AI system in a driverless car must learn to make split-second decisions when the car is placed in a dangerous situation: will it hit the bus in front, or the elderly woman walking on the pavement? If a child runs into the road, will it swerve into a wall and risk the lives of the family inside? These are moral dilemmas that any of us would struggle to answer, yet we expect a machine to calculate the best outcome. Before long, cases will be brought against companies developing driverless cars to assess whether the AI system made the right decision in these scenarios, or whether the training data led to a faulty output. AI is posing problems by introducing concepts that current law does not cover.
The legal system therefore needs to evolve to cover these concepts, and we need confidence that judges understand the technologies involved and are ready to tackle cases where morality and technology overlap. Google, Facebook and Amazon recently announced that they have formed a council to formulate ethical rules governing how robots and computer programs should behave in the future, but why are technology providers having to step into this role? Surely our legal system should lead the way in regulating Artificial Intelligence: clearly defined parameters would enable companies to be held accountable for how they develop and use artificial intelligence systems.
This in turn would prompt a change in the public perception of AI. A key issue we encounter when speaking to businesses about Artificial Intelligence is their perception of it: they are either extremely cynical or have extremely high expectations of what it can do. This is partly due to the media’s overblown coverage of Artificial Intelligence, whether it’s a TV show in which AIs become sentient beings or the latest announcement that AI has outperformed a human. Alarm about General AI, amplified by the media, is creating an atmosphere of fear around AI and the role it will play in the future. Establishing regulations for Artificial Intelligence would help calm this nervousness, as it would send a clear message that controls are in place and that companies will be held accountable to them.
This in turn would give the industry the opportunity to propel the message that AI is a powerful tool that should be used to help humans make decisions and do their jobs even better, not to replace them. There is often trepidation when talking about AI, as people are wary of its impact on jobs traditionally performed by humans. AIs have different strengths and capabilities to their human supervisors, and it is the synergy between human and AI that is most exciting. We need to make the most of the opportunity AI presents and use it to complement and inform our human actions, rather than empowering AI to take those actions on our behalf. Within the legal profession, AI can be used to automate tasks, streamline processes, facilitate and expedite research, flag relevant precedents, and answer complex regulatory questions immediately, often with greater accuracy. Such tools help free up lawyers’ time for more complex, bespoke legal work. For example, Kim, Riverview’s virtual assistant, is designed to help legal teams make better and quicker decisions, such as suggesting the best order in which to renegotiate a series of corporate contracts. In such examples AI complements human intellect and decision-making: it streamlines arduous, time-consuming processes by analysing the enormous amount of available data and ensures that all relevant information is presented, allowing legal teams to make better decisions, faster.
There are examples of AI being used effectively and increasingly within law firms, but the entire industry needs to evaluate the impact that AI will have on every aspect of our lives, and set the precedent for how it should be regulated and monitored, or we risk the law and those in the legal profession trailing even further behind technological advances. AI is being implemented across industries to better inform human decisions and customer service, and instead of fearing what this could mean, lawyers should embrace the speed and accuracy that Artificial Intelligence offers. After all, AI cannot develop creative legal arguments, so expert legal intervention by compassionate lawyers will still be needed.
Blog post written for Matter AI in November 2016.