Virtual lawyers: the future of the legal sector with ChatGPT

Springbok AI · Jan 24, 2023 · 9 min read
Generated using OpenAI’s DALL·E 2

OpenAI opened the ChatGPT beta in late November 2022, attracting a million users within its first five days. For context, that was reportedly the fastest user acquisition of any product or platform in the history of the internet!

A long-standing goal of AI research is to create systems that can replicate human capabilities, streamlining laborious processes and maximising output. Following the release of ChatGPT, there has been speculation in professional circles as to whether OpenAI has made a giant leap that could even threaten jobs in specialised industries such as the legal sector.

Below, we share our assessment of the anticipated risks, opportunities and overall impact of ChatGPT within the legal sector: virtual lawyers, dubious or potentially misleading legal advice, and heightened privacy risks. We teamed up with legal professionals from industry leaders Sheridans, and legal charity Access Social Care (ASC), as well as barristers and academics, to share our insights. This article is the second in our ChatGPT series.

Why lawyers won’t be replaced with bots — yet!

With the continuous progress of AI development, the term “virtual lawyer” may be set to take on a new meaning. Currently, it refers to a solicitor who works remotely, with the widespread adoption of AI still appearing to be a distant possibility.

However, in the post-ChatGPT era, the term may begin to refer less to human lawyers working remotely, and more to computer programs — efficient, cost-effective, and reliable, right? Is this what the future holds?

In short, probably not. This answer may come as a surprise. To understand why, let’s dig into the inherent risks associated with open-domain generative models like ChatGPT and its predecessors.

Alongside risks with the technology itself, there are also risks associated with the licensing for ChatGPT. As one of our own commercial solicitors routinely reminds us: “always review the impact of the vendor’s [in this case, OpenAI’s] terms of service!”

In our view, the most significant limitations of ChatGPT revolve around the following four points:

  1. ChatGPT’s out-of-date and often incorrect information base
  2. ChatGPT is limited in its strategic and creative output
  3. ChatGPT has a limited capacity to retain context
  4. OpenAI logs all responses for future training, raising data security concerns

Let’s examine these points one by one. Then, we will look at how law firms can leverage ChatGPT today to enhance conventional chatbots and certain lines of work.

1. ChatGPT’s out-of-date and often incorrect information base

The legal field is constantly evolving, with laws and regulations in a constant state of flux. However, ChatGPT, in its current form, struggles to keep pace with these changes.

One common scenario is lawyers or their clients seeking information on recently amended legislation. For example, the UK has made significant changes to its legislation since Brexit, and is preparing to sunset most retained EU law by the end of 2023.

This problem stems from the fact that ChatGPT lacks a strong temporal sense and is prone to providing outdated information. Combined with the fact that its training data only extends to 2021, this means that, even when provided with accurate information about recent regulations, it can still supply out-of-date answers.

A risk directly adjacent to out-of-date information is that ChatGPT often produces wholly incorrect information dressed up as plausible-sounding prose.

The wider internet on which ChatGPT was trained is a mecca for self-proclaimed experts and self-published authors, so the training data is not always credible.

ChatGPT currently lacks the capability to provide references and citations for its training data, and cannot even point you to the source of its information via a link. This means users have no guarantee of the trustworthiness of the output.

As Antonia Gold, a Partner at Sheridans, explains:

“Lawyers owe their clients a high standard of care, and one key pillar of that is to give sound legal advice. While ChatGPT continues to (sometimes) respond with incorrect answers, its viable client-facing use-cases would be limited.”

Information can look very accurate when it is not: ChatGPT can quote legal cases that don’t exist and explain established legal doctrines and court processes incorrectly.

Kari Gerstheimer, CEO at Access Social Care, Springbok’s long-standing AI partner, came to a similar conclusion when evaluating ChatGPT for her product and her charity’s legal practice: “ChatGPT’s answers sound good but are far too general at the moment. We would need a lot of resources to train the content to be tailored to our context and audience, and to get things right.”

2. ChatGPT is limited in its strategic and creative output

The allure of ChatGPT is the speed at which it’s able to create *seemingly* unique and tailored responses.

However, it’s easy to conflate seamless synthesis of ideas and phrases (ChatGPT’s raison d’être) with strategic or creative thought.

To be clear: responses from Large Language Models (LLMs) do not materialise from imagination, but from their training data.

One objective standard for this is the Lovelace test for creativity, which ChatGPT does not pass: its performance is not “beyond the intent or explanation of its programmer”.

Usman Roohani, a Commercial Barrister at 4 New Square, emphasises the imperative nature of the human touch in his field:

“The ability to leverage this technology within the practice of commercial law is still, in my view, a very long way off. The work of a commercial lawyer, in any field, is necessarily reliant on a depth of human experience which is learned in practice over years (and decades).”

Antonia Gold from Sheridans, a Springbok legal partner, shares a similar perspective:

“Our legal work at Sheridans specialises in strategic advice rather than busy execution work. There is often a lot of nuance, and we advise on the associated client-specific risks in a commercial context — we won’t be relying on AI to replace that anytime soon.”

3. ChatGPT has limited capacity to retain context

While our faith in our legal representatives rests on their experience and their ability to carry a case through to the end, our confidence in ChatGPT dwindles when we realise that it cannot do the same. ChatGPT does have some “memory” of the conversation so far, but this memory has a hard stop at around 1,500 words.

This has several implications. The word limit may significantly affect a lawyer’s ability to provide accurate and complete legal advice when using ChatGPT. For example, if a client asks a detailed question about a specific legal issue and the input exceeds 1,500 words, the model will not have access to all the information provided and will produce a convincing-sounding but incomplete answer.
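To make this limit concrete, here is a minimal sketch of a pre-flight length check, assuming OpenAI’s open-source tiktoken tokeniser and an approximate 4,000-token budget that must cover both the prompt and the model’s reply (roughly consistent with the ~1,500-word practical limit above). Both figures are estimates, not documented guarantees.

```python
# Rough pre-flight check for ChatGPT's context limit. Assumes OpenAI's
# open-source tiktoken tokeniser; the 4,000-token budget is an estimate
# that must cover both the prompt and the model's reply.
import tiktoken

CONTEXT_BUDGET = 4000  # approximate total tokens: prompt + reply

def fits_in_context(prompt: str, reply_allowance: int = 1500) -> bool:
    """Return True if the prompt leaves room for a reply of `reply_allowance` tokens."""
    encoding = tiktoken.get_encoding("cl100k_base")
    prompt_tokens = len(encoding.encode(prompt))
    return prompt_tokens + reply_allowance <= CONTEXT_BUDGET

client_brief = "The client seeks advice on a boundary dispute... " * 300
if not fits_in_context(client_brief):
    print("Too long: material beyond the window will be silently ignored.")
```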

4. OpenAI logs all responses for future training, raising data security concerns

OpenAI’s licensing policies and operating practices raise serious data security concerns, which you should be aware of before sharing any content with OpenAI’s ChatGPT.

All text sent to ChatGPT is retained in clear text on OpenAI’s (presumably US) servers, which might not have the same level of security as your firm’s systems. In other words, any text you insert into ChatGPT can (and likely will) be used and retained by OpenAI.

There is a risk that this data could be accessed by unauthorised parties, as the input data used to generate text is stored temporarily in the model’s memory. Therefore, sharing any commercially sensitive, personally identifiable (PII) or client-privileged information with OpenAI as part of ChatGPT prompts presents a major risk to data privacy, data security and GDPR compliance.

Until properly addressed, we realistically see data security as the main reason why ChatGPT should be used with caution in the legal industry. In practice, to use ChatGPT securely, law firms have two options:

  • Limit use cases to the sharing of only non-sensitive information (a minimal pre-filter of this kind is sketched below)
  • Work with NLP experts to develop tailored solutions that enable the execution of their desired use case(s)
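To illustrate the first option, here is a minimal sketch of a pre-filter that blocks obviously sensitive text before it ever reaches an external API. The patterns are illustrative placeholders only; a production deployment would use dedicated DLP and PII-detection tooling rather than a handful of regexes.

```python
# Minimal sketch of option one: refuse to send prompts that match obvious
# sensitive-data patterns. Illustrative only; a real deployment needs proper
# DLP/PII tooling, not a handful of regexes.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"(?:\+44|\b0)[\d ]{9,12}\d"),
    "ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),  # UK National Insurance
}

def safe_to_send(text: str) -> bool:
    """Return True only if no sensitive-data pattern matches."""
    return not any(p.search(text) for p in SENSITIVE_PATTERNS.values())

prompt = "Draft a LinkedIn post announcing our new technology-law practice."
if not safe_to_send(prompt):
    raise ValueError("Prompt appears to contain sensitive data; human review required.")
# ...otherwise, forward the prompt to the API
```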

Despite the current challenges, the use of ChatGPT technology holds immense potential for the legal industry. Through careful implementation and strategic direction, it has the ability to revolutionise various aspects of the legal sector. In the next section, we will delve into specific examples of how law professionals can apply this technology today.

How can my law firm leverage ChatGPT?

There are many opportunities to start preparing your firm for the release of ChatGPT Professional later in 2023. Here, our Founder & CEO Victoria Albrecht outlines what the service might look like.

For now, imagine ChatGPT as a talented intern with excellent copywriting skills. The two questions to get started with when thinking about use cases while managing risk are:

  1. In which divisions or sub-divisions of our firm, and for which specific tasks, are we currently spending a lot of resources on writing? (Any kind of writing!)
  2. Which of those tasks and topics do not require access to any confidential information?

Here are some low-hanging-fruit application areas (a code sketch follows the list):

  • Creating drafts for content marketing: LinkedIn posts, client newsletters, internal newsletters, drafts for blogs
  • Drafting first versions of legal letters, contract clauses, and emails
  • Summarising long documents and extracting keywords
  • Explaining complex legal concepts in layman’s terms
  • Improving existing chatbots or supercharging new chatbot development
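As a concrete starting point, here is a hedged sketch of the “talented intern” workflow in Python. At the time of writing, ChatGPT itself has no public API, so the sketch uses OpenAI’s text-davinci-003 completion endpoint via the openai library as a stand-in; the prompt and parameters are illustrative, and the output is only ever a draft for human review.

```python
# Sketch of the "talented intern" workflow: ask a model for a first draft of
# non-sensitive copy. ChatGPT has no public API at the time of writing, so
# the text-davinci-003 completion endpoint serves as a stand-in.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; load from a secret store in practice

def draft(instruction: str, max_tokens: int = 400) -> str:
    """Return a first draft; a human must review and edit it before any use."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=instruction,
        max_tokens=max_tokens,
        temperature=0.7,  # some variety, since this is creative copy
    )
    return response["choices"][0]["text"].strip()

print(draft(
    "Draft a short, friendly internal newsletter item announcing that our "
    "firm's knowledge team has published a new guide to contract playbooks."
))
```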

Why human-in-the-loop is crucial in the legal sector

One guiding principle has become clear for utilising ChatGPT in any sector that handles sensitive information, such as the legal sector:

Every implementation of ChatGPT tooling must have a human-in-the-loop at every stage: the process of combining machine and human intelligence to obtain the best results in the long term.

This process should include a human reviewing content fed into ChatGPT to ensure that it does not contain sensitive information that should not be shared with OpenAI, plus a human review of content outputted by ChatGPT to ensure accuracy and minimise exposure to risk.

Notice that we used the word “drafting” in the previous section. Just as you would review your interns’ drafted client emails or legal letters, anything created by ChatGPT has to be carefully reviewed and edited too.
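Put end to end, the workflow looks something like the sketch below: one human checkpoint before a prompt leaves the firm, and another before a generated draft is used. The human_approves() function is a hypothetical placeholder for whatever sign-off process your firm operates, and generate could be the draft() helper sketched earlier.

```python
# Sketch of the two human checkpoints: review the input before it is sent,
# and review the output before it is used. human_approves() is a placeholder
# for your firm's actual sign-off process.
from typing import Callable, Optional

def human_approves(text: str, stage: str) -> bool:
    """Placeholder: route `text` to a qualified reviewer and await sign-off."""
    answer = input(f"[{stage}] Approve the following?\n{text}\n(y/n): ")
    return answer.strip().lower() == "y"

def reviewed_generation(prompt: str, generate: Callable[[str], str]) -> Optional[str]:
    if not human_approves(prompt, stage="input review"):
        return None  # prompt may contain sensitive material; do not send it
    draft_text = generate(prompt)  # e.g. the draft() helper sketched earlier
    if not human_approves(draft_text, stage="output review"):
        return None  # inaccurate or risky output; revise the prompt or discard
    return draft_text
```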

What will happen to conventional chatbots?

For many years, law firms have been implementing chatbots as a way to improve both the client and the employee experience. Hogan Lovells was an early adopter in this space.

These conventional chatbots were built with frameworks like Dialogflow (fairly simplistic) or Rasa (more advanced, conversational), often by NLP-specialised teams like ours or, in some exceptional cases like Mishcon de Reya, in-house tech teams.

In working with law firms over the years, we have observed a consistent emphasis on maintaining control and transparency over three important components:

  1. The model input, specifically the training data. This is all the more important if the training data constitutes firm-internal IP.
  2. The chatbot output, which includes the chatbot’s responses to the user.
  3. Data security: company policy frequently dictates that chatbots be hosted on-premises.

Given the highly regulated nature of the legal industry, these three priorities are likely to remain in place for the foreseeable future. As such, conventional chatbot technology will continue to play a significant role in creating high-quality chatbots, particularly those that retrieve information from a central knowledge base.

Despite not being a full replacement for conventional chatbots, ChatGPT will provide ample opportunities to streamline the process of creating them, especially regarding:

  1. Content generation: ChatGPT can streamline the process of drafting chatbot copy and generating baseline training data (sketched below).
  2. Chatbot testing: ChatGPT can be used to emulate users for testing purposes, reducing the reliance on the availability of human testers.
  3. Information extraction: ChatGPT can help summarise and format information from central knowledge bases appropriately.
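As an example of the first point, here is a hedged sketch that uses a generative model to bootstrap paraphrased training utterances for a conventional chatbot intent, emitted roughly in Rasa’s NLU training-data format. It reuses the hypothetical draft() helper from earlier; the intent name and seed phrase are invented for illustration, and every generated utterance would still need human review before use.

```python
# Sketch of point 1: bootstrap paraphrased training utterances for a
# conventional chatbot intent, formatted roughly as Rasa NLU YAML.
# Reuses the hypothetical draft() helper; intent and seed are invented.
def bootstrap_intent_examples(intent: str, seed: str, n: int = 10) -> str:
    prompt = (
        f"Write {n} varied ways a law firm's client might phrase this "
        f"request, one per line:\n{seed}"
    )
    utterances = [line.strip(" -") for line in draft(prompt).splitlines() if line.strip()]
    examples = "\n".join(f"    - {u}" for u in utterances)
    return f"nlu:\n- intent: {intent}\n  examples: |\n{examples}"

print(bootstrap_intent_examples(
    "book_consultation",
    "I'd like to arrange an initial consultation about an employment dispute.",
))
```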

Our future predictions

As ChatGPT becomes more widely available, regulation will require law firms (and other regulated industries) to be fully transparent regarding their use of ChatGPT.

Already, China has moved to outlaw AI-generated media and fake news that is not labelled as such. Perhaps legal letters, copywriting and reports written by LLMs will one day require similar labelling.

It’s not hard to envision a future where all law firms adopt software that seamlessly integrates with their internal processes, allowing for transparent accountability of how ChatGPT was used in the creation of a specific output, and who made any necessary edits. This would be especially important when the output is shared with clients or used to make important decisions.

If future versions of ChatGPT are able to address some of the major concerns, such as being deployable on-premises, restricting the sources of information used in responses, or reducing human-in-the-loop requirements, the possibilities could be quite exciting.

For many years, law firms have recognised the value of chatbots in improving client and employee experiences, and it is certain that ChatGPT has shaken things up. However, for the time being, ChatGPT has promising potential to enhance conventional chatbots, rather than replace them.

This article is part of a series exploring ChatGPT and what this means for the chatbot industry.

If you’re interested in anything you’ve heard about in this article, reach out at victoria@springbok.ai!

Find our blog landing page here.
