How ChatGPT can advance AI in the legal industry
AI technology has made it easier for lawyers to do their jobs effectively, and generative AI is no exception. But lawyers who use these tools must do so responsibly.
With the boom in generative AI and instant popularity of tools such as ChatGPT, most sectors of the economy are trying to figure out how the new technology applies to them.
While these tools are most effective for content creators such as writers, artists and marketers, their use in fields such as medicine and law can become problematic.
However, for Jake Heller, CEO and co-founder of Casetext, ChatGPT is the next evolution of technology that is helping lawyers do their job better. Casetext is an online legal research tool that uses AI to assist lawyers during their research process.
In this Q&A, Heller reveals that while ChatGPT is an upgrade from previous generative AI tools, "it still can't pass the bar exam," so to speak.
How has AI technology changed in the past few years for lawyers?
Jake Heller: What I saw when I practiced law was that we were often held back by the technology. We were often held back by what we were able to provide to our clients in a fast, effective and efficient manner. The technology was just so slow. It would oftentimes make it so I couldn't find the relevant information to help my client.
About 10 years ago, we saw this inflection point in technology, where the cutting edge of it would enable me to find the right information faster. It enabled me to put together a first draft memorandum faster. It enabled me to fill out the forms necessary to help a client faster. In the early days, this stuff wasn't very good, but it was better than not having it.
About five years ago, we started working with a new class of technology: transformers and large language models. The very first one of these to become popular was a smaller model called BERT, which came out in 2018.
We immediately saw the applicability. It couldn't do what ChatGPT does and write something eloquent. What it could do was help us understand what a search query that a lawyer input into our system meant and find things within our database of a billion pages of law that are actually relevant to what the lawyer is looking for.
About three years ago, GPT-3 came out. At that point, we were starting to talk, draft and interact with AI. But it was operating at kind of a high school level. Every so often, it was genius, and every so often, it was just mediocre.
And then, three months ago, ChatGPT came out, which is the next evolution in the technology. It has improved dramatically in its ability to understand nuanced language and respond intelligently.
It still isn't at a place where I think a lawyer can rely on it for most of their most important tasks. But it's already at a place where a legal professional could brainstorm with it -- like, 'Hey, help me think about five ideas for my upcoming deposition,' or, 'Help me think about a first draft of a press release about some product,' or, 'Help me think about a first draft of a contract.'
Some say that what ChatGPT spits out is coherent nonsense. And given that the AI chatbot is current only through the end of 2021, how can the technology assist lawyers if its data is not up to date?
Heller: I wouldn't rely on ChatGPT. In our profession, if you get something wrong, somebody may end up in jail. The business may go bankrupt. It's irresponsible to put something in front of people that is going to produce even a small number of errors. For these kinds of factual questions, it's inappropriate to rely on something like ChatGPT.
However, ChatGPT is better suited for tasks like drafting a section of a contract. If, as a lawyer, I asked ChatGPT to draft a section of a contract, slapped it into my document and called it a day, I would not be doing my job. It's the same as when I ask a legal assistant to do a first draft for me: I have a responsibility as a lawyer to make sure it's right and up to date. Even though a very smart legal assistant doing a first draft for me may be wrong sometimes, it still helps.
It offloads some of the mental work. I can delegate that and work on something else. It gives you ideas you otherwise might not have had. It helps with these more creative tasks, which are not the main and most important applications of artificial intelligence for law.
Where artificial intelligence is really going to make a difference is when you have an AI model that's even smarter. For example, it can read the information in front of it -- read the contract, read cases, read other laws -- and answer accurately based on what the document says and how you should interpret it.
ChatGPT is not quite there yet. We're seeing in our early testing that these large language models are rapidly advancing to a point where their reading comprehension, their understanding of complex and nuanced documents, and their ability to synthesize what they just read are at the postgraduate level. And once you're there -- once you've graduated -- you're in a very different world, and you can start relying on this for actual important work.
For me, it's not just ChatGPT. It's what's around the corner.
How can this AI help lawyers without taking away the jobs of legal assistants?
Heller: AI will definitely take tasks away from legal assistants and lawyers. I don't think it will take away jobs.
In the legal industry, we've been through some massive improvements in efficiency already. We used to do research by going down to the library. There was a time before email when we literally wrote letters to each other. There was a time before word processing when we had to type all this stuff. And you would think that as we became more efficient -- researching online, communicating more effectively and efficiently -- the industry would shrink, wages would go down and people would be fired. The exact opposite happened.
There are more lawyers now than there have ever been in the history of American lawyering. They make more money now, by a lot. The legal industry globally is a trillion-dollar industry. With the increase in productivity that lawyers have thanks to technology, they can now offer a better service at less and less cost. There's a huge unmet demand for legal services. As we became more efficient and effective, the demand for our services increased. The services law firms are able to provide to their clients today, thanks to the advances in technology so far, are immensely better.
There's so much left to do now in a world where AI takes away the time-consuming, annoying parts of the work.
How can lawyers preserve the ethical foundation of their profession while still using AI technology such as ChatGPT?
Heller: The first and most important rule of professional conduct for lawyers is the duty to the client to provide great counsel. So lawyers should not just rely on these tools by themselves. The most important thing is that you don't just copy and paste ChatGPT's output into the brief or the contract.
It is your duty to check the work and make sure it's right. Verifiability in these tools will make it very easy for lawyers to verify, confirm and do their due diligence on the output of this technology.
Another important piece is that some of these systems, like ChatGPT, learn from the content you input into them. It is imperative that you do not input sensitive content unless you know the system is trusted from a security perspective. The last thing you want is to be representing Pfizer on some major pharmaceutical issue, and then somebody asks a question later and the system starts spitting back private information.
A lot of technologies can be used for immense good and can also be really dangerous if used inappropriately.
Editor's note: This Q&A has been edited for clarity and conciseness.
Esther Ajao is a news writer covering artificial intelligence software and systems.