Generative AI poised to upend legal sector
Some industries might be wary of generative AI, but not legal. We find out how -- and why -- attorneys are embracing it in this Q&A with eDiscovery AI's founder, Jim Sullivan.
Generative AI might or might not turn out to be a monumental enterprise technology trend overall, but it's taking off in the legal world. It can help lawyers draft arguments, summarize case law, analyze contracts and sift through mountains of files for useful bits of information pertaining to a case. GenAI's utility has given rise to many legal AI startups.
One of those startups, eDiscovery AI, specializes in the classification and review of electronic documents, audio and video related to legal cases. Today, teams of human reviewers go through a case's files to determine what is and isn't relevant. The company, based in Bloomington, Minn., aims to automate that process for law firms with its tools. We discussed the future of AI in legal with the company's founder, Jim Sullivan.
Editor's note: This Q&A was edited for clarity and brevity.
Explain the complexities of discovery and documentation in legal.
Jim Sullivan: Right now, litigation involves two parties that are disputing something. Each party has to produce all the case documents relevant to the other party. When you receive the other party's documents, you need to determine what's relevant to support your case.
Historically, we've used just a roomful of attorneys going through documents one at a time to determine that. There are about 45,000 human document reviewers in the country, and their job is to go through documents one at a time, classifying them to determine whether they're relevant to the case.
We have a number of tools that make it more efficient to cull down the data set. But generative AI is the first real tool that can review documents the way a human being would: classify a document, understand whether it's relevant to your case, explain why and even summarize it so you get a sense of what it's talking about. The technology is so much better than people at classifying documents -- people sometimes disagree on a document's relevance, and then they go with the majority of the panel.
How much has digital content increased in the last couple of decades for legal discovery?
Sullivan: The data volumes are just absolutely getting out of control. And that's where it's been kind of like this arms race over the last 20 years: Data volumes increase, then technology improves to help handle those larger data volumes. Machine learning predicts classifications of documents, and other technologies that reduce the size of the data set have been around for a long time.
Generative AI is the biggest leap we've ever seen, where now it can kind of do the job for you. It can absolutely look through and classify any document -- text or image -- to determine what it is about. There are so many other cool use cases in the legal industry or even outside the legal industry for using AI to streamline processes, to generate drafts for briefs, for making legal arguments. We're helping you with legal research. We're analyzing contracts, extracting relevant material -- a lot of the things traditionally done by paralegals. AI is really well equipped to handle a lot of those tasks.
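As an illustration of the kind of document review Sullivan describes, here is a minimal sketch of how a relevance-review prompt might be assembled and its answer parsed. This is not eDiscovery AI's actual system; the function names and the answer format are hypothetical, and the model reply is stubbed out where a real LLM API call would go.

```python
# Illustrative sketch only -- not eDiscovery AI's pipeline. Shows one way a
# per-document relevance prompt could be built and the model's structured
# reply parsed into a classification plus an explanation.

def build_review_prompt(case_issue: str, document_text: str) -> str:
    """Assemble a prompt asking for a relevance label plus a rationale."""
    return (
        f"Case issue: {case_issue}\n"
        f"Document:\n{document_text}\n\n"
        "Answer with RELEVANT or NOT_RELEVANT on the first line, "
        "then one sentence explaining why."
    )

def parse_review_answer(answer: str) -> dict:
    """Split the model's reply into a boolean label and an explanation."""
    label, _, reason = answer.partition("\n")
    return {"relevant": label.strip() == "RELEVANT", "reason": reason.strip()}

# Stubbed reply, standing in for a real LLM response to the prompt above:
reply = "RELEVANT\nThe email discusses the disputed contract terms directly."
result = parse_review_answer(reply)
print(result["relevant"], "-", result["reason"])
```

In a real deployment, the stubbed reply would come from a model API call, and the parsed label would feed the validation step discussed below in the interview.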
Can attorneys trust generative AI to be accurate? These processes are so crucial to success or failure in the courtroom and in their businesses.
Sullivan: We determine accuracy based on statistics and testing before we trust anything, before we use the summaries. We need to verify that it is accurate and correct. We have to do that in every case that we're using it on. We don't say, 'It's accurate.' We say, 'We have actually checked to make sure that it's accurate because the validation step is the part that matters the most.'
When we're using AI for any use case -- whether it's reading a draft or classifying documents -- we need to verify that what it created is correct. We count how many times it is correct and incorrect to determine its error rate. Ultimately, we're looking to beat a human review.
A human reviewer is going to identify a relevant document correctly probably about 70%, maybe 80% of the time. We're missing 20% to 30% of all relevant documents, and everyone knows it. With AI, we can statistically prove that we're finding 90% or more of the relevant documents. It's not perfect. It definitely makes mistakes, but it makes mistakes at a rate far less than what a human would make, and the types of mistakes it makes are actually a lot better because it's not missing your very obvious important documents. Its mistakes are on those documents where two or three people would disagree anyway.
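The validation arithmetic Sullivan describes can be sketched in a few lines: compare the AI's relevance calls against a human-coded sample, then compute recall (the share of truly relevant documents it found) and the overall error rate. The function name and the sample figures below are made up for illustration.

```python
# Illustrative validation sketch: score AI relevance calls against a
# human-coded ground-truth sample. Recall measures how many truly relevant
# documents were found; error rate counts all mismatches.

def recall_and_error_rate(predicted, actual):
    """predicted/actual are parallel lists of booleans (True = relevant)."""
    true_pos = sum(p and a for p, a in zip(predicted, actual))
    relevant = sum(actual)
    errors = sum(p != a for p, a in zip(predicted, actual))
    return true_pos / relevant, errors / len(actual)

# Hypothetical sample: 10 documents, 5 truly relevant, AI misses one and
# over-includes one.
actual    = [True, True, True, True, True, False, False, False, False, False]
predicted = [True, True, True, True, False, False, False, True, False, False]

recall, error_rate = recall_and_error_rate(predicted, actual)
print(f"recall={recall:.0%}, error rate={error_rate:.0%}")  # recall=80%, error rate=20%
```

The 90%-plus recall Sullivan cites would be established the same way, just on a statistically sized sample per case rather than ten documents.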
Attorneys take pride in their ability to craft arguments for the courts -- do they really want to let AI do it for them?
Sullivan: You can definitely keep your touch and feel on your arguments, but having someone else get things started -- using paralegals to create a draft of a document is super common -- is a smart idea. It makes things faster. Using AI is kind of the same thing. And if you don't want to use the words that AI creates, you can always just say to the AI, 'Create an outline of different arguments or ideas.'
If you don't want it to create content, have it brainstorm for you by saying, 'Hey, this is my case, write some arguments in my favor and some arguments that are against my case so I can understand them,' and use that as a starting point.
I think the big struggle for attorneys is the fact that it's all based on that billable hour. Clients are saying, 'I expect AI is going to reduce my cost. What are you doing to make that happen?' Law firms are looking at this and either using generative AI for doc review or using it to generate and create documents. Now, if it takes half the time, you're billing your client half as much. The hourly rate rewards inefficiency -- but I would still expect that a lawyer who can create things much quicker and bills their clients less is going to be the type of firm that's going to get more work.
Has any one large language model emerged as the strongest for legal work?
Sullivan: Every language model out there is better or worse at some things. Whenever a new model comes out, we analyze it very deeply to make sure we understand what it is and isn't good at. Some are better at understanding human language. Some are better at coding. Some are better at generating images. Some are better at math and science.
Understanding which models are best for which use cases is incredibly important -- understanding that 'Oh, I'm doing privileged review, this model is going to be better.' We have context that we have to factor in. We have prices that we factor in. Determining what model is best for which use cases is a huge part of what we're doing. However, I do think that in the future, we are going to see more customized, focused models geared toward one use case or a very narrow set of use cases instead of just having GPT-4, which is probably the best example of a brute-force model that they shoved in everything and works really well. But it's bulky, it's expensive to run because it requires a lot of power. If you can have specialized models that can run in a fraction of the power, I think it's going to be a lot more beneficial.
What will generative AI for law firms look like in five years?
Sullivan: We are in for a major shift in everything we're doing. The legal industry is probably going to be further back on that shift because it's more conservative. Larger organizations take more time to change. But we are in for a revolution that's far bigger than anything that's ever happened in our lifetime. You're already seeing Wendy's using AI to take drive-through orders. You're seeing it replace people in call centers, and the quality of the voices is getting better, and so are the speed and accuracy of their speaking.
There are going to be so many places where AI has a huge impact in real life, in how people work. But I think everyone has to be aware of how it's working and how it's coming in, and I don't think anyone is at risk of losing their job if they're willing to learn and understand the technology. People who embrace AI absolutely are going to be able to do more, better, faster, and other people are going to struggle to keep up with that. But if you think that you're not going to use AI for the next 10 years, I hope you're retiring before 10 years is up -- because you're going to have a rough time.
Don Fluckinger is a senior news writer for TechTarget Editorial. He covers customer experience, digital experience management and end-user computing. Got a tip? Email him.