Industries evolve through technological innovation, and the legal profession is no exception.
Although often slow to adapt — yes, some of us still use fax machines and WordPerfect — attorneys now regularly use online filing systems to file lawsuits instead of walking a paper copy to the clerk's office. We adopted virtual technology to meet with clients and attend court proceedings during the COVID-19 shutdowns. The newest technology to hit the legal field is a form of artificial intelligence (AI), and its use is already the subject of litigation.
One of the cases currently being talked about involves the use of AI by attorneys in a personal injury action. The case began with an airline passenger, sitting in the aisle seat, who was struck by a metal beverage cart. As the story goes, the passenger sustained an injury and filed suit against the airline. For whatever reason, the passenger did not file his lawsuit within the statute of limitations (the statutory period of time to file a claim for a specific cause of action). Nevertheless, his attorneys filed a lawsuit in federal court, equipped with a brief (written legal arguments) explaining why the lawsuit was timely and the statute of limitations did not apply.
The attorneys for the passenger filed their arguments using legal research obtained through ChatGPT, a software program that uses artificial intelligence to provide information, conduct research, and answer questions — and does so in a convincing, conversational way. If you research this online, you will discover a vast world of artificial intelligence programs and uses — much more than I am covering here.
A bit more background will help this story make sense if you are not familiar with the legal field. When an attorney files a brief, or legal memorandum, in a court, the attorney makes various arguments based on the law. Those arguments are supported by the constitution, statutes, regulations, and case law (other court cases that have been decided). The attorney has a duty and legal obligation to make arguments based on true legal theories — actual laws, court holdings, and reasonable conclusions and interpretations that may be drawn from laws and previous court decisions. In other words, an attorney can't just make something up and call it a legal argument.
In this personal injury case, when opposing counsel and the judge researched the cases that the injured passenger's attorneys cited, they could not locate them. The presiding judge reportedly contacted other courts (the courts to which the passenger's attorneys had attributed the citations) to verify the existence of the cases in the brief. The outcome? The cases supporting the passenger's attorneys' legal arguments were fake. Made up. Nonexistent.
What we understand now is that the attorneys used ChatGPT for their legal research. It was reported that the attorneys then did their "due diligence" in confirming the accuracy of that research by asking ChatGPT itself. Apparently, ChatGPT told the attorneys that the legal research was accurate, and must have done so in a confident and convincing way, because the attorneys took ChatGPT at its word.
If you are reading this and think it absurd — I do, too. Before hearing about this case, I had no idea that attorneys might be using artificial intelligence to conduct legal research, let alone relying on AI-generated research to make arguments in court.
However, the use of AI in legal document preparation is common enough that judges have begun implementing local practices relating to its use. For example, I discovered a report that one federal judge in the Northern District of Texas issued a mandatory rule requiring attorneys appearing in his court to attest (promise or swear) that either no portion of a brief was drafted by generative artificial intelligence, or that any language drafted by generative artificial intelligence was checked for accuracy by a human being using traditional legal research publications or databases. I also found a report of a federal magistrate judge in the Northern District of Illinois adopting a similar rule requiring disclosure of any generative artificial intelligence tool used in legal drafting.
I also found articles aimed at attorneys who use, or are contemplating using, artificial intelligence in their practices. Before researching these issues, I was naïve about the different forms of artificial intelligence and how they may be used in the legal field. The more you learn, the more risks and pitfalls you see associated with the use of AI in this arena.
It was a tough lesson for the injured passenger's attorneys. The federal judge who presided over the personal injury case not only dismissed the case but also sanctioned the attorneys for making legal arguments based on the imaginary cases generated by ChatGPT. It was reported that the attorneys had already set up their own continuing legal education courses to remedy their actions. The attorneys were also fined $5,000 and ordered to write a letter to their client explaining what they did, as well as letters to the real-life judges to whom their imaginary cases were attributed. All that, plus having their names published in every news article reporting on the subject, available to the public. On a more positive note, the actions of those attorneys are a public service to all other practitioners: a lesson in what not to do.
From the looks of it, AI in the field of law is only getting started. Earlier this year there were reports of a company that planned to introduce an "attorney robot" of sorts: an AI service that litigants would listen to, via headphones, while in court. The software would listen to the court proceeding in real time and then tell the litigant what to say or do in response.
A program like that could violate any number of court rules and ethical rules. Most courts do not allow the use of transmitting technology, cell phones, or recording devices. Most states have ethical and practice rules that must be followed, including the general principle that legal advice must be provided by a practitioner who is licensed to practice law and who is competent to assist on the specific matter. Will generative artificial intelligence carry malpractice insurance for all of the bad advice it is inevitably going to generate?
The lesson here: if you decide to use artificial intelligence in the legal world, please use caution. There is a reason legal research databases such as Westlaw and Lexis exist. Many of these are free and available to the public through libraries — each county in our region has a public law library. Further, many cases are available on the internet through websites such as FindLaw and Casetext. Federal and state constitutions, statutes, and regulations can usually be found quickly, and for free, online. Best practice: stay away from AI for legal purposes and research, at least for now.