(Reuters) - The perils of artificial intelligence keep spilling into U.S. courtrooms, with more judges issuing rules for attorneys using the technology and another lawyer inadvertently citing cases made up by ChatGPT.

Colorado Springs, Colorado, broadcaster KRDO reported this week that a local attorney cited nonexistent cases generated by OpenAI's ChatGPT in a court filing in a case over contested car payments.

The young lawyer, Zachariah Crabill, said he unknowingly included the "fictitious" cases in his client's motion, according to court documents. He apologized in a May 11 affidavit, saying ChatGPT had accurately answered his previous inquiries, so it "never even dawned on me that this technology could be deceptive."

Crabill, who did not respond to a request for comment, moved to withdraw from the case last week. His firm, Baker Law Group, declined to comment and did not respond to follow-up questions.

The case echoes another in New York, where a federal judge is weighing sanctions against two lawyers and their law firm after one of the attorneys, Steven Schwartz, included six cases made up by ChatGPT in a legal brief in his client's personal injury case against an airline. Schwartz also apologized and cited ignorance of AI's research limitations.

OpenAI's ChatGPT website currently warns users that the program "may occasionally generate incorrect information." The company did not immediately respond to a request for comment.

Some courts are taking preventative action. At least four federal judges have recently issued orders governing how attorneys with cases before them use AI tools.

Senior U.S. District Judge Michael Baylson in Philadelphia on June 6 began requiring lawyers to disclose whether AI was used for court filings and to certify that "each and every citation" was verified for accuracy.

Judges in Texas, Illinois and Washington, D.C., have also mandated similar disclosures since last month.

U.S. Magistrate Judge Gabriel Fuentes in Chicago now requires lawyers to reveal any "specific AI tool" used for legal research or document drafting. Judge Stephen Vaden of the U.S. Court of International Trade ordered specific steps to safeguard data, declaring that AI technologies "challenge the Court's ability to protect confidential and business proprietary information from access by unauthorized parties."

Professional conduct rules issued by the American Bar Association do not explicitly address artificial intelligence. However, existing ethics rules apply to lawyers using the technology, experts have told Reuters, including rules regarding competence and confidentiality.
