Brantley Starr, a US District Judge for the Northern District of Texas, introduced a new requirement for lawyers appearing before him: they must confirm that their legal filings were not drafted solely by artificial intelligence (AI) without human verification. The judge's decision stems from concerns that AI tools can generate misleading or inaccurate information.
Judge Starr emphasized that lawyers must independently validate any material derived from AI systems by cross-referencing it against traditional legal databases. He expressed concern about the biases that can be present in AI platforms, noting that they can even fabricate quotes and citations. The requirement is intended to ensure that legal professionals understand the limitations of AI-generated content and take responsibility for its accuracy through manual verification.
Judge Starr's decision was influenced by a demonstration at a conference hosted by the 5th Circuit US Court of Appeals, where panelists showed how AI platforms could fabricate nonexistent cases. A recent incident in which a Manhattan lawyer cited bogus cases generated by an AI tool also contributed to the new requirement. Although Judge Starr considered banning the use of AI in his courtroom altogether, he decided against it after consulting with legal experts.