A Colombian judge, Juan Manuel Padilla Garcia of the First Circuit Court in Cartagena, has made history by using the AI text generator ChatGPT in a court ruling, the first known instance of a legal decision being made with the assistance of AI.
In a court document dated January 30, 2023, Judge Garcia stated that he used the AI tool to pose legal questions about the case and included its responses in his decision. He emphasized that the AI-generated text was included to save time in drafting the judgment, not to replace his own decision-making.
The case in question involved a dispute between a health insurance company and a family seeking coverage for medical treatment for their autistic child. The legal questions entered into the AI tool included inquiries about the responsibility for paying for the child’s therapies and about the jurisprudence of the constitutional court in similar cases.
Judge Garcia included the chatbot's full responses in his decision, making him the first judge known to do so. He also added his own analysis of applicable legal precedents and stated that the AI was used to extend the arguments of the adopted decision.
It is worth noting that Colombian law does not prohibit the use of AI in court decisions, but AI systems like ChatGPT have been criticized for producing biased, discriminatory, or incorrect answers. This is because the language model has no actual "understanding" of the text; it generates sentences word by word, based on probabilities learned from the examples used to train the system.
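The probability-driven generation described above can be illustrated with a toy bigram model. This is a drastic simplification of how ChatGPT actually works (modern models use neural networks over far larger contexts), and the corpus and function names here are invented for illustration, but it shows the key point: each next word is chosen by probability, with no notion of whether the result is true.

```python
import random
from collections import defaultdict, Counter

# A tiny invented "training corpus" for illustration only.
corpus = "the court ruled for the family the court denied the appeal".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=5, seed=0):
    """Sample words one at a time, weighted by observed frequency."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        counts = follows.get(word)
        if not counts:
            break  # no continuation seen in training data
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Every output sentence is locally plausible because each word pair appeared in training, yet the model has no mechanism to check the resulting claim against the world, which is exactly the criticism leveled at ChatGPT's legal answers.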
Although ChatGPT's creator, OpenAI, has implemented filters to screen out problematic responses, the tool is still considered to have significant limitations and should not be used for consequential decision-making.
While this case marks the first known admission of a judge using AI in a court ruling, some courts have already begun to controversially use automated decision-making tools in determining sentences or releasing criminal defendants on bail. This has been heavily criticized by AI ethicists who argue that such systems regularly reinforce racist and sexist stereotypes and amplify pre-existing forms of inequality.
Although the Colombian court document suggests that the AI was mostly used to expedite the drafting of the decision and its responses were fact-checked, this landmark case is likely a sign of things to come. The use of AI in the legal field is an evolving and controversial issue, but one that demands attention as the technology advances and becomes more integrated into our daily lives.