When former Trump attorney Michael Cohen filed a routine motion in federal court last month seeking a shortened probation, no one could have predicted the strange turn the case would soon take. Included in the legal brief were citations of three court decisions that Cohen claimed backed his request for leniency. However, upon closer examination, an alert federal judge discovered something unexpected — the cited cases did not exist.
What followed has become a cautionary tale for the AI age, highlighting the very real dangers of new technologies that can generate false information that looks convincingly real.
In response to the skeptical judge’s inquiry, Cohen revealed that the fabricated cases were the result of research he had conducted using Google’s new Bard AI chatbot. Still in an experimental beta phase, Bard aims to provide users with informed responses on a vast array of topics. However, as Cohen’s experience made clear, the AI system is far from infallible.
In his statement to the court, Cohen confessed that he had mistakenly believed Bard worked “like a super-charged search engine” that would only provide accurate information. Unaware that AI systems can generate fictional content, Cohen took the AI-created legal citations at face value and passed them along to his attorney, who in turn included them in the brief.
For Cohen, a former lawyer who is now serving a federal sentence on campaign finance and other charges, the embarrassing episode has served as a crash course on the risks of emergent technologies. “As a non-lawyer I have not kept up with emerging trends (and related risks) in legal technology,” Cohen wrote in his admission to the court.
The tendency of AI systems to generate plausible-sounding but fabricated content presents a particular challenge for the legal profession, which is built upon precedent and relies heavily on existing case law. Attorneys must now be increasingly vigilant in verifying that the information they use is authentic, particularly when it comes from AI tools like ChatGPT or Bard.
Indeed, this is not the first cautionary tale to emerge from AI’s infiltration into the practice of law. Earlier this year, two lawyers were sanctioned after an appellate brief they submitted was found to contain over a dozen fictitious case citations also fabricated by ChatGPT.
As these incidents reveal, AI promises accessibility and ease of information but does not yet possess the same fidelity to truth that the legal system demands.
For those like Cohen who grew up in a pre-AI age, adapting to this new reality poses a challenge. The instantaneous, confident answers provided by chatbots can lull users into a false sense of trust. Until the technology improves, users must maintain a healthy skepticism about the information it delivers.
The legal profession has long valued wisdom and erudition, gained over time through diligent research and analysis. Artificial intelligence now allows anyone to access a wealth of apparent knowledge with just a few keystrokes. But as Michael Cohen learned, when relying on the panoply of information AI provides, we must still separate truth from fiction. Truth is often the first casualty when navigating the uncharted waters of an automated world.