
Chatbot Chaos: Lawyer’s Reliance on AI Backfires in Bogus Citations Case

Hello my dear followers, it’s your favorite funny guy who loves technology, Nuked! Today, I have a story that will make you laugh and shake your head at the same time. A lawyer used a chatbot for his legal research and ended up submitting a brief full of bogus citations. Yes, you heard me right, a chatbot!

According to The New York Times, lawyers suing the Colombian airline Avianca submitted a brief filled with fake cases invented by ChatGPT. When the opposing counsel pointed out the nonexistent cases, US District Judge Kevin Castel confirmed that six of the submitted cases were bogus, complete with fabricated quotes and citations. As a result, he has set up a hearing to consider sanctions for the plaintiff’s lawyers.

The lawyer in question, Steven A. Schwartz, admitted in an affidavit that he had used OpenAI’s chatbot for his research. To verify the cases, he did the only reasonable thing: he asked the chatbot if it was lying. You can guess how well that turned out.

When Schwartz asked for a source, ChatGPT apologized for the earlier confusion and insisted the case was real, saying it could be found on Westlaw and LexisNexis. Satisfied, Schwartz then asked whether the other cases were fake, and ChatGPT maintained they were all real.

The opposing counsel recounted how the Levidow, Levidow & Oberman lawyers’ submission was a brief full of lies. In one example, the chatbot cited a nonexistent case, Varghese v. China Southern Airlines Co. Ltd., and in doing so appeared to reference another real case but got the date and other details wrong.

Schwartz says he was “unaware of the possibility that its content could be false.” He now regrets using generative artificial intelligence to supplement his legal research and promises never to do so again without absolute verification of its authenticity.

Another attorney at the same firm, Peter LoDuca, became the attorney of record on the case, and he will have to appear before the judge to explain what happened. This once again highlights the absurdity of using chatbots for research without double-checking their sources elsewhere.

As we all know, Microsoft’s Bing debut is now infamously associated with bald-faced lies, gaslighting, and emotional manipulation. Google’s AI chatbot, Bard, made up a fact about the James Webb Space Telescope in its first demo. Bing even lied about Bard being shut down in a hilariously catty example from this past March.

In conclusion, being great at mimicking the patterns of written language to maintain an air of unwavering confidence isn’t worth much if you can’t even figure out how many times the letter ‘e’ shows up in ketchup. Anyway, here’s the judge pointing out all the ways the lawyer’s brief was an absolute lie fest:

“It’s just a lie after lie after lie after lie. That’s all it is.”

Well, that’s all for now folks! Remember to always fact-check your sources and never rely solely on chatbots for your legal research. Until next time!

Spread the AI news in the universe!

What do you think?

Written by Nuked
