Transcription Trouble: Hospitals Navigate AI Hallucinations in Patient Summaries

Hey there, tech enthusiasts! It’s your favorite tech-loving jokester, Nuked, here to sprinkle a little humor into the world of technology. Buckle up as we dive into the quirks of AI in hospitals!

A few months back, my doctor proudly showcased an AI transcription tool he uses to jot down and summarize our meetings. In my case, the summary turned out just fine. But hold on to your stethoscopes — researchers quoted by ABC News have discovered that OpenAI’s Whisper, which fuels many hospitals’ transcription tools, can sometimes go off the rails and make stuff up entirely!

Whisper powers a transcription tool built by a company called Nabla, which estimates it has transcribed around 7 million medical conversations. That's a lot of doctor-patient chat! More than 30,000 clinicians and 40 health systems use the tool. Nabla says it's aware of Whisper's tendency to hallucinate and is reportedly working on fixing it.

Researchers from Cornell University and the University of Washington conducted a study revealing that Whisper hallucinated in about 1% of its transcriptions. In those cases, it invented entire sentences that ranged from bizarrely nonsensical to downright violent, especially during silent pauses in the recordings. As it turns out, such pauses are pretty common in the speech of people with aphasia, a language disorder.
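
If you're curious what hunting for these ghosts might look like in practice, here's a minimal sketch using the open-source openai-whisper Python package. It flags segments the model itself suspects are silence, since text "heard" during probable silence is a classic hallucination warning sign. The audio filename and the thresholds are illustrative assumptions, not values from the study or from Nabla's pipeline.

```python
# A minimal sketch, not Nabla's pipeline: transcribe audio with the
# open-source openai-whisper package and flag suspicious segments.
import whisper

model = whisper.load_model("base")  # small model, fine for a demo
result = model.transcribe("visit_recording.wav")  # hypothetical file

for seg in result["segments"]:
    # no_speech_prob: the model's own estimate that the segment is silence.
    # avg_logprob: average token log-probability of the segment's text.
    # Low-confidence text produced during probable silence is a classic
    # hallucination signature. These thresholds are assumptions for the demo.
    suspect = seg["no_speech_prob"] > 0.6 and seg["avg_logprob"] < -1.0
    tag = "[SUSPECT] " if suspect else ""
    print(f"{tag}{seg['start']:6.1f}-{seg['end']:6.1f}s  {seg['text'].strip()}")
```

Fittingly, Whisper's own transcribe() call exposes similar knobs (no_speech_threshold and logprob_threshold) that it uses internally to decide when to suppress output for stretches it believes are silent, so the idea here is just to surface that judgment instead of hiding it.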

One of the researchers, Allison Koenecke from Cornell University, shared some eyebrow-raising examples on social media. The hallucinations included invented medical conditions and phrases you'd typically hear at the end of a YouTube video, like "Thank you for watching!" (Fun fact: OpenAI reportedly trained GPT-4 in part by transcribing over a million hours of YouTube videos.) This fascinating study was presented at the ACM FAccT conference in Brazil back in June, but it's unclear whether it has gone through peer review.

In response to these findings, OpenAI spokesperson Taya Christianson told The Verge in a statement that the company is committed to tackling the issue: it's actively working to reduce hallucinations, and its usage policies prohibit deploying Whisper in high-stakes decision-making scenarios. So rest assured, they're on it!

As we continue to explore the realm of AI in healthcare and beyond, let’s keep an eye on these developments — because who knows what’s lurking in those digital shadows? Until next time, stay curious and keep laughing!

Spread the AI news in the universe!
Nuked
