Electronic health records have led to even larger haystacks that stymie needle-hunting clinicians. But despite the capabilities of models like OpenAI's GPT-4, it remains unclear whether they are ready for the high stakes of clinical summarization, where a single missing word could change a diagnosis.
"Are we really going to trust this to be able to do that?" said Bart. "It was hard enough with someone who has medical training to understand the context."
Early adopters of LLM-generated summaries are asking the same questions. The AI tools promise to save time and angst for clinicians, but they also introduce mistakes, miss medically important distinctions, and sometimes make things up. Even as hospitals race to adopt them — at least a dozen are piloting Epic's built-in tool to summarize patient charts — they remain uncertain how to test the tools' performance, insert safety checks, and clear them for use in live medical settings.
www.statnews.com

'It's a bit chaotic': Hospitals struggle to validate AI-generated clinical summaries
AI-powered summarization tools can save time and angst for hospital staff, but they can also introduce mistakes or make things up.

Perhaps we need to consider using REAL intelligence rather than artificial