Published: 11 Apr 2024
Analysing accuracy, balancing bias: Can ChatGPT be trusted to ease the care documentation burden?
GOLTC DATA SCIENCE INTEREST GROUP WEBINAR
Care managers spend around six hours per week transcribing handwritten meeting notes into electronic databases. Now, AI is being used to generate case notes automatically from audio recordings of case management meetings, freeing workers to spend more time assessing needs and arranging care. But how reliable are these tools? Automatic speech recognition is known to show racial disparities in accuracy, and how do we quantify the accuracy of free text generated by AI?
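As a rough illustration of how accuracy questions like these are often quantified for speech-to-text output, the sketch below computes word error rate (WER), the standard metric used in studies of transcription accuracy (including disparities across speaker groups). This is not part of the webinar materials; the function name and sample transcripts are hypothetical.

```python
# Illustrative sketch: word error rate (WER) compares an automatic
# transcript against a human reference transcript.
# WER = (substitutions + insertions + deletions) / reference word count.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: one dropped word out of ten gives WER = 0.10.
reference = "the client would like help preparing meals twice a week"
hypothesis = "the client would like help preparing meals twice week"
print(f"WER: {word_error_rate(reference, hypothesis):.2f}")
```

Measuring how WER varies across speakers is one way the racial disparities discussed in the first talk are made visible; evaluating free-text summaries (the second talk) requires different metrics and expert review.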
Date: Thursday 23 May 2024
Time: 15:30-16:45 BST | 10:30-11:45 EDT
Link: Register to join on Zoom
This webinar will cover:
- Racial disparities in automated speech recognition (Allison Koenecke, Cornell Department of Information Science)
- Adapted large language models can outperform medical experts in clinical text summarization (Dave Van Veen, Stanford Center for Artificial Intelligence)
- Discussion: What does this research tell us about using AI in long-term care?