Researchers Develop Simulated Medical Interviews to Train AI Models
Researchers developed a dataset of simulated clinical conversations focused on respiratory cases for use in the development and training of medical artificial intelligence.
A study published in Scientific Data earlier this month details how researchers developed a dataset of simulated medical conversations focused on respiratory conditions to support the future development and training of medical artificial intelligence (AI) models.
Medical conversations, in which a clinician speaks with a patient to assess the patient's health needs and concerns, are a critical aspect of routine examination and care. The study notes that these conversations provide a wealth of clinically valuable information but are difficult to use in research and AI development because of patient privacy concerns and data-sharing laws.
With these limitations in mind, the researchers set out to develop a simulated medical conversation dataset that could be used to train healthcare AI models. They began by assembling a team of resident doctors in internal medicine, physiatry, anatomical pathology, and family medicine, along with senior Canadian medical students, to record simulated medical conversations in the Objective Structured Clinical Examination (OSCE) format over Microsoft Teams.
OSCE was chosen because it provides a standardized method of testing students' clinical skills and can accommodate unpredictable patient behavior. The researchers then selected medical conditions for the simulations based on each condition's prevalence and its mortality rate if left untreated, including respiratory, musculoskeletal, cardiac, dermatological, and gastrointestinal disease cases. The doctor or medical student playing the patient in each simulated conversation chose a case to present with.
Those playing the patients in the simulated conversations were encouraged to respond the way a patient typically would in a clinical setting, based on their experience. Those playing the clinicians were told to take a patient's history as they normally would to help inform a differential diagnosis. To accurately simulate the clinical setting and prevent leading questions, the clinicians were not told beforehand which disease case or diagnosis the "patient" had chosen to present with.
The recorded conversation audio was then cleaned to remove extraneous information, the conversation transcripts were manually corrected, and both the audio files and the transcripts underwent quality-control review to ensure that any mistakes from the previous two steps had been removed or corrected. This process yielded 272 complete mp3 audio files and corresponding transcript text files.
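Because each recording is paired with a corrected transcript, a first sanity check for anyone using the data is to confirm that pairing. Below is a minimal sketch; the directory layout and file-naming scheme are assumptions for illustration, not details from the study:

```python
from pathlib import Path

# Hypothetical layout: paired files such as RES0001.mp3 and RES0001.txt
# in a single directory; the dataset's actual naming scheme may differ.
DATA_DIR = Path("simulated_conversations")

audio_stems = {p.stem for p in DATA_DIR.glob("*.mp3")}
transcript_stems = {p.stem for p in DATA_DIR.glob("*.txt")}

# Flag any recording without a transcript (and vice versa) before use.
missing_transcripts = sorted(audio_stems - transcript_stems)
missing_audio = sorted(transcript_stems - audio_stems)

print(f"{len(audio_stems)} recordings, {len(transcript_stems)} transcripts")
if missing_transcripts or missing_audio:
    print("Unpaired files:", missing_transcripts, missing_audio)
```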
The complete dataset of simulated conversations comprised 78.7 percent respiratory cases, 16.9 percent musculoskeletal, 2.2 percent gastrointestinal, 1.8 percent cardiac, and 0.4 percent dermatological.
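Applied to the 272 files, those percentages work out to roughly 214 respiratory, 46 musculoskeletal, 6 gastrointestinal, 5 cardiac, and 1 dermatological case, as the short calculation below shows (the counts are derived from the reported figures, not stated in the study):

```python
total = 272
shares = {"respiratory": 0.787, "musculoskeletal": 0.169,
          "gastrointestinal": 0.022, "cardiac": 0.018,
          "dermatological": 0.004}

# Approximate per-category case counts implied by the reported percentages.
for category, share in shares.items():
    print(f"{category}: ~{round(share * total)} cases")
# respiratory: ~214, musculoskeletal: ~46, gastrointestinal: ~6,
# cardiac: ~5, dermatological: ~1 (sums to 272)
```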
These data can be utilized in multiple ways, though use cases are limited by the small number of non-respiratory illnesses, the researchers noted. For example, the audio recordings could be used to test the accuracy of transcription tools and speech recognition software, as well as to develop tools that detect and correct speech-to-text errors.
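Testing a transcription tool against the corrected transcripts typically means computing a word error rate (WER). The sketch below uses the open-source jiwer library, which is one common choice rather than anything the study specifies, with toy strings standing in for a real transcript and a tool's output:

```python
import jiwer  # pip install jiwer

# Hypothetical example: the corrected transcript serves as the reference,
# and the speech recognizer's output is the hypothesis under test.
reference = "the patient reports a dry cough lasting three weeks"
hypothesis = "the patient reports a dry cough lasting free weeks"

# WER = (substitutions + deletions + insertions) / words in the reference
error_rate = jiwer.wer(reference, hypothesis)
print(f"word error rate: {error_rate:.2%}")
```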
The transcript text files could be annotated with tags to develop Named-Entity Recognition (NER) tools and train natural language processing (NLP) algorithms to help build educational models, such as avatars to train medical students for OSCEs.
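In practice, such annotation usually takes the form of labeled character spans over the transcript text, which can then train an NER model. The following sketch uses spaCy as one possible toolkit; the study does not name a framework, and the labels (SYMPTOM, DURATION) and training sentence are illustrative only:

```python
import spacy
from spacy.training import Example

# Hypothetical annotation: (start, end, label) character spans over a
# transcript line. Label names are illustrative, not from the study.
TRAIN_DATA = [
    ("I have had a dry cough for three weeks",
     {"entities": [(13, 22, "SYMPTOM"), (27, 38, "DURATION")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for start, end, label in ann["entities"]:
        ner.add_label(label)

# Train for a few passes over the toy data; real use would need far more
# annotated examples and a held-out evaluation set.
optimizer = nlp.initialize()
for _ in range(20):
    for text, ann in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer)

doc = nlp("She has had a dry cough for three weeks")
print([(ent.text, ent.label_) for ent in doc.ents])
```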
Overall, the dataset helps fill a significant need in the development of medical AI, particularly models designed for free-text functions such as symptom extraction and disease classification, according to the researchers.