When I was a #radiology resident, I often heard, “One view is no view.” When generating radiology reports for most exam types, one image is no image. Although GPT-4V is an impressive technology, it is important to exercise extreme caution when using it for medical imaging. OpenAI was likely aware that many individuals would test the capabilities of their #GPT4V model on medical images. To their credit, they stated (https://buff.ly/3rAHQvv), "[We] do not consider the current version of GPT-4V to be fit for performing any medical function or substituting professional medical advice, diagnosis, or treatment, or judgment." However, “The Dawn of LMMs” paper (https://buff.ly/46zzgvS) claimed “the effectiveness of GPT-4V in medical image understanding,” where they “conducted a detailed investigation into the application of GPT-4V in radiology report generation,” with this being one of the “accurate examples.”

Several posts (e.g., https://buff.ly/46M41xR) have identified some of the errors and shortcomings, and here’s my take as an MSK radiologist. As mentioned in the article (https://buff.ly/3MyIgt1) by Pranav Rajpurkar and Matthew Lungren MD MPH, a new generation of generalist medical AI models is coming. In this regard, Microsoft (https://buff.ly/3JmdaE5) and Google (https://buff.ly/3ZXw6zV) have presented some interesting developments.

Radiology recently published an article showing, despite limitations, the potential benefit of large multimodal models (LMMs) (https://buff.ly/3POZN1w), with well-balanced commentary by Felipe Kitamura, MD, PhD and Eric Topol, MD (https://buff.ly/3Q7DuFO) that sparked some great discussions initiated by Rick Abramson, MD, MHCDS, FACR (https://buff.ly/3FmBweg). Clinical context and priors will be a part of the #LMMs. After all, that’s how we radiologists practice – every day.
If this blog is any indication (https://buff.ly/48Tht4H), you can expect more LMMs to be developed that go beyond the limitations of relying solely on datasets like MIMIC-III (https://buff.ly/3QdlZnr). ⚠ Be aware that there may be some completely nonsensical papers, such as XrayGPT. #AI has the potential to revolutionize medical imaging despite challenges. We need to look beyond efficiency if we want greater clinical adoption of AI and also consider the radiologists' dilemma, as pointed out by Saurabh Jha in his excellent article (https://buff.ly/48CsWpv), which is why I also agree with Nina Kottler, MD, MS when she says we should focus on optimizing the teams of radiologists + AI and prioritize education.

It is an exciting time to be involved in research and development, but it is crucial for vendors and researchers in this field to have radiologists on their team to ensure that the technology is safe and effective. It's important to ask the right questions, be aware of the limitations, and avoid overhyping the potential of AI in healthcare. #GenerativeAI #GenAI #InteractiveAI #ChatGPT
AI in Radiology Practices
-
-
A new study published in JAMA Network Open shows that a generative AI model can generate chest X-ray reports of similar clinical accuracy and textual quality to radiologist reports, while providing higher textual quality than teleradiologist reports. The study also found that the AI model could identify findings missed by radiologists in a handful of cases. These findings suggest that generative AI has the potential to play a valuable role in chest X-ray interpretation, especially in settings where radiologists are scarce or time-constrained. Generative AI models can be used to generate chest X-ray reports in a variety of ways. For example, they can be used to generate preliminary reports that radiologists can review and finalize. Or, they can be used to generate reports for patients in remote areas where radiologists are not available. The use of generative AI in chest X-ray interpretation has the potential to improve the quality and efficiency of care for all patients.
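The preliminary-report workflow described above can be sketched as a small human-in-the-loop pipeline. This is a minimal illustration, not the study's actual system: `DraftReport`, `draft_report`, and `finalize` are hypothetical names, and the model call is replaced by a placeholder.

```python
from dataclasses import dataclass, field

@dataclass
class DraftReport:
    """AI-drafted chest X-ray report awaiting radiologist sign-off."""
    findings: str
    impression: str
    finalized: bool = False
    radiologist_edits: list = field(default_factory=list)

def draft_report(image_id: str) -> DraftReport:
    # Stand-in for a report-generation model call; a real system would
    # send the image to a vision-language model here (hypothetical).
    return DraftReport(
        findings=f"[model findings for study {image_id}]",
        impression="[model impression]",
    )

def finalize(report: DraftReport, edits: list) -> DraftReport:
    # The radiologist reviews, edits, and signs off; the AI draft is
    # never released without this human step.
    report.radiologist_edits.extend(edits)
    report.finalized = True
    return report

report = finalize(draft_report("CXR-001"), ["Clarified effusion laterality"])
```

The key design point is that the AI output is only ever a draft: release is gated on the `finalized` flag set by a human reviewer.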
-
Today, on World Cancer Day, we recognize the profound impact cancer has on individuals and families worldwide. My father had stage IIIB adenocarcinoma of the lung, with his left upper lobe removed, and my uncle succumbed to small cell lung cancer. Both were non-smokers. These stories underscore the urgency of advancing our detection methods. It's a personal mission for many, driven by the hope that through technology, particularly the fusion of Knowledge AI and Big Data AI, we can unveil these silent killers early enough to make a difference.

Here's a proposed 10-step protocol for deploying an algorithm capable of early detection of solitary lung nodule cancer, leveraging blood biomarkers, radiology, and other modalities:

1. Data Collection and Integration: Gather extensive datasets covering various patient demographics and stages of lung cancers.
2. Big Data Infrastructure: Develop efficient data handling for structured and unstructured data.
3. Knowledge AI Models: Utilize medical knowledge to enhance AI models.
4. Machine Learning and Deep Learning: Apply AI techniques for identifying early-stage cancer patterns.
5. Radiology Image Analysis: Train AI for advanced image recognition of lung scans.
6. Blood Biomarker Detection: Develop algorithms for non-invasive blood test analysis.
7. Predictive Modeling: Personalize risk assessments using predictive models.
8. Clinical Validation: Ensure model accuracy through extensive clinical trials.
9. Integration into Clinical Workflows: Collaborate with healthcare providers to incorporate AI into existing processes.
10. Continuous Learning and Improvement: Establish a system for regular AI model updates based on new data and discoveries.

By following these steps, we can harness AI's power to transform early lung cancer detection, potentially saving countless lives. The fusion of Knowledge AI and Big Data AI offers hope, turning silent stories into beacons of progress. Through early detection, we aspire to beat cancer.
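The imaging, biomarker, and predictive-modeling steps of the protocol above can be sketched as a simple late-fusion risk score. Everything here is an illustrative assumption: the feature functions, thresholds, marker names, and fusion weight are toy values, not a validated clinical algorithm.

```python
def imaging_risk(nodule_diameter_mm: float, spiculated: bool) -> float:
    """Toy imaging-derived risk for a solitary nodule (illustrative only):
    larger and spiculated nodules score higher, capped at 1.0."""
    score = min(nodule_diameter_mm / 30.0, 1.0)
    return min(score + (0.2 if spiculated else 0.0), 1.0)

def biomarker_risk(marker_levels: dict) -> float:
    """Toy blood-biomarker score: mean of pre-normalized marker levels
    (marker names below are hypothetical inputs)."""
    return sum(marker_levels.values()) / len(marker_levels)

def fused_risk(imaging: float, biomarkers: float, w_imaging: float = 0.6) -> float:
    """Late fusion of the two modalities into one personalized risk
    (the weighting is an arbitrary illustrative choice)."""
    return w_imaging * imaging + (1 - w_imaging) * biomarkers

risk = fused_risk(imaging_risk(18.0, True), biomarker_risk({"CEA": 0.4, "CYFRA": 0.3}))
```

A deployed system would learn these weights from data (steps 4 and 7) and would only surface the fused score after the clinical-validation step, but the late-fusion structure is the same.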
-
I am very excited to announce the latest additions to the Med-Gemini family of models! In our latest research, we bring generative #AI to multimodal #medicine including reporting 3D #radiology scans for the first time. We explore potential across 2D #radiology / #pathology / #dermatology / #retina images and outcomes prediction from #genomics using our 3 latest Med-Gemini models. Some highlights:

🧪 state-of-the-art (SOTA) AI-based chest X-ray (CXR) report generation based on expert evaluation, exceeding previous best results across two separate datasets by absolute margins of 1% and 12%, where 57% and 96% of AI reports on normal cases, and 43% and 65% on abnormal cases, are evaluated as "equivalent or better" than the original radiologists' reports

🧪 first ever large multimodal model-based report generation for 3D computed tomography (CT) volumes using Med-Gemini-3D, with 53% of AI reports considered clinically acceptable, although additional research is needed to meet expert radiologist reporting quality

🧪 surpasses the previous best performance in CXR visual question answering (VQA) and performs well in CXR classification and radiology VQA, exceeding SOTA or baselines on 17 of 20 tasks

🧪 in histopathology, ophthalmology, and dermatology image classification, Med-Gemini-2D surpasses baselines across 18 out of 20 tasks and approaches task-specific model performance

🧪 beyond imaging, Med-Gemini-Polygenic outperforms the standard linear polygenic risk score-based approach for disease risk prediction and generalizes to genetically correlated diseases on which it has never been trained.

Check out our paper for more details: https://lnkd.in/gg3xqzPi

Med-Gemini is the result of a months-long sprint by a large group of incredibly talented and hard-working folks: Lin Yang Shawn Xu Timo Kohlberger Yuchen Zhou Ira Ktena, PhD Atilla K. 
Faruk Ahmed Farhad Hormozdiari Tiam Jaroensri Eric Wang Ellery Wulczyn Fayaz Jamil Theo Guidroz Charles Lau, MD, MBA Siyuan Qiao Yun Liu Akshay Goel, M.D. Kendall Park Arnav Agharwal Nicholas George Yang Wang Ryutaro Tanno David Barrett Wei-Hung Weng S. Sara Mahdavi Khaled Saab Tao Tu Dr Sreenivasa Raju Kalidindi Mozzi Etemadi Jorge Cuadros OD PhD Greg Sorensen Yossi Matias Katherine Chou Greg Corrado Joëlle Barral Shravya Shetty David Fleet S. M. Ali Eslami Daniel Tse Shruthi Prabhakara Cory McLean David Steiner Rory Pilgrim Christopher Kelly Shekoofeh Azizi Daniel Golden Google Health #GoogleHealth #AI #machinelearning #medicine #radiology #medicalai
-
The FDA approved 873 AI healthcare algorithms in 2025. That's more than the previous 5 years combined.

We're not preparing for an AI revolution in healthcare. We're living through it right now. But most healthcare leaders are missing the real story behind these numbers. Here's what I learned after tracking every single FDA AI approval:

Google just got clearance for cardiac arrest detection on smartwatches
↳ Pixel Watch 3 can detect loss of pulse and call emergency services
↳ This isn't just a cool feature - it's life-saving technology
↳ Consumer devices are becoming medical devices

Microsoft launched 3D medical imaging AI that reads scans faster than radiologists
↳ MedImageParse processes complex 3D images in seconds
↳ Radiologists can focus on interpretation instead of analysis
↳ Diagnosis speed just increased by 10x

But here's the part nobody's talking about: The FDA released comprehensive AI guidance in January 2025. This provides the first complete framework for AI device lifecycle management. Translation: The regulatory uncertainty that killed healthcare AI startups for years is over.

What this means for every healthcare organization:
1/ AI integration is no longer experimental - it's strategic
2/ Competitive advantage will come from implementation speed
3/ Organizations that wait will be left behind permanently

The companies building AI-first healthcare workflows today will dominate the next decade. The companies waiting for "proof of concept" will become footnotes. Which camp is your organization in?

💭 Comment with your experience using any of these devices
♻️ Repost if you believe AI will transform healthcare delivery
👉 Follow me for realistic takes on healthcare technology adoption
-
Researchers at Johns Hopkins Kimmel Cancer Center have developed an AI-powered liquid biopsy for early lung cancer detection. This test analyzes DNA fragment patterns in blood and, validated through prospective studies, identifies high-risk individuals needing follow-up CT scans. This advancement could boost screening rates and reduce mortality. https://lnkd.in/gnzmf6_Q
-
Thank you for letting me share my thoughts as I finish the weekend outpatient #MRI list... 😬 😅

At our site the volume of #ARIA studies continues to increase. Subjective comparison with prior studies adds a great deal of time to the day, as does the process of adjudication as two dozen neuroradiologists and trainees are reading these with variable sensitivity and different approaches (e.g. we get SWI and GRE - which do you look at first? Priors are often added much later and addendums are requested. We are asked to compare studies that are standard MRI brain rather than the ARIA protocol - how do you handle this?). We also suffer from difficulties with the required reporting template - a problem not unique to us, as I learned moderating the discussion of subject experts at the recent International Neuroimaging Symposium 2025 Ana M. Franceschi, MD PhD Tammie Benzinger. Through the American College of Radiology Neuro Commission Education and Dementia Committees we are tackling this issue.

No challenge in radiology has changed my mind about #AI assistance before this one. The need for consistent interpretation and reporting is vital here - for patient care and for radiologist sanity. We all want to do it perfectly, but how can we when we - the subspecialty trained "leaders" in the field - can't agree?

PS - kudos to icometrix for the foresight to integrate the radiology report with the study output - so helpful. 👏 👏 And as CTO Dirk Smeets is a coauthor of the Radiological Society of North America (RSNA)-ACR Common Data Elements for ARIA, we know it will endure as part of the multidisciplinary, multifaceted collaboration in our efforts to standardize language in #radiology and all of medicine. https://lnkd.in/gwVZPsuW

#Alzheimersdisease #ARIA #Patientcare #Neuroradiology #Radiology #Neurology #AI #CDE #Structuredreporting #MedicalEducation
-
Key Factors for Implementing AI-Enhanced Clinical Information Systems Successfully

Based on industry reports and first-hand observations, it's evident that the integration of Artificial Intelligence (AI) into healthcare is well underway. However, the gap between a successful implementation and an unsuccessful one can be wide. Here are some key factors that maximize the probability of a successful implementation of an AI-enhanced clinical information system.

Key Factors for Successful Implementation:
1. Organizational Leadership, Commitment, and Vision 👓: Leadership buy-in is crucial. A clear organizational strategy for AI needs to be in place to guide the implementation process.
2. Improving Clinical Processes and Patient Care 👩⚕️: The end goal should be better patient outcomes. Make sure the AI system aligns with this objective.
3. Involving Clinicians in Design and Modification 💻: Those who will use the system should have input into its design. This ensures relevance and encourages adoption.
4. Maintaining or Improving Clinical Productivity 📈: The new system should not disrupt workflow. Ideally, it should increase efficiency, perhaps by automating routine tasks.
5. Building Momentum and Support Among Clinicians 🌟: Early wins can build momentum. Open communication and training are key for securing clinician support.

A 🏥 Vignette: Radiology at Hospital X vs Hospital Y

✅ Hospital X: Dr. Smith, head of radiology, involved her team from the start. They pinpointed specific challenges that AI could address. The result: diagnostic accuracy improved, and image reading time dropped by 25%. The department's capacity increased, patient wait times fell, and the team's initial skepticism turned into strong support for the AI system.

⛔ Hospital Y: In contrast, Hospital Y's administration relied on an external committee with no clinical experience. Dr. Johnson, a senior radiologist, felt sidelined. The system generated multiple false positives, creating bottlenecks and reducing efficiency. Morale dropped, and the project was ultimately abandoned.

These contrasting stories underline the importance of each key factor in implementing AI-enhanced clinical systems. Hospital X succeeded due to its thoughtful approach, while Hospital Y serves as a cautionary tale of what can go wrong when these factors are ignored. #HealthcareAI #ClinicalInformatics #Leadership #PatientCare #ImplementationSuccess
-
Penda Health and OpenAI released a paper yesterday. Here are a few takeaways I pulled together from an implementation lens:

📍 Location: Nairobi, Kenya
🏥 Clinics involved: 15
🧑⚕️ Providers involved: 100+ mid-level providers
👥 Patients included: 39,849 total visits (across AI and non-AI groups)
📊 External review: 108 independent physician reviewers (29 based in Kenya). Reviewed a random sample of 5,666 patient visits to assess diagnostic and treatment errors. Used for evaluation only.

🛠️ Product Implementation: AI Consult
- The earliest version of AI Consult was discontinued; it required clinicians to actively prompt it, which disrupted workflow
- In 2025, Penda redesigned the tool to act as a passive “safety net” during diagnosis and treatment planning. Clinicians remained in control and had final say
- A traffic light system (red/yellow/green) flagged potential errors in real time, based on Kenyan national clinical guidelines, Penda’s internal protocols, and Kenyan epidemiological context
- Clinician notes (with patient identifiers removed) were shared with the OpenAI API at key points during the visit to generate feedback. Patients provided consent and could withdraw their data at any time
- GPT-4o was selected for its lower latency, enabling faster responses during live patient sessions. At the time of implementation, more advanced models hadn’t been released. o3 has since become the highest-performing model via HealthBench
- A key implementation challenge: deciding when to input feedback. Some clinicians document asynchronously, so timing affected whether suggestions were helpful or disruptive

📈 Results
- In the AI group, clinicians made fewer mistakes across the board, from asking the right questions to ordering tests, making diagnoses, and choosing treatments
- Projected impact: ~22,000 diagnostic and ~29,000 treatment errors potentially averted annually
- Red alerts dropped from ~45% to ~35% of visits over time, suggesting clinicians learned with use
- 100% of surveyed clinicians said AI Consult helped, though the survey method wasn’t detailed
- No statistically significant difference was found in longer-term patient outcomes (measured 8 days later)

A follow-up thought: The paper makes the case that implementation, not model capability, is now the bigger challenge. That seems true for the context of this study, but model capability would still have to be evaluated safely in more specialized care settings like oncology or neurology.

Thanks Karan Singhal, Robert Korom, Ethan Goh, MD for sharing this work publicly! Paper in the comments.
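The red/yellow/green safety-net flow can be sketched as follows. This is a hedged illustration only: in the real deployment the screening is done by a guideline-aware LLM call, whereas `triage_note`, its keyword matching, the one-hit/two-hit thresholds, and the example flags are hypothetical stand-ins used just to show the alert flow.

```python
from enum import Enum

class Alert(Enum):
    """Traffic-light feedback surfaced to the clinician, who retains final say."""
    GREEN = "no concern"
    YELLOW = "review suggested"
    RED = "likely error - stop and reconsider"

def triage_note(note: str, guideline_flags: list) -> Alert:
    # Toy stand-in for the model call that screens a de-identified
    # clinician note against guideline-derived red flags.
    hits = [flag for flag in guideline_flags if flag.lower() in note.lower()]
    if not hits:
        return Alert.GREEN
    # Illustrative threshold: one guideline hit -> yellow, more -> red.
    return Alert.YELLOW if len(hits) == 1 else Alert.RED

alert = triage_note(
    "Child with fever and cough; prescribing amoxicillin without weight-based dosing.",
    ["without weight-based dosing", "fever"],
)
```

The design mirrors the paper's "passive safety net" framing: the tool never blocks an action, it only raises an alert level that the clinician can accept or override.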
-
Using AI to Predict the Spread of Lung Cancer. In a novel pilot study of non-small cell lung cancer (NSCLC) patients, a team of scientists from Caltech and Washington University School of Medicine in St. Louis used artificial intelligence (AI) algorithms to predict which cancer cases are likely to metastasize. March 06, 2024

Excerpt: Predictions about the progression of lung cancer have important implications for an individual patient's life. Physicians treating early-stage NSCLC patients face the extremely difficult decision of whether to intervene with expensive, toxic treatments, such as chemotherapy or radiation, after a patient undergoes lung surgery. In some ways, this is the more cautious path, because more than half of stage I–III NSCLC patients eventually experience metastasis to the brain; many others, however, do not, and for those patients such difficult treatments are wholly unnecessary. In the new study, recently published in the Journal of Pathology, the collaborators show that AI holds promise as a tool that could one day aid physicians in this decision-making.

The Journal of Pathology, first published 04 March 2024: "AI-guided histopathology predicts brain metastasis in lung cancer patients" by Haowen Zhou, Mark Watson, Cory T Bernadt, Steven (Siyu) Lin, Chieh-yu Lin, Jon H Ritter, Alexander Wein, Simon Mahler, Sid Rawal, Ramaswamy Govindan, Changhuei Yang, Richard J Cote
https://lnkd.in/e5bJtR8T
https://lnkd.in/exiuNAJi