New AI Tool Helps Doctors Read Chest X‑Rays Better
Posted on 21 Jul 2025
The U.S. health care system ranks poorly on critical health indicators, including life expectancy, and patients often seek better outcomes at lower cost. Doctors aim for precise, first-time diagnoses, especially with tools like chest X-rays, which are essential for identifying conditions such as pneumonia, tuberculosis, heart problems, and even some gastrointestinal issues. However, these images can be difficult to interpret, even for experienced physicians, and readings may miss rare or emerging diseases, as seen early in the COVID-19 pandemic. There is a growing need for accurate, accessible, and equitable diagnostic support tools to help overcome these limitations. A team of researchers has now developed a new artificial intelligence (AI) tool that improves chest X-ray interpretation by reducing diagnostic errors, speeding up assessments, and increasing global access to high-quality diagnostic software.
The new AI tool, called Ark+, was developed by a team at Arizona State University (Tempe, AZ, USA) to assist in the interpretation of chest X-rays using open science principles. The researchers trained the model on more than 700,000 X-ray images drawn from multiple global datasets and enhanced it by incorporating detailed physician notes, expert labels that proprietary systems typically omit. This fully supervised approach allowed Ark+ to accrue and reuse medical knowledge more effectively than traditional self-supervised models. Ark+ is built to detect a wide range of lung problems, identify rare diseases from limited samples, and adapt to new diagnostic tasks without requiring full retraining. It was also designed to run securely, remain resilient to data inconsistencies, and resist bias, making it a versatile tool for real-world deployment in health care systems.
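To make the general idea concrete, the sketch below shows one common way a fully supervised, multi-dataset setup like the one described can be organized: a shared image encoder trained across several expert-labeled chest X-ray datasets, each with its own classification head, with a new task added later by attaching a fresh head rather than retraining the whole model. The dataset names, label counts, small encoder, and training loop are illustrative assumptions only, not the actual Ark+ architecture or code.

```python
# Minimal sketch (not the authors' code): a shared encoder supervised on several
# expert-labeled X-ray datasets, each with its own head. All names and sizes are
# placeholders for illustration.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small CNN standing in for the image backbone that accrues knowledge."""
    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.features(x)

# Hypothetical datasets: name -> number of expert-labeled findings (multi-label).
DATASETS = {"dataset_a": 14, "dataset_b": 5, "dataset_c": 27}

encoder = SharedEncoder()
heads = nn.ModuleDict({name: nn.Linear(128, n) for name, n in DATASETS.items()})
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-4
)
criterion = nn.BCEWithLogitsLoss()  # multi-label supervision from expert labels

# Cycle through the labeled datasets: the shared encoder sees all of them,
# while each head only sees its own label space (synthetic batches here).
for step in range(3):
    for name, n_labels in DATASETS.items():
        images = torch.randn(8, 1, 224, 224)                 # placeholder X-ray batch
        labels = torch.randint(0, 2, (8, n_labels)).float()  # placeholder expert labels
        logits = heads[name](encoder(images))
        loss = criterion(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Adapting to a new task without full retraining: freeze the shared encoder
# and train only a new head on the new dataset's labels.
for p in encoder.parameters():
    p.requires_grad = False
new_head = nn.Linear(128, 3)  # e.g., a small emerging-disease label set
```

In this kind of setup, the shared encoder is where reusable medical knowledge accumulates, which is why a new or rare-disease task can be handled by fitting a lightweight head instead of repeating the full training run.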
In a proof-of-concept study published in Nature, Ark+ outperformed proprietary tools developed by major tech companies such as Google and Microsoft, particularly in diagnosing both common and rare lung conditions, including COVID-19 and avian flu. Its open-access design allows other researchers to build on it and tailor it to local clinical environments. The team aims to commercialize the tool for hospitals and adapt it to other imaging types, such as CT and MRI. By making all models and code publicly available, the researchers are working toward a future in which safe, intelligent, and accessible AI can support diagnosis in all health care settings, especially in underserved or rural regions.
“We believe in open science, so we used public data and a global data set as we think this will more quickly develop the AI model,” said Jianming “Jimmy” Liang, lead author of the Ark+ study. “By making this model fully open, we are inviting others to join us in making medical AI more fair, accurate, and accessible.”
Related Links:
Arizona State University