News | Articles | December 19, 2025

NIH’s investigational AI technology unlocks faster, clearer retinal imaging with 75% less data

Author(s): Logan Lutton

Key Takeaways

  • RRTGAN reduces retinal scan data needs by 75%, improving pixel resolution and enabling early disease detection.
  • The technology enhances early detection of age-related macular degeneration and diabetic retinopathy, allowing timely intervention.

The Residual in Residual Transformer Generative Adversarial Network (RRTGAN) generates high-quality, cellular-level retinal images from minimal data inputs, overcoming the limitations of slower, more burdensome imaging methods.

Researchers at the National Institutes of Health (NIH) have developed a novel artificial intelligence and imaging method for retinal scans that reduces the amount of data needed by three-fourths, according to results recently published in npj Artificial Intelligence. The technology is now one step closer to use in a practice setting.

The AI method, called the Residual in Residual Transformer Generative Adversarial Network (RRTGAN), resolves individual cone photoreceptor cells in the eye when combined with imaging. The technology needs only a quarter of the data normally required to make a 3D map of the eye in a clinical setting, while improving pixel resolution.

“Getting the most advanced ophthalmic imaging technologies into the hands of healthcare providers will vastly improve the ability to detect retinal diseases earlier, and guide treatments to prevent vision loss,” Johnny Tam, Ph.D., investigator at NIH’s National Eye Institute (NEI) and senior author of the study, said in a news release.

Detecting cellular-level changes earlier means clinicians can intervene with treatments sooner, potentially preserving patient vision more effectively. The technology helps spot subtle changes in the cone outer segment tips (COST) layer, which are critical biomarkers for early age-related macular degeneration detection before significant vision loss occurs. Additionally, rapid, high-resolution scans allow for more frequent and efficient monitoring of the microvascular changes in the retina caused by diabetes.

Current methods of adaptive optics imaging, called adaptive optics optical coherence tomography (AOOCT), are lengthy and can be burdensome for patients and clinicians. They require the patient to sit very still while hundreds of images of the retina are taken, generating large volumes of data.

Clinicians can opt to take fewer images at higher resolution, but this could compromise diagnosis because the scan is not as thorough.

To test the effectiveness of RRTGAN, Tam and his colleagues recruited four participants and imaged four eyes, comparing RRTGAN with two other AI methods, ESRGAN and SwinIR. RRTGAN consistently achieved superior results in peak signal-to-noise ratio (PSNR), demonstrating better pixel-level fidelity to the ground-truth images. It also achieved a superior Fréchet inception distance (FID) score of 85.8, compared with ESRGAN's 104.4 and SwinIR's 127.8 (lower is better).
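PSNR is a standard measure of pixel-level fidelity: it compares a restored image against the reference pixel by pixel, with higher decibel values indicating a closer match. The sketch below (illustrative only, not the study's code) shows how PSNR is typically computed for 8-bit images.

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in decibels; higher means the
    restored image is closer, pixel for pixel, to the reference."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit "images": a reference and a slightly noisy restoration
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))
```

Because PSNR depends only on mean squared pixel error, it rewards exact pixel agreement but can miss perceptual differences, which is why the study also reports perceptual metrics.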

RRTGAN was also superior on perceptual metrics designed to align with human visual judgment: Deep Image Structure and Texture Similarity (DISTS) and Learned Perceptual Image Patch Similarity (LPIPS).

Rather than comparing raw pixels, both metrics compare a restored image with the original using features from a trained neural network: DISTS focuses on similarity of structure and texture, while LPIPS measures how perceptually different two image patches appear to a human observer.

When compared with ESRGAN and SwinIR, RRTGAN achieved a 29% reduction in DISTS error and a 31% reduction in LPIPS error, meaning its restored images looked more natural and accurate to human observers. Critically, analyses of cell spacing confirmed that the AI did not hallucinate data, supporting clinical reliability.
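DISTS and LPIPS are lower-is-better error metrics, so a percentage reduction is computed relative to the competing method's score. The raw scores below are hypothetical, since the article reports only the percentages.

```python
def error_reduction(baseline, improved):
    """Percentage reduction in a lower-is-better error metric."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical DISTS scores for a competing method vs. RRTGAN
print(round(error_reduction(0.20, 0.142), 1))  # → 29.0 (a 29% reduction)
```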

Newsletter

Get the latest industry news, event updates, and more from Managed Healthcare Executive.
