Wednesday, May 11, 2016

Deep Learning Spots Disease Early Using Chest X-Rays

Researchers from the National Institutes of Health in Bethesda, Maryland, have developed a deep learning-powered framework that detects diseases from chest X-rays. Their system then creates detailed captions for the X-rays, making it easier for doctors to screen patients and detect critical diseases early. The team used our CUDA programming model and GPUs to train their neural network, which identifies diseases and describes their context, such as location, severity, size, or affected organs.

The NIH researchers used a public dataset of chest X-ray images to train a convolutional neural network to recognize diseases. Then, in what may be a first for radiology images, they trained a recurrent neural network to describe the context of the disease. The paired networks, built using our cuDNN libraries and the Torch deep learning framework, produce richer, more accurate image annotation results. They used NVIDIA Tesla K40 GPUs to train their model, gaining significant speedups.
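The pairing described above follows the familiar encoder-decoder captioning pattern: a convolutional network condenses the X-ray into a feature vector, and a recurrent network decodes that vector into a sequence of caption tokens. The toy sketch below illustrates only the decoding loop in pure Python; all dimensions, weights, and token IDs are made-up stand-ins, not the NIH team's actual Torch/cuDNN model.

```python
# Toy sketch of the CNN-encoder / RNN-decoder captioning pattern.
# Everything here (sizes, weights, tokens) is a hypothetical stand-in
# for illustration; the real system was built with Torch and cuDNN.
import math
import random

random.seed(0)

FEAT, HID, VOCAB = 4, 3, 5          # toy dimensions (assumptions)
EOS = 0                             # token 0 ends the caption

def rand_matrix(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)]
            for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

W_init = rand_matrix(HID, FEAT)     # projects image features into RNN state
W_hh   = rand_matrix(HID, HID)      # recurrent weights
W_emb  = rand_matrix(HID, VOCAB)    # embeds the previous output token
W_out  = rand_matrix(VOCAB, HID)    # hidden state -> vocabulary logits

def caption(image_features, max_len=10):
    """Greedy decode: seed the RNN with CNN image features, emit tokens."""
    h = [math.tanh(x) for x in matvec(W_init, image_features)]
    token, out = EOS, []
    for _ in range(max_len):
        one_hot = [1.0 if i == token else 0.0 for i in range(VOCAB)]
        pre = [a + b for a, b in zip(matvec(W_hh, h),
                                     matvec(W_emb, one_hot))]
        h = [math.tanh(x) for x in pre]
        logits = matvec(W_out, h)
        token = max(range(VOCAB), key=lambda i: logits[i])
        if token == EOS:            # stop when the end token is produced
            break
        out.append(token)
    return out

# Pretend these four numbers are CNN features extracted from an X-ray.
tokens = caption([0.2, -0.1, 0.7, 0.3])
print(tokens)
```

In the real system each emitted token would be a word in the generated report; here the loop just shows how the image features seed the recurrent state and how decoding proceeds one token at a time.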