Thursday December 29, 2016

How Deep Learning Is Reinventing Hearing Aids

More than 75 percent of people who need hearing aids don't wear them. Their biggest frustration: hearing aids don't work well in noisy situations.

To improve hearing aids, Ohio State University researcher DeLiang Wang developed a deep learning program that separates speech from noise. As a first step, he and his team trained a neural network to use volume, frequency, and other qualities of sound to tell speech apart from noise.

Next, the researchers had to teach the network what speech sounds like, along with a wide range of background noises. The training material included a standard set of IEEE spoken sentences, recordings from a hospital cafeteria, and 10,000 movie sound effects, everything from exploding bombs and breaking glass to the everyday sounds you'd hear in a living room or kitchen.

To accelerate training, the researchers used the CUDA parallel computing platform, NVIDIA TITAN X GPUs, and cuDNN with the TensorFlow deep learning framework.
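The pipeline described above, acoustic features such as volume and frequency feeding a neural network that labels each time-frequency unit as speech or noise, can be sketched in miniature. The team's actual system was built with TensorFlow and cuDNN on GPUs; the NumPy toy below is only an illustrative assumption about the shape of the approach (framing, log-spectrogram features, and a small untrained network emitting a per-frequency speech mask), not the published architecture.

```python
import numpy as np

def frame_signal(x, frame_len=256, hop=128):
    # Slice the waveform into overlapping, Hann-windowed frames.
    n = 1 + (len(x) - frame_len) // hop
    win = np.hanning(frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * win for i in range(n)])

def log_spectrogram(frames):
    # Log-magnitude spectrum: captures the "volume and frequency"
    # cues mentioned in the article.
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MaskNet:
    """Tiny feedforward net: features -> per-frequency speech mask in [0, 1].
    Layer sizes and weights are illustrative, not the real model."""
    def __init__(self, n_in, n_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)

    def forward(self, feats):
        h = np.tanh(feats @ self.W1 + self.b1)
        # Each output in [0, 1]: how "speech-like" that time-frequency unit is.
        return sigmoid(h @ self.W2 + self.b2)

# Demo: a 440 Hz tone standing in for speech, buried in white noise.
rng = np.random.default_rng(1)
t = np.arange(16000) / 16000.0
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.normal(size=t.size)

feats = log_spectrogram(frame_signal(noisy))
net = MaskNet(n_in=feats.shape[1])
mask = net.forward(feats)
```

In a trained system, a mask like this would be learned from paired clean/noisy examples (the IEEE sentences mixed with the cafeteria and sound-effect noise) and then applied to the noisy spectrogram to suppress the noise-dominated units before resynthesizing audio.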