Thursday, August 25, 2016

Handing Over VR’s Toughest Challenge to GPUs

In the real world, our hands are our guides. We feel with them, we manipulate with them, we explore with them. We use them to eat, dress and primp ourselves, make a living, and connect with others. And yet, in the virtual world, we’re lucky if we can use them at all. A team of researchers at Purdue University hopes to change that with DeepHand, a deep learning-powered system for interpreting hand movements in virtual environments. By combining depth-sensing cameras with a convolutional neural network, trained on GPUs against a database of 2.5 million hand poses and configurations, the team has brought us a big step closer to using our full dexterity when interacting with 3D virtual objects.
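To make the idea concrete, here is a minimal sketch of how a convolutional network can map a single depth frame to 3D hand-joint positions. This is illustrative only: the layer sizes, the 96x96 input resolution, and the 21-joint output are assumptions for the example, not DeepHand’s published architecture.

```python
# Minimal sketch of a CNN that regresses hand-joint positions from a depth image.
# Illustrative assumptions: 96x96 single-channel depth input, 21 hand joints.
import torch
import torch.nn as nn

class HandPoseNet(nn.Module):
    def __init__(self, num_joints=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),   # depth input: 1 channel
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 96 -> 48
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 48 -> 24
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 24 -> 12
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 12 * 12, 512),
            nn.ReLU(),
            nn.Linear(512, num_joints * 3),               # (x, y, z) per joint
        )

    def forward(self, depth):                             # depth: (N, 1, 96, 96)
        return self.regressor(self.features(depth))

# Example: one synthetic 96x96 depth frame -> 21 predicted 3D joint positions.
model = HandPoseNet()
pose = model(torch.randn(1, 1, 96, 96)).view(1, 21, 3)
```

A network like this learns, from millions of labeled examples, which depth patterns correspond to which hand configurations, which is what lets a system like DeepHand track fingers without gloves or markers.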

DeepHand fulfills the longtime vision of its lead researcher, Karthik Ramani, the Donald W. Feddersen Professor of Mechanical Engineering at Purdue. GPUs are helping the cause by speeding up the training of convolutional neural networks such as the one created for DeepHand. Ramani and his two graduate student researchers, Ayan Sinha and Chiho Choi, used NVIDIA GPUs to train their network, and Ramani said they were able to complete the process two to three times faster than if they’d used CPUs.
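The speedup comes from running the heavy matrix math of training on the GPU instead of the CPU. Here is a sketch of one GPU-accelerated training step, reusing the hypothetical HandPoseNet from the sketch above; the Adam optimizer and mean-squared-error loss are illustrative choices, not the team’s documented setup.

```python
# Sketch of a single GPU training step, assuming HandPoseNet from above and
# batches of (depth_frame, joint_positions) pairs. Loss/optimizer are assumptions.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = HandPoseNet().to(device)                  # move the model's weights to the GPU
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

def train_step(depth_batch, joint_batch):
    depth_batch = depth_batch.to(device)          # ship the input batch to the GPU
    joint_batch = joint_batch.to(device)
    optimizer.zero_grad()
    pred = model(depth_batch).view(joint_batch.shape)
    loss = loss_fn(pred, joint_batch)             # error vs. ground-truth joints
    loss.backward()                               # gradients computed on the GPU
    optimizer.step()
    return loss.item()
```

Because every convolution and gradient in the loop runs as parallel GPU kernels, sweeping through millions of training poses takes a fraction of the time it would on a CPU.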