Dissertation Talk: Efficient Robot Learning of Robust Grasping Policies from Synthetic Training Datasets
Presentation | May 7 | 9-10 a.m. | 310 Soda Hall
Rapid and reliable robot grasping of a wide variety of objects remains a Grand Challenge for robotics due to sensor noise, imprecise control, and partial observability. Deep neural networks trained on datasets of human-labeled or self-supervised grasps can be used to rapidly plan grasps across a diverse set of objects, but data collection is tedious and performance may plateau as the training dataset grows. To reduce data collection time, I propose to generate synthetic training datasets of millions of 3D point clouds and robot grasps using geometric models of grasp success and image formation. In this talk I present the Dexterity Network (Dex-Net), a framework for generating datasets by analyzing mechanical models of contact forces and torques under stochastic perturbations across thousands of 3D object CAD models. I describe generative models for training policies to lift and transport objects from a tabletop or cluttered bin using a parallel-jaw (two-finger) or suction-cup gripper. I explore methods for learning robust policies that transfer from simulation to reality and that can decide which gripper to use. To substantiate the method, I describe thousands of experimental trials on a physical robot which suggest that deep learning on synthetic Dex-Net datasets can be used to rapidly plan successful grasps across a diverse set of novel objects.
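The core idea of "analyzing mechanical models of contact forces and torques under stochastic perturbations" can be sketched as a Monte Carlo estimate of the probability that a grasp succeeds. The toy contact model below (a parallel-jaw grasp on a planar disk, with Gaussian noise on object pose and friction) and all function names are illustrative assumptions for exposition, not the actual Dex-Net implementation:

```python
# Hedged sketch: estimate robust grasp quality, P(success), by Monte Carlo
# sampling over stochastic perturbations of object pose and friction.
# The contact model is a deliberately simple stand-in: a parallel-jaw grasp
# closing along a fixed axis on a planar disk of known radius.
import math
import random

def grasp_succeeds(dy: float, mu: float, radius: float = 0.03) -> bool:
    """Toy antipodal check: the grasp axis must lie inside the friction
    cone at both contacts. For a disk whose center is offset dy meters
    from the grasp axis, the contact normal deviates from that axis by
    asin(|dy| / radius), so success requires that angle <= atan(mu)."""
    if abs(dy) >= radius:  # jaws miss the object entirely
        return False
    return math.asin(abs(dy) / radius) <= math.atan(mu)

def robust_quality(n_samples: int = 10000,
                   pose_sigma: float = 0.005,  # m, pose uncertainty
                   mu_mean: float = 0.5,       # nominal friction coefficient
                   mu_sigma: float = 0.1,
                   seed: int = 0) -> float:
    """Monte Carlo estimate of the probability of grasp success under
    Gaussian perturbations of object pose and friction coefficient."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        dy = rng.gauss(0.0, pose_sigma)
        mu = max(1e-3, rng.gauss(mu_mean, mu_sigma))  # keep friction positive
        successes += grasp_succeeds(dy, mu)
    return successes / n_samples
```

In this spirit, a synthetic dataset pairs each sampled grasp (and its rendered point cloud) with such a robustness score as the training label; a well-centered grasp under modest noise scores near 1, while a grasp near the friction-cone boundary scores much lower.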