Active Vision for Robotic Grasping 

     Jun Yang, Steven L. Waslander

Conventional robotic grasping relies on image data from a fixed camera, so its performance is limited by both data quality and occlusions. Equipped with a moving sensor, our active vision based system can correctly estimate the poses of multiple objects in complex environments for grasping. More specifically, whenever object poses cannot be recovered from the current camera view, the system predicts the next best camera pose and updates its knowledge of the environment, improving grasping robustness.
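The core idea of next-best-view prediction can be illustrated with a minimal sketch: score each candidate camera pose by how much unknown space it would reveal, and move to the highest-scoring one. The toy visibility model and all names below (`visible_voxels`, `next_best_view`, the half-space views) are illustrative assumptions, not the system described above.

```python
# Hypothetical minimal next-best-view sketch (not the paper's method):
# each candidate view is scored by the number of currently-unknown
# voxels it would observe, and the best-scoring view is selected.

def visible_voxels(view, voxels):
    """Toy visibility model: a view (axis, sign) 'sees' the voxels
    lying in its half-space. Voxel coordinates are 3-tuples."""
    axis, sign = view
    return {v for v in voxels if sign * v[axis] > 0}

def next_best_view(candidate_views, unknown_voxels):
    """Return the view maximizing expected information gain, measured
    here simply as the count of unknown voxels it would observe."""
    return max(candidate_views,
               key=lambda view: len(visible_voxels(view, unknown_voxels)))

if __name__ == "__main__":
    # Example scene: unknown space concentrated on the +x side
    unknown = {(1, 0, 0), (2, 1, 0), (1, -1, 1), (-1, 0, 0)}
    views = [(0, +1), (0, -1), (1, +1), (1, -1), (2, +1), (2, -1)]
    print(next_best_view(views, unknown))  # -> (0, 1), the +x view
```

A real system would replace the half-space test with ray casting against an occupancy map and weigh the information gain against the cost of moving the sensor.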

©2020 Toronto Robotics and AI Laboratory