We leverage fundamental computer vision principles and deep learning to advance automotive perception through 3D object detection: estimating the six-degrees-of-freedom pose and dimensions of objects of interest.
UAV/UGV teams show strong promise for a range of applications, thanks to their complementary capabilities of maneuverability and endurance. In this project, we demonstrate some of the first automated docking of drones on moving ground vehicles, and push the state of the art in quadrotor aerodynamic modeling, onboard absolute and relative pose estimation, and precision control.
Dynamic Camera Clusters (DCCs) are groups of cameras in which one or more cameras are mounted on an actuated mechanism, such as the gimbals available on most drones. DCCs enable active viewpoint manipulation: the cameras can be pointed at feature-rich areas, achieving higher accuracy in visual SLAM applications.
Conventional robotic grasping relies on image data from a fixed camera, and is limited by data quality and occlusions. Equipped with a moving sensor, our active-vision-based system achieves highly accurate performance in complex environments. Whenever object poses cannot be recovered from the current camera view, the system predicts the next best camera pose, improving its knowledge of the environment and thus the robustness of grasping.
One of the challenging aspects of incorporating deep neural networks into robotic systems is the lack of uncertainty measures associated with their output predictions. Recent work has identified aleatoric and epistemic uncertainty as two types of uncertainty in the outputs of deep neural networks, and has provided methods for their estimation. This project aims to resolve the challenges involved in estimating these two forms of uncertainty for a variety of perception tasks in robotics, including but not limited to object detection.
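One common way to separate the two uncertainty types is Monte Carlo sampling: run a stochastic network (e.g. with dropout kept active at test time) multiple times, take the variance of the predicted means as epistemic uncertainty, and the average of the predicted variances as aleatoric uncertainty. The sketch below illustrates this decomposition with a hypothetical toy model standing in for a real network; `mc_predict`, `toy_model`, and the noise values are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_predict(model, x, n_samples=50):
    """Run a stochastic model n_samples times. The model returns
    (mean, var): a predicted value and its predicted aleatoric variance.
    Epistemic uncertainty is the spread of the means across passes;
    aleatoric uncertainty is the average predicted data noise."""
    means, variances = zip(*(model(x) for _ in range(n_samples)))
    means = np.asarray(means)
    variances = np.asarray(variances)
    epistemic = means.var()       # variance across stochastic forward passes
    aleatoric = variances.mean()  # mean of per-pass predicted variances
    return means.mean(), epistemic, aleatoric

# Hypothetical stand-in for a dropout network: perturbs its output to
# mimic stochastic weights, and reports a fixed data-noise variance.
def toy_model(x):
    return x * 2.0 + rng.normal(0.0, 0.1), 0.04

pred, epistemic, aleatoric = mc_predict(toy_model, 1.5)
```

In a real system the per-sample predicted variance would come from a dedicated variance head trained with a heteroscedastic loss, rather than being a constant as in this toy.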