We leverage fundamental computer vision principles and deep learning to advance automotive perception through 3D object detection: the task of estimating the six-degrees-of-freedom pose and dimensions of objects of interest.
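A minimal sketch of the detection output described above, where each detected object carries a pose and a set of dimensions. The class and field names here are illustrative, not from any specific codebase, and only a yaw angle is modeled (a full 6-DoF pose would add roll and pitch):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Box3D:
    center: np.ndarray  # (x, y, z) translation in the sensor frame
    yaw: float          # heading about the up axis (radians)
    dims: np.ndarray    # (length, width, height)

    def corners(self) -> np.ndarray:
        """Return the 8 box corners as an (8, 3) array."""
        l, w, h = self.dims
        # Axis-aligned corners centered at the origin.
        x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * l / 2
        y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * w / 2
        z = np.array([1, -1, 1, -1, 1, -1, 1, -1]) * h / 2
        pts = np.stack([x, y, z], axis=1)
        # Rotate about the z (up) axis, then translate to the box center.
        c, s = np.cos(self.yaw), np.sin(self.yaw)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        return pts @ R.T + self.center

# Example: an unrotated 4 m x 2 m x 1.5 m box at the origin.
box = Box3D(np.zeros(3), 0.0, np.array([4.0, 2.0, 1.5]))
corners = box.corners()
```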
Dynamic Camera Clusters (DCCs) are camera configurations in which one or more cameras are mounted on an actuated mechanism, such as the gimbals available on most drones. DCCs enable active viewpoint manipulation: the cameras can be pointed toward feature-rich areas, yielding higher accuracy in Visual SLAM applications.
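A toy sketch of the active-viewpoint idea behind DCCs: among a set of candidate gimbal angles, choose the one whose predicted view contains the most trackable features. The feature-count model here is a hypothetical stand-in; a real system would score candidate views against the current SLAM map:

```python
import numpy as np

def select_gimbal_angle(candidate_angles, feature_count):
    """Pick the candidate angle with the highest predicted feature count.

    feature_count: callable mapping a gimbal angle (radians) to the
    predicted number of trackable features in that view (hypothetical).
    """
    scores = [feature_count(a) for a in candidate_angles]
    return candidate_angles[int(np.argmax(scores))]

# Example: features concentrated around a 0.5 rad viewing direction.
angles = np.linspace(-1.0, 1.0, 21)
best = select_gimbal_angle(angles, lambda a: np.exp(-(a - 0.5) ** 2 / 0.02))
```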
UAV/UGV teams show strong promise for a range of applications, thanks to their complementary capabilities of maneuverability and endurance. In this project, we demonstrate some of the first automated dockings of a drone on a moving ground vehicle, and push the state of the art in quadrotor aerodynamic modeling, onboard absolute and relative pose estimation, and precision control.
One of the challenging aspects of incorporating deep neural networks into robotic systems is the lack of uncertainty measures associated with their output predictions. Recent work has identified aleatoric and epistemic uncertainty as two distinct types of uncertainty in the output of deep neural networks, and has provided methods for their estimation. This project aims to resolve the challenges involved in estimating these two forms of uncertainty for a variety of perception tasks in robotics, including but not limited to object detection.
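A minimal sketch of one common way these two uncertainties are estimated (in the spirit of Monte Carlo dropout approaches): epistemic uncertainty from the disagreement across stochastic forward passes, aleatoric uncertainty from a variance output predicted by the network itself. The `stochastic_forward` callable stands in for a network run with dropout kept active at test time:

```python
import numpy as np

def estimate_uncertainty(stochastic_forward, x, n_samples=50):
    """Run n_samples stochastic passes; each returns (mean, variance)."""
    means, variances = [], []
    for _ in range(n_samples):
        mu, var = stochastic_forward(x)
        means.append(mu)
        variances.append(var)
    means = np.array(means)
    epistemic = means.var(axis=0)           # spread across passes
    aleatoric = np.mean(variances, axis=0)  # network's predicted noise
    return means.mean(axis=0), epistemic, aleatoric

# Example with a synthetic "network": jitter in the prediction mimics the
# effect of dropout; the fixed 0.05 is its predicted aleatoric variance.
rng = np.random.default_rng(0)
net = lambda x: (x + rng.normal(0.0, 0.1), 0.05)
mean, epi, ale = estimate_uncertainty(net, 1.0, n_samples=200)
```

The same decomposition carries over to detection heads, where each regressed box coordinate gets its own predicted variance.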