Object-Level SLAM
Uncertainty-aware 3D Object-Level Mapping with Deep Shape Priors
Ziwei Liao*, Jun Yang*, Jingxing Qian*, Angela P. Schoellig, and Steven L. Waslander
International Conference on Robotics and Automation (ICRA), 2024.
An uncertainty-aware object-level mapping system that recovers the 3D model, 9-DoF pose, and state uncertainties for previously unseen target objects.
Active Pose Refinement for Textureless Shiny Objects using the Structured Light Camera
Jun Yang, Jian Yao and Steven L. Waslander.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
When imaged objects are highly reflective, the structured light camera produces depth maps with missing measurements. In this work, we present an active vision framework for refining the 6D pose estimates of shiny objects.
Multi-view 3D Object Reconstruction and Uncertainty Modelling with Neural Shape Prior
Ziwei Liao and Steven L. Waslander
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024.
We propose a 3D object modelling approach that relies on a neural implicit representation and provides both a reconstruction and an uncertainty measure for each object.
POV-SLAM: Probabilistic Object-Aware Variational SLAM in Semi-Static Environments
Jingxing Qian, Veronica Chatrath, James Servos, Aaron Mavrinac, Wolfram Burgard, Steven L. Waslander, Angela P. Schoellig
Robotics: Science and Systems (RSS), 2023.
We propose an object-aware, factor-graph SLAM framework that tracks and reconstructs semi-static object-level changes.
6D Pose Estimation for Textureless Objects on RGB Frames using Multi-View Optimization
Jun Yang, Wenjie Xue, Sahar Ghavidel, and Steven L. Waslander
International Conference on Robotics and Automation (ICRA), 2023.
We introduce a novel 6D object pose estimation framework that decouples the problem into a sequential two-step process, using only RGB images acquired from multiple viewpoints.
Next-Best-View Selection for Robot Eye-in-Hand Calibration
Jun Yang, Jason Rebello, Steven L. Waslander
20th Conference on Robots and Vision (CRV), 2023.
We formulate robot eye-in-hand calibration as a non-linear optimization problem and introduce an active vision approach that strategically selects robot poses to maximize calibration accuracy.
POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes
Jingxing Qian*, Veronica Chatrath*, Jun Yang, James Servos, Angela P. Schoellig, Steven L. Waslander
Robotics: Science and Systems (RSS), 2022.
We propose a framework that introduces a novel probabilistic object state representation to track object pose changes in semi-static scenes.
Next-Best-View (NBV) Prediction for Highly Reflective Objects
Jun Yang and Steven L. Waslander
International Conference on Robotics and Automation (ICRA), 2022.
In this work, we propose a next-best-view framework to strategically select camera viewpoints for completing depth data on reflective objects.
Probabilistic Multi-View Fusion of Active Stereo Depth Maps
Jun Yang, Dong Li and Steven L. Waslander.
IEEE Robotics and Automation Letters (RA-L), 2021.
In this work, we propose a probabilistic framework for scene reconstruction in robotic bin-picking. We estimate the uncertainty of the depth data and incorporate it into a probabilistic model for incrementally updating the scene.
ROBI: Reflective Object In Bins Dataset
Jun Yang, Yizhou Gao, Dong Li and Steven L. Waslander.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021.
In this paper, we present the ROBI dataset, a public dataset for 6D object pose estimation and multi-view depth fusion. It includes texture-less, highly reflective industrial parts in robotic bin-picking scenarios.