Tong Qin wins IROS 2018 Best Student Paper Award

On October 4th, 2018, the paper "Online temporal calibration for monocular visual-inertial systems" by Ph.D. student Tong Qin won the Best Student Paper Award at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018) in Madrid, Spain.

In this paper, we propose an online approach for calibrating the temporal offset between visual and inertial measurements. Our approach achieves temporal calibration by jointly optimizing the time offset, camera and IMU states, and feature locations in a SLAM system. Furthermore, the approach is a general model that can be easily employed in several feature-based optimization frameworks. Simulation and experimental results demonstrate the high accuracy of our calibration approach, even when compared with state-of-the-art offline tools. A VIO comparison against other methods shows that online temporal calibration significantly benefits visual-inertial systems. The source code of the temporal calibration is integrated into our public project, VINS-Mono.
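
The core modeling idea is that a feature observation moves across the image during the unknown time offset, so each measurement can be shifted along its tracked image velocity by the offset, which then enters the reprojection residual as one extra optimized variable. The toy sketch below (plain NumPy/SciPy, not the actual VINS-Mono implementation; in the real system the offset is optimized jointly with the states, whereas here the predicted projections are held fixed and all names are illustrative) recovers a synthetic 5 ms offset this way.

```python
import numpy as np
from scipy.optimize import least_squares

def temporal_residuals(td, observations):
    """Reprojection residuals as a function of the time offset td:
    each measured feature position z is shifted along its image
    velocity v by td before comparison with the projection z_pred
    predicted from the current camera/IMU states."""
    return np.concatenate([(z + td * v) - z_pred
                           for z, v, z_pred in observations])

# Toy usage: recover a synthetic 5 ms offset from noisy feature tracks.
rng = np.random.default_rng(0)
true_td = 0.005
obs = []
for _ in range(50):
    z_pred = rng.uniform(0, 640, 2)                    # predicted projection (px)
    v = rng.uniform(-200, 200, 2)                      # feature velocity (px/s)
    z = z_pred - true_td * v + rng.normal(0, 0.1, 2)   # measured at offset time
    obs.append((z, v, z_pred))
sol = least_squares(temporal_residuals, x0=0.0, args=(obs,))
print("estimated offset: %.2f ms" % (sol.x[0] * 1e3))
```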

IROS 2018

Code for VINS-Mono is now available on GitHub

A Robust and Versatile Monocular Visual-Inertial State Estimator

VINS-Mono is a real-time SLAM framework for monocular visual-inertial systems. It uses an optimization-based sliding-window formulation to provide high-accuracy visual-inertial odometry. It features efficient IMU pre-integration with bias correction, automatic estimator initialization, online extrinsic calibration, failure detection and recovery, loop detection, and global pose graph optimization. VINS-Mono is primarily designed for state estimation and feedback control of autonomous drones, but it is also capable of providing accurate localization for AR applications. The code runs on Linux and is fully integrated with ROS. For the iOS mobile implementation, please go to VINS-Mobile.
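
To illustrate the pre-integration idea from the feature list above: IMU samples between two keyframes are folded into relative position, velocity, and rotation terms that do not depend on the global pose, so they need not be re-integrated when the estimate changes. Below is a minimal forward-Euler sketch in Python (the actual implementation uses midpoint integration and also tracks Jacobians with respect to the biases for first-order bias correction):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def preintegrate(accels, gyros, dts, ba, bg):
    """Fold bias-corrected IMU samples into relative-motion terms
    (alpha, beta, gamma): position, velocity, and rotation deltas
    expressed in the first body frame, independent of global pose."""
    alpha, beta = np.zeros(3), np.zeros(3)
    gamma = R.identity()
    for a, w, dt in zip(accels, gyros, dts):
        a_hat, w_hat = a - ba, w - bg          # bias correction
        a_frame0 = gamma.apply(a_hat)          # rotate into the start frame
        alpha += beta * dt + 0.5 * a_frame0 * dt**2
        beta += a_frame0 * dt
        gamma = gamma * R.from_rotvec(w_hat * dt)
    return alpha, beta, gamma

# 100 samples of pure yaw rotation at 1 rad/s, 200 Hz, zero biases.
n, dt = 100, 0.005
a, b, g = preintegrate([np.zeros(3)] * n, [np.array([0, 0, 1.0])] * n,
                       [dt] * n, np.zeros(3), np.zeros(3))
print("rotated by", np.degrees(g.magnitude()), "deg")
```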

Authors: Tong Qin, Peiliang Li, Zhenfei Yang, and Shaojie Shen from the HKUST Aerial Robotics Group

Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mono

Videos:

EuRoC dataset

Indoor and outdoor performance

AR application

MAV application

Mobile implementation

Code for VINS-Mobile is now available on GitHub

Monocular Visual-Inertial State Estimator on Mobile Phones

VINS-Mobile is a real-time monocular visual-inertial state estimator developed by members of the HKUST Aerial Robotics Group. It runs on compatible iOS devices and provides localization services for augmented reality (AR) applications. It has also been tested for state estimation and feedback control of autonomous drones. VINS-Mobile uses a sliding-window, optimization-based formulation to provide high-accuracy visual-inertial odometry with automatic initialization and failure recovery. Accumulated odometry errors are corrected in real time using global pose graph SLAM. An AR demonstration is provided to showcase its capability.

Authors: Peiliang LI, Tong QIN, Zhenfei YANG, Kejie QIU, and Shaojie SHEN

Code: https://github.com/HKUST-Aerial-Robotics/VINS-Mobile

 

Fei GAO received 2016 IEEE-SSRR Best Conference Paper Award

The paper “Online quadrotor trajectory generation and autonomous navigation on point clouds” by Ph.D. student Fei GAO won the Best Conference Paper Award at the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) in Lausanne, Switzerland. Our group has received the best paper award at this conference two years in a row.

Trajectory Generation for Aerial Robots

We develop online methods to generate safe and smooth trajectories for aerial navigation through unknown, complex, and possibly dynamic environments. We use convex optimization tools to ensure both collision avoidance and dynamic feasibility.

Micro aerial vehicles (MAVs), especially quadrotors, have drawn increasing attention in recent years thanks to their superior mobility in complex environments that are inaccessible or dangerous for humans and ground vehicles. In autonomous navigation missions, a quadrotor should be able to generate and execute, online, smooth and safe trajectories from a start position to a target position while avoiding unexpected obstacles. The generated trajectories must guarantee safety and smoothness while respecting the dynamic limits of the quadrotor.

In this project, we develop novel methods to generate safe and smooth trajectories in cluttered environments. Building on reliable localization and mapping, a flight corridor with a safety guarantee is first extracted from the cluttered environment; an optimization-based algorithm then assigns a globally optimal trajectory confined entirely within the corridor. Our methods run onboard a quadrotor and are fast enough for online re-planning, allowing them to operate in unknown, dynamic environments with unexpected obstacles.

Our algorithms can be used with various types of mapping modules, such as laser-based octomaps, point clouds, or monocular dense mapping. Both simulation results and indoor and outdoor autonomous flights in unknown cluttered environments demonstrate the good performance of our methods.
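
To make the corridor-plus-optimization pipeline concrete, here is a deliberately simplified, discretized 1-D sketch in Python (our actual planners optimize piecewise polynomials in 3-D; the function and variable names are illustrative): each trajectory sample must stay inside its corridor interval while a quadratic acceleration cost keeps the result smooth.

```python
import numpy as np
from scipy.optimize import minimize

def corridor_trajectory(boxes, start, goal, dt=0.1):
    """Discretised 1-D toy: find positions x_i minimising acceleration
    (second finite difference) while each x_i stays inside its corridor
    interval boxes[i] = (lo, hi). Endpoints are pinned to start/goal."""
    def cost(x):
        acc = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2
        return np.sum(acc**2)
    cons = [{'type': 'eq', 'fun': lambda x: x[0] - start},
            {'type': 'eq', 'fun': lambda x: x[-1] - goal}]
    x0 = np.array([(lo + hi) / 2 for lo, hi in boxes])  # feasible start
    res = minimize(cost, x0, bounds=boxes, constraints=cons, method='SLSQP')
    return res.x

# A corridor that forces the trajectory to dip between two obstacles.
boxes = [(0, 1)] * 5 + [(-2, 0)] * 5 + [(0, 1)] * 5
print(np.round(corridor_trajectory(boxes, start=0.5, goal=0.5), 2))
```

The same structure carries over to 3-D: corridor boxes become linear inequality constraints and the smoothness cost stays quadratic, so the problem remains a convex QP.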

 

Gradient-based online safe trajectory generation for quadrotor flight in complex environments

By Fei GAO

We propose a trajectory generation framework for quadrotor autonomous navigation in unknown 3-D complex environments using gradient information. We decouple the trajectory generation problem into front-end path searching and back-end trajectory refinement. Based on the map that is incrementally built onboard, we adopt a sampling-based informed path searching method to find a safe path passing through obstacles. We convert the path, which consists of line segments, into an initial safe trajectory. An optimization-based method that minimizes penalties on collision cost, smoothness, and dynamical feasibility is used to refine the trajectory. Our method shows the ability to generate smooth and dynamically feasible trajectories online with safety guarantees. We integrate the state estimation, dense mapping, and motion planning modules into a customized lightweight quadrotor platform. We validate our proposed method by presenting fully autonomous navigation in unknown cluttered indoor and outdoor environments.
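
As a rough illustration of the back-end refinement step, the sketch below (illustrative NumPy only; the paper's actual cost terms and trajectory parameterization differ) runs gradient descent on a path, combining a collision penalty driven by the gradient of a distance field with a simple smoothness term:

```python
import numpy as np

def refine_path(path, dist_fn, dist_grad, d_safe=0.5,
                w_obs=1.0, w_smooth=0.5, iters=300, lr=0.02):
    """Gradient-descent sketch of back-end refinement: penalise points
    closer than d_safe to obstacles (via the distance-field gradient)
    plus a smoothness term on consecutive points; endpoints stay fixed."""
    p = path.copy()
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(1, len(p) - 1):
            d = dist_fn(p[i])
            if d < d_safe:              # gradient of w_obs*(d - d_safe)^2
                grad[i] += w_obs * 2 * (d - d_safe) * dist_grad(p[i])
            # gradient (central term) of ||p[i-1] - 2 p[i] + p[i+1]||^2
            grad[i] += w_smooth * 4 * (2 * p[i] - p[i - 1] - p[i + 1])
        p -= lr * grad
    return p

# Toy: a single circular obstacle of radius 1 at the origin.
dist = lambda x: np.linalg.norm(x) - 1.0
dgrad = lambda x: x / np.linalg.norm(x)
line = np.linspace([-2, 0.1], [2, 0.1], 15)   # nearly collides
print(np.round(refine_path(line, dist, dgrad), 2))
```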

 

Tracking a moving target in cluttered environments using a quadrotor

By Jing CHEN

We address the challenging problem of tracking a moving target in cluttered environments using a quadrotor. Our online trajectory planning method generates smooth, dynamically feasible, and collision-free polynomial trajectories that follow a visually tracked moving target. As visual observations of the target are obtained, the target trajectory can be estimated and used to predict the target motion over a short time horizon. We propose a formulation that embeds both the limited-horizon tracking error and quadrotor control costs in the cost function of a quadratic program (QP), while encoding both collision avoidance and dynamical feasibility as linear inequality constraints of the QP. Our method generates tracking trajectories in the order of milliseconds and is therefore suitable for online target tracking with a limited sensing range. We implement our approach onboard a quadrotor testbed equipped with cameras, a laser range finder, an IMU, and onboard computing. Statistical analysis, simulation, and real-world experiments are conducted to demonstrate the effectiveness of our approach.
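
The heart of the formulation is that the predicted quadrotor positions are linear in the control inputs, so tracking error plus control effort is a quadratic cost. The 1-D sketch below (NumPy only, names illustrative) solves the unconstrained core in closed form; the paper's full QP additionally stacks collision avoidance and dynamical feasibility as linear inequality constraints:

```python
import numpy as np

def tracking_controls(target, p0, v0, dt=0.2, w_u=0.05):
    """Unconstrained core of the tracking QP: choose accelerations u_j
    for a double integrator so the predicted positions follow `target`
    while penalising control effort. Positions are linear in u, so the
    optimum solves one regularised least-squares system."""
    n = len(target)
    A = np.zeros((n, n))                 # p_k as a linear map of u
    for k in range(n):
        for j in range(k + 1):
            A[k, j] = (k - j + 0.5) * dt**2
    drift = p0 + v0 * dt * np.arange(1, n + 1)   # zero-input prediction
    u = np.linalg.solve(A.T @ A + w_u * np.eye(n), A.T @ (target - drift))
    return u, A @ u + drift

# Follow a 1-D sinusoidal target estimate over a 4-second horizon.
t = np.sin(np.linspace(0, 2 * np.pi, 20))
u, p = tracking_controls(t, p0=0.0, v0=0.0)
print("max tracking error:", round(float(np.max(np.abs(p - t))), 3))
```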

 

Online quadrotor trajectory generation and autonomous navigation on point clouds

By Fei GAO

We present a framework for online generation of safe trajectories directly on point clouds for autonomous quadrotor flight. Considering a quadrotor operating in unknown environments, we use a 3-D laser range finder for state estimation and simultaneously build a point cloud map of the environment. Based on the incrementally built point cloud map, we exploit fast nearest-neighbor search in a KD-tree and adopt a sampling-based path finding method to generate a flight corridor with a safety guarantee in 3-D space. A trajectory generation method formulated as a quadratically constrained quadratic program (QCQP) is then used to generate trajectories constrained entirely within the corridor. Our method runs onboard within 100 milliseconds, making it suitable for online re-planning. We integrate the proposed planning method with laser-based state estimation and mapping modules, and demonstrate autonomous quadrotor flight in unknown indoor and outdoor environments.
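
The corridor-growing step can be sketched as follows: for each waypoint of a sampled path, a nearest-neighbor query against the point cloud bounds the radius of a guaranteed-free ball, and the union of such balls forms the corridor. This toy version uses SciPy's cKDTree (the onboard implementation is of course different; all names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_corridor(path, cloud, margin=0.2):
    """For each waypoint, the distance to the nearest cloud point
    (one KD-tree lookup) bounds the radius of an obstacle-free ball;
    the union of balls along the path forms a safe flight corridor."""
    tree = cKDTree(cloud)
    dists, _ = tree.query(path)              # nearest-obstacle distances
    radii = np.maximum(dists - margin, 0.0)  # keep a safety margin
    return list(zip(path, radii))

# Toy cloud: random obstacle points around a straight candidate path.
rng = np.random.default_rng(1)
cloud = rng.uniform(-5, 5, size=(2000, 3))
path = np.linspace([-4, 0, 1], [4, 0, 1], 9)
for p, r in grow_corridor(path, cloud):
    print(np.round(p, 1), "free radius:", round(float(r), 2))
```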

 

Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments

By Jing CHEN

We present an online method for generating collision-free trajectories for autonomous quadrotor flight through cluttered environments. We consider the real-world scenario in which the quadrotor is equipped with limited sensing and operates in initially unknown environments. During flight, an octree-based environment representation is incrementally built using onboard sensors. Utilizing efficient operations in the octree data structure, we are able to generate free-space flight corridors consisting of large overlapping 3-D grids in an online fashion. A novel optimization-based method then generates smooth trajectories that are both bounded entirely within the safe flight corridor and satisfy higher-order dynamical constraints. Our method computes valid trajectories within fractions of a second on a moderately fast computer, thus permitting online re-generation of trajectories in reaction to new obstacles. We build a complete quadrotor testbed with onboard sensing, state estimation, mapping, and control, and integrate the proposed method to show online navigation through complex unknown environments.
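
A minimal 2-D sketch of the overlapping-grid corridor idea (illustrative Python; the paper works on an octree in 3-D): starting from a free seed cell, an axis-aligned box is expanded one face at a time until every face touches an occupied cell or the map boundary, and overlapping boxes grown along a path form the corridor.

```python
import numpy as np

def grow_box(occ, seed):
    """Expand an axis-aligned box around a free seed cell, one face at
    a time, accepting an expansion only while the enlarged box contains
    no occupied cell. Returns the (lo, hi) corner indices of the box."""
    lo, hi = np.array(seed), np.array(seed) + 1
    grown = True
    while grown:
        grown = False
        for ax in range(occ.ndim):
            for sign in (-1, 1):
                l, h = lo.copy(), hi.copy()
                if sign < 0 and l[ax] > 0:
                    l[ax] -= 1
                elif sign > 0 and h[ax] < occ.shape[ax]:
                    h[ax] += 1
                else:
                    continue
                sl = tuple(slice(a, b) for a, b in zip(l, h))
                if not occ[sl].any():        # still collision-free
                    lo, hi = l, h
                    grown = True
    return lo, hi

occ = np.zeros((20, 20), dtype=bool)
occ[8:12, 10] = True                         # a wall segment
print(grow_box(occ, (3, 3)))                 # box stops at the wall
```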

 

Improving octree-based occupancy maps using environment sparsity with application to aerial robot navigation

By Jing CHEN

We present an improved octree-based mapping framework for autonomous navigation of mobile robots. The octree is best known for its memory efficiency in representing large-scale environments. However, existing implementations, including the state-of-the-art OctoMap [1], are computationally too expensive for online applications that require frequent map updates and queries. Utilizing the sparse nature of the environment, we propose a ray tracing method with early termination for efficient probabilistic map updates. We also propose a divide-and-conquer volume occupancy query method, which serves as the core operation for generating free-space configurations for optimization-based trajectory generation. We experimentally demonstrate that our method maintains the storage advantage of the original OctoMap while being computationally more efficient for map updates and occupancy queries. Finally, by integrating the proposed map structure into a complete navigation pipeline, we show autonomous quadrotor flight through complex environments.
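
The flavor of the early-termination idea can be sketched as follows (a loose Python toy on a dense grid, not the paper's algorithm or the OctoMap API): walk the ray from the sensor toward the measured endpoint, applying the free-space log-odds update to each traversed cell, and stop early once a cell is already saturated as free, on the assumption that the rest of the ray is likewise well observed.

```python
import numpy as np

def update_ray(logodds, origin, hit, step=0.5, l_free=-0.4, l_occ=0.85,
               l_min=-2.0, l_max=3.5):
    """Probabilistic ray update with early termination: decrease the
    log-odds of traversed cells, but stop once a cell is already
    saturated free; the measured endpoint gets the occupied update."""
    direction = hit - origin
    length = np.linalg.norm(direction)
    direction /= length
    t = 0.0
    while t < length:
        cell = tuple(np.floor(origin + t * direction).astype(int))
        if logodds[cell] <= l_min:      # early termination: known free
            break
        logodds[cell] = max(logodds[cell] + l_free, l_min)
        t += step
    endcell = tuple(np.floor(hit).astype(int))
    logodds[endcell] = min(logodds[endcell] + l_occ, l_max)

grid = np.zeros((16, 16))
for _ in range(10):                      # repeated scans saturate quickly
    update_ray(grid, np.array([1.0, 1.0]), np.array([12.0, 9.0]))
print(grid[[1, 12], [1, 9]])             # saturated free cell, occupied hit
```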

 

Quadrotor trajectory generation in dynamic environments using semi-definite relaxation on nonconvex QCQP

By Fei GAO

We present an optimization-based framework for generating collision-free quadrotor trajectories in dynamic environments with both static and moving obstacles. Using finite-horizon motion predictions of the moving obstacles, our method generates safe and smooth trajectories with minimum control effort. Our method optimizes trajectories globally with respect to all observed moving and static obstacles, so that the avoidance behavior is as unobtrusive as possible. The method first applies semi-definite relaxation to a quadratically constrained quadratic programming (QCQP) problem to eliminate the nonconvex constraints of the moving-obstacle avoidance problem. A feasible and reasonably good solution to the original nonconvex problem is then obtained using a randomization method and a convex linear restriction. We detail the trajectory generation formulation and the solving procedure for the nonconvex quadratic program. Our approach is validated by both simulation and experimental results.
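
Below is a minimal instance of the semi-definite relaxation plus randomization recipe on a generic nonconvex QCQP, sketched with the cvxpy modeling library (an assumption of this example; the paper's formulation encodes trajectory coefficients and moving-obstacle constraints instead): lift x to X = xxᵀ, drop the rank-1 condition, solve the SDP, then draw Gaussian samples from the relaxed solution and rescale them onto the constraint boundary.

```python
import numpy as np
import cvxpy as cp

# Nonconvex QCQP: minimise x^T Q x subject to x^T A x >= 1
# (a "stay outside the obstacle ellipsoid" style constraint).
n = 3
Q = np.diag([1.0, 2.0, 3.0])
A = np.eye(n)

X = cp.Variable((n, n), PSD=True)        # lifted variable X = x x^T
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [cp.trace(A @ X) >= 1])
prob.solve()

# Randomisation: draw candidates from N(0, X) and rescale each one
# onto the constraint boundary; keep the best feasible candidate.
rng = np.random.default_rng(2)
L = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
best_cost = np.inf
for _ in range(100):
    x = L @ rng.standard_normal(n)
    x /= np.sqrt(x @ A @ x)              # project onto x^T A x = 1
    best_cost = min(best_cost, x @ Q @ x)
print("SDP lower bound:", round(prob.value, 3),
      " recovered cost:", round(best_cost, 3))
```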

 

Invited Talk: RACV 2016

On September 20th, 2016, Professor Shaojie Shen was invited by the 2016 Symposium on Research and Application in Computer Vision (RACV 2016) to give a talk on “Robust Autonomous Flight in Cluttered Environment” on the Computer Vision for Robotics panel at ShanghaiTech University.

Dense Mapping for Autonomous Navigation

We develop real-time methods for generating dense maps for large-scale autonomous navigation of aerial robots. We investigate monocular and multi-camera dense mapping methods, with special attention to the tight integration between maps and motion planning modules.

Without any prior knowledge of the environment, our dense mapping module utilizes an inverse-depth labeling method to extract a 3-D cost volume through temporal aggregation over synchronized camera poses. After semi-global optimization and post-processing, a dense depth image is computed and fed into our uncertainty-aware truncated signed distance function (TSDF) fusion approach, from which a live dense 3-D map is produced.
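
The uncertainty-aware fusion step can be sketched in one dimension: every voxel along a camera ray stores a truncated signed distance to the nearest observed surface, and new depth measurements are blended in with a weight that shrinks as the depth variance grows. A toy version (illustrative Python, not our implementation):

```python
import numpy as np

def fuse_measurement(tsdf, weight, voxel_depths, z, sigma, trunc=0.3):
    """Uncertainty-aware TSDF update along one camera ray: the signed
    distance of each voxel to the measured surface (z - voxel depth)
    is truncated and blended into the volume with a weight that shrinks
    as the depth variance sigma^2 grows."""
    sd = np.clip(z - voxel_depths, -trunc, trunc)
    w_new = 1.0 / (sigma**2 + 1e-9)
    fused = (tsdf * weight + sd * w_new) / (weight + w_new)
    return fused, weight + w_new

# Toy: 1-D ray with voxels every 10 cm; fuse two noisy depths of a
# surface at about 1.0 m, the second one much less certain.
depths = np.arange(0.0, 2.0, 0.1)
tsdf, w = np.zeros_like(depths), np.zeros_like(depths)
tsdf, w = fuse_measurement(tsdf, w, depths, z=0.98, sigma=0.02)
tsdf, w = fuse_measurement(tsdf, w, depths, z=1.20, sigma=0.20)
print("estimated surface near:",
      round(float(depths[np.argmin(np.abs(tsdf))]), 2), "m")
```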
 

Autonomous aerial navigation using monocular visual-inertial fusion

By Yi LIN

We present a real-time monocular visual-inertial dense mapping and autonomous navigation system. The whole system is implemented on a compact, lightweight quadrotor, with all modules processing onboard in real time. By properly coordinating the three major system modules, state estimation, dense mapping, and trajectory planning, we validate our system in both cluttered indoor and outdoor environments through multiple autonomous flight experiments. A tightly coupled monocular visual-inertial state estimator is developed to provide high-accuracy odometry, which is used for both feedback control and dense mapping. Our estimator supports on-the-fly initialization and can estimate vehicle velocity, metric scale, and IMU biases online.
Without any prior knowledge of the environment, our dense mapping module utilizes a plane-sweeping-based method to extract a 3-D cost volume through temporal aggregation over synchronized camera poses. After semi-global optimization and post-processing, a dense depth image is computed and fed into our uncertainty-aware TSDF fusion approach, from which a live dense 3-D map is produced. Using this map, our planning module first generates an initial collision-free trajectory based on our sampling-based path searching method. A gradient-based optimization method is then applied to ensure trajectory smoothness and dynamic feasibility. Following the trend of rapidly increasing mobile computing power, we believe our minimal sensing setup offers a feasible solution for fully autonomous miniaturized aerial robots.
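
For intuition, the plane-sweeping step can be reduced to the following toy (illustrative Python; real sweeps use the full homography warps induced by the synchronized camera poses, and here the warp is a simple horizontal shift for a fronto-parallel case): for each depth hypothesis, warp the other views onto the reference image, accumulate a photometric cost, and take the per-pixel argmin over hypotheses.

```python
import numpy as np

def sweep_cost_volume(ref, others, warps, depths):
    """Plane-sweep sketch: for every depth hypothesis, warp each other
    view onto the reference view (warp functions assumed given by the
    camera geometry) and accumulate a photometric SAD cost; the
    per-pixel argmin over hypotheses gives the depth estimate."""
    h, w = ref.shape
    volume = np.zeros((len(depths), h, w))
    for i, d in enumerate(depths):
        for img, warp in zip(others, warps):
            volume[i] += np.abs(ref - warp(img, d))   # SAD aggregation
    return depths[np.argmin(volume, axis=0)]

# Toy fronto-parallel case: warping is a horizontal shift of f*b/d px.
f_b = 30.0                                    # focal length times baseline
shift = lambda img, d: np.roll(img, int(round(f_b / d)), axis=1)
ref = np.tile(np.sin(np.linspace(0, 8, 64)), (16, 1))
other = np.roll(ref, -int(round(f_b / 1.5)), axis=1)  # surface at 1.5 m
depth_map = sweep_cost_volume(ref, [other], [shift], np.linspace(0.5, 3, 40))
print("median depth:", round(float(np.median(depth_map)), 2))
```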

 

High-precision online markerless stereo extrinsic calibration

By Yonggen LING

Stereo cameras and dense stereo matching algorithms are core components of many robotic applications, thanks to their ability to directly obtain dense depth measurements and their robustness against changes in lighting conditions. However, the performance of dense depth estimation relies heavily on accurate stereo extrinsic calibration. In this work, we present a real-time markerless approach for obtaining high-precision stereo extrinsic calibration using a novel 5-DOF (degrees-of-freedom) nonlinear optimization on a manifold, which captures the observability properties of vision-only stereo calibration. Our method minimizes epipolar errors between spatially matched per-frame sparse natural features. It does not require temporal feature correspondences, making it not only invariant to dynamic scenes and illumination changes, but also significantly faster than standard bundle-adjustment-based approaches. We introduce a principled method to determine whether the calibration has converged to the required level of accuracy, and show through online experiments that our approach achieves accuracy comparable to offline marker-based calibration methods. Our method refines the stereo extrinsics to an accuracy sufficient for block-matching-based dense disparity computation. It provides a cost-effective way to improve the reliability of stereo vision systems for long-term autonomy.
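
A condensed sketch of the 5-DOF idea (illustrative Python/SciPy, not our solver): parameterize the extrinsics with 3 rotation parameters plus 2 spherical angles for the unit translation direction, since the metric baseline is unobservable from vision alone, and minimize the epipolar errors x_rᵀ E x_l over sparse correspondences.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def epipolar_residuals(params, x_l, x_r):
    """Residuals x_r^T E x_l with E = [t]_x R built from a 5-DOF vector:
    3 rotation parameters plus 2 angles for the unit translation."""
    rot = R.from_rotvec(params[:3]).as_matrix()
    th, ph = params[3], params[4]
    t = np.array([np.sin(th) * np.cos(ph), np.sin(th) * np.sin(ph), np.cos(th)])
    t_skew = np.array([[0, -t[2], t[1]],
                       [t[2], 0, -t[0]],
                       [-t[1], t[0], 0]])
    return np.einsum('ni,ij,nj->n', x_r, t_skew @ rot, x_l)

# Synthetic stereo pair: small rotation, unit baseline along x.
rng = np.random.default_rng(3)
R_true = R.from_rotvec([0.03, 0.01, 0.02])
pts = rng.uniform([-2, -2, 4], [2, 2, 10], (200, 3))
p_r = R_true.apply(pts) + np.array([1.0, 0.0, 0.0])
x_l = pts / pts[:, 2:]                   # normalised image coordinates
x_r = p_r / p_r[:, 2:]
x0 = np.array([0.0, 0.0, 0.0, np.pi / 2 + 0.05, 0.05])  # perturbed guess
sol = least_squares(epipolar_residuals, x0, args=(x_l, x_r))
print("rotation error (deg):",
      round(np.degrees(np.linalg.norm(sol.x[:3] - R_true.as_rotvec())), 4))
```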

 

Real-time monocular dense mapping on aerial robots using visual-inertial fusion

By Zhenfei YANG

In this work, we present a solution for real-time monocular dense mapping. A tightly coupled visual-inertial localization module is designed to provide metric, high-accuracy odometry. A motion stereo algorithm is proposed to take the video input from one camera and produce local depth measurements with semi-global regularization. The local measurements are then integrated into a global map for noise filtering and map refinement. The resulting global map is able to support navigation and obstacle avoidance for aerial robots, as verified in our indoor and outdoor experiments. Our system runs at 10 Hz on an NVIDIA Jetson TX1 by properly distributing computation between the CPU and GPU. Through onboard experiments, we demonstrate its ability to close the perception-action loop for autonomous aerial robots. We release our implementation as open-source software.

 

Building maps for autonomous navigation using sparse visual SLAM features

By Yonggen LING

Autonomous navigation, which consists of the systematic integration of localization, mapping, motion planning, and control, is the core capability of mobile robotic systems. However, most research considers only isolated technical modules, and significant gaps remain between the maps generated by SLAM algorithms and the maps required for motion planning. Our work presents a complete online system consisting of three modules: incremental SLAM, real-time dense mapping, and free-space extraction. The obtained free-space volume (i.e., a tessellation of tetrahedra) can serve as regular geometric constraints for motion planning. Our system runs in real time thanks to engineering decisions that increase system efficiency. We conduct extensive experiments on the KITTI dataset to demonstrate the run-time performance. Qualitative and quantitative results on mapping accuracy are also shown. For the benefit of the community, we make the source code public.
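
A toy version of the free-space extraction step (illustrative Python using SciPy's Delaunay tessellation; the paper builds its tessellation incrementally from SLAM data): tessellate sampled free-space points into tetrahedra and keep those sufficiently far from obstacle points.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def free_tetrahedra(free_pts, obstacle_pts, clearance=0.5):
    """Tessellate sampled free-space points into tetrahedra and keep
    those whose centroid stays farther than `clearance` from every
    obstacle point; the kept cells approximate the free-space volume
    handed to the motion planner as geometric constraints."""
    tri = Delaunay(free_pts)
    centroids = free_pts[tri.simplices].mean(axis=1)
    d, _ = cKDTree(obstacle_pts).query(centroids)
    return tri.simplices[d > clearance]

rng = np.random.default_rng(4)
free = rng.uniform(0, 10, (300, 3))
obstacles = rng.uniform(4, 6, (50, 3))   # clutter concentrated in the middle
print(len(free_tetrahedra(free, obstacles)), "collision-free tetrahedra kept")
```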

 

Visual-Inertial State Estimation

Monocular visual-inertial state estimation with online initialization and camera-IMU extrinsic calibration

By Zhenfei YANG

There have been increasing demands for developing micro aerial vehicles with vision-based autonomy for search and rescue missions in complex environments. In particular, the monocular visual-inertial system (VINS), which consists of only an inertial measurement unit (IMU) and a camera, forms a great lightweight sensor suite thanks to its low weight and small footprint. In this paper, we address two challenges for the rapid deployment of monocular VINS: 1) the initialization problem and 2) the calibration problem. We propose a methodology that is able to initialize velocity, gravity, visual scale, and camera-IMU extrinsic calibration on the fly. Our approach operates in natural environments and does not use any artificial markers. It also does not require any prior knowledge about the mechanical configuration of the system. It is a significant step toward plug-and-play and highly customizable visual navigation for mobile robots. We show through online experiments that our method leads to accurate calibration of the camera-IMU transformation, with errors of less than 0.02 m in translation and 1° in rotation. We compare our method with a state-of-the-art marker-based offline calibration method and show superior results. We also demonstrate the performance of the proposed approach in large-scale indoor and outdoor experiments.
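
The rotation part of such a calibration has a compact classical core, sketched below (illustrative Python; our estimator solves this inside a full nonlinear optimization): relative rotations measured by the camera and the IMU over the same interval share their rotation angle, and their axes are related by the fixed extrinsic rotation, which can be recovered by aligning the two axis sets with the Kabsch SVD solution.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def calibrate_rotation(cam_rel, imu_rel):
    """Recover the camera-to-IMU rotation R_bc from paired relative
    rotations: their rotation vectors satisfy a_imu = R_bc a_cam, so
    aligning the axis sets (Kabsch SVD) yields R_bc."""
    a_c = np.array([r.as_rotvec() for r in cam_rel])
    a_b = np.array([r.as_rotvec() for r in imu_rel])
    U, _, Vt = np.linalg.svd(a_c.T @ a_b)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1, 1, d]) @ U.T

# Toy check: synthesise motions with a known extrinsic rotation.
rng = np.random.default_rng(5)
R_bc = R.from_rotvec([0.1, -0.3, 0.2]).as_matrix()
cam = [R.from_rotvec(rng.uniform(-0.5, 0.5, 3)) for _ in range(30)]
imu = [R.from_matrix(R_bc @ r.as_matrix() @ R_bc.T) for r in cam]
print("error:", np.linalg.norm(calibrate_rotation(cam, imu) - R_bc))
```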

 

Self-calibrating multi-camera visual-inertial fusion for autonomous MAVs

By Zhenfei YANG

We address the important problem of achieving robust and easy-to-deploy visual state estimation for micro aerial vehicles (MAVs) operating in complex environments. We use a sensor suite consisting of multiple cameras and an IMU to maximize perceptual awareness of the surroundings and provide sufficient redundancy against sensor failures. Our approach starts with an online initialization procedure that simultaneously estimates the transformation between each camera and the IMU, as well as the initial velocity and attitude of the platform, without any prior knowledge about the mechanical configuration of the sensor suite. Based on the initial calibration, a tightly coupled, optimization-based, generalized multi-camera-inertial fusion method runs onboard the MAV with online camera-IMU calibration refinement and identification of sensor failures. Our approach dynamically configures the system into monocular, stereo, or other multi-camera visual-inertial settings, with their respective perceptual advantages, based on the availability of visual measurements. We show that even under random camera failures, our method can be used for feedback control of the MAVs. We highlight our approach in challenging indoor-outdoor navigation tasks with large variations in vehicle height and speed, scene depth, and illumination.

 

Aggressive quadrotor flight using dense visual-inertial fusion

By Yonggen LING

In this work, we address the problem of aggressive flight of a quadrotor aerial vehicle using cameras and IMUs as the only sensing modalities. We present a fully integrated quadrotor system and demonstrate through online experiments the capability of autonomous flight with linear velocities up to 4.2 m/s, linear accelerations up to 9.6 m/s², and angular velocities up to 245.1 deg/s. Central to our approach is a dense visual-inertial state estimator for reliable tracking of aggressive motions. An uncertainty-aware direct dense visual tracking module provides camera pose tracking that takes inverse depth uncertainty into account and is resistant to motion blur. Measurements from IMU pre-integration and multi-constrained dense visual tracking are fused probabilistically using an optimization-based sensor fusion framework. Extensive statistical analysis and comparisons are presented to verify the performance of the proposed approach. We also release our code as open-source ROS packages.

 

High altitude monocular visual-inertial state estimation: initialization and sensor fusion

By Tianbo LIU

Obtaining reliable state estimates in high-altitude, GPS-denied environments, such as between high-rise buildings or in the middle of deep canyons, is known to be challenging due to the lack of direct distance measurements. Monocular visual-inertial systems provide a possible way to recover metric distance through the proper integration of visual and inertial measurements. However, the nonlinear optimization problem for state estimation suffers from poor numerical conditioning or even degeneration, due to the difficulty of obtaining observations of visual features with sufficient parallax and the excessive period of inertial measurement integration. Here we propose a spline-based high-altitude estimator initialization method for monocular visual-inertial navigation systems (VINS), with special attention to numerical issues. Our formulation uses only inertial measurements that contain sufficient excitation, and drops uninformative measurements such as those obtained during hovering. In addition, our method explicitly reduces the number of parameters to be estimated in order to achieve earlier convergence. Based on the initialization results, a complete closed-loop system is constructed for high-altitude navigation. Extensive experiments are conducted to validate our approach.
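
The essence of the spline-based formulation can be condensed as follows (a toy in Python/SciPy; the paper estimates more states and works with raw IMU readings): fit a smooth spline to the up-to-scale visual positions, differentiate it twice, and solve a tiny least-squares problem for the metric scale that best matches the gravity-compensated accelerations. Note the data must contain acceleration excitation, exactly as the abstract argues.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def scale_from_spline(t, pos_visual, acc_metric):
    """Fit a spline to the up-to-scale visual positions, differentiate
    twice, and solve the 1-D least-squares problem for the metric scale
    s minimising ||s * p''(t) - a||^2 (gravity assumed already removed
    from the IMU accelerations a)."""
    acc_visual = CubicSpline(t, pos_visual, axis=0).derivative(2)(t)
    return np.sum(acc_visual * acc_metric) / np.sum(acc_visual**2)

# Toy: sinusoidal motion observed by vision at 1/2.0 of metric scale.
t = np.linspace(0, 4, 200)
pos = np.stack([np.sin(t), np.cos(2 * t), 0.2 * t], axis=1)
acc = np.stack([-np.sin(t), -4 * np.cos(2 * t), 0 * t], axis=1)
print("recovered scale:", round(float(scale_from_spline(t, pos / 2.0, acc)), 3))
```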

 

Robust initialization of monocular visual-inertial estimation on aerial robots

By Tong QIN

We propose a robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS). Due to the nonlinearity of VINS, a poor initialization can severely impact the performance of both filtering-based and graph-based methods. Our approach starts with a vision-only structure from motion (SfM) step to build an up-to-scale structure of camera poses and feature positions. By loosely aligning this structure with pre-integrated IMU measurements, our approach recovers the metric scale, velocity, gravity vector, and gyroscope bias, which are treated as initial values to bootstrap the nonlinear, tightly coupled optimization framework. We highlight that our approach can perform on-the-fly initialization in various scenarios without using any prior information about system states and movement. The performance of the proposed approach is verified on a public UAV dataset and through real-time onboard experiments. We make our implementation open source; it is the initialization module integrated into VINS-Mono.
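
The loose visual-inertial alignment can be sketched as one linear least-squares problem (a world-frame toy in Python; the real method works with body-frame pre-integrated terms and also estimates the gyroscope bias): stack the position and velocity relations of every keyframe interval and solve jointly for the metric scale, the gravity vector, and the per-frame velocities.

```python
import numpy as np

def align_visual_inertial(dp_vis, alphas, betas, dt):
    """World-frame toy of the loose alignment: solve the stacked linear
    relations  s*dp_k - v_k*dt - 0.5*g*dt^2 = alpha_k  and
    v_{k+1} - v_k - g*dt = beta_k  jointly for the metric scale s,
    the gravity vector g, and all per-frame velocities v_k."""
    n = len(dp_vis)
    m = 4 + 3 * (n + 1)                       # unknowns: s, g, v_0..v_n
    I = np.eye(3)
    A, b = [], []
    for k in range(n):
        vk = 4 + 3 * k
        row_p = np.zeros((3, m))              # position relation
        row_p[:, 0] = dp_vis[k]
        row_p[:, 1:4] = -0.5 * dt**2 * I
        row_p[:, vk:vk + 3] = -dt * I
        row_v = np.zeros((3, m))              # velocity relation
        row_v[:, 1:4] = -dt * I
        row_v[:, vk:vk + 3] = -I
        row_v[:, vk + 3:vk + 6] = I
        A += [row_p, row_v]
        b += [alphas[k], betas[k]]
    x, *_ = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)
    return x[0], x[1:4]

# Synthetic data with varying acceleration (needed for observability).
dt, n, s_true = 0.1, 20, 2.5
g_true = np.array([0.0, 0.0, -9.81])
p, v = [np.zeros(3)], [np.array([1.0, 0.0, 0.5])]
alphas, betas = [], []
for k in range(n):
    a_k = np.array([np.sin(0.3 * k), np.cos(0.3 * k), 0.1])
    alphas.append(0.5 * (a_k - g_true) * dt**2)   # pre-integrated position
    betas.append((a_k - g_true) * dt)             # pre-integrated velocity
    p.append(p[-1] + v[-1] * dt + 0.5 * a_k * dt**2)
    v.append(v[-1] + a_k * dt)
dp_vis = np.diff(np.array(p), axis=0) / s_true    # up-to-scale visual deltas
s, g = align_visual_inertial(dp_vis, alphas, betas, dt)
print("scale:", round(float(s), 2), " gravity:", np.round(g, 2))
```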