Our focus is on vision-based perception in multi-robot systems.
To perform a set of perception-driven tasks, a team of network-connected robots with vision sensors requires two fundamental capabilities: i) autonomous navigation in the environment and ii) vision-based pose estimation of all static and dynamic objects in the scene. Our research aims to understand how overall task performance can be maximized within the constraints of communication and computation. To this end, we focus on the following interconnected research threads.
Multi-robot Active Perception -- We investigate methods for multi-robot formation control based on cooperative target perception, without relying on a pre-specified formation geometry. We have developed methods that allow teams of mobile aerial and ground robots, equipped only with RGB cameras, to maximize their joint perception of moving persons or objects in 3D space by actively steering into formations that facilitate this joint perception. We have introduced and rigorously tested active perception methods that combine novel detection and tracking pipelines with a formation controller based on nonlinear model predictive control (MPC).
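The idea of steering a formation to optimize joint perception can be illustrated with a toy receding-horizon controller. The sketch below is not our MPC formulation: the perception cost (preferred viewing range plus angular spread between camera viewpoints, a common surrogate for triangulation quality) and the sampling-based optimizer are simplifying assumptions chosen to keep the example self-contained.

```python
import numpy as np

def perception_cost(robot_positions, target, desired_range=3.0):
    """Hypothetical joint-perception cost: each robot keeps a preferred
    viewing distance to the target, and the team spreads its viewing
    angles (small pairwise angular separation gives poor triangulation)."""
    offsets = robot_positions - target                 # (N, 2)
    ranges = np.linalg.norm(offsets, axis=1)
    range_cost = np.sum((ranges - desired_range) ** 2)
    angles = np.arctan2(offsets[:, 1], offsets[:, 0])
    sep_cost = 0.0
    n = len(angles)
    for i in range(n):
        for j in range(i + 1, n):
            # wrapped angular separation between viewpoints i and j
            d = np.abs(np.angle(np.exp(1j * (angles[i] - angles[j]))))
            sep_cost += 1.0 / (d + 1e-2)
    return range_cost + 0.1 * sep_cost

def sample_mpc_step(robot_positions, target_pred, horizon=5, dt=0.2,
                    n_samples=200, v_max=1.0, rng=None):
    """One receding-horizon step: sample constant-velocity candidates
    for the whole team, roll them out against the predicted target
    trajectory, apply only the first step of the best candidate."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_cost, best_vel = np.inf, None
    for _ in range(n_samples):
        vel = rng.uniform(-v_max, v_max, size=robot_positions.shape)
        pos = robot_positions.copy()
        cost = 0.0
        for k in range(horizon):
            pos = pos + vel * dt
            cost += perception_cost(pos, target_pred[k])
        if cost < best_cost:
            best_cost, best_vel = cost, vel
    return robot_positions + best_vel * dt, best_cost
```

Starting from a clustered team, repeated calls to `sample_mpc_step` spread the robots around the (predicted) target at the preferred range. A real nonlinear MPC would replace the random sampling with a gradient-based solver and add dynamics and collision constraints.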
Multi-robot Sensor Fusion -- We study and develop unified methods for sensor fusion that scale simultaneously to large environments, large numbers of sensors, and teams of robots. We have developed several methods for unified and integrated multi-robot cooperative localization and target tracking. Here, "unified" means that the poses of all robots and targets are estimated by every robot in the team, and "integrated" means that disagreements among sensors, inconsistent measurements, occlusions, and sensor failures are all handled within a single Bayesian framework. The methods are either filter-based (filtering) or pose-graph optimization-based (smoothing). Since each category has its own advantages with respect to available computational resources and achievable estimation accuracy, we have also developed a novel moving-horizon technique for a hybrid method that combines the advantages of both kinds of approaches.
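How a single Bayesian filter can fuse measurements from several robots, automatically down-weighting unreliable sensors, can be sketched with a minimal linear Kalman filter. This is a generic textbook filter, not our actual estimator: the constant-velocity target model and the per-robot measurement covariances are illustrative assumptions.

```python
import numpy as np

class FusedTargetKF:
    """Minimal linear Kalman filter (illustrative sketch) tracking one
    target state [px, py, vx, vy] from position measurements supplied
    by several robots. Each measurement carries its own covariance R,
    so noisier or less trusted sensors receive smaller Kalman gains."""

    def __init__(self, x0, P0, dt=0.1, q=0.1):
        self.x = np.asarray(x0, dtype=float)
        self.P = np.asarray(P0, dtype=float)
        # constant-velocity motion model
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = q * np.eye(4)          # process noise (assumed)
        self.H = np.zeros((2, 4))       # we observe position only
        self.H[0, 0] = self.H[1, 1] = 1.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        """Fold in one robot's position measurement z with covariance R."""
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Per time step, the filter runs one `predict` followed by one `update` per reporting robot; a robot that is occluded or has failed simply contributes no update that step, which is one way such dropouts stay inside the same Bayesian framework. Smoothing approaches would instead optimize over a window (or the full history) of such measurements.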
New Robot Platforms -- In order to have extensive access to the hardware, we design and build most of our robotic platforms. Currently, our main flying platforms are octocopters.
There are currently two ongoing AirCap projects in our group: 3D Motion Capture and Perception-Based Control.