This page is a jumping-off point for information and resources about benchmarking robotics applications. Benchmarking is an active topic in robotics research. The general goal of this project is to present a set of tools and a knowledge base for reproducing and benchmarking robotics software environments and simulations. The main mechanism for ensuring reproducibility is the integration of the CITK recipe repository with software containers.

Distributing Simulations

One major challenge has been handling the complexity of common software environments in robotics. To make experiments truly reproducible, robotics needs a way to share their complete configuration. Robobench simulations are built and distributed by the citman client, which manages a repository of procedurally generated software containers, each embodying a simulation or experiment. citman also manages the complex configuration options necessary to get robotics applications working in a container.
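As a rough illustration of what citman automates, the sketch below performs equivalent steps with plain Docker: pulling a simulation image and launching it with the kind of robotics-specific configuration (GPU device access, X11 forwarding for simulator GUIs) that citman manages. The image name is hypothetical, and citman's actual interface may differ.

```python
import subprocess

# Hypothetical image name; the real Robobench images are hosted on
# DockerHub and managed through citman.
IMAGE = "robobench/example-simulation:latest"

# Fetch the procedurally generated container image.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Launch it. These flags illustrate the configuration citman handles
# for robotics workloads: GPU access for GPGPU code and rendering,
# plus X11 forwarding so simulator GUIs can display on the host.
subprocess.run([
    "docker", "run", "--rm",
    "--device", "/dev/dri",                # GPU/DRI devices
    "-e", "DISPLAY",                       # pass through the host display
    "-v", "/tmp/.X11-unix:/tmp/.X11-unix", # X11 socket
    IMAGE,
], check=True)
```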

Install Citman

Recipe Generated Software Containers

By their nature, containers hide the configuration details needed to understand and maintain a simulation. Procedurally generating containers from a system recipe composed of individual components allows each component to be explicitly documented with full versioning information. The containers are currently hosted on DockerHub.
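A minimal sketch of the idea, assuming a simplified recipe format (the actual CITK recipe schema differs, and the component names and versions below are placeholders): each component is listed with a pinned version, and the Dockerfile is generated from that list, so the container's contents stay explicitly documented.

```python
# Illustrative recipe; component names and versions are placeholders.
recipe = {
    "base": "ubuntu:16.04",
    "components": [
        {"name": "gazebo7", "version": "7.8.1-1*"},
        {"name": "libpcl-dev", "version": "1.7.2-14*"},
    ],
}

def generate_dockerfile(recipe):
    """Render a Dockerfile with every package version pinned."""
    lines = [f"FROM {recipe['base']}"]
    for comp in recipe["components"]:
        # apt's pkg=version syntax makes the pinning explicit in the
        # generated Dockerfile itself.
        lines.append(
            "RUN apt-get update && "
            f"apt-get install -y {comp['name']}={comp['version']}"
        )
    return "\n".join(lines) + "\n"

print(generate_dockerfile(recipe))
```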

Benchmarking Missions

Typically, individual tasks such as grasping or navigating are presented as benchmarks. However, understanding the performance and efficiency of a component requires a full mission simulation, which allows the component to be evaluated in context, as sketched below.
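To sketch the distinction, the hypothetical mission below times a grasping component while the surrounding mission phases run, rather than benchmarking the grasp in isolation; all function names are placeholders.

```python
import time

# Placeholder phase implementations standing in for real navigation,
# grasping, and return-to-base behaviors.
def navigate_to_target(): time.sleep(0.1)
def grasp_object():       time.sleep(0.2)
def return_to_base():     time.sleep(0.1)

phases = [
    ("navigate", navigate_to_target),
    ("grasp",    grasp_object),
    ("return",   return_to_base),
]

# Timing each phase inside the full mission exposes how the grasping
# component behaves while the rest of the stack is active, which an
# isolated grasp benchmark would miss.
for name, run_phase in phases:
    start = time.perf_counter()
    run_phase()
    print(f"{name}: {time.perf_counter() - start:.3f} s")
```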

Performance Benchmarking and Profiling

Benchmarking multi-program applications is a largely unsolved problem. It becomes more complex still in robotics applications due to the use of GPGPU resources, dependence on timing and sensor data, and heavy reliance on package repositories rather than compilation from source. Articles describing how these issues are handled in the Robobench-alpha set are among those presented in the knowledge base.
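One way to approach the multi-program problem, sketched below with the psutil library, is to sample per-process CPU and memory across the whole set of cooperating processes rather than profiling a single binary; the process names are hypothetical examples from a Gazebo/ROS-style deployment.

```python
import time
import psutil

# Hypothetical process names from a Gazebo/ROS-style deployment.
WATCHED = {"gzserver", "gzclient", "roscore"}

def sample(interval=1.0, samples=5):
    procs = [p for p in psutil.process_iter(["name"])
             if p.info["name"] in WATCHED]
    for p in procs:
        p.cpu_percent(None)  # prime the per-process CPU counters
    for _ in range(samples):
        time.sleep(interval)
        for p in procs:
            try:
                cpu = p.cpu_percent(None)           # % since last call
                rss = p.memory_info().rss / 2**20   # resident set, MiB
                print(f"{p.info['name']:10s} cpu={cpu:5.1f}%  rss={rss:.0f} MiB")
            except psutil.NoSuchProcess:
                pass  # a process exited between samples

if __name__ == "__main__":
    sample()
```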

Benchmark Sets

Benchmarkable simulations are organized into the sets presented here. The Robobench-alpha benchmark is a set of simulations spanning many of the most commonly used robotics frameworks. It is not comprehensive, but it includes simulations of drones, mobile manipulators, and autonomous underwater vehicles performing common patrolling, tracking, and manipulation tasks. The components used in this set include Gazebo, MOOS, MORSE, OpenCV, PCL, and others. The set demonstrates that these components can be efficiently containerized, and additionally shows how to profile them inside a container.

Future benchmark sets will focus on introducing more complete mission simulations, as well as incorporating existing component-wise benchmarks such as grasping and manipulation.