

With Shhuna Director, a graphical user interface built on top of the Shhuna Engine, you can rapidly and intuitively describe sensor fusion problems to be solved. It supports drag-and-drop scenario definition, visualization of sensor data and reconstruction results, and convenient sensor calibration.

Calibration scenario examples

Intrinsic camera calibration

This classical scenario estimates intrinsic camera parameters from observations of a regular dot pattern. Thousands of images are collected and processed. Our Hexagon setup is used as the live video source.
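As a generic illustration (not part of the Shhuna API; all names are hypothetical), intrinsic calibration fits a pinhole-plus-distortion camera model to the detected dots by minimizing reprojection error over the intrinsic parameters:

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Pinhole projection with two radial distortion coefficients.

    points_cam: (N, 3) points in the camera frame (z > 0).
    Returns (N, 2) pixel coordinates.
    """
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2      # radial distortion factor
    u = fx * d * x + cx
    v = fy * d * y + cy
    return np.stack([u, v], axis=1)

def reprojection_error(observed, points_cam, fx, fy, cx, cy, k1, k2):
    """RMS distance between observed dots and the model's predictions --
    the quantity a calibration backend minimizes over the intrinsics."""
    predicted = project(points_cam, fx, fy, cx, cy, k1, k2)
    return float(np.sqrt(np.mean(np.sum((observed - predicted) ** 2, axis=1))))
```

With many images of the pattern, a nonlinear least-squares optimizer drives this error down over (fx, fy, cx, cy, k1, k2) and the per-image pattern poses.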


Extrinsic camera calibration

Here a regular pattern is used to estimate the relative pose between two video cameras. A rosbag file is used as the video source.
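The underlying idea can be sketched generically: if each camera estimates the pattern's pose independently (e.g. via PnP on the detected pattern), the camera-to-camera extrinsic follows by composing the two transforms. A minimal numpy sketch (names hypothetical, not the Shhuna API):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_cam1_pattern, T_cam2_pattern):
    """Pose of camera 2 expressed in camera 1's frame, given each camera's
    independently estimated pose of the shared pattern:
    T_cam1_cam2 = T_cam1_pattern @ inv(T_cam2_pattern)."""
    return T_cam1_pattern @ np.linalg.inv(T_cam2_pattern)
```

Averaging this estimate over many pattern views (or optimizing all views jointly) suppresses the noise of any single detection.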


Hand-Eye calibration

The sensor rig consists of a video camera, an infrared LED pattern and an electromagnetic tracking device. The static rig is defined by two different markers observed by the moving camera, an optical infrared tracking system and a magnetic field generator. The dashed blue links depict the spatial transforms to be estimated.
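Hand-eye problems of this kind are commonly written as AX = XB, where A and B are paired relative motions seen by the two devices and X is the unknown spatial transform between them. A self-contained numpy sketch of a classical solver (Park/Martin style; an illustration, not the Shhuna Engine's optimizer):

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector of a rotation matrix (rotation angle < pi assumed)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2.0 * np.sin(angle))

def rot_exp(w):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    angle = np.linalg.norm(w)
    if angle < 1e-12:
        return np.eye(3)
    k = w / angle
    K = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * K @ K

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for X (4x4) from paired relative motions.

    Rotation: Kabsch alignment of the rotation-log axes (a_i = R_X b_i).
    Translation: stacked least squares on (R_A - I) t_X = R_X t_B - t_A.
    """
    a = np.stack([rot_log(A[:3, :3]) for A in As])
    b = np.stack([rot_log(B[:3, :3]) for B in Bs])
    U, _, Vt = np.linalg.svd(b.T @ a)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_X = Vt.T @ D @ U.T
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    v = np.concatenate([R_X @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t_X = np.linalg.lstsq(M, v, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3] = R_X
    X[:3, 3] = t_X
    return X
```

At least two motions with non-parallel rotation axes are required for a unique solution; in practice many motions are stacked to average out noise.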


Real-time pose estimation scenario examples

Multi-camera outside-in tracking of retroreflective infrared markers

A pair of goggles equipped with retroreflective markers is tracked by four synchronized IR-sensitive video cameras.
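Outside-in systems of this kind typically triangulate each marker from its synchronized 2D detections in all cameras. A generic linear (DLT) triangulation sketch in numpy (an illustration, not the Shhuna Engine's implementation):

```python
import numpy as np

def triangulate(Ps, uvs):
    """Linear (DLT) triangulation of one 3D point from N >= 2 views.

    Ps:  list of 3x4 camera projection matrices (intrinsics @ [R | t]).
    uvs: list of (u, v) observations of the same marker, one per camera.
    Each view contributes the rows u*P[2]-P[0] and v*P[2]-P[1]; the point
    is the null vector of the stacked system, found via SVD.
    """
    rows = []
    for P, (u, v) in zip(Ps, uvs):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With four cameras the system is overdetermined, so the SVD solution already averages the measurement noise across views.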


Multi-camera inside-out tracking

Four synchronized monochrome video cameras are used for detection and tracking of a random dot pattern. While the pattern is detected independently by the cameras' corresponding matcher components, the optimization is performed jointly on all available measurements.


Simultaneous outside-in / inside-out tracking

Three synchronized cameras equipped with configurable random dot IR patterns are used: two are static, one is moving. The moving and static cameras observe each other's random dot patterns. The patterns are identified independently by 2d-Matchers, but the pose is optimized jointly using all available observations, which results in superior accuracy.


Simultaneous outside-in / inside-out tracking using 3D-3D matching

A variation of the previous scenario. The difference is that the moving camera's IR-pattern is now detected using the epipolar geometry of the stereo camera pair.
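Once the stereo pair has triangulated the pattern points in 3D, a 3D-3D match reduces pose estimation to rigid point-set alignment. A generic Kabsch/Umeyama sketch (illustrative only, not the engine's actual solver):

```python
import numpy as np

def align_3d_3d(model, observed):
    """Rigid transform (R, t) mapping matched model points onto observed
    3D points (Kabsch/Umeyama, no scale): observed_i ~ R @ model_i + t."""
    mu_m = model.mean(axis=0)
    mu_o = observed.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (model - mu_m).T @ (observed - mu_o)
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t
```

Compared to 2D matching, the 3D-3D formulation has a closed-form least-squares solution and needs no initial pose guess.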


Fusion between an inside-out and an outside-in system

A moving camera is equipped with an infrared LED pattern and observes a marker. The LED pattern is tracked by the Atracsys FTK 250 system. The fusion level is a mixture of feature-level and pose-level optimization.


Monocular visual odometry (experimental)

This seemingly simple scenario uses optical flow as the 2D matcher, while the target (World) is marked as extendable, which results in recursive monocular visual odometry.
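In monocular visual odometry, the flow correspondences between consecutive frames determine the relative camera pose up to scale, classically via the essential matrix. A generic linear eight-point sketch in normalized image coordinates (illustrative; not Shhuna's solver):

```python
import numpy as np

def essential_8pt(x1, x2):
    """Linear eight-point estimate of the essential matrix E from N >= 8
    correspondences in normalized camera coordinates, satisfying
    x2^T E x1 = 0, with the rank-2 equal-singular-value constraint
    enforced afterwards.

    x1, x2: (N, 3) homogeneous points (x, y, 1) in frames 1 and 2.
    """
    # each correspondence gives one linear equation in the 9 entries of E
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # project onto the essential manifold: singular values (s, s, 0)
    U, s, Vt2 = np.linalg.svd(E)
    sm = (s[0] + s[1]) / 2.0
    return U @ np.diag([sm, sm, 0.0]) @ Vt2
```

Decomposing E then yields the relative rotation and a unit-norm translation; the missing global scale is exactly why the scenario needs an extendable world model to stay consistent over time.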


Two-stage monocular visual odometry

A variation of the monocular visual odometry scenario above. Two optimization cores are used: one running keyframe-based bundle adjustment and one running pure localization. Both cores share the same world model and observation set.
