Deferred 3D Processing

It is possible to defer the 3D processing of the images. This allows time-critical applications to capture stereo image pairs quickly and perform the computationally demanding stereo matching later. The procedure is as follows:

Image Acquisition

Use the Capture command, or Trigger followed by Retrieve, to acquire an image pair

Retrieve the binary image data from the Images/Raw nodes and store it in your application's memory

Retrieve the calibration data from the Calibration node and store it with the image pair

Start over to capture the next image pair until your entire sequence is recorded (a sketch of this acquisition loop follows below)
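
The following C++ sketch illustrates the acquisition loop. It is only an illustration, not one of the official examples: the StoredPair struct and the function name are hypothetical, the camera is assumed to be open already, and camera refers to its node in the tree (e.g. NxLibItem()[itmCameras][itmBySerialNo][serial]).

    #include "nxLib.h"

    #include <string>
    #include <vector>

    // Holds one recorded image pair together with its matching calibration data.
    struct StoredPair {
        std::vector<unsigned char> left, right;  // raw image buffers (assuming 8-bit raw images)
        int width = 0, height = 0, channels = 0; // image layout
        bool isFloat = false;
        std::string calibrationJson;             // Calibration subtree stored as JSON
    };

    StoredPair captureAndStore(NxLibItem const& camera, std::string const& serial)
    {
        // Acquire a new raw stereo image pair.
        NxLibCommand capture(cmdCapture);
        capture.parameters()[itmCameras] = serial;
        capture.execute();

        StoredPair pair;
        NxLibItem raw = camera[itmImages][itmRaw];

        // Query the image layout; left and right raw images share the same format.
        raw[itmLeft].getBinaryDataInfo(&pair.width, &pair.height, &pair.channels, 0, &pair.isFloat, 0);

        // Copy the binary image data into application memory.
        raw[itmLeft].getBinaryData(pair.left, 0);
        raw[itmRight].getBinaryData(pair.right, 0);

        // Store the calibration data together with the image pair.
        pair.calibrationJson = camera[itmCalibration].asJson();
        return pair;
    }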

Stereo Processing

Load a stored image pair from your application's memory into the Images/Raw nodes of the camera

Restore the matching calibration data into the Calibration node

Compute the disparity and point maps using the ComputeDisparityMap and ComputePointMap commands

Start over to continue with the next stored image pair (see the sketch below)
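
A matching sketch for the offline processing of one stored pair could look like this. It continues the acquisition sketch above (same hypothetical StoredPair struct and camera node) and is not an official example.

    void processStoredPair(NxLibItem const& camera, std::string const& serial, StoredPair const& pair)
    {
        // Load the stored raw image pair back into the camera's Images/Raw nodes.
        NxLibItem raw = camera[itmImages][itmRaw];
        raw[itmLeft].setBinaryData(pair.left, pair.width, pair.height, pair.channels, pair.isFloat);
        raw[itmRight].setBinaryData(pair.right, pair.width, pair.height, pair.channels, pair.isFloat);

        // Restore the calibration data that was recorded with this image pair
        // (true = only set writable nodes).
        camera[itmCalibration].setJson(pair.calibrationJson, true);

        // Run the computationally demanding stereo matching now.
        NxLibCommand computeDisparity(cmdComputeDisparityMap);
        computeDisparity.parameters()[itmCameras] = serial;
        computeDisparity.execute();

        NxLibCommand computePoints(cmdComputePointMap);
        computePoints.parameters()[itmCameras] = serial;
        computePoints.execute();

        // The results can now be read from the camera's Images/DisparityMap and
        // Images/PointMap nodes, e.g. via getBinaryData.
    }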

Note: Although the camera calibration is fixed, it is necessary to store the calibration data with every image pair in order to obtain accurate reconstructions, because temperature-induced deformations of the camera must be compensated for. These dynamic calibration effects are currently tracked within the Dynamic node of the camera's calibration parameters.

Note: By storing the entire Calibration subtree you can be sure to obtain exactly the same reconstructions offline as online, even in future software revisions. Currently it is sufficient to save the entire Calibration subtree once and to store only the Dynamic parameter subtree for every image pair. Before processing the image pairs you can then restore the Calibration subtree (or leave it as it is, if you are using the same camera as for capturing) and set the Dynamic parameters for each image pair before doing the stereo matching.
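
In code, this space-saving variant could look like the following fragment, which again assumes the camera node from the sketches above. The Dynamic subnode is addressed here by its plain string name.

    // Save the full Calibration subtree once per camera ...
    std::string fullCalibration = camera[itmCalibration].asJson();

    // ... and only the Dynamic subtree with every image pair.
    std::string dynamicPerPair = camera[itmCalibration]["Dynamic"].asJson();

    // Before processing: restore the full calibration once (or skip this step when
    // using the same camera as for capturing), then set the Dynamic parameters for
    // each image pair before the stereo matching.
    camera[itmCalibration].setJson(fullCalibration, true);
    camera[itmCalibration]["Dynamic"].setJson(dynamicPerPair, true);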

Code Examples

C++

Halcon/HDevelop