Deferred 3D Processing

It is possible to defer the 3D processing of the images. This allows time-critical applications to capture stereo image pairs quickly and perform the computationally demanding stereo matching later. The procedure is as follows:

Image Acquisition

  • Use the Capture command, or the Trigger and Retrieve commands, to acquire an image pair.

  • Retrieve the binary image data from the Images/Raw nodes and store it in your application’s memory or in a file (see the sketch after this list).

  • Retrieve the calibration data from the Calibration node and store it together with the image pair in memory or in a file.

  • Start over to capture the next image pair until your entire sequence is recorded.
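
A minimal C++ sketch of these acquisition steps for the in-memory variant could look as follows. It assumes the NxLib C++ interface with a camera item as used in the code examples below; the getBinaryDataInfo/getBinaryData accessors and the exact buffer handling may differ between SDK versions, so treat this as an illustration rather than a drop-in implementation.

#include <string>
#include <vector>

#include "nxLib.h"

// Sketch: retrieve the raw stereo pair and the calibration JSON into application memory.
std::vector<unsigned char> leftPixels, rightPixels;
int width, height, channels, bytesPerElement;
bool isFloat;
double timestamp;

// Query the image format of the raw left image (the right image has the same format)
camera[itmImages][itmRaw][itmLeft].getBinaryDataInfo(&width, &height, &channels, &bytesPerElement, &isFloat, &timestamp);

// Copy the binary pixel data of both images into the buffers
camera[itmImages][itmRaw][itmLeft].getBinaryData(leftPixels, nullptr);
camera[itmImages][itmRaw][itmRight].getBinaryData(rightPixels, nullptr);

// Store the calibration data as a JSON string next to the pixel buffers
std::string calibrationJson = camera[itmCalibration].asJson();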

Stereo Processing

There are two possible ways of processing the saved images: either you load the images back from memory, or you create a file camera from the saved images, which is the preferred approach. A minimal C++ sketch of the in-memory variant follows the list below.

  • From memory:
    • Load a stored image pair from your application’s memory into the Images/Raw nodes of the camera.

    • Restore the matching calibration data into the Calibration node.

    • Compute disparity and point maps using ComputeDisparityMap and ComputePointMap.

    • Start over to continue with the next stored image pair.

  • From a file camera:
    • Create a file camera from the saved files with the CreateCamera command.

    • Work with the file camera as with a regular camera.
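
The in-memory variant could look roughly like this in C++, reusing the buffers and the calibration JSON from the acquisition sketch above. The setBinaryData and setJson calls are taken from the NxLib C++ interface; passing the serial number through the Cameras parameter is an assumption made here to restrict the commands to a single camera.

// Sketch: write a stored image pair back into the tree and run the stereo pipeline on it.
camera[itmImages][itmRaw][itmLeft].setBinaryData(leftPixels, width, height, channels, isFloat);
camera[itmImages][itmRaw][itmRight].setBinaryData(rightPixels, width, height, channels, isFloat);

// Restore the matching calibration data; true = only write writable nodes (see the final note below)
camera[itmCalibration].setJson(calibrationJson, true);

// Compute the disparity map and the point map as if the images had just been captured
NxLibCommand computeDisparityMap(cmdComputeDisparityMap);
computeDisparityMap.parameters()[itmCameras] = "1234"; // replace with your camera's serial number
computeDisparityMap.execute();

NxLibCommand computePointMap(cmdComputePointMap);
computePointMap.parameters()[itmCameras] = "1234";
computePointMap.execute();

// The resulting point map is now available under the camera's Images/PointMap node.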

Note

Although the camera calibration is fixed, it is necessary to store the calibration data with every image pair in order to obtain accurate reconstructions, because the calibration compensates for temperature-induced deformations of the camera. These dynamic calibration effects are currently tracked within the Dynamic node of the camera’s calibration parameters.

Note

By storing the entire Calibration subtree you can be sure to obtain exactly the same reconstructions offline as online, even with future software revisions. Currently it is sufficient to save the entire Calibration subtree once and to store only the Dynamic parameter subtree for every image pair. Before processing the image pairs you can then restore the Calibration subtree (or leave it as it is, if you are using the same camera as for capturing) and set the Dynamic parameters for each image pair before doing the stereo matching.
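
In C++ this scheme could look roughly as follows. This is only a sketch: it addresses the Dynamic subnode by its name as a string and reuses the setJson call shown in the sketches above.

// Sketch: save the full calibration once per sequence and only the Dynamic subtree per image pair.
std::string fullCalibration = camera[itmCalibration].asJson();               // once for the whole sequence
std::string dynamicCalibration = camera[itmCalibration]["Dynamic"].asJson(); // once per image pair

// Before stereo matching: restore the full calibration once (or skip this when processing on the
// same camera), then set the Dynamic parameters belonging to the image pair about to be processed.
camera[itmCalibration].setJson(fullCalibration, true);
camera[itmCalibration]["Dynamic"].setJson(dynamicCalibration, true);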

Code Examples

The C++ sample shows the preferred way: saving the images as files and creating a file camera from them. The HALCON script shows how to set the Images/Raw nodes from images stored in memory.

#include <fstream>
#include <iostream>
#include <string>

#include "nxLib.h" // NxLib C++ interface; the library must have been initialized with nxLibInitialize()

NxLibItem root; // References the tree's root item at path "/"

// replace "1234" with your camera's serial number:
NxLibItem camera =
    root[itmCameras][itmBySerialNo]["1234"]; // References the camera's item at path "/Cameras/BySerialNo/1234"

// (1) Capture 10 images and store them together with their calibration data into files.

int NumberOfImagesToCapture = 10;
for (int i = 0; i < NumberOfImagesToCapture; i++) {
	// Execute the Capture command for this camera
	NxLibCommand capture(cmdCapture);
	capture.parameters()[itmCameras] = "1234"; // the Cameras parameter selects the camera to capture from
	capture.execute();

	// Create filenames to save the binary data and their calibration data
	std::string imageLeftFileName = std::to_string(i) + "_left.png";
	std::string imageRightFileName = std::to_string(i) + "_right.png";
	std::string calibrationFileName = std::to_string(i) + "_calib.json";

	// Invoke the save image command to save binary data into files
	NxLibCommand saveImage(cmdSaveImage);

	// Saving the left image
	saveImage.parameters()[itmFilename] = imageLeftFileName;
	saveImage.parameters()[itmNode] = camera[itmImages][itmRaw][itmLeft].path;
	saveImage.execute();

	// Saving the right image
	saveImage.parameters()[itmFilename] = imageRightFileName;
	saveImage.parameters()[itmNode] = camera[itmImages][itmRaw][itmRight].path;
	saveImage.execute();

	// Save the calibration data as json formatted std::string into a file
	std::ofstream calibrationFile(calibrationFileName);
	if (!calibrationFile) {
		std::cerr << "Could not open or create file: " << calibrationFileName << std::endl;
		break;
	}
	calibrationFile << camera[itmCalibration].asJson();
}

// (2) Store all the files within a folder and open the folder as a file camera. You can now work with the file
// camera created from the saved raw images and their corresponding calibration files.

// ...
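
A sketch of this step, assuming the CreateCamera command accepts a serial number for the new file camera and a FolderPath parameter pointing to the folder with the saved files (parameter names can differ between SDK versions, so check the command reference; the serial number and path used here are placeholders):

// Sketch: create a file camera from the folder containing the saved files and open it.
NxLibCommand createCamera(cmdCreateCamera);
createCamera.parameters()[itmSerialNumber] = "file_cam";         // placeholder serial for the file camera
createCamera.parameters()["FolderPath"] = "path/to/saved/files"; // placeholder folder path
createCamera.execute();

NxLibCommand open(cmdOpen);
open.parameters()[itmCameras] = "file_cam";
open.execute();

// The file camera can now be used like a physical camera, e.g. with Capture,
// ComputeDisparityMap and ComputePointMap.

The following HALCON script shows the alternative approach of writing the Images/Raw nodes directly from images kept in memory.
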
* References the tree's root item at path "/"
open_framegrabber ('Ensenso-NxLib', 0, 0, 0, 0, 0, 0, 'default', 0, 'Raw', -1, 'false', 'Item', '/', 0, 0, RootHandle)

* Open the camera and reference the camera's item at path "/Cameras/BySerialNo/1234"
* replace "1234" with your camera's serial number
Serial := '1234'
open_framegrabber ('Ensenso-NxLib', 0, 0, 0, 0, 0, 0, 'default', 0, 'Raw', 'auto_grab_data=0', 'false', 'Stereo', Serial, 0, 0, CameraHandle)
set_framegrabber_param (CameraHandle, 'grab_data_items', ['Images/Raw/Left', 'Images/Raw/Right'])

* (1) Capture 10 images and store them together with their calibration data into tuples

gen_empty_obj (LeftImages)
gen_empty_obj (RightImages)
Calibrations := []

NumberOfImagesToCapture := 10
for Index := 1 to NumberOfImagesToCapture by 1
        * Execute the Capture command with default parameters
        set_framegrabber_param (RootHandle, 'do_execute', 'Capture')

        * Retrieve Raw Images
        grab_data (Images, Regions, Contours, CameraHandle, Data)
        select_obj (Images, Left, 1)
        select_obj (Images, Right, 2)

        * Copy calibration data in JSON format into string variable
        get_framegrabber_param (CameraHandle, 'Calibration', Calibration)

        concat_obj (LeftImages, Left, LeftImages)
        concat_obj (RightImages, Right, RightImages)
        Calibrations := [Calibrations, Calibration]
endfor

* (2) Load all images and their calibration data into the tree items and generate point clouds for each image pair

set_framegrabber_param (CameraHandle, 'grab_data_items', 'Images/PointMap')

count_obj (LeftImages, NumImages)
for Index := 1 to NumImages by 1
        * instead of calling Capture we can now simply write the raw image data into the tree nodes for the left and right image
        select_obj (LeftImages, Left, Index)
        select_obj (RightImages, Right, Index)
        nxLibSetBinary (Left, CameraHandle, 'Images/Raw/Left')
        nxLibSetBinary (Right, CameraHandle, 'Images/Raw/Right')

        * Restore calibration data from the saved JSON representation (control tuples are zero-based)
        set_framegrabber_param (CameraHandle, 'Calibration', ['apply', Calibrations[Index - 1]])

        * Now we can compute the disparity map and point cloud as if we captured the image normally

        set_framegrabber_param (RootHandle, 'do_execute', 'ComputeDisparityMap')
        set_framegrabber_param (RootHandle, 'do_execute', 'ComputePointMap')

        grab_data (PointMap, Region, Contours, CameraHandle, Data)

        * You can now compute something on the point cloud data
        * ...

endfor

Note

Some subnodes are read-only, so the onlyWriteableNodes parameter of setJson has to be set to true. The function will then not fail on read-only nodes, but simply continue and restore all writable parameters.