Texturing Point Clouds

You can use an additional uEye camera to provide colors for the 3D data of your stereo camera. To do this, you first have to calibrate the monocular camera to the view of the stereo camera.

Calibrating the Monocular Camera with the Calibration Wizard

  • Open NxView.

  • Select both your stereo camera and the monocular camera that you want to calibrate to it.

  • Click “Calibrate…” and follow the instructions on the screen. This will calibrate the monocular camera’s internal geometry and set its Link node to the relative position between the two cameras.

Calibrating the Monocular Camera Manually

  • Collect calibration patterns with the CollectPattern command. It is important to collect patterns that appear relatively large in the monocular camera’s field of view (these are needed to calibrate its internal geometry) as well as patterns that are visible in both cameras. A scripted version of the whole workflow is sketched after this list.

  • Use the Calibrate command to calibrate the link between the two cameras. Simply specify the serial numbers of the two cameras in the Cameras parameter. The command will automatically detect that you want to calibrate the link between a monocular and a stereo camera.

  • Use the StoreCalibration command to persistently store the calibration in the camera’s EEPROM.
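
If you want to script this workflow, the sketch below shows the rough shape it could take. It assumes both cameras are already open and that you move the calibration pattern between iterations; the serial numbers are placeholders, and the Link parameter of StoreCalibration is an assumption you should verify against the command reference of your NxLib version.

// Collect patterns from several different pattern positions.
for (int i = 0; i < 20; i++) {
    // Capture images with all open cameras, then collect the visible patterns.
    NxLibCommand capture(cmdCapture);
    capture.execute();
    NxLibCommand collectPattern(cmdCollectPattern);
    collectPattern.execute();
    // Move the calibration pattern before the next iteration.
}

// Calibrate the link between the two cameras from the collected patterns.
NxLibCommand calibrate(cmdCalibrate);
calibrate.parameters()[itmCameras][0] = "STEREO SERIAL";
calibrate.parameters()[itmCameras][1] = "MONO SERIAL";
calibrate.execute();

// Persist the result so that it survives power cycles.
NxLibCommand storeCalibration(cmdStoreCalibration);
storeCalibration.parameters()[itmCameras][0] = "MONO SERIAL";
storeCalibration.parameters()[itmLink] = true; // assumption: also store the computed link
storeCalibration.execute();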

Combining Data from Stereo and Mono Camera

After you have successfully calibrated the monocular camera, you can open both cameras in NxView. The point cloud will automatically be textured with the images of the monocular camera. This is achieved by executing the RenderView command with both cameras specified in its Cameras parameter.
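
In your own code you can trigger the same rendering directly. A minimal sketch, assuming both cameras are open and have captured images; the serial numbers are placeholders:

NxLibCommand renderView(cmdRenderView);
renderView.parameters()[itmCameras][0] = "STEREO SERIAL"; // stereo camera providing the 3D data
renderView.parameters()[itmCameras][1] = "MONO SERIAL";   // mono camera providing the texture
renderView.execute();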

To combine the data of both cameras in your own program, you can use the RenderPointMap command. Use its Camera parameter to render the 3D data from the point of view of the mono camera.

// Render the stereo point cloud from the perspective of the mono camera.
NxLibCommand renderPointMap(cmdRenderPointMap);
renderPointMap.parameters()[itmCamera] = "MONO SERIAL"; // serial number of the calibrated mono camera
renderPointMap.execute();

The resulting rendered point map and texture image are pixel-aligned with each other, and their pixel positions also coincide with those of the mono camera’s rectified image.
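
You can then read the rendered images from the NxLib tree. The following is a minimal sketch; it assumes that your NxLib version places the results under the global Images node as RenderPointMap and RenderPointMapTexture, which you should verify in the command reference.

#include <vector>

// Query the size of the rendered point map.
NxLibItem images = NxLibItem()[itmImages];
int width = 0, height = 0;
images[itmRenderPointMap].getBinaryDataInfo(&width, &height, 0, 0, 0, 0);

// Read the point map (three floats per pixel: x, y, z) and the aligned texture.
std::vector<float> pointMap;
images[itmRenderPointMap].getBinaryData(pointMap, 0);
std::vector<unsigned char> texture;
images[itmRenderPointMapTexture].getBinaryData(texture, 0);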

For more advanced use cases, you can project 3D positions (e.g. from a stereo camera’s point map) into the (undistorted) mono camera image. To do this, take the 3D position (in world coordinates), apply the mono camera’s pose transformation to move it into the mono camera’s frame, and project it into the image with the camera matrix.
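
As an illustration, here is a minimal, self-contained sketch of that projection. The rotation r, translation t and camera matrix entries fx, fy, cx, cy are placeholders; in practice you would read them from the mono camera’s Link and Calibration nodes:

struct Vec3 { double x, y, z; };

// Apply the rigid pose transformation p_cam = R * p_world + t.
Vec3 worldToCamera(Vec3 const& p, double const r[3][3], Vec3 const& t) {
    return {r[0][0] * p.x + r[0][1] * p.y + r[0][2] * p.z + t.x,
            r[1][0] * p.x + r[1][1] * p.y + r[1][2] * p.z + t.y,
            r[2][0] * p.x + r[2][1] * p.y + r[2][2] * p.z + t.z};
}

// Pinhole projection with the camera matrix [fx 0 cx; 0 fy cy; 0 0 1].
void projectToImage(Vec3 const& pCam, double fx, double fy, double cx, double cy,
                    double& u, double& v) {
    u = fx * pCam.x / pCam.z + cx;
    v = fy * pCam.y / pCam.z + cy;
}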