RenderPointMap

Details

Renders the camera surfaces as an orthographic/telecentric or a perspective projection and outputs either a depth map containing only Z coordinates or a full, three-channel XYZ point map. The resulting depth data is stored in /Images/RenderPointMap; the corresponding texture information from monocular cameras is stored in /Images/RenderPointMapTexture.

By default, a telecentric projection is generated using each camera's DisparityMap and the parameters from the global /Parameters/RenderPointMap node. When the Camera parameter is specified, a perspective projection is rendered instead.
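
The following minimal sketch shows one way to invoke the command from the NxLib C++ interface and read back the result. The serial number "1234" is a placeholder, and the camera is assumed to be open with its DisparityMap already computed; exact API details may differ between NxLib versions.

// Minimal sketch (NxLib C++ API): render a perspective point map for one
// camera and read back the result. The serial number "1234" is a placeholder;
// the camera is assumed to be open and its DisparityMap already computed.
#include "nxLib.h"

#include <iostream>
#include <vector>

int main()
{
    nxLibInitialize(true);

    NxLibCommand render("RenderPointMap");
    render.parameters()["Camera"] = "1234";  // omit for the default telecentric projection
    render.execute();

    // The result is written to the global output node /Images/RenderPointMap;
    // texture from monocular cameras is available under /Images/RenderPointMapTexture.
    NxLibItem pointMap = NxLibItem()["Images"]["RenderPointMap"];

    int width, height, channels, bytesPerElement;
    bool isFloat;
    double timestamp;
    pointMap.getBinaryDataInfo(&width, &height, &channels, &bytesPerElement, &isFloat, &timestamp);
    std::cout << "Rendered a " << width << "x" << height << " point map with "
              << channels << " channel(s)" << std::endl;

    // XYZ coordinates, already transformed into world/workspace coordinates.
    std::vector<float> xyz;
    pointMap.getBinaryData(xyz, &timestamp);

    nxLibFinalize();
    return 0;
}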

Note

The point coordinates are automatically transformed into world/workspace coordinates.

Error Symbols

GLDriver

Cause: There was a problem during initialization of OpenGL.

Solution: Verify that your graphics driver is correctly installed and supports the required OpenGL version as indicated in the system requirements, or use the CPU rendering method by setting the UseOpenGL parameter to false.
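
A minimal sketch of this workaround, again assuming the NxLib C++ interface:

// Sketch: switch RenderPointMap to CPU rendering to work around the GLDriver error.
#include "nxLib.h"

int main()
{
    nxLibInitialize(true);

    // Per-call override of the rendering method.
    NxLibCommand render("RenderPointMap");
    render.parameters()["UseOpenGL"] = false;  // use the CPU rendering method
    render.execute();

    nxLibFinalize();
    return 0;
}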

Note

Rendering commands like RenderPointMap and RenderView use a single OpenGL context and are therefore serialized internally and run sequentially, even when executed from separate Execute nodes. As these commands also have a single, global output node, it is recommended to run at most one instance of each at a time.

Rendering Methods

Because CPU and GPU architectures differ significantly, the RenderPointMap command uses two different methods to transform the cameras' depth data into another perspective, selected by the UseOpenGL parameter. The two methods produce slightly different results and each has its own advantages and disadvantages, so read the descriptions below and choose what best fits your application and computer setup.

Rendering with OpenGL on the GPU

In order to handle the depth data efficiently on the graphics card, we transform the disparity map into a mesh of triangles in 3D space. Each square of 2x2 pixels in the disparity map is divided into two triangles by computing each pixel's 3D coordinate. If two points of a triangle are more than SurfaceConnectivity apart in the camera's Z direction, we consider those pixels to be on different sides of an edge in the surface and discard the corresponding triangle, thereby breaking up the virtual surface at this edge.

Doing this for all cameras' depth data gives us triangulated surface meshes of all cameras, which we can efficiently pass to OpenGL for rendering from a perspective of our choice.
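
The following illustrative sketch (not the actual implementation) shows the idea described above: triangulating a grid of 3D points and discarding triangles whose corners lie more than a connectivity threshold apart in Z. The data structures and function names are assumptions for illustration only.

// Illustrative sketch: build a triangle mesh from a grid of 3D points and drop
// triangles spanning a depth jump larger than a connectivity threshold,
// analogous to the SurfaceConnectivity parameter.
#include <array>
#include <cmath>
#include <optional>
#include <vector>

struct Point3 { float x, y, z; };
using Triangle = std::array<Point3, 3>;

// points: row-major grid of width*height points; invalid pixels are std::nullopt.
std::vector<Triangle> triangulate(const std::vector<std::optional<Point3>>& points,
                                  int width, int height, float surfaceConnectivity)
{
    auto zSpanOk = [&](const Triangle& t) {
        // Discard the triangle if any two corners differ by more than the
        // threshold along the camera's Z direction (an edge in the surface).
        for (int i = 0; i < 3; ++i)
            for (int j = i + 1; j < 3; ++j)
                if (std::fabs(t[i].z - t[j].z) > surfaceConnectivity) return false;
        return true;
    };

    std::vector<Triangle> mesh;
    for (int y = 0; y + 1 < height; ++y) {
        for (int x = 0; x + 1 < width; ++x) {
            // The four corners of one 2x2 pixel square.
            auto p00 = points[y * width + x];
            auto p10 = points[y * width + x + 1];
            auto p01 = points[(y + 1) * width + x];
            auto p11 = points[(y + 1) * width + x + 1];
            if (!p00 || !p10 || !p01 || !p11) continue;  // skip holes in the disparity map

            // Split the square into two triangles.
            Triangle a = {*p00, *p10, *p11};
            Triangle b = {*p00, *p11, *p01};
            if (zSpanOk(a)) mesh.push_back(a);
            if (zSpanOk(b)) mesh.push_back(b);
        }
    }
    return mesh;
}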

Rendering on the CPU

As the CPU is less suited to handling triangle meshes, a slightly simpler transformation method is used here. Each point in the disparity map is transformed into its corresponding 3D point. Instead of forming triangles with its neighbors, the 3D point is then directly projected into the target image, where its coordinates are recorded. If two points hit the same pixel in the target image, the one with the smaller Z value in the target perspective is kept in the rendered image (as in the OpenGL variant).

If the density of 3D points is much lower than the target perspective's pixel resolution, however, not every pixel of the rendered image is hit by a projected point and some remain empty. This leads to aliasing or very sparsely filled rendered images and might complicate further processing. To circumvent this, you can set a Scaling factor (smaller than 1) by which the resolution of the target image is reduced during rendering, so that the pixels are large enough for each one to be hit by a projected 3D point. After projecting all points into the reduced target image, the image is upscaled to the final requested Size and PixelSize and the holes are filled by nearest-neighbor interpolation.
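
The following illustrative sketch (again not the actual implementation) shows the idea of projecting points into a reduced-resolution depth image with a Z buffer and then upscaling with nearest-neighbor filling. The orthographic mapping and coordinate handling are simplified assumptions: the points are taken to be already expressed in the target view's coordinate frame with the origin at the image corner.

// Illustrative sketch: project 3D points into a reduced-resolution target
// image with a Z buffer, then upscale by nearest-neighbor filling, analogous
// to the CPU rendering path and the Scaling parameter described above.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Point3 { float x, y, z; };

// Orthographic projection onto a grid with the given pixel size; 'scaling' < 1
// enlarges the effective pixel so that every pixel is hit by at least one point.
std::vector<float> renderDepth(const std::vector<Point3>& points,
                               int width, int height, float pixelSize, float scaling)
{
    int lowW = std::max(1, static_cast<int>(std::lround(width * scaling)));
    int lowH = std::max(1, static_cast<int>(std::lround(height * scaling)));
    float lowPixelSize = pixelSize * width / lowW;

    const float empty = std::numeric_limits<float>::quiet_NaN();
    std::vector<float> lowRes(lowW * lowH, empty);

    for (const Point3& p : points) {
        int u = static_cast<int>(std::floor(p.x / lowPixelSize));
        int v = static_cast<int>(std::floor(p.y / lowPixelSize));
        if (u < 0 || u >= lowW || v < 0 || v >= lowH) continue;
        float& d = lowRes[v * lowW + u];
        if (std::isnan(d) || p.z < d) d = p.z;  // keep the point closest to the viewer
    }

    // Upscale to the requested size; each output pixel takes the value of its
    // nearest neighbor in the reduced image, filling the remaining holes.
    std::vector<float> result(width * height, empty);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int u = std::min(lowW - 1, static_cast<int>(x * scaling));
            int v = std::min(lowH - 1, static_cast<int>(y * scaling));
            result[y * width + x] = lowRes[v * lowW + u];
        }
    return result;
}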

Comparison of rendering methods

With OpenGL

Advantage: Using the triangulated meshes, the GPU takes care of correctly resampling the mesh regardless of the target image resolution.

Disadvantage: The triangulation requires setting SurfaceConnectivity a priori. Too large values might lead to spurious surfaces along the viewing direction; too small values might break surfaces apart due to noise on the Z coordinates.

On the CPU

Advantage: No graphics card is required. This simpler method might also outperform the OpenGL variant on mid-range graphics cards.

Disadvantage: You might need to set the Scaling factor to avoid sparse result images and aliasing when the target resolution is higher than the density of the 3D point cloud. Choosing a good Scaling factor can be difficult when dealing with point clouds of very different resolutions.