Using Virtual Cameras

Starting with EnsensoSDK 2.2, NxLib can emulate hardware cameras by simulating raw images of a scene of objects given as STL or PLY files.

[Image: example virtual scene]

Creating a Virtual Camera

Open NxView, click the Create button and select Virtual Camera:

[Image: Create Camera button in NxView]

In the following dialog you can enter the model identifier of the camera to be simulated:

[Image: Create Virtual Camera dialog]

You can simply use the online camera selector to find a suitable model for your working volume and paste the camera model name into the edit field. After clicking OK the virtual camera is created and appears in your camera list.

Note

The serial number of the camera is automatically set to the model name. If necessary, you can specify a different serial number under Advanced Options.
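
If you want to create virtual cameras programmatically instead, NxLib provides the CreateCamera command. Below is a minimal C++ sketch; the parameter names ModelName and SerialNumber as well as the model identifier are assumptions, so verify them against the CreateCamera command reference of your SDK version.

    #include "nxLib.h"
    #include <iostream>

    int main() {
        try {
            nxLibInitialize(true); // Connect to NxLib and wait for initialization.

            // Create a virtual camera. The parameter names "ModelName" and
            // "SerialNumber" are assumptions; check the CreateCamera command
            // reference of your SDK version.
            NxLibCommand create("CreateCamera");
            create.parameters()["ModelName"] = "N35-606-16-BL"; // Example model identifier.
            create.parameters()["SerialNumber"] = "virtual-cam-1";
            create.execute();

            std::cout << "Virtual camera created." << std::endl;
            nxLibFinalize();
        } catch (NxLibException& e) {
            std::cerr << e.getItemPath() << ": " << e.getErrorText() << std::endl;
        }
        return 0;
    }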

Using Virtual Cameras in NxView

Virtual cameras can be opened just like normal cameras in NxView and via the NxLib API. From the API the cameras behave as if they were normal hardware cameras, except for a few minor differences (a minimal open-and-capture sketch follows the list):

  • They can only be operated in software trigger mode

  • The cameras image the scene independently, i.e. each projector's pattern is visible only in its corresponding left and right camera, not in other stereo or monocular cameras

  • The cameras additionally support controlling optical parameters
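
As an illustration, the following minimal C++ sketch opens a virtual camera and captures a software-triggered image. It assumes the serial number from the creation example above and the SDK 2.x tree layout with cameras listed under BySerialNo.

    #include "nxLib.h"
    #include <iostream>

    int main() {
        try {
            nxLibInitialize(true);

            std::string serial = "virtual-cam-1"; // Serial from the creation example above.
            NxLibItem camera = NxLibItem()[itmCameras][itmBySerialNo][serial];

            // Open the virtual camera exactly like a hardware camera.
            NxLibCommand open(cmdOpen);
            open.parameters()[itmCameras] = serial;
            open.execute();

            // Virtual cameras only support software triggering; the Capture
            // command triggers and retrieves the simulated raw images.
            NxLibCommand capture(cmdCapture);
            capture.parameters()[itmCameras] = serial;
            capture.execute();

            // Query the size of the simulated left raw image.
            int width = 0, height = 0;
            camera[itmImages][itmRaw][itmLeft].getBinaryDataInfo(&width, &height, nullptr, nullptr, nullptr, nullptr);
            std::cout << "Captured raw image: " << width << "x" << height << std::endl;

            NxLibCommand(cmdClose).execute();
            nxLibFinalize();
        } catch (NxLibException& e) {
            std::cerr << e.getItemPath() << ": " << e.getErrorText() << std::endl;
        }
        return 0;
    }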

Modifying Objects Manually

After opening a virtual camera you can manually load STL or PLY objects into the scene by opening the Render Objects dialog from the menu Tools->Render Objects. Use the Objects tab to load and position objects manually and to specify their materials. The Camera Links tab allows you to quickly reposition opened cameras to adjust their viewing positions. A sketch of editing the scene through the API follows below.
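
The scene can also be edited from the API through the global Objects node. The sketch below places a single STL object; the subnode names (Filename, Link/Translation) and the millimeter units are assumptions based on the tree reference, so check them against the Objects node documentation of your SDK version.

    #include "nxLib.h"

    // Place one mesh object into the virtual scene by writing to the global
    // Objects node. Subnode names are assumptions; see the note above.
    void loadSceneObject() {
        NxLibItem object = NxLibItem()["Objects"][0];

        object["Filename"] = "part.stl";               // STL or PLY mesh to render.
        object["Link"]["Translation"][0] = 0;          // Object pose relative to the
        object["Link"]["Translation"][1] = 0;          // scene origin, assumed to be
        object["Link"]["Translation"][2] = 500;        // given in millimeters.
    }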

Throwing Models into the Scene

As an alternative to placing parts manually, you can throw models into the scene using a simple integrated physics simulation. First position a container or box at a fixed location in the scene. Then select a model on the Physics tab and throw an instance of it into the container by pressing the T key while the 3D view in the NxView main window is active. Each key press throws one instance of the model along the current viewing direction.

Note

  • Make sure live mode is enabled when throwing models into the scene, so that you can see them fall into the box.

  • The cost of physically simulating many complex models and their interactions grows quickly with the model count and slows down the simulation. After throwing a few parts, use the Fix all Objects button on the Physics tab to fix all dropped parts at their current positions. This keeps the simulation fast when dropping further parts.

Creating a Randomly Filled Bin Using the Scene Wizard

Instead of manually placing or throwing objects into the scene, you can select the menu item Tools->Scene Wizard and generate a box filled randomly with one type of object. Just enter the box dimensions in the Scene Wizard, select your model from an STL or PLY file or use one of the integrated shapes, and let the wizard run the physics simulation to fill the box with randomly placed and randomly oriented parts.

Using Virtual Cameras in User Applications

The simplest approach to using virtual cameras in a user application is to create the camera and the corresponding scene in NxView and export the object positions using the Export button in the Render Objects dialog. This saves the Objects node, containing all object positions and material properties, to a JSON file. The user application then fills the Objects node with the content of the exported JSON file and opens the virtual camera, as sketched below. Afterwards it can grab images from the virtual scene just as it would from a hardware camera.
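
The following minimal C++ sketch shows this workflow, assuming a placeholder scene file name and camera serial number:

    #include "nxLib.h"
    #include <fstream>
    #include <iostream>
    #include <sstream>

    int main() {
        try {
            nxLibInitialize(true);

            // Read the scene exported from the Render Objects dialog
            // (placeholder file name).
            std::ifstream file("exported_scene.json");
            std::stringstream buffer;
            buffer << file.rdbuf();

            // Fill the global Objects node with the exported object positions
            // and material properties before opening the camera.
            NxLibItem()["Objects"].setJson(buffer.str(), true);

            // Open the virtual camera (placeholder serial) and grab images
            // just as with a hardware camera.
            std::string serial = "virtual-cam-1";
            NxLibCommand open(cmdOpen);
            open.parameters()[itmCameras] = serial;
            open.execute();

            NxLibCommand(cmdCapture).execute();

            nxLibFinalize();
        } catch (NxLibException& e) {
            std::cerr << e.getItemPath() << ": " << e.getErrorText() << std::endl;
        }
        return 0;
    }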

Accuracy and Limitations of Camera Simulation

NxLib virtual cameras try to simulate many properties of active stereo vision cameras. Each property can be simulated more or less accurately, so the table below gives a rough categorization of the simulation accuracy into the categories reliable, approximate and rough guess only.

Simulated Properties

  • Geometry: Projection Geometry, Lens Distortion, Focal Length, Sensor Resolution, Sensor Pixel Sampling, Foreshortening [2]

  • Optical Properties: Vignetting, Depth of Field / Lens Aperture, Sensor Noise, Projection Power, Ambient Light

  • Material Properties: Reflection [1], Color [2], Volume Scattering

Reliability of Results

  • Geometry: Field of View, XY-Resolution, Z-Resolution [3], Shadowing / Data Coverage [3]

  • Optical Properties: Exposure / Gain

Footnotes

[1] Reflection is simulated using the Phong shading model.

[2] Gray conversion is done by taking the blue channel of the rendered RGB result image; color simulation for N-series IR models is therefore currently not supported.

[3] Excluding material-dependent effects.