Hand-Eye Calibration


With a hand-eye calibration, you can determine the position of the camera relative to a robot. The NxLib supports two different types of setup. The camera can either be fixed with respect to the robot's base or it can be mounted on the robot's hand and move with it.

 

Fixed Camera

When the camera is fixed, the calibration determines the position of the camera relative to the robot's base.

In order to do this, you need to grasp a calibration pattern with the robot and move it to different poses. Capture an image of the calibration pattern in each of these poses. The calibration can then calculate the camera's position as well as the position of the calibration pattern relative to the robot's hand (which does not need to be known beforehand).

Moving Camera

When the camera is mounted on the robot's hand, the calibration determines the exact position of the camera relative to the robot's hand.

In order to do this, you need to place a calibration pattern in the robot's workspace. Then you can move the robot to different poses and capture an image of the calibration pattern from each of them. The calibration can then calculate the camera's exact position on the robot as well as the position of the calibration pattern in the robot's workspace (which does not need to be known beforehand).

 

Steps to Perform a Hand-Eye Calibration

Both types of hand-eye calibration follow almost the same steps. The instructions below are written for a fixed camera setup; where the two setups differ, the instructions for a moving camera are given in parentheses.

 

1. Mount the camera inside of the robot's workspace (or on the robot's hand).

2. Place the calibration pattern in the robot's hand (or somewhere in the robot's workspace).

3. Move the robot into the start pose, i.e. a position and orientation from which the camera can detect the calibration pattern well.

4. Set the GridSpacing parameter to the correct value for your pattern. Alternatively, you can use the DecodeData parameter of the CollectPattern command below.

5. Make sure that there are no collected patterns left in the pattern buffer by checking PatternCount or by executing the DiscardPatterns command.

6. Move the robot into different poses around the start pose. Random angle and position deltas work fairly well and are easy to implement. At each of these poses, perform the following steps:

- Wait for a moment (between 0.5 and 2 seconds). Most robots oscillate around the target position a little, even after signaling that they have reached the target pose. Not waiting long enough can therefore reduce the calibration accuracy significantly.

- Capture an image.

- Collect an observation of the calibration pattern using the CollectPattern command. The observed pattern position will be stored in the pattern buffer.

- Retrieve the exact pose of the robot hand from your robot and store it in your own program. You can also store it in a temporary node inside of the NxLib tree.

7. After observing the calibration pattern at least 5 times, you can start the calibration. The more observations you have stored, the more accurate the calibration will be (at the cost of a longer calibration time).

8. Execute the CalibrateHandEye command. You will need to pass your stored robot poses in the Transformations parameter. Additionally, you can specify the type of calibration that you are performing (fixed or moving) with the Setup parameter.

9. Wait for the command to finish. The calibration might take around 15 to 120 seconds to complete.

 

When the calibration is finished, it will set the Link node of your camera to the estimated transformation (either between the camera and the robot's hand or between the camera and the robot's base). Additionally, it will set the camera's Target node to the Target parameter that was given to the CalibrateHandEye command. This is important, as you can use this link later to get 3D data in world coordinates. See below for detailed instructions on how this works.

Additionally, you can find the estimated pose of the pattern in the command's result node. This pose is either relative to the robot's hand (for a fixed camera) or to the robot's base (for a moving camera).

Note: The new link data is not stored on the camera permanently unless you use the StoreCalibration command.
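
For example, persisting the link with the NxLib C++ interface could look like the following minimal sketch. The serial number "12345" is a placeholder, and the Link parameter is assumed here to restrict storage to the link data; check the StoreCalibration documentation for the parameters supported by your NxLib version.

C++

// Persist the calibrated link on the camera so that it survives power cycles.
NxLibCommand store(cmdStoreCalibration);
store.parameters()[itmCameras] = "12345"; // placeholder serial number
store.parameters()[itmLink] = true;       // store the new link information
store.execute();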

Example Code

The following code outlines the process of a hand-eye calibration. Note that it only contains the parts that are done by the NxLib. You will have to insert the necessary code to move your robot and get its current position.

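The sketch below uses the NxLib C++ interface. The Pose type, the moveRobotToRandomPoseAroundStart() and getRobotPose() functions, the grid spacing value and the pose count are placeholders; the camera is addressed directly by its serial number (older NxLib versions use the BySerialNo subnode).

C++

#include "nxLib.h"

#include <chrono>
#include <string>
#include <thread>
#include <vector>

// Placeholder pose type (angle-axis rotation plus translation, matching the
// representation used by NxLib transformation nodes).
struct Pose
{
    double angle;
    double axis[3];
    double translation[3];
};

// Placeholders for your own robot interface.
extern void moveRobotToRandomPoseAroundStart();
extern Pose getRobotPose();

// Write a pose into an NxLib transformation node.
void writePose(NxLibItem const& node, Pose const& pose)
{
    node[itmRotation][itmAngle] = pose.angle;
    for (int i = 0; i < 3; i++) {
        node[itmRotation][itmAxis][i] = pose.axis[i];
        node[itmTranslation][i] = pose.translation[i];
    }
}

void handEyeCalibration(std::string const& cameraSerial)
{
    NxLibItem root;

    // Set the grid spacing of the calibration pattern (example value, in mm).
    root[itmCameras][cameraSerial][itmParameters][itmPattern][itmGridSpacing] = 36.4;

    // Make sure that the pattern buffer is empty.
    NxLibCommand(cmdDiscardPatterns).execute();

    // Collect pattern observations and remember the corresponding robot poses.
    std::vector<Pose> robotPoses;
    int const poseCount = 20;
    for (int i = 0; i < poseCount; i++) {
        moveRobotToRandomPoseAroundStart();

        // Let the robot settle before capturing.
        std::this_thread::sleep_for(std::chrono::seconds(1));

        NxLibCommand capture(cmdCapture);
        capture.parameters()[itmCameras] = cameraSerial;
        capture.execute();

        NxLibCommand(cmdCollectPattern).execute();
        robotPoses.push_back(getRobotPose());
    }

    // Run the calibration with the stored robot poses.
    NxLibCommand calibrate(cmdCalibrateHandEye);
    calibrate.parameters()[itmSetup] = "Fixed"; // or "Moving" for a camera on the robot's hand
    for (int i = 0; i < (int)robotPoses.size(); i++) {
        writePose(calibrate.parameters()[itmTransformations][i], robotPoses[i]);
    }
    calibrate.execute(); // this can take a while, see above

    // The camera's Link node now holds the estimated transformation; the
    // estimated pattern pose is available in the command's result node.
}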

Grabbing 3D Data in World Coordinates

After performing a hand-eye calibration, you can get your 3D data in world coordinates. This works with the NxLib's transformation system. If you do not know this system yet, you can read about it in the documentation of the Links node or in this overview.

Note that you can change the coordinate system to which the camera is linked after the hand-eye calibration by specifying the calibration's Target parameter.

Fixed Camera

For a fixed camera, the transformation to robot coordinates is done automatically. The camera's Link node is set to the transformation between the camera and the robot's base, so any point map will automatically be transformed into robot base coordinates.

By default, the hand-eye calibration sets the camera's Target node to the "Workspace" coordinate system. When your world frame differs from your robot base frame, you can therefore specify the transformation between the two by setting the "Workspace" coordinate system in the Links node. An example function that grabs 3D data in workspace/world coordinates could look like this.

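The following is a minimal sketch, assuming an initialized NxLib and an already opened camera addressed by its serial number.

C++

#include "nxLib.h"

#include <string>
#include <vector>

std::vector<float> grabPointMapInWorldCoordinates(std::string const& cameraSerial)
{
    NxLibItem root;

    // If your world frame differs from the robot base frame, define the
    // "Workspace" frame here by writing the corresponding transformation
    // into the Links node.

    // Capture a stereo image pair.
    NxLibCommand capture(cmdCapture);
    capture.parameters()[itmCameras] = cameraSerial;
    capture.execute();

    // Compute the disparity map and the point map. Since the hand-eye
    // calibration has set the camera's Link and Target nodes, the point map
    // is transformed into workspace/world coordinates automatically.
    NxLibCommand(cmdComputeDisparityMap).execute();
    NxLibCommand(cmdComputePointMap).execute();

    // Read the point map as binary data (x, y, z floats for each pixel).
    std::vector<float> pointMap;
    root[itmCameras][cameraSerial][itmImages][itmPointMap].getBinaryData(pointMap, nullptr);
    return pointMap;
}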

Moving Camera

For a moving camera, the calibration only sets the camera's Link node to the transformation between the camera and the robot's hand. This means that when you compute a point map, it will be transformed from camera coordinates to robot hand coordinates. To get the data in robot base or world coordinates, you need to tell the NxLib the current pose of your robot. This is done by specifying the corresponding transformation in the Links node. By default, the hand-eye calibration sets your camera's Target node to the "Hand" link. This means that you need to put the robot's pose into the Hand link node.

As for the fixed camera, you can point the Hand frame's Target node to another frame (e.g. "Workspace") and put the transformation between the "Hand" and "Workspace" frame into the Links node. This way you can get the 3D data in world coordinates.

An example function that grabs 3D data in robot base coordinates could look like this.

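The following is a minimal sketch under the same assumptions as above; the pose values written into the "Hand" link are placeholders for the current pose reported by your robot.

C++

#include "nxLib.h"

#include <string>
#include <vector>

std::vector<float> grabPointMapInRobotBaseCoordinates(std::string const& cameraSerial)
{
    NxLibItem root;

    // Tell the NxLib the current robot pose by filling the "Hand" link.
    // The values below are placeholders; fill in the pose reported by your
    // robot (angle-axis rotation plus translation).
    NxLibItem hand = root[itmLinks]["Hand"];
    hand[itmRotation][itmAngle] = 0.0;
    hand[itmRotation][itmAxis][0] = 0.0;
    hand[itmRotation][itmAxis][1] = 0.0;
    hand[itmRotation][itmAxis][2] = 1.0;
    hand[itmTranslation][0] = 0.0;
    hand[itmTranslation][1] = 0.0;
    hand[itmTranslation][2] = 0.0;

    // Capture and compute the point map. The data is transformed from camera
    // coordinates via the camera's Link (camera to hand) and the "Hand" link
    // (hand to robot base) into robot base coordinates.
    NxLibCommand capture(cmdCapture);
    capture.parameters()[itmCameras] = cameraSerial;
    capture.execute();
    NxLibCommand(cmdComputeDisparityMap).execute();
    NxLibCommand(cmdComputePointMap).execute();

    std::vector<float> pointMap;
    root[itmCameras][cameraSerial][itmImages][itmPointMap].getBinaryData(pointMap, nullptr);
    return pointMap;
}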

Recalibrating Robot Geometry

The NxLib's CalibrateHandEye command can also take robot poses in the form of joint angles together with a geometric robot model. This allows the internal optimization procedure to estimate the offset angles by which each joint's real-world zero angle deviates from the zero angle assumed by the robot controller. The computed offset angles (see the Angles result) can then be fed back into the controller to achieve a more accurate hand-eye calibration, because the absolute accuracy in the calibrated working volume is improved. Please have a look at the Links parameter of CalibrateHandEye for an example robot description.
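
As a rough sketch only, such a call might be structured as follows. The robot model JSON is a placeholder, not the actual schema; refer to the Links parameter documentation of CalibrateHandEye for a real robot description.

C++

#include "nxLib.h"

#include <string>

void calibrateWithJointAngles()
{
    NxLibCommand calibrate(cmdCalibrateHandEye);

    // Placeholder: a geometric robot model as described in the documentation
    // of the Links parameter. The actual JSON schema is not reproduced here.
    std::string robotModelJson = "{}"; // fill in your robot description
    calibrate.parameters()[itmLinks].setJson(robotModelJson);

    // ... pass the robot poses as joint angles, collect the pattern
    // observations as usual, and run the calibration ...
    calibrate.execute();

    // Read the estimated zero-angle offsets for each joint, which can then
    // be fed back into the robot controller.
    std::string angleOffsets = calibrate.result()["Angles"].asJson();
}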