Processing Measurement Results (Stereo ace)#
Overview#
Basler Stereo ace uses the rectified stereo-image pair to compute disparity, error, and confidence images.
After rectification, an object point is guaranteed to be projected onto the same pixel row in both the left and the right image. That point's pixel column in the right image is always less than or equal to its pixel column in the left image. The disparity is the difference between the pixel column in the left image and the pixel column in the right image; it encodes the depth, i.e., the distance of the object point from the camera. The disparity image stores the disparity values of all pixels of the left camera image.
The larger the disparity, the closer the object point. A disparity of 0 means that the projections of the object point are in the same image column and the object point is at infinite distance. Often, there are pixels for which disparity cannot be determined. This is the case for occlusions that appear on the left sides of objects, because these areas are not seen from the right camera. Furthermore, disparity cannot be determined for textureless areas. Pixels for which the disparity cannot be determined are marked as invalid with the special disparity value of 0. To distinguish between invalid disparity measurements and disparity measurements of 0 for objects that are infinitely far away, the disparity value for the latter is set to the smallest possible disparity value above 0.
To compute disparity values, the stereo matching algorithm has to find corresponding object points in the left and right camera images. These are points that represent the same object point in the scene. For stereo matching, the Basler Stereo ace uses SGM (Semi-Global Matching).
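For illustration, here is a minimal NumPy sketch of how the invalid-pixel convention can be handled when processing a raw disparity image; the array `disparity16` is a hypothetical placeholder for data grabbed from the camera.

```python
import numpy as np

# Hypothetical raw 16-bit disparity image (height x width, uint16);
# real data would come from a grab result.
disparity16 = np.zeros((480, 640), dtype=np.uint16)

# A value of 0 marks pixels where no disparity could be determined
# (occlusions, textureless areas); mask them out before further processing.
valid = disparity16 > 0
print(f"{100.0 * valid.mean():.1f}% of the pixels carry a valid disparity")
```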
Computing Depth Images and Point Clouds#
The disparity image contains 16-bit unsigned integer values. These values must be multiplied by the scale value given in the GenICam feature Scan3dCoordinateScale to get the disparity values $d$ in pixels. To compute the 3D object coordinates from the disparity values, the focal length, the baseline, and the principal point are required. These parameters are available as the features Scan3dFocalLength, Scan3dBaseline, Scan3dPrincipalPointU, and Scan3dPrincipalPointV. The focal length and the principal point depend on the image resolution of the selected component. Knowing these values, the pixel coordinates and the disparities can be transformed into 3D object coordinates in the camera coordinate system.
Assuming that $d16_{ik}$ is the 16-bit disparity value at column $i$ and row $k$ of the disparity image, the float disparity in pixels $d_{ik}$ is given by the following formula:
$$ d_{ik}=d16_{ik} \cdot \mathrm{Scan3dCoordinateScale} $$
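As a minimal NumPy sketch of this scaling, assuming the raw image is already available as an array and using a hypothetical example value for the scale (in practice it is read from the feature or its chunk):

```python
import numpy as np

# Placeholder raw 16-bit disparity image; real data comes from a grab result.
d16 = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)
scan3d_coordinate_scale = 0.0625   # example value for Scan3dCoordinateScale

d = d16.astype(np.float64) * scan3d_coordinate_scale  # disparity in pixels
```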
The 3D reconstruction in meters can be written with the GenICam parameters as:
$$ P_x=\left(i+0.5-\mathrm{Scan3dPrincipalPointU}\right) \frac{\mathrm{Scan3dBaseline}}{d_{ik}} $$
$$ P_y=\left(k+0.5-\mathrm{Scan3dPrincipalPointV}\right) \frac{\mathrm{Scan3dBaseline}}{d_{ik}} $$
$$ P_z=\mathrm{Scan3dFocalLength} \frac{\mathrm{Scan3dBaseline}}{d_{ik}} $$
The set of all object points computed from the disparity image gives the point cloud, which can be used for 3D modeling applications. The disparity image is converted into a depth image by replacing the disparity value in each pixel with the value of $P_z$.
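Put together, the reconstruction is a few lines of NumPy. The following sketch assumes the float disparity image `d` from above and takes the GenICam feature values as plain function arguments; the function and variable names are hypothetical.

```python
import numpy as np

def reconstruct(d, f, t, ppu, ppv):
    """Point cloud and depth image from a float disparity image d (pixels).

    f: Scan3dFocalLength (pixels), t: Scan3dBaseline (m),
    ppu/ppv: Scan3dPrincipalPointU/V (pixels).
    """
    h, w = d.shape
    k, i = np.indices((h, w))                 # pixel row k, column i
    with np.errstate(divide="ignore", invalid="ignore"):
        s = np.where(d > 0, t / d, np.nan)    # NaN for invalid pixels
    px = (i + 0.5 - ppu) * s
    py = (k + 0.5 - ppv) * s
    pz = f * s                                # this is the depth image
    return np.dstack((px, py, pz)), pz        # H x W x 3 point cloud, depth
```

With `f`, `t`, `ppu`, and `ppv` taken from the chunk data of the same image (see the note below), the returned `pz` array is exactly the depth image described above.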
Info

It is preferable to enable chunk data with the parameter ChunkModeActive and to use the chunk parameters ChunkScan3dCoordinateScale, ChunkScan3dFocalLength, ChunkScan3dBaseline, ChunkScan3dPrincipalPointU, and ChunkScan3dPrincipalPointV that are delivered with every image, because their values already match the image resolution of the corresponding image.
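With pypylon, this could look roughly like the sketch below. The attribute-style node access follows the pylon samples, but the exact chunk selector entries available on a Stereo ace are an assumption here and should be verified against the device's feature documentation.

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Enable chunk mode and the Scan3d chunks (selector entry names assumed).
camera.ChunkModeActive.Value = True
for name in ("Scan3dCoordinateScale", "Scan3dFocalLength", "Scan3dBaseline",
             "Scan3dPrincipalPointU", "Scan3dPrincipalPointV"):
    camera.ChunkSelector.Value = name
    camera.ChunkEnable.Value = True

camera.StartGrabbing()
grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab.GrabSucceeded():
    # Chunk values always match the resolution of this particular image.
    scale = grab.ChunkScan3dCoordinateScale.Value
    f = grab.ChunkScan3dFocalLength.Value
    t = grab.ChunkScan3dBaseline.Value
    ppu = grab.ChunkScan3dPrincipalPointU.Value
    ppv = grab.ChunkScan3dPrincipalPointV.Value
grab.Release()
camera.StopGrabbing()
camera.Close()
```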
Computing the Distance#
If the coordinates (x, y, z) of a point are given in mm, the distance of that point from the optical center of the camera can be computed using the following formula:
$$ d = \sqrt{x^2 + y^2 + z^2} $$
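In Python this is a one-liner (coordinates and result in mm; the example values are arbitrary):

```python
import numpy as np

x, y, z = 120.0, -45.0, 800.0        # example point coordinates in mm
d = np.sqrt(x * x + y * y + z * z)   # distance from the optical center in mm
# equivalent: d = np.linalg.norm([x, y, z])
```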