Optimizing Image Quality#
Camera parameters like the frame rate, exposure time, and gain determine when and how images are captured. Apart from that, they also have a huge impact on the image quality. To further improve the quality of the depth image, there are a number of stereo matching parameters that you can adjust.
Both types of parameters can be configured using the Stereo Camera Viewer. This allows you to see the effects of any setting changes immediately in the respective image windows.
Camera Parameters#
To check the changes you make, use the Intensity image window.
Frame Rate#
The maximum achievable frame rate (in frames per second = fps) depends on both the configuration of the camera and the hardware of the host system. Several key factors influence the frame rate:
- Illumination Control: The projector's duty cycle is limited to a maximum of 2.5 %. Together with a maximum exposure time of 5 ms, this restricts the speed of image capture.
- Exposure Time: The exposure time directly affects the interval between image acquisitions. Shorter exposure times allow for higher frame rates, while longer exposure times reduce the maximum possible frame rate.
- Depth Calculation: The time required to compute depth images depends on the resolution selected (via the BslDepthQuality parameter) and the processing hardware available on the host system (CPU or GPU). Higher resolutions and less powerful hardware increase the processing time and therefore reduce the achievable frame rate.
The frame rate determines how many images are transmitted via the GigE Vision interface. Reducing the frame rate can significantly reduce the network load as well as the computing load for both the Stereo ace camera and the receiving host computer. In many applications, the frame rate can and should be reduced to meet the capabilities of the system while still fulfilling the requirements of the application.
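A minimal Python sketch of limiting the frame rate, assuming the Stereo ace is configured through Basler's pypylon wrapper and exposes the standard AcquisitionFrameRateEnable and AcquisitionFrameRate features (these feature names are not taken from this section and should be checked against your camera's documentation):

```python
from pypylon import pylon

# Connect to the first camera found (assumes a pypylon-compatible device).
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Cap the acquisition frame rate to reduce network and computing load.
# AcquisitionFrameRateEnable/AcquisitionFrameRate are standard GenICam SFNC
# names assumed here; the value is a placeholder for your application's needs.
camera.AcquisitionFrameRateEnable.Value = True
camera.AcquisitionFrameRate.Value = 5.0  # fps

camera.Close()
```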
Exposure Time and Gain#
The exposure time is the time during which the image sensor is exposed to light during image acquisition. It influences how bright the captured image appears. The exposure time is typically in the order of several milliseconds. If there is less light in the scene, a longer exposure time is required. However, long exposure times may cause blurring if objects in the scene or the Stereo ace camera itself move.
The second parameter that influences the image brightness is the gain. The gain is an amplification factor that allows you to make the image appear brighter. At the same time, though, it also increases image noise.
By configuring exposure time and gain, you can focus on those parts of the scene that are relevant to your application, e.g., objects in the center of the scene. Overexposure within the area of interest must be avoided but can be tolerated in scene parts that are not relevant for the application.
As a general rule, overexposure is worse than underexposure for image processing because the information in overexposed regions is lost due to the pixels becoming saturated too quickly. In contrast, underexposed regions often contain enough information for image processing. Therefore, it's better to configure the parameters such that images are darker rather than too bright.
The gain should only be increased in low lighting conditions or if you want to reduce the exposure time in order to avoid motion blur. As a general rule, the gain value should always be as low as possible.
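As a sketch of these recommendations, assuming pypylon access and the standard ExposureTime (in µs) and Gain feature names (which are not named in this section), the brightness could be configured like this:

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Prefer a moderate exposure time over a high gain: darker images preserve
# more information than overexposed ones, and gain amplifies noise.
# ExposureTime (µs) and Gain are assumed standard feature names; values are
# placeholders to be tuned while watching the Intensity image window.
camera.ExposureTime.Value = 4000.0  # 4 ms
camera.Gain.Value = 0.0             # keep the gain as low as possible

camera.Close()
```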
Stereo Matching Parameters#
To check the changes you make, use the Point Cloud, Depth Map, and Confidence image windows.
Working Range#
First, you need to consider the working range of your application. Setting the BslDepthMaxDepth parameter to a certain value invalidates all pixels in the depth image that are farther away. This is an easy way to exclude parts of the scene that aren't relevant for an application.
Similarly, you can increase the BslDepthMinDepth parameter if you are certain that no target object is ever going to be closer to the Stereo ace camera than the distance specified. Increasing the minimum depth has the benefit of also reducing the time it takes to compute the depth image and therefore increases the frame rate.
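A minimal sketch of restricting the working range, assuming pypylon access; the values and their unit are placeholders and should be taken from the feature documentation:

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Pixels outside [BslDepthMinDepth, BslDepthMaxDepth] become invalid.
# Placeholder values; verify the expected unit before copying them.
camera.BslDepthMinDepth.Value = 0.5
camera.BslDepthMaxDepth.Value = 2.0

camera.Close()
```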
Latency#
In some applications, the latency with which depth images are computed is more important than the quality of the depth values. In this case, you can change the BslDepthQuality parameter's default setting (High) to Medium or Low. Depth images of a lower quality are computed by reducing the image resolution before stereo matching. This reduces the processing time and increases the frame rates you can achieve significantly. However, the frame rate for computing depth images is always limited by the camera's frame rate setting.
As discussed above, the minimum depth should be increased if possible as this also helps to reduce latency and increase the frame rates you can achieve.
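A sketch of both latency measures, again assuming pypylon access; the quality level and the minimum depth value are placeholders:

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Trade depth quality for lower latency: Medium or Low reduces the image
# resolution used for stereo matching and shortens processing time.
camera.BslDepthQuality.Value = "Low"

# Raising the minimum depth (placeholder value, unit per feature docs)
# further reduces the matching effort and therefore the latency.
camera.BslDepthMinDepth.Value = 0.8

camera.Close()
```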
Density#
Density is a measure of how many invalid pixels there are in an image. Higher density means fewer invalid pixels. Depth images often contain invalid values that appear black in the depth map shown in the Stereo Camera Viewer. This happens for parts of the scene that are outside the specified working range as well as for areas on the left side of objects. This is because the left camera is the reference camera and part of the background isn't visible for the right camera. This is called an occlusion and is a normal phenomenon.
Generally speaking, invalid pixels are undesirable. They are caused by the inability of the stereo matching method to compute depth. There are various reasons for this. Five different (and complementary) strategies are available to increase the density of depth images:
- Adjusting image exposure: Overexposed areas don't contain enough information for matching and therefore appear invalid. Furthermore, areas may appear invalid if the image noise is stronger than the texture of the scene. Both problems can be solved by adjusting the exposure time and gain settings as explained in the Exposure Time and Gain section.
- Exploiting static scenes: If the scene and the Stereo ace camera don't move during image capturing, Basler recommends using the static mode (BslDepthStaticScene parameter). When this parameter is enabled, the Stereo ace camera internally collects consecutive images and averages these images to reduce noise. This permits stereo matching even for weakly textured objects and often results in much denser depth images.
- Improving the scene's texture through the projector: One of the most common reasons for invalid areas in indoor environments is the lack of texture or the presence of very repetitive textures. Typically, walls and tables are uniformly colored without any texture. Other structures are highly repetitive. Neither case is conducive to successful stereo matching. If these overtextured or undertextured objects and parts of the scene are important for the application, the only way to capture them is by adding texture artificially by means of a projector.
- Selecting the illumination mode: To retrieve intensity data without a projected pattern in the scene, use the AlternateActive illumination mode. In this mode, for each image, two exposures are performed: one with the projector enabled for depth calculation and another for capturing intensity data. Note that AlternateActive increases latency. If only disparity data is required, you can reduce latency by setting BslIlluminationMode to AlwaysActive instead of AlternateActive. That way, the projector is always enabled during exposure.
- Configuring the illumination: Adjust the illumination parameters BslIlluminationStrength and BslIlluminationMode to optimize the projected pattern for your scene (see the sketch after this list). Increasing the illumination strength can improve the visibility of the projected pattern, especially in scenes with strong ambient light. At the same time, you must ensure that the duty cycle and the maximum activation time are not exceeded. Select an appropriate illumination mode to synchronize the projector with the camera exposure for best results. In AlternateActive illumination mode, if there is little ambient light and the images without the projected pattern are too dark, try reducing the illumination strength and increasing the exposure time to achieve a better image brightness.
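A sketch of the static-scene and illumination strategies above, assuming pypylon access; the illumination strength value is a placeholder and its valid range should be taken from the feature documentation:

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Static scene: average consecutive images internally to reduce noise and
# allow matching of weakly textured objects.
camera.BslDepthStaticScene.Value = True

# Keep the projector on during every exposure (AlwaysActive) when only
# disparity data is needed; use AlternateActive if pattern-free intensity
# images are also required.
camera.BslIlluminationMode.Value = "AlwaysActive"

# Placeholder strength value; observe the duty cycle and maximum activation
# time limits when increasing it.
camera.BslIlluminationStrength.Value = 100

camera.Close()
```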
Depth Artifacts#
The following depth artifacts are common:
- Outliers: Typically, small patches of depth values that can appear anywhere in the working range.
- Geometric errors: Depth values judged to be incorrect but close to the real depth.
Filtering errors means setting the incorrect depth values to invalid. The BslDepthFill parameter can be used to interpolate small areas that are typically caused by error filtering. Higher fill-in values increase the number of areas that can be interpolated. The size of areas that are interpolated directly depends on the segmentation setting for error filtering.
Outliers can be filtered out using the BslDepthSeg parameter. This parameter gives the maximum number of connected pixels that are to be removed. All patches with fewer pixels are set to invalid. The size is also used for the BslDepthFill parameter. Holes that are caused by this filter are typically interpolated.
Another way to reduce the likelihood of outliers is to increase the minimum confidence as the confidence is the probability that a pixel is not an outlier.
Depth values that have a potentially high geometric error can be filtered using the BslDepthMaxDepthErr parameter.
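A sketch of the artifact-filtering parameters described above, assuming pypylon access; all values are placeholders, and their types and units should be checked against the feature documentation:

```python
from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Remove small outlier patches: connected patches up to this size (in pixels)
# are set to invalid. Placeholder value.
camera.BslDepthSeg.Value = 200

# Interpolate small invalid areas left over by error filtering.
camera.BslDepthFill.Value = 3

# Invalidate depth values whose estimated geometric error is too high.
camera.BslDepthMaxDepthErr.Value = 0.01

camera.Close()
```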
For some applications, it is necessary that the surfaces in the depth image are smooth. This can be achieved by enabling the Smoothing parameter.
If the camera image includes highlights (specular reflections) that aren't reconstructed in the depth image, you can verify whether these highlights were introduced by the projector of the camera (e.g., by simply turning the projector off). In such cases, Basler recommends using the Double-Shot mode.