Operator BlobDetector2D

Operator Library: Blob

The BlobDetector2D operator detects objects in binary images and determines their properties. The outputs of the operator are several streams of data representing the properties for each object.

The functionality of the operator can be fully simulated in VisualApplets.

[Important] Availability

To use the BlobDetector2D operator, you need either a Segmentation and Classification Library license, or the VisualApplets 4 license.

Read also the introduction to the Library Blob.

General Information

The BlobDetector2D supports the BoundingBox, Area, and Center of Gravity X and Y features.

[Note] The Center of Gravity X and Y Are Not Direct Outputs of the Operator

The output ports CenterX and CenterY actually provide the image moments M10 (CenterX) and M01 (CenterY). To extract the center of gravity of an object, you must divide the moments by the extracted area:
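
    CenterOfGravityX = CenterX / Area = M10 / M00
    CenterOfGravityY = CenterY / Area = M01 / M00

Here, the Area corresponds to the zeroth image moment M00, i.e., the number of foreground pixels of the object.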

The operator processes the incoming image line by line. Since the operator can't predict the future, regions that are connected later in the image may initially appear to be independent. In this text, connected pixels within a single line are therefore called object artifacts. The BlobDetector2D operator checks whether these object artifacts overlap with object artifacts of the following line to determine whether the objects are really independent or part of a larger object. The maximum number of object artifacts that can be handled by the operator is defined by the LabelCount parameter. If the number of object artifacts exceeds the value of LabelCount, data is lost and the integrity of the object results is no longer guaranteed. This is called a context corruption. The line in which the data is lost is marked with a Line Error dummy object, discernible by an ErrorFlag. All following objects within the corrupted context are flagged as such. The context corruption ends once the context is reset automatically at the end of the frame or by restarting the applet. All error flags are listed in the section 'Output Ports'.

In most cases, the number of required labels to prevent a context corruption is equal to the maximum number of object artifacts within a line. For neighborhood = 4 and neighborhood = 8, the precise number of labels required is slightly different:

  • For neighborhood = 4, the number of labels to prevent a context corruption is equal to the maximum number of object artifacts between any pixel at position X of line N and the pixel at position X of line N+1.

  • For neighborhood = 8, the number of labels to prevent a context corruption is equal to the maximum number of object artifacts between any pixel at position X of line N and the pixel at position X+1 of line N+1.

If you are not sure whether the selected number of labels is sufficient for your use case, check it in the simulation.
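
If you want a first estimate before running the simulation, the following host-side Python sketch counts the object artifacts (runs of consecutive foreground pixels) per line of a sample binary image. It only approximates the rules above and ignores the exact neighborhood overlap, so treat it as a starting point rather than a replacement for checking in the simulation; all function names and the margin value are illustrative:

    import numpy as np

    def runs_per_line(binary_image):
        """Count object artifacts (runs of consecutive foreground pixels) per line.

        binary_image: 2D array with foreground = 1 and background = 0.
        Returns a 1D array with the number of runs in each line.
        """
        img = (np.asarray(binary_image) > 0).astype(np.int8)
        # A run starts wherever a pixel is foreground and its left neighbor is background.
        padded = np.pad(img, ((0, 0), (1, 0)), mode="constant", constant_values=0)
        starts = (padded[:, 1:] == 1) & (padded[:, :-1] == 0)
        return starts.sum(axis=1)

    def estimate_label_count(binary_image, margin=2):
        """Rough lower bound for the LabelCount parameter (see the text above)."""
        return int(runs_per_line(binary_image).max()) + margin

    # Example: a small test frame with up to three artifacts per line.
    frame = np.array([
        [0, 1, 1, 0, 1, 0, 0, 1],
        [0, 0, 1, 1, 1, 0, 0, 0],
        [1, 0, 0, 0, 0, 1, 0, 1],
    ], dtype=np.uint8)
    print(estimate_label_count(frame))   # prints 5 (3 runs + margin of 2)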

When instantiating the BlobDetector2D module, you can define which output ports shall be created. By enabling/disabling the ports, the corresponding features are also automatically enabled/disabled. After confirming these ports, you can't change them anymore. If you need other output ports, you must instantiate another BlobDetector2D module.

Performance

The blob detector operators belong to the few operators whose processing speed, i.e., bandwidth, depends on the image content. The operators work best on images with few changes between foreground and background pixels within cohesive lines. Therefore, noise or a high number of object artifacts within multiple lines slows down the data throughput noticeably. However, this applies only to images containing very strong noise.

Latency

The operator works with minimum latency by processing the input images line by line. Once the end of an object becomes unambiguous, the object features are immediately provided as output by the operator. Therefore, the output of the object takes place in the same line in which it ends. This enables the post-processing of completed object features while the image itself is still being processed by the operator.

Note that the DMA and some other operators wait for the frame end signal before they report completion. The frame end signal is generated once the last line of the input image has been processed.

Resource Consumption

The resource consumption of the operator depends on the enabled features, the number of bits used for the features, and the selected number of labels. Therefore, Basler recommends disabling features that are not required and being cautious when setting the LabelCount parameter.

[Note] Partially Disabling Bounding Box Ports Doesn't Save Resources

The bounding box has several output ports, therefore the feature is only disabled if all of its ports are disabled. Partially disabling bounding box ports doesn't save resources.

[Note] Setting the Maximum Values Manually Saves Resources

To save resources and to improve the handling of the design, Basler recommends setting the bit widths for the links manually if you know the maximum values for the objects to be detected.

Input Ports

The ImageI input of the blob detector expects a binary (pixel bit width of 1), 2D image. The operator supports the input of different line lengths within an image as well as empty lines and empty frames.

The operator assumes foreground values to have the value ONE, and background values to have the value ZERO. Make sure you match this requirement in the preliminary binarization process.
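
In a VisualApplets design, this binarization is typically done with upstream operators; the following Python sketch only illustrates the required convention (foreground = 1, background = 0), for example when preparing test images for the simulation. The function name and threshold value are placeholders:

    import numpy as np

    def binarize_for_blob(gray_image, threshold=128):
        """Convert a grayscale test image into the 1-bit format expected at ImageI.

        Pixels at or above the threshold become foreground (1), all others
        become background (0), matching the convention of BlobDetector2D.
        """
        return (np.asarray(gray_image) >= threshold).astype(np.uint8)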

Output Ports

The output is provided in the VALT_IMAGE2D image format. Each of the output ports represents one object feature / object property. The ports are synchronous and together they describe one single object. Each output pixel provides the value of either a completed object or an error status object. Whether the output is an extracted object or an error status object is indicated by the ErrorFlagsO output port. The pixel that represents an object appears in the same line in which the object has been completed. Objects are output in the order of their completion. For example, assume two objects end in line 10: the last pixel of object A has the X-coordinate 42, the last pixel of object B has the X-coordinate 97. Therefore, object A is the first pixel and object B is the second pixel at the operator's output ports. Lines in which no objects are completed and no errors occurred are empty.

The output port widths either adapt automatically based on the given input format to prevent an overflow of the feature data, or you can configure them manually. Manual configuration of the link bit widths is available for Area, CenterX, and CenterY if you have enabled these ports in the operator properties.

The port ErrorFlagsO outputs several error flags for overflows or additional info tags. Each bit is reserved for a specific flag. You can see these error flags either in the runtime environment of your hardware or in the simulation. To see these error flags in the simulation:

  • Add a simulation probe to the ErrorFlagsO port of the BlobDetector2D.

  • Add an image that causes an overflow to the simulation.

  • Run the simulation.

  • Open the Simulation Probe View of your output port.

  • Scroll to the pixel you want to examine, zoom in to see the pixel values, and set the pixel value display to Hex:

    Figure 425. Error Flag in the Simulation Probe Viewer


  • The active bits within your hex value indicate different errors or status information regarding your blob object. For details, check the table below.

You can find a detailed explanation of the error flags in the parameter descriptions below. A summary is given in the following table:

Bit 0: Area Overflow

    The Area value of the object exceeded the provided output bit width at some point in its processing history.

Bit 1: CenterX Overflow

    The CenterX value (image moment M10) of the object exceeded the provided output bit width at some point in its processing history.

Bit 2: CenterY Overflow

    The CenterY value (image moment M01) of the object exceeded the provided output bit width at some point in its processing history.

Bits 3, 4: Reserved

    Reserved for future extensions.

Bit 5: Line Error

    Indicates a dummy object, which marks a line in which more object artifacts occurred than can be handled by the selected number of labels. The available memory is insufficient to process further incoming data.

    Since this is a dummy object, the feature data values at the other ports don't contain any useful data. While the feature values are displayed as 0 in the simulation, they can be any value in hardware. Because the lower ErrorBits 7-0 are tied to the enabled features and their values, these flags may also differ in hardware from the simulation.

    If this error flag is set, it is always the first object output of a line.

Bit 6: ContextCorruption

    More object artifacts occurred than could be handled by the selected number of labels. The available memory is not sufficient to process all incoming data. The integrity of the flagged object can't be guaranteed.

    The operator returns to correct functionality with the end of the frame or by restarting the acquisition. The following image is processed without any impact from a previous context corruption.

Table 35. Explanation of Blob Error Flags
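
The following minimal Python sketch decodes an ErrorFlagsO pixel value, for example one read from a simulation probe. The bit positions are taken from Table 35; the function name and the example value are illustrative only:

    # Bit positions as listed in Table 35 (bits 3 and 4 are reserved).
    ERROR_FLAGS = {
        0: "Area Overflow",
        1: "CenterX Overflow",
        2: "CenterY Overflow",
        5: "Line Error",
        6: "ContextCorruption",
    }

    def decode_error_flags(value):
        """Return the names of all error flags set in an ErrorFlagsO pixel value."""
        return [name for bit, name in ERROR_FLAGS.items() if value & (1 << bit)]

    print(decode_error_flags(0x41))   # ['Area Overflow', 'ContextCorruption']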


I/O Properties

Property Value
Operator Type M
Input Link ImageI, data input
Output Links BoundingX0O, data output
BoundingX1O, data output
BoundingY1O, data output
BoundingHeightO, data output
AreaO, data output
CenterXO, data output
CenterYO, data output
ErrorFlagsO, data output

Supported Link Format

Link Parameter  | ImageI       | BoundingX0O  | BoundingX1O  | BoundingY1O  | BoundingHeightO | AreaO              | CenterXO           | CenterYO           | ErrorFlagsO
Bit Width       | 1            | auto (1)     | auto (1)     | auto (2)     | auto (3)        | [1, 48] / auto (4) | [1, 48] / auto (4) | [1, 48] / auto (4) | 7
Arithmetic      | unsigned     | unsigned     | unsigned     | unsigned     | unsigned        | unsigned           | unsigned           | unsigned           | unsigned
Parallelism     | 64           | 1            | 1            | 1            | 1               | 1                  | 1                  | 1                  | 1
Kernel Columns  | 1            | 1            | 1            | 1            | 1               | 1                  | 1                  | 1                  | 1
Kernel Rows     | 1            | 1            | 1            | 1            | 1               | 1                  | 1                  | 1                  | 1
Img Protocol    | VALT_IMAGE2D | VALT_IMAGE2D | VALT_IMAGE2D | VALT_IMAGE2D | VALT_IMAGE2D    | VALT_IMAGE2D       | VALT_IMAGE2D       | VALT_IMAGE2D       | VALT_IMAGE2D
Color Format    | VAF_GRAY     | VAF_GRAY     | VAF_GRAY     | VAF_GRAY     | VAF_GRAY        | VAF_GRAY           | VAF_GRAY           | VAF_GRAY           | VAF_GRAY
Color Flavor    | FL_NONE      | FL_NONE      | FL_NONE      | FL_NONE      | FL_NONE         | FL_NONE            | FL_NONE            | FL_NONE            | FL_NONE
Max. Img Width  | 2^16         | auto (5)     | auto (5)     | auto (5)     | auto (5)        | auto (5)           | auto (5)           | auto (5)           | auto (5)
Max. Img Height | 2^24         | as I         | as I         | as I         | as I            | as I               | as I               | as I               | as I

(1) The bit width is based on the maximum image width of the incoming ImageI link. It is calculated as ceil(log2(Max. Img Width)).

(2) The bit width is based on the maximum image height of the incoming ImageI link. It is calculated as ceil(log2(Max. Img Height)).

(3) The bit width is based on the maximum image height of the incoming ImageI link. It is calculated as ceil(log2(Max. Img Height + 1)).

(4) Via the parameters AreaBitWidthMode, CenterXBitWidthMode, and CenterYBitWidthMode, you can select whether you configure the link manually or use the automatic mode. If you edit the links manually, the allowed range is [1, 48].

(5) The maximum image width equals the parameter LabelCount + 1.
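
The following small Python sketch (illustrative only, for an assumed maximum input image size of 2048 x 1024 pixels) reproduces the ceil(log2(...)) rule from footnotes (1) to (3):

    import math

    def auto_bit_width(value_count):
        """ceil(log2(n)), the rule used in footnotes (1) to (3) above."""
        return math.ceil(math.log2(value_count))

    # Example for an assumed maximum input image size of 2048 x 1024 pixels:
    max_width, max_height = 2048, 1024
    print(auto_bit_width(max_width))       # BoundingX0O / BoundingX1O: 11 bits
    print(auto_bit_width(max_height))      # BoundingY1O: 10 bits
    print(auto_bit_width(max_height + 1))  # BoundingHeightO: 11 bits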

Parameters

LabelCount
Type static parameter
Default 100
Range [1, max. Img Width(Input Link I) / 2 + 2]

To track partial objects internally, this parameter sets the number of available labels. It also influences the maximum number of objects which may be output in a line. Note that the required resources for the BlobDetector2D operator depend on the number of labels. A good value range for this parameter is between 100 and 500, which should be sufficient for most applications.

If more partial objects than selected labels occur, data is lost and the context of the frame becomes corrupted. In this case, bit 6 of the output link ErrorFlagsO is '1' to indicate that the provided data is no longer reliable. The error bit is set when detecting the data loss and is reset with the end of the output frame.

Neighborhood
Type static parameter
Default EightConnected
Range {FourConnected, EightConnected}

This parameter selects the required neighborhood for object detection. See 'Definition' for a detailed explanation of pixel neighborhoods.

AreaBitWidthMode
Type static parameter
Default AutomaticFrameMaximum
Range {AutomaticFrameMaximum, LinkManuallyEditable}

The area of an object is defined by the sum of all object foreground pixels.

If AutomaticFrameMaximum is selected, the operator automatically sets the bits for the maximum size of an object. The maximum possible size of an object depends on the maximum width and height at the input link I. The theoretical maximum area of an object is achieved when all pixels of an image consist of foreground values, i.e., a completely white input image forming one large object.

If the maximum possible Area value is known, or only objects up to a specific size are of interest, Basler recommends using the mode LinkManuallyEditable. This saves resources and improves the build time of the entire design. In the LinkManuallyEditable mode, you can edit the bit width property of the link within the allowed range of [1, 48]. The bit width of the link directly translates to the bit width used internally for calculating the Area feature and therefore impacts the overall resource consumption.

If the bit width is not sufficient for the Area value of an object, this is indicated by the ErrorFlagsO link (bit 0).
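
As a rough guide, the manually configured bit width for a known maximum object size can be estimated with the standard rule that n bits represent unsigned values up to 2^n - 1; the operator's exact internal rounding is not documented here, so verify the result in the simulation. A minimal sketch with a placeholder maximum area:

    import math

    # Assumed largest object of interest, in foreground pixels (placeholder value).
    max_object_area = 5000
    bits = math.ceil(math.log2(max_object_area + 1))
    print(bits)   # 13: a 13-bit AreaO link can represent areas up to 8191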

CenterXBitWidthMode
Type static parameter
Default AutomaticFrameMaximum
Range {AutomaticFrameMaximum, LinkManuallyEditable}

See 'Center of Gravity' for an explanation of the Center of Gravity feature.

Note that the output is the image moment M10. To receive a pixel coordinate for the Center of Gravity X, either use the DIV operator or divide the value by the area on the host PC.

If you set AutomaticFrameMaximum, the operator automatically sets the output bits. The maximum possible size of an object depends on the maximum width and height at the input link I. The theoretical maximum CenterX value of an object is achieved if all pixels of an image consist of foreground values, i.e., a completely white input image forming one large object. The bit width of the CenterXO link can get very high if large input images are used.

If the maximum possible CenterX (image moment M10) value is known, or only objects of a specific X-coordinate are of interest, Basler recommends using the mode LinkManuallyEditable. This saves resources and improves the build time of the entire design. In the LinkManuallyEditable mode, you can edit the bit width property of the link within the allowed range of [1, 48]. The bit width of the link directly translates to the bit width used internally for calculating the CenterX feature and therefore impacts the overall resource consumption.

If the bit width is not sufficient for the CenterX value of an object, this is indicated by the ErrorFlagsO link (bit 1).

CenterYBitWidthMode
Type static parameter
Default AutomaticFrameMaximum
Range {AutomaticFrameMaximum, LinkManuallyEditable}

See 'Center of Gravity' for an explanation of the Center of Gravity feature.

Note that the output is the image moment M01. To receive a pixel coordinate for the Center of Gravity Y, either use the DIV operator or divide the value by the area on the host PC.

If you set AutomaticFrameMaximum, the operator automatically sets the output bits. The maximum possible size of an object depends on the maximum width and height at the input link I. The theoretical maximum CenterY value of an object is achieved if all pixels of an image consist of foreground values, i.e., a completely white input image forming one large object. The bit width of the CenterYO link can get very high if large input images are used.

If the maximum possible CenterY (image moment M01) value is known, or only objects of a specific Y-coordinate are of interest, Basler recommends using the mode LinkManuallyEditable. This saves resources and improves the build time of the entire design. In the LinkManuallyEditable mode, you can edit the bit width property of the link within the allowed range of [1, 48]. The bit width of the link directly translates to the bit width used internally for calculating the CenterY feature and therefore impacts the overall resource consumption.

If the bit width is not sufficient for the CenterY value of an object, this is indicated by the ErrorFlagsO link (bit 2).
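
For orientation, the following sketch estimates the worst-case moment values and the resulting bit widths for a completely white input image. It assumes 0-based pixel coordinates; the operator's exact coordinate convention is not documented here, so treat the result as an estimate and verify it in the simulation:

    import math

    def bits_for(max_value):
        """Unsigned bits needed to represent values from 0 up to max_value."""
        return math.ceil(math.log2(max_value + 1))

    # Worst case: a completely white image of width W and height H forming one object.
    # 0-based pixel coordinates are assumed here; the operator's convention may differ.
    W, H = 2048, 1024
    max_m10 = H * (W * (W - 1) // 2)   # maximum CenterX value (image moment M10)
    max_m01 = W * (H * (H - 1) // 2)   # maximum CenterY value (image moment M01)
    print(bits_for(max_m10), bits_for(max_m01))   # 31 30 for this image size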

Examples of Use

The use of operator BlobDetector2D is shown in the following examples:

  • 'BlobDetector2D'

    Examples - Shows the usage of operator BlobDetector2D in area scan applications.