
Programming Using the Low Level API#

Note

The Low Level API should only be used for existing applications and for rare highly advanced use cases that can't be covered using the Instant Camera classes.

Please use the Instant Camera classes whenever possible instead of the Low Level API.

Low Level Camera Classes (deprecated)#

A Low Level API camera object wraps a pylon Device, providing more convenient access to the parameters of the Camera, the Stream Grabber, the Event Grabber, and the Transport Layer using GenApi parameter classes.

The following table shows the currently available classes:

| Transport Layer | Name of Class | Kind of Devices |
|---|---|---|
| PylonGigE | Pylon::CBaslerGigECamera | GigE Vision compliant cameras |
| PylonUsb | Pylon::CBaslerUsbCamera | USB3 Vision compliant cameras |
| PylonCLSer | Pylon::CBaslerCameraLinkCamera | Serial Camera Link cameras |

Grabbing Images#

Terminology#

Acquire, Transfer, and Grab Images#

In this document we distinguish between image acquisition, image data transfer, and image grabbing.

We denote the processes inside the camera as image acquisition. When a camera starts image acquisition, the sensor is exposed. When exposure is complete, the image data is read out from the sensor.

The acquired image data is transferred from the camera's memory to the computer using an interface such as USB or Gigabit Ethernet.

The process of writing the image data into the computer's main memory is referred to as "grabbing" an image.

Image Data Stream and Stream Grabber#

A camera may provide different sources for image data, where each source can deliver a stream of image data. In pylon, so called Stream Grabber objects are responsible for managing the process of grabbing data from a stream, i.e., writing the data into the computer's main memory.

A Stream Grabber only grabs images from a single data stream. To grab data from multiple streams, several Stream Grabbers are needed.

Using Stream Grabbers#

The following sections describe the use of Stream Grabber objects. The order of the sections reflects the sequence in which a typical grab application uses a Stream Grabber object.

Getting a Stream Grabber#

Stream Grabber objects are managed by Camera objects. The number of available stream grabbers can be determined with the IPylonDevice::GetNumStreamGrabberChannels() method of the camera object. The IPylonDevice::GetStreamGrabber() method returns a pointer to a Pylon::IStreamGrabber object. Before retrieving a Stream Grabber object, the Camera object must be opened. Check the value returned by IPylonDevice::GetNumStreamGrabberChannels() to see which index values can be passed to IPylonDevice::GetStreamGrabber(). The Stream Grabber object itself must also be opened before it is used. Some camera objects, e.g., Camera Link cameras, do not support stream grabbers and return 0 when IPylonDevice::GetNumStreamGrabberChannels() is called.

Example:

camera.Open();

// get the number of stream grabbers available
const unsigned int numGrabbers = camera.GetNumStreamGrabberChannels();

if (numGrabbers > 0) {
  IStreamGrabber* pGrabber = camera.GetStreamGrabber(0);
  pGrabber->Open();

  // use the grabber
  // ...

  pGrabber->Close();
}

// ...

Attention

Never try to call delete or free on a stream grabber pointer retrieved from a Camera object. The Camera object retains ownership of a Stream Grabber object and manages its lifetime.

Configuring a Stream Grabber#

Independent of the transport layer used, each stream grabber provides two mandatory parameters:

  • MaxBufferSize - Maximum size in bytes of a buffer used for grabbing images
  • MaxNumBuffer - Maximum number of buffers used for grabbing images

A grab application must set the above two parameters before grabbing starts.

Depending on the transport layer, a Stream Grabber provides further parameters such as streaming related timeouts. All of these parameters are set to default values and image grabbing can be performed without tweaking the defaults.

There are two ways for accessing a Stream Grabber's parameters:

The most convenient way is to use a concrete class for a Stream Grabber object. Each Camera class provides a typedef for the corresponding Stream Grabber class. A Stream Grabber class takes ownership of an IStreamGrabber pointer returned by the GetStreamGrabber() method. The Stream Grabber class has members to access the Stream Grabber object's parameters.

Example:

camera.Open();
if (camera.GetNumStreamGrabberChannels() == 0) {
  // device doesn't support stream grabbers
  return;
}

IStreamGrabber* pGrabber = camera.GetStreamGrabber(0);
CBaslerGigECamera::StreamGrabber_t StreamGrabber( pGrabber );

// First open the stream grabber
StreamGrabber.Open();

// Set the maximum buffer size according to the amount of data
// the camera will send
StreamGrabber.MaxBufferSize = camera.PayloadSize();

// We are going to use 10 buffers
StreamGrabber.MaxNumBuffer = 10;

When using the generic programming approach, i.e., using the Pylon::IPylonDevice and Pylon::IStreamGrabber interfaces instead of Camera and Stream Grabber classes, the IStreamGrabber::GetNodeMap() method must be used to retrieve the GenApi node map holding the stream grabber's parameters.

Stream Grabber node maps are used in the same way as node maps for Camera objects. The use of node maps for Camera objects is described in the Accessing Parameters section.
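For illustration, a hedged sketch of setting the two mandatory parameters through the node map, without using a concrete grabber class (the parameter names MaxBufferSize and MaxNumBuffer are those listed above; error handling omitted):

```cpp
// Retrieve the stream grabber's parameter node map and access
// the two mandatory parameters generically via GenApi pointers
GenApi::INodeMap* pNodeMap = pGrabber->GetNodeMap();
GenApi::CIntegerPtr ptrMaxBufferSize = pNodeMap->GetNode( "MaxBufferSize" );
GenApi::CIntegerPtr ptrMaxNumBuffer  = pNodeMap->GetNode( "MaxNumBuffer" );
if ( ptrMaxBufferSize.IsValid() && ptrMaxNumBuffer.IsValid() ) {
    ptrMaxBufferSize->SetValue( camera.PayloadSize() );
    ptrMaxNumBuffer->SetValue( 10 );
}
```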

Preparing a Stream Grabber for Grabbing#

Depending on the transport layer used for grabbing images, different system resources are required, for example:

  • DMA resources
  • Memory for the driver's data structures

The Stream Grabber's PrepareGrab() method is used to allocate the needed resources.

In addition to resource allocation, the PrepareGrab() call causes the camera object to perform a change in state. Typically, the camera parameters controlling the image size (AOI, pixel format, binning, etc.) will be read-only after PrepareGrab() has been called. These parameters must be set up before calling PrepareGrab() and must not be changed while image grabbing is active.

Providing Memory for Grabbing#

All pylon transport layers can grab image data into memory buffers allocated by a user application. The user-allocated memory buffers must be registered with the Stream Grabber object. This registration step is needed for performance reasons: it allows the Stream Grabber to prepare and cache internal data structures used for dealing with user-provided memory.

The buffer registration returns handles to the registered buffers, which are used in the steps following the buffer registration.

Example:

StreamGrabber.Open();
const int bufferSize = (int) camera.PayloadSize();
const int numBuffers = 10;
unsigned char* ppBuffers[numBuffers];
StreamBufferHandle handles[numBuffers];

StreamGrabber.MaxBufferSize = bufferSize;
StreamGrabber.MaxNumBuffer = numBuffers;

StreamGrabber.PrepareGrab();
for ( int i = 0; i < numBuffers; ++i ) {
  ppBuffers[i] = new unsigned char[bufferSize];
  handles[i] = StreamGrabber.RegisterBuffer( ppBuffers[i], bufferSize);
}

The buffer registration mechanism restricts the ownership of the buffers. Although the content of registered buffers can be changed by the user application, the application must not delete the memory of buffers that are registered. Freeing the memory is not allowed until the buffers are deregistered by using IStreamGrabber::DeregisterBuffer().

Feeding the Stream Grabber's Input Queue#

Each Stream Grabber maintains two different buffer queues. The buffers to be filled must be fed into the Grabber's input queue. Grabbed buffers can be retrieved from the Grabber's output queue.

The IStreamGrabber::QueueBuffer() method is used to put a buffer into the grabber's input queue. The QueueBuffer() method accepts two parameters: a buffer handle and an optional pointer to user-provided context information. Together with the buffer, the context pointer is passed back to the user when the grabbed buffer is retrieved from the grabber's output queue. A Stream Grabber never changes the memory to which the context pointer is pointing.

Example:

MyContext context[numBuffers];
for ( int i = 0; i < numBuffers; ++i ) {
  // Enqueue image buffers and use the buffer's index as context
  // information
  StreamGrabber.QueueBuffer( handles[i], & context[i] );
}

Attention

A stream grabber temporarily takes ownership of an enqueued buffer. Never try to modify or delete a buffer when it has been placed into the stream grabber's input queue.

Queueing buffers into the stream grabber's input queue does not make the camera start acquiring images! After queueing the buffers, the stream grabber is prepared to grab data from the camera into the queued buffers. Image acquisition must be explicitly started.

Starting and Stopping Image Acquisition#

To start image acquisition, use the Camera object's Pylon::CBaslerGigECamera::AcquisitionStart parameter. Pylon::CBaslerGigECamera::AcquisitionStart is a command parameter, i.e., calling the Execute() method of the Pylon::CBaslerGigECamera::AcquisitionStart parameter sends an acquisition start command to the camera.

A camera device typically provides two acquisition modes:

  • Single Frame mode where the camera acquires one image.
  • Continuous mode where the camera continuously acquires and transfers images until acquisition is stopped.

To be precise, the acquisition start command does not necessarily start immediate acquisition inside of the camera. When either external triggering or software triggering is enabled, the acquisition start command prepares the camera to acquire images. Actual acquisition starts when the camera senses an external trigger signal or receives a software trigger command.

When the camera's continuous acquisition mode is enabled, the Pylon::CBaslerGigECamera::AcquisitionStop parameter is used to stop image acquisition.

Normally, a camera starts transferring the image as soon as possible after acquisition has started; no special command to start the image transfer is needed.

Example:

using namespace Basler_GigECameraParams;
camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
camera.AcquisitionStart.Execute();
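When software triggering is used instead, the acquisition start command only arms the camera, and each frame is then triggered explicitly. A hedged sketch (trigger parameter names assumed from the Basler GigE camera parameter classes):

```cpp
// Arm the camera: frames are acquired on software trigger commands
camera.TriggerSelector.SetValue( TriggerSelector_FrameStart );
camera.TriggerMode.SetValue( TriggerMode_On );
camera.TriggerSource.SetValue( TriggerSource_Software );
camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
camera.AcquisitionStart.Execute();

// Each software trigger command causes the camera to acquire one frame
camera.TriggerSoftware.Execute();
```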

Retrieving Grabbed Images#

The transferred image data is written to the buffer(s) in the stream grabber's input queue. When a buffer is filled with grabbed image data, the stream grabber places it into its output queue, from which it can be retrieved by the user application.

There is a wait object associated with the Stream Grabber's output queue. This wait object allows the application to wait until either a grabbed image arrives at the output queue or a timeout expires.

When the wait operation successfully returns, the grabbed buffer can be retrieved with the Stream Grabber object's RetrieveResult() method. The RetrieveResult() method fills a Pylon::GrabResult object. The object contains, among other things, the following information:

  • Status of the grab (succeeded, canceled, failed)
  • The buffer's handle
  • The pointer to the buffer
  • The user provided context pointer
  • AOI and image format
  • Error number and error description if the grab has failed

When getting a buffer from the grabber's output queue, ownership of the buffer is given over to the application. A buffer retrieved from the output queue will never be overwritten until it is again placed into the grabber's input queue.

Remember, a buffer retrieved from the output queue must be deregistered before its memory can be freed.

We recommend using the buffer handle from the Grab Result object to requeue a buffer into the grabber's input queue.

When the camera does not send data, a buffer remains in the grabber's input queue until the Stream Grabber object's CancelGrab() method is called. Calling CancelGrab() moves all buffers from the input queue to the output queue, including any buffer currently being filled. By checking the status of a Grab Result object, you can determine whether a buffer has been canceled.

The following example shows a typical grab loop:

const int numGrabs = 100;
GrabResult Result;
for ( int i = 0; i < numGrabs; ++i ) {
  // Wait for the grabbed image with a timeout of 3 seconds
  if ( StreamGrabber.GetWaitObject().Wait( 3000 )) {
     // Get an item from the grabber's output queue
     if ( ! StreamGrabber.RetrieveResult( Result ) ) {
       cerr << "Failed to retrieve an item from the output queue" << endl;
       break;
     }

     if ( Result.Succeeded() ) {
       // Grabbing was successful. Process the image.
       ProcessImage( (unsigned char*) Result.Buffer() );
     } else {
       cerr << "Grab failed: " << Result.GetErrorDescription() << endl;
       break;
     }
     // Requeue the buffer
     if ( i + numBuffers < numGrabs )
       StreamGrabber.QueueBuffer( Result.Handle(), Result.Context() );
  } else {
    cerr << "timeout occurred when waiting for a grabbed image" << endl;
    break;
  }
}

Finish Grabbing#

If the camera is set for continuous acquisition mode, acquisition should first be stopped:

camera.AcquisitionStop.Execute();

If you are not sure that the grabber's input queue is really empty, call the Stream Grabber object's CancelGrab() method to flush the input queue. The canceled buffers are then available at the grabber's output queue.

An application should retrieve all items from the grabber's output queue before closing a Stream Grabber object.

Before freeing their memory, deregister the buffers.

When all buffers are deregistered, call the Stream Grabber object's FinishGrab() method to release all resources related to grabbing. FinishGrab() must not be called when there are still buffers in the grabber's input queue!

When grabbing has been finished, a Stream Grabber object should be closed.

Example:

// The camera is in continuous mode, stop the image acquisition
camera.AcquisitionStop.Execute();
// Flush the input queue
StreamGrabber.CancelGrab();
// Consume all items from the output queue
while ( StreamGrabber.GetWaitObject().Wait(0) ) {
  if ( ! StreamGrabber.RetrieveResult( Result ) ) {
    cerr << "Failed to retrieve item from output queue" << endl;
  } else {
    if ( Result.Status() == Canceled ) {
      cout << "Got canceled buffer" << endl;
    }
  }
}

for ( int i = 0; i < numBuffers; ++i ) {
  StreamGrabber.DeregisterBuffer(handles[i]);
  delete [] ppBuffers[i];
}

StreamGrabber.FinishGrab();
StreamGrabber.Close();

Complete Sample Program#

Here is the complete sample program for acquiring images from a GigE camera in continuous mode.

#include <pylon/PylonIncludes.h>
#include <pylon/gige/BaslerGigECamera.h>
#include <ostream>
using namespace Pylon;
using namespace Basler_GigECameraParams;
using namespace std;

typedef CBaslerGigECamera Camera_t;

void ProcessImage( unsigned char* pImage, int imageSizeX, int imageSizeY )
{
  // Do something with the image data
}

struct MyContext
{
  // Define some application specific context information here
};

int main()
{
  PylonAutoInitTerm autoInitTerm;

  try
  {
    // Enumerate GigE cameras
    CTlFactory& TlFactory = CTlFactory::GetInstance();
    ITransportLayer *pTl = TlFactory.CreateTl( Camera_t::DeviceClass() );
    DeviceInfoList_t devices;
    if ( 0 == pTl->EnumerateDevices( devices ) ) {
      cerr << "No camera present!" << endl;
      return 1;
    }

    // Create a camera object
    Camera_t camera ( pTl->CreateDevice( devices[ 0 ] ) );

    // Open the camera object
    camera.Open();

    // Parameterize the camera

    // Mono8 pixel format
    camera.PixelFormat.SetValue( PixelFormat_Mono8 );

    // Maximized AOI
    camera.OffsetX.SetValue( 0 );
    camera.OffsetY.SetValue( 0 );
    camera.Width.SetValue( camera.Width.GetMax() );
    camera.Height.SetValue( camera.Height.GetMax() );

    // Continuous mode, no external trigger used
    camera.TriggerSelector.SetValue( TriggerSelector_AcquisitionStart );
    camera.TriggerMode.SetValue( TriggerMode_Off );
    camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );

    // Configure exposure time and mode
    camera.ExposureMode.SetValue( ExposureMode_Timed );
    camera.ExposureTimeRaw.SetValue( 100 );

    // Check whether stream grabbers are available
    if (camera.GetNumStreamGrabberChannels() == 0) {
      cerr << "Camera doesn't support stream grabbers." << endl;
    } else {
      // Get and open a stream grabber
      CBaslerGigECamera::StreamGrabber_t StreamGrabber( camera.GetStreamGrabber(0) );
      StreamGrabber.Open();

      // Parameterize the stream grabber
      const int bufferSize = (int) camera.PayloadSize();
      const int numBuffers = 10;
      StreamGrabber.MaxBufferSize = bufferSize;
      StreamGrabber.MaxNumBuffer = numBuffers;
      StreamGrabber.PrepareGrab();

      // Allocate and register image buffers, put them into the
      // grabber's input queue
      unsigned char* ppBuffers[numBuffers];
      MyContext context[numBuffers];
      StreamBufferHandle handles[numBuffers];
      for ( int i = 0; i < numBuffers; ++i )
      {
        ppBuffers[i] = new unsigned char[bufferSize];
        handles[i] = StreamGrabber.RegisterBuffer( ppBuffers[i], bufferSize);
        StreamGrabber.QueueBuffer( handles[i], &context[i] );
      }

      // Start image acquisition
      camera.AcquisitionStart.Execute();

      // Grab and process 100 images
      const int numGrabs = 100;
      GrabResult Result;
      for ( int i = 0; i < numGrabs; ++i ) {
        // Wait for the grabbed image with a timeout of 3 seconds
        if ( StreamGrabber.GetWaitObject().Wait( 3000 )) {
          // Get an item from the grabber's output queue
          if ( ! StreamGrabber.RetrieveResult( Result ) ) {
            cerr << "Failed to retrieve an item from the output queue" << endl;
            break;
          }
          if ( Result.Succeeded() ) {
            // Grabbing was successful. Process the image.
            ProcessImage( (unsigned char*) Result.Buffer(), Result.GetSizeX(), Result.GetSizeY() );
          } else {
            cerr << "Grab failed: " << Result.GetErrorDescription() << endl;
            break;
          }
          // Requeue the buffer
          if ( i + numBuffers < numGrabs )
            StreamGrabber.QueueBuffer( Result.Handle(), Result.Context() );
        } else {
          cerr << "timeout occurred when waiting for a grabbed image" << endl;
          break;
        }
      }

      // Finished. Stop grabbing and do clean-up

      // The camera is in continuous mode, stop image acquisition
      camera.AcquisitionStop.Execute();

      // Flush the input queue, grabbing may have failed
      StreamGrabber.CancelGrab();

      // Consume all items from the output queue
      while ( StreamGrabber.GetWaitObject().Wait(0) ) {
        StreamGrabber.RetrieveResult( Result );
        if ( Result.Status() == Canceled )
          cout << "Got canceled buffer" << endl;
      }

      // Deregister and free buffers
      for ( int i = 0; i < numBuffers; ++i ) {
        StreamGrabber.DeregisterBuffer(handles[i]);
        delete [] ppBuffers[i];
      }

      // Clean up
      StreamGrabber.FinishGrab();
      StreamGrabber.Close();
    }

    camera.Close();
    TlFactory.ReleaseTl( pTl );
  }
  catch( Pylon::GenericException &e )
  {
    // Error handling
    cerr << "An exception occurred!" << endl << e.GetDescription() << endl;
    return 1;
  }

  // Quit application
  return 0;
}

Handling Camera Events#

Basler GigE Vision and USB3 Vision cameras can send event messages. For example, when a sensor exposure has finished, the camera can send an Exposure End event to the computer. The event can be received by the computer before the image data for the finished exposure has been completely transferred. This section describes how to retrieve and process event messages.

Event Grabbers#

The Grabbing Images section describes how Stream Grabber objects are used to grab images from a camera. Analogously, Event Grabber objects are used to receive event messages from a camera.

Creating and Preparing Event Grabbers#

Event Grabber objects are created and returned by Camera objects.

// Get the event grabber
Camera_t::EventGrabber_t EventGrabber(camera.GetEventGrabber());
if ( ! EventGrabber.IsAttached() ) {
  cerr << "The camera does not support event grabbing" << endl;
  return false;
}

Never try to call free or delete on IEventGrabber pointers. The camera object owns Event Grabbers and manages their lifetime.

Event Grabbers use internal memory buffers for receiving event messages. The number of buffers can be parametrized using the Event Grabber's Pylon::CPylonGigEEventGrabber::NumBuffer member:

EventGrabber.NumBuffer.SetValue(20);

Note

The number of buffers must be parametrized before calling the Event Grabber's Open() method!

A connection to the device and all necessary resources for receiving events are allocated by calling the Event Grabber's Open() method:

EventGrabber.Open();

Enabling Events#

To let the camera send event messages, the sending of event messages must be enabled using the Camera object.

First, the Pylon::CBaslerGigECamera::EventSelector must be set to the type of event to be enabled. In the following example the selector is set to the Exposure End event:

// Select the Exposure End event
camera.EventSelector = EventSelector_ExposureEnd;

When the Event Selector is set, sending events of the desired type can be enabled by using the Pylon::CBaslerGigECamera::EventNotification parameter:

// Enable sending of events of the selected event type
camera.EventNotification.SetValue( EventNotification_GenICamEvent );

To be sure that you don't miss an event, the Event Grabber should be prepared before events are enabled (see the Creating and Preparing Event Grabbers section above).

The following code snippet illustrates how to disable the sending of Exposure End events:

// Select the Exposure End event
camera.EventSelector = EventSelector_ExposureEnd;
// Disable sending of Exposure End events
camera.EventNotification.SetValue( EventNotification_Off );

Receiving Events#

Receiving events is very similar to grabbing images. The Event Grabber provides a wait object that is signaled when an event message is available. When an event message is available, it can be retrieved by calling the Event Grabber's RetrieveEvent() method.

In contrast to grabbing images, memory buffers for receiving events need not be provided by the application. Buffers for storing event messages are managed by the Event Grabber itself.

In typical applications, waiting for grabbed images and event messages is done in one common loop. This is demonstrated in the following code snippet:

// Add the stream grabber's and the event grabber's wait objects to a container
WaitObjects waitset;
waitset.Add( EventGrabber.GetWaitObject() );
waitset.Add( StreamGrabber.GetWaitObject() );

while ( doGrabbing ) {
  // Wait for an image or an event to occur (5 sec timeout)
  int idx;
  if ( waitset.WaitForAny( 5000, &idx ) ) {
    // Got event or image
    switch ( idx )
    {
    case 0: // Event available, get the message
      {
        EventResult EvResult;
        if ( EventGrabber.RetrieveEvent( EvResult ) ) {
          if ( EvResult.Succeeded() ) {
            // Successfully got the event message.
            // EvResult.Buffer points to the message
          } else {
            // Error occurred
            cerr << "Error retrieving event:" << EvResult.ErrorDescription() << endl;
          }
        } else {
          // No event available?
          // Should never happen in this sample because the wait object
          // was in signaled state when reaching this point.
        }
        break;
      } // Case 0
    case 1: // Image available, process it
      {
        GrabResult GrResult;
        if (StreamGrabber.RetrieveResult( GrResult )) {
          if (GrResult.Succeeded()) {
            // Process the image

            // Reuse the buffer for further grabbing
            StreamGrabber.QueueBuffer( GrResult.Handle(), GrResult.Context() );
          } else {
            // handle error
            // ...
          }
        }
      } // Case 1
    }  // Switch
  } // if
  else {
    // handle timeout
    // ...
  }
} // While

Parsing and Dispatching Events#

The previous section explained how to receive an event message. This section describes how to interpret an event message.

The specific layout of event messages depends on the event type and the camera type. The pylon API uses GenICam support for parsing event messages. This means that the message layout is described in the camera's XML description file.

As described in the GenApi Node Maps section, a GenApi node map is created from the XML camera description file. That node map contains node objects representing the elements of the XML file. Since the layout of event messages is described in the camera description file, the information carried by the event messages is exposed as nodes in the node map. The camera object provides members used for accessing the event related nodes in the same way as camera parameter related nodes.

For example, an Exposure End event carries the following information:

  • ExposureEndEventFrameID: indicates the number of the image frame that has been exposed
  • ExposureEndEventTimestamp: indicates the moment when the event was generated
  • ExposureEndEventStreamChannelIndex: indicates the number of the image data stream used to transfer the exposed frame

Example: The camera object's Pylon::CBaslerGigECamera::ExposureEndEventFrameID member is used to access the number of the frame the event is associated with:

int64_t frameNr = camera.ExposureEndEventFrameID.GetValue();

As described in the Accessing Parameters section, the ExposureEndEventFrameID could also be retrieved by using the camera object's node map directly:

GenApi::INodeMap* pNodeMap = pCamera->GetNodeMap();
CIntegerPtr ptrExposureEndFrameId = pNodeMap->GetNode("ExposureEndFrameId");
if ( ! ptrExposureEndFrameId ) {
  cerr << "There is no ExposureEndFrameId parameter" << endl;
  exit( 1 );
}
int64_t frameNr = ptrExposureEndFrameId->GetValue();

An Event Adapter object is used to update the event related nodes of the camera object's node map. Updating the nodes is done by passing the event message to an Event Adapter.

Event Adapters are created by Camera objects:

Pylon::IEventAdapter *pEventAdapter = camera.CreateEventAdapter();
if ( pEventAdapter == NULL ) {
  cerr << "Failed to create an event adapter" << endl;
}

To update the event related nodes, call the Event Adapter's DeliverMessage() method for each received event message:

// Retrieve the event result
EventResult EvResult;
if ( EventGrabber.RetrieveEvent( EvResult ) ) {
  if ( EvResult.Succeeded() ) {
     cout << "Successfully got an event message!" << endl;
     // Let the event adapter update the camera object's node map
     pEventAdapter->DeliverMessage( EvResult.Buffer, sizeof EvResult.Buffer );
  } else {
     cerr << "Error retrieving event:" << EvResult.ErrorDescription() << endl;
  }
}

Passing an event message to the Event Adapter does not by itself reveal whether the message contains, for example, an Exposure End event. The next section describes how node callbacks are used to get informed about the occurrence of specific events.

Event Callbacks#

The previous section described how Event Adapters are used to push the content of event messages into a camera object's node map. The IEventAdapter::DeliverMessage() method updates all nodes related to events contained in the message passed in.

As described in the Getting Informed About Parameter Changes section, it is possible to register callback functions that are fired when nodes may have been changed.

These callbacks can be used to determine if an event message contains a certain event type. For example, to get informed about Exposure End events, a callback for one of the Exposure End event related nodes must be installed. The following code snippet illustrates how to install a callback function for the ExposureEndFrameId node:

// Member function of this class will be registered as callback
struct CallbackTarget
{
  CallbackTarget( Camera_t& camera )
    : m_Camera( camera )
  { }

  // Will be fired when an Exposure End event occurs
  void EndOfExposureCallback( GenApi::INode* pNode )
  {
    try
    {
      cout << "The message contains an Exposure End event." << endl;
      cout << "Timestamp: " << m_Camera.ExposureEndEventTimestamp.GetValue() << endl
           << "Frame number: " << m_Camera.ExposureEndEventFrameID.GetValue() << endl;
    }
    catch ( Pylon::GenericException& e )
    {
      cerr << "Failed to get event information. Exception occurred:"
           << e.GetDescription() << endl;
    }
  }

  Camera_t& m_Camera;
} callbackTarget( camera );

// Register the callback for the ExposureEndEventTimestamp node.
GenApi::CallbackHandleType hCb = GenApi::Register(
  camera.ExposureEndEventTimestamp.GetNode(),
  callbackTarget,
  &CallbackTarget::EndOfExposureCallback );

The registered callback will be fired from the context of the IEventAdapter::DeliverMessage() function.

Note

Since one event message can aggregate multiple events, DeliverMessage will issue multiple calls to a callback function when multiple events of the same type are available.

Cleanup#

Before closing and deleting the Camera object, the event related objects must be closed and destroyed as illustrated in the following code snippet:

// Disable sending of Exposure End events
camera.EventSelector = EventSelector_ExposureEnd;
camera.EventNotification.SetValue( EventNotification_Off );

// Cleanup of event grabber and event adapter
// Deregister the callback first so it cannot fire while the event grabber is being shut down
camera.ExposureEndEventTimestamp.GetNode()->DeregisterCallback( hCb );
// Close the event grabber to tear down the connection
// and free the resources used for receiving events
EventGrabber.Close();
// Delete the event adapter object
camera.DestroyEventAdapter( pEventAdapter );

Chunk Parser: Accessing Chunk Features#

Basler Cameras can send additional information appended to the image data, such as frame counters, time stamps, and CRC checksums. This section explains how to enable Chunk Features and how to access the added data.

Enabling Chunks#

Before a feature producing a chunk can be activated, the camera's chunk mode must be activated:

// Open the camera
camera.Open();

// Enable chunks in general
if ( GenApi::IsWritable( camera.ChunkModeActive ) ) {
  camera.ChunkModeActive.SetValue( true );
} else {
  cerr << "The camera does not support chunk features" << endl;
  return 1;
}

When the camera is in chunk mode, it transfers data blocks that are partitioned into chunks. The first chunk is always the image data. When chunk features are enabled, the image data chunk is followed by chunks containing the information generated by the chunk features.

Once the chunk mode is activated, chunk features can be enabled:

camera.ChunkSelector.SetValue( ChunkSelector_Timestamp );
camera.ChunkEnable.SetValue( true );

Grabbing Buffers#

Grabbing from an image stream with chunks is very similar to grabbing from an image stream without chunks. Memory buffers must be provided that are large enough to store both the image data and the added chunk data.

The camera's PayloadSize parameter reports the necessary buffer size (in bytes):

// Ask for the buffer size
const size_t ImageSize = (size_t) ( camera.PayloadSize.GetValue() );

// Allocate buffer(s)
uint8_t *pBuffer = new uint8_t[ ImageSize ];

// Inform the stream grabber about the buffer size
StreamGrabber.MaxBufferSize.SetValue( ImageSize );

// Tell the stream grabber how many buffers will be used
// ( in this example only 1 )
StreamGrabber.MaxNumBuffer.SetValue( 1 );

Now an image plus added chunks can be grabbed:

// Allocate resources related to grabbing
StreamGrabber.PrepareGrab();

// Register the buffer at the stream grabber
const StreamBufferHandle hBuffer =
  StreamGrabber.RegisterBuffer( pBuffer, ImageSize );

// Put buffer into the grab queue so it will be filled with data
StreamGrabber.QueueBuffer( hBuffer, NULL );

// Let the camera acquire one image
camera.AcquisitionMode.SetValue( AcquisitionMode_SingleFrame );
camera.AcquisitionStart.Execute();


GrabResult Result;
// Wait for the buffer to be filled
if ( StreamGrabber.GetWaitObject().Wait( 3000 ) ) {
  // Get the grab result from the grabber's result queue
  StreamGrabber.RetrieveResult( Result );
} else {
  // Timeout
  cerr << "Timeout occurred!" << endl;
  return 1;
}

if ( ! Result.Succeeded() ) {
  // Error Handling
  cerr << "No image acquired!" << endl;
  cerr << "Error code : 0x" << hex
    << Result.GetErrorCode() << endl;
  cerr << "Error description : "
    << Result.GetErrorDescription() << endl;
  return 1;
}

// Check if a buffer containing chunk data has been received
if ( PayloadType_ChunkData != Result.GetPayloadType() ) {
  cerr << "Unexpected payload type received" << endl;
  return 1;
}
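Because the image data is always the first chunk of the data block, the buffer pointer returned by the grab result can be used to access the pixel data directly, even before the chunks are parsed. A minimal sketch, assuming an 8-bit monochrome pixel format:

```cpp
// The image data is the first chunk in the buffer, so the buffer
// pointer also points to the first pixel of the grabbed image.
// (Assumes an 8-bit monochrome pixel format.)
const uint8_t* pImageData = static_cast<const uint8_t*>( Result.Buffer() );
cout << "Gray value of first pixel: " << (int) pImageData[0] << endl;
```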

Accessing the Chunk Data#

The data block containing the image chunk and the other chunks has a self-descriptive layout. Before accessing the data in the added chunks, the data block must be parsed by a Chunk Parser object.

The Camera object is responsible for creating a Chunk Parser:

// Create a chunk parser
IChunkParser* pChunkParser = camera.CreateChunkParser();

Once a Chunk Parser has been created, grabbed buffers can be attached to it. When a buffer is attached, it is parsed, and the chunk data can then be accessed through members of the Camera object.

// Attach image buffer with chunk data to the parser. The parser extracts
// the included data from the chunk.
pChunkParser->AttachBuffer( Result.Buffer(), Result.GetPayloadSize() );

// Access the chunk data.
// Before accessing the chunk data, it should be checked to see
// if the chunk is readable. When it is readable, the buffer
// contains the requested chunk data.
if ( IsReadable(camera.ChunkTimestamp) )
  cout << "TimeStamp : " << camera.ChunkTimestamp.GetValue() << endl;

To check the result of the CRC Checksum chunk feature, use the Chunk Parser's HasCRC() and CheckCRC() methods. Note that the camera only sends a CRC when the CRC Checksum feature is enabled.

// Enable CRC chunks (before calling PrepareGrab()!)
camera.ChunkSelector.SetValue( ChunkSelector_PayloadCRC16 );
camera.ChunkEnable.SetValue( true );

// ...

// Check the CRC checksum (after the buffer has been attached to the Chunk Parser)
if ( pChunkParser->HasCRC() && !pChunkParser->CheckCRC() ) {
  cerr << "Image was damaged!" << endl;
  return 1;
}

Before reusing a buffer for grabbing, the buffer must be detached from the Chunk Parser.

// After detaching the buffer, the chunk data is no longer accessible!
pChunkParser->DetachBuffer();

After detaching a buffer, the next grabbed buffer can be attached and the included chunk data can be read.
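Putting these steps together, a continuous grab loop typically attaches, reads, detaches, and requeues each buffer in turn. The following is a sketch only; it assumes the setup shown above, with AcquisitionMode_Continuous instead of single frame and the pChunkParser pointer already created:

```cpp
// Sketch of a grab loop reusing buffers (assumes continuous acquisition
// and the stream grabber / chunk parser setup shown above).
for ( int i = 0; i < 10; ++i ) {
  if ( !StreamGrabber.GetWaitObject().Wait( 3000 ) ) {
    cerr << "Timeout occurred!" << endl;
    break;
  }

  GrabResult Result;
  StreamGrabber.RetrieveResult( Result );

  if ( Result.Succeeded() && Result.GetPayloadType() == PayloadType_ChunkData ) {
    // Parse the buffer and read the chunk data
    pChunkParser->AttachBuffer( Result.Buffer(), Result.GetPayloadSize() );
    if ( IsReadable( camera.ChunkTimestamp ) )
      cout << "Timestamp: " << camera.ChunkTimestamp.GetValue() << endl;

    // The buffer must be detached before it can be reused
    pChunkParser->DetachBuffer();
  }

  // Put the buffer back into the grab queue for the next image
  StreamGrabber.QueueBuffer( Result.Handle(), NULL );
}
```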

When you have finished grabbing, the Chunk Parser must be deleted:

// Destroy the chunk parser
camera.DestroyChunkParser( pChunkParser );

Getting Informed About Device Removal#

Callback functions can be installed that are triggered when a camera device is removed. As soon as the Camera object's Open() method has been called, either a C-style function or a C++ class member function can be installed as a callback.

Installing a C function:

void RemovalCallbackFunction( IPylonDevice* pDevice )
{
  cout << endl << "Callback function for removal of device "
      << pDevice->GetDeviceInfo().GetFullName().c_str() << " has been fired" << endl;
}

// ...

// Open the camera
pCamera->Open();

// Register a "normal" function
DeviceCallbackHandle hCb2 = RegisterRemovalCallback( pCamera, &RemovalCallbackFunction);

Installing a C++ class member function:

// A class with a member function that can be registered for device removal notifications
class AClass
{
public:
  // The member function to be registered
  void RemovalCallbackMemberFunction( IPylonDevice* pDevice )
  {
    cout << endl << "Member function callback for removal of device "
      << pDevice->GetDeviceInfo().GetFullName().c_str() << " has been fired" << endl;
  }
};

// ...

  AClass a;  // A member function of this class will be registered as a removal callback function

  // ...

  // Open the camera
  pCamera->Open();

  // Register a member function
  DeviceCallbackHandle hCb1 = RegisterRemovalCallback( pCamera, a, &AClass::RemovalCallbackMemberFunction);

All registered callbacks must be deregistered before calling the Camera object's Close() method.

if ( ! pCamera->DeregisterRemovalCallback( hCb1 ) )
  cerr << "Failed to deregister the callback function" << endl;

pCamera->Close();