uEye Cameras

Miscellaneous

Tests with ROS Node

  1. Testing only one camera to check if there are dropped frames
    1. 5 FPS: works fine, no dropped frames.
    2. 30 FPS: there are still dropped frames.
  2. Test if there are corrupted frames: no corrupted frames at either 5 or 60 FPS.
  3. Test one camera on USB 2: first checked the configuration parameters with ueyedemo; the maximum pixel clock is only 35 MHz, giving 25 fps and 39 ms of exposure. Configured the ROS node with those parameters, and at 30 fps there are still dropped frames. Started the test with 0 on the display and stopped it around 200, but only got 50 images from the rosbag, so there were indeed dropped frames.

Where to find the documentation

The IDS documentation can be found on the bellatrix computer (jessica) at:

  file:///usr/share/doc/ids/ueye_manual/index.html?xc_cockpit_sharpness.html

On the IDS website (be careful: the documentation on the website is the most up-to-date version, which may not correspond to your SDK version):

  https://en.ids-imaging.com/manuals/uEye_SDK/EN/uEye_Manual/

Tips for master/slave cameras

In our stereo setup, the camera with ID 2 is the master. At the beginning of the program, before configuring the cameras in the initialization, the PWM output of the master camera has to be disabled. Then we configure the slave camera first, and only afterwards the master camera and its PWM output. With this initialization order, the cameras take their first picture at the same time.
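
A minimal sketch of this initialization order, assuming the master's PWM is generated on the flash output and the slave is triggered on the falling edge. hMaster, hSlave and the 30 Hz / 50% values are illustrative, and the exact is_IO() constants depend on your SDK version (see the documentation links above):

  // Before any configuration: disable the master's PWM output.
  UINT nMode = IO_FLASH_MODE_OFF;
  is_IO (hMaster, IS_IO_CMD_FLASH_SET_MODE, (void*)&nMode, sizeof(nMode));
  
  // 1) Configure the slave first; it is triggered on the falling edge
  //    of the master's PWM signal.
  is_SetExternalTrigger (hSlave, IS_SET_TRIGGER_HI_LO);
  // ... rest of the slave configuration ...
  
  // 2) Configure the master last, then enable its PWM output.
  // ... rest of the master configuration ...
  IO_PWM_PARAMS pwmParams;
  pwmParams.dblFrequency_Hz = 30.0; // illustrative frame rate
  pwmParams.dblDutyCycle    = 0.5;  // illustrative duty cycle
  is_IO (hMaster, IS_IO_CMD_PWM_SET_PARAMS, (void*)&pwmParams, sizeof(pwmParams));
  nMode = IO_FLASH_MODE_PWM;
  is_IO (hMaster, IS_IO_CMD_FLASH_SET_MODE, (void*)&nMode, sizeof(nMode));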

Camera synchronization

To avoid cropped images, we have to lock the buffer currently being read and use a ring buffer. To lock and unlock the “pLastMem[i]” buffer from a ring buffer:

  is_LockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)(pLastMem[i]));
  is_UnlockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)pLastMem[i]);

To use a ring buffer, do the following.

In the initialization:

  // uEye calls return IS_SUCCESS (0) on success, so after the chain below
  // nRet is non-zero if any call failed.
  nRet = nRet || is_ClearSequence(*hCam);
  for(int i=0; i<nbImageBuffer; i++){
      nRet = nRet || is_AllocImageMem (*hCam, width, height, bitspixel, &((*pcImgMem)[i]), &((*memID)[i]));
      nRet = nRet || is_AddToSequence(*hCam, (*pcImgMem)[i], (*memID)[i]); // Add the memory to the ring buffer sequence
  }
  

In the loop, to get the “pCurrentMem[i]” buffer currently used for writing and the last filled “pLastMem[i]” buffer ready for reading:

  is_GetActSeqBuf(hCam[i], &nbImageBuffer, &(pCurrentMem[i]), &(pLastMem[i]));
  

At the end, to close the camera:

  // Clear sequence
  nRet = is_ClearSequence(*hCam);
  // Release the Image memory
  for(int i=0; i<nbImageBuffer; i++){
      nRet = nRet || is_FreeImageMem (*hCam, pcImgMem[i], memID[i]);
  }
  

One disadvantage of is_GetActSeqBuf() is that at very high frame rates, additional images may arrive between the frame event and the moment the memory is accessed/locked. The images arriving in this period are skipped when you query the current image, so the ring buffer is not filled in order (one image memory after another). To get the last image used for writing and available for reading, use the image queue method instead. In addition, on its first call is_GetActSeqBuf() returns the same buffer in both variables pCurrentMem[i] and pLastMem[i], so we would lock the buffer currently being written to; that is why the first image was cropped.
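
A minimal sketch of a guard against this first-call behaviour, reusing the variable names from the snippets above; it skips locking while both pointers still refer to the buffer being written:

  is_GetActSeqBuf(hCam[i], &nbImageBuffer, &(pCurrentMem[i]), &(pLastMem[i]));
  if (pLastMem[i] != pCurrentMem[i]) { // on the first call both point to the write buffer
      is_LockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)pLastMem[i]);
      // --> copy the data from the buffer
      is_UnlockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)pLastMem[i]);
  }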

With the image queue method, new images will be added to the end of the queue on arrival (FIFO principle). You can be sure to always receive the oldest image which has not yet been queried. In addition, image memories are automatically locked immediately after receiving the image. This prevents images from being overwritten when very high frame rates and few image memories are used.

To initialize the image queue method:

  nRet = is_InitImageQueue(*hCam, 0);

In the loop, to get the last image used for writing (the lock is automatically done by is_WaitForNextImage(); the unlock has to be done afterwards):

  is_WaitForNextImage(hCam[i], 1000, &(pLastMem[i]), &(pLastID[i]));
  // --> copy the data from the buffer
  is_UnlockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)pLastMem[i]);
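
A hedged variant of this loop that checks the return code; IS_TIMED_OUT should be returned when no image arrives within the 1000 ms timeout (verify the constant against your SDK headers):

  INT ret = is_WaitForNextImage(hCam[i], 1000, &(pLastMem[i]), &(pLastID[i]));
  if (ret == IS_SUCCESS) {
      // --> copy the data from the buffer
      is_UnlockSeqBuf (hCam[i], IS_IGNORE_PARAMETER, (char*)pLastMem[i]);
  } // else: ret == IS_TIMED_OUT means no new image arrived within the timeout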

To exit the image queue:

  nRet = is_ExitImageQueue(*hCam);

Limitations of our camera model

  • For hardware reasons, the board-level versions of the USB uEye LE cameras can only be triggered on the falling edge.
  • The falling-edge delay of the UI-124x/UI-324x/UI-524x USB 3 models is 21 ms by default. Check https://en.ids-imaging.com/manuals/uEye_SDK/EN/uEye_Manual/index.html?hw_ui-124x.html
  • The 324x sensor slows down in triggered mode. In free-run mode, the read-out of the previous image and the exposure of the current image run in parallel. In trigger mode, the sensor waits until the trigger is sent; after the trigger come the sensor exposure and then the sensor read-out. The UI-3060 and UI-3260 cameras can do the read-out and the exposure in parallel in trigger mode.
  • The read-out time is between 16.5 and 16.6 ms. So for 30 Hz (corresponding to a period of 33.3 ms), the maximal exposure time is the period minus the read-out time, i.e. 16.7 ms; at 16.8 ms the exposure time is too high. This has been tested on the cameras (see the sketch after this list).
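
A quick sanity check of this timing budget, as a minimal sketch using the measured values from the list above:

  // Maximal exposure time = frame period minus sensor read-out time.
  const double framePeriodMs = 1000.0 / 30.0; // ~33.3 ms at 30 Hz
  const double readOutMs     = 16.6;          // measured read-out time
  const double maxExposureMs = framePeriodMs - readOutMs; // ~16.7 ms, as tested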

Automatic brightness control

The automatic control can be done with the IDS SDK as follows:

  double dEnable = 1;
  double maxExposure(16.7);
  is_SetAutoParameter (*hCam, IS_SET_AUTO_SHUTTER_MAX, &maxExposure, 0); // Has to be set before enabling the auto shutter
  is_SetAutoParameter (*hCam, IS_SET_ENABLE_AUTO_GAIN, &dEnable, 0);
  is_SetAutoParameter (*hCam, IS_SET_ENABLE_AUTO_SHUTTER, &dEnable, 0);

These functions are not the same in recent SDK versions, where they are replaced by is_AutoParameter(). The modifications take effect 2 or 3 images after the first image: the camera first modifies its exposure time, then the hardware gain.
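
A minimal sketch of the newer interface, assuming an SDK version that provides is_AutoParameter(); the command and enable constants below are written from memory and should be verified against your SDK headers and manual:

  // Enable automatic exposure (AES) through the newer is_AutoParameter().
  INT nEnable = IS_AUTOPARAMETER_ENABLE; // assumed constant, check your SDK headers
  nRet = is_AutoParameter (*hCam, IS_AES_CMD_SET_ENABLE, &nEnable, sizeof(nEnable));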

For the stereo cameras, we should update the exposure time and the hardware gain ourselves, for both cameras at the same time, so we are sure the two cameras have the same configuration when they take a picture. It is possible to get and set the exposure time and the hardware gain:

  double new_exposure(0);
  // to get the exposure
  nRet = is_Exposure(hCam[1], IS_EXPOSURE_CMD_GET_EXPOSURE, (void*)&new_exposure, sizeof(new_exposure)); 
  // to set the exposure
  nRet = is_Exposure(hCam[0], IS_EXPOSURE_CMD_SET_EXPOSURE, (void*)&new_exposure, sizeof(new_exposure)); 
  // to get the hardware gain
  INT masterGain = is_SetHardwareGain (hCam[1], IS_GET_MASTER_GAIN, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER); 
  // to set the hardware gain
  nRet = is_SetHardwareGain (hCam[0], masterGain, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER, IS_IGNORE_PARAMETER);
  

memcheck-clean

The IDS SDK does not seem to be memcheck-clean:

  valgrind --log-file="logValgrind" -v  --leak-check=full --track-origins=yes --show-reachable=yes ./synchronizationUeyeCamera
  

ROS node ueye_cam updated

The ROS node ueye_cam has been updated to include the image queue method and a ring buffer. To get the two corresponding left and right images in another node which subscribes to both image topics, we can use the Time_Synchronizer tool from the message_filters package: http://wiki.ros.org/message_filters#Time_Synchronizer.

With the Time_Synchronizer we can synchronize messages by their timestamps. With ueye_cam it can be done this way:

      #include <ros/ros.h>
      #include <message_filters/subscriber.h>
      #include <message_filters/time_synchronizer.h>
      #include <sensor_msgs/Image.h>
      #include <cv_bridge/cv_bridge.h>
      
      using namespace sensor_msgs;
      using namespace message_filters;
      
      void stereoCallback(const ImageConstPtr& image1, const ImageConstPtr& image2)
      {
          try
          {
              cv_bridge::CvImageConstPtr cv_ptr_1 = cv_bridge::toCvShare(image1, "bgr8");
              cv_bridge::CvImageConstPtr cv_ptr_2 = cv_bridge::toCvShare(image2, "bgr8");
              // ...
          }
          catch (cv_bridge::Exception& e)
          {
              ROS_ERROR("Could not convert from '%s' to 'bgr8'.", image1->encoding.c_str());
          }
      }
      
      int main(int argc, char** argv)
      {
          ros::init(argc, argv, "vision_node");
          ros::NodeHandle nh;
      
          message_filters::Subscriber<Image> image_left_sub(nh, "/stereo/left/image_raw", 1);
          message_filters::Subscriber<Image> image_right_sub(nh, "/stereo/right/image_raw", 1);
          TimeSynchronizer<Image, Image> sync(image_left_sub, image_right_sub, 10);
          sync.registerCallback(boost::bind(&stereoCallback, _1, _2));
      
          ros::spin();
          
          return 0;
      }
      

Another example, with a remapping function for fisheye cameras calibrated with the Matlab toolbox ocam_calib:

  ssh://git@redmine.laas.fr/laas/jcombier/remap_image/remap_image.git
  

ROS stereo calibration

Before running the calibration, a parameter has to be changed in the code. For this, open /opt/ros/indigo/lib/python2.7/dist-packages/camera_calibration/calibrator.py as a sudoer. Search for stereoCalibrate: after this function (which is called twice), alpha is set to 0. Set alpha to -1.

How to calibrate the stereo pair (the stereo ueye_cam node has to be launched)

  
  rosrun camera_calibration cameracalibrator.py --size 7x6 --square 0.108 --size 8x5 --square 0.029 right:=/stereo/right/image_raw left:=/stereo/left/image_raw
  

It is possible to do the calibration with multiple calibration patterns; that is why there are two --size X --square Y pairs above. The reason for using two is that the corner-extraction algorithm embedded in the calibration cannot perform properly if the calibration pattern is far away and its square size is small. That is why multiple calibration patterns of different sizes can be used.

Check that the size given to --square is in meters.

The “camera_info” message is described here: sensor_msgs/CameraInfo Message

Clicking save on the calibration GUI will save a zip file with all the captured images and an ost.txt file which contains the result of the calibration. Be aware that the translation vector and rotation matrix (extrinsic parameters) are not saved in that file; they are only displayed in the terminal where the GUI was launched.

Updated version (June 2016): clicking save (on the GUI) will save a .zip file (check the terminal where the GUI was launched to see its path; it should be in /tmp). It contains the left and right images that were used for the calibration process, plus an ost.txt, a left.yaml and a right.yaml.

left.yaml:

image_width: 1280
image_height: 1024
camera_name: narrow_stereo/left
camera_matrix:
  rows: 3
  cols: 3
  data: [2452.651066, 0.000000, 810.700853, 0.000000, 2429.659020, 541.137675, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.486129, 0.187693, 0.000584, -0.005902, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [0.012554, 0.989660, -0.142883, -0.999812, 0.014534, 0.012818, 0.014762, 0.142696, 0.989657]
projection_matrix:
  rows: 3
  cols: 4
  data: [2295.255726, 0.000000, 1012.966070, 0.000000, 0.000000, 2295.255726, 288.972866, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]

right.yaml:

image_width: 1280
image_height: 1024
camera_name: narrow_stereo/right
camera_matrix:
  rows: 3
  cols: 3
  data: [2461.513649, 0.000000, 419.666788, 0.000000, 2448.229542, 465.184518, 0.000000, 0.000000, 1.000000]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.490283, 0.197057, 0.000775, 0.010704, 0.000000]
rectification_matrix:
  rows: 3
  cols: 3
  data: [-0.010678, -0.991706, -0.128083, 0.999942, -0.010806, 0.000301, -0.001682, -0.128072, 0.991763]
projection_matrix:
  rows: 3
  cols: 4
  data: [2295.255726, 0.000000, 1012.966070, -246.272996, 0.000000, 2295.255726, 288.972866, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000]

Therefore, left.ini and right.ini files have to be created. As the latest version (June 2016) already saves a left.yaml and a right.yaml, we need to convert them to left.ini and right.ini. Make sure to change the camera name from [narrow_stereo/{left,right}] to [stereo/{left,right}] in the .yaml files before doing the following step.

Open a new terminal and go to the folder where left.yaml and right.yaml have been saved and do:

sed -i 's,camera_name: narrow_stereo/left,camera_name: stereo/left,g'  left.yaml
sed -i 's,camera_name: narrow_stereo/right,camera_name: stereo/right,g'  right.yaml
rosrun camera_calibration_parsers convert left.yaml left.ini
rosrun camera_calibration_parsers convert right.yaml right.ini

Once these four files are created, they have to be copied to the default directory where the ueye_cam node expects them:

mkdir -p  ~/.ros/camera_conf/stereo/
cp right.ini ~/.ros/camera_conf/stereo/
cp left.ini ~/.ros/camera_conf/stereo/
mkdir -p  ~/.ros/camera_info/stereo/  
cp left.yaml ~/.ros/camera_info/stereo/  
cp right.yaml ~/.ros/camera_info/stereo/

To get the rectified images, run the stereo_image_proc ROS package in a separate terminal:

  ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc
  

The ROS topics containing them will be:

  /stereo/left/image_rect_color
  /stereo/right/image_rect_color
  

Stereo 3D Reconstruction

Stereo 3D reconstruction is based on the stereo_image_proc ROS package. It takes the raw images from the cameras and outputs rectified images, a disparity map and a 3D point cloud.

To obtain a good disparity map and 3D point cloud, it is advised to follow the Choosing Good Stereo Parameters guide from ROS.

For the following steps, stereo_image_proc has to be launched:

ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc

How to see the disparity map

rosrun image_view disparity_view image:=/stereo/disparity

How to see the 3D pointcloud

RVIZ is a ROS tool in which we can display the 3D point cloud. It needs a link in the TF tree in order to display it. If you already have such a setup for your robot, skip the next step. If you are doing a single test with your cameras, a simple approach is to create a static link in tf. In a separate terminal, run:

rosrun tf static_transform_publisher 1 1 0 0 0 0 /base_link /camera 500

And then launch RVIZ with:

rosrun rviz rviz

When RVIZ opens, select either 'base_link' or 'camera' as the “Fixed Frame” under “Global Options”.

If the 3D point cloud does not show, check that the disparity map looks correct. If it does not, you have to re-tune the stereo parameters to get a good disparity map; mainly, you will have to play with speckle_range, disparity_range and min_disparity.
