These data are used to identify the kinematic parameter deviations of a Cartesian robot, to train a Gaussian Process Regression (GPR) model, and to record the compensation results of four calibration methods under different loading conditions.

Instructions: 

Compensation results file: This file contains the compensation results at 8 test points for the four calibration methods under different loading conditions; see Figure 16 in the paper.

HCT+BD+GPR_training file: These data record 320 groups of end-effector position points after compensation with the HCT+BD model. Subtracting the designated positions from these data yields 320 groups of residual errors, which are used to train the GPR model. The 10-fold cross-validation results of the GPR model for the x and z errors, obtained from these data, are shown in Figures 14 and 15 of the paper.
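The residual computation and GPR training described above can be sketched as follows. This is a minimal illustration only: the positions here are synthetic placeholders (the real values come from the HCT+BD+GPR_training file), and the kernel choice and scikit-learn API are assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# Placeholder for the 320 designated (commanded) positions, in mm.
designated = rng.uniform(0.0, 500.0, size=(320, 3))
# Placeholder for the measured positions after HCT+BD compensation.
measured = designated + rng.normal(0.0, 0.05, size=(320, 3))

# 320 groups of residual errors: measured minus designated position.
residual = measured - designated
X = designated            # GPR input: the commanded position
y_x = residual[:, 0]      # GPR target: x-direction residual error

# Train a GPR model and evaluate it with 10-fold cross-validation.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
scores = cross_val_score(gpr, X, y_x, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(residual.shape, scores.shape)
```

The same procedure, with the z-direction residual as the target, gives the second GPR model.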

 

HCT+GPR_training file: These data record 320 groups of end-effector position points after compensation with the HCT model. Subtracting the designated positions from these data yields 320 groups of residual errors, which are used to train the GPR model.

Identify_kinematic_parameter_deviation file: The nonlinear least squares method is used to minimize the difference between the amended position and the actual position, which gives the deviations of the kinematic parameters. The identification procedure is shown in Figure 4, and the resulting deviations are listed in Table 2 of the paper.
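The identification step can be sketched with `scipy.optimize.least_squares`. The forward model below is a deliberately simplified stand-in (per-axis offset deviations only, not the full kinematic model of the paper), and all numerical values are synthetic assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def forward(q, dp):
    # Simplified kinematic model: commanded axis positions plus a
    # per-axis parameter deviation (stand-in for the full model).
    return q + dp

rng = np.random.default_rng(1)
q = rng.uniform(0.0, 500.0, size=(50, 3))    # commanded positions, mm
true_dp = np.array([0.12, -0.08, 0.05])      # synthetic "true" deviations, mm
# Simulated actual positions with small measurement noise.
actual = forward(q, true_dp) + rng.normal(0.0, 0.01, size=q.shape)

def residuals(dp):
    # Difference between the model-predicted and actual positions,
    # flattened so least_squares sees one residual vector.
    return (forward(q, dp) - actual).ravel()

# Nonlinear least squares recovers the parameter deviations.
sol = least_squares(residuals, x0=np.zeros(3))
print(sol.x)
```

In the real identification, `forward` would be the amended kinematic model and `dp` the full set of kinematic parameter deviations reported in Table 2.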

 


Calibration datasets used in the article "Standard Plenoptic Cameras Mapping to Camera Arrays and Calibration based on DLT". These datasets were acquired with a Lytro Illum camera using two calibration grids of different sizes: an 8 × 6 grid of 211 × 159 mm (Big Pattern) with approximately 26.5 mm cells, and a 20 × 20 grid of 121.5 × 122 mm (Small Pattern) with approximately 6.1 mm cells. Each acquired dataset comprises 66 fully observable poses of the calibration pattern.

Instructions: 

The dataset is divided into the following zip files:

  • GD44M00145_WhiteImages: White image database of the Lytro Illum camera used to acquire the datasets.

  • Big Pattern 2D - Full: Calibration dataset with 66 poses of the big calibration grid.

  • Big Pattern 2D - Sample: Calibration dataset with 10 poses of the big calibration grid.

  • Big Pattern 2D - Sample Reduced: Calibration dataset with 5 poses of the big calibration grid.

  • Small Pattern 2D - Full: Calibration dataset with 66 poses of the small calibration grid.

  • Small Pattern 2D - Sample: Calibration dataset with 10 poses of the small calibration grid.

  • Small Pattern 2D - Sample Reduced: Calibration dataset with 5 poses of the small calibration grid.

  • Object: Objects dataset with the same acquisition conditions as the calibration datasets.

  • PlenCalCVPR2013Datasets: Lytro images used in the article for Lytro 1st generation calibration.

 

To obtain the light field associated with each image, read the Lytro raw image files (.lfp) using Dansereau's calibration toolbox (https://github.com/doda42/LFToolbox) and the white images provided here. The calibration of these datasets can be performed using the calibration toolbox provided in the article (http://www.isr.tecnico.ulisboa.pt/~nmonteiro/articles/plenoptic/tcsvt2019).
