
Author: Livox Pioneer | Time: 2020-11-24 16:56:46 | Robotics
An Open-Source Algorithm for Automatic Extrinsic Calibration of Multiple LiDARs

2020/06/23
  Perception modules in autonomous driving acquire information about the surroundings, so their performance and reliability are critical for the entire autonomous driving system and directly affect downstream stages such as localization, route planning, and decision making. Development in autonomous driving therefore focuses on enhancing perception in complex scenes through multi-sensor data fusion. However, cameras, millimeter-wave radars, and LiDARs each have their own coordinate system, meaning every sensor outputs data in its own frame. The process of transforming each sensor's data into a common coordinate system is called extrinsic sensor calibration.
   
      LiDAR point cloud data contain the 3D coordinates of objects with the LiDAR itself as the reference origin. To combine multiple LiDARs for larger 3D coverage, the point clouds of all LiDARs can be stitched together by transforming each LiDAR into a global coordinate system using its extrinsic parameters.
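  To make this stitching step concrete, here is a minimal sketch, not taken from the Livox codebase: the helper name transform_points, the example extrinsic matrices, and the random placeholder clouds are all assumptions for illustration. It applies a 4x4 extrinsic matrix to each LiDAR's points so that both clouds end up in the same global frame.

```python
import numpy as np

def transform_points(points_xyz, T):
    """Apply a 4x4 extrinsic transform to an (N, 3) array of points."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    return (T @ homo.T).T[:, :3]

# Hypothetical extrinsics: LiDAR0 is the global reference frame,
# T_global_lidar1 maps LiDAR1 coordinates into that global frame.
T_global_lidar0 = np.eye(4)
T_global_lidar1 = np.array([
    [0.0, -1.0, 0.0, 0.50],   # rotation part (a 90-degree yaw, for illustration)
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.00],
    [0.0,  0.0, 0.0, 1.00],
])

cloud_lidar0 = np.random.rand(1000, 3)   # placeholder point clouds
cloud_lidar1 = np.random.rand(1000, 3)

# Stitch both clouds in the global coordinate system.
merged = np.vstack([
    transform_points(cloud_lidar0, T_global_lidar0),
    transform_points(cloud_lidar1, T_global_lidar1),
])
```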
   
  01 Manual extrinsic calibration
  Extrinsic calibration of multiple LiDARs can be performed with the calibration tool in Livox Viewer. The trick is to find a place with a broad view where distinctive, well-arranged buildings or objects (hereinafter referred to as "A") sit in the region where the FOVs of the different LiDARs overlap.
   
  

   
  First, acquire a frame of point cloud data of A with a longer integration time in Livox Viewer, observe the point cloud of A formed by each LiDAR within its FOV, and fine-tune the six variables [x, y, z, roll, pitch, yaw] in the Livox Viewer calibration tool until the A point clouds from all FOVs overlap completely. The parameters obtained at that point are the desired extrinsic parameters. Calibration error with this method can be kept below 0.1 degrees.
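  For reference, the sketch below shows how such a six-parameter adjustment maps to a 4x4 extrinsic matrix. The Z-Y-X (yaw-pitch-roll) rotation order is an assumption; Livox Viewer may use a different convention, so treat this as an illustration rather than its exact behavior.

```python
import numpy as np

def extrinsic_from_xyz_rpy(x, y, z, roll, pitch, yaw):
    """Build a 4x4 extrinsic matrix from a translation and roll/pitch/yaw (radians).

    Rotation is composed as R = Rz(yaw) @ Ry(pitch) @ Rx(roll); this ordering
    is an assumption and may differ from the convention used by Livox Viewer.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)

    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])

    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

# Example: a LiDAR mounted 0.5 m to the right of the reference, yawed 30 degrees.
T_lidar0_lidar1 = extrinsic_from_xyz_rpy(0.0, -0.5, 0.0, 0.0, 0.0, np.radians(30.0))
```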
   
   This method clearly has limitations: it requires a field of view shared by the multiple LiDARs, and its accuracy and efficiency depend heavily on the user's proficiency.
   
  02 Automatic extrinsic calibration
  In some use cases, it is not easy to find an open environment or a suitable reference object for calibration. Livox therefore introduced its automatic calibration technique TFAC-Livox (Target-Free Automatic Calibration) and open-sourced the algorithm on GitHub. The technique relies on a geometry-consistency assumption: the local 3D structures scanned by the different LiDARs are the same. A map is built by moving the reference LiDAR (LiDAR0); the data from the remaining LiDARs are then registered against this reconstructed map iteratively, and the matching error is minimized under the consistency assumption until the algorithm converges and the calibration matrix stops changing (the six extrinsic parameters settle to constant values). The resulting calibration matrix gives the extrinsic parameters.
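  To make the consistency criterion concrete, here is a minimal sketch, not the TFAC-Livox implementation: the brute-force nearest-neighbor search and the function name matching_error are illustrative assumptions. It computes the quantity the registration tries to minimize, the mean distance from each transformed point of the LiDAR being calibrated to its nearest neighbor in the LiDAR0 map.

```python
import numpy as np

def matching_error(map_points, lidar_points, T):
    """Mean nearest-neighbor distance from transformed LiDAR points to the map.

    map_points:   (M, 3) points of the map built from the reference LiDAR0.
    lidar_points: (N, 3) points from the LiDAR being calibrated.
    T:            candidate 4x4 extrinsic transform (LiDAR frame -> map frame).
    """
    homo = np.hstack([lidar_points, np.ones((lidar_points.shape[0], 1))])
    transformed = (T @ homo.T).T[:, :3]

    # Brute-force nearest neighbor for clarity; a KD-tree would be used in practice.
    dists = np.linalg.norm(
        transformed[:, None, :] - map_points[None, :, :], axis=2
    )
    return dists.min(axis=1).mean()
```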
   
  Github: https://github.com/Livox-SDK/Livox_automatic_calibration
   
  

  
Example: the red point cloud is the map built by the reference LiDAR (LiDAR0); the green point cloud is from the target LiDAR being automatically calibrated.

   
   
  Take two Horizon units as an example and assume that LiDAR0 is the reference LiDAR and LiDAR1 is the one to be calibrated. Assuming the data and the motion trajectory are time-synchronized, LiDAR0 is used for SLAM to build a submap M. The point clouds of LiDAR1 are then rotated and translated into the vicinity of the corresponding part of submap M, using the synchronized timestamps, the motion trajectory, and a rough initial estimate of the extrinsic parameters. The matching error is minimized under the geometry-consistency constraint by iterating a nearest-neighbor matching algorithm until it converges and the calibration matrix stops changing (the six extrinsic parameters settle to constant values); the final calibration matrix, i.e. the extrinsic parameters, can then be read off.
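  A simplified version of that refinement loop might look like the following. This is only a sketch, not the TFAC-Livox code: the SVD-based point-to-point alignment step is a standard ICP building block, not necessarily the exact method used in the repository, and the brute-force correspondence search is again for clarity only. Each iteration drives down the same nearest-neighbor matching error as in the earlier sketch until the extrinsic estimate stops changing.

```python
import numpy as np

def icp_refine(map_points, lidar_points, T_init, iterations=30, tol=1e-6):
    """Iteratively refine an initial extrinsic guess against the LiDAR0 submap."""
    T = T_init.copy()
    prev_err = np.inf
    for _ in range(iterations):
        # Transform LiDAR1 points with the current extrinsic estimate.
        homo = np.hstack([lidar_points, np.ones((lidar_points.shape[0], 1))])
        src = (T @ homo.T).T[:, :3]

        # Nearest-neighbor correspondences in the map (brute force for clarity).
        nn_idx = np.linalg.norm(
            src[:, None, :] - map_points[None, :, :], axis=2
        ).argmin(axis=1)
        dst = map_points[nn_idx]

        # Closed-form point-to-point alignment (Kabsch / SVD).
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dst.mean(0) - R @ src.mean(0)

        dT = np.eye(4)
        dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T                          # accumulate the correction

        err = np.linalg.norm(dst - src, axis=1).mean()
        if abs(prev_err - err) < tol:       # converged: extrinsics stop changing
            break
        prev_err = err
    return T
```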
   
  
  
Figure 1: Dual-LiDAR coordinate system and coordinate transformation {R, T}

   
   
  Ensure that the area scanned by the reference LiDAR (map M) can also be seen by the other LiDARs during the mobile data acquisition. Choose an environment with rich geometric features, e.g. an underground car park. Avoid nearby moving objects while data are being acquired, and move as slowly as possible, especially when turning, to minimize motion distortion. The LiDARs do not have to share an overlapping FOV; they can be mounted freely as long as initial values for the extrinsic parameters can be provided.
   
  

  
Acquisition environment and route map

  

  03 Conclusion
  We believe that users can complete multi-LiDAR calibration of Livox sensors quickly using the manual or automatic method described above. Livox will continue to provide more sensor calibration methods, help improve the entire ecosystem, and make the cost-effective Livox LiDAR even more user-friendly.
   
  

  

  