Livox Open Source Sharing: About LiDAR Distortion Removal
Author: Livox Pioneer | Time: 2021-11-29 12:25:05 | Autonomous driving
I. What is LiDAR self-motion distortion?

LiDAR measures the distance and bearing of surrounding objects by emitting laser beams, which lets a vehicle judge its position relative to obstacles. When enough laser beams are emitted, the individual laser points accumulate into a point cloud that outlines the surrounding 3D environment. This is what we usually call point cloud data.


For most LiDARs, although each laser pulse is emitted and received very quickly, the points that make up a point cloud are still not generated at the same instant. Typically, the data accumulated within 100 ms (corresponding to a typical rate of 10 Hz) is output as one frame of the point cloud. If the absolute position of the LiDAR body, or of the vehicle it is mounted on, changes during these 100 ms, then each point in the frame is expressed in a different coordinate system. Intuitively, the frame appears "deformed" and no longer faithfully represents the detected environment, much like a photo that comes out blurred because the camera shook. This is the self-motion distortion of LiDAR.


II. The essence of self-motion distortion and how to correct it

Let's take a concrete look at what self-motion distortion looks like.

The shape of the self-motion distortion in a LiDAR point cloud depends on the scanning method. For example, a traditional 360-degree mechanical LiDAR produces each frame by sweeping one full revolution (100 ms) around the sensor. When the sensor or the vehicle is stationary, the start and end of the sweep close up nicely (the coordinate origin never changes). When the sensor or the vehicle is moving, self-motion distortion occurs: the data from one revolution becomes warped and the sweep no longer closes (different points have different coordinate origins).

Figure 1. Self-motion distortion of a 360-degree mechanical LiDAR


Let's dig deeper into the essence of this phenomenon.

Simply put, the self-motion distortion of a LiDAR point cloud arises because each point in a frame is expressed in a different coordinate system.

In the figure below, the left panel shows p1~p3, three points scanned by the LiDAR in sequence, which are collinear in the real world. However, because the LiDAR itself moves "violently" within the frame time, it scans the three points from three different actual poses, as shown in the middle panel. As a result, in the final point cloud (right panel) the three point coordinates are actually in different coordinate systems and no longer appear collinear.

Figure 2. The point cloud coordinate system changes

Figure 3 also gives a practical example.

A vehicle equipped with a Livox LiDAR suffered self-motion distortion while making a U-turn: the distant wall and the vehicles are split into layers because of the rapid rotation of the ego vehicle.


Figure 3. Point clouds of vehicles parked on the roadside are split into layers due to the motion of the ego vehicle



So, how do we correct self-motion distortion? Obviously, as long as we drive slowly enough ...

Of course not. We only need to transform the coordinates of all the points in a frame into one common coordinate system, e.g. the LiDAR coordinate system of the first point p1 in Figure 2. This is essentially compensating for the motion of the sensor.

Let p_i^1 denote the coordinates of point p_i in LiDAR coordinate system 1 (the sensor pose at the first point), and let T_ji denote the pose change from coordinate system i to coordinate system j. Within one frame, the pose changes from each point's coordinate system to the first point's are T_12, T_13, T_14, ..., so every point is easily transferred into the coordinate system of the first point:

$$p_i^1 = T_{1i}\, p_i$$


In principle this looks very simple (and in practice it is also very simple). As long as we know T_1i for every point, we are done. So how do we actually obtain it?

In practical applications, we generally first measure the motion of the LiDAR, such as the pose change T between the head and the tail of a frame of point cloud (a 100 ms interval). Then, for each point, using the time difference Δt between that point and the first (or last) point, we obtain that point's T_1i by linear interpolation under a short-time constant-velocity assumption, as the sketch below shows.
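Below is a minimal C++ sketch of this per-point interpolation, using Sophus for the SE(3) algebra. The TimedPoint struct, the Deskew function, and the frame conventions are illustrative assumptions for this post, not the exact code of the tool described in section III.

// De-skew a frame by linear pose interpolation (constant-velocity assumption).
// T_be is the pose change of the LiDAR from frame begin to frame end.
#include <sophus/se3.hpp>
#include <Eigen/Core>
#include <vector>

struct TimedPoint {
  Eigen::Vector3d xyz;  // coordinates in the LiDAR frame at capture time
  double dt;            // time since the first point of the frame, seconds
};

void Deskew(std::vector<TimedPoint>& pts, const Sophus::SE3d& T_be,
            double frame_time /* e.g. 0.1 s at 10 Hz */) {
  // The twist (log of the pose change) accumulated over the whole frame.
  const Eigen::Matrix<double, 6, 1> twist = T_be.log();
  for (auto& p : pts) {
    const double s = p.dt / frame_time;  // interpolation ratio in [0, 1]
    const Sophus::SE3d T_bi = Sophus::SE3d::exp(s * twist);  // begin -> pose at point i
    p.xyz = T_bi * p.xyz;  // re-express the point in the frame-begin coordinates
  }
}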

The pose change T can be obtained from the pose information provided by an inertial navigation system (INS) or by a LiDAR odometry (e.g. LIO-Livox). If an inertial measurement unit (IMU, which provides angular velocity and acceleration) is used to compute the pose change, the initial velocity of the sensor or the vehicle must additionally be supplied.
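To illustrate the rotational part, here is a hedged sketch that integrates gyroscope samples into the rotation accumulated over a frame; the ImuSample struct and the function name are assumptions. Note that this yields rotation only, which is why full pose compensation from an IMU additionally needs the initial velocity (or an INS/odometry pose) for the translation.

// Accumulate a frame's rotation from gyro samples by first-order integration.
#include <sophus/so3.hpp>
#include <Eigen/Core>
#include <vector>

struct ImuSample {
  Eigen::Vector3d gyro;  // angular velocity, rad/s, in the sensor frame
  double dt;             // time to the next sample, seconds
};

Sophus::SO3d IntegrateGyro(const std::vector<ImuSample>& samples) {
  Sophus::SO3d R;  // identity: rotation from frame begin to the current sample
  for (const auto& s : samples) {
    R *= Sophus::SO3d::exp(s.gyro * s.dt);  // compose the small rotation increment
  }
  return R;  // rotation from frame begin to frame end
}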

And how do we get the value of Δt?

Livox LiDARs output a timestamp with every point, which can be read directly from the CustomMsg point cloud data packet when the point cloud is received. With other LiDARs, the per-point timestamps may need to be obtained from the information provided by the respective SDK, or computed manually.
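For reference, a minimal sketch of reading the per-point times from the livox_ros_driver CustomMsg; the field names (timebase, offset_time) follow that driver and may differ in other driver versions.

// Read per-point timestamps from a Livox CustomMsg (ROS).
#include <livox_ros_driver/CustomMsg.h>

void PrintPointTimes(const livox_ros_driver::CustomMsg::ConstPtr& msg) {
  for (const auto& pt : msg->points) {
    // timebase is the frame base time in ns; offset_time is each point's
    // offset from timebase in ns.
    const double t_abs = (msg->timebase + pt.offset_time) * 1e-9;  // absolute time, s
    const double dt = pt.offset_time * 1e-9;  // Δt from the frame start, s
    (void)t_abs; (void)dt;  // use these to interpolate the per-point pose
  }
}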

Transforming the coordinates of all the points into the same coordinate system by the formula above is exactly the distortion removal process.

Figure 4 below compares the point cloud before and after distortion removal.



Figure 4. Point cloud before and after correction



Figure 5. Buildings on both sides of the road before and after distortion removal: the colored point cloud is the original point cloud, and the white point cloud is the corrected one

III. Usage and notes for the self-motion distortion correction tool

The code for the distortion removal process described above has been uploaded to GitHub; interested readers are welcome to check it out via the link below.
https://github.com/Livox-SDK/livox_cloud_undistortion

Code notes:
Dependencies: livox_ros_driver, PCL, ROS
Compilation: run catkin_make in the workspace
Run:
source devel/setup.bash
roslaunch livox_dedistortion_pkg run.launch
Interface notes:
The ImuProcess class is defined in data_process.h. Its member function UndistortPcl is the distortion removal function; the parameter Sophus::SE3d Tbe is the pose change between the head and the tail of the current point cloud frame. If this pose can be provided directly, UndistortPcl can be called to remove the distortion. If only IMU data is available, call the member function Process of ImuProcess instead.
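If the head-to-tail pose comes from an external source such as an INS, it has to be packed into a Sophus::SE3d before calling UndistortPcl. A minimal sketch, with placeholder rotation and translation values:

// Build the Sophus::SE3d pose change over one frame from a rotation and a
// translation. The numbers here are placeholders, not real measurements.
#include <sophus/se3.hpp>
#include <Eigen/Geometry>

Sophus::SE3d MakeFramePose() {
  // Rotation of the LiDAR from frame head to frame tail (0.05 rad about z)...
  const Eigen::Quaterniond q(Eigen::AngleAxisd(0.05, Eigen::Vector3d::UnitZ()));
  // ...and its translation over the same 100 ms interval, in metres.
  const Eigen::Vector3d t(0.8, 0.0, 0.0);
  return Sophus::SE3d(q, t);  // pass this as the Tbe argument of UndistortPcl
}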
Usage:
Input: this tool is developed on ROS, so the input consists of two topics: the point cloud topic /livox/lidar in CustomMsg format, and the IMU topic /livox/imu.
Output: the corrected point cloud on /livox_unidistort.
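A downstream node can consume the corrected cloud as in the sketch below, assuming the output is published as a standard sensor_msgs/PointCloud2; the node and callback names are illustrative.

// Subscribe to the corrected point cloud published by the tool.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void OnUndistorted(const sensor_msgs::PointCloud2::ConstPtr& msg) {
  ROS_INFO("undistorted cloud: %u x %u points", msg->height, msg->width);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "undistort_consumer");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/livox_unidistort", 10, OnUndistorted);
  ros::spin();
  return 0;
}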

Special note: to correct translational distortion, the user must compute, from their own source of translation information (GPS position coordinates, velocity, etc.), the translation change over the corresponding time difference, and supply it as input to UndistortPcl.
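Under the same short-time constant-velocity assumption as above, that translation is simply velocity times the time offset. A tiny hedged sketch (the function name is an assumption):

// Translation accumulated after dt seconds under constant velocity.
#include <Eigen/Core>

// v: LiDAR/vehicle velocity at frame begin, m/s (e.g. from GPS or wheel speed)
// dt: time offset of the point from the frame begin, seconds
Eigen::Vector3d TranslationAt(const Eigen::Vector3d& v, double dt) {
  return v * dt;  // combine with the rotation to form the Tbe for UndistortPcl
}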




