Description
I am very excited about your project and interested in trying it with different LiDARs (non-Livox) and potentially depth cameras to generate the necessary point clouds. I am wondering if, similar to https://github.com/hku-mars/FAST-LIVO, you could specify the key parameters to update so that the SUPER project becomes applicable to a wide range of LiDAR and non-LiDAR hardware:
Important parameters (example):
Edit config/xxx.yaml to set the parameters below (a sketch of such a file follows this list):
lid_topic: The topic name of LiDAR data.
imu_topic: The topic name of IMU data.
img_topic: The topic name of camera data.
img_enable: Enable the VIO submodule.
lidar_enable: Enable the LIO submodule.
point_filter_num: The sampling interval for a new scan. A value of 3~4 is recommended for faster odometry, and 1~2 for a denser map.
outlier_threshold: The outlier threshold value of the photometric error (squared) of a single pixel. A value of 50~250 is recommended for darker scenes, and 500~1000 for brighter scenes. The smaller the value, the faster the VIO submodule runs, but the weaker its resistance to degradation.
img_point_cov: The covariance of photometric errors per pixel.
laser_point_cov: The covariance of the point-to-plane residual per point.
filter_size_surf: Downsample the points in a new scan. A value of 0.05~0.15 is recommended for indoor scenes, and 0.3~0.5 for outdoor scenes.
filter_size_map: Downsample the points in the LiDAR global map. A value of 0.15~0.3 is recommended for indoor scenes, and 0.4~0.5 for outdoor scenes.
pcd_save_en: If true, save point clouds to the PCD folder. Save RGB-colored points if img_enable is 1, intensity-colored points if img_enable is 0.
delta_time: The time offset between the camera and LiDAR, which is used to correct timestamp misalignment.
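For concreteness, here is a minimal sketch of what such a config file could look like. The parameter names follow the FAST-LIVO list above, but the flat layout, file name, topic names, and values are only illustrative assumptions on my part (the real config files may group and name things differently):

```yaml
# Illustrative sketch only: parameter names follow the FAST-LIVO list above,
# but the flat layout and the concrete values are assumptions, not copied
# from the SUPER repository.
lid_topic: "/livox/lidar"        # LiDAR point cloud topic
imu_topic: "/livox/imu"          # IMU topic
img_topic: "/left_camera/image"  # camera image topic
img_enable: 1                    # enable the VIO submodule
lidar_enable: 1                  # enable the LIO submodule
point_filter_num: 3              # scan sampling interval (3~4 faster odometry, 1~2 denser map)
outlier_threshold: 100           # per-pixel photometric error (squared) outlier threshold
img_point_cov: 100               # covariance of photometric errors per pixel
laser_point_cov: 0.001           # covariance of the point-to-plane residual per point
filter_size_surf: 0.1            # downsample leaf size for a new scan (indoor-ish)
filter_size_map: 0.2             # downsample leaf size for the global map (indoor-ish)
pcd_save_en: false               # save point clouds to the PCD folder
delta_time: 0.0                  # camera-LiDAR time offset in seconds
```

If SUPER exposes an equivalent set of parameters, documenting them alongside such a template would make it much easier to try the system with other sensors.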
After setting the appropriate topic names and parameters, you can directly run FAST-LIVO on a dataset.
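To make the request concrete, below is the kind of hypothetical override I would hope is sufficient for a non-Livox setup (e.g. a Velodyne VLP-16 plus an RGB camera). The topic names are just common driver defaults on my side, and I do not know whether SUPER would need additional per-sensor settings beyond these:

```yaml
# Hypothetical adaptation for a non-Livox setup (e.g. VLP-16 + RGB camera).
# Topic names are common driver defaults, not values from the SUPER repo;
# any extra per-sensor switches SUPER may require are not shown here.
lid_topic: "/velodyne_points"
imu_topic: "/imu/data"
img_topic: "/camera/image_raw"
point_filter_num: 2    # keep more points, since a VLP-16 scan is sparser than a Livox scan
filter_size_surf: 0.3  # coarser downsampling for outdoor-scale scenes
filter_size_map: 0.4
```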