Source: 3D视觉工坊
Author: 殷杰 (Yin Jie), a graduate of Shanghai Jiao Tong University who has published multiple papers at top robotics conferences and journals such as ICRA, IROS, and RAL.
For details, see the original repository: https://github.com/sjtuyinjie/awesome-LiDAR-Visual-SLAM (still being updated). Compiling this list took considerable effort, so stars and forks are appreciated.
LiDAR-visual fusion SLAM (simultaneous localization and mapping) combines the strengths of LiDAR and visual sensors to achieve high-accuracy localization and mapping in complex environments. Compared with LiDAR-only or visual-only SLAM systems, LiDAR-visual fusion SLAM is more robust and accurate, and performs particularly well under changing illumination, in dynamic environments, and in structurally complex scenes. In recent years, with continuing advances in sensor technology and algorithms, LiDAR-visual fusion SLAM has seen rapid development and wide adoption in autonomous driving, UAV navigation, robotics, and related fields.
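To make the robustness claim above concrete, here is a minimal, hedged Python sketch, not taken from any system in this list; all numbers and function names are hypothetical. It fuses two independent Gaussian estimates of the same state by information (inverse-covariance) weighting, which is the basic statistical reason a combined LiDAR-visual estimator can outperform either sensor alone: each sensor contributes most along the directions where its uncertainty is smallest.

```python
# Toy illustration of loosely-coupled sensor fusion (hypothetical values),
# not the method of any specific paper listed below.
import numpy as np


def fuse_gaussian(x_lidar, P_lidar, x_cam, P_cam):
    """Fuse two independent Gaussian estimates of the same state vector.

    x_*: mean estimates, P_*: covariance matrices.
    Returns the information-weighted fused mean and covariance.
    """
    I_lidar = np.linalg.inv(P_lidar)  # information (inverse covariance)
    I_cam = np.linalg.inv(P_cam)
    P_fused = np.linalg.inv(I_lidar + I_cam)
    x_fused = P_fused @ (I_lidar @ x_lidar + I_cam @ x_cam)
    return x_fused, P_fused


if __name__ == "__main__":
    # Hypothetical 2D position estimates: LiDAR is accurate along one axis,
    # the camera along the other; the fused estimate is better than either.
    x_l, P_l = np.array([1.02, 2.10]), np.diag([0.01, 0.25])
    x_c, P_c = np.array([0.90, 2.01]), np.diag([0.20, 0.02])
    x_f, P_f = fuse_gaussian(x_l, P_l, x_c, P_c)
    print("fused estimate:", x_f)
    print("fused covariance diagonal:", np.diag(P_f))
```

Running the script shows that the fused covariance is smaller than either input covariance along both axes, which is the effect the tightly-coupled systems below exploit in a much more sophisticated way.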
This is a curated list of LiDAR-visual fusion SLAM resources that I update from time to time. If this project helps your research, please star and fork it, thank you! If your LiDAR-visual SLAM work has been accepted by a top conference or journal, feel free to open an issue to remind me to add it!
Papers
2024
- [RAL2024] LIVER: A Tightly Coupled LiDAR-Inertial-Visual State Estimator With High Robustness for Underground Environments
  - paper: https://ieeexplore.ieee.org/abstract/document/10404014
  - code: https://github.com/ZikangYuan/sr_livo
- [RAL2024] A LiDAR-inertial-visual odometry and mapping system based on the sweep reconstruction method
  - paper: https://xplorestaging.ieee.org/document/10501952
  - code: https://github.com/ZikangYuan/sr_livo
- [RAL2024] LVIO-Fusion: LiDAR-Visual-Inertial Odometry and Mapping in Degenerate Environments
  - paper: https://ieeexplore.ieee.org/abstract/document/10452777
- [TRO2024 under review] FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
  - paper: https://ieeexplore.ieee.org/document/9739244
  - code: https://github.com/hku-mars/FAST-LIVO2
- [TPAMI2024 under review] R3LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator
  - paper: https://arxiv.org/abs/2209.03666
  - code: https://github.com/hku-mars/r3live
- [TIM2024] Dynam-LVIO: A Dynamic-Object-Aware LiDAR Visual Inertial Odometry in Dynamic Urban Environments
  - paper: https://ieeexplore.ieee.org/document/10511062
- [Remote Sensing 2024] LVI-Fusion: A Robust Lidar-Visual-Inertial SLAM Scheme
  - paper: https://www.mdpi.com/2072-4292/16/9/1524
2023
- [TPAMI2023] SDV-LOAM: Semi-Direct Visual–LiDAR Odometry and Mapping
  - paper: https://ieeexplore.ieee.org/abstract/document/10086694
  - code: https://github.com/ZikangYuan/SDV-LOAM
- [RAL2023] Coco-LIC: Continuous-Time Tightly-Coupled LiDAR-Inertial-Camera Odometry using Non-Uniform B-spline
  - paper: https://arxiv.org/pdf/2309.09808
  - code: https://github.com/APRIL-ZJU/Coco-LIC
2022
- [IROS2022] Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry
  - paper: https://ieeexplore.ieee.org/document/9739244
  - code: https://github.com/hku-mars/FAST-LIVO
- [ICRA2022] R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package
  - paper: https://ieeexplore.ieee.org/document/9811935
  - code: https://github.com/hku-mars/r3live
- [RAL2022] MVIL-Fusion: Monocular visual-inertial-lidar simultaneous localization and mapping in challenging environments
  - paper: https://ieeexplore.ieee.org/abstract/document/9968060
2021
- [RAL2021] R2LIVE: A Robust, Real-time, LiDAR-Inertial-Visual tightly-coupled state Estimator and mapping
  - paper: https://github.com/hku-mars/r2live/blob/master/paper/r2live_ral_final.pdf
  - code: https://github.com/hku-mars/r2live
- [IROS2021] Lvio-Fusion: A Self-adaptive Multi-sensor Fusion SLAM Framework Using Actor-critic Method
  - paper: https://arxiv.org/abs/2106.06783
  - code: https://github.com/jypjypjypjyp/lvio_fusion
- [ICRA2021] CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System
  - paper: https://ieeexplore.ieee.org/abstract/document/9561149
  - code: https://github.com/ISEE-Technology/CamVox
- [ICRA2021] LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
  - paper: https://github.com/TixiaoShan/LVI-SAM/blob/master/doc/paper.pdf
  - code: https://github.com/TixiaoShan/LVI-SAM
- [TIV2021] Efficient and Accurate Tightly-Coupled Visual-Lidar SLAM
  - paper: https://ieeexplore.ieee.org/abstract/document/9632274
- [ROBIO2021] LVIO-SAM: A Multi-sensor Fusion Odometry via Smoothing and Mapping
  - paper: https://ieeexplore.ieee.org/document/9739244
  - code: https://github.com/TurtleZhong/LVIO-SAM
- [Remote Sensing 2021] DV-LOAM: Direct Visual LiDAR Odometry and Mapping
  - paper: https://www.mdpi.com/2072-4292/13/16/3340
  - code: https://github.com/kinggreat24/dv-loam
2020
- [IROS2020] LIC-Fusion 2.0: LiDAR-Inertial-Camera Odometry with Sliding-Window Plane-Feature Tracking
  - paper: https://arxiv.org/pdf/2008.07196
- [ICRA2020] Lidar-Monocular Visual Odometry using Point and Line Features
  - paper: https://ieeexplore.ieee.org/abstract/document/9196613
- [Autonomous Robots 2020] DVL-SLAM: sparse depth enhanced direct visual-LiDAR SLAM
  - paper: https://link.springer.com/article/10.1007/s10514-019-09881-0
2019
- [IROS2019] LIC-Fusion: LiDAR-Inertial-Camera Odometry
  - paper: https://ieeexplore.ieee.org/xpl/conhome/8957008/proceeding
- [IROS2019] ViLiVO: Virtual LiDAR-Visual Odometry for an Autonomous Vehicle with a Multi-Camera System
  - paper: https://ieeexplore.ieee.org/abstract/document/8968484
2018
- [IROS2018] LIMO: Lidar-Monocular Visual Odometry
  - paper: https://ieeexplore.ieee.org/abstract/document/8594394
  - code: https://github.com/johannes-graeter/limo
Related awesome lists
- awesome-SLAM: https://github.com/SilenceOverflow/Awesome-SLAM
- awesome-SLAM-datasets: https://github.com/youngguncho/awesome-slam-datasets
- Awesome-Implicit-NeRF-Robotics: https://github.com/zubair-irshad/Awesome-Implicit-NeRF-Robotics
- awesome-humanoid-learning: https://github.com/jonyzhang2023/awesome-humanoid-learning
- awesome-isaac-gym: https://github.com/wangcongrobot/awesome-isaac-gym
- Awesome_Quadrupedal_Robots: https://github.com/curieuxjy/Awesome_Quadrupedal_Robots
- Awesome Robot Descriptions: https://github.com/robot-descriptions/awesome-robot-descriptions
- awesome-legged-locomotion-learning: https://github.com/gaiyi7788/awesome-legged-locomotion-learning
This article is shared for academic purposes only. If any infringement is involved, please contact us to have it removed.