
A Roundup of AAAI 2024 Papers on Embodied AI and Robotics!

3DCV · WeChat Official Account · 2025-02-04 00:00



Source: 具身智能之心



AAAI 2024, the 38th AAAI Conference on Artificial Intelligence, was held from February 20 to 27, 2024 at the Vancouver Convention Centre West Building in Vancouver, Canada. The conference received a record 10,504 valid submissions and, after a rigorous review process, accepted 2,527 papers. The accepted papers cover machine learning, natural language processing, computer vision, data mining, multi-agent systems, knowledge representation and reasoning, human-computer interaction, search and planning, robotics and perception, AI ethics, and intersections with other fields. The papers related to embodied AI are summarized below:

Multimodal Foundation Models

Cai Xu, Jiajun Si, Ziyu Guan, Wei Zhao, Yue Wu, Xiyue Gao: Reliable Conflictive Multi-view Learning. https://arxiv.org/pdf/2402.16897

Yi Xin, Junlong Du, Qiang Wang, Ke Yan, Shouhong Ding: MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning. https://arxiv.org/pdf/2312.08636

Jingsheng Gao, Jiacheng Ruan, Suncheng Xiang, Zefang Yu, Ke Ji, Mingye Xie, Ting Liu, Yuzhuo Fu: LAMM: Label Alignment for Multi-Modal Prompt Learning. https://arxiv.org/pdf/2312.08212

Peng Wu, Xuerong Zhou, Guansong Pang, Lingru Zhou, Qingsen Yan, Peng Wang, Yanning Zhang: VadCLIP: Adapting Vision-Language Models for Weakly Supervised Video Anomaly Detection. https://arxiv.org/pdf/2308.11681

Haoyang He, Jiangning Zhang, Hongxu Chen, Xuhai Chen, Zhishan Li, Xu Chen, Yabiao Wang, Chengjie Wang, Lei Xie: DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection. https://arxiv.org/pdf/2312.06607

Lianyu Hu, Liqing Gao, Zekang Liu, Chi-Man Pun, Wei Feng: COMMA: Co-articulated Multi-Modal Learning. https://arxiv.org/pdf/2401.00268

Vincent Tao Hu, Wei Zhang, Meng Tang, Pascal Mettes, Deli Zhao, Cees Snoek: Latent Space Editing in Transformer-Based Flow Matching.

Wenbo Hu, Yifan Xu, Yi Li, Weiyue Li, Zeyuan Chen, Zhuowen Tu: BLIVA: A Simple Multimodal LLM for Better Handling of Text-Rich Visual Questions.

Xiaoming Hu, Zilei Wang: A Dynamic Learning Method towards Realistic Compositional Zero-Shot Learning.

Yufeng Huang, Jiji Tang, Zhuo Chen, Rongsheng Zhang, Xinfeng Zhang, Weijie Chen, Zeng Zhao, Zhou Zhao, Tangjie Lv, Zhipeng Hu, Wen Zhang: Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-Modal Structured Representations.

Chenchen Jing, Yukun Li, Hao Chen, Chunhua Shen: Retrieval-Augmented Primitive Representations for Compositional Zero-Shot Learning.

Guibiao Liao, Jiankun Li, Xiaoqing Ye: VLM2Scene: Self-Supervised Image-Text-LiDAR Learning with Foundation Models for Autonomous Driving Scene Understanding.

Yuqi Lin, Minghao Chen, Kaipeng Zhang, Hengjia Li, Mingming Li, Zheng Yang, Dongqin Lv, Binbin Lin, Haifeng Liu, Deng Cai: TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP without Training.

Chao Liu, Ting Zhao, Nenggan Zheng: DeepBranchTracer: A Generally-Applicable Approach to Curvilinear Structure Reconstruction Using Multi-Feature Learning.

Fan Ma, Xiaojie Jin, Heng Wang, Jingjia Huang, Linchao Zhu, Yi Yang: Stitching Segments and Sentences towards Generalization in Video-Text Pre-training.

Dejie Yang, Zijing Zhao, Yang Liu: PlanLLM: Video Procedure Planning with Refinable Large Language Models.


3D Scene Reconstruction

Hao Wu, Yuxuan Liang, Wei Xiong, Zhengyang Zhou, Wei Huang, Shilong Wang, Kun Wang: Earthfarsser: Versatile Spatio-Temporal Dynamical Systems Modeling in One Model.

Zechen Li, Weiming Huang, Kai Zhao, Min Yang, Yongshun Gong, Meng Chen: Urban Region Embedding via Multi-View Contrastive Prediction.

Yubin Hu, Sheng Ye, Wang Zhao, Matthieu Lin, Yuze He, Yu-Hui Wen, Ying He, Yong-Jin Liu: O^2-Recon: Completing 3D Reconstruction of Occluded Objects in the Scene with a Pre-trained 2D Diffusion Model.

Chengyou Jia, Minnan Luo, Zhuohang Dang, Guang Dai, Xiaojun Chang, Mengmeng Wang, Jingdong Wang: SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-Form Layout-to-Image Generation.

Shijian Jiang, Qi Ye, Rengan Xie, Yuchi Huo, Xiang Li, Yang Zhou, Jiming Chen: In-Hand 3D Object Reconstruction from a Monocular RGB Video.

GeonU Kim, Kim Youwang, Tae-Hyun Oh: FPRF: Feed-Forward Photorealistic Style Transfer of Large-Scale 3D Neural Radiance Fields.

Ru Li, Jia Liu, Guanghui Liu, Shengping Zhang, Bing Zeng, Shuaicheng Liu: SpectralNeRF: Physically Based Spectral Rendering with Neural Radiance Field.

Shengtao Li, Ge Gao, Yudong Liu, Yu-Shen Liu, Ming Gu: GridFormer: Point-Grid Transformer for Surface Reconstruction.

Xin Lin, Chong Shi, Yibing Zhan, Zuopeng Yang, Yaqi Wu, Dacheng Tao: TD²-Net: Toward Denoising and Debiasing for Video Scene Graph Generation.

Youtian Lin: Ced-NeRF: A Compact and Efficient Method for Dynamic Neural Radiance Fields.

Object Detection

Yuhao Huang, Sanping Zhou, Junjie Zhang, Jinpeng Dong, Nanning Zheng: Voxel or Pillar: Exploring Efficient Point Cloud Representation for 3D Object Detection.

Joonhyun Jeong, Geondo Park, Jayeon Yoo, Hyungsik Jung, Heesu Kim: ProxyDet: Synthesizing Proxy Novel Classes via Classwise Mixup for Open-Vocabulary Object Detection.

Xiaohui Jiang, Shuailin Li, Yingfei Liu, Shihao Wang, Fan Jia, Tiancai Wang, Lijin Han, Xiangyu Zhang: Far3D: Expanding the Horizon for Surround-View 3D Object Detection.

Yang Jiao, Zequn Jie, Shaoxiang Chen, Lechao Cheng, Jingjing Chen, Lin Ma, Yu-Gang Jiang: Instance-Aware Multi-Camera 3D Object Detection with Structural Priors Mining and Self-Boosting Learning.

Xin Jin, Kai Liu, Cong Ma, Ruining Yang, Fei Hui, Wei Wu: SwiftPillars: High-Efficiency Pillar Encoder for Lidar-Based 3D Detection.

Seunggu Kang, WonJun Moon, Euiyeon Kim, Jae-Pil Heo: VLCounter: Text-Aware Visual Representation for Zero-Shot Object Counting.

Yogesh Kumar, Saswat Mallick, Anand Mishra, Sowmya Rasipuram, Anutosh Maitra, Roshni R. Ramnani: QDETRv: Query-Guided DETR for One-Shot Object Localization in Videos.

Jinxiang Lai, Wenlong Wu, Bin-Bin Gao, Jun Liu, Jiawei Zhan, Congchong Nie, Yi Zeng, Chengjie Wang: MatchDet: A Collaborative Framework for Image Matching and Object Detection.

JongMin Lee, Yohann Cabon, Romain Brégier, Sungjoo Yoo, Jérôme Revaud: MFOS: Model-Free & One-Shot Object Pose Estimation.

Xiang Li, Junbo Yin, Wei Li, Chengzhong Xu, Ruigang Yang, Jianbing Shen: DI-V2X: Learning Domain-Invariant Representation for Vehicle-Infrastructure Collaborative 3D Object Detection.

Yaoyuan Liang, Xiao Liang, Yansong Tang, Zhao Yang, Ziran Li, Jingang Wang, Wenbo Ding, Shao-Lun Huang: CoSTA: End-to-End Comprehensive Space-Time Entanglement for Spatio-Temporal Video Grounding.

Jianghang Lin, Yunhang Shen, Bingquan Wang, Shaohui Lin, Ke Li, Liujuan Cao: Weakly Supervised Open-Vocabulary Object Detection.

Jiaming Liu, Yue Wu, Maoguo Gong, Qiguang Miao, Wenping Ma, Cai Xu, Can Qin: M3SOT: Multi-Frame, Multi-Field, Multi-Space 3D Single Object Tracking.

Liu Liu, Anran Huang, Qi Wu, Dan Guo, Xun Yang, Meng Wang: KPA-Tracker: Towards Robust and Real-Time Category-Level Articulated Object 6D Pose Tracking.

Sahal Shaji Mullappilly, Abhishek Singh Gehlot, Rao Muhammad Anwer, Fahad Shahbaz Khan, Hisham Cholakkal: Semi-supervised Open-World Object Detection.

3D Scene Understanding

Bohan Li, Yasheng Sun, Jingxin Dong, Zheng Zhu, Jinming Liu, Xin Jin, Wenjun Zeng: One at a Time: Progressive Multi-Step Volumetric Probability Learning for Reliable 3D Scene Perception.

Hanxuan Li, Bin Fu, Ruiping Wang, Xilin Chen: Point2Real: Bridging the Gap between Point Cloud and Realistic Image for Open-World 3D Recognition.

Xiawei Li, Qingyuan Xu, Jing Zhang, Tianyi Zhang, Qian Yu, Lu Sheng, Dong Xu: Multi-Modality Affinity Inference for Weakly Supervised 3D Semantic Segmentation.

Matthieu Lin, Jenny Sheng, Yubin Hu, Yangguang Li, Lu Qi, Andrew Zhao, Gao Huang, Yong-Jin Liu: Exploring Temporal Feature Correlation for Efficient and Stable Video Semantic Segmentation.

Xingyu Liu, Pengfei Ren, Yuanyuan Gao, Jingyu Wang, Haifeng Sun, Qi Qi, Zirui Zhuang, Jianxin Liao: Keypoint Fusion for RGB-D Based 3D Hand Pose Estimation.

Ziyang Lu, Yunqiang Pei, Guoqing Wang, Peiwei Li, Yang Yang, Yinjie Lei, Heng Tao Shen: ScanERU: Interactive 3D Visual Grounding Based on Embodied Reference Understanding.

Run Luo, Zikai Song, Lintao Ma, Jinlin Wei, Wei Yang, Min Yang: DiffusionTrack: Diffusion Model for Multi-Object Tracking.

Zhipeng Luo, Gongjie Zhang, Changqing Zhou, Zhonghua Wu, Qingyi Tao, Lewei Lu, Shijian Lu: Modeling Continuous Motion for 3D Point Cloud Object Tracking.

Wentao Mo, Yang Liu: Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA.

Bahram Mohammadi, Yicong Hong, Yuankai Qi, Qi Wu, Shirui Pan, Javen Qinfeng Shi: Augmented Commonsense Knowledge for Remote Object Grounding.

Wenzhe Ouyang, Xiaolin Song, Bailan Feng, Zenglin Xu: OctOcc: High-Resolution 3D Occupancy Prediction with Octree.

Zhiyi Pan, Nan Zhang, Wei Gao, Shan Liu, Ge Li: Less Is More: Label Recommendation for Weakly Supervised Point Cloud Semantic Segmentation.

Vision-Language Navigation

Xiulong Liu, Sudipta Paul, Moitreya Chatterjee, Anoop Cherian: CAVEN: An Embodied Conversational Agent for Efficient Audio-Visual Navigation in Noisy Environments.

Zhixuan Shen, Haonan Luo, Kexun Chen, Fengmao Lv, Tianrui Li: Enhancing Multi-Robot Semantic Navigation Through Multimodal Chain-of-Thought Score Collaboration.

Reinforcement Learning

Jialu Zhang, Xiaoying Yang, Wentao He, Jianfeng Ren, Qian Zhang, Yitian Zhao, Ruibin Bai, Xiangjian He, Jiang Liu: Scale Optimization Using Evolutionary Reinforcement Learning for Object Detection on Drone Imagery.

This article is shared for academic purposes only. If there is any infringement, please contact us to have it removed.
