Large Language Models and 3D vision for intelligent robotic perception and autonomy

Vinit Mehta, Charu Sharma, Karthick Thiyagarajan

Research output: Contribution to journal › Article › peer-review

Abstract

With the rapid advancement of artificial intelligence and robotics, the integration of Large Language Models (LLMs) with 3D vision is emerging as a transformative approach to enhancing robotic sensing technologies. This convergence enables machines to perceive, reason, and interact with complex environments through natural language and spatial understanding, bridging the gap between linguistic intelligence and spatial perception. This review provides a comprehensive analysis of state-of-the-art methodologies, applications, and challenges at the intersection of LLMs and 3D vision, with a focus on next-generation robotic sensing technologies. We first introduce the foundational principles of LLMs and 3D data representations, followed by an in-depth examination of 3D sensing technologies critical for robotics. The review then explores key advancements in scene understanding, text-to-3D generation, object grounding, and embodied agents, highlighting cutting-edge techniques such as zero-shot 3D segmentation, dynamic scene synthesis, and language-guided manipulation. Furthermore, we discuss multimodal LLMs that integrate 3D data with touch, auditory, and thermal inputs, enhancing environmental comprehension and robotic decision-making. To support future research, we catalog benchmark datasets and evaluation metrics tailored for 3D-language and vision tasks. Finally, we identify key challenges and future research directions, including adaptive model architectures, enhanced cross-modal alignment, and real-time processing capabilities, which pave the way for more intelligent, context-aware, and autonomous robotic sensing systems.

Original language: English
Article number: 6394
Number of pages: 51
Journal: Sensors
Volume: 25
Issue number: 20
Publication status: Published - Oct 2025

Keywords

  • 3D vision
  • embodied agents
  • human-robot interaction
  • large language models
  • robot sensing
  • scene understanding
  • sensor applications
  • visual sensing
