A NeRF-Based Captioning Framework for Spatially Rich and Context-Aware Image Descriptions

Bibliographic Details
Published in: Journal Europeen des Systemes Automatises, vol. 58, no. 5 (May 2025), pp. 1059-1065
Main Author: Garine, Bindu Madhavi
Other Authors: Parimala, Rajeshwari; Motukuri, Sridevi; Raja Sekhar Reddy Pocha; Pulipati, Srilatha
Published:
International Information and Engineering Technology Association (IIETA)
Online Access: Citation/Abstract
Full Text - PDF
Description
Abstract: Traditional captioning models depend mainly on 2D visual features, which limits their ability to understand and describe spatial relationships, depth, and three-dimensional structure in images. These models struggle to capture object interactions, occlusions, and lighting variations, which are important for generating relevant and spatially aware descriptions. To address these limitations, we introduce the Neural Radiance Fields Captioning (NeRF-Cap) framework, a new NeRF-based multimodal image captioning framework that integrates 3D visual reconstruction with natural language processing (NLP). NeRF's ability to build a continuous volumetric representation of a scene from several 2D views enables the recovery of depth-aware and geometrically accurate features, which improves the descriptive power of the generated captions. Our approach also integrates advanced vision-language models such as Bootstrapping Language-Image Pre-training (BLIP), Contrastive Language-Image Pre-training (CLIP), and Large Language Model Meta AI (LLaMA), which enrich the textual output by incorporating semantic object relationships, depth cues, and lighting effects into the captioning process. By taking advantage of NeRF's high-fidelity 3D representation, NeRF-Cap improves on traditional captioning by producing spatially consistent, photorealistic, and geometrically accurate descriptions. We evaluate our method on synthetic and real-world datasets, demonstrating its handling of complex spatial properties and its effectiveness in capturing visual dynamics. Experimental results indicate that NeRF-Cap outperforms existing captioning models in terms of spatial awareness, contextual accuracy, and natural language fluency, as measured by standard benchmarks such as Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Consensus-based Image Description Evaluation (CIDEr), and a novel Depth-Awareness Score. Our work highlights the potential of 3D-aware multimodal captioning, paving the way for more advanced applications in robotic perception, augmented reality, and assistive vision systems.
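
The abstract describes NeRF-Cap only at a high level, so as a concrete illustration of the depth cue it builds on, the sketch below shows how an expected ray-termination depth can be recovered from a NeRF-style density field using the standard volume rendering weights. This is not the authors' implementation; the function name, the toy density profile, and the NumPy formulation are assumptions made for illustration only.

    # Minimal sketch (assumed, not the paper's code): expected depth of a ray
    # from per-sample NeRF densities, via standard volume rendering weights.
    import numpy as np

    def expected_depth(sigmas: np.ndarray, ts: np.ndarray) -> float:
        """sigmas: volume densities along the ray, shape (N,)
        ts: increasing sample distances from the camera, shape (N,)"""
        # Spacing between consecutive samples; the last spacing is repeated.
        deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
        # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
        alphas = 1.0 - np.exp(-sigmas * deltas)
        # Transmittance T_i = prod_{j<i} (1 - alpha_j): light surviving to sample i.
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
        # Rendering weights w_i = T_i * alpha_i; expected depth is sum_i w_i * t_i.
        weights = trans * alphas
        return float(np.sum(weights * ts))

    # Toy ray: a sharp density spike at t = 2.0 should yield depth close to 2.0.
    ts = np.linspace(0.1, 4.0, 128)
    sigmas = 50.0 * np.exp(-0.5 * ((ts - 2.0) / 0.05) ** 2)
    print(f"expected depth: {expected_depth(sigmas, ts):.2f}")  # ~2.0

In a full NeRF-Cap-style pipeline, per-pixel depth expectations of this kind would presumably be fused with rendered RGB features before the BLIP/CLIP/LLaMA stage, which is how the abstract says depth cues and lighting effects enter the captioning process.
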
ISSN: 1269-6935; 2116-7087
DOI: 10.18280/jesa.580518
Source: ABI/INFORM Global