The Roles of Contextual Semantic Relevance Metrics in Human Visual Processing

Bibliographic Details
Published in: arXiv.org (Oct 13, 2024), p. n/a
Main Author: Sun, Kun
Other Authors: Wang, Rong
Published:
Cornell University Library, arXiv.org
Subjects: Visual fields; Human motion; Semantics; Vision; Deep learning; Visual perception; Information processing (biology); Statistical models; Cognition
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3116750675
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3116750675 
045 0 |b d20241013 
100 1 |a Sun, Kun 
245 1 |a The Roles of Contextual Semantic Relevance Metrics in Human Visual Processing 
260 |b Cornell University Library, arXiv.org  |c Oct 13, 2024 
513 |a Working Paper 
520 3 |a Semantic relevance metrics can capture both the inherent semantics of individual objects and their relationships to other elements within a visual scene. Numerous previous studies have demonstrated that these metrics can influence human visual processing. However, these studies often did not fully account for contextual information or employ recent deep learning models for more accurate computation. This study investigates human visual perception and processing by introducing metrics of contextual semantic relevance. We evaluate semantic relationships between target objects and their surroundings from both vision-based and language-based perspectives. Using a large eye-movement dataset from visual comprehension, we employ state-of-the-art deep learning techniques to compute these metrics and analyze their impact on fixation measures in human visual processing through advanced statistical models. These metrics could also simulate top-down and bottom-up processing in visual perception. This study further integrates vision-based and language-based metrics into a novel combined metric, addressing a critical gap in previous research that often treated visual and semantic similarities separately. Results indicate that all metrics could precisely predict fixation measures in visual perception and processing, but with distinct roles in prediction. The combined metric outperforms the other metrics, supporting theories that emphasize the interaction between semantic and visual information in shaping visual perception/processing. This finding aligns with growing recognition of the importance of multi-modal information processing in human cognition. These insights enhance our understanding of the cognitive mechanisms underlying visual processing and have implications for developing more accurate computational models in fields such as cognitive science and human-computer interaction. 
653 |a Visual fields 
653 |a Human motion 
653 |a Semantics 
653 |a Vision 
653 |a Deep learning 
653 |a Visual perception 
653 |a Information processing (biology) 
653 |a Statistical models 
653 |a Cognition 
700 1 |a Wang, Rong 
773 0 |t arXiv.org  |g (Oct 13, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3116750675/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2410.09921
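
Note: the abstract above only summarizes the approach; the paper's actual models, formulas, and weighting schemes are not given in this record. As a purely illustrative sketch, a "contextual semantic relevance" score of the kind described could be computed as a (optionally weighted) mean cosine similarity between a target object's embedding and the embeddings of the surrounding objects, with vision-based and language-based scores merged by a simple linear combination. The stand-in embeddings, the weighting, and the `alpha` mixing parameter below are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contextual_relevance(target_emb, context_embs, weights=None):
    """Mean cosine similarity between a target object's embedding and the
    embeddings of the other objects in the scene, optionally weighted
    (e.g. by inverse spatial distance). The embeddings could come from an
    image encoder (vision-based) or a word-embedding model applied to the
    object labels (language-based) -- both are assumptions here."""
    sims = np.array([cosine(target_emb, c) for c in context_embs])
    if weights is None:
        weights = np.ones_like(sims)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * sims) / np.sum(weights))

def combined_relevance(vision_score, language_score, alpha=0.5):
    """Toy linear combination of vision- and language-based relevance;
    the mixing weight `alpha` is an assumption, not the paper's formula."""
    return alpha * vision_score + (1 - alpha) * language_score

# Example with random stand-in embeddings (no real encoder is loaded here).
rng = np.random.default_rng(0)
target = rng.normal(size=512)
context = [rng.normal(size=512) for _ in range(5)]
vision_based = contextual_relevance(target, context)
language_based = contextual_relevance(target, context)  # same toy data for brevity
print(combined_relevance(vision_based, language_based))
```

In a real pipeline, the resulting per-object scores would serve as predictors of fixation measures (e.g. first-fixation duration or total fixation time) in a regression or mixed-effects model, which is the kind of statistical analysis the abstract refers to.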