Understanding Emotion and Gaze During Visual Behavior
Saved in:
| Published in: | PQDT - Global (2025) |
|---|---|
| Main Author: | Fang, Yini |
| Published: | ProQuest Dissertations & Theses |
| Subjects: | Eye movements; Affect (Psychology); Behavior; Happiness; Deep learning; Computer vision; Self report; Real time; Decision making; Taxonomy; Role models; Human-computer interaction; Emotions; Cognition & reasoning; Semantics; Object linking & embedding; Electrical engineering; Computer engineering |
| Online Access: | Citation/Abstract; Full Text - PDF; Full text outside of ProQuest |
MARC
| Tag | Ind1 | Ind2 | Content |
|---|---|---|---|
| LEADER | | | 00000nab a2200000uu 4500 |
| 001 | | | 3273632607 |
| 003 | | | UK-CbPIL |
| 020 | | | $a 9798263313029 |
| 035 | | | $a 3273632607 |
| 045 | 2 | | $b d20250101 $b d20251231 |
| 084 | | | $a 189128 $2 nlm |
| 100 | 1 | | $a Fang, Yini |
| 245 | 1 | | $a Understanding Emotion and Gaze During Visual Behavior |
| 260 | | | $b ProQuest Dissertations & Theses $c 2025 |
| 513 | | | $a Dissertation/Thesis |
| 520 | 3 | | $a Understanding human emotion and attention during visual behavior offers deep insights into internal cognitive states. Grounded in the action-perception loop, we study how humans process, interpret, and act upon visual information, and how these responses reflect underlying affective and cognitive mechanisms. This thesis focuses on two key challenges: detecting and interpreting emotion in long, naturalistic videos, and modeling gaze behavior in goal-directed visual tasks. 1. Emotion Understanding. Emotion analysis in video presents several challenges, including subtle and transient expressions, overlapping affective signals, and the difficulty of obtaining high-quality annotations. Moreover, spotting and recognizing expressions are often handled in separate stages, which can introduce inefficiencies and hinder performance. To address these issues, we developed a lightweight spotting framework that captures fine-grained motion using phase-based features, enabling robust and efficient detection of micro-expressions. We further proposed a unified end-to-end model that jointly performs expression spotting and recognition, improving accuracy and reducing the need for handcrafted preprocessing. Additionally, we introduced a transformer-based regression approach that models temporal dynamics to estimate emotional intensity directly from raw video frames. 2. Gaze Behavior Modeling. Traditional gaze modeling has largely focused on low-level, pixel-based fixations, which often overlook semantic object structure and task-driven intentions. This limits the interpretability and applicability of such models in real-world settings. To overcome this, we designed an object-level scanpath prediction framework that models gaze as a sequence of attentional shifts over meaningful objects. By incorporating semantic object information, spatial priors, and target representations, the framework more accurately reflects human behavior in structured search tasks. These contributions deepen our understanding and models of facial expressions and gaze during behavior, offering efficient and interpretable models tailored to naturalistic settings. They lay the groundwork for cognitively-informed behavior modeling and open new directions for incorporating psychological constraints, explainable mechanisms, and adaptive human-in-the-loop learning. |
| 653 | | | $a Eye movements |
| 653 | | | $a Affect (Psychology) |
| 653 | | | $a Behavior |
| 653 | | | $a Happiness |
| 653 | | | $a Deep learning |
| 653 | | | $a Computer vision |
| 653 | | | $a Self report |
| 653 | | | $a Real time |
| 653 | | | $a Decision making |
| 653 | | | $a Taxonomy |
| 653 | | | $a Role models |
| 653 | | | $a Human-computer interaction |
| 653 | | | $a Emotions |
| 653 | | | $a Cognition & reasoning |
| 653 | | | $a Semantics |
| 653 | | | $a Object linking & embedding |
| 653 | | | $a Electrical engineering |
| 653 | | | $a Computer engineering |
| 773 | 0 | | $t PQDT - Global $g (2025) |
| 786 | 0 | | $d ProQuest $t ProQuest Dissertations & Theses Global |
| 856 | 4 | 1 | $3 Citation/Abstract $u https://www.proquest.com/docview/3273632607/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch |
| 856 | 4 | 0 | $3 Full Text - PDF $u https://www.proquest.com/docview/3273632607/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch |
| 856 | 4 | 0 | $3 Full text outside of ProQuest $u https://doi.org/10.14711/thesis-hdl152561 |