FaceXFormer: A Unified Transformer for Facial Analysis

Bibliographic details
Published in: arXiv.org (Dec 19, 2024), p. n/a
Main author: Narayan, Kartik
Other authors: Vibashan, V S; Chellappa, Rama; Patel, Vishal M
Publisher: Cornell University Library, arXiv.org
Subjects: Datasets; Encoders-Decoders; Pose estimation; Transformers; Query processing
Electronic access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2969147332
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2969147332 
045 0 |b d20241219 
100 1 |a Narayan, Kartik 
245 1 |a FaceXFormer: A Unified Transformer for Facial Analysis 
260 |b Cornell University Library, arXiv.org  |c Dec 19, 2024 
513 |a Working Paper 
520 3 |a In this work, we introduce FaceXFormer, an end-to-end unified transformer model capable of performing nine facial analysis tasks including face parsing, landmark detection, head pose estimation, attribute prediction, and estimation of age, gender, race, expression, and face visibility within a single framework. Conventional methods in face analysis have often relied on task-specific designs and pre-processing techniques, which limit their scalability and integration into a unified architecture. Unlike these conventional methods, FaceXFormer leverages a transformer-based encoder-decoder architecture where each task is treated as a learnable token, enabling the seamless integration and simultaneous processing of multiple tasks within a single framework. Moreover, we propose a novel parameter-efficient decoder, FaceX, which jointly processes face and task tokens, thereby learning generalized and robust face representations across different tasks. We jointly trained FaceXFormer on nine face perception datasets and conducted experiments against specialized and multi-task models in both intra-dataset and cross-dataset evaluations across multiple benchmarks, showcasing state-of-the-art or competitive performance. Further, we performed a comprehensive analysis of different backbones for unified face task processing and evaluated our model "in-the-wild", demonstrating its robustness and generalizability. To the best of our knowledge, this is the first work to propose a single model capable of handling nine facial analysis tasks while maintaining real-time performance at 33.21 FPS. 
653 |a Datasets 
653 |a Encoders-Decoders 
653 |a Pose estimation 
653 |a Transformers 
653 |a Query processing 
700 1 |a Vibashan, V S 
700 1 |a Chellappa, Rama 
700 1 |a Patel, Vishal M 
773 0 |t arXiv.org  |g (Dec 19, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2969147332/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2403.12960
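
Note on the architecture described in the 520 abstract above: each facial-analysis task is represented as a learnable token, and a decoder (FaceX) processes these task tokens jointly with the face features. The snippet below is a minimal, hypothetical PyTorch sketch of that task-token mechanism only; the module name, layer sizes, decoder depth, and backbone features are placeholder assumptions, not the authors' published implementation.

import torch
import torch.nn as nn


class TaskTokenDecoder(nn.Module):
    """Hypothetical sketch: learnable task tokens attend to shared face features."""

    def __init__(self, num_tasks: int = 9, dim: int = 256, depth: int = 2, heads: int = 8):
        super().__init__()
        # One learnable query token per facial-analysis task (parsing, landmarks, pose, ...).
        self.task_tokens = nn.Parameter(torch.randn(num_tasks, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)

    def forward(self, face_feats: torch.Tensor) -> torch.Tensor:
        # face_feats: (batch, num_patches, dim) image features from any backbone (assumed shape).
        batch = face_feats.size(0)
        queries = self.task_tokens.unsqueeze(0).expand(batch, -1, -1)
        # Each task token gathers the information it needs from the same face representation.
        return self.decoder(tgt=queries, memory=face_feats)  # (batch, num_tasks, dim)


if __name__ == "__main__":
    feats = torch.randn(2, 196, 256)   # e.g. a 14x14 patch grid from a ViT-style backbone
    task_out = TaskTokenDecoder()(feats)
    print(task_out.shape)              # torch.Size([2, 9, 256]); one output token per task

In such a design, each of the nine output tokens would feed its own lightweight prediction head (e.g. a classifier for expression, a regressor for landmarks), which is how a single forward pass can serve all tasks simultaneously, consistent with the unified framework described in the abstract.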