A heterogeneous graph neural network assisted multi-agent reinforcement learning for parallel service function chain deployment

Saved in:
Bibliographic details
Published in: Journal of King Saud University. Computer and Information Sciences vol. 37, no. 8 (Oct 2025), p. 236
Main author: Ai, Yintan
Other authors: Li, Hua, Ruan, Hongwei, Liu, Hanlin
Published:
Springer Nature B.V.
Subjects:
Online access: Citation/Abstract
Full Text
Full Text - PDF
Description
Abstract: The deployment of parallel Service Function Chains (SFCs) in Network Function Virtualization (NFV) environments presents significant challenges in jointly optimizing Virtual Network Function (VNF) parallelization and placement decisions. Traditional approaches typically decouple these decisions, leading to suboptimal performance and inefficient resource utilization. This paper proposes HGNN-PSFC, a novel heterogeneous graph neural network-assisted multi-agent deep reinforcement learning framework that jointly optimizes VNF parallelization and placement for parallel SFC deployment. Our approach employs two types of cooperative agents: a Parallelization Agent that determines optimal VNF parallelization structures, and multiple Placement Agents that make VNF placement decisions. The framework utilizes a heterogeneous graph representation to capture complex relationships between VNFs, substrate network topology, and current VNF placement states. Through Multi-Agent Proximal Policy Optimization (MAPPO) training within a Centralized Training with Decentralized Execution (CTDE) paradigm, our method achieves effective coordination between parallelization and placement decisions. Extensive experimental results demonstrate that HGNN-PSFC achieves near-optimal performance with approximately 92% of the optimal algorithm's effectiveness while maintaining polynomial computational complexity.
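The heterogeneous graph state mentioned in the abstract (VNF nodes, substrate nodes, and the edges tying SFC structure, network topology, and current placements together) can be sketched roughly as follows. This is a minimal illustrative sketch in Python; all class names, feature fields, and the toy topology are assumptions made for exposition and do not reproduce the paper's actual representation.

```python
# Minimal sketch of a heterogeneous SFC-deployment graph: two node types
# (VNF, substrate) and three edge types (precedence, substrate link, placement).
# All names, features, and the toy topology below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class VNFNode:
    vnf_id: int
    cpu_demand: float               # requested CPU (illustrative feature)
    placed_on: int | None = None    # substrate node id once placed, else None


@dataclass
class SubstrateNode:
    node_id: int
    cpu_capacity: float             # remaining CPU capacity


@dataclass
class HeteroSFCGraph:
    """Heterogeneous graph with typed nodes and typed edge sets."""
    vnfs: dict[int, VNFNode] = field(default_factory=dict)
    substrate: dict[int, SubstrateNode] = field(default_factory=dict)
    # ('vnf', 'precedes', 'vnf'): ordering / parallelization structure of the SFC
    precedence_edges: list[tuple[int, int]] = field(default_factory=list)
    # ('substrate', 'links', 'substrate'): physical network topology
    substrate_edges: list[tuple[int, int]] = field(default_factory=list)
    # ('vnf', 'placed_on', 'substrate'): current placement state
    placement_edges: list[tuple[int, int]] = field(default_factory=list)

    def place(self, vnf_id: int, node_id: int) -> None:
        """Record a placement decision and update the placement edge set."""
        self.vnfs[vnf_id].placed_on = node_id
        self.substrate[node_id].cpu_capacity -= self.vnfs[vnf_id].cpu_demand
        self.placement_edges.append((vnf_id, node_id))


if __name__ == "__main__":
    g = HeteroSFCGraph(
        vnfs={0: VNFNode(0, 2.0), 1: VNFNode(1, 1.0), 2: VNFNode(2, 1.5)},
        substrate={0: SubstrateNode(0, 8.0), 1: SubstrateNode(1, 4.0)},
        precedence_edges=[(0, 1), (0, 2)],  # VNFs 1 and 2 may run in parallel after 0
        substrate_edges=[(0, 1)],
    )
    g.place(0, 0)
    g.place(1, 1)
    print(g.placement_edges)  # [(0, 0), (1, 1)]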
ISSN:1319-1578
DOI:10.1007/s44443-025-00258-1
Source: Computer Science Database