Quantum-Train-Based Distributed Multi-Agent Reinforcement Learning

Bibliographic Details
Published in: arXiv.org (Dec 12, 2024), p. n/a
Main author: Chen, Kuan-Cheng
Other authors: Chen, Samuel Yen-Chi; Liu, Chen-Yu; Leung, Kin K.
Publication: Cornell University Library, arXiv.org
Subjects: Parallel processing; Quantum computing; Quantum entanglement; Convergence; Multiagent systems; Neural networks; Parameters; Task complexity; Distributed processing
Online access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3144199741
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3144199741 
045 0 |b d20241212 
100 1 |a Chen, Kuan-Cheng 
245 1 |a Quantum-Train-Based Distributed Multi-Agent Reinforcement Learning 
260 |b Cornell University Library, arXiv.org  |c Dec 12, 2024 
513 |a Working Paper 
520 3 |a In this paper, we introduce Quantum-Train-Based Distributed Multi-Agent Reinforcement Learning (Dist-QTRL), a novel approach to addressing the scalability challenges of traditional Reinforcement Learning (RL) by integrating quantum computing principles. Quantum-Train Reinforcement Learning (QTRL) leverages parameterized quantum circuits to efficiently generate neural network parameters, achieving a \(poly(\log(N))\) reduction in the dimensionality of trainable parameters while harnessing quantum entanglement for superior data representation. The framework is designed for distributed multi-agent environments, where multiple agents, modeled as Quantum Processing Units (QPUs), operate in parallel, enabling faster convergence and enhanced scalability. Additionally, the Dist-QTRL framework can be extended to high-performance computing (HPC) environments by utilizing distributed quantum training for parameter reduction in classical neural networks, followed by inference using classical CPUs or GPUs. This hybrid quantum-HPC approach allows for further optimization in real-world applications. In this paper, we provide a mathematical formulation of the Dist-QTRL framework and explore its convergence properties, supported by empirical results demonstrating performance improvements over centralized QTRL models. The results highlight the potential of quantum-enhanced RL in tackling complex, high-dimensional tasks, particularly in distributed computing settings, where our framework achieves significant speedups through parallelization without compromising model accuracy. This work paves the way for scalable, quantum-enhanced RL systems in practical applications, leveraging both quantum and classical computational resources. 
653 |a Parallel processing 
653 |a Quantum computing 
653 |a Quantum entanglement 
653 |a Convergence 
653 |a Multiagent systems 
653 |a Neural networks 
653 |a Parameters 
653 |a Task complexity 
653 |a Distributed processing 
700 1 |a Chen, Samuel Yen-Chi 
700 1 |a Liu, Chen-Yu 
700 1 |a Leung, Kin K 
773 0 |t arXiv.org  |g (Dec 12, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3144199741/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.08845
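
Note on the technique described in the abstract (field 520): the Quantum-Train idea is that a parameterized quantum circuit on roughly log(N) qubits outputs the N weights of a classical network, so only poly(log(N)) circuit angles are trained. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the ansatz, the zero-centring weight mapping, and all names (n_qubits, n_layers, generate_weights, ...) are invented for this example, and a plain NumPy state-vector simulation stands in for a quantum SDK or QPU.

import numpy as np

def ry(theta):
    # Single-qubit RY rotation; real-valued, so the state stays real.
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n_qubits):
    # Apply a one-qubit gate to the given qubit of an n-qubit state vector.
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

def apply_cnot(state, control, target, n_qubits):
    # Permute basis amplitudes: flip the target bit wherever the control bit is 1.
    new_state = np.zeros_like(state)
    for basis in range(2 ** n_qubits):
        bits = [(basis >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        if bits[control]:
            bits[target] ^= 1
        new_state[int("".join(map(str, bits)), 2)] += state[basis]
    return new_state

def circuit_probabilities(angles, n_qubits, n_layers):
    # Hardware-efficient ansatz: RY layers interleaved with a CNOT ring,
    # starting from |0...0>; returns the 2**n_qubits measurement probabilities.
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0
    angles = angles.reshape(n_layers, n_qubits)
    for layer in range(n_layers):
        for q in range(n_qubits):
            state = apply_single(state, ry(angles[layer, q]), q, n_qubits)
        for q in range(n_qubits):
            state = apply_cnot(state, q, (q + 1) % n_qubits, n_qubits)
    return state ** 2  # amplitudes are real here, so probabilities are squares

def generate_weights(angles, n_qubits, n_layers):
    # Map the 2**n_qubits probabilities to N zero-centred classical weights.
    # (A simple stand-in for whatever mapping the paper actually uses.)
    probs = circuit_probabilities(angles, n_qubits, n_layers)
    return probs - probs.mean()

if __name__ == "__main__":
    n_qubits, n_layers = 4, 2                              # N = 2**4 = 16 weights
    rng = np.random.default_rng(0)
    angles = rng.uniform(0.0, np.pi, n_qubits * n_layers)  # only 8 trainable angles
    weights = generate_weights(angles, n_qubits, n_layers)
    print(weights.shape)                                   # (16,)

In this toy setting, 8 trainable circuit angles generate 16 classical weights; as the qubit count grows, the trainable angles scale polynomially in the qubit count while the generated weights scale exponentially, which is the poly(log(N)) parameter reduction the abstract claims. In the distributed setting described in the abstract, each agent, modeled as a QPU, would run such a circuit in parallel.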