Control strategy of robotic manipulator based on multi-task reinforcement learning
| Published in: | Complex & Intelligent Systems vol. 11, no. 3 (Mar 2025), p. 175 |
|---|---|
| Published: | Springer Nature B.V. |
| Online access: | Citation/Abstract Full Text - PDF |
MARC
| LEADER | 00000nab a2200000uu 4500 | ||
|---|---|---|---|
| 001 | 3168509074 | ||
| 003 | UK-CbPIL | ||
| 022 | |a 2199-4536 | ||
| 022 | |a 2198-6053 | ||
| 024 | 7 | |a 10.1007/s40747-025-01816-w |2 doi | |
| 035 | |a 3168509074 | ||
| 045 | 2 | |b d20250301 |b d20250331 | |
| 245 | 1 | |a Control strategy of robotic manipulator based on multi-task reinforcement learning | |
| 260 | |b Springer Nature B.V. |c Mar 2025 | ||
| 513 | |a Journal Article | ||
| 520 | 3 | |a Multi-task learning is important in reinforcement learning, where training simultaneously across different tasks allows shared information to be leveraged among them, typically leading to better performance than single-task learning. While joint training of multiple tasks permits parameter sharing between tasks, optimization becomes the crucial challenge: identifying which parameters should be reused and managing the gradient conflicts that arise between tasks. To tackle this issue, instead of uniform parameter sharing, we propose a decision reconstruction network model, which we integrate into the Soft Actor-Critic (SAC) algorithm to address the optimization problems brought about by parameter sharing in multi-task reinforcement learning. The decision reconstruction network model achieves cross-layer information exchange by dynamically adjusting and reconfiguring the network hierarchy, overcoming the inherent limitations of traditional network architectures in multi-task scenarios. The SAC algorithm based on the decision reconstruction network model can train on multiple tasks simultaneously, effectively learning and integrating the relevant knowledge of each task. Finally, the proposed algorithm is evaluated in the multi-task environments of Meta-World, a multi-task reinforcement learning benchmark of robotic manipulation tasks, and the multi-task MuJoCo environment. | |
| 653 | |a Potential gradient | ||
| 653 | |a Algorithms | ||
| 653 | |a Parameter identification | ||
| 653 | |a Machine learning | ||
| 653 | |a Reconstruction | ||
| 653 | |a Robot arms | ||
| 653 | |a Robot control | ||
| 653 | |a Reconfiguration | ||
| 653 | |a Optimization | ||
| 653 | |a Multitasking | ||
| 773 | 0 | |t Complex & Intelligent Systems |g vol. 11, no. 3 (Mar 2025), p. 175 | |
| 786 | 0 | |d ProQuest |t Advanced Technologies & Aerospace Database | |
| 856 | 4 | 1 | |3 Citation/Abstract |u https://www.proquest.com/docview/3168509074/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch |
| 856 | 4 | 0 | |3 Full Text - PDF |u https://www.proquest.com/docview/3168509074/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch |
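The abstract above describes sharing parameters across tasks while routing information dynamically between network layers to avoid gradient conflicts. As an illustration only — this is a minimal NumPy sketch of soft per-task routing over shared modules, not the paper's actual decision reconstruction architecture, and all names (`SoftRoutedLayer`, `route`) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class SoftRoutedLayer:
    """One layer holding several parallel modules; a learned per-task
    routing vector mixes their outputs, so tasks can share some
    parameters while specializing others."""

    def __init__(self, n_modules, dim_in, dim_out, n_tasks):
        # one weight matrix per module, plus per-task routing logits
        self.W = rng.normal(0, 0.1, (n_modules, dim_out, dim_in))
        self.route = rng.normal(0, 0.1, (n_tasks, n_modules))

    def __call__(self, x, task_id):
        w = softmax(self.route[task_id])      # task-specific mixture weights
        outs = np.tanh(self.W @ x)            # (n_modules, dim_out)
        return np.einsum("m,md->d", w, outs)  # weighted combination

# Toy forward pass: two tasks route differently through the same modules.
layers = [SoftRoutedLayer(4, 8, 8, n_tasks=2),
          SoftRoutedLayer(4, 8, 8, n_tasks=2)]
obs = rng.normal(size=8)
for task in (0, 1):
    h = obs
    for layer in layers:
        h = layer(h, task)
```

In a full multi-task SAC agent, such routed layers would sit inside the actor and critic networks, with the routing logits trained jointly with the module weights.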