Optimizing the Long-Term Efficiency of Users and Operators in Mobile Edge Computing Using Reinforcement Learning

Bibliographic Details
Published in: Electronics vol. 14, no. 8 (2025), p. 1689
Main Author: Shao, Jianji
Other Authors: Li, Yanjun
Published: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3194581963
003 UK-CbPIL
022 |a 2079-9292 
024 7 |a 10.3390/electronics14081689  |2 doi 
035 |a 3194581963 
045 2 |b d20250101  |b d20251231 
084 |a 231458  |2 nlm 
100 1 |a Shao, Jianji  |u College of Artificial Intelligence, Wenzhou Polytechnic, Wenzhou 325035, China 
245 1 |a Optimizing the Long-Term Efficiency of Users and Operators in Mobile Edge Computing Using Reinforcement Learning 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Mobile edge computing (MEC) has emerged as a promising paradigm to enhance computational capabilities at the network edge, enabling low-latency services for users while ensuring efficient resource utilization for operators. One of the key challenges in MEC is optimizing offloading decisions and resource allocation to balance user experience and operator profitability. In this paper, we integrate software-defined networking (SDN) and MEC to enhance system utility and propose an SDN-based MEC network framework. Within this framework, we formulate an optimization problem that jointly maximizes the utility of both users and operators by optimizing offloading decisions together with the communication and computation resource allocation ratios. To address this challenge, we model the problem as a Markov decision process (MDP) and propose a reinforcement learning (RL)-based algorithm to optimize long-term system utility in a dynamic network environment. However, since RL-based algorithms struggle with large state spaces, we extend the MDP formulation to a continuous state space and develop a deep reinforcement learning (DRL)-based algorithm to improve performance. The DRL approach leverages neural networks to approximate optimal policies, enabling more effective decision-making in complex environments. Experimental results validate the effectiveness of our proposed methods. While the RL-based algorithm enhances the long-term average utility of both users and operators, the DRL-based algorithm further improves performance, increasing operator and user efficiency by approximately 22.4% and 12.2%, respectively. These results highlight the potential of intelligent learning-based approaches for optimizing MEC networks and provide valuable insights into designing adaptive and efficient MEC architectures. 
653 |a User experience 
653 |a Communication 
653 |a Markov processes 
653 |a Edge computing 
653 |a Resource allocation 
653 |a Mobile computing 
653 |a Machine learning 
653 |a Distance learning 
653 |a Energy consumption 
653 |a Computer centers 
653 |a Performance enhancement 
653 |a Neural networks 
653 |a Infrastructure 
653 |a Optimization 
653 |a Network latency 
653 |a Effectiveness 
653 |a Computation offloading 
653 |a Design 
653 |a Operators 
653 |a Software-defined networking 
653 |a Algorithms 
653 |a Quality of service 
653 |a Utility functions 
653 |a Resource utilization 
653 |a Deep learning 
653 |a Cloud computing 
653 |a Decisions 
700 1 |a Li, Yanjun  |u School of Computer Science and Engineering, Zhejiang University of Technology, Hangzhou 310023, China; yjli@zjut.edu.cn 
773 0 |t Electronics  |g vol. 14, no. 8 (2025), p. 1689 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3194581963/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3194581963/fulltextwithgraphics/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3194581963/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch
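
The abstract in the 520 field above describes, at a high level, an MDP whose actions are offloading decisions and resource-allocation ratios and whose reward is a joint user/operator utility, solved first with tabular RL and then extended to DRL over a continuous state space. As a purely illustrative sketch of that discrete RL stage (not the paper's actual model, which is not given in this record), the following Python snippet runs tabular Q-learning on a toy MEC-like environment; the state/action encodings, the utility function, and all hyperparameters here are hypothetical placeholders.

```python
import numpy as np

# Minimal tabular Q-learning sketch for an MEC offloading MDP.
# States, actions, and the utility (reward) below are made-up
# stand-ins, not the formulation from the paper.

rng = np.random.default_rng(0)

N_STATES = 16    # e.g., discretized (channel quality, server load) pairs
N_ACTIONS = 4    # e.g., {local, offload} x {low, high} resource share

def step(state, action):
    """Toy environment: reward is a hypothetical joint user/operator utility."""
    user_utility = 1.0 - 0.1 * (state % 4) + 0.2 * (action % 2)
    operator_utility = 0.5 * (action // 2) - 0.05 * (state // 4)
    reward = user_utility + operator_utility
    next_state = rng.integers(N_STATES)   # random dynamics, just for the demo
    return next_state, reward

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1        # learning rate, discount, exploration

state = rng.integers(N_STATES)
for _ in range(50_000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # one-step Q-learning update toward the bootstrapped target
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Greedy policy per state:", Q.argmax(axis=1))
```

The reported gains (approximately 22.4% for operators and 12.2% for users) come from the paper's DRL variant, which replaces the Q-table with a neural network so the policy can cope with a continuous state space; this sketch mirrors only the discrete MDP stage described in the abstract.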