Cache‐Assisted Offloading Optimization for Edge Computing Tasks

Bibliographic Details
Published in: IET Communications vol. 19, no. 1 (Jan/Dec 2025)
Main Author: Liu, Hao
Other Authors: Zhen, Yan; Zheng, Libin; Huo, Chao; Zhang, Yu
Published: John Wiley & Sons, Inc.
Subjects:
Online Access: Citation/Abstract
Full Text
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3253267428
003 UK-CbPIL
022 |a 1751-8628 
022 |a 1751-8636 
024 7 |a 10.1049/cmu2.70089  |2 doi 
035 |a 3253267428 
045 2 |b d20250101  |b d20251231 
084 |a 186333  |2 nlm 
100 1 |a Liu, Hao  |u Zhejiang University, Zhejiang, China 
245 1 |a Cache‐Assisted Offloading Optimization for Edge Computing Tasks 
260 |b John Wiley & Sons, Inc.  |c Jan/Dec 2025 
513 |a Journal Article 
520 3 |a ABSTRACT Mobile edge computing (MEC) is a practical architecture that brings computation closer to the network edge, enabling rapid response to user demands. However, most research on task offloading (TO) overlooks repeated requests for the same computing tasks over long time slots, as well as the spatiotemporal disparities in user demands. To address this gap, we first introduce edge caching into TO and then divide base stations (BSs) into communities based on the regional characteristics of user demands and activity areas, enabling collaborative caching among BSs within the same community. We then design a dual timescale to update task popularity over both short-term and long-term time slots. To maximize the cache benefit, we construct a model that transforms the caching issue into a 0-1 knapsack problem and employ dynamic programming to obtain offloading strategies. Simulation results confirm the efficiency of the proposed task caching policy algorithm: it effectively reduces the offloading cost and improves cache resource utilization compared with three baseline algorithms. 
653 |a Dynamic programming 
653 |a User behavior 
653 |a Collaboration 
653 |a User experience 
653 |a Caching 
653 |a Edge computing 
653 |a Costs 
653 |a Optimization 
653 |a Mobile computing 
653 |a Computation offloading 
653 |a Decomposition 
653 |a Algorithms 
653 |a Resource utilization 
653 |a Cloud computing 
653 |a Time 
653 |a Energy consumption 
653 |a Knapsack problem 
700 1 |a Zhen, Yan  |u Beijing Smartchip Microelectronics Technology Company Limited, Beijing, China 
700 1 |a Zheng, Libin  |u Beijing Smartchip Microelectronics Technology Company Limited, Beijing, China 
700 1 |a Huo, Chao  |u Beijing Smartchip Microelectronics Technology Company Limited, Beijing, China 
700 1 |a Zhang, Yu  |u Beijing Smartchip Microelectronics Technology Company Limited, Beijing, China 
773 0 |t IET Communications  |g vol. 19, no. 1 (Jan/Dec 2025) 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3253267428/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text  |u https://www.proquest.com/docview/3253267428/fulltext/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3253267428/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch
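
The abstract casts the caching decision as a 0-1 knapsack problem solved by dynamic programming. Below is a minimal Python sketch of that general formulation, assuming hypothetical task footprints (sizes), per-task cache benefits, and a single community cache capacity; none of these names or numbers come from the article.

# Hypothetical sketch (not from the article) of a 0-1 knapsack caching
# decision solved by dynamic programming, as described in the abstract.
# sizes[i]    : assumed storage footprint of caching task i at a BS community
# benefits[i] : assumed reduction in offloading cost if task i is cached
# capacity    : assumed total cache capacity of the community

def knapsack_caching(sizes, benefits, capacity):
    n = len(sizes)
    best = [0] * (capacity + 1)            # best[c]: max benefit with capacity c
    taken = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # iterate capacities downward so each task is cached at most once
        for c in range(capacity, sizes[i] - 1, -1):
            candidate = best[c - sizes[i]] + benefits[i]
            if candidate > best[c]:
                best[c] = candidate
                taken[i][c] = True
    # backtrack to recover which tasks the policy would cache
    cached, c = [], capacity
    for i in range(n - 1, -1, -1):
        if taken[i][c]:
            cached.append(i)
            c -= sizes[i]
    return best[capacity], sorted(cached)

# Illustrative numbers only: three tasks, cache capacity 10
print(knapsack_caching([4, 3, 6], [7, 5, 9], 10))   # -> (16, [0, 2])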