Minimizing cache usage with fixed-priority and earliest deadline first scheduling

Saved in:
Bibliographic Details
Published in: Real-Time Systems vol. 60, no. 4 (Dec 2024), p. 625
Publisher:
Springer Nature B.V.
Subjects:
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3143473249
003 UK-CbPIL
022 |a 0922-6443 
022 |a 1573-1383 
024 7 |a 10.1007/s11241-024-09423-7  |2 doi 
035 |a 3143473249 
045 2 |b d20241201  |b d20241231 
245 1 |a Minimizing cache usage with fixed-priority and earliest deadline first scheduling 
260 |b Springer Nature B.V.  |c Dec 2024 
513 |a Journal Article 
520 3 |a Cache partitioning is a technique to reduce interference among tasks running on processors with shared caches. To make this technique effective, cache segments should be allocated to the tasks that will benefit the most from having their data and instructions stored in the cache. Requested data and instructions can be retrieved faster from cache memory than from main memory, thereby reducing overall execution time. Existing partitioning schemes for real-time systems divide the available cache among the tasks with schedulability as the sole optimization criterion. However, it is also desirable, particularly in systems with power constraints or in mixed-criticality systems where low- and high-criticality workloads execute alongside each other, to reduce the total cache usage of real-time tasks. Cache minimization as part of design space exploration can also help achieve optimal system performance and resource utilization in embedded systems. In this paper, we develop optimization algorithms for cache partitioning that, besides ensuring schedulability, also minimize cache usage. We consider both preemptive and non-preemptive scheduling policies on single-processor systems with fixed- and dynamic-priority scheduling algorithms (Rate Monotonic (RM) and Earliest Deadline First (EDF), respectively). For preemptive scheduling, we formulate the problem as an integer quadratically constrained program and propose an efficient heuristic that achieves near-optimal solutions. For non-preemptive scheduling, we combine linear and binary search techniques with different fixed-priority schedulability tests and with Quick Processor-demand Analysis (QPA) for EDF. 
Our experiments, based on synthetic task sets with parameters drawn from real-world embedded applications, show that the proposed heuristic (i) achieves an average optimality gap of 0.79% within 0.1× the run time of a mathematical programming solver and (ii) reduces average cache usage by 39.15% compared to existing cache partitioning approaches. In addition, we find that for large task sets with high utilization, non-preemptive scheduling can use less cache than preemptive scheduling to guarantee schedulability. 
653 |a Mathematical programming 
653 |a Scheduling 
653 |a Heuristic 
653 |a Preempting 
653 |a Memory tasks 
653 |a Embedded systems 
653 |a Task scheduling 
653 |a Microprocessors 
653 |a Resource scheduling 
653 |a Demand analysis 
653 |a Algorithms 
653 |a Resource utilization 
653 |a Real time 
653 |a Design optimization 
653 |a Partitioning 
653 |a Priority scheduling 
653 |a Run time (computers) 
773 0 |t Real-Time Systems  |g vol. 60, no. 4 (Dec 2024), p. 625 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3143473249/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3143473249/fulltextPDF/embedded/6A8EOT78XXH2IG52?source=fedsrch