Distributed data processing and task scheduling based on GPU parallel computing

Bibliographic Details
Published in: Neural Computing & Applications vol. 37, no. 4 (Feb 2025), p. 1757
Publisher: Springer Nature B.V.
Online Access: Citation/Abstract

MARC

LEADER 00000nab a2200000uu 4500
001 3163602262
003 UK-CbPIL
022 |a 0941-0643 
022 |a 1433-3058 
024 7 |a 10.1007/s00521-024-10489-4  |2 doi 
035 |a 3163602262 
045 2 |b d20250201  |b d20250228 
245 1 |a Distributed data processing and task scheduling based on GPU parallel computing 
260 |b Springer Nature B.V.  |c Feb 2025 
513 |a Journal Article 
520 3 |a Distributed data parallel (DDP) computing ensures data parallelism, enabling execution across several computers. A separate DDP instance should be created for each process that takes part in the computation. Task scheduling in parallel processing employs various methods and strategies to minimize the number of delayed jobs. Reliability, in terms of a system's capacity to distribute work over several machines, is improved by workload sharing, task migration, and automatic task replication; data transmission and reception are facilitated via the internet. The graphics processing unit (GPU) is a highly specialized electronic circuit. Compared with a traditional CPU, a GPU's parallel structure allows faster computation and greater effectiveness. GPUs are employed in parallel processing in new DDP-GPU methods with more processors, allowing a larger workload to be handled in portions. The execution time of a program can be reduced by distributing its task areas among many processors. This study employs various multitasking scenarios, from simulated to real-world cases and from graphically heavy applications to parallel-processing workloads. The research compares the efficacy of several allocation algorithms, illuminating how best to divide GPU resources among multiple processes. By sorting through a vast volume of data more quickly, parallel computing saves time and money. Parallel and distributed computing concern the use of numerous computing resources to improve the results of a distributed and computationally expensive application. A single computer or a group of computers connected via a network can be used for the computation. Our experimental findings show that the DDP-GPU performs task scheduling 95.3% better than a conventional GPU. The proposed model improves system reliability with an average execution time of 25.7 s. 
The time needed to run a program divided by the machine's cost is approximately 95% when GPU execution ratio analysis is used to measure the parallel system's efficacy. The high stability ratio between tasks helps turn preferences into a probability distribution when examining the study results. A task-based processing evaluation ratio of 98.45% allows the automatic execution of specified tasks when specific criteria are satisfied. 
653 |a Parallel processing 
653 |a Computers 
653 |a Task scheduling 
653 |a Data processing 
653 |a System reliability 
653 |a Graphics processing units 
653 |a Multitasking 
653 |a Effectiveness 
653 |a Resource scheduling 
653 |a Workload 
653 |a Data transmission 
653 |a Processors 
653 |a Circuits 
653 |a Sorting algorithms 
653 |a Workloads 
653 |a Distributed processing 
653 |a Time measurement 
773 0 |t Neural Computing & Applications  |g vol. 37, no. 4 (Feb 2025), p. 1757 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3163602262/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch
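
The abstract's scheduling objective, minimizing the number of delayed jobs, corresponds to a classical single-machine result: the Moore-Hodgson algorithm. The paper's own DDP-GPU scheduler is not described in this record, so the sketch below is only an illustration of that objective, not the authors' method.

```python
# Illustrative sketch only: the Moore-Hodgson algorithm minimizes the number
# of late jobs on a single machine. The paper's actual DDP-GPU scheduler is
# not given in this record; this demonstrates the objective, not the method.
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date) tuples.
    Returns indices of jobs that finish on time; all others are late."""
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][1])  # earliest due date first
    on_time = []  # indices currently scheduled to finish on time
    heap = []     # max-heap (by processing time) of accepted jobs
    t = 0         # running completion time
    for i in order:
        p, d = jobs[i]
        on_time.append(i)
        heapq.heappush(heap, (-p, i))
        t += p
        if t > d:  # deadline missed: drop the longest accepted job so far
            neg_p, j = heapq.heappop(heap)
            t += neg_p  # neg_p is negative, so this subtracts its time
            on_time.remove(j)
    return on_time
```

For example, on jobs = [(2, 3), (3, 4), (2, 5), (4, 7)] the algorithm keeps jobs 0 and 2 on time and marks the other two late, which is optimal for that instance.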