Scaling Parallel Full-Graph GNN Training to Thousands of GPUs

Bibliographic Details
Published in: ProQuest Dissertations and Theses (2025)
Main Author: Ranjan, Aditya Kishore
Published:
ProQuest Dissertations & Theses
Subjects: Computer science; Computer engineering; Artificial intelligence
Online Access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3226020931
003 UK-CbPIL
020 |a 9798286455065 
035 |a 3226020931 
045 2 |b d20250101  |b d20251231 
084 |a 66569  |2 nlm 
100 1 |a Ranjan, Aditya Kishore 
245 1 |a Scaling Parallel Full-Graph GNN Training to Thousands of GPUs 
260 |b ProQuest Dissertations & Theses  |c 2025 
513 |a Dissertation/Thesis 
520 3 |a Graph neural networks have emerged as a potent class of neural networks capable of leveraging the connectivity and structure of real-world graphs to learn intricate properties and relationships between nodes. Many real-world graphs exceed the memory capacity of a GPU due to their sheer size, and using GNNs on them requires techniques such as mini-batch sampling to scale. However, this can lead to reduced accuracy in some cases, and sampling and data transfer from the CPU to the GPU can also slow down training. On the other hand, distributed full-graph training suffers from high communication overhead and load imbalance due to the irregular structure of graphs. In this thesis, we propose Plexus, a three-dimensional (3D) parallel approach for full-graph training that tackles these issues and scales to billion-edge graphs. Additionally, we introduce performance optimizations such as a permutation scheme for load balancing, and a performance model to predict the optimal 3D configuration. Plexus is evaluated on several graph datasets and scaling results are shown for up to 2048 A100 GPUs on Perlmutter, which is 33% of the supercomputer, and 1024 MI250X GPUs on Frontier. Plexus achieves unprecedented speedups of 2.3x-12.5x over existing methods and a reduction in the time to solution by 5.2-8.7x on Perlmutter and 7-54.2x on Frontier. 
653 |a Computer science 
653 |a Computer engineering 
653 |a Artificial intelligence 
773 0 |t ProQuest Dissertations and Theses  |g (2025) 
786 0 |d ProQuest  |t ProQuest Dissertations & Theses Global 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3226020931/abstract/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3226020931/fulltextPDF/embedded/7BTGNMKEMPT1V9Z2?source=fedsrch