Advances in Machine Learning-Enabled Resource Management in Manycore Systems: From Von Neumann to Heterogeneous Processing-in-Memory Architectures
Saved in:
| Published in: | ProQuest Dissertations and Theses (2025) |
|---|---|
| Main Author: | Narang, Gaurav |
| Published: | ProQuest Dissertations & Theses |
| Subjects: | Computer engineering; Engineering; Artificial intelligence; Information technology |
| Online Access: | Citation/Abstract; Full Text - PDF |
MARC
| LEADER | | | 00000nab a2200000uu 4500 |
|---|---|---|---|
| 001 | | | 3261569972 |
| 003 | | | UK-CbPIL |
| 020 | | | \|a 9798297636583 |
| 035 | | | \|a 3261569972 |
| 045 | 2 | | \|b d20250101 \|b d20251231 |
| 084 | | | \|a 66569 \|2 nlm |
| 100 | 1 | | \|a Narang, Gaurav |
| 245 | 1 | | \|a Advances in Machine Learning-Enabled Resource Management in Manycore Systems: From Von Neumann to Heterogeneous Processing-in-Memory Architectures |
| 260 | | | \|b ProQuest Dissertations & Theses \|c 2025 |
| 513 | | | \|a Dissertation/Thesis |
| 520 | 3 | | \|a The carbon output of computing, from edge devices to large data centers, must be dramatically reduced. In this respect, the Voltage-Frequency Island (VFI) is a well-established design paradigm for building scalable, energy-efficient manycore chips (e.g., CPUs). The voltage/frequency (V/F) knobs of the VFIs can be dynamically tuned to reduce energy while maintaining the application’s quality of service (QoS). In the first part of this dissertation, we consider the problem of dynamic power management (DPM) in manycore SoCs and propose novel machine learning (ML)-enabled DPM strategies to improve energy efficiency in von Neumann-based manycore architectures. Deep Neural Networks (DNNs) and Graph Neural Networks (GNNs) have enabled remarkable advances in real-world applications including natural language processing, healthcare, and molecular chemistry. As the complexity of neural network models continues to grow, their intensive computing and memory requirements pose significant performance and energy-efficiency challenges for traditional von Neumann architectures. Processing-in-Memory (PIM)-based computing platforms have emerged as a promising alternative because they perform computation within the memory itself, thereby reducing data movement and improving energy efficiency. However, communication between PIM-based processing elements (PEs) in a manycore architecture remains a bottleneck. In addition, in-memory computation suffers from device and crossbar non-idealities arising from temperature variation, conductance drift, and related effects. In this dissertation, we address these challenges and propose a thermally efficient, dataflow-aware Network-on-Chip (NoC) design to accelerate DNN inference. We also address the reliability, energy, and performance challenges of DNN training and propose a heterogeneous architecture that combines the benefits of multiple PIM devices on a single platform to enable energy-efficient, high-performance DNN training. Later in this dissertation, we exploit the heterogeneity of the computational kernels behind deep learning models such as DNNs, GNNs, and transformers to design high-performance, energy-efficient, and reliable heterogeneous PIM-based manycore systems for sustainable deep learning. Overall, we utilize ML to enable the design and resource management of high-performance, energy-efficient, and reliable computing systems spanning von Neumann to heterogeneous PIM-based architectures. |
| 653 | | | \|a Computer engineering |
| 653 | | | \|a Engineering |
| 653 | | | \|a Artificial intelligence |
| 653 | | | \|a Information technology |
| 773 | 0 | | \|t ProQuest Dissertations and Theses \|g (2025) |
| 786 | 0 | | \|d ProQuest \|t ProQuest Dissertations & Theses Global |
| 856 | 4 | 1 | \|3 Citation/Abstract \|u https://www.proquest.com/docview/3261569972/abstract/embedded/J7RWLIQ9I3C9JK51?source=fedsrch |
| 856 | 4 | 0 | \|3 Full Text - PDF \|u https://www.proquest.com/docview/3261569972/fulltextPDF/embedded/J7RWLIQ9I3C9JK51?source=fedsrch |