Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory

Bibliographic Details
Published in: arXiv.org (Dec 16, 2024), p. n/a
Main Author: Bencheikh, Wadjih
Other Authors: Finkbeiner, Jan; Neftci, Emre
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3145904932
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3145904932 
045 0 |b d20241216 
100 1 |a Bencheikh, Wadjih 
245 1 |a Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory 
260 |b Cornell University Library, arXiv.org  |c Dec 16, 2024 
513 |a Working Paper 
520 3 |a Recurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths, but they require high memory-processor bandwidth to train. Checkpointing techniques can reduce the memory requirements by storing only a subset of intermediate states, the checkpoints, but are still rarely used due to the computational overhead of the additional recomputation phase. This work addresses these challenges by introducing memory-efficient gradient checkpointing strategies tailored to the general class of sparse RNNs and Spiking Neural Networks (SNNs). SNNs are energy-efficient alternatives to RNNs thanks to their local, event-driven operation and potential neuromorphic implementation. We use the Intelligence Processing Unit (IPU) as an exemplary platform for architectures with distributed local memory and exploit its suitability for sparse and irregular workloads to scale SNN training to long sequence lengths. We find that Double Checkpointing emerges as the most effective method, optimizing the use of local memory resources while minimizing recomputation overhead. This approach reduces dependency on slower large-scale memory access, enabling training on sequences over 10 times longer, or networks 4 times larger, than previously feasible, with only marginal time overhead. The presented techniques demonstrate significant potential to enhance the scalability and efficiency of training sparse and recurrent networks across diverse hardware platforms, and they highlight the benefits of sparse activations for scalable recurrent neural network training. (A minimal, hypothetical sketch of the checkpoint-and-recompute idea appears after the MARC record below.) 
653 |a Recurrent neural networks 
653 |a Memory tasks 
653 |a Checkpointing 
653 |a Microprocessors 
653 |a Distributed memory 
653 |a Neural networks 
653 |a Chips (memory devices) 
653 |a Optimization 
653 |a Alternative energy sources 
700 1 |a Finkbeiner, Jan 
700 1 |a Neftci, Emre 
773 0 |t arXiv.org  |g (Dec 16, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3145904932/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.11810
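
The checkpoint-and-recompute idea summarized in the abstract can be illustrated with a minimal JAX sketch. This is an assumption-laden illustration only: it applies segment-level rematerialization to a plain dense RNN using jax.checkpoint and jax.lax.scan, and it does not reproduce the paper's Double Checkpointing scheme, its sparse or spiking models, or the IPU memory hierarchy. All function names, shapes, and the segment count are hypothetical choices for this sketch.

# Minimal, illustrative sketch of segment-level gradient checkpointing for an
# RNN in JAX. Not the paper's implementation; names and shapes are hypothetical.
import jax
import jax.numpy as jnp

def rnn_step(h, x, W_h, W_x):
    # One recurrent update; with checkpointing its activations are recomputed
    # during the backward pass instead of being stored for every timestep.
    return jnp.tanh(h @ W_h + x @ W_x)

def run_segment(h, xs_seg, W_h, W_x):
    # Unroll one segment of the sequence; only the segment's initial state
    # (the checkpoint) needs to be kept in memory.
    def step(carry, x):
        return rnn_step(carry, x, W_h, W_x), None
    h, _ = jax.lax.scan(step, h, xs_seg)
    return h

def loss(params, xs, target, num_segments=8):
    W_h, W_x = params
    h = jnp.zeros(W_h.shape[0])
    # jax.checkpoint (remat) drops intermediate activations inside each segment
    # and recomputes them on the backward pass, trading compute for memory.
    ckpt_segment = jax.checkpoint(run_segment)
    for xs_seg in jnp.split(xs, num_segments):
        h = ckpt_segment(h, xs_seg, W_h, W_x)
    return jnp.sum((h - target) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
seq_len, in_dim, hidden_dim = 1024, 32, 64
params = (0.1 * jax.random.normal(k1, (hidden_dim, hidden_dim)),
          0.1 * jax.random.normal(k2, (in_dim, hidden_dim)))
xs = jax.random.normal(k3, (seq_len, in_dim))
grads = jax.grad(loss)(params, xs, jnp.zeros(hidden_dim))

Under this scheme the stored recurrent state scales with the number of segments rather than the full sequence length, at the cost of recomputing each segment's intermediate activations during the backward pass, which is the trade-off the abstract describes.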