Optimal Gradient Checkpointing for Sparse and Recurrent Architectures using Off-Chip Memory

Bibliographic Details
Published in: arXiv.org (Dec 16, 2024)
Main Author: Bencheikh, Wadjih
Contributors: Finkbeiner, Jan; Neftci, Emre
Published / Created: Cornell University Library, arXiv.org
Online Access: Citation/Abstract; full text available outside of ProQuest
Description
Abstract: Recurrent neural networks (RNNs) are valued for their computational efficiency and reduced memory requirements on tasks involving long sequence lengths, but they require high memory-processor bandwidth to train. Checkpointing techniques can reduce the memory requirements by storing only a subset of intermediate states (the checkpoints), yet they are still rarely used because of the computational overhead of the additional recomputation phase. This work addresses these challenges by introducing memory-efficient gradient checkpointing strategies tailored to the general class of sparse RNNs and Spiking Neural Networks (SNNs). SNNs are energy-efficient alternatives to RNNs thanks to their local, event-driven operation and potential neuromorphic implementation. We use the Intelligence Processing Unit (IPU) as an exemplary platform for architectures with distributed local memory and exploit its suitability for sparse and irregular workloads to scale SNN training to long sequence lengths. We find that Double Checkpointing emerges as the most effective method, optimizing the use of local memory resources while minimizing recomputation overhead. This approach reduces dependency on slower large-scale memory access, enabling training on sequences over 10 times longer, or networks 4 times larger, than previously feasible, with only marginal time overhead. The presented techniques demonstrate significant potential to enhance the scalability and efficiency of training sparse and recurrent networks across diverse hardware platforms, and highlight the benefits of sparse activations for scalable recurrent neural network training.
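The core idea summarized above, keeping only checkpointed hidden states and recomputing intermediate activations during the backward pass, can be illustrated with a short, generic sketch. The following is a minimal segment-wise checkpointing example in JAX; it is not the paper's Double Checkpointing scheme or its IPU implementation, and names such as rnn_step, run_segment, SEG_LEN, and the toy loss are illustrative assumptions.

```python
# Minimal sketch, assuming a plain tanh-RNN: segment-wise gradient
# checkpointing in JAX. Only hidden states at segment boundaries (the
# "checkpoints") are kept for the backward pass; activations inside a
# segment are recomputed. This is NOT the paper's IPU implementation.
import jax
import jax.numpy as jnp

SEQ_LEN, SEG_LEN, HIDDEN = 1024, 32, 128   # SEG_LEN must divide SEQ_LEN

def rnn_step(params, h, x):
    # One recurrent update; a spiking neuron model could be substituted here.
    W, U, b = params
    return jnp.tanh(x @ W + h @ U + b)

def run_segment(params, h, xs_seg):
    # Forward pass over one segment; re-executed during backprop under jax.checkpoint.
    def step(h, x):
        return rnn_step(params, h, x), None
    h, _ = jax.lax.scan(step, h, xs_seg)
    return h

def loss_fn(params, xs, h0):
    # Outer scan over segments; jax.checkpoint (a.k.a. remat) makes each
    # segment-boundary state the only stored activation.
    seg_fn = jax.checkpoint(run_segment)
    xs_segs = xs.reshape(SEQ_LEN // SEG_LEN, SEG_LEN, HIDDEN)

    def outer(h, xs_seg):
        return seg_fn(params, h, xs_seg), None

    h_final, _ = jax.lax.scan(outer, h0, xs_segs)
    return jnp.sum(h_final ** 2)           # toy loss on the final state

key = jax.random.PRNGKey(0)
params = (0.1 * jax.random.normal(key, (HIDDEN, HIDDEN)),
          0.1 * jax.random.normal(key, (HIDDEN, HIDDEN)),
          jnp.zeros(HIDDEN))
xs = jax.random.normal(key, (SEQ_LEN, HIDDEN))
h0 = jnp.zeros(HIDDEN)

grads = jax.grad(loss_fn)(params, xs, h0)  # memory ~ O(T/K + K) instead of O(T)
```

Storing only the T/K boundary states and recomputing at most K steps per segment trades extra forward computation for memory, which is the same trade-off the paper optimizes for distributed local memory and off-chip memory access on the IPU.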
ISSN: 2331-8422
Source: Engineering Database