Memory-Efficient 4-bit Preconditioned Stochastic Optimization

Bibliographic Details
Published in: arXiv.org (Dec 14, 2024)
Main Author: Li, Jingyang
Other Authors: Ding, Kuangyu; Toh, Kim-Chuan; Zhou, Pan
Published:
Cornell University Library, arXiv.org
Online Access: Citation/Abstract
Description
Abstract: Preconditioned stochastic optimization algorithms, exemplified by Shampoo, have demonstrated superior performance over first-order optimizers, providing both theoretical advantages in convergence rates and practical improvements in large-scale neural network training. However, they incur substantial memory overhead due to the storage demands of non-diagonal preconditioning matrices. To address this, we introduce 4-bit quantization for Shampoo's preconditioners through two key methods. First, we apply Cholesky decomposition and quantize the resulting Cholesky factors, reducing memory usage by exploiting their lower-triangular structure while preserving symmetry and positive definiteness to minimize information loss. To our knowledge, this is the first quantization approach applied to the Cholesky factors of preconditioners. Second, we incorporate error feedback into the quantization process, efficiently storing the Cholesky factors and error states in the lower and upper triangular parts of the same matrix. Through extensive experiments, we demonstrate that combining Cholesky quantization with error feedback improves memory efficiency and algorithm performance in large-scale deep-learning tasks. Theoretically, we also provide convergence proofs for quantized Shampoo under both smooth and non-smooth stochastic optimization settings.
ISSN: 2331-8422
Source: Engineering Database
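
Illustrative sketch: the abstract above describes quantizing the lower-triangular Cholesky factor of a preconditioner to 4 bits with error feedback, storing the factor and the error state in the lower and upper triangles of one matrix. The following minimal NumPy sketch shows that idea under stated assumptions; the function names (quantize_4bit, pack_state, unpack_factor) and the single per-matrix scale are illustrative choices, not the authors' implementation, and a real 4-bit optimizer would pack two codes per byte and typically use block-wise scaling.

import numpy as np

def quantize_4bit(x, scale):
    # Uniform symmetric 4-bit quantization: integer codes in [-8, 7].
    q = np.clip(np.round(x / scale * 7.0), -8, 7)
    return q.astype(np.int8)

def dequantize_4bit(q, scale):
    return q.astype(np.float64) * (scale / 7.0)

def pack_state(preconditioner, prev_error=None):
    # Quantize the Cholesky factor with error feedback (illustrative only).
    L = np.linalg.cholesky(preconditioner)      # lower-triangular factor, P = L @ L.T
    if prev_error is not None:
        L = L + prev_error                      # error feedback: add back last step's quantization error
    scale = np.abs(L).max() + 1e-12             # one scale per matrix (simplification)
    Lq = quantize_4bit(L, scale)
    err = L - dequantize_4bit(Lq, scale)        # error state to carry to the next step
    # One square array: 4-bit codes in the lower triangle, error state in the
    # strict upper triangle (diagonal error dropped here for simplicity).
    state = np.tril(Lq.astype(np.float64)) + np.triu(err.T, k=1)
    return state, scale

def unpack_factor(state, scale):
    # Recover an approximate Cholesky factor from the packed state.
    Lq = np.tril(state).astype(np.int8)
    return dequantize_4bit(Lq, scale)

# Usage: reconstruct P ~ L_hat @ L_hat.T from the 4-bit factor of a synthetic
# symmetric positive-definite "preconditioner".
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
P = A @ A.T + 6.0 * np.eye(6)
state, scale = pack_state(P)
L_hat = unpack_factor(state, scale)
print(np.linalg.norm(P - L_hat @ L_hat.T) / np.linalg.norm(P))

Quantizing the factor rather than the preconditioner itself keeps the reconstruction L_hat @ L_hat.T symmetric and positive semidefinite by construction, which is the property the abstract highlights as limiting information loss.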