Memory-Efficient 4-bit Preconditioned Stochastic Optimization

Bibliographic Details
Published in: arXiv.org (Dec 14, 2024), p. n/a
Main Author: Li, Jingyang
Other Authors: Ding, Kuangyu, Kim-Chuan Toh, Zhou, Pan
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3145904397
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3145904397 
045 0 |b d20241214 
100 1 |a Li, Jingyang 
245 1 |a Memory-Efficient 4-bit Preconditioned Stochastic Optimization 
260 |b Cornell University Library, arXiv.org  |c Dec 14, 2024 
513 |a Working Paper 
520 3 |a Preconditioned stochastic optimization algorithms, exemplified by Shampoo, have demonstrated superior performance over first-order optimizers, providing both theoretical advantages in convergence rates and practical improvements in large-scale neural network training. However, they incur substantial memory overhead due to the storage demands of non-diagonal preconditioning matrices. To address this, we introduce 4-bit quantization for Shampoo's preconditioners. We introduce two key methods: First, we apply Cholesky decomposition followed by quantization of the Cholesky factors, reducing memory usage by leveraging their lower triangular structure while preserving symmetry and positive definiteness to minimize information loss. To our knowledge, this is the first quantization approach applied to Cholesky factors of preconditioners. Second, we incorporate error feedback in the quantization process, efficiently storing Cholesky factors and error states in the lower and upper triangular parts of the same matrix. Through extensive experiments, we demonstrate that combining Cholesky quantization with error feedback enhances memory efficiency and algorithm performance in large-scale deep-learning tasks. Theoretically, we also provide convergence proofs for quantized Shampoo under both smooth and non-smooth stochastic optimization settings. 
653 |a Error feedback 
653 |a Memory tasks 
653 |a Shampoos 
653 |a Algorithms 
653 |a Convergence 
653 |a Neural networks 
653 |a Machine learning 
653 |a Storage 
653 |a Cognitive tasks 
653 |a Optimization 
653 |a Preconditioning 
700 1 |a Ding, Kuangyu 
700 1 |a Kim-Chuan Toh 
700 1 |a Zhou, Pan 
773 0 |t arXiv.org  |g (Dec 14, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3145904397/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2412.10663
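
The abstract in field 520 above outlines two algorithmic ideas: quantizing the lower-triangular Cholesky factor of a Shampoo-style preconditioner statistic to 4 bits, and carrying the quantization error in the otherwise unused upper triangle of the same matrix so it can be fed back before the next quantization step. The NumPy sketch below illustrates only those two ideas under stated assumptions; it is not the authors' implementation, and its names and choices (quantize_4bit, quantize_cholesky_with_feedback, a single per-factor scale, the toy statistic) are illustrative.

    import numpy as np

    BITS = 4
    QMAX = 2 ** (BITS - 1) - 1          # symmetric signed 4-bit codes in [-7, 7]


    def quantize_4bit(x, scale):
        # Round x / scale to signed 4-bit codes (held in int8 for simplicity).
        return np.clip(np.rint(x / scale), -QMAX, QMAX).astype(np.int8)


    def dequantize_4bit(codes, scale):
        return codes.astype(np.float64) * scale


    def quantize_cholesky_with_feedback(stat, packed_prev=None):
        # `stat` is a symmetric positive-definite Shampoo-style statistic.
        # Returns one square matrix: 4-bit codes of the Cholesky factor in the
        # lower triangle, transposed quantization error in the strict upper
        # triangle (the diagonal error is simply dropped in this illustration).
        C = np.linalg.cholesky(stat)                 # lower-triangular factor

        # Error feedback: add back the error carried in the upper triangle.
        if packed_prev is not None:
            C = C + np.triu(packed_prev, k=1).T

        scale = np.abs(C).max() / QMAX               # one scale per factor (illustrative)
        codes = quantize_4bit(C, scale)
        err = C - dequantize_4bit(codes, scale)

        packed = np.tril(codes.astype(np.float64)) + np.triu(err.T, k=1)
        return packed, scale


    def reconstruct_preconditioner(packed, scale):
        # Dequantize the stored factor and rebuild a symmetric PSD approximation.
        C_hat = dequantize_4bit(np.tril(packed), scale)
        return C_hat @ C_hat.T


    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        G = rng.standard_normal((8, 32))
        stat = G @ G.T + 1e-3 * np.eye(8)            # toy preconditioner statistic

        packed, scale = quantize_cholesky_with_feedback(stat)
        approx = reconstruct_preconditioner(packed, scale)
        print("relative error:", np.linalg.norm(approx - stat) / np.linalg.norm(stat))

        # Next step: pass the packed matrix back so the stored error is fed in.
        packed, scale = quantize_cholesky_with_feedback(stat, packed)

In a real 4-bit implementation the codes would be bit-packed two per byte with per-block scales; here they are held as small integers inside a float matrix purely to show the shared lower/upper-triangular storage of factor and error state described in the abstract.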