Puzzle: Distillation-Based NAS for Inference-Optimized LLMs

Published: arXiv.org (Dec 8, 2024)
Author: Bercovich, Akhiad
Other Authors: Ronen, Tomer; Abramovich, Talor; Ailon, Nir; Assaf, Nave; Dabbah, Mohammad; Galil, Ido; Geifman, Amnon; Geifman, Yonatan; Golan, Izhak; Haber, Netanel; Karpas, Ehud; Koren, Roi; Levy, Itay; Molchanov, Pavlo; Mor, Shahar; Zach, Moshe; Nabwani, Najeeb; Puny, Omri; Rubin, Ran; Schen, Itamar; Shahaf, Ido; Tropp, Oren; Ullman Argov, Omer; Zilberstein, Ran; El-Yaniv, Ran
Publication Info: Cornell University Library, arXiv.org
Subjects: Computing costs; Integer programming; Mixed integer; Computer architecture; Large language models; Graphics processing units; Hardware; Constraints; Parameters; Inference
Online Access: Citation/Abstract; Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3134992503
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3134992503 
045 0 |b d20241208 
100 1 |a Bercovich, Akhiad 
245 1 |a Puzzle: Distillation-Based NAS for Inference-Optimized LLMs 
260 |b Cornell University Library, arXiv.org  |c Dec 8, 2024 
513 |a Working Paper 
520 3 |a Large language models (LLMs) have demonstrated remarkable capabilities, but their adoption is limited by high computational costs during inference. While increasing parameter counts enhances accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a framework that accelerates the inference of LLMs on specific hardware while preserving their capabilities. Through an innovative application of neural architecture search (NAS) at an unprecedented scale, Puzzle systematically optimizes models with tens of billions of parameters under hardware constraints. Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization. We demonstrate the real-world impact of our framework through Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while preserving 98.4% of the original model's capabilities. Nemotron-51B currently stands as the most accurate language model capable of inference on a single GPU with large batch sizes. Remarkably, this transformation required just 45B training tokens, compared to over 15T tokens used for the 70B model it was derived from. This establishes a new paradigm where powerful models can be optimized for efficient deployment with only negligible compromise of their capabilities, demonstrating that inference performance, not parameter count alone, should guide model selection. With the release of Nemotron-51B and the presentation of the Puzzle framework, we provide practitioners immediate access to state-of-the-art language modeling capabilities at significantly reduced computational costs. 
653 |a Computing costs 
653 |a Integer programming 
653 |a Mixed integer 
653 |a Computer architecture 
653 |a Large language models 
653 |a Graphics processing units 
653 |a Hardware 
653 |a Constraints 
653 |a Parameters 
653 |a Inference 
700 1 |a Ronen, Tomer 
700 1 |a Abramovich, Talor 
700 1 |a Ailon, Nir 
700 1 |a Assaf, Nave 
700 1 |a Dabbah, Mohammad 
700 1 |a Galil, Ido 
700 1 |a Geifman, Amnon 
700 1 |a Geifman, Yonatan 
700 1 |a Golan, Izhak 
700 1 |a Haber, Netanel 
700 1 |a Karpas, Ehud 
700 1 |a Koren, Roi 
700 1 |a Levy, Itay 
700 1 |a Molchanov, Pavlo 
700 1 |a Mor, Shahar 
700 1 |a Zach, Moshe 
700 1 |a Nabwani, Najeeb 
700 1 |a Puny, Omri 
700 1 |a Rubin, Ran 
700 1 |a Schen, Itamar 
700 1 |a Shahaf, Ido 
700 1 |a Tropp, Oren 
700 1 |a Ullman Argov, Omer 
700 1 |a Zilberstein, Ran 
700 1 |a El-Yaniv, Ran 
773 0 |t arXiv.org  |g (Dec 8, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3134992503/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2411.19146
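
Illustrative sketch: MIP block selection

The abstract describes two mechanisms: blockwise local knowledge distillation (BLD) to train and score candidate replacement blocks in parallel, and mixed-integer programming (MIP) to assemble a full model from those scored blocks under hardware constraints. The record contains no code, but the selection step can be phrased as a small 0/1 integer program: pick exactly one variant per block so that estimated quality is maximized within a latency budget. The sketch below is not the authors' Puzzle implementation; the variant scores, latencies, budget, the choice of the PuLP/CBC solver, and the simplification that per-block quality contributions add linearly are all illustrative assumptions.

# Illustrative sketch only -- not the authors' Puzzle implementation.
# Picks one block variant per layer to maximize estimated quality
# under a latency budget, as a 0/1 mixed-integer program.
# Requires `pip install pulp` (CBC solver is bundled).
import pulp

# Hypothetical per-block measurements: (estimated quality, latency in ms).
# In Puzzle, quality estimates would come from blockwise distillation scores.
variants = [
    [(1.00, 4.0), (0.97, 2.5), (0.90, 1.2)],  # block 0: parent, slim FFN, no-op attention
    [(1.00, 4.0), (0.98, 2.8), (0.88, 1.0)],  # block 1
    [(1.00, 4.0), (0.95, 2.2), (0.92, 1.5)],  # block 2
]
latency_budget = 8.0  # made-up hardware constraint for these three blocks

prob = pulp.LpProblem("block_selection", pulp.LpMaximize)
# x[i][j] == 1 iff variant j is chosen for block i
x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(len(vs))]
     for i, vs in enumerate(variants)]

# Objective: maximize the summed quality of the chosen variants
# (assumes per-block quality contributions are additive).
prob += pulp.lpSum(variants[i][j][0] * x[i][j]
                   for i in range(len(variants)) for j in range(len(variants[i])))

# Exactly one variant must be selected for every block.
for i in range(len(variants)):
    prob += pulp.lpSum(x[i]) == 1

# The assembled model must fit the latency budget.
prob += pulp.lpSum(variants[i][j][1] * x[i][j]
                   for i in range(len(variants)) for j in range(len(variants[i]))) <= latency_budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [next(j for j in range(len(vs)) if x[i][j].value() > 0.5)
          for i, vs in enumerate(variants)]
print("chosen variant per block:", chosen)  # here: [1, 1, 1], total latency 7.5 ms

On this toy instance the solver trades a little per-block quality for a model that fits the budget, which mirrors the abstract's claim in miniature: the paper reports that applying this kind of constrained search at the scale of Llama-3.1-70B-Instruct yields Nemotron-51B, with a 2.17x throughput speedup at 98.4% of the parent's capabilities.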