Improve Mathematical Reasoning in Language Models by Automated Process Supervision

Bibliographic details
Published in: arXiv.org (Dec 11, 2024), p. n/a
Main author: Luo, Liangchen
Other authors: Liu, Yinxiao, Liu, Rosanne, Phatale, Samrat, Guo, Meiqi, Lara, Harsh, Li, Yunxuan, Shu, Lei, Zhu, Yun, Meng, Lei, Sun, Jiao, Rastogi, Abhinav
Published: Cornell University Library, arXiv.org
Subjects: Supervision, Search algorithms, Annotations, Algorithms, Large language models, Automation, Monte Carlo simulation, Task complexity, Reasoning
Electronic access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3067012156
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3067012156 
045 0 |b d20241211 
100 1 |a Luo, Liangchen 
245 1 |a Improve Mathematical Reasoning in Language Models by Automated Process Supervision 
260 |b Cornell University Library, arXiv.org  |c Dec 11, 2024 
513 |a Working Paper 
520 3 |a Complex multi-step reasoning tasks, such as solving mathematical problems or generating code, remain a significant hurdle for even the most advanced large language models (LLMs). Verifying LLM outputs with an Outcome Reward Model (ORM) is a standard inference-time technique aimed at enhancing the reasoning performance of LLMs. However, this still proves insufficient for reasoning tasks with a lengthy or multi-hop reasoning chain, where the intermediate outcomes are neither properly rewarded nor penalized. Process supervision addresses this limitation by assigning intermediate rewards during the reasoning process. To date, the methods used to collect process supervision data have relied on either human annotation or per-step Monte Carlo estimation, both prohibitively expensive to scale, thus hindering the broad application of this technique. In response to this challenge, we propose a novel divide-and-conquer style Monte Carlo Tree Search (MCTS) algorithm named OmegaPRM for the efficient collection of high-quality process supervision data. This algorithm swiftly identifies the first error in the Chain of Thought (CoT) with binary search and balances the positive and negative examples, thereby ensuring both efficiency and quality. As a result, we are able to collect over 1.5 million process supervision annotations to train Process Reward Models (PRMs). This fully automated process supervision alongside the weighted self-consistency algorithm is able to enhance LLMs' math reasoning performance. We improved the success rates of the instruction-tuned Gemini Pro model from 51% to 69.4% on MATH500 and from 86.4% to 93.6% on GSM8K. Similarly, we boosted the success rates of Gemma2 27B from 42.3% to 58.2% on MATH500 and from 74.0% to 92.2% on GSM8K. The entire process operates without any human intervention or supervision, making our method both financially and ... 
653 |a Supervision 
653 |a Search algorithms 
653 |a Annotations 
653 |a Algorithms 
653 |a Large language models 
653 |a Automation 
653 |a Monte Carlo simulation 
653 |a Task complexity 
653 |a Reasoning 
700 1 |a Liu, Yinxiao 
700 1 |a Liu, Rosanne 
700 1 |a Phatale, Samrat 
700 1 |a Guo, Meiqi 
700 1 |a Lara, Harsh 
700 1 |a Li, Yunxuan 
700 1 |a Shu, Lei 
700 1 |a Zhu, Yun 
700 1 |a Meng, Lei 
700 1 |a Sun, Jiao 
700 1 |a Rastogi, Abhinav 
773 0 |t arXiv.org  |g (Dec 11, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3067012156/abstract/embedded/ZKJTFFSVAI7CB62C?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2406.06592
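
Note: the abstract above describes locating the first error in a chain of thought by binary search over Monte Carlo rollout estimates. The Python sketch below is only an illustration of that idea, not the authors' released code; generate_rollouts and is_correct are hypothetical stand-ins for an LLM sampler and a final-answer checker, and the search assumes that once a prefix contains an error, no completion from it reaches the correct answer.

from typing import Callable, List

def prefix_value(steps: List[str], k: int,
                 generate_rollouts: Callable[[List[str], int], List[str]],
                 is_correct: Callable[[str], bool],
                 num_rollouts: int = 8) -> float:
    # Monte Carlo estimate: fraction of sampled completions from the first
    # k reasoning steps that reach a correct final answer.
    completions = generate_rollouts(steps[:k], num_rollouts)
    return sum(is_correct(c) for c in completions) / num_rollouts

def first_error_step(steps: List[str],
                     generate_rollouts: Callable[[List[str], int], List[str]],
                     is_correct: Callable[[str], bool],
                     num_rollouts: int = 8) -> int:
    # Binary search for the earliest step whose prefix can no longer reach a
    # correct answer (estimated value 0). Returns len(steps) + 1 if every
    # prefix still admits a correct completion. Assumes monotonicity: once a
    # prefix is unrecoverable, every longer prefix is too.
    lo, hi = 1, len(steps)
    first_bad = len(steps) + 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if prefix_value(steps, mid, generate_rollouts, is_correct, num_rollouts) == 0.0:
            first_bad = mid     # error lies at or before mid; search earlier
            hi = mid - 1
        else:
            lo = mid + 1        # prefix still solvable; any error lies later
    return first_bad

In this reading, the per-prefix value estimates gathered during the search can double as step-level labels for training a PRM, which is what makes the collection cheaper than estimating every step independently, as the abstract claims.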