Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning

Bibliographic Details
Published in: arXiv.org (Dec 20, 2024)
Main Author: Park, Sungjin
Other Authors: Liu, Xiao; Gong, Yeyun; Choi, Edward
Published: Cornell University Library, arXiv.org
Description
Abstract: Despite recent advances in large language models, open-source models often struggle to consistently perform well on complex reasoning tasks. Existing ensemble methods, whether applied at the token or output levels, fail to address these challenges. In response, we present Language model Ensemble with Monte Carlo Tree Search (LE-MCTS), a novel framework for process-level ensembling of language models. LE-MCTS formulates step-by-step reasoning with an ensemble of language models as a Markov decision process. In this framework, states represent intermediate reasoning paths, while actions consist of generating the next reasoning step using one of the language models selected from a predefined pool. Guided by a process-based reward model, LE-MCTS performs a tree search over the reasoning steps generated by different language models, identifying the most accurate reasoning chain. Experimental results on five mathematical reasoning benchmarks demonstrate that our approach outperforms both single language model decoding algorithms and language model ensemble methods. Notably, LE-MCTS improves performance by 3.6% and 4.3% on the MATH and MQA datasets, respectively, highlighting its effectiveness in solving complex reasoning problems.
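The abstract describes the core loop: partial reasoning chains act as states, each action appends one step proposed by one model from a pool, and a process reward model guides a Monte Carlo tree search toward the highest-scoring chain. The sketch below is an illustrative reconstruction of that idea only, not the authors' implementation; the `generators` callables, the `reward_model` scorer, the UCT constant, and the depth limit are placeholder assumptions.

```python
# Illustrative sketch of process-level LM ensembling with MCTS,
# loosely following the abstract's description of LE-MCTS.
# All components (generator callables, reward model, constants) are
# hypothetical stand-ins, not the paper's actual implementation.
import math
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    steps: tuple                      # state: the partial reasoning chain so far
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0                # accumulated process-reward signal

def uct(node, c=1.4):
    # Standard UCT score; unvisited children are explored first.
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def le_mcts(question, generators, reward_model, iterations=50, max_depth=6):
    """Search over reasoning steps produced by a pool of generators.

    generators   : list of callables (question, steps) -> next step string
    reward_model : callable (question, steps) -> float score in [0, 1]
    """
    root = Node(steps=())
    for _ in range(iterations):
        # 1. Selection: descend by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # 2. Expansion: each action = one model in the pool proposing the next step.
        if len(node.steps) < max_depth:
            for gen in generators:
                step = gen(question, node.steps)
                node.children.append(Node(steps=node.steps + (step,), parent=node))
            node = random.choice(node.children)
        # 3. Evaluation: score the partial chain with the process reward model.
        reward = reward_model(question, node.steps)
        # 4. Backpropagation: push the reward back up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited chain in the tree.
    best = max(root.children, key=lambda n: n.visits)
    while best.children:
        best = max(best.children, key=lambda n: n.visits)
    return best.steps
```

Under these assumptions, each entry in generators would wrap a different open-source language model prompted to emit one reasoning step at a time, and reward_model would wrap a trained process reward model; the paper's actual selection, expansion, and reward-aggregation rules may differ from this simplified sketch.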
ISSN: 2331-8422
Source: Engineering Database