Advances in Mathematical Reasoning with Large Language Models

Bibliographic Details
Published in: ITM Web of Conferences, vol. 80 (2025)
Main author: Zhao, Zihao
Publisher: EDP Sciences
Description
Abstract: Large Language Models (LLMs) have made impressive strides in understanding and generating natural language, but they still struggle with mathematical problem-solving, particularly tasks that require multi-step reasoning and precise calculation. This review surveys recent advances aimed at improving LLMs’ mathematical performance, focusing on two main approaches: refining inference methods and integrating external tools. Techniques such as Chain-of-Thought (CoT) prompting, Program-Aided Language Models (PAL), and Toolformer have improved LLMs’ ability to handle complex math problems. The tool-using approaches offload exact computation to external tools, such as Python interpreters or calculators, which has proven effective for problems in algebra, calculus, and other areas. Models such as Minerva and Llemma, which are pre-trained specifically on mathematical content, can solve more advanced problems, such as differential equations, without additional tools. Challenges remain, however, including reliance on external tools for exact calculation, difficulty with multi-step reasoning, and limited transparency in the models’ decision-making. Looking ahead, the integration of multi-modal capabilities, autonomous computation, and human feedback could further enhance LLMs’ mathematical abilities. With continued improvement, LLMs could transform problem-solving in fields such as education, research, and finance.
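
To make the tool-use idea concrete, here is a minimal, self-contained sketch of the PAL pattern in Python: the model writes its reasoning as an executable program, and the interpreter, not the model, performs the exact arithmetic. The LLM call is stubbed with a canned response so the sketch runs without any API; fake_llm, solve_with_pal, and the example word problem are illustrative, not taken from the paper under review.

    # Minimal PAL-style sketch: the LLM emits Python, the host executes it,
    # so exact arithmetic is done by the interpreter rather than the model.

    def fake_llm(prompt: str) -> str:
        """Stand-in for an LLM that answers a word problem with Python code."""
        return (
            "def solution():\n"
            "    # 5 balls to start, plus 2 cans of 3 balls each\n"
            "    return 5 + 2 * 3\n"
        )

    def solve_with_pal(question: str) -> int:
        """Ask the (stubbed) model for a program, then run it for an exact answer."""
        program = fake_llm("Write a Python function solution() that solves:\n" + question)
        namespace = {}
        exec(program, namespace)        # define the model-generated solution()
        return namespace["solution"]()  # the interpreter computes the result

    if __name__ == "__main__":
        q = "Roger has 5 tennis balls and buys 2 cans of 3 balls each. How many balls now?"
        print(solve_with_pal(q))  # prints 11

The same division of labor underlies Toolformer-style calculator calls: the model decides what to compute and delegates the computation itself to a trusted tool.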
ISSN: 2431-7578, 2271-2097
DOI: 10.1051/itmconf/20258001030
Source: Advanced Technologies & Aerospace Database