Advances in Mathematical Reasoning with Large Language Models

Bibliographic Details
Published in: ITM Web of Conferences vol. 80 (2025)
Main author: Zhao, Zihao
Publication: EDP Sciences
Subjects: Problem solving; Large language models; Differential equations; Natural language processing; Reasoning
Online access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3284871302
003 UK-CbPIL
022 |a 2431-7578 
022 |a 2271-2097 
024 7 |a 10.1051/itmconf/20258001030  |2 doi 
035 |a 3284871302 
045 2 |b d20250101  |b d20251231 
084 |a 268430  |2 nlm 
100 1 |a Zhao, Zihao 
245 1 |a Advances in Mathematical Reasoning with Large Language Models 
260 |b EDP Sciences  |c 2025 
513 |a Conference Proceedings 
520 3 |a Large Language Models (LLMs) have made impressive strides in understanding and generating natural language, but they still struggle with mathematical problem-solving, especially when it comes to tasks that require multi-step reasoning and precise calculations. This review looks at recent advancements aimed at improving LLMs’ performance in math, focusing on two main approaches: refining inference methods and adding external tools. Techniques like Chain-of-Thought (CoT) prompting, Program-Aided Language Models (PAL), and Toolformer have helped improve LLMs’ ability to handle complex math problems. These models rely on external tools, such as Python interpreters or calculators, to perform precise calculations, which has proven effective for solving problems in algebra, calculus, and other areas. Models like Minerva and Llemma, which are pre-trained specifically on mathematical content, can solve more advanced problems like differential equations without needing additional tools. However, challenges still exist, such as the reliance on external tools for exact calculations, difficulties with multi-step reasoning, and limited transparency in the models’ decision-making processes. Looking ahead, the integration of multi-modal capabilities, autonomous computation, and human feedback could further enhance LLMs’ mathematical abilities. With continued improvements, LLMs could transform problem-solving in fields like education, research, and finance. 
653 |a Problem solving 
653 |a Large language models 
653 |a Differential equations 
653 |a Natural language processing 
653 |a Reasoning 
773 0 |t ITM Web of Conferences  |g vol. 80 (2025) 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3284871302/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3284871302/fulltextPDF/embedded/H09TXR3UUZB2ISDL?source=fedsrch
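
As a rough illustration of the program-aided (PAL-style) approach summarized in the abstract, the sketch below shows how a model-generated Python program might hand exact arithmetic off to an interpreter instead of computing it in text. The word problem, the generated_program string, and the execution harness are hypothetical examples, not taken from the paper under review.

# Minimal sketch of program-aided reasoning (PAL-style), assuming the LLM has
# been prompted to answer a word problem by emitting Python code rather than a
# final number. The problem and generated code below are hypothetical.

word_problem = (
    "A shop sells pens at 3 dollars each. If you buy 12 pens and pay with "
    "a 50-dollar bill, how much change do you receive?"
)

# In a real pipeline this string would come from the language model; here it
# is written by hand to show the shape of output the abstract describes.
generated_program = """
pens = 12
price_per_pen = 3
paid = 50
total_cost = pens * price_per_pen
answer = paid - total_cost
"""

# The host executes the generated code and reads off `answer`, so the exact
# arithmetic is performed by the Python interpreter, not by the model itself.
namespace = {}
exec(generated_program, namespace)
print(f"Q: {word_problem}")
print(f"A: {namespace['answer']} dollars")  # -> 14 dollars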