Reducing Reasoning Costs -- The Path of Optimization for Chain of Thought via Sparse Attention Mechanism

Saved in:
Bibliographic Details
Published in: arXiv.org (Dec 11, 2024), p. n/a
Main Author: Wang, Libo
Published:
Cornell University Library, arXiv.org
Subjects: Large language models, Linear algebra, Reasoning
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 3128887362
003 UK-CbPIL
022 |a 2331-8422 
035 |a 3128887362 
045 0 |b d20241211 
100 1 |a Wang, Libo 
245 1 |a Reducing Reasoning Costs -- The Path of Optimization for Chain of Thought via Sparse Attention Mechanism 
260 |b Cornell University Library, arXiv.org  |c Dec 11, 2024 
513 |a Working Paper 
520 3 |a To address the surge in inference costs that chain-of-thought reasoning causes in large language models, this research proposes a sparse attention mechanism that focuses only on a few relevant tokens. The researcher constructed the new attention mechanism and used GiantRabbit, trained with custom GPTs, as an experimental tool. The experiment compared the reasoning time, correctness score, and chain-of-thought length of this model and o1 Preview on linear algebra test questions from MIT OpenCourseWare. The results show that GiantRabbit's reasoning time and chain-of-thought length are significantly lower than o1 Preview's, verifying the feasibility of the sparse attention mechanism for optimizing chain-of-thought reasoning. Architectural details and the experimental process have been uploaded to GitHub at https://github.com/brucewang123456789/GeniusTrail.git. 
653 |a Large language models 
653 |a Linear algebra 
653 |a Reasoning 
773 0 |t arXiv.org  |g (Dec 11, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3128887362/abstract/embedded/6A8EOT78XXH2IG52?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2411.09111
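The abstract (field 520) describes a sparse attention mechanism that attends only to a few relevant tokens per query. The following is a minimal illustrative sketch of one common way to realize that idea, top-k sparse scaled dot-product attention; it is an assumption for illustration, not the paper's GiantRabbit implementation, and the function and parameter names (topk_sparse_attention, k) are hypothetical.

import numpy as np

def topk_sparse_attention(Q, K, V, k=8):
    """Scaled dot-product attention that keeps only the top-k scores per query.

    Q: (n_q, d), K: (n_k, d), V: (n_k, d_v). Each query attends to at most
    k key tokens. Shapes and the top-k selection rule are illustrative
    assumptions, not the paper's specification.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                # (n_q, n_k) raw attention scores
    k = min(k, scores.shape[-1])
    # Indices of the k largest scores in each query row (unordered).
    topk_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]
    # Mask every non-top-k position to -inf so softmax gives it ~0 weight.
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, topk_idx, 0.0, axis=-1)
    masked = scores + mask
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                           # (n_q, d_v) attended output

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(4, 16))
    K = rng.normal(size=(32, 16))
    V = rng.normal(size=(32, 16))
    out = topk_sparse_attention(Q, K, V, k=4)
    print(out.shape)  # (4, 16)

Restricting each query to its k highest-scoring keys is one standard way to sparsify attention; the paper's actual mechanism and its integration into GiantRabbit are documented in the linked GitHub repository.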