Getting More out of Large Language Models for Proofs

Saved in:
Bibliographic Details
Published in: arXiv.org (May 31, 2023), p. n/a
Main Author: Zhang, Shizhuo Dylan
Other Authors: Ringer, Talia; First, Emily
Published:
Cornell University Library, arXiv.org
Subjects:
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2811359378
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2811359378 
045 0 |b d20230531 
100 1 |a Zhang, Shizhuo Dylan 
245 1 |a Getting More out of Large Language Models for Proofs 
260 |b Cornell University Library, arXiv.org  |c May 31, 2023 
513 |a Working Paper 
520 3 |a Large language models have the potential to simplify formal theorem proving and make it more accessible. But how to get the most out of these models is still an open question. To answer this question, we take a step back and explore the failure cases of these models using common prompting-based techniques. Our talk will discuss these failure cases and what they can teach us about how to get more out of these models. 
653 |a Questions 
653 |a Large language models 
700 1 |a Ringer, Talia 
700 1 |a First, Emily 
773 0 |t arXiv.org  |g (May 31, 2023), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2811359378/abstract/embedded/75I98GEZK8WCJMPQ?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2305.04369