Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study

Saved in:
Bibliographic Details
Published in: arXiv.org (Mar 22, 2024), p. n/a
Main Author: Tim van Dam
Other Authors: van der Heijden, Frank, de Bekker, Philippe, Nieuwschepen, Berend, Otten, Marc, Izadi, Maliheh
Published:
Cornell University Library, arXiv.org
Subjects: Datasets, Programming languages, Performance evaluation, Functional programming, Imperative programming
Online Access: Citation/Abstract
Full text outside of ProQuest

MARC

LEADER 00000nab a2200000uu 4500
001 2982185529
003 UK-CbPIL
022 |a 2331-8422 
035 |a 2982185529 
045 0 |b d20240322 
100 1 |a Tim van Dam 
245 1 |a Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study 
260 |b Cornell University Library, arXiv.org  |c Mar 22, 2024 
513 |a Working Paper 
520 3 |a Language model-based code completion models have quickly grown in use, helping thousands of developers write code in many different programming languages. However, research on code completion models typically focuses on imperative languages such as Python and JavaScript, which results in a lack of representation for functional programming languages. Consequently, these models often perform poorly on functional languages such as Haskell. To investigate whether this can be alleviated, we evaluate the performance of two language models for code, CodeGPT and UniXcoder, on the functional programming language Haskell. We fine-tune and evaluate the models on Haskell functions sourced from a publicly accessible Haskell dataset on HuggingFace. Additionally, we manually evaluate the models using our novel translated HumanEval dataset. Our automatic evaluation shows that knowledge of imperative programming languages in the pre-training of LLMs may not transfer well to functional languages, but that code completion on functional languages is feasible. Consequently, this shows the need for more high-quality Haskell datasets. A manual evaluation on HumanEval-Haskell indicates CodeGPT frequently generates empty predictions and extra comments, while UniXcoder more often produces incomplete or incorrect predictions. Finally, we release HumanEval-Haskell, along with the fine-tuned models and all code required to reproduce our experiments on GitHub (https://github.com/AISE-TUDelft/HaskellCCEval). 
653 |a Datasets 
653 |a Programming languages 
653 |a Performance evaluation 
653 |a Functional programming 
653 |a Imperative programming 
700 1 |a van der Heijden, Frank 
700 1 |a de Bekker, Philippe 
700 1 |a Nieuwschepen, Berend 
700 1 |a Otten, Marc 
700 1 |a Izadi, Maliheh 
773 0 |t arXiv.org  |g (Mar 22, 2024), p. n/a 
786 0 |d ProQuest  |t Engineering Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/2982185529/abstract/embedded/H09TXR3UUZB2ISDL?source=fedsrch 
856 4 0 |3 Full text outside of ProQuest  |u http://arxiv.org/abs/2403.15185