HITgram: A Platform for Experimenting with n-gram Language Models

Bibliographic details
Published in: arXiv.org (Dec 14, 2024)
Main author: Dasgupta, Shibaranjani
Other authors: Maity, Chandan; Mukherjee, Somdip; Singh, Rohan; Dutta, Diptendu; Jana, Debasish
Published: Cornell University Library, arXiv.org
Online access: Citation/Abstract (full text available outside of ProQuest)
Description
Abstract: Large language models (LLMs) are powerful but resource-intensive, limiting accessibility. HITgram addresses this gap by offering a lightweight platform for n-gram model experimentation, ideal for resource-constrained environments. It supports unigrams to 4-grams and incorporates features like context-sensitive weighting, Laplace smoothing, and dynamic corpus management to enhance prediction accuracy, even for unseen word sequences. Experiments demonstrate HITgram's efficiency, achieving 50,000 tokens/second and generating 2-grams from a 320 MB corpus in 62 seconds. HITgram scales efficiently, constructing 4-grams from a 1 GB file in under 298 seconds on an 8 GB RAM system. Planned enhancements include multilingual support, advanced smoothing, parallel processing, and model saving, further broadening its utility.
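
The Laplace smoothing the abstract mentions is the standard add-one estimate: an n-gram's probability is (count + 1) / (context count + |V|), so unseen word sequences receive a small nonzero probability rather than zero. The following is a minimal Python sketch of that technique only; it is an illustration, not HITgram's own code, and all function and variable names here are hypothetical.

    from collections import Counter

    def train_ngrams(tokens, n=2):
        # Count n-grams and their (n-1)-gram contexts from a token list.
        grams = Counter(tuple(tokens[i:i + n])
                        for i in range(len(tokens) - n + 1))
        contexts = Counter(tuple(tokens[i:i + n - 1])
                           for i in range(len(tokens) - n + 2))
        return grams, contexts

    def laplace_prob(word, context, grams, contexts, vocab_size):
        # Add-one (Laplace) smoothing: unseen n-grams get a small
        # nonzero probability instead of zero.
        return (grams[context + (word,)] + 1) / (contexts[context] + vocab_size)

    tokens = "the cat sat on the mat the cat ran".split()
    grams, contexts = train_ngrams(tokens, n=2)
    V = len(set(tokens))
    print(laplace_prob("sat", ("cat",), grams, contexts, V))   # seen bigram: 0.25
    print(laplace_prob("flew", ("cat",), grams, contexts, V))  # unseen: 0.125

Because every probability under this estimate is strictly positive, a predictor built on it can still rank candidate next words after a context-word pair that never appeared in the corpus, which is the behavior the abstract attributes to smoothing.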
ISSN:2331-8422
Source: Engineering Database