HYLR-FO: Hybrid Approach Using Language Models and Rule-Based Systems for On-Device Food Ordering

Bibliographic Details
Published in: Electronics vol. 14, no. 4 (2025), p. 775
Main Author: Yang, Subhin
Other Authors: Kim, Donghwan; Lee, Sungju
Publisher: MDPI AG
Online Access: Citation/Abstract
Full Text + Graphics
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3171004772
003 UK-CbPIL
022 |a 2079-9292 
024 7 |a 10.3390/electronics14040775  |2 doi 
035 |a 3171004772 
045 2 |b d20250101  |b d20251231 
084 |a 231458  |2 nlm 
100 1 |a Yang, Subhin 
245 1 |a HYLR-FO: Hybrid Approach Using Language Models and Rule-Based Systems for On-Device Food Ordering 
260 |b MDPI AG  |c 2025 
513 |a Journal Article 
520 3 |a Recent research has explored combining large language models (LLMs) with speech recognition for various services, but such applications require a strong network environment for quality service delivery. For on-device services, which do not rely on networks, resource limitations must be considered. This study proposes HYLR-FO, an efficient model that integrates a smaller language model (LM) and a rule-based system (RBS) to enable fast and reliable voice-based order processing in resource-constrained environments, approximating the performance of LLMs. By considering potential error scenarios and leveraging flexible natural language processing (NLP) and inference validation, this approach ensures both efficiency and robustness in order execution. Smaller LMs are used instead of LLMs to reduce resource usage. The LM transforms speech input, received via automatic speech recognition (ASR), into a consistent form that can be processed by the RBS. The RBS then extracts the order and validates the extracted information. The experimental results show that HYLR-FO, trained and tested on 5000 order data samples, achieves up to 86% accuracy, comparable to the 90% accuracy of LLMs. Additionally, HYLR-FO achieves a processing speed of up to 55 orders per second, significantly outperforming LLM-based approaches, which handle only 1.14 orders per second. This results in a 48.25-fold improvement in processing speed in resource-constrained environments. This study demonstrates that HYLR-FO provides faster processing and achieves accuracy similar to LLMs in resource-constrained on-device environments. This finding has theoretical implications for optimizing LM efficiency in constrained settings and practical implications for real-time low-resource AI applications. Specifically, the design of HYLR-FO suggests its potential for efficient deployment in various commercial environments, achieving fast response times and low resource consumption with smaller models. 
653 |a Language 
653 |a Accuracy 
653 |a Text categorization 
653 |a Deep learning 
653 |a Large language models 
653 |a Voice recognition 
653 |a Natural language processing 
653 |a Reaction time 
653 |a Speech recognition 
653 |a Language modeling 
653 |a Real time 
653 |a Constraints 
653 |a Automatic speech recognition 
653 |a Speech 
653 |a Chatbots 
653 |a Order processing 
653 |a Inference 
653 |a Robustness 
653 |a Deployment 
653 |a Acknowledgment 
700 1 |a Kim, Donghwan 
700 1 |a Lee, Sungju 
773 0 |t Electronics  |g vol. 14, no. 4 (2025), p. 775 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3171004772/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text + Graphics  |u https://www.proquest.com/docview/3171004772/fulltextwithgraphics/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3171004772/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch