Sign language interpretation using machine learning and artificial intelligence

Bibliographic details
Published in: Neural Computing & Applications vol. 37, no. 2 (Jan 2025), p. 841
Published by:
Springer Nature B.V.
Subjects:
Online access: Citation/Abstract
Full Text - PDF

MARC

LEADER 00000nab a2200000uu 4500
001 3159007326
003 UK-CbPIL
022 |a 0941-0643 
022 |a 1433-3058 
024 7 |a 10.1007/s00521-024-10395-9  |2 doi 
035 |a 3159007326 
045 2 |b d20250101  |b d20250131 
245 1 |a Sign language interpretation using machine learning and artificial intelligence 
260 |b Springer Nature B.V.  |c Jan 2025 
513 |a Journal Article 
520 3 |a Sign language is the primary way for deaf and mute people to express their needs and feelings. Most hearing people do not understand sign language, which creates many difficulties for deaf-mute people's communication in social life. Sign language interpretation systems and applications have received considerable attention in recent years. In this paper, we review sign language recognition and interpretation studies based on machine learning, image processing, artificial intelligence, and animation tools. The two reverse processes of sign language interpretation are illustrated. This study discusses recent research on translating sign language to text and speech with the help of hand gestures, facial expression interpretation, and lip reading. The state of the art in speech-to-sign-language translation is also discussed. In addition, some of the popular and highly rated Android and Apple mobile applications that facilitate communication for disabled people are presented. This paper clarifies and highlights recent research and real-world applications that help deaf-mute people, and it tries to provide a link between research proposals and deployed applications. This link can help cover any gaps or unhandled functionality in the applications currently in use. Based on our study, we introduce a proposal involving a set of functionalities/options that have been introduced and discussed separately in recent research studies; these research directions should be integrated to provide more practical help. A set of unaddressed research directions is also suggested for future work. 
653 |a Speech 
653 |a Machine learning 
653 |a Computer animation 
653 |a Artificial intelligence 
653 |a Sign language 
653 |a Applications programs 
653 |a Deafness 
653 |a Lip reading 
653 |a Mobile computing 
653 |a Software 
653 |a Translation 
653 |a People with disabilities 
653 |a Image processing 
653 |a Language translation 
653 |a Lipreading 
653 |a Facial expressions 
653 |a Gestures 
653 |a Language acquisition 
653 |a Mobile communication systems 
653 |a Application 
653 |a Social life & customs 
653 |a Communication 
653 |a Research 
653 |a Research applications 
773 0 |t Neural Computing & Applications  |g vol. 37, no. 2 (Jan 2025), p. 841 
786 0 |d ProQuest  |t Advanced Technologies & Aerospace Database 
856 4 1 |3 Citation/Abstract  |u https://www.proquest.com/docview/3159007326/abstract/embedded/L8HZQI7Z43R0LA5T?source=fedsrch 
856 4 0 |3 Full Text - PDF  |u https://www.proquest.com/docview/3159007326/fulltextPDF/embedded/L8HZQI7Z43R0LA5T?source=fedsrch