5G Enabled Dual Vision and Speech Enhancement Architecture for Multimodal Hearing-Aids

Published in: Electronics, vol. 13, no. 13 (2024), p. 2588
Main author: Ni, Xianpo
Other authors: Yang, Cen; Tyagi, Tushar; Enemali, Godwin; Arslan, Tughrul
Published: MDPI AG
Description
Abstract: This paper presents the algorithmic framework for a multimodal hearing aid (HA) prototype designed on a Field Programmable Gate Array (FPGA), specifically the AMD RFSoC 4x2 FPGA, and evaluates transmitter performance through simulation studies. The proposed architecture integrates audio and video inputs, processes them with advanced algorithms, and uses the 5G New Radio (NR) communication protocol to upload the processed signal to the cloud. The core transmission relies on Orthogonal Frequency Division Multiplexing (OFDM), which multiplexes the processed signals onto orthogonal subcarriers, improving bandwidth efficiency and reducing interference. The design is divided into modules such as the sounding reference signal (SRS), demodulation reference signal (DMRS), physical broadcast channel (PBCH), and physical uplink shared channel (PUSCH). The modulation algorithm has been optimized for the FPGA's parallel processing capabilities, meeting the hearing aid's low-latency requirements. The optimized algorithm achieves a transmission time of only 4.789 ms and uses fewer hardware resources, enhancing performance in a cost-effective and energy-efficient manner.
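The OFDM step the abstract describes can be sketched in software: data symbols are placed on orthogonal subcarriers via an inverse DFT, and a cyclic prefix is prepended to absorb channel delay spread. The sketch below is purely illustrative (pure-Python, QPSK mapping, toy sizes) and is not the paper's FPGA implementation; all function names and parameters are assumptions.

```python
import cmath

def qpsk_map(bits):
    # Map bit pairs to unit-energy Gray-coded QPSK symbols.
    s = 2 ** -0.5
    table = {(0, 0): s + s * 1j, (0, 1): -s + s * 1j,
             (1, 1): -s - s * 1j, (1, 0): s - s * 1j}
    return [table[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def idft(X):
    # Inverse DFT: each input symbol X[k] rides on subcarrier k.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def dft(x):
    # Forward DFT: projects onto the orthogonal subcarriers, recovering X[k].
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def ofdm_modulate(symbols, cp_len=4):
    # Time-domain OFDM symbol = IDFT of the subcarrier symbols,
    # with the last cp_len samples copied to the front (cyclic prefix).
    time = idft(symbols)
    return time[-cp_len:] + time

def ofdm_demodulate(samples, n_sc, cp_len=4):
    # Strip the cyclic prefix, then DFT back to subcarrier symbols.
    return dft(samples[cp_len:cp_len + n_sc])

bits = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0]
tx = qpsk_map(bits)                          # 8 QPSK symbols
rx = ofdm_demodulate(ofdm_modulate(tx), len(tx))
```

Over an ideal channel the demodulated symbols match the transmitted ones to floating-point precision, which is the orthogonality property the abstract credits with reducing inter-carrier interference. On the FPGA, the IDFT/DFT would be a hardware IFFT/FFT exploiting the parallelism the abstract mentions.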
ISSN:2079-9292
DOI:10.3390/electronics13132588
Source: Advanced Technologies & Aerospace Database