Thesis Open Access
EYOB BOKRU BERHE
Unlike ASL and other sign language alphabets, ESL fingerspelling is distinguished primarily by both hand shape and hand motion. ESL fingerspelling represents consonant series with hand configurations, while seven movements correspond to the seven vowels, which makes Ethiopian Sign Language dynamic. In dynamic sign languages, motion detection is essential for recognition. In this work, an ESL fingerspelling recognition approach that fuses computer vision and motion sensing is proposed: motion-sensor data and deep learning are used to recognize and classify the vowels, while computer vision is used to recognize the consonants. Using a large dataset of Ethiopian manual sign language alphabets collected from multiple subjects, two deep neural networks were selected and a best topology was derived for each. The vision model is implemented with YOLOv5, and the smartwatch motion-sensor model is implemented with a bidirectional LSTM (Bi-LSTM) network. Several experiments were conducted to obtain the most accurate result. The proposed YOLOv5 and Bi-LSTM models achieved 98.5% and 97% accuracy, respectively, with an inference time of less than 2 ms. These results indicate that the proposed decoupled model is effective for real-time EMA classification.
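To illustrate the motion branch of the decoupled design, below is a minimal sketch of a Bi-LSTM vowel-motion classifier, assuming PyTorch and 6-axis smartwatch IMU input (accelerometer + gyroscope) windowed into fixed-length sequences; the layer sizes, window length, and sampling rate are illustrative assumptions, not the thesis's exact configuration, and the consonant branch (a YOLOv5 detector over camera frames) is omitted here.

```python
# Sketch of a Bi-LSTM vowel-motion classifier (hypothetical configuration):
# 6-axis smartwatch IMU windows in, scores over the seven ESL vowel movements out.
import torch
import torch.nn as nn


class VowelMotionBiLSTM(nn.Module):
    """Classifies a motion-sensor window into one of the seven ESL vowel movements."""

    def __init__(self, n_features: int = 6, hidden_size: int = 64,
                 n_layers: int = 2, n_classes: int = 7):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_features,
            hidden_size=hidden_size,
            num_layers=n_layers,
            batch_first=True,
            bidirectional=True,   # Bi-LSTM: forward and backward passes over the window
        )
        # 2 * hidden_size because forward and backward hidden states are concatenated.
        self.classifier = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. a 2-second IMU window at 50 Hz.
        out, _ = self.lstm(x)
        # Use the last time step's representation for classification.
        return self.classifier(out[:, -1, :])


if __name__ == "__main__":
    model = VowelMotionBiLSTM()
    window = torch.randn(8, 100, 6)   # batch of 8 synthetic IMU windows
    logits = model(window)            # (8, 7) scores over the seven vowels
    print(logits.argmax(dim=1))       # predicted vowel index per window
```

At inference time, such a classifier would run alongside the vision model, with each sign's consonant taken from the hand-shape detector and its vowel from the motion classifier.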
Name | Size |
---|---|
f1042664640.pdf (md5:d45cbd5fd89e1d88df48cadeabb670f7) | 647.9 kB |