An Efficient Model for Lip-reading in Persian Language Based on Visual Word and Fast Fourier Transform Combined with Neural Network



Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, Iran


Automatic lip-reading plays an important role in human-computer interaction in noisy environments where audio speech recognition may be difficult. However, like speech recognition, lip-reading systems face several challenges due to variations in the input, such as facial features, skin color, speaking speed, and intensity. In this study, a new method is proposed for extracting features from a video of certain spoken Persian words without any audio signal. The method is based on the fast Fourier transform combined with the color specification of the frames in the recorded video of the spoken word. To improve system performance, the visual word has been used as the shortest element of visual speech. Five speakers, three men and two women, participated in recording the videos of the spoken words. After the features were obtained from the videos, an artificial neural network was employed as the classifier. The experimental results show an average accuracy of about 86.8% in recognizing 31 Persian words.
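The abstract describes FFT-based features computed from the video frames of a spoken word. The paper does not give implementation details, so the sketch below is only a hypothetical illustration of the general idea: take the 2D FFT magnitude of each (grayscale) lip-region frame, keep a low-frequency block that captures coarse mouth shape, and average over the frames to get one feature vector per visual word. The function names and the `keep` parameter are assumptions, not the authors' method.

```python
import numpy as np

def fft_frame_features(frame, keep=8):
    """Compact per-frame features: low-frequency 2D FFT magnitudes.

    `frame` is a 2D array (grayscale lip region). We shift the zero
    frequency to the center and keep the central keep x keep block.
    """
    spectrum = np.abs(np.fft.fft2(frame))
    centered = np.fft.fftshift(spectrum)
    h, w = centered.shape
    cy, cx = h // 2, w // 2
    block = centered[cy - keep // 2 : cy + keep // 2,
                     cx - keep // 2 : cx + keep // 2]
    return block.ravel()

def word_features(frames, keep=8):
    # Average per-frame FFT features over the video: one vector per word,
    # which could then be fed to a neural-network classifier.
    return np.mean([fft_frame_features(f, keep) for f in frames], axis=0)

# Toy usage: 10 frames of a 32x32 lip region
frames = [np.random.rand(32, 32) for _ in range(10)]
feat = word_features(frames)
print(feat.shape)  # (64,)
```

A fixed-length vector like this is what makes a plain feed-forward neural network usable as the classifier, since the network expects inputs of a constant size regardless of how many frames the spoken word spans.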

