Indian Sign Language Detection and Translation Using Deep Learning
Synopsis
About 63 million people in India use Indian Sign Language (ISL) as their natural means of communication. However, significant communication barriers exist between hearing-impaired people and the general population, particularly in domains such as education, healthcare, and law, which often require professional interpreters. This language gap creates social, academic, and professional challenges for the hearing-impaired community. Recent progress in deep learning, especially architectures based on Convolutional Neural Networks (CNNs) and Transformers, has demonstrated promising results in sign language recognition. These models can achieve high accuracy and robustness, making them well suited to bridging this communication gap. This project aims to develop and optimize deep learning-based sign language recognition models using the INCLUDE dataset, a standardized resource for ISL gestures. A systematic comparison and evaluation of the performance of different models will be carried out on exactly the same data. This research thereby contributes to work on sign language recognition, pointing toward future real-time translation and communication systems for hearing-impaired people based on accurate recognition of ISL gestures. Ultimately, it aims to improve accessibility and promote inclusivity in a society where communication barriers still exist for the hearing-impaired.
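As a minimal sketch of the kind of model the synopsis describes, the following combines a per-frame CNN encoder with a Transformer encoder over the frame sequence, then classifies the pooled representation into sign classes. All layer sizes, the frame resolution, and the class count are illustrative placeholders, not values taken from the INCLUDE dataset or from this project.

```python
# Hypothetical CNN + Transformer sign-recognition sketch (PyTorch).
# Sizes and class count are illustrative assumptions, not project values.
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    def __init__(self, num_classes=100, embed_dim=128):
        super().__init__()
        # Small CNN applied independently to every video frame.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Transformer encoder models temporal dependencies across frames.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, video):  # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        # Encode each frame, then restore the (batch, frames) layout.
        feats = self.frame_encoder(video.flatten(0, 1)).view(b, t, -1)
        feats = self.temporal(feats)
        # Mean-pool over time before classifying the whole clip.
        return self.classifier(feats.mean(dim=1))

model = SignRecognizer()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(tuple(logits.shape))
```

In a real pipeline, the frame encoder would typically be a pretrained backbone (e.g. a ResNet) rather than this tiny CNN, and the clip would be sampled to a fixed number of frames before batching.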
Keywords: Deep Learning, Gesture Recognition, Indian Sign Language