Abstract
Sign language is an essential means of communication for hearing-impaired people, yet it is difficult for hearing people to understand, and this gap has created communication barriers across many aspects of society. Since recruiting a sign language interpreter for every hearing-impaired person is clearly infeasible, improving communication effectiveness through recent research in haptics, motion capture, and face recognition is a promising and practical alternative. In this paper, we review a number of previous sign language recognition methods based on different approaches, and identify several techniques that may improve the effectiveness of the communication pipeline between hearing-impaired and hearing people. These techniques can be combined into a comprehensive communication pipeline and serve as a foundation for further research on communication between hearing-impaired and hearing people.
© 2018 Springer International Publishing AG
Cite this paper
Wei, L., Zhou, H., Shi, J., Nahavandi, S. (2018). Improve Communication Efficiency Between Hearing-Impaired and Hearing People - A Review. In: Qiao, F., Patnaik, S., Wang, J. (eds) Recent Developments in Mechatronics and Intelligent Robotics. ICMIR 2017. Advances in Intelligent Systems and Computing, vol 691. Springer, Cham. https://doi.org/10.1007/978-3-319-70990-1_38
Print ISBN: 978-3-319-70989-5
Online ISBN: 978-3-319-70990-1