
Improve Communication Efficiency Between Hearing-Impaired and Hearing People - A Review

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 691)

Abstract

Sign languages are among the most essential communication tools for hearing-impaired people, yet they are difficult for hearing people to understand, and this gap has created communication barriers across many aspects of our society. Since recruiting a sign language interpreter for every hearing-impaired person is clearly not feasible, improving communication effectiveness through up-to-date research in haptics, motion capture, and face recognition is both promising and practical. In this paper, we review a number of previous approaches to sign language recognition and identify several techniques that may improve the effectiveness of the communication pipeline between hearing-impaired and hearing people. These techniques can be fitted into a comprehensive communication pipeline and serve as a foundation for further research on communication between hearing-impaired and hearing people.



Author information


Correspondence to Junsheng Shi.


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Wei, L., Zhou, H., Shi, J., Nahavandi, S. (2018). Improve Communication Efficiency Between Hearing-Impaired and Hearing People - A Review. In: Qiao, F., Patnaik, S., Wang, J. (eds) Recent Developments in Mechatronics and Intelligent Robotics. ICMIR 2017. Advances in Intelligent Systems and Computing, vol 691. Springer, Cham. https://doi.org/10.1007/978-3-319-70990-1_38


  • DOI: https://doi.org/10.1007/978-3-319-70990-1_38


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70989-5

  • Online ISBN: 978-3-319-70990-1

  • eBook Packages: Engineering (R0)
