The Linguistic Association of Korea

Journal


Paper Archive

Title: Challenges in Deep Learning-Based Analysis of Korean Sign Language: Through the Lens of American Sign Language Research
Author: Yong-hun Lee
Volume/Issue: Vol. 33, No. 2
Pages: 115-135
Publication Date: 2025.06.30
Abstract: Lee, Yong-hun. (2025). Challenges in deep learning-based analysis of Korean sign language: Through the lens of American sign language research. The Linguistic Association of Korea Journal, 33(2), 115-135. Sign language is a fully developed linguistic system using visual-gestural elements such as hand movements, facial expressions, and spatial organization. While deep learning has advanced American Sign Language (ASL) research, applying these methods to Korean Sign Language (KSL) remains challenging because of KSL's classifier predicates, spatial referencing, and topic-comment structures. This paper critically reviews ASL-based deep learning in Sign Language Recognition (SLR), Sign Language Production (SLP), and Sign Language Translation (SLT) to assess how these methods can be adapted for KSL. In this review, SLR covers the automatic recognition of sign sequences from visual input, SLP addresses the generation of natural sign gestures from text or speech, and SLT focuses on translating between sign and spoken languages. Methodologically, we conduct a comparative literature review of state-of-the-art deep learning models, analyzing their architectures, training strategies, and evaluation metrics within each subfield (SLR, SLP, SLT). We examine linguistic differences between ASL and KSL, noting difficulties in gesture synthesis, spatial modeling, and non-manual feature integration. We highlight the limitations of direct ASL-to-KSL model transfer and propose multi-modal learning, expanded datasets, and enhanced spatial encoding to advance KSL processing technologies.
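
Note: To make the SLR task described in the abstract concrete, the sketch below shows one common formulation: classifying an isolated sign from a sequence of per-frame body/hand keypoints. This is a minimal illustrative sketch in PyTorch, not a model from the paper; all names, dimensions, and hyperparameters (KeypointSLR, num_keypoints=54, hidden=256, num_classes=100) are hypothetical assumptions for exposition only.

    # Minimal sketch of keypoint-based isolated sign language recognition (SLR).
    # Assumes keypoints are extracted per frame by an off-the-shelf pose estimator.
    import torch
    import torch.nn as nn

    class KeypointSLR(nn.Module):
        """Classifies a sign from a sequence of 2D keypoints.

        Input:  (batch, frames, num_keypoints * 2) pose coordinates.
        Output: (batch, num_classes) logits over a sign vocabulary.
        """

        def __init__(self, num_keypoints=54, hidden=256, num_classes=100):
            super().__init__()
            self.proj = nn.Linear(num_keypoints * 2, hidden)
            # A bidirectional GRU models the temporal dynamics of the gesture.
            self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(hidden * 2, num_classes)

        def forward(self, keypoints):
            x = self.proj(keypoints)   # embed per-frame pose vectors
            out, _ = self.rnn(x)       # temporal encoding over frames
            pooled = out.mean(dim=1)   # average-pool over the time axis
            return self.head(pooled)   # class logits

    # Usage: a batch of 8 clips, 60 frames each, 54 (x, y) keypoints per frame.
    model = KeypointSLR()
    clips = torch.randn(8, 60, 54 * 2)
    logits = model(clips)              # shape: (8, 100)

Such a frame-sequence formulation also suggests where the abstract's KSL-specific concerns enter: spatial referencing and classifier predicates would require richer spatial encodings than raw pooled keypoints, and non-manual features (e.g., facial expressions) would motivate the multi-modal inputs the paper proposes.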