The Linguistic Association of Korea

Title: Adversarial Example-Based Evaluation of How Language Models Understand Korean Case Alternation
Authors: Song Sanghoun, Noh Kang San, Park Kwonsik, Shin Un-sub, Hwang Dongjin
Volume/Issue: Vol. 30, No. 1
Pages: 45-72
Date of publication: 2022-03-31
Abstract: Song, Sanghoun; Noh, Kang San; Park, Kwonsik; Shin, Un-sub & Hwang, Dongjin. (2022). Adversarial example-based evaluation of how language models understand Korean case alternation. The Linguistic Association of Korea Journal, 30(1), 45-72.

In the field of deep learning-based language understanding, adversarial examples are deliberately constructed data examples that differ only slightly from the original examples. The contrasts between the original and adversarial examples are barely perceptible to human readers, but the perturbation has a detrimental effect on the performance of machines. Adversarial examples therefore facilitate assessing whether and how robustly a specific deep learning architecture (e.g., a language model) works. Out of the multiple layers of linguistic structure, this study focuses on a morpho-syntactic phenomenon in Korean, namely, case alternation. We created a set of adversarial examples involving case alternation and then tested the morpho-syntactic ability of neural language models. We extracted the instances of case alternation from the Sejong Electronic Dictionary and made use of mBERT and KR-BERT as the language models. The results (measured by means of surprisal) indicate that the language models are unexpectedly good at discerning case alternation in Korean. In addition, it turns out that the Korean-specific language model performs better than the multilingual model. These results imply that an in-depth knowledge of linguistics is essential for creating adversarial examples in Korean.
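For readers who want a concrete sense of the surprisal measure mentioned in the abstract, the sketch below shows one common way to score a sentence with a masked language model such as mBERT or KR-BERT. It is a minimal sketch, not the authors' code: the pseudo-log-likelihood scoring loop, the Hugging Face checkpoint name, and the example sentence pair are all assumptions added here for illustration.

```python
# A minimal sketch (not the authors' code) of per-token surprisal scoring with a
# masked language model. The checkpoint name and the example sentences below are
# illustrative assumptions; a KR-BERT checkpoint (e.g., "snunlp/KR-BERT-char16424")
# could be substituted to compare a Korean-specific model against mBERT.
import math

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; swap in a KR-BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_surprisal(sentence: str) -> float:
    """Sum of -log2 p(token | context) over tokens, masking one position at a time."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, input_ids.size(0) - 1):          # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
        log_prob = torch.log_softmax(logits, dim=-1)[input_ids[i]].item()
        total += -log_prob / math.log(2)               # convert nats to bits
    return total

# Illustrative pair: an ordinary accusative-marked object vs. a nominative-marked
# variant of the same sentence. A larger surprisal for the altered variant would
# suggest the model is sensitive to Korean case marking.
print(sentence_surprisal("아이가 물을 마셨다"))
print(sentence_surprisal("아이가 물이 마셨다"))
```

Lower surprisal for a well-formed sentence than for its case-altered counterpart is the kind of contrast such an evaluation relies on; the paper itself draws its alternation instances from the Sejong Electronic Dictionary rather than from hand-written pairs like the ones above.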