The Linguistic Association of Korea e-Journal

The Linguistic Association of Korea

Volume 31, Number 4 (December 2023)

A Study of Discourse Anaphora and Definite NP in Korean: Utilizing Deep Learning Models and Neural Machine Translations

Kang, Arum · Lee, Yong-hun

Pages: 1-34

DOI : https://doi.org/10.24303/lakdoi.2023.31.4.1


Abstract

Kang, Arum & Lee, Yong-hun. (2023). A study of discourse anaphora and definite NP in Korean: Utilizing deep learning models and neural machine translations. The Linguistic Association of Korea Journal, 31(4), 1-34. In this preliminary study, we investigate the phenomena of discourse anaphora and definite descriptions within the framework of the so-called donkey sentence. Unlike English, Korean allows donkey anaphora to be expressed either with the pronoun kukes 'it' or with a definite noun phrase (a bare NP or ku + NP). Employing neural machine translation and deep learning models, we examine the appropriateness of these two types of donkey sentences in Korean through the following procedure. First, using ChatGPT, we generate 60 sentences with donkey structures containing both pronouns and definite noun phrases. Second, we translate these sentences with Google Translate and Papago. Third, we use KR-BERT to evaluate the acceptability of the translations. Finally, we conduct a statistical analysis of the resulting acceptability scores. The results reveal that definite noun phrases are a more natural expression than pronouns in Korean donkey sentences. This novel finding suggests that the E-type approach would provide a better theoretical account than DRT (Discourse Representation Theory).
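As a concrete illustration of the KR-BERT acceptability step, the sketch below scores a sentence by pseudo-log-likelihood: each token is masked in turn and the masked language model's log-probabilities for the original tokens are summed. This is a minimal sketch of one standard way to obtain such scores, not the authors' released code; the Hugging Face checkpoint name and the two example donkey sentences are assumptions chosen for illustration.

```python
# Minimal sketch: acceptability scoring with a Korean masked language model
# via pseudo-log-likelihood. Assumes the publicly released KR-BERT checkpoint
# on Hugging Face; any Korean BERT-style model would work the same way.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL_ID = "snunlp/KR-BERT-char16424"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForMaskedLM.from_pretrained(MODEL_ID)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log P(token | rest), masking one token at a time."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        # Skip the special tokens: [CLS] at position 0, [SEP] at the end.
        for i in range(1, input_ids.size(0) - 1):
            masked = input_ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[input_ids[i]].item()
    return total

# Hypothetical item pair: the same donkey sentence with a pronoun vs. a
# definite NP; a higher (less negative) score = judged more natural.
pronoun_ver = "당나귀를 가진 농부는 그것을 때린다."
definite_ver = "당나귀를 가진 농부는 그 당나귀를 때린다."
print(pseudo_log_likelihood(pronoun_ver), pseudo_log_likelihood(definite_ver))
```

Scores for minimally different pronoun and definite-NP variants can then be compared across the 60 items, e.g. with a paired test, which is one plausible shape for the statistical analysis the abstract describes; since raw pseudo-log-likelihoods penalize longer sentences, dividing by token count before comparison is a common normalization.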

Keywords

# discourse anaphora # definite NP # donkey sentence # AI # neural machine translation # BERT # ChatGPT

References

  • Ahn, H.-D. (2020). Construction of basic data for sentence grammaticality judgments (in Korean). Seoul: National Institute of Korean Language.
  • Cooper, R. (1979). The interpretation of pronouns. Syntax and Semantics, 10, 61-92.
  • Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Elbourne, P. (2005). Situations and individuals. Cambridge, MA: MIT Press.
  • Evans, G. (1980). Pronouns. Linguistic Inquiry, 11, 337-362.
  • Geach, P. (1962). Reference and generality. Ithaca, NY: Cornell University Press.
  • Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. arXiv preprint arXiv:2303.15056.
  • Heim, I. (1982). The semantics of definite and indefinite noun phrases. Unpublished doctoral dissertation, University of Massachusetts, Amherst.
  • Heim, I. (1990). E-type pronouns and donkey anaphora. Linguistics and Philosophy, 13, 137-178.
  • Jacobson, P. (1977). The syntax of crossing coreference sentences. Unpublished doctoral dissertation, University of California, Berkeley.
  • Kamp, H. (1981). A theory of truth and semantic representation. In J. A. G. Groenendijk, T. M. V. Janssen, & M. B. J. Stokhof (Eds.), Formal methods in the study of language, Mathematical Centre Tracts 135 (pp. 277-322). Amsterdam: Mathematisch Centrum.
  • Kang, A. (2023). Definite interpretation of Korean bare nouns: Focusing on the donkey construction (in Korean). Korean Journal of Linguistics, 48(1), 75-97.
  • Karttunen, L. (1969). Pronouns and variables. In Proceedings of the fifth regional meeting of the Chicago Linguistic Society (pp. 108-116).
  • Lee, K., Kim, S., Kim, H., Park, K., Shin, W., Wang, K., Park, M., & Song, S. (2021). DeepKLM: A computational language model library for syntactic experiments (in Korean). Language Facts and Perspectives, 52, 265-306.
  • Lee, S., Jang, H., Baik, Y., Park, S., & Shin, H. (2020). KR-BERT: A small-scale Korean-specific language model. arXiv preprint arXiv:2008.03979.
  • Lee, Y.-H. (2021). English island constraints revisited: Experimental vs. deep learning approach. English Language and Linguistics, 27(3), 21-45.
  • Ludlow, P. (1994). Conditionals, events, and unbound pronouns. Lingua e Stile, 29, 165-183.
  • Pires, T., Schlinger, E., & Garrette, D. (2019). How multilingual is multilingual BERT? arXiv preprint arXiv:1906.01502.
  • Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI Blog.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. arXiv preprint arXiv:1706.03762.
  • Yoon, Y.-E. (2004). Semantic phenomena in language (in Korean). Seoul: Hankook Munhwasa.