Does ChatGPT Resemble Humans in Processing Implicatures?
Proceedings of the 4th Natural Logic Meets Machine Learning Workshop (NALOMA 23), 2023
Recommended citation: Qiu, Z., Duan, X., and Cai, Z. (2023). Does ChatGPT Resemble Humans in Processing Implicatures? Proceedings of the 4th Natural Logic Meets Machine Learning Workshop (NALOMA 23). Association for Computational Linguistics. https://aclanthology.org/2023.naloma-1.3/
Recent advances in large language models (LLMs) and LLM-driven chatbots, such as ChatGPT, have sparked interest in the extent to which these artificial systems possess human-like cognitive abilities. In this study, we assessed ChatGPT’s pragmatic capabilities by conducting three preregistered experiments focused on its ability to compute pragmatic implicatures. The first experiment tested whether ChatGPT inhibits the computation of generalized conversational implicatures (GCIs) when explicitly instructed to process a text’s truth-conditional meaning. The second and third experiments examined whether the communicative context affects ChatGPT’s ability to compute scalar implicatures (SIs). Our results showed that ChatGPT did not demonstrate human-like flexibility in switching between pragmatic and semantic processing. Moreover, ChatGPT’s judgments did not exhibit the well-established effect of communicative context on SI rates.