Overview
The North American Chapter of the Association for Computational Linguistics (NAACL) provides a regional focus for ACL members in North, Central, and South America, organizes annual conferences, promotes cooperation and information exchange among related scientific and professional societies, encourages and facilitates ACL membership by people and institutions in the Americas, and provides a source of information on regional activities for the ACL Executive Committee. Learn more about Amazon's 30+ accepted publications.
Workshops
NAACL 2024 Workshop on Bridging Human-Computer Interaction and Natural Language Processing
June 21
NAACL 2024 Workshop on Trustworthy Natural Language Processing (TrustNLP)
June 21 - June 22
Website: https://trustnlpworkshop.github.io
Amazon organizers: Kai-Wei Chang (UCLA, Amazon Visiting Academic); Ninareh Mehrabi (Amazon Alexa AI); Aram Galstyan (USC, Amazon Visiting Academic); Jwala Dhamala (Amazon Alexa AI); Rahul Gupta (Amazon Alexa AI).
About: Recent advances in natural language processing, and the emergence of pretrained large language models (LLMs) in particular, have made NLP systems omnipresent in everyday life. Beyond traditional examples such as personal voice assistants and recommender systems, more recent developments include content-generation models such as ChatGPT and text-to-image models such as DALL-E. While these emergent technologies have unquestionable potential to power innovative NLP and AI applications, they also pose challenges to their safe and ethical use. To address such challenges, NLP researchers have formulated various objectives, e.g., making models more fair, safe, and privacy-preserving. However, these objectives are often considered separately, which is a major limitation, since it is important to understand the interplay and tension between them. For instance, meeting a fairness objective might require access to users’ demographic information, which creates tension with privacy objectives. The goal of this workshop is to move toward a more comprehensive notion of trustworthy NLP by bringing together researchers working on these distinct yet related topics, as well as on their intersection.