- NeurIPS 2023: A large body of NLP research has documented the ways gender biases manifest and amplify within large language models (LLMs), though this research has predominantly operated within a gender binary-centric context. A growing body of work has identified the harmful limitations of this gender-exclusive framing; many LLMs cannot correctly and consistently refer to persons outside the gender binary, especially …
- AAAI 2024: Toxic content detection is crucial for online services to remove inappropriate content that violates community standards. To automate the detection process, prior works have proposed a variety of machine learning (ML) approaches to train language models (LMs) for toxic content detection. However, both their accuracy and transferability across datasets are limited. Recently, Large Language Models (LLMs) …
- 2023 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU): In this paper, we propose the first successful implementation of associated learning (AL) for automatic speech recognition (ASR). AL has been shown to provide better label-noise robustness, faster training convergence, and more flexibility in model complexity than back-propagation (BP) in classification tasks. However, extending the learning approach to autoregressive models such as ASR, where model outputs are …
- EMNLP 2023: Generating concise summaries of news events is a challenging natural language processing task. While journalists often curate timelines to highlight key sub-events, newcomers to a news event face challenges in catching up on its historical context. In this paper, we address this need by introducing the task of background news summarization, which complements each timeline update with a background summary …
- EMNLP 2023: Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields. However, LLMs are prone to hallucinating untruthful or nonsensical outputs that fail to meet user expectations in many real-world applications. Existing works for detecting hallucinations in LLMs either rely on external knowledge for reference retrieval or require sampling multiple responses …