- 2024: Customer behavioral data significantly impacts e-commerce search systems. However, for less common queries, the associated behavioral data tends to be sparse and noisy, offering inadequate support to the search mechanism. To address this challenge, query reformulation has been introduced, suggesting that less common queries could utilize the behavior patterns of their popular…
- 2024: Large language models (LLMs) have been shown to be effective on tabular prediction tasks in the low-data regime, leveraging their internal knowledge and ability to learn from instructions and examples. However, LLMs can fail to generate predictions that satisfy group fairness, that is, produce equitable outcomes across groups. Critically, conventional debiasing approaches for natural language tasks do not…
- ICASSP 2025 (2024): Multilingual ASR offers training, deployment, and overall performance benefits, but models trained via simple data pooling are known to suffer from cross-lingual interference. Oracle language information (exact-prior) and language-specific parameters are usually leveraged to overcome this, but such approaches cannot enable seamless, truly multilingual experiences. Existing methods try to overcome this limitation…
- Machine Learning for Health Symposium 2024: Generalist large language models (LLMs), not developed to do particular medical tasks, have achieved widespread use by the public. To avoid medical uses of these LLMs that have not been adequately tested and thus minimize any potential health risks, it is paramount that these models use adequate guardrails and safety measures. In this work, we propose a synthetic medical prompt generation method to evaluate…
- 2024: Models of various NLP tasks have been shown to exhibit stereotypes, and bias in question answering (QA) models is especially harmful, as the output answers might be directly consumed by end users. Datasets exist to evaluate bias in QA models, but bias mitigation techniques for QA models are still under-explored. In this work, we propose BMBI, an approach to mitigate the bias of…
Related content
- July 12, 2023: Data augmentation, novel loss functions, and weakly supervised training enable a state-of-the-art model for recognizing mispronunciations.
- July 10, 2023: Familiar topics such as question answering and natural-language understanding remain well represented, but a new concentration on language modeling and multimodal models reflects the spread of generative AI.
- July 09, 2023: Finding that 70% of attention heads and 20% of feed-forward networks can be excised with minimal effect on in-context learning suggests that large language models are undertrained.
- July 07, 2023: Amazon’s Yang Liu, general chair of this year’s meeting of the Association for Computational Linguistics, on the road ahead for LLMs.
- July 06, 2023: The program exposes students to computer science as they create their own Alexa skills.
- July 05, 2023: Amazon Research Award recipient Shrikanth Narayanan is on a mission to make human-AI conversational experiences inclusive.