- Interspeech 2023: Voice digital assistants must keep up with trending search queries. We rely on a speech recognition model using contextual biasing with a rapidly updated set of entities, instead of frequent model retraining, to keep up with trends. There are several challenges with this approach: (1) the entity set must be frequently reconstructed, (2) the entity set is of limited size due to latency and accuracy trade-offs …
- Personnel Psychology 2023: We evaluated the effectiveness of machine learning (ML) and natural language processing (NLP) for automatically scoring a simulation requiring audio-based constructed responses. We administered the simulation to 3,174 recent new professional-level hires working in a large multinational technology company. Human subject matter experts (SMEs) scored each response using behaviorally anchored rating scales …
- EACL 2023: Missing information is a common issue of dialogue summarization, where some information in the reference summaries is not covered in the generated summaries. To address this issue, we propose to utilize natural language inference (NLI) models to improve coverage while avoiding introducing factual inconsistencies. Specifically, we use NLI to compute fine-grained training signals to encourage the model to …
- ACL Findings 2023: The Open-Domain Question Answering (ODQA) task involves retrieving and subsequently generating answers from fine-grained relevant passages within a database. Current systems leverage Pre-trained Language Models (PLMs) to model the relationship between questions and passages. However, the diversity in surface form expressions can hinder the model’s ability to capture accurate correlations, especially within …
- ACL 2023: Large pre-trained language models (PLMs) have been shown to retain implicit knowledge within their parameters. To enhance this implicit knowledge, we propose Knowledge Injection into Language Models (KILM), a novel approach that injects entity-related knowledge into encoder-decoder PLMs via a generative knowledge-infilling objective through continued pre-training. This is done without architectural modifications …
Related content
- July 14, 2022: To become the interface for the Internet of things, conversational agents will need to learn on their own. Alexa has already started down that path.
- July 13, 2022: Four MIT professors are the recipients of the inaugural call for research projects.
- July 13, 2022: Allowing separate tasks to converge on their own schedules and using knowledge distillation to maintain performance improves accuracy.
- July 11, 2022: The SCOT science team used lessons from the past — and improved existing tools — to contend with “a peak that lasted two years”.
- July 08, 2022: Industry track chair and Amazon principal research scientist Rashmi Gangadharaiah on trends in industry papers and the challenges of building practical dialogue systems.
- July 08, 2022: New model sets a new standard in accuracy while enabling 60-fold speedups.