- NAACL 2019: We explore active learning (AL) for improving the accuracy of new domains in a natural language understanding (NLU) system. We propose an algorithm called Majority-CRF that uses an ensemble of classification models to guide the selection of relevant utterances, as well as a sequence labeling model to help prioritize informative examples. Experiments with three domains show that Majority-CRF achieves 6.6% … (a rough sketch of this selection idea follows the list below).
- NAACL 2019 Workshop on Structured Prediction: We propose a semi-supervised learning framework to boost the performance of slot tagging in the low-resource case, where we only have a small labeled target dataset available for model training but we have access to a large unlabeled source dataset. Our framework consists of two components: first, performing data selection to find a subset of the source data that is semantically similar to the target data … (a rough data-selection sketch follows the list below).
- NAACL 2019: Neural text-to-speech synthesis (NTTS) models have shown significant progress in generating high-quality speech; however, they require a large quantity of training data. This makes creating models for multiple styles expensive and time-consuming. In this paper, different styles of speech are analysed based on prosodic variations, and from this analysis a model is proposed to synthesise speech in the style of a newscaster …
- NAACL 2019: In this paper, we consider advancing web-scale knowledge extraction and alignment by integrating OpenIE extractions in the form of (subject, predicate, object) triples with Knowledge Bases (KB). Traditional techniques from universal schema and from schema mapping fall at two extremes: either they perform instance-level inference relying on embeddings for (subject, object) pairs, and thus cannot handle pairs absent …
- ICML 2019: A key problem in multi-label classification is to utilize dependencies among the labels. Chaining classifiers are a simple technique for addressing this problem, but current algorithms all assume a fixed, static label ordering. In this work, we propose a multi-label classification approach that allows choosing a dynamic, context-dependent label ordering. Our proposed approach consists of two sub-components … (a rough sketch of dynamic ordering at prediction time follows the list below).
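The Majority-CRF entry above combines an ensemble of classification models with a sequence labeling model to prioritize utterances for annotation. Below is a minimal sketch of that kind of selection step, assuming ensemble vote disagreement and tagging confidence as the two signals; the function names, the scoring rules, and the toy models in the demo are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Majority-CRF-style selection; scoring details are assumptions.
from typing import Callable, List, Sequence

def majority_margin(utterance: str, classifiers: Sequence[Callable[[str], float]]) -> float:
    """Disagreement of an ensemble of in-domain/out-of-domain classifiers.

    Each classifier returns P(in-domain). A smaller margin means the ensemble
    is more evenly split, i.e. the utterance is likely informative.
    """
    in_votes = sum(1 for clf in classifiers if clf(utterance) >= 0.5)
    return abs(2 * in_votes - len(classifiers))

def crf_uncertainty(utterance: str, tag_confidence: Callable[[str], float]) -> float:
    """Sequence-labeling uncertainty (lower tagging confidence = more informative)."""
    return 1.0 - tag_confidence(utterance)

def select_batch(pool: List[str],
                 classifiers: Sequence[Callable[[str], float]],
                 tag_confidence: Callable[[str], float],
                 k: int) -> List[str]:
    """Rank the unlabeled pool by ensemble disagreement, break ties with
    sequence-labeling uncertainty, and return the top-k utterances to annotate."""
    ranked = sorted(
        pool,
        key=lambda u: (majority_margin(u, classifiers),
                       -crf_uncertainty(u, tag_confidence)),
    )
    return ranked[:k]

if __name__ == "__main__":
    # Toy stand-ins for trained models, just to make the sketch executable.
    classifiers = [lambda u: 0.9 if "play" in u else 0.2,
                   lambda u: 0.6 if "music" in u else 0.4,
                   lambda u: 0.3]
    tag_conf = lambda u: 0.5 if "play" in u else 0.95
    pool = ["play some jazz", "what is the weather", "play music by adele"]
    print(select_batch(pool, classifiers, tag_conf, k=2))
```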
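The slot-tagging entry mentions a data-selection step that keeps source utterances semantically similar to the small labeled target set. Here is a minimal sketch of such a step, assuming a toy bag-of-words embedding and a cosine-similarity threshold; both are placeholders for whatever similarity measure the paper actually uses.

```python
# Hedged sketch of the data-selection step only; embedding and threshold are assumptions.
import numpy as np

def embed(utterance: str, vocab: dict) -> np.ndarray:
    """Toy bag-of-words embedding; a real system would use a trained encoder."""
    vec = np.zeros(len(vocab))
    for tok in utterance.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def select_similar_source(source: list, target: list, threshold: float = 0.3) -> list:
    """Keep source utterances whose similarity to the centroid of the (small)
    labeled target set exceeds a threshold."""
    vocab = {tok: i for i, tok in enumerate(
        sorted({t for u in source + target for t in u.lower().split()}))}
    centroid = np.mean([embed(u, vocab) for u in target], axis=0)
    return [u for u in source
            if float(np.dot(embed(u, vocab), centroid)) >= threshold]

if __name__ == "__main__":
    target = ["book a table for two", "reserve a table tonight"]
    source = ["book a flight to boston", "reserve a table for four",
              "what is the capital of france"]
    print(select_similar_source(source, target))
```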
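The ICML entry proposes a dynamic, context-dependent label ordering for classifier chains. The sketch below shows one plausible reading of that idea at prediction time: at each step, commit to the remaining label whose classifier is most confident given the input and the labels already predicted. The confidence rule and the toy per-label models are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of a dynamic classifier chain at prediction time; details are assumptions.
from typing import Callable, Dict, List, Tuple

def predict_dynamic_chain(
    x: List[float],
    label_models: Dict[str, Callable[[List[float], Dict[str, int]], float]],
) -> Tuple[Dict[str, int], List[str]]:
    """Score every unpredicted label given the input features and the labels
    predicted so far, commit to the most confident one (probability farthest
    from 0.5), and feed that decision to the remaining classifiers."""
    predicted: Dict[str, int] = {}
    order: List[str] = []
    remaining = set(label_models)
    while remaining:
        scores = {lbl: label_models[lbl](x, predicted) for lbl in remaining}
        lbl = max(remaining, key=lambda l: abs(scores[l] - 0.5))
        predicted[lbl] = int(scores[lbl] >= 0.5)
        order.append(lbl)
        remaining.remove(lbl)
    return predicted, order

if __name__ == "__main__":
    # Toy per-label probability models; the second one conditions on an earlier decision.
    models = {
        "sports": lambda x, prev: 0.9 if x[0] > 0.5 else 0.1,
        "outdoor": lambda x, prev: 0.8 if prev.get("sports", 0) == 1 else 0.4,
        "news": lambda x, prev: 0.5,
    }
    print(predict_dynamic_chain([0.7], models))
```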
Related content
- November 19, 2020: AI models exceed human performance on public data sets; modified training and testing could help ensure that they aren't exploiting shortcuts.
- November 16, 2020: Amazon Scholar Julia Hirschberg on why speech understanding and natural-language understanding are intertwined.
- November 11, 2020: With a new machine learning system, Alexa can infer that an initial question implies a subsequent request.
- November 10, 2020: Alexa senior applied scientist provides career advice to graduate students considering a research role in industry.
- November 09, 2020: Watch a recording of the EMNLP 2020 session featuring a discussion with Amazon scholars and academics on the state of conversational AI.
- November 06, 2020: Work aims to improve accuracy of models both on- and off-device.