ACL 2019: We present methods for multi-task learning that take advantage of natural groupings of related tasks. Task groups may be defined along known properties of the tasks, such as task domain or language. Such task groups represent supervised information at the inter-task level and can be encoded into the model. We investigate two variants of neural network architectures that accomplish this, learning different…
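The abstract is truncated here, but the architectural idea, encoding task groups directly in the network, is easy to illustrate: a shared encoder feeds group-level layers, which in turn feed task-specific heads. A minimal PyTorch sketch; all names, dimensions, and the group assignment are illustrative, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class GroupedMultiTaskModel(nn.Module):
    """Shared encoder + per-group adapter layers + per-task heads.

    Task groups (e.g., by domain or language) share an intermediate
    layer, so inter-task structure is encoded in the architecture.
    """
    def __init__(self, input_dim, hidden_dim, task_to_group, num_labels):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        groups = set(task_to_group.values())
        self.group_layers = nn.ModuleDict(
            {g: nn.Linear(hidden_dim, hidden_dim) for g in groups})
        self.heads = nn.ModuleDict(
            {t: nn.Linear(hidden_dim, num_labels[t]) for t in task_to_group})
        self.task_to_group = task_to_group

    def forward(self, x, task):
        h = self.shared(x)
        # Route through the layer shared by this task's group.
        h = torch.relu(self.group_layers[self.task_to_group[task]](h))
        return self.heads[task](h)

# Hypothetical tasks grouped by language.
model = GroupedMultiTaskModel(
    input_dim=16, hidden_dim=32,
    task_to_group={"ner_en": "en", "pos_en": "en", "ner_de": "de"},
    num_labels={"ner_en": 9, "pos_en": 17, "ner_de": 9})
logits = model(torch.randn(4, 16), task="ner_de")  # shape (4, 9)
```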
ECNLP 2019, The Web Conference 2019: For a product of interest, we propose a search method to surface a set of reference products. The reference products can be used as candidates to support downstream modeling tasks and business applications. The search method consists of product representation learning and fingerprint-type vector searching. The product catalog information is transformed into a high-quality embedding of low dimensions via…
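As a rough illustration of the second stage, the sketch below retrieves reference products by cosine similarity over learned embeddings. Brute-force search over random stand-in vectors is used here in place of whatever fingerprint-style index the paper builds:

```python
import numpy as np

def build_index(embeddings):
    """L2-normalize catalog embeddings so a dot product equals cosine similarity."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.clip(norms, 1e-12, None)

def search(index, query, k=5):
    """Return the indices and scores of the k most similar catalog products."""
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q
    top = np.argsort(-scores)[:k]
    return top, scores[top]

catalog = np.random.randn(10_000, 64).astype(np.float32)  # stand-in embeddings
index = build_index(catalog)
ids, sims = search(index, catalog[42], k=5)  # neighbors of product 42
```

At catalog scale, the brute-force scan would be replaced by an approximate nearest-neighbor index; the normalize-then-dot-product pattern stays the same.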
ICASSP 2019: We propose a novel audio watermarking system that is robust to the distortion due to the indoor acoustic propagation channel between the loudspeaker and the receiving microphone. The system utilizes a set of new algorithms that effectively mitigate the impact of room reverberation and interfering sound sources without using dereverberation procedures. The decoder has low latency and operates asynchronously…
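The abstract does not spell out the algorithms, so as background only, here is the classic spread-spectrum baseline that audio watermarking systems typically build on: embed a low-amplitude keyed pseudo-random sequence and detect it by correlation. Everything in the sketch (amplitudes, lengths, threshold) is illustrative; a real system adds psychoacoustic shaping and the channel-robust decoding this paper is about:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(audio, key, alpha=0.02):
    """Add a low-amplitude keyed pseudo-random sequence to the audio."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return audio + alpha * w

def detect(audio, key, threshold=2.0):
    """Correlate against the keyed sequence; a normalized score above
    the threshold indicates the watermark is present."""
    w = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    score = (audio @ w) / np.sqrt(audio.shape[0])
    return score, score > threshold

host = rng.standard_normal(48_000) * 0.1   # 1 s of noise-like stand-in "audio"
marked = embed(host, key=1234)
print(detect(marked, key=1234))            # score ~ 4.4: detected
print(detect(host, key=1234))              # score ~ 0: absent
```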
ICASSP 2019: Recent speech synthesis systems based on sampling from autoregressive neural network models can generate speech almost indistinguishable from human recordings. To work properly, these models require large amounts of data. However, they are more efficient at dealing with less homogeneous data, which might make it possible to compensate for the lack of data from one speaker with data from other speakers. This paper evaluates…
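For context, "sampling from autoregressive models" means drawing each audio sample from a distribution conditioned on the samples generated so far, as in WaveNet-style synthesis. A toy generation loop, with a random matrix standing in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for a trained model: logits over 256 quantized amplitude
# levels (8-bit, mu-law style), conditioned on the previous sample.
W = rng.standard_normal((256, 256)) * 0.1

def step_logits(prev_level):
    return W[prev_level]

def sample(n_samples, temperature=1.0):
    """Generate a waveform one sample at a time, feeding each drawn
    sample back in as the next step's conditioning input."""
    levels = [128]  # start at mid-scale
    for _ in range(n_samples):
        logits = step_logits(levels[-1]) / temperature
        p = np.exp(logits - logits.max())
        p /= p.sum()
        levels.append(rng.choice(256, p=p))
    return np.asarray(levels[1:])

waveform_levels = sample(16_000)  # ~1 s of audio at 16 kHz for a real model
```

The sequential loop is why such models are slow at inference time, and the appetite for data is why borrowing data across speakers, as the paper investigates, is attractive.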
ICASSP 2019, EMNLP 2019: Typically, spoken language understanding (SLU) models are trained on annotated data, which is costly to gather. Aiming to reduce the data needed to bootstrap an SLU system for a new language, we present a simple but effective weight-transfer approach using data from another language. The approach is evaluated with our promising multi-task SLU framework, developed for different languages. We evaluate our…
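The sketch below shows the general weight-transfer idea as one might implement it, not necessarily the paper's exact recipe: initialize the new-language model from a high-resource-language model, copying every parameter whose shape matches and re-initializing language-specific ones (here the embedding, because the vocabularies differ):

```python
import torch.nn as nn

def make_slu_model(vocab_size, num_intents):
    """Toy SLU model: embedding -> encoder -> intent classifier."""
    return nn.ModuleDict({
        "embed": nn.Embedding(vocab_size, 64),
        "encoder": nn.LSTM(64, 64, batch_first=True),
        "intent": nn.Linear(64, num_intents),
    })

source = make_slu_model(vocab_size=20_000, num_intents=50)  # high-resource language
target = make_slu_model(vocab_size=30_000, num_intents=50)  # new language

# Transfer every weight whose name and shape match; mismatched layers
# (here the embedding) stay freshly initialized and are learned from
# the small target-language dataset.
src_state = source.state_dict()
tgt_state = target.state_dict()
transferable = {k: v for k, v in src_state.items()
                if k in tgt_state and v.shape == tgt_state[k].shape}
tgt_state.update(transferable)
target.load_state_dict(tgt_state)
print(sorted(transferable))  # encoder.* and intent.* transfer; embed.* does not
```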
Related content
- November 19, 2020: AI models exceed human performance on public data sets; modified training and testing could help ensure that they aren’t exploiting shortcuts.
- November 16, 2020: Amazon Scholar Julia Hirschberg on why speech understanding and natural-language understanding are intertwined.
- November 11, 2020: With a new machine learning system, Alexa can infer that an initial question implies a subsequent request.
- November 10, 2020: Alexa senior applied scientist provides career advice to graduate students considering a research role in industry.
- November 09, 2020: Watch a recording of the EMNLP 2020 session featuring a discussion with Amazon scholars and academics on the state of conversational AI.
- November 06, 2020: Work aims to improve accuracy of models both on- and off-device.