- AKBC 2020 Workshop on Bias in Automatic Knowledge Graph Construction (2020): It has recently been shown that word embeddings encode social biases, with a harmful impact on downstream tasks. However, to this point no similar work has been done in the field of knowledge graph embeddings. We present the first study on social bias in knowledge graph embeddings and propose a new metric suitable for measuring such bias. We conduct experiments on Wikidata and Freebase, and show …
- ACM L@S 2020 (2020): E-learning is becoming popular as it provides learners flexibility, targeted resources from across the internet, personalized guidance, and immediate feedback during learning. However, the lack of social interaction, an indispensable component in developing some skills, has been a pain point of e-learning. We propose using Alexa, a voice-controlled intelligent personal assistant (IPA), in e-learning to provide …
- ACL 2020 Workshop on NLP for Medical Conversations (2020): Automatic speech recognition (ASR) systems in the medical domain that focus on transcribing clinical dictations and doctor-patient conversations face many challenges due to the complexity of the domain. ASR output typically undergoes automatic punctuation to enable users to speak naturally, without having to vocalise awkward and explicit punctuation commands, such as “period”, “add comma” or “exclamation …
- KDD 2020 (2020): We consider the extreme multi-label text classification (XMC) problem: given an input text, return the most relevant labels from a large label collection. For example, the input text could be a product description on Amazon.com and the labels could be product categories. XMC is an important yet challenging problem in the NLP community. Recently, deep pretrained transformer models have achieved state-of-the-art … (A toy sketch of the XMC setup appears after this list.)
- SIGIR 2020 (2020): IR-based Question Answering (QA) systems typically use a sentence selector to extract the answer from retrieved documents. Recent studies have shown that powerful neural models based on the Transformer can provide an accurate solution to Answer Sentence Selection (AS2). Unfortunately, their computation cost prevents their use in real-world applications. In this paper, we show that standard and efficient … (A sketch of the AS2 selection step also follows this list.)
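To make the KDD 2020 entry's task definition concrete, here is a minimal sketch of the XMC setup: given an input text, score every label in a collection and return the top-k. This stand-in uses TF-IDF features and a one-vs-rest linear model rather than the pretrained-transformer approach the paper describes, and the product descriptions and category labels are invented for illustration.

```python
# Toy illustration of extreme multi-label text classification (XMC):
# rank all labels for an input text and return the k most relevant.
# NOT the transformer-based method from the KDD 2020 paper; data is made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "wireless bluetooth headphones with noise cancellation",
    "stainless steel chef knife for the kitchen",
    "usb-c fast charger for phones and tablets",
    "ceramic non-stick frying pan",
]
labels = [
    {"Electronics", "Audio"},
    {"Kitchen", "Cutlery"},
    {"Electronics", "Accessories"},
    {"Kitchen", "Cookware"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)          # multi-label indicator matrix

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# One binary classifier per label; real XMC systems use label trees or
# transformer encoders to scale to very large label collections.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

def top_k_labels(text, k=2):
    """Return the k highest-scoring labels for an input text."""
    scores = clf.predict_proba(vectorizer.transform([text]))[0]
    top = np.argsort(scores)[::-1][:k]
    return [(mlb.classes_[i], float(scores[i])) for i in top]

print(top_k_labels("portable bluetooth speaker with usb-c charging"))
```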
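The SIGIR 2020 entry's answer-sentence-selection step can be sketched the same way: score each candidate sentence against the question with a Transformer cross-encoder and keep the best one. This only illustrates the task setup; the paper is about making such selectors efficient, and the model name below is just a publicly available example, not the authors' model. The question and candidate sentences are hypothetical.

```python
# Toy answer sentence selection (AS2): jointly encode (question, sentence) pairs
# with a cross-encoder and pick the highest-scoring sentence. Joint encoding is
# accurate but costly -- the cost the SIGIR 2020 paper aims to reduce.
from sentence_transformers import CrossEncoder

question = "Who wrote the play Hamlet?"
candidates = [
    "Hamlet was written by William Shakespeare around 1600.",
    "The Globe Theatre was rebuilt in 1997 on the south bank of the Thames.",
    "Shakespeare was born in Stratford-upon-Avon in 1564.",
]

# Example pretrained cross-encoder from the sentence-transformers model hub.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = model.predict([(question, sentence) for sentence in candidates])

best_sentence, best_score = max(zip(candidates, scores), key=lambda pair: pair[1])
print(f"Selected answer sentence: {best_sentence} (score={best_score:.3f})")
```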
Related content
- July 23, 2018: Automatic speech recognition systems, which convert spoken words into text, are an important component of conversational agents such as Alexa. These systems generally comprise an acoustic model, a pronunciation model, and a statistical language model. The role of the statistical language model is to assign a probability to the next word in a sentence, given the previous ones. For instance, the phrases “Pulitzer Prize” and “pullet surprise” may have very similar acoustic profiles, but statistically, one is far more likely to conclude a question that begins “Alexa, what playwright just won a … ?” (A toy bigram example follows this list.)
- July 16, 2018: To be as useful as possible to customers, Alexa should be able to make educated guesses about the meanings of ambiguous utterances. If, for instance, a customer says, “Alexa, play the song ‘Hello’”, Alexa should be able to infer from the customer’s listening history whether the song requested is the one by Adele or the one by Lionel Richie.
- June 08, 2018: Amazon Alexa currently has more than 40,000 third-party skills, which customers use to get information, perform tasks, play games, and more. To make it easier for customers to find and engage with skills, we are moving toward skill invocation that doesn’t require mentioning a skill by name (as highlighted in a recent post).
- June 07, 2018: Alexa is a cloud-based service with natural-language-understanding capabilities that powers devices like Amazon Echo, Echo Show, Echo Plus, Echo Spot, Echo Dot, and more. Alexa-like voice services traditionally have supported small numbers of well-separated domains, such as calendar or weather. In an effort to extend the capabilities of Alexa, Amazon in 2015 released the Alexa Skills Kit, so third-party developers could add to Alexa’s voice-driven capabilities. We refer to new third-party capabilities as skills, and Alexa currently has more than 40,000.
- June 01, 2018: Developing a new Alexa skill typically means training a machine-learning system with annotated data, and the skill’s ability to “understand” natural-language requests is limited by the expressivity of the semantic representation used to do the annotation. So far, the techniques used to represent natural language have been fairly simple, so Alexa has been able to handle only relatively simple requests.
- May 29, 2018: As Alexa-enabled devices continue to expand into new countries, we propose an approach for quickly bootstrapping machine-learning models in new languages, with the aim of more efficiently bringing Alexa to new customers around the world.
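The July 23, 2018 item describes the statistical language model's job: assign a probability to the next word given the previous ones. Below is a minimal bigram sketch of that idea under a tiny invented corpus; a production ASR language model is trained on vastly more text and conditions on longer histories.

```python
# Toy bigram language model: estimate P(next word | previous word) from counts.
# The corpus is invented purely to illustrate why "Pulitzer" outscores "pullet".
from collections import Counter, defaultdict

corpus = [
    "what playwright just won a pulitzer prize",
    "she won a pulitzer prize for drama",
    "the farmer got a pullet surprise",   # rare continuation
    "a pulitzer prize was awarded today",
]

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigram_counts[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """P(next word | previous word), estimated from bigram counts."""
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# The language model prefers "pulitzer" over "pullet" after "a", which is how
# it helps the recognizer choose between acoustically similar word sequences.
print(next_word_probability("a", "pulitzer"))  # 0.75 on this toy corpus
print(next_word_probability("a", "pullet"))    # 0.25 on this toy corpus
```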