At the meeting of the Association for the Advancement of Artificial Intelligence (AAAI) earlier this year, papers whose coauthors included Amazon researchers were runners-up for two best-paper awards.
One of the papers was a submission to the main conference: “Learning from eXtreme bandit feedback”, by Romain Lopez, a PhD student at the University of California, Berkeley, who was an Amazon intern when the work was done; Inderjit Dhillon, an Amazon vice president and distinguished scientist; and Michael I. Jordan, a Distinguished Amazon Scholar and professor at Berkeley, where he’s one of Lopez’s thesis advisors.
In their paper, Lopez, Dhillon, and Jordan examine the problem of how to train a machine learning system to select some action — such as ranking results of a product query — when the space of possible actions is massive and training data reflects the biases of a prior action selection strategy.
The other paper was a submission to the AAAI Workshop on Health Intelligence: “AWS CORD-19 Search: A neural search engine for COVID-19 literature”, which has 15 coauthors, all at Amazon, led by senior applied scientist Parminder Bhatia and research scientist Lan Liu, with Taha Kass-Hout as senior author.
That paper examines the array of machine learning tools that enabled AWS CORD-19 Search (ACS), a search interface that provides natural-language search of the CORD-19 database of COVID-19-related research papers assembled by the Allen Institute for AI.
Extreme bandit feedback
In their paper on bandit feedback, Lopez, Dhillon, and Jordan consider the problem of batch learning from bandit feedback in the context of extreme multi-label classification.
Bandit problems commonly arise in reinforcement learning, in which a machine learning system attempts, through trial and error, to learn a policy that maximizes some reward. In the case of a recommendation system, for instance, the policy is how to select links to serve to particular customers; the reward is clicks on those links.
The classic bandit setting is online, meaning that the system can continually revise its policy in light of real-time feedback. In the offline setting, by contrast, the system’s training data all comes from transaction logs: which links did which customers see, and did they click on those links?
The problem is that the links that the customers saw were selected by an earlier policy, typically called the logging policy. The goal of batch learning from bandit feedback is to discover a new policy that outperforms the logging policy. But how is that possible, given that we have feedback results only for the old policy?
This problem is exacerbated when there are a huge number of possible actions that the system can take. In that case, not only did customers see links selected by a suboptimal policy, but they saw only a tiny fraction of the links they might have seen.
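To see how logged data can be reused at all, it helps to look at the standard inverse-propensity-scoring (IPS) estimator, the usual starting point for off-policy learning. The sketch below is a minimal, self-contained illustration, not the paper’s method; the action space, policies, and reward rates are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 1000   # size of the action space
n_logged = 50_000  # number of logged interactions

# Logging policy: a fixed, suboptimal distribution over actions.
logging_probs = rng.dirichlet(np.ones(n_actions))

# Hypothetical per-action reward probabilities (unknown in practice).
true_reward = rng.uniform(0.0, 0.1, size=n_actions)

# Simulate the transaction log: actions chosen by the logging policy,
# rewards (e.g., clicks) observed only for those chosen actions.
actions = rng.choice(n_actions, size=n_logged, p=logging_probs)
rewards = rng.binomial(1, true_reward[actions])

# A candidate new policy we want to evaluate offline.
target_probs = rng.dirichlet(np.ones(n_actions))

# IPS estimate of the new policy's expected reward: reweight each logged
# reward by the ratio of target to logging probability for the action taken.
weights = target_probs[actions] / logging_probs[actions]
ips_estimate = np.mean(weights * rewards)

print(f"IPS estimate of target policy value: {ips_estimate:.4f}")
print(f"True target policy value:            {true_reward @ target_probs:.4f}")
```

The importance weights are the crux: when the target policy favors actions the logging policy rarely chose, the weights blow up and the estimate’s variance grows with them. That variance problem is exactly what becomes acute when the action space is extreme.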
In their paper, the researchers tackle the challenge of learning an optimal policy in this context. First, they present a theoretical analysis, describing a general approach to policy selection that converges to an optimal solution. Then they present a specific algorithm for implementing that approach. And finally, they compare the algorithm’s performance to that of four leading predecessors, using six different metrics, and find that their approach delivers the best results across the board.
The theoretical proof depends on what’s known as Rao-Blackwellization. Given any estimator (a procedure for estimating a quantity based on observed data), the Rao-Blackwell theorem provides a statistical method for updating the estimator that may improve its accuracy but will never diminish it. The researchers’ proof quantifies the accuracy gains that Rao-Blackwellization offers in the context of extreme bandit feedback, as a function of statistical properties of the transaction log data.
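A classical textbook example (not the paper’s construction) shows the theorem at work. To estimate a coin’s success probability from n flips, take the crude unbiased estimator “just the first flip” and condition it on the sufficient statistic “total number of heads”; the result is the sample mean, which has the same expectation but much lower variance. The simulation below, with hypothetical parameter values, makes this concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.3           # unknown success probability we want to estimate
n = 20            # flips per trial
trials = 100_000  # independent trials for estimating bias and variance

samples = rng.binomial(1, p, size=(trials, n))

# Crude unbiased estimator: just the first observation.
crude = samples[:, 0].astype(float)

# Rao-Blackwellized estimator: conditioning the crude estimator on the
# sufficient statistic T = sum of observations gives E[X1 | T] = T / n,
# i.e., the sample mean.
rao_blackwell = samples.mean(axis=1)

print(f"Crude estimator:   mean={crude.mean():.4f}, var={crude.var():.5f}")
print(f"Rao-Blackwellized: mean={rao_blackwell.mean():.4f}, var={rao_blackwell.var():.5f}")
```

Both estimators average to the true parameter, but conditioning on the sufficient statistic cuts the variance by a factor of n, from roughly p(1-p) to p(1-p)/n.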
In practice, the researchers simply use the logging policy as the initial estimator and update it according to the Rao-Blackwell method. This yields significant increases in accuracy over even the best-performing previous approaches: between 31% and 37% on the six metrics.
CORD-19 search
With ACS, customers can query the CORD-19 database using natural language, posing questions such as “Is remdesivir an effective treatment for COVID-19?” or “What is the average hospitalization time for patients?”
Amazon Science has discussed some of the elements described in the paper in greater detail elsewhere: Miguel Romero Calvo explained the structure of the CORD-19 knowledge graph and the method for assembling it, and Amazon Science contributor Doug Gantenbein described the ways in which ACS leverages Amazon Web Services machine learning tools such as Amazon Kendra, a semantic-search and question-answering service, and Amazon Comprehend Medical, a tool for extracting information from unstructured text that is specialized for medical texts.
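For readers curious what calling these services looks like, here is a minimal sketch using boto3, the public AWS SDK for Python. The index ID and query strings are placeholders, and this is not the ACS implementation.

```python
import boto3

# Placeholder; a real Amazon Kendra index ID would go here.
KENDRA_INDEX_ID = "YOUR-KENDRA-INDEX-ID"

kendra = boto3.client("kendra", region_name="us-east-1")
cm = boto3.client("comprehendmedical", region_name="us-east-1")

# Semantic search / question answering over an indexed document collection.
response = kendra.query(
    IndexId=KENDRA_INDEX_ID,
    QueryText="Is remdesivir an effective treatment for COVID-19?",
)
for item in response["ResultItems"][:3]:
    print(item["Type"], "-", item["DocumentExcerpt"]["Text"][:120])

# Medical entity extraction from unstructured text.
entities = cm.detect_entities_v2(
    Text="Patient was treated with remdesivir for COVID-19 pneumonia."
)
for ent in entities["Entities"]:
    print(ent["Category"], ent["Type"], ent["Text"])
```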
In addition to these topics, the researchers’ paper covers the ACS approach to topic modeling, or automatically grouping documents according to topic descriptors extracted from their texts, and multi-label classification, or training a machine learning model to assign new topic labels to documents on the basis of the descriptors extracted by the topic-modeling system.
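The paper’s models aren’t reproduced here, but the general shape of the multi-label task, assigning one or more topic labels to each document, can be sketched with a simple scikit-learn pipeline. The corpus, labels, and classifier choice below are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: abstracts, each tagged with one or more topic labels.
docs = [
    "Remdesivir shortened recovery time in hospitalized adults.",
    "Modeling transmission dynamics of SARS-CoV-2 in urban areas.",
    "Antiviral treatment outcomes and transmission risk in households.",
]
labels = [["treatment"], ["epidemiology"], ["treatment", "epidemiology"]]

# Binarize the label sets and vectorize the text.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)
vec = TfidfVectorizer()
X = vec.fit_transform(docs)

# One independent binary classifier per topic label.
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)

new_doc = ["Hospitalization time for patients receiving antivirals."]
print(mlb.inverse_transform(clf.predict(vec.transform(new_doc))))
```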
Finally, the researchers compare ACS to two other CORD-19 search interfaces, showing that for natural-language queries, it delivers the best results by a significant margin, while remaining competitive on more traditional keyword search.
Editor's note: After publishing this post, we learned that a third Amazon paper, "Targeted feedback generation for constructed-response questions", won the best-paper award at another AAAI 2021 workshop, the Workshop on AI Education.