Large online data repositories like the Amazon Store are distributed across massive banks of servers, and retrieving data from those repositories must be efficient in order to provide a good customer experience. It is usual for a service-level agreement (SLA) to be in place, typically mandating that some fraction of queries (say, 95%) must be responded to within some stipulated upper time limit (say, 150 milliseconds).
One way to improve efficiency is to cluster related content together on just a handful of servers, limiting the amount of data that information retrieval algorithms must consider.
But if enough users try to access the same data at once, the load across servers can become imbalanced, resulting in SLA violations.
In a paper we’re publishing in the journal ACM Transactions on Information Systems (TOIS), my collaborators at the University of Melbourne and I present a new query-processing method that avoids such imbalances by distributing data more uniformly but still limits the amount of data that needs to be considered.
Moreover, our method is an anytime query method, which means that it dynamically adjusts to changing user demand, providing at least some results when bandwidth and processing cycles are scarce but improving the quality of the results in accord with the available resources. This ensures that SLAs are met while minimizing resource usage.
In experiments involving the standard ClueWeb09B document collection and queries from the TREC Million Query Track, our approach outperformed a series of baselines in terms of query result quality, relative to exhaustive query processing, demonstrating that it is capable of meeting strict latency SLAs over large document collections.
Selective-search-based anytime ranking
To preserve resources, selective-search methods target queries to parts of the index that contain topically relevant documents.
Most selective-search methods distribute all topically related documents to a single server node; by contrast, our method distributes a fraction of each topic to every node. This uniform distribution of documents has load-balancing benefits while at the same time simplifying the overall distributed system.
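To make the distribution scheme concrete, here is a minimal sketch of spreading each topical shard across all nodes. The function name and round-robin assignment are illustrative assumptions, not the paper's implementation:

```python
import random

def assign_documents(topic_shards, num_nodes, seed=0):
    """Spread each topical shard across all nodes.

    topic_shards: dict mapping topic id -> list of doc ids.
    Returns a list of per-node document lists; each node receives
    roughly 1/num_nodes of every topic, so the load generated by
    queries on any one topic is shared evenly across nodes.
    """
    rng = random.Random(seed)
    nodes = [[] for _ in range(num_nodes)]
    for topic, docs in topic_shards.items():
        docs = docs[:]              # don't mutate the caller's list
        rng.shuffle(docs)           # randomize before splitting
        for i, doc in enumerate(docs):
            nodes[i % num_nodes].append(doc)  # round-robin assignment
    return nodes
```

Because every node holds a slice of every topic, a burst of queries on one hot topic is served by the whole cluster rather than a single overloaded machine.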
Within each node, we perform a finer-grained clustering of the documents in each topical shard. The cluster categories are determined automatically, but within the topic “headphones”, for instance, our algorithm might cluster documents related to noise-cancelling headphones, wireless headphones, and so on.
Within each topic, we reorder documents according to this finer-grained clustering, enabling more targeted and therefore more efficient retrieval. We describe the details of the method in our paper “Faster index reordering with bipartite graph partitioning”, which we presented at this year’s meeting of the ACM Special Interest Group on Information Retrieval (SIGIR).
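As a toy illustration of the reordering step (a stand-in for the bipartite-graph-partitioning method the paper actually uses), documents within a topic can be sorted by their fine-grained cluster label so that members of the same cluster sit contiguously in the index:

```python
def reorder_by_cluster(doc_ids, cluster_of):
    """Place documents from the same fine-grained cluster next to
    each other in the index ordering.

    cluster_of: dict mapping doc id -> cluster label.
    Sorting by (label, id) keeps related documents contiguous,
    which is what lets later stages skip whole clusters at once.
    """
    return sorted(doc_ids, key=lambda d: (cluster_of[d], d))
```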
Anytime-ranking query processing
Finer-grained clustering within nodes lets us establish different relevance thresholds for different topic clusters, enabling anytime query processing. Based on the query, our algorithm determines the order in which the clusters within each topic should be visited. If no data within the cluster crosses the threshold for a given query, the cluster is bypassed entirely.
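The skipping logic above can be sketched as a top-k loop over clusters with per-cluster score upper bounds and a time budget. The data layout, names, and scoring function here are illustrative assumptions, not the paper's implementation:

```python
import heapq
import time

def anytime_rank(query_score, clusters, k, budget_ms):
    """Sketch of anytime ranking over fine-grained clusters.

    clusters: list of (upper_bound, docs) pairs, where upper_bound is
    the highest score any document in that cluster could achieve for
    this query, and docs is a list of (doc_id, features) items.
    query_score: function scoring one document against the query.

    Clusters are visited in order of decreasing upper bound; a cluster
    is skipped when its bound cannot beat the current k-th best score,
    and the loop exits early when the time budget is exhausted,
    returning the best results found so far.
    """
    deadline = time.monotonic() + budget_ms / 1000.0
    heap = []  # min-heap of (score, doc_id) holding the current top k
    for bound, docs in sorted(clusters, key=lambda c: -c[0]):
        threshold = heap[0][0] if len(heap) == k else float("-inf")
        if bound <= threshold:
            continue               # cluster cannot improve the top k
        if time.monotonic() > deadline:
            break                  # anytime exit: return best-so-far
        for doc_id, feats in docs:
            s = query_score(doc_id, feats)
            if len(heap) < k:
                heapq.heappush(heap, (s, doc_id))
            elif s > heap[0][0]:
                heapq.heapreplace(heap, (s, doc_id))
    return sorted(heap, reverse=True)
```

The key property is graceful degradation: shrinking `budget_ms` never causes a failure, only a potentially shallower exploration of the cluster list.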
In the example below, the goal is to retrieve the pink data points. In the diagram at left, our algorithm elects to visit the data-rich second cluster first, going on to the sparser third cluster only if time permits. The first cluster is skipped entirely, as the current score threshold (blue line) is above the cluster relevance threshold (red line). The right-hand diagram shows a more conventional information retrieval algorithm, which must work its way through all documents in the node in a fixed order.
In experiments, we compared our clustering method, when used in conjunction with two standard information retrieval algorithms (VBMW and MaxScore in the figure below), with an existing anytime-query method (JASS). We also compared it to an oracle model, which explores exactly the right topic clusters in exactly the right order, establishing an upper performance bound.
To evaluate the methods, we used rank-biased overlap (RBO), which measures how closely the algorithm's top k results (k = 10 and 1,000 in the diagrams below) agree with the ideal ranking; a score of 1.0 indicates perfect agreement. As can be seen in the diagram below, our clustering method enables the algorithm to converge on the optimal ordering more efficiently than existing methods.
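For readers unfamiliar with the metric, a minimal finite-prefix form of RBO can be computed as follows (this sketch follows the standard definition; the parameter p controls how heavily the top of the ranking is weighted):

```python
def rbo(run, ideal, p=0.9):
    """Finite-prefix rank-biased overlap between two rankings.

    At each depth d, measure the overlap between the two length-d
    prefixes, weight it by p^(d-1), and normalize by (1 - p).
    Identical rankings approach 1.0 as the prefix grows; disjoint
    rankings score 0.0.
    """
    k = min(len(run), len(ideal))
    seen_run, seen_ideal = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen_run.add(run[d - 1])
        seen_ideal.add(ideal[d - 1])
        overlap = len(seen_run & seen_ideal)
        score += (p ** (d - 1)) * overlap / d
    return (1 - p) * score
```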
Overall, our experiments showed that our proposed indexing and query-processing scheme permits adherence to the strict-latency SLAs that are standard in large-scale information retrieval systems while providing fine-grained trade-offs between latency and query result quality.