BERT-based ranking models have achieved superior performance on various information retrieval tasks. However, their large number of parameters and complex self-attention operations incur significant latency overhead. To remedy this, recent works propose late-interaction architectures, which allow precomputation of intermediate document representations, thus reducing latency. Nonetheless, having addressed the immediate latency issue, these methods introduce storage costs and network fetching latency, which limit their adoption in real-life production systems.
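As background for the late-interaction setup, here is a minimal sketch of ColBERT-style MaxSim scoring over precomputed per-token document embeddings; the function, shapes, and data are illustrative assumptions, not the paper's code.

```python
# Minimal sketch of late-interaction re-ranking (ColBERT-style MaxSim),
# assuming per-token document embeddings were precomputed offline.
# All names, shapes, and data here are illustrative.
import numpy as np

def late_interaction_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """Sum, over query tokens, of each token's best-matching document token.

    query_emb: (num_query_tokens, dim); doc_emb: (num_doc_tokens, dim).
    Both are assumed L2-normalized, so dot products are cosine similarities.
    """
    sim = query_emb @ doc_emb.T          # (query_tokens, doc_tokens)
    return float(sim.max(axis=1).sum())  # MaxSim over document tokens

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 128));   q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(180, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(late_interaction_score(q, d))
```

Because `doc_emb` is computed once per document and stored, query-time compute is cheap, but every candidate's token matrix must be kept in and fetched from storage, which is exactly the cost that SDR targets.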
In this work, we propose the Succinct Document Representation (SDR) scheme, which computes highly compressed intermediate document representations, mitigating the storage/network issue. Our approach first reduces the dimension of the token representations by encoding them with a novel autoencoder architecture that uses the document's textual content in both the encoding and decoding phases. After this token-encoding step, we further reduce the size of the document representations using modern quantization techniques.

[Figure 1: MRR@10 performance vs. document corpus size tradeoff, measured on the MSMARCO-DEV dataset. BERTSPLIT is a distilled late-interaction model with reduced vector width and no compression (§ 4.2). For MRR@10 above 0.35, SDR is 4x–11.6x more efficient compared to the baseline.]
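To make the two compression steps described above concrete, the following is a schematic sketch: a linear encoder/decoder stands in for the paper's autoencoder, the decoder additionally receives non-contextual token embeddings that can be recomputed cheaply from the raw document text, and a per-tensor int8 quantizer stands in for the "modern quantization techniques". All dimensions, weights, and names are hypothetical, and no training is shown.

```python
# Schematic sketch of SDR-style compression: (1) encode contextual token
# vectors into small codes, decoding them later with help from the
# document's static (non-contextual) embeddings; (2) quantize the codes.
# Weights are random stand-ins; this is not the paper's architecture.
import numpy as np

DIM, CODE_DIM = 128, 16  # contextual width vs. compressed code width
rng = np.random.default_rng(0)
W_enc = rng.normal(size=(DIM, CODE_DIM)) / np.sqrt(DIM)
W_dec = rng.normal(size=(CODE_DIM + DIM, DIM)) / np.sqrt(CODE_DIM + DIM)

def encode(contextual: np.ndarray) -> np.ndarray:
    # (tokens, DIM) -> (tokens, CODE_DIM): only this small code is stored.
    return contextual @ W_enc

def decode(code: np.ndarray, static_emb: np.ndarray) -> np.ndarray:
    # The decoder also sees the document's non-contextual token embeddings,
    # recomputed from the text at query time, so the stored code only has
    # to capture what context adds on top of them.
    return np.concatenate([code, static_emb], axis=1) @ W_dec

def quantize(code: np.ndarray, num_bits: int = 8):
    # Simple per-tensor scalar quantization as a placeholder for the
    # quantization techniques mentioned in the abstract.
    scale = np.abs(code).max() / (2 ** (num_bits - 1) - 1)
    return np.round(code / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

tokens = 180
contextual = rng.normal(size=(tokens, DIM))  # BERT per-token outputs
static = rng.normal(size=(tokens, DIM))      # embedding-table lookups
q_code, scale = quantize(encode(contextual))
recon = decode(dequantize(q_code, scale), static)
print(q_code.nbytes, "bytes stored vs.", contextual.astype(np.float32).nbytes)
```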
Evaluation on MSMARCO's passage re-ranking task shows that, compared to existing approaches that use compressed document representations, our method is highly efficient, achieving 4x–11.6x higher compression rates for the same ranking quality. Similarly, on the TREC CAR dataset, we achieve a 7.7x higher compression rate for the same ranking quality.
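For a sense of the corpus sizes involved (cf. the x-axis of Figure 1), a back-of-the-envelope calculation is sketched below; the passage count is MSMARCO's, while the average token count, vector width, and compression factors are illustrative assumptions.

```python
# Back-of-the-envelope storage for per-token document representations.
# The passage count is MSMARCO's; the token count, vector width, and
# compression factors below are illustrative assumptions.
NUM_PASSAGES = 8_841_823   # MSMARCO passage corpus
AVG_TOKENS = 60            # assumed average passage length (tokens)
DIM = 128                  # assumed (reduced) vector width
BYTES_PER_VALUE = 4        # float32

uncompressed = NUM_PASSAGES * AVG_TOKENS * DIM * BYTES_PER_VALUE
for factor in (1, 2, 10):  # e.g., none / float16 / SDR-style ~10x
    print(f"{factor:>2}x compression: {uncompressed / factor / 1e9:7.1f} GB")
```

Even with these modest assumptions, the uncompressed representation runs to hundreds of gigabytes, which is why aggressive compression matters in production systems.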
SDR: Efficient neural re-ranking using succinct document representation
Amit Portnoy (Ben-Gurion University, amitport@post.bgu.ac.il) and Amir Ingber (Pinecone Systems, ingber@pinecone.io)
2022