Decision layer: Enhancing multi-model, multi-timescale decisions on the fly with online feedback
2023
Rogue actors employ sophisticated automation to mimic human browsing and click patterns, generating invalid (i.e., fraudulent or robotic) traffic on retail marketplaces to artificially inflate their key performance metrics at the expense of legitimate competitors. To maintain a clean and fair advertising system, it is essential to identify and mitigate invalid traffic (IVT), i.e., ad traffic that is fraudulent, coerced, or unintended and driven by bad actors, and to ensure that advertisers are not charged for it. One major challenge for advertising systems is the absence of complete ground-truth fraud labels, even in limited amounts, which makes it difficult to build a single overarching model for comprehensive IVT detection. The result is typically a suite of models, each targeting a specific bot modus operandi. While this approach offers more robust protection to advertisers by catching a variety of bots, it has also accumulated potentially millions of dollars in lost revenue opportunities, with each algorithm contributing incrementally to false-positive detections (i.e., incorrect removal of valid traffic). Hence, we propose a “model over models” that learns to maintain the true IVT coverage of the ad-fraud detection system while simultaneously lowering the cost of false positives. In this paper, we present several variations of the new system, trained with incomplete labels that are either high quality but delayed in availability, or low quality but available faster. Our proposed online algorithm combines the best of both worlds: it continuously adapts not only to reduce false-positive cost by a substantial 37% (owing to strong delayed labels), but also to rapidly mitigate the revenue-loss spikes (owing to weak fast labels) associated with occasional IVT-detection failure scenarios. We further show that the online algorithm has sub-linear regret.
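The abstract does not specify the authors' algorithm, but a standard building block for a "model over models" with online feedback and sub-linear regret is the exponential-weights (Hedge) update over the base detectors. The sketch below is purely illustrative and is not the paper's method: the class name `DelayedFeedbackHedge`, the per-detector scores in [0, 1], and the binary IVT labels are all our own assumptions.

```python
import math

class DelayedFeedbackHedge:
    """Illustrative exponential-weights ("Hedge") aggregator over base
    IVT detectors (hypothetical sketch, not the paper's algorithm).

    Keeps one weight per detector and applies a multiplicative update
    whenever a (possibly delayed) ground-truth label arrives. With a
    learning rate eta ~ sqrt(ln(N)/T), Hedge achieves O(sqrt(T ln N))
    regret against the best single detector in hindsight.
    """

    def __init__(self, n_detectors: int, eta: float = 0.5):
        self.eta = eta
        self.weights = [1.0] * n_detectors

    def predict(self, scores):
        """Flag traffic as invalid if the weighted-average score >= 0.5."""
        total = sum(self.weights)
        avg = sum(w * s for w, s in zip(self.weights, scores)) / total
        return avg >= 0.5

    def update(self, scores, label):
        """Multiplicative update; `label` is 1 for invalid traffic, 0 for valid.
        The same update applies whether the label is a delayed strong label
        or a fast weak label."""
        for i, s in enumerate(scores):
            loss = abs(s - label)            # per-detector loss in [0, 1]
            self.weights[i] *= math.exp(-self.eta * loss)

# Toy run: detector 0 is always correct, detector 1 always wrong, so the
# aggregator's weight concentrates on detector 0.
agg = DelayedFeedbackHedge(n_detectors=2)
for label in [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]:
    agg.update([float(label), 1.0 - label], label)
```

After the toy run, detector 0 dominates the vote, so the aggregated verdict follows its score regardless of detector 1's disagreement.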