The online advertising industry relies heavily on auction mechanisms to allocate impression opportunities to advertisers and to price them. In real-world auctions, we want to maximise welfare for auction participants while remaining incentive-compatible (so that bidders are best off bidding truthfully), but we may also wish to incorporate other objectives, such as improving revenue for the seller or mitigating externalities on non-bidding third parties. Whilst the first two goals can be attained by drawing on auction theory, these more general constraints lead to objectives that cannot be optimised exactly.
In this work, we propose a general framework to learn incentive-compatible auction mechanisms via gradient descent. Following related work on "neural auctions", we resort to approximate continuous relaxations of the discrete sorting operation, which render our objectives differentiable and thus amenable to optimisation in modern deep-learning frameworks. We inject Gumbel noise into the allocation rule, leading to a smoother loss surface and a stochastic auction mechanism under the Plackett-Luce model. Additionally, we derive a pricing rule that preserves incentive compatibility in this probabilistic setting. We present experiments for a case that is known to be difficult for exact optimisation: maximising revenue while maintaining optimal welfare and incentive-compatible pricing. Results show that we improve on classical mechanisms, even under non-deterministic allocation.
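To illustrate the stochastic allocation step described above, the sketch below uses the standard Gumbel-max trick: adding i.i.d. Gumbel(0, 1) noise to log-scores and sorting in descending order draws a ranking from the Plackett-Luce model whose weights are the exponentiated scores. This is a minimal, generic illustration of Gumbel-perturbed sorting, not the paper's actual allocation or pricing rule; the function name and interface are hypothetical.

```python
import numpy as np

def plackett_luce_sample(log_scores, rng):
    """Sample an allocation ranking of bidders under Plackett-Luce.

    Adding independent Gumbel(0, 1) noise to each log-score and sorting
    the perturbed scores in descending order is equivalent to sampling a
    permutation from the Plackett-Luce distribution with weights
    exp(log_scores) (the Gumbel-max trick applied to rankings).
    """
    gumbel = rng.gumbel(size=len(log_scores))
    return np.argsort(-(log_scores + gumbel))

# Usage: three bidders with very different scores; the highest-scoring
# bidder wins the top slot in almost every draw.
rng = np.random.default_rng(0)
ranking = plackett_luce_sample(np.array([10.0, 0.0, -10.0]), rng)
```

During training, one would replace the hard `argsort` with a continuous relaxation so that gradients can flow through the (smoothed) allocation; the noise injection itself is what makes the learned mechanism stochastic at serving time.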
A probabilistic framework to learn auction mechanisms via gradient descent
2023