Online continual learning for progressive distribution shift (OCL-PDS): A practitioner’s perspective
2023
We introduce the novel OCL-PDS problem: online continual learning for progressive distribution shift. PDS refers to the subtle, gradual, and continuous distribution shift that widely exists in modern deep learning applications, and it is widely observed in industry that PDS can cause significant performance drops. While previous work in continual learning and domain adaptation addresses this problem to some extent, our investigation from the practitioner's perspective reveals flawed assumptions that limit their applicability to the daily challenges faced in real-world scenarios, and this work aims to close the gap between academic research and industry. For this new problem, we build 4 new benchmarks from the WILDS dataset (Koh et al., 2021) and implement 12 algorithms and baselines, including both supervised and semi-supervised methods, which we test extensively on the new benchmarks. We hope that this work can provide practitioners with tools to better handle realistic PDS and help scientists design better OCL algorithms.
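To make the problem setting concrete, the following is a minimal, hypothetical sketch of the OCL-PDS loop: data arrives as a stream whose distribution drifts gradually over time, and the model is updated online on each incoming batch. Everything here (the 1-D threshold classifier, the linear drift schedule, the learning rate) is an illustrative assumption and does not come from the paper's benchmarks or algorithms.

```python
# Hypothetical illustration of online continual learning under
# progressive distribution shift (PDS): the true decision boundary
# drifts slowly, and an online learner must track it.
import random

def drifting_batch(step, total_steps, n=32):
    """Generate a batch whose true decision threshold drifts linearly
    from 0.5 to 0.8 over the course of the stream (the PDS)."""
    drift = step / total_steps                      # 0.0 -> 1.0 over the stream
    xs = [random.uniform(0, 1) for _ in range(n)]
    ys = [1 if x > 0.5 + 0.3 * drift else 0 for x in xs]
    return xs, ys

def online_update(theta, xs, ys, lr=0.01):
    """One online pass for a 1-D threshold classifier predicting
    y = [x > theta]; each mistake nudges the threshold toward the
    current boundary."""
    for x, y in zip(xs, ys):
        pred = 1 if x > theta else 0
        theta += lr * (pred - y)                    # false positive raises theta,
    return theta                                    # false negative lowers it

random.seed(0)
theta = 0.5                                         # starts matched to the initial boundary
total = 200
for step in range(total):
    xs, ys = drifting_batch(step, total)
    theta = online_update(theta, xs, ys)
# theta now tracks the drifted boundary (near 0.8) despite never
# seeing more than one batch at a time.
```

The point of the sketch is the failure mode it implies: a model frozen at theta = 0.5 would degrade steadily as the boundary drifts, which is the kind of gradual performance drop the abstract describes.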