On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning

This paper highlights a common issue in Continual Learning (CL): rehearsal methods used to prevent forgetting can produce unstable decision boundaries. To address this, we propose Lipschitz-DrivEn Rehearsal (LiDER), which smooths the network's outputs by constraining its Lipschitz constant with respect to replay examples. We show that LiDER improves the performance of rehearsal-based CL methods across datasets, both with and without pre-training, and shed light on buffer overfitting in CL.
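To make the core idea concrete, below is a minimal PyTorch-style sketch of a Lipschitz-style smoothness penalty on replay (buffer) examples. It is only an illustration of the general principle of constraining output sensitivity around rehearsed data, not the paper's exact formulation; the names `lipschitz_penalty`, `buf_x`, and `lambda_lip` are illustrative assumptions.

```python
import torch

def lipschitz_penalty(model, buf_x, eps=1e-3):
    """Illustrative local Lipschitz penalty on buffer samples.

    Estimates how much the model's outputs change under a small random
    perturbation of each replay example and penalizes the ratio
    ||f(x + d) - f(x)|| / ||d||, encouraging smoother outputs around
    rehearsed data.
    """
    delta = eps * torch.randn_like(buf_x)                  # small random perturbation
    out_clean = model(buf_x)
    out_pert = model(buf_x + delta)
    num = (out_pert - out_clean).flatten(1).norm(dim=1)    # output displacement
    den = delta.flatten(1).norm(dim=1) + 1e-12             # input displacement
    return (num / den).mean()                              # mean local Lipschitz estimate

# Hypothetical usage inside a rehearsal-based training step:
# loss = task_loss(model(x), y) \
#      + replay_loss(model(buf_x), buf_y) \
#      + lambda_lip * lipschitz_penalty(model, buf_x)
```

In this sketch the penalty is simply added to the usual task and replay losses, weighted by a coefficient (`lambda_lip`), so it can be combined with any rehearsal method that keeps a buffer of past examples.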