Abstract
Mitigating catastrophic forgetting in continual learning is a long-standing challenge for artificial intelligence. Methods that alleviate forgetting typically rely on rehearsal buffers, pretrained backbones, or task-ID knowledge, but these requirements severely limit scalability, privacy preservation, and efficient deployment. In this work, we explore how to eliminate such requirements in incremental learning approaches based on parameter isolation. We propose Low Interference Feature Extraction Subnetworks (LIFES), a method that learns one subnetwork per task and uses all of them concurrently at inference time. This design minimizes requirements, but it introduces new challenges. To formalize them, we break the catastrophic forgetting problem down into four distinct causes and address them with a novel lateral-classifier regularization, weight standardization, and subnetwork interference connection pruning. In particular, lateral classification shows very promising results, forcing the model to learn distributions with higher inter-class distance. With these components, LIFES achieves competitive results in standard task-agnostic scenarios, demonstrating the viability of this minimal-requirement perspective on parameter isolation. Finally, we discuss how future work can develop this paradigm further, and how the proposed strategies can complement other approaches.
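The core idea of the abstract, one subnetwork per task with all subnetworks run concurrently at task-agnostic inference time, can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the disjoint column masks, layer sizes, and per-task heads are all hypothetical choices made purely to show the parameter-isolation mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared 16-d input -> 8-d feature layer; each task owns a binary mask over W.
# Assumption (illustrative, not from the paper): the two masks cover disjoint
# halves of the columns, so the task subnetworks are parameter-isolated.
W = rng.standard_normal((8, 16))
masks = [np.zeros_like(W), np.zeros_like(W)]
masks[0][:, :8] = 1.0   # hypothetical subnetwork for task 0
masks[1][:, 8:] = 1.0   # hypothetical subnetwork for task 1

# One small classifier head per task (2 classes each, hypothetical sizes).
heads = [rng.standard_normal((8, 2)) * 0.1 for _ in range(2)]

def subnet_forward(x, task):
    """Features from one task's isolated subnetwork (masked linear + ReLU)."""
    return np.maximum((W * masks[task]) @ x, 0.0)

def predict(x):
    """Task-agnostic inference: run every task's subnetwork concurrently
    and pick the highest logit across all tasks' classes."""
    logits = np.concatenate([subnet_forward(x, t) @ heads[t] for t in range(2)])
    return int(np.argmax(logits))  # global class index in [0, 4)
```

Because no task ID is supplied at test time, the prediction in this sketch is simply the arg-max over the concatenated per-task logits; the paper's lateral-classifier regularization and interference pruning are aimed at making exactly this kind of cross-subnetwork comparison reliable.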
| Original language | English |
|---|---|
| Article number | 81 |
| Journal | Neural Processing Letters |
| Volume | 57 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - Oct 2025 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © The Author(s) 2025.
Keywords
- Catastrophic Forgetting
- Continual Learning
- Representational Overlap
- Task-agnostic