Overcoming Catastrophic Forgetting with DOLFIN: A Novel Approach to Continual Learning

Friday 21 November 2025

The quest for perpetual learning has long been a holy grail of artificial intelligence. While machines can learn and adapt quickly, they often struggle to retain old knowledge as new information is introduced. This phenomenon, known as catastrophic forgetting, has stumped researchers for decades.

Recently, a team of scientists has made significant strides in addressing this issue by developing a novel approach called DOLFIN. This method tackles the challenge of continual learning in federated settings, where multiple clients work together to learn from distributed data while preserving client privacy.

In traditional machine learning, models are trained on large datasets and then deployed for use. However, this approach has limitations when dealing with non-IID data (data that is not independent and identically distributed across clients), which is common in real-world scenarios. DOLFIN seeks to overcome these challenges by introducing a new architecture that combines the benefits of Vision Transformers (ViTs) with low-rank adapters.
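To make the low-rank adapter idea concrete, here is a minimal sketch in the style of LoRA-type adapters: a frozen pre-trained weight matrix is augmented with a trainable product of two small matrices. The dimensions and initialization below are illustrative assumptions, not DOLFIN's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hidden width and adapter rank (r << d), illustrative values

# Frozen pre-trained weight (e.g. one projection inside a ViT block).
W = rng.standard_normal((d, d))

# Low-rank adapter: only A and B are trained for the new task.
A = np.zeros((d, r))                       # zero-init so the adapter starts as a no-op
B = rng.standard_normal((r, d)) * 0.01

def forward(x):
    # Adapted layer: frozen weights plus the low-rank update A @ B.
    return x @ W + x @ (A @ B)

x = rng.standard_normal((1, d))
assert np.allclose(forward(x), x @ W)      # zero-init adapter changes nothing yet

# Trainable (and communicated) parameters shrink from d*d to 2*d*r.
print(d * d, "->", 2 * d * r)              # 4096 -> 512
```

The parameter count is what makes this attractive in a federated setting: clients exchange only the small factors `A` and `B`, not the full weight matrix.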

The key innovation lies in the way DOLFIN updates its parameters during training. Unlike traditional methods, which may overwrite previously learned knowledge, DOLFIN’s orthogonal low-rank adapters ensure that new information is integrated without disrupting existing understanding. This approach, known as Dual Gradient Projection Memory (DualGPM), enables the model to adapt to new tasks while retaining previously acquired knowledge.
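The core mechanism behind gradient projection memory can be sketched in a few lines: keep an orthonormal basis of directions that mattered for past tasks, and strip that component from each new gradient before applying it. This is a generic illustration of the projection step, not the authors' DualGPM implementation, and how the basis is built (typically from layer activations via SVD) is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8  # toy parameter dimension

# Orthonormal basis M spanning gradient directions important to past tasks.
M = np.linalg.qr(rng.standard_normal((d, 2)))[0]  # d x 2, orthonormal columns

def project_out(g, M):
    # Remove the component of g lying in span(M), keeping only the part
    # orthogonal to the stored past-task directions.
    return g - M @ (M.T @ g)

g = rng.standard_normal(d)        # raw gradient for the new task
g_new = project_out(g, M)

# The projected update has no component along the protected directions,
# so applying it cannot disturb what those directions encode.
assert np.allclose(M.T @ g_new, 0)
```

The stability/plasticity trade-off in the paper's title maps directly onto this picture: protecting more directions preserves old tasks (stability) but leaves less room for new updates (plasticity).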

To evaluate DOLFIN, researchers tested it on four benchmark datasets: CIFAR-100 and ImageNet-R with varying levels of Dirichlet heterogeneity, and ImageNet-A and CUB-200. The results demonstrate that DOLFIN outperforms six strong baselines in terms of final average accuracy, a metric that averages accuracy over all tasks after training on the last one, and so captures how much earlier knowledge the model retains.
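Final average accuracy is simple to compute: once training on the last task finishes, measure accuracy on every task seen so far and take the mean. The numbers below are made up for illustration.

```python
# Per-task accuracy measured after training on the final task (toy numbers).
acc_after_last_task = [0.81, 0.74, 0.69, 0.77]

# Final average accuracy: the mean over all tasks at the end of training.
final_average_accuracy = sum(acc_after_last_task) / len(acc_after_last_task)
print(round(final_average_accuracy, 4))  # 0.7525
```

A method that forgets badly will score well on the most recent task but poorly on earlier ones, dragging this average down.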

One of the most significant benefits of DOLFIN is its efficiency. By reducing the need for rehearsal buffers and minimizing communication overhead, this approach can learn from distributed data without compromising performance. This makes it an attractive solution for real-world applications, where data is often scattered across multiple clients.

While DOLFIN shows promise in addressing catastrophic forgetting, there are still challenges to overcome before it can be widely adopted. For instance, the method requires careful hyperparameter tuning and may not perform well on datasets with complex relationships between tasks. Nevertheless, this breakthrough represents a significant step forward in the quest for perpetual learning, and its potential applications are vast.

As researchers continue to refine DOLFIN and explore new approaches to continual learning, we can expect to see even more innovative solutions emerge.

Cite this article: “Overcoming Catastrophic Forgetting with DOLFIN: A Novel Approach to Continual Learning”, The Science Archive, 2025.

Artificial Intelligence, Catastrophic Forgetting, Continual Learning, DOLFIN, Federated Learning, Machine Learning, Perpetual Learning, Vision Transformers, Low-Rank Adapters, Dual Gradient Projection Memory

Reference: Omayma Moussadek, Riccardo Salami, Simone Calderara, “DOLFIN: Balancing Stability and Plasticity in Federated Continual Learning” (2025).
