Saturday 22 March 2025
As machines learn, they often forget what they’ve learned before. This phenomenon is known as catastrophic forgetting, and it’s a major challenge for artificial intelligence (AI) researchers. In recent years, scientists have been developing algorithms that can learn continually from new data without forgetting the knowledge they’ve gained earlier.
A team of researchers has made significant progress in this area by proposing two novel measures of sequence transferability, which assess how well a sequence of tasks is learned and retained over time. These measures, Total Forward Transferability (TFT) and Total Reverse Transferability (TRT), can help identify the most effective order in which tasks should be learned.
The researchers demonstrated that their approach outperformed random task selection in both single-batch and multiple-batch settings. They also found that the sequence transferability measures were robust to changes in sample size and choice of base transferability metric.
Catastrophic forgetting occurs when a machine is trained on multiple tasks, one after another, without any special mechanism to retain previously learned knowledge. This can result in the model performing poorly on earlier tasks once it’s been trained on new ones. The researchers’ approach aims to mitigate this problem by identifying task sequences that are more likely to be retained and transferred over time.
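To make the problem concrete, here is a minimal sketch of forgetting with a simple linear classifier on synthetic data. The tasks, model, and numbers below are illustrative assumptions, not the paper’s experimental setup; the point is only that fitting a second task shifts the decision boundary and wipes out accuracy on the first.

```python
# A minimal illustration of catastrophic forgetting with a linear classifier.
# Task A and task B share the labels {0, 1} but place their classes in
# different regions of feature space, so fitting B overwrites the boundary
# that was learned for A.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(center0, center1, n=500):
    """Two Gaussian blobs labelled 0 and 1."""
    X = np.vstack([rng.normal(center0, 0.5, size=(n, 2)),
                   rng.normal(center1, 0.5, size=(n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

Xa, ya = make_task(center0=(-2, 0), center1=(0, 0))   # task A
Xb, yb = make_task(center0=(0, 0), center1=(2, 0))    # task B, shifted right

clf = SGDClassifier(random_state=0)
clf.partial_fit(Xa, ya, classes=[0, 1])               # learn task A first
acc_before = clf.score(Xa, ya)

for _ in range(50):                                   # then train on task B only
    clf.partial_fit(Xb, yb)
acc_after = clf.score(Xa, ya)

print(f"Task A accuracy before task B: {acc_before:.2f}")
print(f"Task A accuracy after  task B: {acc_after:.2f}")  # typically much lower
```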
The proposed measures look beyond how well the model performs on each individual task: they also capture how much training on one task helps (or hurts) performance on the others. This allows them to flag task orders that transfer well, even when the relationships between tasks are complex or nuanced.
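The paper’s exact formulas aren’t reproduced here, but the underlying idea can be sketched in a few lines. Assuming a pairwise score T[i, j] that estimates how much learning task i benefits a later task j (in practice such scores come from a base transferability metric, e.g. LEEP or LogME, computed on held-out data), one illustrative way to score and compare task orders:

```python
# A rough sketch of scoring task orders with a pairwise transferability
# matrix. The matrix values, the aggregation rule, and the brute-force
# search are illustrative assumptions, not the paper's definitions.
import itertools
import numpy as np

# T[i, j]: estimated benefit of having learned task i before task j.
T = np.array([[0.0, 0.6, 0.1],
              [0.2, 0.0, 0.7],
              [0.3, 0.4, 0.0]])

def sequence_score(order, T):
    """Sum the transfer from every earlier task to every later one."""
    return sum(T[i, j] for k, j in enumerate(order) for i in order[:k])

# Exhaustive search is shown only for clarity; it grows factorially, and a
# practical method would use the measures to guide selection instead.
best = max(itertools.permutations(range(len(T))),
           key=lambda order: sequence_score(order, T))
print("best order:", best, "score:", round(sequence_score(best, T), 2))
```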
The researchers tested their approach using several different algorithms and datasets, including some common replay-based methods. They found that their measures of sequence transferability were effective in identifying the most beneficial task orders for each algorithm, leading to improved performance across a range of scenarios.
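Replay-based methods, one of the baseline families just mentioned, fight forgetting by storing a small buffer of examples from finished tasks and mixing it into training on each new task. The sketch below is a generic version of that idea, not a specific method from the study; the buffer size, random sampling, and shared label set are all illustrative assumptions.

```python
# Generic experience replay: keep a small random sample from each finished
# task and mix it into every later task's updates. Assumes all tasks share
# the same label set; sizes are arbitrary illustrative choices.
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_with_replay(clf, tasks, classes, buffer_per_task=100, seed=0):
    rng = np.random.default_rng(seed)
    replay_X, replay_y = [], []
    for step, (X, y) in enumerate(tasks):
        X_mix = np.vstack([X] + replay_X)         # current task + replay buffer
        y_mix = np.concatenate([y] + replay_y)
        if step == 0:
            clf.partial_fit(X_mix, y_mix, classes=classes)  # first call needs classes
        else:
            clf.partial_fit(X_mix, y_mix)
        keep = rng.choice(len(X), size=min(buffer_per_task, len(X)), replace=False)
        replay_X.append(X[keep])                  # remember a slice of this task
        replay_y.append(y[keep])
    return clf

# Example usage with the two synthetic tasks from the forgetting sketch above:
# clf = train_with_replay(SGDClassifier(random_state=0), [(Xa, ya), (Xb, yb)], classes=[0, 1])
```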
This work has important implications for the development of AI systems that can learn continuously from new data without forgetting what they’ve learned earlier. It could enable machines to adapt more effectively to changing environments and tasks, making them more useful in a wide range of applications.
Cite this article: “New Measures of Sequence Transferability Help AI Systems Learn Continuously Without Forgetting”, The Science Archive, 2025.
Artificial Intelligence, Catastrophic Forgetting, Machine Learning, Sequence Transferability, Task Selection, Total Forward Transferability, Total Reverse Transferability, Model Performance, Generalization, Algorithm Development