Continual Deep Learning through Unforgettable Past Practical Convolution

  • Muhammad Rehan Naeem Department of Computer Science, University of Engineering and Technology, Taxila Pakistan
  • Muhammad Munwar Iqbal Department of Computer Science, University of Engineering and Technology, Taxila Pakistan
  • Rashid Amin Department of Computer Science, University of Engineering and Technology, Taxila Pakistan
Keywords: Deep Neural Networks, traditional deep learning approaches, Practical Convolution, continual learning, MNIST-built binary classification


When trained sequentially on tasks, many machine-learning models forget how to perform previously learned tasks. This phenomenon, called catastrophic forgetting, is an essential challenge to address so that systems can learn continuously. In the face of unexpected circumstances, humans and animals can develop sophisticated predictive models that allow them to reason correctly and effectively about real-world phenomena, and they can adjust these models extremely rapidly. Because of the static, offline design of traditional deep learning methods, the viability of deep neural networks (DNNs) for data-stream problems still needs extensive research. Intelligent systems must continuously learn new skills, but traditional deep learning approaches suffer from catastrophic forgetting of the past. Recent works address this with weight regularization. Functional regularization is expected to work better, but, being computationally costly, it rarely does so in practice. In this article, we address this problem with a modern approach to functional regularization that uses a few memorable historical instances that are important for preventing forgetting. Our methodology enables weight-space training by using a Gaussian process formulation of deep networks to define both the unforgettable history and a practical prior. Our technique achieves state-of-the-art success on traditional metrics and opens a new path for continual learning in which regularization-based and memory-based approaches are naturally merged. Across the training runs, 10-200 unforgettable examples are stored. For the split setting, we use five MNIST-built binary classification tasks (0/1, 2/3, 4/5, 6/7, and 8/9) with a fully connected multi-head network with two hidden layers, each with 256 hidden units. Across all tasks, EUHC performs reliably better (except on the first task, where it is similar to the best). It also improves by a wide margin over the lower bound (training on 'distinct tasks' separately).
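The split-MNIST setup described above can be sketched as a shared trunk with one output head per task. The class name, the random (untrained) weights, and the NumPy formulation below are illustrative assumptions for exposition, not the authors' implementation:

```python
import numpy as np

# Five MNIST-built binary tasks: digits 0/1, 2/3, 4/5, 6/7, 8/9.
TASK_DIGITS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

class MultiHeadMLP:
    """Fully connected multi-head network: a shared trunk with two
    hidden layers of 256 units, plus one 2-way output head per task."""

    def __init__(self, in_dim=784, hidden=256, n_tasks=5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.05, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.05, (hidden, hidden))
        # One independent output head (hidden -> 2 logits) per task.
        self.heads = [rng.normal(0.0, 0.05, (hidden, 2)) for _ in range(n_tasks)]

    def forward(self, x, task_id):
        h = np.maximum(0.0, x @ self.w1)   # ReLU hidden layer 1
        h = np.maximum(0.0, h @ self.w2)   # ReLU hidden layer 2
        return h @ self.heads[task_id]     # task-specific binary logits

net = MultiHeadMLP()
batch = np.zeros((8, 784))                 # stand-in for flattened MNIST images
logits = net.forward(batch, task_id=0)
print(logits.shape)                        # (8, 2): one 2-way head per task
```

At test time, the task identity selects the head, which is why forgetting in this setting shows up in the shared trunk rather than in the heads.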
In particular, on tasks 4-6 EUHC matches the performance of a network trained jointly on all tasks, which means that forgetting is completely avoided there. Its overall performance is also the highest across all tasks. For instance, with 10 such memorable instances, EUHC's careful selection raises the average accuracy from 45% to 70%.
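The core idea of anchoring the model's function values on a few stored memorable examples can be illustrated as a simple functional regularizer. The quadratic penalty and all names below are a minimal sketch under that assumption, not the paper's exact Gaussian-process objective:

```python
import numpy as np

def functional_regularizer(predict_fn, memorable_x, past_outputs):
    """Penalize drift of the current model's outputs away from the
    outputs it produced on memorable past examples (a penalty in
    function space rather than weight space)."""
    current = predict_fn(memorable_x)
    return np.mean((current - past_outputs) ** 2)

# Toy linear "model" whose weights drift while learning a new task.
w_old = np.array([1.0, -2.0])                                  # after old task
w_new = np.array([1.5, -1.0])                                  # during new task
memorable_x = np.array([[0.5, 0.2], [0.1, 0.9], [0.3, 0.3]])   # stored examples

past_outputs = memorable_x @ w_old          # outputs recorded at end of old task
penalty = functional_regularizer(lambda x: x @ w_new, memorable_x, past_outputs)
print(round(penalty, 4))                    # 0.4358
```

Adding this penalty to the new task's loss discourages weight updates that change predictions on the memorable set, while leaving the model free to change its behavior elsewhere; this is what lets a memory-based and a regularization-based view merge naturally.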

How to Cite
M. Naeem, M. M. Iqbal, and R. Amin, “Continual Deep Learning through Unforgettable Past Practical Convolution”, jictra, pp. 20-29, Dec. 2021.