Efficient Deep Learning Networks with Complementary DNNs and Memory Components

Friday 31 January 2025


Deep learning models, which are used in a wide range of applications including image and speech recognition, natural language processing, and autonomous vehicles, demand significant computational resources and energy. This has driven the development of smaller, more efficient deep neural networks (DNNs) that can be deployed on edge devices such as smartphones and smart home appliances.


However, these smaller DNNs often trade accuracy for efficiency, which can lead to poor performance in some applications. Researchers have therefore been developing methods that improve the accuracy of these smaller DNNs while also reducing their energy consumption.


Recently, a team of researchers proposed a novel approach that uses two complementary DNNs integrated with a memory component to reduce energy consumption and improve accuracy. The first DNN is used for initial classification, and if the confidence score is low, the second DNN is invoked to make a more accurate prediction. The memory component stores previous classifications and can recall them when similar inputs are encountered again.
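The cascade-plus-memory mechanism described above can be sketched roughly as follows. The paper does not publish this exact code, so the names (`PredictionMemory`, `cascaded_predict`), the cosine-similarity recall, the stub classifiers, and all thresholds are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class PredictionMemory:
    """Caches past inputs and their labels; recalls a label when a
    sufficiently similar input (by cosine similarity) reappears."""
    def __init__(self, sim_threshold=0.95):
        self.sim_threshold = sim_threshold
        self.keys = []    # stored feature vectors
        self.labels = []  # labels previously assigned to them

    def recall(self, x):
        for key, label in zip(self.keys, self.labels):
            sim = np.dot(x, key) / (np.linalg.norm(x) * np.linalg.norm(key))
            if sim >= self.sim_threshold:
                return label
        return None

    def store(self, x, label):
        self.keys.append(x)
        self.labels.append(label)

def cascaded_predict(x, small_dnn, large_dnn, memory, conf_threshold=0.9):
    """Check the memory first, then run the small DNN; invoke the large
    DNN only when the small DNN's top-class probability is too low."""
    cached = memory.recall(x)
    if cached is not None:
        return cached, "memory"
    probs = small_dnn(x)
    if probs.max() >= conf_threshold:
        label = int(probs.argmax())
        memory.store(x, label)
        return label, "small"
    probs = large_dnn(x)
    label = int(probs.argmax())
    memory.store(x, label)
    return label, "large"

# Stub classifiers standing in for real trained CNNs (hypothetical).
confident_small = lambda x: np.array([0.97, 0.03])
unsure_small = lambda x: np.array([0.55, 0.45])
large = lambda x: np.array([0.10, 0.90])
```

On a repeated input the memory short-circuits both networks, which is where much of the energy saving would come from on streams with recurring inputs.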


The researchers tested their approach on four datasets: CIFAR-10, ImageNet, Intel, and FashionMNIST. They found that it significantly reduced energy consumption while maintaining high accuracy. For example, on the CIFAR-10 dataset, the approach reduced energy consumption by 85.8% compared to a single large DNN.
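A back-of-the-envelope model shows why savings of this size are plausible for a cascade. The function and every number below are hypothetical and not taken from the paper; they only illustrate the arithmetic:

```python
def cascade_energy(e_small, e_large, p_confident, p_memory_hit=0.0):
    """Expected energy per input for a two-stage cascade: memory hits
    cost ~0, confidently classified inputs run only the small DNN,
    and the remainder run both networks."""
    p_small_only = (1 - p_memory_hit) * p_confident
    p_both = (1 - p_memory_hit) * (1 - p_confident)
    return p_small_only * e_small + p_both * (e_small + e_large)

# If the small DNN used 5% of the large DNN's energy and confidently
# resolved 90% of inputs, the saving versus always running the large
# DNN would be 1 - 0.15 = 85%:
saving = 1 - cascade_energy(0.05, 1.0, 0.9) / 1.0
```

The saving is dominated by how often the cheap stage suffices, which is why confidence thresholds and memory hit rates matter so much in practice.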


The researchers also evaluated the performance of their approach using different architectural pairs and found that it was consistent across different models. This suggests that the approach is robust and can be applied to a wide range of applications.


In addition to reducing energy consumption, the approach also improved accuracy. For example, on the ImageNet dataset, the approach achieved an accuracy of 80.9%, comparable to that of larger DNNs.


The researchers believe that their approach has significant implications for edge computing and IoT devices, where energy efficiency and accuracy are critical. They propose using this approach in applications such as smart home appliances, autonomous vehicles, and healthcare devices.


Overall, the researchers have developed a novel approach that uses complementary DNNs and memory components to reduce energy consumption and improve accuracy. This approach has significant implications for edge computing and IoT devices and could be used in a wide range of applications where energy efficiency and accuracy are critical.


Cite this article: “Efficient Deep Learning Networks with Complementary DNNs and Memory Components”, The Science Archive, 2025.


Deep Learning, Neural Networks, Edge Computing, IoT Devices, Energy Consumption, Memory Component, DNNs, Classification, Accuracy, Image Recognition


Reference: Michail Kinnas, John Violos, Ioannis Kompatsiaris, Symeon Papadopoulos, “Reducing Inference Energy Consumption Using Dual Complementary CNNs” (2024).

