ADAMT: Adaptive Distributed Multi-Task Learning for Efficient Image Recognition in Mobile Ad-Hoc Networks


Academic Background

Distributed machine learning in Mobile Ad-hoc Networks (MANETs) faces significant challenges. These challenges primarily stem from the limited computational resources of devices, non-independent and identically distributed (Non-IID) data, and dynamic network topologies. Existing methods often rely on centralized coordination and stable network conditions, both of which are difficult to guarantee in practice. To address these issues, the authors propose ADAMT (Adaptive Distributed Multi-Task Learning), a framework designed to achieve efficient image recognition in resource-constrained mobile ad-hoc networks.

MANETs are decentralized networks in which devices connect autonomously and share information without any fixed infrastructure. This flexibility makes MANETs highly applicable in scenarios such as disaster recovery, military operations, and real-time intelligent transportation systems. However, these same characteristics pose substantial challenges for implementing efficient machine learning frameworks, especially under limited computational resources, bandwidth constraints, and dynamic network topologies.

Source of the Paper

This paper is co-authored by Jia Zhao, Wei Zhao, Yunan Zhai, Liyuan Zhang, and Yan Ding. They are affiliated with the School of Computer Technology and Engineering and the College of Artificial Intelligence Technology at Changchun Institute of Technology, the School of Electronics Engineering and Computer Science at Peking University, the School of Computer Science and Engineering at Changchun University of Technology, and the Education Examinations Authority of Jilin Province, respectively. The paper was published in 2025 in the journal Neural Networks, titled “ADAMT: Adaptive Distributed Multi-Task Learning for Efficient Image Recognition in Mobile Ad-hoc Networks.”

Research Process

1. Research Design and Framework

The core of the ADAMT framework lies in three key innovations:

  1. Feature Expansion Mechanism: enhances the expressiveness of local models by leveraging task-specific information.
  2. Deep Hashing Technique: enables efficient on-device retrieval and multi-task fusion.
  3. Adaptive Communication Strategy: dynamically adjusts the model updating process based on network conditions and node reliability.
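The paper summary does not give implementation details for the deep hashing component. As a minimal sketch, deep hashing typically binarizes a learned feature vector into a compact code so that retrieval reduces to cheap Hamming-distance comparisons on-device. The code length, thresholds, and toy database below are hypothetical, not taken from ADAMT:

```python
def hash_code(features, thresholds):
    """Binarize a real-valued feature vector into a compact binary code
    by per-dimension thresholding (a sign-style deep-hashing head)."""
    return tuple(1 if f > t else 0 for f, t in zip(features, thresholds))

def hamming(a, b):
    """Number of differing bits between two codes."""
    return sum(x != y for x, y in zip(a, b))

def retrieve(query_code, database, k=2):
    """Return the k items whose codes are closest to the query in Hamming space."""
    return sorted(database, key=lambda item: hamming(query_code, item[1]))[:k]

# Toy on-device database: (label, code) pairs from the same hashing head.
thresholds = [0.0] * 4
db = [
    ("cat", hash_code([0.9, -0.2, 0.4, -0.7], thresholds)),
    ("dog", hash_code([-0.5, 0.8, 0.1, 0.3], thresholds)),
    ("car", hash_code([0.7, -0.1, 0.6, -0.9], thresholds)),
]
query = hash_code([0.8, -0.3, 0.5, -0.6], thresholds)
top = retrieve(query, db, k=2)
```

Because comparisons are bitwise, this style of retrieval stays fast and memory-light even on the smartphone-class hardware used in the experiments.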

2. Experimental Setup

The experiments were conducted on a mobile ad-hoc network of 10 smartphones, each equipped with a Snapdragon™ 865 processor, 12GB of RAM, and 256GB of storage. The datasets include ImageNet-1k along with images of common objects collected from online academic databases. To validate the model's performance, the researchers assigned specific tasks to each device based on user usage characteristics and interests, and collected the corresponding image data.

3. Model Training and Optimization

The researchers first trained the proposed model in a semi-supervised manner on each user's local dataset for 100 iterations. To enhance robustness, data augmentation techniques such as random horizontal flipping, random cropping, and noise addition were applied to the input images. Training used an SGD optimizer with a momentum of 0.99, a mini-batch size of 256, an initial learning rate of 0.01, and a weight decay of 0.0005.
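The reported hyperparameters fully determine the local update rule. As a sketch, one SGD-with-momentum step with L2 weight decay folded into the gradient (the toy loss and scalar weight are illustrative, not from the paper) looks like:

```python
def sgd_momentum_step(w, v, grad, lr=0.01, momentum=0.99, weight_decay=0.0005):
    """One SGD update with momentum, using the hyperparameters reported
    in the paper: lr=0.01, momentum=0.99, weight decay=0.0005."""
    g = grad + weight_decay * w      # L2 regularization folded into the gradient
    v = momentum * v + g             # momentum buffer accumulates gradients
    w = w - lr * v                   # parameter update
    return w, v

# Toy example: minimize f(w) = w^2, whose gradient is 2w.
w, v = 1.0, 0.0
for _ in range(3):
    grad = 2 * w
    w, v = sgd_momentum_step(w, v, grad)
```

In a real deployment this update would be applied per-tensor by the framework's optimizer; the scalar version above only illustrates the arithmetic.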

4. Comparative Experiments

Researchers compared the ADAMT framework with several baseline models, including PatchConvNet, VAN-B4, TinyViT, VOLO, PP-ShiTu, and ReXNet. The comparative experiments demonstrated that ADAMT outperformed existing methods in terms of recognition accuracy, communication overhead, and convergence speed.

Key Results

1. Model Performance

Experimental results on the ImageNet dataset showed that ADAMT achieved a Top-1 accuracy of 0.867, significantly outperforming existing distributed learning methods. ADAMT also substantially reduced communication overhead and converged 2.69 times faster than traditional distributed SGD.

2. Adaptive Communication Strategy

ADAMT’s adaptive communication strategy effectively balances the trade-off between model performance and resource consumption. By dynamically adjusting the probability of communication with neighboring nodes, ADAMT performs exceptionally well in resource-constrained environments.

3. Comparative Experimental Results

During the iterative process of personalized task processing and overall task aggregation, ADAMT’s recognition accuracy continued to improve, significantly outperforming other models. Notably, TinyViT and VAN-B4 experienced a decline in recognition accuracy during iterations, indicating their inability to adapt to non-IID data scenarios.

Conclusion

The ADAMT framework provides an efficient and adaptive solution for distributed machine learning in mobile ad-hoc networks. By introducing the feature expansion mechanism, deep hashing technique, and adaptive communication strategy, ADAMT not only improves image recognition accuracy but also significantly reduces communication overhead and convergence time. This research paves the way for deploying advanced machine learning applications on edge devices, offering significant scientific value and practical potential.

Research Highlights

  1. Fully Decentralized Learning Framework: ADAMT does not rely on a central server, leveraging mobile computing resources for model improvements, making it particularly suitable for dynamic, infrastructure-free environments.
  2. Deep Hashing Feature Expansion Mechanism: Through deep hashing, ADAMT enhances the expressiveness of local models, effectively handling non-IID data distributions.
  3. Adaptive Communication Strategy: ADAMT’s dynamic communication strategy significantly reduces communication costs and power consumption, enabling the model to seamlessly adapt to the dynamic conditions of mobile ad-hoc networks.

Additional Valuable Information

Researchers also conducted ablation experiments to validate the performance improvements brought by the Noisy Student method and deep hashing algorithm. The results showed that the Noisy Student method significantly improved the model’s generalization ability, while the deep hashing algorithm excelled in storage and retrieval efficiency.
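The ablation only names the Noisy Student method; its mechanics are standard: a teacher pseudo-labels unlabeled data, and a student is trained on noise-perturbed copies of those inputs. The one-dimensional "classifier" below is a deliberately tiny stand-in for the real networks, intended only to show the pseudo-label-then-train-with-noise loop:

```python
import random

def teacher_predict(x):
    """Toy 'teacher': labels a scalar input by a fixed threshold at 0.5."""
    return 1 if x > 0.5 else 0

def noisy_student_round(unlabeled, rng, noise=0.05):
    """One Noisy Student round: the teacher pseudo-labels unlabeled inputs,
    then the student is 'trained' on noise-perturbed copies of them."""
    pseudo = [(x, teacher_predict(x)) for x in unlabeled]
    noised = [(x + rng.gauss(0, noise), y) for x, y in pseudo]
    # 'Train' the student: grid-search the threshold that best fits the labels.
    best_t, best_acc = 0.0, -1.0
    for t in [i / 100 for i in range(101)]:
        acc = sum((x > t) == (y == 1) for x, y in noised) / len(noised)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

rng = random.Random(42)
unlabeled = [rng.random() for _ in range(200)]
t, acc = noisy_student_round(unlabeled, rng)
```

The injected noise forces the student to learn a decision rule that is robust to input perturbations, which is the generalization benefit the ablation attributes to this method.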

In summary, ADAMT offers an efficient, adaptive solution for distributed machine learning in mobile ad-hoc networks, with broad application prospects and significant scientific value.