CaBaFL: Asynchronous Federated Learning via Hierarchical Cache and Feature Balance

Citation Author(s):
Zeke Xia, East China Normal University
Submitted by:
Zeke Xia
Last updated:
Thu, 07/25/2024 - 23:03
DOI:
10.21227/0xy7-ej22
License:

Abstract 

Federated Learning (FL), a promising distributed machine learning paradigm, has been widely adopted in Artificial Intelligence of Things (AIoT) applications. However, the efficiency and inference capability of FL are seriously limited by the presence of stragglers and by data imbalance across massive AIoT devices, respectively. To address these challenges, we present a novel asynchronous FL approach named CaBaFL, which combines a hierarchical Cache-based aggregation mechanism with a feature Balance-guided device selection strategy. CaBaFL maintains multiple intermediate models simultaneously for local training. The hierarchical cache-based aggregation mechanism lets each intermediate model be trained on multiple devices, aligning training times and mitigating the straggler issue. Specifically, each intermediate model is stored in a low-level cache for local training; once it has been trained by a sufficient number of devices, it is moved to a high-level cache for aggregation. To address imbalanced data, the feature balance-guided device selection strategy uses the activation distribution as a metric, so that each intermediate model is trained across devices whose combined data distribution is balanced before aggregation. Experimental results show that, compared to state-of-the-art FL methods, CaBaFL achieves up to 9.26x training acceleration and 19.71% accuracy improvement.
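As a rough illustration of the two mechanisms described in the abstract, the sketch below shows how an intermediate model might be promoted from a low-level cache to a high-level cache once enough distinct devices have trained it, together with a simple balance-guided device pick. This is a minimal Python sketch under assumed names (IntermediateModel, HierarchicalCache, the device threshold, and the L2 imbalance metric are all hypothetical), not the authors' implementation.

    class IntermediateModel:
        """A model copy circulating among devices (hypothetical structure)."""
        def __init__(self, weights):
            self.weights = weights          # e.g. a flat list of parameters
            self.trained_devices = set()    # distinct devices that trained this copy

    class HierarchicalCache:
        """Two-level cache: the low level collects local updates; the high
        level holds models ready for aggregation (illustrative only)."""
        def __init__(self, required_devices):
            self.required_devices = required_devices  # the "sufficient" device count
            self.low_level = []
            self.high_level = []

        def add_model(self, model):
            self.low_level.append(model)

        def record_local_training(self, model, device_id):
            # A device finished one round of local training on this copy.
            model.trained_devices.add(device_id)
            # Promote once enough distinct devices have contributed.
            if len(model.trained_devices) >= self.required_devices:
                self.low_level.remove(model)
                self.high_level.append(model)

        def aggregate(self):
            # Plain element-wise mean over promoted models; the paper's
            # actual aggregation weighting may differ.
            assert self.high_level, "nothing ready to aggregate"
            k = len(self.high_level)
            dim = len(self.high_level[0].weights)
            mean = [sum(m.weights[i] for m in self.high_level) / k
                    for i in range(dim)]
            self.high_level.clear()
            return mean

    def pick_device(candidates, model_activation, device_activations):
        """Balance-guided pick (sketch): choose the device whose activation
        histogram moves the model's cumulative activation distribution
        closest to uniform; the L2 distance metric here is an assumption."""
        def imbalance(dist):
            target = 1.0 / len(dist)
            return sum((p - target) ** 2 for p in dist)
        def merged(dev):
            raw = [a + b for a, b in
                   zip(model_activation, device_activations[dev])]
            total = sum(raw)
            return [v / total for v in raw]
        return min(candidates, key=lambda d: imbalance(merged(d)))

For example, a cache built with required_devices=3 would hold a model in the low-level cache through three distinct device updates, then expose its averaged weights via aggregate().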

Instructions: 

The results are mainly contained in a .txt file with two columns, accuracy and time.
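
For reference, a file in this format can be loaded as follows (a minimal sketch; the file name results.txt and whitespace-delimited columns are assumptions):

    import numpy as np

    # Assumed layout: one row per logged point, columns = accuracy, time.
    data = np.loadtxt("results.txt")        # hypothetical file name
    accuracy, elapsed = data[:, 0], data[:, 1]
    print(f"final accuracy {accuracy[-1]:.2f} at time {elapsed[-1]:.1f}")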


Dataset Files

Files have not been uploaded for this dataset.