Abstract
Leukemia diagnosis with medical imaging necessitates highly accurate and individualized models that uphold data privacy across institutions. This research proposes FedPerLoRA-Health, a communication-efficient federated learning framework that combines federated personalization and low-rank adaptation with EfficientNet architectures for personalized leukemia detection. The proposed PerFLR-EffNet algorithm retains the structural efficiency of the EfficientNet variants B0 and B2 used as backbone models, facilitating parameter-efficient updates and local personalization across diverse client datasets. Within this framework, personalized layers undergo local training, whereas LoRA-adapted global layers are disseminated to reduce communication overhead. The proposed method is assessed on a Blood Cells Cancer Acute Lymphoblastic Leukemia (ALL) dataset with classification metrics such as accuracy, precision, recall and F1-score, and federated learning metrics such as communication cost and convergence rate. The efficiency of the proposed model is analysed by comparing it with baseline models, namely centralized EfficientNet-B0 and EfficientNet-B2 without personalized federation. Experimental results indicate that PerFLR-EffNet attains a better average classification accuracy of 98.67% and also proves communication-efficient, reporting far fewer trainable parameters and an 88.12% reduction in communication overhead compared with the baseline models.
Introduction
One of the most prevalent and severe blood cancers in children and adolescents is Acute Lymphoblastic Leukemia (ALL). Better treatment and survival rates depend on early and precise diagnosis. Traditional diagnosis involves experienced pathologists manually examining blood smear images, which is time-consuming, resource-intensive, and subject to observer variability. Haematological cancer detection has adopted artificial intelligence (AI), notably deep learning (DL)-based image classification approaches, to meet the need for fast, accurate, and scalable diagnostics1. Modern lightweight convolutional neural networks (CNNs) such as EfficientNet have shown promising results in medical image classification owing to their high accuracy-to-parameter ratio. EfficientNet variants B0 and B2 can process images locally without internet connectivity or significant server infrastructure, making them ideal for edge devices2. However, traditional centralized training of such models raises data privacy concerns, especially in medical settings where regulations such as the GDPR and HIPAA protect patient data.
Federated Learning (FL) is a promising option that allows AI model training across decentralized data silos without transmitting raw data to a central server. For data sovereignty and privacy compliance, each institution (or client) trains a local model and shares only model updates with the central server3. Despite these benefits, FL techniques such as FedAvg face real-world deployment obstacles such as data heterogeneity and communication bottlenecks. Clinical data are non-IID because demographics, acquisition processes, and staining techniques vary among medical institutions, and global models may underperform on local client data because of this heterogeneity4. In addition, frequent model synchronization in FL can cause bandwidth constraints, particularly for large models relayed across restricted edge networks5. Standard federated techniques also lack personalization, which makes them less effective for individual clients6.
In order to bridge these gaps identified in standard federated approaches, this research aims to develop and evaluate FedPerLoRA-Health, a unique, communication-efficient and personalized federated learning system for accurate leukemia diagnosis from medical images in decentralized clinical settings. FedPer (Federated Personalization) supports client-specific adaptation7, Low-Rank Adaptation (LoRA) minimizes communication overhead8 during model updates, and EfficientNet variants B0 and B2 provide efficient, high-quality feature extraction. The B0 and B2 variants were chosen because higher variants would directly counteract the primary objective of this work, incurring high communication overhead and computational demands that make them less suitable. Using these components, this research addresses real-world federated medical learning difficulties, such as avoiding centralized data aggregation and handling heterogeneous medical institution data distributions, thereby protecting patient privacy and reducing edge-client communication. The novelty of this work lies in its dual personalization mechanism, in which only the personalized adapter updates, not the model weights, are shared with the global server. This distinguishes it from a simple integration of FedPer, which sends full model updates; of LoRA, which trains a single global model; and of a plain EfficientNet base.
Key contributions of this research include the following:

- A personalized framework, FedPerLoRA-Health, facilitating client-specific personalization through the layered separation provided by FedPer and LoRA-based parameter-efficient updates, maintaining accuracy while drastically reducing parameter transmission.

- A novel federated learning algorithm, PerFLR-EffNet, which integrates federated personalization with Low-Rank Adaptation and EfficientNet (B0 and B2) to achieve high diagnostic performance while preserving privacy across diverse clients.

- A communication-efficient federated approach that reduces communication overhead by transmitting only minimal model updates, suited to bandwidth-constrained clinical environments.
Related work
Recent advances in artificial intelligence, edge computing and federated learning have improved medical diagnosis, particularly for hematopoietic malignancies such as Acute Lymphoblastic Leukemia. Many studies have examined deep learning-based ALL classification and privacy-preserving machine learning algorithms for collaborative model training that do not compromise patient data. Meanwhile, personalized federated learning algorithms and parameter-efficient fine-tuning approaches such as LoRA have gained popularity in response to the non-IID data distributions and communication restrictions of distributed environments. This section reviews research in four key areas: ALL detection using deep learning architectures, federated learning applications in medical imaging, personalized federated frameworks such as FedPer and pFedMe, and LoRA for lightweight model adaptation. The review underscores the research gap of a missing cohesive solution that integrates EfficientNet, FedPer, and LoRA in a federated configuration for ALL prediction. For instance, a novel deep learning segmentation and classification framework used a custom UNet for leukemic cell segmentation and downstream classification9.
Deep learning has transformed automated hematologic oncology diagnosis, especially on blood smear image datasets for ALL. CNN-based models are used to extract high-level features from microscopic images10. EfficientNet variants, with compound scaling and higher parameter efficiency, perform well in medical imaging. Das and Meher created an EfficientNet-based classifier for ALL diagnosis that outperformed CNNs such as AlexNet and ResNet with over 99% accuracy11. Rahman et al. found that ResNet50 combined with nature-inspired algorithms for feature selection outperformed other CNN architectures in terms of blood cancer classification accuracy and processing efficiency12. Recent research confirms that EfficientNet-B3 handles class imbalance while improving model complexity and accuracy, making it suitable for mobile and edge healthcare devices13. A lightweight EfficientNet-B3 model was proposed for classifying ALL with fewer trainable parameters14. According to a recent study15, EfficientNet variants perform better in medical image classification tasks owing to their compound scaling method and lower processing complexity. Elsayed et al. examined bone marrow image-based deep learning models for ALL diagnosis through 2023; in several settings, specialized CNNs achieved up to 100% accuracy when enhanced with boosting methods and optimization strategies16.
FL is a decentralized machine learning method in which clients train a global model without sharing data, which benefits privacy-sensitive industries such as healthcare. In their comprehensive assessment of FL in medical imaging, Sohan et al. stressed its capacity to protect privacy while facilitating clinical institution-wide model construction3. Another study found that PPPML-HMI, a tailored FL framework for heterogeneous medical imaging, increases model robustness and privacy protection across institutions17. Traditional FL frameworks perform poorly with non-independent and identically distributed (non-IID) client data and high communication costs, making them unsuitable for healthcare18. A comprehensive survey19 highlighted FL's applicability in CT, MRI, and histopathology. Zhang et al. demonstrated that FL works in resource-constrained medical situations with minimal data and annotation20.
Mazid et al. introduced blockchain-driven customized federated learning for the Internet of Medical Things to provide decentralized trust, data security, and efficient model training across distributed medical devices21. Sharma et al. proposed a blockchain-enabled federated learning system for privacy-preserved electronic health records that protects sensitive patient data and supports collaborative model building across many institutions22. Both studies show how blockchain and federated learning can help make healthcare AI systems secure, transparent, and privacy-preserving.
Several Personalized Federated Learning (PFL) frameworks have been developed to address the shortcomings of FL in non-IID conditions. A novel method, pFedLT, performs layer-wise transformation to capture the diversity of data distributions among clients23. A bi-level optimization objective based on Moreau envelope regularization balances global knowledge and local adaptation in pFedMe24. PFL is expanded by the PFed-NS framework25, which dynamically reconfigures neural structures to meet each client's needs. Such architectures are important in healthcare, where inter-institutional variance is widespread. An improved version of FedPer shows improvement in personalization in terms of both accuracy and stability26. For smoother personalization, pFedMe uses Moreau envelope regularization and converges even under significant data heterogeneity. These methods move away from "one-size-fits-all" FL models toward client-adaptive learning architectures.
Some customized FL approaches handle client heterogeneity without full model updates. Local batch-normalization layers in FedBN prevent feature distribution shifts among clients, improving convergence in non-IID circumstances27. FedAMP employs an attention-based message-passing mechanism, allowing clients to prioritize updates from comparable peers and personalize them based on inter-client affinities28. FedRep allows personalization with minimal communication of shared features by learning a shared backbone globally, while each client maintains its own classifier head29. FedPerLoRA-Health freezes the majority of the EfficientNet backbone and personalizes it via lightweight LoRA adapters, lowering communication and computation while enabling fine-grained client adaptation for medical diagnosis.
LoRA is a lightweight fine-tuning strategy that inserts trainable rank-decomposed matrices into frozen model weights, thereby lowering the number of adjustable parameters. Popularized for large language models, LoRA is becoming increasingly common in vision-based applications. LoRA was applied to large vision models for lung nodule detection, attaining near state-of-the-art accuracy with only 3% of parameters updated30. In another study, RadLoRA improved latency and accuracy for real-time radiological image classification, which is critical for healthcare edge AI31. These studies demonstrate the potential of LoRA for communication-efficient federated training. Computerized Medical Imaging and Graphics reported a LoRA-enhanced RT-DETR for musculoskeletal ultrasound segmentation with quick convergence and fewer trainable parameters than prior models32. This strategy suits federated learning, where communication and memory efficiency are important. A summary of these related works is provided in Table 1.
EfficientNet has been extensively researched for ALL detection, and federated learning and personalized federated learning frameworks, such as FedPer and pFedMe, have shown success with heterogeneous client data. However, LoRA for parameter-efficient adaptation within a personalized federated framework for leukemia prediction has not been studied, and no previous work has combined EfficientNet, FedPer and LoRA in a leukemia-specific federated architecture. Hence, this paper integrates FedPer, LoRA and EfficientNet variants to detect leukemia. The proposed approach builds on EfficientNet, FL and LoRA, but differs from existing personalized or communication-efficient FL techniques in several respects. FedProx reduces statistical heterogeneity with a proximal term, and pFedMe personalizes clients with Moreau envelope regularization; both strategies require updating and transmitting large sets of model parameters, which increases communication overhead. FedPerLoRA-Health trains and communicates lightweight LoRA adapters while freezing the EfficientNet backbone, thereby decreasing computational and communication expenses. The proposed method uses EfficientNet to exploit strong feature extraction for medical imaging and systematically evaluates the trade-off between personalization and communication efficiency in leukemia diagnosis. Other LoRA-FL approaches mostly demonstrate parameter-efficient adaptation in general FL settings. Thus, FedPerLoRA-Health offers a domain-specific, communication-efficient, and tailored FL framework that surpasses FedProx, pFedMe, and other LoRA-FL techniques.
Materials and methods
This section begins with the overall research workflow, from data partitioning to model evaluation. Next, the EfficientNet-B0 and B2 backbone networks are described for efficient and scalable medical image analysis. The integration of LoRA modules into EfficientNet layers is then detailed to enable parameter-efficient learning. The following subsection details the personalized local training at each federated client. Finally, the client–server communication loop and the FedAdam optimizer for aggregating client updates are presented in the federated training process, and the method is formalized in the concluding algorithm.
Architecture and process flow of the FedPerLoRA-Health framework
The architecture and process of FedPerLoRA-Health for efficient and personalized federated learning in blood cancer classification from microscopic blood smear images are shown in Fig. 1. The process begins with data partitioning and preparation. A Dirichlet distribution simulates non-IID data across 10 clients using 80% training and 20% testing samples from the global dataset. All RGB images are pre-processed to 224 × 224 resolution (the timm library scales inputs to 260 × 260 for EfficientNet-B2) before passing through the initial convolutional layer, batch normalization, and Swish activation. Each client has its own lightweight LoRA modules in the convolutional layers and receives a classification head specific to its data distribution, allowing efficient and private personalization. All clients train their models with a focal loss function that is robust to class imbalance. After local training, only the LoRA module delta parameters are communicated to the server. These local deltas are collected by the FedAdam optimizer on the server side, which updates the global LoRA parameters while maintaining the common base model. FedAdam is preferred because of its enhanced convergence speed and stability, which are desirable in heterogeneous federated setups. Unlike FedAvg, which uses simple averaging and is unstable with non-IID data, FedAdam reduces the gradient variance between clients by using adaptive learning rates and momentum-based updates. Compared with FedProx, which adds a proximal term to control client drift but slows convergence, FedAdam converges faster and more smoothly by dynamically adjusting global updates based on historical gradients.
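As a concrete illustration of this partitioning step, the following minimal sketch draws per-class client proportions from a Dirichlet(α = 0.7) prior; the function name, the seed, and the assumption of a label array for the global training split are illustrative, not the paper's exact implementation.

```python
# Minimal sketch of non-IID partitioning across clients via a Dirichlet prior.
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.7, seed=42):
    """Split sample indices across clients, skewing class mixes per client."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Smaller alpha -> more heterogeneous per-class allocations.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, part in enumerate(np.split(cls_idx, cut_points)):
            client_indices[client].extend(part.tolist())
    return [np.array(idx) for idx in client_indices]
```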
In this process flow, the Conv2D layers are shared with the global server, and the classifier head is personalized at each client. Multiple federated rounds of client–server communication continue, and the final performance of the framework is tested on a held-out global test set. Each of the 10 clients receives a customized model, evaluated using the F1-score, accuracy, Cohen's Kappa, MCC, ROC, and PR curves. Grad-CAM visualizes class-specific attention in the input images to improve model explainability.
FedPerLoRA-Health framework
The proposed FedPerLoRA-Health framework integrates FedPer for personalization, LoRA adapters for communication efficiency, and EfficientNet backbones for lightweight deep learning, training models on non-IID blood smear images from hospitals without sharing raw data. Each client retains its private, non-IID blood smear dataset and trains only lightweight LoRA adapters and a private classification head on a frozen EfficientNet backbone (B0/B2), addressing class imbalance through focal loss and weighted sampling.
The federated server manages the shared EfficientNet base and the global LoRA adapter. Each communication round exchanges solely LoRA adapter deltas: clients receive the global adapters, train locally, and return updates that FedAdam aggregates. Quantization and pruning were not considered because pruning risks removing weights that are crucial for rare medical subtypes, and quantization introduces noise that may harm the feature extraction needed for classification. LoRA retains the model's capability and only adds an adaptive subspace for safe and efficient client-side tuning. Figure 2 shows how each client trains a customized model with a shared base, a global adapter, and a private classification head. Each FedPerLoRA-Health federated client, that is, a hospital or medical institution, holds a private, non-IID blood smear image dataset for leukemia diagnosis.
The local dataset for each client \(i\) is represented as \({D}_{i}={\left\{\left({x}_{j}^{\left(i\right)},{y}_{j}^{\left(i\right)}\right)\right\}}_{j=1}^{{n}_{i}}\)
where \({x}_{j}^{\left(i\right)}\) is an input image and \({y}_{j}^{\left(i\right)}\) is the leukemia label. The client uses a composite network of the shared EfficientNet base \({\theta }_{s}\) , the received LoRA adapters \({\theta }_{\text{lora}}^{\left(i\right)}\) , and the local private classification head \({\theta }_{p}^{\left(i\right)}\) to create a personalized model.
Backbone models: EfficientNet-B0 and B2
All clients in the proposed framework use a frozen shared base model, EfficientNet-B0 or B2. The FedPerLoRA-Health framework instantiates \({\theta }_{s}\) using a frozen EfficientNet architecture that extracts features for all federated clients. Specifically, EfficientNet-B0 and EfficientNet-B2 were adopted as candidate architectures for \({\theta }_{s}\), depending on the computing capability and data complexity. The EfficientNet models use a compound scaling coefficient \(\phi\) to uniformly scale the network depth \(d\), width \(w\), and input resolution \(r\). The scaling relations are defined as \(d = \alpha^{\phi } ,\;w = \beta^{\phi } ,\;r = \gamma^{\phi }\) such that \(\alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2\).
The baseline model, EfficientNet-B0, corresponds to \(\phi = 0\), whereas the scaled-up variant EfficientNet-B2 has greater \(w\) and \(d\), resulting in improved accuracy but higher resource utilization. Both variants freeze the EfficientNet model parameters, so \({\theta }_{s}\) remains static during local training and is shared across all clients. The foundation of EfficientNet is the MBConv block, a mobile inverted bottleneck structure with four stages, namely an expansion layer (1 × 1 convolution), a depthwise convolution (3 × 3 or 5 × 5), a squeeze-and-excitation (SE) module and a projection layer (1 × 1). Given an input tensor \(x\), the MBConv transformation (with a residual connection when input and output dimensions match) is as follows:

$$\text{MBConv}\left(x\right)={f}_{\text{proj}}\left(\text{SE}\left({f}_{\text{dw}}\left({f}_{\text{exp}}\left(x\right)\right)\right)\right)+x$$ (1)
During federated training, the clients do not update the base model \({\theta }_{s}\). Instead, they insert LoRA adapters inside the 1 × 1 convolutional layers of the MBConv blocks. Each client \(i\) optimizes its trainable LoRA adapter weights \({\theta }_{\text{lora}}^{\left(i\right)}\) locally using the client data \({D}_{i}\). B2 is a more expressive feature extractor than B0 (224 × 224 input size) because of its enhanced capacity and resolution (260 × 260 input size), although this requires more computation. The framework's versatility allows it to transition between B0 and B2 as \({\theta }_{s}\), depending on the application. Overall, employing EfficientNet-B0 or B2 as the frozen base \({\theta }_{s}\) with tailored LoRA modules \({\theta }_{\text{lora}}^{\left(i\right)}\) balances accuracy, communication efficiency and personalization for non-IID leukemia diagnosis.
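A minimal sketch of this frozen-backbone setup, assuming the timm library referenced earlier (the function and variable names are illustrative):

```python
# Sketch of instantiating the frozen shared base theta_s with timm.
import timm
import torch.nn as nn

def build_frozen_backbone(variant: str = "efficientnet_b0") -> nn.Module:
    """Return a pretrained EfficientNet feature extractor with frozen weights."""
    backbone = timm.create_model(variant, pretrained=True, num_classes=0)
    for param in backbone.parameters():
        param.requires_grad = False  # theta_s stays static during local training
    return backbone

# The B2 variant expects 260x260 inputs instead of 224x224.
shared_base = build_frozen_backbone("efficientnet_b2")
```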
Integrating LoRA with EfficientNet
The FedPerLoRA-Health system integrates LoRA modules into the frozen EfficientNet backbone \({\theta }_{s}\) to personalize models and improve communication efficiency. Each client \(i\) trains a small number of \({\theta }_{\text{lora}}^{\left(i\right)}\) parameters embedded in specific layers of the base model, instead of fine-tuning the entire set of parameters. In the EfficientNet backbone \({\theta }_{s}\), \(W\in {R}^{d\times k}\) is a weight matrix from a \(1 \times 1\) convolutional layer in a Mobile Inverted Bottleneck (MBConv) block. LoRA modifies this layer during training as given in Eq. 2.

$${W}{\prime}=W+AB$$ (2)

where \(A\in {R}^{d\times r}\) and \(B\in {R}^{r\times k}\) are trainable low-rank matrices with rank \(r\ll \text{min}\left(d,k\right)\), and \(W\) is frozen (part of \({\theta }_{s}\)). Thus, each client's effective parameter set is given by Eq. 3.

$${\uptheta }^{\left(i\right)}={\theta }_{s}\cup {\theta }_{\text{lora}}^{\left(i\right)}\cup {\theta }_{p}^{\left(i\right)}$$ (3)
Equation 4 denotes the personalized forward propagation for client \(i\), computed by the modified EfficientNet layer.

$${h}^{\left(i\right)}={f}_{{\uptheta }_{s}}\left(x\right)+{f}_{{\uptheta }_{\text{lora}}^{\left(i\right)}}\left(x\right)$$ (4)

where \({f}_{{\uptheta }_{s}}\left(\cdot \right)\) denotes the frozen operations and \({f}_{{\uptheta }_{\text{lora}}^{\left(i\right)}}\left(\cdot \right)\) represents the low-rank modifications of the LoRA modules. The communication overhead is much lower than that of full model updates. The server aggregates these updates using an adaptive optimizer such as FedAdam, and the modified global adapter weights \({\uptheta }_{\text{lora}}^{\left(t+1\right)}\) are redistributed for the following communication round. By inserting LoRA adapters into selected \(1\times 1\) convolutional layers of the MBConv blocks, the framework achieves parameter-efficient fine-tuning while maintaining the generalization ability of the shared EfficientNet backbone. This method allows each client to adapt the model to its local data distribution without changing the backbone, which is useful in federated learning with non-IID medical image datasets.
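To make the adapter placement concrete, the sketch below wraps a frozen 1 × 1 convolution with a rank-\(r\) update, mirroring Eqs. 2 and 4; the class name, initialization scheme, and scaling factor are illustrative assumptions rather than the paper's exact implementation.

```python
# Hedged sketch of a LoRA-augmented 1x1 convolution.
import torch
import torch.nn as nn

class LoRAConv1x1(nn.Module):
    """Frozen 1x1 conv W plus a trainable low-rank update AB (Eq. 2)."""

    def __init__(self, base_conv: nn.Conv2d, rank: int = 16, alpha: float = 1.0):
        super().__init__()
        self.base = base_conv
        self.base.weight.requires_grad = False  # W stays frozen inside theta_s
        d, k = base_conv.out_channels, base_conv.in_channels
        # A (d x r) and B (r x k), realized as two stacked 1x1 convolutions.
        self.lora_B = nn.Conv2d(k, rank, kernel_size=1, bias=False)
        self.lora_A = nn.Conv2d(rank, d, kernel_size=1, bias=False)
        nn.init.normal_(self.lora_B.weight, std=0.02)
        nn.init.zeros_(self.lora_A.weight)  # Delta W = 0 at the start of training
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # f_theta_s(x) + f_theta_lora(x), as in Eq. 4.
        return self.base(x) + self.scaling * self.lora_A(self.lora_B(x))
```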
Personalized training at each client
As shown in Fig. 3, each client's personalized model is trained only on its private, non-IID data. A weighted random sampler draws balanced mini-batches during training to reduce the class imbalance common in medical imaging datasets.
A data sample with label \({y}_{j}\) has a sampling probability calculated as given in Eq. 5.

$$P\left({y}_{j}=k\right)=\frac{1/{f}_{k}}{\sum_{{k}{\prime}}1/{f}_{{k}{\prime}}},\quad {f}_{k}=\frac{{n}_{k}}{{n}_{i}}$$ (5)

where \({f}_{k}\) is the frequency of class \(k\) in \({D}_{i}\) and \({n}_{k}\) is the number of samples in class \(k\).
To give minority class samples more weight, the probability \(P\left({y}_{j}\right)\) is inversely proportional to the class frequency. This ensures that local training sufficiently represents both majority and minority class samples without oversampling or data duplication. A class-aware focal loss down-weights easy samples and prioritizes hard-to-classify ones during optimization, strengthening models in clinical settings with underrepresented disease classes. Each client minimizes the focal loss in Eq. 6 to handle the class imbalance in the medical dataset.

$${\mathcal{L}}_{\text{focal}}=-\frac{1}{{n}_{i}}\sum_{j=1}^{{n}_{i}}\alpha {\left(1-\widehat{{y}_{j}^{\left(i\right)}}\right)}^{\upgamma }\text{log}\widehat{{y}_{j}^{\left(i\right)}}$$ (6)

where \(\alpha \in \left[ {0,1} \right]\) is the balancing factor for class frequencies, \(\upgamma \ge 0\) is the focusing parameter that reduces the loss for well-classified samples and \(\widehat{{y}_{j}^{\left(i\right)}}\) is the predicted probability for the true class. The local forward pass for sample \({x}_{j}\in {D}_{i}\) is given in Eq. 7.

$$\widehat{{y}_{j}^{\left(i\right)}}={f}_{{\uptheta }_{p}^{\left(i\right)}}\left({f}_{{\uptheta }_{s}+{\uptheta }_{\text{lora}}^{\left(i\right)}}\left({x}_{j}\right)\right)$$ (7)

where \({f}_{{\uptheta }_{s}+{\uptheta }_{\text{lora}}^{\left(i\right)}}\left({x}_{j}\right)\) denotes the output feature representation obtained from \({\theta }_{s}\) and \({\theta }_{\text{lora}}^{\left(i\right)}\), and \({f}_{{\uptheta }_{p}^{\left(i\right)}}\left(\cdot \right)\) maps features to the predicted class probability \(\widehat{{y}_{j}^{\left(i\right)}}\). For computational efficiency, backpropagation updates only \({\theta }_{\text{lora}}^{\left(i\right)}\) and \({\uptheta }_{p}^{\left(i\right)}\) during local optimization, leaving \({\theta }_{s}\) fixed. After local training, the client calculates the LoRA delta update as in Eq. 8:

$$\Delta {\uptheta }_{\text{lora}}^{\left(i\right)}={\uptheta }_{\text{lora}}^{\left(i\right)}-{\uptheta }_{\text{lora}}^{\left(t\right)}$$ (8)

where \({\theta }_{\text{lora}}^{\left(t\right)}\) is the global LoRA state at communication round \(t\).
This delta only captures LoRA module alterations, lowering the communication load relative to exchanging full model parameters, since only the adapter terms are transmitted, as expressed in Eq. 9.

$$\left|\Delta {\uptheta }_{\text{lora}}^{\left(i\right)}\right|\ll \left|{\uptheta }_{s}\right|+\left|{\uptheta }_{p}^{\left(i\right)}\right|$$ (9)

The client sends this delta to the central server while keeping its private data and classification head local. This lightweight, tailored training technique supports collaboration among distributed medical institutions. FedPerLoRA-Health is well suited to data silos and heterogeneity-prone fields such as leukemia diagnosis because it reduces communication overhead and protects model privacy. The modular framework is scalable and clinically flexible, allowing immediate customization to local populations.
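A condensed sketch of one client's local round under these equations follows; the helper names (local_round, lora_state), the batch size, and the optimizer choice are illustrative assumptions rather than the paper's code.

```python
# Hedged sketch: weighted sampling, focal loss, and the LoRA delta of Eq. 8.
import copy
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def lora_state(model):
    """Collect only LoRA adapter tensors (assumes 'lora' in parameter names)."""
    return {k: v.detach().clone()
            for k, v in model.named_parameters() if "lora" in k}

def local_round(model, dataset, sample_weights, gamma=2.0, alpha=0.25,
                lr=1e-3, epochs=1):
    received = copy.deepcopy(lora_state(model))  # theta_lora^(t) snapshot
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(dataset))
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    # Only LoRA adapters and the private head have requires_grad=True.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            probs = torch.softmax(model(x), dim=1)
            p_t = probs.gather(1, y.unsqueeze(1)).squeeze(1)  # true-class prob
            loss = (-alpha * (1 - p_t).pow(gamma) * torch.log(p_t + 1e-8)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    # Eq. 8: delta between updated and received LoRA weights.
    return {k: v - received[k] for k, v in lora_state(model).items()}
```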
Federated training workflow and aggregation
The workflow of the FedPerLoRA-Health system, depicted in Fig. 4, iteratively enables communication-efficient and privacy-preserving leukemia detection across different hospitals. At the start of each federated learning round, all participating hospitals receive the global LoRA adapters: lightweight, trainable modules attached to the shared EfficientNet backbone. After receiving these adapters, clients (for instance, Clients A and B) use the FedPer approach to merge the frozen shared base with their private classifier heads and train locally on their non-IID blood smear datasets. The optimization handles class imbalance using focal loss and weighted random sampling. Each client computes the LoRA adapter parameter delta (update) after training and sends only it to the server, keeping the raw data and model weights private.
The FedAdam optimizer, with its momentum terms, stabilizes convergence and combines all LoRA updates on the server. The server averages all client deltas, updates its internal optimizer state variables (moment estimates) and refines the global LoRA adapter parameters using the optimizer rule before redistributing the updated adapters to the clients in the next round, continuing the loop. This parallelized and lightweight communication loop allows individualized model training while decreasing communication overhead, making it ideal for medical situations with severe data privacy and bandwidth limitations.
In FedPerLoRA-Health, server-side orchestration, as shown in Fig. 5, manages communication-efficient and privacy-preserving collaborative training across numerous clients. The global LoRA parameters \({\uptheta }_{\text{lora}}^{\left(0\right)}\) and the FedAdam optimizer states for adaptive aggregation are initialized first. In each communication round, the server selects a cohort of participating clients using a predefined sampling scheme and broadcasts the current global LoRA parameters \({\theta }_{\text{lora}}^{\left(t\right)}\) to all selected clients for local training. After receiving the global parameters, each client fine-tunes on its private data and computes the LoRA adapter delta \(\Delta {\uptheta }_{\text{lora}}^{\left(i\right)}\), the difference between the updated and received LoRA weights, which is forwarded to the server. After collecting the local deltas from the participating clients, the FedAdam optimizer aggregates them. FedAdam scales updates and stabilizes training using first- and second-order moment estimates \(\left({m}_{t},{v}_{t}\right)\). The updated global parameters are calculated as in Eq. 10.

$${\uptheta }_{\text{lora}}^{\left(t+1\right)}={\uptheta }_{\text{lora}}^{\left(t\right)}+\eta \frac{{m}_{t}}{\sqrt{{v}_{t}}+\epsilon }$$ (10)

where \(\eta\) represents the learning rate, and \(\epsilon\) guarantees numerical stability. The selection, broadcast, local training, delta aggregation and parameter update steps continue for \(N\) communication cycles, after which the server finalizes the optimal global LoRA parameters \({\uptheta }_{\text{lora}}^{\left(N\right)}\) for deployment or customization. This federated training strategy preserves data locality and generalizes models across clients for scalable and compliant deployment in privacy-sensitive medical diagnostics. The proposed PerFLR-EffNet algorithm is given below, and its complexity is reported in Table 2.
The complexity of the proposed PerFLR-EffNet algorithm, including the server, client and overall training round time costs was analyzed. The server-side complexity is related to the number of clients (\(K\)) and the size of the LoRA parameters (\({\uptheta }_{\text{LoRA}}\)), indicating a small aggregation overhead \(\mathcal{O}\left(\text{K}\cdot {\uptheta }_{\text{LoRA}}\right)\). The cost for each client is determined to be \(\mathcal{O}\left(E\cdot {D}_{i}\cdot {\uptheta }_{s}\right)\) given by the number of local epochs (\(E\)), local dataset size (\({D}_{i}\)) and frozen backbone size (\({\uptheta }_{s}\)), which accounts for most of the computational work during training. The final complexity is \(\mathcal{O}\left(T\cdot K\cdot E\cdot {D}_{i}\cdot {\uptheta }_{s}\right)\) by combining the contributions from all rounds (\(T\)). LoRA’s parameter-efficient architecture keeps server costs low, whereas clients shoulder most of the computing load.
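A minimal sketch of the server-side aggregation step under Eq. 10 follows; the class name and the hyperparameter values follow the Adam convention and are assumptions, not the paper's exact settings.

```python
# Hedged sketch of server-side FedAdam aggregation over LoRA deltas (Eq. 10).
import torch

class FedAdamServer:
    def __init__(self, global_lora, lr=1e-2, beta1=0.9, beta2=0.99, eps=1e-3):
        self.theta = {k: v.clone() for k, v in global_lora.items()}
        self.m = {k: torch.zeros_like(v) for k, v in global_lora.items()}
        self.v = {k: torch.zeros_like(v) for k, v in global_lora.items()}
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps

    def aggregate(self, client_deltas):
        """client_deltas: one dict of LoRA deltas per participating client."""
        for k in self.theta:
            # Average the deltas across the cohort (the pseudo-gradient).
            delta = torch.stack([d[k] for d in client_deltas]).mean(dim=0)
            # Adam-style first- and second-moment updates.
            self.m[k] = self.beta1 * self.m[k] + (1 - self.beta1) * delta
            self.v[k] = self.beta2 * self.v[k] + (1 - self.beta2) * delta.pow(2)
            # Eq. 10: theta^(t+1) = theta^(t) + lr * m_t / (sqrt(v_t) + eps).
            self.theta[k] = (self.theta[k]
                             + self.lr * self.m[k] / (self.v[k].sqrt() + self.eps))
        return self.theta
```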
Experimental setup
Dataset description
This research used the publicly available "Blood Cells Cancer (ALL) dataset" of microscopic blood smear images curated for ALL identification. The collection consists of 3242 colour images representing the leukemia cell development phases and subtypes: Benign, Early Pre-B, Pre-B and Pro-B. Each image is expert-annotated to ensure supervised learning reliability. Staining, cell shape and background noise vary across the dataset, mimicking the non-IID conditions found in real-world medical data from different hospitals, which makes it a good testbed for federated and personalized learning systems such as FedPerLoRA-Health. To imitate federated environments, the collection is split among synthetic clients representing hospital facilities. Each client receives a subset of the data with class imbalance and domain shifts, replicating localized population distributions and diagnostic variability. This segmentation allows client-specific personalization performance to be measured under communication-efficient, parameter-restricted LoRA modules and federated personalization algorithms. All raw blood smear images are scaled to \(224 \times 224\) pixels for uniformity and interoperability with the EfficientNet backbone, as discussed in section "Architecture and process flow of the FedPerLoRA-health framework". The images were normalized with channel-wise mean and standard deviation values to match the EfficientNet pretraining settings. The dataset configuration is summarized in Table 3.
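A short sketch of this preprocessing pipeline is given below; the ImageNet normalization statistics are assumed as a stand-in for the channel-wise values mentioned above, and the function name is illustrative.

```python
# Hedged sketch of the image preprocessing described above.
from torchvision import transforms

IMAGENET_MEAN = (0.485, 0.456, 0.406)  # assumed pretraining statistics
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_preprocessing(image_size: int = 224) -> transforms.Compose:
    """Resize raw blood smear images and normalize to pretraining statistics."""
    return transforms.Compose([
        transforms.Resize((image_size, image_size)),  # 260 for EfficientNet-B2
        transforms.ToTensor(),
        transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
    ])
```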
For personalized federated learning, the client-wise data distribution is listed in Table 4.
The hyperparameters are configured as shown in Table 5 to balance accuracy and communication efficiency. They were empirically evaluated on a validation set and follow the federated learning and parameter-efficient fine-tuning literature. The Dirichlet concentration parameter (α = 0.7) creates realistic heterogeneity, while a single local epoch (E = 1) reduces client drift. The LoRA rank (r = 16) balances adaptation capacity and parameter efficiency. The focal loss settings (γ = 2.0, α = 0.25) were selected to address class imbalance, and the learning rates were defined for stable convergence in both server-side aggregation and client-side adaptation.
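For reference, the same settings can be consolidated into a plain configuration object; the keys below are illustrative, while the values are taken from the text.

```python
# Hyperparameters from Table 5 and the surrounding text, as a config dict.
FEDPERLORA_CONFIG = {
    "num_clients": 10,
    "dirichlet_alpha": 0.7,   # non-IID concentration parameter
    "local_epochs": 1,        # E = 1 to limit client drift
    "lora_rank": 16,          # r balances adaptation and parameter count
    "focal_gamma": 2.0,       # focusing parameter
    "focal_alpha": 0.25,      # class-balancing factor
    "input_size": {"efficientnet_b0": 224, "efficientnet_b2": 260},
}
```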
Evaluation metrics
Traditional classification metrics and Federated Learning (FL)-specific efficiency indicators are used to evaluate the FedPerLoRA-Health system. Leukemia detection accuracy and robustness across varied client datasets are measured using classification metrics, whereas FL-specific metrics assess federated optimization’s communication overhead and convergence behavior.
- Accuracy: The percentage of correctly classified leukemia cell images, indicating overall prediction accuracy.

$${\text{Accuracy}}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}$$ (11)

- Precision–Recall Curve: Shows the trade-off between precision (correct positive predictions) and recall (coverage of actual positives). Because blood smear datasets are imbalanced, the area under the PR curve is informative.

$${\text{Precision}} = \frac{TP}{{TP + FP}}$$ (12)

$${\text{Recall}}=\frac{TP}{TP+FN}$$ (13)

- ROC Curve: Plots the True Positive Rate (TPR) against the False Positive Rate (FPR) across thresholds. The classifier's AUC summarizes its class-discrimination ability.

$${\text{TPR}}=\frac{TP}{TP+FN}$$ (14)

$${\text{FPR}}=\frac{FP}{FP+TN}$$ (15)

- F1-Score: The harmonic mean of precision and recall, balancing both metrics.

$$F1\text{-score}=2\cdot \frac{{\text{Precision}}\cdot {\text{Recall}}}{{\text{Precision}}+{\text{Recall}}}$$ (16)

- Cohen's Kappa (κ): A measure of agreement between predicted and actual labels that accounts for chance, where \({p}_{o}\) is the observed accuracy and \({p}_{e}\) is the expected chance agreement in Eq. 17.

$$\kappa =\frac{{p}_{o}-{p}_{e}}{1-{p}_{e}}$$ (17)

- MCC: A balanced measure of classification quality, particularly under class imbalance, as given in Eq. 18.

$$\text{MCC}=\frac{TP\cdot TN-FP\cdot FN}{\sqrt{\left(TP+FP\right)\left(TP+FN\right)\left(TN+FP\right)\left(TN+FN\right)}}$$ (18)
Metrics for federated learning
Communication cost and convergence rate are used to assess FedPerLoRA-Health's federated efficiency and scalability.
- Cost per Round of Communication: Quantifies the data transmitted between server and clients in each communication round (MB), as given in Eq. 19. The communication footprint is much lower than that of full-model updates because only LoRA adapters and classification heads are sent.

$$\text{Cost per Round}=2\times \text{Model Size (MB)}\times \text{Clients per Round}$$ (19)

- Rounds to Convergence: The number of communication rounds needed for the global model to reach a convergence criterion (e.g., target accuracy or loss), as given in Eq. 20. Fewer rounds indicate more communication-efficient learning.

$$\text{Rounds to Convergence}=\text{min}\{r:{L}_{r}\le {L}_{\text{threshold}}\text{ or }{\text{Accuracy}}_{r}\ge {\text{target}}\}$$ (20)
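As a worked illustration of Eq. 19, assuming all 10 clients participate in a round and using the payload sizes reported later in Table 9 (2.53 MB for the LoRA adapters versus 20.17 MB for the full EfficientNet-B0 model):

$$\text{Cost per Round}_{\text{FedPer-B0}}=2\times 2.53\times 10=50.6\text{ MB},\quad \text{Cost per Round}_{\text{full B0}}=2\times 20.17\times 10=403.4\text{ MB}$$

a ratio of roughly 0.125, consistent with the ~87% per-round reduction reported in the results.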
Results and discussion
The proposed FedPerLoRA-Health framework is analysed with two variants of EfficientNet, namely EfficientNet-B0 and EfficientNet-B2. These models were applied both in a centralized manner and in personalized federation among the clients as FedPer-B0 and FedPer-B2, and their performance was analysed using the metrics listed in "Evaluation metrics". The confusion matrices of the centralized EfficientNet-B0 and EfficientNet-B2 are shown in Fig. 6, and the aggregated confusion matrices of the personalized federated models FedPer-B0 and FedPer-B2 are shown in Fig. 7.
To assess the centralized models' classification performance, confusion matrices for EfficientNet-B0 and B2 were examined. Both models discriminate among Benign, Early Pre-B, Pre-B and Pro-B (the latter three malignant). Both achieve good classification accuracy, but EfficientNet-B2 better reduces false positives in the crucial malignant categories, suggesting that it could aid early and accurate leukemia subtype identification in centralized diagnostic systems.
The confusion matrices in Fig. 7 show the effectiveness of FedPerLoRA-Health with deeper backbones and personalization layers. FedPer-B0 performs well for Pre-B and Pro-B, but confuses Early Pre-B with Benign and Pre-B, suggesting class imbalance or overlapping feature representations. FedPer-B2 improves upon FedPer-B0, especially in distinguishing the Early Pre-B class. Owing to its stronger backbone, EfficientNet-B2 improves feature representation and generalization, reducing misclassifications across all classes. Where FedPer-B0 struggles with class separation for Early Pre-B leukemic cells, FedPer-B2 dramatically reduces these mistakes, showing that more expressive models such as EfficientNet-B2 better capture small morphological changes across ALL subtypes.
The two centralized models were compared to provide a performance benchmark. Given their access to the full training dataset, the EfficientNet-B0 and B2 architectures perform well. All measures in Table 6 show a slight but consistent advantage for EfficientNet-B2: the F1-score, Matthews Correlation Coefficient (MCC), and Cohen's Kappa score increased from 0.9812 to 0.9903 as EfficientNet-B2 raised test accuracy from 98.31 to 99.38%. This incremental gain shows that EfficientNet-B2's architectural capacity allows it to capture more discriminative features even when trained on a large dataset. The proposed federated learning frameworks are therefore evaluated against the more rigorous gold-standard baseline, the centralized EfficientNet-B2.
A critical analysis of the FedPer-B0 and FedPer-B2 federated models in Table 7 shows that the backbone architecture's representational capability is crucial for success in a non-IID context. While functional, the FedPer-B0 architecture has a moderate average accuracy of 78.54% and severe performance volatility, with Matthews Correlation Coefficient (MCC) values ranging from 0.065 on Client 5 to 0.759 on Client 6; it cannot generalize across statistically diverse data silos. In contrast, FedPer-B2 reaches a state-of-the-art average accuracy of 98.67%. More importantly, it shows remarkable robustness, improving performance for all clients and narrowing the performance gap: Client 5's MCC increased more than six-fold to 0.405, while Client 2's reached 0.874. EfficientNet-B2's increased architectural capacity is not only beneficial but essential, allowing it to learn discriminative features that generalize across diverse client data distributions and overcome the main challenge of federated learning. FedPer-B2 exceeds FedPer-B0 for every client in terms of accuracy, F1-score, Kappa, and MCC. The improvement is substantial, with average accuracy increasing from 0.7854 to 0.9867 and the F1-score from 0.7560 to 0.9868, showing that FedPer-B2 personalization is more consistent and reliable across clients.
Table 8 shows that the personalized federated models performed better after switching from FedPer-B0 to FedPer-B2. FedPer-B0 had a mean accuracy of 0.7854, whereas FedPer-B2 greatly exceeded it with 0.9867, supported by tight confidence intervals and a significant p-value (< 0.001). FedPer-B0 had an F1-score of 0.7560 against FedPer-B2's 0.9868, again with narrow intervals and strong statistical significance. The accuracy confidence interval [0.9856, 0.9878] shows minimal variability and good model confidence. These findings indicate that FedPer-B2 delivers superior and statistically dependable performance over FedPer-B0, making it suitable for robust medical diagnostic applications.
The precision-recall curve in Fig. 8 shows that the centralized EfficientNet-B0 model performed well, with a near-perfect Average Precision (AP) score of 0.998. The model consistently had high precision across all recall levels (0.0–1.0), demonstrating its ability to properly identify positive cases with low false positives. The model’s excellent precision values, even at full recall, indicate wide coverage and dependable predictions, making it a promising categorization solution. These results demonstrate the model’s excellent discriminative capacity when trained on centralized data, making it an appropriate benchmark for federated learning comparisons.
A theoretically flawless AP of 1.000 for the centralized EfficientNet-B2 model, as shown in Fig. 9, indicates an optimal precision-recall balance across all classification thresholds. The model maintained maximal precision (100% positive predictive value) for all recall (sensitivity) levels from 0.0 to 1.0. This optimal result under controlled evaluation settings should be interpreted as the architecture's upper-bound capacity rather than expected real-world performance.
An AUC of 0.999 indicates excellent classification performance for the EfficientNet-B0 model as shown in Fig. 10. This result shows the near-flawless discrimination of the model, with its ROC curve approaching the ideal top-left corner. The model’s 0.999 AUC exceeded chance-level classification (AUC = 0.5), proving its ability to discriminate between positive and negative cases across all decision thresholds. The model’s high sensitivity and specificity make it reliable for clinical decision support. When trained on extensive centralized data, EfficientNet-B0 was highly effective for this diagnostic task. With an AUC of 1.000 in the test set as in Fig. 11, the EfficientNet-B2 model showed theoretically ideal discrimination, suggesting flawless class separation under the given assessment conditions.
The FedPer-B0 model's precision-recall curves in Fig. 12 vary widely by client, with AP scores ranging from 0.308 (Client 5) to 0.891 (Client 6). Some clients achieved high precision (AP > 0.77), others performed near-randomly, and most fell in a moderate range. This disparity may reflect architectural limitations, as the model cannot generalize robustly across varied data distributions. Owing to its dependence on local data, FedPer-B0 is unreliable for uniform rollout.
The PR curves of FedPer-B2 in Fig. 13 show strong and consistent performance across clients, with AP scores mostly in the high range (0.767–0.943). Clients 2 (0.943), 6 (0.937), and 9 (0.890) had excellent AP values (> 0.8). Only Client 5 had a somewhat lower AP (0.616), indicating a weakness in its data distribution. The close clustering of high AP values suggests that FedPer-B2 offers robust predictive dependability across clients, with precision remaining consistent even at higher recall levels (0.6–1.0). The model's ability to acquire generalizable features in federated settings and handle non-IID data distributions explains this consistently strong performance.
The FedPer-B0 model showed moderate to strong discrimination among clients, as shown in Fig. 14, with AUC scores ranging from 0.656 (Client 5) to 0.952 (Client 6). Most clients (7/10) achieved an AUC > 0.75, indicating good classification performance. However, there was significant variation: three clients (4, 6, 8) achieved excellent performance (AUC > 0.9), two (1, 5) performed marginally (AUC < 0.75), and the rest demonstrated solid mid-range performance (0.75–0.85). This shows that FedPer-B0 can perform well for some clients but inconsistently across data distributions, possibly owing to its limited ability to adapt to all non-IID data features in the federated context. The high-performing (Client 6: 0.952) and low-performing (Client 5: 0.656) cases demonstrate the difficulty of maintaining uniform effectiveness with this design in heterogeneous federated contexts.
FedPer-B2's ROC-AUC curves are depicted in Fig. 15. With AUC scores of 0.785–0.977, the FedPer-B2 model discriminated well across all clients. Nine of ten clients exceeded 0.9 AUC, with Clients 2 (0.977), 6 (0.974), and 9 (0.956) performing best. Client 5 exhibited clinically relevant discrimination despite a lower AUC (0.785). FedPer-B2 learns robust and generalizable features while protecting data privacy in federated contexts, as shown by its strong AUC scores across institutions. The model handles varied data distributions well, with most clients achieving near-perfect classification performance (AUC > 0.95).
The federated learning-based efficiency of the proposed FedPer-B0 and FedPer-B2 is analysed with the metrics given in section "Evaluation metrics" and shown in Tables 9 and 10. Alongside FedPer, there are several related approaches, such as FedProx, pFedMe and FedPer++. FedProx restricts local updates to handle statistical heterogeneity but does not allow client-specific customization, limiting its use in non-IID medical data contexts. pFedMe provides meta-learning-inspired regularization for personalization; however, its high computational overhead and slower convergence, caused by several local updates every round, make it unsuitable for large-scale or resource-constrained healthcare situations. FedPer++ adds personalization tactics but increases communication and storage requirements, which may be prohibitive in federated medical deployments with limited bandwidth and device capabilities. Owing to its communication efficiency, FedPer was used in the proposed work.
The injection of LoRA into the proposed federated framework notably improves communication efficiency, which is essential for federated learning in resource-constrained contexts. Table 9 shows how FedPer-B0 reduces the number of trainable parameters from 5,288,548 in the entire EfficientNet-B0 model to 664,368. A significant 87.44% decrease was observed, and the communication overhead decreased proportionally with the parameter efficiency. Lightweight 2.53 MB LoRA adapters were sent instead of the entire 20.17 MB model in each round of federated learning. This 87.44% reduction in transmission cost per round reduces the network bandwidth and energy usage for participating clients, making the proposed federated learning system more scalable and feasible.
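This figure follows directly from the reported parameter counts:

$$1-\frac{664{,}368}{5{,}288{,}548}\approx 1-0.1256=0.8744\;\left(87.44\%\right)$$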
Table 10 shows that the proposed FedPer-B2 reduces the computational and transmission overheads even further relative to the EfficientNet-B2 backbone. The FedPer-B2 model has 1,079,024 trainable parameters, 88.16% fewer than the centralized EfficientNet-B2 model. Communication costs reflect this efficiency: the data payload shrinks from 34.75 to 4.12 MB per round. This 88.16% decrease in communication burden is crucial, demonstrating that the FedPer framework allows clients with low network bandwidth to use advanced, high-performance models.
The convergence profiles of these strategies are depicted in Fig. 16. The architectures and training methodologies exhibited different convergence patterns in validation accuracy. Both centralized models (EfficientNet-B0 and B2) and FedPer-B2 fit the validation data with 100% accuracy, whereas FedPer-B0 peaked at 93.26%, revealing model capacity limitations in processing non-IID data distributions. FedPer-B2 matches the peak performance of its centralized counterpart, demonstrating that it adapts to federated settings better than FedPer-B0. The training loss curves, although not quantified here, are expected to show faster convergence for B2-based models than for B0 variants, a larger final loss for FedPer-B0, and stable optimization for FedPer-B2 that mimics centralized training dynamics. These findings show that architectural capacity is crucial for federated learning stability, as larger models (B2) better compensate for data heterogeneity and retain convergence qualities comparable to centralized training.
To highlight the contribution of each component of this work and to examine how the model components affect performance and communication efficiency, an ablation study was conducted; the results are shown in Table 11. The base EfficientNet-B0/B2 models lack federated or personalized components. FedPer + EfficientNet-B0/B2 combines FedPer with EfficientNet without the LoRA parameter-efficient fine-tuning technique. LoRA + EfficientNet-B0/B2 performs efficient parameter updates without the FedPer personalization layer, focusing on local fine-tuning. The proposed model, FedPer-B0/B2, combines personalization (FedPer) with efficient adaptation (LoRA) within the EfficientNet architecture. The EfficientNet-B0 and B2 baselines achieved notable accuracy (98–99%) but incurred substantial communication costs (20–35 MB). FedPer balances performance and scalability by decreasing communication costs while preserving near-baseline accuracy. However, LoRA alone (without FedPer) yields lower accuracy (72–75%) despite low communication costs, indicating that LoRA alone is inadequate for robust model generalization. FedPer + LoRA offers the optimal trade-off: for B0, the communication cost drops to 2.53 MB while accuracy declines to 78.5%, and the FedPer + LoRA + EfficientNet-B2 arrangement yields the best balance, with near-baseline accuracy (98.67%) and a communication cost of 4.12 MB, saving ~ 88% compared to the baseline. Regarding computation, the table shows that LoRA-based variants without FedPer completed training in 25 min, the fastest, whereas FedPer variants without LoRA were the most computationally expensive, with EfficientNet-B2 taking 75 min. The proposed FedPer + LoRA models balance computation and communication efficiency, with reduced computation time compared to pure FedPer models.
Figure 17 shows the accuracy-communication cost trade-offs for each model variant. The ablation analysis shows that the baselines perform well but are communication-heavy and unsuited to scaled federated learning, while LoRA alone reduces the cost but greatly reduces accuracy. FedPer combined with LoRA provides the best balance, particularly with EfficientNet-B2, delivering state-of-the-art accuracy with minimal communication overhead. This demonstrates the efficiency and resilience of the proposed method in federated medical imaging.
For this Early Pre-B ALL case, the Grad-CAM visualizations in Fig. 18 reveal fundamentally divergent diagnostic strategies between the models. Attention dispersed across non-specific cytoplasmic regions and background artifacts prevented the centralized model from identifying diagnostic nuclear features, such as lymphoblast morphology with a high nuclear-to-cytoplasmic ratio and a dispersed chromatin pattern, which pathologists use to identify this ALL subtype. In contrast, FedPer-B2 appropriately focused on the nuclear membranes, chromatin patterns, and scant basophilic cytoplasm of malignant lymphoblasts. Its focus on WHO-defined diagnostic criteria (2016 classification) suggests that federated training may improve hematopathological feature learning across institutions by decreasing the site-specific staining artifact biases that can affect centralized models. This instance shows how the data diversity of federated learning may improve morphological feature selection for rare leukemia subtypes, where centralized models overfit to institutional trends.
The attention maps show that FedPer-B0 and FedPer-B2 reach significantly different diagnoses for the Pro-B ALL instance in Fig. 19. Both models correctly identified malignancy, although FedPer-B0 misclassified the subtype as Pre-B ALL because of its focus on cytoplasmic traits rather than the nuclear features of Pro-B ALL (such as open chromatin and prominent nucleoli). FedPer-B2's correct classification was associated with careful attention to diagnostic nuclear characteristics, specifically the nuclear membrane abnormalities and chromatin distribution patterns pathognomonic for Pro-B ALL according to the WHO 2022 criteria. In federated settings, the B2 architecture's increased representational capacity allows more precise subtyping by focusing on diagnostically critical cellular features, whereas the B0 model's limited capacity leads to over-reliance on less specific morphological characteristics.
The t-SNE projection in Fig. 20 illustrates clinically relevant non-IID heterogeneity in the proposed federated learning configuration employing Dirichlet partitioning (α = 0.7). Client 2 had a substantial Pre-B skew (47.4%), and Client 9 was dominated by Pro-B cases (37.1%), resembling paediatric or adult leukemia facilities. Clients 4, 5, and 7 formed central clusters with balanced subtype distributions, indicating general-purpose hospitals, whereas Clients 0, 3, and 6 had higher Early Pre-B prevalence, indicating comparable patient demographics. These patterns support biologically realistic institution-level feature coherence. Large inter-cluster distances underscore the federated challenge of learning under distributional shifts, reflected in performance disparities (e.g., Client 5 AP = 0.616 vs. Client 6 AP = 0.937) when utilizing lightweight models such as FedPer-B0. FedPer-B2's increased alignment and robustness across dispersed clusters demonstrate its architectural benefits in addressing multi-institutional variance. The t-SNE visualization confirms the clinical realism and value of the setup as a rigorous testbed for personalized federated learning.
Conclusion
FedPerLoRA-Health, a unique and communication-efficient federated learning architecture for personalized leukemia diagnosis, was presented in this paper. The proposed method uses LoRA modules in the EfficientNet-B0 and B2 backbones to fine-tune tailored local models without compromising client data privacy. By freezing most model parameters and training only lightweight LoRA adapters and private classification heads, FedPerLoRA-Health minimizes the communication cost per round in real-world healthcare settings with bandwidth limits. FedPerLoRA-Health outperformed the baseline centralized and federated models in terms of convergence speed and communication efficiency on the Blood Cells Cancer (ALL) dataset. Private heads and LoRA adapters offer tailored training that captures client-specific data distributions, addressing non-IID data problems in medical institutions. Although the proposed model addresses the identified research gaps, it has a few limitations, such as untested real-world deployment feasibility and the absence of multi-institutional trials. While FedPerLoRA-Health has shown promising results in communication efficiency and tailored leukemia diagnosis, various practical extensions could improve its applicability and robustness. The ~ 88% reduction in communication overhead implies a proportionate reduction in energy usage; this research lays a foundation for communication efficiency, and future work will quantify the energy savings and hardware-level optimizations for clinical edge devices afforded by the parameter reduction. Personalization can be further increased by tailoring LoRA module complexity to client-specific hardware capabilities for effective deployment over a heterogeneous network of devices.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
El Alaoui, Y. et al. A review of artificial intelligence applications in hematology management: current practices and future prospects. J Med Internet Res 24(7), e36490. https://doi.org/10.2196/36490 (2022).
Tan, M. & Le, Q. V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019) 10691–10700 (2019). https://proceedings.mlr.press/v97/tan19a.html
Sohan, M. F. & Basalamah, A. A systematic review on federated learning in medical image analysis. IEEE Access 11, 28628–28644. https://doi.org/10.1109/ACCESS.2023.3260027 (2023).
Alhafiz, F. S. & Basuhail, A. A. Non-IID medical imaging data on COVID-19 in the federated learning framework: Impact and directions. COVID 4(12), 1985–2016. https://doi.org/10.3390/covid4120140 (2024).
Almanifi, O. R. A., Chow, C. O., Tham, M. L., Chuah, J. H. & Kanesan, J. Communication and computation efficiency in Federated Learning: A survey. Internet Things 22, 100742. https://doi.org/10.1016/j.iot.2023.100742 (2023).
Zhang, H., Li, C., Dai, W., Zou, J. & Xiong, H. Federated learning based on model discrepancy and variance reduction. IEEE Trans. Neural Networks Learn. Syst. 36(6), 10407–10421. https://doi.org/10.1109/TNNLS.2024.3517658 (2025).
Fani, E., Camoriano, R., Caputo, B. & Ciccone, M. Resource-efficient personalization in federated learning with closed-form classifiers. IEEE Access 13, 61928–61957. https://doi.org/10.1109/ACCESS.2025.3556587 (2025).
Mao, Y. et al. A survey on LoRA of large language models. Front Comput Sci 19(7), 197605. https://doi.org/10.1007/s11704-024-40663-9 (2025).
Alzahrani, A. K., Alsheikhy, A. A., Shawly, T., Azzahrani, A. & Said, Y. A novel deep learning segmentation and classification framework for leukemia diagnosis. Algorithms 16(12), 556. https://doi.org/10.3390/a16120556 (2023).
OybekKizi, R. F., Theodore Armand, T. P. & Kim, H. C. A review of deep learning techniques for leukemia cancer classification based on blood smear images. Appl Biosci 4(1), 9. https://doi.org/10.3390/applbiosci4010009 (2025).
Das, P. K. & Meher, S. An efficient deep convolutional neural Network based detection and classification of acute lymphoblastic Leukemia. Expert Syst. Appl. 183, 115311. https://doi.org/10.1016/j.eswa.2021.115311 (2021).
Rahman, W. et al. Multiclass blood cancer classification using deep CNN with optimized features. Array 18, 100292. https://doi.org/10.1016/j.array.2023.100292 (2023).
Soladoye, A. A., Olawade, D. B., Adeyanju, I. A., Adereni, T., Olagunju, K. M. & David-Olawade, C. Enhancing leukemia detection in medical imaging using deep transfer learning. Int. J. Med. Inform. 203, 106023. https://doi.org/10.1016/j.ijmedinf.2025.106023 (2025).
Batool, A. & Byun, Y. C. Lightweight EfficientNetB3 model based on depthwise separable convolutions for enhancing classification of leukemia white blood cell images. IEEE Access 11, 37203–37215. https://doi.org/10.1109/ACCESS.2023.3266511 (2023).
Rai, H. M. et al. Deep learning for leukemia classification: Performance analysis and challenges across multiple architectures. Fractal Fract. 9(6), 337. https://doi.org/10.3390/fractalfract9060337 (2025).
Elsayed, B. et al. Deep learning enhances acute lymphoblastic leukemia diagnosis and classification using bone marrow images. Front Oncol 13, 1330977. https://doi.org/10.3389/fonc.2023.1330977 (2023).
Zhou, J. et al. Personalized and privacy-preserving federated heterogeneous medical image analysis with PPPML-HMI. Comput. Biol. Med. 169, 107861. https://doi.org/10.1016/j.compbiomed.2023.107861 (2024).
Rehman, M. H. U., Pinaya, W. H. L., Nachev, P., Teo, J. T. & Cardoso, M. J. Federated learning for medical imaging radiology. Br. J. Radiol. 96(1150), 20220890. https://doi.org/10.1259/bjr.20220890 (2023).
Guan, H., Yap, P. T., Bozoki, A. & Liu, M. Federated learning for medical image analysis: A survey. Pattern Recognit 151, 110424. https://doi.org/10.1016/j.patcog.2024.110424 (2024).
Zhang, F. et al. Recent methodological advances in federated learning for healthcare. Patterns 5(6), 101006. https://doi.org/10.1016/j.patter.2024.101006 (2024).
Mazid, A., Kirmani, S., Abid, M. & Pawar, V. A secure and efficient framework for internet of medical things through blockchain driven customized federated learning. Cluster Comput. 28(4), 1–16. https://doi.org/10.1007/s10586-024-04896-4 (2025).
Nezhadsistani, N., Moayedian, N. S. & Stiller, B. Blockchain-enabled federated learning in healthcare: Survey and state-of-the-art. IEEE Access https://doi.org/10.1109/ACCESS.2025.3587345 (2025).
Tu, J., Huang, J., Yang, L. & Lin, W. Personalized federated learning with layer-wise feature transformation via meta-learning. ACM Trans Knowl Discov Data 18(4), 1–21. https://doi.org/10.1145/3638252 (2024).
Dinh, C. T., Tran, N. H. & Nguyen, T. D. Personalized federated learning with moreau envelopes. Adv Neural Inf Process Syst 33, 21394–21405 (2020).
Liu, Y., He, L., Zhang, Z. & Ren, S. PFed-NS: An adaptive personalized federated learning scheme through neural network segmentation. IEEE Trans. Comput. 74(6), 1936–1948. https://doi.org/10.1109/TC.2025.3547138 (2025).
Xu, J., Yan, Y. & Huang, S. L. FedPer++: Toward improved personalized federated learning on heterogeneous and imbalanced data. In Proc. International Joint Conference on Neural Networks (IJCNN 2022). https://doi.org/10.1109/IJCNN55064.2022.9892585 (2022).
Li, X., Jiang, M., Zhang, X., Kamp, M. & Dou, Q. FedBN: Federated learning on non-IID features via local batch normalization. In Proc. 9th International Conference on Learning Representations (ICLR 2021). https://github.com/med-air/FedBN (2021).
Huang, Y. et al. Personalized cross-silo federated learning on non-IID data. In Proc. 35th AAAI Conference on Artificial Intelligence 7865–7873. https://doi.org/10.1609/aaai.v35i9.16960 (2021).
Collins, L., Hassani, H., Mokhtari, A. & Shakkottai, S. Exploiting shared representations for personalized federated learning. In Proc. 38th International Conference on Machine Learning (PMLR 139) 2089–2099 (2021). https://proceedings.mlr.press/v139/collins21a.html
Veasey, B. P. & Amini, A. A. Low-rank adaptation of pre-trained large vision models for improved lung nodule malignancy classification. IEEE Open J. Eng. Med. Biol. 6, 296–304. https://doi.org/10.1109/OJEMB.2025.3530841 (2025).
Zheng, Y., Gao, Y., Liu, J., Yao, N. & Ji, Z. RadLoRA: A smart low-rank adaptive approach for radiological image classification. Multimed. Syst. https://doi.org/10.1007/S00530-025-01778-6 (2025).
Kao, J. P., Chung, Y. C., Hung, H. Y., Chen, C. P. & Chen, W. S. LoRA-enhanced RT-DETR: First low-rank adaptation based DETR for real-time full body anatomical structures identification in musculoskeletal ultrasound. Comput. Med. Imaging Graph. 124, 102583. https://doi.org/10.1016/j.compmedimag.2025.102583 (2025).
Funding
Open access funding provided by Vellore Institute of Technology. This research received no specific grant from any funding agency.
Author information
Contributions
All authors contributed to the study conception and design. The Introduction and Related Work sections were prepared by Keerthika P and Suresh P. The problem formulation, proposed methodology, and results were developed by Nitesh Kumar AR and Keerthika P. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Any actual, perceived, or potential conflicts between the authors’ institutional duties and their private and/or business interests have been fully disclosed in accordance with the requirements of the journal.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.