Introduction

The integration of 6G networks with the Industrial Internet of Things (IIoT) signifies a pivotal advancement in intelligent connectivity, ultra-low latency, and extensive data processing within industrial ecosystems. This synergy facilitates sophisticated automated systems, such as smart factories, intelligent energy grids, and interconnected transportation networks1,2. Nonetheless, these developments pose considerable cybersecurity challenges due to the diversity of IoT devices, resource limitations, and the evolving topologies of 6G networks3. Intrusion detection systems (IDS), such as the Transformer-based IDS-MTran, are critical for safeguarding these ecosystems against sophisticated cyber threats, including advanced persistent threats (APTs), distributed denial-of-service (DDoS) attacks, and botnets like Mirai, by leveraging multi-scale traffic features and attention mechanisms4. The necessity to safeguard data privacy, especially in industrial environments subject to rigorous regulations such as the General Data Protection Regulation (GDPR), significantly complicates the design of IDS5.

The IIoT, a distinct subset of the overarching IoT framework, emphasizes the interconnection of industrial equipment, including sensors, actuators, and controllers, within sectors such as manufacturing, energy, and logistics6. 6G networks, characterized by elevated bandwidth, negligible latency, and the capacity for extensive device connectivity, furnish the essential infrastructure for the scalability and efficiency of the IIoT7. These networks utilize advanced technologies such as artificial intelligence, machine learning, and edge computing to facilitate real-time data processing at the network periphery8. The expansion of linked devices and the intricacy of dynamic network topologies considerably enlarge the attack surface, rendering IIoT systems susceptible to new vulnerabilities9. Conventional intrusion detection methods, including signature-based and anomaly-based techniques, frequently fail to address contemporary threats such as APTs and zero-day attacks, owing to their dependence on preset patterns and restricted capacity to represent intricate inter-device relationships10. Centralized machine learning methodologies, which necessitate the consolidation of sensitive data on a single server, exacerbate conflicts with privacy mandates in industrial applications11.

The importance of creating resilient IDS for IIoT on 6G networks is multifaceted. The rapid proliferation of connected devices in industrial ecosystems expands the attack surface, rendering them susceptible to extensive assaults, such as those executed by botnets like Mirai, which can interrupt operations and result in significant financial losses12. The stringent requirements for low latency and high reliability in IIoT applications, such as smart manufacturing and energy grids, demand IDS solutions, like CNN-LSTM + PSO, that leverage optimized deep learning to detect threats like APTs and DDoS attacks in real time with minimal computational overhead13. Adherence to stringent privacy regulations, like GDPR and the California Consumer Privacy Act (CCPA), necessitates the safeguarding of sensitive data against unwanted access, highlighting the requirement for privacy-preserving IDS frameworks14. These characteristics underscore the essential significance of novel IDS solutions customized for the specific requirements of 6G-enabled IIoT environments.

Developing IDS for IIoT in 6G networks entails numerous problems and constraints. The diversity of IoT devices, differing in hardware, communication protocols, and processing capacities, complicates threat modeling and anomaly detection15. Resource limitations in IIoT devices, including low-power sensors, hinder the application of computationally demanding deep learning models16. The fluid characteristics of 6G network topologies, marked by frequent alterations and device mobility, pose challenges to the adaptability of conventional IDS to shifting traffic patterns17. Maintaining data privacy during model training and inference incurs computational expense, which may adversely affect detection performance18. These issues require a paradigm shift towards IDS that balance accuracy, scalability, and privacy preservation in resource-limited and dynamic environments19.

This research is motivated by the shortcomings of current IDS solutions for IIoT in 6G networks. Traditional methods falter in modeling intricate inter-device relationships and identifying covert threats such as APTs, whereas centralized alternatives provoke privacy issues due to the necessity of data aggregation. Specifically, it targets an IDS that concurrently attains elevated detection accuracy, scalability, and adherence to privacy regulations in dynamic, resource-limited settings. This research addresses these limitations by utilizing graph neural networks (GNNs) to model complex device interactions and homomorphic encryption (HE) to facilitate secure computations on encrypted data.

This project aims to create a privacy-preserving IDS framework for IIoT in 6G networks, employing GNNs and HE to ensure precise detection of advanced threats while maintaining data confidentiality. The principal innovation involves the integration of GNN and HE to tackle the challenges presented by 6G IIoT environments. The system utilizes GNNs to represent intricate inter-device interactions and temporal traffic patterns, facilitating accurate identification of threats such as APTs and DDoS attacks. It integrates HE to enable secure distributed training on encrypted data, assuring adherence to GDPR and obviating centralized data aggregation. A graph-based feature extraction pipeline utilizing Mutual Information for feature selection prioritizes essential attributes, including packet flow statistics and device connectivity measurements, thereby improving scalability and efficiency for resource-limited devices.

This research’s primary contributions are as follows:

  • A novel IDS utilizing GNNs to model intricate inter-device interactions and temporal traffic patterns, attaining high accuracy in identifying advanced threats, including APTs and DDoS attacks.

  • Integration of HE to facilitate secure distributed training and inference, safeguarding sensitive data and guaranteeing adherence to rigorous privacy requirements.

  • A framework tailored for IIoT contexts, characterized by low computational overhead and minimal memory, rendering it appropriate for resource-limited devices.

The document is structured as follows: Sect. 2 examines pertinent literature, highlighting recent progress in intrusion detection and privacy-preserving methodologies. Section 3 delineates the materials and techniques, encompassing dataset descriptions, data preparation, model design, and experimental configuration. Section 4 presents simulation outcomes and a comparative evaluation with leading methodologies, emphasizing practical application contexts. Section 5 concludes the study and outlines prospective research avenues, focusing on scalability improvements, federated learning incorporation, and real-time adaptation.

Related works

This section examines previous studies on intrusion detection and privacy-preserving mechanisms in IIoT and 6G networks, guided by the motivations and research gaps identified in the introduction.

GNN-based intrusion detection systems (IDS)

Intrusion detection in IIoT and 6G networks presents a significant challenge in cybersecurity, stemming from the intrinsic complexities of these systems, such as device heterogeneity, substantial data volumes, and the necessity for privacy preservation. Recent investigations have concentrated on utilizing deep learning, GNNs, and federated learning to bolster intrusion detection while safeguarding privacy. This section examines pertinent studies that propose novel methodologies to tackle these challenges, incorporating advanced techniques to enhance detection accuracy and scalability in IIoT and 6G contexts.

A significant study investigated the application of graph attention networks (GATs) for intrusion detection in IoT networks20. This methodology utilizes a graph-based technique to represent device interactions, employing the NSL-KDD dataset to assess critical metrics including accuracy, recall, and F1-score. A separate study introduced a federated learning framework for intrusion detection in IIoT contexts, utilizing transfer learning to mitigate data heterogeneity21.

A hybrid methodology integrating convolutional neural networks (CNNs) and gated recurrent units (GRUs) was proposed for intrusion detection in IoT networks22. Utilizing the FW-SMOTE approach to address imbalanced datasets, this model attained an exceptional accuracy of 99.60% on the IoTID20 dataset. The integration of CNN and GRU architectures adeptly captures intricate characteristics and temporal connections. A 2024 study similarly presented a hybrid model that integrates attention mechanisms, bidirectional GRUs (BiGRUs), and Inception-CNN for intrusion detection in IIoT23.

Within the domain of federated learning, a system integrating long short-term memory (LSTM) networks with a joint strategy optimization (JSO) algorithm was suggested for intrusion detection in federated IoT networks24. Another study presented a blockchain-based federated learning approach for secure data distribution in the IIoT25. A further model utilizes GNNs to identify spatio-temporal links among devices and implements a quantum-inspired firefly optimization technique for feature selection, thereby improving detection efficiency26.

Privacy-preserving intrusion detection systems

Recent advancements in intrusion detection methodologies inside automotive and CAN bus contexts have demonstrated significant development. A cross-chain intrusion detection technique (CCID-CAN) for autonomous vehicles was presented in study27, utilizing blockchain to improve traceability and trust. Research28 examined ID sequence similarity through Dynamic Time Warping (DTW) for CAN bus anomaly identification, whilst another study29 proposed a CNN-LSTM model with an attention mechanism to identify in-vehicle network abnormalities. While these studies illustrate effective detection in vehicular environments, they predominantly focus on in-vehicle communication and do not incorporate privacy-preserving encrypted inference.

The proposed GNN + HE framework emphasizes 6G-enabled IIoT contexts, integrating graph-based learning with homomorphic encryption to attain high accuracy and data secrecy. Moreover, numerous recent studies have investigated authentication and privacy protection in 5G-enabled vehicle fog computing settings. Study30 offered a Chebyshev polynomial-based authentication technique to protect emergency communication, whereas work31 proposed ANAA-Fog, an anonymous authentication framework that enhances privacy in automotive networks. Alternative lightweight methodologies32,33,34,35, such as ECA-VFog, CM-CPPA, and Provably Secure 5G Data Sharing, utilize certificateless encryption, chaotic maps, and the Chinese Remainder Theorem to attain conditional privacy preservation.

IIoT security in 5G/6G and dataset-driven developments

A 2025 study presented a federated learning framework utilizing GNNs for anomaly identification in time-series transactions inside the IIoT. A 2023 study introduced a deep learning-based intrusion detection system for IoT, employing a four-layer fully connected (FC) network36.

A 2024 study presented a lifelong learning-based intrusion detection system for the Internet of Vehicles (IoV) within the framework of 6G networks37. Another study employed GNNs for resource allocation in 6G and IoT networks, improving network security through the optimization of communication paths38.

Recent research has also explored the integration of differential privacy and graph embedding for encrypted traffic classification in 6G networks12,39,40. Furthermore, hybrid CNN–BiLSTM–DNN models for detecting cybersecurity threats in IoT networks have demonstrated robust performance41.

Recent contributions have introduced extensive datasets, hybrid AI-driven frameworks, and sustainable structures. For example, a deep AI-driven threat analysis for IIoT was introduced in42, whereas a hybrid deep learning-based threat intelligence framework for Industrial IoT was suggested in43. A unique intrusion detection approach for optimizing IoT security was presented in44, alongside the secure IIoT-enabled Industry 4.0 paradigm examined in45. Moreover, innovations in cyber-threat detection for smart infrastructure46, SDN-focused IoT security orchestration for adaptive 6G control planes47, and hybrid deep learning models for IoMT malware detection48 collectively underscore the increasing focus on dataset-driven assessment and real-time adaptability in industrial settings. These new additions bolster the experimental validity and rationale of our work.

Although previous studies have examined GNN-based and privacy-preserving IDS methodologies independently, few have successfully amalgamated them within 6G-enabled IIoT frameworks. Section 3 delineates the design of the proposed GNN–HE framework, which mitigates these limitations via a cohesive and scalable architecture.

Materials and methods

This section delineates the approach for the proposed privacy-preserving IDS designed for IIoT within 6G networks. The system incorporates GNNs to represent intricate inter-device relationships and employs HE for secure distributed training. The methodology includes dataset selection, data preparation, model architecture, algorithms, and experimental setup, tailored for resource-limited IIoT devices and dynamic 6G topologies. Figure 1 depicts the comprehensive architecture, emphasizing the sequential progression of data through preprocessing, feature extraction, feature selection, GNN-based detection, and HE-enabled secure computation.

Fig. 1

Working of proposed hybrid model.

Datasets

The suggested IDS is assessed utilizing three contemporary datasets, Edge-IIoTset, IoT-23, and MQTTset, chosen for their pertinence to IIoT security within 6G networks. These datasets encompass many attack scenarios, including DDoS, Mirai botnet, and reconnaissance attacks, as well as benign traffic. In contrast to conventional datasets such as CICIDS-2017, CICIDS-2018, and UNSW-NB15, they concentrate on IIoT-specific traffic and contemporary threats, rendering them optimal for assessing scalability and detection precision in resource-limited settings. Figure 2 illustrates the class distribution of Normal and Attack samples for the Edge-IIoTset, IoT-23, and MQTTset datasets.

Fig. 2

Class distribution.

Edge-IIoTset

The Edge-IIoTset dataset, intended for IIoT security research, consists of network traffic from a realistic testbed featuring IoT devices such as sensors and actuators. The dataset comprises 14 attack types, including DDoS, malware, and reconnaissance, featuring over 2 million samples (70% benign, 30% malicious). Flow-based attributes (e.g., packet size, inter-arrival time) and device connectivity measurements facilitate graph-based modeling. Notwithstanding its advantages, Edge-IIoTset exhibits drawbacks, such as class imbalance for specific attack types (e.g., reconnaissance) and restricted coverage of APTs. These are alleviated using oversampling approaches in the preprocessing phase.

Table 1 presents a comprehensive summary of the Edge-IIoTset dataset, enumerating the event types alongside their respective data record numbers. The dataset includes many attack scenarios, such as DDoS variants (UDP, ICMP, TCP, HTTP), SQL injection, and ransomware, in addition to typical traffic.

Table 1 The total numbers and different types of records in the Edge-IIoTset.

The diversity of attack kinds and the considerable sample size of the Edge-IIoTset dataset render it appropriate for assessing the suggested IDS’s capacity to identify intricate threats in IIoT contexts. The class imbalance, especially for attacks like Fingerprinting and MITM, is mitigated using preprocessing approaches such as oversampling to enable effective model training.

IoT-23

The IoT-23 dataset, created by Stratosphere Lab, comprises 23 scenarios (20 malicious, 3 benign) that document real-world traffic from devices such as cameras and smart appliances. It offers both packet-level and flow-level characteristics (e.g., packet rate, protocol type). IoT-23 facilitates anomaly identification and graph generation; nevertheless, its emphasis on particular IoT devices may hinder its applicability in industrial contexts. Feature selection therefore emphasizes device-agnostic characteristics.

Table 2 encapsulates the IoT-23 dataset, providing a comprehensive overview of label descriptions and sample quantities for diverse threat categories and benign traffic. The dataset encompasses notable attack scenarios, including the Mirai and Okiru botnets, along with horizontal port scanning, which are essential for evaluating IDS performance in IoT networks.

Table 2 A summary of the IoT-23 dataset.

MQTTset

The MQTTset dataset is a specific collection intended for IoT security research, concentrating on the Message Queuing Telemetry Transport (MQTT) protocol, which is extensively utilized in IIoT applications like industrial automation and smart homes because of its lightweight and low-power communication attributes. The dataset consists of 520,249 samples, including 420,136 (80.8%) benign and 100,113 (19.2%) malicious cases, encompassing five attack types: Bruteforce, MQTTFlood, MalariaDoS, Malformed, and SlowITe. These attacks exemplify prevalent threats in MQTT-based IIoT networks, including unauthorized access attempts and flooding attacks. Table 3 delineates the class distribution of the MQTTset dataset subsequent to the elimination of redundant samples.

Table 3 Class distribution of MQTTset dataset.

The MQTTset dataset, concentrating on MQTT-specific attacks, serves as an excellent supplement to Edge-IIoTset and IoT-23, facilitating the assessment of the proposed IDS in contexts that utilize lightweight IoT protocols common in 6G-enabled IIoT settings. Extracted features encompass message rate, message length, and connection frequency, which are appropriate for graph-based modeling. The dataset demonstrates considerable class imbalance, especially for infrequent assaults such as SlowITe and Malformed, which is addressed using oversampling methods during preprocessing, in alignment with the strategies utilized for Edge-IIoTset and IoT-23. Although MQTTset possesses advantages, its concentration on MQTT-specific traffic may restrict its applicability to other IoT protocols, which is mitigated by emphasizing protocol-agnostic characteristics in the feature selection process.

Data pre-processing

Data preprocessing ensures the appropriateness of the Edge-IIoTset, IoT-23, and MQTTset datasets for GNN-based detection and baseline evaluation against EfficientNetV3-SVM. Figure 1 demonstrates that the preprocessing pipeline consists of two phases: Data Cleaning and Normalization & Image Conversion. These measures address data quality issues, standardize characteristics, and enable multimodal analysis with image-based features, specifically designed for resource-constrained IIoT devices.

Data cleaning

The first stage focuses on improving data quality by addressing inconsistencies and noise in the raw datasets (Edge-IIoTset, IoT-23, MQTTset). The cleaning process includes:

  • Handling Missing Values: Missing values in numerical features (e.g., packet size, inter-arrival time, message rate) are imputed using the mean or median of the respective feature. For categorical features (e.g., protocol type), the mode is used. This ensures data completeness without introducing significant bias.

  • Removing Duplicates: Duplicate records, which may arise from redundant packet captures, are identified and removed, reducing dataset size and preventing model bias.

  • Outlier Removal: Outliers in numerical features are detected using the Interquartile Range (IQR) method:

$$IQR = Q3 - Q1$$
(1)
$$\text{Outlier Range} = \left[\,Q1 - 1.5 \cdot IQR,\; Q3 + 1.5 \cdot IQR\,\right]$$
(2)

In the outlier removal process, Q1 (first quartile) is defined as the value below which 25% of the data lies, while Q3 (third quartile) represents the value below which 75% of the data lies. Outliers are either removed or replaced with boundary values to maintain data integrity. This step is critical for handling extreme values in features like packet size or message rate, which could otherwise skew model training.

The cleaning method is tuned for efficiency, with a computational complexity of O(n log n) for outlier detection and O(n) for duplicate removal, where n represents the number of samples. The O(n log n) complexity originates from the sorting phase necessary for quartile calculation in the interquartile range (IQR) approach, whereas duplicate removal is accomplished by linear-time filtering. This guarantees scalability for extensive IIoT datasets and compatibility with resource-limited IIoT devices.
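The cleaning steps described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the packet-size values and the decision to drop (rather than clip) outliers are illustrative assumptions.

```python
import numpy as np

def impute_median(x):
    """Replace missing values (NaNs) in a numeric feature with its median."""
    x = np.asarray(x, dtype=float)
    return np.where(np.isnan(x), np.nanmedian(x), x)

def iqr_filter(x, k=1.5):
    """Keep only samples inside [Q1 - k*IQR, Q3 + k*IQR], per Eqs. (1)-(2)."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return x[(x >= lo) & (x <= hi)]

# Illustrative packet-size column with one missing value and one extreme outlier.
packet_size = np.array([100.0, 110.0, np.nan, 105.0, 98.0, 5000.0, 102.0])
cleaned = iqr_filter(impute_median(packet_size))  # NaN imputed, 5000.0 dropped
```

Median imputation and the IQR filter are both linear passes apart from the sort inside the percentile computation, matching the O(n log n) bound stated above.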

Normalization & image conversion

The second stage integrates feature normalization and picture transformation to ready the data for graph-based feature extraction and a baseline comparison with EfficientNetV3-SVM. This phase encompasses:

  • Normalization: Features such as packet size, packet rate, inter-arrival time, message rate, and connection frequency are normalized to the range [0,1] using min-max normalization:

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}$$
(3)

where \(x\) is the original feature value and \(x'\) is the normalized value. Normalization ensures consistent scales across features, improving GNN convergence and model stability. The computational complexity is \(O(n)\) per feature.

  • Image Conversion: To enable multimodal analysis and compatibility with the baseline EfficientNetV3-SVM model, network flows are converted into 64 × 64 grayscale images using Algorithm 1. Each network flow is represented as a matrix, with rows corresponding to time intervals within a predefined window ( T ) and columns representing features (e.g., packet size, packet rate, protocol type). The matrix is rescaled to 64 × 64 using bilinear interpolation, and normalized values are mapped to grayscale intensities (0–255). The 64 × 64 size balances computational efficiency and detail retention, as validated through ablation studies showing negligible accuracy loss compared to larger sizes (e.g., 128 × 128). This process is optimized for computational efficiency, reducing preprocessing time by approximately 30% compared to non-optimized methods.
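The normalization and flow-to-image steps can be sketched in plain NumPy. Since Algorithm 1 is not reproduced here, the bilinear resize below is a generic implementation under the stated assumptions (rows are time steps, columns are features, per-feature min-max scaling per Eq. 3); the window shape is illustrative.

```python
import numpy as np

def min_max_normalize(x):
    """Eq. (3): scale a feature to [0, 1]; constant features map to 0."""
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

def bilinear_resize(a, out_h, out_w):
    """Minimal bilinear interpolation for a 2-D array."""
    in_h, in_w = a.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = a[np.ix_(y0, x0)] * (1 - wx) + a[np.ix_(y0, x1)] * wx
    bot = a[np.ix_(y1, x0)] * (1 - wx) + a[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def flow_to_image(flow_matrix, size=64):
    """Map a (time-steps x features) flow matrix to a grayscale image."""
    norm = np.apply_along_axis(min_max_normalize, 0, flow_matrix)
    img = bilinear_resize(norm, size, size)
    return np.clip(img * 255, 0, 255).astype(np.uint8)

# Illustrative flow window: 10 time steps x 3 features.
rng = np.random.default_rng(0)
image = flow_to_image(rng.random((10, 3)))  # 64 x 64 uint8 grayscale image
```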

Algorithm 1

Numerical data conversion into image.

Feature extraction and selection

Subsequent to preprocessing, graph-based feature extraction creates a directed graph that illustrates devices and their communication relationships. Features extracted comprise:

  • Device Features: Packet flow statistics (e.g., average packet size, packet rate, inter-arrival time) and image-based features from Algorithm 1.

  • Connection Features: Connectivity metrics (e.g., interaction frequency, latency).

  • Graph-level Features: Global metrics (e.g., network density, clustering coefficient).

To enhance computational efficiency50,51, a Mutual Information (MI)-based feature selection technique is applied. MI quantifies the dependency between features and the target variable (benign or malicious) using:

$$MI(X,Y) = \sum_{x \in X} \sum_{y \in Y} p(x,y)\,\log\left(\frac{p(x,y)}{p(x)\,p(y)}\right)$$
(4)

where \(p(x,y)\) is the joint probability distribution, and \(p(x)\), \(p(y)\) are the marginal probabilities. Features with high MI scores (e.g., packet flow rate, connectivity strength) are selected, reducing the feature space by approximately 40%.
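Equation (4) can be computed directly from empirical frequencies for discrete features, as the sketch below shows (in nats, i.e., natural log). The selection threshold used by the paper is not reproduced here; in practice one would rank features by their MI score with the benign/malicious label.

```python
import numpy as np

def mutual_information(x, y):
    """Eq. (4): MI between two discrete variables, estimated from counts."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))   # joint probability
            if p_xy == 0:
                continue                            # 0 * log(...) term vanishes
            p_x = np.mean(x == xv)                  # marginal of X
            p_y = np.mean(y == yv)                  # marginal of Y
            mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

# A feature identical to a balanced binary label carries maximal
# information (ln 2); a constant feature carries none.
label = np.array([0, 0, 1, 1])
print(mutual_information(label, label))        # ln 2, approx. 0.693
print(mutual_information(np.zeros(4), label))  # 0.0
```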

Proposed model

The suggested privacy-preserving IDS for the IIoT in 6G networks incorporates GNNs and HE to tackle the issues posed by resource-constrained devices and dynamic network topologies. Figure 1 depicts the architecture's data processing stages: raw data from Edge-IIoTset, IoT-23, and MQTTset undergoes cleaning to eliminate inconsistencies, followed by normalization and conversion into 64 × 64 grayscale images for multimodal analysis. A directed graph is created to derive device, connection, and graph-level features, subsequently refined via Mutual Information-based feature selection to diminish dimensionality. The GNN analyzes the graph to identify complex threats, including APTs, while HE guarantees privacy-preserving distributed training and inference. Algorithm 2 summarizes the comprehensive IDS framework.

Algorithm 2

Privacy-preserving IDS for IIoT in 6G networks.

Graph neural network (GNN)

The GNN component is engineered to identify complex threats, including APTs, by analyzing inter-device interactions inside IIoT networks. The architecture analyzes a directed graph that depicts devices and their communication channels, utilizing attributes such as packet statistics and image-based features from the preparation phase.

The directed communication graph G = (V, E) is constructed from IIoT network traffic logs using a fixed sliding time window of T = 5 s and a stride of 2 s. Each node \(v_i \in V\) signifies an IIoT device or endpoint (identified by its IP-MAC pair), whereas each directed edge \(e_{ij} \in E\) indicates a communication flow from \(v_i\) to \(v_j\). An edge is formed when the average transmission rate between devices exceeds a threshold of τ = 10³ bytes/s during the designated window. Edge weights represent normalized byte rates, defined as:

$$w_{ij} = \frac{\text{bytes}_{ij}}{\max(\text{bytes})}$$
(5)

Following the definition of edge weights, the parameters are specified as follows: \(\text{bytes}_{ij}\) denotes the aggregate number of bytes transmitted from device \(v_i\) to device \(v_j\) within the designated time window \(T\); \(T\) is the fixed sliding-window duration, established at 5 s; and \(\max(\text{bytes})\) indicates the maximum total bytes transmitted across all edges within the graph over the window \(T\), serving as a normalization factor to standardize edge weights.

In dynamic graph modeling, traffic logs are divided using a sliding window, resulting in a temporal series of graph snapshots {Gₜ} that reflect time-dependent communication patterns. The GNN layers then process these graph snapshots for message passing and intrusion detection. The GNN comprises several layers: an input layer that integrates device and connection attributes, two graph convolutional layers that refine features via message passing to capture neighborhood information, a global pooling layer that consolidates features into a graph-level representation, and a fully connected layer with a softmax activation for binary or multi-class classification (benign versus malicious). Algorithm 3 delineates the GNN-based intrusion detection methodology.
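The windowed graph construction above can be sketched as follows. The threshold τ and window length T follow the text; the flow-record tuple format and device names are illustrative assumptions.

```python
from collections import defaultdict

T = 5.0    # window length in seconds, as stated in the text
TAU = 1e3  # edge threshold in bytes/s, as stated in the text

def build_graph(flows, window_start):
    """Build one directed snapshot G_t from (time, src, dst, bytes) records.

    Returns {(src, dst): normalized_byte_rate} for edges whose average
    transmission rate over the window exceeds TAU, per Eq. (5).
    """
    byte_counts = defaultdict(float)
    for t, src, dst, nbytes in flows:
        if window_start <= t < window_start + T:
            byte_counts[(src, dst)] += nbytes
    # Keep edges above the rate threshold, then normalize by the maximum.
    edges = {e: b for e, b in byte_counts.items() if b / T > TAU}
    if not edges:
        return {}
    max_bytes = max(edges.values())
    return {e: b / max_bytes for e, b in edges.items()}

flows = [
    (0.5, "dev_a", "dev_b", 8_000),  # a->b: 12 000 B total -> 2400 B/s, kept
    (1.2, "dev_a", "dev_b", 4_000),
    (2.0, "dev_b", "dev_c", 6_000),  # b->c: 1200 B/s, kept
    (3.1, "dev_c", "dev_a", 2_000),  # c->a: 400 B/s, below tau, dropped
]
snapshot = build_graph(flows, window_start=0.0)
```

Sliding the window forward by the 2 s stride and repeating yields the temporal series of snapshots {Gₜ} described above.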

  • Input Graph: The input is a graph \(G=(V,E)\), where each node \(v_i \in V\) is associated with a feature vector \(x_i\) representing device-specific attributes (e.g., packet statistics). Edges \(e_{ij} \in E\) are weighted based on communication frequency or data volume.

  • Graph Convolution Layers: Two graph convolutional layers update node features through message passing. For a node \(v_i\), the feature update in the \(l\)-th layer is defined as:

$$h_i^{(l+1)} = \sigma\left(W^{(l)} \cdot \text{AGGREGATE}\left(\left\{h_j^{(l)} : j \in \mathcal{N}(i)\right\}\right) + B^{(l)} h_i^{(l)}\right)$$
(6)

where \(h_i^{(l)}\) is the feature vector of node \(v_i\) at layer \(l\), \(\mathcal{N}(i)\) is the set of neighboring nodes, \(\text{AGGREGATE}\) is a mean aggregation function, \(W^{(l)}\) and \(B^{(l)}\) are learnable parameters, and \(\sigma\) is the ReLU activation function.

  • Pooling and Aggregation: A global pooling layer consolidates node embeddings into a graph-level representation using a mean or sum operation:

$$h_G = \text{POOL}\left(\left\{h_i^{(L)} : i \in V\right\}\right)$$
(7)

where \(h_G\) is the graph embedding and \(\text{POOL}\) is the aggregation function.

  • Fully Connected Layer: A fully connected layer transforms the graph embedding into classification features, followed by a softmax layer for binary or multi-class classification (benign versus malicious).
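The forward pass of Eqs. (6)-(7) can be sketched in plain NumPy. This shows only the dataflow and tensor shapes; the random weights, hidden width, and adjacency below are illustrative, not trained parameters.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def gcn_layer(H, neighbors, W, B):
    """Eq. (6): h_i' = ReLU(W . mean({h_j : j in N(i)}) + B . h_i)."""
    out = np.zeros((H.shape[0], W.shape[0]))
    for i in range(H.shape[0]):
        nbrs = neighbors.get(i, [])
        agg = H[nbrs].mean(axis=0) if nbrs else np.zeros(H.shape[1])
        out[i] = relu(W @ agg + B @ H[i])
    return out

def classify_graph(H, neighbors, params):
    (W1, B1), (W2, B2), (Wc, bc) = params
    H = gcn_layer(H, neighbors, W1, B1)  # graph convolution, layer 1
    H = gcn_layer(H, neighbors, W2, B2)  # graph convolution, layer 2
    h_g = H.mean(axis=0)                 # Eq. (7): global mean pooling
    logits = Wc @ h_g + bc               # fully connected head
    e = np.exp(logits - logits.max())    # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(1)
H = rng.random((4, 8))                   # 4 devices, 8 features each
neighbors = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
params = [(rng.random((16, 8)), rng.random((16, 8))),
          (rng.random((16, 16)), rng.random((16, 16))),
          (rng.random((2, 16)), rng.random(2))]
probs = classify_graph(H, neighbors, params)  # [p_benign, p_malicious]
```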

Algorithm 3

GNN-based intrusion detection.

The suggested intrusion detection framework utilizes a dual-layer Graph Convolutional Network (GCN) architecture. The selection of two graph convolutional layers was empirically established to strike an optimal balance between expressive capability and computational efficiency. A single-layer configuration demonstrated inadequate feature propagation and a restricted ability to capture higher-order dependencies among IIoT devices, while deeper architectures (three or more layers) led to over-smoothing and heightened inference latency, conditions unfavorable for real-time processing on edge hardware.

The mean aggregation function was selected after a comparative assessment with max and attention-based aggregation methods. Mean aggregation exhibited consistent convergence characteristics and diminished variance across several training iterations, while incurring lower computing expenses compared to attention methods, which entail significant expenditures in encrypted or resource-limited settings. This design decision guarantees that the model attains both efficient training and compliance with the homomorphic encryption-based privacy-preserving framework.

The total computing complexity of GNN-based inference is about O(M·L·N log N), where M denotes the number of IIoT nodes, L signifies the number of graph convolutional layers, and N indicates the polynomial modulus degree utilized in the encrypted computation. Each convolutional step entails a homomorphic matrix–vector multiplication with a complexity of O(N log N), in accordance with the Number Theoretic Transform (NTT) utilized in CKKS encoding. This formulation indicates that the inference cost increases linearly with the number of devices M and the layer depth L, hence confirming the framework’s scalability for extensive dispersed IIoT systems.

Homomorphic encryption (HE)

The HE component facilitates privacy-preserving distributed training and inference, obviating the necessity for centralized data aggregation in IIoT networks. The procedure encrypts sensitive attributes utilizing the CKKS algorithm, executes matrix operations (e.g., convolution) on encrypted data, and facilitates distributed training across devices by calculating and aggregating local gradients. Encrypted predictions are produced during inference, with only authorized entities permitted to decrypt the outcomes into final classifications (benign or malicious). This method guarantees data security and interoperability with resource-limited IIoT devices in 6G settings. Algorithm 4 outlines the HE-enabled distributed training procedure.

  • Data Encryption: Data, including node features, edge weights, and labels, are encrypted using the CKKS (Cheon-Kim-Kim-Song) scheme, which supports approximate arithmetic for floating-point operations. The encryption process is defined as:

$$\:c=\text{Enc}\left({h}_{G},pk\right)$$
(8)

where ( \(\:c\) ) is the ciphertext, ( \(\:{h}_{G}\) ) is the plaintext (e.g., the node feature vector), and ( \(\:pk\) ) is the public key.

  • Secure Computation: Matrix operations, such as matrix multiplication and convolution, are performed directly on encrypted data. The encrypted convolution operation is:

$$\:{c}_{\text{out}}=\text{Eval}\left(W\cdot\:{c}_{\text{in}},evk\right)$$
(9)

where \(\:\left({c}_{\text{in}}\right)\) is the encrypted input, ( W ) is the plaintext weight matrix, and ( \(\:evk\) ) is the public evaluation key; homomorphic evaluation does not require the secret key.

  • Distributed Training: The GNN model is trained in a distributed manner across IIoT devices. Each device computes local gradients on encrypted data and shares encrypted updates. The global model parameters are updated using:

$$\:{{\uptheta\:}}_{\text{global}}=\text{Aggregate}\left(\text{Dec}\left({c}_{{{\uptheta\:}}_{i}},sk\right):i\in\:\text{Devices}\right)$$
(10)

where \(\:\left({c}_{{{\uptheta\:}}_{i}}\right)\) is the encrypted gradient from device ( i ).

  • Encrypted Predictions: During inference, the trained model generates encrypted predictions, which are decrypted only by authorized parties to produce the final classification (benign or malicious).

Algorithm 4: HE-enabled distributed training.
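As a conceptual illustration of the Enc/Eval/Dec flow in Eqs. (8)–(10), the following toy sketch substitutes a tiny Paillier-style additively homomorphic scheme for CKKS (an intentional simplification: CKKS operates on encoded polynomial approximations of real vectors, whereas this scheme supports only integer addition and uses primes far too small to be secure):

```python
import math, random

# Toy Paillier keypair (tiny primes -- illustration only, NOT secure)
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)   # private exponent
g = n + 1                      # standard public generator

def L(x):                      # Paillier's L function
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m):                    # Eq. (8): c = Enc(m, pk), pk = (n, g)
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):                    # Dec(c, sk), sk = (lam, mu)
    return (L(pow(c, lam, n2)) * mu) % n

# Eq. (10): each device encrypts a local gradient; the aggregator
# multiplies ciphertexts (plaintext addition) without ever decrypting.
local_gradients = [3, 5, 7]            # one toy integer per device
agg = 1
for grad in local_gradients:
    agg = (agg * enc(grad)) % n2       # homomorphic addition
print(dec(agg))  # 15 -- the sum of the plaintext gradients
```

Only the holder of the secret key recovers the aggregated value, mirroring the property that intermediate gradients never appear in the clear at the aggregator.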

The suggested approach operates under an honest-but-curious server model, wherein cloud or edge aggregators execute computations correctly but may attempt to infer sensitive information. Clients (IIoT devices) are trusted to encrypt local data with public keys prior to transmission. Private keys are securely retained on the respective devices, preventing decryption by external parties. Communication metadata, such as packet size and timing, is safeguarded using transport-layer encryption, whereas homomorphic encryption guarantees that all computations on feature and gradient data take place within the encrypted domain.

Leakage channels are confined to model outputs, which are decrypted solely by authorized nodes that hold the private key. This design complies with GDPR/CCPA mandates by guaranteeing data reduction, confidentiality, and purpose limitation.

Simulation results and discussion

This section presents the evaluation and analysis of the proposed privacy-preserving IDS for IIoT in 6G networks, integrating GNNs and HE. The performance is assessed using the Edge-IIoTset, IoT-23, and MQTTset datasets, with comparisons against four state-of-the-art methods: EfficientNetV3-SVM10, LSTM49, IDS-MTran (a Transformer-based model)4, and a hybrid CNN-LSTM with Particle Swarm Optimization (CNN-LSTM + PSO)13. The evaluation focuses on detection accuracy, computational efficiency, and privacy guarantees, ensuring suitability for resource-constrained IIoT environments. The section is structured as follows: simulation parameters, quantitative results for each dataset, comparative analysis, and a discussion of practical applications and future directions.

Simulation parameters

The simulations were performed in a regulated IIoT setting to replicate the dynamic and resource-limited characteristics of 6G networks. Table 4 delineates the essential parameters employed in the tests, encompassing hardware specifications, dataset setups, model hyperparameters, and HE settings.

Table 4 Simulation parameters for IDS Evaluation.

The parameters were selected to balance computational efficiency and detection performance, ensuring compatibility with resource-constrained IIoT devices. The feature selection process reduced the dimensionality of the feature space, enhancing scalability, while the CKKS encryption scheme was optimized for low-latency operations.

To ensure consistency and equitable assessment, all baseline and proposed models were trained and evaluated using identical dataset partitions (70% training, 15% validation, and 15% testing) and preprocessing protocols, which encompassed feature normalization, mutual information-driven feature selection, and grayscale image conversion for baseline CNNs.

To guarantee equitable and consistent performance among all assessed models, hyperparameter optimization was methodically executed utilizing a grid-search methodology over critical parameters, including learning rate {1e − 2, 1e − 3, 1e − 4}, batch size {32, 64, 128}, and dropout {0.2, 0.3, 0.5}. The best configuration (learning rate = 0.001, batch size = 64, dropout = 0.3) was determined based on validation accuracy and F1-score. The Adam optimizer with a weight decay of 5 × 10⁻⁴ was employed in all experiments, and early stopping with a patience of 15 epochs was applied to prevent overfitting. These configurations guaranteed uniform convergence and dependable comparability among various IDS architectures under constant experimental conditions.
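The grid search described above can be sketched as follows, with a hypothetical `validation_f1` scoring hook standing in for the full train-and-validate loop (the hook's scoring formula is a placeholder, not the actual experiment):

```python
from itertools import product

def validation_f1(lr, batch_size, dropout):
    """Stand-in scorer: in the real experiments this would train the
    IDS with the given hyperparameters and return validation F1.
    The formula below just peaks at lr=1e-3, dropout=0.3."""
    return 1.0 - abs(lr - 1e-3) - abs(dropout - 0.3) / 10

grid = {
    "lr": [1e-2, 1e-3, 1e-4],
    "batch_size": [32, 64, 128],
    "dropout": [0.2, 0.3, 0.5],
}

# Exhaustively score all 27 configurations and keep the best one
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda cfg: validation_f1(**cfg),
)
print(best)  # picks lr=1e-3 and dropout=0.3 under the stand-in scorer
```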

Following the establishment of the simulation environment and model parameters, the next part delineates the dataset distributions utilized for training, validation, and testing.

Dataset distribution

Figure 3 depicts the distribution of training, validation, and test samples for the Edge-IIoTset, IoT-23, and MQTTset datasets.

Fig. 3: Distribution of training, validation, and test sets in the Edge-IIoTset, IoT-23, and MQTTset datasets.

Following the definition of the datasets, the subsequent phase assesses the proposed GNN–HE model and juxtaposes it with various cutting-edge baselines across several IIoT datasets.

Experimental results

This section assesses the GNN + HE model in comparison to four baseline approaches using the Edge-IIoTset, MQTTset, and IoT-23 datasets.

Quantitative results for Edge-IIoTset

Table 5 displays the performance metrics of the proposed GNN + HE model alongside four baseline approaches on the Edge-IIoTset dataset, assessed on the test set.

Table 5 Performance metrics on Edge-IIoTset dataset.

The GNN + HE model surpasses all baseline models, with 99.1% accuracy, 98.7% F1-score, 0.6% false positive rate, and the minimal error metrics (MSE: 0.0087, RMSE: 0.093, MAE: 0.0087). Figure 4 presents the confusion matrices for all models, demonstrating GNN + HE’s enhanced attack detection (84,646 true positives) and minimal false positives (1,203), in contrast to baselines exhibiting elevated false positives (e.g., 6,015 for LSTM) and reduced true positives (e.g., 79,318 for LSTM). The results illustrate the efficacy of GNN + HE in achieving a balance of high detection accuracy, low false positives, and minimal prediction errors, while preserving privacy through HE, rendering it appropriate for resource-limited IIoT settings in 6G networks.

Fig. 4: Confusion matrix (binary classification) for the proposed model on the Edge-IIoTset dataset.

The GNN + HE model exhibits enhanced performance across all metrics, as illustrated in Table 6. Compared to the nearest baseline, CNN-LSTM + PSO (97.5% accuracy, 97.0% F1-score, 0.0180 MSE), GNN + HE achieves a 1.6% improvement in accuracy, a 1.7% gain in F1-score, and a 51.7% reduction in MSE (0.0087 vs. 0.0180). Figure 5 provides a visual comparison of accuracy, F1-score, and MSE across all models, emphasizing GNN + HE's superiority in detection performance and reduction of prediction error. The minimal false positive rate (0.6%) and the incorporation of HE allow reliable detection with privacy assurances, essential for IIoT applications in 6G networks.

Fig. 5: Comparison of accuracy, F1-score, and MSE metrics on the Edge-IIoTset dataset.

To further assess the model’s efficacy in a multi-class classification context, Table 6 delineates the accuracy outcomes for 14 distinct attack classes on the Edge-IIoTset dataset, juxtaposing GNN + HE with EfficientNetV3-SVM, LSTM, IDS-MTran, and CNN-LSTM + PSO.

Table 6 Accuracy results comparison for multi-class classification on Edge-IIoTset dataset.

Table 6 demonstrates the exceptional efficacy of GNN + HE across all 14 attack categories, achieving accuracies between 98.0% and 99.3%, in contrast to LSTM’s 91.3% to 93.2%, EfficientNetV3-SVM’s 92.5% to 94.7%, IDS-MTran’s 95.3% to 96.9%, and CNN-LSTM + PSO’s 96.2% to 97.5%. This illustrates the resilience of GNN + HE in identifying various attack types, especially intricate attacks such as SQL Injection, XSS, and MITM, hence reinforcing its appropriateness for IIoT security applications on 6G networks. Figure 6 depicts multi-class confusion matrices for 15 categories (1–14 for attacks, 15 for Normal) across five models: (a) LSTM, (b) CNN-LSTM + PSO, (c) IDS-MTran, (d) EfficientNetV3-SVM, and (e) GNN + HE. GNN + HE exhibits minimal misclassifications across all attack and Normal categories, significantly outperforming baselines, which show higher false positives and false negatives. These results confirm GNN + HE’s robust multi-class classification performance, enhanced by HE’s privacy assurances, making it highly suitable for IIoT security in 6G networks.

Fig. 6: Confusion matrices of existing and proposed models for the Edge-IIoTset dataset.

Figure 7 illustrates the ROC curves for the proposed GNN + HE model on the Edge-IIoTset dataset for binary classification. The training ROC curve attains an AUC of 0.99, whereas the test ROC curve achieves an AUC of 0.995.

Fig. 7: ROC curves for binary classification on the Edge-IIoTset dataset for the proposed model.

To enhance the validation of the proposed framework’s generalizability, the assessment is broadened to include the MQTTset dataset, which encompasses several communication protocols and traffic patterns.

Quantitative results for MQTTset

Table 7 displays the performance metrics of the proposed GNN + HE model alongside four baseline approaches on the MQTTset dataset, assessed on the test set.

Table 7 Performance metrics on MQTTset dataset.

The proposed GNN + HE model achieves 99.4% test accuracy, 99.4% F1-score, and a low FPR of 0.5%, outperforming all baselines. The error metrics (MSE: 0.0085, RMSE: 0.092, MAE: 0.0085) are significantly lower than those of the baselines. Figure 8 illustrates the confusion matrix for GNN + HE in binary classification, showing high true positives and low false positives.

Fig. 8: Confusion matrix (binary classification) for the proposed model on the MQTTset dataset.

Figure 9 visually compares the accuracy, F1-score, and MSE across all models, highlighting GNN + HE’s superior performance.

Fig. 9: Comparison of accuracy, F1-score, and MSE metrics on the MQTTset dataset.

Table 8 presents the accuracy results for five specific attack classes on the MQTTset dataset, comparing GNN + HE with EfficientNetV3-SVM, LSTM, IDS-MTran, and CNN-LSTM + PSO.

Table 8 Accuracy results comparison for multi-class classification on MQTTset.

Table 8 highlights GNN + HE’s superior performance across all five attack classes, with accuracies ranging from 98.5% to 99.5%, compared to 92.0–93.0% for LSTM, 93.5–94.5% for EfficientNetV3-SVM, 96.2–96.8% for IDS-MTran, and 96.5–97.2% for CNN-LSTM + PSO. This demonstrates GNN + HE’s robustness in detecting MQTT-specific attacks, particularly high-frequency attacks like MQTTFlood, further solidifying its suitability for IIoT security in 6G networks.

Figure 10 depicts multi-class confusion matrices for six categories (Bruteforce, MQTTFlood, MalariaDoS, Malformed, SlowITe, Normal) across five models: (a) LSTM, (b) CNN-LSTM + PSO, (c) IDS-MTran, (d) EfficientNetV3-SVM, and (e) GNN + HE. GNN + HE exhibits negligible misclassifications across all attack categories and the Normal class, consistent with its elevated per-class accuracies (98.5–99.5%) and an overall accuracy of 99.4%.

Fig. 10: Confusion matrices of existing and proposed models for the MQTTset dataset.

The subsequent experiment evaluates the framework’s efficacy on the IoT-23 dataset, facilitating assessment across diverse IIoT setups and attack vectors.

Quantitative results for IoT-23

Table 9 displays the performance characteristics of the proposed GNN + HE model alongside four baseline approaches on the IoT-23 dataset, assessed using the test set.

Table 9 Performance metrics on IoT-23 Dataset.

The proposed GNN + HE model achieves 98.1% test accuracy, 98.2% F1-score, and a low FPR of 0.8%, outperforming all baselines. Figure 11 illustrates the confusion matrix for GNN + HE in binary classification, showing high true positives and low false positives.

Fig. 11: Confusion matrix (binary classification) for the proposed model on the IoT-23 dataset.

Figure 12 compares Accuracy, F1-score, and MSE for GNN + HE and the baseline models on IoT-23, demonstrating the superior performance of GNN + HE. Its privacy-preserving HE integration renders it well suited for sensitive IIoT applications within 6G networks.

Fig. 12: Comparison of accuracy, F1-score, and MSE metrics on the IoT-23 dataset.

To further demonstrate the adaptability of the proposed system, supplementary assessments were performed on composite IIoT attack scenarios that combine DDoS and botnet traffic patterns. The model maintained high accuracy and minimal false positive rates in these challenging mixed-traffic scenarios, demonstrating its adaptability to varied and large-scale industrial contexts.

HE feasibility and optimization analysis

The practical feasibility and computational complexity of the proposed CKKS-based homomorphic encryption framework were evaluated to determine its suitability for extensive IIoT implementations. Theoretically, the computational expense of homomorphic encryption procedures is essentially determined by the ciphertext size N (polynomial modulus degree) and the multiplicative depth L of the encrypted calculation. Homomorphic additions demonstrate linear complexity O(N), while homomorphic multiplications, prevalent in encrypted GNN layers, necessitate O(N log N) time because of the Number Theoretic Transform (NTT). Thus, the overall encrypted inference cost for M distributed IIoT nodes may be estimated as O(M·L·N log N), indicating linear scalability in relation to the number of devices.
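The O(N log N) cost of each homomorphic multiplication stems from transform-based polynomial multiplication. The floating-point FFT sketch below illustrates the same principle as the NTT used in CKKS (an intentional simplification: the NTT works exactly over a prime modulus, which this sketch omits):

```python
import numpy as np

def poly_mul_fft(a, b):
    """Multiply two integer polynomials in O(N log N) via the FFT --
    the floating-point analogue of the NTT inside CKKS."""
    size = len(a) + len(b) - 1          # degree of the product + 1
    n = 1 << (size - 1).bit_length()    # pad to the next power of two
    fa, fb = np.fft.rfft(a, n), np.fft.rfft(b, n)
    prod = np.fft.irfft(fa * fb, n)[:size]
    return np.rint(prod).astype(int).tolist()

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mul_fft([1, 2], [3, 4]))  # [3, 10, 8]
```

A schoolbook multiplication of the same polynomials would cost O(N²) coefficient products, which is why the transform dominates the complexity analysis above.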

The CKKS parameters were configured using Microsoft SEAL v4.0 to balance security, numerical precision, and computational efficiency, employing a polynomial modulus degree of 16,384 and a coefficient modulus chain of {60, 40, 40, 60} bits, which provides approximately 128-bit security. The scaling factor was set at 2⁴⁰ to maintain precision while reducing rescaling overhead. Multiple implementation-level optimizations were applied, including ciphertext reuse across mini-batches, precomputation of rotation and relinearization keys to eliminate superfluous key-switching, and SEAL’s vectorized hardware-acceleration backend on an NVIDIA Jetson AGX Orin platform (ARMv8 CPU, 2048 CUDA cores). These optimizations decreased encryption and decryption latency by roughly 45% and 38%, respectively, compared with a baseline CKKS implementation without reuse or hardware acceleration, while preserving the same security level.

Table 10 summarizes the average runtime of each stage in the encrypted inference process, measured on the target hardware with a batch size of 64. The total end-to-end latency ranged from 11 to 15 ms per sample, consistent with the overall inference time reported earlier, while the 2.5–2.7 ms figure pertains exclusively to the encryption and decryption stages. Reducing the batch size to one increased the total latency to roughly 18 ms per sample, illustrating the anticipated linear scaling behavior until memory saturation is reached.

Table 10 Average per-stage latency of CKKS-based encrypted inference on NVIDIA Jetson AGX Orin.

From a mathematical perspective, the end-to-end encrypted inference delay can be partitioned into three additive components: encryption/decryption (O(N log N)), linear aggregation (O(L·N log N)), and communication overhead (O(M)). The linear scaling of each component concerning the number of devices and layer depth results in a total complexity of O(M·L·N log N), so affirming that the proposed system maintains near-linear scalability despite homomorphic limitations. This theoretical limit corresponds with the empirical delay findings presented in Table 10.

The results validate the internal consistency and computational viability of the proposed HE-enabled GNN architecture. Using the selected configuration (N = 16,384, L = 5), the CKKS scheme attains practical latency on edge-grade hardware while preserving 128-bit security and strong detection performance. The combined asymptotic and empirical evaluations confirm the scalability of the proposed system for real-time IIoT intrusion detection within practical resource limitations.

Application scenarios

The GNN + HE model, assessed on the Edge-IIoTset, IoT-23, and MQTTset datasets, utilizes GNN and HE to provide effective and privacy-preserving intrusion detection for the IIoT within 6G networks. The model demonstrates exceptional performance, with accuracies of 99.1% on Edge-IIoTset, 99.4% on MQTTset, and 98.1% on IoT-23, while maintaining little computational overhead and robust privacy assurances through HE. Its capacity to represent graph-structured data facilitates the successful identification of intricate attacks, rendering it appropriate for resource-limited IIoT scenarios. The subsequent six scenarios demonstrate its actual use in 6G-enabled IIoT applications:

  • Edge-Cloud IoT Networks: The GNN + HE model detects sophisticated attacks like DDoS and SQL Injection by modeling network traffic as graphs. With 99.5% Precision on MQTTset and 98.8% Precision on Edge-IIoTset, it ensures accurate identification of malicious patterns. HE protects sensitive data during edge-to-cloud transmission, enabling secure real-time monitoring in dynamic 6G networks.

  • Smart Grid Security: The model protects IoT devices in smart grids, including smart meters, from threats such as Data Manipulation and Spoofing. It attains 99.1% accuracy on Edge-IIoTset and 98.1% accuracy on IoT-23, with an AUC of 0.995 on Edge-IIoTset. HE safeguards the confidentiality of energy usage data, essential for preserving grid stability and consumer confidence.

  • Connected Healthcare Systems: In IoT-based healthcare, the approach safeguards medical devices, such as infusion pumps, from threats including malware and man-in-the-middle (MITM) attacks. The 99.4% F1-Score on MQTTset and a minimal FPR of 0.5% provide reliable detection, while HE protects patient data, facilitating secure telemedicine and remote monitoring.

  • Industrial Automation: The GNN + HE model detects anomalies in IoT sensor networks (e.g., vibration, pressure sensors) caused by attacks like Backdoor and Ransomware. With 99.1% Accuracy on Edge-IIoTset and 0.011 s/sample processing time, it enables real-time protection, enhancing operational efficiency and safety in automated manufacturing systems.

  • Intelligent Transportation Systems: The methodology protects IoT-enabled traffic management systems from threats such as Scanning and Cross-Site Scripting (XSS). The 99.3% recall on MQTTset guarantees elevated detection rates, while HE safeguards traffic data privacy, facilitating secure and efficient urban movement in smart cities.

  • Retail IoT Networks: The model safeguards IoT devices, including point-of-sale systems and inventory sensors, from threats like Data Injection in retail settings. The 98.7% F1-Score on Edge-IIoTset and 0.8% False Positive Rate on IoT-23 facilitate secure transaction processing, with HE safeguarding customer data security.

In comparison to the baseline EfficientNetV3-SVM model, which attained 94.8% accuracy and a 93.9% F1-score on Edge-IIoTset, the GNN + HE model demonstrates superior performance with 99.1% accuracy and a 98.7% F1-score, while also ensuring privacy protection through HE. Likewise, on MQTTset, it attains a 99.4% F1-score, in contrast to 93.7% for EfficientNetV3-SVM. The elevated AUC and minimal computational overhead underscore its suitability for edge-cloud, smart grid, healthcare, industrial, transportation, and retail applications, supported by recent progress in GNN-based intrusion detection and privacy-preserving machine learning.

The application examples together illustrate the practical significance of the proposed GNN + HE system. The integration of graph-based threat modeling with privacy-preserving encryption enables direct application of the system to real-world 6G-IIoT contexts, including smart grids, healthcare, and industrial automation. Its low-latency encrypted inference and scalability facilitate implementation on edge devices and distributed infrastructures, underscoring its potential for widespread adoption in future intelligent IoT ecosystems.

The experimental results validate the accuracy, scalability, and privacy efficiency of the proposed system. Section 5 examines the wider ramifications of these findings, emphasizing existing constraints, possible integrations, and the practicality of implementation in actual 6G systems.

Discussion

The suggested privacy-preserving GNN system effectively illustrates the viability of homomorphic encryption (HE) for secure intrusion detection in 6G-enabled IIoT networks. However, specific design constraints warrant examination. The incorporation of CKKS-based homomorphic encryption inherently elevates computational and communication burdens relative to plaintext inference. Despite the implementation of many optimization measures to reduce latency, ultra-low-power devices may continue to encounter deployment issues in real-time situations. These trade-offs are an intrinsic feature of existing HE methods rather than a constraint of the proposed design.

The suggested methodology demonstrates significant promise for real-time implementation in dynamic IIoT contexts. The system achieves an end-to-end encrypted inference latency of around 11–15 ms per sample, meeting the standard response-time criteria of industrial edge networks. The modular architecture facilitates asynchronous execution of encryption, transmission, and inference procedures, thereby accommodating continuous data streams without breaching latency constraints. Furthermore, the graph-based representation enables the model to adjust to alterations in device connectivity or network architecture, rendering it appropriate for dynamic IIoT contexts. These attributes jointly illustrate the system’s practical utility for real-time surveillance and anomaly identification in 6G-enabled industrial frameworks.

The scalability of the suggested architecture was evaluated with respect to network size and data throughput. The computational complexity of encrypted inference increases linearly with the number of IIoT nodes, O(M·L·N log N), allowing the system to support larger networks without exponential delay escalation. Experiments conducted in emulated environments demonstrated consistent performance with up to 100 nodes, and projections suggest that inference latency remains under 50 ms for networks comprising about 1,000 nodes when employing parallel edge processing. The modular architecture facilitates the segmentation of the global graph among edge clusters, permitting concurrent distributed inference and encryption. The asynchronous design of the encryption, transmission, and inference modules accommodates the high-throughput data streams characteristic of smart factories and real-time industrial monitoring. These findings underscore the framework’s scalability and suitability for 6G-enabled dynamic IIoT scenarios.
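The projection quoted above can be expressed as a simple linear latency model. All constants below are illustrative placeholders chosen to be consistent with the reported 11–15 ms base latency and parallel edge clustering, not measured values:

```python
def projected_latency_ms(nodes, per_node_ms=0.15, base_ms=11.0, clusters=4):
    """Hypothetical linear model: a fixed encrypted-inference base cost
    plus a per-node term shared across parallel edge clusters.
    Every constant here is an illustrative assumption."""
    return base_ms + per_node_ms * nodes / clusters

print(projected_latency_ms(1000))  # 48.5 -- under the 50 ms projection
```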

From a security standpoint, this work primarily addresses data confidentiality during model training and inference. It does not cover resistance to adversarial manipulation, such as model inversion or data poisoning. Nevertheless, integrating homomorphic encryption with adversarially resilient training or differential privacy techniques could further strengthen integrity protection in future research. These extensions are beyond the intended scope of this work but represent promising avenues for enhancing resilience in practical IIoT environments.

The results affirm that the suggested framework offers a viable and secure basis for privacy-preserving analytics in IIoT networks. The potential for its adaptability to hybrid cryptography and defensive methods remains an open, albeit non-critical, area for future investigation.

The insights from this discussion motivate various future research directions and practical considerations. The final section concludes the study by summarizing the major contributions and identifying paths for future exploration.

Conclusion and future works

This research introduced a privacy-preserving intrusion detection framework that integrates Graph Neural Networks with CKKS-based homomorphic encryption for Industrial Internet of Things systems operating in 6G environments. Experimental assessments across multiple benchmark datasets revealed that the proposed GNN + HE architecture attains elevated detection accuracy with few false positives while maintaining comprehensive data confidentiality.

For real-time deployment, the framework can be integrated into edge gateways or industrial control units equipped with moderate GPUs or secure co-processors. The encryption and inference modules can operate asynchronously to satisfy the latency requirements characteristic of IIoT networks (10–50 ms). Moreover, cloud-edge collaboration and adaptive key management may further reduce computational burden and improve scalability.

Future research may concentrate on enhancing lightweight homomorphic encryption primitives for microcontrollers, broadening the model to encompass federated learning contexts, and investigating adversarially robust training within encrypted computation frameworks. Furthermore, subsequent endeavors may explore adaptive graph construction methodologies for dynamic IIoT settings and devise energy-efficient inference procedures appropriate for ultra-low-power edge devices. Incorporating differential privacy or zero-knowledge proofs with homomorphic encryption signifies a promising strategy for improving regulatory compliance and transparency. These directives together seek to enhance the practicality, scalability, and robustness of privacy-preserving intrusion detection for comprehensive 6G IIoT implementation.