Table 2 Experimental environment and hyperparameter settings.

From: Secure multi-party test case data generation through generative adversarial networks

Parameter / Component: Specification / Value

Hardware Configuration
Server (Coordinator): NVIDIA A100 GPU (80 GB), Intel Xeon Gold 6248R
Client (Participant): NVIDIA Jetson AGX Xavier (32 GB, ARM64) \(\times\) 5
Network Simulation: Linux TC (delay: 10-50 ms, bandwidth: 50-200 Mbps)

Model Architectures
Autoencoder (AE): Encoder: FC(512, ReLU) \(\rightarrow\) BiLSTM(256) \(\rightarrow\) FC(32, Sigmoid); Decoder: FC(256, ReLU) \(\rightarrow\) LSTM(512) \(\rightarrow\) FC(d, Linear)
Generator (G): Input(\(z \in \mathbb {R}^{32}\)) \(\rightarrow\) FC(128, LeakyReLU) \(\rightarrow\) LSTM(256) \(\rightarrow\) FC(d, Tanh)
Discriminator (D): Input(\(x \in \mathbb {R}^{d}\)) \(\rightarrow\) FC(256, LeakyReLU) \(\rightarrow\) FC(128) \(\rightarrow\) Output(1, Sigmoid)
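As a sanity check on the layer widths listed above, the Generator and Discriminator pipelines can be traced with a minimal, framework-free sketch. The layer tuples here are illustrative placeholders (not the paper's implementation), and the feature dimension \(d\) is set to an arbitrary value for demonstration:

```python
# Trace the feature width after each layer of the table's G and D:
#   G: z in R^32 -> FC(128, LeakyReLU) -> LSTM(256) -> FC(d, Tanh)
#   D: x in R^d  -> FC(256, LeakyReLU) -> FC(128)   -> Output(1, Sigmoid)
# Layers are represented only by (kind, output width, activation).

def fc(width, activation=None):
    return ("FC", width, activation)

def lstm(width):
    return ("LSTM", width, None)

def output_widths(input_dim, layers):
    """Return the feature width at the input and after each layer."""
    widths = [input_dim]
    for _, width, _ in layers:
        widths.append(width)
    return widths

d = 64  # illustrative test-case feature dimension (placeholder)
GENERATOR = [fc(128, "LeakyReLU"), lstm(256), fc(d, "Tanh")]
DISCRIMINATOR = [fc(256, "LeakyReLU"), fc(128), fc(1, "Sigmoid")]

# output_widths(32, GENERATOR)    -> [32, 128, 256, 64]
# output_widths(d, DISCRIMINATOR) -> [64, 256, 128, 1]
```

This confirms the pipelines compose: G maps the 32-dimensional latent code (the AE bottleneck width) to a d-dimensional record, which D maps to a single real/fake score.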

Software Stack
Frameworks: TensorFlow Federated 0.19, PySyft 0.5, OpenSSL 3.0
Encryption Library: Python-Paillier (key size: \(N=2048\) bits)
Protocol Simulators: ModbusPal, PyModbus, Eclipse Milo (OPC UA)
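The property that makes Paillier suitable here is additive homomorphism: ciphertexts can be multiplied so that the server aggregates encrypted model updates without decrypting any single one. The toy pure-Python instance below illustrates this with tiny primes for readability; it is not the Python-Paillier API, and a real deployment uses the \(N=2048\)-bit keys listed above:

```python
import random
from math import gcd

# Toy Paillier keypair (tiny primes, for illustration only -- NOT secure).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice g = n + 1
lam = (p - 1) * (q - 1)        # phi(n); valid in place of lcm for g = n + 1
mu = pow(lam, -1, n)           # modular inverse of lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b (mod n),
# which is what lets a coordinator sum encrypted client updates.
a, b = 1234, 5678
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```

In the federated setting, each client would encrypt its (quantized) update, the coordinator would multiply the ciphertexts to obtain the encrypted sum, and only the aggregate is ever decrypted.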

Training Settings
Federated Learning: Global rounds \(T=100\), local epochs \(E=5\)
GAN Learning Rates: Generator \(2 \times 10^{-4}\), Discriminator \(2 \times 10^{-4}\)
Optimizer: Adam (\(\beta _1=0.5\), \(\beta _2=0.999\))
Batch Size: Local batch \(B=64\), test batch 1,000
DP Parameters: Privacy budget \(\epsilon =0.5\), \(\delta =10^{-5}\), clipping norm \(C=1.0\)
Loss Weights: Adversarial \(\lambda _{adv}=1.0\), syntax \(\lambda _{syn}=0.3\), AE reconstruction \(\lambda _{rec}=1.0\)
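The DP parameters in the table translate into a per-update step of L2 clipping to \(C=1.0\) followed by Gaussian noise. The sketch below uses the standard Gaussian-mechanism calibration \(\sigma = C\sqrt{2\ln(1.25/\delta)}/\epsilon\) as an assumption; the paper may use a tighter privacy accountant, and the update vector here is a stand-in for a flattened model delta:

```python
import math
import random

# DP parameters from the table.
C = 1.0                 # clipping norm
EPSILON = 0.5           # privacy budget epsilon
DELTA = 1e-5            # delta
# Classic Gaussian-mechanism noise scale (assumed calibration).
SIGMA = C * math.sqrt(2 * math.log(1.25 / DELTA)) / EPSILON

def clip_update(update, clip_norm=C):
    """Scale the update down so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(v * v for v in update))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [v * scale for v in update]

def privatize(update, sigma=SIGMA):
    """Clip the update, then add i.i.d. Gaussian noise per coordinate."""
    return [v + random.gauss(0.0, sigma) for v in clip_update(update)]

# A raw update with L2 norm 5.0 is clipped back to norm 1.0 before noise.
noisy = privatize([3.0, 4.0])
```

Each client would apply this to its local update before encryption and upload, so the coordinator only ever sees clipped, noised contributions.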