Table 3 Computational Resource Consumption and Real-World Feasibility of the Proposed Model on NVIDIA Jetson TX2.

From: Deep learning steganography for big data security using squeeze and excitation with inception architectures

| Sl No | Category | Details | Feasibility |
|---|---|---|---|
| 1 | Hardware | NVIDIA Jetson TX2 (256-core Pascal GPU, 6-core ARM CPU, 8 GB LPDDR4 RAM) | Compact, capable edge-AI hardware, suitable for medical use |
| 2 | Power Consumption | 7.5–15 W (under moderate-to-high inference load) | Low enough for mobile clinics or battery-powered diagnostic tools |
| 3 | Energy Efficiency | ~0.5 J/image (15 W peak power ÷ 30 images/s throughput; see the derivation below the table) | Energy-efficient for real-time steganography in portable and embedded setups |
| 4 | Model Size | ~25–30 MB | Lightweight for the TX2's onboard storage |
| 5 | Disk Usage | ~3.0 GB (model, dataset, dependencies) | Easily accommodated on the TX2's internal or external storage |
| 6 | RAM Usage | ~1.8–2.2 GB during inference | Fits comfortably within the 8 GB of RAM |
| 7 | Processing Time (Latency) | ~25–35 ms per image (encoder + decoder on Jetson TX2 with CUDA acceleration; see the benchmark sketch below the table) | Real-time embedding and decoding achievable |
| 8 | Processing Throughput | ~25–30 images/s | Suitable for continuous or batch secure image transmission |
| 9 | Environmental Constraints | 0–50 °C operating temperature range | Suitable for mobile labs, rural health units, or static clinical environments |
| 10 | Deployment Feasibility | Plug-and-play deployment | High; no external dependencies beyond the standard CUDA stack |
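
The per-image energy figure in row 3 follows from dividing the peak power draw by the measured throughput:

$$E_{\text{image}} = \frac{P_{\text{peak}}}{\text{throughput}} = \frac{15\ \text{W}}{30\ \text{images/s}} = 0.5\ \text{J per image}$$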
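
The latency and throughput figures in rows 7–8 can be reproduced with a standard GPU timing loop. The sketch below is not the authors' benchmarking code; it assumes a PyTorch deployment, uses placeholder `encoder`/`decoder` modules and an assumed 256×256 RGB input, and would be adapted to the actual trained SE-Inception networks.

```python
# Minimal latency/throughput sketch for rows 7-8 (assumed PyTorch deployment;
# Identity modules stand in for the trained SE-Inception encoder/decoder).
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def sync():
    # Ensure queued CUDA kernels finish before reading the clock.
    if device.type == "cuda":
        torch.cuda.synchronize()

# Placeholders; load the real trained networks here.
encoder = torch.nn.Identity().to(device).eval()
decoder = torch.nn.Identity().to(device).eval()

cover = torch.rand(1, 3, 256, 256, device=device)   # assumed 256x256 RGB cover image
secret = torch.rand(1, 3, 256, 256, device=device)  # assumed 256x256 RGB secret image

with torch.no_grad():
    for _ in range(10):                              # warm-up to exclude startup costs
        stego = encoder(torch.cat([cover, secret], dim=1))
        _ = decoder(stego)
    sync()

    n_runs = 100
    start = time.perf_counter()
    for _ in range(n_runs):
        stego = encoder(torch.cat([cover, secret], dim=1))
        _ = decoder(stego)
    sync()
    elapsed = time.perf_counter() - start

latency_ms = 1000.0 * elapsed / n_runs
throughput = n_runs / elapsed
print(f"Latency: {latency_ms:.1f} ms/image, throughput: {throughput:.1f} images/s")
print(f"Energy at 15 W peak: {15.0 / throughput:.2f} J/image")
```

On a Jetson TX2, the power draw during the timing loop can be read from the board's onboard power sensors (for example via the `tegrastats` utility), which turns the measured throughput into the joules-per-image figure shown in row 3.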