Table 3 Comparative results of the VTSAMRNN-FARS method with existing models.


Methods              \(Accu_{y}\) (%)   \(Prec_{n}\) (%)   \(Sens_{y}\) (%)   \(Spec_{y}\) (%)

VTSAMRNN-FARS        99.67              99.67              99.67              99.67
OpenPose-LSTM        92.72              94.85              97.53              95.01
2D-ConvNN            95.88              94.35              94.93              97.70
ResNet50             96.21              95.15              95.40              91.89
2D Pose estimation   95.60              92.00              91.90              98.16
TD_CNN-LSTM          99.23              93.99              93.64              96.85
RetinaNet            94.69              96.75              95.03              95.61
YOLOv7               91.51              96.68              92.34              96.10
YOLOv5               94.83              91.94              97.16              94.34
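For reference, the abbreviated column headers correspond to the usual confusion-matrix metrics. The formulas below are a sketch of those standard definitions (they are not given in the source table), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively:

\(Accu_{y} = (TP + TN)/(TP + TN + FP + FN)\)

\(Prec_{n} = TP/(TP + FP)\)

\(Sens_{y} = TP/(TP + FN)\)

\(Spec_{y} = TN/(TN + FP)\)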