Table 3 Description of SMNN architectures.

From: Novel AI driven approach to classify infant motor functions

| SMNN | Hidden layer 1 | Neuron type | Dropout | Hidden layer 2 | Neuron type | Dropout | Hidden layer 3 | Neuron type | Output layer | Neuron type |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | – | – | – | – | – | – | 1 | Sigmoid |
| 2 | \(N_{stack} \times 50 \rightarrow 100\) | ReLU | – | – | – | – | – | – | 1 | Sigmoid |
| 3 | \(N_{stack} \times 50 \rightarrow 200\) | ReLU | – | – | – | – | – | – | 1 | Sigmoid |
| 4 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 50 | PReLU | – | – | – | 1 | Sigmoid |
| 5 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 100 | PReLU | – | – | – | 1 | Sigmoid |
| 6 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 200 | PReLU | – | – | – | 1 | Sigmoid |
| 7 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 50 | PReLU | 20% | 50 | PReLU | 1 | Sigmoid |
| 8 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 100 | PReLU | 20% | 100 | PReLU | 1 | Sigmoid |
| 9 | \(N_{stack} \times 50 \rightarrow 50\) | ReLU | 20% | 200 | PReLU | 20% | 200 | PReLU | 1 | Sigmoid |

1. Numbers correspond to the number of neurons in each layer. For example, SMNN 1 consists of one hidden linear layer with 50 ReLU neurons and a linear output layer with one sigmoid neuron. \(N_{stack} \times 50\) denotes the dimension of the input to the first hidden layer.
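
For concreteness, the sketch below shows how one of these architectures (SMNN 7: three hidden linear layers of 50 neurons with ReLU/PReLU activations, 20% dropout after the first two, and a single sigmoid output) could be assembled in PyTorch. This is an illustrative reconstruction from the table only, not the authors' implementation; the class name `SMNN7`, the parameter `n_stack`, and the example value \(N_{stack} = 4\) are assumptions.

```python
import torch
import torch.nn as nn


class SMNN7(nn.Module):
    """Illustrative sketch of SMNN 7 from Table 3 (not the authors' code):
    input of dimension N_stack x 50, three hidden linear layers
    (50 ReLU, 50 PReLU, 50 PReLU) with 20% dropout after the first two,
    and a linear output layer with one sigmoid neuron."""

    def __init__(self, n_stack: int):
        super().__init__()
        in_dim = n_stack * 50  # N_stack x 50 input features, per the table footnote
        self.net = nn.Sequential(
            nn.Linear(in_dim, 50), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(50, 50), nn.PReLU(), nn.Dropout(p=0.2),
            nn.Linear(50, 50), nn.PReLU(),
            nn.Linear(50, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Usage example: a batch of 8 samples with a hypothetical N_stack = 4
model = SMNN7(n_stack=4)
probs = model(torch.randn(8, 4 * 50))  # shape (8, 1), values in (0, 1)
```

The other eight variants differ only in the widths listed in the table (and in how many hidden layers and dropout stages are present), so the same pattern applies with the corresponding layer sizes substituted.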