Table 2 Total number of parameters for the proposed CNN with the channel-attention-based model.

From: Stacked CNN-based multichannel attention networks for Alzheimer disease detection

| Type of layer | Output shape | Number of parameters |
|---|---|---|
| Input layer | (None, M, N, 3) | 0 |
| CNNB-1 | (None, M, N, 16) | 1216 |
| CNNB-2 | (None, 82, 98, 32) | 12832 |
| CNNB-3 | (None, 37, 45, 64) | 37264 |
| CNNB-4 | (None, 14, 18, 128) | 204928 |
| CNNB-5 | (None, 3, 5, 256) | 819456 |
| Channel attention | (None, 1, 2, 256) | 33088 |
| Dropout-1 | (None, 1, 2, 256) | 0 |
| Flatten | (None, 512) | 0 |
| Dense-1 | (None, 256) | 131328 |
| Dropout-2 | (None, 256) | 0 |
| Dense-2 | (None, 4) | 1028 |
| Softmax | (None, 4) | 0 |
| Total params | | 1,241,140 |
| Trainable params | | 1,241,140 |
| Non-trainable params | | 0 |
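
The sketch below is a minimal Keras reconstruction of a model whose layer shapes and parameter counts track Table 2; it is not the authors' released code. The 5x5 kernels, 'same' padding in the first block, max-pooling placement, squeeze-and-excitation-style channel attention with reduction ratio 4, dropout rates, and the placeholder input size (172, 204, 3) are all assumptions inferred from the listed shapes and counts (for example, a 5x5 convolution from 3 to 16 channels gives 5*5*3*16 + 16 = 1216 parameters, and the attention block's 33,088 parameters match two dense layers of sizes 256->64->256); under these assumptions the individual counts match the table except for CNNB-3.

```python
# Minimal sketch (assumed architecture, not the paper's official implementation)
# of the stacked CNN + channel-attention classifier summarised in Table 2.
import tensorflow as tf
from tensorflow.keras import layers, models


def cnn_block(x, filters):
    # One "CNNB" block after the first: downsample, then a 5x5 valid convolution
    # (kernel size and pooling order are assumptions inferred from the shapes).
    x = layers.MaxPooling2D(pool_size=2)(x)
    return layers.Conv2D(filters, 5, activation="relu")(x)


def channel_attention(x, reduction=4):
    # Squeeze-and-excitation style channel attention (assumed form).
    # With 256 channels and reduction 4: 256*64 + 64 + 64*256 + 256 = 33,088 params.
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])


def build_model(input_shape=(172, 204, 3), num_classes=4):
    # input_shape is a placeholder for the paper's (M, N, 3) input; (172, 204)
    # happens to reproduce the intermediate shapes in Table 2 but is an assumption.
    inputs = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 5, padding="same", activation="relu")(inputs)  # CNNB-1: 1216 params
    for filters in (32, 64, 128, 256):                                   # CNNB-2 ... CNNB-5
        x = cnn_block(x, filters)
    x = channel_attention(x)                                             # 33,088 params
    x = layers.MaxPooling2D(pool_size=2)(x)   # reduces (3, 5, 256) to (1, 2, 256) as in Table 2
    x = layers.Dropout(0.5)(x)                                           # Dropout-1 (rate assumed)
    x = layers.Flatten()(x)                                              # (None, 512)
    x = layers.Dense(256, activation="relu")(x)                          # Dense-1: 131,328 params
    x = layers.Dropout(0.5)(x)                                           # Dropout-2 (rate assumed)
    outputs = layers.Dense(num_classes, activation="softmax")(x)         # Dense-2 + softmax: 1028 params
    return models.Model(inputs, outputs)


if __name__ == "__main__":
    model = build_model()
    model.summary()  # compare the printed shapes and parameter counts with Table 2
```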