Table 2 DNN architecture configuration with LR as learning rate

From: Quantum topology identification with deep neural networks and quantum walks

Computation network with LR = 0.0001

| Block | Layer           | Filter | Size   | Activation | Padding | Repetition |
|-------|-----------------|--------|--------|------------|---------|------------|
| 1st   | AvgPool         | –      | (2, 2) | –          | Valid   | 2          |
|       | Conv2D          | 8      | (5, 5) | –          | Valid   |            |
|       | SeparableConv2D | 8      | (5, 5) | ELU        | Same    |            |
| 2nd   | AvgPool         | –      | (2, 2) | –          | Valid   | 2          |
|       | Conv2D          | 16     | (5, 5) | –          | Valid   |            |
|       | SeparableConv2D | 16     | (5, 5) | ELU        | Same    |            |
| 3rd   | AvgPool         | –      | (2, 2) | –          | Valid   | 2          |
|       | Conv2D          | 32     | (5, 5) | –          | Valid   |            |
|       | SeparableConv2D | 32     | (5, 5) | ELU        | Same    |            |
| 4th   | Linear          | –      | 256    | ReLU       | –       | 1          |
| 5th   | Linear          | –      | 5      | Softmax    | –       | 1          |
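For concreteness, the computation network above can be sketched in Keras. The table does not state the input image size, the optimizer, or exactly how the "Repetition" column is applied, so the sketch below assumes a hypothetical 1024 × 1024 single-channel input, an Adam optimizer at the listed learning rate, and that each block's AvgPool/Conv2D/SeparableConv2D sequence is applied twice; the "Linear" rows are read as fully connected (Dense) layers. It is an illustration of the table, not the authors' implementation.

```python
# Hypothetical Keras sketch of the "computation network" in Table 2.
# Assumptions not given in the table: input shape (1024, 1024, 1), Adam
# optimizer, and "Repetition = 2" meaning the whole block runs twice.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_computation_network(input_shape=(1024, 1024, 1), n_classes=5):
    inputs = layers.Input(shape=input_shape)
    x = inputs
    for filters in (8, 16, 32):                 # 1st, 2nd, 3rd blocks
        for _ in range(2):                      # "Repetition" column
            x = layers.AveragePooling2D((2, 2), padding="valid")(x)
            x = layers.Conv2D(filters, (5, 5), padding="valid")(x)
            x = layers.SeparableConv2D(filters, (5, 5),
                                       activation="elu", padding="same")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)               # 4th block
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # 5th block

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```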

Memory network with LR = 0.4

| Height | Width | Element size | Decay factor of LR | Initial radius |
|--------|-------|--------------|--------------------|----------------|
| 256    | 256   | 32           | 0.9                | 128            |
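The memory network is specified only by the hyper-parameters above. Reading it as a self-organizing map (an assumption, since the table does not name the algorithm), a 256 × 256 grid of 32-dimensional code vectors is trained with learning rate 0.4 and initial neighbourhood radius 128, with the learning rate decaying by a factor of 0.9. A minimal NumPy sketch of one such update step:

```python
# Hypothetical NumPy sketch of the "memory network" in Table 2, read here as a
# self-organizing map (SOM); the table lists only hyper-parameters, so this is
# an illustration of one consistent reading, not the authors' code.
import numpy as np

height, width, element_size = 256, 256, 32   # map grid and code-vector length
lr, radius, lr_decay = 0.4, 128.0, 0.9       # LR, initial radius, LR decay

rng = np.random.default_rng(0)
som = rng.random((height, width, element_size))   # randomly initialised map
grid_y, grid_x = np.mgrid[0:height, 0:width]      # node coordinates

def train_step(x):
    """One SOM update for a single input vector x of length element_size."""
    # Best-matching unit (BMU): node whose code vector is closest to x.
    dist = np.linalg.norm(som - x, axis=-1)
    by, bx = np.unravel_index(np.argmin(dist), dist.shape)
    # Gaussian neighbourhood around the BMU, controlled by the current radius.
    d2 = (grid_y - by) ** 2 + (grid_x - bx) ** 2
    h = np.exp(-d2 / (2.0 * radius ** 2))
    # Pull every node toward x, weighted by the neighbourhood and the LR.
    som[:] += lr * h[..., None] * (x - som)

# Toy usage with random placeholder inputs; in the paper the inputs would be
# features produced by the computation network.
for epoch in range(3):
    for x in rng.random((10, element_size)):
        train_step(x)
    lr *= lr_decay          # decay factor of LR = 0.9 (Table 2)
```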