Table 1 Architecture of the network fθ

From: Deep-prior ODEs augment fluorescence imaging with chemical sensors

Layers                              Output shape
FC + LReLU                          1 × 16
FC + Reshape                        1 × 6 × 6
Conv + IN + LReLU                   64 × 6 × 6
Upsampling + (Conv + IN + LReLU)    64 × 12 × 12
Upsampling + (Conv + IN + LReLU)    64 × 24 × 24
Upsampling + (Conv + IN + LReLU)    64 × 48 × 48
Upsampling + (Conv + IN + LReLU)    64 × Nx × Ny
Conv + ReLU                         1 × Nx × Ny

  1. Shape of the initial input: (1 × 3). The input shape of each layer is the output shape of the previous layer. FC: fully-connected layer with bias; LReLU: leaky ReLU (slope 0.01); Conv: convolutional layer with (3 × 3) kernels and reflective boundary conditions; IN: instance-normalization layer with learnable per-channel affine transform parameter vectors and ϵ = 10−5; Upsampling: bilinear interpolation. This network consists of 149,925 learnable parameters for Nx = 96, Ny = 68.
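The stated parameter count can be reproduced layer by layer. A minimal sketch in plain Python, assuming that the convolutions followed by instance normalization carry no bias term (their per-channel shift is absorbed by the IN affine parameters) while the final convolution and the fully-connected layers do:

```python
def fc_params(n_in, n_out):
    """Fully-connected layer with bias."""
    return n_in * n_out + n_out

def conv_params(c_in, c_out, k=3, bias=False):
    """(k x k) convolution; bias assumed absent when IN follows."""
    return c_in * c_out * k * k + (c_out if bias else 0)

def in_params(c):
    """Instance norm with learnable per-channel scale and shift."""
    return 2 * c

total = (
    fc_params(3, 16)                              # FC + LReLU: 1x3 -> 1x16
    + fc_params(16, 36)                           # FC + Reshape: 1x16 -> 1x6x6
    + conv_params(1, 64) + in_params(64)          # Conv + IN + LReLU
    + 4 * (conv_params(64, 64) + in_params(64))   # four Upsampling + (Conv + IN + LReLU) blocks
    + conv_params(64, 1, bias=True)               # final Conv + ReLU
)
print(total)  # 149925
```

Under these assumptions the total matches the 149,925 parameters reported in the footnote; note the count is independent of Nx and Ny, since the convolutional layers are fully shared across spatial positions.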