Extended Data Fig. 7: Preambles and CNN model architecture. | Nature

From: Laser writing in glass for dense, fast and efficient archival data storage

(a) Schematics of preambles along the bottom of a sector. The pattern is designed to be easy to detect so that the sector boundary can be found unambiguously. (b) Schematic of the write geometry. Each layer is filled with data sectors. For non-polygon writers, the fast axis is the x axis; for polygon writers, it is the y axis. (c) Architecture of the neural network used for decoding. The network consists of an image embedding stem, which extracts high-resolution features, followed by three consecutive stages, each operating on successively downsampled inputs, that extract features at coarser resolutions. Additional projection layers route information from each stage directly to the final classification layers, balancing information flow across the stages. The three stages combine blocks of 2D convolutional layers, 2D batch normalisation and rectified linear unit (ReLU) or Gaussian-error linear unit (GELU) activations. (d) Impact of the number of context images on decode quality. Context images are additional images provided to the neural network to give it more information about the 3D structure of the data (see section ‘Machine learning model’).
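The stem-plus-three-stage layout in panel (c) can be sketched as simple resolution bookkeeping: a stem at full resolution, then each stage receiving an input downsampled by a factor of two. This is a minimal illustrative sketch; the input size (256×256) and the factor-of-two downsampling ratio are assumptions for illustration, not values stated in the figure.

```python
# Hypothetical sketch of the multi-stage feature-map sizes implied by panel (c):
# a full-resolution embedding stem followed by three stages, each operating on
# an input downsampled 2x relative to the previous one. Input size and the
# downsampling factor are illustrative assumptions.

def stage_resolutions(input_hw, n_stages=3, factor=2):
    """Return the (height, width) seen by the stem and by each stage."""
    h, w = input_hw
    sizes = [(h, w)]  # the stem operates at full resolution
    for _ in range(n_stages):
        h, w = h // factor, w // factor  # downsample before each stage
        sizes.append((h, w))
    return sizes

print(stage_resolutions((256, 256)))
# → [(256, 256), (128, 128), (64, 64), (32, 32)]
```

In such designs, the per-stage projection layers let the coarse stages contribute to the final classification without forcing all information through the deepest (lowest-resolution) stage.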
