Fig. 4: Learned weights. | Nature

From: Supervised learning in DNA neural networks

a, Abstract training process of learning two classes of 100-bit handwritten digits in two distinct orders. Grey and black wires indicate inhibited and learned weights, respectively. b, Fluorescence kinetics experiments that read out the learned weights. Learning was performed as follows: present all 10 training patterns from one class together with the class label, wait for 24 h, add the label inhibitor, wait for 2 h, and then repeat with the second class. After learning was completed, 100 aliquots of the learned memories were each mixed with a unique pair of activatable weight molecules (\({W}_{i,1}^{* }\) and \({W}_{i,2}^{* }\)), a fuel strand (XFi), all 100 input strands (X1 to X100), and a pair of standard reporters that each convert one of the two possible output signals to fluorescence. The two reporters were modified with fluorophores ATTO590 and ATTO488, respectively, allowing simultaneous readout in two fluorescence channels. Eight hours of kinetics data are shown in two 10-by-10 arrays. Each position in both arrays corresponds to the same sample; each array corresponds to one of the two fluorescence channels. c, Measured weight concentrations at 4 h and error statistics for learning handwritten digits 0 and 1 in two distinct orders. d,e, Overlaid training patterns (10 per class), representing target weights, and learned weights (measured weight concentrations at 4 h) for learning handwritten digits 3 and 4 (d) or 6 and 7 (e). f, Distribution of errors from the experiments shown in c–e. Ø indicates a blank memory. w1 and w2 indicate the weight matrices for memories 1 and 2, respectively.
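The error statistics in panels c–f compare target weights (the overlay of the 10 training patterns per class) against learned weights (binarized measured concentrations). A minimal sketch of that comparison, with hypothetical patterns, concentrations, and threshold values standing in for the experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 10 patterns per class, each 100 bits.
patterns = {c: rng.integers(0, 2, size=(10, 100)) for c in (1, 2)}

# Target weights: the overlay (bitwise OR) of the 10 training patterns.
targets = {c: patterns[c].max(axis=0) for c in (1, 2)}

# Simulated measured weight concentrations at 4 h (arbitrary units):
# learned positions read high, inhibited positions read near zero.
measured = {
    c: targets[c] * rng.uniform(40, 60, size=100)
    + (1 - targets[c]) * rng.uniform(0, 5, size=100)
    for c in (1, 2)
}

def binarize(conc, threshold=20.0):
    """Call a weight 'learned' if its concentration exceeds a threshold."""
    return (conc > threshold).astype(int)

# Per-class error count: bits where the binarized learned weight
# disagrees with the target weight (the quantity summarized in panel f).
errors = {c: int(np.sum(binarize(measured[c]) != targets[c])) for c in (1, 2)}

# The 100 readout samples can be arranged as a 10-by-10 array per channel,
# mirroring the layout of the kinetics data in panel b.
grid = {c: measured[c].reshape(10, 10) for c in (1, 2)}
```

With well-separated simulated concentrations, the error counts are zero; in the experiments, nonzero errors arise where measured concentrations fall on the wrong side of the threshold.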
