Figure 7
From: Constructing neural networks with pre-specified dynamics

Polynomial execution time to attain delta consistency. (a, b) Execution time until consistency as a function of the number of nodes in G (before expansion). (a) Times are not well fitted by an exponential: linear fit \(f(x)=ax+b\), \(a=0.0019\), \(\textrm{CI}=(0.0016,0.0021)\), \(b=-0.40\), \(\textrm{CI}=(-0.65,-0.15)\), \(\mathrm {d.f.}=58\), \(\mathrm {R-squared}=0.78\); log fit \(a=0.84\), \(\textrm{CI}=(0.78,0.90)\), \(b=-3.5\), \(\textrm{CI}=(-3.8,-3.2)\), \(\mathrm {d.f.}=58\), \(\mathrm {R-squared}=0.93\). (b) Times are well fitted by a power function: linear fit \(a=1.93\), \(\textrm{CI}=(1.80,2.07)\), \(b=-3.46\), \(\textrm{CI}=(-3.77,-3.15)\), \(\mathrm {d.f.}=58\), \(\mathrm {R-squared}=0.93\). (c, d) Number of executed steps (\(N_{steps}\)) as a function of the number of nodes (\(N_{v}\)) in G (before expansion). (c) Step counts are not well fitted by an exponential: linear fit \(a=0.0016\), \(\textrm{CI}=(0.0013,0.0020)\), \(b=4.4\), \(\textrm{CI}=(4.1,4.7)\), \(\mathrm {d.f.}=58\), \(\mathrm {R-squared}=0.65\); log fit \(a=0.84\), \(\textrm{CI}=(0.78,0.90)\), \(b=-3.5\), \(\textrm{CI}=(-3.8,-3.2)\), \(\mathrm {d.f.}=58\), \(\mathrm {R-squared}=0.99\). (d) Step counts are well fitted by a power function: \(a=1.94\), \(\textrm{CI}=(1.91,1.98)\), \(b=1.20\), \(\textrm{CI}=(1.11,1.28)\), \(\mathrm {R-squared}=0.99\), \(\mathrm {d.f.}=48\). Execution times and numbers of executed steps were computed for random transition graphs with 3 stimuli and between 5 and 3000 nodes, sampled on a logarithmic scale. Three graphs were constructed for each \(N_{v}\) value.
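A minimal sketch of how fit parameters like those reported above can be obtained, assuming the power-function fits were performed as ordinary least-squares regressions on log-transformed data (the fitting software is not specified here). The slope of log(time) vs. log(\(N_{v}\)) is the exponent \(a\), the intercept is \(b\), and 95% confidence intervals follow from the t distribution with the fit's degrees of freedom. The node counts and execution times below are placeholders, not data from the figure.

import numpy as np
from scipy import stats

# Placeholder data standing in for measured (N_v, execution time) pairs;
# the real values come from running the construction algorithm on random
# transition graphs with 5 to 3000 nodes.
n_nodes = np.array([5, 10, 30, 100, 300, 1000, 3000], dtype=float)
exec_time = np.array([0.01, 0.04, 0.35, 4.0, 36.0, 400.0, 3600.0])

# Fit log10(time) = a*log10(N_v) + b; a power law T = 10**b * N_v**a is a
# straight line on log-log axes, so the slope is the reported exponent a.
x, y = np.log10(n_nodes), np.log10(exec_time)
res = stats.linregress(x, y)

dof = len(x) - 2                      # degrees of freedom of the two-parameter fit
t_crit = stats.t.ppf(0.975, dof)      # two-sided 95% critical value

a_lo, a_hi = res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr
b_lo, b_hi = (res.intercept - t_crit * res.intercept_stderr,
              res.intercept + t_crit * res.intercept_stderr)

print(f"a = {res.slope:.3f}, 95% CI = ({a_lo:.3f}, {a_hi:.3f})")
print(f"b = {res.intercept:.3f}, 95% CI = ({b_lo:.3f}, {b_hi:.3f})")
print(f"R-squared = {res.rvalue**2:.3f}, d.f. = {dof}")

The same recipe applied to the untransformed (or semi-log) data gives the linear and exponential comparisons of panels (a) and (c).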