
Fig. 5: Workflow of IT-π for discovering optimal dimensionless variables.


This schematic illustrates a specific example in which the input dimensionless variables \({{\mathbf{\Pi}}}=[\Pi_1, \Pi_2]\) are optimized. Each generation of the CMA-ES algorithm maintains a population of candidate solutions (represented by blue dots), where each individual encodes a set of exponents \(({\bf{c}}_1, {\bf{c}}_2)\) that defines the candidate dimensionless variables. For each individual, the variables are constructed as \(\Pi_1 = {\bf{q}}^{{\bf{W}}{\bf{c}}_1}\) and \(\Pi_2 = {\bf{q}}^{{\bf{W}}{\bf{c}}_2}\), where q represents the set of dimensional quantities and W is the matrix of basis vectors. The fitness of each candidate is evaluated using the irreducible error across Rényi orders, \(J = \max_{\alpha}[\epsilon_{LB}] = \max_{\alpha}\left[e^{-I_{\alpha}(\Pi_o;\,{{\mathbf{\Pi}}})}\cdot c(\alpha, p, h_{\alpha,o})\right]\), where Iα denotes the Rényi mutual information. The optimal α is found by golden-section search over the range (1/(1 + p), 10]. The algorithm evolves the population across generations by updating the mean and covariance of the sampling distribution, navigating the fitness landscape (whose constant-value contours are shown as dark purple dashed lines) to minimize the irreducible error. The final generation yields the maximally predictive dimensionless variables \({{\mathbf{\Pi}}}^{*} = [\Pi_1^{*}, \Pi_2^{*}] = [{\bf{q}}^{{\bf{W}}{\bf{c}}_1^{*}}, {\bf{q}}^{{\bf{W}}{\bf{c}}_2^{*}}]\). The dimensionless output Πo can either be included in the optimization or treated as fixed. For clarity, this schematic illustrates only the optimization of c1 and c2 associated with the input variables.
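The loop described in this caption can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' implementation: it assumes the `cma` package, toy stand-in data for q, W, and Πo, a hypothetical placeholder `renyi_mi` in place of a proper Rényi mutual information estimator, and it omits the correction factor c(α, p, hα,o) and the golden-section search over α.

```python
import numpy as np
import cma  # pip install cma; CMA-ES implementation

rng = np.random.default_rng(0)

# Toy stand-in data: n samples of d positive dimensional quantities q and a
# dimensionless output Pi_o. Shapes and values are illustrative only.
n, d, k = 500, 4, 2          # k = number of basis vectors (columns of W)
q = rng.lognormal(size=(n, d))
Pi_o = rng.lognormal(size=n)
W = rng.normal(size=(d, k))  # stand-in for the Buckingham-pi null-space basis


def build_pi(c):
    """Construct a dimensionless variable Pi = prod_j q_j**(W @ c)_j."""
    return np.prod(q ** (W @ c), axis=1)          # shape (n,)


def renyi_mi(Pi, target, alpha):
    """Hypothetical placeholder for the Rényi mutual information I_alpha.
    Crude Gaussian surrogate from the R^2 of a linear fit in log space
    (alpha is ignored here); the paper uses a dedicated estimator and a
    golden-section search over alpha."""
    X = np.column_stack([np.log(Pi), np.ones(len(target))])
    y = np.log(target)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = 1.0 - resid.var() / y.var()
    return -0.5 * np.log(max(1.0 - r2, 1e-12))


def fitness(c_flat, alphas=np.linspace(0.3, 10.0, 20)):
    """Irreducible-error bound J = max_alpha exp(-I_alpha(Pi_o; Pi));
    the factor c(alpha, p, h) from the caption is omitted in this sketch."""
    Pi = np.column_stack([build_pi(c_flat[:k]), build_pi(c_flat[k:])])
    return max(np.exp(-renyi_mi(Pi, Pi_o, a)) for a in alphas)


# CMA-ES over the 2*k exponents (c1, c2): ask for candidates, evaluate J, tell.
es = cma.CMAEvolutionStrategy(np.zeros(2 * k), 0.5,
                              {'maxiter': 50, 'verbose': -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(c) for c in candidates])
c1_star, c2_star = es.result.xbest[:k], es.result.xbest[k:]
```

With a real Rényi mutual information estimator substituted for `renyi_mi`, the exponent vectors `c1_star` and `c2_star` returned by the loop play the role of \({\bf{c}}_1^{*}\) and \({\bf{c}}_2^{*}\) in the caption, from which \(\Pi_1^{*}\) and \(\Pi_2^{*}\) are reconstructed via `build_pi`.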
