
Fig. 1: A UML diagram of the computational framework.

From: Attention towards chemistry agnostic and explainable battery lifetime prediction


The framework is organized around three principal class clusters. The first centers on ConfigHandler, which manages the full set of user-defined configurations and provides a blueprint for handling subconfigurations such as general settings, data properties, and model specifications. During hyperparameter optimization, ConfigHandler interfaces with the Optuna optimization library to adaptively create and update the tuning configuration. The second cluster is built around TrainProcedure, which serves as an architectural template for the training process. Its attributes are used throughout the computational pipeline, from data preparation to the instantiation of specialized loss functions and Seq2Seq models via LossFactory and Seq2SeqFactory. FineTuning is a specialized subclass of TrainProcedure, while TuneProcedure and PredictProcedure, the latter of which uses QuantilePredictor, are incorporated into the pipeline depending on the use case and settings. Tuning operates on individual trials, using a TPESampler when multiple runs are requested. Lastly, Seq2SeqFactory governs the instantiation of encoder-decoder architectures: depending on the user-defined configuration, it orchestrates either a multihead or an additive encoder-decoder mechanism, with the custom attention mechanism handled by the MultiheadDecoder or AdditiveDecoder class, respectively.
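To make the relationships in the diagram concrete, the following is a minimal Python sketch assuming a plain Python/Optuna implementation. Only the class names and their relationships are taken from the figure; every attribute, method, and parameter shown here (for example, update_from_trial, build, run, and the learning_rate search space) is an illustrative assumption rather than the framework's actual API.

```python
"""Minimal sketch of the class clusters in Fig. 1 (illustrative only)."""
from dataclasses import dataclass, field

import optuna  # the tuning cluster relies on Optuna's TPESampler


# --- Cluster 1: configuration handling --------------------------------------
@dataclass
class ConfigHandler:
    """Holds the user-defined configuration and its sub-configurations."""

    general: dict = field(default_factory=dict)
    data: dict = field(default_factory=dict)
    model: dict = field(default_factory=dict)
    tuning: dict = field(default_factory=dict)

    def update_from_trial(self, trial: optuna.trial.Trial) -> None:
        # During hyperparameter optimization, Optuna proposes values that
        # overwrite the tuning sub-configuration (parameter chosen for illustration).
        self.tuning["learning_rate"] = trial.suggest_float(
            "learning_rate", 1e-5, 1e-2, log=True
        )


# --- Cluster 3: encoder-decoder instantiation --------------------------------
class AdditiveDecoder:
    """Decoder using an additive attention mechanism."""


class MultiheadDecoder:
    """Decoder using a multi-head attention mechanism."""


class Seq2SeqFactory:
    """Builds the encoder-decoder variant requested by the configuration."""

    def build(self, config: ConfigHandler):
        attention = config.model.get("attention", "additive")
        return MultiheadDecoder() if attention == "multihead" else AdditiveDecoder()


class LossFactory:
    """Builds the specialized loss function requested by the configuration."""

    def build(self, config: ConfigHandler):
        return lambda y_pred, y_true: abs(y_pred - y_true)  # placeholder loss


# --- Cluster 2: training, tuning, and prediction procedures ------------------
class TrainProcedure:
    """Architectural template for the training process."""

    def __init__(self, config: ConfigHandler):
        self.config = config
        self.loss = LossFactory().build(config)
        self.model = Seq2SeqFactory().build(config)

    def run(self) -> float:
        # Placeholder for data preparation, the training loop, and validation.
        return 0.0  # validation metric returned to the tuner


class FineTuning(TrainProcedure):
    """Specialization of TrainProcedure that starts from a pretrained model."""


class QuantilePredictor:
    """Produces quantile forecasts from a trained model."""


class PredictProcedure:
    """Generates lifetime predictions via a QuantilePredictor."""

    def __init__(self, model, predictor: QuantilePredictor | None = None):
        self.model = model
        self.predictor = predictor or QuantilePredictor()


class TuneProcedure:
    """Wraps TrainProcedure in an Optuna study sampled with a TPESampler."""

    def __init__(self, config: ConfigHandler, n_trials: int = 10):
        self.config, self.n_trials = config, n_trials

    def run(self) -> optuna.study.Study:
        def objective(trial: optuna.trial.Trial) -> float:
            self.config.update_from_trial(trial)  # one trial per run
            return TrainProcedure(self.config).run()

        study = optuna.create_study(
            direction="minimize", sampler=optuna.samplers.TPESampler()
        )
        study.optimize(objective, n_trials=self.n_trials)
        return study
```

In this reading of the diagram, ConfigHandler is the single source of configuration state, the procedure classes consume it, and the factories isolate the conditional construction of losses and attention decoders from the training logic.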
