Fig. 1: Summary of the Flexynesis data integration and analysis workflow.

Flexynesis accepts as input one or more omics data tables in tabular format along with sample metadata, carries out various data-cleaning steps, and provides multiple feature-selection options. Model training uses a sequential Bayesian hyperparameter optimization routine. Trained models are evaluated on the test data using a variety of metrics, and input features are ranked by feature attribution scores according to their contribution to the outcome prediction tasks. Flexynesis currently supports multiple neural network architectures, including classical feedforward neural networks, variational autoencoders, multi-triplet neural networks, and graph-convolutional neural networks. Each network can be used in a supervised multi-task setting for regression, classification, or survival analysis, as well as for unsupervised or cross-modality prediction tasks. Separate data modalities can be fused using either early or intermediate fusion. Flexynesis can be installed from publicly accessible repositories such as PyPI, Guix, and Bioconda, and is also available on the Galaxy platform. Flexynesis is built on PyTorch, PyTorch Lightning, and PyTorch Geometric.
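To make the early-fusion, multi-task idea described above concrete, the following is a minimal PyTorch sketch, not Flexynesis's actual implementation or API: two omics matrices are concatenated (early fusion), passed through a shared feedforward encoder, and fed to separate task heads (classification and regression). All class names, feature dimensions, and the synthetic data are illustrative assumptions.

```python
import torch
from torch import nn


class EarlyFusionMultiTask(nn.Module):
    """Toy illustration (not the Flexynesis API): early fusion by
    concatenation, a shared MLP encoder, and two task-specific heads."""

    def __init__(self, dim_omics1, dim_omics2, latent_dim=32, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_omics1 + dim_omics2, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(latent_dim, n_classes)  # e.g. a categorical outcome
        self.regressor = nn.Linear(latent_dim, 1)           # e.g. a continuous outcome

    def forward(self, x1, x2):
        # Early fusion: concatenate modalities before encoding
        z = self.encoder(torch.cat([x1, x2], dim=1))
        return self.classifier(z), self.regressor(z)


# Synthetic example: 100 samples, two modalities with 500 and 300 features
x_mod1, x_mod2 = torch.randn(100, 500), torch.randn(100, 300)
y_class, y_reg = torch.randint(0, 3, (100,)), torch.randn(100, 1)

model = EarlyFusionMultiTask(500, 300)
logits, pred = model(x_mod1, x_mod2)

# Multi-task objective: sum of the per-task losses
loss = nn.CrossEntropyLoss()(logits, y_class) + nn.MSELoss()(pred, y_reg)
loss.backward()
```

An intermediate-fusion variant would instead encode each modality with its own sub-network and concatenate the resulting latent representations before the task heads; the multi-task pattern of summing per-task losses stays the same.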