Extended Data Table 1 Hyper-parameter settings (rank and alpha) for the LoRA experiments fine-tuning on SplitCIFAR in the main text

From: Engineering flexible machine learning systems by traversing functionally invariant paths

  1. Table shows explicit accuracy values for SplitCIFAR fine-tuning using LoRA at different values of the LoRA rank parameter. The final entry (LoRA weights + original weights) shows accuracy for the combined application of LoRA and full-network fine-tuning. Bold indicates the row showing FIP performance.
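To make the tabulated hyper-parameters concrete, below is a minimal sketch of the standard LoRA parameterization, in which a frozen weight matrix W is adapted by a scaled low-rank correction, W' = W + (alpha / r) * B A. The variable names and dimensions here are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 8, 8
r, alpha = 2, 4  # the rank and alpha hyper-parameters tabulated above

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-initialized, so W' == W at the start

def lora_forward(x):
    # Adapted layer: base weight plus the scaled low-rank correction.
    # Only A and B (2 * r * d parameters) would be trained.
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
# With B still zero, the adapted layer matches the frozen base layer exactly
assert np.allclose(lora_forward(x), W @ x)
```

The table's final row ("LoRA weights + original weights") corresponds to also unfreezing W during fine-tuning rather than training A and B alone.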