Fig. 2: A flowchart showing the processes involved in uploading a new contribution to the leaderboard.
From: JARVIS-Leaderboard: a large scale benchmark of materials design methods

The jarvis_populate_data.py script generates a benchmark dataset. A user can apply their method, train models, or run experiments on that dataset and prepare a csv.zip file, a metadata.json file, and any other required files in a new folder in the contributions directory. The user can then check the contribution locally with the jarvis_server.py script. Next, the folder can be uploaded to the user's GitHub account with the automated jarvis_upload.py script, which carries out the individual GitHub upload steps. Administrators of the JARVIS-Leaderboard at NIST verify the contribution, after which it becomes part of the leaderboard website.
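Since the exact checks performed by jarvis_server.py are not detailed here, the sketch below is a minimal, hypothetical pre-upload sanity check that a contributor might run on a new contribution folder: it verifies that a csv.zip results archive and a parseable metadata.json are present. The helper name check_contribution, the example folder path, and the model_name metadata key are illustrative assumptions, not the project's actual API or schema.

```python
import json
import zipfile
from pathlib import Path

# Assumed metadata field for illustration; the real schema may differ.
REQUIRED_METADATA_KEYS = {"model_name"}


def check_contribution(folder: str) -> list[str]:
    """Return a list of problems found in a contribution folder."""
    problems = []
    root = Path(folder)

    # A contribution should ship its predictions as a *.csv.zip archive.
    csv_zips = list(root.glob("*.csv.zip"))
    if not csv_zips:
        problems.append("no *.csv.zip results archive found")
    for archive in csv_zips:
        if not zipfile.is_zipfile(archive):
            problems.append(f"{archive.name} is not a valid zip file")

    # metadata.json must exist and be valid JSON with the expected keys.
    meta = root / "metadata.json"
    if not meta.is_file():
        problems.append("metadata.json is missing")
    else:
        try:
            data = json.loads(meta.read_text())
            missing = REQUIRED_METADATA_KEYS - data.keys()
            if missing:
                problems.append(f"metadata.json lacks keys: {sorted(missing)}")
        except json.JSONDecodeError as exc:
            problems.append(f"metadata.json is not valid JSON: {exc}")

    return problems


if __name__ == "__main__":
    # Hypothetical contribution folder path, following the layout described above.
    issues = check_contribution("jarvis_leaderboard/contributions/my_new_method")
    print("OK" if not issues else "\n".join(issues))
```

Running such a check before invoking jarvis_upload.py mirrors the local-verification step in the flowchart and can catch missing files before the GitHub upload and NIST review stages.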