Showing 1–5 of 5 results
Advanced filters: Author: Sören Dittmer
  • With the explosion of increasingly complex machine learning models for research applications, more attention must be paid to developing high-quality codebases. Sören Dittmer, Michael Roberts and colleagues discuss how to embrace guiding principles from traditional software engineering, including incrementally growing software and using two types of feedback loop to test correctness and efficacy.

    • Sören Dittmer
    • Michael Roberts
    • Carola-Bibiane Schönlieb
    Reviews
    Nature Machine Intelligence
    Volume: 5, P: 681-686
  • The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning to assess a model’s performance. However, when concordance is not the only ambition, the AUROC gives only partial insight into performance, masking distribution shifts of model outputs and model instability.

    • Michael Roberts
    • Alon Hazan
    • Carola-Bibiane Schönlieb
    Comments & Opinion
    Nature Machine Intelligence
    Volume: 6, P: 373-376
  • Shadbahr et al. highlight the importance of evaluating imputation quality when building classification models for incomplete data. They demonstrate how a model built on poorly imputed data can compromise the classifier, and develop a new method for assessing imputation quality based on how well the overall data distribution is preserved.

    • Tolou Shadbahr
    • Michael Roberts
    • Carola-Bibiane Schönlieb
    Research (Open Access)
    Communications Medicine
    Volume: 3, P: 1-15