Author: Cynthia Rudin
Showing 1–13 of 13 results
  • There is no shortage of opinions on the impact of artificial intelligence and deep learning. We invited authors of Comment and Perspective articles that we published in roughly the first half of 2019 to look back at the year and give their thoughts on how the issue they wrote about developed.

    • Alexander S. Rich
    • Cynthia Rudin
    • Jack Stilgoe
    Special Features
    Nature Machine Intelligence
    Volume: 2, P: 2-9
  • Multiple predictive models can predict equally well, yet the specific variables underlying these models may be important to some of them and not to others. Jiayun Dong and Cynthia Rudin demonstrate a method to visualize and quantify this variation in variable importance.

    • Jiayun Dong
    • Cynthia Rudin
    Research
    Nature Machine Intelligence
    Volume: 2, P: 810-824
  • Deep learning excels in medical signal processing but lacks interpretability. The authors propose an efficient, interpretable architecture that matches the performance of larger models with orders of magnitude fewer parameters on tasks common to wearable devices.

    • Sully F. Chen
    • Zhicheng Guo
    • Cynthia Rudin
    Research
    Nature Machine Intelligence
    Volume: 6, P: 1132-1144
  • The black-box nature of neural networks is a concern for high-stakes medical applications in which decisions must be based on medically relevant features. The authors develop an interpretable machine learning-based framework that aims to follow the reasoning processes of radiologists in providing predictions for cancer diagnosis in mammography.

    • Alina Jade Barnett
    • Fides Regina Schwartz
    • Cynthia Rudin
    Research
    Nature Machine Intelligence
    Volume: 3, P: 1061-1070
  • Black box machine learning models can be dangerous for high-stakes decisions. They can rely on untrustworthy databases, and their real-time predictions are difficult to troubleshoot, explain and error-check. Their use leads to serious ethics and accountability issues.

    • Cynthia Rudin
    Comments & Opinion
    Nature Reviews Methods Primers
    Volume: 2, P: 1-2
  • There is much interest in ‘explainable’ AI, but most efforts concern post hoc methods. Instead, a neural network can be made inherently interpretable, with an approach that involves making human-understandable concepts (aeroplane, bed, lamp and so on) align along the axes of its latent space.

    • Zhi Chen
    • Yijie Bei
    • Cynthia Rudin
    Research
    Nature Machine Intelligence
    Volume: 2, P: 772-782
  • In mouse models of pulmonary metastasis, adjuvant epigenetic therapy targeting myeloid-derived suppressor cells disrupts the premetastatic microenvironment after resection of primary tumours and inhibits the dissemination of residual tumour cells.

    • Zhihao Lu
    • Jianling Zou
    • Malcolm V. Brock
    Research
    Nature
    Volume: 579, P: 284-290
  • Describing data from preclinical studies and early clinical trial results, Azad et al. suggest that low-dose epigenetic-modulating agents can reprogramme tumour cells and override any immediate cytotoxic effects that are typically observed at high doses. Such optimization of drug dosing and scheduling of currently available epigenetic therapies could give these agents a prominent place in cancer management—when used alone or in combination with other therapies.

    • Nilofer Azad
    • Cynthia A. Zahnow
    • Stephen B. Baylin
    Reviews
    Nature Reviews Clinical Oncology
    Volume: 10, P: 256-266
  • There has been a recent rise of interest in developing methods for ‘explainable AI’, where models are created to explain how a first ‘black box’ machine learning model arrives at a specific decision. It can be argued that efforts should instead be directed at building inherently interpretable models in the first place, particularly in applications that directly affect human lives, such as healthcare and criminal justice.

    • Cynthia Rudin
    Reviews
    Nature Machine Intelligence
    Volume: 1, P: 206-215
  • The design of polymers for regenerative medicine could be accelerated with the help of machine learning. Here the authors note that machine learning has been applied successfully in other areas of polymer chemistry, while highlighting that data limitations must be overcome to enable widespread adoption within polymeric biomaterials.

    • Samantha M. McDonald
    • Emily K. Augustine
    • Matthew L. Becker
    Reviews, Open Access
    Nature Communications
    Volume: 14, P: 1-11
  • The imminent release of tissue atlases combining multichannel microscopy with single-cell sequencing and other omics data from normal and diseased specimens creates an urgent need for data and metadata standards to guide data deposition, curation and release. We describe a Minimum Information about Highly Multiplexed Tissue Imaging (MITI) standard that applies best practices developed for genomics and for other microscopy data to highly multiplexed tissue images and traditional histology.

    • Denis Schapiro
    • Clarence Yapp
    • Peter K. Sorger
    Comments & Opinion
    Nature Methods
    Volume: 19, P: 262-267