Table 1 A breakdown of the barriers towards clinical implementation of neuroimaging CAD models presented in this article.
From: Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting
Barrier type | Reason | Description | Solution(s) |
---|---|---|---|
Technical | Generalizability | Failure of CAD models to generalize across different scanner types and hospitals, as well as different population subgroups, ethnicities, ages, and genders | Federated learning, larger datasets, methods that prevent overfitting, domain adaptation, harmonization (see the harmonization sketch after the table) |
Technical | Verifiability | A general set of problems, including the black-box nature of models, prevents users from knowing the reasons for a CAD model's decision | Segmentation-based models, explainable AI, gradient-based class activation maps (see the Grad-CAM sketch after the table) |
Technical | Integration into workflow | Translation of models from proof-of-concept to usable software products | Investment in software engineering and user experience, corporate partnerships |
Technical | Incomplete and mislabeled data | EHR data is often incomplete or mislabeled, hampering the training of CAD models | More careful record keeping, translation of clinician notes, careful exclusion of data, and development of methods that can handle such incomplete data |
Technical | High computational requirements | Computational requirements for medical image analysis are very high, making such analyses expensive | Cloud-based solutions; institutional investment in servers |
Disease-related | Lack of biomarkers | Lack of consistent physiological features detectable in data that are consistently present with a particular brain disorder | Dependent on the type of disorder studied, and for some it may be insurmountable. However, higher-resolution data, different modalities, and more advanced analysis techniques may mitigate the issue. |
Disease-related | Lack of sufficient modalities | Modality types used in the research world (primarily to study psychiatric disorders) are often not present in the clinic, curtailing the implementation of neuroimaging for the detection of such disorders | Inclusion of fMRI, EEG, etc. into clinical workflows |
Disease-related | Disease differentiation | Identifying the cause of a disease (e.g. whether dementia is caused by Alzheimer's disease or vascular disease) is often just as important as detecting the presence of a disease | More careful labeling of disorders and confounders, further study of ML methods beyond binary classification |
Disease-related | Correlation with confounding variables | The disorder of interest is systematically correlated with another variable regardless of the dataset; similar to generalizability (above), except that different methods are required to mitigate model bias | Data matching, machine-learning-based regression methods (see the confound-regression sketch after the table) |
Disease-related | Lack of control group | Clinical data often lacks a healthy control group against which to compare when training CAD models | Careful data curation; reformulation of the problem such that the control group need not be healthy, but merely free of the disease of interest |
Institutional | Separation of AI experts and data scientists from clinicians | Data scientists and AI experts are most often employed at sites other than hospitals, thus being separated from real-world medical data, while clinicians work in hospitals, leading to incomplete understanding on both sides | Increased postdoctoral salaries in research hospitals, stabilization of career tracks for junior biomedical researchers, specialized fellowship programs to partner AI experts with clinicians |
Institutional | Technical expertise of clinicians/usability of CAD models | Clinicians are disinclined to use CAD models and other automated tools because of the technical skill and time required for their use | Closer collaboration with data scientists and AI experts; supplementary training courses; prioritization of usability in CAD models |
Institutional | Lack of funding for implementation studies | Funding bodies are often more inclined to fund novelty studies than implementation studies | Different guidelines for funding bodies (e.g. NIH) |
Institutional | Disorganization of clinical databases | Related to "incomplete and mislabeled data" above. Hospital databases are often disorganized, hampering big-data machine-learning studies and leading to mislabeled data. Medical images are often duplicated, and identifiers are often missing or difficult to match with the images, leading to loss of clinical/demographic information | Institutional investment in clinical databases, on the part of both hospitals and vendors |
Institutional | Federal approval processes | Federal bodies are often disinclined to approve CAD models, though much of this results from the issues above | Addressing many of the above problems, leading to greater confidence in the efficacy of CAD models; clarification, on the part of the FDA and other regulatory bodies, of the requirements for CAD model implementation and approval |
Institutional | Underdeveloped business model of medical AI | The business model for medical AI remains underdeveloped: to whom does the value accrue, and who pays for it? | Development of AI business models in other industries and in business schools, which will likely inform best practices in medicine |
Institutional | Lack of capabilities for post-market surveillance | After an AI model is implemented in a hospital, what mechanisms are available to monitor its effectiveness on a large scale? | Centralized monitoring and reporting systems that do not interface with patient data directly, thus ensuring security |
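To make the "harmonization" entry under Generalizability more concrete, the following is a minimal sketch of per-site feature standardization, a simple stand-in for dedicated harmonization methods such as ComBat. The function name `harmonize_by_site` and the array layout are illustrative assumptions, not something specified in the article.

```python
import numpy as np

def harmonize_by_site(X, site):
    """Z-score each imaging feature within each acquisition site.

    X    : (n_subjects, n_features) array of imaging-derived features
    site : (n_subjects,) array of scanner/site labels
    """
    Xh = np.empty_like(X, dtype=float)
    for s in np.unique(site):
        mask = site == s
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0) + 1e-8   # guard against zero variance
        Xh[mask] = (X[mask] - mu) / sd    # remove site-specific location and scale
    return Xh
```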
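The gradient-based class activation technique listed under Verifiability can be sketched roughly as follows for a PyTorch 3D CNN classifier (a Grad-CAM-style explanation). `model`, `target_layer`, and the input shape are hypothetical placeholders rather than anything prescribed by the article.

```python
import torch

def grad_cam(model, target_layer, volume, class_idx):
    """Return a coarse class activation map explaining one prediction.

    volume : input tensor of shape (1, C, D, H, W) for a 3D CNN
    """
    acts, grads = {}, {}
    fh = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(value=out.detach()))
    bh = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(value=gout[0].detach()))
    try:
        logits = model(volume)
        model.zero_grad()
        logits[0, class_idx].backward()            # gradient of the target class score
    finally:
        fh.remove()
        bh.remove()

    # Weight each feature map by its spatially averaged gradient, then sum.
    weights = grads["value"].mean(dim=(2, 3, 4), keepdim=True)
    cam = torch.relu((weights * acts["value"]).sum(dim=1))
    return cam                                     # coarse saliency volume
```

The resulting map can be upsampled to the input resolution and overlaid on the scan to indicate which regions drove the classification.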
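Similarly, one common reading of "machine-learning-based regression methods" for confounding variables is to residualize each feature against the confound before model training. The sketch below assumes tabular imaging features; the names `X` and `confound` are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def regress_out_confound(X, confound):
    """Remove the linear effect of a single confound from every feature.

    X        : (n_subjects, n_features) imaging features
    confound : (n_subjects,) confounding variable (e.g. age or scanner)
    """
    c = np.asarray(confound, dtype=float).reshape(-1, 1)
    model = LinearRegression().fit(c, X)   # one slope and intercept per feature
    return X - model.predict(c)            # residuals uncorrelated with the confound
```

In practice the regression should be fit on training data only and then applied to held-out data to avoid leakage.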