Table 2 Recommendations for technology designers and developers of AI-driven closed-loop neurotechnologies.
| Design Focus | Recommendations | Rationale |
|---|---|---|
| Explainability | Prioritize clinically relevant explanations (e.g., input–output logic, feature importance) | Clinicians value understanding how inputs relate to outputs over technical model details |
| User-centered interfaces | Design interfaces that visualize AI outputs and relevant features in an intuitive clinical format; prioritize tiered and role-adapted explainability frameworks (see the first sketch after this table) | Supports rapid interpretation and integration into the clinical workflow; matches the information provided to stakeholder-specific needs and ethical priorities |
| Transparency over full disclosure | Offer selective transparency tailored to user needs rather than full algorithmic transparency | Full technical detail is often irrelevant; actionable clarity is more effective |
| Context-specific XAI tools | Implement explainability methods such as SHAP, adapted to the neuroclinical use case (see the second sketch after this table) | Clinicians responded positively to familiar, task-specific interpretability tools |
| Clinical relevance assurance | Ensure outputs align with clinical goals, terminology, and decision pathways | Builds trust and promotes usability by linking AI reasoning to real-world clinical logic |
| Iterative co-design | Involve clinicians throughout the development lifecycle | Incorporates real-world constraints and enhances acceptance through early stakeholder input |
| Ethical and regulatory alignment | Embed explainability features that meet legal standards and protect patient rights (e.g., EU AI Act, Article 86) | Ensures compliance and mitigates future policy and liability risks |
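
To make the tiered, role-adapted explainability recommendation concrete, the following is a minimal sketch of how explanation content might be filtered by stakeholder role. The roles, tiers, and field names are illustrative assumptions, not drawn from the study.

```python
# Illustrative sketch of role-adapted explanation tiers.
# Roles, tiers, and fields are hypothetical, not from the source study.
from dataclasses import dataclass


@dataclass
class Explanation:
    summary: str             # plain-language outcome, shown to all roles
    key_features: list[str]  # ranked signal features, for clinicians
    model_details: str       # model/version notes, for engineers


# Each role sees only the tiers relevant to its informational needs.
ROLE_TIERS = {
    "patient": ("summary",),
    "clinician": ("summary", "key_features"),
    "engineer": ("summary", "key_features", "model_details"),
}


def render(explanation: Explanation, role: str) -> dict:
    """Return only the explanation fields assigned to the given role."""
    return {field: getattr(explanation, field) for field in ROLE_TIERS[role]}


exp = Explanation(
    summary="Stimulation amplitude increased in response to rising beta power.",
    key_features=["beta_power", "spike_rate"],
    model_details="Gradient-boosted trees, v2.3, trained 2024-11.",
)
print(render(exp, "clinician"))  # summary and key features only
```

The design choice here mirrors the table's rationale: the same underlying explanation object serves all stakeholders, while the interface controls how much of it each role sees.

The SHAP recommendation can likewise be illustrated with a short sketch. The classifier, synthetic data, and neural feature names (e.g., beta_power, line_length) are hypothetical stand-ins for a real neuroclinical pipeline; the SHAP calls themselves are the library's standard API.

```python
# Minimal sketch: SHAP feature attributions for a hypothetical
# neural-event classifier. Data, labels, and features are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features extracted from sensing electrodes.
feature_names = ["beta_power", "gamma_power", "line_length", "spike_rate"]
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic labels

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Depending on SHAP version, binary classifiers return either a
# per-class list or a (samples, features, classes) array.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
if vals.ndim == 3:
    vals = vals[:, :, 1]  # attributions for the positive class

# Rank features by mean absolute attribution: the kind of clinically
# oriented "which signals drove this decision" summary the table
# recommends surfacing instead of raw model internals.
importance = np.abs(vals).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```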
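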