Table 1 Illustrative quotes for the respective themes.

From: Clinician perspectives on explainability in AI-driven closed-loop neurotechnology

Algorithmic specifications information request

“To be honest, as a doctor I don’t feel comfortable to really understand the differences between a random forest and a Support Vector Machine (SVM).” (P6)

“I don’t know whether I need to understand what the AI model is doing.” (P11)

“Well for the patients [AI and training data] is not relevant, I mean for 99.9% of all patients. But for me it is interesting […].” (P12)

“As with any technical solutions, there needs to be someone who can repair it […]. This is the same for AI-driven technology, so I believe that engineers and technicians need a good understanding of how to repair it and decide when to decommission it.” (P18)

“But verifying how the machine or the technology has derived the result, I think, is [for me] overrated. […] I think that’s certainly a different question for engineers and developers.” (P18)

Input data information request

“[…] There are a lot of clinical specifications, so therefore I would like to know if the training data is suitable to be used for certain situations.” (P12)

“[…] But there are a lot of [tremor] subtypes, e.g., for Parkinson’s disease patients this means many axial symptoms, and these are more difficult to be reduced by neurostimulation.” (P10)

“We need unrestricted access to the source data. We simply need to be able to obtain and analyze this source data without any time delay and, if necessary, to be able to model it with AI to really make a statement. As long as we don’t have this, we are a bit in a black box situation and only see the AI-driven output at some point […]. But that’s the important thing.” (P19)

“Closed-loop plays a big role in the sensory and motor symptoms domain. And in this area, we would say the main thing is it works, and the patient has a better quality of life.” (P7)

“Because we really don’t know, so it’s sometimes more like Frankenstein research: you try it and see what happens.” (P7)

“Reality is much more complex than all the, I’ll say, training datasets you have and what looks great and works great in training datasets; as soon as you let [in] a little more reality, it starts to collapse.” (P7)

“We did a project with a start-up from the [blinded]. We wrote a Science paper together where, all of a sudden, a hormone level was statistically significant for the algorithm’s prediction. This was complete nonsense from a clinical perspective (although the data scientist liked the result) because this hormone level was a proxy for a certain medication for only very sick patients.” (P13)

Output information request

“Even if you don’t understand the system itself, you might at least be able to assess the consequences of using it.” (P7)

“With DBS, we have an extremely efficient therapy and all that without AI. The thing is, DBS is not going to improve by 30% because we all of the sudden have adaptive DBS […].” (P15)

AI user interface design requirements

“If the focus is too broad, this might lead to a cancelling out of the effects.” (P10)

“Just link the relevant publications to the findings to see the robustness of the features in other research as well.” (P13)

“As a researcher, I would like to have 100% transparency and access to the data.” (P7)