The convergence of artificial intelligence (AI) and neurotechnology, sometimes referred to as NeuroAI, is reshaping neurological practice. From AI-enhanced seizure prediction and neuroimaging analytics to closed-loop neurostimulation and brain–computer interfaces, AI-based tools already shape clinical care and the lives of people with neurological disease1,2. Yet, with this transformation comes a heightened need for ethical vigilance. Among the many concerns raised by these developments, the issue of trust stands out as especially urgent.

Discussions about trust in AI, particularly in healthcare, have intensified over the past few years, especially since the 2019 release of the European Union’s Ethics Guidelines for Trustworthy AI, which served as a template for the 2024 European Union AI Act3. Much of this debate has focused on risks such as opacity, bias or technical malfunction. Meanwhile, neurotechnology has become a site of growing ethical scrutiny, especially as it becomes more autonomous, adaptive and embedded in public discourse4. Yet, in both domains, trust is often framed narrowly, focusing on the trust of individual patients or clinicians. While this question is crucial, it is insufficient. Trust does not operate in a vacuum, and the integration of AI into neurotechnology, where AI does not just interpret data but acts upon the human brain, inspires awe as much as fear in the collective imagination, rendering public trust a central concern. This shortfall in attention is lamentable, as public trust in AI and neurotechnology will codetermine the acceptance of novel tools in neurology by both patients and physicians.

As a 2025 consensus statement highlighted, trust in technology is highly contextual and shaped by the sociotechnical environment in which it operates5. Trust is therefore not simply an individual attitude but a web: an interdependent structural condition for sustainable innovation that encompasses patients, clinicians, regulators, developers, institutions and the broader public. Without trust, adoption might stall even when tools are technically sound6. Moreover, the convergence of AI and neurotechnology creates a compounding effect: public distrust in one domain can quickly spill over into the other and vice versa. But what happens if NeuroAI systems are not trustworthy? Faulty systems could produce misleading diagnostics, biased outputs or inappropriate interventions that directly affect cognitive or emotional functioning, harming both patients’ autonomy and their health.

Public trust in NeuroAI can erode even in the absence of technical failures. Increasingly, public concern centres not only on what NeuroAI systems do but on how they are developed, communicated and governed. For example, controversies surrounding companies such as Neuralink have highlighted how opaque trial reporting, conflicts of interest and close ties between private actors and government roles can create deep scepticism7. In these cases, the problem is not necessarily the technology’s safety, but the perceived lack of transparency, accountability and procedural integrity. Trust, once lost, can be difficult to restore, and its loss can extend far beyond a single device or company, casting doubt over the legitimacy of the entire field. Hence, maintaining public trust is not simply about avoiding errors but about conducting NeuroAI development in a way that is intelligible, inclusive and aligned with democratic values.

So, how can we move forward? At least three lessons for trust-building are identified here. First, trust, whether individual or public, cannot be enforced top-down but must be based on reasons, and ideally good ones8. Therefore, NeuroAI systems need to be made trustworthy through responsible, human-centred design, involving meaningful stakeholder inclusion and responsiveness to the needs and concerns of those most affected: people with neurological disease, their caregivers and clinicians. Importantly, such trust-building should be iterative and dialogical, incorporating stakeholders with different perspectives as co-creators of the technology throughout the entire development and deployment process. In this sense, NeuroAI demands that neuroethics and AI ethics converge, not only in institutional structures but also in intellectual priorities. AI ethics can learn from neuroethics’ sustained engagement with questions of identity, agency and personhood, whereas neuroethics could benefit from a more operational focus on explainability, bias mitigation and accountability.

Second, trustworthiness must be perceivable. It is not enough for systems to be safe or fair in principle; the public must be able to see and understand the basis for trust. NeuroAI raises questions not only about what systems do but also about how they do it, and whether the public perceives this doing as aligned with their values and expectations. Put differently, with NeuroAI, we need not only to get it right, but also to get it across. This demands openness, not only in algorithmic logic but in governance, oversight and communication. In a space saturated with opacity and hype, clear and honest messaging is essential. Inflated claims, whether in media or scientific articles, can backfire, fuelling unrealistic expectations and deepening distrust when they go unmet9.

Third, public trust should never be a substitute for regulation, nor should it obscure the need for proper scrutiny. Fostering trust is not always desirable per se. In some cases, the most responsible action might be to withhold trust, demand further evidence and question whether certain applications are warranted in the first place. An ethically substantive focus on trust can and should help to make existing limitations visible, acknowledge the need for additional evidence and motivate open and honest communication. Emerging governance frameworks such as the UNESCO Recommendation on the Ethics of Neurotechnology and the upcoming Council of Europe’s Guidelines on AI-processing of Neural Data mark a step in the right direction.

Ultimately, trust in NeuroAI is not just about system performance but about the larger web of society we want to build. Although neurology alone cannot resolve all the challenges of NeuroAI, the field can still make a pivotal contribution by ensuring that the integration of AI and neurotechnology happens in a responsible manner, guided by patients’ interests and preventing failures that could inflict lasting damage on public trust. In doing so, neurology can help to secure a future in which public trust in NeuroAI, and in healthcare more broadly, is safeguarded. As Annette Baier aptly noted10, “Trust comes in webs, not in single strands, and disrupting one strand often rips apart whole webs.” In NeuroAI, those webs are intricate but indispensable.