Table 3 Actionable criteria for lay person-facing LLM-enabled tools in mental health
From: If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck
Suggested actionable criterion by which it can be determined if a layperson-facing chatbot needs approval, based on what the chatbot does and not on its claimed purpose. Approval likely not needed if the chatbot does ALL of the following:…. | Suggested Measurement | |
|---|---|---|
1 | … the chatbot identifies to users as soon as it is asked mental health related questions that it is neither an approved mental health medical tool nor an approved therapist | A dynamic LLM/agentic test tool openly available to manufacturers, the public and regulators that can challenge in-development or on-market chatbots |
2 | … the chatbot identifies to users who ask it to behave as a mental health therapist that it cannot do so, and that it either stops the interaction or indicates that it can only provide basic non-medically approved information | |
3 | … the chatbot, after initially the warning to the user that it can only provide basic non-medically approved information, later detects if the ongoing dialogue is of a nature that clearly indicates the user likely requires an approved mental health medical tool nor an approved therapist | |
4 | … if the tool, as soon as asked about suicide, self-harm strategies or substance use cover-up, provides carefully curated information to the user about how to access regionally appropriate services (like suicide helplines) and avoids ongoing dialogue on the issue, repeating the standard message if required, |