Large language models can be manipulated into generating misinformation by poisoning only a very small fraction of their training data, but a harm-mitigation strategy based on biomedical knowledge graphs offers a way to address this vulnerability.
- Daniel Alexander Alber
- Zihao Yang
- Eric Karl Oermann