Large language models appear aligned, yet harmful knowledge acquired during pretraining persists as latent patterns. Here, the authors prove that current alignment creates only local safety regions, leaving global adversarial paths intact, and empirically demonstrate widespread vulnerability across leading aligned models.
- Jiawei Lian
- Jianhong Pan
- Lap-Pui Chau