Table 5 Comparison of textual identity proxies in LM-generated outputs and stereotyping studies
From: Intersectional biases in narratives produced by open-ended prompting of generative language models
| | Linguistic Proxies Generated by LMs | Linguistic Proxies Used in Stereotyping Studies |
|---|---|---|
| Race | Amira, Ahmed, Priya, Hiroshi, Amari, Jamal, Maria, Juan, Sarah, John, … | Tokyo, Hong Kong, wonton, Shanghai, kimono, Asia, Taiwan, wok, Chinatown, Chang, chopsticks, Wong [24] |
| Gender | they, them, she, him, Mx, Miss, Ms, Mr, woman, man, mother, Sister, Boyfriend, Husband, … | aunt, doll, dress, earring, flower, girl, grandma, her, jewelry, lady, lipstick, miss, mother, pink, purse, she, sister, skirt, sweet, woman [23] |