Abstract
We present a dataset capturing multiple manifestations of self-bias, the systematic prioritization of self-related information, across cognitive, social, and economic decision-making domains. While individual self-bias effects have been extensively documented, their underlying relationships remain poorly characterized, limiting the development of integrative theoretical frameworks. This dataset addresses this limitation by providing comprehensive trial-by-trial data from 134 participants who completed 10 classic self-bias paradigms: self-reference effect, mere ownership effect, self-face visual search, self-name visual search, cocktail party effect, self-name attentional blink, shape-label matching, self-enhancement, implicit association test of self-esteem, and endowment effect. We also collected key individual difference variables, including personality traits, self-esteem, and culture-related self-construals. The dataset enables researchers to examine the relationships among self-biases, apply computational models to elucidate their underlying mechanisms, and investigate how individual differences may modulate self-bias across domains. This resource provides an empirical foundation for determining whether self-biases reflect a unitary construct, like a g-factor of self-processing, or domain-specific phenomena, advancing our understanding of how self-relevance shapes human cognition and behavior.
Background & Summary
Humans systematically prioritize information related to the self over information related to others, a phenomenon observed consistently across perception, attention, memory, evaluation, and choice1,2,3,4. These self-biases manifest in multiple forms, reflecting the multifaceted nature of self-representation in human cognition. Despite their ubiquity in everyday life, our understanding of how different self-biases relate to one another remains limited. A major reason is historical: cognitive, social, and economic traditions have developed in parallel, using distinct paradigms, measures, and theories.
In cognitive psychology, self-prioritization yields faster and more accurate processing of self-related information5,6,7,8,9. Related findings include the self-referential memory advantage for self-encoded material10,11,12 and preferential detection of one’s own face and own name in cluttered scenes (the cocktail-party effect)13,14. In social psychology, self-positivity bias reflects individuals’ tendency to perceive themselves in an ‘unrealistically’ positive manner (e.g., better-than-average judgments)15,16,17, evident at both explicit18,19,20 and implicit21,22 levels. Valuation may link these traditions but highlights mechanistic heterogeneity: the mere-ownership effect aligns with a positivity route, whereas the endowment effect reflects reference-dependent valuation driven by loss aversion23,24.
A central question is whether these biases represent manifestations of the same underlying mechanism (analogous to a unified self-processing system that operates regardless of context), related but distinct phenomena, or entirely separate processes25,26. Intrinsic self-biases observed in individuals with amnesia or mild cognitive impairment suggest that some self-processing may operate independently of explicit self-knowledge27,28. However, recent studies attempting to address this question have typically examined correlations between two or three self-bias paradigms, generating inconsistent results with correlations that are often small and lack robustness25,26,29,30. Equally unresolved is how individual differences, such as personality, self-esteem, and cultural factors, shape the magnitude or expression of self-bias.
The present dataset constitutes the most comprehensive collection of self-bias measures to date, providing trial-by-trial data for 134 participants across 10 widely used paradigms spanning cognitive, social, and economic decision-making domains: self-reference effect, mere ownership effect, self-face visual search, self-name visual search, cocktail party effect, self-name attentional blink, shape-label matching, self-enhancement, implicit association test of self-esteem, and endowment effect. In addition, we collected measures of key individual difference variables, including the Big Five personality dimensions, self-esteem, and independent-interdependent self-construals, which previous research suggests may modulate self-bias effects20,31,32.
Taken together, this resource brings 10 established self-bias paradigms into a single, trial-level dataset collected within one cohort, enabling direct cross-paradigm comparisons across cognitive domains and cautious tests of shared versus domain-specific mechanisms25,26. The inclusion of individual difference measures—such as personality, self-esteem, and self-construals—allows examination of heterogeneity across individuals and potential cultural modulation. The trial-level structure is suitable for computational modeling (e.g., drift-diffusion modeling), making it possible to investigate where self-biases may influence processing (evidence accumulation, decision thresholds, response bias, or non-decision time)6,33,34. We release these data to support transparent reuse, method benchmarking, and progress toward integrative accounts of how self-related processing shapes cognition and behavior across contexts.
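For readers who want a quick, model-based summary before committing to full drift-diffusion fits, the trial-level RTs and accuracies can be condensed with the EZ-diffusion approximation (Wagenmakers and colleagues), which maps condition-wise accuracy, RT variance, and mean RT onto drift rate, boundary separation, and non-decision time. The sketch below is illustrative only: it is not the modeling approach used in this paper, the numerical inputs are hypothetical, and edge cases (accuracy of exactly 0, 0.5, or 1) require the usual corrections.

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion approximation (Wagenmakers et al., 2007).

    prop_correct: proportion of correct responses (not exactly 0, 0.5, or 1)
    rt_var:       variance of correct-response RTs (in s^2)
    rt_mean:      mean of correct-response RTs (in s)
    Returns (drift rate v, boundary separation a, non-decision time ter).
    """
    p = prop_correct
    L = np.log(p / (1 - p))                                   # logit of accuracy
    x = L * (L * p**2 - L * p + p - 0.5) / rt_var
    v = np.sign(p - 0.5) * s * x**0.25                        # drift rate
    a = s**2 * L / v                                          # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))   # mean decision time
    return v, a, rt_mean - mdt                                # ter = mean RT - decision time

# Hypothetical condition summaries (accuracy, RT variance, mean RT in seconds)
v_self, a_self, ter_self = ez_diffusion(0.95, 0.015, 0.62)
v_other, a_other, ter_other = ez_diffusion(0.88, 0.020, 0.70)
print(f"drift rate: self = {v_self:.3f}, other = {v_other:.3f}")
```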
Methods
Participants
The present research was approved by the Ethics Committee of the Department of Psychological and Cognitive Sciences at Tsinghua University (NO. 2022–29), and was conducted in accordance with the ethical standards laid down in the Declaration of Helsinki. A total of one hundred and thirty-four Chinese undergraduate or graduate students (77 females; mean age = 21.99 ± 2.08 years, ranging from 18 to 28 years) were recruited from the psychology subject pool at Tsinghua University. All of them reported being right-handed and having normal or corrected-to-normal vision without color blindness. Each participant signed an informed consent form for participation and data sharing before the start of the experiment. The entire experiment lasted approximately 4 hours, and each participant received 240 Chinese Yuan (CNY) for their time and participation.
Design and procedure
The experiment comprised 10 widely used experimental paradigms to investigate self-biases across cognitive domains (see Table 1 for a review), along with an online questionnaire that included measurements of Big Five personality, self-construals, individualism-collectivism, self-esteem, subjective well-being, self-concept clarity, the dark triad (Machiavellianism, narcissism, and psychopathy), and modesty. The tasks were divided into two sets, and participants completed the experiment over two separate days (two hours each). The order of the aforementioned tasks was pseudorandomized for each participant. After signing the informed consent form, participants were asked by the experimenter to indicate the full name of their best friend in real life. They were instructed to input either the full name or the family name of this “friend” before the start of certain self-bias paradigms, according to the instructions presented on the screen. The sex of the best friend was not controlled, as participants selected this individual based on their own subjective criteria. The self-enhancement and endowment effects were assessed through an online questionnaire hosted on the WJX platform (www.wjx.cn). The remaining tasks were conducted using PsychoPy software (version 2022.2.4). Stimuli were presented on a 25-inch external monitor with a resolution of 1920 × 1080 pixels at 60 Hz. Specific descriptions of each self-bias paradigm and each self-reported scale are given below.
Self-bias paradigms
Self-reference effect (SRE)
In line with previous research11,25,35, we employed a trait-word evaluation paradigm to elicit the self-reference effect. The task followed a single-factor (Identity: self, friend, or familiar other) within-participants design. The “familiar other” used in this paradigm was Lu Xun, a highly influential modern Chinese writer, widely recognized as a foundational figure in 20th-century Chinese literature and thought. His works are extensively taught in Chinese schools, and his character, ideas, and values are well-known to most university students in China. This choice follows prior studies that used Lu Xun as a representative figure for the “familiar other” condition in self-processing research9,36. It should be noted that we did not individually assess participants’ knowledge of Lu Xun’s character traits in this dataset. The materials comprised a list of 240 two-character trait adjectives. These adjectives were divided into four sub-lists (40 items for each of the three conditions in the encoding phase, and 120 new items serving as distractors in the recognition phase). Items in the four sub-lists were matched in valence and frequency according to the results of a pilot study. Notably, for each sub-list, half of the adjectives were positive, and the other half were negative.
As shown in Fig. 1, the task comprised two phases: the encoding phase and the recognition phase. The instruction presented before the encoding phase was as follows: “In the upcoming task, you will be shown a series of adjective–name pairs. For each pair, please evaluate how well the adjective describes the named individual. You will have 4 seconds to make each judgment. Please try to respond within this time limit.”
During the encoding phase, each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 1000 ms. After that, a trait adjective was presented simultaneously with either the full name of the participant, the full name of the best friend, or “Lu Xun”. Participants were instructed to rate the extent to which the trait adjective was descriptive of the specified person on a 5-point scale (1 represented “not at all descriptive” and 5 represented “very descriptive”). Participants were required to complete the rating within 4000 ms; otherwise, the stimuli would disappear, and no answer would be recorded. The next trial commenced immediately after the rating was completed or the stimuli disappeared. The encoding phase consisted of 120 randomized trials, with rest periods provided after every 60 trials.
Following the encoding phase, participants performed a task-unrelated mental calculation exercise for approximately 3 minutes. In the subsequent recognition phase, participants undertook an unexpected recognition test. All 240 trait adjectives were presented in a randomized sequence. The instruction presented before the recognition phase was as follows: “You will now see a series of adjectives. Your task is to judge whether each adjective is ‘old’ (i.e., previously presented during the encoding phase) or ‘new’ (i.e., not previously encountered). For each adjective judged as ‘old’, you will then be asked to indicate whether you ‘remember’ it (i.e., you can recollect specific contextual details from the encoding phase, such as associated thoughts, feelings, or visual impressions) or merely ‘know’ it (i.e., the item feels familiar, but you cannot recall any specific details)”37. Specifically, participants were required to indicate their responses to the aforementioned questions by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. There was no time limit imposed for responding to either of those questions.
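For reuse of this recognition phase, the old/new judgments can be scored per encoding identity, for example as corrected recognition (hit rate minus false-alarm rate); remember/know responses can be tallied analogously. The sketch below is a minimal example with placeholder column names ('item_status', 'identity', 'response') that should be mapped onto the variable coding documented in the dataset's explanation sheets.

```python
import pandas as pd

def corrected_recognition(df):
    """Hit rate per encoding identity minus the shared false-alarm rate (Pr = H - FA).

    Placeholder columns: 'item_status' ('old'/'new'), 'identity'
    ('self'/'friend'/'other'; defined for old items only), and
    'response' (the participant's 'old'/'new' judgment).
    """
    old_items = df[df["item_status"] == "old"]
    new_items = df[df["item_status"] == "new"]
    fa_rate = (new_items["response"] == "old").mean()             # false-alarm rate
    hit_rates = (old_items.groupby("identity")["response"]
                          .apply(lambda r: (r == "old").mean()))  # hit rate per identity
    return hit_rates - fa_rate
```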
Mere ownership effect (MOE)
The task involved an encoding phase and a recognition phase, following a single-factor (Ownership: self-owned or experimenter-owned) within-participants design. Consistent with previous research29, participants were informed that both they and the experimenter had won a competition, resulting in each receiving a basket filled with various shopping items.
Materials comprised 120 photographic representations of everyday purchase items, obtained from the Bank of Standardized Stimuli37. These items were divided into three sets (A, B, and C), each containing 40 items. Items were paired across sets based on similar categories. For instance, there were different fruits such as an apple, strawberry, and banana in sets A, B, and C, respectively. Items across the three sets were equated for familiarity based on subjective ratings provided by Brodeur and colleagues38. Each set was randomly allocated to either the “self” condition, the “experimenter” condition, or to serve as distractors during the recognition phase. The instruction presented before the encoding phase was as follows: “Both you and the experimenter had won a competition, resulting in each receiving a basket filled with various shopping items. The images of the items, along with their associated ownership cues, will be presented sequentially. Please assign each object to the appropriate basket based on the color of the ownership cue.”
During the encoding phase, two shopping baskets were displayed in the lower corners of the screen – one in blue and the other in green (see Fig. 2). Participants were informed that one basket belonged to themselves, and the other belonged to the experimenter. Each trial began with a 1000 ms fixation cross on a gray background (RGB: 127, 127, 127), followed by a centrally presented item photograph for 2000 ms. Subsequently, a blue or green border appeared around the item, indicating its assigned ownership. Participants were required to allocate the item to the corresponding basket by pressing a designated key, as quickly and accurately as possible (without time limit). Upon keypress, the next trial began immediately. The encoding phase included 80 randomized trials, with a rest break provided after 40 trials.
Following the encoding phase, participants carried out an unrelated mental calculation task for approximately 3 minutes. In the subsequent recognition phase, participants undertook an unexpected recognition test. All 120 items were presented in a randomized sequence. The instruction presented before the recognition phase was as follows: “You will now see a series of items. Your task is to judge whether each item is ‘old’ (i.e., previously presented during the encoding phase) or ‘new’ (i.e., not previously encountered). For each item judged as ‘old’, you will then be asked to indicate whether you ‘remember’ it (i.e., you can recollect specific contextual details from the encoding phase, such as associated thoughts, feelings, or visual impressions) or merely ‘know’ it (i.e., the item feels familiar, but you cannot recall any specific details).” Specifically, participants were required to indicate their responses to the aforementioned questions by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. There was no time limit imposed for responding to either of those questions.
Self-face visual search (FVS)
The task followed a 2 (Target identity: self or stranger) × 2 (Target presence: present or absent) within-participant design. Participants had their identification photo taken in the laboratory at the end of the first experimental day and completed this task on the second experimental day. Each photo was captured using a Canon M50 Mark II camera with a focal length of 45 mm. Apart from participants’ own facial image (i.e., the self-face), a facial image of another participant of the same biological sex was randomly selected by the experimenter to act as the face image of a stranger (i.e., the stranger-face) during the visual search task. All participants reported that they did not know the assigned stranger. Fifteen male and fifteen female facial images with a neutral expression were obtained from a previous database39, and served as distractors. The mean age of the participants in this database was comparable to that of our participants (21.70 ± 2.37 vs. 21.99 ± 2.08), t(162) = 0.68, p = 0.50. Using the Adobe Photoshop 2023 software, all these images were cropped to the same size (1600 pixels × 2000 pixels), and then stored with 256 gray levels14.
At the beginning of the task, facial images of the participant and the assigned stranger were displayed on the screen. The instruction presented was as follows: “You will search for either the self-face or the stranger-face in two separate blocks. In each block, please press the designated key as quickly and accurately as possible to indicate whether the target face was presented.” Specifically, participants responded by pressing either the “Z” or “M” key on the keyboard to indicate the presence or absence of the target face. The key-response mapping was counterbalanced across participants.
Each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 500 ms, followed by an array of six different facial images (each 2.38° × 3.18°) evenly positioned around the central point, visible for 3000 ms (see Fig. 3). The visual angle between the center of each image and the central point was approximately 5.3°. Distractor faces were randomly selected from a set of fifteen distractors of the same biological sex. Participants were required to press one of two corresponding keys to indicate whether the target image was presented in this trial, as quickly and accurately as possible. Upon a keypress, or 3000 ms after the onset of the image array, the subsequent trial would start immediately. The entire task comprised 192 trials, evenly distributed across four experimental conditions. The trial order was randomized within each block. Participants had the opportunity to rest after every set of 32 trials.
Self-name attentional blink (AB)
We investigated the self-name attentional blink using the rapid serial visual presentation paradigm26,40, which followed a 3 (T2 identity: self, friend, or stranger) × 2 (T2 presence: present or absent) × 4 (Lag: 1, 2, 5, or 8) × 2 (Task type: blink or control) within-participant design. The Chinese family names of the participants themselves, their best friends, and another randomly selected participant were used as the self-name, the friend-name, and the stranger-name, respectively (note that the family names for all participants and their best friends consisted of a single Chinese character, a common phenomenon among Chinese individuals). Before the experiment started, participants were asked to eliminate names from a list of twenty-four common Chinese first names if: (1) the name was the same as, or similar to, one of the three target names; or (2) someone close to them had that name. The remaining names on the list would then serve as distractors during the task.
The instruction presented was as follows: “You will see a rapid stream of single-character Chinese family names. One of them will be white, and all others will be black. In certain blocks, your task is to first report the white character and then judge whether a black target character appeared. In other blocks, you may ignore the white character and only respond to the black target character. Important: Each block has different instructions. Be sure to read the guidance shown at the beginning of each block carefully!” As visualized in Fig. 4A, each trial began with a central fixation cross presented for 1000 ms. This was followed by a rapid serial visual presentation (RSVP) sequence consisting of 15 first names (see Fig. 4B for an illustration), each appearing for 100 ms against a gray background (RGB: 127, 127, 127). Except for the first target (T1), which was displayed in white, all other names in the sequence were in black. Distractor names were randomly selected for each trial; T1 was drawn from these distractor names and was always positioned at the third, fourth, or fifth place in the sequence. The second target (T2), identified as either the self-name, the friend-name, or the stranger-name, was either omitted or appeared at one of four different intervals (lags) following T1: Lag 1, Lag 2, Lag 5, or Lag 826.
Fig. 4 (A) Trial procedure flowchart for the self-name attentional blink paradigm. (B) A detailed illustration of the RSVP sequence, with examples for the second target (T2) presented at Lag 2. It is important to note that the first target (T1) may appear at either the third, fourth, or fifth position in the RSVP sequence.
The experiment included two types of tasks: the blink task and the control task. In the blink task, following the RSVP stream, participants were asked to sequentially answer two questions: (1) “What was the white character?” (to be typed as a response); and (2) “Was [T2 name] present or not present?” (responded by clicking one of two buttons on the screen). The presentation of stimuli in the control task was identical to that in the blink task. However, participants were only required to respond to the second question.
The entire task consisted of six blocks (i.e., self-blink, friend-blink, stranger-blink, self-control, friend-control, and stranger-control). In each block, T2 was presented at each lag 12 times and was not presented in another 48 trials, resulting in 96 trials presented in a randomized sequence. The order of the six blocks was also randomized. Participants were given the opportunity to rest after every set of 32 trials.
Self-name visual search (NVS)
The task followed a 3 (Target identity: self, friend, or stranger) × 2 (Target presence: present or absent) within-participant design. Participants were required to search for Chinese first names of themselves, their best friends, and another randomly selected participant in three separate blocks. It should be noted that for each participant, the same first name was used as the stranger’s name in this task and the self-name attentional blink task. Before the experiment started, participants were asked to eliminate names from a list of twenty-four common Chinese first names, using the same exclusion criteria as in the self-name attentional blink task. The remaining names on the list would then serve as distractors.
The instruction presented was as follows: “You will search for the self-name, the friend-name, or the stranger-name in three separate blocks. In each block, please press the designated key as quickly and accurately as possible to indicate whether the target name was presented.” Specifically, participants responded by pressing either the “Z” or “M” key on the keyboard to indicate the presence or absence of the target name. The key-response mapping was counterbalanced across participants.
Each trial began with a fixation cross presented against a gray background (RGB: 127, 127, 127) for 500 ms (see Fig. 5), followed by an array of six distinct first names (each 1.32° × 1.32°) evenly arranged around the central point, visible for 2000 ms. The visual angle between the center of each name and the central point was approximately 5.3°. Distractor names shown were randomly selected from the distractors. Participants were required to press one of two corresponding keys to indicate whether the target name was presented in this trial, as quickly and accurately as possible. Following a keypress, or 2000 ms after the onset of the name array, the subsequent trial commenced immediately. The entire task comprised 288 trials, evenly distributed across six experimental conditions. The trial order was randomized within each block. Participants had the opportunity to rest after every set of 32 trials.
Cocktail party effect (CPE)
The task followed a single-factor (Target identity: self or stranger) within-participants design. To create a setting similar to a cocktail party, we simultaneously played two recordings of Chinese first names through the left and right channels of headphones41. The Chinese first names of the participants themselves and another randomly selected participant were used as the self-name and the stranger-name, respectively. The recordings of these first names, as well as those of another twenty common Chinese first names, were acquired via an AI-based voice synthesis platform (https://voice.ncdsnc.com/). The recordings differed only in pronunciation, while other acoustic properties like volume, tone, and timbre were kept consistent. Using Adobe Audition 2023, we processed each recording into monaural source stimuli. These stimuli, comprising versions for both the left and right channels, were trimmed to a uniform length of 400 ms. Before the task started, participants were instructed to remove any names from the list of thirty common Chinese first names if the pronunciation was identical or similar to either of the two target names or to the first name of someone they knew well. The remaining names on the list would then serve as distractors during the task.
The instruction presented was as follows: “In the following task, you will simultaneously hear two different Chinese first names—one in your left ear and one in your right ear. If either the [self-name] or the [stranger-name] is presented, please respond as quickly and accurately as possible by indicating the ear in which the target name appeared (press the “Z” key for left ear, and the “M” key for right ear). If neither of the two target names is presented, no response is required for that trial. Please note that both target names will never appear in the same trial.”
Each trial began with a central fixation cross presented against a gray background (RGB: 127, 127, 127) for 1000 ms (see Fig. 6). After that, recordings of two different first names were presented through the left and right channels of the headphones, respectively. Participants were required to press one of two corresponding keys to indicate the position (left or right) of the target name, as quickly and accurately as possible. Participants were informed that they did not need to press any key if the target name was not presented, and that each trial contained at most one target name. Upon a keypress, or 2000 ms after the onset of the recordings, the trial ended. Following an inter-trial interval with a blank screen for 1000 ms, the subsequent trial commenced immediately. The whole task was conducted in a single block consisting of 240 trials. The self-name and the stranger-name were each paired with a randomly selected distractor in 60 trials. In the remaining 120 trials, two randomly selected distractors were paired. The channel assignment was balanced so that each target name was presented through the left and right channels in 30 trials each. The trial order was randomized during the task. Participants had the opportunity to rest after every set of 60 trials.
Shape–label matching (SLM)
The task followed a 3 (Shape identity: self, friend, or familiar other) × 2 (Trial type: matching or nonmatching) within-participant design. The full names of the participant, their best friend, and “Lu Xun” (representing the familiar other), were used as labels corresponding to the self, friend, and familiar other, respectively9,25. The task consisted of two phases. In the learning phase, participants learned to associate three geometric shapes (circle, square, and triangle) with three named identities—specifically, the full names of the self, their best friend, and the familiar other.
The instruction presented was as follows: “You are now required to remember the following associations: the circle represents [self-name], the square represents [friend-name], and the triangle represents Lu Xun. In the upcoming task, each trial will present a shape–name pair on the screen. Based on the associations you just learned, please judge as quickly and accurately as possible whether the presented pair is a correct match or not.” Specifically, participants indicated whether each pair was matched or mismatched by pressing either the “Z” or “M” key on the keyboard. The key-response mapping was counterbalanced across participants. In addition, the associations between geometric shapes and identity labels (i.e., full names) were also counterbalanced to ensure experimental control across subjects.
During the testing phase, participants were presented with shape–name pairings and tasked with determining if the pairing matched, based on previously learned rules, as quickly and accurately as possible. As shown in Fig. 7, each trial began with the presentation of a fixation cross against a gray background (RGB: 127, 127, 127) for 500 ms. This was followed by the display of a shape–name pairing for 100 ms. The shapes (2.4° × 2.4°) and names (about 4.4° × 2.4°) appeared consistently above and below the cross, respectively. The midpoint of both the shape and the name was positioned 3.5° from the fixation cross. Subsequently, a blank screen appeared for 1100 ms, during which participants had the opportunity to respond by pressing one of two designated keys to signify whether the shape–name pairing matched or not. After a keypress, or once 1100 ms had elapsed from the onset of the blank screen, feedback indicating whether the response was correct, incorrect, or too slow was displayed for 500 ms. Following the disappearance of this feedback, the next trial began immediately.
Initially, the participants engaged in a practice block. Once they achieved six consecutive correct responses, they progressed to the formal experiment. The formal experiment consisted of 360 trials, equally divided among six experimental conditions. It should be noted that each non-matching pair combination was presented an equal number of times. For example, in the self-nonmatching condition, there were 30 trials each of “self-shape + friend-name” and “self-shape + familiar other-name”. The trial order was randomized during the task. Participants had the opportunity to rest after every set of 60 trials.
Self-enhancement (SE)
The measurement of self-enhancement was determined by comparing self-assessments with established external benchmarks20,42. Participants estimated their ranking (as integers), relative to their peers at Tsinghua University across eight characteristics: intelligence, cooperation, appearance, morality, sociability, health, honesty, and generosity. Specifically, the instruction presented was “Please estimate the approximate percentile rank of the following traits of yours within the Tsinghua University student population. (A lower number indicates a higher ranking)” In this ranking system, a score of “0” indicated the top position, while “100” denoted the bottom position.
Implicit association test of self-esteem (IAT)
The Implicit Association Test was utilized to assess participants’ implicit attitudes towards themselves21. In this task, participants had to sort Chinese words according to their meanings. The experiment involved two sets of word lists. The first list contained 12 words related to different identities, with half representing the concept of “self” and the other half representing “others.” The second list comprised 6 trait adjectives with a positive valence (sincere, reliable, intelligent, friendly, kind-hearted, and generous), and 6 trait adjectives with a negative valence (phony, deceitful, rude, cold, mean, and lazy). These trait adjectives were selected based on likability ratings from a previous study43.
The instruction presented was as follows: “In this task, you will be asked to categorize words based on the label(s) presented in the upper-left and upper-right corners of the screen. For each word, if it belongs to the category indicated by the label(s) on the upper-left corner, press the ‘Z’ key. If it belongs to the category on the upper-right corner, press the ‘M’ key. Please respond as quickly and accurately as possible. The table below displays all the words that may appear in the task, along with the category to which each belongs. Please take a moment to familiarize yourself with the word-category pairings before beginning the experiment.”
The task followed a five-block IAT design, recognized as the standard in current IAT methodology44. As illustrated in Table 2, blocks 1, 2, and 4, each comprising 20 trials, served as practice sessions, though this was not disclosed to the participants. Implicit attitudes were assessed by comparing performance in blocks 3 and 5, each containing 40 trials in which identity and valence categories were combined. In each block, category labels were consistently displayed in the upper left and right corners. Each trial started with a fixation cross against a gray background (RGB: 127, 127, 127) for 500 ms (see Fig. 8), followed by a word (pertaining to either identity or valence) from the two aforementioned lists, displayed at the center of the screen. Participants were asked to sort it into the corresponding category by pressing the left or right key, as quickly and accurately as possible. The word disappeared as soon as a key was pressed, followed by feedback (correct or incorrect) presented for 200 ms. After an inter-trial interval with a blank screen for 250 ms, the subsequent trial commenced immediately.
Each word was presented an equal number of times in each block, in a randomized order. Between each pair of blocks, an instruction screen was presented, detailing the nature of the forthcoming task modification. Participants were able to proceed to the next block at their own pace by pressing the space bar once they felt ready. The block order and key-response mapping were counterbalanced across participants (see Table 2).
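Beyond the block-wise RT and accuracy comparison reported in the Technical Validation, users may want a single implicit self-esteem index such as the D score from the improved scoring algorithm44. The sketch below is a simplified variant under placeholder column names ('block' coded 'congruent' for the block pairing self with positive words and 'incongruent' for the block pairing self with negative words, 'rt' in milliseconds); the published algorithm additionally replaces error latencies with penalized values and screens participants with many overly fast responses.

```python
import pandas as pd

def iat_d_score(df):
    """Simplified IAT D score: (mean incongruent RT - mean congruent RT) / pooled SD.

    Placeholder columns: 'block' ('congruent' = self + positive pairing,
    'incongruent' = self + negative pairing) and 'rt' in milliseconds.
    """
    df = df[df["rt"] <= 10000]                     # drop latencies above 10 s
    block_means = df.groupby("block")["rt"].mean()
    pooled_sd = df["rt"].std(ddof=1)               # SD across both combined blocks
    return (block_means["incongruent"] - block_means["congruent"]) / pooled_sd
```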
Endowment effect (EE)
We utilized the valuation paradigm to explore the endowment effect, wherein the willingness to pay (WTP) and the willingness to accept (WTA) were compared45. Consistent with a previous study29, we adapted this paradigm to suit a within-participants design. Our experimental materials comprised images of easily substitutable market goods, categorized into two sets. Each set contained a pen, a plate, a glass, and a doll, with each item differing in appearance from its corresponding item in the other set. The results of a pilot study (N = 50) indicated that each item pair had comparable perceived values, all ts < 0.34, ps > 0.73. The task followed a single-factor (Ownership: self-owned or experimenter-owned) within-participants design. In this setup, one set of goods was always designated as the self-owned items, while the other set was classified as experimenter-owned items.
Participants completed the task via an online questionnaire. On one page of this questionnaire, images of four self-owned items were displayed. For each item, participants were asked, “You own this item; how much would you be willing to sell it for?” On another page, images of four experimenter-owned items were shown. For each of these items, participants were asked, “The experimenter owns this item; how much would you be willing to buy it for?” The order of these two pages was randomized. Each response was limited to an integer value between ¥0 and ¥100.
Self-reported scales
Participants completed an online questionnaire using the WJX platform (www.wjx.cn), which included the following measurements.
Big Five personality was measured by the Big Five Inventory-246. This scale consists of 60 items belonging to five dimensions: extraversion, agreeableness, conscientiousness, neuroticism, and openness. Participants rated the extent to which they agreed with each statement on a five-point Likert scale, anchored by 1 (completely disagree) and 5 (completely agree). In the present study, the Cronbach’s alpha coefficients for these dimensions were 0.80, 0.81, 0.83, 0.87, and 0.88, respectively.
Self-construals were assessed using the scale developed by Singelis47. This scale comprises 30 items, categorized into two dimensions: independent self-construal and interdependent self-construal. Participants rated the extent to which they agreed with each statement on a seven-point Likert scale, anchored by 1 (very much disagree), and 7 (very much agree). In the present study, the Cronbach’s alpha coefficients for the two dimensions were 0.72 and 0.77, respectively.
Individualism-Collectivism was measured by the scale developed by Singelis et al.48. This scale, consisting of 32 items, measures four dimensions: horizontal individualism, vertical individualism, horizontal collectivism, and vertical collectivism. Participants rated the extent to which they agreed with each statement on a seven-point Likert scale, anchored by 1 (very much disagree), and 7 (very much agree). In the present study, the Cronbach’s alpha coefficients for these dimensions were 0.70, 0.76, 0.77, and 0.69, respectively.
Self-esteem was measured by the 10-item scale developed by Rosenberg49. Participants rated the extent to which they agreed with each statement on a four-point Likert scale, anchored by 1 (strongly disagree), and 4 (strongly agree). In the present study, the Cronbach’s alpha coefficient for this scale was 0.89.
Subjective well-being was assessed using a combination of the Positive and Negative Affect Scale (PANAS)50 and the Satisfaction with Life Scale (SWLS)51. The PANAS comprises 18 items, equally divided between the positive affect and negative affect dimensions. Participants rated the degree to which they experienced each affective state on a five-point Likert scale, anchored by 1 (not at all) and 5 (extremely). The SWLS contains 5 items. Here, participants evaluated their agreement with each statement on a seven-point Likert scale, anchored by 1 (completely disagree) and 7 (completely agree). In the present study, the Cronbach’s alpha coefficients for the positive affect, negative affect, and satisfaction with life dimensions were 0.91, 0.88, and 0.83, respectively.
The dark triad was measured by the short Dark Triad—Chinese version52. This scale comprises three subscales that assess Machiavellianism, narcissism, and psychopathy, each consisting of 9 items. Participants rated the extent to which they agreed with each statement on a five-point Likert scale, anchored by 1 (completely disagree) and 5 (completely agree). In the present study, the Cronbach’s alpha coefficients for the three subscales were 0.71, 0.75, and 0.64, respectively.
Modesty was measured by the Modest Responding Scale53. This scale consists of 21 items distributed across three dimensions: disinclination to boast, inclination to boast, and social undesirableness of boasting, containing 10, 5, and 6 items, respectively. Participants rated the extent to which they agreed with each statement on a seven-point Likert scale, anchored by 1 (very much disagree) and 7 (very much agree). In the present study, the Cronbach’s alpha coefficients for these dimensions were 0.94, 0.80, and 0.89, respectively.
Self-concept clarity was measured by the 12-item scale developed by Campbell et al.54. Participants rated the extent to which they agreed with each statement on a five-point Likert scale, anchored by 1 (very much disagree) and 5 (very much agree). In the present study, the Cronbach’s alpha coefficient for this scale was 0.82.
Data Records
The dataset of our study can be accessed at the Open Science Framework (https://doi.org/10.17605/OSF.IO/3H95F)55, and the stimuli we used in each task are shown in the supplementary material. Available under the CC BY 4.0 license, the dataset permits users to use and adapt the data for their purposes, with the requirement of providing proper credit.
Figure 9 provides a visual representation of the dataset’s structure. The main folder, named “Self_bias_dataset,” comprises 11 sub-folders. The first sub-folder, labeled “0.Self_reported_scales,” compiles participants’ biological sex, age, and self-reported responses for each item on each scale. These data are organized with items (variables) arranged column-wise and participants arranged row-wise. The rightmost columns of this file present the calculated average scores for each dimension.
The remaining 10 sub-folders each correspond to one of the self-bias paradigms listed in Table 1. For instance, the sub-folder for the self-reference effect includes both the trial-by-trial raw data of each participant and pre-processed data on the magnitude of self-biases. The file “Summary_SRE.xlsx” is organized into three sheets. The first sheet, “data_raw,” aggregates all the raw data from participants into a single sheet. The second sheet, “data_clean,” removes outliers based on the exclusion criteria detailed in the Technical Validation section. The third sheet offers comprehensive explanations and coding guidelines for each variable included in the first two sheets. Additionally, the file named “Preliminary_results_SRE.xlsx” compiles each participant’s performance in the task, including key metrics such as response time (RT), accuracy (ACC), and response efficiency (RE), the latter calculated as RT divided by ACC. Notably, the self-enhancement and endowment-effect paradigms were administered as questionnaires; as a result, the corresponding sub-folders contain only files with summarized self-reported data from participants.
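As an illustration of how these files might be read and summarized, the sketch below loads one paradigm's cleaned trial data and derives per-condition RT, ACC, and RE, plus one possible self-bias index. The column names ('subject', 'identity', 'rt', 'acc') and condition labels are placeholders; the actual coding is documented in each file's explanation sheet.

```python
import pandas as pd

# Placeholder column names; consult the explanation sheet of each Summary file
# for the actual variable coding.
raw = pd.read_excel("Summary_SRE.xlsx", sheet_name="data_clean")

per_condition = (raw.groupby(["subject", "identity"])
                    .agg(rt=("rt", "mean"), acc=("acc", "mean"))
                    .assign(re=lambda d: d["rt"] / d["acc"])   # response efficiency = RT / ACC
                    .reset_index())

# One possible self-bias index: response efficiency for friend minus self
wide = per_condition.pivot(index="subject", columns="identity", values="re")
self_bias = wide["friend"] - wide["self"]
```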
Technical Validation
In this section, we present the results of preliminary analyses for each experimental paradigm, both to assess whether our participants exhibited significant self-biases across these paradigms and to highlight the technical quality of our dataset. Specific results are reported below.
Table 3 provides an overview of the specific criteria used to exclude participants and trials from each experimental paradigm, as applied in the Technical Validation analyses. These exclusion procedures were implemented solely to assess the internal validity and robustness of each paradigm within this dataset. Researchers using the raw data are encouraged to apply their own exclusion and preprocessing criteria in accordance with their analytical goals and methodological standards.
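Several of the paradigm-specific analyses below trim trials with RTs faster than 100 ms or more than three standard deviations from the mean. A rough sketch of such trimming is given here; the column names ('subject', 'rt' in milliseconds) are placeholders, and the reference distribution (per participant here, rather than per participant and condition) is one of several defensible choices.

```python
import pandas as pd

def trim_rts(df, rt_col="rt", subj_col="subject", fast_cutoff=100, n_sd=3):
    """Drop trials with RTs below `fast_cutoff` (ms) or more than `n_sd`
    standard deviations from the participant's own mean RT."""
    grouped = df.groupby(subj_col)[rt_col]
    within_sd = (df[rt_col] - grouped.transform("mean")).abs() <= n_sd * grouped.transform("std")
    return df[(df[rt_col] >= fast_cutoff) & within_sd]
```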
Self-reference effect (SRE)
For this paradigm, no participants or trials were excluded from the data analysis. We conducted a one-way (Identity: self, friend, or familiar other) repeated-measures ANOVA on recognition accuracy (see Table 4). The results revealed a significant main effect of Identity, F(2, 266) = 134.83, p < 0.001, and ηp2 = 0.53.
Pairwise comparisons with Bonferroni correction indicated that participants performed best in the self-referent memory condition, both ts > 8.06, ps < 0.001, and Cohen’s ds > 0.69 (Bonferroni corrections were applied to all multiple comparisons, and corrected p values are reported throughout). Furthermore, participants showed better performance in the friend-referent memory condition than in the familiar other-referent memory condition, t(133) = 8.60, p < 0.001, and Cohen’s d = 0.74.
Mere ownership effect (MOE)
One participant was excluded as their recognition accuracy was nearly zero. We conducted a paired-samples t-test (Ownership: self-owned, or experimenter-owned) on the Recognition accuracy (see Table 5). The results revealed comparable recognition accuracy for both self-owned and experimenter-owned objects, t(132) = 0.40, p = 0.69. It is important to note that this aligns with the findings of previous research29, which suggested that the mere ownership effect might not be significant in Eastern cultural contexts.
Self-face visual search (FVS)
One participant was excluded due to their accuracy being more than three standard deviations below the group mean. Additionally, trials with response times (RTs) faster than 100 ms, and/or those beyond three standard deviations from the mean were removed, resulting in 0.4% of data being discarded. We then conducted 2 (Target identity: self or stranger) × 2 (Target presence: present or absent) repeated-measures ANOVAs on RT and ACC (see Fig. 10). The results revealed a significant main effect of Target identity on both measurements, both Fs > 53.89, ps < 0.001, and ηp2 > 0.29. There was also a significant main effect of Target presence on both measurements, both Fs > 41.30, ps < 0.001, and ηp2 > 0.23. The interaction between these two factors was significant on RT, F(1, 132) = 18.52, p < 0.001, and ηp2 = 0.12, but not significant on ACC, F(1, 132) = 1.53, p = 0.22.
To streamline the results, we conducted separate pairwise comparisons for target-present and target-absent trials. The findings showed that participants responded faster and more accurately when searching for their own face compared to a stranger’s face, regardless of whether the target was present or not, all ts > 5.53, ps < 0.001, and Cohen’s ds > 0.48.
Self-name attentional blink (AB)
For this paradigm, no participants or trials were excluded from the data analysis. Given that attentional blink effects are typically most pronounced at Lag 2 and least pronounced at Lag 826, we calculated the blink effect for each type of T2 (self, friend, or stranger) by subtracting the detection rate at Lag 2 from that at Lag 8. We then conducted separate one-way repeated-measures (T2 identity: self, friend, or stranger) ANOVAs for both the blink task and the control task. As shown in Fig. 11, the main effect of T2 identity was significant in the blink task, F(2, 266) = 70.50, p < 0.001, and ηp2 = 0.35, but it was not significant in the control task, F(2, 266) = 0.16, p = 0.85. Specifically, in the blink task, the results showed the smallest blink effect for the self-name, both ts > 10.15, ps < 0.001, and Cohen’s ds > 0.87. However, the blink effects for the friend-name and the stranger-name were comparable, t(133) = 0.52, p > 0.99. The results in the blink task indicated that the attentional blink was reduced when detecting one’s own name26,40. The absence of this pattern in the control task further suggested that this phenomenon could not be attributed to intrinsic differences among the three target names (T2).
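The blink-effect index used above (detection rate at Lag 8 minus Lag 2, per T2 identity and task) can be derived from the trial-level data roughly as follows; the column names are placeholders to be matched against the file's explanation sheet.

```python
import pandas as pd

def blink_effect(df, task="blink"):
    """Detection rate at Lag 8 minus Lag 2, per participant and T2 identity.

    Placeholder columns: 'task' ('blink'/'control'), 't2_present' (1 if T2 shown),
    't2_identity' ('self'/'friend'/'stranger'), 'lag' (1, 2, 5, 8), and
    'detected' (1 if T2 was reported as present).
    """
    present = df[(df["task"] == task) & (df["t2_present"] == 1)]
    rates = (present.groupby(["subject", "t2_identity", "lag"])["detected"]
                    .mean()
                    .unstack("lag"))
    return (rates[8] - rates[2]).rename("blink_effect")
```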
Self-name visual search (NVS)
No participant was excluded from the data analysis. However, trials with response times (RTs) faster than 100 ms, and/or those beyond three standard deviations from the mean were removed, leading to the discarding of 2.4% of the data. We then conducted 3 (Target identity: self, friend, or stranger) × 2 (Target presence: present or absent) repeated-measures ANOVAs on RT and ACC (see Fig. 12). The results revealed a significant main effect of Target identity on both RT and ACC, both Fs > 25.65, ps < 0.001, and ηp2 > 0.16. There was also a significant main effect of Target presence on both measurements, both Fs > 32.12, ps < 0.001, and ηp2 > 0.19. Additionally, the interaction between these two factors was significant for both RT and ACC, both Fs > 3.52, ps < 0.032, and ηp2 > 0.02.
Fig. 12 Performance in the self-name visual search paradigm. Note. (1) Means of RT and ACC for different experimental conditions are shown in panels A and B, respectively. (2) The error bars show the standard errors of the means. RT = response time; ACC = accuracy. *** denotes p < 0.001, and ns. denotes nonsignificant.
To streamline the results, separate one-way repeated-measures (Target identity: self, friend, or stranger) ANOVAs were conducted for both target-present and target-absent trials. For the target-present trials, the effect of Target identity was significant on both RT and ACC, both Fs > 21.84, ps < 0.001, and ηp2 > 0.14. Pairwise comparisons with Bonferroni correction indicated that participants responded faster and more accurately to their own name compared to both the friend-name and the stranger-name, all ts > 6.10, ps < 0.001, and Cohen’s ds > 0.52. However, the RT and ACC for the friend-name and the stranger-name were comparable, both ts < 0.78, ps > 0.99. For the target-absent trials, a significant effect of Target identity on both RT and ACC was also observed, both Fs > 8.73, ps < 0.001, and ηp2 > 0.06. Pairwise comparisons with Bonferroni correction indicated that participants responded faster and more accurately to their own name than to the stranger-name, both ts > 3.91, ps < 0.001, and Cohen’s ds > 0.33. Participants also responded faster to their own name than to the friend-name, t(133) = 6.97, p < 0.001, and Cohen’s d = 0.60, but the ACC was comparable, t(133) = 2.03, p = 0.13. Moreover, the RT and ACC for the friend-name and stranger-name were comparable, both ts < 2.28, ps > 0.07.
Cocktail party effect (CPE)
One participant was excluded due to their accuracy being more than three standard deviations below the group mean. Additionally, trials with response times (RTs) faster than 100 ms, and/or those beyond three standard deviations from the mean were removed, resulting in 1.8% of data being discarded. We then conducted paired-samples t-tests (Target identity: self or stranger) on the RT, and ACC. As shown in Fig. 13, the results revealed faster and more accurate responses to the self-name than to the stranger-name, both ts > 7.50, ps < 0.001, and Cohen’s ds > 0.65.
Shape–label matching (SLM)
Two participants were excluded due to their accuracy being more than three standard deviations below the group mean. Additionally, trials with response times (RTs) faster than 100 ms, and/or those beyond three standard deviations from the mean were removed, resulting in 0.2% of data being discarded. We then conducted 3 (Shape identity: self, friend, or familiar other) × 2 (Trial type: matching or nonmatching) repeated-measures ANOVAs on RT and ACC (see Fig. 14). The results revealed a significant main effect of Shape identity on both RT and ACC, both Fs > 85.42, ps < 0.001, and ηp2 > 0.39. There was also a significant main effect of Trial type on both measurements, both Fs > 3.89, ps < 0.05, and ηp2 > 0.03. Additionally, the interaction between these two factors was significant for both RT and ACC, both Fs > 40.95, ps < 0.001, and ηp2 > 0.23.
Fig. 14 Performance in the Shape–label matching paradigm. Note. (1) Means of RT and ACC for the different experimental conditions are shown in panels A and B, respectively. (2) The error bars show the standard errors of the means. RT = response time; ACC = accuracy. *** denotes p < 0.001, * denotes p < 0.05, and ns. denotes nonsignificant.
To streamline the results, we conducted separate one-way repeated-measures (Shape identity: self, friend, or familiar other) ANOVAs for matching and nonmatching trials. In the matching trials, the effect of Shape identity was significant on both RT and ACC, both Fs > 21.53, ps < 0.001, and ηp2 > 0.14. Pairwise comparisons with Bonferroni correction indicated that participants responded fastest and most accurately to pairings involving the self-shape, all ts > 10.71, ps < 0.001, and Cohen’s ds > 0.93. Participants also responded faster and more accurately to pairings involving the friend-shape than to those involving the familiar other-shape, both ts > 5.18, ps < 0.001, and Cohen’s ds > 0.45. In the nonmatching trials, a significant effect of Shape identity on both RT and ACC was also observed, both Fs > 4.61, ps < 0.011, and ηp2 > 0.03. Pairwise comparisons with Bonferroni correction indicated participants responded slowest, yet most accurately to pairings involving the self-shape, all ts > 2.85, ps < 0.04, and Cohen’s ds > 0.93. Moreover, the RT and ACC for pairings involving the friend-shape and familiar other-shape were comparable, both ts < 0.38, ps > 0.99. These results indicated that the self-prioritization effect in the shape-label matching task was significant only in the matching trials, aligning with the findings of previous research7,56,57.
Self-enhancement (SE)
For this paradigm, no participant was excluded from the data analysis. We calculated both individual trait percentile rankings and an overall percentile ranking for each participant. Table 6 summarizes the findings from one-sample t-tests, which compared these scores against a 50% benchmark. The results revealed a significant self-enhancement tendency in the overall percentile rankings of our participants, t(133) = 5.88, p < 0.001, and Cohen’s d = 0.51. Significant self-enhancement was also noted in the individual scores for morality, sociability, health, honesty, and generosity, all ts > 2.32, ps < 0.022, and Cohen’s ds > 0.20. However, the opposite pattern was observed for intelligence, t(133) = 2.92, p = 0.004, and Cohen’s d = 0.25. Scores for the remaining characteristics (cooperation, appearance, and generosity) did not significantly deviate from the 50% benchmark, all ts < 1.32, ps > 0.19.
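The benchmark comparison reported here amounts to a one-sample t-test of the percentile estimates against 50; a minimal sketch with toy numbers (not the actual data) follows. Because a lower number indicates a higher rank, a mean below 50 reflects self-enhancement.

```python
import numpy as np
from scipy import stats

# Toy percentile estimates for one trait (0 = top rank, 100 = bottom rank)
ranks = np.array([35, 42, 50, 28, 61, 45, 38, 47])

t, p = stats.ttest_1samp(ranks, popmean=50)
d = (ranks.mean() - 50) / ranks.std(ddof=1)       # Cohen's d for a one-sample test
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")   # mean below 50 -> self-enhancement
```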
Implicit association test of self-esteem (IAT)
Two participants were excluded due to their accuracy (in the two experimental blocks) being more than three standard deviations below the group mean. Additionally, trials with response times (RTs) faster than 100 ms, and/or those beyond three standard deviations from the mean, were removed, resulting in the exclusion of 3.60% of the data. Paired-samples t-tests were conducted on the RT and ACC data for the congruent and incongruent conditions (as shown in Fig. 15). Here, “congruent” denotes the experimental block in which “self” was paired with positive valence, while “incongruent” denotes the block in which “self” was paired with negative valence. The results indicated faster and more accurate responses in the congruent condition than in the incongruent condition, both ts > 4.48, ps < 0.001, and Cohen’s ds > 0.39.
Endowment effect (EE)
Data analysis excluded one participant whose responses were consistently “0” for all items. For each of the remaining participants, we calculated the total price for WTA (willingness to accept) by summing the prices they assigned to self-owned items. Similarly, we summed the prices assigned to experimenter-owned items to determine the total price for WTP (willingness to pay). As shown in Fig. 16, a paired-samples t-test revealed that WTA (107.84 ± 57.44 CNY) was higher than WTP (74.36 ± 43.28 CNY), t(133) = 8.04, p < 0.001, and Cohen’s d = 0.70.
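The WTA-WTP comparison is a paired-samples t-test over participants' summed prices; a minimal sketch with toy values (not the actual data) is shown below, assuming columns 'wta' (sum over self-owned items) and 'wtp' (sum over experimenter-owned items) in CNY.

```python
import pandas as pd
from scipy import stats

# Toy per-participant sums in CNY; replace with the summarized questionnaire data
ee = pd.DataFrame({"wta": [120, 90, 150, 60, 110], "wtp": [80, 70, 100, 55, 95]})

t, p = stats.ttest_rel(ee["wta"], ee["wtp"])      # paired-samples t-test
diff = ee["wta"] - ee["wtp"]
d = diff.mean() / diff.std(ddof=1)                # Cohen's d for paired differences
print(f"mean WTA - WTP = {diff.mean():.1f} CNY, t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```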
Self-reported scales
To evaluate whether the self-reported measures included in the dataset exhibited sufficient variability for meaningful analysis, we examined the descriptive statistics for each scale. Table 7 summarizes the means, standard deviations, and observed ranges for all self-report instruments. These results indicate that the scales captured a wide range of individual differences, supporting their utility for future exploratory or correlational analyses.
Usage Notes
1. While we have outlined the specific data exclusion criteria for each experimental paradigm in the Technical Validation section, researchers are free to pre-process the data using alternative methods.
2. We have not prescribed a specific method for calculating the magnitude of each self-bias, due to the absence of a unified standard in this area. Researchers are encouraged to investigate the relationships between self-biases derived from different paradigms using various methods.
3. Based on the methodology and findings of previous research25, we anticipated small-sized correlations when comparing self-bias across tasks. An a priori power analysis using G*Power indicated that a minimum sample size of 97 participants was needed (r = 0.25, α = 0.05, power = 0.80); an approximate recalculation is sketched after these notes. We collected additional participants (N = 134) to accommodate researchers who may choose different analytical approaches beyond correlation analysis.
4. It is important to note that the control conditions varied across paradigms in this dataset. Specifically, the comparison targets included a stranger, the experimenter, a friend, or another familiar person, depending on the original design of each paradigm. While this approach reflects the diversity of methodologies in self-bias research, it may limit the direct comparability of self-other distinctions across tasks. Researchers using this dataset are advised to take these variations into account when conducting cross-paradigm analyses. In particular, familiarity is a well-established modulator of self-related processing1,3,7,58, and the varying levels of familiarity across conditions should be considered when interpreting the results.
5. To compare the magnitudes of self-biases across different cognitive domains, researchers are encouraged to consult the works of Amodeo et al.25 and Nijhof et al.26. For those interested in applying computational modeling techniques to investigate self-biases, we recommend referring to the studies by Golubickis et al.33 and Liang et al.34. These references provide valuable insights and methodologies that can guide future research using the dataset.
Code availability
No custom code was used during the compilation of the dataset. We utilized Microsoft Excel for data storage and to calculate self-biases for each participant.
References
Cunningham, S. J. & Turk, D. J. Editorial: A review of self-processing biases in cognition. Q. J. Exp. Psychol. 70, 987–995 (2017).
Gal, D. Why the sun will not set on the endowment effect: The endowment effect after loss aversion. Curr. Opin. Psychol. 39, 12–15 (2021).
Golubickis, M. & Macrae, C. N. Self-prioritization reconsidered: Scrutinizing three claims. Perspect. Psychol. Sci. 18, 876–886 (2023).
Scheller, M. et al. Self-association enhances early attentional selection through automatic prioritization of socially salient signals. eLife 13, RP100932 (2024).
Kirk, N. W. & Cunningham, S. J. Listen to yourself! Prioritization of self-associated and own voice cues. Br. J. Psychol. 116, 131–148 (2025).
Macrae, C. N. et al. Self-relevance prioritizes access to visual awareness. J. Exp. Psychol.-Hum. Percept. Perform. 43, 438–443 (2017).
Sui, J., He, X. & Humphreys, G. W. Perceptual effects of social salience: Evidence from self-prioritization effects on perceptual matching. J. Exp. Psychol.-Hum. Percept. Perform. 38, 1105–1117 (2012).
Pauly, M. & Wentura, D. The “plus polar self”: A reinterpretation of the self-prioritization effect as a polarity correspondence effect. J. Exp. Psychol.-Gen. 154, 672–685 (2025).
Qi, Y. et al. The avatar-prioritization effect among online gamers: A perspective from self–avatar identity relevance. J. Appl. Res. Mem. Cogn. 13, 71–81 (2024).
Cunningham, S. J. et al. Yours or mine? Ownership and memory. Conscious. Cogn. 17, 312–318 (2008).
Rogers, T. B. et al. Self-reference and the encoding of personal information. J. Pers. Soc. Psychol. 35, 677–688 (1977).
Zhang, M. et al. My child and I: Self- and child-reference effects among parents with self-worth contingent on children’s performance. Memory 31, 1244–1257 (2023).
Sui, J. & Han, S. Research report: Self-construal priming modulates neural substrates of self-awareness. Psychol. Sci. 18, 861–866 (2007).
Tong, F. & Nakayama, K. Robust representations for faces: Evidence from visual search. J. Exp. Psychol.-Hum. Percept. Perform. 25, 1016–1035 (1999).
Liu, S. et al. An event‐related potential study of self‐positivity bias in native and foreign language contexts. Psychophysiology 60, e14145 (2023).
Shi, Y. et al. Disowning the self: The cultural value of modesty can attenuate self-positivity. Q. J. Exp. Psychol. 70, 1023–1032 (2017).
Zhu, M. et al. Lonely individuals do not show interpersonal self-positivity bias: Evidence from N400. Front. Psychol. 9, 473 (2018).
Brown, J. D. Evaluations of self and others: Self-enhancement biases in social judgments. Soc. Cogn. 4, 353–376 (1986).
Brown, J. D. The self. (New York: McGraw-Hill, 1998).
Kurman, J. Self-enhancement: Is it restricted to individualistic cultures? Pers. Soc. Psychol. Bull. 27, 1705–1716 (2001).
Greenwald, A. G. & Farnham, S. D. Using the Implicit Association Test to measure self-esteem and self-concept. J. Pers. Soc. Psychol. 79, 1022–1038 (2000).
Guenther, C. L. & Alicke, M. D. Deconstructing the better-than-average effect. J. Pers. Soc. Psychol. 99, 755–770 (2010).
Thaler, R. Toward a positive theory of consumer choice. J. Econ. Behav. Organ. 1, 39–60 (1980).
Tong, L. C. P. et al. Trading experience modulates anterior insula to reduce the endowment effect. Proc. Natl. Acad. Sci. USA. 113, 9238–9243 (2016).
Amodeo, L. et al. A comparison of self-bias measures across cognitive domains. BMC Psychol. 9, 1–15 (2021).
Nijhof, A. D. et al. No evidence for a common self-bias across cognitive domains. Cognition 197, 104186 (2020).
Leblond, M. et al. Self-reference effect on memory in healthy aging, mild cognitive impairment and Alzheimer’s disease: Influence of identity valence. Cortex 74, 177–190 (2016).
Sui, J. & Humphreys, G. W. Self-referential processing is distinct from semantic elaboration: Evidence from long-term memory effects in a patient with amnesia and semantic impairments. Neuropsychologia 51, 2663–2673 (2013).
Collard, P. et al. The relationship between endowment and ownership effects in memory across cultures. Conscious. Cogn. 78, 102865 (2020).
Orellana-Corrales, G. et al. The impact of newly self-associated pictorial and letter-based stimuli in attention holding. Atten. Percept. Psychophys. 83, 2729–2743 (2021).
Lind, S. E. et al. The self-reference effect on memory is not diminished in autism: Three studies of incidental and explicit self-referential recognition memory in autistic and neurotypical adults and adolescents. J. Abnorm. Psychol. 129, 224–236 (2020).
Nowicka, M. M. et al. The impact of self-esteem on the preferential processing of self-related information: Electrophysiological correlates of explicit self vs. other evaluation. PLoS ONE 13, e0200604 (2018).
Golubickis, M. et al. Parts of me: Identity-relevance moderates self-prioritization. Conscious. Cogn. 77, 102848 (2020).
Liang, Q. et al. The roles of the LpSTS and DLPFC in self-prioritization: A transcranial magnetic stimulation study. Hum. Brain Mapp. 43, 1381–1393 (2022).
Zhang, Y. et al. Neural basis of cultural influence on self-representation. Neuroimage 34, 1310–1316 (2007).
Cao, M. et al. Processing priority for avatar reference in online gamers: Evidence from behavioral and ERPs studies. Acta Psychol. Sin. 52, 345–356 (2021).
Conway, M. A. & Dewhurst, S. A. The self and recollective experience. Appl. Cogn. Psychol. 9, 1–19 (1995).
Brodeur, M. B. et al. Bank of Standardized Stimuli (BOSS) Phase II: 930 new normative photos. PLoS ONE 9, e106953 (2014).
Yang, T. et al. Tsinghua facial expression database—A database of facial expressions in Chinese young and older women and men: Development and validation. PLoS ONE 15, e0231304 (2020).
Shapiro, K. L. et al. Personal Names and the Attentional Blink: A Visual “Cocktail Party” Effect. J. Exp. Psychol.-Hum. Percept. Perform. 23, 504–514 (1997).
Nakane, T. et al. How the non-attending brain hears its owner’s name. Cereb. Cortex 26, 3889–3904 (2016).
John, O. P. & Robins, R. W. Accuracy and bias in self-perception: Individual differences in self-enhancement and the role of narcissism. J. Pers. Soc. Psychol. 66, 206–219 (1994).
Anderson, N. H. Likableness ratings of 555 personality-trait words. J. Pers. Soc. Psychol. 9, 272–279 (1968).
Turel, O. & Serenko, A. Cognitive biases and excessive use of social media: The facebook implicit associations test (FIAT). Addict. Behav. 105, 106328 (2020).
Morewedge, C. K. & Giblin, C. E. Explanations of the endowment effect: An integrative review. Trends Cogn. Sci. 19, 339–348 (2015).
Zhang, B. et al. The Big Five Inventory-2 in China: A comprehensive psychometric evaluation in four diverse samples. Assessment 29, 1262–1284 (2021).
Singelis, T. M. The measurement of independent and interdependent self-construals. Pers. Soc. Psychol. Bull. 20, 580–591 (1994).
Singelis, T. M. et al. Horizontal and Vertical Dimensions of Individualism and Collectivism: A Theoretical and Measurement Refinement. Cross-Cult. Res. 29, 240–275 (1995).
Rosenberg, M. Society and the adolescent self-image. (Princeton, NJ: Princeton University Press, 1965).
Qiu, L. et al. Revision of the Positive Affect and Negative Affect Scale. Chinese J. Appl. Psychol. 14, 249–254 (2008).
Diener, E. et al. The Satisfaction with Life Scale. J. Pers. Assess. 49, 71–75 (1985).
Zhang, J. et al. Development and evaluation of the short dark Triad—Chinese version (SD3-C). Curr. Psychol. 39, 1161–1171 (2020).
Whetstone, M.R. et al. The Modest Responding Scale. Paper presented at the convention of the American Psychological Society (San Diego, CA, 1992).
Campbell, J. D. et al. Self-concept clarity: Measurement, personality correlates, and cultural boundaries. J. Pers. Soc. Psychol. 70, 141–156 (1996).
Qi, Y. et al. A Comprehensive Dataset for Investigating the Structure of Self-Bias. Open Science Framework. https://doi.org/10.17605/OSF.IO/3H95F
Enock, F. E. et al. Overlap in processing advantages for minimal ingroups and the self. Sci Rep. 10, 18933 (2020).
Enock, F. et al. Self and team prioritisation effects in perceptual matching: Evidence for a shared representation. Acta Psychol. 182, 107–118 (2018).
Amodeo, L. et al. The Relevance of Familiarity in the Context of Self-Related Information Processing. Q. J. Exp. Psychol. 76, 2823–2836 (2023).
Harris, C. R. et al. Moray revisited: High-priority affective stimuli and visual search. Q. J. Exp. Psychol. Sect A-Hum. Exp. Psychol. 57a, 1–31 (2004).
Cherry, E. C. Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 25, 975–979 (1953).
Moray, N. Attention in dichotic listening: Affective cues and the influence of instructions. Q. J. Exp. Psychol. 11, 56–60 (1959).
Greenwald, A. G. & Banaji, M. R. Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychol. Rev. 102, 4–27 (1995).
Greenwald, A. G. et al. Measuring individual differences in implicit cognition: The implicit association test. J. Pers. Soc. Psychol. 74, 1464–1480 (1998).
Kahneman, D., Knetsch, J. L. & Thaler, R. H. Experimental tests of the endowment effect and the coase theorem. J. Polit. Econ. 98, 1325–1348 (1990).
Acknowledgements
This research was supported by the National Social Science Foundation of China (No. 22&ZD184). The authors thank Jun Fei Loo and Dennis Chong for their assistance with some aspects of data collection.
Author information
Contributions
Yuxuan Qi: Conceptualization, methodology, software, investigation, formal analysis, writing - original draft, writing - review & editing. Fengjie Zou: investigation, writing - original draft, writing - review & editing. Xi Ying Chau: investigation, writing - original draft. Michelle Zhou: investigation, writing - original draft. Fei Wang: Conceptualization, methodology, writing - review & editing, supervision, project administration, funding acquisition. Jie Sui: Conceptualization, supervision, project administration, writing - review & editing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Qi, Y., Zou, F., Chau, X.Y. et al. A Comprehensive Dataset for Investigating the Structure of Self-Bias. Sci Data 12, 1755 (2025). https://doi.org/10.1038/s41597-025-06035-z