Abstract
Learning a new language is a process everyone undergoes at least once. However, studying the neural mechanisms behind first-time language learning is a challenging task. Here we aim to explore the functional alterations following learning Israeli Sign Language, a visuo-spatial rather than an auditory-based language. Specifically, we investigate how phonological, lexical, and sentence-level components of the language system differ in their neural representations. In this within-participant design, hearing individuals naïve to sign languages (n = 79) performed an fMRI task requiring the processing of different linguistic components, before and after attending an Israeli Sign Language course. A learning-induced increase in activation was detected in various brain regions in task contrasts related to all sign language linguistic components. Activation patterns while processing different linguistic components post-learning were spatially distinct, suggesting a unique neural representation for each component. Moreover, post-learning activation maps successfully predicted learning retention six months later, associating neural and performance measures.
Introduction
Language learning is a complex process involving multiple stages and components and hence, multiple brain regions. Following language learning, widespread alterations in brain activity in response to stimuli in the newly-learned language have been demonstrated1,2, sometimes within learning periods as short as a few weeks or even days3,4,5. A particularly intriguing case of language learning is that of hearing individuals learning sign language, as it presents a unique opportunity to investigate learning-induced functional plasticity by relying on a different modality of communication compared to spoken languages, based on the visuo-spatial rather than the auditory domain. Therefore, studying the acquisition of sign language allows an examination of universal elements of language-learning regardless of modality, while detecting its modality-specific qualities. As with spoken languages, learning a sign language has been shown to induce vast modifications in brain function6,7,8.
To better understand the neural underpinnings of the language learning process and the learning-induced functional changes, in the current study we took into account the complexity of the language system. We did so by examining changes related to different components and processes in the novice-learners’ brains.
Research on sign language linguistics, first established by Stokoe9 and thoroughly addressed by Schlenker10, demonstrates that sign languages are fully developed linguistic systems with complex structures, comparable to those of spoken languages. Models of language processing generally acknowledge that the comprehension of language input starts with initial phonological processing11,12,13,14,15,16. The Dual Stream Model by Hickok and Poeppel14 describes a ventral stream for language comprehension in which sounds are mapped onto meaning. This pathway involves acoustic–phonetic processing of spoken language input, the retrieval of lexical representations, and the engagement of the computational system responsible for syntactic and morphological operations. Similarly, a detailed framework of language comprehension based on the work by Bibb et al.11 and Gvion and Friedmann13 (Fig. 1) suggests that language input is first processed by a phonological input buffer (PIB), which stores phonological information required for short-term processing17. Familiar words are then processed by the phonological input lexicon (PIL), which stores phonological information of familiar words15,18 and activates the relevant entry in the semantic lexicon, containing information regarding the meaning of known words19. The semantic lexicon then activates the word’s representation in the conceptual system19,20. Finally, syntax is required to comprehend the relations between components in the sentence and the roles of the different participants in the event denoted by the sentence structure (i.e., who did what to whom).
Based on Bibb et al.11 and Gvion and Friedmann13. The different shades of blue represent the three clusters of language-processing components addressed in the current study (initial phonological processing, lexical processing—phonological and semantic, including the conceptual system, and syntactic (sentence-level) processing).
Distinct brain regions have been attributed to the different components of language comprehension14,21. As in spoken languages, sign language comprehension in deaf individuals has been found to involve phonological, lexical and syntactic stages of processing22, engaging both modality-independent language-related fronto-temporal neural regions and modality-specific parieto-occipital neural regions16,23,24,25. Sign language learning in hearing individuals is therefore expected to involve distinct functional alterations in various brain regions when learners are exposed to stimuli associated with different linguistic components of the newly-learned language. Previous research on sign language learning in hearing participants has taken a more global approach, testing whole-brain functional changes in response to specific types of linguistic content (single signs8 or two-sign phrases6), or has only examined post-learning sign language processing7. However, the functional modifications associated with processing different linguistic components of sign language in hearing participants have not been directly explored.
In the current study we examine, in a large cohort (n = 79) of hearing participants with no prior sign language knowledge, how learning sign language changes their brains. We specifically investigate how different linguistic components are represented following learning and whether these neural representations are predictive of short- and long-term learning success. Participants underwent two functional magnetic resonance imaging (fMRI) scans, before and after completing a comprehensive 4-week, eight-lesson Israeli Sign Language (ISL) course. The course was taught four times, with four different groups of 17–22 participants, and covered ISL vocabulary, grammar, and syntax. Upon completion of the course, participants underwent an ISL-to-Hebrew sentence translation exam to validate that language learning had indeed occurred. A control group of 20 participants underwent the same two fMRI scans, with the same gap between them, but without language learning in between, to ensure that any observed neural alterations could be attributed to the learning intervention rather than to the passage of time, repeated exposure to the MRI scanner, or other confounds.
During the fMRI scans participants watched videos of signed stimuli of different types, the comparison of which allowed us to isolate the processing of different linguistic components of ISL, according to the language comprehension framework described in Fig. 1. To detect brain regions associated with initial phonological processing we compared activation while watching unlearned signs (akin to pseudosigns, processed in the PIB) and non-linguistic videos of hand movements. For lexical (phonological and semantic) processing, we compared learned signs (involving the PIB, PIL, semantic lexicon and conceptual system) and unlearned signs. For syntactic (sentence-level) processing we compared full sentences (requiring all linguistic components in the language comprehension framework described above) and learned signs. Notably, sentence processing comprises analysis of syntactic structure and relations, as well as combining the meaning of the words/signs into a semantic representation, morphological processing, and processing of prosodic cues26,27,28.
To examine the overlap between the spatial patterns associated with the processing of different linguistic components of sign language, we employed both univariate and multivariate analytical approaches, leveraging their complementary characteristics to provide a robust and comprehensive perspective. While univariate analyses allow for the identification of localized brain regions, the multivariate analysis captures distributed neural representations that may not be detectable with traditional voxel-wise methods. Furthermore, we used a within-participant design and a unique cohort of naïve, highly-motivated participants, to investigate the functional changes occurring in the learning process. This combination of methodological and participant-level factors offers insights into the functional processes associated with language acquisition.
We tested the hypothesis that sign language learning would induce functional alterations in various brain regions detectable while processing different ISL linguistic components and their combinations. In line with previous studies of sign language learning in hearing participants6,7,8, we expected ISL learning to engage both modality-independent fronto-temporal brain regions, traditionally associated with language processing, and modality-specific parieto-occipital brain regions, corresponding with the visuo-spatial nature of sign language. Modality-independent regions were expected to include Broca’s area in the left inferior frontal gyrus (IFG) as well as posterior temporal regions29,30,31,32,33,34,35, and modality-specific regions were expected to consist of the superior parietal lobule (SPL)36,37 and lateral occipital cortex6. Given the inherently different phonology of sign languages compared to spoken languages, we expected to detect modality-specific regions mainly, but not exclusively, in the context of initial phonological processing. Importantly, the involvement of the different stages of language comprehension was expected to engage spatially distinct functional patterns in novice hearing learners of sign language, manifested as the engagement of different brain regions in the processing of different linguistic components of sign language. Consequently, we hypothesized that these linguistic components and their combinations would vary in their ability to predict behavioral learning outcomes.
Results
Behavioral results
All 79 participants completed a sign language test immediately following the course. Results indicated significant learning success, with a mean test score of 94.96 (out of 100, SD = 5.85, one-tailed Wilcoxon signed-rank test, z = 7.73, p = 5.26e−15). Fifty-eight of these participants repeated the test 6 months later, and still showed significant retention of ISL knowledge, with a mean score of 78.2 (SD = 20.12; z = 6.62, p = 1.78e−11). As expected, the 6 months without training resulted in a significant decay in the participants’ ISL proficiency (two-tailed Wilcoxon signed-rank test, z = 5.82, p = 6.01e−9).
Functional neuroplasticity following sign language learning
To detect enhanced neural engagement in the processing of all linguistic components and their combinations following learning, we examined the increase in activity in the following task contrasts: sentences > non-linguistic hand movements, sentences > learned signs, learned > unlearned signs, unlearned signs > non-linguistic, learned signs > non-linguistic and unlearned signs > learned signs. A significant increase in brain activity (p < 0.05, FDR corrected for 91,282 comparisons, the number of vertices) was detected for all task contrasts, each designed to elicit activity related to different linguistic components and their combinations (e.g., the learned signs > non-linguistic contrast involved both phonological and lexical processing; for the average activation maps pre- and post-learning, see Supplementary Fig. S1). For the sentences > non-linguistic contrast (Fig. 2A), involving the processing of all the linguistic components discussed above (initial phonological, lexical—phonological and semantic, and syntactic (sentence-level)), a significant increase in activity was found in various brain regions associated with language and visual processing and with motor skills (Table 1). The same areas showed increased activity for the sentences > learned signs contrast (Fig. 2B and Table 1) as well. In the learned > unlearned signs contrast (Fig. 2C and Table 1), involving lexical (phonological and semantic) processing, increased activity was detected in the left angular gyrus and in various bilateral regions (note that although the semantic lexicon and the conceptual system are clearly distinct, in the current study we assessed them together). Increased activation in the unlearned signs > non-linguistic contrast (Fig. 2D and Table 1), associated with initial phonological processing, was found in bilateral parietal and motor areas, and in additional regions in the left hemisphere. As the learned signs > non-linguistic contrast (Fig. 2E and Table 1) corresponds with initial and lexical phonological processing, as well as lexical-semantic processing, increased activity was demonstrated in regions similar to the combined areas found for the learned > unlearned signs and the unlearned signs > non-linguistic contrasts. For the regions showing increased activity in the unlearned signs > learned signs contrast, see Fig. 2F and Table 1. No significant increase in activation was observed for any of the task contrasts in a control group (N = 19) of participants who underwent the two scans but did not attend an ISL course in between. A vertex-wise 2 (pre-/post-learning) × 2 (learning/control) mixed-effects analysis of variance (ANOVA) performed on the sentences > non-linguistic activation maps of the control group and each ISL-course round (N = 17–22 per round) revealed a significant interaction effect, indicating alterations in activity in language-related brain regions attributable to language learning (Fig. 3 and Table 2). For the results of a similar ANOVA performed on all other contrasts, averaged across ISL-course rounds, see Supplementary Fig. S2.
Increased activations post- vs. pre-sign language learning (p < 0.05, FDR corrected for 91,282 comparisons, the number of vertices, N = 79) are presented for the following task-contrasts: A sentences > non-linguistic, B sentences > learned signs, C learned > unlearned signs, D unlearned signs > non-linguistic, E learned signs > non-linguistic, and F unlearned > learned signs. Sub-cortical regions are displayed in a volume representation. Colorbars indicate Z-scores.
A Average activation maps in the contrast sentences > non-linguistic before (left) and after (right) the sign language course, for each of the four rounds of participants including the control group (N = 17–20 per round). Colorbars indicate Z-scores. B Mixed-effects ANOVA group (learning/control participants) × time (pre-/post-learning) interaction effect of the activation in the sentences > non-linguistic contrast (p < 0.05, FDR corrected for 91,282 vertices), performed on the control group with the learning group of each course round. Colorbar indicates F-scores.
Furthermore, following learning, decreased activity in the sentences > non-linguistic and sentences > learned signs contrasts was found in regions associated with the Default Mode Network (DMN)38, mainly in the right hemisphere, and bilaterally in the unlearned signs > non-linguistic contrast (Fig. S7 and Table S3). For the regions showing decreased activity in the learned signs > unlearned signs, unlearned signs > non-linguistic and learned signs > non-linguistic contrasts, see Fig. S7 (C, D and E, respectively) and Table S3.
Representational similarity analysis of different linguistic components following learning
Group-level representational similarity analysis (RSA, Fig. 4A)39 yielded three maps of brain regions associated with sentence-level, lexical, and initial phonological processing following sign language learning (Fig. 4B), thus characterizing the neural representations of the three linguistic components. Pearson’s correlations were computed between a neural representational dissimilarity matrix (RDM), calculated per vertex based on its BOLD signal in the different task conditions, and three conceptual (categorical) dissimilarity matrices representing sentence, lexical, and initial phonological processing (Fig. 4A). Brain regions in which neural RDMs positively correlated with the sentence-level processing dissimilarity matrix included language-related regions in the left hemisphere, such as Broca’s area and area 55b, the left fusiform gyrus, and bilateral temporal and motor regions. Brain areas associated with the phonological input lexicon and the semantic lexicon (and the conceptual system) included the left angular and fusiform gyri as well as prefrontal, temporal and motor regions. Regions associated with initial phonological processing (PIB) spanned the frontal, parietal and occipital lobes, as well as the bilateral fusiform gyrus and the left sensory-motor strip. For a full list of the neural regions associated with each linguistic component, see Table 3.
A Schematic demonstration of the representational similarity searchlight analysis procedure, in which the neural dissimilarity matrix is correlated with the categorical dissimilarity matrices representing sentence (left), lexical (middle), and initial phonological (right) processing. B RSA results (N = 79) showing brain regions in which activation patterns were significantly correlated with the categorical dissimilarity matrices of sentence-level (left), lexical (middle) and initial phonological processing–phonological input buffer (right). All p values < 0.001, Monte Carlo permutation test, threshold-free cluster enhancement correction for multiple comparisons. Colorbar indicates Z-scores. C Dice coefficients between maps associated with the linguistic components.
Dice coefficients between the RSA-generated maps ranged from 0.012 to 0.155 (see Fig. 4C). These findings indicate minimal overlap between the neural regions engaged in the processing of the three linguistic components.
Group-averaged task-contrast brain activation maps following learning are presented in Fig. 5A (unthresholded) and 5B (thresholded using a Gaussian-two-Gammas mixture model40). An RDM based on these maps yielded significant 1 − Pearson’s r dissimilarity scores between the learned > unlearned signs activation map and both other activation maps (as established by a permutation test with 10,000 iterations, p < 0.0001, Fig. 5C). A dissimilarity score of 1 − Pearson’s r = 0.53 was found between the sentences > learned signs and unlearned signs > non-linguistic activation maps (Fig. 5C). Note that dissimilarity scores may range between 0 (corresponding with Pearson’s r = 1) and 2 (Pearson’s r = −1). Dice coefficients between the thresholded maps ranged from 0.04 to 0.37 (Fig. 5D). The RDM and Dice coefficients computed at the individual level and averaged across participants are presented in Supplementary Fig. S3.
A Averaged activation maps (N = 79) in the sentences > learned signs (top), learned signs > unlearned signs (middle) and unlearned signs > non-linguistic (bottom) task-contrasts, following sign language learning. Colorbars indicate Z-scores. B Thresholded maps corresponding to the same task-contrasts. C RDM depicting the relative dissimilarity between the averaged post-learning activation maps (***p < 0.0001). D Dice coefficients between the binarized significant-activation post-learning contrast maps.
Prediction of learning success from task-induced activation patterns
We examined whether sign language test scores, obtained immediately after course completion and 6 months later, could be predicted from brain activity and from the change in activation in the various task contrasts. The prediction was tested for each of the 58 participants who took the test at both timepoints, for the following task-contrast activation maps: sentences > non-linguistic, sentences > learned signs, learned > unlearned signs, learned > non-linguistic, and unlearned signs > non-linguistic. Predictions were performed based on both post-learning activation maps and delta activation maps (calculated as the difference between activation post-learning and activation pre-learning). Test scores obtained 6 months post-learning were successfully predicted from post-learning maps (FDR corrected for 10 comparisons) from the sentences > learned signs contrast (Pearson’s r = 0.45, p = 0.0004, Fig. 6A), from the sentences > non-linguistic contrast (Pearson’s r = 0.35, p = 0.0071, Fig. 6B), and from the learned signs > non-linguistic contrast (Pearson’s r = 0.36, p = 0.006, Fig. 6C). Prediction of test scores obtained 6 months post-learning was not successful for the learned > unlearned signs (Pearson’s r = −0.03, p = 0.82) and unlearned signs > non-linguistic (Pearson’s r = 0.11, p = 0.41) post-learning contrasts. Prediction of test scores obtained immediately following learning was not significant for any of the five post-learning contrasts (Pearson’s r ranged from −0.017 to 0.22, see Supplementary Fig. S4), possibly due to a ceiling effect in the test scores. Prediction of test scores obtained 6 months post-learning was also successful (FDR corrected for 10 comparisons) when based on the sentences > learned signs delta map (Pearson’s r = 0.41, p = 0.001, Fig. 6D), but not when based on any of the other delta contrasts (sentences > non-linguistic: Pearson’s r = 0.19, p = 0.16; learned > non-linguistic: Pearson’s r = 0.2, p = 0.13; learned > unlearned signs: Pearson’s r = 0.13, p = 0.35; unlearned signs > non-linguistic: Pearson’s r = −0.07, p = 0.63). Prediction of test scores obtained immediately following learning was not significant for any of the five delta contrasts (Pearson’s r ranged from 0.004 to 0.28, see Supplementary Fig. S4).
Test scores were predicted from the A–C post-learning activation maps and from D delta (post- vs. pre-learning) activation maps, in the task-contrasts A, D sentences > learned-signs; B sentences > non-linguistic; and C learned-signs > non-linguistic (**p < 0.01, ***p < 0.001). Scatter plots show prediction success. Each point represents a single participant’s (N = 58) actual and predicted test scores. Pearson’s r between actual and predicted scores is reported at the top-left corner of each plot. Contribution maps depict the relative contribution of each vertex to test scores prediction. Colorbar indicates the direction and magnitude of prediction value (“contribution power” in arbitrary units).
These findings suggest that sentence-level processing alone (more accurately, neural activity related to the learning of ISL sentences) is a strong predictor of language learning and retention. An additional predictor of long-term learning success is activity related to the combination of initial phonological processing with lexical-phonological and semantic-conceptual knowledge.
We generated maps quantifying each vertex’s contribution to the three successful predictions of test scores obtained 6 months post-learning. For predictions based on post-learning maps, the contribution maps of the sentences > learned signs and sentences > non-linguistic predictions indicated that areas in the left superior temporal sulcus (STS), angular gyrus, supramarginal gyrus (SMG), ventromedial prefrontal cortex (vmPFC), precuneus, Broca’s area and area 55b had a positive weight in the prediction models (Fig. 6A, B). However, the contribution map of the sentences > non-linguistic task contrast did not include the bilateral inferior parietal sulcus (IPS) and the right supplementary motor area (SMA). The contribution map for the prediction from the learned signs > non-linguistic contrast maps showed mainly regions in the bilateral vmPFC and angular gyrus (Fig. 6C). The contribution map of the prediction based on the sentences > learned signs delta map included Broca’s area and the left STS, middle temporal gyrus (MTG), precuneus, angular gyrus and pre-motor regions, as well as the bilateral SPL.
Discussion
This study presents a comprehensive investigation of the alterations in neural activity following sign language learning in hearing adults, isolating neural activity related to different ISL linguistic components. We show that learning a novel language induces widespread modifications in brain activity patterns in response to sentence-level, lexical, and phonological aspects of the newly-learned language. Learning-induced changes demonstrated the recruitment of distinct brain regions related to the processing of different linguistic components, offering a neural perspective on the language comprehension model and supporting the view of a complex network of brain regions engaged in language input processing. Figure 7 presents the language comprehension model11,13, with the brain regions that were found in the current study to be associated with the processing of each component. Furthermore, we show that activation patterns associated with sentence processing following sign language learning are a strong predictor of long-term learning success.
Language learning is a widely used framework for the study of neuroplasticity41,42. Our findings are in line with previous literature on functional modifications following learning of both spoken4 and sign languages6,7,8, demonstrating altered neural activation upon exposure to the newly-learned language. We go beyond previous studies by breaking down the newly-learned language into discrete components and associating each of them with spatially independent activation patterns. Specifically, a representational similarity analysis of the different task conditions following sign language learning provides insights into the neural underpinnings of initial phonological processing, lexical (phonological input and semantic) and conceptual processing, and sentence processing, as discussed in the following sections.
Brain regions engaged in initial phonological processing (in the PIB) were expected to be activated, post-learning, in response to all three types of linguistic stimuli: sentences, learned signs, and unlearned signs, but not in response to the non-linguistic stimuli. The task contrast unlearned signs > non-linguistic therefore revealed activation patterns related to initial phonological processing in sign language, in novice signers.
The current work revealed PIB-related activation in multiple regions such as the left precentral gyrus and occipital cortex, as well as the bilateral fusiform gyrus. In the processing of spoken language, initial phonological processing often takes place in superior temporal auditory regions14,43. In line with our results, Emmorey16 associates the processing of sign language phonology in deaf signers with different regions of the human brain, including the occipital cortex and left fusiform gyrus (the latter, in the comprehension of sublexical facial components of signs). Activity in the precentral gyrus while processing sign language content in hearing participants following a sign language course has also been reported in a previous study by Johnson et al.7. However, as they detected precentral gyrus activation in two different task contrasts of American Sign Language (ASL) comprehension (sentences > word-list and word-list > rest), the attribution of this activation to a specific linguistic component remains uncertain. We also found phonological processing-related activations in visuo-spatial parietal areas including the SPL and IPS. Our results are in line with previous findings by Buchsbaum et al.44, in which phonological working memory tasks in deaf native signers recruited more dorsal-parietal regions compared to spoken language users, in whom activation is more ventral-parietal and temporal. SPL activation has additionally been detected in hearing individuals performing a semantic judgment task following sign language learning6. However, due to the nature of the task in that study, in which participants were required to process two-sign phrases, this activation cannot be attributed to any specific linguistic component.
Initial phonological processing of sign language input is followed by lexical processing16. Accordingly, once processed by the PIB, familiar words are sent to the PIL, which then activates the relevant entry in the semantic lexicon and from there—the conceptual system11,13. Previous work by Mayberry and Fischer45 has shown that late sign language learners—although deaf, unlike the hearing participants in the current work—seem to allocate more resources to phonological processing compared to native signers, who more easily access lexical components of sign language. These findings support the hierarchical framework suggested for language processing, which allows native signers to process lexical structure more automatically and hence prioritize the activation of semantic and conceptual representations. In contrast, non-native signers expend more cognitive resources on phonological identification, leaving fewer resources available for lexical and semantic processing, which negatively impacts comprehension.
Brain activity in response to learned but not to unlearned signs was expected to reveal brain regions associated with the PIL, with the semantic lexicon, and with the conceptual system, hereafter referred to as “lexical processing”. The group-averaged map of the learned > unlearned signs task contrast yielded activity mainly in the left angular gyrus, inferior parietal cortex, and precuneus, and in the bilateral MTG and inferior temporal gyrus (ITG). These areas have been reported in studies of both spoken language46,47 and sign languages16,23. The angular gyrus, precuneus, MTG, and ITG have all been related to semantic processing in spoken languages46,47,48. The MTG and ITG have been implicated in sign language processing by native signers25,49, have shown increased activity in new learners following a sign language course6, and have been shown to support semantic processing across modalities (speech and sign) in bimodal bilinguals50. These regions are sometimes associated with different levels of semantic processing. The MTG in particular has been associated with lexical-semantic processes46 and conceptually-driven lexical selection51, making it a possible substrate for the semantic lexicon. This overlap in activated areas suggests a high degree of similarity in the mechanisms underlying lexical processing across language modalities.
Finally, sentence-level processing was examined by contrasting brain activity in response to sentences vs. learned signs. The comprehension of full sentences consists of several cognitive elements, including syntactic, morphological and prosodic processing26,27,28. In previous work regarding hemispheric laterality in sign language processing, Newman et al.52 showed left-hemispheric activation in regions classically associated with language processing when examining the responses to ASL sentences as compared to nonsense signs in late hearing signers. These regions included Broca’s area and left temporal and inferior parietal regions, similar to the regions reported for sentence-level processing in the current work. Additional research by Johnson et al.7 revealed increased activation in language-related fronto-temporal regions in ASL sentence processing compared to ASL word-list processing in late signers. MacSweeney et al.49 additionally found greater activation in posterior regions of the left temporal lobe and left inferior frontal cortex when contrasting the processing of British Sign Language (BSL) sentences with signed lists, in deaf and hearing native signers. The detection of Broca’s area in this context is additionally in line with previous literature describing it as modality-independent, associated with both sign and spoken languages53. Furthermore, the activations associated here with sentence-level processing are in line with regions previously linked to syntactic processing, such as the left perisylvian regions54, the posterior STS (pSTS)46, Broca’s area55, the left MTG56 and area 55b57. This suggests that at the sentence level as well, similar areas are involved in language processing independently of language modality.
Our findings map the learning-evoked recruitment of brain areas associated with the different stages of language comprehension. We show that brain activity covers distinct regions as language input progresses from initial phonological to lexical, and eventually to sentence processing.
While we observed a striking similarity between areas involved in sign language processing following learning and those reported in studies of spoken-language learning, modality-specific activations were also detected. For example, we found initial phonological activations within the parietal lobe, in the SPL and IPS, rather than in the perisylvian regions usually found for speakers of spoken languages58. Similar findings of SPL involvement in sign language processing following a sign language course were also reported by Banaszkiewicz et al.6,36 and Johnson et al.7. The SPL has additionally been found to have a central role in sign language production37,59. This activity in parietal regions is most likely due to sign language’s reliance on visuo-spatial rather than auditory input36,59,60, and may suggest that some of the regions related to initial phonological processing are modality-dependent, at least in late learning of a sign language by people who had acquired a spoken language as a first language. Similarly, activity in perisylvian regions was limited in the current study, further suggesting that areas related to phonological processing are more dependent on language modality. One explanation of these modality-dependent activations may relate to the fact that perisylvian regions lie in proximity to primary auditory processing regions and may be designated to the phonological processing of spoken languages, especially in individuals who acquired a spoken language as a first language. In the processing of sign language, these regions may be substituted by regions in closer proximity to spatial or sensory-motor areas of the hands and arms.
In addition, in the current study the processing of sentence, lexical and initial phonological components of sign language involved the left fusiform gyrus. This may be attributed to the location of the Visual Word Form Area (VWFA), which is traditionally associated with visual aspects of language processing in the form of reading60,61. While the present study’s participants did not perform a reading task, they were in fact processing language in the visual rather than the auditory modality. This finding is in line with previous research associating the VWFA with sign language processing23,62, supporting the notion that the VWFA plays a role in the processing of language through visual input in a broader sense, and not merely in reading. The reported results may, however, be related to the association of the left fusiform gyrus with semantic processing of linguistic input63, with its involvement in phonological processing64, or with its previously reported role as an integral part of the language system65.
Differences in neural activity between the processing of sign and spoken languages are also related to the involvement of the right hemisphere (i.e., whether sign language recruits right-hemisphere structures more strongly than spoken language53,66). There are reports of greater right-hemispheric involvement in sign language processing52,67,68, and the extent to which sign language processing relies on right-hemispheric structures is still debated25,69,70. Analysis of language hemispheric dominance in the current work revealed that the vast majority of participants showed left-hemispheric dominance for sign language. Even though we did not exclude participants based on handedness, we still found a very low proportion of right-hemispheric dominance (79 out of 83 participants showed left-hemispheric dominance), similar to the proportions reported in spoken language studies71,72. These results are also consistent with the work by Banaszkiewicz et al.6, who showed (for right-handed participants) that sign language processing became more left-lateralized in hearing participants following 3 months of sign language instruction. Furthermore, Newman et al.52 have demonstrated a critical period for the recruitment of right-hemispheric regions in the processing of sign language, reporting substantial right-hemispheric activity during ASL processing in hearing native signers but not in those who acquired ASL after puberty. This suggests that sign language is generally processed in left-hemisphere structures, just like spoken language, for both right- and left-handed participants, at least in adult late-learners of sign language who are native in a spoken language.
Notably, ISL has a syntax separate from that of Hebrew, including properties of other studied sign languages—such as right periphery of the syntactic tree (e.g., elements of the higher nodes of the sentence, such as wh-elements, or pronouns in yes/no questions, may appear sentence-finally73), use of facial expressions as syntactic markers (e.g., eyebrow raise in yes/no questions, and eyebrow furrow in content questions), and the frequent use of unique syntactic structures such as pseudo-clefts and role-shifts74,75,76. Some examples of differences between the syntax of the two languages can be found in interrogatives and negation76: whereas in Hebrew the wh-element appears at the beginning of the sentence, both in subject and in object questions (similarly to English), in ISL the wh-element in object question structures remains in-situ (i.e., in the original position of the object in the sentence) and appears in a sentence-final position (you eat what?—“what did you eat”), and the question is produced with a facial expression that acts as a syntactic marker. In addition, and unlike Hebrew, negative markers in ISL generally follow the verb rather than precede it (i know not—“I don’t know”). Most of the sentences used in this study included unique ISL structures, as can be seen in Supplementary Table S1. Learning participants were taught all the abovementioned structures that differ between ISL and Hebrew as part of the sign language course.
While this was not part of our a priori hypotheses, several brain regions showed decreased activity following learning. These areas include regions of the DMN, which are considered task-negative38,77, but also regions that may be associated with language processing, such as the angular gyrus and the SMG. These findings may result from familiarity with the fMRI videos in the second scan, or from neural processing optimization as proficiency increased78. Additional studies are required to better understand the relationship between increased and decreased activations following learning, but this is beyond the scope of the current work.
The test-retest reliability of fMRI is under constant examination. The current study included four different groups of learning participants, as well as a control group who did not undergo a language learning intervention. In order to associate our findings with the learning procedure, we performed a mixed-effects ANOVA on the cohort of control participants together with each cohort of learning participants. The ANOVA revealed a significant group (learning/control) × time (pre-/post-learning) interaction effect, indicating increased activity following learning in each of the learning groups compared to the control group. Furthermore, a group-level mass-univariate analysis aimed at identifying regions of increased activity within the control group yielded no significant results. The fact that effects of learning were detected in each of the ISL-course rounds, but not in the control cohort, may suggest that the observed differences resulted from the learning intervention and not from the mere passage of 4 weeks between the two scans.
Notably, in the current work a 1-s inter-stimulus interval was used in the fMRI task. While previous studies79,80 have recommended greater temporal separation between stimuli to properly detect stimulus-related neural activation, here we used condition blocks as long as 15 s, providing sufficient time to capture the hemodynamic response in each condition.
Brain activity in response to sign language stimuli immediately after learning predicted behavioral measures of learning success 6 months post-learning. This finding is consistent with previous evidence linking brain activation following language learning to performance measures1. While, to the best of our knowledge, this association has so far been demonstrated only between brain activation and behavioral performance measured at the same time, here we provide first evidence for the predictability of long-term learning retention from shorter-term neural responses to the newly learned skill.
We found three activation patterns that were significant predictors of learning retention. The strongest was the activation pattern of the sentences > learned signs contrast, which we interpret as the activation related to the learning of sentence-level structures (such as syntax, prosody and morphology). Additional predictors were the activation patterns related to the sentences > non-linguistic contrast, reflecting all aspects of language, and the learned signs > non-linguistic contrast, reflecting initial-phonological and lexical processing. This suggests that learning the sentence-level components of a new language may be essential for long-term proficiency in it. This conclusion is also supported by previous work demonstrating an association between performance in a learned language and activity in Broca’s area and in the posterior superior temporal gyrus (pSTG), both engaged in syntax processing81.
Our findings may indicate that the manner in which we encode a newly learned language dictates its long-term retention, possibly pointing to a more profound level of learning as opposed to simple memorization. Specifically, learning sentence-level rules, beyond learning only isolated lexical items, improves the learning and retention of a new language.
To conclude, in the current work we demonstrate how phonological, lexical, and sentence-level components of sign language differ in their neural representations in hearing novice signers. Sign language learning drives changes in neural activity in response to different linguistic components of sign language, highlighting its functional impact on the brain. Furthermore, we show that post-learning activation patterns, as well as pre-to-post difference in activation associated with elements of sentence processing, effectively predict long-term learning retention, therefore linking neural and behavioral measures of learning outcomes.
Methods
Experimental design
A total of 107 volunteers participated in this study. Eighty-seven participants (mean age 26.08, SD 3.85, 79 right-handed, 52 females) underwent two MRI scans, one before and one after (mean time between scans was 5.6 weeks) attending an ISL course. The course was held four times, each with a different group of 20–23 participants, to allow optimal learning with individual guidance and attention from the teacher. An additional cohort of 20 control participants (mean age 24.5, SD 1.24, 17 right-handed, 12 females) underwent the same two MRI scans (mean time between scans was 5.1 weeks) but did not attend a sign language course in between. All participants were hearing individuals, naïve to sign language (mean number of familiar ISL signs 0.6, SD 0.95), with no history of neurological diseases, psychological disorders, drug or alcohol abuse, or use of neuro-psychiatric medication. Learning participants knew on average 2.4 spoken languages (SD 0.69) and control participants 2.25 spoken languages (SD 0.55) at a conversational level prior to the study. Four learning participants and one control participant were excluded from analysis due to poor data quality, yielding a dataset of 83 learning participants and 19 control participants. Four additional learning participants were excluded due to right-hemispheric dominance for sign-language processing (see “Language hemispheric dominance” paragraph in the “Methods” section), resulting in 79 learning participants (17–22 per learning group). All participants took an ISL proficiency test immediately after completing the ISL course, and 58 out of the 79 participants returned to re-take the same test 6 months later, while not practicing sign language in any organized way between the two timepoints. The research protocol was approved by the Institutional Review Board (IRB) of Sheba Medical Center. All participants signed an informed consent form. All ethical regulations relevant to human research participants were followed.
Sign language course
The ISL course consisted of eight 90-min lessons, taking place twice a week over the span of 4 weeks. It was taught by an experienced deaf teacher, a native signer of ISL, and held via Zoom. The course covered ISL vocabulary, grammar, and syntax. Learning success was evaluated using a sign language proficiency test constructed by the teacher, in which participants were requested to translate videos with ISL content into Hebrew. The test consisted of ten videos of sentences, five videos of dialogues and two videos of basic arithmetic exercises in Israeli Sign Language. All participants took the test immediately after the course. The sign language proficiency test, containing the ISL videos, each followed by a blank space in which participants filled in their Hebrew translation, can be found here.
MRI acquisition
MRI data were acquired using a 3T Magnetom Siemens Prisma (Siemens, Erlangen, Germany) scanner with a 64-channel RF coil, at the Tel-Aviv University Strauss Center for Computational Neuroimaging. Participants underwent two MRI sessions, one before and one after the sign language course. Each MRI session included anatomical and functional scans, with the following specifications: T1-weighted images were acquired with a 3D magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence, with TR/TE = 2400/2.78 ms and a resolution of 0.9 mm isotropic. T2-weighted images were acquired with a SPACE (SPC) sequence with TR/TE = 3200/554 ms and a resolution of 0.9 mm isotropic. Functional images were acquired with a T2*-weighted gradient-echo echo-planar protocol (GE-EPI) with the following parameters: TR/TE = 750/30.8 ms, multiband acceleration factor (Simultaneous Multi-Slice) of 8, resolution of 2 mm isotropic, flip angle: 52°, AP phase encoding. A fieldmap was acquired using spin-echo EPI images with opposite phase-encoding directions. Functional scans included two runs of an fMRI task (each run was 7 min and 45 s long, see below). Additional scans were acquired as part of the Strauss Center’s general acquisition protocol but were not analyzed in the current study.
fMRI task
The fMRI task included five conditions in a block design (Fig. 8). Four video conditions featured the same native signer of ISL (different from the sign language teacher) signing the following content: (1) three-sign sentences in ISL composed of vocabulary, grammar, and syntax taught in the course; (2) single signs (equivalent to words in spoken languages) in ISL that were taught in the course; (3) single signs in ISL that were not taught in the course (akin to pseudosigns); (4) matched non-linguistic hand and arm movements (i.e., hand and arm movements that did not abide by the phonological rules of ISL, choreographed by Dan Pelleg, a professional choreographer with a linguistics background). Each video was 2 s long, in all linguistic conditions. To avoid a possible iconicity effect, signs were selected, to the best of our ability, such that they did not resemble the represented object/action (see Supplementary Fig. S5). Videos of all conditions did not contain mouthing. The non-linguistic condition included movements of the hands produced in the same space in which ISL signs are produced—in front of the signer’s body. They were comparable in duration to the videos of the other conditions (~2 s), and were edited such that each video included a sequence of hand movements with variability in complexity and movement duration, so that they could be contrasted both with single signs and with sentences. Six fluent ISL signers (mean age 32.83, SD 10.25, five females) confirmed that the non-linguistic videos did not contain any gestures that may be interpreted as ISL. For the non-linguistic videos, see Supplementary Movies 1–7. A baseline condition presented a fixation cross.
The videos in the fMRI task were selected from a bank of video clips. The sentences condition bank consisted of 36 unique videos, the learned and unlearned signs banks consisted of 20 videos per condition, and the non-linguistic bank consisted of seven videos. Videos were selected randomly with replacement. For the content of the videos see Supplementary Table S1. In order to minimize perceptual differences between the different conditions, all of the videos presented the same native ISL signer, wearing the same clothes, standing in front of a similar background and with the same lighting conditions, with the same parts of his body visible (the upper part of the body, including the torso, hands, and head). Further, an ANOVA revealed no significant differences between the different task conditions in the following parameters: brightness, contrast, average color histogram, two translational measures (x-axis and y-axis) and a rotation measure.
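For illustration, the sketch below shows the kind of low-level stimulus check described above, assuming OpenCV and SciPy and a hypothetical one-directory-per-condition file layout; it is not the study's validation code (the translation and rotation measures would additionally require frame-to-frame motion estimation, omitted here).

```python
# Sketch: compare brightness and contrast across task-condition video banks.
# Assumes one directory of .mp4 files per condition (hypothetical layout).
from pathlib import Path

import cv2
import numpy as np
from scipy import stats

def brightness_contrast(video_path):
    """Mean gray level (brightness) and gray-level SD (contrast), averaged over frames."""
    cap = cv2.VideoCapture(str(video_path))
    means, stds = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        means.append(gray.mean())
        stds.append(gray.std())
    cap.release()
    return np.mean(means), np.mean(stds)

conditions = ["sentences", "learned_signs", "unlearned_signs", "non_linguistic"]
measures = {c: [brightness_contrast(p) for p in sorted(Path(c).glob("*.mp4"))]
            for c in conditions}

# One-way ANOVA per parameter across the four conditions
for i, name in enumerate(["brightness", "contrast"]):
    groups = [[m[i] for m in measures[c]] for c in conditions]
    f, p = stats.f_oneway(*groups)
    print(f"{name}: F = {f:.2f}, p = {p:.3f}")
```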
The different task conditions were designed to involve different combinations of linguistic components and processes (as summarized in Fig. 8): the non-linguistic condition involved no linguistic processing. Given that the movement sequences did not follow the phonology of ISL, these sequences probably did not enter the language input process at all. All three linguistic conditions involved linguistic processing, but each required different components: the unlearned signs condition, whose signs followed the phonology of ISL but were not stored in the participants’ lexicons, was expected to involve only initial phonological processing (PIB); the learned signs condition involved initial phonological processing (PIB), as well as lexical components: the PIL and the semantic-conceptual system; the sentences condition involved all of the above (the PIB, the PIL and the semantic-conceptual system), as well as additional elements of sentence-level (including syntactic, prosodic and morphological) processing. Notably, although the semantic lexicon and the conceptual system are distinct components, in the current study we assessed them together.
The task included four video conditions and one baseline condition presenting a fixation cross. Below each video condition are blue boxes specifying the linguistic components involved. The non-linguistic (blue) condition was designed to involve no linguistic components of ISL, the unlearned signs condition (in yellow) was designed to involve initial phonological processing (PIB) only, the learned signs condition (in purple) also involved phonological- and semantic-lexical processing (PIL and semantic lexicon), and the sentences condition (in green) also involved sentence-level processing.
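This condition-to-component mapping can be summarized programmatically. The sketch below is illustrative only (the identifiers are ours, not from the study's materials); it shows how a contrast between two conditions isolates the components unique to the first condition.

```python
# Illustrative mapping of task conditions to the linguistic components each
# was designed to engage (component names follow the framework in Fig. 1).
COMPONENTS = {
    "non_linguistic":  set(),                                  # no linguistic processing
    "unlearned_signs": {"PIB"},                                # initial phonological only
    "learned_signs":   {"PIB", "PIL", "semantic_conceptual"},  # + lexical processing
    "sentences":       {"PIB", "PIL", "semantic_conceptual", "sentence_level"},
}

def isolated_components(condition_a, condition_b):
    """Components engaged by condition_a but not condition_b (the a > b contrast)."""
    return COMPONENTS[condition_a] - COMPONENTS[condition_b]

assert isolated_components("learned_signs", "unlearned_signs") == {"PIL", "semantic_conceptual"}
assert isolated_components("sentences", "learned_signs") == {"sentence_level"}
```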
The task included six blocks of each video condition, each consisting of five 2-s videos, and seven blocks of the baseline condition. The order of conditions was counterbalanced across task runs and participants. The videos within each condition were randomly selected and separated by a 1-s fixation cross. To encourage task engagement, participants were instructed to report their perceived understanding of each video in real-time, on a scale from 1 (“I do not understand any of the content in the video”) to 4 (“I fully understand the content of the video”). For the results of the subjective comprehension scores, see Supplementary Fig. S5 and Table S2.
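As a rough illustration of how such a run could be assembled (a plain random shuffle stands in for the study's counterbalancing scheme, and baseline blocks are noted but not interleaved):

```python
# Sketch: assemble one task run's block sequence (illustrative only).
import random

BANK_SIZES = {"sentences": 36, "learned_signs": 20,
              "unlearned_signs": 20, "non_linguistic": 7}

def build_run(seed=0):
    rng = random.Random(seed)
    blocks = [c for c in BANK_SIZES for _ in range(6)]  # six blocks per video condition
    rng.shuffle(blocks)                                 # stand-in for counterbalancing
    run = []
    for condition in blocks:
        # Each block: five 2-s videos drawn with replacement, 1-s fixation between.
        videos = [rng.randrange(BANK_SIZES[condition]) for _ in range(5)]
        run.append((condition, videos))
    # Seven baseline (fixation-cross) blocks would be interleaved among these.
    return run
```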
MRI preprocessing
Image processing was carried out using the pipeline developed by the Human Connectome Project (HCP)82. The pipeline incorporates a set of tools provided by FMRIB Software Library (FSL), Freesurfer83 and the HCP’s Connectome Workbench84,85,86,87. Structural volume images in native space were transformed into MNI space using a nonlinear volume-based registration. The images were projected onto the surface and underwent registration using the multimodal surface matching (MSM) algorithm88. Cortical vertices were then combined with subcortical gray matter voxels to form the standard “CIFTI grayordinates” space89.
Functional image preprocessing included fieldmap-based unwarping of EPI images and motion correction, registration to the structural T1w images, followed by a nonlinear volume-based registration to MNI space and projection onto the surface, and then onto the standard grayordinates space. Data were minimally smoothed with a 4 mm FWHM Gaussian kernel in the grayordinates space, cleaned of artifacts and noise using high-pass filtering (2000 s cutoff) and FMRIB’s ICA-based Xnoiseifier (FIX)90,91, and underwent registration using the MSM algorithm88.
Behavioral data analysis
Sign language proficiency test scores were obtained immediately and 6 months following the sign language course. Significant differences in sign language test scores between timepoints were assessed using a non-parametric two-tailed Wilcoxon signed-rank test.
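A minimal sketch of this paired comparison, with simulated placeholder scores (the values below are for illustration only, not the study's data):

```python
# Sketch: two-tailed Wilcoxon signed-rank test on paired proficiency scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)                 # simulated data, illustration only
scores_immediate = rng.uniform(85, 100, 58)    # scores right after the course
scores_6months = scores_immediate - rng.uniform(0, 35, 58)  # scores six months later

stat, p = stats.wilcoxon(scores_immediate, scores_6months, alternative="two-sided")
print(f"W = {stat:.1f}, p = {p:.2e}")
```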
Individual fMRI statistics
Task-fMRI data were analyzed using FSL’s FEAT92 to detect brain regions showing increased task-induced activation following the sign language course. First-level analysis was performed per task run for contrasts of the following video conditions: sentences > non-linguistic; learned signs > non-linguistic; unlearned signs > non-linguistic; sentences > learned signs; and learned signs > unlearned signs (Fig. 2; the opposite contrast of unlearned signs > learned signs is also presented). We then averaged each participant’s two task runs using a fixed-effects model, resulting in contrast activation maps per participant per timepoint (to allow for comparison of the contrasts between the scans performed before and after the course).
Language hemispheric dominance
The analyses in the current study were performed only on participants who demonstrated left-hemispheric dominance for sign language processing. A laterality index (LI) was calculated for all 83 learning participants based on their post-learning activation maps in the sentences > non-linguistic contrast. LI was calculated as (L − R)/(L + R)93, where L and R represent the number of vertices with significant positive activation in the left and right hemispheres, respectively. To identify these vertices, we thresholded each participant’s sentences > non-linguistic map using a Gaussian-two-Gammas mixture model40, in which the Gaussian represents the noise and the two Gamma distributions represent positive and negative activations. Only vertices with a Z-score above the median of the positive Gamma distribution were considered vertices of positive activation. For further analyses we only used the data of the 79 learning participants (mean age 26.23, SD 3.85, 50 females) who demonstrated left-hemispheric dominance for sign language input.
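A minimal sketch of the LI computation, assuming a per-participant Z-score vector over cortical vertices and a boolean left-hemisphere mask; a fixed placeholder cutoff stands in for the mixture-model-derived threshold:

```python
# Sketch: laterality index from a thresholded activation map.
import numpy as np

def laterality_index(z_map, left_mask, threshold):
    """LI = (L - R) / (L + R), counting significantly active vertices per hemisphere."""
    active = z_map > threshold      # placeholder for the mixture-model threshold
    L = int(np.sum(active & left_mask))
    R = int(np.sum(active & ~left_mask))
    return (L - R) / (L + R)

# Positive LI indicates left-hemispheric dominance.
```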
Functional plasticity—group-level fMRI statistics
A group-level analysis was performed to detect learning-induced alterations in task activity per task contrast, corresponding with the different linguistic domains. For each contrast, a vertex-wise general linear model approach was employed to detect an increase in brain activity between the two timepoints, before and after the course (p < 0.05, FDR corrected for 91,282 vertices94,95), aiming to identify regions of increased engagement following learning. The same analysis was also performed on all 83 participants (see Supplementary Fig. S6), demonstrating a significant increase in activity in all task contrasts. A similar approach was further employed to detect a decrease in brain activity between the two timepoints (see Supplementary Fig. S7 and Table S3), among the 79 participants demonstrating left-hemispheric dominance. As a direct comparison between the full learning and control groups may be biased due to unbalanced group sizes, a vertex-wise 2 (pre-/post-learning) × 2 (learning/control) mixed-effects ANOVA was conducted on the sentences > non-linguistic activation maps to compare the learning participants of each course round with the control participants.
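As a simplified stand-in for this vertex-wise analysis (a paired t-test replaces the GLM used in the study, and maps are assumed to be stacked as participants × vertices arrays):

```python
# Sketch: vertex-wise increase in activity post- vs. pre-learning, FDR-corrected.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

def increased_activity(pre_maps, post_maps, q=0.05):
    """One-sided paired test per vertex (post > pre), Benjamini-Hochberg FDR."""
    t, p_two = stats.ttest_rel(post_maps, pre_maps, axis=0)
    p_one = np.where(t > 0, p_two / 2.0, 1.0 - p_two / 2.0)  # one-sided p-values
    reject, p_fdr, _, _ = multipletests(p_one, alpha=q, method="fdr_bh")
    return reject, t, p_fdr

# pre_maps, post_maps: arrays of shape (79, 91282) for one task contrast.
```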
Representational similarity analysis of different linguistic components following learning
To investigate activation patterns associated with different linguistic components in a newly-learned language, all analyses described below were performed on the learning participants’ post-learning scans.
An RSA with a whole-brain surface-based searchlight approach39, using the CoSMo Multivariate Pattern Analysis toolbox (CoSMoMVPA)96, was conducted on the 59,412 cortical vertices. Three categorical (conceptual) dissimilarity matrices were constructed, representing sentence-level processing (sentences > all other conditions), lexical processing, i.e., the phonological lexicon and the semantic system (sentences and learned signs > unlearned signs and non-linguistic), and initial phonological processing (sentences, learned signs, and unlearned signs > non-linguistic). For the three categorical dissimilarity matrices, see Fig. 4A. A neural RDM was calculated per vertex by running a searchlight analysis, using circular two-dimensional patches of 100 contiguous vertices around each seed vertex on the cortical surface. The BOLD signal for each of the four non-baseline task conditions (concatenated across both task runs) was extracted from all vertices within each patch, while subtracting the mean activity pattern across all task conditions to eliminate global effects97. A 4 × 4 neural RDM was then constructed for each vertex serving as a patch center, using a dissimilarity measure of 1 − Pearson’s r. The three categorical dissimilarity matrices associated with the different linguistic domains were correlated with each vertex’s RDM, generating three whole-brain maps of Pearson’s correlation scores, which were converted to Z-values using the Fisher transformation. For group-level significance testing we used random-effects Monte Carlo cluster statistics, running permutation testing (10,000 iterations) with threshold-free cluster enhancement (TFCE), as implemented in the CoSMoMVPA toolbox98,99. The significance threshold was set at α = 0.001, corrected for multiple comparisons. For a similar RSA performed on the difference between the normalized post-learning and pre-learning BOLD signals, see Supplementary Fig. S8. To assess the spatial overlap between the three group-level significance maps, all three maps were binarized and a Sørensen-Dice coefficient100,101 was calculated for each pair of maps.
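The searchlight itself was run with CoSMoMVPA; below is a minimal sketch of the per-patch computation for a single 100-vertex patch, assuming the condition order [sentences, learned signs, unlearned signs, non-linguistic] and hypothetical activity patterns:

```python
import numpy as np

def patch_rdm(patterns):
    """4 x 4 neural RDM for one searchlight patch.
    `patterns`: conditions x vertices BOLD estimates. The mean pattern
    across conditions is subtracted to remove global effects, and
    dissimilarity is 1 - Pearson's r between condition patterns."""
    centered = patterns - patterns.mean(axis=0, keepdims=True)
    return 1.0 - np.corrcoef(centered)

def rsa_score(neural_rdm, model_rdm):
    """Correlation between the off-diagonal entries of the neural and
    categorical RDMs, Fisher z-transformed."""
    iu = np.triu_indices_from(neural_rdm, k=1)
    r = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
    return np.arctanh(r)  # Fisher transformation

# Categorical RDM for sentence-level processing: sentences dissimilar
# to all other conditions (condition order as stated above)
sentence_model = np.array([[0, 1, 1, 1],
                           [1, 0, 0, 0],
                           [1, 0, 0, 0],
                           [1, 0, 0, 0]], dtype=float)

rng = np.random.default_rng(3)
patterns = rng.normal(size=(4, 100))  # hypothetical 100-vertex patch
print(f"z = {rsa_score(patch_rdm(patterns), sentence_model):.3f}")
```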
We additionally calculated an RDM between the task contrast maps associated with sentence-level processing (sentences > learned signs), lexical processing (learned signs > unlearned signs), and initial phonological processing (unlearned signs > non-linguistic). We averaged each contrast map across participants to generate a group-level activation map (Fig. 5A), and thresholded these maps using the Gaussian-two-Gammas mixture model40 to identify brain regions engaged in the processing of the different linguistic components (Fig. 5B). We then computed the dissimilarity measure 1 − Pearson’s r for each pair of maps (Fig. 5C). For significance testing we conducted a permutation test with 10,000 iterations, in which vertices’ activation values were randomly shuffled between the two maps. Finally, Dice coefficients were calculated for each pair of binarized thresholded activation maps (Fig. 5D). Both the RDM and the Dice coefficients were further computed at the individual level and then averaged across participants (Supplementary Fig. S3).
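A sketch of the map-level dissimilarity, the permutation test (under one plausible reading of the shuffling scheme, in which the two maps' values are randomly swapped at each vertex), and the Dice overlap; the maps and threshold are hypothetical:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Sorensen-Dice coefficient between two binarized maps."""
    return 2.0 * np.sum(mask_a & mask_b) / (np.sum(mask_a) + np.sum(mask_b))

def map_dissimilarity(x, y):
    """Dissimilarity between two activation maps: 1 - Pearson's r."""
    return 1.0 - np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(4)
map_a = rng.normal(size=59412)                # hypothetical group-level maps,
map_b = 0.5 * map_a + rng.normal(size=59412)  # one value per cortical vertex

observed = map_dissimilarity(map_a, map_b)

# Null distribution: randomly swap the two maps' values at each vertex
null = np.empty(10_000)
for i in range(null.size):
    swap = rng.random(map_a.size) < 0.5
    null[i] = map_dissimilarity(np.where(swap, map_b, map_a),
                                np.where(swap, map_a, map_b))
p = np.mean(null <= observed)  # tail chosen here: maps more similar than chance

print(f"dissimilarity = {observed:.3f}, p = {p:.4f}")
print(f"Dice = {dice(map_a > 1.0, map_b > 1.0):.3f}")  # binarized at a hypothetical threshold
```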
Prediction of future learning outcomes
Fifty-eight of the 79 participants re-took the sign language proficiency test 6 months after completing the course. We aimed to predict their test scores at both timepoints (i.e., immediately following the course and 6 months later) based on their post-learning activation maps and their delta activation maps (the difference between post- and pre-learning activation), in the following contrasts: sentences > non-linguistic, associated with the processing of all linguistic domains introduced in the current study; sentences > learned signs, associated with elements of sentence-level processing; learned > unlearned signs, associated with lexical (phonological and semantic) processing; and unlearned signs > non-linguistic, associated with initial phonological processing. Prediction was performed using a Brain Basis Set (BBS) pipeline102 in a ten-fold cross-validation routine. The BBS prediction pipeline comprised four steps applied per fold: (1) dimensionality reduction using principal component analysis (PCA) to a predetermined 20 components; (2) the selected components were regressed against the individual data of every participant in the current fold’s training set, yielding an “expression score” per component and participant102; (3) a linear model was fit to predict the sign language test scores from the expression scores; (4) the model was applied to the current fold’s test set. Statistical significance was determined using a permutation test with 1000 iterations in which test scores were randomly shuffled. Contribution maps were created for successful predictions, visualizing the contribution of each vertex in the activation map to the prediction of the test scores. For each of the ten folds, each component derived from the data-reduction step (step (1) above) was multiplied by its corresponding beta coefficient from the linear model, resulting in 20 weighted components per fold. The weighted component maps were then summed within and across folds, yielding a map quantifying each vertex’s contribution to the prediction.
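A minimal sketch of the BBS pipeline using scikit-learn, with hypothetical activation maps and scores; note that for orthonormal PCA components, regressing a participant's map on the component set (the "expression scores") reduces to the PCA projection used here:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Hypothetical data: post-learning activation maps and proficiency scores
rng = np.random.default_rng(5)
n_subj, n_vertices = 58, 59412
maps = rng.normal(size=(n_subj, n_vertices))
scores = rng.normal(70, 10, size=n_subj)

predicted = np.zeros(n_subj)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(maps):
    # (1) Dimensionality reduction to 20 components, fit on the training set
    pca = PCA(n_components=20).fit(maps[train])
    # (2) Expression scores: projection of each participant's map onto the components
    expr_train, expr_test = pca.transform(maps[train]), pca.transform(maps[test])
    # (3) Linear model predicting test scores from expression scores
    model = LinearRegression().fit(expr_train, scores[train])
    # (4) Apply the model to the held-out fold
    predicted[test] = model.predict(expr_test)

r = np.corrcoef(predicted, scores)[0, 1]
print(f"Cross-validated prediction r = {r:.3f}")
# Significance would be assessed by re-running the pipeline with randomly
# shuffled scores (1000 permutations in the study) and comparing r to the
# resulting null distribution.
```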
Statistics and reproducibility
Group-level (N = 79) statistics include mass-univariate tests for the detection of brain regions in which activity increased or decreased following sign language learning (FDR corrected for 91,282 comparisons, one per vertex). A mixed-effects ANOVA was conducted on the sentences > non-linguistic activation maps of the control group (N = 19) and each of the four learning groups (N = 17–20), yielding a significant group (learning/control) × time (pre-/post-learning) interaction effect (p < 0.05, FDR corrected). The findings of the ANOVA indicate an increase in activity following learning in each of the learning groups, compared to the control group. An additional group analysis, aiming to investigate activation patterns associated with different linguistic components, included an RSA with a whole-brain surface-based searchlight approach, performed on the learning participants’ (N = 79) post-learning BOLD signal and on the post- vs. pre-learning BOLD signal difference. For group-level significance testing we used random-effects Monte Carlo cluster statistics, running permutation testing (10,000 iterations) with threshold-free cluster enhancement (TFCE)98,99. The significance threshold was set at α = 0.001, corrected for multiple comparisons. Lastly, prediction of learning success and retention was performed on all participants who took the sign language test at both timepoints (N = 58), using the Brain Basis Set (BBS) pipeline by Sripada et al.102. Statistical significance was determined using a permutation test with 1000 iterations in which test scores were randomly shuffled.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
The data that support the findings of this study are available upon request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Code availability
While most analyses were performed using previously published pipelines and toolboxes, custom Matlab scripts are available as Supplementary Software 1.
References
Barbeau, E. B. et al. The role of the left inferior parietal lobule in second language learning: an intensive language training fMRI study. Neuropsychologia 98, 169–176 (2017).
Gurunandan, K., Carreiras, M. & Paz-Alonso, P. M. Functional plasticity associated with language learning in adults. Neuroimage 201, 116040 (2019).
Alotaibi, S., Alsaleh, A., Wuerger, S. & Meyer, G. Rapid neural changes during novel speech-sound learning: an fMRI and DTI study. Brain Lang. 245, 105324 (2023).
Yang, J., Gates, K. M., Molenaar, P. & Li, P. Neural changes underlying successful second language word learning: an fMRI study. J. Neurolinguist. 33, 29–49 (2015).
Wang, Y., Sereno, J. A., Jongman, A. & Hirsch, J. fMRI evidence for cortical modification during learning of Mandarin lexical tone. J. Cogn. Neurosci. 15, 1019–1027 (2003).
Banaszkiewicz, A. et al. Multimodal imaging of brain reorganization in hearing late learners of sign language. Hum. Brain Mapp. 42, 384–397 (2021).
Johnson, L. et al. Functional neuroanatomy of second language sentence comprehension: an fMRI study of late learners of American Sign Language. Front Psychol. 9, 1626 (2018).
Williams, J. T., Darcy, I. & Newman, S. D. Modality-specific processing precedes amodal linguistic processing during L2 sign language acquisition: a longitudinal study. Cortex 75, 56–67 (2016).
Stokoe, W. C. Sign language structure: an outline of the visual communication systems of the American deaf. in Studies in Linguistics Occasional Papers, Vol. 8 (University of Buffalo, 1960).
Schlenker, P. Visible meaning: sign language and the foundations of semantics. Theor. Linguist. 44, 123–208 (2018).
Bibb, B., Nickels, L. & Coltheart, M. Impaired auditory lexical access and the effect of speech-reading. Asia Pac. J. Speech Lang. Hear. 5, 129–135 (2000).
Coltheart, M. Are there lexicons? Q. J. Exp. Psychol. Sect. A 57, 1153–1171 (2004).
Gvion, A. & Friedmann, N. Phonological short-term memory in conduction aphasia. Aphasiology 26, 579–614 (2012).
Hickok, G. & Poeppel, D. Dorsal and ventral streams: a framework for understanding aspects of the functional anatomy of language. Cognition 92, 67–99 (2004).
Whitworth, A., Webster, J. & Howard, D. A Cognitive Neuropsychological Approach to Assessment and Intervention in Aphasia: A Clinician’s Guide (Psychology Press, 2013).
Emmorey, K. New perspectives on the neurobiology of sign languages. Front. Commun. 6, 748430 (2021).
Howard, D. & Nickels, L. Separating input and output phonology: semantic, phonological, and orthographic effects in short-term memory impairment. Cogn. Neuropsychol. 22, 42–77 (2005).
Martin, N. & Saffran, E. M. The relationship of input and output phonological processing: an evaluation of models and evidence to support them. Aphasiology 16, 107–150 (2002).
Biran, M. & Friedmann, N. The representation of lexical-syntactic information: evidence from syntactic and lexical retrieval impairments in aphasia. Cortex 48, 1103–1127 (2012).
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M. & Boyes-Braem, P. Basic objects in natural categories. Cogn. Psychol. 8, 382–439 (1976).
Schell, M., Zaccarella, E. & Friederici, A. D. Differential cortical contribution of syntax and semantics: an fMRI study on two-word phrasal processing. Cortex 96, 105–120 (2017).
Sandler, W. & Lillo-Martin, D. C. Sign Language and Linguistic Universals (Cambridge University Press, 2006).
Emmorey, K., McCullough, S. & Weisberg, J. Neural correlates of fingerspelling, text, and sign processing in deaf American Sign Language–English bilinguals. Lang. Cogn. Neurosci. 30, 749–767 (2015).
Stroh, A.-L. et al. Neural correlates of semantic and syntactic processing in German Sign Language. Neuroimage 200, 231–241 (2019).
MacSweeney, M. et al. Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain 125, 1583–1593 (2002).
Lakretz, Y., Dehaene, S. & King, J.-R. What limits our capacity to process nested long-range dependencies in sentence comprehension? Entropy 22, 446 (2020).
Shetreet, E., Friedmann, N. & Hadar, U. The neural correlates of linguistic distinctions: unaccusative and unergative verbs. J. Cogn. Neurosci. 22, 2306–2315 (2010).
Friederici, A. D. Towards a neural basis of auditory sentence processing. Trends Cogn. Sci. 6, 78–84 (2002).
Flinker, A. et al. Redefining the role of Broca’s area in speech. Proc. Natl. Acad. Sci. USA 112, 2871–2875 (2015).
Friederici, A. D., Chomsky, N., Berwick, R. C., Moro, A. & Bolhuis, J. J. Language, mind and brain. Nat. Hum. Behav. 1, 713–722 (2017).
Ardila, A., Bernal, B. & Rosselli, M. How localized are language brain areas? A review of Brodmann areas involvement in oral language. Arch. Clin. Neuropsychol. 31, 112–122 (2016).
Hagoort, P. On Broca, brain, and binding: a new framework. Trends Cogn. Sci. 9, 416–423 (2005).
Fedorenko, E., Duncan, J. & Kanwisher, N. Language-selective and domain-general regions lie side by side within Broca’s area. Curr. Biol. 22, 2059–2062 (2012).
Broca, P. Remarques sur le siège de la faculté du langage articulé, suivies d’une observation d’aphémie (perte de la parole) [French]. Bull. Soc. Anat. 6, 330–357 (1861).
Binder, J. R. The Wernicke area. Neurology 85, 2170–2175 (2015).
Banaszkiewicz, A. et al. The role of the superior parietal lobule in lexical processing of sign language: insights from fMRI and TMS. Cortex 135, 240–254 (2021).
Emmorey, K., Mehta, S. & Grabowski, T. J. The neural correlates of sign versus word production. Neuroimage 36, 202–208 (2007).
Raichle, M. E. The brain’s default mode network. Annu. Rev. Neurosci. 38, 433–447 (2015).
Kriegeskorte, N., Mur, M. & Bandettini, P. Representational similarity analysis—connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 249 (2008).
Beckmann, C. F. & Smith, S. M. Probabilistic independent component analysis for functional magnetic resonance imaging. IEEE Trans. Med. Imaging 23, 137–152 (2004).
Li, P., Legault, J. & Litcofsky, K. A. Neuroplasticity as a function of second language learning: anatomical changes in the human brain. Cortex 58, 301–324 (2014).
Kiran, S. & Thompson, C. K. Neuroplasticity of language networks in aphasia: advances, updates, and future challenges. Front. Neurol. 10, 295 (2019).
Binder, J. et al. Human temporal lobe activation by speech and nonspeech sounds. Cereb. Cortex 10, 512–528 (2000).
Buchsbaum, B. et al. Neural substrates for verbal working memory in deaf signers: fMRI study and lesion case report. Brain Lang. 95, 265–272 (2005).
Mayberry, R. I. & Fischer, S. D. Looking through phonological shape to lexical meaning: the bottleneck of non-native sign language processing. Mem. Cogn. 17, 740–754 (1989).
Friederici, A. D. The cortical language circuit: from auditory perception to sentence comprehension. Trends Cogn. Sci. 16, 262–268 (2012).
Wilson, S. M., Saygin, A. P., Sereno, M. I. & Iacoboni, M. Listening to speech activates motor areas involved in speech production. Nat. Neurosci. 7, 701–702 (2004).
Binder, J. R., Desai, R. H., Graves, W. W. & Conant, L. L. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796 (2009).
MacSweeney, M. et al. Lexical and sentential processing in British Sign Language. Hum. Brain Mapp. 27, 63–76 (2006).
Evans, S., Price, C. J., Diedrichsen, J., Gutierrez-Sigut, E. & MacSweeney, M. Sign and speech share partially overlapping conceptual representations. Curr. Biol. 29, 3739–3747.e5 (2019).
Indefrey, P. & Levelt, W. J. M. The spatial and temporal signatures of word production components. Cognition 92, 101–144 (2004).
Newman, A. J., Bavelier, D., Corina, D., Jezzard, P. & Neville, H. J. A critical period for right hemisphere recruitment in American Sign Language processing. Nat. Neurosci. 5, 76–80 (2002).
Corina, D. P. & Lawyer, L. A. The neural organization of signed language. in The Oxford Handbook of Neurolinguistics (eds De Zubicaray, G. I. & Schiller, N. O.) 401–424 (Oxford University Press, 2019).
Shetreet, E., Palti, D., Friedmann, N. & Hadar, U. Cortical representation of verb processing in sentence comprehension: number of complements, subcategorization, and thematic frames. Cereb. Cortex 17, 1958–1969 (2007).
Grodzinsky, Y. The neurology of syntax: language use without Broca’s area. Behav. Brain Sci. 23, 1–21 (2000).
Matchin, W. & Hickok, G. The cortical organization of syntax. Cereb. Cortex 30, 1481–1498 (2020).
Nelson, M. J. et al. Neurophysiological dynamics of phrase-structure building during sentence processing. Proc. Natl. Acad. Sci. USA 114, E3669–E3678 (2017).
Perrachione, T. K., Ghosh, S. S., Ostrovskaya, I., Gabrieli, J. D. E. & Kovelman, I. Phonological working memory for words and nonwords in cerebral cortex. J. Speech Lang. Hear. Res. 60, 1959–1979 (2017).
Vinson, D., Fox, N., Devlin, J. T., Emmorey, K. & Vigliocco, G. Transcranial magnetic stimulation during British Sign Language production reveals monitoring of discrete linguistic units in left superior parietal lobule. Preprint at bioRxiv https://doi.org/10.1101/679340 (2019).
Twomey, T., Price, C. J., Waters, D. & MacSweeney, M. The impact of early language exposure on the neural system supporting language in deaf and hearing adults. Neuroimage 209, 116411 (2020).
Dehaene, S., Le Clec’H, G., Poline, J. B., Le Bihan, D. & Cohen, L. The visual word form area: a prelexical representation of visual words in the fusiform gyrus. Neuroreport 13, 321–325 (2002).
Waters, D. et al. Fingerspelling, signed language, text and picture processing in deaf native signers: the role of the mid-fusiform gyrus. Neuroimage 35, 1287–1302 (2007).
Qin, L. et al. A heteromodal word-meaning binding site in the visual word form area under top-down frontoparietal control. J. Neurosci. 41, 3854–3869 (2021).
Dziȩgiel-Fivet, G., Beck, J. & Jednoróg, K. The role of the left ventral occipitotemporal cortex in speech processing—the influence of visual deprivation. Front. Hum. Neurosci. 17, 1228808 (2023).
Dȩbska, A., Wójcik, M., Chyl, K., Dziȩgiel-Fivet, G. & Jednoróg, K. Beyond the Visual Word Form Area—a cognitive characterization of the left ventral occipitotemporal cortex. Front. Hum. Neurosci. 17, 1199366 (2023).
Neville, H. J. et al. Cerebral organization for language in deaf and hearing subjects: biological constraints and effects of experience. Proc. Natl. Acad. Sci. USA 95, 922–929 (1998).
Bavelier, D. et al. Hemispheric specialization for English and ASL: left invariance-right variability. Neuroreport 9, 1537–1542 (1998).
Neville, H. J. et al. Neural systems mediating American Sign Language: effects of sensory experience and age of acquisition. Brain Lang. 57, 285–308 (1997).
Hickok, G., Love-Geffen, T. & Klima, E. S. Role of the left hemisphere in sign language comprehension. Brain Lang. 82, 167–178 (2002).
MacSweeney, M. et al. Neural correlates of British Sign Language comprehension: spatial processing demands of topographic language. J. Cogn. Neurosci. 14, 1064–1075 (2002).
Flöel, A., Buyx, A., Breitenstein, C., Lohmann, H. & Knecht, S. Hemispheric lateralization of spatial attention in right- and left-hemispheric language dominance. Behav. Brain Res. 158, 269–275 (2005).
Pujol, J., Deus, J., Losilla, J. M. & Capdevila, A. Cerebral lateralization of language in normal left-handed people studied by functional MRI. Neurology 52, 1038–1043 (1999).
Cecchetto, C., Geraci, C. & Zucchi, S. Another way to mark syntactic dependencies: the case for right-peripheral specifiers in sign languages. Language 85, 278–320 (2009).
Wilbur, R. B. Foregrounding structures in American sign language. J. Pragmat. 22, 647–672 (1994).
Quer, J. Attitude ascriptions in sign languages and role shift. (2013).
Meir, I. Question and negation in Israeli Sign Language. Sign Lang. Linguist. 7, 97–124 (2006).
Fransson, P. How default is the default mode of brain function?: further evidence from intrinsic BOLD signal fluctuations. Neuropsychologia 44, 2836–2845 (2006).
Stein, M. et al. Reduced frontal activation with increasing 2nd language proficiency. Neuropsychologia 47, 2712–2720 (2009).
Mumford, J. A., Davis, T. & Poldrack, R. A. The impact of study design on pattern estimation for single-trial multivariate pattern analysis. Neuroimage 103, 130–138 (2014).
Dimsdale-Zucker, H. R. & Ranganath, C. Representational similarity analyses: a practical guide for functional MRI applications. Handb. Behav. Neurosci. 28, 509–525 (2018).
Johnson, S. C., Saykin, A. J., Flashman, L. A., McAllister, T. W. & Sparling, M. B. Brain activation on fMRI and verbal memory ability: functional neuroanatomic correlates of CVLT performance. J. Int. Neuropsychol. Soc. 7, 55–62 (2001).
Glasser, M. F. et al. The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage 80, 105–124 (2013).
Fischl, B. FreeSurfer. Neuroimage 62, 774–781 (2012).
Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W. & Smith, S. M. FSL. Neuroimage 62, 782–790 (2012).
Marcus, D. S. et al. Informatics and data mining tools and strategies for the human connectome project. Front. Neuroinform. 5, 4 (2011).
Smith, S. M. et al. Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23, S208–S219 (2004).
Woolrich, M. W. et al. Bayesian analysis of neuroimaging data in FSL. Neuroimage 45, S173–S186 (2009).
Robinson, E. C. et al. Multimodal surface matching with higher-order smoothness constraints. Neuroimage 167, 453–465 (2018).
Glasser, M. F. et al. The Human Connectome Project’s neuroimaging approach. Nat. Neurosci. 19, 1175–1187 (2016).
Griffanti, L. et al. ICA-based artefact removal and accelerated fMRI acquisition for improved resting state network imaging. Neuroimage 95, 232–247 (2014).
Salimi-Khorshidi, G. et al. Automatic denoising of functional MRI data: combining independent component analysis and hierarchical fusion of classifiers. Neuroimage 90, 449–468 (2014).
Woolrich, M. W., Ripley, B. D., Brady, M. & Smith, S. M. Temporal autocorrelation in univariate linear modeling of FMRI data. Neuroimage 14, 1370–1386 (2001).
Szaflarski, J. P. et al. Language lateralization in left-handed and ambidextrous people. Neurology 59, 238–244 (2002).
Woo, C. W., Krishnan, A. & Wager, T. D. Cluster-extent based thresholding in fMRI analyses: pitfalls and recommendations. Neuroimage 91, 412–419 (2014).
Benjamini, Y. & Hochberg, Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J. R. Stat. Soc. Ser. B Methodol. 57, 289–300 (1995).
Oosterhof, N. N., Connolly, A. C. & Haxby, J. V. CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in matlab/GNU octave. Front. Neuroinform. 10, 194842 (2016).
Diedrichsen, J. & Kriegeskorte, N. Representational models: a common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Comput. Biol. 13, e1005508 (2017).
Smith, S. M. & Nichols, T. E. Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. Neuroimage 44, 83–98 (2009).
Stelzer, J., Chen, Y. & Turner, R. Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): random permutations and cluster size control. Neuroimage 65, 69–82 (2013).
Sørensen, T. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons. Biologiske Skrifter 5, 1–34 (1948).
Dice, L. R. Measures of the amount of ecologic association between species. Ecology 26, 297–302 (1945).
Sripada, C. et al. Basic units of inter-individual variation in resting state connectomes. Sci. Rep. 9, 1900 (2019).
Acknowledgements
The authors gratefully acknowledge the support of the Israel Science Foundation (ISF grant no. 1603/18), the Cukier-Goldstein-Goren Center for Mind, Cognition and Language (MILA), and the Minducate Science of Learning Research and Innovation Center. The authors also thank Mr. Doron Levy for teaching the ISL course.
Author information
Contributions
Y. Coldham: conceptualization, data collection, data analysis, writing—original draft; N. Haluts: conceptualization, writing—original draft; E. Elbaz, T. Ben-David, N. Racabi: data collection; S. Gal: fMRI task construction; M. Bernstein-Eliav: writing—review & editing; N. Friedmann: conceptualization; I. Tavor: supervision, conceptualization, writing—review & editing.
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Communications Biology thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editor: Benjamin Bessieres.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Coldham, Y., Haluts, N., Elbaz, E. et al. Distinct neural representations of different linguistic components following sign language learning. Commun Biol 8, 353 (2025). https://doi.org/10.1038/s42003-025-07793-7