Speech comprehension requires the ability to temporally segment the acoustic input for higher-level linguistic analysis. Oscillation-based approaches suggest that low-frequency auditory cortex oscillations track syllable-sized acoustic information and therefore emphasize the relevance of syllabic-level acoustic processing for speech segmentation. How syllabic processing interacts with higher levels of speech processing, beyond segmentation, including the anatomical and neurophysiological characteristics of the networks involved, is debated. In two MEG experiments, we investigated lexical and sublexical word-level processing and its interactions with (acoustic) syllable processing using a frequency-tagging paradigm. Participants listened to disyllabic words presented at a rate of 4 syllables/s. The stimuli carried lexical content (native language), sublexical syllable-to-syllable transitions (foreign language), or mere syllabic information (pseudo-words). Two conjectures were evaluated: (i) syllable-to-syllable transitions contribute to word-level processing, and (ii) processing of words activates brain areas that interact with acoustic syllable processing. We show that syllable-to-syllable transition information, compared to mere syllable information, activated a bilateral superior temporal, middle temporal, and inferior frontal network. Lexical content additionally resulted in increased neural activity. Evidence for an interaction of word-level and acoustic syllable-level processing was inconclusive: decreases in syllable tracking (cerebro-acoustic coherence) in auditory cortex and increases in cross-frequency coupling between right superior and middle temporal and frontal areas were found when lexical content was present compared with all other conditions, but not when conditions were compared separately. The data provide experimental insight into how subtle and sensitive the processing of syllable-to-syllable transition information is for word-level processing.
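The cerebro-acoustic coherence measure mentioned above quantifies how strongly a neural signal aligns with the speech amplitude envelope at a given frequency. The following is a minimal illustrative sketch, not the authors' analysis pipeline: it uses simulated data (a 4 Hz envelope matching the study's syllable rate and a noisy channel that partially tracks it) and `scipy.signal.coherence` to locate the coherence peak.

```python
# Illustrative sketch with simulated data (not the study's pipeline):
# coherence between a speech envelope and a neural channel tracking it.
import numpy as np
from scipy.signal import coherence

fs = 200  # Hz, assumed sampling rate for this toy example
t = np.arange(0, 60, 1 / fs)

# Simulated 4 Hz speech envelope (the syllable rate used in the study)
envelope = 1 + np.sin(2 * np.pi * 4 * t)

# Simulated neural channel that partially tracks the envelope
rng = np.random.default_rng(0)
neural = 0.5 * envelope + rng.normal(size=t.size)

# Magnitude-squared coherence in 4 s windows (0.25 Hz resolution);
# the peak is expected near the 4 Hz syllable rate
freqs, coh = coherence(envelope, neural, fs=fs, nperseg=fs * 4)
peak_freq = freqs[np.argmax(coh)]
print(f"coherence peaks near {peak_freq:.2f} Hz")
```

In the study, lower coherence in auditory cortex for lexical stimuli was interpreted as reduced reliance on syllable-level tracking, so the quantity of interest is the coherence value at the stimulation frequency rather than the peak location alone.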
Bilingual acoustic voice variation: the case of Sorani Kurdish-Persian speakers
Author(s): Maral Asiaee, Homa Asadi
Published by: Univerzita Karlova v Praze, Nakladatelství Karolinum
Subject(s): Language and Literature Studies
Keywords: voice quality; bilingual speakers; Persian; Sorani Kurdish; principal component analysis

Summary/Abstract: Many individuals around the world speak two or more languages. This phenomenon adds a fascinating dimension of variability to speech, both in perception and production. It is typically assumed that while some aspects of the speech signal vary for linguistic reasons, some indexical features remain unchanged across languages. But do bilinguals change their voice when they switch from one language to the other? Yet little is known about the influence of language on within- and between-speaker vocal variability. The present study investigated how acoustic parameters of voice quality are structured in the two languages of a bilingual speaker and to what extent such features may vary between bilingual speakers. For this purpose, speech samples of 10 simultaneous Sorani Kurdish-Persian bilingual speakers were acoustically analyzed. Following a psychoacoustic model proposed by Kreiman (2014) and using a series of principal component analyses, we found that Sorani Kurdish-Persian bilingual speakers followed a similar acoustic pattern in their two languages, suggesting that each speaker has a unique voice but uses the same voice parameters when switching from one language to the other.
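The principal component analysis step mentioned in the abstract can be sketched as follows. This is a hedged illustration on synthetic data, not the study's measurements: the parameter names in the comments (F0, H1-H2, CPP, HNR) are common voice-quality measures assumed for the example, and PCA is computed directly via the SVD of the centered data matrix.

```python
# Illustrative sketch on synthetic data (not the study's measurements):
# PCA over acoustic voice-quality parameters for one speaker's
# utterances, to ask which few components describe the voice.
import numpy as np

rng = np.random.default_rng(1)

# Rows: utterances from one speaker (e.g. half Kurdish, half Persian);
# columns: hypothetical acoustic parameters (e.g. F0, H1-H2, CPP, HNR).
n_utterances, n_params = 40, 6
X = rng.normal(size=(n_utterances, n_params))

# Center each column, then compute PCA via SVD of the centered matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()  # variance ratio per component

# The loadings (rows of Vt) can be compared between the two languages
# to check whether the same parameters dominate each component.
print("explained variance ratios:", np.round(explained, 3))
```

Running such an analysis separately on each language and comparing component loadings is one way to operationalize the paper's question of whether a speaker uses the same voice parameters across languages.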