Journal articles on the topic "Acoustic and Linguistic Modalities"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the 50 best journal articles for studies on the topic "Acoustic and Linguistic Modalities".

Next to each source in the list of references, there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Barón-Birchenall, Leonardo. "Phonetic Accommodation During Conversational Interactions: An Overview". Revista Guillermo de Ockham 21, no. 2 (March 22, 2023): in press. http://dx.doi.org/10.21500/22563202.6150.

Abstract:
During conversational interactions such as tutoring, instruction-giving tasks, verbal negotiations, or just talking with friends, interlocutors’ behaviors experience a series of changes due to the characteristics of their counterpart and to the interaction itself. These changes are pervasively present in every social interaction, and most of them occur in the sounds and rhythms of our speech, which is known as acoustic-prosodic accommodation, or simply phonetic accommodation. The consequences, linguistic and social constraints, and underlying cognitive mechanisms of phonetic accommodation have been studied for at least 50 years, due to the importance of the phenomenon to several disciplines such as linguistics, psychology, and sociology. Based on the analysis and synthesis of the existing empirical research literature, in this paper we present a structured and comprehensive review of the qualities, functions, onto- and phylogenetic development, and modalities of phonetic accommodation.
2

Calder, Jeremy. "The fierceness of fronted /s/: Linguistic rhematization through visual transformation". Language in Society 48, no. 1 (October 11, 2018): 31–64. http://dx.doi.org/10.1017/s004740451800115x.

Abstract:
This article explores the roles that language and the body play in the iconization of cross-modal personae (see Agha 2003, 2004). Focusing on a community of radical drag queens in San Francisco, I analyze the interplay of visual presentation and acoustic dimensions of /s/ in the construction of the fierce queen persona, which embodies an extreme, larger-than-life, and anti-normative type of femininity. Taking data from transformations—conversations during which queens visually transform from male-presenting into their feminine drag personae—I explore the effect of fluid visual presentation on linguistic production, and argue that changes in both the linguistic and visual streams increasingly invoke qualia (see Gal 2013; Harkness 2015) projecting ‘harshness’ and ‘sharpness’ in the construction of fierce femininity. I argue that personae like the fierce queen become iconized through rhematization (see Gal 2013), a process in which qualic congruences are construed and constructed across multiple semiotic modalities. (Iconization, rhematization, qualia, sociophonetics, gender, personae, drag queens)
3

Wang, Yue, Allard Jongman, and Joan Sereno. "Audio-visual clear speech: Articulation, acoustics and perception of segments and tones". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A122. http://dx.doi.org/10.1121/10.0018372.

Abstract:
Research has established that clear speech with enhanced acoustic signal benefits segmental intelligibility. Less attention has been paid to visible articulatory correlates of clear-speech modifications, or to clear-speech effects at the suprasegmental level (e.g., lexical tone). Questions thus arise as to the extent to which clear-speech cues are beneficial in different input modalities and linguistic domains, and how different resources are incorporated. These questions address the fundamental argument in clear-speech research with respect to the trade-off between effects of signal-based phoneme-extrinsic modifications to strengthen overall acoustic salience versus code-based phoneme-specific modifications to maintain phonemic distinctions. In this talk, we report findings from our studies on audio-visual clear speech production and perception, including vowels and fricatives differing in auditory and visual saliency, and lexical tones believed to lack visual distinctiveness. In a 3-stream study, we use computer-vision techniques to extract visible facial cues associated with segmental and tonal productions in plain and clear speech, characterize distinctive acoustic features across speech styles, and compare audio-visual plain and clear speech perception. Findings are discussed in terms of how speakers and perceivers strike a balance between utilizing general saliency-enhancing and category-specific cues across audio-visual modalities and speech styles with the aim of improving intelligibility.
4

Yang, Ziyi, Yuwei Fang, Chenguang Zhu, Reid Pryzant, DongDong Chen, Yu Shi, Yichong Xu, et al. "i-Code: An Integrative and Composable Multimodal Learning Framework". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10880–90. http://dx.doi.org/10.1609/aaai.v37i9.26290.

Abstract:
Human intelligence is multimodal; we integrate visual, linguistic, and acoustic signals to maintain a holistic worldview. Most current pretraining methods, however, are limited to one or two modalities. We present i-Code, a self-supervised pretraining framework where users may flexibly combine the modalities of vision, speech, and language into unified and general-purpose vector representations. In this framework, data from each modality are first given to pretrained single-modality encoders. The encoder outputs are then integrated with a multimodal fusion network, which uses novel merge- and co-attention mechanisms to effectively combine information from the different modalities. The entire system is pretrained end-to-end with new objectives including masked modality unit modeling and cross-modality contrastive learning. Unlike previous research using only video for pretraining, the i-Code framework can dynamically process single, dual, and triple-modality data during training and inference, flexibly projecting different combinations of modalities into a single representation space. Experimental results demonstrate how i-Code can outperform state-of-the-art techniques on five multimodal understanding tasks and single-modality benchmarks, improving by as much as 11% and demonstrating the power of integrative multimodal pretraining.
5

Martínez, Fernando Casanova. "Multimodal exploration of the thank God expressive construction and its implications for translation". Translation, Cognition & Behavior 7, no. 1 (October 10, 2024): 48–89. http://dx.doi.org/10.1075/tcb.00095.mar.

Abstract:
Multimodal research in communication and translation studies is increasingly recognized, yet it remains incompletely explored. Leveraging computational linguistics with both Praat for acoustic analysis and the OpenPose and Rapid Annotator tools for visual analysis, this study delves into the intricate dynamics of the expressive construction thank God, providing a comprehensive examination of both visual and acoustic dimensions. Our objective is to uncover nuanced patterns of multimodal communication embedded within this expression and their implications for Translation and Interpreting. Through an analysis of linguistic features and co-speech gestures present in thank God, we aim to deepen our comprehension of how meaning crisscrosses modalities. Our findings underscore the necessity of a multimodal approach in language studies, emphasizing the requisite to preserve emotional and contextual nuances. The analysis unveils the phonological relevance of the duration of the construction’s second vowel, a key factor for translation. Additionally, data reveals a correlation between the emotion of relief and gestures executed with both hands closer to the chest. Overall, these findings contribute to advancing both multimodal communication research and translation studies, shedding light on the role of multimodal analysis in understanding language and translation dynamics, particularly in the context of constructions like thank God.
6

Kirnosova, Nadiia, and Yuliia Fedotova. "Chinese and Japanese Characters from the Perspective of Multimodal Studies". ATHENS JOURNAL OF PHILOLOGY 8, no. 4 (September 9, 2021): 253–68. http://dx.doi.org/10.30958/ajp.8-4-1.

Abstract:
This article aims to demonstrate that a character can generate at least three different modalities simultaneously – visual, audial and vestibular – and influence a recipient in a deeper and more powerful way than a sign from a phonetic alphabet can. To show this, we chose modern Chinese and Japanese characters as live signs, and analyzed how they function in texts with obvious utilitarian purposes – in advertisements. The main problem we were interested in while conducting this research was the “information capacity” of a character. We found that any character exists in three dimensions simultaneously and generates three modalities at the same time. Its correspondence with morphemes opens two channels for encoding information – first of all, it brings a space for audial modality through the acoustic form of a syllable, and then it opens a space for visual modality through the graphical form of a character. The latter form implies a space for vestibular modality, because as a “figure,” any character occupies its “ground” (a particular square area), which becomes a source of a sense of stability and symmetry, enriching linguistic messages with non-verbal information.
Keywords: advertisement, character, information, mode, multimodality
7

Karpenko, O., V. Neklesova, A. Tkachenko, and M. Karpenko. "SENSORY MODALITY IN ADVERTISING DISCOURSE". Opera in Linguistica Ukrainiana, no. 31 (July 14, 2024): 302–17. http://dx.doi.org/10.18524/2414-0627.2024.31.309450.

Abstract:
The article is dedicated to the study of sensory modalities in advertising discourse. Advertising discourse refers to the language and communication strategies employed in advertising messages as cohesive texts deeply embedded in real-life contexts amidst numerous accompanying background elements within an integrated communicative environment. It encompasses the linguistic choices, persuasive techniques, and stylistic features used to convey marketing messages to a target audience, aiming at attracting attention, creating desire, and encouraging action, typically towards purchasing a product or service. The aim of this article is to analyze the ways of realizing sensory modalities in advertising discourse. The object of the research is advertising discourse, the subject being manifestations of sensory modalities in advertisements. The factual material of the research was selected from collections of popular or iconic advertisements. Advertising discourse often appeals to emotions and utilizes visual elements to communicate the intended message effectively and influence consumer behaviour. Advertisements are heterogeneous, represented by a phrase alone or in combination with a static or dynamic visual image or acoustic accompaniment; these combinations vary significantly, increasing the desirability of the advertised products that impress us daily. Different sensory modalities – visual, associated with images; auditory, associated with sounds and their perception; and kinesthetic, associated with physical sensations – as well as their manifestations, reflected in certain speech predicates, influence how we think, feel, mentally represent our experiences and make choices. The application of sensory modalities in advertising discourse is observed on three levels: on the first level, we deal with the material representation of the advertising message, which results in different communicative types of advertisements; on the second level, the preferred representational system is revealed via predicates, or sensory-based words; on the third level, product names, pragmatonyms, become the bearers of sensory information, consequently appealing to human senses.
8

Dolník, Juraj. "Methodological impulses of Ján Horecký". Journal of Linguistics/Jazykovedný casopis 71, no. 2 (December 1, 2020): 139–56. http://dx.doi.org/10.2478/jazcas-2020-0018.

Abstract:
The author of the study develops the ideas of J. Horecký, which relate to the language sign, the language system, language consciousness and its cultivation. Interpretations of J. Horecký’s statements on the systemic and communicative language sign lead to the conclusion that there is really only a communication sign as an ambivalent significant for users of the language who control the rules of its use. Significant are articulation‐acoustic units, which we feel as fictitious equivalents of what we experience when we are in the intentional state. J. Horecký’s reflections on the language system led the author to confront the user of the language as an actor of language practice with the user realizing himself as a reflexive linguistic being. In this confrontation, the language system came into focus in a practical and reflexive modality. On the background of these modalities of the language system, the author approaches linguistic consciousness in the interpretation of J. Horecký, in order to shed light on it in terms of two questions: (1) What is the degree of linguistic awareness of the mother tongue? (2) What is the “true” cultivation of language consciousness? These questions led the author to confront the linguistic realist with the anti‐realist and to discover a situation in which the linguist believes in realism but holds the position of anti‐realist. The author leans towards the realists and emphasizes the thesis that the representation of the language system is true when it corresponds to the language system resulting from the nature of language.
9

Snijders, Tineke M., Titia Benders, and Paula Fikkert. "Infants Segment Words from Songs—An EEG Study". Brain Sciences 10, no. 1 (January 9, 2020): 39. http://dx.doi.org/10.3390/brainsci10010039.

Abstract:
Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
10

Shah, Shariq, Hossein Ghomeshi, Edlira Vakaj, Emmett Cooper, and Rasheed Mohammad. "An Ensemble-Learning-Based Technique for Bimodal Sentiment Analysis". Big Data and Cognitive Computing 7, no. 2 (April 30, 2023): 85. http://dx.doi.org/10.3390/bdcc7020085.

Abstract:
Human communication is predominantly expressed through speech and writing, which are powerful mediums for conveying thoughts and opinions. Researchers have been studying the analysis of human sentiments for a long time, including the emerging area of bimodal sentiment analysis in natural language processing (NLP). Bimodal sentiment analysis has gained attention in various areas such as social opinion mining, healthcare, banking, and more. However, there is a limited amount of research on bimodal conversational sentiment analysis, which is challenging due to the complex nature of how humans express sentiment cues across different modalities. To address this gap in research, a comparison of multiple data modality models has been conducted on the widely used MELD dataset, which serves as a benchmark for sentiment analysis in the research community. The results show the effectiveness of combining acoustic and linguistic representations using a proposed neural-network-based ensemble learning technique over six transformer and deep-learning-based models, achieving state-of-the-art accuracy.
11

Patel, Twisha, et al. "Deception/Truthful Prediction Based on Facial Feature and Machine Learning Analysis". International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 10 (November 2, 2023): 797–803. http://dx.doi.org/10.17762/ijritcc.v11i10.8595.

Abstract:
Automatic deception detection refers to the investigative practices used to determine whether a person is telling the truth or lying. It has been studied extensively, as it can be useful in many real-life scenarios in health, justice, and security systems. Many psychological studies of deception detection have been reported. Polygraph testing is the current standard technique for detecting deception, but it requires human intervention and training. In recent times, many machine-learning-based approaches have been applied to detect deception. Various modalities, such as thermal imaging, brain activity mapping, acoustic analysis, eye tracking, facial micro-expression processing, and linguistic analysis, are used to detect deception. Machine learning techniques based on facial feature analysis look like a promising path for automatic deception detection. They also work without human intervention and may give better results because they are not affected by race or ethnicity. Moreover, one can carry out a covert operation to find deceit using facial video recording; a covert operation may capture the real personality of deceptive persons. By combining various facial features, such as facial emotion, facial micro-expressions, eye-blink rate, pupil size, and facial action units, better accuracy in deception detection can be achieved.
12

Gijbels, Liesbeth, Jason D. Yeatman, Kaylah Lalonde, and Adrian KC Lee. "Children’s age matters, but not for audiovisual speech enhancement". Journal of the Acoustical Society of America 150, no. 4 (October 2021): A337. http://dx.doi.org/10.1121/10.0008500.

Abstract:
Articulation movements help us identify speech in noisy environments. While this has been observed at almost all ages, the size of the perceived benefit and its relationship to development in children is less understood. Here, we focus on exploring audiovisual speech benefit in typically developing children (N = 160) across a wide age range (4–15 years) by measuring performance via an online audiovisual speech performance task that is low in cognitive and linguistic demands. Specifically, we investigated how audiovisual speech benefit develops with age and the impact of some potentially important intrinsic (e.g., gender, phonological skills) and extrinsic (e.g., choice of stimuli) experimental factors. Our results show increased performance in the individual modalities (audio-only, audiovisual, visual-only) as a function of age, but no difference in the size of audiovisual speech enhancement. Furthermore, older children showed a significant impact of visually distracting stimuli (e.g., mismatched video), whereas these had no additional impact on the performance of the youngest children. No phonological or gender differences were found, given the low cognitive and linguistic demands of this task.
13

Perlman, Marcus, and Ashley A. Cain. "Iconicity in vocalization, comparisons with gesture, and implications for theories on the evolution of language". Gesture 14, no. 3 (December 31, 2014): 320–50. http://dx.doi.org/10.1075/gest.14.3.03per.

Abstract:
Scholars have often reasoned that vocalizations are extremely limited in their potential for iconic expression, especially in comparison to manual gestures (e.g., Armstrong & Wilcox, 2007; Tomasello, 2008). As evidence for an alternative view, we first review the growing body of research related to iconicity in vocalizations, including experimental work on sound symbolism, cross-linguistic studies documenting iconicity in the grammars and lexicons of languages, and experimental studies that examine iconicity in the production of speech and vocalizations. We then report an experiment in which participants created vocalizations to communicate 60 different meanings, including 30 antonymic pairs. The vocalizations were measured along several acoustic properties, and these properties were compared between antonyms. Participants were highly consistent in the kinds of sounds they produced for the majority of meanings, supporting the hypothesis that vocalization has considerable potential for iconicity. In light of these findings, we present a comparison between vocalization and manual gesture, and examine the detailed ways in which each modality can function in the iconic expression of particular kinds of meanings. We further discuss the role of iconic vocalizations and gesture in the evolution of language since our divergence from the great apes. In conclusion, we suggest that human communication is best understood as an ensemble of kinesis and vocalization, not just speech, in which expression in both modalities spans the range from arbitrary to iconic.
14

Mawalim, Candy Olivia, Shogo Okada, and Yukiko I. Nakano. "Task-independent Recognition of Communication Skills in Group Interaction Using Time-series Modeling". ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–27. http://dx.doi.org/10.1145/3450283.

Abstract:
Case studies of group discussions are considered an effective way to assess communication skills (CS). This method can help researchers evaluate participants’ engagement with each other in a specific realistic context. In this article, multimodal analysis was performed to estimate CS indices using a three-task-type group discussion dataset, the MATRICS corpus. The current research investigated the effectiveness of engaging both static and time-series modeling, especially in task-independent settings. This investigation aimed to understand three main points: first, the effectiveness of time-series modeling compared to nonsequential modeling; second, multimodal analysis in a task-independent setting; and third, important differences to consider when dealing with task-dependent and task-independent settings, specifically in terms of modalities and prediction models. Several modalities were extracted (e.g., acoustics, speaking turns, linguistic-related movement, dialog tags, head motions, and face feature sets) for inferring the CS indices as a regression task. Three predictive models, including support vector regression (SVR), long short-term memory (LSTM), and an enhanced time-series model (an LSTM model with a combination of static and time-series features), were taken into account in this study. Our evaluation was conducted by using the R² score in a cross-validation scheme. The experimental results suggested that time-series modeling can improve the performance of multimodal analysis significantly in the task-dependent setting (with the best R² = 0.797 for the total CS index), with word2vec being the most prominent feature. Unfortunately, highly context-related features did not fit well with the task-independent setting. Thus, we propose an enhanced LSTM model for dealing with task-independent settings, and we successfully obtained better performance with the enhanced model than with the conventional SVR and LSTM models (the best R² = 0.602 for the total CS index). In other words, our study shows that a particular time-series modeling can outperform traditional nonsequential modeling for automatically estimating the CS indices of a participant in a group discussion with regard to task dependency.
15

Hagedorn, Christina, Michael Proctor, Louis Goldstein, Stephen M. Wilson, Bruce Miller, Maria Luisa Gorno-Tempini, and Shrikanth S. Narayanan. "Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging". Journal of Speech, Language, and Hearing Research 60, no. 4 (April 14, 2017): 877–91. http://dx.doi.org/10.1044/2016_jslhr-s-15-0112.

Abstract:
Purpose: Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. Method: One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Results: Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Conclusion: Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.
16

Lee, Yoonjeong. "Understanding production variability in multimodal communication". Journal of the Acoustical Society of America 152, no. 4 (October 2022): A275–A276. http://dx.doi.org/10.1121/10.0016254.

Abstract:
A comprehensive view of speech communication requires a consideration of dynamic characteristics in language users’ multimodal behaviors. This talk discusses dynamic properties of both verbal and non-verbal communication behaviors, focusing on variability in voice, speech, and co-speech gesture production. I present a series of experiments that leverage quantitative approaches to examine how the multimodal production units of the cognitive system vary in their dynamic control and realization in executing communication goals. The overarching hypothesis is that surface variations in multimodal behaviors are structured to reflect communicative intentions, exhibiting a tight relation between the linguistic system and recruited modalities. Supporting evidence emerges from two types of data: dynamic voice signals drawn from speech corpora of various speakers, speaking styles, and languages, and time-aligned multidimensional signals of speech and co-speech gestures (simultaneously recorded audio, kinematic, and visual signals). Voice variations are acoustically structured by both biologically relevant factors and factors that vary with language-specific phonology. Furthermore, the spatiotemporal patterning of vocal tract actions and cooccurring body movements systematically varies, reflecting the prosodic structure of language. The findings and the utility of the analytical tools employed in these studies have implications for comparative investigations of communicative behaviors in animals. [Work supported by the NSF.]
17

Klinge, Alex. "On the linguistic interpretation of contractual modalities". Journal of Pragmatics 23, no. 6 (June 1995): 649–75. http://dx.doi.org/10.1016/0378-2166(94)00051-f.

18

Ronga, Irene, Carla Bazzanella, Ferdinando Rossi, and Giandomenico Iannetti. "Linguistic synaesthesia, perceptual synaesthesia, and the interaction between multiple sensory modalities". Pragmatics and Cognition 20, no. 1 (May 7, 2012): 135–67. http://dx.doi.org/10.1075/pc.20.1.06ron.

Abstract:
Recent studies on cortical processing of sensory information highlight the importance of multisensory integration, and define precise rules governing reciprocal influences between inputs of different sensory modalities. We propose that psychophysical interactions between different types of sensory stimuli and linguistic synaesthesia share common origins and mechanisms. To test this hypothesis, we compare neurophysiological findings with corpus-based analyses relating to linguistic synaesthesia. Namely, we present Williams’ hypothesis and its recent developments about the hierarchy of synaesthetic pairings, and examine critical aspects of this theory concerning universality, directionality, sensory categories, and usage of corpora. These theoretical issues are verified against linguistic data derived from corpus-based analyses of Italian synaesthetic pairings related to auditory and tactile modalities. Our findings reveal a strong parallel between linguistic synaesthesia and neurophysiological interactions between different sensory stimuli, suggesting that linguistic synaesthesia is affected by tendencies similar to the rules underlying the perceptual association of distinct sensory modalities.
19

Ekawati, Rosyida. "POWER THROUGH LINGUISTIC MODALITIES IN INDONESIAN PRESIDENTIAL SPEECHES". Discourse and Interaction 12, no. 1 (July 19, 2019): 5–28. http://dx.doi.org/10.5817/di2019-1-5.

Abstract:
Language plays a crucial role in political speech. The use of a particular language can reflect or be influenced by the speaker’s ideology, power, cultural/social background, region, or social status. This paper is concerned with the relationship between language and power, specifically as manifested in the language used by an Indonesian president in international forums. It aims to uncover the power relations that were projected through the linguistic features of the president’s speech texts, particularly the use of modal verbs. Data for this paper are the speeches on the topics of peace and climate change delivered by Susilo Bambang Yudhoyono (SBY) in international forums during his first and second presidential terms. This paper’s analysis of linguistic modalities uses Fairclough’s three-dimensional model of critical discourse analysis (CDA) to answer its research questions. The results show that, in projecting his power, SBY used several linguistic modal verbs. From the context of the modality used it can be understood that the president conveyed his strategic desire to be himself as he tried to relate to the audience (as he assumed it to be) and construct an image of himself, of his audience, and of their relationship. The president produced discourse that embodied assumptions about the social relations between his leadership and the audience and asserted both his legitimate power as president and his expert power. Through the language used, SBY created, sustained, and replicated the fundamental inequalities and asymmetries in the forums he attended.
20

Moriarty Harrelson, Erin. "Deaf people with “no language”: Mobility and flexible accumulation in languaging practices of deaf people in Cambodia". Applied Linguistics Review 10, no. 1 (February 25, 2019): 55–72. http://dx.doi.org/10.1515/applirev-2017-0081.

Abstract:
Deaf people in Cambodia are often represented in the media as lacking language but in reality, deaf people’s repertoires and communicative practices challenge essentialisms regarding modalities and conventional understandings of “language.” Drawing on fieldwork in Cambodia, this article examines how notions of an urban/rural dichotomy devalue the communicative practices of rural deaf people. These ideologies marginalize the creative deployment of various modalities by deaf people in everyday languaging that are not commonly indexed as parts of a linguistic repertoire. Communicative practices such as drawing a picture to communicate, gestures, the use of physical objects such as city maps are devalued because academics and lay people tend to have rigid conceptualizations of language. This article calls for closer attention to modalities such as gestures, the drawing of pictures and the use of physical objects in everyday languaging to interrogate how the “invention” of languages results in distinctions between groups and individuals, especially in terms of access to linguistic resources such as a national signed language and perceptions about the use of modalities other than signing or speaking. In NGO narratives, often echoed by deaf Cambodians themselves, deaf people acquire a signed language only after rural-urban migration, which misrepresents their communicative competencies and creative use of linguistic resources. In reality, deaf people’s linguistic repertoires are constantly expanding as they enter new spaces, resulting in the flexible accumulation of languaging practices and modalities.
21

Gordon, Matthew K., Paul Barthmaier, and Kathy Sands. "A cross‐linguistic acoustic study of fricatives". Journal of the Acoustical Society of America 108, no. 5 (November 2000): 2506. http://dx.doi.org/10.1121/1.4743257.

22

Li, Qiang, Shuang Wang, Yunling Du, and Nicole Müller. "Lexical Tone Perception in Mandarin Chinese Speakers With Aphasia". Chinese Journal of Applied Linguistics 44, no. 1 (March 1, 2021): 54–67. http://dx.doi.org/10.1515/cjal-2021-0004.

Abstract:
The debate over the brain localization of lexical tone processing concerns the functional hypothesis, which holds that lexical tone, owing to its strong linguistic features, is dominant in the left hemisphere, and the acoustic hypothesis, which holds that all pitch patterns, including lexical tone, are dominant in the right hemisphere due to their acoustic features. Lexical tone as a complex signal contains acoustic components that carry linguistic, paralinguistic, and nonlinguistic information. To examine these two hypotheses, the current study adopted triplet stimuli including Chinese characters, their corresponding pinyin with a diacritic, and the four diacritics representing Chinese lexical tones. The stimuli represent the variation of lexical tone in its linguistic and acoustic features. The results of a listening task performed by Mandarin Chinese speakers with and without aphasia support the functional hypothesis: pitch patterns are lateralized to different hemispheres of the brain depending on their functions, with lexical tone lateralized to the left hemisphere as a function of its linguistic features.
23

Yu, Luodi, Jiajing Zeng, Suiping Wang, and Yang Zhang. "Phonetic Encoding Contributes to the Processing of Linguistic Prosody at the Word Level: Cross-Linguistic Evidence From Event-Related Potentials". Journal of Speech, Language, and Hearing Research 64, no. 12 (December 13, 2021): 4791–801. http://dx.doi.org/10.1044/2021_jslhr-21-00037.

Abstract:
Purpose: This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge. Method: Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features. Results: We observed a language-specific effect on the ERP: native stimuli elicited a larger late negative response (LNR) amplitude than nonnative stimuli in the prosodic phonology conditions. However, no such effect was observed in the phoneme-free prosodic acoustic control conditions. Conclusions: The results support the integration view that word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology, beyond auditory processing of the critical acoustic cues at the suprasyllabic level.
24

Lee, Yoonjeong, Marc Garellek, Christina Esposito, and Jody Kreiman. "A cross-linguistic investigation of acoustic voice spaces". Journal of the Acoustical Society of America 150, no. 4 (October 2021): A191. http://dx.doi.org/10.1121/10.0008089.

25

Ozaydin, Selma. "Acoustic and Linguistic Properties of Turkish Whistle Language". Open Journal of Modern Linguistics 08, no. 04 (2018): 99–107. http://dx.doi.org/10.4236/ojml.2018.84011.

26

Rose, Phil. "A linguistic‐phonetic acoustic analysis of Shanghai tones". Australian Journal of Linguistics 13, no. 2 (December 1993): 185–220. http://dx.doi.org/10.1080/07268609308599495.

27

Sharma, Neeraj Kumar, Venkat Krishnamohan, Sriram Ganapathy, Ahana Gangopadhayay, and Lauren Fink. "Acoustic and linguistic features influence talker change detection". Journal of the Acoustical Society of America 148, no. 5 (November 2020): EL414–EL419. http://dx.doi.org/10.1121/10.0002462.

28

Gordon, Matthew, Paul Barthmaier, and Kathy Sands. "A cross-linguistic acoustic study of voiceless fricatives". Journal of the International Phonetic Association 32, no. 2 (December 2002): 141–74. http://dx.doi.org/10.1017/s0025100302001020.

29

Sembiring, Sura Isnainy br, Nurlela Nurlela, and Rozanna Mulyani. "A MODALITY ANALYSIS ON PRESIDENTIAL ELECTION AT 2019 IN INDONESIA: MULTICULTURAL STUDY". JOMSIGN: Journal of Multicultural Studies in Guidance and Counseling 5, no. 1 (March 29, 2021): 18–30. http://dx.doi.org/10.17509/jomsign.v5i1.32274.

Abstract:
A text modality system is a set of views, judgments, and opinions about linguistic phenomena. The theory used in this research is Systemic Functional Linguistics (SFL), initiated by M.A.K. Halliday. The problem addressed in this study was to analyze the types and values of the modalities in the debate from a multicultural perspective. The data used in this study were 183 clauses, analyzed qualitatively using Systemic Functional Linguistic theory. Based on the results obtained through the data analysis, Joko Widodo (Jokowi) favored probability modalization, which shows the clause functioning as an exchange of information to express the speaker's attitude towards what he stated. Meanwhile, 80 types of modalities were found for Prabowo Subianto, dominated by necessity modalities, which indicated that personal opinions or considerations were mandatory for the goods and services offered or requested.
30

Grabowski, Emily. "Differential effects of pitch on perceived duration in linguistic and auditory tasks". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A261. http://dx.doi.org/10.1121/10.0011265.

Abstract:
Duration perception is a complex phenomenon in linguistic studies. While it can be instrumentally measured in a straightforward fashion, its relationship to other acoustic measurements is still not well understood. Previous work has found that duration can influence the perception of other acoustic cues, such as pitch (Brigner, 1988). This implies that it is important to go beyond isolated analysis of individual acoustic cues, and consider the synergistic effects of multiple acoustic cues. This paper presents the results of an experiment designed to test the influence of changing pitch on duration in two tasks, one using linguistic stimuli and one using acoustic stimuli. I find an effect of pitch on duration perception in both tasks. However, the results are not identical. In the auditory task, there was a small effect where higher tones were heard as longer than low tones. However, in the linguistic task, higher tones were heard as shorter. This indicates that pitch can have both a direct and indirect effect on duration perception and that there are complex interactions that may have a significant impact on our language perception.
31

Møller, P., E. Myrseth, P. H. Pedersen, J. L. Larsen, J. Krakenes, and G. Moen. "Acoustic Neuroma - Treatment Modalities: Surgery, gamma-knife or observation?" Acta Oto-Laryngologica 120, no. 6 (October 1, 2000): 34–37. http://dx.doi.org/10.1080/000164800453892.

32

Møller, P. "Acoustic Neuroma - Treatment Modalities: Surgery, gamma-knife or observation?" Acta Oto-Laryngologica 120, no. 543 (January 2000): 34–37. http://dx.doi.org/10.1080/000164800454639-1.

33

Humayun, Mohammad Ali, Hayati Yassin, and Pg Emeroylariffion Abas. "Dialect classification using acoustic and linguistic features in Arabic speech". IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 2 (June 1, 2023): 739. http://dx.doi.org/10.11591/ijai.v12.i2.pp739-746.

Abstract:
Speech dialects refer to linguistic and pronunciation variations in the speech of the same language. Automatic dialect classification requires considerable acoustic and linguistic differences between the different dialect categories of speech. This paper proposes a classification model composed of a combination of classifiers for the Arabic dialects by utilizing both the acoustic and linguistic features of spontaneous speech. The acoustic classification comprises an ensemble of classifiers focusing on different frequency ranges within the short-term spectral features, as well as a classifier utilizing the ‘i-vector’, whilst the linguistic classifiers use features extracted by transformer models pre-trained on large Arabic text datasets. It has been shown that the proposed fusion of multiple classifiers achieves a classification accuracy of 82.44% for the identification task of five Arabic dialects. This represents the highest accuracy reported on the dataset, despite the relative simplicity of the proposed model, and has shown its applicability and relevance for dialect identification tasks.
34

Shen, Feiyu, Chenpeng Du, and Kai Yu. "Acoustic Word Embeddings for End-to-End Speech Synthesis". Applied Sciences 11, no. 19 (September 27, 2021): 9010. http://dx.doi.org/10.3390/app11199010.

Abstract:
The most recent end-to-end speech synthesis systems use phonemes as acoustic input tokens and ignore the information about which word the phonemes come from. However, many words have their specific prosody type, which may significantly affect the naturalness. Prior works have employed pre-trained linguistic word embeddings as TTS system input. However, since linguistic information is not directly relevant to how words are pronounced, TTS quality improvement of these systems is mild. In this paper, we propose a novel and effective way of jointly training acoustic phone and word embeddings for end-to-end TTS systems. Experiments on the LJSpeech dataset show that the acoustic word embeddings dramatically decrease both the training and validation loss in phone-level prosody prediction. Subjective evaluations on naturalness demonstrate that the incorporation of acoustic word embeddings can significantly outperform both pure phone-based system and the TTS system with pre-trained linguistic word embedding.
35

Kemp, Nenagh. "Commentary on Ravid & Tolchinsky ‘Developing linguistic literacy: a comprehensive model’". Journal of Child Language 29, no. 2 (May 2002): 449–88. http://dx.doi.org/10.1017/s0305000902255348.

Abstract:
Ravid & Tolchinsky (R&T) introduce an important concept for the field of language acquisition and development: linguistic literacy. To be ‘linguistically literate’, they say, one must possess both ‘knowledge of the two major linguistic modalities – speech and writing’ and ‘a linguistic repertoire that encompasses a wide range of registers and genres’. These two statements make clear the great breadth of research that the authors have considered in formulating their model.
36

Piata, Anna. "Stylistic humor across modalities". Pragmatics of Internet Memes 3, no. 2 (July 1, 2019): 174–201. http://dx.doi.org/10.1075/ip.00031.pia.

Abstract:
This paper is concerned with ‘Classical Art Memes’, a category of internet memes that distinctively derives its visual input from classical and medieval art. I specifically show that humor in Classical Art Memes arises from incongruity among different stylistic varieties, namely a colloquial linguistic expression in the text and a classical-style artwork in the image. Given that stylistic incongruity cross-cuts modalities, I further argue that Classical Art Memes make a case for what I call ‘multimodal stylistic humor’. The analysis is based on a small corpus of when-memes, whereby the image complements a when-clause. The findings of the study suggest that humor in Classical Art Memes serves to convey affective meanings that emerge from the embodied affect in the image that is textually recontextualized in contemporary terms. Such meanings ultimately convey a critical commentary on knowable features of modern life.
37

Flipsen, Peter, Lawrence Shriberg, Gary Weismer, Heather Karlsson, and Jane McSweeny. "Acoustic Characteristics of /s/ in Adolescents". Journal of Speech, Language, and Hearing Research 42, no. 3 (June 1999): 663–77. http://dx.doi.org/10.1044/jslhr.4203.663.

Abstract:
The goal of the current study was to construct a reference database against which misarticulations of /s/ can be compared. Acoustic data for 26 typically speaking 9- to 15-year-olds were examined to resolve measurement issues in acoustic analyses, including alternative sampling points within the /s/ frication; the informativeness of linear versus Bark transformations of each of the 4 spectral moments of /s/ (Forrest, Weismer, Milenkovic, & Dougall, 1988); and measurement effects associated with linguistic context, age, and sex. Analysis of the reference data set indicates that acoustic characterization of /s/ is appropriately and optimally (a) obtained from the midpoint of /s/, (b) represented in linear scale, (c) reflected in summary statistics for the 1st and 3rd spectral moments, (d) referenced to individual linguistic-phonetic contexts, (e) collapsed across the age range studied, and (f) described individually by sex.
38

Filichkina, Tatyana P. "Idioms as a means of creating the image of China in the English language media discourse". Izvestiya of Saratov University. New Series. Series: Philology. Journalism 21, no. 3 (August 25, 2021): 254–60. http://dx.doi.org/10.18500/1817-7115-2021-21-3-254-260.

Abstract:
The article deals with the application of phraseological units in describing China in the English language media discourse. The evaluation in idioms is determined by means of discourse analysis which takes into account extra-linguistic, linguistic and cognitive factors. Subjective modalities specify the evaluative potential of idioms and show the mechanism of manipulating public opinion in the media discourse.
39

Kösem, Anne, Anahita Basirat, Leila Azizi, and Virginie van Wassenhove. "High-frequency neural activity predicts word parsing in ambiguous speech streams". Journal of Neurophysiology 116, no. 6 (December 1, 2016): 2497–512. http://dx.doi.org/10.1152/jn.00074.2016.

Abstract:
During speech listening, the brain parses a continuous acoustic stream of information into computational units (e.g., syllables or words) necessary for speech comprehension. Recent neuroscientific hypotheses have proposed that neural oscillations contribute to speech parsing, but whether they do so on the basis of acoustic cues (bottom-up acoustic parsing) or as a function of available linguistic representations (top-down linguistic parsing) is unknown. In this magnetoencephalography study, we contrasted acoustic and linguistic parsing using bistable speech sequences. While listening to the speech sequences, participants were asked to maintain one of the two possible speech percepts through volitional control. We predicted that the tracking of speech dynamics by neural oscillations would not only follow the acoustic properties but also shift in time according to the participant's conscious speech percept. Our results show that the latency of high-frequency activity (specifically, beta and gamma bands) varied as a function of the perceptual report. In contrast, the phase of low-frequency oscillations was not strongly affected by top-down control. Whereas changes in low-frequency neural oscillations were compatible with the encoding of prelexical segmentation cues, high-frequency activity specifically informed on an individual's conscious speech percept.
40

Ravid, Dorit, and Yehudit Chen-Djemal. "Spoken and written narration in Hebrew". Written Language and Literacy 18, no. 1 (February 12, 2015): 56–81. http://dx.doi.org/10.1075/wll.18.1.03rav.

Abstract:
The study is premised on speech and writing relying on differently coordinated temporal frames of communication, aiming to pinpoint the conceptual and linguistic differences between spoken and written Hebrew narration. This is a case study presenting in-depth psycholinguistic analyses of the oral and written versions of a personal-experience story produced by the same adult narrator in Hebrew, taking into account discursive functions, discourse stance, linguistic expression, and information flow, processing, and cohesion. Findings of parallel spoken and written content units presenting the same narrative information point to the interface of the narrative genre with the spoken and written modalities, together with the mature cognitive, linguistic, and social skills and experience of adulthood. Both spoken and written personal-experience adult narrative versions have a non-personal, non-specific, detached stance, though the written units are more abstract and syntactically complex. Adult narrating skill encompasses both modalities, recruiting different devices for the expression of cohesion.
41

Kim, YouJin, YeonJoo Jung, and Stephen Skalicky. "LINGUISTIC ALIGNMENT, LEARNER CHARACTERISTICS, AND THE PRODUCTION OF STRANDED PREPOSITIONS IN RELATIVE CLAUSES". Studies in Second Language Acquisition 41, no. 5 (May 23, 2019): 937–69. http://dx.doi.org/10.1017/s0272263119000093.

Abstract:
The current study examined the occurrence and benefits of linguistic alignment in two modalities, face-to-face (FTF) and synchronous computer-mediated communication (SCMC), focusing on stranded prepositions in relative clauses. It further examined how learner characteristics (i.e., working memory, language proficiency, previous knowledge of the target structure) mediate the effects of linguistic alignment. Ninety-four Korean students were assigned to one of the following groups: FTF alignment, SCMC alignment, FTF control, and SCMC control. The alignment experimental groups completed two alignment sessions, finished three stranded preposition tests, and carried out a running span test and cloze test over three weeks. Results indicated not only that linguistic alignment occurred in both FTF and SCMC modes but also that alignment was facilitated significantly more in the SCMC than FTF interactions. Furthermore, the findings suggest immediate and delayed learning effects in both modalities, and that learners’ prior knowledge of the target structure was significantly associated with the occurrence of alignment.
42

Abu-Shnein, Ahmed. "Bilingualism Impact on Intelligence and Scholastic Achievement". لارك 1, no. 40 (December 31, 2020): 1163–56. http://dx.doi.org/10.31185/lark.vol1.iss40.1638.

Abstract:
Bilingualism can refer to the ability to preserve linguistic skills, to a certain level, in two separate linguistic systems and across the four linguistic modalities: listening, speaking, reading, and writing. It is related to social, psychological, economic, and political factors. The aim of this study is to review previous studies that targeted the influence, whether positive or negative, that a second language may have on the intelligence, and in turn the scholastic achievement, of bilingual students versus the intelligence and scholastic achievement of monolinguals.
43

Lee, Yoonjeong, and Jody Kreiman. "Linguistic and personal influences on speaker variability". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A62–A63. http://dx.doi.org/10.1121/10.0010662.

Full text source
Abstract:
Our previous studies examined the manner in which within- and between-speaker acoustic variability in voice follows patterns determined by biological factors, the language spoken, and individual idiosyncrasies. To date, we have analyzed data from speakers of English, Seoul Korean, and Hmong, which differ in whether they contrast phonation type and/or tone. We found several factors that consistently account for acoustic variability across languages, but also factors that vary with phonology. The present study adds Gujarati (which contrasts breathy with modal phonation) and Thai (a tone language without contrastive phonation) to this work. We hypothesize that F0 will emerge from analyses of Thai, as it did for Hmong and Korean (but not for English), and that differences in the amplitudes of lower harmonics will emerge for Gujarati, as occurred for Hmong, but not for English or Korean. We further hypothesize that two factors—the balance of high-frequency harmonic and inharmonic energy and formant dispersion—will emerge as the most important factors explaining acoustic variance for these new sets of speakers and languages, as they have in our previous studies. Such a result would be consistent with the view that speaker variability is governed by both biological and linguistic factors.
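For readers unfamiliar with the measures named here, a minimal sketch of two of them (mean F0 and formant dispersion) using the parselmouth interface to Praat. The file path is a placeholder, and the analysis choices (midpoint formants, F1–F4) are illustrative assumptions, not the authors' protocol:

# Illustrative extraction of mean F0 and formant dispersion with parselmouth;
# "speaker.wav" is a placeholder path.
import parselmouth

snd = parselmouth.Sound("speaker.wav")

# F0: mean over voiced frames (unvoiced frames come back as 0)
pitch = snd.to_pitch()
f0 = pitch.selected_array['frequency']
voiced = f0[f0 > 0]
print("mean F0 (Hz):", voiced.mean())

# Formant dispersion: mean spacing of F1-F4 at the temporal midpoint
formants = snd.to_formant_burg()
t_mid = snd.get_total_duration() / 2
f = [formants.get_value_at_time(i, t_mid) for i in (1, 2, 3, 4)]
print("formant dispersion (Hz):", (f[3] - f[0]) / 3)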
ABNT, Harvard, Vancouver, APA, etc. styles
45

Guo, Li Xin, Jia Chao Cui, and Yi Min Zhang. "Modal Analysis of Interior Acoustic Pressure Distribution of Bus Bodywork". Applied Mechanics and Materials 29-32 (August 2010): 1997–2001. http://dx.doi.org/10.4028/www.scientific.net/amm.29-32.1997.

Full text source
Abstract:
The vibration and noise characteristics of an automobile have become an important guideline for evaluating a vehicle during development, bearing on both manufacturing and comfort. In this study, finite element modal analysis was used to extract the acoustic resonant frequencies and acoustic mode shapes. The results show that the influence of vehicle seats on the acoustic modes should be considered in the acoustic design of the vehicle interior cavity. For the lower acoustic modes, the areas of higher acoustic pressure mainly lie at the anterior/posterior ends or the middle of the vehicle interior cavity.
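The eigenvalue formulation behind such an acoustic modal analysis can be shown on a toy 1-D cavity. This sketch is only an analogue of the paper's 3-D finite element computation, with illustrative parameters:

# Toy 1-D analogue of acoustic modal extraction: eigenfrequencies of a
# rigid-walled air column, found by turning the Helmholtz equation
# -p'' = k^2 p into a matrix eigenvalue problem.
import numpy as np
from scipy.linalg import eigh_tridiagonal

c = 343.0   # speed of sound in air (m/s)
L = 10.0    # cavity length (m), roughly bus-interior scale (assumed)
n = 400     # grid points
h = L / (n - 1)

# Finite-difference Laplacian with rigid (zero-velocity) ends
d = np.full(n, 2.0)
d[0] = d[-1] = 1.0
e = np.full(n - 1, -1.0)
k2, modes = eigh_tridiagonal(d / h**2, e / h**2)

# Skip the trivial k = 0 uniform-pressure mode, convert k to frequency
freqs = np.sqrt(k2[1:6]) * c / (2 * np.pi)
print("numerical modes (Hz):", np.round(freqs, 2))
print("analytic n*c/(2L)   :", [round(m * c / (2 * L), 2) for m in range(1, 6)])
# Columns of `modes` are the pressure mode shapes; for the low modes the
# pressure maxima sit at the cavity ends (and, for mode 2, the middle),
# matching the qualitative picture in the abstract.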
ABNT, Harvard, Vancouver, APA, etc. styles
46

Gorter, Durk, and Jasone Cenoz. "Translanguaging and linguistic landscapes". Linguistic Landscape: An International Journal 1, no. 1-2 (June 19, 2015): 54–74. http://dx.doi.org/10.1075/ll.1.1-2.04gor.

Full text source
Abstract:
In this article we discuss the concept of translanguaging in relation to a holistic view of linguistic landscapes that goes beyond the analysis of individual signs. On the one hand, we look at instances of multilingual signage as a combination of linguistic resources. On the other hand, at the neighborhood level the individual signs combine, alternate, and mix to shape linguistic landscapes as a whole. We expand our "Focus on Multilingualism" approach from school settings to the multilingual cityscape. One bookshop and its surrounding neighborhoods in Donostia-San Sebastián illustrate how readers navigate between languages and go across linguistic borders. Through translanguaging we foreground the co-occurrence of different linguistic forms, signs, and modalities. At the neighborhood level, a space emerges in which translanguaging goes beyond the scope of single signs and separate languages. We conclude that translanguaging is an approach to linguistic landscapes that takes the study of multilingualism forward.
ABNT, Harvard, Vancouver, APA, etc. styles
47

Tessendorf, Bernd, Matjaz Debevc, Peter Derleth, Manuela Feilner, Franz Gravenhorst, Daniel Roggen, Thomas Stiefmeier, and Gerhard Tröster. "Design of a multimodal hearing system". Computer Science and Information Systems 10, no. 1 (2013): 483–501. http://dx.doi.org/10.2298/csis120423012t.

Full text source
Abstract:
Hearing instruments (HIs) have become context-aware devices that analyze the acoustic environment in order to automatically adapt sound processing to the user's current hearing wish. However, in the same acoustic environment an HI user can have different hearing wishes requiring different behaviors from the hearing instrument. In these cases, the audio signal alone contains too little contextual information to determine the user's hearing wish. Modalities additional to sound can provide the missing information to improve the adaptation. In this work, we review additional modalities to sound in HIs and present a prototype of a newly developed wireless multimodal hearing system. The platform takes into account additional sensor modalities such as the user's body movement and location. We characterize the system in terms of runtime, latency, and reliability of the wireless connection, and point out possibilities arising from the novel approach.
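The fusion idea the abstract describes can be caricatured in a few lines: the same acoustic scene maps to different hearing programs depending on movement and location. The classes, thresholds, and program names below are invented for illustration and are not the system's actual design:

# Hedged sketch of multimodal context fusion for hearing-program selection.
# All labels and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Context:
    acoustic_scene: str   # e.g. "speech_in_noise", "music", "quiet"
    accel_rms: float      # body-movement intensity from a wearable sensor
    location: str         # e.g. "street", "home"

def select_program(ctx: Context) -> str:
    # Same acoustic scene, different hearing wishes:
    if ctx.acoustic_scene == "speech_in_noise":
        # Walking outdoors -> keep environmental awareness;
        # sitting still indoors -> focus on the conversation partner.
        if ctx.accel_rms > 1.5 and ctx.location == "street":
            return "omnidirectional_with_noise_reduction"
        return "directional_speech_focus"
    if ctx.acoustic_scene == "music":
        return "music_wide_dynamics"
    return "default"

print(select_program(Context("speech_in_noise", 2.0, "street")))
print(select_program(Context("speech_in_noise", 0.1, "home")))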
ABNT, Harvard, Vancouver, APA, etc. styles
48

Olson, Daniel J. "Short-Term Sources of Cross-Linguistic Phonetic Influence: Examining the Role of Linguistic Environment". Languages 5, no. 4 (October 24, 2020): 43. http://dx.doi.org/10.3390/languages5040043.

Full text source
Abstract:
While previous research has shown that bilinguals are able to effectively maintain two sets of phonetic norms, these two phonetic systems experience varying degrees of cross-linguistic influence, driven by both long-term (e.g., proficiency, immersion) and short-term (e.g., bilingual language contexts, code-switching, sociolinguistic) factors. This study examines the potential for linguistic environment, or the language norms of the broader community in which an interaction takes place, to serve as a source of short-term cross-linguistic phonetic influence. To investigate the role of linguistic environment, late bilinguals (L1 English—L2 Spanish) produced Spanish utterances in two sessions that differed in their linguistic environments: an English-dominant linguistic environment (Indiana, USA) and a Spanish-dominant linguistic environment (Madrid, Spain). Productions were analyzed at the fine-grained acoustic level, through an acoustic analysis of voice onset time, as well as more holistically through native speaker global accent ratings. Results showed that linguistic environment did not significantly impact either measure of phonetic production, regardless of a speaker’s second language proficiency. These results, in conjunction with previous results on long- and short-term sources of phonetic influence, suggest a possible primacy of the immediate context of an interaction, rather than broader community norms, in determining language mode and cross-linguistic influence.
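The core comparison here (the same speakers' voice onset time across the two environments) amounts to a paired test. A sketch with fabricated placeholder values, not the study's data:

# Sketch of the statistical comparison the abstract implies: per-speaker
# mean VOT (ms) for Spanish stops in each linguistic environment.
import numpy as np
from scipy import stats

vot_indiana = np.array([24.1, 19.8, 27.5, 22.0, 25.3, 21.7])  # placeholder values
vot_madrid  = np.array([23.4, 20.5, 26.1, 21.2, 24.8, 22.3])  # placeholder values

t, p = stats.ttest_rel(vot_indiana, vot_madrid)
print(f"paired t = {t:.2f}, p = {p:.3f}")
# A non-significant p here would parallel the paper's finding that
# linguistic environment did not shift phonetic production.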
ABNT, Harvard, Vancouver, APA, etc. styles
49

Siren, Kathleen. "Visual and acoustic influences on linguistic discrimination during interviews". Journal of the Acoustical Society of America 151, no. 4 (April 2022): A278–A279. http://dx.doi.org/10.1121/10.0011336.

Full text source
Abstract:
This study investigates the influence of bias during the interview process. University students of different races and ethnicities recorded pre-determined responses to questions that might be asked during a graduate admissions interview. These recorded responses were played for a group of university professors and a group of non-academicians. The listeners heard student responses under three conditions: (1) audio only, (2) audio paired with a matched picture of the student, and (3) audio paired with a picture of a student of a different race and ethnicity. Listeners rated responses along several continua, including knowledge of field, competency, and motivation. Student responses were compared across several acoustic features including fundamental frequency, pitch range, and rate. Results are discussed in terms of potential linguistic discrimination that may be exacerbated during face-to-face graduate admissions interviews and acoustic characteristics of speech that may influence perceptual bias based on race.
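A hedged sketch of how the acoustic features named here might be extracted with parselmouth; the audio path and syllable count are placeholders:

# Illustrative extraction of mean F0, pitch range, and speaking rate;
# "response.wav" and n_syllables are hypothetical.
import parselmouth

snd = parselmouth.Sound("response.wav")
pitch = snd.to_pitch()
f0 = pitch.selected_array['frequency']
voiced = f0[f0 > 0]   # drop unvoiced frames, which come back as 0

print("mean F0 (Hz):    ", voiced.mean())
print("pitch range (Hz):", voiced.max() - voiced.min())

n_syllables = 42      # hypothetical count for the scripted response
print("rate (syll/s):   ", n_syllables / snd.get_total_duration())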
ABNT, Harvard, Vancouver, APA, etc. styles
50

Ryalls, John, and Ivar Reinvang. "Functional Lateralization of Linguistic Tones: Acoustic Evidence from Norwegian". Language and Speech 29, no. 4 (October 1986): 389–98. http://dx.doi.org/10.1177/002383098602900405.

Full text source
ABNT, Harvard, Vancouver, APA, etc. styles