A ready-made bibliography on the topic "Acoustic and Linguistic Modalities"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Acoustic and Linguistic Modalities".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a publication as a ".pdf" file and read its abstract online, whenever these options are available in the source's metadata.

Journal articles on the topic "Acoustic and Linguistic Modalities"

1

Barón-Birchenall, Leonardo. "Phonetic Accommodation During Conversational Interactions: An Overview". Revista Guillermo de Ockham 21, no. 2 (March 22, 2023): in press. http://dx.doi.org/10.21500/22563202.6150.

Abstract:
During conversational interactions such as tutoring, instruction-giving tasks, verbal negotiations, or just talking with friends, interlocutors’ behaviors experience a series of changes due to the characteristics of their counterpart and to the interaction itself. These changes are pervasively present in every social interaction, and most of them occur in the sounds and rhythms of our speech, which is known as acoustic-prosodic accommodation, or simply phonetic accommodation. The consequences, linguistic and social constraints, and underlying cognitive mechanisms of phonetic accommodation have been studied for at least 50 years, due to the importance of the phenomenon to several disciplines such as linguistics, psychology, and sociology. Based on the analysis and synthesis of the existing empirical research literature, in this paper we present a structured and comprehensive review of the qualities, functions, onto- and phylogenetic development, and modalities of phonetic accommodation.
2

Calder, Jeremy. "The fierceness of fronted /s/: Linguistic rhematization through visual transformation". Language in Society 48, no. 1 (October 11, 2018): 31–64. http://dx.doi.org/10.1017/s004740451800115x.

Abstract:
This article explores the roles that language and the body play in the iconization of cross-modal personae (see Agha 2003, 2004). Focusing on a community of radical drag queens in San Francisco, I analyze the interplay of visual presentation and acoustic dimensions of /s/ in the construction of the fierce queen persona, which embodies an extreme, larger-than-life, and anti-normative type of femininity. Taking data from transformations—conversations during which queens visually transform from male-presenting into their feminine drag personae—I explore the effect of fluid visual presentation on linguistic production, and argue that changes in both the linguistic and visual streams increasingly invoke qualia (see Gal 2013; Harkness 2015) projecting ‘harshness’ and ‘sharpness’ in the construction of fierce femininity. I argue that personae like the fierce queen become iconized through rhematization (see Gal 2013), a process in which qualic congruences are construed and constructed across multiple semiotic modalities. (Iconization, rhematization, qualia, sociophonetics, gender, personae, drag queens)
3

Wang, Yue, Allard Jongman, and Joan Sereno. "Audio-visual clear speech: Articulation, acoustics and perception of segments and tones". Journal of the Acoustical Society of America 153, no. 3_supplement (March 1, 2023): A122. http://dx.doi.org/10.1121/10.0018372.

Abstract:
Research has established that clear speech with enhanced acoustic signal benefits segmental intelligibility. Less attention has been paid to visible articulatory correlates of clear-speech modifications, or to clear-speech effects at the suprasegmental level (e.g., lexical tone). Questions thus arise as to the extent to which clear-speech cues are beneficial in different input modalities and linguistic domains, and how different resources are incorporated. These questions address the fundamental argument in clear-speech research with respect to the trade-off between effects of signal-based phoneme-extrinsic modifications to strengthen overall acoustic salience versus code-based phoneme-specific modifications to maintain phonemic distinctions. In this talk, we report findings from our studies on audio-visual clear speech production and perception, including vowels and fricatives differing in auditory and visual saliency, and lexical tones believed to lack visual distinctiveness. In a 3-stream study, we use computer-vision techniques to extract visible facial cues associated with segmental and tonal productions in plain and clear speech, characterize distinctive acoustic features across speech styles, and compare audio-visual plain and clear speech perception. Findings are discussed in terms of how speakers and perceivers strike a balance between utilizing general saliency-enhancing and category-specific cues across audio-visual modalities and speech styles with the aim of improving intelligibility.
4

Yang, Ziyi, Yuwei Fang, Chenguang Zhu, Reid Pryzant, DongDong Chen, Yu Shi, Yichong Xu, et al. "i-Code: An Integrative and Composable Multimodal Learning Framework". Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 9 (June 26, 2023): 10880–90. http://dx.doi.org/10.1609/aaai.v37i9.26290.

Abstract:
Human intelligence is multimodal; we integrate visual, linguistic, and acoustic signals to maintain a holistic worldview. Most current pretraining methods, however, are limited to one or two modalities. We present i-Code, a self-supervised pretraining framework where users may flexibly combine the modalities of vision, speech, and language into unified and general-purpose vector representations. In this framework, data from each modality are first given to pretrained single-modality encoders. The encoder outputs are then integrated with a multimodal fusion network, which uses novel merge- and co-attention mechanisms to effectively combine information from the different modalities. The entire system is pretrained end-to-end with new objectives including masked modality unit modeling and cross-modality contrastive learning. Unlike previous research using only video for pretraining, the i-Code framework can dynamically process single, dual, and triple-modality data during training and inference, flexibly projecting different combinations of modalities into a single representation space. Experimental results demonstrate how i-Code can outperform state-of-the-art techniques on five multimodal understanding tasks and single-modality benchmarks, improving by as much as 11% and demonstrating the power of integrative multimodal pretraining.
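As a rough illustration of the modality-flexible fusion the abstract describes, the sketch below combines whichever modality encodings are present into one fixed-size representation. All names, dimensions, and the element-wise averaging step are assumptions for illustration only; the actual i-Code system uses pretrained single-modality encoders and a fusion network with merge- and co-attention mechanisms.

```python
from statistics import fmean

SHARED_DIM = 4

def fuse(encodings):
    """Combine whichever modality encodings are present (each assumed
    already projected to the shared dimension) into one fixed-size
    vector by element-wise averaging -- a toy stand-in for the fusion
    network."""
    vectors = list(encodings.values())
    return [fmean(dim_vals) for dim_vals in zip(*vectors)]

# Hypothetical encoder outputs for one example.
vision   = [0.2, 0.0, 0.5, 0.1]
language = [0.4, 0.2, 0.1, 0.3]
speech   = [0.0, 0.4, 0.3, 0.5]

# Single-, dual-, and triple-modality inputs all land in the same space.
for combo in ({"vision": vision},
              {"vision": vision, "language": language},
              {"vision": vision, "language": language, "speech": speech}):
    assert len(fuse(combo)) == SHARED_DIM
```

The point of the sketch is only the dynamic-combination property: any subset of modalities maps into a single shared representation space.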
5

Martínez, Fernando Casanova. "Multimodal exploration of the thank God expressive construction and its implications for translation". Translation, Cognition & Behavior 7, no. 1 (October 10, 2024): 48–89. http://dx.doi.org/10.1075/tcb.00095.mar.

Abstract:
Multimodal research in communication and translation studies is increasingly recognized, yet it remains incompletely explored. Leveraging computational linguistics with Praat for acoustic analysis and the OpenPose and Rapid Annotator tools for visual analysis, this study delves into the intricate dynamics of the expressive construction thank God, providing a comprehensive examination of both visual and acoustic dimensions. Our objective is to uncover nuanced patterns of multimodal communication embedded within this expression and their implications for Translation and Interpreting. Through an analysis of linguistic features and co-speech gestures present in thank God, we aim to deepen our comprehension of how meaning crisscrosses modalities. Our findings underscore the necessity of a multimodal approach in language studies, emphasizing the need to preserve emotional and contextual nuances. The analysis unveils the phonological relevance of the duration of the construction's second vowel, a key factor for translation. Additionally, the data reveal a correlation between the emotion of relief and gestures executed with both hands closer to the chest. Overall, these findings contribute to advancing both multimodal communication research and translation studies, shedding light on the role of multimodal analysis in understanding language and translation dynamics, particularly in the context of constructions like thank God.
6

Kirnosova, Nadiia, and Yuliia Fedotova. "Chinese and Japanese Characters from the Perspective of Multimodal Studies". ATHENS JOURNAL OF PHILOLOGY 8, no. 4 (September 9, 2021): 253–68. http://dx.doi.org/10.30958/ajp.8-4-1.

Abstract:
This article aims to demonstrate that a character can generate at least three different modalities simultaneously – visual, audial and vestibular – and thereby influence a recipient in a deeper and more powerful way than a sign from a phonetic alphabet can. To show this, we chose modern Chinese and Japanese characters as live signs and analyzed how they function in texts with obvious utilitarian purposes – in advertisements. The main problem we were interested in while conducting this research was the "information capacity" of a character. We find that any character exists in three dimensions simultaneously and generates three modalities at the same time. Its correspondence with morphemes opens two channels for encoding information: first, it creates space for audial modality through the acoustic form of a syllable, and then it opens space for visual modality through the graphical form of a character. The latter form implies a space for vestibular modality, because as a "figure," any character occupies its "ground" (a particular square area), which becomes a source of a sense of stability and symmetry, enriching linguistic messages with non-verbal information. Keywords: advertisement, character, information, mode, multimodality
7

Karpenko, O., V. Neklesova, A. Tkachenko, and M. Karpenko. "SENSORY MODALITY IN ADVERTISING DISCOURSE". Opera in Linguistica Ukrainiana, no. 31 (July 14, 2024): 302–17. http://dx.doi.org/10.18524/2414-0627.2024.31.309450.

Abstract:
The article is dedicated to the study of sensory modalities in advertising discourse. Advertising discourse refers to the language and communication strategies employed in advertising messages as cohesive texts deeply embedded in real-life contexts amidst numerous accompanying background elements within an integrated communicative environment. It encompasses the linguistic choices, persuasive techniques, and stylistic features used to convey marketing messages to a target audience, aiming at attracting attention, creating desire, and encouraging action, typically towards purchasing a product or service. The aim of this article is to analyze the ways sensory modalities are realized in advertising discourse. The object of the research is advertising discourse, the subject being manifestations of sensory modalities in advertisements. The factual material of the research was selected from collections of popular or iconic advertisements. Advertising discourse often appeals to emotions and utilizes visual elements to communicate the intended message effectively and influence consumer behaviour. Advertisements are heterogeneous: a phrase may appear alone or in combination with a static or dynamic visual image or acoustic accompaniment, and these combinations vary significantly, increasing the desirability of the advertised products that impress us daily. Different sensory modalities – visual, associated with images; auditory, associated with sounds and their perception; and kinesthetic, associated with physical sensations – together with their manifestations, reflected in certain speech predicates, influence how we think, feel, mentally represent our experiences, and make choices.
The application of sensory modalities in advertising discourse is observed on three levels: on the first level, we deal with the material representation of the advertising message, which results in different communicative types of advertisements; on the second level, the preferred representational system is revealed via predicates, or sensory-based words; on the third level, product names, pragmatonyms, become the bearers of sensory information, consequently appealing to human senses.
8

Dolník, Juraj. "Methodological impulses of Ján Horecký". Journal of Linguistics/Jazykovedný casopis 71, no. 2 (December 1, 2020): 139–56. http://dx.doi.org/10.2478/jazcas-2020-0018.

Abstract:
The author of the study develops the ideas of J. Horecký, which relate to the language sign, the language system, language consciousness and its cultivation. Interpretations of J. Horecký’s statements on the systemic and communicative language sign lead to the conclusion that there is really only a communication sign as an ambivalent significant for users of the language who control the rules of its use. Significant are articulation‐acoustic units, which we feel as fictitious equivalents of what we experience when we are in the intentional state. J. Horecký’s reflections on the language system led the author to confront the user of the language as an actor of language practice with the user realizing himself as a reflexive linguistic being. In this confrontation, the language system came into focus in a practical and reflexive modality. On the background of these modalities of the language system, the author approaches linguistic consciousness in the interpretation of J. Horecký, in order to shed light on it in terms of two questions: (1) What is the degree of linguistic awareness of the mother tongue? (2) What is the “true” cultivation of language consciousness? These questions led the author to confront the linguistic realist with the anti‐realist and to discover a situation in which the linguist believes in realism but holds the position of anti‐realist. The author leans towards the realists and emphasizes the thesis that the representation of the language system is true when it corresponds to the language system resulting from the nature of language.
9

Snijders, Tineke M., Titia Benders, and Paula Fikkert. "Infants Segment Words from Songs—An EEG Study". Brain Sciences 10, no. 1 (January 9, 2020): 39. http://dx.doi.org/10.3390/brainsci10010039.

Abstract:
Children’s songs are omnipresent and highly attractive stimuli in infants’ input. Previous work suggests that infants process linguistic–phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children’s songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
10

Shah, Shariq, Hossein Ghomeshi, Edlira Vakaj, Emmett Cooper, and Rasheed Mohammad. "An Ensemble-Learning-Based Technique for Bimodal Sentiment Analysis". Big Data and Cognitive Computing 7, no. 2 (April 30, 2023): 85. http://dx.doi.org/10.3390/bdcc7020085.

Abstract:
Human communication is predominantly expressed through speech and writing, which are powerful mediums for conveying thoughts and opinions. Researchers have been studying the analysis of human sentiments for a long time, including the emerging area of bimodal sentiment analysis in natural language processing (NLP). Bimodal sentiment analysis has gained attention in various areas such as social opinion mining, healthcare, banking, and more. However, there is a limited amount of research on bimodal conversational sentiment analysis, which is challenging due to the complex nature of how humans express sentiment cues across different modalities. To address this gap in research, a comparison of multiple data modality models has been conducted on the widely used MELD dataset, which serves as a benchmark for sentiment analysis in the research community. The results show the effectiveness of combining acoustic and linguistic representations using a proposed neural-network-based ensemble learning technique over six transformer and deep-learning-based models, achieving state-of-the-art accuracy.
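The idea of combining acoustic and linguistic representations can be sketched minimally as a weighted late fusion of per-modality class probabilities. This is an assumption-laden toy, not the paper's model: the authors' ensemble is a trained neural network over six transformer and deep-learning base models, whereas here the two modality scores are simply weight-averaged.

```python
from math import exp

LABELS = ["negative", "neutral", "positive"]

def softmax(scores):
    """Turn raw classifier scores into a probability distribution."""
    m = max(scores)
    exps = [exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def ensemble_predict(acoustic_scores, linguistic_scores, w=0.5):
    """Weighted late fusion: average the per-class probabilities from a
    (hypothetical) acoustic and linguistic classifier, pick the top class."""
    pa = softmax(acoustic_scores)
    pl = softmax(linguistic_scores)
    fused = [w * a + (1 - w) * b for a, b in zip(pa, pl)]
    return LABELS[fused.index(max(fused))]

# Acoustic cues lean neutral, wording leans clearly positive.
print(ensemble_predict([0.2, 1.0, 0.8], [0.1, 0.4, 2.0]))  # -> positive
```

Setting `w` closer to 1 trusts the acoustic channel more; a learned fusion network, as in the paper, effectively makes this weighting input-dependent.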

Doctoral dissertations on the topic "Acoustic and Linguistic Modalities"

1

Pérez-Rosas, Verónica. "Exploration of Visual, Acoustic, and Physiological Modalities to Complement Linguistic Representations for Sentiment Analysis". Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699996/.

Abstract:
This research is concerned with the identification of sentiment in multimodal content. This is of particular interest given the increasing presence of subjective multimodal content on the web and other sources, which contains a rich and vast source of people's opinions, feelings, and experiences. Despite the need for tools that can identify opinions in the presence of diverse modalities, most of current methods for sentiment analysis are designed for textual data only, and few attempts have been made to address this problem. The dissertation investigates techniques for augmenting linguistic representations with acoustic, visual, and physiological features. The potential benefits of using these modalities include linguistic disambiguation, visual grounding, and the integration of information about people's internal states. The main goal of this work is to build computational resources and tools that allow sentiment analysis to be applied to multimodal data. This thesis makes three important contributions. First, it shows that modalities such as audio, video, and physiological data can be successfully used to improve existing linguistic representations for sentiment analysis. We present a method that integrates linguistic features with features extracted from these modalities. Features are derived from verbal statements, audiovisual recordings, thermal recordings, and physiological sensors signals. The resulting multimodal sentiment analysis system is shown to significantly outperform the use of language alone. Using this system, we were able to predict the sentiment expressed in video reviews and also the sentiment experienced by viewers while exposed to emotionally loaded content. Second, the thesis provides evidence of the portability of the developed strategies to other affect recognition problems. We provided support for this by studying the deception detection problem. 
Third, this thesis contributes several multimodal datasets that will enable further research in sentiment and deception detection.
2

Sinclair, Roderick. "Acoustic guitar practice and acousticity : establishing modalities of creative practice". Thesis, University of Newcastle Upon Tyne, 2008. http://hdl.handle.net/10443/654.

Abstract:
The contemporary acoustic guitar has developed from its origins in the 'Spanish' guitar to become a global instrument and the musical voice of a wide range of styles. The very 'acousticity' of the instrument positions it as a binary opposite to the electric guitar and as a signifier for the organic and the natural world, artistry and maturity, eclecticism and the esoteric. In this concept-rooted submission, the acoustic and guitaristic nature of the instrument is considered in relation to a range of social, cultural and artistic concerns, and composition is used primarily to test a thesis, wherein a portfolio of original compositions, presented as recordings and understood as phonograms, comment upon and reflect upon modes of performativity: instrument-specific performance, introspection, virtuosity, mediation by technology and performance subjectivities.
3

Dietz, Kimberly F. "Acoustic and linguistic interdependencies of irregular phonation". Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61154.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 57-58).
Irregular phonation is a commonly occurring but only partially understood phenomenon of human speech production. We know properties of irregular phonation can be clues to a speaker's dialect and even identity. We also have evidence that irregular phonation is used as a signal of linguistic and acoustic intent. Nonetheless, there remain fundamental questions about the nature of irregular phonation and the interdependencies of irregular phonation with acoustic and linguistic speech characteristics, as well as the implications of this relationship for speech processing applications. In this thesis, we hypothesize that irregular phonation occurs naturally in situations with large amounts of change in pitch or power. We therefore focus on investigating parameters such as pitch variance and power variance as well as other measurable properties involving speech dynamics. In this work, we have investigated the frequency and structure of irregular phonation, the acoustic characteristics of the TIMIT Acoustic-Phonetic Speech Corpus, and relationships between these two groups. We show that characteristics of irregular phonation are positively correlated with several of our potential predictors including pitch and power variance. Finally, we demonstrate that these correlations lead to a model with the potential to predict the occurrence and properties of irregular phonation.
by Kimberly F. Dietz.
M.Eng.
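The predictor analysis this abstract reports, correlating pitch or power variance with the occurrence of irregular phonation, reduces to a variance computation per utterance followed by a correlation. The sketch below uses invented toy numbers, not TIMIT measurements, and hypothetical helper names.

```python
from statistics import pvariance, mean

def track_variance(track):
    """Population variance of a per-frame pitch (or power) track,
    ignoring unvoiced frames marked as None."""
    voiced = [f for f in track if f is not None]
    return pvariance(voiced)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy per-utterance pitch tracks (Hz) and counts of irregular-phonation
# events -- invented numbers for illustration only.
tracks = [
    [100.0, 102.0, 101.0, None, 103.0],   # flat pitch, 1 event
    [90.0, 140.0, 95.0, 180.0, None],     # volatile pitch, 4 events
    [120.0, 122.0, 119.0, 121.0, 120.0],  # flat pitch, 0 events
]
events = [1, 4, 0]

variances = [track_variance(t) for t in tracks]
r = pearson_r(variances, events)
assert r > 0  # higher pitch variance co-occurs with more events here
```

A positive correlation of this kind is what would let variance-style predictors feed a model of where irregular phonation is likely to occur.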
4

Ouellette, Gene Paul. "The neurological basis of linguistic prosody : an acoustic investigation". Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=56630.

Abstract:
This study explored the ability of left hemisphere damaged (LHD) nonfluent aphasics, right hemisphere damaged (RHD) patients, and normal speakers to produce acoustic correlates of linguistic prosody. Productions of phonemic stress contrasts (e.g., black$ prime$board vs. black board$ prime$) and contrastive stress tokens (e.g., The man took the bus), were elicited and subjected to acoustic analyses. Results indicated that RHD and LHD groups resembled normal speakers in the use of fundamental frequency and amplitude to encode stress, indicating preserved abilities in both neurological populations. However, the LHD aphasic subjects demonstrated patterns of durational alterations that were statistically different from those obtained for the control and RHD groups. The data are indicative of a basic impairment in speech timing subsequent to LHD. Results are discussed in relation to current theories regarding the neurological basis of linguistic prosody.
5

Deschamps-Berger, Théo. "Social Emotion Recognition with multimodal deep learning architecture in emergency call centers". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG036.

Abstract:
This thesis explores automatic speech-emotion recognition systems in a medical emergency context. It addresses some of the challenges encountered when studying emotions in social interactions, and is rooted in modern theories of emotions, particularly those of Lisa Feldman Barrett on the construction of emotions. Indeed, the manifestation of emotions in human interactions is complex, often nuanced and mixed, and highly linked to context. This study is based on the CEMO corpus, which is composed of telephone conversations between callers and emergency medical dispatchers (EMD) from a French emergency call center. This corpus provides a rich dataset to explore the capacity of deep learning systems, such as Transformers and pre-trained models, to recognize spontaneous emotions in spoken interactions. The applications could be to provide emotional cues that could improve call handling and decision-making by EMD, or to summarize calls. The work carried out in my thesis focused on different techniques related to speech emotion recognition, including transfer learning from pre-trained models, multimodal fusion strategies, dialogic context integration, and mixed emotion detection. An initial acoustic system based on temporal convolutions and recurrent networks was developed and validated on an emotional corpus widely used by the affective community, called IEMOCAP, and then on the CEMO corpus. Extensive research on multimodal systems, pre-trained in acoustics and linguistics and adapted to emotion recognition, is presented. In addition, the integration of dialog context in emotion recognition was explored, underlining the complex dynamics of emotions in social interactions. Finally, research has been initiated towards developing multi-label, multimodal systems capable of handling the subtleties of mixed emotions, often due to the annotators' perception and the social context.
Our research highlights some solutions and challenges in recognizing emotions in the wild. This thesis was funded by the CNRS AI HUMAAINE Chair: HUman-MAchine Affective Interaction & Ethics.
6

Daly, Nancy Ann. "Acoustic-phonetic and linguistic analyses of spontaneous speech : implications for speech understanding". Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/12009.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 142-149).
by Nancy Ann Daly.
Ph.D.
7

Bianchi, Michelle. "Effects of clear speech and linguistic experience on acoustic characteristics of vowel production". [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002084.

8

Marklund, Ellen. "Perceptual reorganization of vowels : Separating the linguistic and acoustic parts of the mismatch response". Doctoral thesis, Stockholms universitet, Institutionen för lingvistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-148559.

Abstract:
During the first year of life, infants go from perceiving speech sounds primarily based on their acoustic characteristics, to perceiving speech sounds as belonging to speech sound categories relevant in their native language(s). The transition is apparent in that very young infants typically discriminate both native and non-native speech sound contrasts, whereas older infants show better discrimination for native contrasts and worse or no discrimi­na­tion for non-native contrasts. The rate of this perceptual reorganization depends, among other things, on the salience of the relevant speech sounds within the speech signal. As such, the perceptual reorganization of vowels and lexical tone typically precedes the perceptual reorganization of consonants. Perceptual reorganizatoin of speech sounds is often demonstrated by measuring in­fants’ discrimination of specific speech sound contrasts across development. One way of measuring discriminatory ability is to use the mismatch response (MMR). This is a brain response that can be measured using external electroencephalography re­cord­ings. Pre­senting an oddball (deviant) stimulus among a series of standard stimuli elicits a response that, in adults, correlates well with behavioral discrimination. When the two stimuli are speech sounds contrastive in the listeners’ language, the response arguably reflects both acoustic and linguistic processing. In infants, the response is less studied, but has nevertheless already proven useful for studies on the perceptual reorganization of speech sounds. The present thesis documents a series of studies with the end game of investigating how amount of speech exposure influences the perceptual reorganization, and whe­ther the learning mechanisms involved in speech sound cate­gory learning is specific to speech or domain-general. 
However, in order to compare MMR results across different age groups in infancy, a non-speech control condition first needed to be devised, to account for changes in the MMR across development that are attributable to general brain maturation rather than to language development specifically. Findings of the studies incorporated in the thesis show that spectrally rotated speech can be used to approximate the acoustic part of the MMR in adults. Subtracting the acoustic part of the MMR from the full MMR thus estimates the part of the MMR that is linked to linguistic, rather than acoustic, processing. The strength of this linguistic part of the MMR in four- and eight-month-old infants is directly related to the daily amount of speech that the infants are exposed to. No evidence of distributional learning of non-speech auditory categories was demonstrated in adults, but the results, together with previous research, generated hypotheses for future study. In conclusion, the research performed within the scope of this thesis highlights the need for a non-speech control condition in developmental speech perception studies using the MMR, demonstrates the viability of one such condition, and points toward relevant future research on speech sound category development.
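The subtraction step in the abstract is simple arithmetic and can be sketched directly. The snippet below is an illustrative example, not the thesis code: the function name and toy amplitude values are invented here; only the logic (estimated linguistic MMR = full speech MMR minus rotated-speech MMR) comes from the abstract.

```python
# Illustrative sketch (not the thesis code): the MMR to spectrally rotated
# speech approximates the acoustic part of the response, so subtracting it
# from the full speech MMR leaves an estimate of the linguistic part.

def linguistic_mmr(speech_mmr, rotated_mmr):
    """Estimate the linguistic component of the mismatch response (MMR).

    Both arguments are deviant-minus-standard difference waves sampled on
    the same time base (amplitudes in arbitrary units).
    """
    if len(speech_mmr) != len(rotated_mmr):
        raise ValueError("difference waves must share a time base")
    return [full - acoustic for full, acoustic in zip(speech_mmr, rotated_mmr)]

# Toy difference waves: the residual after subtraction is the estimated
# linguistic contribution at each sample.
speech = [0.0, 1.25, 2.0, 1.5]   # full MMR to a native speech contrast
rotated = [0.0, 0.75, 1.5, 1.0]  # MMR to the spectrally rotated control
print(linguistic_mmr(speech, rotated))  # → [0.0, 0.5, 0.5, 0.5]
```

In practice both waves would be averaged event-related potentials over many trials, but the per-sample subtraction is the same.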

At the time of the doctoral defense, the following paper was unpublished, with the status: Paper 3: Manuscript.

9

Levi, Susannah V. "The representation of underlying glides: a cross-linguistic study". Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/8406.

10

Quadros, Talita Lidirene Limanski de. "Análise do uso do par é + adjetivo e do verbo poder em recortes de produção escrita de alunos de ensino fundamental e médio". Universidade Estadual do Oeste do Paraná, 2017. http://tede.unioeste.br/handle/tede/3433.

Abstract:
The present research arises from the need to promote reflection on an important aspect of the teaching/learning process of linguistic analysis: the use of elements that mark a writer's stance. This motivation prompted a study of the behavior of modal elements in excerpts from texts by elementary and high school students from a rural public school in a city in Paraná State. It was based on the analysis of material collected in the databank of the projects Theoretical Application and Reflection in the Classroom: linguistic analysis as a support for the production of texts from students of a public school in the State of Paraná (ART) and Diagnostics and Theoretical Application in the Classroom: verification of the performance and evaluation of the teaching of linguistic analysis and textual production of high school students of a public school in the State of Paraná (DAT). This basic, qualitative research drew on authors dealing with linguistic modality, such as Castilho and Castilho (1992), Neves (2006), Corbari (2008/2013), Koch (2009) and Sella (2011). This course of work, based on defining the study focus, reading the theoretical framework, and collecting and interpreting the data, made it possible to verify in the analyzed excerpts how the modal pair "é + adjetivo" and the verb "poder" indicate points of view that are sometimes linked to the innermost and sometimes to the outermost layers of meaning. The goal is to interpret occurrences of these modal markers in text excerpts written by students who participated in the aforementioned projects, as well as to verify the degree of the writers' engagement with the content expressed through these structures. Examining these layers allowed us to evaluate the degree of engagement established with the propositional content. This study also showed that the modal markers under analysis establish notions of emphasis and attenuation, which points to articulations indicating negotiations of points of view.

Books on the topic "Acoustic and Linguistic Modalities"

1

Srebot-Rejec, Tatjana. Word Accent and Vowel Duration in Standard Slovene: An Acoustic and Linguistic Investigation. Bern: Peter Lang International Academic Publishers, 1988.

2

Srebot-Rejec, Tatjana. Word accent and vowel duration in standard Slovene: An acoustic and linguistic investigation. München: O. Sagner, 1988.

3

Santos, Juan Felipe García. Cambio fonético y fonética acústica. Salamanca: Ediciones Universidad de Salamanca, 2002.

4

Singer, Kora, Randall Eggert, Gregory Anderson, and Chicago Linguistic Society Meeting, eds. Papers from the panels on linguistic ideologies in contact, universal grammar, parameters and typology, the perception of speech and other acoustic signals: April 17-19, 1997. Chicago, Ill: Chicago Linguistic Society, 1997.

5

Sánchez Miret, Fernando, ed. Experimental phonetics and sound change. Muenchen: LINCOM Europa, 2010.

6

Petrantoni, Giuseppe. Corpus of Nabataean Aramaic-Greek Inscriptions. Venice: Fondazione Università Ca’ Foscari, 2021. http://dx.doi.org/10.30687/978-88-6969-507-0.

Abstract:
The impact of Hellenization in the Ancient Near East resulted in a notable presence of the Greek koiné language and culture, and in interaction between Greek and Nabataean that led inhabitants to engrave inscriptions in public spaces using one of the two languages or both. In this questionably 'diglossic' situation, a significant number of Nabataean-Greek inscriptions emerged, showing that the koiné was employed by the Nabataeans as a sign of Hellenistic cultural affinity. This book offers a linguistic and philological analysis of fifty-one pieces of Nabataean-Greek epigraphic evidence from northern Arabia, the Near East and the Aegean Sea, dating from the first century BCE to the third-fourth century CE. The collection analyzes the linguistic contact between Nabataean and Greek in the light of the modalities of social, religious and linguistic exchange. In addition, the investigation of onomastics (mainly the Nabataean names transcribed in Greek script) may allow us to learn more about the Nabataean phonological system.
7

Guentchéva, Zlatka. Epistemic Modalities and Evidentiality in Cross-Linguistic Perspective. De Gruyter, Inc., 2022.

8

Guentchéva, Zlatka, ed. Epistemic Modalities and Evidentiality in Cross-Linguistic Perspective. De Gruyter Mouton, 2018. http://dx.doi.org/10.1515/9783110572261.

9

Guentchéva, Zlatka. Epistemic Modalities and Evidentiality in Cross-Linguistic Perspective. De Gruyter, Inc., 2018.

10

Guentchéva, Zlatka. Epistemic Modalities and Evidentiality in Cross-Linguistic Perspective. De Gruyter, Inc., 2018.


Book chapters on the topic "Acoustic and Linguistic Modalities"

1

Grimaldi, Mirko. "Acoustic correlates of phonological microvariations". In Romance Languages and Linguistic Theory 2006, 89–110. Amsterdam: John Benjamins Publishing Company, 2009. http://dx.doi.org/10.1075/cilt.303.06gri.

2

Embarki, Mohamed, Slim Ouni, Mohamed Yeou, M. Christian Guilleminot, and Sallal Al-Maqtari. "Acoustic and electromagnetic articulographic study of pharyngealisation". In Current Issues in Linguistic Theory, 193–216. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/cilt.319.09emb.

3

Hellmuth, Sam. "Acoustic cues to focus and givenness in Egyptian Arabic". In Current Issues in Linguistic Theory, 299–324. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/cilt.319.14hel.

4

Barbero, Nagore, and Carolina González. "Acoustic analysis of syllable-final /k/ in Northern Peninsular Spanish". In Current Issues in Linguistic Theory, 151–70. Amsterdam: John Benjamins Publishing Company, 2015. http://dx.doi.org/10.1075/cilt.335.08bar.

5

de Boysson-Bardies, B., L. Sagart, P. Halle, and C. Durand. "Acoustic Investigations of Cross-linguistic Variability in Babbling". In Precursors of Early Speech, 113–26. London: Palgrave Macmillan UK, 1986. http://dx.doi.org/10.1007/978-1-349-08023-6_9.

6

Alexandris, Christina, and Ioanna Malagardi. "Linguistic Processing of Implied Information and Connotative Features in Multilingual HCI Applications". In Human-Computer Interaction. Interaction Modalities and Techniques, 13–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-39330-3_2.

7

Sugimoto, Takayo. "The Interplay Among the Linguistic Environment, Language Perception, and Production in Children’s Language-Specific Development". In Acoustic Communication in Animals, 201–17. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-0831-8_13.

8

Yeou, Mohamed, and Shinji Maeda. "Airflow and acoustic modelling of pharyngeal and uvular consonants in Moroccan Arabic". In Current Issues in Linguistic Theory, 141–62. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/cilt.319.07yeo.

9

Al-Tamimi, Feda, and Barry Heselwood. "Nasoendoscopic, videofluoroscopic and acoustic study of plain and emphatic coronals in Jordanian Arabic". In Current Issues in Linguistic Theory, 163–92. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/cilt.319.08tam.

10

Zeroual, Chakir, John H. Esling, and Philip Hoole. "EMA, endoscopic, ultrasound and acoustic study of two secondary articulations in Moroccan Arabic". In Current Issues in Linguistic Theory, 277–98. Amsterdam: John Benjamins Publishing Company, 2011. http://dx.doi.org/10.1075/cilt.319.13zer.


Conference papers on the topic "Acoustic and Linguistic Modalities"

1

MohmedShareif, Hanein O., Abdullah M. Elmangoush, Ayyah A. Fadhl, and Malak A. Ali. "Utilizing Linguistic and Acoustic features from Arabic Transcripts for Early Detecting Alzheimer’s Disease Using Different Machine Learning Algorithms". In 2024 IEEE 7th International Conference on Advanced Technologies, Signal and Image Processing (ATSIP), 449–54. IEEE, 2024. http://dx.doi.org/10.1109/atsip62566.2024.10639034.

2

Dvoynikova, Anastasia, and Alexey Karpov. "Bimodal sentiment and emotion classification with multi-head attention fusion of acoustic and linguistic information". In INTERNATIONAL CONFERENCE on Computational Linguistics and Intellectual Technologies. RSUH, 2023. http://dx.doi.org/10.28995/2075-7182-2023-22-51-61.

Abstract:
This article describes solutions to two problems: preprocessing of the CMU-MOSEI database to improve data quality, and bimodal multitask classification of emotions and sentiments. Through experimental studies, representative features for acoustic and linguistic information are identified among pretrained neural networks with the Transformer architecture. The most representative features for the analysis of emotions and sentiments are EmotionHuBERT and RoBERTa for the audio and text modalities, respectively. The article establishes a baseline for bimodal multitask recognition of sentiments and emotions – 63.2% and 61.3%, respectively, measured with macro F-score. Experiments were conducted with different approaches to combining modalities – concatenation and multi-head attention. The most effective architecture proved to be a neural network with early concatenation of the audio and text modalities and late multi-head attention for emotion and sentiment recognition. The proposed neural network is combined with logistic regression, which achieves 63.5% and 61.4% macro F-score in bimodal (audio and text) multitask recognition of 3 sentiment classes and 6 binary emotion classes.
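The attention-based fusion the abstract mentions rests on scaled dot-product attention. The sketch below is a minimal pure-Python illustration of that building block, not the authors' implementation: real systems use a deep-learning framework with learned projections and multiple heads, and the 2-d toy vectors standing in for pooled audio and text features are invented here.

```python
import math

# Minimal sketch of scaled dot-product attention, the building block behind
# multi-head attention fusion. One modality's query attends over the other
# modality's keys/values; the output is a weighted mix of the values.

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Return the attention-weighted combination of `values` for one query."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # how much each key's value contributes
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A text-side query attends over two audio "frames" (toy 2-d features); the
# fused vector leans toward the value whose key best matches the query.
text_query = [1.0, 0.0]
audio_keys = [[1.0, 0.0], [0.0, 1.0]]
audio_values = [[0.9, 0.1], [0.2, 0.8]]
fused = attention(text_query, audio_keys, audio_values)
assert fused[0] > fused[1]
```

In the early-concatenation variant the abstract favors, the audio and text feature sequences would simply be joined before such attention layers are applied.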
3

Gkoumas, Dimitris, Qiuchi Li, Yijun Yu, and Dawei Song. "An Entanglement-driven Fusion Neural Network for Video Sentiment Analysis". In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/239.

Abstract:
Video data is multimodal in nature: an utterance can involve linguistic, visual and acoustic information. A key challenge for video sentiment analysis is therefore how to combine different modalities for sentiment recognition effectively. The latest neural network approaches achieve state-of-the-art performance, but they largely neglect how humans understand and reason about sentiment states. By contrast, recent advances in quantum probabilistic neural models have achieved comparable performance to the state of the art, yet with better transparency and an increased level of interpretability. However, the existing quantum-inspired models treat quantum states as either a classical mixture or a separable tensor product across modalities, without triggering interactions in which the modalities are correlated or non-separable (i.e., entangled). This means that the current models have not fully exploited the expressive power of quantum probabilities. To fill this gap, we propose a transparent quantum probabilistic neural model. The model induces different modalities to interact in such a way that they may not be separable, encoding crossmodal information in the form of non-classical correlations. Comprehensive evaluation on two benchmark datasets for video sentiment analysis shows that the model achieves significant performance improvement. We also show that the degree of non-separability between modalities optimizes the post-hoc interpretability.
4

Pascual, Santiago, Antonio Bonafonte, and Joan Serrà. "Self-Attention Linguistic-Acoustic Decoder". In IberSPEECH 2018. ISCA: ISCA, 2018. http://dx.doi.org/10.21437/iberspeech.2018-32.

5

"Technical session 6: Non-acoustic communication modalities 1". In 2016 IEEE Third Underwater Communications and Networking Conference (UComms). IEEE, 2016. http://dx.doi.org/10.1109/ucomms.2016.7583481.

6

"Technical session 9: Non-acoustic communication modalities 2". In 2016 IEEE Third Underwater Communications and Networking Conference (UComms). IEEE, 2016. http://dx.doi.org/10.1109/ucomms.2016.7583484.

7

Sleefe, Gerard E., Mark D. Ladd, Timothy S. McDonald, and Gregory J. Elbring. "Acoustic and seismic modalities for unattended ground sensors". In AeroSense '99, edited by Edward M. Carapezza, David B. Law, and K. Terry Stalker. SPIE, 1999. http://dx.doi.org/10.1117/12.357122.

8

Takada, Kazuma, Hideharu Nakajima, and Yoshinori Sagisaka. "Analysis of communicative phrase prosody based on linguistic modalities of constituent words". In 2018 International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP). IEEE, 2018. http://dx.doi.org/10.1109/isai-nlp.2018.8692904.

9

Ramus, Franck. "Acoustic correlates of linguistic rhythm: perspectives". In Speech Prosody 2002. ISCA: ISCA, 2002. http://dx.doi.org/10.21437/speechprosody.2002-16.

10

Choube, Gaurav, Gauri Rahul Dudhmande, Jagalingam Pushparaj, Christopher Anand, and Shilpa Suresh. "Predicting Modalities of Dyslexic Students using Neuro-Linguistic Programming to Enhance Learning Method". In 2022 IEEE International Conference on Data Science and Information System (ICDSIS). IEEE, 2022. http://dx.doi.org/10.1109/icdsis55133.2022.9915905.


Reports on the topic "Acoustic and Linguistic Modalities"

1

Fridman, Alex, Ariel Stolerman, Sayandeep Acharya, Patrick Brennan, Patrick Juola, Rachel Greenstadt, and Moshe Kam. Active Authentication Linguistic Modalities. Fort Belvoir, VA: Defense Technical Information Center, December 2013. http://dx.doi.org/10.21236/ada593716.

2

Farrar, Charles. Sensing Modalities Deployed - Acoustic. Office of Scientific and Technical Information (OSTI), March 2024. http://dx.doi.org/10.2172/2318923.

