Theses on the topic "Perception de parole"
Consult the top 50 theses for your research on the topic "Perception de parole".
Lancia, Leonardo. "Dynamique non linéaire de la perception de la parole". Aix-Marseille 1, 2009. http://www.theses.fr/2009AIX10007.
Troille, Emilie. "De la perception audiovisuelle des flux oro-faciaux en parole à la perception des flux manuo-faciaux en langue française parlée complétée adultes et enfants : entendants, aveugles ou sourds". Grenoble 3, 2009. http://www.theses.fr/2009GRE39021.
Cued Speech was created by Cornett in 1967 in order to disambiguate the phonology of the visible face by simultaneous phonemic hand gestures. But its productive secret was disclosed only five years ago, with the discovery that the hand is always ahead of the face (Attina et al., 2004). This anticipatory coordination was reminiscent of the well-known anticipatory behaviour in speech. The core question addressed here concerns the perception of the acoustic and optic flows in Speech and Cued Speech. We will first establish the flexibility of bimodal speech even in simple CVCV structures, both between and within speakers. If speech can be seen before it is heard (as evidenced at its best by Cathiard et al., 1991), we will show that the reverse is also true, even for the same speaker: namely, that speech can be heard before it is seen, and even that speech can be heard as soon as it is seen. By carefully examining the pattern of behaviour of the perceived stimuli, we will show that the perceptive outcomes are locked to the produced oro-facial structures, provided we take into account their articulatory-to-acoustic relationships. Gating and desynchronization experiments for Speech and Cued Speech, run with hearing and deaf adults and children – with blind "control" subjects for the audio – will give us the opportunity to test the range of flexibility allowed by this unique hand-face phonemic coordination. These results will reinforce the proposal that the anticipatory Cued Speech behaviour relies on the phasing of compatible contact controls for hand vowels with orofacial consonants. The window offered by Cornett's code – and the way it was skillfully embodied (say "embrained") – brought us a surprisingly more decisive answer about the nature of the controls in the phonology of language than the mere observation of simple speech behaviour.
Cohen, Laurent. "Détection de stimuli non linguistiques et perception de la parole". Paris, EHESS, 1994. http://www.theses.fr/1994EHES0312.
The aim of this study is to evaluate the potential interest of the click-monitoring technique for the on-line study of speech comprehension. While subjects listened to experimental sentences (or words), reaction times to short superimposed clicks were measured. We studied the influence on reaction times of various lexical, syntactic, and semantic factors that contribute to sentence comprehension. We suggest that latencies are not sensitive to the most rapid and automatic processes of lexical access and syntactic parsing. However, longer latencies are observed whenever automatic modules fail to produce a definite output, requiring that controlled verification strategies come into play.
Bensaada, Merzeghe. "Perception de la parole télévisuelle en Algérie. Dissonances et dyscommunication". Thesis, Montpellier 3, 2013. http://www.theses.fr/2013MON30064/document.
TV programs based on talk, debate and discussion appear to us as a space where the linguistic and identity conflicts of Algerian society crystallize. They show and amplify the dysfunctions of television and public communication in Algeria. This study consists in trying to understand the contexts and modalities of articulation of psychosocial and ideological determinations, by exploring situations of exchange and transmission which seem to reveal a kind of communicational discomfort on television. By analysing speech as a marker of identity and psychology, we have tried to identify symptoms of "dyscommunication". Its expressions appear as the consequence of a politico-ideological failure and the indicator of an identity cleavage between the speaking subjects/enunciators. The problem we raise has to do with the phenomenon of maladjustment of the language used on television, which seems to influence the (both linguistic and paralinguistic) capacity of expression and to weaken the emotional and phatic potential of speakers on television. Our observations and our investigation show that receivers/viewers are sensitive to emotional messages and to implicit cultural signs conveyed by mimo-gestural emblems, paraverbal language and pronunciation, and that these are determining factors in the quality of a communicative interaction, on television as in everyday life. Language alone is not enough to convey the totality of the message. The televiewer is very attentive to "cooperative statements" and to processes of mutual recognition, as well as to the sociocultural and emotional skills which accompany and naturally emerge from endogenous speech.
Leblanc, Michel-Antoine. "Recherche de correspondances entre production écrite et orale". Paris 10, 2001. http://www.theses.fr/2001PA100111.
This research concerns the subjective and objective relationships which could exist between oral and written productions of the same language extract. This can be viewed from two different aspects, both of which are taken into account here: either in terms of "objective" relationships linked to the process of production, or in terms of "subjective" relationships linked to inferences made by third parties in their perception of these productions. The general hypothesis put forward is the following: "When subjects are asked to match oral and written productions from a population of speakers/writers, the resulting matches are not random." The duality of production and perception lies at the heart of the experimental paradigm, which consists of a series of matching experiments between voices and writings carried out by the subjects, accompanied by an evaluation of these voices and writings by another group of subjects. Main conclusions: the subjects were not capable of perceiving an objective relationship between oral and written productions coming from the same individual. There was, however, a broad consensus amongst the tested population regarding subjective links. Depending on whether the responses evoked by the voices and the writings more or less coincided, the subjects inferred whether or not they concerned the same person; or, at least, they had a general feeling that a certain voice "went well" or not with a certain handwriting. It would seem that the criteria applied, in an apparently unconscious way, by the subjects to make their decisions correspond to quite specific associations of characteristics between voices and writings.
Bruckert, Laetitia. "Production et perception de la voix : entre données phylogénétiques et modèles socio-culturels". Paris 10, 2006. http://www.theses.fr/2006PA100077.
The present thesis focuses on voice production and perception. We used male voices. These are the main results: - There is consensus on the hedonistic judgment of voices and on the inferences regarding the speaker. These appear whatever the linguistic nature of the vocal production listened to, even a simple series of vowels. - Listeners prove able to infer the speakers' age correctly, but not their height, as they mistakenly use non-reliable acoustic indications such as pitch. - There is no gender effect on the judgments: both male and female listeners seem to produce the same judgments. - A vocal-corpus effect on the judgments can be observed, principally opposing the vowels to the other corpora. - It generally turns out that male and female speakers rely mainly on prosodic information in the voice, and not on spectral aspects such as pitch and voice tone.
Vie, Marie-Thérèse. "Contribution à l'étude de la transmission de la parole par les prothèses auditives". Montpellier 1, 1988. http://www.theses.fr/1988MON13510.
Arnal, Luc. "Mécanismes prédictifs dans l'intégration audiovisuelle de la parole". Paris 6, 2010. http://www.theses.fr/2010PA066256.
Texto completoTreille, Avril. "Percevoir et agir : la nature sensorimotrice, multisensorielle et prédictive de la perception de la parole". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS015/document.
Seeing the speaker's articulatory gestures significantly enhances auditory speech perception. A key issue is whether cross-modal speech interactions only depend on well-known auditory and visual inputs from the speaker's voice and face or, rather, might also be triggered by other sensory sources less common in speech communication, such as tactile information or vision of the tongue movements. Another goal of the present research was to determine the possible role of the motor system in these multisensory processes. Finally, we used electro-encephalographic, functional magnetic resonance imaging and transcranial magnetic stimulation techniques in order to better understand the time course and the functional neuroanatomical organization of these integration mechanisms. Our results extend the concept of "multisensory speech perception" by highlighting a facilitation of auditory processes during audio-haptic speech perception as well as during the observation of our own articulatory movements. They also provide new evidence in favor of a functional role of the motor system in speech perception by demonstrating an increase of motor activity during visuo-lingual speech perception and a more bilateral ventral premotor cortex recruitment during speech perception across aging. Taken together, our results reinforce the idea of a functional coupling and a co-structuring of speech perception and production systems. Our work supports the existence of connections between sensory, integrative and motor regions allowing the implementation of multisensory, sensorimotor and predictive processes in the perception and understanding of speech actions.
Snoeren, Natalie Dominique. "Variations phonologiques en production et perception de la parole : le phénomène de l'assimilation". Paris 5, 2005. http://www.theses.fr/2005PA05H035.
The present PhD thesis provides an in-depth study of a phonological variation frequently encountered in French, namely voice assimilation. The goal of the first series of experiments was to study the production of assimilated words and to provide an acoustico-phonetic description of word-final assimilated obstruents. Acoustic measurements showed that voice assimilation is often a graded, rather than a categorical, phonetic process. Moreover, degrees of assimilation varied as a function of underlying voicing. Cross-modal priming results showed that the role of the phonological right context varies as a function of the degree of assimilation. Perceptual processing of completely assimilated segments was facilitated in the presence of the right context, whereas the presence of "acoustic traces" sufficed to access partially assimilated segments. The hypothesis of the presence of acoustic traces in assimilated words was confirmed in the third series of experiments using semantic priming.
Fort, Mathilde. "L'accès au lexique dans la perception audiovisuelle et visuelle de la parole". Phd thesis, Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00652068.
Banel, Marie-Hélène. "Perception de la parole et segmentation lexicale : traitement d'indices morphologiques et prosodiques". Paris 5, 1996. http://www.theses.fr/1996PA05H069.
Texto completoPallier, Christophe. "Rôle de la syllabe dans la perception de la parole : études attentionnelles". Paris, EHESS, 1994. http://www.theses.fr/1994EHES0318.
A series of psycholinguistic experiments evaluating the hypothesis that the syllable is the unit of speech processing is presented. Subjects had to detect or classify linguistic stimuli on line. We observed (a) effects of extra-syllabic variability; (b) that subjects could be induced to detect target phonemes faster when these were located at a precise syllabic position, but not when the position was defined sequentially; (c) that sub-syllabic information could prime a motor response. These results lead us to reject the classical syllabic model in which the syllable is both the unit of decoding and the unit of representation of speech. We propose a new model in which subjects' responses originate from a phonological representation which possesses a syllabic structure.
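The distinction the abstract draws between a syllabically defined and a sequentially defined target position can be made concrete with a minimal sketch. The words and the dot-marked syllabification below are hypothetical toy examples, not the thesis's stimuli: the same phoneme can occupy the onset of the second syllable while sitting at the third segmental position of the word.

```python
def syllabic_position(word, target):
    """Return (syllable_index, position_within_syllable) of the first
    occurrence of `target`, given a dot-marked syllabification."""
    for si, syllable in enumerate(word.split(".")):
        if target in syllable:
            return si, syllable.index(target)
    return None

def sequential_position(word, target):
    """Return the flat segment index of `target`, ignoring syllable structure."""
    return word.replace(".", "").index(target)

# Toy syllabification: /l/ is the onset of syllable 2, but segment 3 overall.
print(syllabic_position("pa.lace", "l"))    # (1, 0)
print(sequential_position("pa.lace", "l"))  # 2
```

The point of the contrast in the abstract is that listeners sped up when targets were predictable in the first, structural sense, but not in the second, purely linear one.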
Sock, Rudolph. "Organisation temporelle en production de la parole émergence de catégories sensori-motrices phonétiques". Grenoble 3, 1998. http://www.theses.fr/1998GRE39019.
Speech production is a complex motor process, governed by the temporal orchestration of the articulators that are organized in time and space, thus contributing to the emergence of sensorimotor categories: the sounds of a given language. By paying more attention to the motor dimension of speech and by making a few epistemological adjustments, this approach resembles that of Stetson, who stated in his book "Motor Phonetics" (1928; 1951; re-ed. 1988) that speech was rather a set of gestures made audible than a set of sounds produced by movements. However, this research attempts to demonstrate, from experimental results, that speech is both a set of movements made audible (and visible) and a set of sounds produced by movements. In this perspective, the focus is on the nature of the mutual specification of the articulatory and acoustic levels, a trait that seems to characterize sensorimotor systems in general and, particularly, the speech production-perception one. In order to understand the timing of the gestures that "govern" the production of phonetic categories, it is essential to look, first, at the general principles that underlie biological sensorimotor behaviours to be able, second, to make the necessary theoretical and methodological adaptations in the specific area of timing of linguistic gestures. This dissertation is concerned with a major theme in speech production: the timing of well-contrasted linguistic categories. The main aim is to uncover temporal constraints that are tied to the production of quantity contrasts, by analyzing both articulatory and acoustic phasing patterns. Various hypotheses are made in this work, the strongest being the following: it is possible to pinpoint articulatory-acoustic regularities in the timing of quantity contrasts, regardless of language, dialect and speaker differences. Such regularities could be rationalized by referring to physical and semiotic demands of the speech production-perception system.
After verifying the initial hypotheses, an attempt is made to discuss the data within a general theory of speech production and perception. Finally, propositions are made for a theoretical modelling of the emergence of sensorimotor behaviours in speech production and perception
Dubois, Cyril Michel Robert. "Les bases neurophysiologiques de la perception audiovisuelle syllabique : étude simultanée en Imagerie par Résonance Magnétique fonctionnelle et en électroencéphalographie (IRMf/EEG)". Strasbourg, 2009. https://publication-theses.unistra.fr/public/theses_doctorat/2009/DUBOIS_Cyril_Michel_Robert_2009.pdf.
In a noisy environment, speech intelligibility is improved by perceiving a speaker's face (Sumby & Pollack, 1954), a dimension which seemingly involves a facilitation effect in accessing the mental lexicon. Massaro (1990) assumes that the influence of one source of information is greatest when the other source is neutral or ambiguous. However, the McGurk effect suggests that audible and visible sources have an equal impact on the speech perception system (McGurk & MacDonald, 1976); the result is indeed a perturbation, in terms of misperception of the "target". Several studies claim that the McGurk effect operates on the lexical level as well as on word or phrasal levels. Taken together, previous studies indicate that the bimodal integration of the visual source is early and prelexical; moreover, it could be influenced by a top-down effect. We conducted a study with simultaneous fMRI/EEG recordings, in a discrimination task comprising consonant-vowel syllables in two perception modalities, audiovisual and audio only, in order to investigate the neural substrates of audiovisual syllabic perception. The discrimination task was based on syllable pairs contrasting three features: vowel lip rounding, consonant place of articulation, and voicing. For syllabic discrimination, the results show bilateral activation of the primary auditory cortex in each modality. Furthermore, the fusiform gyrus and area MT/V5 (in the occipital cortex) are recruited in the audiovisual modality. ERPs indicate significant modulation around 150 and 250 milliseconds.
Van, Bogaert Lucie. "Soutenir le développement de la parole chez l'enfant sourd porteur d'implant cochléaire : apports de l'Auditory Verbal Therapy et de la Langue française Parlée Complétée". Electronic Thesis or Diss., Université Grenoble Alpes, 2024. http://www.theses.fr/2024GRALS004.
Hearing loss can impact the cognitive and linguistic development of a child. Cochlear implants (CI) are designed to improve speech sound perception, but the auditory information provided by the CI remains limited, which can lead to spoken-language difficulties. Therefore, for parents of deaf children who would like to use spoken language with their child, it is important to implement communication aids. This thesis aims to quantify the usage of these tools and methods in France and their potential benefits for speech development. The first study of this thesis involved documenting the tools and methods used by parents and professionals in France through online surveys, thereby providing a better description of current speech therapy and parental practices with deaf children. Among all these tools and methods, two approaches are specifically examined in the second part of the thesis: Auditory Verbal Therapy (AVT), which enhances auditory skills, and French Cued Speech (LfPC), a visual-manual tool that complements lip-reading with the addition of manual gestures. These two approaches differ in their use of the auditory modality exclusively for one, and of both auditory and visual modalities for the other. Three tasks from the EULALIES battery (Méloni et al., 2020), assessing speech perception and production, were used: a phonological alteration detection task, a picture naming task, and a non-word repetition task. The performances of children aged 5 to 11 were analyzed. Children were categorized into four groups: typically hearing children, deaf children with CI enrolled in an AVT program (AVT group), deaf children with CI with a high level of CS proficiency (CS+ group), and deaf children with CI with a low level of CS proficiency (CS- group). The results of these studies support the idea that cochlear implantation alone is not sufficient for a deaf child to develop adequate speech perception and production skills.
Those caring for deaf children with CI, including parents, speech therapists and doctors, should be aware of the limits of speech perception and production through the CI and should consider specific speech rehabilitation approaches, particularly during the early years. It is essential to provide parents with all available communication options as early as possible. Regarding the two approaches studied in this thesis, the results indicate that both AVT and CS contribute to the development of the linguistic processes involved in speech perception and production: indeed, the speech performances of the AVT and CS+ groups are better than those of the CS- group. The findings of these studies therefore suggest that a high level of CS proficiency, as well as the use of an AVT approach, can contribute to the development of phonological skills in speech production and perception in children with CI. Finally, these studies reveal a lack of scientific evidence on the effectiveness of all these tools and methods.
Guibert-Blanchard, Marie-Sophie. "Transmission des indices acoustiques de la parole par la prothèse auditive : approche d'une méthode d'essais techniques". Montpellier 1, 1992. http://www.theses.fr/1992MON13501.
Texto completoROSEMBERG, LASORNE MURIEL. "Marketing urbain et projet de ville : parole et representations geographiques des acteurs". Paris 1, 1997. http://www.theses.fr/1997PA010616.
City self-promotion through the publicity it generates, which is called urban marketing, seems in fact to be communication. This "saying" activity is a factor in any decision procedure, but appears in different ways when it refers to rebuilding the city. Urban marketing is indeed understood here as a component of urban projects and planning. The city's saying – the way it decides to build, and the events it sponsors, being regarded as kinds of saying too – is expounded through its connections with rebuilding the city. Communication activity appears to be a geographic factor: it plays a part in positioning the system of actors, in the project's cultural environment, in writing the project and in carrying it out. The study of this saying teaches us about the geographic image and thought of those who build the city. A subjective relation to space is obvious in the city projects that have been studied, through saying as well as through acting on the city. The city's saying, bound for the inhabitants although apparently addressed to the world, affirms the urban territory. The study of spoken space is thus a means of understanding space.
Gilbert, Gaëtan. "Fonctions d'importance fréquentielle pour la reconnaissance de la parole : application et amélioration d'une approche corrélationnelle". Lyon 1, 2003. http://www.theses.fr/2003LYO10205.
Texto completoLepage, Marie-Josée. "Les facteurs prosodiques qui marquent la perception des fins de tour de parole". Thèse, Université Laval, 2009. http://constellation.uqac.ca/188/1/030105478.pdf.
Texto completoVercherand, Géraldine. "Production et perception de la parole chuchotée en français : analyse segmentale et prosodique". Paris 7, 2010. http://www.theses.fr/2010PA070099.
The use of whisper is widespread all over the world, even in societies using tonal languages where fundamental frequency has a contrastive linguistic function. Whisper is a mode of speech that implies no vibration of the vocal folds, and therefore the absence of fundamental frequency, which is otherwise the most important acoustic parameter in the production of intonation. But if intonation is still perceived in whispered voice, what other means allow speakers to produce intonation in this mode of production? The aim of this thesis is to give a preliminary answer to this question through segmental and suprasegmental analyses of whispered voice in French. Throughout this thesis, my aim is to describe how whispered speech is produced and how it is perceived; the main question is how the lack of fundamental frequency is compensated for in this mode of production. The segmental study analyses acoustic aspects of the production of consonants and vowels. The suprasegmental study analyses two aspects of intonation: modality and focus. Based on production analysis, this study aims to understand how modality and focus are realised and to determine which acoustic phenomena are decisive. These studies are organised along the same general outline, to highlight the link between production and perception (natural stimuli and resynthesized stimuli).
Scarbel, Lucie. "Relations sensori-motrices lors de communication parlée : Application chez les jeunes adultes et séniors normo-entendants et les patients sourds implantés cochléaire". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS007/document.
Speech communication can be considered as an interactive process involving a functional coupling between sensory and motor systems. The aim of this thesis was to test possible perceptuo-motor linkages during both speech perception and production, using distinct behavioral paradigms and populations. The experimental protocol was made up of three classic experiments: a close-shadowing paradigm, aiming at exploring the partially motor format of audio and audiovisual stimuli; a second paradigm correlating production and perception of vowels; and a third paradigm of conscious and unconscious imitation of pitch. The experimental protocol was validated with a first group of young hearing adults. The second population studied was composed of elderly normal-hearing participants, in order to evaluate the consequences of cognitive and linguistic decline. Results allowed us to suggest a functional activation of perceptuo-motor linkages during speech production and perception. The third population we tested comprised post-lingually deaf patients wearing a cochlear implant. Our objective was to determine the impact of sensory deprivation, and of the re-learning processes associated with implantation, on perceptuo-motor linkages. Unexpectedly, results showed an active sensorimotor relationship in these participants, even shortly after cochlear implantation. Altogether, our results confirmed the perceptuo-motor nature of speech. Importantly, in spite of degraded performances, these interactions between the sensory and motor systems during speech production and perception remained functional both in the elderly normal-hearing population and in the post-lingually deaf patients wearing a cochlear implant.
Chung, Soo-Jin. "L'expression et la perception de l'émotion extraite de la parole spontanée : évidences du coréen et de l'anglais". Paris 3, 2000. http://www.theses.fr/2000PA030095.
Texto completoKouider, el Ouahed Sid-Ahmed. "Rôle de la conscience dans la perception des mots". Paris, EHESS, 2002. http://www.theses.fr/2002EHES0021.
Texto completoCOURSANT, MOREAU AUDREY. "Un systeme d'aide automatique a la lecture labiale pour les personnes sourdes profondes : lipcom elaboration et evaluation". Strasbourg 2, 1997. http://www.theses.fr/1997STR20006.
The Lipcom project, developed at the IBM France scientific center, is an automatic tool to help speech perception for profoundly deaf people. For the time being a prototype, Lipcom is a real-time, speaker-dependent phonetic recognition system operating on continuous speech with unlimited vocabulary and unconstrained syntax. The aim of the system is to resolve lip-reading ambiguities: while a deaf user observes the speaker's lips, he or she can read, in peripheral vision, a phonetic "sub-titling" produced by Lipcom. We tested this prototype over a three-year period with about 10 pre-lingually profoundly deaf children from 8 to 12 years old. In order to evaluate the efficiency and usefulness of the system, we ran different experiments, successively on nonsense items, syllables, words and finally sentences, under two conditions: lip-reading plus hearing aids, and lip-reading plus hearing aids plus Lipcom. The results show that using Lipcom increases the subjects' identification scores. The relative improvement brought by Lipcom depends on the tests and protocols; on average, identification scores rose from about 48% in the lip-reading plus hearing aids condition to about 64% with Lipcom. At the same time, the experiment contributed to improving the recognition level of the Lipcom system itself, from both a quantitative and a qualitative point of view: Lipcom, which initially had a weak recognition rate, reached a phoneme recognition rate between 90% and 95% by the end of the experiment. To conclude, this study shows that Lipcom does help speech understanding by deaf persons and that the system is technically feasible.
Collet, Gregory. "Etude des effets des entraînements auditifs sur la perception catégorielle du délai d'établissement du voisement: implications chez l'adulte, l'enfant et dans les troubles d'acquisition du langage". Doctoral thesis, Universite Libre de Bruxelles, 2012. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209639.
Texto completoDans une première étude, nous avons tenté d’évaluer les limites du système perceptif en matière d’extraction de l’information statistique en travaillant sur de fines différences acoustiques (Etude 1). Au fil des années, une partie de plus en plus importante de la littérature s’est développée, soutenant que la formation des catégories phonologiques reposait sur l’extraction des régularités statistiques existant dans la production des phonèmes. Cependant, en aucun cas la question des limites que pouvait imposer le système perceptif n’a été posée. Pour ce faire, nous avons décidé de déterminer dans quelle mesure l’exposition à une grande variabilité de stimuli séparés par de fines différences acoustiques pouvait conduire à l’amélioration des capacités de discrimination d’un contraste spécifique.
Par la suite, nous avons sommes concentrés sur la question des modifications de la PC suite à un entraînement. L’idée principale était de déterminer dans quelle mesure un entraînement centré sur une valeur particulière du continuum et mettant en jeu un contraste (i.e. opposition entre deux stimuli) pouvait avoir un impact sur la PC. Pour ce faire, nous avons commencé par entraîner des participants à identifier (Etude 2) des stimuli autour de trois frontières non-phonologiques (-30, -45 et -60 ms DEV).
Ensuite, nous avons entraîné d’autres participants à discriminer (Etude 3) des stimuli autour de deux frontières non-phonologiques (-30 et -45 ms DEV). Les modifications perceptives étaient évaluées sur différents paramètres qui caractérisent la PC (voir Introduction – La Perception Catégorielle) chez des adultes normo-entendant. Nos hypothèses reposaient sur l’idée selon laquelle, plus on s’éloigne de la frontière phonologique, plus les modifications perceptives seraient difficiles. Toutefois, les discontinuités perceptives pourraient interagir, facilitant ainsi les changements.
Sur base des résultats de ces études, nous nous sommes intéressés à la malléabilité de la perception catégorielle chez des enfants de troisième maternelle et de deuxième primaire (Etude 4). Dans ce cas, nous avons décidé d’entraîner les enfants à identifier des stimuli autour de la frontière phonologique du français (0 ms DEV) et autour d’une frontière non-phonologique (-30 ms DEV). L’idée sous-jacente était que les enfants, et plus particulièrement ceux qui n’avaient pas encore appris à lire, puissent être plus sensibles aux modifications perceptives imposées par leur environnement.
Par la suite, la question des entraînements auditifs comme source de changements chez les enfants et adultes normo-entendant s’est élargie aux pathologies et notamment dans les troubles spécifiques du langage (Etude 5). En effet, il est reconnu que ces enfants présentent des difficultés dans la perception des sons de parole et notamment du voisement. Dans cette étude, nous avons donc tenté de restructurer la PC au moyen d’un entraînement basé sur une tâche de discrimination. Malgré leur difficulté sévère à traiter le matériel auditif, ces enfants ne présentent pas des troubles de l’audition. Nous nous attendions donc à une amélioration de leurs habilités à percevoir le voisement.
Finally, we asked which factors, in addition to the training sessions, might contribute to the consolidation of phonological representations in memory. Among these, the literature in the visual and motor domains indicates that sleep helps consolidate what has been learned. We therefore decided to examine the role and benefits of sleep in the consolidation of auditory learning in normal-hearing adults (Study 6).
Doctorat en Sciences Psychologiques et de l'éducation
info:eu-repo/semantics/nonPublished
Grataloup, Claire. "La reconstruction cognitive de la parole dégradée : étude de l'intelligibilité comme indice d'une capacité cognitive humaine". Dir. Jean-Marie Hombert. Lyon : Université Lumière Lyon 2, 2007. http://theses.univ-lyon2.fr/sdx/theses/lyon2/2007/grataloup_c.
Hennequin, Alexandre. "Percevoir la parole quand elle est produite différemment : étude des mécanismes de familiarisation multimodale/multisensorielle entre locuteurs tout-venants et locuteurs présentant un trouble de l'articulation". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAS013/document.
Speech is the most widely used means of communication by humans. It allows people to express their needs, exchange thoughts with others, and contributes to the construction of social identity. It is also a complex communication channel, involving elaborate motor control in production and, in perception, the ability to analyze sound sequences produced by a wide variety of speakers. This complexity means that speech is often the most altered or most difficult-to-acquire mode of communication for people whose sensorimotor systems are impaired. This is particularly the case for people with trisomy 21 (T21), a genetic syndrome inducing complex orofacial motor difficulties and alterations in the auditory and somatosensory spheres. While speaking is possible for most of these people, their intelligibility is always affected. Improving their oral communication is a clinical and social issue. The study of speech production by people with T21 and its perception by typical listeners is also of theoretical interest, particularly with regard to the fundamental issues of multimodal speech perception and the involvement of the listener's motor system in this perception. In this thesis, we reposition the intelligibility disorder of people with T21 within a framework that conceives speech as a cooperative act between speaker and listener. In contrast to the traditional focus on the speaker in applied research, we are interested in the listener's means of better perceiving speech, based on two observations: (1) T21 speech is not very intelligible auditorily; (2) its intelligibility is better for familiar than unfamiliar interlocutors. These observations are linked to two important research results on speech perception. First, in a situation of face-to-face communication, in addition to auditory information, the listener also uses the visual information produced by the speaker.
In particular, the latter makes it possible to better perceive speech when auditory information is altered. Secondly, familiarization with a specific type of speech leads to a better perception of it. This effect is increased by imitation of the perceived speech, which would further activate the listener’s internal motor representations. This connection between the specific difficulties of people with T21 and research on speech perception leads to the following questions. Given the anatomical orofacial specificities of speakers with T21, which impact their articulatory gestures, does the typical listener benefit from the presence of visual information? Can the involvement of the motor system in familiarizing oneself with this specific speech help to better perceive it? To answer these questions, we conducted two experimental studies. In the first, using a classical paradigm of audio-visual perception of speech in noise, we show that seeing the face of a speaker with T21 improves the intelligibility of their consonants in a way comparable to typical speakers. Visual information therefore seems to be relatively preserved despite anatomical and physiological specificities. In a second study, we adapt a familiarization paradigm with and without imitation to assess whether imitation during the auditory perception of words produced by a speaker with T21 can help improve their perception. Our results suggest that this is the case. This work opens up clinical and theoretical perspectives: studying the perception of speech produced by people with atypical vocal tracts and control mechanisms makes it possible to evaluate the generality of the perception mechanisms put forward with typical speakers and to delimit their contours.
Le Cocq, Cécile. "Communication dans le bruit : perception de sa propre voix et rehaussement de la parole". Mémoire, École de technologie supérieure, 2010. http://espace.etsmtl.ca/274/1/LE_COCQ_C%C3%A9cile.pdf.
Laurent, Raphael. "COSMO : un modèle bayésien des interactions sensori-motrices dans la perception de la parole". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM063/document.
While speech communication is a faculty that seems natural, a lot remains to be understood about the nature of the cognitive representations and processes that are involved. Central to this PhD research is the study of interactions between perception and action during the production or perception of syllables. We choose Bayesian Programming as a rigorous framework within which we provide a mathematical definition of the COSMO model ("Communicating Objects using Sensori-Motor Operations"), which allows us to formalize motor, auditory and perceptuo-motor theories of speech communication and to study them quantitatively. This approach first leads to a strong theoretical result: we prove an indistinguishability theorem, according to which, given some ideal learning conditions, motor and auditory theories make identical predictions for perception tasks and therefore cannot be distinguished empirically. To depart from these conditions, we introduce an original “learning by accommodation” algorithm, which makes it possible to adapt to the ambient acoustic environment as well as to develop idiosyncrasies. This algorithm, which learns by mimicking acoustic targets, allows motor skills to be acquired from acoustic inputs only, with the remarkable property of focusing learning on the adequate regions. We use syllables synthesized by a vocal tract model (VLAM) to analyse how the different models evolve through learning and how robust they are to degradations.
Berdasco Muñoz, Elena. "La perception précoce de la parole chez les enfants prématurés et nés à terme". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCB233.
Prematurity is an important public health problem, affecting 1 in 10 babies worldwide every year. In France, preterm birth has steadily increased from 5.9% in 1995 to 7.3% in 2014. Research has demonstrated that prematurely born children are more likely than children born fullterm to encounter difficulties in language development and other cognitive domains. To date, knowledge of early language abilities in preterm infants remains limited. The first goal of this doctoral research was to specify different speech perception abilities in the first two years of life in preterm infants, comparing their abilities to those of fullterm infants of the same postnatal age. The second goal was to investigate whether degree of prematurity modulates linguistic performance across preterm infants. This thesis is organized in three experimental parts. First, we explored word segmentation (the ability to extract word forms) from fluent speech, an ability that is related to lexical acquisition. Our findings showed that basic segmentation abilities are in place in monolingual preterm infants at 6 months of postnatal age (Exp. 1), since they segment monosyllabic words just like their postnatal-age (Nishibayashi, Goyet, & Nazzi, 2015) and corrected-age (4-month-olds; Exp. 2) fullterm peers. However, we also found differences with fullterms. While 6-month-old preterms segment embedded syllables as fullterms do (Nishibayashi et al., 2015), the direction of the effect is reversed, suggesting differential processing mechanisms (Exp. 3). Moreover, at 8 months postnatal age, we failed to find evidence for a consonant bias in recognition of segmented word forms (Exp. 4) as found for fullterms of the same age (Nishibayashi & Nazzi, 2016). Nevertheless, French-dominant bilingual populations were found to segment monosyllabic words in French at 6 months, whether born pre- or full-term (Exp. 5).
In the second part, using eye-tracking techniques, we measured preterm and fullterm infants' scanning patterns of a talking face in the native (French) and a non-native (English) language. We found that preterm infants at 8 months postnatal age show different looking behavior from their fullterm counterparts matched on postnatal and maturational age. While fullterm infants showed different scanning patterns for a face speaking in the two languages, preterm infants showed similar scanning patterns for both languages (Exp. 6). These differential gaze patterns provide a first step towards characterizing the developmental course of audiovisual speech perception in preterm infants. The third part focused on lexical development. Our results show that preterm infants recognize familiar word forms at 11 months postnatal age (Exp. 7), hence at the same postnatal age as fullterm infants (Hallé & de Boysson-Bardies, 1994). With respect to word production at around 24 months of postnatal age (Exp. 8), we found that preterm infants have smaller vocabularies than fullterms of the same postnatal age, but as a group have similar levels to their fullterm, corrected-age peers. However, more preterm infants were below the 10th percentile than expected based on (fullterm) norms, which might constitute an index for early identification of (preterm) infants at risk for linguistic delays. Taken together, our results help us build a more detailed and nuanced picture of early language acquisition in preterm infants, and better understand the relative contribution of environmental input (i.e. exposure to unfiltered auditory and visual input after preterm birth) and brain maturation to this developmental trajectory.
Tran Ngoc, Anaïs. "Perception de la parole sifflée : étude de la capacité de traitement langagier des musiciens". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ2052.
Speech perception is a process that must adapt to a large amount of variability. These variations, including speaker-dependent differences in production, modify the speech signal. By using such modified speech signals in experimental studies, we can target certain aspects of speech and their role in the perceptual process. In this thesis, I considered a form of naturally modified speech known as “whistled speech” to further explore the role of acoustic phonological cues in the speech perception process. Variation, however, is not unique to speech production: it is also present among those perceiving speech and varies according to individual experience. Here, I analyzed the effect of classical music expertise on whistled speech perception. Whistled speech transposes the modal spoken speech signal into higher frequencies, corresponding to a register best perceived by human hearing. In our corpus, vowels are reduced to high whistled frequencies, in a pitch range specific to each vowel, and consonants modify these frequencies according to their articulation. First, we considered how naive listeners (who have never heard whistled speech before) perceive whistled speech. We targeted four vowels and four consonants, /i,e,a,o/ and /k,p,s,t/, which we considered in isolation or in a VCV form, and in whistled words (chosen to incorporate the target phonemes). We then considered the effect of musical experience on these categorization tasks, also taking an interest in the transfer of knowledge and the effect of instrument expertise. In these studies, we observed that naive listeners categorize whistled phonemes and whistled words well above chance, with a preference for acoustic cues that characterize consonants and vowels with contrasting pitches. This preference is nonetheless affected by the context in which the phoneme is heard (especially in words).
We also observed an effect of musical expertise on categorization: performance improved with experience and was strongest for high-level classical musicians. We attributed these differences to better use of acoustic cues, allowing for a transfer of skills between musical knowledge and whistled speech perception, though the performance conferred by musical experience remains much lower than that of participants with knowledge of whistled speech. These acoustic skills were also found to be specific to the instrument played, with flute players outperforming the other instrumentalists, particularly on consonant tasks. Thus, we suggest that training such as music improves performance on whistled speech perception according to the similarities between the sound signals, both in terms of acoustics and articulation.
Bedard-Giraud, Kimberly. "Troubles du traitement de la parole chez le dyslexique adulte". Toulouse 3, 2007. http://www.theses.fr/2007TOU30334.
Speech perception deficits may play a causal role in certain cases of developmental dyslexia. This research focuses on the perception of stop consonants in adult dyslexics. In the first study [temporal course of auditory evoked potentials (AEPs)], the cortical processing of the temporal cue (voice onset time) differentiating voiced and voiceless stops is analysed in dyslexics with persistent deficits. Two atypical electrophysiological patterns are observed: (i) AEP Pattern I is characterised by a differential coding of stimuli on the basis of some temporal cues, but with more AEP components and a delay in termination time; (ii) AEP Pattern II is characterised by an absence of differential coding based on temporal cues. The second study [source modelling and asymmetry of temporal processing] shows an atypical functional asymmetry of this temporal-cue processing in adult dyslexics, even in compensated cases with relatively normal AEP time courses. The third study [categorical perception and MMN] suggests how atypical temporal-cue processing may affect stop-consonant discrimination: AEP Pattern I may be associated with the coding of superfluous, non-phonetically-pertinent cues, while AEP Pattern II may be associated with a severe voiced/voiceless discrimination deficit. In the fourth study [McGurk effect], the integration of acoustic and visual cues in face-to-face speech perception is analysed in adult dyslexics. Compared to controls, dyslexics demonstrated less audiovisual integration, relying preferentially on acoustic cues. Together, these results are consistent with a speech perception deficit that affects multiple levels of processing in developmental dyslexia.
Gonseth, Chloe. "Multimodalité de la communication langagière humaine : interaction geste/parole et encodage de distance dans le pointage". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENS011/document.
Designating an object for the benefit of another person is one of the most basic processes in linguistic communication. It is most of the time performed through the combined use of vocal and manual productions. The goal of this work is to understand and characterize the interactions between speech and manual gesture during pointing tasks, in order to determine how much linguistic information is carried by each of these two systems, and eventually to test the main models of speech and gesture production. The first part of the study is about the production of vocal and manual pointing. The original aspect of this work is to look for distance-encoding parameters in the lexical, acoustic, articulatory and kinematic properties of multimodal pointing, and to show that these different characteristics can be related with each other and underlain by a similar basic motor behaviour: designating a distant object induces larger gestures, be they vocal or manual. This motor pattern can be related with the phonological pattern that is used for distance encoding in the world’s languages. The experimental design used in this study contrasts bimodal vs. vocal monomodal vs. manual monomodal pointing, and a comparison between these conditions reveals that the vocal and manual modalities act in bidirectional cooperation for deixis, sharing the informational load when used together. The second part of the study explores the development of multimodal pointing. The properties of multimodal pointing are assessed in 6-12 year-old children, in an experimental task similar to that of the adults. This second experiment attests a progressive evolution of speech/gesture interactions in the development of spatial deixis. It reveals that distance is preferentially encoded in manual gestures in children, rather than in vocal gestures (and especially so in younger children).
It also shows that the cooperative use of speech and manual gesture in deixis is already at play in children, though with more influence of gesture on speech than the reverse pattern. The third part of the study looks at sensorimotor interactions in the perception of spatial deixis. This experimental study, based on an intermodal priming paradigm, reveals that manual gesture plays a role in the production/perception mechanism associated with the semantic processing of language. These results can be related with those of studies on the sensorimotor nature of representations in the processing of linguistic sound units. Altogether, these studies provide strong evidence for an integrated representation of speech and manual gestures in the human linguistic brain, even at a relatively early age in its development. They also show that distance encoding is a robust feature, present in all aspects of multimodal pointing.
Signoret, Carine. "Exploration des mécanismes non conscients de la perception de la parole : approches comportementales et électroencéphalographiques". Phd thesis, Université Lumière - Lyon II, 2010. http://tel.archives-ouvertes.fr/tel-00562541.
Gonzalez Gomez, Nayeli. "Acquisition de relations phonologiques non-adjacentes : de la perception de la parole à l'acquisition lexicale". Phd thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00733527.
Sato, Marc. "Représentations verbales multistables en mémoire de travail : vers une perception active des unités de parole". Grenoble INPG, 2004. http://www.theses.fr/2004INPG0082.
In recent years, studies of the cerebral and cognitive systems involved in the control and analysis of actions have provided empirical evidence for a functional intertwining of the perception, execution and mental imagery of actions. Within the framework of speech sciences, the aim of this thesis was to test the existence of purely motor constraints in the emergence and analysis of mental phonological forms. The experimental paradigm of this work is that of the Verbal Transformation Effect, which rests on the concept of multistable speech perception and provides original access to sensorimotor interactions in relation to auditory imagery and working memory. By suggesting that phonology could be constrained, in part, by sensorimotor properties and that verbal working memory could rely on both acoustic and articulatory representations, our hypotheses converge towards the idea of a speech perception system directed towards, and shaped for, action control. Both behavioural and functional neuroimaging results confirm the existence of purely motor constraints in the multistable perception of speech and demonstrate the involvement of verbal working memory during the emergence of perceptual representations. During the executive control, mental simulation or perception of speech gestures, these "shared representations" could then form a basis for the recognition of speech units.
Dahan, Delphine. "Étude de la prosodie du français en parole continue : processus de production et de perception". Paris 5, 1994. http://www.theses.fr/1994PA05H087.
Prosody refers to the melody and rhythm of speech. This thesis deals with a specific prosodic phenomenon, i.e. the production of an emphatic accent on an element of the utterance, the latter becoming the focus of the utterance and conveying new information (the rheme), as opposed to the given information (the theme). An explanation of how prosodic phenomena convey linguistic information implies the postulate that speaker and listener share mental representations. In order to specify the processes involved in the production and perception of focusing conveyed by an emphatic accent, we analyzed the emphatic accents produced by several speakers, as well as their perception by listeners. It appears that the speaker produces a break in the listener's expected prosodic structure, the violation of these expectancies allowing the listener to perceive the presence of a linguistically relevant prosodic phenomenon.
González Gómez, Nayeli. "Acquisition de relations phonologiques non-adjacentes : de la perception de la parole à l’acquisition lexicale". Thesis, Paris 5, 2012. http://www.theses.fr/2012PA05H102/document.
Languages instantiate many different kinds of dependencies, some holding between adjacent elements and others between non-adjacent elements. Over the past decades, many studies have shown how infants' initial language-general abilities change into abilities attuned to the language they are acquiring. These studies have shown that, during the second half of their first year of life, infants become sensitive to the prosodic, phonetic and phonotactic properties of their mother tongue that hold between adjacent elements. However, no study to date has established sensitivity to non-adjacent phonological dependencies, which are a key feature of human languages. The present dissertation therefore investigates whether infants are able to detect, learn and use non-adjacent phonotactic dependencies. The Labial-Coronal bias, corresponding to the prevalence of structures starting with a labial consonant followed by a coronal consonant (LC, e.g. bat) over the opposite pattern (CL, e.g. tab), was used to explore infants' sensitivity to non-adjacent phonological dependencies. Our results establish that by 10 months of age, French-learning infants are sensitive to non-adjacent phonological dependencies (experimental part 1.1). In addition, we explored the level of generalization of these acquisitions. Frequency analyses of the French lexicon showed that the LC bias is clearly present for plosive and nasal sequences but not for fricatives. The results of a series of experiments suggest that infants' preference patterns are guided not by overall cumulative frequencies in the lexicon, or by frequencies of individual pairs, but by consonant classes defined by manner of articulation (experimental part 1.2). Furthermore, we explored whether the LC bias is triggered by maturational constraints or by exposure to the input.
To do so, we first tested the emergence of the LC bias in a population with maturational differences, that is, infants born prematurely (± 3 months before term), comparing their performance to a group of full-term infants matched in maturational age and a group of full-term infants matched in chronological age. Our results indicate that the pattern of preterm 10-month-olds resembles that of full-term 10-month-olds (same listening age) much more than that of full-term 7-month-olds (same maturational age; experimental part 1.3). Secondly, we tested a population learning a language with no LC bias in its lexicon, that is, Japanese-learning infants. The results of this set of experiments failed to show any preference for either LC or CL structures in Japanese-learning infants (experimental part 1.4). Taken together, these results suggest that the LC bias is triggered by exposure to the linguistic input and not only by maturational constraints. Finally, we explored whether, and if so when, phonological acquisitions during the first year of life constrain early lexical development at the level of word segmentation and word learning. Our results show that words with frequent phonotactic structures are segmented (experimental part 2.1) and learned (experimental part 2.2) at an earlier age than words with a less frequent phonotactic structure. These results suggest that prior phonotactic knowledge can constrain later lexical acquisition, even when it involves a non-adjacent dependency.
Lazard, Diane. "Réorganisation neurocognitive et perception de la parole après implantation cochléaire chez l'adulte sourd post-lingual". Paris 6, 2010. http://www.theses.fr/2010PA066465.
By restoring oral communication, the cochlear implant (CI) is one of the major medical developments of the 20th century. However, outcome varies, with at least 10% of rehabilitations failing. Peripheral predictors have been studied extensively but do not fully explain this variability. Cerebral functional exploration has enlarged the investigation of the cognitive impact on performance and has led to the notion of an “auditory brain”. The aim of this thesis was to further explore the influence of cognitive functions on CI outcome. Using a functional MRI paradigm with postlingually deaf adults who were candidates for a CI, we showed that cortical reorganization of auditory memory networks occurs during deafness. Phonological memory, necessary for speech perception and the associated audio-visual supplementation, progressively deteriorates with the duration of profound deafness, yielding maladaptive disinhibition of the right posterior superior temporal cortex. This process is driven by a prompt decline of environmental sound memory. Use of the dorsal network, based on visual, articulatory and motor associations and frequently observed as the dominant cognitive strategy, is a robust predictor of good CI performance. Conversely, enhanced neural activity in the ventral network, which relies on global identification and confrontation with stored representations, is associated with poor CI performance. These findings suggest that specific cognitive rehabilitation preserving auditory memory and its networks should be proposed to CI candidates.
Laguitton, Virginie. "Indices acoustiques et perception de la parole : nature et latéralisation hémispherique des processus de traitement (études comportementales et électrophysiologiques)". Rennes 2, 1997. http://www.theses.fr/1997REN20015.
In the present thesis, we studied the processing of two acoustic speech features: voice onset time (VOT) and place of articulation (PA). Behavioural and electrophysiological data were obtained from human subjects. Natural syllables, pronounced by a native French speaker, were presented to subjects in both an identification and a dichotic listening task. The group of subjects consisted of normal listeners and epileptic patients. The latter were candidates for surgical treatment of their epilepsy; for this purpose they had intra-cerebral electrodes implanted in the auditory cortex. They were also submitted to the Wada test, revealing their speech-dominant hemisphere. Both normal subjects and epileptic patients participated in behavioural experiments (identification of syllables; recording of reaction times). For the epileptic patients, intra-cerebral auditory evoked potentials were recorded. The results of the present study support the hypothesis that left-hemisphere dominance for speech is based not on the processing of general verbal characteristics of sounds but on specific acoustic aspects of the auditory information. Indeed, although both PA and VOT are phonetic features, in a dichotic listening protocol only VOT results in a perceptual asymmetry between the left and right hemispheres, correlated with the language-dominant hemisphere. Moreover, the results of experiments in which we varied the duration of the VOT show that the perceptual distinction between voiced and unvoiced consonants is based on the detection of successive acoustic events (voice and vowel), and that this auditory capacity is controlled by the left hemisphere (for right-handed subjects). These results are confirmed by the electrophysiological data, which show that in the left (but not the right) auditory cortex the processing of a syllable is time-locked to the distinct parts of the syllable, indicating temporal processing of the acoustic events of the syllable.
Crouzet, Olivier. "Segmentation de la parole en mots et régularités phonotactiques : Effets phonologiques, probabilistes ou lexicaux ?" Phd thesis, Université René Descartes - Paris V, 2000. http://tel.archives-ouvertes.fr/tel-00425949.
Cathiard, Marie-Agnès. "La perception visuelle de l'anticipation des gestes vocaliques : cohérence des évènements audibles et visibles dans le flux de la parole". Grenoble 2, 1994. http://www.theses.fr/1994GRE29052.
This thesis deals with the perception of anticipation for the two visible dimensions of the vowel-to-vowel syllabic modulation, i.e. rounding (i-y) and height (i-a). The first part consists of a thorough review of the literature on audiovisual speech perception (mainly the recovery of invariants and sound-sight desynchronisation) and on the production and perception of the coarticulation phenomenon. The second part evaluates visual perception of the rounding gesture across acoustic pauses. Rounding can thus be visually identified up to 210 ms before the sound. The identification boundary, its date and its slope, depends on articulatory anticipation. But for the same signal, this phenomenon is robust across different experimental conditions (view angle: front vs. profile; presentation of static images vs. dynamic sequences): a maximum variation of 40 ms is observed on boundaries, with differences appearing only in the transition phase, not at target positions. A motion benefit (30 ms at best) is obtained only for front views, profile views (static and dynamic) giving the best performances. Our interpretation draws on shape-from-motion processing: movement is useful to recover shape only when that shape is undersampled or not optimally projected, as is the case for rounding in front views (vs. profile ones). The third part of the thesis explores the coherence of the audiovisual flow by reducing the natural delay of the audio relative to the visual speech signal. The major result, obtained for both rounding and height anticipation, is that identification scores do not decrease as long as the sound does not come ahead of the visual boundary. When it precedes this boundary, a majority of subjects experience a conflict or are deceived by vision. The overall conclusion thus puts emphasis on configurational vs. timing constraints in speech.
Elsabbagh, Mayada M. A. "Mécanismes précurseurs de changement développemental dans la cognition : trajectoires d'organisation perceptuelle typiques et atypiques /". Montréal : Université du Québec à Montréal, 2005. http://accesbib.uqam.ca/cgi-bin/bduqam/transit.pl?&noMan=24713003.
Guellaï, Bahia. "Reconnaissance des visages par le nouveau-né : étude du rôle du langage, du regard et du mouvement". Paris 5, 2011. http://www.theses.fr/2011PA05H117.
A few hours after birth, newborns are already able to recognize faces. Research on unfamiliar face recognition at birth has used photographs and abstract stimuli. In everyday life, however, faces speak, look and move, and all of these elements perceived in combination could modulate face recognition at birth. The present work aimed to answer this question by presenting faces in interactive situations, using video films. A first series of experiments evidenced the influence of language on face recognition at birth. A second series showed that it is the association between direct gaze and language that matters for face recognition. A third series showed that rigid motion (of the head) and non-rigid motion (of internal features), presented together in synchrony with the flow of speech, as is the case in a talking face, facilitated face recognition by newborn infants. In addition, when faced with two abstract configurations of a talking face in which only the rigid and non-rigid movements are specified in synchrony with the speech, newborns can rapidly detect the configuration that is congruent with the sentence heard. By proposing a methodology closer to real-life situations, this work provides new evidence for early social-cognitive skills already present at birth.
Durafour, Jean-Pierre. "Études de sémantique génétique. Introduction à une phénoménologie de la perception du langage". Université Marc Bloch (Strasbourg) (1971-2008), 2000. http://www.theses.fr/2000STR20045.
Bogliotti, Caroline. "Perception catégorielle et perception allophonique : incidences de l'âge, du niveau de lecture et des couplages entre prédispositions phonétiques". PhD thesis, Université Paris-Diderot - Paris VII, 2005. http://tel.archives-ouvertes.fr/tel-00468920.
Texto completoLatinus, Marianne. "De la perception unimodale à la perception bimodale des visages : corrélats électrophysiologiques et interactions entre traitements des visages et des voix". Toulouse 3, 2007. http://www.theses.fr/2007TOU30028.
This thesis examined the processing of faces and voices, as well as the interactions between them, using evoked potentials, a technique that informs on the temporal course of these processes. My experiments on face processing revealed that faces successively recruit the three configural processes described in the literature; each process underlies a stage of face perception, from detection to identification. In the second part of this thesis, voice perception was addressed. I showed that voices are processed in a slightly different way than faces. In the last part of this thesis, bimodal interactions between auditory and visual information were investigated using gender categorisation of faces and voices presented simultaneously. This study reinforced the view that face and voice processing differ: information carried by faces overruled voice information in gender processing. A summary model is presented at the end of the thesis. This model suggests that face and voice processing differ owing to the specialisation of the auditory and visual systems in verbal and non-verbal communication, respectively; these differences lead to a dominance of visual information in non-verbal social interactions and a dominance of auditory information in language processing
Leclère, Thibaud. "Towards a binaural model for predicting speech intelligibility among competing voices in rooms". Thesis, Vaulx-en-Velin, Ecole nationale des travaux publics, 2015. http://www.theses.fr/2015ENTP0008/document.
This PhD work aims to propose a model predicting the perceived intelligibility of a target speech signal masked by competing sources in rooms. An existing model developed by Lavandier and Culling (2010) can already predict the intelligibility of a near-field target in the presence of multiple noise sources. The present work deals with the new implementations and experimental work needed to extend the model to the case of a distant target and to the case of masking voices, which present acoustical properties different from those of noise (envelope fluctuations, fundamental frequency, modulations of fundamental frequency). The detrimental effect of reverberation on the target speech has been successfully implemented. This new version of the model provides a unified interpretation of several perceptual effects previously observed in the literature, but it presents a room dependency that limits its predictive power. Experimental work was conducted to determine how the model could account for sources presenting different spectra, and for several auditory mechanisms operating simultaneously (F0 segregation, spatial unmasking and temporal dip listening)
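Binaural intelligibility models of this family typically combine, per frequency band, a better-ear signal-to-noise ratio with a binaural unmasking advantage, then average across bands with intelligibility weights. The sketch below illustrates only that general scheme: the band SNRs, weights and unmasking values are invented example numbers, not Lavandier and Culling's actual parameters or equations.

```python
def effective_snr(left_snr, right_snr, bmld, weights):
    """Frequency-weighted effective target-to-masker ratio in dB.

    Per band: take the better-ear SNR, add the binaural unmasking
    advantage (BMLD), then average across bands with band-importance weights.
    """
    assert len(left_snr) == len(right_snr) == len(bmld) == len(weights)
    acc = 0.0
    for l, r, b, w in zip(left_snr, right_snr, bmld, weights):
        acc += w * (max(l, r) + b)   # better ear plus binaural advantage
    return acc / sum(weights)

# Three example bands (low, mid, high): the masker sits to the right, so the
# left ear has the better SNR, and low bands gain most from binaural unmasking.
snr = effective_snr(left_snr=[-3.0, 0.0, 2.0],
                    right_snr=[-9.0, -6.0, -4.0],
                    bmld=[6.0, 3.0, 1.0],
                    weights=[1.0, 2.0, 1.0])   # 3.0 dB effective ratio
```

A higher effective ratio maps (via a psychometric function) to higher predicted intelligibility; extending such a model to speech maskers means the masker-dependent terms (envelope dips, F0 cues) must also modulate these band values.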
Connan, Pierre-Yves. "Étude, à la lumière des temps de réaction, des stratégies lexicales et des analyses phonético-acoustiques en reconnaissance lexicale auditive, à partir de parole naturelle". Université Marc Bloch (Strasbourg) (1971-2008), 1998. http://www.theses.fr/1998STR20014.
While daily practice of oral communication shows how efficient the general processing of perception and comprehension of spoken utterances is, spoken word recognition remains an extremely complex phenomenon. Ongoing speech is naturally 'directional' in time but often incomplete, variable and very difficult to segment into discrete units. All these features seem incompatible with the apparent ease of understanding spoken language. This complexity is also due to the numerous steps (access, selection and integration) that constitute lexical processing, and to the multiple relationships that exist within mental representations: phonological, morphological and semantic dimensions can interact at different levels and times in these processes. This study, based on a lexical decision task and on behavioural measurements (reaction times), should enable a better understanding of the organization of word recognition strategies. A major question addressed here is whether auditory word recognition is facilitated (priming paradigm) when a word or non-word prime and target share the same initial sequence, whose status, whether phonological or morphemic, may change the conditions of access to the mental lexicon. The results show a lack of phonological priming effect and the specific status of the initial morphemic syllable (prefix) as a factor facilitating lexical decision. The data, from large groups of untrained French listeners classified by sex and age, are discussed in relation to interactive lexical recognition models such as the cohort theory, which have shown the priority of the acoustic-phonetic analysis of the incoming speech signal ('bottom-up' information), the importance of word onsets and the role of 'top-down' information and processes
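The priming effect in such a lexical decision study is measured as the reaction-time difference between a related-prime condition and an unrelated control. A minimal sketch of that contrast, using invented example latencies rather than the thesis's data:

```python
def mean(xs):
    """Arithmetic mean of a list of reaction times (ms)."""
    return sum(xs) / len(xs)

def priming_effect(control_rts, primed_rts):
    """Facilitation in ms: positive means the prime speeds up recognition."""
    return mean(control_rts) - mean(primed_rts)

# Hypothetical lexical-decision latencies (ms) for one listener group.
control   = [612, 598, 640, 605, 620]  # unrelated prime condition
morphemic = [575, 560, 590, 570, 580]  # shared-prefix (morphemic) condition
effect = priming_effect(control, morphemic)   # 40.0 ms facilitation
```

In the pattern reported above, the morphemic condition would show a positive effect of this kind while a purely phonological overlap condition would show an effect near zero.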