Dissertations on the topic "Contrôle sensorimoteur de la parole"
Browse the top 50 dissertations for research on the topic "Contrôle sensorimoteur de la parole".
Li, Jinyu. "Interaction entre structure rythmique et sens d’agentivité en production de la parole." Electronic Thesis or Diss., Paris 3, 2023. http://www.theses.fr/2023PA030119.
To adapt to unforeseen circumstances during speech production, the motor system integrates sensory information (e.g., auditory feedback) and benefits from rhythmic grouping, which is conveyed by prosody. However, a speaker's sensorimotor system processes acoustic events related to their own voice differently from those of others. This thesis examines the flexibility of speech production by analyzing the organizing role of both prosody and a speaker's subjective sensation of control over their voice (i.e., the sense of agency related to their voice). Auditory feedback perturbation experiments were conducted with French-speaking female speakers. With delayed auditory feedback (DAF), the duration difference between accented and unaccented vowels increased, highlighting greater flexibility during accent production. Furthermore, DAF induced a reorganization of speech rhythm with enhanced syllabic grouping. With a constant shift in the fundamental frequency (f0) of the auditory feedback, the majority of speakers aligned their f0 with the modified feedback, suggesting that their sensorimotor system processed the perceived voice as an external input. The simultaneous presence of DAF and an f0 shift reduced the DAF effects compared to the condition without an f0 shift. This observation suggests a reduction in the speakers' sense of agency over their voice, as well as an interaction between rhythmic organization and the sense of agency in the sensorimotor processes of speech production.
Grabski, Krystyna. "Les cartes sensorimotrices de la parole : Corrélats neurocognitifs et couplage fonctionnel des systèmes de perception et de production des voyelles du Français." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00753249.
Caudrelier, Tiphaine. "Transfert d’apprentissage sensorimoteur et développement des unités de parole." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAS008.
Speech motor control has traditionally been studied apart from the other cognitive processes underlying speech production, since early cognitive theories presented the brain as a set of relatively independent modules (Fodor & Pylyshyn, 2007), considered separately from the body. However, developments over the last three decades in embodied cognition (Varela, Thompson, & Rosch, 1991), grounded cognition (Barsalou, 2008), and dynamic systems (Smith & Thelen, 2003) underline that cognition cannot be considered separately from a body and its environment. These frameworks are an inspiration for this thesis and a motivation to study motor control and sensorimotor processes in relation to other cognitive processes. Whether linguistic structures are grounded in sensorimotor processes is an underlying question. A spoken message can be decomposed into hierarchically structured sequences of linguistic units. We argue that these speech units are grounded in sensorimotor representations, associating linguistic structures with auditory and motor information. Do these units correspond to words? Syllables? Phonemes? To probe the building blocks of speech production, we use a paradigm of auditory-motor learning based on auditory feedback perturbation (Caudrelier & Rochet-Capellan, in press). This paradigm makes it possible to alter specific internal sensorimotor representations in speakers: adaptation updates the sensorimotor representations underlying the production of the training item. We assume that if this change affects the pronunciation of another word, that word shares some of the updated representations. Transfer patterns may thus reveal the structure of the representations at stake. A first study in adults shows that transfer of auditory-motor learning occurs at the word, syllable, and phoneme levels in parallel (Caudrelier, Schwartz, Perrier, Gerber, & Rochet-Capellan, 2018).
These observations suggest that all these units may jointly contribute to the organization of speech articulation in adult speakers. The experimental results are discussed in light of existing theories and models of speech production. A second experiment suggests that whether a speaker reads a word aloud or names a picture may influence the transfer of auditory-motor learning (Caudrelier, Perrier, Schwartz, & Rochet-Capellan, 2018). A third study in 4- to 5-year-old and 7- to 8-year-old children investigates whether phoneme sensorimotor representations emerge during reading acquisition or prior to it (Caudrelier et al., in revision). The observed transfer patterns suggest that phoneme representations emerge before reading acquisition, as a consequence of speech experience. Moreover, we found a relationship between adaptation to auditory perturbation and phonological awareness scores in both age groups, suggesting a link between sensorimotor representations and more explicit phonological representations. The potentially causal or predictive nature of this link is discussed. Overall, this work exploits an original and fruitful tool to probe speech representations and study their development. It may have clinical implications for speech rehabilitation, as well as for developmental dyslexia. It also highlights connections between the sensorimotor level of speech and higher linguistic and contextual levels that further question the nature of speech representations.
Brerro-Saby, Christelle. "Espèces réactives de l'oxygène et contrôle sensorimoteur musculaire." Aix-Marseille 2, 2009. http://www.theses.fr/2009AIX20691.
Bonnard, Mireille. "Contrôle volontaire d'un automatisme sensorimoteur : la locomotion humaine." Aix-Marseille 2, 1991. http://www.theses.fr/1991AIX22029.
Imbeault, Marie-Andrée. "Caractérisation du frisson chez l’humain et ses effets sur les comportements sensorimoteurs." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31461.
Jégou, Mathieu. "Coordination des tours de parole par le couplage sensorimoteur continu entre utilisateurs et agents." Thesis, Brest, 2016. http://www.theses.fr/2016BRES0061/document.
In this thesis, we present a model for the coordination of speaking turns in dyadic interactions between users and agents. According to a common view, coordinating turns means avoiding overlaps and reducing silences between turns; by optimizing turn transitions between users and agents, the user's experience is expected to improve. However, observations of human conversations show a more complex coordination of speaking turns: awkward silences and overlaps, competitive or not, are common. In order to improve the credibility and naturalness of the interaction, the same variability of situations must be observed in a user-agent interaction. Nevertheless, the coordination of speaking turns is complex by nature: it is managed by the interaction between participants more than controlled by any one participant alone. To capture this complexity, we elaborated a model emphasizing the continuous sensorimotor coupling between the user and the agent. As a result of this coupling, the agent's behavior is not entirely controlled by the agent but is an emergent property of the interaction between the user and the agent. We show the capacity of our model to give rise to the different situations linked to the coordination of speaking turns, both in interactions between two agents and between a user and an agent.
Couraud, Mathilde. "Etude du contrôle sensorimoteur dans un contexte artificiel simplifié en vue d'améliorer le contrôle des prothèses myoélectriques." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0287/document.
Upper limb amputation, although quite rare, induces an enormous loss of autonomy in most daily life activities. To overcome this loss, current myoelectric prostheses offer a multitude of possible movements. However, the control of these movements is typically non-intuitive and cognitively demanding, leading to a high abandonment rate in response to the long and tedious learning involved. In this thesis, we aimed to identify the difficulties and gaps of myoelectric controls compared to natural sensorimotor control, with the long-term goal of informing the design of better solutions for prosthesis control. To do so, we manipulated several experimental conditions in a simplified human-machine interface, in which non-amputated subjects controlled a cursor on a computer screen from isometric contractions, i.e., muscle contractions produced in the absence of joint movement. This isometric condition was designed to approximate the situation in which an amputee controls a myoelectric prosthesis using the electrical activity (EMG) of his or her residual muscles, without movement of the missing limb. During aiming movements, we demonstrated the benefits of adapting the decoder that translates muscle activities into cursor movement in conjunction with the subject's own adaptation of the planned movement direction in response to oriented perturbations. Furthermore, these benefits were shown to be even greater when the artificial decoder adaptation was inspired by a model of human adaptation. In reaching and tracking movements toward fixed and moving targets, which increasingly involve online movement regulation, we revealed the importance of immediate congruency between sensorimotor information and the cursor position on the screen for timely and efficient corrections.
For conditions in which the noise level of the control signal is relatively low, such as when using force, which is more stable than the usual EMG signal, this congruency partly explains the better performance obtained with zero-order (i.e., position) control compared to first-order (i.e., velocity) control. However, when the noise level increases, as is the case with EMG signals, the filtering property of the integration involved in velocity control elicits better performance than position control. Taken together, these results suggest that an intuitive and adaptive decoder that judiciously supplies and complements natural sensorimotor feedback loops is a promising way to facilitate future prosthesis control.
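The position-versus-velocity trade-off under noise described above can be illustrated with a minimal simulation (a sketch with invented parameters, not the thesis's actual interface): the integration step of a first-order (velocity) controller low-pass filters a noisy command, whereas a zero-order (position) mapping passes the noise straight to the cursor.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, target = 0.01, 2000, 1.0
noise = 0.3 * rng.standard_normal(n)  # EMG-like noise on the control signal

# Zero-order (position) control: the noisy signal maps directly to cursor position.
pos_cursor = target + noise

# First-order (velocity) control: the same noisy signal, here encoding a
# corrective velocity toward the target, is integrated over time; the
# integration acts as a low-pass filter on the noise.
vel_cursor = np.zeros(n)
for t in range(1, n):
    command = 2.0 * (target - vel_cursor[t - 1]) + noise[t]
    vel_cursor[t] = vel_cursor[t - 1] + command * dt

jitter_position = pos_cursor[n // 2:].std()  # roughly the raw noise level
jitter_velocity = vel_cursor[n // 2:].std()  # much smaller after integration
```

With these values the velocity-mode cursor settles on the target with an order of magnitude less jitter than the position mode, mirroring the advantage the abstract reports for noisy EMG signals.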
Deborne, Renaud. "Modélisation de l'adaptation des conducteurs au comportement du véhicule et expérimentations sur simulateur." Phd thesis, Ecole Centrale Paris, 2009. http://tel.archives-ouvertes.fr/tel-00453301.
Deffains, Marc. "Rôle du striatum sensorimoteur dans le contrôle des séquences motrices automatisées chez le primate." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10087.
It is well known that the striatum, especially its sensorimotor part, is involved in the expression of motor skills that require the production of a sequence of movements. In this study, we addressed the respective contributions of the efferent neurons and cholinergic interneurons of the striatum to the processes underlying the expression of motor sequences, by recording the single-unit activity of these two neuronal populations in monkeys performing sequential arm-reaching movements. With this experimental approach, we examined the activity modulations of these neurons during a change in the conditions of performance of the motor sequence. By changing the habitual order or the temporal structure of the sequence, we showed that, within the sensorimotor striatum, efferent neurons and cholinergic interneurons are involved in processing the spatial and temporal information that characterizes an automatic motor sequence. In addition, we reported differential activation of these two neuronal populations depending on whether the serial order of the sequence of movements is visually cued or based on internally stored information. Taken together, these results provide essential information for better understanding the neuronal mechanisms involved, within the sensorimotor part of the striatum, in the control of automatic motor sequences.
Pialasse, Jean-Philippe. "Évaluation du contrôle sensorimoteur chez les patients ayant une scoliose idiopathique de l'adolescent : vers un biomarqueur des troubles sensorimoteur basé sur la stimulation vestibulaire galvanique." Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26968.
Scoliosis is the most frequent spinal deformity in adolescence. In 80% of cases it is idiopathic, meaning that no cause has been identified. Idiopathic scoliosis seems to fit a multifactorial model including genetic, environmental, neurological, hormonal, biomechanical, and skeletal growth factors. One neurological hypothesis is that an anomaly of the vestibular system causes asymmetrical activation of the vestibulospinal pathway and of the paraspinal muscles, a cascade that would generate the scoliotic deformity. Animal models have demonstrated this possibility. In addition, many vestibular-related anomalies, such as vestibulo-ocular reflex abnormalities or balance control disorders, are observed in adolescents with scoliosis. Galvanic vestibular stimulation makes it possible to explore sensorimotor control by altering the vestibular afferents. The objective of this thesis is to explore sensorimotor control through vestibular-evoked postural responses in patients with scoliosis and healthy controls. The results of the first study show that the vestibular-evoked postural response is larger in patients than in controls; moreover, the amplitude of the postural response does not scale with the amplitude of the spinal deformation. In a second study, using a neuromechanical feedback control model, we demonstrate that patients assigned a larger weight to the vestibular signal than controls did. The results of the third study reveal that young adults with idiopathic scoliosis also have a larger postural response than controls; this observation rules out a transient response due to the maturation of the nervous system. Furthermore, the balance control impairment seems secondary to a neurosensory phenomenon, as balance control dysfunction is still observed in patients whose surgery reduced the spinal deformation. Ultimately, an algorithm was developed to distinguish patients with and without sensorimotor control problems from healthy adolescents.
Remarkably, the amplitude of the feedforward vestibular response of these patients is larger, and they assign a larger weight to vestibular than to proprioceptive information. Overall, this thesis proposes a procedure to identify patients with scoliosis who have sensorimotor control impairment. This classification procedure may help future clinical studies, as patients with sensorimotor dysfunction could be identified; hopefully, future research will refine this procedure and lead to an efficient biomarker.
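The weighting idea behind the neuromechanical model can be reduced to a toy computation (an illustrative sketch; the weights and error values below are invented, not taken from the study): if the evoked postural response is a weighted sum of vestibular and proprioceptive error signals, increasing the vestibular weight alone enlarges the response.

```python
def postural_response(vestibular_error, proprioceptive_error, w_vestibular):
    """Toy feedback model: response as a convex combination of sensory errors."""
    return w_vestibular * vestibular_error + (1 - w_vestibular) * proprioceptive_error

# Same sensory errors, different weighting (hypothetical values):
control_response = postural_response(1.0, 0.2, w_vestibular=0.3)  # healthy control
patient_response = postural_response(1.0, 0.2, w_vestibular=0.7)  # larger vestibular weight
```

Under identical stimulation, the hypothetical patient's response (0.76) exceeds the control's (0.44) purely through the heavier vestibular weighting, which is the pattern the thesis reports.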
Ansermin, Eva. "Entraînement rythmique non intentionnel : étude et modélisation d'un contrôle sensorimoteur pour la coordination homme/robot." Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1008.
In this thesis, we address issues related to the integration of robots into a social and realistic environment. This context involves human/robot interactions in which robots must be able to adapt not only to changes but also to humans. The singularity of the present approach lies in the inclusion of the unintentional character of interpersonal coordination. We thus defend the interest of including a low-level rhythmic entrainment effect, inspired by the theory of dynamical systems, in an oscillator-based sensorimotor control for human/robot coordination. The first part of this work is dedicated to the study of biological movement, more precisely to its kinematic characteristics, their specificities, and their origins. We point out that the kinematic invariants under study can be attributed to the body's morphology as well as to the external forces it is subjected to (gravity, inertia). A natural movement that is well integrated into the motor repertoire minimizes energy cost by exploiting gravity. The question of modeling this characteristic on a robot with its own mechanical constraints leads us to oscillatory controllers. In this way, we put forward a model of low-level rhythmic entrainment based on the theory of dynamical systems. This model integrates visual data (optical flow) into an oscillator that controls a robot: the visual input alters the oscillator's phase and frequency and thus allows the robot to synchronize with its partner's movements. Through experiments in non-laboratory conditions, we validate the architecture and show that this approach reproduces the rhythmic entrainment loops observed during human/human interactions. These results allow us to implement a model for learning rhythmic movements by imitation, founded on an oscillator-based decomposition of motor trajectories.
Notably, we show that adding a rhythmic entrainment effect to the oscillator bank provides a flexibility that greatly simplifies learning and the convergence toward the desired frequencies and phases. This approach, however, raises various problems typical of rhythmic controllers, such as phase management. We therefore present a new architecture using several connected sets of oscillators, which allows the implementation of a feedback loop that maintains the already-learned frequencies and phases. The models are implemented and validated on robots, and their efficiency is justified and quantified during interactions with naïve subjects. As an application framework, we suggest a possible solution to the problem of grasping and handing over an object between a human and a robot.
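The low-level rhythmic entrainment described above can be sketched as a phase oscillator with sinusoidal coupling (a generic dynamical-systems toy model with invented gains, not the thesis's architecture): the robot's oscillator is pulled toward the partner's phase and, once entrained, oscillates at the partner's frequency.

```python
import numpy as np

dt, steps = 0.001, 20000
w_robot = 2 * np.pi * 1.0     # robot oscillator's natural frequency (rad/s)
w_partner = 2 * np.pi * 1.3   # partner's movement frequency (rad/s)
k = 4.0                       # coupling gain (stands in for the visual input)

phi_robot, phi_partner = 0.0, 1.0
history = []
for _ in range(steps):
    phi_partner += w_partner * dt
    # The coupling term nudges the robot's phase toward the partner's phase.
    phi_robot += (w_robot + k * np.sin(phi_partner - phi_robot)) * dt
    history.append(phi_robot)

# Once entrained, the robot's effective frequency matches the partner's (~1.3 Hz).
f_locked = (history[-1] - history[-5000]) / (5000 * dt) / (2 * np.pi)
```

Phase locking requires the coupling gain to exceed the frequency mismatch (here k = 4.0 > 2π·0.3 ≈ 1.9 rad/s); with weaker coupling the robot drifts instead of entraining.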
Munuera, Jérôme. "Mécanismes et bases neurales du contrôle sensorimoteur des saccades oculaires chez l’Homme et le macaque." Thesis, Lyon 1, 2010. http://www.theses.fr/2010LYO10005.
Looking at or grasping an object are simple, seemingly trivial actions. However, these movements require complex processing of sensory and motor information in order to compensate for the natural variability within the sensorimotor system. A key concept describes these control processes: internal models. These models are dynamical representations of the state of our effectors, supported by a network of cerebral areas, which allow a comparison between the desired (perfect) movement and the realized (noisy) movement. When a difference is perceived, a motor error (ME) signal is sent in order to adjust the ongoing movement. We performed a first study with human subjects to define the role of internal models during a simple sensorimotor action: a saccade. We developed an original task introducing artificial motor noise (an intrasaccadic target jump) during a sequence of saccades. The results validate the existence of an optimal sensorimotor control mechanism and confirm the predictions of a model based on Kalman filter theory. This optimal control implies a balance between the reliability assigned to the desired movements and that assigned to the executed movements, as a function of their uncertainty (correlated with their variability). We then investigated the neural substrates of ME estimation by adapting our protocols for use with rhesus monkeys. We recorded the electrophysiological activity of single neurons and performed reversible inactivations in the lateral intraparietal area (LIP), a key area for visuo-saccadic integration. Our results suggest that the parietal cortex plays a role in the motor adjustment of the saccadic system. We postulate that the parietal cortex could accumulate evidence (i.e., an error signal given by the efference copy and sensory feedback) on the necessity to perform a corrective saccade: when the amount of evidence exceeds an error threshold, the decision to trigger a correction could be made.
This process could allow the optimization of these motor actions in a noisy sensorimotor context.
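The uncertainty-weighted balance the study attributes to a Kalman filter can be shown in one scalar update step (a textbook sketch, not the thesis's model; the numbers are illustrative): the gain, and hence the weight given to sensory feedback over the internal prediction, grows with the prediction's variance and shrinks with the feedback's variance.

```python
def kalman_update(x_pred, p_pred, z, r):
    """One scalar Kalman update: fuse a prediction (variance p_pred) with a
    measurement z (variance r), weighting each by its reliability."""
    gain = p_pred / (p_pred + r)          # 0 = trust prediction, 1 = trust feedback
    x_new = x_pred + gain * (z - x_pred)  # estimate shifts toward the measurement
    p_new = (1 - gain) * p_pred           # fused estimate is less uncertain
    return x_new, p_new, gain

# Reliable prediction + noisy feedback: small gain, the prediction dominates.
_, _, gain_low = kalman_update(x_pred=10.0, p_pred=0.1, z=12.0, r=1.0)
# Unreliable prediction + precise feedback: large gain, the feedback dominates.
_, _, gain_high = kalman_update(x_pred=10.0, p_pred=1.0, z=12.0, r=0.1)
```

This is exactly the reliability trade-off mentioned in the abstract: the same sensory evidence moves the estimate a little or a lot depending on the relative uncertainties.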
Vie, Bruno. "Le contrôle sensorimoteur du pied lors de la course et de la contraction statique fatiguante." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM5059/document.
The sensorimotor control of foot placement and motion plays a key role in the adaptive response of human beings to their environment. Both sensory and motor components are needed to control foot placement during gait and posture, and the mechanoreceptors of the foot sole provide major information on body position. First, we established a protocol to quantify the sensation of foot sole pressure stimulation, which allowed us to examine the effects of metatarsal pads and heel lifts in healthy subjects. We observed that 30 days of occupational activities with metatarsal pads elicited significant changes in sensation, lowering the detection threshold for the lowest pressure loads and, depending on the pattern of foot placement during upright standing and walking, modifying the global gain of foot sensation. Second, we examined the consequences of a fatiguing static contraction of the foot invertor muscles (tibialis anterior, TA) and of maximal running exercise on a treadmill on post-test changes in foot placement, measured with a baropodometer, and in maximal force production by the TA. Power spectrum analyses of electromyographic (EMG) activity were performed during both static and dynamic efforts, and we also explored the myotatic reflexes by recording the tonic vibratory response (TVR) of foot muscles. Our results showed significant post-test changes in foot placement in the direction of foot eversion in both situations, a significant decrease in maximal inversion force, a leftward shift of the EMG spectrum in the TA muscle indicating EMG signs of fatigue, and a significant reduction of TVR amplitude in the TA muscle after sustained static effort.
Scotto di Cesare, Cécile. "Processus d'intégration des informations visuelles et gravito-inertielles pour l'orientation spatiale et le contrôle sensorimoteur." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4117.
This dissertation investigates the process of integrating visual and gravitoinertial cues at the origin of perceptual-motor skills. To that end, we manipulated the sagittal orientation of a visual scene, of the body, and of the gravitoinertial vector by means of scene and body rotations, as well as centrifugation. Self-orientation perception and target localization were analyzed during these manipulations. In three experiments, we modulated several factors associated with (i) the presentation of the visual and gravitoinertial stimulations (e.g., rotation dynamics: fast vs. slow), (ii) the combination of these stimulations (i.e., spatial congruence vs. non-congruence), (iii) the task (i.e., self-tilt detection, continuous and discrete arm-pointing movements), and (iv) individual characteristics (i.e., perceptive style). Overall, we show that sensory integration rules depend on these interacting factors. Two global effects on sensory weighting were revealed: (i) spatial non-congruence between stimulations induces a relative gravitoinertial dominance, whatever the task or the properties of the visual scene; (ii) by contrast, spatial congruence between stimulations is associated with sensory weighting rules that are task-dependent (i.e., perceptive vs. sensorimotor).
Payan, Yohan. "Modèle biomécanique et contrôle de la langue en parole." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0221.
Steiner, Kelly. "Étude des mécanismes du contrôle sensorimoteur pour la spécification de nouvelles métriques d'évaluation du coût cognitif." Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS064.
In the field of aeronautics, and particularly in neuroergonomics, assessing the cognitive load on pilots is essential to reduce flight risks and to improve the design and understanding of cockpit elements (flight controls, pilot aids). The complexity of the piloting task (its sensorimotor and cognitive aspects) determines the workload associated with different flight situations through flying qualities, which can be defined as the ease with which an aircraft can be controlled by the pilot. This notion of ease is assessed through the pilot's cognitive workload when controlling the aircraft's movement. Various types of measures of cognitive load exist, such as subjective, physiological, or performance measures, but subjective measures are the most widely used. All these measures have advantages and limitations in aeronautical applications, where the environment is highly demanding and constraining. Thus, an approach based on the pilot's action control could complement these measures, the subjective ones in particular. To establish an activity-dependent measure, a detailed, multi-level understanding of the action control mechanisms associated with variations in cognitive load is necessary; this approach could address the limitations of some existing measures. In this context, Fitts' law is an interesting tool for studying motor behavior during movement execution in standardized environments involving variations in cognitive load. With this method, it is possible to characterize the relationship between certain motor control mechanisms and the measure of cognitive load associated with the task. The aim of this thesis is to provide metrics for assessing cognitive load, based on the modeling and characterization of the motor control mechanisms observed in stick activity.
This study investigates the relationship between aspects of motor control (kinematics, electromyography) and a measure of cognitive load (the NASA-TLX questionnaire) across different levels of environmental constraint. To establish this approach, we pursued the following objectives: (1) characterize the relationship between task difficulty, motor control, and movement optimality (kinematics, EMG); (2) characterize the relationship between the different levels of movement analysis and the workload measures classically used in ergonomics (NASA-TLX), then describe their sensitivity to task difficulty; (3) evaluate our metrics in an ecological helicopter flight situation. To address the first two objectives, we carried out an initial laboratory experiment, a classic Fitts reciprocal pointing task. To meet the last objective, we carried out flight tests involving an ADS-33 maneuver similar to a Fitts task in the field.
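Fitts' law, the tool the thesis builds on, relates movement time to an index of difficulty. A minimal version in Fitts' original formulation (the regression coefficients a and b below are hypothetical; in practice they are fitted per subject and task):

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' original index of difficulty, in bits: ID = log2(2D / W)."""
    return math.log2(2 * distance / width)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s): MT = a + b * ID, with illustrative a and b."""
    return a + b * index_of_difficulty(distance, width)

# Doubling the distance or halving the target width adds exactly one bit.
id_easy = index_of_difficulty(distance=8.0, width=2.0)   # log2(8)  = 3 bits
id_hard = index_of_difficulty(distance=16.0, width=1.0)  # log2(32) = 5 bits
```

The slope b is what makes the law useful here: a steeper fitted slope for the same pointing task indicates a costlier control regime, which is the kind of activity-dependent metric the thesis aims at.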
Beaulieu, Louis-David. "Interventions non invasives en phase chronique post-AVC : rôle des afférences proprioceptives sur la plasticité cérébrale et le contrôle sensorimoteur." Doctoral thesis, Université Laval, 2016. http://hdl.handle.net/20.500.11794/27412.
Stroke is a major health problem worldwide. Rehabilitation treatments aim to reduce the individual and societal burden caused by stroke by improving functional independence and sensory and motor impairments. However, most survivors retain chronic sequelae despite access to intensive, specialized rehabilitation care. Research is therefore turning to novel, safe technologies in an attempt to exceed the gains obtained in the clinic. Among the approaches currently under evaluation, peripheral neurostimulation devices appear to promote the recovery of sensorimotor functions through the massive production of somatosensory (cutaneous and proprioceptive) information. These afferents would force the lesioned central nervous system to adapt, opening a time window during which the brain would be in a better state to receive therapy. Despite these interesting findings, a lack of knowledge limits the clinical transfer of these approaches. In particular, the respective roles of proprioceptive vs. cutaneous afferents in the effects of peripheral neurostimulation remain poorly understood. The main objective of this thesis was to determine whether the nature of the sensory afferents recruited by peripheral neurostimulation has an impact on brain plasticity and sensorimotor impairments in chronic stroke patients.
More specifically, the five doctoral studies aimed to: (i) assess the measurement properties (reliability and minimal detectable change) of a neurophysiological tool (transcranial magnetic stimulation, TMS) used in the thesis to test brain plasticity (studies 1 and 2); (ii) deepen knowledge of the application parameters and the afferents produced by two peripheral neurostimulation approaches, rPMS (repetitive peripheral magnetic stimulation) and NMES (neuromuscular electrical stimulation) (study 3); (iii) develop a standardized approach for inducing movement illusions by muscle-tendon vibration (VIB) and begin its validation process (study 4); (iv) determine the influence of sensory afferents on brain plasticity and sensorimotor recovery in people with chronic stroke by comparing the acute effects of three peripheral neurostimulation interventions (NMES, rPMS, VIB) with an exercise session (study 5). Overall, the results of these studies support: (i) that the current evidence does not allow conclusions on the reliability of TMS measures, but that the observed measurement errors favor using these measures to track group rather than individual changes; (ii) that our standardized procedure using VIB-induced movement illusions is valid in individuals with chronic stroke; and (iii) that preferential recruitment of proprioceptive afferents appears more effective in promoting brain plasticity and improving sensorimotor impairments in individuals living with the chronic sequelae of a stroke. However, before a potential clinical transfer of the approaches studied in the thesis can be considered, further studies will need to replicate our results and pursue the various questions raised.
Vincent, Damien. "Analyse et contrôle du signal glottique en synthèse de la parole." Télécom Bretagne, 2007. http://www.theses.fr/2007TELB0030.
The underlying technology of current speech synthesis systems is corpus-based synthesis. It relies on selecting an optimal sequence of acoustic units with respect to the synthesis context. This approach, which minimizes the concatenation cost, can generate natural speech as long as only a reading style is considered. However, the true acceptability of speech synthesis technology depends on the ability to reproduce expressive patterns and various vocal qualities. To fulfill these expectations, more in-depth studies of speech signal characterization must be carried out. This thesis deals with the explicit introduction of speech production mechanisms into synthesis. The first part addresses the decomposition of speech into a source component, namely the glottal wave resulting from vocal fold vibration, and a filter component modeling the vocal tract. To solve this deconvolution problem, we rely on an ARX-LF model, which introduces prior information on the glottal wave into a linear speech production model via the LF (Liljencrants-Fant) model. Estimating the model parameters in the least-squares sense results in a complex non-linear optimization problem; we devised an efficient method based on decoupled parameter estimation and on several algorithmic optimizations. The estimation results are very promising. First, the proposed deconvolution method leads to a better estimation of glottal closure instants than existing approaches. Second, the estimated glottal waves were corroborated by electroglottographic measurements. The second part of this thesis deals with speech synthesis and modification based on the ARX-LF model. Special care was devoted to controlling the time-domain envelope of the residual signal during speech modification. Results on fundamental frequency and duration modifications prove the relevance of the proposed method compared to other modification methods.
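The ARX part of the source-filter decomposition can be sketched as an ordinary least-squares problem (a stripped-down toy, not the thesis's method: a known synthetic excitation u stands in for the LF glottal wave, and the filter is a fixed second-order resonance rather than a real vocal tract):

```python
import numpy as np

# Synthesize y[t] = -a1*y[t-1] - a2*y[t-2] + b0*u[t] with known coefficients.
rng = np.random.default_rng(1)
a1, a2, b0 = -1.5, 0.7, 1.0   # stable 2nd-order resonance + input gain
u = rng.standard_normal(500)  # excitation (stand-in for the glottal source)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = -a1 * y[t - 1] - a2 * y[t - 2] + b0 * u[t]

# ARX estimation: stack the regressors and solve in the least-squares sense.
X = np.column_stack([-y[1:-1], -y[:-2], u[2:]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)  # recovers (a1, a2, b0)
```

The hard part the thesis tackles lies outside this sketch: the glottal source u is itself unknown and parameterized by the non-linear LF model, which is what turns the joint estimation into a complex optimization problem.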
Gervet, Marie-Françoise Tardy. "Contribution à l'étude du contrôle sensorimoteur des mouvements segmentaires chez l'homme : rôle des informations visuelles et proprioceptives musculaires." Aix-Marseille 1, 1987. http://www.theses.fr/1987AIX11083.
Delvaux, Véronique. "Contrôle et connaissance phonétique: les voyelles nasales du français." Doctoral thesis, Universite Libre de Bruxelles, 2002. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211385.
Nazari, Mohammad. "Modélisation biomécanique du visage : étude du contrôle des gestes orofaciaux en production de la parole." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00716331.
Nazari, Mohammad Ali. "Modélisation biomécanique du visage: Etude du contrôle des gestes oro-faciaux en production de la parole." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00665373.
Teissier, Pascal. "Fusion de capteurs avec contrôle du contexte : application a la reconnaissance de parole dans le bruit." Grenoble INPG, 1999. http://www.theses.fr/1999INPG0023.
Loevenbruck, Hélène. "Pistes pour le contrôle d'un robot parlant capable de réduction vocalique." Grenoble INPG, 1996. http://www.theses.fr/1996INPG0061.
Barbier, Guillaume. "Contrôle de la production de la parole chez l’enfant de 4 ans : l'anticipation comme indice de maturité motrice." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAS013/document.
This thesis investigates speech production in 4-year-old children, in comparison with adults, from a speech motor control perspective. It focuses on two indices: token-to-token variability in the production of isolated vowels, and anticipatory intra- and extra-syllabic coarticulation within V1-C-V2 sequences. Acoustic and articulatory data were recorded using ultrasound tongue imaging within the HOCUS system. Acoustic data from 20 children and 10 adults were analyzed; ultrasound data were analyzed for a subset of these participants: 6 children and 2 adults. In agreement with earlier studies, token-to-token variability was greater in children than in adults. Strong anticipation of V2 in the realization of V1 was found in all adults. In children, anticipation was not systematic and, when observed, was of smaller amplitude than in adults. In more detail, only 5 of the 20 children studied showed a small amount of anticipation, mainly along the antero-posterior dimension, manifested acoustically in F2. Anticipatory intra-syllabic coarticulation also seems to be of smaller amplitude in children than in adults. Lastly, children's speech gestures are slower than those of adults. These results are interpreted as evidence for the immaturity of children's speech motor control from two perspectives: insufficiently stable motor control patterns for vowel production, and a lack of effectiveness in anticipating forthcoming gestures. In line with theories of optimal motor control, we assume that anticipatory coarticulation is based on the use of internal models, i.e. sensorimotor representations of the speech production apparatus in the central nervous system, and that the amplitude of anticipatory coarticulation reflects the increasing maturation of these sensorimotor representations as speech develops.
Szabados, Andrew. "Uncontrolled manifolds et réflexes à courte latence dans le contrôle moteur de la parole : une étude de modélisation." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS039/document.
This work uses a biomechanical model of speech production as a reference subject to address several phenomena related to the adaptability and stability of speech motor control, namely motor equivalence and postural stability. The first part of the thesis concerns the phenomenon of motor equivalence. Motor equivalence is a key feature of speech motor control, since speakers must constantly adapt to various phonetic contexts and speaking conditions. The Uncontrolled Manifold (UCM) concept offers a theoretical framework for motor equivalence in which coordination among motor control variables is separated into two subspaces: one in which changes in control variables modify the output, and another in which they do not. This concept is developed and investigated for speech production using a 2D biomechanical model. First, a representation of the linearized UCM based on orthogonal projection matrices is proposed. The UCMs of various vocal tract configurations of the 10 French oral vowels are then characterized through their responses to command perturbations. It is then investigated whether each phonetic class (phonemes, front/back vowels, rounded/unrounded vowels) can be characterized by a unique UCM, or whether UCMs vary significantly across representatives of these classes. Linearized UCMs were found to allow effective responses to command perturbations, especially when computed specifically for each configuration, but also when shared across many of the phonetic classes. This suggests that similar motor equivalence strategies can be implemented within each of these classes and that UCMs provide a valid characterization of an equivalence strategy.
Further work is suggested to establish which classes might be used in practice. The second part addresses the degree to which postural control of the tongue is accomplished through passive mechanisms, such as the mechanical and elastic properties of the tongue itself, or through short-latency reflexes, such as the stretch reflex. A specific external force perturbation was applied to the 2D biomechanical model: the tongue was pulled anteriorly by a force effector attached to the superior part of the tongue blade, exerting a specific force profile on the tongue body. Simulation results were compared to experimental data collected at Gipsa-lab under similar conditions. The perturbation was simulated for various values of the model parameter modulating reflex strength (feedback gain). The results showed that a perturbation rebound seen in the simulated data is due to a reflex mechanism. Since a compatible rebound is seen in data from human subjects, this can be taken as evidence that a reflex mechanism is involved in the postural stability of the tongue. The time course of this reflex, including the generation of force and the movement of the tongue, was analyzed, and it was determined that the precision of the model was insufficient to draw conclusions about the origin of the reflex (whether cortical or brainstem). Still, numerous experimental directions are proposed.
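The linearized-UCM representation via orthogonal projection matrices mentioned in this abstract can be sketched with a toy forward map (hypothetical, not the thesis's biomechanical model): motor commands map to an output such as a formant pair, the UCM is the nullspace of the Jacobian at a configuration, and projecting a command perturbation onto that nullspace leaves the output unchanged to first order.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Numerical Jacobian of f at x (central differences)."""
    x = np.asarray(x, float)
    J = np.zeros((len(f(x)), len(x)))
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

def ucm_projector(J):
    """Orthogonal projector onto the nullspace of J (the linearized UCM):
    P = I - J^+ J, so J @ (P @ dx) = 0 for any command perturbation dx."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

# Hypothetical forward map: 4 motor commands -> 2 acoustic outputs.
def forward(lmbda):
    return np.array([lmbda[0] + 0.5 * lmbda[1] ** 2,
                     np.sin(lmbda[2]) + lmbda[3]])

lmbda0 = np.array([0.2, 0.4, 0.1, 0.3])
J = jacobian(forward, lmbda0)
P = ucm_projector(J)
dx = P @ np.array([1.0, -1.0, 0.5, 0.2])   # perturbation inside the UCM
```

Perturbations projected onto the UCM are "motor equivalent": the commands change while the output stays the same to first order, which is the subspace split the abstract describes.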
Evrard, Marc. "Synthèse de parole expressive à partir du texte : Des phonostyles au contrôle gestuel pour la synthèse paramétrique statistique." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112202.
The subject of this thesis was the study and design of a platform for expressive speech synthesis. The LIPS3 Text-to-Speech system, developed in the context of this thesis, includes a linguistic module and a parametric statistical module (built upon HTS and STRAIGHT). The system is based on a new single-speaker corpus, specially designed, recorded and annotated. The first study analyzed the influence of the precision of the training corpus's phonetic labeling on synthesis quality. It showed that statistical parametric synthesis is robust to labeling and alignment errors, which addresses the issue of variation in phonetic realizations for expressive speech. The second study presents an acoustic-phonetic analysis of the corpus, characterizing the expressive space used by the speaker to instantiate the instructions describing the different expressive conditions. Voice source parameters and articulatory settings were analyzed according to their phonetic classes, allowing a fine phonostylistic characterization. The third study focused on intonation and rhythm. Calliphony 2.0 is a real-time chironomic interface that controls the f0 and rhythmic parameters of prosody through drawing/writing hand gestures with a stylus and a graphic tablet. These hand-controlled modulations are used to enhance the TTS output, producing more realistic speech without degradation, since they are applied directly to the vocoder parameters. Intonation and rhythm stylization using this interface brings significant improvement to the prototypicality of expressivity, as well as to the general quality of the synthetic speech. These studies show that parametric statistical synthesis, combined with a chironomic interface, offers an efficient solution for expressive speech synthesis, as well as a powerful tool for the study of prosody.
Blagouchine, Iaroslav. "Modélisation et analyse de la parole : Contrôle d’un robot parlant via un modèle interne optimal basé sur les réseaux de neurones artificiels. Outils statistiques en analyse de la parole." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX26666.
This Ph.D. dissertation deals with speech modeling and processing, both considered from the standpoint of speech quality. An optimum internal model with constraints is proposed and discussed for the control of a biomechanical speech robot based on the equilibrium point hypothesis (EPH, lambda-model). The robot's internal space is assumed to be composed of the motor commands lambda of the equilibrium point hypothesis. The main idea of the work is that the robot's movements, and in particular its speech production, are carried out in such a way that the length of the path traveled in the internal space is minimized under acoustical and mechanical constraints. The mathematical formulation leads to a classical problem of the calculus of variations, the geodesic problem, whose exact analytical solution is quite complicated. Using some empirical findings, an approximate solution for the proposed optimum internal model is developed and implemented. It yields interesting and challenging results and shows that the proposed internal model is quite realistic; in particular, similarities are found between the robot's speech and real speech. Next, with the aim of analyzing speech signals, several methods of statistical speech signal processing are developed. They are based on higher-order statistics (namely, normalized central moments and the fourth-order cumulant), as well as on the discrete normalized entropy. In this framework, we also designed an unbiased and efficient estimator of the fourth-order cumulant, in both batch and adaptive versions.
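The higher-order statistics mentioned in this abstract can be illustrated with a minimal batch estimator of the zero-lag fourth-order cumulant of a zero-mean signal, k4 = E[x^4] - 3 (E[x^2])^2. This is the plain moment-based sketch; the unbiased and adaptive estimators developed in the thesis are not reproduced here.

```python
import numpy as np

def fourth_order_cumulant(x):
    """Batch moment-based estimate of the fourth-order (zero-lag) cumulant:
    k4 = m4 - 3 * m2**2 after mean removal. Negative for sub-Gaussian
    signals, zero for Gaussian ones, positive for super-Gaussian ones."""
    x = np.asarray(x, float) - np.mean(x)
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return m4 - 3.0 * m2 ** 2

# Deterministic check: a unit sine over full periods has m2 = 1/2 and
# m4 = 3/8, hence k4 = 3/8 - 3/4 = -3/8 (sub-Gaussian).
t = np.arange(1024) / 1024.0
k4_sine = fourth_order_cumulant(np.sin(2 * np.pi * 8 * t))

# A long Gaussian sample should give an estimate close to zero.
rng = np.random.default_rng(0)
k4_gauss = fourth_order_cumulant(rng.standard_normal(200_000))
```

The cumulant's vanishing for Gaussian inputs is what makes it useful for detecting non-Gaussian structure in a signal.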
Valdés, Vargas Julian Andrés. "Adaptation de clones orofaciaux à la morphologie et aux stratégies de contrôle de locuteurs cibles pour l'articulation de la parole." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT105/document.
The capacity to produce speech is learned and maintained by means of a perception-action loop that allows speakers to correct their own production as a function of the perceptual feedback they receive. This self-feedback is auditory and proprioceptive, but not visual. Speech sounds may therefore be complemented by augmented speech systems, i.e. speech accompanied by a virtual display of the shapes of the speech articulators on a computer screen, including those that are normally hidden, such as the tongue or velum. This kind of system has applications in domains such as speech therapy, phonetic correction and language acquisition, in the framework of Computer Aided Pronunciation Training (CAPT). This work was conducted as part of the development of a visual articulatory feedback system, based on the morphology and articulatory strategies of a reference speaker, that automatically animates a 3D talking head from the speech sound. The motivation of this research was to make the system suitable for several speakers. The twofold objective of this thesis was thus to acquire knowledge about inter-speaker variability, and to propose vocal tract models to adapt a reference clone, composed of models of the speech articulators' contours (lips, tongue, velum, etc.), to other speakers who may have different morphologies and different articulatory strategies. In order to build articulatory models of various vocal tract contours, we first acquired data covering the whole articulatory space of French: midsagittal Magnetic Resonance Images (MRI) of eleven French speakers pronouncing 63 articulations. One of the main contributions of this study is a larger and more detailed database than those in the literature, containing information on several vocal tract contours, speakers and consonants, whereas previous studies are mostly based on vowels.
The vocal tract contours visible in the MRI were outlined by hand, following the same protocol for all speakers. In order to acquire knowledge about inter-speaker variability, we characterised our speakers in terms of the articulatory strategies of various vocal tract contours: tongue, lips and velum. We observed that each speaker has his or her own strategy to achieve sounds that are considered equivalent across speakers for the purposes of speech communication. By means of principal component analysis (PCA), the variability of the tongue, lip and velum contours was decomposed into a set of principal movements. We noticed that these movements are performed in different proportions depending on the speaker. For instance, for a given displacement of the jaw, the tongue may globally move in a proportion that depends on the speaker. Lip protrusion, lip opening, the influence of jaw movement on the lips, and the velum's articulatory strategy can also vary according to the speaker. For example, some speakers roll up their uvulas against the tongue to produce the consonant /ʁ/ in vocalic contexts. These findings constitute an important contribution to the knowledge of inter-speaker variability in speech production. In order to extract a set of common articulatory patterns that different speakers employ when producing speech sounds (normalisation), we based our approach on linear models built from articulatory data. Multilinear decomposition methods were applied to the contours of the tongue, lips and velum. The evaluation of our models was based on two criteria: the explained variance and the Root Mean Square Error (RMSE) between the original and recovered articulatory coordinates. Models were also assessed using a leave-one-out cross-validation procedure.
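The PCA decomposition and the two evaluation criteria named in this abstract (explained variance, and RMSE between original and reconstructed coordinates) can be sketched on synthetic contour data; the "contours" and deformation shapes below are made up for illustration and bear no relation to the thesis's MRI data.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on centered data: rows = observations (one flattened
    contour per row), columns = coordinate variables."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s ** 2 / (len(X) - 1)
    explained = var[:n_components].sum() / var.sum()
    return mean, Vt[:n_components], explained

def reconstruct(X, mean, components):
    """Project onto the retained components and map back."""
    scores = (X - mean) @ components.T
    return mean + scores @ components

def rmse(X, Xhat):
    return np.sqrt(np.mean((X - Xhat) ** 2))

# Synthetic "tongue contours": a mean shape deformed by two principal
# movements plus small measurement noise (illustrative only).
rng = np.random.default_rng(1)
npts = 30
base = np.sin(np.linspace(0, np.pi, npts))
move1 = np.linspace(-1, 1, npts)            # front/back-like deformation
move2 = np.cos(np.linspace(0, np.pi, npts)) # bunching-like deformation
w = rng.standard_normal((100, 2))
X = base + w[:, :1] * move1 + w[:, 1:] * move2
X += 0.01 * rng.standard_normal(X.shape)

mean, comps, explained = pca_fit(X, n_components=2)
Xhat = reconstruct(X, mean, comps)
```

With two underlying movements, two components should explain nearly all the variance and reconstruct the contours with an RMSE on the order of the added noise, which is exactly how the thesis's criteria would score a well-fitting linear model.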
Savariaux, Christophe. "Étude de l'espace de contrôle distal en production de la parole : les enseignements d'une perturbation à l'aide d'un tube labial." Grenoble INPG, 1995. http://www.theses.fr/1995INPG0024.
Jelassi, Sofiene. "Contrôle adaptatif de la qualité lors du transfert interactif de la voix sur un réseau mobile ad-hoc." Paris 6, 2010. http://www.theses.fr/2010PA066190.
Delebecque, Louis. "Etude, analyse et modélisation physique de la production de la parole avec applications aux troubles liés à une surdité profonde." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT118/document.
Language learning requires specific muscular control of all the organs that contribute to speech production. The production of voiced sounds, which results from the self-oscillation of the vocal folds, is especially influenced by the whole phonatory apparatus, from the diaphragm to the lips. The general background of this thesis is the physical modeling of speech production, and its objectives are motivated by a better understanding of the physical phenomena occurring in voiced sound production. This work focuses on cases where the control of speech production is impaired, for example when the speaker suffers from a profound hearing loss. In such situations, physical interactions can play an important role in the emergence of speech production disorders. The approach adopted here is first to observe the studied phenomena through in vivo measurements and then to describe them with theoretical models. The models are then validated by comparing theoretical results with measurements performed on a replica of the phonatory apparatus. Finally, numerical simulations in the time domain, based on a two-mass model, make it possible to apply the physical models to specific instances of speech production. The first study deals with the fundamental frequency jumps observed during an involuntary transition between two laryngeal mechanisms in vowel production. Experimental and numerical results show that a transition between two laryngeal mechanisms is a symptom of a bifurcation of the laryngeal system, and that such a bifurcation occurs during a variation of vocal fold stiffness, subglottal pressure, prephonatory glottal area or acoustic resonator length. The theoretical models make it possible to simulate the fundamental frequency jumps observed experimentally.
They are used to study the different motor strategies responsible for these frequency jumps. The second study deals with plosive consonant production, and in particular with the effects of a vocal tract occlusion on voicing offset and onset. Simulations of the production of vowel-voiceless plosive-vowel sequences show that passive expansion of the supraglottal cavity is responsible for the extension of voicing after vocal tract closure, and that increasing the vocal tract length shortens the delay between the release of the occlusion and the voicing onset. These results show that articulation plays an important role in voicing (voiced or voiceless) and in the voice onset time of a voiceless plosive.
Nocaudie, Olivier. "Imitation et contrôle prosodique dans l'entraînement à la remédiation phonétique : évaluation, mesure et applications pour l'enseignant en langue étrangère." Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20123/document.
Imitation is a widespread behavior among animals and humans; it helps us adapt to our cultural and social environment, communicate with others and learn from them. In this work, we consider aspects of imitation in speech at the prosodic level, focusing on phonetic remediation using the Verbo Tonal Method (VTM). Phonetic practice in the classroom is, in itself, an imitation game, raising interesting open questions linked to L2 speech perception and production as well as to the reproducibility of L1 acoustic features, i.e. phonetic-prosodic control. Our first study deals with the teacher's ability to control prosodic features: it examines the link between perceived prosodic similarity, assessed with the AX and AXB paradigms, and more objective similarity metrics. The results are cross-compared and reveal a fair correlation between the semi-automatic methods and the perceptual tests. Our second study builds on these results and further tests measurements of prosodic similarity obtained from rectilinear stylized f0 curves using a Turning Function. Applying this method to a corpus of lexicalized and delexicalized speech imitations highlights the strengths and weaknesses of the method. We propose to apply such evaluation techniques to train teachers' phonetic control.
Patri, Jean-François. "Modélisation Bayésienne de planification motrice de la parole : variabilité, buts multisensoriels et intéraction perceptuo-motrices." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS019/document.
Context and goal: It is almost a truism to say that one of the main features of speech is its variability: inter-gender and inter-speaker variability, but also variability from one context to another, or from one repetition to another by the same subject. Variability underlies at once the beauty of speech, the complexity of its processing by speech technologies, and the difficulty of understanding its mechanisms. In this thesis we study certain aspects of speech variability, our starting point being the variability across repetitions of a given utterance by a given subject in a given condition, which we call intrinsic variability. Models of speech motor control have mainly focused on the contextual aspects of speech variability and have rarely considered its intrinsic component, even though it is this fundamental component of variability that gives speech its naturalness. In the general context of motor control, the precise origin of the intrinsic variability of our movements remains controversial and poorly understood; a common assumption, however, is that it mainly originates from neural and muscular noise in the execution chain. The main goal of this thesis is to address both the contextual and the intrinsic components of speech variability in an integrative computational framework. To this end, we postulate that the main component of the intrinsic variability of speech is not just execution noise, but results from a control strategy in which intrinsic variability reflects the abundance of possible productions of the intended speech item. Methodology: We formalize this idea in a probabilistic computational framework, Bayesian modeling, in which the abundance of possible realizations of a given speech item is naturally represented as uncertainty, and variability can thus be formally manipulated.
We illustrate the relevance of this approach with three main contributions. Results: Firstly, we reformulate an existing model of speech motor control, the GEPPETO model, in Bayesian terms, and demonstrate that this Bayesian reformulation, which we call B-GEPPETO, contains GEPPETO as a special case. In particular, we illustrate how the Bayesian approach accounts for the intrinsic component of speech variability while retaining the principles proposed by GEPPETO for the emergence and structuring of its contextual component. Secondly, the Bayesian framework enables us to go further and extend B-GEPPETO to include a multisensory characterization of speech motor goals, with auditory and somatosensory components. We apply this extension to explore variability in the context of compensation for sensorimotor perturbations in speech production, and account for differences in compensation as sensory preferences, implemented by modulating the relative contribution of each sensory modality in the model. The somatosensory characterization of speech motor goals involved a number of hypotheses, which we set out to evaluate in two experimental studies. Finally, in our third contribution we exploit the formalism to reinterpret recent experimental observations concerning perceptual changes following speech motor adaptation to auditory perturbations. This original analysis is made possible by the unified representation of knowledge in the model, which accounts for production and perception processes within a single computational framework. Taken together, these contributions illustrate how the Bayesian framework offers a structured and systematic approach to model building in the cognitive sciences. It facilitates the development of models and their progressive refinement by making underlying assumptions explicit.
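The role of sensory preferences in the multisensory extension described in this abstract can be illustrated with the textbook Gaussian fusion rule (a generic sketch, not B-GEPPETO itself): with auditory and somatosensory targets represented as Gaussians, the fused estimate weights each modality by its precision, so up-weighting one modality shifts the result toward it, which is the kind of modulation the abstract invokes to account for differences in compensation. All numbers are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Gaussian:
    mean: float
    var: float

def fuse(aud: Gaussian, som: Gaussian) -> Gaussian:
    """Precision-weighted fusion of two independent Gaussian cues:
    posterior precision = sum of the precisions,
    posterior mean = precision-weighted average of the means."""
    pa, ps = 1.0 / aud.var, 1.0 / som.var
    var = 1.0 / (pa + ps)
    mean = var * (pa * aud.mean + ps * som.mean)
    return Gaussian(mean, var)

# Auditory target perturbed to 1.0; somatosensory target still at 0.0.
auditory = Gaussian(mean=1.0, var=0.5)
somato = Gaussian(mean=0.0, var=0.5)
balanced = fuse(auditory, somato)            # equal precision: mean 0.5
aud_pref = fuse(Gaussian(1.0, 0.1), somato)  # auditory preference: mean > 0.5
```

A speaker modeled with an auditory preference (smaller auditory variance) ends up closer to the perturbed auditory target, i.e. compensates differently than a somatosensory-preferring speaker, without any change to the fusion rule itself.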
Aouati, Amar. "Utilisation des technologies vocales dans une application multicanaux." Paris 11, 1985. http://www.theses.fr/1985PA112373.
Lalevee-Huart, Claire. "Développement du contrôle moteur de la parole : une étude longitudinale d'un enfant francophone agé de 7 à 16 mois, à partir d'un corpus audio-visuel." Grenoble, 2010. http://www.theses.fr/2010GRENL016.
The first year of life can be considered a crucial period for speech development in children. Indeed, 6 months of age is when babbling, a key step in this development, appears, in a form that is quite similar for all children in the world, whatever the language in which they are reared. It is a period when the child has no control over the nature of his productions and no ability to produce the phonological units of his mother tongue. Around 12 months, the child begins to produce his first words, i.e. his first meaningful utterances. The child has followed a developmental path in which he has acquired new motor, articulatory and phonological skills. We studied the development of these capabilities with an approach at the crossroads of current bottom-up (MacNeilage, 1998) and top-down (Fikkert et al., 2004; Wauquier, 2005, 2006) scientific approaches. Indeed, it seems that the production of speech cannot be explained without considering the acquisition of articulatory and motor control. Yet it also seems essential to take into account the structural features and constraints of the input language (Vihman, 1996). In our view, the child must adapt to his mother tongue, as permitted by his articulatory motor skills, which evolve with growth and cognitive maturation, while constantly comparing his productions with his native language. To evaluate these theoretical propositions, we developed a database composed of the vocal productions of a child aged from 7 to 16 months, drawn from an audio-visual corpus. Our question concerns the nature of early words.
Yet if the control of mandibular oscillations can be described as the basic underlying structure of speech, the development of an adult-like, language-specific syllable implies three types of control in addition to that of the mandible: (i) control of the velum, which yields a fully oral vocal tract to produce salient consonant-vowel sequences; (ii) control of oro-laryngeal coordination to obtain the voiced vs. unvoiced distinction; and (iii) rhythmic mandibular control, which enables the child to adapt to the prosodic patterns of his mother tongue.
Rolland de Rengervé, Antoine. "Apprentissage Interactif en Robotique Autonome : vers de nouveaux types d'IHM." Phd thesis, Université de Cergy-Pontoise, 2013. http://tel.archives-ouvertes.fr/tel-00969519.
Lalevée, Claire. "Développement du contrôle moteur de la parole : une étude longitudinale d'un enfant francophone âge de 7 à 16 mois, à partir d'un corpus audio-visuel." Phd thesis, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00579921.
Laflaquière, Alban. "Approche sensorimotrice de la perception de l'espace pour la robotique autonome." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00865091.
Andrieu, Clement. "De la prise de parole au silence, une interprétation en termes d'impuissance apprise dans le contexte organisationnel." Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ2048.
Individuals have an intrinsic need to experience a certain degree of personal control over their environment: the ability to influence situations to prevent hazards, mitigate negative experiences, or create positive outcomes. This basic need also holds within the organizations in which individuals operate, whether in the workplace, in associations, at university, etc. In these contexts, when negative events occur, people may seek to voice their concerns to authorities, managers, or decision-makers in an attempt to indirectly affect the events they wish to prevent or alleviate. However, there are instances when, faced with a negative event, individuals remain silent and accept it, believing that acting is futile (i.e., acquiescent silence). This silence contradicts their need to control their environment and can lead to detrimental consequences for both individuals and organizations. In the literature on organizational behavior, this state of silence is often considered an example of learned helplessness, but is this really the case? Learned helplessness is a severe state, which can resemble a depressive state and goes beyond mere passivity. In this thesis, we present the theoretical concepts of perceived control and learned helplessness, and apply them concretely to voice and silence within organizations, in order to understand how individuals come to remain silent and resigned, and with what consequences. The studies conducted within the framework of this thesis experimentally examine all the required components of the learned helplessness paradigm, including its antecedents, its mediators, and the full scope of its consequences (behavioral, emotional, and cognitive) in the context of voice within organizations.
The results obtained in these studies show that acquiescent silence is indeed similar to learned helplessness, contributing to an understanding of the factors that lead people to remain silent and of the resulting consequences. More broadly, the application of the concept of learned helplessness to social issues is discussed, as well as the theoretical contributions of this work and the questions that remain to be addressed.
Allatif, Omran. "Contrôle des corrélats temporels et spectraux de la quantité vocalique : de l'arabe syrien de l'Euphrate au français de Savoie." Grenoble 3, 2008. http://www.theses.fr/2008GRE39020.
Vocalic quantity has two main correlates, in the temporal and spectral domains. In principle, only controlled correlates play a relevant role in linguistic communication. We studied the control of these correlates in two different linguistic systems: the dialectal Arabic of Mayadin, on the Syrian Euphrates, and the regional French of the Combe de Savoie. While the temporal correlates are mainly manifested by the presence of two metrical categories, short vs. long vowels, the spectral correlates are reflected in the centralization of short vowels with respect to long ones. In order to determine the principal controlled correlate, we tested, for dialectal Arabic, the effect of two contexts, or fundamental natural perturbations. The first, speech rate acceleration, most directly reduces absolute vowel duration. The second, interrogative focus, can notably affect the spectral structure (formants). We therefore used two processes with divergent major effects: reduction vs. enhancement. Finally, for a perception experiment, we assigned to short vowels, by digital manipulation of duration, the duration of long ones, and vice versa. We revisited the literature on perceptual thresholds for segmental duration and formant frequency in order to evaluate our measurements. Our results for Syrian Arabic show that the temporal difference between short and long vowels is efficiently controlled. As for centralization, described as a simple vocalic reduction, it turned out to be a phonological process and not an articulatory process dependent on duration: centralization is not a by-product of brevity. Hence it is not a matter of an articulatory gesture undershooting its target; short vowels are becoming full-fledged vocalic targets. This process is now complete for short [i], which remains basically [e] whatever the type of perturbation, even under the enhancing influence of interrogative intonation focus.
However, this dialectal Arabic vocalic system is not reduced to the triangle of short vowel qualities (sufficient dispersion theory), which might be contrasted with the long vowels by means of duration alone. In line with the universal principle of the maximum dispersion theory, the system maintains extreme vowels for the long series, with true [i], [a] and [u] exemplars. Regional French from Savoy has maintained short/long vowel contrasts better than standard French, and as well as many Arabic dialects; these contrasts are correlated with temporal and spectral distances. Our study, manipulating speech rate over three generations, demonstrated that speakers in their seventies do control the temporal correlates for pairs in the /A/ and /O/ regions, the spectral correlates being controlled through F1 and F2 for the /O/ pair, and only through F2 for the /A/ pair (the contrast for the /E/ pair is not robust). Thus, as in Arabic, the centralization of short vowels does not arise from a simple vocalic reduction, since spectral distances are also controlled. The results also show that for the 20-25-year-old generation, only the spectral correlate remains controlled for the /O/ pair, all contrasts being lost for the /E/ and /A/ pairs. Clearly, intergenerational changes in Savoy French are not as conservative of contrasts as in the Arabic of the Syrian Euphrates. In conclusion, from a universal point of view, these vowel changes do not undermine the prediction of the maximum dispersion principle. To reinforce this statement, we considered the special case of the Aboriginal vowel systems of Northern Australia, which plead in favor of the sufficient dispersion theory, since they are classically presented as defective in high vowels, i.e. with no [i] and [u] exemplars. Enhancement of intonation does not cause short [e] to revert to [i] in Arabic speakers, any more than decelerating speech rate can re-establish lost quantity contrasts in Savoy French.
But intonational enhancement does allow Kayardild Aboriginal women speakers to recover [i]. This finding leads us to believe that the reversibility of a change may occur in the case of a violation (which does not arise in Arabic or French) of a universal principle such as extreme vowel dispersion.
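The dispersion argument running through this abstract can be made concrete with a small numeric sketch: given mean (F1, F2) values per vowel, the mean pairwise Euclidean distance serves as a crude dispersion index, with centralized short vowels expected to span a narrower acoustic space than long ones. All formant values below are invented for illustration; they are not measurements from this thesis.

```python
import numpy as np

# Hypothetical mean formant values in Hz (illustrative only).
vowels = {
    "i_long": (280, 2250), "a_long": (700, 1300), "u_long": (300, 700),
    "i_short": (430, 1900), "a_short": (600, 1350), "u_short": (420, 1000),
}

def dispersion(keys):
    """Mean pairwise Euclidean distance in (F1, F2) space."""
    pts = np.array([vowels[k] for k in keys], dtype=float)
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(len(pts)) for j in range(i + 1, len(pts))]
    return float(np.mean(dists))

long_disp = dispersion(["i_long", "a_long", "u_long"])
short_disp = dispersion(["i_short", "a_short", "u_short"])
```

A serious measure would normalize formants (e.g. to a perceptual scale) before computing distances, but the ordering long > short already illustrates the centralization effect the abstract describes.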
Ben Youssef, Atef. "Contrôle de têtes parlantes par inversion acoustico-articulatoire pour l'apprentissage et la réhabilitation du langage." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00721957.
Cornillet, Alban. "Discours de l'émotion, du contrôle au management. Contribution à une sociolinguistique de l'efficace." Phd thesis, Université Rennes 2, 2005. http://tel.archives-ouvertes.fr/tel-00009356.
Rolland de Rengerve, Antoine. "Apprentissage Interactif en Robotique Autonome : vers de nouveaux types d'IHM." Thesis, Cergy-Pontoise, 2013. http://www.theses.fr/2013CERG0664/document.
An autonomous robot collaborating with humans should be able to learn to navigate and to manipulate objects within the same task. In a classical approach, independent functional modules are used to manage the different aspects of the task (navigation, arm control, etc.). On the contrary, the goal of this thesis is to show that tasks of different kinds can be tackled by learning sensorimotor attractors from a few task-nonspecific structures. We thus proposed an architecture which can learn and encode attractors to perform navigation tasks as well as arm control. We started from a model inspired by place-cells for the navigation of autonomous robots. On-line, interactive learning of place-action couples lets attraction basins emerge, allowing an autonomous robot to follow a trajectory. The robot's behavior can be corrected and guided by interacting with it. The successive corrections and their sensorimotor coding define the attraction basin of the trajectory. My first contribution was to adapt this principle of sensorimotor attractor building to the impedance control of a robot arm. While a proprioceptive posture is maintained, the arm movements can be corrected by modifying on-line the motor command, expressed as muscular activations. The resulting motor attractors are simple associations between the proprioceptive information of the arm and these motor commands. I then showed that the robot could learn visuomotor attractors by combining proprioceptive and visual information with the motor attractors. Visuomotor control corresponds to a homeostatic system trying to maintain an equilibrium between the two kinds of information. In the case of ambiguous visual information, the robot may perceive an external stimulus (e.g. a human hand) as its own hand. Following the principle of homeostasis, the robot will then act to reduce the incoherence between this external information and its proprioceptive information.
It then displays a behavior of immediate imitation of observed gestures. This homeostasis mechanism, completed by a memory of the observed sequences and by the capability to inhibit action during the observation phase, enables a robot to perform deferred imitation and to learn by observation. For more complex tasks, we also showed that learning transitions can be the basis for learning sequences of gestures, as in the case of cognitive map learning in navigation. The use of motivational contexts then makes it possible to choose between different learned sequences. We then addressed the issue of integrating, in the same architecture, behaviors involving visuomotor navigation and robotic arm control to grasp objects. The difficulty is to synchronize the different actions so that the robot acts coherently. Erroneous behaviors of the robot are detected by evaluating the actions predicted by the model against the corrections imposed by the human teacher. These situations can be learned as multimodal contexts modulating the action selection process, in order to adapt the behavior so that the robot reproduces the desired task. Finally, we present the perspectives of this work in terms of sensorimotor control, for both navigation and robotic arm control, and its link to human-robot interface issues. We also insist on the fact that different kinds of imitation behavior can result from the emergent properties of a sensorimotor control architecture.
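The core idea of a sensorimotor attractor built from teacher corrections can be caricatured in one dimension: corrections recorded at sampled states define an attraction basin, and replaying the correction associated with the nearest learned state drives the system toward the demonstrated target. This is a minimal toy sketch, not the thesis's architecture; the states, gain and target are all invented for illustration.

```python
import numpy as np

# Toy 1-D sensorimotor attractor (all values invented).
states = np.linspace(-1.0, 1.0, 21)      # sampled proprioceptive states
target = 0.3                              # demonstrated posture
corrections = 0.5 * (target - states)     # correction taught at each state

def act(x):
    """Recall the correction associated with the nearest learned state."""
    k = int(np.argmin(np.abs(states - x)))
    return corrections[k]

x = -0.9                                  # start far from the target
for _ in range(50):
    x += act(x)                           # replaying corrections converges
```

The attractor is never represented explicitly: it emerges from the set of (state, correction) associations, which is the point the abstract makes about trajectory following.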
Delalez, Samuel. "Vokinesis : instrument de contrôle suprasegmental de la synthèse vocale." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS458/document.
This work belongs to the field of performative control of voice synthesis, and more precisely of real-time modification of pre-recorded voice signals. In a context where such systems could only modify parameters such as pitch, duration and voice quality, our work centred on the question of performative modification of voice rhythm. A significant part of this thesis has been devoted to the development of Vokinesis, a program for performative modification of pre-recorded voice. It was developed with four goals: to allow control of voice rhythm, to be modular, and to be usable in public performance situations as well as in research applications. To guide this development, a reflection on the nature of voice rhythm and how it should be controlled was carried out. It appeared that the basic cross-linguistic rhythmic unit is syllable-sized, but that syllabification rules are too language-dependent to provide an invariant cross-linguistic rhythmic pattern. We showed that accurate and expressive sequencing of vocal rhythm is achieved by controlling the timing of two phases which together form a rhythmic group: the rhythmic nucleus and the rhythmic link. We developed several rhythm control methods, tested with several control interfaces. An objective evaluation showed that one of our methods allows very accurate control of rhythm. New strategies for controlling voice pitch and quality with a graphic tablet were also established.
A reflection on the pertinence of graphic tablets for pitch control, in view of the rise of new continuous musical interfaces, led us to the conclusion that tablets best fit intonation control (speech), but that PMCs (Polyphonic Multidimensional Controllers) are better suited to melodic control (singing, or other instruments). The development of Vokinesis also required the implementation of the VoPTiQ (Voice Pitch, Time and Quality modification) signal processing method, which combines an adaptation of the RT-PSOLA algorithm with specific filtering techniques for voice quality modulations. The use of Vokinesis as a musical instrument has been successfully evaluated in public performances of the Chorus Digitalis ensemble, for various singing styles (from pop to contemporary music). Its use for electro music has also been explored by interfacing the Ableton Live composition environment with Vokinesis. Application perspectives are diverse: scientific studies (research in prosody, expressive speech, neuroscience), sound and music production, language learning and teaching, and speech therapy.
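The abstract's RT-PSOLA-based time and pitch modification rests on overlap-add resynthesis. As a rough illustration of that underlying idea (without the pitch-synchronous analysis marks that true PSOLA requires, and with arbitrary frame and hop sizes), a naive windowed overlap-add time-stretcher can be sketched as follows:

```python
import numpy as np

def ola_stretch(x, rate, frame=512, hop_in=128):
    """Naive overlap-add time stretching (no pitch-sync, no WSOLA search).

    rate > 1 speeds the signal up, rate < 1 slows it down;
    output length is roughly len(x) / rate.
    """
    hop_out = int(round(hop_in / rate))
    win = np.hanning(frame)
    n_frames = 1 + (len(x) - frame) // hop_in
    out = np.zeros(hop_out * (n_frames - 1) + frame)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        seg = x[i * hop_in : i * hop_in + frame] * win
        out[i * hop_out : i * hop_out + frame] += seg
        norm[i * hop_out : i * hop_out + frame] += win
    norm[norm < 1e-8] = 1.0               # avoid division by zero at edges
    return out / norm

x = np.sin(2 * np.pi * 220 * np.arange(4096) / 16000)
y = ola_stretch(x, rate=0.5)              # roughly doubles the duration
```

Repositioning frames pitch-synchronously, as PSOLA does, is what avoids the phasing artifacts this naive version produces on voiced speech.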
Carment, Loïc. "Le contrôle moteur et oculomoteur dans la schizophrénie : l’attention et la modulation de l’excitabilité corticale : principaux contributeurs du déficit sensorimoteur ? Manual dexterity in schizophrenia - A neglected clinical marker ? Manual dexterity and aging : a pilot study disentangling sensorimotor from cognitive decline." Thesis, Sorbonne université, 2018. http://www.theses.fr/2018SORUS265.
Sensorimotor, attention and working memory impairments have been consistently reported in schizophrenia, even at an early stage of its evolution. The presence and severity of these deficits may, from the prodromal stage onward, predict the course of the disease. However, the interaction between cognitive and sensorimotor impairments and their neural correlates remains uncharted. In this study, we assessed whether attentional and working memory processing contribute to sensorimotor impairment in a visuomotor grip-force tracking task in 25 stabilized patients with schizophrenia, 17 unaffected healthy siblings and 25 age- and gender-matched healthy controls. Subjects performed the visuomotor grip-force tracking task with increasing cognitive load: (i) simple tracking, (ii) tracking with visual distractors (requiring inhibition of saccades), and (iii) tracking with addition of numbers (requiring saccades). During the visuomotor tracking task, gaze was recorded simultaneously, and cortical excitability and inhibition were assessed using transcranial magnetic stimulation. The behavioral and physiological results obtained in this thesis pinpoint altered attentional processing (divided attention and filtering of irrelevant information) and an imbalance of cortical excitability and inhibition as key contributors to sensorimotor impairments in schizophrenia. Moreover, the altered task-related modulation of cortical excitability and inhibition in siblings is consistent with a genetic risk for cortical abnormality.
Ben Youssef, Atef. "Contrôle de têtes parlantes par inversion acoustico-articulatoire pour l’apprentissage et la réhabilitation du langage." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT088/document.
Speech sounds may be complemented by displaying the shapes of the speech articulators on a computer screen, hence producing augmented speech, a signal that is potentially useful in all instances where the sound itself might be difficult to understand, for physical or perceptual reasons. In this thesis, we introduce a system called visual articulatory feedback, in which the visible and hidden articulators of a talking head are controlled from the speaker's speech sound. The motivation of this research was to develop such a system for Computer Aided Pronunciation Training (CAPT) in foreign language learning, or in the domain of speech therapy. We based our approach to this mapping problem on statistical models built from acoustic and articulatory data. We developed and evaluated two statistical learning methods, trained on parallel, synchronous acoustic and articulatory data recorded from a French speaker by means of an electromagnetic articulograph. Our hidden Markov model (HMM) approach combines HMM-based acoustic recognition and HMM-based articulatory synthesis techniques to estimate the articulatory trajectories from the acoustic signal. Our Gaussian mixture model (GMM) approach estimates the articulatory features directly from the acoustic ones. We evaluated the improvements brought to these models using several criteria: the root mean square error between the original and recovered EMA coordinates, the Pearson product-moment correlation coefficient, displays of the articulatory spaces and trajectories, and acoustic or articulatory recognition rates. Experiments indicate that the use of state tying and multiple Gaussians per state in the acoustic HMM improves the recognition stage, and that updating the articulatory HMM parameters with the minimum generation error (MGE) criterion results in a more accurate inversion than conventional maximum likelihood estimation (MLE) training.
In addition, GMM mapping using the MLE criterion is more efficient than using the minimum mean square error (MMSE) criterion. In conclusion, we found the HMM inversion system to be more accurate than the GMM one. Besides, experiments using the same statistical methods and data have shown that the face-to-tongue inversion problem, i.e. predicting tongue shapes from face and lip shapes, cannot be solved in a general way, and is impossible for some phonetic classes. In order to extend our single-speaker system to a multi-speaker speech inversion system, we implemented a speaker adaptation method based on maximum likelihood linear regression (MLLR). In MLLR, a linear regression-based transform that adapts the original acoustic HMMs to the new speaker is computed so as to maximise the likelihood of the adaptation data. This speaker adaptation stage was evaluated using an articulatory phonetic recognition system, since no original articulatory data are available for the new speakers. Finally, using this adaptation procedure, we developed a complete articulatory feedback demonstrator which can work for any speaker. This system should be assessed by perceptual tests in realistic conditions.
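The GMM mapping whose training criteria the abstract compares can be illustrated with a minimal sketch of the MMSE variant: fit a GMM on joint acoustic-articulatory vectors, then estimate the articulatory target as the posterior-weighted sum of per-component conditional means. This is a generic sketch on synthetic one-dimensional features, not the thesis's actual feature set or training pipeline.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(acoustic, articulatory, n_components=4, seed=0):
    """Fit a GMM on stacked [acoustic; articulatory] feature vectors."""
    joint = np.hstack([acoustic, articulatory])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=seed)
    gmm.fit(joint)
    return gmm

def mmse_map(gmm, x, dx):
    """MMSE estimate E[y | x] (dx = dimension of the acoustic part)."""
    # Component posterior given the acoustic part only.
    lik = np.array([
        gmm.weights_[k] * multivariate_normal.pdf(
            x, gmm.means_[k, :dx], gmm.covariances_[k][:dx, :dx])
        for k in range(gmm.n_components)])
    resp = lik / lik.sum()
    y_hat = np.zeros(gmm.means_.shape[1] - dx)
    for k in range(gmm.n_components):
        mu_x, mu_y = gmm.means_[k, :dx], gmm.means_[k, dx:]
        S = gmm.covariances_[k]
        Sxx, Syx = S[:dx, :dx], S[dx:, :dx]
        # Conditional mean of y given x under component k.
        y_hat += resp[k] * (mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x))
    return y_hat

# Synthetic demo: articulatory feature is roughly twice the acoustic one.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 1))
Y = 2.0 * X + 0.05 * rng.normal(size=(2000, 1))
gmm = fit_joint_gmm(X, Y)
y = mmse_map(gmm, np.array([0.5]), dx=1)
```

The MLE variant the abstract prefers instead maximizes the likelihood of whole output trajectories, which is what makes it outperform this frame-by-frame MMSE estimate.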
Brenon, Alexis. "Modèle profond pour le contrôle vocal adaptatif d'un habitat intelligent." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM057/document.
Smart homes, resulting from the merger of home automation, ubiquitous computing and artificial intelligence, support inhabitants in their activities of daily living and so improve their quality of life. By allowing dependent and aged people to live at home longer, these homes provide a first answer to societal problems such as the dependency tied to an aging population. In a voice-controlled home, the system has to answer users' requests covering a range of automated actions (lights, blinds, multimedia control, etc.). To achieve this, the control system of the home needs to be aware of the context in which a request is made, but also to know the user's habits and preferences. Thus, the system must be able to aggregate information from a heterogeneous network of home automation sensors and to take the (variable) user behavior into account. The development of smart home control systems is hard because of the huge variability in home topologies and user habits. Furthermore, the whole set of contextual information needs to be represented in a common space in order to reason about it and make decisions. To address these problems, we propose a system which continuously updates its model to adapt itself to the user, and which uses raw sensor data through a graphical representation. This method is particularly interesting because it does not require any prior inference step to extract the context. Our system thus uses deep reinforcement learning: a convolutional neural network extracts contextual information, and reinforcement learning is used for decision-making. This thesis then presents two systems: a first one, based on reinforcement learning only, which shows the limits of this approach in a real environment with thousands of possible states; and a second one, ARCADES, made possible by the introduction of deep learning, whose good performance proves that the approach is relevant and opens many ways to improve it.
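The reinforcement-learning baseline described above can be caricatured as a contextual bandit: a toy Q-table over hypothetical (context, action) pairs, updated from a scalar user-satisfaction reward. Everything here (states, actions, rewards) is invented for illustration; the thesis's actual system operates on raw sensor data through a CNN precisely because such discrete state tables do not scale to thousands of real home states.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2           # toy contexts x home-automation actions
correct = np.array([0, 1, 0])        # hypothetical preferred action per context
Q = np.zeros((n_states, n_actions))  # action-value table
alpha, eps = 0.5, 0.2                # learning rate, exploration rate

for step in range(2000):
    s = rng.integers(n_states)                  # a request arrives in context s
    if rng.random() < eps:                      # epsilon-greedy exploration
        a = rng.integers(n_actions)
    else:
        a = int(np.argmax(Q[s]))
    r = 1.0 if a == correct[s] else 0.0         # user-satisfaction signal
    Q[s, a] += alpha * (r - Q[s, a])            # one-step (bandit) update
```

After training, the greedy policy `np.argmax(Q, axis=1)` recovers the preferred action in each context; the deep variant replaces the table lookup with a network evaluated on the sensor-data image.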
Rochet-Capellan, Amélie. "De la substance à la forme : rôle des contraintes motrices orofaciales et brachiomanuelles de la parole dans l’émergence du langage." Phd thesis, Grenoble INPG, 2007. http://www.theses.fr/2007INPG0129.
What if the sensorimotor properties of speech shaped language? This hypothesis places language in the field of complexity and embodied cognition. Here, we introduce different kinds of evidence showing the role of speech motricity in the genesis of language. Orofacial motricity, first, with the assumption that the properties of inter-articulator coordination may constrain the morphogenesis of language. Orofacial and brachiomanual motricity, then, with the hypothesis that language may emerge from the hand-mouth coordination that supports the act of pointing by the voice and by the hand. Accordingly, our experiments analyze the recorded motions of French speakers during different tasks in order to establish the properties of jaw-tongue-lips coordination in speech and of jaw-hand coordination in pointing. These studies lie within the global and recent research framework that proposes to investigate language as a complex system.