Selected scientific literature on the topic "Tête – Capture de mouvements"
Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Tête – Capture de mouvements".
Journal articles on the topic "Tête – Capture de mouvements"
Keegan, Kevin, Charlotte Paindaveine and Michael Schramme. "Progrès récents dans l’évaluation objective des boiteries équines à l’aide de capteurs inertiels". Le Nouveau Praticien Vétérinaire équine 17-18, no. 61-62 (2023): 12–24. http://dx.doi.org/10.1051/npvequi/2024020.
Bels, Vincent L., Charles Brillet and Véronique Delheusy. "Etude cinématique de la prise de nourriture chez Eublepharis macularius (Reptilia, Gekkonidae) et comparaison au sein des geckos". Amphibia-Reptilia 16, no. 2 (1995): 185–201. http://dx.doi.org/10.1163/156853895x00361.
Hmed, Choukri. "Des mouvements sociaux « sur une tête d'épingle » ?" Politix 84, no. 4 (2008): 145. http://dx.doi.org/10.3917/pox.084.0145.
Delheusy, Véronique, and Vincent Bels. "Comportement agonistique du gecko géant diurne Phelsuma madagascariensis grandis". Amphibia-Reptilia 15, no. 1 (1994): 63–79. http://dx.doi.org/10.1163/156853894x00551.
Dubuisson, Colette, Johanne Boulanger, Jules Desrosiers and Linda Lelièvre. "Les mouvements de tête dans les interrogatives en langue des signes québécoise". Revue québécoise de linguistique 20, no. 2 (May 7, 2009): 93–121. http://dx.doi.org/10.7202/602706ar.
Prochasson, Christophe. "Un chahuteur discret : Jean-Jacques Becker historien de la Première Guerre mondiale". Cahiers Jaurès N° 251, no. 1 (April 25, 2024): 23–32. http://dx.doi.org/10.3917/cj.251.0023.
Angiboust, Sylvain. "La tête et les jambes. L’immersion dans le cinéma d’action contemporain". Figures de l'Art. Revue d'études esthétiques 26, no. 1 (2014): 359–68. http://dx.doi.org/10.3406/fdart.2014.1649.
Verrette, René. "Le régionalisme mauricien des années trente". Revue d'histoire de l'Amérique française 47, no. 1 (August 26, 2008): 27–52. http://dx.doi.org/10.7202/305181ar.
Côté, Gérald. "Musiques et identités remixées". Articles 35, no. 1 (February 14, 2017): 53–78. http://dx.doi.org/10.7202/1038944ar.
Vernet, Julien. "A Community of Resistance: The Organization of Protest in New Orleans against the U.S. Territorial Administration, 1803–1805". French Colonial History 11 (May 1, 2010): 47–70. http://dx.doi.org/10.2307/41938197.
Theses / dissertations on the topic "Tête – Capture de mouvements"
Bossard, Martin. "Perception visuelle du mouvement propre : effets des mouvements de la tête durant la marche sur l'estimation de la distance parcourue à partir du flux optique". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0254/document.
When exploring their environment, humans and other animals have the ability to use many sources of information to estimate the distance they travel. Several studies have shown that optic flow is a significant cue for perceiving distance travelled. Furthermore, adding various viewpoint oscillations to a purely translational optic flow simulating forward self-motion was found to modulate this perception. In a series of experiments, we tested whether the perception of distance travelled was also affected by viewpoint oscillations similar to head motion during natural walking. In a first series of experiments, participants were exposed to an immersive optic flow simulating forward self-motion and were asked to indicate when they thought they had reached the remembered position of a previously seen target. Two further experiments tested whether the idiosyncrasy of viewpoint oscillations affects the perception of distance travelled in stationary observers, and whether the absence of their own viewpoint oscillations played an important role in subjects’ estimates while they walked on a treadmill. Finally, in a last experiment, we sought to develop a dynamic measure of distance travelled to a previously seen target, using a continuous pointing task. Overall, our results show that viewpoint oscillations play an important role in visual self-motion perception and that several parameters (including visual information, proprioceptive information and ecological aspects of natural walking) seem to be involved in this process.
Barrielle, Vincent. "Leveraging Blendshapes for Realtime Physics-Based Facial Animation". Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0003.
Generating synthetic facial animation is a crucial step in the creation of content for a wide variety of digital media such as movies and video games. However, producing convincing results is challenging, since humans are experts at analyzing facial expressions and will hence detect any artifact. The dominant paradigm for the production of high-quality facial animation is the blendshapes paradigm, where facial expressions are decomposed as a linear combination of more basic expressions. However, this technique requires large amounts of work to reach the desired quality, which restricts high-quality animation to large-budget movies. Producing high-quality facial animation is possible using physical simulation, but this requires the costly acquisition of medical imaging data. We propose to merge the blendshapes and physical simulation paradigms, building upon the ubiquity of blendshapes while benefiting from physical simulation for complex effects. We therefore introduce blendforces, a paradigm where blendshapes are interpreted as a basis for approximating the forces emanating from the facial muscles. We show that, combined with an appropriate physical system for the face, these blendforces can be used to produce convincing facial animation, with natural skin dynamics, handling of lip contacts, sticky lips, inertial effects and gravity. We embed this framework in a practical realtime performance capture setup, producing realtime facial animation with physical effects from a simple RGB camera feed. To the best of our knowledge, this constitutes the first instance of realtime physical simulation applied to the challenging task of facial animation.
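As an illustration of the blendforce idea named in this abstract, the toy sketch below treats each blendshape's per-vertex offsets as a force basis and integrates a damped particle system toward the weighted target expression. All dimensions, constants and the integration scheme are assumptions made for the sketch, not details taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vertices, n_blendshapes = 500, 8

neutral = rng.normal(size=(n_vertices, 3))                  # neutral face vertices
# Each blendshape stores per-vertex offsets from the neutral face.
offsets = rng.normal(scale=0.1, size=(n_blendshapes, n_vertices, 3))

def blendforces(x, weights, k=50.0):
    """Muscle-like forces: pull vertices toward the weighted blendshape pose."""
    target = neutral + np.tensordot(weights, offsets, axes=1)
    return k * (target - x)

def step(x, v, weights, dt=1e-3, damping=5.0, mass=1.0):
    """One semi-implicit Euler step of the damped per-vertex system."""
    a = (blendforces(x, weights) - damping * v) / mass
    v = v + dt * a
    return x + dt * v, v

# Relax toward an arbitrary expression (weights invented for the demo).
w = np.zeros(n_blendshapes); w[2] = 0.8
x, v = neutral.copy(), np.zeros_like(neutral)
for _ in range(2000):
    x, v = step(x, v, w)
# At equilibrium x approximates the classical linear blendshape pose,
# while the transient adds simple inertial dynamics on top of it.
```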
Di Loreto, Cédric. "Apport des simulations immersives pour l’étude du comportement dynamique des occupants d’un véhicule". Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAE065.
Whiplash remains a strong socio-economic issue in road accidents. Research in this field has led to the development of injury criteria that are still difficult to validate for all situations. The hypotheses of this project are that head stabilization strategies are influenced by activities prior to the dynamic event as well as by certain cognitive availabilities. To test this, this thesis experimented with different dynamic environments, explored the use of virtual reality as a simulation tool for studying the subject's dynamic behavior, and evaluated the relevance of these tools. A first experiment showed the importance of alertness in the subject, using the automatic emergency braking system of an equipped vehicle. A second study, replicating this experiment in a hexapod driving simulator, showed that the subject's behavior was comparable despite the lower dynamic performance of the system. Finally, a last study, carried out on subjects accelerated on a laboratory-controlled cart and whose emotional state was controlled, demonstrated the importance of integrating physiological parameters into the study of head stabilization strategies. Immersive simulations proved relevant for controlling the subject's cognitive environment, and the importance of the latter could be observed. These technologies open up new experimental possibilities that can lead to a better understanding of the subject's stabilization strategies.
Ju, Qinjie. "Utilisation de l'eye-tracking pour l'interaction mobile dans un environnement réel augmenté". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEC011/document.
Eye-tracking has a very strong potential in human-computer interaction (HCI) as an input modality, particularly in mobile situations. In this thesis, we concentrate on demonstrating this potential by highlighting the scenarios in which eye-tracking possesses obvious advantages compared with other interaction modalities. During our research, we found that this technology lacks convenient action-triggering methods, which can degrade the performance of gaze-based interaction. We therefore investigate the combination of eye-tracking and fixed-gaze head movements, which allows us to trigger various commands without using our hands or changing gaze direction. We propose a new algorithm for fixed-gaze head movement detection using only scene images captured by the scene camera mounted in front of the head-mounted eye-tracker, for the purpose of saving computation time. To test the performance of our fixed-gaze head movement detection algorithm and the acceptance of triggering commands by these movements when the user's hands are occupied by another task, we implemented tests in the EyeMusic application that we designed and developed. EyeMusic is a music reading system that can play the notes of a measure in a music score that the user does not understand. By making a voluntary head movement while fixing his/her gaze on the same point of a music score, the user obtains the desired audio feedback. The design, development and usability testing of the first prototype of this application are presented in this thesis. The usability of EyeMusic is confirmed by the experimental results, as 85% of participants were able to use all the head movements we implemented in the prototype. The average success rate of the application is 70%, which is partly influenced by the performance of the eye-tracker we used. The performance of our fixed-gaze head movement detection algorithm is 85%, with no significant differences between the performance of each head movement. Apart from EyeMusic, we explored two other scenarios based on the same control principles, EyeRecipe and EyePay; the details of these two applications are also presented in this thesis.
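The abstract does not spell out the detection algorithm, so the sketch below is only a plausible stand-in: it estimates the dominant translation of the scene image by phase correlation and flags a "fixed-gaze head movement" when the image shifts while the recorded gaze point stays still. The thresholds, the direction labels and the use of phase correlation are all assumptions, not the thesis's method.

```python
import numpy as np

def global_shift(prev, curr):
    """Dominant translation (dx, dy) of curr relative to prev,
    estimated by phase correlation on two grayscale frames."""
    cross = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    cross /= np.abs(cross) + 1e-9                    # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2: dy -= h                          # wrap to signed shifts
    if dx > w // 2: dx -= w
    return dx, dy

def fixed_gaze_gesture(frames, gaze_points, gaze_tol=15.0, move_thresh=3.0):
    """Return an image-motion label if the scene image moved while the
    gaze point (in scene-image pixels) stayed within gaze_tol."""
    gaze = np.asarray(gaze_points, dtype=float)
    if np.ptp(gaze, axis=0).max() > gaze_tol:
        return None                                  # gaze moved: no gesture
    shifts = [global_shift(a, b) for a, b in zip(frames, frames[1:])]
    dx = float(np.mean([s[0] for s in shifts]))
    dy = float(np.mean([s[1] for s in shifts]))
    if max(abs(dx), abs(dy)) < move_thresh:
        return None                                  # head essentially still
    if abs(dx) >= abs(dy):                           # labels describe image motion;
        return "right" if dx > 0 else "left"         # head motion is the opposite
    return "down" if dy > 0 else "up"

# Tiny demo: a random texture shifted rightwards with a still gaze point.
rng = np.random.default_rng(0)
base = rng.normal(size=(64, 64))
frames = [np.roll(base, shift=4 * i, axis=1) for i in range(3)]
print(fixed_gaze_gesture(frames, [(32.0, 32.0)] * 3))  # -> "right"
```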
Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars". Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings, which lack depth information and for which editing and analysis are complex. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be precisely produced. With data-driven animation, the avatar's motions are realistic, but the variety of signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions from an SL motion capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) motion capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different tracks, based on the analysis of the kinematic properties of specific joints and on existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new SL content, with the additional use of motion generation techniques such as inverse kinematics, parameterized to comply with the properties of real motions.
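As a rough illustration of annotation driven by joint kinematics, the sketch below labels each frame of a wrist trajectory as a "hold" or a "stroke" from a simple speed threshold and collapses the labels into segments. The joint choice, sampling rate and threshold are assumptions for the sketch; the thesis's actual annotation tracks and features are richer.

```python
import numpy as np

def annotate_holds(wrist_positions, fps=100.0, speed_thresh=0.3):
    """Label each frame 'hold' (low wrist speed) or 'stroke' (moving)."""
    p = np.asarray(wrist_positions, dtype=float)   # (n_frames, 3), metres
    vel = np.gradient(p, axis=0) * fps             # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)            # m/s
    return np.where(speed < speed_thresh, "hold", "stroke")

def to_segments(labels):
    """Collapse per-frame labels into (label, first_frame, last_frame)."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((labels[start], start, i - 1))
            start = i
    return segments

# Example: 1 s of stillness, 0.5 s of motion, then a held position.
t = np.linspace(0.0, 2.0, 200)[:, None]            # 2 s at 100 fps
x = 0.5 * np.clip(t - 1.0, 0.0, 0.5)               # still, move, hold
traj = x * np.array([1.0, 0.0, 0.0])
print(to_segments(annotate_holds(traj)))           # hold / stroke / hold
```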
Colas, Tom. "Externalisation en restitution binaurale non-individualisée avec head-tracking : aspects objectifs et perceptifs". Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0053.
Binaural reproduction is a method of sound playback over headphones that aims to simulate natural listening. Often, sounds played through headphones seem to originate from inside the head. For convincing binaural reproduction, sounds must appear to come from outside the head (externalized), as they do in reality. Using a head-tracking system can enhance externalization during and after head movements. This thesis examines the persistence of the after-effect of head movements on externalization. The first study combined a behavioral experiment and an EEG experiment to identify a neural correlate of externalization. An innovative method leveraging this after-effect was developed to compare "acoustically identical" stimuli with different levels of externalization. The results did not reveal a neural correlate, but raised several questions about the influence of the nature and duration of sound sources on the after-effect. A second study, involving three behavioral experiments, was conducted to address these questions. The results showed that the improvement in externalization after head movements persists for various sound sources but decreases after about ten seconds.
Li, Jingting. "Facial Micro-Expression Analysis". Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.
Micro-expressions (MEs) are very important nonverbal communication cues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem using a dedicated local and temporal pattern (LTP) of facial movement. This pattern takes a specific shape (S-pattern) when an ME is displayed. Thus, using a classical classification algorithm (SVM), MEs can be distinguished from other facial movements. We also propose a global final fusion analysis over the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new, similar S-patterns. In this way, we perform data augmentation on the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results, obtained on CASME I, CASME II, SAMM and CAS(ME)2, show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves the spotting performance.
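To make the classification step concrete, here is a toy, self-contained stand-in: it fabricates S-pattern-like motion profiles (fast rise, then plateau) and slower drift profiles, then trains an SVM to separate them. The profile shapes, window length and noise level are invented for the illustration; the thesis's actual LTP features are computed from real facial motion.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
window = 30                                  # frames per pattern (assumed)
t = np.linspace(0.0, 1.0, window)

def s_pattern_like(n):
    """Fast rise then plateau, vaguely like an ME onset profile."""
    base = 1.0 / (1.0 + np.exp(-12.0 * (t - 0.3)))
    return base[None, :] * rng.uniform(0.5, 1.0, (n, 1))

def drift_like(n):
    """Slow, roughly linear drift, vaguely like a head movement."""
    return t[None, :] * rng.uniform(0.5, 1.0, (n, 1))

X = np.vstack([s_pattern_like(200), drift_like(200)])
X += rng.normal(scale=0.05, size=X.shape)    # add sensor-like noise
y = np.array([1] * 200 + [0] * 200)          # 1 = micro-expression-like

clf = SVC(kernel="rbf").fit(X[::2], y[::2])  # train on even rows
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```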
Weber, Raphaël. "Construction non supervisée d'un modèle expressif spécifique à la personne". Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0005.
Automatic facial expression analysis has gained growing interest in the past decades as a result of the wide range of applications it covers. Medical applications have been considered, notably automatic behavior analysis for elderly home support. This thesis proposes to compute a continuous, person-specific model of expressions in an unsupervised manner (i.e. with no prior knowledge of the morphology of the subject) in order to meet the needs of automatic behavior analysis. Our system must be able to analyze facial expressions in an environment unconstrained in terms of head pose and speaking. This thesis builds on previous work on invariant representation of facial expressions. In that work, the computation of the model requires the acquisition of the neutral face, so the model is weakly supervised. Moreover, it is computed with synthesized expressions, so it does not account for the real facial expressions of the subject. We propose to make the computation unsupervised by automatically detecting the neutral face and then automatically adapting the model to the real facial expressions of the subject. The idea of the adaptation is to detect, both globally and locally, the real basic expressions of the subject in order to replace the synthesized basic expressions of the model, while maintaining a set of constraints. We tested our adaptation method on posed expressions, spontaneous expressions in a constrained environment and spontaneous expressions in an unconstrained environment. The results show the efficiency of the adaptation and the importance of the set of constraints for the test in an unconstrained environment.
Goffart, Laurent. "L'orientation saccadique du regard vers une cible : étude de la contribution du cervelet médio-postérieur chez le chat en condition "tête libre"". Lyon 1, 1996. http://www.theses.fr/1996LYO1T070.
Cohen-Lhyver, Benjamin. "Modulation de mouvements de tête pour l'analyse multimodale d'un environnement inconnu". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066574/document.
The exploration of an unknown environment by a mobile robot is a vast research domain aiming at understanding and implementing efficient, fast and relevant exploration models. However, since the 1980s, exploration is no longer restricted to the sole determination of the topography of a space: a semantic component of the explored world has been coupled to the spatial one. Indeed, in addition to the physical characteristics of the environment (walls, obstacles, usable paths or not, entrances and exits) that allow the robot to create its own internal representation of the world through which it can move, there are dynamic components such as the appearance of audiovisual events. These events are of high importance, for they can modulate the robot's behavior through their location in space (topographic aspect) and the information they carry (semantic aspect). Although unpredictable by nature (since the environment is unknown), these events are not all of equal importance: some carry valuable information for the robot's exploration task, some don't. Following work on intrinsic motivations to explore an unknown environment, and rooted in neurological phenomena, this thesis consisted in the elaboration of the Head Turning Modulation (HTM) model, which aims at giving a robot capable of head movements the ability to determine the relative importance of the appearance of an audiovisual event. This "importance" has been formalized through the notion of Congruence, which is mainly inspired by (i) Shannon's entropy, (ii) the Mismatch Negativity phenomenon, and (iii) the Reverse Hierarchy Theory. The HTM model, created within the Two!Ears European project, is a learning paradigm based on (i) self-supervision (the robot decides when it is necessary or not to learn), (ii) a real-time constraint (the robot learns and reacts as soon as data is perceived), and (iii) an absence of prior knowledge about the environment (there is no "truth" to learn, only the reality of the environment to explore). This model, integrated in the overall Two!Ears framework, has been entirely implemented in a mobile robot with binocular vision and binaural audition. The HTM model thus combines the traditional bottom-up analysis of perceived signals (extraction of characteristics, visual or audio recognition, etc.) with a top-down approach that enables the robot, via the generation of motor actions to deal with perception deficiencies (such as visual occlusion), to understand and interpret its audiovisual environment. This bottom-up/top-down active approach is then exploited to modulate the head movements of a humanoid robot and to study the impact of Congruence on these movements. The system has been evaluated via realistic simulations, and in real conditions, on the two robotic platforms of the Two!Ears project.
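The abstract ties Congruence to Shannon's entropy; a minimal, entirely illustrative reading of that link is sketched below: the robot tracks how often each (audio, visual) category pair has occurred and triggers a head turn when an event's self-information exceeds a threshold. The smoothing, threshold and class interface are assumptions, not the Two!Ears HTM implementation.

```python
import math
from collections import Counter

class CongruenceMonitor:
    """Toy entropy-flavoured congruence score for audiovisual events."""

    def __init__(self, turn_threshold_bits=1.5):
        self.counts = Counter()
        self.turn_threshold_bits = turn_threshold_bits   # assumed value

    def observe(self, audio_label, visual_label):
        """Return (surprise in bits, whether to trigger a head turn)."""
        pair = (audio_label, visual_label)
        total = sum(self.counts.values())
        p = (self.counts[pair] + 1) / (total + 1)        # Laplace smoothing
        surprise = -math.log2(p)                         # Shannon self-information
        self.counts[pair] += 1
        return surprise, surprise >= self.turn_threshold_bits

monitor = CongruenceMonitor()
for event in [("speech", "face"), ("speech", "face"),
              ("speech", "face"), ("siren", "fire_truck")]:
    s, turn = monitor.observe(*event)
    print(event, f"surprise={s:.2f} bits, turn_head={turn}")
# The repeated speech/face pair stays unsurprising; the novel
# siren/fire_truck pair crosses the threshold and triggers a turn.
```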
Books on the topic "Tête – Capture de mouvements"
Nadeau-Dubois, Gabriel. Tenir tête. Montréal, Qc: Lux Éditeur, 2013.
Friedberg, Fred. Comprendre et pratiquer la technique des mouvements oculaires (EMT): Pour soulager les tensions émotionnelles : stress, angoisse, colère, phobies, maux de tête ... Paris: Dunod-Inter Editions, 2006.
Book chapters on the topic "Tête – Capture de mouvements"
Allaert, Benjamin, Ioan Marius Bilasco and Chaabane Djeraba. "Vers une adaptation aux problèmes de pose". In Analyse faciale en conditions non contrôlées, 303–19. ISTE Group, 2024. http://dx.doi.org/10.51926/iste.9111.ch8.
Mc Grogan, Manus. "« Chasser le flic viril de sa tête » : mouvements de libération contre mao-spontex dans l’après-Mai 1968". In Prolétaires de tous les pays, qui lave vos chaussettes ?, 143–53. Presses universitaires de Rennes, 2017. http://dx.doi.org/10.3917/pur.banti.2017.01.0143.
Texto completo da fonteTrabalhos de conferências sobre o assunto "Tête – Capture de mouvements"
Bailly, Charles, François Leitner and Laurence Nigay. "Exploration de la Physicalité des Widgets pour l’Interaction Basée sur des mouvements de la Tête le Cas des Menus en Réalité Mixte". In IHM '21: IHM '21 - 32e Conférence Francophone sur l'Interaction Homme-Machine. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3450522.3451326.