Theses / dissertations on the topic "Tête – Capture de mouvements"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
See the top 50 theses / dissertations for your research on the topic "Tête – Capture de mouvements".
Next to each source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile a correct bibliography.
Bossard, Martin. "Perception visuelle du mouvement propre : effets des mouvements de la tête durant la marche sur l'estimation de la distance parcourue à partir du flux optique". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0254/document.
When exploring their environment, humans and other animals can draw on many sources of information to estimate the distance they travel. Several studies have shown that optic flow is a significant cue for perceiving distance travelled. Furthermore, adding various viewpoint oscillations to a purely translational optic flow simulating forward self-motion was found to modulate this perception. In a series of experiments, we tested whether the perception of distance travelled was also affected by viewpoint oscillations similar to head motion during natural walking. In a first series of experiments, participants were exposed to an immersive optic flow simulating forward self-motion and were asked to indicate when they thought they had reached the remembered position of a previously seen target. Two further experiments tested whether the idiosyncrasy of viewpoint oscillations affects the perception of distance travelled in stationary observers, and whether the absence of their own viewpoint oscillation played an important role in subjects' estimates while they were walking on a treadmill. Finally, in a last experiment, we sought to develop a dynamic measure of distance travelled to a previously seen target, using a continuous pointing task. Overall, our results show that viewpoint oscillations play an important role in visual self-motion perception and that several parameters (including visual information, proprioceptive information and ecological aspects of natural walking) seem to be involved in this process.
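The core idea in this abstract, estimating distance travelled from optic flow, amounts to temporally integrating the forward speed implied by the flow. A minimal sketch, with invented toy values (the speeds, sampling rate and function name are not from the thesis):

```python
# Illustrative sketch: estimating distance travelled by integrating the
# forward speed implied by a purely translational optic flow.
# All numbers are toy values, not data from the thesis.
def distance_from_optic_flow(speeds, dt):
    """Integrate per-frame forward speeds (m/s) sampled every dt seconds."""
    return sum(v * dt for v in speeds)

# Simulated self-motion at a constant 1.4 m/s (a typical walking speed)
# for 5 seconds, sampled at 60 Hz:
speeds = [1.4] * (5 * 60)
travelled = distance_from_optic_flow(speeds, dt=1 / 60)  # ~7.0 m
```

In the experiments above, viewpoint oscillations modulate the speed the observer perceives, which is why they bias this kind of integrated estimate.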
Barrielle, Vincent. "Leveraging Blendshapes for Realtime Physics-Based Facial Animation". Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0003.
Generating synthetic facial animation is a crucial step in the creation of content for a wide variety of digital media such as movies and video games. However, producing convincing results is challenging, since humans are experts at analyzing facial expressions and will hence detect any artifact. The dominant paradigm for the production of high-quality facial animation is the blendshapes paradigm, in which facial expressions are decomposed as a linear combination of more basic expressions. However, this technique requires large amounts of work to reach the desired quality, which reserves high-quality animation for large-budget movies. Producing high-quality facial animation is possible using physical simulation, but this requires the costly acquisition of medical imaging data. We propose to merge the blendshapes and physical simulation paradigms, building on the ubiquity of blendshapes while benefiting from physical simulation for complex effects. We therefore introduce blendforces, a paradigm in which blendshapes are interpreted as a basis for approximating the forces emanating from the facial muscles. We show that, combined with an appropriate physical face system, these blendforces can be used to produce convincing facial animation with natural skin dynamics, handling of lip contacts, sticky lips, inertial effects and gravity. We embed this framework in a practical realtime performance capture setup, producing realtime facial animation with physical effects from a simple RGB camera feed. To the best of our knowledge, this constitutes the first instance of realtime physical simulation applied to the challenging task of facial animation.
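The blendshapes paradigm named in this abstract, an expression as a linear combination of basic expression offsets added to a neutral face, can be sketched in a few lines. The vertex data and weights below are invented toy values, not assets from the thesis:

```python
# Illustrative sketch of the blendshapes paradigm: a facial expression is
# the neutral face plus a weighted sum of per-blendshape vertex offsets.
# The geometry here is a made-up toy example (3 scalar "vertices").
def blend(neutral, deltas, weights):
    """neutral: flat list of vertex coordinates; deltas: one offset list per
    blendshape; weights: one scalar weight per blendshape."""
    result = list(neutral)
    for w, delta in zip(weights, deltas):
        for i, d in enumerate(delta):
            result[i] += w * d
    return result

neutral = [0.0, 0.0, 0.0]   # toy neutral face
smile   = [0.2, 0.0, 0.1]   # offset for a hypothetical "smile" shape
jaw     = [0.0, 0.5, 0.0]   # offset for a hypothetical "jaw open" shape
face = blend(neutral, [smile, jaw], weights=[0.5, 1.0])
```

The blendforces idea of the thesis reinterprets such a basis as forces driving a physical simulation rather than as direct vertex displacements.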
Di Loreto, Cédric. "Apport des simulations immersives pour l’étude du comportement dynamique des occupants d’un véhicule". Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAE065.
Whiplash remains a major socio-economic issue in road accidents. Research in this field has led to the development of injury criteria that are still difficult to validate for all situations. The hypotheses of this project are that head stabilization strategies are influenced by activities prior to the dynamic event, as well as by certain cognitive availabilities. To answer this, this thesis experimented with different dynamic environments, explored the use of virtual reality as a simulation tool for the study of the subject's dynamic behavior, and evaluated the relevance of these tools. A first experiment showed the importance of alertness in the subject, using the automatic emergency braking system of an equipped vehicle. A second study, replicating this experiment in a hexapod driving simulator, showed that the subject's behavior was comparable despite the lower dynamic performance of the system. Finally, a last study, carried out on subjects accelerated on a laboratory-controlled cart whose emotional state was controlled, demonstrated the importance of integrating physiological parameters into the study of head stabilization strategies. Immersive simulations proved relevant for controlling the subject's cognitive environment, and the importance of the latter could be observed. These technologies open up new experimental possibilities that can lead to a better understanding of the subject's stabilization strategies.
Ju, Qinjie. "Utilisation de l'eye-tracking pour l'interaction mobile dans un environnement réel augmenté". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEC011/document.
Eye-tracking has very strong potential in human-computer interaction (HCI) as an input modality, particularly in mobile situations. In this thesis, we demonstrate this potential by highlighting the scenarios in which eye-tracking possesses obvious advantages compared with other interaction modalities. During our research, we found that this technology lacks convenient action-triggering methods, which can degrade the performance of interacting by gaze. We therefore investigate the combination of eye-tracking and fixed-gaze head movements, which allows various commands to be triggered without using the hands or changing gaze direction. We propose a new algorithm for fixed-gaze head movement detection that uses only the images captured by the scene camera mounted in front of the head-mounted eye-tracker, in order to save computation time. To test the performance of our fixed-gaze head movement detection algorithm, and the acceptance of triggering commands by these movements when the user's hands are occupied by another task, we implemented tests in the EyeMusic application that we designed and developed. EyeMusic is a music reading system that can play the notes of a measure in a music score that the user does not understand. By making a voluntary head movement while fixing his or her gaze on the same point of a music score, the user obtains the desired audio feedback. The design, development and usability testing of the first prototype of this application are presented in this thesis. The usability of EyeMusic is confirmed by the experimental results, as 85% of participants were able to use all the head movements we implemented in the prototype. The average success rate of the application is 70%, which is partly influenced by the performance of the eye-tracker we use. The performance of our fixed-gaze head movement detection algorithm is 85%, with no significant differences between the individual head movements. Beyond EyeMusic, we explored two other scenarios based on the same control principles, EyeRecipe and EyePay, which are also detailed in this thesis.
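As a rough illustration of the fixed-gaze head movement idea: when the gaze stays on one point, a head movement shifts the whole scene-camera image, and the dominant shift axis hints at the movement type. The toy classifier below is an assumption for illustration only, not the thesis's actual image-based algorithm:

```python
# Hypothetical sketch: classifying a fixed-gaze head movement from the
# global image shift seen by a head-mounted scene camera. A nod moves the
# scene mostly vertically, a shake mostly horizontally. Thresholds and
# inputs are invented for illustration.
def classify_head_movement(shifts, threshold=5.0):
    """shifts: list of (dx, dy) global frame displacements in pixels."""
    dx = sum(s[0] for s in shifts)
    dy = sum(s[1] for s in shifts)
    if max(abs(dx), abs(dy)) < threshold:
        return "none"
    return "shake" if abs(dx) > abs(dy) else "nod"

gesture = classify_head_movement([(4, 0), (5, 1), (6, 0)])  # → "shake"
```

A detected "nod" or "shake" can then be mapped to a command, such as EyeMusic's audio playback trigger.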
Naert, Lucie. "Capture, annotation and synthesis of motions for the data-driven animation of sign language avatars". Thesis, Lorient, 2020. http://www.theses.fr/2020LORIS561.
This thesis deals with the capture, annotation, synthesis and evaluation of arm and hand motions for the animation of avatars communicating in Sign Languages (SL). Currently, the production and dissemination of SL messages often depend on video recordings, which lack depth information and for which editing and analysis are complex. Signing avatars constitute a powerful alternative to video. They are generally animated using either procedural or data-driven techniques. Procedural animation often results in robotic and unrealistic motions, but any sign can be produced precisely. With data-driven animation, the avatar's motions are realistic, but the variety of signs that can be synthesized is limited and/or biased by the initial database. As we considered the acceptance of the avatar to be a prime issue, we selected the data-driven approach but, to address its main limitation, we propose to use annotated motions from an SL motion capture database to synthesize novel SL signs and utterances absent from this initial database. To achieve this goal, our first contribution is the design, recording and perceptual evaluation of a French Sign Language (LSF) motion capture database composed of signs and utterances performed by deaf LSF teachers. Our second contribution is the development of automatic annotation techniques for different tracks, based on the analysis of the kinematic properties of specific joints and on existing machine learning algorithms. Our last contribution is the implementation of different motion synthesis techniques based on motion retrieval per phonological component and on the modular reconstruction of new SL content, with the additional use of motion generation techniques such as inverse kinematics parameterized to comply with the properties of real motions.
Colas, Tom. "Externalisation en restitution binaurale non-individualisée avec head-tracking : aspects objectifs et perceptifs". Electronic Thesis or Diss., Brest, 2024. http://www.theses.fr/2024BRES0053.
Binaural reproduction is a method of sound playback through headphones that aims to simulate natural listening. Often, sounds played through headphones seem to originate from inside the head. For convincing binaural reproduction, sounds must appear to come from outside the head (externalized), as they do in reality. Using a head-tracking system can enhance externalization during and after head movements. This thesis examines the persistence of the head-movement after-effect on externalization. The first study combined a behavioral experiment and an EEG experiment to identify a neural correlate of externalization. An innovative method leveraging this after-effect was developed to compare "acoustically identical" stimuli with different levels of externalization. The results did not reveal a neural correlate, but raised several questions about the influence of the nature and duration of sound sources on the head-movement after-effect on externalization. A second study, involving three behavioral experiments, was conducted to address these questions. The results showed that the improvement in externalization after head movements holds for various sound sources but decreases after about ten seconds.
Li, Jingting. "Facial Micro-Expression Analysis". Thesis, CentraleSupélec, 2019. http://www.theses.fr/2019CSUP0007.
Micro-expressions (MEs) are very important nonverbal communication cues. However, due to their local and short nature, spotting them is challenging. In this thesis, we address this problem by using a dedicated local temporal pattern (LTP) of facial movement. This pattern has a specific shape (S-pattern) when MEs are displayed. Thus, by using a classical classification algorithm (SVM), MEs are distinguished from other facial movements. We also propose a global final fusion analysis on the whole face to improve the distinction between ME (local) and head (global) movements. However, the learning of S-patterns is limited by the small number of ME databases and the low volume of ME samples. Hammerstein models (HMs) are known to be a good approximation of muscle movements. By approximating each S-pattern with an HM, we can both filter outliers and generate new similar S-patterns. In this way, we perform data augmentation on the S-pattern training dataset and improve the ability to differentiate MEs from other facial movements. In the first ME spotting challenge of MEGC2019, we took part in building the new result evaluation method. In addition, we applied our method to spotting MEs in long videos and provided the baseline result for the challenge. The spotting results on CASME I, CASME II, SAMM and CAS(ME)2 show that our proposed LTP outperforms the most popular spotting method in terms of F1-score. Adding the fusion process and data augmentation further improves the spotting performance.
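The classification step in this abstract separates S-pattern features from other facial movements. As a stand-in for the SVM used in the thesis, the sketch below uses a nearest-centroid rule on invented two-dimensional features, purely to illustrate the idea of separating the two classes:

```python
# Toy stand-in for the SVM step: distinguishing micro-expression S-pattern
# features from other facial movements with a nearest-centroid rule.
# Feature vectors are invented; the thesis's real features are local
# temporal patterns of facial movement.
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(x, pos_centroid, neg_centroid):
    """Assign x to the closer centroid (squared Euclidean distance)."""
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "micro-expression" if d(x, pos_centroid) < d(x, neg_centroid) else "other"

s_patterns  = [[0.9, 0.1], [0.8, 0.2]]  # toy S-pattern training features
other_moves = [[0.1, 0.9], [0.2, 0.8]]  # toy head-movement training features
label = classify([0.85, 0.15], centroid(s_patterns), centroid(other_moves))
```

The thesis additionally augments the positive class with Hammerstein-model resynthesized S-patterns before training, which is what improves separability when ME samples are scarce.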
Weber, Raphaël. "Construction non supervisée d'un modèle expressif spécifique à la personne". Thesis, CentraleSupélec, 2017. http://www.theses.fr/2017CSUP0005.
Automatic facial expression analysis has gained growing interest in the past decades as a result of the wide range of applications it covers. Medical applications have been considered, notably automatic behavior analysis for elderly home support. This thesis proposes to compute a continuous person-specific model of expressions in an unsupervised manner (i.e. with no prior knowledge of the morphology of the subject) in order to meet the needs of automatic behavior analysis. Our system must be able to analyze facial expressions in an environment unconstrained in terms of head pose and speech. This thesis builds on previous work on invariant representations of facial expressions. In that work, computing the model requires the acquisition of the neutral face, so the model is weakly supervised. Moreover, it is computed with synthesized expressions, so it does not account for the real facial expressions of the subject. We propose in this thesis to make the computation unsupervised by automatically detecting the neutral face and then automatically adapting the model to the real facial expressions of the subject. The idea of the adaptation is to detect, both globally and locally, the real basic expressions of the subject in order to replace the synthesized basic expressions of the model, while maintaining a set of constraints. We tested our adaptation method on posed expressions, spontaneous expressions in a constrained environment, and spontaneous expressions in an unconstrained environment. The results show the efficiency of the adaptation and the importance of the set of constraints for the test in an unconstrained environment.
Goffart, Laurent. "L'orientation saccadique du regard vers une cible : étude de la contribution du cervelet médio-postérieur chez le chat en condition "tête libre"". Lyon 1, 1996. http://www.theses.fr/1996LYO1T070.
Texto completo da fonteCohen-Lhyver, Benjamin. "Modulation de mouvements de tête pour l'analyse multimodale d'un environnement inconnu". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066574/document.
The exploration of an unknown environment by a mobile robot is a vast research domain aiming at understanding and implementing efficient, fast and relevant exploration models. However, since the 80s, exploration is no longer restricted to the sole determination of the topography of a space: a semantic component of the explored world has been coupled to the spatial one. Indeed, in addition to the physical characteristics of the environment (walls, obstacles, usable paths or not, entrances and exits) allowing the robot to create its own internal representation of the world through which it can move, there are dynamic components such as the apparition of audiovisual events. These events are of high importance, for they can modulate the robot's behavior through their location in space (topographic aspect) and the information they carry (semantic aspect). Although unpredictable by nature (since the environment is unknown), these events are not all of equal importance: some carry valuable information for the robot's exploration task, some don't. Following work on intrinsic motivations to explore an unknown environment, and rooted in neurological phenomena, this thesis consisted in the elaboration of the Head Turning Modulation (HTM) model, which aims at giving a robot capable of head movements the ability to determine the relative importance of the apparition of an audiovisual event. This "importance" has been formalized through the notion of Congruence, which is mainly inspired by (i) Shannon's entropy, (ii) the Mismatch Negativity phenomenon, and (iii) the Reverse Hierarchy Theory. The HTM model, created within the Two!Ears European project, is a learning paradigm based on (i) auto-supervision (the robot decides when it is necessary or not to learn), (ii) a real-time constraint (the robot learns and reacts as soon as data is perceived), and (iii) an absence of prior knowledge about the environment (there is no "truth" to learn, only the reality of the environment to explore). This model, integrated in the overall Two!Ears framework, has been entirely implemented in a mobile robot with binocular vision and binaural audition. The HTM model thus couples the traditional ascending analysis of perceived signals (extraction of characteristics, visual or audio recognition, etc.) with a descending approach that makes it possible, via the generation of motor actions to deal with perception deficiencies (such as visual occlusion), to understand and interpret the robot's audiovisual environment. This bottom-up/top-down active approach is then exploited to modulate the head movements of a humanoid robot and to study the impact of the Congruence on these movements. The system has been evaluated via realistic simulations, and in real conditions, on the two robotic platforms of the Two!Ears project.
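The Shannon-entropy inspiration behind Congruence can be loosely illustrated: an audiovisual event whose category is rare in the robot's experience carries more self-information, and is therefore a better candidate for a head turn. The mapping below is an illustrative assumption only, not the actual HTM formulation:

```python
# Loose illustration of the information-theoretic intuition behind
# Congruence: rare event categories are more "surprising" and hence more
# worth a head turn. The observation history is a made-up toy example.
import math

def surprise(category, counts):
    """Self-information -log2(p) of an event category, given how often
    each category has been observed so far."""
    total = sum(counts.values())
    return -math.log2(counts[category] / total)

observed = {"speech": 90, "siren": 10}   # toy observation history
rare = surprise("siren", observed)       # high surprise: turn the head
common = surprise("speech", observed)    # low surprise: keep exploring
```

In the HTM model this kind of importance signal is learned online, with no prior statistics assumed about the environment.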
Wang, Haibo. "Vers un suivi en temps réel de la position de la tête et du visage". Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10158/document.
Monocular 3D head tracking is a core technique for designing intelligent interfaces. Over the last decade, tracking long-term persistent poses in ever-changing environments has remained a challenging problem. In this thesis, we investigate this problem by presenting two alternative frameworks and exploit its potential applications in human-computer interaction. The first framework is a robust implementation of the conventional differential tracking approach, along with a 3D ellipsoid for geometric reasoning. It recursively estimates head poses from prior predictions and dynamically updates its template. These attributes make it robust to observation changes and lead to smooth estimates. However, they also bring two severe problems: target movement must remain small, and template drifting happens from time to time, which together make long-term tracking with a camera impossible. To avoid these limits, the second part of this thesis turns to a novel tracking-by-detection approach. Its novelty is to join modeling, learning and tracking in a unified system. Pose tracking is realized by matching online features with offline-learned multi-view feature classes, while the learning depends on face texture synthesis, stable class detection and multi-view selection, executed within a simple head modeling system. Extensive experiments witness the disappearance of model drifting as well as successful tracking of natural head movements. To further enhance performance, we also integrate optical-flow correspondences to enforce temporal consistency during tracking by detection, and incorporate a color prior to discriminatively reject outlier features. In the last part of this thesis, we present two applications of the proposed 3D head tracking system. The first is to estimate eye gaze in the presence of natural head rotations. The second is to transfer facial expressions from a human being to an online avatar.
Masse, Jean-Thomas. "Capture de mouvements humains par capteurs RGB-D". Thesis, Toulouse 3, 2015. http://www.theses.fr/2015TOU30361/document.
The simultaneous appearance of combined depth and color sensors and of super-realtime skeleton detection algorithms led to a surge of new research in human motion capture, a key component of human-machine interaction. But the intended applicative context of these new technologies is voluntary, fronto-parallel interaction with the sensor, which allowed the designers certain approximations and requires a specific sensor placement. In this thesis, we present a multi-sensor approach designed to improve the robustness and accuracy of the positioning of a human's joints, based on temporal integration for trajectory smoothing and on filtering of the skeletons detected by each sensor. The approach has been tested on a new, specially constituted database, with a specifically adapted calibration methodology. We also began extending the approach to context-based improvements, with object perception being proposed.
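The multi-sensor fusion and smoothing described in this abstract can be sketched in its simplest form: per-joint averaging of calibrated detections from several sensors, followed by temporal smoothing of the fused trajectory. The functions and values below are an illustrative simplification, not the thesis's actual filter:

```python
# Illustrative sketch of fusing one joint's position from several RGB-D
# sensors (already calibrated into a common frame), then smoothing the
# fused trajectory over time. Coordinates are made-up toy values.
def fuse(detections):
    """detections: list of per-sensor (x, y, z) estimates of one joint."""
    n = len(detections)
    return tuple(sum(p[i] for p in detections) / n for i in range(3))

def smooth(trajectory, alpha=0.5):
    """Exponential smoothing of a fused joint trajectory."""
    out = [trajectory[0]]
    for p in trajectory[1:]:
        prev = out[-1]
        out.append(tuple(alpha * pi + (1 - alpha) * qi
                         for pi, qi in zip(p, prev)))
    return out

frame = fuse([(0.0, 1.0, 2.0), (0.2, 1.0, 2.2)])          # two-sensor average
path = smooth([(0.0, 0.0, 0.0), (2.0, 2.0, 2.0)])          # smoothed steps
```

A real system would also weight each sensor by its detection confidence and reject skeletons that fail calibration checks.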
Jacob, Thibaut. "Contrôle orbital pour le tracé de trajectoires 3D à l'aide des mouvements de la tête". Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0044.
The field of 3D sound is experiencing rapid growth due to a combination of factors (standardization of new audio formats, equipment for cinemas, etc.). While most of the work in this field has focused on 3D sound processing, creating 3D auditory content interactively remains a challenging task because it requires drawing and editing three-dimensional trajectories to control the movement of audio sources in space. In this thesis in Human-Computer Interaction (HCI), we consider the creation of audio source trajectories as a particular case of 3D modelling and propose the following contributions. On a conceptual level, we first present a design space of 3D trajectory creation techniques. We also propose a classification of existing camera controls according to the type of control and the modalities used. On an empirical level, we conducted five user studies in order to design a new interaction technique for orbital viewpoint control. This technique allows users to perform wide 360° rotations by leveraging head-roll rotations. Finally, we propose an implementation of our interaction technique and present its integration with two applications: Blender, a well-known 3D modelling software, and Performer, which is used by Radio France to place sound sources in 3D space during live events.
Massé, Benoît. "Etude de la direction du regard dans le cadre d'interactions sociales incluant un robot". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM055/document.
Robots are more and more used in a social context. They are required not only to share physical space with humans but also to interact with them. In this context, the robot is expected to understand verbal and non-verbal ambiguous cues constantly used in natural human interaction. In particular, knowing who or what people are looking at is very valuable information for understanding each individual's mental state as well as the interaction dynamics. It is called the Visual Focus of Attention, or VFOA. In this thesis, we are interested in using the inputs of an active humanoid robot, participating in a social interaction, to estimate who is looking at whom or what. On the one hand, we want the robot to look at people, so that it can extract meaningful visual information from its video camera. We propose a novel reinforcement learning method for robotic gaze control. The model is based on a recurrent neural network architecture. The robot autonomously learns a strategy for moving its head (and camera) using audio-visual inputs, and is able to focus on groups of people in a changing environment. On the other hand, information from the video camera images is used to infer the VFOAs of people over time. We estimate the 3D head pose (location and orientation) of each face, as it is highly correlated with gaze direction. We use it in two tasks. First, we note that objects may be looked at while not being visible from the robot's point of view. Under the assumption that objects of interest are being looked at, we propose to estimate their locations relying solely on the gaze directions of visible people. We formulate an ad hoc spatial representation based on probability heat-maps, and design several convolutional neural network models trained to perform a regression from the space of head poses to the space of object locations. This provides a set of object locations from a sequence of head poses. Second, we suppose that the locations of objects of interest are known. In this context, we introduce a Bayesian probabilistic model, inspired by psychophysics, that describes the dependency between head poses, object locations, eye-gaze directions, and VFOAs over time. The formulation is based on a switching state-space Markov model. A specific filtering procedure is detailed to infer the VFOAs, as well as an adapted training algorithm. The proposed contributions use data-driven approaches and are addressed within the context of machine learning. All methods have been tested on publicly available datasets. Some training procedures additionally require simulating synthetic scenarios; the generation process is then explicitly detailed.
Molet, Tom. "Etude de la capture de mouvements humains pour l'interaction en environnements virtuels /". [S.l.] : [s.n.], 1998. http://library.epfl.ch/theses/?nr=1883.
Texto completo da fonteMalciu, Marius. "Approches orientées modèle pour la capture des mouvements du visage en vision par ordinateur". Phd thesis, Université René Descartes - Paris V, 2001. http://tel.archives-ouvertes.fr/tel-00273232.
Texto completo da fonteChristiaen, Pierre. "Contribution aux recherches en Adéquation Algorithme Architecture dans le cadre de la conception d'un suiveur de regard "tête libre" par caméra". Lille 1, 1996. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1996/50376-1996-81.pdf.
Texto completo da fonteCourtemanche, Simon. "Analyse et simulation des mouvements optimaux en escalade". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENM082/document.
How optimal are human movements? This thesis tackles this question by focusing on climbing movements, studied here under three complementary aspects: the experimental recording of climbing sequences, the biomechanical analysis of these data, and the synthesis of gestures by timing optimization. Walking has been widely studied, with good results in animation [Mordatch 2013]. We are interested here in the more original question of climbing motions, whose diversity and multicontact nature present an interesting complexity for evaluating the characteristics of human motion. The heterogeneity of climbing gestures can be linked to several factors: the variety of wall shapes, the multiplicity of climber skill levels, and the different climbing categories, namely bouldering, route climbing and speed climbing. Our exploratory approach to this sport consists of three steps: data collection by multi-camera marker-based motion capture, combined with a set of force sensors mounted on an in-laboratory bouldering wall; gesture analysis by inverse dynamics, taking only kinematic data as input and based on the minimization of internal torques to resolve the multicontact ambiguity intrinsic to climbing, validated by comparison with the sensor measurements; and finally, the use of an energy-efficiency criterion to synthesize the best timing associated with a given sequence of movements. Experimental recordings were made at McGill University, which has a climbing wall instrumented with 6 force sensors and a 24-camera motion capture setup, allowing us to collect data on a population of nine subjects. The analysis of these data is the second part of this thesis. The challenge addressed is to recover the external forces and internal torques from the climber's movements alone. To this end, we assume an optimal distribution of internal torques. After analysis, the distribution turns out to be rather uniform than proportional to the muscle capacity associated with each body joint. Finally, in a third and last part, we focus on the timing of climbing gestures, taking as input the path of the climber, possibly obtained by inverse kinematics in order to overcome the need for a capture with markers and infrared cameras. As output, an optimal timing for this path is found. This timing is realistic, but lacks a model of hesitation and decision-making instants, as well as a model of contact establishment, whose associated temporal delay is currently not taken into account.
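The multicontact ambiguity mentioned above arises because the net force on the climber can be produced by many different combinations of contact forces. The principle of resolving it by minimizing squared effort can be shown in its degenerate, identical-contacts form, where the least-squares answer is simply uniform sharing. This is a toy illustration, not the thesis's full inverse-dynamics formulation:

```python
# Toy illustration of resolving multicontact ambiguity by minimizing
# squared effort: with identical contacts, minimizing sum(f_i^2) subject
# to sum(f_i) = total_force yields a uniform distribution.
def least_squares_distribution(total_force, n_contacts):
    """Closed-form minimizer of sum(f_i^2) with sum(f_i) = total_force."""
    return [total_force / n_contacts] * n_contacts

# e.g. a ~700 N body weight shared over 4 holds (two hands, two feet):
forces = least_squares_distribution(700.0, 4)
```

The thesis finds that, for real climbers, the measured torque distribution is indeed closer to this uniform solution than to one proportional to per-joint muscle capacity.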
Devos, Pierre. "Contribution biomécanique à l'analyse cinématique in vivo des mouvements de la main humaine". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2359/document.
Texto completo da fonteThe human hand is a prehensile organ which allows people to handle objects with various sizes and shapes. It is wonderful tool that can be used to perform different simple or complex tasks with strength or great dexterity. It is also a crucial tool in the daily life, both at home and in the workspace, and loss of hand functionality may quickly become disabling for some people. There are few studies in the literature. However, motion capture and kinematic analysis of the hand is becoming more and more of an interest in different areas such as medicine, ergonomics, sport, robotics, virtual reality and video games. Results from these studies have improved knowledge about skills of the hand and how to preserve them. The studies have also improved interactions between people and computers in order to command robots or to progress in virtual reality. The aim of the thesis was to develop methods for an in vivo and subject-specific kinematic analysis in order to contribute to the improvement of knowledge about the human hand motion. A first part of this thesis was to develop a protocol for the motion capture of the hand for male and female subjects aged from 20 to 50 years old. The motion capture was performed using an optoelectronic system with passive markers glued on the skin of the hand. Two sorts of movements were captured. Firstly, functional movements like flexion-extension and abduction-adduction. Secondly, prehensile movements of cylindrical and spherical objects. Then, markers on the motion captures were identified in order to extract their trajectories. The second part of this thesis consisted in the development of a method for the kinematic analysis of external hand movements from the marker trajectories. Validation of this method was achieved using a model of the hand developed in silico. Since no noise was added to the marker trajectories in the silico model; kinematic parameters were estimated with precision. 
Moreover, assessment of the functional methods showed that hand motions can be approximated by a plane, a circular arc or a spherical cap, depending on the joint studied. After constructing the functional coordinate systems for each segment of the hand using the joint kinematic parameters, it was possible to decompose any joint rotation into three Cardan angles. This decomposition method was validated using the marker trajectories of the hand model, except for the trapeziometacarpal (TMC) and the metacarpophalangeal (MCP1) joints of the thumb, which are more difficult to study. The last part of this thesis consisted in the analysis of the functional and prehensile movements from the motion captures. The curves of the Cardan angles obtained from the functional movements are similar to those presented in the literature for all of the joints except the TMC joint. It was also noticed that the joint rotations do not occur around only one axis, but around one dominant axis and one or two secondary axes. However, for some joints, differences were noticed between the curves of the Cardan angles around the secondary axes obtained in this thesis and those presented in the literature. Although only a few prehensile grasps were analyzed, some interesting correlations were found between the hand shape and the objects grasped, most particularly at the metacarpophalangeal (MCP) and the distal interphalangeal (DIP) joints.
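The Devos abstract above mentions decomposing a joint rotation into three Cardan angles. As an illustrative sketch only (not the thesis' actual implementation, and assuming an X-Y-Z rotation order away from gimbal lock), such a decomposition can be written as:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def cardan_xyz(R):
    """Recover (a, b, g) such that R = rot_x(a) @ rot_y(b) @ rot_z(g),
    assuming the middle angle b stays away from +/-90 degrees."""
    b = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))   # R[0, 2] = sin(b)
    a = np.arctan2(-R[1, 2], R[2, 2])            # from -sin(a)cos(b), cos(a)cos(b)
    g = np.arctan2(-R[0, 1], R[0, 0])            # from -cos(b)sin(g), cos(b)cos(g)
    return a, b, g
```

In practice the rotation order is chosen per joint (the dominant axis first), which is why the abstract distinguishes one dominant axis from one or two secondary axes.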
Prel, Florent. "Estimation robuste et dynamique de la pose de la tête d'un conducteur en situation de simulation de conduite automobile par vision artificielle". Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/26420/26420.pdf.
Sarhan, François-Régis. "Quantification des mouvements de la mimique faciale par motion capture sur une population de volontaires sains". Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2370/document.
The care of facial paralysis is often complex and therefore requires long-term monitoring. There are many clinical severity scores, with varying levels of sensitivity, to assess the deficit of facial movement, but most of them are qualitative. The number of assessment methods is an obstacle to patient monitoring and treatment evaluation. An objective measurement tool is needed to provide reliable measures of resting asymmetry, symmetry of voluntary movement and synkinesis. The aim of this study is to determine whether 3D motion capture of the face is compatible with these clinical criteria. A descriptive study using a three-dimensional (3D) motion capture system was performed on healthy volunteers (n=30) aged from 20 to 30 years. The motion capture system consists of 17 optoelectronic cameras operating at a frequency of 100 Hz. We captured facial movements of the healthy volunteers and obtained absolute values: 3D coordinates and relative displacements. These data were free of manual measurements, and the use of 3D motion capture does not impede facial movement. The average capture time was less than 10 minutes, and the measurements are painless for subjects. Data are collected on a computer and can be easily exported. These results show the feasibility of 3D motion capture of facial movement. The protocol used here could be standardized to be relevant for routine use. It was used in an experimental study to follow up recovery after a facial transplantation. This technique could help to overcome the uncertainty caused by subjective assessment and optimize therapeutic choices.
Comte, Nicolas. "Apprentissage des formes de scoliose à l'aide de modèles anatomiques et de la capture de mouvement". Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALM068.
As scoliosis is a complex medical condition affecting different parts of the human body, this manuscript starts with anatomical definitions of the spine and an introduction to Adolescent Idiopathic Scoliosis in a dedicated chapter. Next, we outline the existing methodologies and state-of-the-art approaches that allow a comprehensive characterization of this condition and its early detection. We then present our contributions and how they address the current challenges. Since this thesis covers different kinds of approaches, we categorize them into two distinct types. The first is the static method of examination from medical images. We point out the current limitations and challenges in the characterization of spinal alignment using X-ray radiographs. A particular emphasis is placed on the quantification of 3D deformities from non-ionizing methods through external analysis of the torso using machine-learning methods. We review the literature before presenting our contribution, which allows a 3D characterization of the full thoracolumbar spine while proposing an accessible, non-ionizing examination method. The second type of approach is dynamic, mainly based on motion capture analysis. We present the different biomarkers that are usually tracked during the acquisitions and review the methods presented in the literature along with their limitations. We then present our approach to address the current challenges in the dynamic characterization of scoliosis with motion capture analysis by leveraging subject-specific kinematic models.
Pettré, Julien. "Planification de mouvements de marche pour acteurs digitaux". Toulouse 3, 2003. http://www.theses.fr/2003TOU30200.
Todoskoff, Alexis. "Etude des évolutions temporelles du comportement du conducteur sur autoroute : Analyse multidimensionnelle de signaux relatifs au véhicule et aux mouvements de tête sur simulateur". Valenciennes, 1999. https://ged.uphf.fr/nuxeo/site/esupversions/728192b1-2071-448e-ac3b-b0ba8954072d.
Raynal, Benjamin. "Applications of digital topology for real-time markerless motion capture". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00597513.
Benchiheub, Mohamed-El-Fatah. "Contribution à l'analyse des mouvements 3D de la Langue des Signes Française (LSF) en Action et en Perception". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS559/document.
Nowadays, Sign Language (SL) is still little described, particularly with regard to the movement of the articulators. Research on SL has focused on understanding and modeling its linguistic properties. Few investigations have been carried out to understand the kinematics and dynamics of the movement itself and what they bring to the understanding of SL generated by models. This thesis deals with the analysis of movement in French Sign Language (LSF), with a main focus on its production as well as its understanding by deaf people. Better understanding movement in SL requires the creation of new resources for the scientific community studying SL. In this framework, we created and annotated a corpus of 3D motion data from the upper body and face, using a motion capture system. The processing of this corpus made it possible to specify the kinematics of movement in SL during signs and transitions. The first contribution of this thesis was to quantify to what extent certain classical laws, known in motor control, remain valid during SL movements, in order to know whether knowledge acquired in motor control can be exploited for SL. Finding the movement information that is crucial for understanding SL represented the second part of this thesis. We essentially wanted to know which aspects of movement SL production models should replicate as a priority. In this approach, we examined to what extent deaf individuals, whether signers or not, were able to understand SL according to the amount of information available to them.
Roth, Muriel. "Développements méthodologiques en imagerie d'activation cérébrale chez l'homme par résonance magnétique nucléaire : quantification de flux, imagerie de l'effet BOLD et correction des mouvements de la tête". Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10016.
Fontmarty, Mathias. "Vision et filtrage particulaire pour le suivi tridimensionnel de mouvements humains: applications à la robotique". Phd thesis, Université Paul Sabatier - Toulouse III, 2008. http://tel.archives-ouvertes.fr/tel-00400305.
Texto completo da fonteDatas, Adrien. "Analyse et simulation de mouvements d'atteinte contraints en position et orientation pour un humanoïde de synthèse". Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0005/document.
The simulation of human movement is an active research theme, particularly in ergonomic analysis to aid in the design of workstations. This thesis concerns the automatic generation of reaching tasks in the horizontal plane for a virtual humanoid. An objective expressed in the task space requires coordination of all the joints of the mannequin. The main difficulty encountered in simulating realistic movements is related to the natural redundancy of the human body. Our approach focuses mainly on two aspects: the motion of the operator's hand in the task space (spatial and temporal aspects), and the coordination of all kinematic chains. To characterize human movement, we conducted a set of motion captures with position and orientation constraints on the hand in the horizontal plane. These acquisitions allowed us to determine the spatial and temporal evolution of the hand in the task space, for both translation and rotation. The acquired data were coupled with a playback method to analyze the intrinsic relations that link the task space to the joint space of the model. The automatic generation scheme for realistic motion is based on a stack of tasks with a kinematic approach. The assumption used to simulate the action is to follow the shortest path in the task space while limiting the cost in the joint space. The scheme is characterized by a set of parameters, and a global map of parameter adjustment enables the simulation of a class of realistic movements. Finally, this scheme is validated quantitatively and qualitatively by comparing the simulation with the human gesture.
Hernoux, Franck. "Conception et évaluation d'un système transparent de capture de mouvements des mains pour l'interaction 3D temps réel en environnements virtuels". Phd thesis, Ecole nationale supérieure d'arts et métiers - ENSAM, 2011. http://pastel.archives-ouvertes.fr/pastel-00651084.
Hernoux, Franck. "Conception et évaluation d'un système transparent de capture de mouvements des mains pour l'interaction 3D temps réel en environnements virtuels". Paris, ENSAM, 2010. http://www.theses.fr/2011ENAM0039.
The purpose of this thesis is to propose and evaluate a markerless system for capturing hand movements in real time to allow 3D interaction in virtual environments (VE). Tools such as the keyboard and mouse are not sufficient for interacting in 3D VEs, current motion capture systems are expensive, and they require wearing equipment. Systems based on cameras and image processing partially fill these gaps, but do not yet allow accurate, efficient, real-time 3D motion capture. Our system provides a solution to this problem with a 3D camera. We implemented modalities that allow a more natural interaction with objects and the VE. The goal of our system is to obtain performance at least equal to that of common virtual reality tools while providing better overall acceptability (i.e., usefulness, usability, immersion). To achieve this goal, we conducted three experimental studies involving over 100 participants. In the first study, we compared the first version of our system (based on a MESA SwissRanger 3D camera) to a traditional mouse for a selection task. The second experiment focused on object-manipulation tasks (position, orientation, scaling) and navigation tasks in the VE. For this study, we compared the improved version of our system (based on the Microsoft Kinect) with data gloves associated with magnetic sensors. An additional study evaluated new interaction modalities implemented based on participants' feedback from the second study.
Brazey, Denis. "Reconnaissance de formes et suivi de mouvements en 4D temps-réel : Restauration de cartes de profondeur". Thesis, Rouen, INSA, 2014. http://www.theses.fr/2014ISAM0019.
In this dissertation, we are interested in several issues related to 3D data processing. The first concerns people detection and tracking in depth map sequences. We propose an improvement of an existing method based on a segmentation stage followed by a tracking module. The second issue is head detection and modelling in 3D point clouds. To do this, we adopt a probabilistic approach based on a new spherical mixture model. The last application deals with the restoration of deteriorated depth maps. To solve this problem, we propose a surface approximation method based on interpolation Dm-splines with scale transforms to approximate and restore the image. The presented results illustrate the efficiency of the developed algorithms.
Pasquier, Maud. "Segmentation de la locomotion humaine dans le domaine du sport et de la déficience à partir de capteurs embarqués". Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENT098.
This thesis focuses on the processing of multi-sensor data related to human locomotion. Its goal is to develop tools for segmenting locomotion from a network of embedded sensors, applied to two different domains: sport, with the ultra-marathon, and disability, with a symptom of Parkinson's disease. For both applications, the work addresses, on the one hand, the physical constraints of the sensor network and, on the other hand, the data processing algorithms. First, we collected multi-sensor data on a runner during the Marathon des Sables. Since the sensor network was confronted with extreme conditions, it was necessary to adapt to technical problems before proposing a tool to segment and classify these large amounts of data. Second, we worked with a physician to better understand "freezing" (a symptom disrupting the gait of some patients with Parkinson's disease) and to propose a new method for detecting this symptom.
Ntawiniga, Frédéric. "Head Motion Tracking in 3D Space for Drivers". Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25229/25229.pdf.
This work presents a computer vision module capable of tracking head motion in 3D space for drivers. This module was designed to be part of an integrated system for analyzing driver behaviour, replacing costly equipment and accessories that track the head of a driver but are often cumbersome for the user. The vision module operates in five stages: image acquisition, head detection, facial features extraction, facial features detection, and 3D reconstruction of the facial features being tracked. Firstly, in the image acquisition stage, two synchronized monochromatic cameras are used to set up a stereoscopic system that later makes the 3D reconstruction of the head simpler. Secondly, the driver's head is detected to reduce the size of the search space for finding facial features. Thirdly, after obtaining a pair of images from the two cameras, the facial features extraction stage combines image processing algorithms and epipolar geometry to track the chosen features, which in our case consist of the two eyes and the tip of the nose. Fourthly, in a detection stage, the 2D tracking results are consolidated by combining a neural network algorithm and the geometry of the human face to discriminate erroneous results. Finally, in the last stage, the 3D model of the head is reconstructed from the 2D tracking results (i.e. tracking performed in each image independently) and the calibration of the stereo pair. In addition, 3D measurements along the six axes of motion known as the degrees of freedom of the head (longitudinal, vertical, lateral, roll, pitch and yaw) are obtained. The validation of the results is carried out by running our algorithms on pre-recorded video sequences of drivers using a driving simulator, in order to obtain 3D measurements to be compared with the 3D measurements provided by a motion tracking device installed on the driver's head.
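The final stage described in the Ntawiniga abstract, reconstructing a 3D point from its 2D positions in a calibrated stereo pair, is commonly done by linear (DLT) triangulation. The sketch below is a generic illustration of that step under the assumption of known projection matrices, not the thesis' own code:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from pixel coordinates x1, x2 observed in
    two views with 3x4 projection matrices P1, P2 (linear DLT method)."""
    # Each view contributes two linear equations on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Repeating this for the two eyes and the nose tip yields the 3D feature positions from which the six degrees of freedom of the head can then be estimated.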
Yin, Tairan. "The One-Man-Crowd : towards single-user capture of collective motions using virtual reality". Electronic Thesis or Diss., Université de Rennes (2023-....), 2024. https://ged.univ-rennes1.fr/nuxeo/site/esupversions/10c60af3-9dbd-40c7-8e8b-2ba26ae866b4.
Crowd motion data is fundamental for understanding and simulating realistic crowd behaviors. Such data is, however, scarce because of the multiple challenges and difficulties involved in gathering it. Virtual Reality (VR) has been leveraged to study individual behavior in crowds, typically by immersing users in simulated virtual crowds and capturing their behavior. In this thesis, we propose and evaluate a novel VR-based approach that lifts the limitations of real-world experiments for the acquisition of crowd motion data. We refer to this approach as the One-Man-Crowd paradigm. We first propose to capture crowd motion with a single user. By recording the past trajectories and body movements of the user, and displaying them on virtual characters, users progressively build the overall crowd behavior by themselves. We then propose the new concept of contextual crowds, which leverages crowd simulation to mitigate the users' behavioral bias during the capture procedure. We implement two different strategies, namely a Replace-Record-Replay (3R) process and a Replace-Record-Replay-Responsive (4R) process. We evaluate and validate the proposed approach by replicating five real crowd experiments in total and comparing against them. Our results suggest that the One-Man-Crowd paradigm offers a promising approach for acquiring realistic crowd motion data in virtual environments.
Marion, Damien. "Sharing big display : développement des technologies et métaphores d'interactions nouvelles pour le partage collaboratif d'affichage en groupe ouvert". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0924/document.
Screens have invaded our daily life. Among them, large displays are becoming increasingly present in public places. Sometimes a few interactions are proposed to users, but most of the time the screens are simply used as static displays. So how should we interact with these large displays, and especially how can we allow multi-user interaction? Of course, we have to define rules for collaboration: what should happen if several people consult the same information at the same time? How can a newcomer become aware of what there is to see when users have already "transformed" the display by their current use? First, our work consisted in testing instrumented multi-user interactions, based on a head tracking system, during a collaborative information-seeking task on a large display. In this part, we highlighted the importance of the concept of "floating vision". Then, our research focused on the development of a head tracking system allowing intuitive interactions without requiring either special equipment or individual calibration. Our system supports several users (up to six) simultaneously interacting with individualized information on a large display. Finally, we present a study of performance gain in a context of multi-user competitive consultation of information. We compare the benefit of an adaptive display (information moves in front of the users focusing on it) with a standard display. This study is based on user experience (UX) analysis. We were thus able to identify the first recommendations about interaction metaphors allowing intuitive interactions with a large display in an open-group context. This research was devoted to demonstrating the feasibility of and interest in multi-user interactions with large public displays based on head tracking, and to proposing initial design orientations.
Tabacaru, Sabina. "Humorous implications and meanings : a multi-modal approach to sarcasm in interactional humor". Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30015.
This dissertation examines the different techniques used to achieve humor in interaction in two contemporary American television series, House M.D. and The Big Bang Theory. Through different writing techniques, we observe the elements used in the process of humorous meaning construction. The dialogue between interlocutors plays a central role since it centers on intersubjectivity, and hence on the common ground between speakers. This study also investigates the different gestures used by interlocutors in the two series to create humorous effects. These gestural triggers, as well as the different humor types, were annotated in ELAN, which allows a more holistic view of the processes involved in humor. The results show an evident preference for sarcasm, as well as a preference for certain facial expressions (raising eyebrows and frowning) and head movements (head tilts and head nods). These elements are explained in relation to a given context and to the speakers' attitudes, for a better understanding of humor in interaction.
Marcellin, Félix. "Analyse de la précision d’un nouveau système de capture du mouvement optique : cas du Mokam". Thesis, Compiègne, 2021. https://bibliotheque.utc.fr/Default/doc/SYRACUSE/2021COMP2626.
Many motion capture systems emerge each year. However, although these systems belong to the family of metrology tools, information on their measurement accuracy is heterogeneous and its scope is left to the discretion of the user. In this context, the research question I asked myself was: how can the accuracy of motion capture systems of different technologies be compared, and how can a compatible application be defined? In this thesis, I was interested in marker-based optical motion analysis systems. The two motion capture systems I used are the Mokam system (Kinestesia, Verton, France) and the Vicon system (Oxford Metrics, United Kingdom); the latter is considered the reference. In order to obtain the accuracy of a system, I proposed an experimental protocol to evaluate precision both statically and during movement. I then identified applications, such as postural control or quantified gait analysis, achievable by a system according to its accuracy. This research work first allowed the evaluation of the spatial accuracy of three-dimensional localisation of these two systems, in a way transposable to all marker-based motion capture systems. Second, it showed that measurement accuracy is a crucial point in determining the applications of these systems. In other words, not every motion capture system can be dedicated to every application.
Fleuriet, Jérome. "Capture fovéale d'une cible visuelle en mouvement : Approche neurophysiologique chez le singe". Thesis, Aix-Marseille 2, 2011. http://www.theses.fr/2011AIX20717.
Intercepting a visual moving target is a spatiotemporal challenge for the brain that is achieved by various species. Here, we investigated the foveal capture of a moving target by saccadic gaze shifts in the awake monkey. The current theory proposes that saccadic interception involves two neural pathways. A first pathway would convey a sampled target position signal to the saccade burst generator through the superior colliculus (SC). The second, through the cerebellum, would convey an additional command on the basis of motion-related signals. A behavioral experiment was performed to analyze the influence of motion-related signals on saccade dynamics, and showed a continuous influence. In a second study, we tested the robustness of the oculomotor system to an unexpected spatiotemporal perturbation (by electrical microstimulation in the deep SC) and showed the presence of accurate correction saccades. Our results argue for a continuous representation of the saccade goal.
Chaumeil, Anaïs. "Evaluation et développement de méthodes d'analyse du mouvement sans marqueurs à partir de vidéos". Electronic Thesis or Diss., Lyon 1, 2024. http://www.theses.fr/2024LYO10209.
Video-based markerless motion capture has benefitted in the last few years from the development of automatic point estimation methods based on deep learning techniques. For motion analysis in biomechanics, these methods have numerous advantages, such as the possibility of analysing movement without participant-worn equipment or outside the laboratory. The goal of this thesis is thus to contribute to the evaluation and development of video-based markerless motion capture methods for applications in biomechanics. First, an existing video-based markerless motion capture method is evaluated for movements and kinematic parameters rarely studied in the literature. Then, 2D keypoints estimated by automatic point estimation methods are characterized, and the influence of these characteristics on 3D point reconstruction is studied. Finally, a method that uses whole confidence heatmaps (obtained from automatic point estimation methods) to compute 3D kinematics is proposed and evaluated.
Guingo, Geoffrey. "Synthèse de texture dynamique sur objets déformables". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM053.
In virtual worlds, the appearance of objects is a crucial point for user immersion. To approximate light-matter interaction, a common way is to use textures. To help artists during the creative process, texture synthesis and texture editing methods have emerged. These methods are differentiated by the range of textures they can synthesize, and especially by whether they handle heterogeneous textures. Such textures are composed of several regions with different contents, whose distribution is led by a global structure; each zone corresponds to a different material with a specific appearance and dynamic behavior. First, we propose an additive model of static textures allowing on-the-fly synthesis of heterogeneous textures of arbitrary size from an example. This method includes a spatially varying Gaussian noise pattern, as well as a mechanism for synchronization with a structure layer, with the aim of improving the variety of the synthesis while preserving plausible small details. Our method consists of an analysis phase, composed of a set of algorithms for instantiating the different layers from an example image, followed by a real-time synthesis step. During synthesis, the two layers are independently generated, synchronized and added, preserving the consistency of details even when the structure layer is deformed to increase variety. In a second step, we propose a new approach to model and control the dynamic deformation of textures, whose implementation in the standard graphics pipeline remains simple. The deformation is modeled at pixel resolution in the form of a warping in the parametric domain, making it possible to have a different behavior for each pixel, depending on the texture content.
The warping is locally and dynamically defined by real-time integration along the flow lines of a pre-computed velocity field, and can be controlled by the deformation of the underlying surface geometry, by environment parameters, or through interactive editing. In addition, we propose a method to pre-compute the velocity field from a simple scalar map representing heterogeneous dynamic behaviors, as well as a solution to handle the sampling problems that occur in overstretched areas during deformation.
Morel, Marion. "Modélisation de séries temporelles multidimensionnelles. Application à l'évaluation générique et automatique du geste sportif". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066235/document.
Whether to reduce the risk of falls in elderly people, to translate sign language, or to control a virtual human being, gesture analysis is a thriving research field that aims at recognizing, classifying, segmenting, indexing and evaluating different types of motion. As few studies tackle the evaluation process, this PhD focuses on the design of an autonomous system for the generic evaluation of sport-related gestures. The tool is trained on experts' motions recorded with a motion capture system. Dynamic Time Warping (DTW) is deployed to obtain a reference gesture through data alignment and averaging. Nevertheless, this standard method suffers from pathological-path issues that reduce its effectiveness. For this reason, local constraints are added to the new DTW-based algorithm, called CDBA (Constrained DTW Barycenter Averaging). At each time step and for each limb, the quality of a gesture is assessed spatially and temporally. Each new motion is compared to the reference gesture and weighted in terms of data dispersion around the reference. The process is validated on annotated karate and tennis databases. A first online training prototype is given in order to prompt further research on this subject.
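The Morel abstract above builds on Dynamic Time Warping. For readers unfamiliar with it, a minimal sketch of the classic DTW distance between two 1-D sequences follows; it is the textbook algorithm, not the constrained CDBA variant developed in the thesis:

```python
def dtw_distance(s, t):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    O(len(s) * len(t)) time, using |.| as the local cost."""
    n, m = len(s), len(t)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of s[:i] with t[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

The pathological paths mentioned in the abstract arise because this unconstrained recursion may align one sample to arbitrarily many samples of the other sequence, which is what local constraints such as those in CDBA are designed to limit.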
Fernandez-Abrevaya, Victoria. "Apprentissage à grande échelle de modèles de formes et de mouvements pour le visage 3D". Electronic Thesis or Diss., Université Grenoble Alpes, 2020. https://theses.hal.science/tel-03151303.
Data-driven models of the 3D face are a promising direction for capturing the subtle complexities of the human face, and a central component of numerous applications thanks to their ability to simplify complex tasks. Most data-driven approaches to date were built either from a relatively limited number of samples or by synthetic data augmentation, mainly because of the difficulty of obtaining large-scale and accurate 3D scans of the face. Yet there is a substantial amount of information that can be gathered from publicly available sources captured over the last decade, whose combination can potentially bring forward more powerful models. This thesis proposes novel methods for building data-driven models of 3D face geometry, and investigates whether improved performance can be obtained by learning from large and varied datasets of 3D facial scans. In order to make efficient use of a large number of training samples, we develop novel deep learning techniques designed to effectively handle three-dimensional face data. We focus on several aspects that influence the geometry of the face: its shape components, including fine details; its motion components, such as expression; and the interaction between these two subspaces. In particular, we develop two approaches for building generative models that decouple the latent space according to natural sources of variation, e.g. identity and expression. The first approach considers a novel deep autoencoder architecture that allows a multilinear model to be learned without requiring the training data to be assembled as a complete tensor. We next propose a novel non-linear model based on adversarial training that further improves the decoupling capacity.
This is enabled by a new 3D-2D architecture combining a 3D generator with a 2D discriminator, where both domains are bridged by a geometry mapping layer. As a necessary prerequisite for building data-driven models, we also address the problem of registering a large number of 3D facial scans in motion. We propose an approach that can efficiently and automatically handle a variety of sequences while making minimal assumptions on the input data. This is achieved by the use of a spatiotemporal model as well as a regression-based initialization, and we show that we can obtain accurate registrations in an efficient and scalable manner. Finally, we address the problem of recovering surface normals from natural images, with the goal of enriching existing coarse 3D reconstructions. We propose a method that can leverage all available image and normal data, whether paired or not, thanks to a new cross-modal learning architecture. Core to our approach is a novel module that we call deactivable skip connections, which allows the local details to be transferred from the image to the output surface without hurting the performance when autoencoding modalities, achieving state-of-the-art results for the task.
Peckel, Mathieu. "Le lien réciproque entre musique et mouvement étudié à travers les mouvements induits par la musique". Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOL025/document.
Texto completo da fonte
Music and movement are inseparable. The movements that are spontaneously produced when listening to music are thought to be related to the close relationship between the perceptual and motor systems in listeners. This particular link is the main topic of this thesis. A first approach focused on the impact of music-induced movements on music cognition. In two studies, we show that moving along to music enhances neither the retention of new musical pieces (Study 1) nor the retention of the contextual information related to their encoding (Study 2). These results suggest a shallow processing inherent to the expression of musical affordances required for the production of music-induced movements in the motor task. Moreover, they suggest that music is automatically processed in a motoric fashion independently of the task. Our results also brought forward the importance of musical groove. A second approach focused on the influence of the perception of musical rhythms on the production of rhythmic movements. Our third study tested the hypothesis that different limbs would be differentially influenced depending on the musical tempo. Results show that the tapping task was the most influenced by the perception of musical rhythms. We argue that this comes from the similar nature of the musical pulse and the timing mechanisms involved in the tapping task, and from motor resonance phenomena. We also observed different strategies put in place to cope with the task. All these results are discussed in light of the link between perception and action, embodied musical cognition and musical affordances.
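Tapping tasks of the kind studied above are commonly analysed through tap-to-beat asynchronies, where negative values indicate that taps anticipate the beat. The sketch below, with invented tap times, shows one standard way to compute them; it is a generic illustration, not the thesis's analysis pipeline:

```python
import numpy as np

def tap_asynchronies(tap_times, beat_times):
    """Signed asynchrony (s) of each tap relative to the nearest beat."""
    taps = np.asarray(tap_times)
    beats = np.asarray(beat_times)
    # For each tap, find the closest metronome beat, then subtract it.
    nearest = beats[np.argmin(np.abs(taps[:, None] - beats[None, :]), axis=1)]
    return taps - nearest

beats = np.arange(0.0, 4.0, 0.5)  # 120 BPM metronome, 8 beats
taps = beats + np.array([-0.03, -0.02, -0.04, -0.01,
                         -0.03, -0.02, -0.05, -0.02])  # hypothetical taps
asyn = tap_asynchronies(taps, beats)
mean_asyn = asyn.mean()  # negative: taps anticipate the beat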
Szinte, Martin. "The recovery of target locations in space across movements of eyes and head". PhD thesis, Université René Descartes - Paris V, 2012. http://tel.archives-ouvertes.fr/tel-00760375.
Texto completo da fonte
Dagnes, Nicole. "3D human face analysis for recognition applications and motion capture". Thesis, Compiègne, 2020. http://www.theses.fr/2020COMP2542.
Texto completo da fonte
This thesis is intended as a geometrical study of the three-dimensional facial surface, whose aim is to provide an application framework of entities coming from the context of differential geometry to be used as facial descriptors in face analysis applications, such as face recognition (FR) and facial expression recognition (FER). Indeed, although every face is unique, all faces are similar and their morphological features are the same for all mankind. Hence, it is essential for face analysis to extract suitable features. All the facial features proposed in this study are based only on the geometrical properties of the facial surface. These geometrical descriptors and the related entities have then been applied to the description of the facial surface in pattern recognition contexts. Indeed, the final goal of this research is to prove that differential geometry is a comprehensive tool oriented to face analysis, and that geometrical features are suitable to describe and compare faces and, more generally, to extract relevant information for human face analysis in different practical application fields. Finally, since in the last decades face analysis has also gained great attention for clinical applications, this work focuses on the analysis of musculoskeletal disorders by proposing an objective quantification of facial movements to assist maxillofacial surgery and facial motion rehabilitation. At this time, different methods are employed for evaluating facial muscle function. This research work investigates the 3D motion capture system of the Technology, Sport and Health platform, located in the Innovation Centre of the University of Technology of Compiègne, within the Biomechanics and Bioengineering Laboratory (BMBI).
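Among the differential-geometry descriptors usable on a facial surface, Gaussian and mean curvature are the classic examples. As a minimal sketch, assuming the surface is given as a height field on a regular grid (the thesis's actual formulation may differ), they can be computed with standard finite-difference formulas:

```python
import numpy as np

def curvatures(z, h):
    """Gaussian (K) and mean (H) curvature maps of a height field z(x, y)
    sampled on a regular grid with spacing h (rows = y, columns = x)."""
    zy, zx = np.gradient(z, h)       # first derivatives
    zxy, zxx = np.gradient(zx, h)    # second derivatives
    zyy, _ = np.gradient(zy, h)
    g = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / g**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy + (1 + zy**2) * zxx) / (2 * g**1.5)
    return K, H

# Sanity check on a paraboloid z = (x^2 + y^2) / 2: K = H = 1 at the apex.
x = np.linspace(-1.0, 1.0, 41)
X, Y = np.meshgrid(x, x)
K, H = curvatures(0.5 * (X**2 + Y**2), x[1] - x[0])
```

Regions of high curvature (nose tip, eye corners) are exactly the kind of morphological landmarks such descriptors are meant to highlight.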
Bouvel, Simon. "Méthodes expérimentales et fusion de données imagerie-cinématique pour la modélisation du mouvement pathologique de l'épaule". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066184/document.
Texto completo da fonte
This work takes place in the context of shoulder complex motion measurement in biomechanics. We present the technologies and methods that apply to this problem and the associated obstacles (particularly skin tissue deformation), in order to justify our choice of performing data fusion between measurements in which the subjects remain still and others in which they are in motion. We propose to perform this data fusion through spatial interpolation of reference frames from scattered data, specifically with the natural neighbour algorithm, which has been adapted to the framework of this study. A series of experiments with a manipulator robot was performed in order to assess the feasibility of the developed method, the robot giving access to a ground truth that would be unavailable with human experiments. The results we obtained encouraged us to pursue the study with human experiments. These experiments were performed using an optoelectronic motion capture system. The data gathered while subjects remained still, together with the data acquired with the subjects in motion, allowed us, through natural neighbour interpolation, to estimate the motion of the scapula relative to the thorax for abduction, flexion and scapular plane elevation movements. The results we obtained were similar to those found in the literature, supporting on the one hand the method we developed, and on the other hand the spatial interpolation approach for bone motion measurement in biomechanics, which compensates for the skin tissue artefact.
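The core operation in such an approach is a weighted blend of reference frames (position plus orientation). The sketch below is a stand-in: quaternions of nearby orientations are averaged and renormalised (a common small-angle approximation), and the weights, which in the thesis come from the natural neighbour algorithm, are supplied directly with hypothetical values:

```python
import numpy as np

def blend_frames(positions, quats, weights):
    """Weighted blend of reference frames (position + unit quaternion, xyzw)."""
    w = np.asarray(weights, float)
    w = w / w.sum()                          # normalise the weights
    pos = w @ np.asarray(positions, float)   # linear blend of origins
    q = w @ np.asarray(quats, float)         # average then renormalise
    return pos, q / np.linalg.norm(q)

# Two hypothetical calibration frames: identity vs. 90 degrees about z.
s, c = np.sin(np.pi / 4), np.cos(np.pi / 4)
pos, quat = blend_frames([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]],
                         [[0.0, 0.0, 0.0, 1.0], [0.0, 0.0, s, c]],
                         [0.5, 0.5])
# Equal weights yield the halfway frame: a 45-degree rotation about z.
```

For this symmetric case the averaged quaternion coincides with the exact interpolant, which makes it a convenient sanity check.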
Barnachon, Mathieu. "Reconnaissance d'actions en temps réel à partir d'exemples". PhD thesis, Université Claude Bernard - Lyon I, 2013. http://tel.archives-ouvertes.fr/tel-00820113.
Texto completo da fonteFontmarty, Mathias. "Vision et filtrage particulaire pour le suivi tridimensionnel de mouvement humain : applications à la robotique". Toulouse 3, 2008. http://www.theses.fr/2008TOU30162.
Texto completo da fonte
A great robotic challenge today is that of the personal robot. While moving in real environments, the robot must take into account the humans present in its neighborhood in order to avoid them or to facilitate their movements. For an active interaction, however, the robot must also be able to perceive their pose or their movements. To this end, we aim to set up a human motion tracking system based on cameras embedded on the robot. A rough 3D representation of the human body is proposed, taking into account biomechanical and anthropomorphic constraints. The model projection is then fitted to the images by exploiting various 2D visual cues (edges, colors, motion) and a sparse 3D reconstruction of the scene. In order to estimate the 3D configuration parameters, we use the well-known particle filters. Evolutions are considered in order to efficiently tackle the problem while satisfying the strong temporal constraints imposed by the final application. To address the issues step by step, two different contexts are proposed. The first one (ubiquitous robotics) operates with fixed ambient cameras offering various and complementary viewpoints. The second one (mobile robotics) exploits a stereo camera embedded on the robot.
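A bootstrap (SIR) particle filter of the kind used for such tracking can be sketched in a few lines on a 1-D toy state; the random-walk model, noise levels and particle count below are illustrative assumptions, far simpler than the full-body state space of the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n=500, proc_std=0.3, obs_std=1.0):
    """Bootstrap (SIR) particle filter tracking a 1-D random-walk state.

    Each step: predict (diffuse particles with process noise), update
    (weight by the Gaussian likelihood of the observation), resample
    (draw particles in proportion to their weights).
    """
    particles = rng.normal(0.0, 2.0, n)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_std, n)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # likelihood
        w = w / w.sum()
        estimates.append(float(w @ particles))                # posterior mean
        particles = particles[rng.choice(n, n, p=w)]          # resample
    return np.array(estimates)

true_pos = np.linspace(0.0, 5.0, 40)
observations = true_pos + rng.normal(0.0, 1.0, 40)
estimates = particle_filter(observations)
```

The same predict/update/resample loop carries over to articulated-body tracking, where each particle is a full pose vector and the likelihood compares the projected model against the image cues.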
Pichon, Romain. "Atteinte du contrôle postural des personnes avec BPCO : modifications, caractéristiques et activités de la vie quotidienne". Electronic Thesis or Diss., Rennes 2, 2023. http://www.theses.fr/2023REN20039.
Texto completo da fonte
Chronic Obstructive Pulmonary Disease (COPD) is a highly prevalent respiratory disease. Previous research has shown that postural control is impaired in people with COPD (PwCOPD). The first aim of this thesis was to identify the characteristics of this postural control modification; the work then focused on four objectives: to identify the discriminating factors of reduced postural control in this population, to study the associations between postural control and clinical factors, to characterise the postural control of PwCOPD during activities of daily living (ADL), and to study the influence of a cognitive task on the postural control of PwCOPD. The main results of this work tend to show that postural control is impaired in around 60% of PwCOPD and that many components of postural control can be affected. PwCOPD with reduced postural control are characterised by a low symptom-related quality of life, a low arterial oxygen pressure, a reduced inspiratory muscle strength and a high body mass index. Biomechanical analysis of a sequence of ADL revealed changes in postural control parameters (reduction in the mean centre-of-mass velocities in the three directions, in the variability of its medio-lateral and vertical velocity, and in its peak vertical velocity) in PwCOPD compared with control participants. The association analysis mainly showed a link between dyspnea during the performed ADL and the assessed postural control parameters. Finally, our results suggest that a cognitive task has a similar influence on the postural control parameters of PwCOPD and control participants.
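The centre-of-mass velocity parameters mentioned above (mean, variability and peak per axis) can be derived from a sampled trajectory by simple differentiation. The sketch below uses a synthetic trajectory as a stand-in for motion-capture data; the axis convention and signal shapes are assumptions:

```python
import numpy as np

def com_velocity_metrics(com, dt):
    """Per-axis mean, variability (std) and peak of centre-of-mass velocity.

    `com` is an (n_samples, 3) trajectory (medio-lateral, antero-posterior,
    vertical); `dt` is the sampling period in seconds.
    """
    vel = np.diff(com, axis=0) / dt  # finite-difference velocity
    return vel.mean(axis=0), vel.std(axis=0), np.abs(vel).max(axis=0)

# Synthetic trajectory: small ML sway, steady forward progression, vertical bob.
t = np.arange(0.0, 2.0, 0.01)
com = np.stack([0.01 * np.sin(2 * np.pi * t),
                0.5 * t,
                0.02 * np.sin(np.pi * t)], axis=1)
mean_v, std_v, peak_v = com_velocity_metrics(com, 0.01)
```

Comparing such per-axis summaries between patient and control groups is the kind of analysis the biomechanical assessment above relies on.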