Academic literature on the topic 'Estimation de poses humaines' (human pose estimation)


Below are lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Estimation de poses humaines' (human pose estimation).


Where the metadata makes them available, the full text of each publication can be downloaded as a PDF and its abstract read online.

Journal articles on the topic "Estimation de poses humaines":

1

Jayasri, R. "HUMAN POSE ESTIMATION." International Scientific Journal of Engineering and Management 3, no. 3 (March 23, 2024): 1–9. http://dx.doi.org/10.55041/isjem01426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This paper presents a human pose estimation system with integrated feedback mechanisms to assess and guide users in achieving correct poses. Utilizing advanced deep learning techniques in computer vision, the system swiftly detects key points on the human body and provides instant feedback on pose accuracy. Built on convolutional neural networks trained on extensive pose datasets, the system includes pose detection, classification, and feedback stages. By comparing detected poses with predefined correct poses, the system delivers positive feedback for accurate poses and corrective guidance for deviations. Key Words: Human Pose Estimation, Pose Detection, Pose Classification, Correct Pose Assessment, Fitness Training, Key Points Detection, Correct Pose Thresholds
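The comparison of detected joint angles against predefined correct-pose thresholds that this abstract describes can be sketched roughly as follows; the tolerance value and feedback strings are illustrative assumptions, not taken from the paper:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c, each an (x, y) pair."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos))

def pose_feedback(detected, target, tolerance=15.0):
    """Positive feedback inside the tolerance band, corrective guidance outside it."""
    if abs(detected - target) <= tolerance:
        return "correct"
    return "increase angle" if detected < target else "decrease angle"
```

For instance, with hip, knee, and ankle keypoints at (0, 0), (0, 1), and (1, 1), `joint_angle` reports 90 degrees, and `pose_feedback(90.0, 90.0)` returns "correct".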
2

Lv, Yao-wen (吕耀文), Jian-li Wang (王建立), Hao-jing Wang (王昊京), Wei Liu (刘维), Liang Wu (吴量), and Jing-tai Cao (曹景太). "Estimation of camera poses by parabolic motion." Optics and Precision Engineering 22, no. 4 (2014): 1078–85. http://dx.doi.org/10.3788/ope.20142204.1078.

3

Shalimova, E. A., E. V. Shalnov, and A. S. Konushin. "Camera parameters estimation from pose detections." Computer Optics 44, no. 3 (June 2020): 385–92. http://dx.doi.org/10.18287/2412-6179-co-600.

Abstract:
Some computer vision tasks become easier with known camera calibration. We propose a method for estimating camera focal length, location, and orientation by observing human poses in the scene. Weak requirements on the observed scene make the method applicable to a wide range of scenarios. Our evaluation shows that, even when trained only on a synthetic dataset, the proposed method outperforms a known solution. Our experiments also show that using only human poses as input allows the proposed method to calibrate dynamic visual sensors.
4

Mahajan, Priyanshu, Shambhavi Gupta, and Divya Kheraj Bhanushali. "Body Pose Estimation using Deep Learning." International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1419–24. http://dx.doi.org/10.22214/ijraset.2023.49688.

Abstract:
Healthcare, sports analysis, gaming, and entertainment are just some of the many fields that could benefit from solving the challenging problem of real-time human pose detection and recognition in computer vision. Capturing human motion, analysing physical exercise, and giving feedback on performance can all benefit from reliable detection and recognition of body poses. Recent progress in deep learning has made it possible to create real-time systems that can accurately and quickly recognise and identify human poses.
5

Aju, Abin, Christa Mathew, and O. S. Gnana Prakasi. "PoseNet based Model for Estimation of Karate Poses." Journal of Innovative Image Processing 4, no. 1 (May 16, 2022): 16–25. http://dx.doi.org/10.36548/jiip.2022.1.002.

Abstract:
In the domain of computer vision, human pose estimation is becoming increasingly significant. It is one of the most compelling areas of research and is gaining a lot of interest due to its usefulness and flexibility in a variety of fields, including healthcare, gaming, augmented reality, virtual training, and sports. Human pose estimation has opened a door of opportunities. This paper proposes a model for the estimation and classification of karate poses, which can be used for virtual karate posture correction and training. A pretrained model, PoseNet, is used for pose estimation; from its results, the angles between specific joints are calculated and fed into a K-Nearest Neighbors classifier to classify the poses. The results show that the model achieves an accuracy of 98.75%.
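The pipeline this abstract describes (joint angles fed to a K-Nearest Neighbors classifier) can be sketched as follows. The stance labels and two-angle feature vectors below are hypothetical; a real system would compute the angles from PoseNet keypoints rather than hard-code them:

```python
import math

def classify_pose(angles, training_set, k=3):
    """Vote among the k nearest joint-angle feature vectors (angles in degrees).

    training_set is a list of (angle_vector, label) pairs."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    nearest = sorted(training_set, key=lambda sample: dist(sample[0], angles))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)  # majority vote
```

With illustrative two-angle features (say, front-knee and back-knee angles) such as `[([90, 170], "zenkutsu"), ([95, 165], "zenkutsu"), ([150, 90], "kiba"), ([145, 95], "kiba")]`, a query of `[92, 168]` is classified as "zenkutsu".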
6

Astuti, Ani Dwi, Tita Karlita, and Rengga Asmara. "Yoga Pose Rating using Pose Estimation and Cosine Similarity." Jurnal Ilmu Komputer dan Informasi 16, no. 2 (July 3, 2023): 115–24. http://dx.doi.org/10.21609/jiki.v16i2.1151.

Abstract:
One type of exercise that many people do today is yoga. However, doing yoga alone, without an instructor, carries a risk of injury if it is not done correctly. This research proposes a web application that can assess the accuracy of a person's yoga position, using ResNet for pose estimation and cosine similarity to measure the similarity of positions. The application recognizes a person's body pose and compares it with the poses of professionals so that the accuracy of the position can be assessed. Three datasets are used: the COCO dataset to train the pose estimation model so that it can recognize a person's pose, a reference dataset containing yoga poses performed by professionals, and a dataset of yoga pose images considered correct. Nine yoga poses are used: Child's Pose, Swimmers, Downdog, Chair Pose, Crescent Lunge, Planks, Side Plank, Low Cobra, and Namaste. The optimal pose estimation model has a precision of 87% and a recall of 88.2%, and was obtained using the Adam optimizer, 30 epochs, and a learning rate of 0.0001.
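The cosine-similarity rating step can be illustrated with a minimal sketch. Flattening raw (x, y) keypoints and mapping similarity to a 0-100 score are assumptions for illustration; a production system like the one described would first normalize poses for translation and scale:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pose_score(user_pose, reference_pose):
    """Rate a pose on a 0-100 scale by comparing flattened (x, y) keypoints."""
    u = [c for point in user_pose for c in point]
    v = [c for point in reference_pose for c in point]
    return round(100 * max(0.0, cosine_similarity(u, v)), 1)
```

Because cosine similarity is scale-invariant, a pose whose keypoints are all doubled still scores 100.0 against the original, which is one reason it is a convenient comparison measure here.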
7

Jagtap, Aniket. "Yoga Guide: Yoga Pose Estimation Using Machine Learning." International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 296–97. http://dx.doi.org/10.22214/ijraset.2024.58272.

Abstract:
A deep learning model is proposed that uses a convolutional neural network/LR algorithm for yoga pose identification, together with a human joint localization model, followed by a process for identifying errors in the pose. After obtaining all the information about the user's pose, the system gives feedback to improve or correct the user's posture. We also propose an improved algorithm for calculating scores that can be applied to all poses. Our application is evaluated on different yoga poses under different scenes, and its robustness is guaranteed.
8

Sun, Jun, Mantao Wang, Xin Zhao, and Dejun Zhang. "Multi-View Pose Generator Based on Deep Learning for Monocular 3D Human Pose Estimation." Symmetry 12, no. 7 (July 4, 2020): 1116. http://dx.doi.org/10.3390/sym12071116.

Abstract:
In this paper, we study the problem of monocular 3D human pose estimation based on deep learning. Due to single-view limitations, monocular human pose estimation cannot avoid the inherent occlusion problem. Common methods use multi-view 3D pose estimation to solve this problem; however, single-view images cannot be used directly in multi-view methods, which greatly limits practical applications. To address these issues, we propose a novel end-to-end network for monocular 3D human pose estimation. First, we propose a multi-view pose generator to predict multi-view 2D poses from the 2D pose in a single view. Second, we propose a simple but effective data augmentation method for generating multi-view 2D pose annotations, because existing datasets (e.g., Human3.6M) do not contain a large number of 2D pose annotations in different views. Third, we employ a graph convolutional network to infer a 3D pose from the multi-view 2D poses. Experiments conducted on public datasets verify the effectiveness of our method, and the ablation studies show that it improves the performance of existing 3D pose estimation networks.
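The third step relies on graph convolutions over the skeleton graph, where joints are nodes and bones are edges. A minimal, untrained single layer (illustrative weights and a toy three-joint chain, not the paper's architecture) might look like this:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A + I) D^-1/2 X W).

    X: (joints, features) per-joint features, e.g. stacked 2D coordinates
    from several generated views; A: skeleton adjacency matrix; W: weights."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W)
```

For a three-joint chain with four input features per joint (two views times two coordinates), a weight matrix of shape (4, 3) maps each joint to a 3D output, so stacking such layers can lift multi-view 2D poses toward 3D coordinates.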
9

Su, Jianhua, Zhi-Yong Liu, Hong Qiao, and Chuankai Liu. "Pose-estimation and reorientation of pistons for robotic bin-picking." Industrial Robot: An International Journal 43, no. 1 (January 18, 2016): 22–32. http://dx.doi.org/10.1108/ir-06-2015-0129.

Abstract:
Purpose – Picking up pistons in arbitrary poses is an important step on a car engine assembly line. A vision system is usually used to estimate the pose of the pistons and then guide a stable grasp. However, a piston in some poses, e.g. with the mouth of the piston facing forward, can hardly be grasped directly by the gripper. Thus, the piston must be reoriented to a desired pose, i.e. with its mouth facing upward, for grasping. Design/methodology/approach – This paper presents a vision-based picking system that can grasp pistons in arbitrary poses. The picking process is divided into two stages. At the localization stage, a hierarchical approach is proposed to estimate the piston's pose from images that usually involve both heavy noise and edge distortions. At the grasping stage, multi-step robotic manipulations are designed to make the piston follow a nominal trajectory to the minimum of the distance between the piston's center and the support plane; under the designed input, the piston is pushed into the desired orientation. Findings – A target piston in an arbitrary pose can be picked from the conveyor belt by the gripper with the proposed method. Practical implications – The designed vision-based robotic bin-picking system offers an advantage in terms of flexibility in the automobile manufacturing industry. Originality/value – The authors develop a methodology that uses a pneumatic gripper and 2D vision information to pick up multiple pistons in arbitrary poses. The rough poses of the parts are detected with a hierarchical approach for detecting multiple ellipses in environments that usually involve edge distortions. The pose uncertainties of the piston are eliminated by multi-step robotic manipulations.
10

Fujita, Kohei, and Tsuyoshi Tasaki. "PYNet: Poseclass and Yaw Angle Output Network for Object Pose Estimation." Journal of Robotics and Mechatronics 35, no. 1 (February 20, 2023): 8–17. http://dx.doi.org/10.20965/jrm.2023.p0008.

Abstract:
The issue of estimating the poses of simple-shaped objects, such as retail store goods, has been addressed to ease the grasping of objects by robots. Conventional methods that estimate poses with an RGBD camera mounted on a robot have difficulty estimating the three-dimensional poses of simple-shaped objects with few shape features. Therefore, in this study, we propose a new class called "poseclass" to indicate the grounding face of an object. The poseclass is a discrete value and solvable as a classification problem, so it can be estimated with high accuracy; in addition, the three-dimensional pose estimation problem can be simplified into a one-dimensional problem of estimating the yaw angle on the grounding face. We developed a new neural network (PYNet) to estimate the poseclass and yaw angle, and compared it with conventional methods on the ratio of unknown simple-shaped object poses estimated with an angle error of 30° or less. The ratio for PYNet (68.9%) is 18.1 percentage points higher than that of the conventional methods (50.8%). Additionally, a PYNet-implemented robot successfully grasped convenience store goods.
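The decomposition into a discrete poseclass plus a yaw angle can be illustrated by reconstructing a full rotation from the two quantities. The poseclass names and their canonical base rotations below are hypothetical stand-ins, not PYNet's actual classes:

```python
import numpy as np

# Hypothetical canonical rotations, one per poseclass (which face of the
# object rests on the ground); illustrative values only.
BASE_ROTATIONS = {
    "bottom": np.eye(3),
    "side": np.array([[1., 0., 0.],
                      [0., 0., -1.],
                      [0., 1., 0.]]),  # lying on its side: 90 deg about x
}

def pose_from_class_and_yaw(poseclass, yaw_deg):
    """Recover a full 3D rotation from a discrete poseclass plus a yaw angle."""
    t = np.radians(yaw_deg)
    yaw = np.array([[np.cos(t), -np.sin(t), 0.],
                    [np.sin(t),  np.cos(t), 0.],
                    [0., 0., 1.]])
    return yaw @ BASE_ROTATIONS[poseclass]
```

This shows why the formulation simplifies the problem: the network only has to classify the grounding face and regress one continuous angle, and the full rotation matrix follows deterministically.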

Dissertations / Theses on the topic "Estimation de poses humaines":

1

Benzine, Abdallah. "Estimation de poses 3D multi-personnes à partir d'images RGB." Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUS103.

Abstract:
3D human pose estimation from monocular RGB images is the process of localizing human joints from an image or a sequence of images. It provides rich geometric and motion information about the human body. Most existing 3D pose estimation approaches assume that the image contains only one person, fully visible. Such a scenario is not realistic: in real-life conditions, several people interact and tend to occlude each other, which makes 3D pose estimation even more ambiguous and complex. The work carried out during this thesis focused on single-shot estimation of multi-person 3D poses from monocular RGB images. We first proposed a bottom-up approach that predicts the 3D coordinates of all the joints present in the image and then uses a grouping process to predict full 3D skeletons. To be robust in cases where the people in the image are numerous and far from the camera, we developed PandaNet, which is based on an anchor representation and integrates a process for ignoring anchors ambiguously associated with ground truths and an automatic weighting of the losses. Finally, PandaNet is completed with an Absolute Distance Estimation Module (ADEM). The combination of these two models, called Absolute PandaNet, predicts absolute human 3D poses expressed in the camera frame.
2

Toony, Razieh. "Calibration-free Pedestrian Partial Pose Estimation Using a High-mounted Kinect." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26420.

Abstract:
The application of human behavior analysis has undergone rapid development during the last decades, from entertainment systems to professional ones, such as Human-Robot Interaction (HRI), Advanced Driver Assistance Systems (ADAS), and Pedestrian Protection Systems (PPS). This thesis addresses the problem of recognizing pedestrians and estimating their body orientation in 3D, based on the fact that estimating a person's orientation is beneficial in determining their behavior. A new method is proposed for detection and orientation estimation, in which the results of a pedestrian detection module and an orientation estimation module are integrated sequentially. For pedestrian detection, a cascade classifier is designed to draw a bounding box around the detected pedestrian. The extracted regions are then given to a discrete orientation classifier to estimate the orientation of the pedestrian's body. This classification is based on a coarse, rasterized depth image simulating a top-view virtual camera, and uses a support vector machine classifier trained to distinguish 10 orientations (30-degree increments). To test the performance of the approach, a new benchmark database containing 764 point-cloud sets for body-orientation classification was captured: a Kinect recorded the point clouds of 30 participants, and a marker-based motion capture system (Vicon) provided the ground truth on their orientation. The system detects pedestrians with an accuracy of 95.29% and estimates body orientation (within 30-degree intervals) with an accuracy of 88.88%. We hope these results can serve as a foundation for future research.
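The discrete-orientation formulation (a continuous body angle mapped to classes at a fixed angular increment, 30 degrees in this thesis) can be sketched generically as follows; note the sketch assumes the classes evenly cover the full circle, a simplification of the thesis's 10-class setup:

```python
def orientation_class(angle_deg, increment=30.0):
    """Map a continuous body orientation to the nearest discrete class index."""
    n_classes = int(round(360.0 / increment))
    return int(round((angle_deg % 360.0) / increment)) % n_classes

def class_center(index, increment=30.0):
    """Representative angle (degrees) of a discrete orientation class."""
    return index * increment
```

Turning the regression target into a small set of classes is what lets a standard classifier such as an SVM handle orientation, at the cost of quantizing the estimate to the increment width.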
3

Carbonera, Luvizon Diogo. "Apprentissage automatique pour la reconnaissance d'action humaine et l'estimation de pose à partir de l'information 3D." Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1015.

Abstract:
3D human action recognition is a challenging task due to the complexity of human movements and the variety of poses and actions performed by distinct subjects. Recent technologies based on depth sensors can provide 3D human skeletons at low computational cost, which is useful information for action recognition. However, such low-cost sensors are restricted to controlled environments and frequently output noisy data. Meanwhile, convolutional neural networks (CNNs) have shown significant improvements on both action recognition and 3D human pose estimation from RGB images. Despite being closely related problems, the two tasks are frequently handled separately in the literature. In this work, we analyze the problem of 3D human action recognition in two scenarios: first, we explore spatial and temporal features from human skeletons, which are aggregated by a shallow metric learning approach. In the second scenario, we show not only that precise 3D poses are beneficial to action recognition, but also that both tasks can be efficiently performed by a single deep neural network while still achieving state-of-the-art results. Additionally, we demonstrate that end-to-end optimization using poses as an intermediate constraint leads to significantly higher accuracy on the action task than separate learning. Finally, we propose a new scalable architecture for simultaneous real-time 3D pose estimation and action recognition, which offers a range of performance-vs-speed trade-offs with a single multimodal and multitask training procedure.
4

Dogan, Emre. "Human pose estimation and action recognition by multi-robot systems." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI060/document.

Abstract:
Estimating human pose and recognizing human activities are important steps in many applications, such as human-computer interfaces (HCI), health care, smart conferencing, robotics, and security surveillance. Despite ongoing effort in the domain, these tasks remain unsolved, particularly in unconstrained and non-cooperative environments. Pose estimation and activity recognition face many challenges under these conditions, such as occlusion or self-occlusion, variations in clothing, background clutter, the deformable nature of the human body, and the diversity of human behaviors during activities. Using depth imagery has been a popular solution to appearance- and background-related challenges, but it has a restricted application area due to hardware limitations and fails to handle the remaining problems. Specifically, we considered action recognition scenarios where the position of the recording device is not fixed and which consequently require a method that is not affected by the viewpoint. As a second problem, we tackled human pose estimation in settings where multiple visual sensors are available and allowed to collaborate. In this thesis, we addressed these two related problems separately. In the first part, we focused on indoor action recognition from videos, considering complex activities. To this end, we explored several methodologies and eventually introduced a viewpoint-independent 3D spatio-temporal representation of a video sequence. More specifically, we captured the movement of the person over time using a depth sensor and encoded it in 3D to represent the performed action with a single structure. A 3D feature descriptor was then employed to build a codebook and classify the actions with the bag-of-words approach. In the second part, we concentrated on articulated pose estimation, which is often an intermediate step for activity recognition. Our motivation was to incorporate information from multiple sources and views and fuse them early in the pipeline to overcome the problem of self-occlusion and eventually obtain robust estimations. To achieve this, we proposed a multi-view flexible mixture-of-parts model inspired by the classical pictorial structures methodology. In addition to the single-view appearance of the human body and its kinematic priors, we demonstrated that geometrical constraints and appearance-consistency parameters are effective for boosting the coherence between viewpoints in a multi-view setting. Both proposed methods were evaluated on public benchmarks and showed that the use of view-independent representations and the integration of information from multiple viewpoints improve the performance of action recognition and pose estimation, respectively.
5

Fathollahi, Ghezelghieh Mona. "Estimation of Human Poses Categories and Physical Object Properties from Motion Trajectories." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6835.

Abstract:
Despite the impressive advancements in people detection and tracking, safety is still a key barrier to the deployment of autonomous vehicles in urban environments [1]. For example, with non-autonomous technology, there is an implicit communication between people crossing the street and the driver to make sure their intent has been communicated. It is therefore crucial for an autonomous car to quickly infer the future intent of a pedestrian. We believe that human body orientation with respect to the camera can help the intelligent unit of the car anticipate the future movement of pedestrians. To further improve the safety of pedestrians, it is important to recognize whether they are distracted, carrying a baby, or pushing a shopping cart. Therefore, estimating the fine-grained 3D pose, i.e. the (x, y, z)-coordinates of the body joints, provides additional information for the decision-making units of driverless cars. In this dissertation, we propose a deep learning-based solution to classify categorized body orientation in still images. We also propose an efficient framework, based on our body-orientation classification scheme, to estimate human 3D pose in monocular RGB images. Furthermore, we utilize the dynamics of human motion to infer body orientation in image sequences, employing a recurrent neural network to estimate continuous body orientation from the trajectories of body joints in the image plane. The proposed body orientation and 3D pose estimation frameworks are tested on the largest 3D pose estimation benchmark, Human3.6M (both still images and video), and we demonstrate the efficacy of our approach by benchmarking it against state-of-the-art approaches. Another critical feature of a self-driving car is obstacle avoidance. In current prototypes, the car either stops or changes lane even if this causes other traffic disruptions. However, there are situations when it is preferable to collide with an object, for example a foam box, rather than take an action that could result in a much more serious accident than the collision itself. In this dissertation, for the first time, we present a novel method to discriminate between the physical properties of such objects, such as bounciness and elasticity, based on their motion characteristics. The proposed algorithm is tested on synthetic data and, as a proof of concept, its effectiveness is demonstrated on a limited set of real-world data.
6

Tokunaga, Daniel Makoto. "Local pose estimation of feature points for object based augmented reality." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-22092016-110832/.

Abstract:
The usage of real objects as links between real and virtual information is one key aspect of augmented reality. A central issue in achieving this link is estimating the visuospatial information of the observed object, in other words, estimating the object pose. Different objects can behave differently when used for interaction: this encompasses not only changes in position, but also folding or deformations. Traditional research in the area solves these pose estimation problems with different approaches, depending on the type of object. Additionally, some research is based only on the positional information of observed feature points, simplifying the object information. In this work, we explore the pose estimation of different objects by gathering more information from the observed feature points and obtaining the local poses of such points, which are not explored in other research. We apply this local pose estimation idea in two different capturing scenarios, arriving at two novel approaches to pose estimation: one based on RGB-D cameras, and another based on RGB and machine learning methods. In the RGB-D approach, we use the feature point orientation and the nearby surface to obtain its normal, and then find the local 6 degrees-of-freedom (DoF) pose. This approach gives us not only the rigid object pose, but also the approximate pose of deformed objects. Our RGB approach, on the other hand, explores machine learning with local appearance changes. Unlike other RGB-based work, we replace complex non-linear system solvers with a fast and robust method, obtaining the local rotation of the observed feature points as well as the full 6 DoF rigid object pose with dramatically lower real-time calculation demands. Both approaches show that gathering local poses can provide information for the pose estimation of different types of objects.
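The RGB-D idea described in this abstract, recovering a feature point's normal from the surrounding depth surface and combining it with the 2D feature orientation to form a local 6-DoF frame, can be illustrated with a short sketch. This is a generic NumPy illustration under stated assumptions (least-squares plane fit for the normal, toy flat patch); the function name and data are hypothetical, not the thesis's actual implementation.

```python
import numpy as np

def local_pose_from_patch(points, keypoint, orientation_2d):
    """Sketch: local 6-DoF pose of a feature point from its 3D
    surface neighbourhood (hypothetical helper, not the thesis's code).

    points:         (N, 3) 3D points around the keypoint (from depth)
    keypoint:       (3,) 3D position of the feature point
    orientation_2d: in-plane feature orientation in radians
    """
    # Fit a plane to the neighbourhood: the normal is the singular
    # vector with the smallest singular value.
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    if normal[2] > 0:          # orient the normal toward the camera (-z)
        normal = -normal

    # Turn the 2D feature orientation into a tangent direction,
    # projected onto the fitted plane.
    d = np.array([np.cos(orientation_2d), np.sin(orientation_2d), 0.0])
    tangent = d - normal * (d @ normal)
    tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(normal, tangent)

    # Local frame: columns are x (tangent), y (bitangent), z (normal);
    # together with the 3D keypoint this is a local 6-DoF pose.
    R = np.stack([tangent, bitangent, normal], axis=1)
    return R, keypoint

# Toy usage: a flat 3x3 patch at depth z = 1 facing the camera.
pts = np.array([[x, y, 1.0] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
R, t = local_pose_from_patch(pts, np.array([0.0, 0.0, 1.0]), 0.0)
```

Because the frame is built as a right-handed orthonormal basis, `R` is always a proper rotation; on deformed objects, each feature point simply gets its own local frame.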
7

Liebelt, Jörg. "Détection de classes d'objets et estimation de leurs poses à partir de modèles 3D synthétiques." Grenoble, 2010. https://theses.hal.science/tel-00553343.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
This dissertation aims at extending object class detection and pose estimation tasks on single 2D images by a 3D model-based approach. The work describes learning, detection and estimation steps adapted to the use of synthetically rendered data with known 3D geometry. Most existing approaches recognize object classes for a particular viewpoint or combine classifiers for a few discrete views. By using existing CAD models and rendering techniques from the domain of computer graphics, which are parameterized to reproduce some variations commonly found in real images, we propose instead to build 3D representations of object classes which allow handling viewpoint changes and intra-class variability. These 3D representations are derived in two different ways: either as an unsupervised filtering process of pose and class discriminant local features on purely synthetic training data, or as a part model which discriminatively learns the object class appearance from an annotated database of real images and builds a generative representation of 3D geometry from a database of synthetic CAD models. During detection, we introduce a 3D voting scheme which reinforces geometric coherence by means of a robust pose estimation, and we propose an alternative probabilistic pose estimation method which evaluates the likelihood of groups of 2D part detections with respect to a full 3D geometry. Both detection methods yield approximate 3D bounding boxes in addition to 2D localizations; these initializations are subsequently improved by a registration scheme aligning arbitrary 3D models to optical and Synthetic Aperture Radar (SAR) images in order to disambiguate and prune 2D detections and to handle occlusions. The work is evaluated on several standard benchmark datasets and is shown to achieve state-of-the-art performance for 2D detection in addition to providing 3D pose estimations from single images.
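The 3D voting scheme mentioned in the abstract can be illustrated with a minimal Hough-style sketch: each matched local feature casts a vote for the object centre through a model-predicted offset, and the densest cluster of votes wins, which is what makes the estimate robust to outlier matches. Function name, grid resolution and toy data below are illustrative assumptions, not the thesis's code.

```python
import numpy as np

def vote_object_center(feature_points, predicted_offsets, cell=0.1):
    """Hough-style 3D voting for an object centre (illustrative sketch).

    Each matched feature votes at (feature position + predicted offset
    to the centre); votes are binned into a coarse 3D grid, and the
    densest cell wins, so outlier matches are simply outvoted.
    """
    votes = feature_points + predicted_offsets        # (N, 3) candidate centres
    bins = np.floor(votes / cell).astype(int)         # quantise into grid cells
    keys, inverse, counts = np.unique(
        bins, axis=0, return_inverse=True, return_counts=True)
    best = np.argmax(counts)
    # Refine: average the raw votes that fell into the winning cell.
    return votes[inverse == best].mean(axis=0)

# Toy usage: 8 inlier matches agree on centre (1, 2, 3); 2 outliers do not.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 3))
offsets = np.array([1.0, 2.0, 3.0]) - feats
offsets[8:] += 5.0                                    # corrupt two matches
center = vote_object_center(feats, offsets)
```

The coarse-to-fine pattern (bin, pick the mode, then refine inside the winning bin) is the same geometric-coherence idea the abstract describes, reduced to a few lines.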
8

Blanc, Beyne Thibault. "Estimation de posture 3D à partir de données imprécises et incomplètes : application à l'analyse d'activité d'opérateurs humains dans un centre de tri." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0106.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
In a context of study of stress and ergonomics at work for the prevention of musculoskeletal disorders, the company Ebhys wants to develop a tool for analyzing the activity of human operators in a waste sorting center, by measuring ergonomic indicators. To cope with the uncontrolled environment of the sorting center, these indicators are measured from depth images. An ergonomic study allows us to define the indicators to be measured. These indicators are zones of movement of the operator’s hands and zones of angulation of certain joints of the upper body. They are therefore indicators that can be obtained from an analysis of the operator’s 3D pose. The software for calculating the indicators is thus composed of three steps: a first part segments the operator from the rest of the scene to ease the 3D pose estimation, a second part estimates the operator’s 3D pose, and the third part uses the operator’s 3D pose to compute the ergonomic indicators. First of all, we propose an algorithm that extracts the operator from the rest of the depth image. To do this, we use a first automatic segmentation based on static background removal and selection of a moving element given its position and size. This first segmentation allows us to train a neural network that improves the results. This neural network is trained using the segmentations obtained from the first automatic segmentation, from which the best-quality samples are automatically selected during training. Next, we build a neural network model to estimate the operator’s 3D pose. We propose a study that allows us to find a light and optimal model for 3D pose estimation on synthetic depth images, which we generate numerically. However, while this network gives outstanding performance on synthetic depth images, it is not directly applicable to the real depth images that we acquired in an industrial context.
To overcome this issue, we finally build a module that transforms the synthetic depth images into more realistic depth images. This image-to-image translation model modifies the style of the depth image without changing its content, keeping the 3D pose of the operator from the synthetic source image unchanged in the translated realistic depth frames. These more realistic depth images are then used to re-train the 3D pose estimation neural network, to finally obtain a convincing 3D pose estimation on depth images acquired in real conditions, allowing the ergonomic indicators to be computed.
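The last step of the pipeline, deriving ergonomic indicators such as joint angulations from the estimated 3D pose, reduces to simple vector geometry on the keypoints. A minimal sketch follows; the keypoint layout and the threshold rule are illustrative assumptions, not the thesis's actual indicators.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D keypoints a-b-c."""
    u, v = np.asarray(a, float) - b, np.asarray(c, float) - b
    cosang = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny numerical overshoots outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Toy usage: shoulder-elbow-wrist keypoints forming a right angle.
shoulder, elbow, wrist = [0.0, 0.0, 0.0], [0.0, -0.3, 0.0], [0.3, -0.3, 0.0]
flexion = joint_angle(shoulder, elbow, wrist)

# A hypothetical ergonomic rule: flag strong elbow extension or flexion.
at_risk = not (60.0 <= flexion <= 100.0)
```

In practice such angles would be aggregated over time, together with the hand-movement zones, to build the ergonomic indicators the abstract describes.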
9

Gourjon, Géraud. "L'estimation du mélange génétique dans les populations humaines." Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX20686/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Different methods have been developed to estimate the genetic admixture contributions of parental populations to a hybrid one. Most of these methods are implemented in different software programs that provide estimates of variable accuracy. A full comparison between the ADMIX (weighted least squares), ADMIX95 (gene identity), Admix 2.0 (coalescent-based), Mistura (maximum likelihood), LEA (likelihood-based) and LEADMIX (maximum likelihood) software programs has been carried out, both at the “intra” level (a test of each program) and the “inter” level (comparisons between them). We tested all of these programs on a real human population data set, using four kinds of markers: autosomal (blood groups and KIR genes) and uniparental (mtDNA and the Y chromosome). We demonstrate that the accuracy of the results depends not only on the method itself but also on the choice of loci and of parental populations. We consider that admixture contribution rates obtained from human population data should not be considered as accurate values but rather as indicative results, and we suggest using an “Admixture Indicative Interval”, underlining general migratory trends, as a measurement of admixture.
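The least-squares family of estimators compared here (e.g. ADMIX) rests on a linear model over allele frequencies: for each locus, the hybrid frequency is modelled as p_h ≈ m·p1 + (1−m)·p2, where m is the admixture rate of parental population 1. A minimal unweighted sketch follows; ADMIX itself additionally weights loci by sampling variance, and the function name and toy frequencies are illustrative.

```python
import numpy as np

def admixture_lsq(p_hybrid, p_parent1, p_parent2):
    """Unweighted least-squares estimate of the contribution m of
    parental population 1 under p_h ≈ m*p1 + (1-m)*p2 (sketch only)."""
    ph, p1, p2 = map(np.asarray, (p_hybrid, p_parent1, p_parent2))
    # Rearranged model: (ph - p2) = m * (p1 - p2), a one-parameter
    # regression through the origin across loci.
    x, y = p1 - p2, ph - p2
    m = (x @ y) / (x @ x)
    return float(np.clip(m, 0.0, 1.0))   # an admixture rate lies in [0, 1]

# Toy usage: hybrid frequencies built as a 70/30 mix of two parental pools.
p1 = np.array([0.8, 0.1, 0.5, 0.3])
p2 = np.array([0.2, 0.6, 0.4, 0.9])
ph = 0.7 * p1 + 0.3 * p2
m = admixture_lsq(ph, p1, p2)
```

With noisy real frequencies the point estimate becomes unstable, which is exactly why the abstract recommends reporting an interval rather than a single rate.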
10

Zvénigorosky-Durel, Vincent. "Etude des parentés génétiques dans les populations humaines anciennes : estimation de la fiabilité et de l'efficacité des méthodes d'analyse." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30260/document.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The study of genetic kinship allows anthropology to identify the place of an individual within the different structures in which they evolve: a biological family, a social group, a population. The application of classical probabilistic methods (established to solve cases in legal medicine, such as Likelihood Ratios, or LR) to STR data from archaeological material has permitted the discovery of numerous parental links, which together constitute genealogies both simple and complex. Our continued practice of these methods has, however, led us to identify limits to the interpretation of STR data, especially in cases of complex, distant or inbred kinship, or in isolated, poorly known or extinct populations. The first part of the present work quantifies the reliability and efficacy of the LR method in four situations: a modern population with high allelic diversity, a modern population with low allelic diversity, a large ancient population and a small ancient population. Recent publications use the more numerous markers analysed by Next Generation Sequencing (NGS) to implement new strategies for the detection of kinship, in particular based on the analysis of chromosome segments shared through common ancestry (IBD, "Identity by Descent", segments). These methods have permitted more reliable estimation of kinship probabilities in ancient material. They are nevertheless ill-suited to certain situations that are characteristic of ancient DNA studies: they were not conceived to work on a single isolated pair of individuals and they depend, like the classical methods, on the estimation of allelic diversity in the population. We therefore propose a quantification of the reliability and efficiency of the IBD segment method using NGS data, focusing on the quality of results in situations with populations of different sizes and with more or less heterogeneous sampling.[...]
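The Likelihood Ratio method evaluated in this work compares, at each STR locus, the probability of a pair of genotypes under a kinship hypothesis against their probability under unrelatedness, then multiplies the per-locus ratios across independent loci. Below is a textbook sketch of the parent-child duo case under Hardy-Weinberg equilibrium; it is a generic illustration (locus name and frequencies are made up), not the thesis's implementation.

```python
def genotype_prob(geno, freq):
    """P(unordered genotype) under Hardy-Weinberg equilibrium."""
    a, b = geno
    return freq[a] ** 2 if a == b else 2 * freq[a] * freq[b]

def child_given_parent(child, parent, freq):
    """P(child genotype | one parent's genotype), the other parent
    being a random member of the population (HWE)."""
    target = tuple(sorted(child))
    prob = 0.0
    for transmitted in parent:            # each parental allele passed w.p. 1/2
        for other, f in freq.items():     # allele from the unknown parent
            if tuple(sorted((transmitted, other))) == target:
                prob += 0.5 * f
    return prob

def lr_parent_child(parent, child, freqs_per_locus):
    """LR for parent-child vs unrelated over independent STR loci.
    parent/child: dicts locus -> (allele, allele);
    freqs_per_locus: dict locus -> {allele: frequency}."""
    lr = 1.0
    for locus, freq in freqs_per_locus.items():
        lr *= (child_given_parent(child[locus], parent[locus], freq)
               / genotype_prob(child[locus], freq))
    return lr

# Toy usage: one locus, both individuals homozygous for allele 'a'
# with frequency 0.1; the duo formula gives LR = 1 / p_a = 10.
freqs = {"locusA": {"a": 0.1, "b": 0.9}}
lr = lr_parent_child({"locusA": ("a", "a")}, {"locusA": ("a", "a")}, freqs)
```

The sketch also makes the limits discussed in the abstract visible: the result is driven entirely by the population allele frequencies in `freq`, which are poorly known for small or extinct populations.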

Books on the topic "Estimation de poses humaines":

1

Trottier, Guy. La main-d'oeuvre en physiothérapie et en techniques de réadaptation physique au Québec: État de situation et estimation de l'offre et de la demande de ressources humaines jusqu'en 2004. [Québec]: Gouvernement du Québec, Ministère de la santé et des services sociaux, Direction générale de la planification et de l'évaluation, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ontario. Esquisse de cours 12e année: Sciences de l'activité physique pse4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ontario. Esquisse de cours 12e année: Technologie de l'information en affaires btx4e cours préemploi. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ontario. Esquisse de cours 12e année: Études informatiques ics4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ontario. Esquisse de cours 12e année: Mathématiques de la technologie au collège mct4c cours précollégial. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ontario. Esquisse de cours 12e année: Sciences snc4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ontario. Esquisse de cours 12e année: English eae4e cours préemploi. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ontario. Esquisse de cours 12e année: Le Canada et le monde: une analyse géographique cgw4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ontario. Esquisse de cours 12e année: Environnement et gestion des ressources cgr4e cours préemploi. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ontario. Esquisse de cours 12e année: Histoire de l'Occident et du monde chy4c cours précollégial. Vanier, Ont: CFORP, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Estimation de poses humaines":

1

Sciortino, Giuseppa, Giovanni Maria Farinella, Sebastiano Battiato, Marco Leo, and Cosimo Distante. "On the Estimation of Children’s Poses." In Image Analysis and Processing - ICIAP 2017, 410–21. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68548-9_38.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Salinero Santamaría, Sergio, Antía Carmona Balea, Mario Rubio González, Javier Caballero Sandoval, Germán Francés Tostado, Héctor Sánchez San Blas, and Gabriel Villarrubia González. "Poses Estimation Technology for Physical Activity Monitoring." In Advances in Intelligent Systems and Computing, 352–60. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38344-1_35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Baroliya, Jitendra Kumar, and Amit Doegar. "Human Body Poses Detection and Estimation Using Convolutional Neural Network." In Proceedings of Data Analytics and Management, 303–15. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-6544-1_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ma, Bingpeng, Fei Yang, Wen Gao, and Baochang Zhang. "The Application of Extended Geodesic Distance in Head Poses Estimation." In Advances in Biometrics, 192–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11608288_26.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tobisch, Franziska, Karla Weigelt, Pascal Philipp, and Florian Matthes. "Investigating Effort Estimation in a Large-Scale Agile ERP Transformation Program." In Lecture Notes in Business Information Processing, 70–86. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-61154-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Adaptability is vital in today’s rapidly changing business environment, especially within IT. Agile methodologies have emerged to meet this demand and have thereby gained widespread adoption. While successful in smaller, co-located teams and low-criticality projects, applying agile methods in broader contexts poses challenges. Nevertheless, many organizations have started implementing agile methodologies in various areas, including large-scale Enterprise Resource Planning (ERP) projects. In contrast to traditional development, ERP projects involve deploying extensive integrated systems, are substantial in scale, and entail high risks and costs. Accurate predictions, like effort estimations, are crucial to meet customer satisfaction and deliver within plan and budget. However, estimating effort in an agile environment poses its own set of challenges. For instance, coordination efforts and dependencies among teams must be considered. While effort estimation is well-explored in classical software development and small-scale agile contexts, limited research exists in large-scale agile settings, particularly in projects rolling out and customizing standard ERP solutions. To address this gap, we conducted a case study on effort estimation in a large agile ERP transformation program, describing the estimation process, highlighting challenges, and proposing and evaluating mitigations.
6

Guerrero, Pablo, and Javier Ruiz-del-Solar. "Improving Robot Self-localization Using Landmarks’ Poses Tracking and Odometry Error Estimation." In RoboCup 2007: Robot Soccer World Cup XI, 148–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68847-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Xu, Yuquan, Seiichi Mita, and Silong Peng. "A Fast Blind Spatially-Varying Motion Deblurring Algorithm with Camera Poses Estimation." In Computer Vision – ACCV 2016, 157–72. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54187-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Xinyu, Jizhou Gao, Sen-ching S. Cheung, and Ruigang Yang. "Manifold Estimation in View-Based Feature Space for Face Synthesis across Poses." In Computer Vision – ACCV 2009, 37–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12307-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lee, Seunghee, Jungmo Koo, Hyungjin Kim, Kwangyik Jung, and Hyun Myung. "A Robust Estimation of 2D Human Upper-Body Poses Using Fully Convolutional Network." In Robot Intelligence Technology and Applications 5, 549–58. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78452-6_44.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Steege, Frank-Florian, Christian Martin, and Horst-Michael Groß. "Estimation of Pointing Poses on Monocular Images with Neural Techniques - An Experimental Comparison." In Lecture Notes in Computer Science, 593–602. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74695-9_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Estimation de poses humaines":

1

Krishnan, Hema, Anagha Jayaraj, Anagha S, Christy Thomas, and Grace Mol Joy. "Pose Estimation of Yoga Poses using ML Techniques." In 2022 IEEE 19th India Council International Conference (INDICON). IEEE, 2022. http://dx.doi.org/10.1109/indicon56171.2022.10040162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Atrevi, Fabrice Dieudonné, Damien Vivet, Florent Duculty, and Bruno Emile. "3D Human Poses Estimation from a Single 2D Silhouette." In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005711503610369.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Lu, Chen Hu, Yinqi Li, Jiran Tao, Jianru Xue, and Kuizhi Mei. "Deep Conditional Variational Estimation for Depth-Based Hand Poses." In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019). IEEE, 2019. http://dx.doi.org/10.1109/fg.2019.8756559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Wang, Chunyu, Yizhou Wang, Zhouchen Lin, Alan L. Yuille, and Wen Gao. "Robust Estimation of 3D Human Poses from a Single Image." In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Song, Yafei, Xiaowu Chen, Xiaogang Wang, Yu Zhang, and Jia Li. "Fast Estimation of Relative Poses for 6-DOF Image Localization." In 2015 IEEE International Conference on Multimedia Big Data (BigMM). IEEE, 2015. http://dx.doi.org/10.1109/bigmm.2015.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Shaowei, Hanwen Jiang, Jiarui Xu, Sifei Liu, and Xiaolong Wang. "Semi-Supervised 3D Hand-Object Poses Estimation with Interactions in Time." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01445.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hauck, Johannes, Adam Kalisz, and Jorn Thielecke. "Continuous-Time Trajectory Estimation From Noisy Camera Poses Using Cubic Bézier Curves." In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). IEEE, 2021. http://dx.doi.org/10.1109/case49439.2021.9551621.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Wan-Chia, Cheng-Liang Shih, Irin Tri Anggraini, Yanqi Xiao, Nobuo Funabiki, and Chih-Peng Fan. "OpenPose Based Yoga Poses Difficulty Estimation for Dynamic and Static Yoga Exercises." In 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2023. http://dx.doi.org/10.1109/apsipaasc58517.2023.10317354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Amaya, Kotaro, and Mariko Isogawa. "Adaptive and Robust Mmwave-Based 3D Human Mesh Estimation for Diverse Poses." In 2023 IEEE International Conference on Image Processing (ICIP). IEEE, 2023. http://dx.doi.org/10.1109/icip49359.2023.10222059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ishii, Yohei, Hitoshi Hongo, Yoshinori Niwa, and Kazuhiko Yamamoto. "Comparison of different methods for gender estimation from face image of various poses." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515128.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Estimation de poses humaines":

1

Aihara, Shimpei, Takara Saki, Tyusei Shibata, Toshiaki Matsubara, Ryosuke Mizukami, Yudai Yoshida, and Akira Shionoya. Deep Learning Model for Integrated Estimation of Wheelchair and Human Poses Using Camera Images. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
