Selected scientific literature on the topic "Estimation de poses humaines"
Cite a source in APA, MLA, Chicago, Harvard, and many other styles
Consult the list of current articles, books, theses, conference proceedings, and other scientific sources on the topic "Estimation de poses humaines".
Next to every source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication as a .pdf and read the abstract (summary) of the work online, if it is present in the metadata.
Journal articles on the topic "Estimation de poses humaines":
R, Jayasri. "HUMAN POSE ESTIMATION". International Scientific Journal of Engineering and Management 03, no. 03 (March 23, 2024): 1–9. http://dx.doi.org/10.55041/isjem01426.
Lv Yao-wen, 吕耀文, 王建立 WANG Jian-li, 王昊京 WANG Hao-jing, 刘维 LIU Wei, 吴量 WU Liang and 曹景太 CAO Jing-tai. "Estimation of camera poses by parabolic motion". Optics and Precision Engineering 22, no. 4 (2014): 1078–85. http://dx.doi.org/10.3788/ope.20142204.1078.
Shalimova, E. A., E. V. Shalnov and A. S. Konushin. "Camera parameters estimation from pose detections". Computer Optics 44, no. 3 (June 2020): 385–92. http://dx.doi.org/10.18287/2412-6179-co-600.
Mahajan, Priyanshu, Shambhavi Gupta and Divya Kheraj Bhanushali. "Body Pose Estimation using Deep Learning". International Journal for Research in Applied Science and Engineering Technology 11, no. 3 (March 31, 2023): 1419–24. http://dx.doi.org/10.22214/ijraset.2023.49688.
Aju, Abin, Christa Mathew and O. S. Gnana Prakasi. "PoseNet based Model for Estimation of Karate Poses". Journal of Innovative Image Processing 4, no. 1 (May 16, 2022): 16–25. http://dx.doi.org/10.36548/jiip.2022.1.002.
Astuti, Ani Dwi, Tita Karlita and Rengga Asmara. "Yoga Pose Rating using Pose Estimation and Cosine Similarity". Jurnal Ilmu Komputer dan Informasi 16, no. 2 (July 3, 2023): 115–24. http://dx.doi.org/10.21609/jiki.v16i2.1151.
Jagtap, Aniket. "Yoga Guide: Yoga Pose Estimation Using Machine Learning". International Journal for Research in Applied Science and Engineering Technology 12, no. 2 (February 29, 2024): 296–97. http://dx.doi.org/10.22214/ijraset.2024.58272.
Sun, Jun, Mantao Wang, Xin Zhao and Dejun Zhang. "Multi-View Pose Generator Based on Deep Learning for Monocular 3D Human Pose Estimation". Symmetry 12, no. 7 (July 4, 2020): 1116. http://dx.doi.org/10.3390/sym12071116.
Su, Jianhua, Zhi-Yong Liu, Hong Qiao and Chuankai Liu. "Pose-estimation and reorientation of pistons for robotic bin-picking". Industrial Robot: An International Journal 43, no. 1 (January 18, 2016): 22–32. http://dx.doi.org/10.1108/ir-06-2015-0129.
Fujita, Kohei, and Tsuyoshi Tasaki. "PYNet: Poseclass and Yaw Angle Output Network for Object Pose Estimation". Journal of Robotics and Mechatronics 35, no. 1 (February 20, 2023): 8–17. http://dx.doi.org/10.20965/jrm.2023.p0008.
Theses on the topic "Estimation de poses humaines":
Benzine, Abdallah. "Estimation de poses 3D multi-personnes à partir d'images RGB". Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUS103.
3D human pose estimation from monocular RGB images is the process of locating human joints in an image or a sequence of images. It provides rich geometric and motion information about the human body. Most existing 3D pose estimation approaches assume that the image contains only one person, fully visible. Such a scenario is not realistic: in real-life conditions several people interact, and they tend to occlude each other, which makes 3D pose estimation even more ambiguous and complex. The work carried out during this thesis focused on single-shot estimation of multi-person 3D poses from monocular RGB images. We first proposed a bottom-up approach for predicting multi-person 3D poses that predicts the 3D coordinates of all the joints present in the image and then uses a grouping process to assemble full 3D skeletons. In order to be robust in cases where the people in the image are numerous and far away from the camera, we developed PandaNet, which is based on an anchor representation and integrates a process for ignoring anchors ambiguously associated with ground truths, as well as an automatic weighting of losses. Finally, PandaNet is completed with an Absolute Distance Estimation Module (ADEM). The combination of these two models, called Absolute PandaNet, allows the prediction of absolute 3D human poses expressed in the camera frame.
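The final step described in the abstract, expressing poses in the camera frame, amounts to pinhole back-projection of a detected root joint at an estimated distance, followed by shifting the root-relative skeleton onto it. The sketch below is illustrative only; the function name and the toy intrinsics are assumptions, not the actual ADEM implementation:

```python
def to_absolute_pose(rel_joints, root_uv, root_depth, fx, fy, cx, cy):
    """Back-project a detected root joint (u, v) with an estimated depth Z
    into the camera frame, then shift a root-relative 3D skeleton onto it.
    rel_joints: list of (x, y, z) offsets from the root joint, in metres."""
    u, v = root_uv
    # Pinhole back-projection of the root joint into camera coordinates.
    root = ((u - cx) * root_depth / fx,
            (v - cy) * root_depth / fy,
            root_depth)
    return [(root[0] + x, root[1] + y, root[2] + z) for x, y, z in rel_joints]

# Toy example: root joint at the image centre, 3 m from the camera.
abs_pose = to_absolute_pose([(0.0, 0.0, 0.0), (0.0, -0.5, 0.5)],
                            root_uv=(320, 240), root_depth=3.0,
                            fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Since the root projects at the principal point, its camera-frame position is simply (0, 0, 3), and the second joint is shifted by its relative offset.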
Toony, Razieh. "Calibration-free Pedestrian Partial Pose Estimation Using a High-mounted Kinect". Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26420.
The application of human behavior analysis has undergone rapid development during the last decades, from entertainment systems to professional ones such as Human Robot Interaction (HRI), Advanced Driver Assistance Systems (ADAS) and Pedestrian Protection Systems (PPS). This thesis addresses the problem of recognizing pedestrians and estimating their body orientation in 3D, based on the fact that estimating a person's orientation is beneficial in determining their behavior. A new method is proposed for detection and orientation estimation, in which the results of a pedestrian detection module and an orientation estimation module are integrated sequentially. For pedestrian detection, a cascade classifier is designed to draw a bounding box around each detected pedestrian. The extracted regions are then given to a discrete orientation classifier to estimate the pedestrian's body orientation. This classification is based on a coarse, rasterized depth image simulating a top-view virtual camera, and uses a support vector machine classifier trained to distinguish 10 orientations (30-degree increments). To test the performance of our approach, a new benchmark database containing 764 point-cloud sets for body-orientation classification was captured: a Kinect recorded the point clouds of 30 participants while a marker-based motion capture system (Vicon) provided the ground truth on their orientation. Finally, we demonstrated the improvements brought by our system, which detected pedestrians with an accuracy of 95.29% and estimated body orientation with an accuracy of 88.88%. We hope it can provide a new foundation for future research.
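On the evaluation side, the discrete orientation scheme above reduces to quantizing a continuous yaw angle (e.g. the Vicon ground truth) into one of the 10 classes. A minimal sketch, with assumed function names; note that 10 classes at 30-degree increments cover only 300 degrees, and since the abstract does not specify how the remaining arc was handled, angles are simply wrapped here:

```python
def orientation_class(yaw_deg, n_classes=10, step=30.0):
    """Quantize a body yaw angle (degrees) into one of n_classes discrete
    orientation bins of `step` degrees each. Angles outside the covered
    arc are wrapped into [0, n_classes * step)."""
    span = n_classes * step          # 300 degrees for the 10-class setup
    wrapped = yaw_deg % span
    return int(wrapped // step)

def class_center(cls, step=30.0):
    """Representative angle (bin centre) of a discrete orientation class."""
    return cls * step + step / 2.0
```

For example, a ground-truth yaw of 45 degrees falls into class 1, whose bin centre is 45 degrees; comparing predicted and ground-truth classes over the benchmark yields the classification accuracy reported above.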
Carbonera, Luvizon Diogo. "Apprentissage automatique pour la reconnaissance d'action humaine et l'estimation de pose à partir de l'information 3D". Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1015.
3D human action recognition is a challenging task due to the complexity of human movements and to the variety of poses and actions performed by distinct subjects. Recent technologies based on depth sensors can provide 3D human skeletons at low computational cost, which is useful information for action recognition. However, such low-cost sensors are restricted to controlled environments and frequently output noisy data. Meanwhile, convolutional neural networks (CNN) have shown significant improvements on both action recognition and 3D human pose estimation from RGB images. Despite being closely related problems, the two tasks are frequently handled separately in the literature. In this work, we analyze the problem of 3D human action recognition in two scenarios: first, we explore spatial and temporal features from human skeletons, which are aggregated by a shallow metric-learning approach. In the second scenario, we show not only that precise 3D poses are beneficial to action recognition, but also that both tasks can be performed efficiently by a single deep neural network while still achieving state-of-the-art results. Additionally, we demonstrate that end-to-end optimization using poses as an intermediate constraint leads to significantly higher accuracy on the action task than separate learning. Finally, we propose a new scalable architecture for simultaneous real-time 3D pose estimation and action recognition, which offers a range of performance-speed trade-offs with a single multimodal and multitask training procedure.
Dogan, Emre. "Human pose estimation and action recognition by multi-robot systems". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI060/document.
Estimating human pose and recognizing human activities are important steps in many applications, such as human-computer interfaces (HCI), health care, smart conferencing, robotics and security surveillance. Despite the ongoing effort in the domain, these tasks remain unsolved, in unconstrained and non-cooperative environments in particular. Pose estimation and activity recognition face many challenges under these conditions, such as occlusion or self-occlusion, variations in clothing, background clutter, the deformable nature of the human body and the diversity of human behaviors during activities. Using depth imagery has been a popular solution to address appearance- and background-related challenges, but it has a restricted application area due to its hardware limitations and fails to handle the remaining problems. Specifically, we considered action recognition scenarios where the position of the recording device is not fixed, which consequently require a method that is not affected by the viewpoint. As a second problem, we tackled the human pose estimation task in settings where multiple visual sensors are available and allowed to collaborate. In this thesis, we addressed these two related problems separately. In the first part, we focused on indoor action recognition from videos, considering complex activities. To this end, we explored several methodologies and eventually introduced a viewpoint-independent 3D spatio-temporal representation for a video sequence. More specifically, we captured the movement of the person over time using a depth sensor and encoded it in 3D to represent the performed action with a single structure. A 3D feature descriptor was employed afterwards to build a codebook and classify the actions with the bag-of-words approach. In the second part, we concentrated on articulated pose estimation, which is often an intermediate step for activity recognition.
Our motivation was to incorporate information from multiple sources and views and fuse them early in the pipeline to overcome the problem of self-occlusion and eventually obtain robust estimations. To achieve this, we proposed a multi-view flexible mixture-of-parts model inspired by the classical pictorial structures methodology. In addition to the single-view appearance of the human body and its kinematic priors, we demonstrated that geometrical constraints and appearance-consistency parameters are effective for boosting the coherence between the viewpoints in a multi-view setting. Both methods we proposed were evaluated on public benchmarks, showing that the use of view-independent representations and the integration of information from multiple viewpoints improve the performance of action recognition and pose estimation tasks, respectively.
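The bag-of-words step used in the first part (quantizing 3D feature descriptors against a codebook and histogramming the word counts) can be sketched as follows; the tiny 2D codebook and the function names are illustrative assumptions, not the thesis's actual descriptors:

```python
def nearest_word(descriptor, codebook):
    """Index of the closest codebook word (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((d - c) ** 2
                                 for d, c in zip(descriptor, codebook[i])))

def bag_of_words(descriptors, codebook):
    """Normalised histogram of codebook-word occurrences for one sequence;
    this fixed-length vector is what a classifier consumes."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy codebook of two "visual words" and three descriptors from a sequence.
codebook = [(0.0, 0.0), (1.0, 1.0)]
h = bag_of_words([(0.1, 0.0), (0.9, 1.1), (1.0, 0.8)], codebook)
```

One descriptor falls nearest the first word and two nearest the second, so the sequence is summarized by the histogram (1/3, 2/3) regardless of its length.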
Fathollahi, Ghezelghieh Mona. "Estimation of Human Poses Categories and Physical Object Properties from Motion Trajectories". Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6835.
Tokunaga, Daniel Makoto. "Local pose estimation of feature points for object based augmented reality". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-22092016-110832/.
The use of real objects as a means of connecting real and virtual information is a key aspect of augmented reality. A central issue for such a connection is the estimation of the object's visuo-spatial information, in other words, the detection of the object's pose. Different objects can behave differently when used in interactions, not only changing position but also being bent or deformed. Traditional research solves such detection problems with different approaches depending on the type of object. Additionally, some research relies only on the positional information of the interest points, simplifying the object information. In this work, pose detection of different objects is explored by collecting more information from the observed interest points and, in turn, obtaining the local poses of those points, poses that are not explored in other research. This concept of local pose detection is applied in two capture settings, extending into two novel approaches: one based on RGB-D cameras, and another based on RGB cameras and machine-learning methods. In the RGB-D-based approach, the orientation and the surface around the interest point are used to obtain the point's normal, from which the local pose is derived. This approach allows obtaining not only the poses of rigid objects but also the approximate pose of deformable objects. The RGB-based approach, in contrast, applies machine learning to changes in local appearance. Unlike other RGB-camera-based work, it replaces complex non-linear solvers with a fast and robust method, allowing the local rotations of interest points, as well as the full 6-degree-of-freedom pose of rigid objects, to be obtained at a much lower computational cost for real-time calculation.
Both approaches show that collecting local poses can provide information for detecting the poses of different types of objects.
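The RGB-D idea of deriving a local pose from the surface normal at an interest point can be illustrated with the most basic normal estimate, the cross product of two in-surface vectors. This is only a sketch under that simplification; a real system would fit a plane over a larger neighbourhood of depth samples:

```python
def normal_from_neighbors(p0, p1, p2):
    """Unit surface normal at interest point p0, estimated from two
    neighbouring surface points p1 and p2 via the cross product of the
    two in-surface vectors (p1 - p0) and (p2 - p0)."""
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    nx, ny, nz = (uy * vz - uz * vy,   # cross product u x v
                  uz * vx - ux * vz,
                  ux * vy - uy * vx)
    norm = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / norm, ny / norm, nz / norm)

# Three points on the z = 0 plane: the normal is the +z axis.
n = normal_from_neighbors((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Together with a reference direction on the surface, such a normal fixes the local orientation of the interest point, which is the "local pose" the thesis exploits.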
Liebelt, Jörg. "Détection de classes d'objets et estimation de leurs poses à partir de modèles 3D synthétiques". Grenoble, 2010. https://theses.hal.science/tel-00553343.
This dissertation aims at extending object class detection and pose estimation on single 2D images with a 3D model-based approach. The work describes learning, detection and estimation steps adapted to the use of synthetically rendered data with known 3D geometry. Most existing approaches recognize object classes for a particular viewpoint or combine classifiers for a few discrete views. By using existing CAD models and rendering techniques from computer graphics, parameterized to reproduce variations commonly found in real images, we propose instead to build 3D representations of object classes that can handle viewpoint changes and intra-class variability. These 3D representations are derived in two different ways: either as an unsupervised filtering process of pose- and class-discriminant local features on purely synthetic training data, or as a part model which discriminatively learns the object class appearance from an annotated database of real images and builds a generative representation of 3D geometry from a database of synthetic CAD models. During detection, we introduce a 3D voting scheme which reinforces geometric coherence by means of a robust pose estimation, and we propose an alternative probabilistic pose estimation method which evaluates the likelihood of groups of 2D part detections with respect to a full 3D geometry. Both detection methods yield approximate 3D bounding boxes in addition to 2D localizations; these initializations are subsequently improved by a registration scheme aligning arbitrary 3D models to optical and Synthetic Aperture Radar (SAR) images in order to disambiguate and prune 2D detections and to handle occlusions. The work is evaluated on several standard benchmark datasets and is shown to achieve state-of-the-art performance for 2D detection, in addition to providing 3D pose estimations from single images.
Blanc, Beyne Thibault. "Estimation de posture 3D à partir de données imprécises et incomplètes : application à l'analyse d'activité d'opérateurs humains dans un centre de tri". Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0106.
In a context of studying stress and ergonomics at work for the prevention of musculoskeletal disorders, the company Ebhys wants to develop a tool for analyzing the activity of human operators in a waste sorting center by measuring ergonomic indicators. To cope with the uncontrolled environment of the sorting center, these indicators are measured from depth images. An ergonomic study allows us to define the indicators to be measured: zones of movement of the operator's hands and zones of angulation of certain joints of the upper body. They are therefore indicators that can be obtained from an analysis of the operator's 3D pose. The software for computing the indicators is thus composed of three steps: a first part segments the operator from the rest of the scene to ease the 3D pose estimation, a second part estimates the operator's 3D pose, and a third part uses the operator's 3D pose to compute the ergonomic indicators. First of all, we propose an algorithm that extracts the operator from the rest of the depth image. To do this, we use a first automatic segmentation based on static background removal and selection of a moving element given its position and size. This first segmentation allows us to train a neural network that improves the results; the network is trained on the segmentations obtained from the first automatic step, from which the best-quality samples are automatically selected during training. Next, we build a neural network model to estimate the operator's 3D pose. We propose a study that allows us to find a light and optimal model for 3D pose estimation on synthetic depth images, which we generate numerically. However, while this network gives outstanding performance on synthetic depth images, it is not directly applicable to the real depth images that we acquired in an industrial context.
To overcome this issue, we finally build a module that transforms the synthetic depth images into more realistic ones. This image-to-image translation model modifies the style of the depth image without changing its content, keeping the 3D pose of the operator from the synthetic source image unchanged in the translated realistic depth frames. These more realistic depth images are then used to re-train the 3D pose estimation neural network, finally yielding a convincing 3D pose estimation on the depth images acquired in real conditions, from which the ergonomic indicators are computed.
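The first automatic segmentation step above (static background removal on depth images) can be sketched as a per-pixel depth difference against a captured background frame. The threshold value and the handling of zero-depth sensor holes are assumptions for illustration:

```python
def segment_foreground(depth, background, threshold=0.05):
    """First-pass segmentation of a moving operator from a depth frame:
    mark pixels whose depth differs from the static background by more
    than `threshold` metres. Zero depth (sensor holes) is ignored.
    depth, background: 2D lists of depth values in metres."""
    mask = []
    for row_d, row_b in zip(depth, background):
        mask.append([d != 0 and abs(d - b) > threshold
                     for d, b in zip(row_d, row_b)])
    return mask

# Toy 2x2 example: one pixel moved closer, one is a sensor hole.
background = [[2.0, 2.0], [2.0, 2.0]]
frame      = [[2.0, 1.2], [0.0, 2.01]]
mask = segment_foreground(frame, background)
```

Only the pixel that moved 0.8 m closer is flagged; the connected foreground region would then be filtered by position and size, as the abstract describes, before training the refinement network.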
Gourjon, Géraud. "L'estimation du mélange génétique dans les populations humaines". Thesis, Aix-Marseille 2, 2010. http://www.theses.fr/2010AIX20686/document.
Different methods have been developed to estimate the genetic admixture contributions of parental populations to a hybrid one. Most of these methods are implemented in software programs that provide estimates of variable accuracy. A full comparison between the ADMIX (weighted least squares), ADMIX95 (gene identity), Admix 2.0 (coalescent-based), Mistura (maximum likelihood), LEA (likelihood-based) and LEADMIX (maximum likelihood) software programs has been carried out, both at the "intra" level (testing each program) and the "inter" level (comparisons between them). We tested all of these programs on a real human population data set, using four kinds of markers, autosomal (blood groups and KIR genes) and uniparental (mtDNA and Y chromosome). We demonstrated that the accuracy of the results depends not only on the method itself but also on the choice of loci and of parental populations. We consider that admixture contribution rates obtained from human population data sets should not be regarded as accurate values but rather as indicative results, and we suggest using an "Admixture Indicative Interval" as a measurement of admixture.
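For the least-squares family of estimators compared above (e.g. ADMIX), the admixture proportion has a simple closed form under the two-parent model p_h = m·p1 + (1−m)·p2 over allele frequencies. This is a minimal, unweighted sketch; the actual programs weight loci and provide error estimates, and the function name is an assumption:

```python
def admixture_lsq(hybrid, parent1, parent2):
    """Unweighted least-squares estimate of the contribution m of parent1
    to a hybrid population under p_h = m*p1 + (1-m)*p2, fitted over lists
    of allele frequencies (one entry per allele, across loci)."""
    num = sum((ph - p2) * (p1 - p2)
              for ph, p1, p2 in zip(hybrid, parent1, parent2))
    den = sum((p1 - p2) ** 2 for p1, p2 in zip(parent1, parent2))
    return num / den

# Hybrid frequencies exactly midway between the two parents -> m = 0.5.
m = admixture_lsq([0.5, 0.3], [0.7, 0.4], [0.3, 0.2])
```

The closed form follows from minimizing the sum of squared residuals in m; as the abstract stresses, the estimate shifts with the choice of loci and parental populations, which is why an indicative interval is preferable to a point value.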
Zvénigorosky-Durel, Vincent. "Etude des parentés génétiques dans les populations humaines anciennes : estimation de la fiabilité et de l'efficacité des méthodes d'analyse". Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30260/document.
The study of genetic kinship allows anthropology to situate an individual within the groups in which they evolve: a biological family, a social group, a population. The application of classical probabilistic methods (established to solve cases in legal medicine, such as Likelihood Ratios, or LR) to STR data from archaeological material has permitted the discovery of numerous parental links which together constitute genealogies both simple and complex. Our continued practice of these methods has, however, led us to identify limits to the interpretation of STR data, especially in cases of complex, distant or inbred kinship. The first part of the present work consists of estimating the reliability and efficacy of the LR method in four situations: a large modern population with significant allelic diversity, a large modern population with poor allelic diversity, a large ancient population and a small ancient population. Recent publications use the more numerous markers analysed by Next-Generation Sequencing (NGS) to implement new strategies for the detection of kinship, especially based on the analysis of chromosome segments shared due to common ancestry ("Identity-by-Descent", or IBD, segments). These methods have permitted more reliable estimation of kinship probabilities in ancient material. They are nevertheless ill-suited to certain situations characteristic of ancient DNA studies: they were not conceived to work on single pairs of isolated individuals, and they depend, like the classical methods, on the estimation of allelic diversity in the population. We therefore propose to quantify the reliability and efficiency of the IBD segment method using NGS data, focusing on the quality of results in different situations, with populations of different sizes and sets of more or less heterogeneous samples.[...]
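At its simplest, the IBD-segment approach discussed above reduces to summing the genetic length of shared segments above a minimum length, since short calls are likely false positives. A hedged sketch with illustrative values (the actual methods model segment detection and population allele frequencies far more carefully; the 4 cM cut-off and genome length are assumptions):

```python
def total_ibd(segments, min_cm=4.0):
    """Total genetic length (centimorgans) of IBD segments at least
    min_cm long; shorter segments are dropped as likely false positives.
    segments: list of (start_cm, end_cm) shared intervals."""
    return sum(end - start for start, end in segments
               if end - start >= min_cm)

def kinship_hint(shared_cm, genome_cm=7000.0):
    """Very rough proportion of the genome shared IBD: values near 0.5
    suggest a first-degree pair, values near 0 an unrelated pair."""
    return shared_cm / genome_cm

# Two long segments survive the cut-off; the 1 cM call is discarded.
shared = total_ibd([(0.0, 50.0), (100.0, 101.0), (200.0, 260.0)])
```

The abstract's caveat applies directly here: with a single isolated pair and no reliable population allele frequencies, both the segment calls and any threshold on the shared proportion become unreliable, which is what the thesis sets out to quantify.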
Books on the topic "Estimation de poses humaines":
Trottier, Guy. La main-d'oeuvre en physiothérapie et en techniques de réadaptation physique au Québec: État de situation et estimation de l'offre et de la demande de ressources humaines jusqu'en 2004. [Québec]: Gouvernement du Québec, Ministère de la santé et des services sociaux, Direction générale de la planification et de l'évaluation, 1990.
Ontario. Esquisse de cours 12e année: Sciences de l'activité physique pse4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Technologie de l'information en affaires btx4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Études informatiques ics4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Mathématiques de la technologie au collège mct4c cours précollégial. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Sciences snc4m cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: English eae4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Le Canada et le monde: une analyse géographique cgw4u cours préuniversitaire. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Environnement et gestion des ressources cgr4e cours préemploi. Vanier, Ont: CFORP, 2002.
Ontario. Esquisse de cours 12e année: Histoire de l'Occident et du monde chy4c cours précollégial. Vanier, Ont: CFORP, 2002.
Book chapters on the topic "Estimation de poses humaines":
Sciortino, Giuseppa, Giovanni Maria Farinella, Sebastiano Battiato, Marco Leo and Cosimo Distante. "On the Estimation of Children’s Poses". In Image Analysis and Processing - ICIAP 2017, 410–21. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68548-9_38.
Salinero Santamaría, Sergio, Antía Carmona Balea, Mario Rubio González, Javier Caballero Sandoval, Germán Francés Tostado, Héctor Sánchez San Blas and Gabriel Villarrubia González. "Poses Estimation Technology for Physical Activity Monitoring". In Advances in Intelligent Systems and Computing, 352–60. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-38344-1_35.
Baroliya, Jitendra Kumar, and Amit Doegar. "Human Body Poses Detection and Estimation Using Convolutional Neural Network". In Proceedings of Data Analytics and Management, 303–15. Singapore: Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-99-6544-1_23.
Ma, Bingpeng, Fei Yang, Wen Gao and Baochang Zhang. "The Application of Extended Geodesic Distance in Head Poses Estimation". In Advances in Biometrics, 192–98. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11608288_26.
Tobisch, Franziska, Karla Weigelt, Pascal Philipp and Florian Matthes. "Investigating Effort Estimation in a Large-Scale Agile ERP Transformation Program". In Lecture Notes in Business Information Processing, 70–86. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-61154-4_5.
Guerrero, Pablo, and Javier Ruiz-del-Solar. "Improving Robot Self-localization Using Landmarks’ Poses Tracking and Odometry Error Estimation". In RoboCup 2007: Robot Soccer World Cup XI, 148–58. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-68847-1_13.
Xu, Yuquan, Seiichi Mita and Silong Peng. "A Fast Blind Spatially-Varying Motion Deblurring Algorithm with Camera Poses Estimation". In Computer Vision – ACCV 2016, 157–72. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54187-7_11.
Huang, Xinyu, Jizhou Gao, Sen-ching S. Cheung and Ruigang Yang. "Manifold Estimation in View-Based Feature Space for Face Synthesis across Poses". In Computer Vision – ACCV 2009, 37–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12307-8_4.
Lee, Seunghee, Jungmo Koo, Hyungjin Kim, Kwangyik Jung and Hyun Myung. "A Robust Estimation of 2D Human Upper-Body Poses Using Fully Convolutional Network". In Robot Intelligence Technology and Applications 5, 549–58. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-78452-6_44.
Steege, Frank-Florian, Christian Martin and Horst-Michael Groß. "Estimation of Pointing Poses on Monocular Images with Neural Techniques - An Experimental Comparison". In Lecture Notes in Computer Science, 593–602. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74695-9_61.
Conference papers on the topic "Estimation de poses humaines":
Krishnan, Hema, Anagha Jayaraj, Anagha S, Christy Thomas and Grace Mol Joy. "Pose Estimation of Yoga Poses using ML Techniques". In 2022 IEEE 19th India Council International Conference (INDICON). IEEE, 2022. http://dx.doi.org/10.1109/indicon56171.2022.10040162.
Atrevi, Fabrice Dieudonné, Damien Vivet, Florent Duculty and Bruno Emile. "3D Human Poses Estimation from a Single 2D Silhouette". In International Conference on Computer Vision Theory and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0005711503610369.
Xu, Lu, Chen Hu, Yinqi Li, Jiran Tao, Jianru Xue and Kuizhi Mei. "Deep Conditional Variational Estimation for Depth-Based Hand Poses". In 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019). IEEE, 2019. http://dx.doi.org/10.1109/fg.2019.8756559.
Wang, Chunyu, Yizhou Wang, Zhouchen Lin, Alan L. Yuille and Wen Gao. "Robust Estimation of 3D Human Poses from a Single Image". In 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2014. http://dx.doi.org/10.1109/cvpr.2014.303.
Song, Yafei, Xiaowu Chen, Xiaogang Wang, Yu Zhang and Jia Li. "Fast Estimation of Relative Poses for 6-DOF Image Localization". In 2015 IEEE International Conference on Multimedia Big Data (BigMM). IEEE, 2015. http://dx.doi.org/10.1109/bigmm.2015.10.
Liu, Shaowei, Hanwen Jiang, Jiarui Xu, Sifei Liu and Xiaolong Wang. "Semi-Supervised 3D Hand-Object Poses Estimation with Interactions in Time". In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01445.
Hauck, Johannes, Adam Kalisz and Jorn Thielecke. "Continuous-Time Trajectory Estimation From Noisy Camera Poses Using Cubic Bézier Curves". In 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE). IEEE, 2021. http://dx.doi.org/10.1109/case49439.2021.9551621.
Huang, Wan-Chia, Cheng-Liang Shih, Irin Tri Anggraini, Yanqi Xiao, Nobuo Funabiki and Chih-Peng Fan. "OpenPose Based Yoga Poses Difficulty Estimation for Dynamic and Static Yoga Exercises". In 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2023. http://dx.doi.org/10.1109/apsipaasc58517.2023.10317354.
Amaya, Kotaro, and Mariko Isogawa. "Adaptive and Robust Mmwave-Based 3D Human Mesh Estimation for Diverse Poses". In 2023 IEEE International Conference on Image Processing (ICIP). IEEE, 2023. http://dx.doi.org/10.1109/icip49359.2023.10222059.
Ishii, Yohei, Hitoshi Hongo, Yoshinori Niwa and Kazuhiko Yamamoto. "Comparison of different methods for gender estimation from face image of various poses". In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515128.
Organization reports on the topic "Estimation de poses humaines":
Aihara, Shimpei, Takara Saki, Tyusei Shibata, Toshiaki Matsubara, Ryosuke Mizukami, Yudai Yoshida and Akira Shionoya. Deep Learning Model for Integrated Estimation of Wheelchair and Human Poses Using Camera Images. Purdue University, 2022. http://dx.doi.org/10.5703/1288284317545.