Journal articles on the topic "Pose-aware"

To see the other types of publications on this topic, follow the link: Pose-aware.

Consult the top 50 journal articles for your research on the topic "Pose-aware".

Next to every source in the list of references there is an "Add to bibliography" button. Click on it, and we will generate the bibliographic reference to the chosen source in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in PDF format and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Wu, Lele, Zhenbo Yu, Yijiang Liu, and Qingshan Liu. "Limb Pose Aware Networks for Monocular 3D Pose Estimation." IEEE Transactions on Image Processing 31 (2022): 906–17. http://dx.doi.org/10.1109/tip.2021.3136613.

2

Teller, S., Jiawen Chen, and H. Balakrishnan. "Pervasive pose-aware applications and infrastructure." IEEE Computer Graphics and Applications 23, no. 4 (July 2003): 14–18. http://dx.doi.org/10.1109/mcg.2003.1210859.

3

Masi, Iacopo, Feng-Ju Chang, Jongmoo Choi, Shai Harel, Jungyeon Kim, KangGeon Kim, Jatuporn Leksut, et al. "Learning Pose-Aware Models for Pose-Invariant Face Recognition in the Wild." IEEE Transactions on Pattern Analysis and Machine Intelligence 41, no. 2 (February 1, 2019): 379–93. http://dx.doi.org/10.1109/tpami.2018.2792452.

4

Xiao, Yabo, Dongdong Yu, Xiao Juan Wang, Lei Jin, Guoli Wang, and Qian Zhang. "Learning Quality-Aware Representation for Multi-Person Pose Regression." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2822–30. http://dx.doi.org/10.1609/aaai.v36i3.20186.

Abstract:
Off-the-shelf single-stage multi-person pose regression methods generally leverage the instance score (i.e., the confidence of the instance localization) to indicate pose quality when selecting pose candidates. We consider that there are two gaps in the existing paradigm: 1) The instance score is not well interrelated with the pose regression quality. 2) The instance feature representation, which is used for predicting the instance score, does not explicitly encode the structural pose information needed to predict a reasonable score that represents pose regression quality. To address these issues, we propose to learn a pose regression quality-aware representation. Concretely, for the first gap, instead of using the previous instance confidence label (e.g., a discrete {1,0} or Gaussian representation) to denote the position and confidence of a person instance, we first introduce the Consistent Instance Representation (CIR), which unifies the pose regression quality score of the instance and the confidence of the background into a pixel-wise score map to calibrate the inconsistency between the instance score and the pose regression quality. To fill the second gap, we further present the Query Encoding Module (QEM), including the Keypoint Query Encoding (KQE) to encode the positional and semantic information for each keypoint, and the Pose Query Encoding (PQE), which explicitly encodes the predicted structural pose information to better fit the Consistent Instance Representation (CIR). By using the proposed components, we significantly alleviate the above gaps. Our method outperforms previous single-stage regression-based and even bottom-up methods, and achieves a state-of-the-art result of 71.7 AP on the MS COCO test-dev set.
5

Yu, Han, Congju Du, and Li Yu. "Scale-aware heatmap representation for human pose estimation." Pattern Recognition Letters 154 (February 2022): 1–6. http://dx.doi.org/10.1016/j.patrec.2021.12.018.

6

Glasner, Daniel, Meirav Galun, Sharon Alpert, Ronen Basri, and Gregory Shakhnarovich. "Viewpoint-aware object detection and continuous pose estimation." Image and Vision Computing 30, no. 12 (December 2012): 923–33. http://dx.doi.org/10.1016/j.imavis.2012.09.006.

7

Zhang, Hongjia, Junwen Huang, Xin Xu, Qiang Fang, and Yifei Shi. "Symmetry-Aware 6D Object Pose Estimation via Multitask Learning." Complexity 2020 (October 21, 2020): 1–7. http://dx.doi.org/10.1155/2020/8820500.

Abstract:
Although 6D object pose estimation has been intensively explored in the past decades, the performance is still not fully satisfactory, especially when it comes to symmetric objects. In this paper, we study the problem of 6D object pose estimation by leveraging the information of object symmetry. To this end, a network is proposed that predicts the 6D object pose and the object's reflectional symmetry as well as the key points simultaneously via a multitask learning scheme. Consequently, the pose estimation is aware of and regulated by the symmetry axis and the key points of the to-be-estimated objects. Moreover, we devise an optimization function to refine the predicted 6D object pose by considering the predicted symmetry. Experiments on two datasets demonstrate that the proposed symmetry-aware approach outperforms the existing methods in 6D pose estimation of symmetric objects.
8

Zou, Xinyi, Guiqing Li, Mengxiao Yin, Yuxin Liu, and Yupan Wang. "Deformation-Graph-Driven and Deformation Aware Spectral Pose Transfer." Journal of Computer-Aided Design & Computer Graphics 33, no. 8 (April 1, 2021): 1234–45. http://dx.doi.org/10.3724/sp.j.1089.2021.18667.

9

Zhou, Desen, and Qian He. "PoSeg: Pose-Aware Refinement Network for Human Instance Segmentation." IEEE Access 8 (2020): 15007–16. http://dx.doi.org/10.1109/access.2020.2967147.

10

Shen, Mali, Yun Gu, Ning Liu, and Guang-Zhong Yang. "Context-Aware Depth and Pose Estimation for Bronchoscopic Navigation." IEEE Robotics and Automation Letters 4, no. 2 (April 2019): 732–39. http://dx.doi.org/10.1109/lra.2019.2893419.

11

Adeli, Vida, Ehsan Adeli, Ian Reid, Juan Carlos Niebles, and Hamid Rezatofighi. "Socially and Contextually Aware Human Motion and Pose Forecasting." IEEE Robotics and Automation Letters 5, no. 4 (October 2020): 6033–40. http://dx.doi.org/10.1109/lra.2020.3010742.

12

Bin, Yanrui, Zhao-Min Chen, Xiu-Shen Wei, Xinya Chen, Changxin Gao, and Nong Sang. "Structure-aware human pose estimation with graph convolutional networks." Pattern Recognition 106 (October 2020): 107410. http://dx.doi.org/10.1016/j.patcog.2020.107410.

13

Zhu, Qi. "Pose-aware Person Re-Identification with Spatial-temporal Attention." IOP Conference Series: Materials Science and Engineering 646 (October 17, 2019): 012051. http://dx.doi.org/10.1088/1757-899x/646/1/012051.

14

Wu, Yunjie, Zhengxing Sun, Youcheng Song, Yunhan Sun, YiJie Zhong, and Jinlong Shi. "Shape-Pose Ambiguity in Learning 3D Reconstruction from Images." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 4 (May 18, 2021): 2978–85. http://dx.doi.org/10.1609/aaai.v35i4.16405.

Abstract:
Learning single-image 3D reconstruction with only 2D image supervision is a promising research topic. The main challenge in image-supervised 3D reconstruction is the shape-pose ambiguity: a 2D supervision can be explained by an erroneous 3D shape from an erroneous pose. It introduces high uncertainty and misleads the learning process. Existing works rely on multi-view images or pose-aware annotations to resolve the ambiguity. In this paper, we propose to resolve the ambiguity without extra pose-aware labels or annotations. Our training data consists of single-view images from the same object category. To overcome the shape-pose ambiguity, we introduce a pose-independent GAN to learn the category-specific shape manifold from the image collections. With the learned shape space, we resolve the shape-pose ambiguity in the original images by training a pseudo pose regressor. Finally, we learn a reconstruction network with both the common re-projection loss and a pose-independent discrimination loss, making the results plausible from all views. Through experiments on synthetic and real image datasets, we demonstrate that our method performs comparably to existing methods while not requiring any extra pose-aware annotations, which makes it more applicable and adaptable.
15

Chang, Inho, Min-Gyu Park, Je Woo Kim, and Ju Hong Yoon. "Absolute 3D Human Pose Estimation Using Noise-Aware Radial Distance Predictions." Symmetry 15, no. 1 (December 22, 2022): 25. http://dx.doi.org/10.3390/sym15010025.

Abstract:
We present a simple yet effective pipeline for absolute three-dimensional (3D) human pose estimation from two-dimensional (2D) joint keypoints, namely, the 2D-to-3D human pose lifting problem. Our method comprises two simple baseline networks, a 3D conversion function, and a correction network. The two baseline networks predict the root distance and the root-relative joint distance simultaneously. Given the input and predicted distances, the 3D conversion function recovers the absolute 3D pose, and the correction network reduces 3D pose noise caused by input uncertainties. Furthermore, to cope with input noise implicitly, we adopt a Siamese architecture that enforces the consistency of features between two training inputs, i.e., ground-truth 2D joint keypoints and detected 2D joint keypoints. Finally, we experimentally validate the advantages of the proposed method and demonstrate its competitive performance over state-of-the-art absolute 2D-to-3D pose-lifting methods.
16

Tang, Jilin, Yi Yuan, Tianjia Shao, Yong Liu, Mengmeng Wang, and Kun Zhou. "Structure-aware Person Image Generation with Pose Decomposition and Semantic Correlation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 2656–64. http://dx.doi.org/10.1609/aaai.v35i3.16369.

Abstract:
In this paper we tackle the problem of pose guided person image generation, which aims to transfer a person image from the source pose to a novel target pose while maintaining the source appearance. Given the inefficiency of standard CNNs in handling large spatial transformation, we propose a structure-aware flow based method for high-quality person image generation. Specifically, instead of learning the complex overall pose changes of human body, we decompose the human body into different semantic parts (e.g., head, torso, and legs) and apply different networks to predict the flow fields for these parts separately. Moreover, we carefully design the network modules to effectively capture the local and global semantic correlations of features within and among the human parts respectively. Extensive experimental results show that our method can generate high-quality results under large pose discrepancy and outperforms state-of-the-art methods in both qualitative and quantitative comparisons.
17

Wang, Hui, Peng He, Nannan Li, and Junjie Cao. "Pose Recognition of 3D Human Shapes via Multi-View CNN with Ordered View Feature Fusion." Electronics 9, no. 9 (August 23, 2020): 1368. http://dx.doi.org/10.3390/electronics9091368.

Abstract:
Rapid pose classification and pose retrieval in 3D human datasets are important problems in shape analysis. In this paper, we extend the Multi-View Convolutional Neural Network (MVCNN) with ordered view feature fusion for orientation-aware 3D human pose classification and retrieval. Firstly, we combine each learned view feature in an orderly manner to form a compact representation for orientation-aware pose classification. Secondly, for pose retrieval, the Siamese network is adopted to learn descriptor vectors, where their L2 distances are close for pairs of shapes with the same poses and are far away for pairs of shapes with different poses. Furthermore, we also construct a larger 3D Human Pose Recognition Dataset (HPRD) consisting of 100,000 shapes for the evaluation of pose classification and retrieval. Experiments and comparisons demonstrate that our method obtains better results than previous works of pose classification and retrieval on the 3D human datasets, such as SHREC’14, FAUST, and HPRD.
18

Lv, Kai, Hao Sheng, Zhang Xiong, Wei Li, and Liang Zheng. "Pose-Based View Synthesis for Vehicles: A Perspective Aware Method." IEEE Transactions on Image Processing 29 (2020): 5163–74. http://dx.doi.org/10.1109/tip.2020.2980130.

19

Fu, Mingliang, Yuquan Leng, Haitao Luo, and Weijia Zhou. "An Occlusion-Aware Framework for Real-Time 3D Pose Tracking." Sensors 18, no. 8 (August 20, 2018): 2734. http://dx.doi.org/10.3390/s18082734.

Abstract:
Random forest-based methods for 3D temporal tracking over an image sequence have gained increasing prominence in recent years. They do not require the object's texture and only use the raw depth images and the previous pose as input, which makes them especially suitable for textureless objects. These methods learn a built-in occlusion handling from predetermined occlusion patterns, which are not always able to model the real case. Besides, the input of the random forest is mixed with more and more outliers as the occlusion deepens. In this paper, we propose an occlusion-aware framework capable of real-time and robust 3D pose tracking from RGB-D images. To this end, the proposed framework is anchored in the random forest-based learning strategy, referred to as RFtracker. We aim to enhance its performance from two aspects: integrated local refinement of the random forest on one side, and online rendering-based occlusion handling on the other. In order to eliminate the inconsistency between learning and prediction in RFtracker, a local refinement step is embedded to guide the random forest towards the optimal regression. Furthermore, we present an online rendering-based occlusion handling to improve the robustness against dynamic occlusion. Meanwhile, a lightweight convolutional neural network-based motion-compensated (CMC) module is designed to cope with fast motion and the inevitable physical delay caused by imaging frequency and data transmission. Finally, experiments show that our proposed framework copes better with heavily occluded scenes than RFtracker while preserving real-time performance.
20

Wang, Xun, Yan Tian, Xuran Zhao, Tao Yang, Judith Gelernter, Jialei Wang, Guohua Cheng, and Wei Hu. "Improving Multiperson Pose Estimation by Mask-aware Deep Reinforcement Learning." ACM Transactions on Multimedia Computing, Communications, and Applications 16, no. 3 (September 4, 2020): 1–18. http://dx.doi.org/10.1145/3397340.

21

Choi, Daewoong, Hyeonjoong Cho, Kyeongeun Seo, Sangyub Lee, Jaekyu Lee, and Jaejin Ko. "Designing Hand Pose Aware Virtual Keyboard With Hand Drift Tolerance." IEEE Access 7 (2019): 96035–47. http://dx.doi.org/10.1109/access.2019.2929310.

22

Hong, Chaoqun, Liang Chen, Yuxin Liang, and Zhiqiang Zeng. "Stacked Capsule Graph Autoencoders for geometry-aware 3D head pose estimation." Computer Vision and Image Understanding 208-209 (July 2021): 103224. http://dx.doi.org/10.1016/j.cviu.2021.103224.

23

Li, Chaonan, Sheng Liu, Lu Yao, and Siyu Zou. "Video-based body geometric aware network for 3D human pose estimation." Optoelectronics Letters 18, no. 5 (May 2022): 313–20. http://dx.doi.org/10.1007/s11801-022-2015-8.

24

Rahimi, Mohammad Masoud, Kourosh Khoshelham, Mark Stevenson, and Stephan Winter. "Pose-aware monocular localization of occluded pedestrians in 3D scene space." ISPRS Open Journal of Photogrammetry and Remote Sensing 2 (December 2021): 100006. http://dx.doi.org/10.1016/j.ophoto.2021.100006.

25

Cho, Yeong-Jun, and Kuk-Jin Yoon. "PaMM: Pose-Aware Multi-Shot Matching for Improving Person Re-Identification." IEEE Transactions on Image Processing 27, no. 8 (August 2018): 3739–52. http://dx.doi.org/10.1109/tip.2018.2815840.

26

Zhang, Xiaoyan, Zhenhua Tang, Junhui Hou, and Yanbin Hao. "3D human pose estimation via human structure-aware fully connected network." Pattern Recognition Letters 125 (July 2019): 404–10. http://dx.doi.org/10.1016/j.patrec.2019.05.020.

27

Zhang, Maomao, Ao Li, Honglei Liu, and Minghui Wang. "Coarse-to-Fine Hand–Object Pose Estimation with Interaction-Aware Graph Convolutional Network." Sensors 21, no. 23 (December 3, 2021): 8092. http://dx.doi.org/10.3390/s21238092.

Abstract:
The analysis of hand–object poses from RGB images is important for understanding and imitating human behavior and acts as a key factor in various applications. In this paper, we propose a novel coarse-to-fine two-stage framework for hand–object pose estimation, which explicitly models hand–object relations in 3D pose refinement rather than in the process of converting 2D poses to 3D poses. Specifically, in the coarse stage, 2D heatmaps of hand and object keypoints are obtained from RGB image and subsequently fed into pose regressor to derive coarse 3D poses. As for the fine stage, an interaction-aware graph convolutional network called InterGCN is introduced to perform pose refinement by fully leveraging the hand–object relations in 3D context. One major challenge in 3D pose refinement lies in the fact that relations between hand and object change dynamically according to different HOI scenarios. In response to this issue, we leverage both general and interaction-specific relation graphs to significantly enhance the capacity of the network to cover variations of HOI scenarios for successful 3D pose refinement. Extensive experiments demonstrate state-of-the-art performance of our approach on benchmark hand–object datasets.
28

Gu, Yanlei, Huiyang Zhang, and Shunsuke Kamijo. "Multi-Person Pose Estimation using an Orientation and Occlusion Aware Deep Learning Network." Sensors 20, no. 6 (March 12, 2020): 1593. http://dx.doi.org/10.3390/s20061593.

Abstract:
Image-based human behavior and activity understanding has been a hot topic in the field of computer vision and multimedia. As an important part of it, skeleton estimation, also called pose estimation, has attracted a lot of interest. For pose estimation, most deep learning approaches mainly focus on the joint feature. However, the joint feature is not sufficient, especially when the image includes multiple people and the pose is occluded or not fully visible. This paper proposes a novel multi-task framework for multi-person pose estimation. The proposed framework is developed based on Mask Region-based Convolutional Neural Networks (R-CNN) and extended to integrate the joint feature, body boundary, body orientation and occlusion condition together. In order to further improve the performance of multi-person pose estimation, this paper proposes to organize the different information in serial multi-task models instead of the widely used parallel multi-task network. The proposed models are trained on the public dataset Common Objects in Context (COCO), which is further augmented with ground truths of body orientation and mutual-occlusion masks. Experiments demonstrate the performance of the proposed method for multi-person pose estimation and body orientation estimation. The proposed method achieves a Percentage of Correct Keypoints (PCK) of 84.6% and a Correct Detection Rate (CDR) of 83.7%. Comparisons further illustrate that the proposed model can reduce over-detection compared with other methods.
29

Yang, Cong, Gilles Simon, John See, Marie-Odile Berger, and Wenyong Wang. "WatchPose: A View-Aware Approach for Camera Pose Data Collection in Industrial Environments." Sensors 20, no. 11 (May 27, 2020): 3045. http://dx.doi.org/10.3390/s20113045.

Abstract:
Collecting correlated scene images and camera poses is an essential step towards learning absolute camera pose regression models. While the acquisition of such data in living environments is relatively easy by following regular roads and paths, it is still a challenging task in constricted industrial environments. This is because industrial objects have varied sizes and inspections are usually carried out with non-constant motions. As a result, regression models are more sensitive to scene images with respect to viewpoints and distances. Motivated by this, we present a simple but efficient camera pose data collection method, WatchPose, to improve the generalization and robustness of camera pose regression models. Specifically, WatchPose tracks nested markers and visualizes viewpoints in an Augmented Reality- (AR) based manner to properly guide users to collect training data from broader camera-object distances and more diverse views around the objects. Experiments show that WatchPose can effectively improve the accuracy of existing camera pose regression models compared to the traditional data acquisition method. We also introduce a new dataset, Industrial10, to encourage the community to adapt camera pose regression methods for more complex environments.
30

Bhandari, Bhishan, Geonu Lee, and Jungchan Cho. "Body-Part-Aware and Multitask-Aware Single-Image-Based Action Recognition." Applied Sciences 10, no. 4 (February 24, 2020): 1531. http://dx.doi.org/10.3390/app10041531.

Abstract:
Action recognition is an application that, ideally, requires real-time results. We focus on single-image-based action recognition instead of video-based recognition because of its improved speed and lower cost of computation. However, a single image contains limited information, which makes single-image-based action recognition a difficult problem. To get an accurate representation of action classes, we propose three feature-stream-based shallow sub-networks (image-based, attention-image-based, and part-image-based feature networks) on top of the deep pose estimation network in a multitasking manner. Moreover, we design a multitask-aware loss function, so that the proposed method can be adaptively trained with heterogeneous datasets in which only human pose annotations or action labels are included (instead of both pose and action information), which makes it easier to apply the proposed approach to new data for behavioral analysis in intelligent systems. In our extensive experiments, we show that these streams represent complementary information and, hence, the fused representation is robust in distinguishing diverse fine-grained action classes. Unlike other methods, the human pose information was trained using heterogeneous datasets in a multitasking manner; nevertheless, it achieved 91.91% mean average precision on the Stanford 40 Actions Dataset. Moreover, we demonstrate that the proposed method can be flexibly applied to the multi-label action recognition problem on the V-COCO Dataset.
31

Ren, Pengfei, Haifeng Sun, Weiting Huang, Jiachang Hao, Daixuan Cheng, Qi Qi, Jingyu Wang, and Jianxin Liao. "Spatial-aware stacked regression network for real-time 3D hand pose estimation." Neurocomputing 437 (May 2021): 42–57. http://dx.doi.org/10.1016/j.neucom.2021.01.045.

32

Gao, Yafei, Yida Wang, Pietro Falco, Nassir Navab, and Federico Tombari. "Variational Object-Aware 3-D Hand Pose From a Single RGB Image." IEEE Robotics and Automation Letters 4, no. 4 (October 2019): 4239–46. http://dx.doi.org/10.1109/lra.2019.2930425.

33

Wu, Yiming, Wei Ji, Xi Li, Gang Wang, Jianwei Yin, and Fei Wu. "Context-Aware Deep Spatiotemporal Network for Hand Pose Estimation From Depth Images." IEEE Transactions on Cybernetics 50, no. 2 (February 2020): 787–97. http://dx.doi.org/10.1109/tcyb.2018.2873733.

34

Gill, Harriet. "Promoting sexual health." Children and Young People Now 2014, no. 9 (April 29, 2014): 34. http://dx.doi.org/10.12968/cypn.2014.9.34.

35

Feng, Wei, Wentao Liu, Tong Li, Jing Peng, Chen Qian, and Xiaolin Hu. "Turbo Learning Framework for Human-Object Interactions Recognition and Human Pose Estimation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 898–905. http://dx.doi.org/10.1609/aaai.v33i01.3301898.

Abstract:
Human-object interaction (HOI) recognition and pose estimation are two closely related tasks. Human pose is an essential cue for recognizing actions and localizing the interacted objects. Meanwhile, human actions and the localizations of their interacted objects provide guidance for pose estimation. In this paper, we propose a turbo learning framework to perform HOI recognition and pose estimation simultaneously. First, two modules are designed to enforce message passing between the tasks, i.e., a pose-aware HOI recognition module and an HOI-guided pose estimation module. Then, these two modules form a closed loop to utilize the complementary information iteratively, and they can be trained in an end-to-end manner. The proposed method achieves state-of-the-art performance on two public benchmarks, including the Verbs in COCO (V-COCO) and HICO-DET datasets.
36

Lu, Mingqi, Yaocong Hu, and Xiaobo Lu. "A pose-aware dynamic weighting model using feature integration for driver action recognition." Engineering Applications of Artificial Intelligence 113 (August 2022): 104918. http://dx.doi.org/10.1016/j.engappai.2022.104918.

37

Tanida, Yuki, and Kaori Fujinami. "Visible Position Estimation in Whole Wrist Circumference Device towards Forearm Pose-aware Display." EAI Endorsed Transactions on Context-aware Systems and Applications 4, no. 13 (March 14, 2018): 154341. http://dx.doi.org/10.4108/eai.14-3-2018.154341.

38

Zheng, Xiangtao, Xiumei Chen, and Xiaoqiang Lu. "A Joint Relationship Aware Neural Network for Single-Image 3D Human Pose Estimation." IEEE Transactions on Image Processing 29 (2020): 4747–58. http://dx.doi.org/10.1109/tip.2020.2972104.

39

Deng, Weihong, Jiani Hu, Zhongjun Wu, and Jun Guo. "From one to many: Pose-Aware Metric Learning for single-sample face recognition." Pattern Recognition 77 (May 2018): 426–37. http://dx.doi.org/10.1016/j.patcog.2017.10.020.

40

Voit, Michael, and Manuel Martin. "Body Pose Monitoring of Passengers for Context-aware Assistance Systems of the Future." ATZ worldwide 119, no. 4 (March 15, 2017): 60–63. http://dx.doi.org/10.1007/s38311-017-0005-4.

41

Chung, Minyoung, Minkyung Lee, Jioh Hong, Sanguk Park, Jusang Lee, Jingyu Lee, Il-Hyung Yang, Jeongjin Lee, and Yeong-Gil Shin. "Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation." Computers in Biology and Medicine 120 (May 2020): 103720. http://dx.doi.org/10.1016/j.compbiomed.2020.103720.

42

Yang, Sen, Ze Feng, Zhicheng Wang, Yanjie Li, Shoukui Zhang, Zhibin Quan, Shu-tao Xia, and Wankou Yang. "Detecting and grouping keypoints for multi-person pose estimation using instance-aware attention." Pattern Recognition 136 (April 2023): 109232. http://dx.doi.org/10.1016/j.patcog.2022.109232.

43

Meng, Xianjia, Yong Yang, Kang Li, and Zuobin Ying. "A Structure-Aware Adversarial Framework with the Keypoint Biorientation Field for Multiperson Pose Estimation." Wireless Communications and Mobile Computing 2022 (February 14, 2022): 1–17. http://dx.doi.org/10.1155/2022/3447827.

Abstract:
Human pose estimation aims at locating the anatomical parts or keypoints of the human body and is regarded as a core component in obtaining a detailed human understanding in images or videos. However, occlusion and overlap of human bodies and complex backgrounds often result in implausible pose predictions. To address the problem, we propose a structure-aware adversarial framework, which combines cues of local joint interconnectivity and priors about the holistic structure of human bodies, achieving high-quality results for multiperson human pose estimation. Effective learning of such cues and priors is typically a challenge. The presented framework uses a nonparametric representation, referred to as the Keypoint Biorientation Field (KBOF), to learn orientation cues of joint interconnectivity in the image, just as human vision can explore geometric constraints of joint interconnectivity. Additionally, a module using multiscale feature representation with inflated convolution for joint heatmap detection and Keypoint Biorientation Field detection is applied in our framework to fully explore the local features of joint points and the bidirectional connectivity between them at the microscopic level. Finally, we employ improved generative adversarial networks, which use KBOF and multiscale feature extraction, to implicitly leverage the cues and priors about the structure of human bodies for global structural inference. The adversarial network enables our framework to combine information about the connections between local body joints at the microscopic level and the structural priors of the human body at the global level, thus enhancing the performance of our framework. The effectiveness and robustness of the network are evaluated on the task of human pose prediction on two widely used benchmark datasets, i.e., the MPII and COCO datasets. Our approach outperforms state-of-the-art methods, especially in the case of complex scenes, achieving improvements of 2.6% and 1.7% over the latest method on the MPII test set and the COCO validation set, respectively.
44

Wu, Chenrui, Long Chen, and Shiqing Wu. "Cross-Attention-Based Reflection-Aware 6D Pose Estimation Network for Non-Lambertian Objects from RGB Images." Machines 10, no. 12 (November 22, 2022): 1107. http://dx.doi.org/10.3390/machines10121107.

Abstract:
Six-dimensional pose estimation for non-Lambertian objects, such as metal parts, is essential in intelligent manufacturing. Current methods pay much less attention to the influence of the surface reflection problem in 6D pose estimation. In this paper, we propose a cross-attention-based reflection-aware 6D pose estimation network (CAR6D) for solving the surface reflection problem in 6D pose estimation. We use a pseudo-Siamese network structure to extract features from both an RGB image and a 3D model. The cross-attention layers are designed as a bi-directional filter for each of the inputs (the RGB image and 3D model) to focus on calculating the correspondences of the objects. The network is trained to segment the reflection area from the object area. Training images with ground-truth labels of the reflection area are generated with a physical-based rendering method. The experimental results on a 6D dataset of metal parts demonstrate the superiority of CAR6D in comparison with other state-of-the-art models.
45

Hu, Hezhen, Wengang Zhou, and Houqiang Li. "Hand-Model-Aware Sign Language Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1558–66. http://dx.doi.org/10.1609/aaai.v35i2.16247.

Abstract:
Hand gestures play a dominant role in the expression of sign language. Current deep-learning-based video sign language recognition (SLR) methods usually follow a data-driven paradigm under the supervision of the category label. However, those methods suffer from limited interpretability and may encounter the overfitting issue due to limited sign data sources. In this paper, we introduce the hand prior and propose a new hand-model-aware framework for isolated SLR, with the modeled hand as the intermediate representation. We first transform the cropped hand sequence into a latent semantic feature. Then the hand model introduces the hand prior and provides a mapping from the semantic feature to a compact hand pose representation. Finally, the inference module enhances the spatio-temporal pose representation and performs the final recognition. Due to the lack of annotation on the hand pose in current sign language datasets, we further guide its learning by utilizing multiple weakly-supervised losses to constrain its spatial and temporal consistency. To validate the effectiveness of our method, we perform extensive experiments on four benchmark datasets, including NMFs-CSL, SLR500, MSASL and WLASL. Experimental results demonstrate that our method achieves state-of-the-art performance on all four popular benchmarks with a notable margin.
46

Chen, Zerui, Yan Huang, Hongyuan Yu, and Liang Wang. "Learning a Robust Part-Aware Monocular 3D Human Pose Estimator via Neural Architecture Search." International Journal of Computer Vision 130, no. 1 (October 26, 2021): 56–75. http://dx.doi.org/10.1007/s11263-021-01525-0.

47

Zins, Matthieu, Gilles Simon, and Marie-Odile Berger. "Object-Based Visual Camera Pose Estimation From Ellipsoidal Model and 3D-Aware Ellipse Prediction." International Journal of Computer Vision 130, no. 4 (March 7, 2022): 1107–26. http://dx.doi.org/10.1007/s11263-022-01585-w.

48

Jabłoński, Mirosław. "Silhouette Processing Via Mathematical Morphology with Pose-Aware Structuring Elements Based on 3D Model." Image Processing & Communications 17, no. 4 (December 1, 2012): 71–78. http://dx.doi.org/10.2478/v10248-012-0031-1.

Abstract:
In the paper, a method of pose-aware silhouette processing is presented. Morphological closing is proposed to enhance the segmented silhouette object. The contribution of the work is the adaptation of the structuring element used for morphological erosions and dilations. It is proposed to use the camera parameters, a 3D model of the scene, and a model of the silhouette and its position to compute a structuring element adequate to the individual projected onto the camera image. Structuring-element computation and the basic morphology operators were implemented in the OpenCL environment and tested on a parallel GPU platform. A comparison with utility software packages is provided and the results are briefly discussed.
49

Wang, Jiang, Farhan Ullah, Ying Cai, and Jing Li. "Non-Stationary Representation for Continuity Aware Head Pose Estimation Via Deep Neural Decision Trees." IEEE Access 7 (2019): 181947–58. http://dx.doi.org/10.1109/access.2019.2959584.

50

Fotouhi, Javad, Bernhard Fuerst, Alex Johnson, Sing Chun Lee, Russell Taylor, Greg Osgood, Nassir Navab, and Mehran Armand. "Pose-aware C-arm for automatic re-initialization of interventional 2D/3D image registration." International Journal of Computer Assisted Radiology and Surgery 12, no. 7 (May 19, 2017): 1221–30. http://dx.doi.org/10.1007/s11548-017-1611-8.
