Journal articles on the topic "Pose variations"

To see other types of publications on this topic, follow the link: Pose variations.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Explore the top 50 journal articles for your research on the topic "Pose variations".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever such data are available in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Du, Shan, and Rabab Ward. "Face recognition under pose variations." Journal of the Franklin Institute 343, no. 6 (September 2006): 596–613. http://dx.doi.org/10.1016/j.jfranklin.2006.08.006.

2

Benjamin Dias, M., and Bernard F. Buxton. "Separating shape and pose variations." Image and Vision Computing 22, no. 10 (September 2004): 851–61. http://dx.doi.org/10.1016/j.imavis.2004.02.002.

3

Palese, Marcella. "Variations by generalized symmetries of local Noether strong currents equivalent to global canonical Noether currents." Communications in Mathematics 24, no. 2 (December 1, 2016): 125–35. http://dx.doi.org/10.1515/cm-2016-0009.

Abstract:
We will pose the inverse problem question within the Krupka variational sequence framework. In particular, the interplay of inverse problems with symmetry and invariance properties will be exploited, considering that the cohomology class of the variational Lie derivative of an equivalence class of forms, closed in the variational sequence, is trivial. We will focus on the case of symmetries of globally defined field equations which are only locally variational and prove that variations of local Noether strong currents are variationally equivalent to global canonical Noether currents. Variations, taken to be generalized symmetries and also belonging to the kernel of the second variational derivative of the local problem, generate canonical Noether currents, associated with variations of local Lagrangians, which in particular turn out to be conserved along any section. We also characterize the variation of the canonical Noether currents associated with a local variational problem.
4

Chen, Shaokang, Brian C. Lovell, and Ting Shan. "Robust Adapted Principal Component Analysis for Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 03 (May 2009): 491–520. http://dx.doi.org/10.1142/s0218001409007284.

Abstract:
Recognizing faces with uncontrolled pose, illumination, and expression is a challenging task due to the fact that features insensitive to one variation may be highly sensitive to the other variations. Existing techniques dealing with just one of these variations are very often unable to cope with the other variations. The problem is even more difficult in applications where only one gallery image per person is available. In this paper, we describe a recognition method, Adapted Principal Component Analysis (APCA), that can simultaneously deal with large variations in both illumination and facial expression using only a single gallery image per person. We have now extended this method to handle head pose variations in two steps. The first step is to apply an Active Appearance Model (AAM) to the non-frontal face image to construct a synthesized frontal face image. The second is to use APCA for classification robust to lighting and pose. The proposed technique is evaluated on three public face databases — Asian Face, Yale Face, and FERET Database — with images under different lighting conditions, facial expressions, and head poses. Experimental results show that our method performs much better than other recognition methods including PCA, FLD, PRM and LTP. More specifically, we show that by using AAM for frontal face synthesis from high pose angle faces, the recognition rate of our APCA method increases by up to a factor of 4.
5

Osuna, Isaac Aaron Rodriguez, Pablo Cobelli, and Nahuel Olaiz. "Bubble Formation in Pulsed Electric Field Technology May Pose Limitations." Micromachines 13, no. 8 (July 31, 2022): 1234. http://dx.doi.org/10.3390/mi13081234.

Abstract:
Currently, pulsed electric fields (PEF) are increasingly employed to improve a person's quality of life. This technology is based on the application of very short high-voltage electrical pulses, which increase cell membrane permeability. When applying these pulses, an unwanted side effect is electrolysis, which could alter the treatment. This work focused on the study of the local variations of the electric field and current density around the bubbles formed by the electrolysis of water in PEF technology, and how these variations alter the electroporation protocol. The assays in the present work were carried out at 2 kV/cm, 1.2 kV/cm and 0.6 kV/cm in water, adjusting the conductivity with NaCl to 2365 μS/cm, with a single pulse of 800 μs. Measurements of the bubble diameter variations due to electrolysis as a function of time allowed us to develop an experimental model of the behavior of the bubble diameter vs. time, which was used for simulation purposes. In the in silico model, we calculated the electric field and observed that the current density around the bubble can increase to up to four times the base value due to the edge effect around it, while the thermal effects were negligible due to the short duration of the pulses (variations of ±0.1 °C). This research revealed that the rise of electric current is not just caused by the shift in electrical conductivity due to chemical and thermal effects, but also varies with the bubble coverage of the electrode surface and variations in the local electric field caused by the edge effect. All these variations can lead to unwanted limitations on PEF treatment. In the future, we recommend tests on the variation of local current conductivity and electric fields.
6

Shanmuganathan, M., and T. Nalini. "Face Recognition using Nearest Neighbour and Nearest Mean Classification Framework: Empirical Analysis, Conclusions and Future Directions." Journal of Physics: Conference Series 2251, no. 1 (April 1, 2022): 012010. http://dx.doi.org/10.1088/1742-6596/2251/1/012010.

Abstract:
Human face recognition algorithms have made huge progress in the last decade. This manuscript presents an approach to implementing a face recognition system that copes with variations in pose, scale, lighting, and age. Different empirical analyses were performed with various datasets for face detection and face identification; the face identification system efficiently detects, segments, and recognizes faces in cluttered sequences under varying pose, lighting, and age. Three methods are proposed as efficient ways of handling the face recognition problem. In the experimental analysis, the morphological model outperformed the k-NNC and NMC-based nearest mean classifiers as well as the proposed informative knowledge distillation method, with fairly reasonable accuracy. The morphological method is suitable for large datasets with occlusion, pose variation, age variation, and differing facial expressions.
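
Since the entry above compares a k-NNC against an NMC-based nearest mean classifier, a minimal sketch of those two baselines may be useful. It assumes face images have already been reduced to fixed-length feature vectors; the feature extraction step is not shown, and the toy data below are random stand-ins.

```python
import numpy as np

def nearest_neighbour_classify(train_X, train_y, query):
    """1-NN: return the label of the single closest training vector."""
    dists = np.linalg.norm(train_X - query, axis=1)
    return train_y[int(np.argmin(dists))]

def nearest_mean_classify(train_X, train_y, query):
    """NMC: return the label whose class mean (centroid) is closest."""
    labels = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(centroids - query, axis=1)
    return labels[int(np.argmin(dists))]

# Toy usage with random 64-dimensional "face features".
rng = np.random.default_rng(0)
train_X = rng.normal(size=(40, 64))
train_y = np.repeat(np.arange(8), 5)        # 8 identities, 5 samples each
query = train_X[3] + 0.05 * rng.normal(size=64)
print(nearest_neighbour_classify(train_X, train_y, query),
      nearest_mean_classify(train_X, train_y, query))
```
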
7

Suma, K., et al. "Dense Feature Based Face Recognition from Surveillance Video using Convolutional Neural Network." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 5 (April 11, 2021): 1436–49. http://dx.doi.org/10.17762/turcomat.v12i5.2040.

Abstract:
Face recognition is the task of identifying a person from facial features and has a wide range of applications in security, human-computer interaction, finance, etc. In recent years, many researchers have developed algorithms to identify faces under various illumination variations and pose variations, but these two problems remain unsolved in the face recognition (FR) field. The Local Binary Pattern (LBP) has already proved its robustness to illumination variation. This paper proposes a four-patch Local Binary Pattern based FR method utilizing a Convolutional Neural Network (CNN) for identifying facial images under various illumination conditions and pose variations.
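
For orientation, the basic 8-neighbour, radius-1 LBP code that the four-patch variant builds on can be computed in a few lines. This is a generic sketch, not the authors' four-patch descriptor, and the patch-histogram step is only indicated in a comment.

```python
import numpy as np

def lbp_8_1(gray):
    """Basic 3x3 LBP: threshold the 8 neighbours of each interior pixel
    against the centre pixel and pack the comparison bits into one byte."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code  # one 0..255 code per interior pixel

img = (np.arange(64).reshape(8, 8) * 11) % 256
codes = lbp_8_1(img)
# A face descriptor is then typically a concatenation of per-patch
# histograms of these codes, e.g. np.histogram(codes, bins=256)[0].
print(codes.shape)
```
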
8

Hu, Yuan, Jingqi Yan, Wei Li, and Pengfei Shi. "3D Face Landmarking Method under Pose and Expression Variations." IEICE Transactions on Information and Systems E94-D, no. 3 (2011): 729–33. http://dx.doi.org/10.1587/transinf.e94.d.729.

9

Drira, Hassen, Boulbaba Ben Amor, A. Srivastava, M. Daoudi, and R. Slama. "3D Face Recognition under Expressions, Occlusions, and Pose Variations." IEEE Transactions on Pattern Analysis and Machine Intelligence 35, no. 9 (September 2013): 2270–83. http://dx.doi.org/10.1109/tpami.2013.48.

10

Zhou, Shaohua Kevin, and Rama Chellappa. "Image-based face recognition under illumination and pose variations." Journal of the Optical Society of America A 22, no. 2 (February 1, 2005): 217. http://dx.doi.org/10.1364/josaa.22.000217.

11

Baddar, Wissam J., and Yong Man Ro. "Mode Variational LSTM Robust to Unseen Modes of Variation: Application to Facial Expression Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3215–23. http://dx.doi.org/10.1609/aaai.v33i01.33013215.

Abstract:
Spatio-temporal feature encoding is essential for encoding the dynamics in video sequences. Recurrent neural networks, particularly long short-term memory (LSTM) units, have been popular as an efficient tool for encoding spatio-temporal features in sequences. In this work, we investigate the effect of mode variations on the encoded spatio-temporal features using LSTMs. We show that the LSTM retains information related to the mode variation in the sequence, which is irrelevant to the task at hand (e.g. classifying facial expressions). In fact, the LSTM forget mechanism is not robust enough to mode variations and preserves information that could negatively affect the encoded spatio-temporal features. We propose the mode variational LSTM to encode spatio-temporal features robust to unseen modes of variation. The mode variational LSTM modifies the original LSTM structure by adding an additional cell state that focuses on encoding the mode variation in the input sequence. To efficiently regulate what features should be stored in the additional cell state, additional gating functionality is also introduced. The effectiveness of the proposed mode variational LSTM is verified using the facial expression recognition task. Comparative experiments on publicly available datasets verified that the proposed mode variational LSTM outperforms existing methods. Moreover, a new dynamic facial expression dataset with different modes of variation, including various modes like pose and illumination variations, was collected to comprehensively evaluate the proposed mode variational LSTM. Experimental results verified that the proposed mode variational LSTM encodes spatio-temporal features robust to unseen modes of variation.
12

Ye, Yingsheng, Xingming Zhang, and Wing W. Y. Ng. "Color Distribution Pattern Metric for Person Reidentification." Wireless Communications and Mobile Computing 2017 (2017): 1–11. http://dx.doi.org/10.1155/2017/4089505.

Abstract:
Accompanying the growth of surveillance infrastructures, surveillance IP cameras mount up rapidly, crowding Internet of Things (IoT) with countless surveillance frames and increasing the need of person reidentification (Re-ID) in video searching for surveillance and forensic fields. In real scenarios, performance of current proposed Re-ID methods suffers from pose and viewpoint variations due to feature extraction containing background pixels and fixed feature selection strategy for pose and viewpoint variations. To deal with pose and viewpoint variations, we propose the color distribution pattern metric (CDPM) method, employing color distribution pattern (CDP) for feature representation and SVM for classification. Different from other methods, CDP does not extract features over a certain number of dense blocks and is free from varied pedestrian image resolutions and resizing distortion. Moreover, it provides more precise features with less background influences under different body types, severe pose variations, and viewpoint variations. Experimental results show that our CDPM method achieves state-of-the-art performance on both 3DPeS dataset and ImageLab Pedestrian Recognition dataset with 68.8% and 79.8% rank 1 accuracy, respectively, under the single-shot experimental setting.
13

Li, Deshi, and Xiaoliang Wang. "An Analytical Measuring Rectification Algorithm of Monocular Systems in Dynamic Environment." Journal of Sensors 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/4132721.

Abstract:
Range estimation is crucial for maintaining a safe distance, in particular for vision navigation and localization. Monocular autonomous vehicles are appropriate for outdoor environments due to their mobility and operability. However, accurate range estimation using a vision system is challenging because of the nonholonomic dynamics and susceptibility of vehicles. In this paper, a measuring rectification algorithm for range estimation under shaking conditions is designed. The proposed method focuses on how to estimate range using monocular vision when a shake occurs, and the algorithm only requires the pose variations of the camera to be acquired. Simultaneously, it solves the problem of how to assimilate results from different kinds of sensors. To eliminate measuring errors caused by shakes, we establish a pose-range variation model. Afterwards, the algebraic relation between the distance increment and the camera's pose variation is formulated. The pose variations are presented in the form of roll, pitch, and yaw angle changes to evaluate the pixel coordinate increment. To demonstrate the superiority of the proposed algorithm, the approach is validated in a laboratory environment using Pioneer 3-DX robots. The experimental results demonstrate that the proposed approach improves the range accuracy significantly.
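
The pose variation that this rectification model consumes is a composition of roll, pitch, and yaw changes of the camera. A small sketch of that single building block, under the usual Z-Y-X (yaw-pitch-roll) convention; the paper's full pose-range model is not reproduced here.

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Compose a 3x3 rotation matrix from roll (x), pitch (y), yaw (z),
    applied in Z * Y * X order; angles are in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# A point expressed in the camera frame before the shake...
p = np.array([0.0, 0.0, 5.0])
# ...re-expressed after a small roll/pitch/yaw disturbance of the camera.
R = rotation_from_rpy(np.radians(1.5), np.radians(-2.0), np.radians(0.5))
print(R @ p)
```
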
14

Gunawan, Alexander Agung Santoso, and Reza A. Prasetyo. "Face Recognition Performance in Facing Pose Variation." CommIT (Communication and Information Technology) Journal 11, no. 1 (August 1, 2017): 1. http://dx.doi.org/10.21512/commit.v11i1.1847.

Abstract:
There are many real-world applications of face recognition which require good performance in uncontrolled environments, such as social networking and environment surveillance. However, much face recognition research is done in controlled situations. Compared to controlled environments, face recognition in uncontrolled environments comprises more variation, for example in pose, light intensity, and expression. Therefore, face recognition in uncontrolled conditions is more challenging than in controlled settings. In this research, we discuss handling pose variations in face recognition. We address the representation issue using multi-pose face detection based on the yaw angle movement of the head, as an extension of existing frontal face recognition using Principal Component Analysis (PCA). Then, the matching issue is solved by using the Euclidean distance. This combination is known as the Eigenfaces method. The experiment is done with different yaw angles and different threshold values to get optimal results. The experimental results show that: (i) the more pose variations of the face images used as training data, the better the recognition results, although processing time also increases; and (ii) the lower the threshold value, the harder it is to recognize a face image, but the accuracy increases.
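
A minimal sketch of the Eigenfaces pipeline described above: PCA on flattened gallery images, projection of gallery and probe into the eigenface subspace, and Euclidean-distance matching with an optional rejection threshold. The random arrays below stand in for real face images.

```python
import numpy as np

def fit_eigenfaces(gallery, n_components=20):
    """PCA on flattened gallery images via SVD; returns mean and basis."""
    mean = gallery.mean(axis=0)
    _, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, vt[:n_components]           # rows of vt are the eigenfaces

def match(gallery, labels, probe, mean, basis, threshold=None):
    """Project gallery and probe, return the label of the nearest gallery
    image by Euclidean distance (or None if above the threshold)."""
    g = (gallery - mean) @ basis.T
    p = (probe - mean) @ basis.T
    dists = np.linalg.norm(g - p, axis=1)
    best = int(np.argmin(dists))
    if threshold is not None and dists[best] > threshold:
        return None                           # rejected as unknown
    return labels[best]

rng = np.random.default_rng(1)
gallery = rng.random((30, 64 * 64))           # 30 flattened 64x64 face images
labels = np.arange(30)
mean, basis = fit_eigenfaces(gallery)
probe = gallery[7] + 0.01 * rng.random(64 * 64)
print(match(gallery, labels, probe, mean, basis))
```
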
15

Zhang, Zhenduo, Yongru Chen, Wenming Yang, Guijin Wang, and Qingmin Liao. "Pose-Invariant Face Recognition via Adaptive Angular Distillation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 3390–98. http://dx.doi.org/10.1609/aaai.v36i3.20249.

Abstract:
Pose-invariant face recognition is a practically useful but challenging task. This paper introduces a novel method to learn pose-invariant feature representation without normalizing profile faces to frontal ones or learning disentangled features. We first design a novel strategy to learn pose-invariant feature embeddings by distilling the angular knowledge of frontal faces extracted by teacher network to student network, which enables the handling of faces with large pose variations. In this way, the features of faces across variant poses can cluster compactly for the same person to create a pose-invariant face representation. Secondly, we propose a Pose-Adaptive Angular Distillation loss to mitigate the negative effect of uneven distribution of face poses in the training dataset to pay more attention to the samples with large pose variations. Extensive experiments on two challenging benchmarks (IJB-A and CFP-FP) show that our approach consistently outperforms the existing methods.
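
The core of the angular distillation idea above is a cosine-similarity constraint between teacher (frontal) and student (profile) embeddings. A hedged numpy sketch follows; the per-sample `weights` argument only gestures at the paper's pose-adaptive weighting and is an assumption here.

```python
import numpy as np

def angular_distillation_loss(student, teacher, weights=None):
    """1 - cosine similarity between L2-normalised student and teacher
    embeddings, optionally re-weighted per sample (e.g. by pose)."""
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)
    per_sample = 1.0 - np.sum(s * t, axis=1)
    if weights is None:
        weights = np.ones(len(per_sample))
    return float(np.sum(weights * per_sample) / np.sum(weights))

rng = np.random.default_rng(2)
teacher = rng.normal(size=(8, 128))                   # frontal-face embeddings
student = teacher + 0.1 * rng.normal(size=(8, 128))   # profile-face embeddings
pose_weights = np.array([1, 1, 2, 2, 3, 3, 4, 4])     # larger pose, larger weight
print(angular_distillation_loss(student, teacher, pose_weights))
```
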
16

Prędota, Stanisław. "Over Vlaams-Nederlandse woordenboeken." Werkwinkel 9, no. 1 (July 17, 2014): 91–105. http://dx.doi.org/10.2478/werk-2014-0006.

Abstract:
The contemporary Dutch language belongs to the European multi-centered languages and has three variations: Dutch of the Kingdom of the Netherlands, Dutch in Northern Belgium, and Dutch in Surinam. There are differences among the above variations which mainly regard pronunciation and lexicon. The Flemish and Surinam variations pose a great challenge, especially for translators of Flemish and Surinam literature. Similarly, they also pose a significant theoretical and practical problem for the authors of monolingual and bilingual dictionaries of the Dutch language. Contemporary lexicography attempts to register the differences which one can find between the standard Dutch language and its Northern Belgium variation, as well as its Surinam variation. It needs to be noted that lexicographers so far have been paying much attention to lexical differences between Dutch of the Kingdom of the Netherlands and Dutch of Northern Belgium. In this paper, four printed Flemish-Dutch dictionaries and one online dictionary are described; we also characterize the Prisma Handwoordenboek Nederlands met onderscheid tussen het Belgisch-Nederlands en Nederlands-Nederlands met medewerking van W. Martin en W. Smedts.
17

Abayomi-Alli, A., E. O. Omidiora, S. O. Olabiyisi, J. A. Ojo, and A. Y. Akingboye. "Blackface Surveillance Camera Database for Evaluating Face Recognition in Low Quality Scenarios." Journal of Natural Sciences Engineering and Technology 15, no. 2 (November 22, 2017): 13–31. http://dx.doi.org/10.51406/jnset.v15i2.1668.

Abstract:
Many face recognition algorithms perform poorly in real-life surveillance scenarios because they were tested with datasets that are already biased towards high quality images and certain ethnic or racial types. In this paper, a black face surveillance camera (BFSC) database is described, which was collected from four low quality cameras and a professional camera. From fifty (50) random volunteers, 2,850 images were collected for the frontal mugshot, surveillance (visible light), surveillance (IR night vision), and pose variation datasets. Images were taken at distances of 3.4, 2.4, and 1.4 metres from the camera, while the pose variation images were taken at nine distinct pose angles in increments of 22.5 degrees to the left and right of the subject. Three Face Recognition Algorithms (FRAs), the commercially available Luxand SDK, Principal Component Analysis (PCA), and Linear Discriminant Analysis (LDA), were evaluated for performance comparison in low quality scenarios. The results obtained show that camera quality (resolution), face-to-camera distance, average recognition time, lighting conditions, and pose variations all affect the performance of FRAs. Luxand SDK, PCA and LDA returned overall accuracies of 97.5%, 93.8% and 92.9%, respectively, after categorizing the BFSC images into excellent, good and acceptable quality scales.
18

Champagne, Zachary M., Robert Schoen, and Claire M. Riddell. "Variations in Both-Addends-Unknown Problems." Teaching Children Mathematics 21, no. 2 (September 2014): 114–21. http://dx.doi.org/10.5951/teacchilmath.21.2.0114.

Abstract:
Early elementary school students are expected to solve twelve distinct types of word problems. A math researcher and two teachers pose a structure for thinking about one problem type that has not been studied as closely as the other eleven.
19

Wang, Wen-Yao, Hong-Qing Cai, Si-Yuan Qu, Wei-Hao Lin, Cheng-Cheng Liang, Hao Liu, Ze-Xiong Xie, and Ying-Jin Yuan. "Genomic Variation-Mediating Fluconazole Resistance in Yeast." Biomolecules 12, no. 6 (June 17, 2022): 845. http://dx.doi.org/10.3390/biom12060845.

Abstract:
Fungal infections pose a serious and growing threat to public health. These infections can be treated with antifungal drugs by killing hazardous fungi in the body. However, the resistance can develop over time when fungi are exposed to antifungal drugs by generating genomic variations, including mutation, aneuploidy, and loss of heterozygosity. The variations could reduce the binding affinity of a drug to its target or block the pathway through which drugs exert their activity. Here, we review genomic variation-mediating fluconazole resistance in the yeast Candida, with the hope of highlighting the functional consequences of genomic variations for the antifungal resistance.
20

Mudunuri, Sivaram Prasad, and Soma Biswas. "Low Resolution Face Recognition Across Variations in Pose and Illumination." IEEE Transactions on Pattern Analysis and Machine Intelligence 38, no. 5 (May 1, 2016): 1034–40. http://dx.doi.org/10.1109/tpami.2015.2469282.

21

Wang, Chao, Yongping Li, and Xubo Song. "Video-to-video face authentication system robust to pose variations." Expert Systems with Applications 40, no. 2 (February 2013): 722–35. http://dx.doi.org/10.1016/j.eswa.2012.08.009.

22

Ekundayo, Jamiu M., Reza Rezaee, and Chunyan Fan. "Measurement of gas contents in shale reservoirs – impact of gas density and implications for gas resource estimates." APPEA Journal 61, no. 2 (2021): 606. http://dx.doi.org/10.1071/aj20177.

Abstract:
Gas shale reservoirs pose unique measurement challenges due to their ultra-low petrophysical properties and complicated pore structures. A small variation in an experimental parameter, under high-pressure conditions, may result in huge discrepancies in gas contents and the resource estimates derived from such data. This study illustrates the impact of the equation of state on the gas content determined for a shale sample. The gas content was determined from laboratory-measured high-pressure methane adsorption isotherms and theoretically described by a hybrid-type model. The modelling involved the use of the Dubinin–Radushkevich isotherm to obtain the adsorbed-phase density, followed by the Langmuir isotherm to describe the resultant absolute adsorption. Significant variations were observed in the measured adsorption isotherms due to the variations in gas densities calculated from different equations of state. The model parameters and the gas in-place volumes estimated from those parameters also varied significantly.
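
For context, the two isotherms named in this entry have standard closed forms. The sketch below uses common notation and is not taken verbatim from the paper; in particular, the density-ratio form of the adsorption potential is one frequently used variant for supercritical methane.

```latex
% Dubinin--Radushkevich isotherm, here used to estimate the adsorbed-phase density:
\[
  n_{\mathrm{abs}} = n_{0}\,
    \exp\!\left[-\left(\frac{RT}{E}\,
      \ln\frac{\rho_{\mathrm{ads}}}{\rho_{\mathrm{gas}}}\right)^{2}\right]
\]
% Langmuir isotherm describing absolute adsorption as a function of pressure:
\[
  n_{\mathrm{abs}} = n_{L}\,\frac{P}{P_{L}+P}
\]
% Measured (excess) adsorption relates to absolute adsorption through the gas and
% adsorbed-phase densities; this is where the chosen equation of state enters:
\[
  n_{\mathrm{ex}} = n_{\mathrm{abs}}\left(1-\frac{\rho_{\mathrm{gas}}}{\rho_{\mathrm{ads}}}\right)
\]
```
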
23

Zhang, Hua, Li Jia Wang, Zhen Jie Wang, and Wei Yi Yuan. "View-Invariant Face Detection for Colorful Image." Advanced Materials Research 945-949 (June 2014): 1880–84. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1880.

Abstract:
To overcome illumination changes and pose variations, a pose-invariant face detection method is presented. First, an illumination compensation method based on a reference white is presented to overcome lighting variations. The reference white is obtained from the Y component of the YCbCr color space. Then, a mixture face model is constructed from the Cb and Cr components of the YCbCr color space and the H component of the HSV color space to extract faces from color images. Finally, an eye model is designed to locate the eyes in the obtained face images, which ultimately distinguishes the face from the neck and arms. The presented method is evaluated on the CASIA face database. The experimental results have shown that the method is robust to pose changes and illumination variations and achieves good performance.
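
A minimal sketch of the colour-space step this detector relies on: converting RGB to YCbCr (full-range BT.601) and thresholding Cb/Cr to a commonly quoted skin range. The exact thresholds and the H-channel test from the paper are not reproduced; the ranges below are illustrative assumptions.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Full-range BT.601 conversion; img is float RGB in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of candidate skin pixels based on Cb/Cr thresholds."""
    ycbcr = rgb_to_ycbcr(img.astype(np.float64))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.dstack([np.full((4, 4), 200.0),   # R
                 np.full((4, 4), 140.0),   # G
                 np.full((4, 4), 120.0)])  # B (a skin-like tone)
print(skin_mask(img))
```
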
24

Wen, Xiaoyu, Juxiang Zhou, Jianhou Gan, and Sen Luo. "A discriminative multiscale feature extraction network for facial expression recognition in the wild." Measurement Science and Technology 35, no. 4 (January 4, 2024): 045005. http://dx.doi.org/10.1088/1361-6501/ad191c.

Abstract:
Driven by advancements in deep learning technologies, substantial progress has been achieved in the field of facial expression recognition over the past decade, although challenges brought about by occlusions, pose variations and subtle expression differences in unconstrained (wild) scenarios remain. Therefore, a novel multiscale feature extraction method is proposed in this paper that leverages convolutional neural networks to simultaneously extract deep semantic features and shallow geometric features. Through the mechanism of channel-wise self-attention, prominent features are further extracted and compressed, preserving advantageous features for distinction and thereby reducing the impact of occlusions and pose variations on expression recognition. Meanwhile, inspired by the large cosine margin concept used in face recognition, a center cosine loss function is proposed to avoid the misclassification caused by the underlying interclass similarity and substantial intra-class feature variations in the task of expression recognition. This function is designed to enhance the classification performance of the network by making the distribution of samples within the same class more compact and that between different classes sparser. The proposed method is benchmarked against several advanced baseline models on three mainstream wild datasets and two datasets that present realistic occlusion and pose variation challenges. Accuracies of 89.63%, 61.82%, and 91.15% are achieved on RAF-DB, AffectNet and FERPlus, respectively, demonstrating the greater robustness and reliability of this method compared to the state-of-the-art alternatives in the real world.
25

Zhao, Gaopeng, Sixiong Xu, and Yuming Bo. "LiDAR-Based Non-Cooperative Tumbling Spacecraft Pose Tracking by Fusing Depth Maps and Point Clouds." Sensors 18, no. 10 (October 12, 2018): 3432. http://dx.doi.org/10.3390/s18103432.

Abstract:
How to determine the relative pose between the chaser spacecraft and the high-speed tumbling target spacecraft at close range, which is an essential step in space proximity missions, is very challenging. This paper proposes a LiDAR-based pose tracking method by fusing depth maps and point clouds. The key point is to estimate the roll angle variation in adjacent sensor data by using line detection and matching in depth maps. The simplification of the adaptive voxelized grid point cloud based on the real-time relative position is adopted in order to satisfy the real-time requirement in the approaching process. In addition, the Iterative Closest Point algorithm is used to align the simplified sparse point cloud with the known target model point cloud in order to obtain the relative pose. Numerical experiments, which simulate the typical tumbling motion of the target and the approaching process, are performed to demonstrate the method. The experimental results show that the method is capable of estimating the real-time 6-DOF relative pose and dealing with large pose variations.
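
The alignment step mentioned above, the Iterative Closest Point algorithm, amounts to repeating a nearest-neighbour correspondence search followed by a rigid fit. A compact sketch of one such iteration using the SVD-based (Kabsch) fit; this is the generic algorithm, not the paper's adaptive-voxel variant.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp_step(src, model):
    """One ICP iteration: match each src point to its nearest model point,
    then fit and apply the rigid transform."""
    d = np.linalg.norm(src[:, None, :] - model[None, :, :], axis=2)
    matched = model[np.argmin(d, axis=1)]
    R, t = best_rigid_transform(src, matched)
    return src @ R.T + t, R, t

# Toy check: align a slightly rotated and shifted copy back onto the model.
rng = np.random.default_rng(3)
model = rng.random((50, 3))
theta = np.radians(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = model @ R_true.T + np.array([0.05, -0.02, 0.03])
for _ in range(20):
    src, _, _ = icp_step(src, model)
print(np.abs(src - model).max())             # should shrink towards 0
```
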
26

Akhtar, Zahid, Ajita Rattani, and Gian Luca Foresti. "Temporal Analysis Of Adaptive Face Recognition." Journal of Artificial Intelligence and Soft Computing Research 4, no. 4 (October 1, 2014): 243–55. http://dx.doi.org/10.1515/jaiscr-2015-0012.

Abstract:
Aging has profound effects on facial biometrics as it causes changes in shape and texture. However, aging remains an under-studied problem in comparison to facial variations due to pose, illumination and expression changes. A commonly adopted solution in the state-of-the-art is virtual template synthesis for aging and de-aging transformations involving complex 3D modelling techniques. These methods are also prone to estimation errors in the synthesis. Another viable solution is to continuously adapt the template to the temporal variation (ageing) of the query data. Though the efficacy of template update procedures has been proven for expression, lighting and pose variations, the use of template update for facial aging has not received much attention so far. Therefore, this paper first analyzes the performance of existing baseline facial representations, based on local features, under the ageing effect, and then investigates the use of template update procedures for temporal variance due to the facial age progression process. Experimental results on the FGNET and MORPH aging databases using the commercial VeriLook face recognition engine demonstrate that continuous template updating is an effective and simple way to adapt to variations due to the aging process.
27

Yu, Yang, Shaoting Zhang, Fei Yang, and Dimitris Metaxas. "Multi-Pose and Occluded Facial Landmark Localization Via Sparse Shape Representation." International Journal on Artificial Intelligence Tools 24, no. 04 (August 2015): 1540019. http://dx.doi.org/10.1142/s0218213015400199.

Abstract:
Automatic facial landmark localization is a challenging problem for real world images because of face pose variations and occlusions. This paper proposes a unified framework to robustly locate facial landmarks under different poses and occlusions. Instead of explicitly modeling the statistical point distribution, we use a sparse linear combination to approximate the observed shape, and hence alleviate the multi-pose problem. In addition, we use the sparsity constraint to handle outliers caused by occlusions. We also model the initial misalignment and use convex optimization techniques to solve them simultaneously and efficiently. We evaluated the proposed method extensively on both synthetic and real data. The experimental results are promising on handling pose variations and occlusions.
28

Liang, Xiao, Masahiro Hirano, and Yuji Yamakawa. "Real-Time Marker-Based Tracking and Pose Estimation for a Rotating Object Using High-Speed Vision." Journal of Robotics and Mechatronics 34, no. 5 (October 20, 2022): 1063–72. http://dx.doi.org/10.20965/jrm.2022.p1063.

Abstract:
Object tracking and pose estimation have always been challenging tasks in robotics, particularly for rotating objects. Rotating objects move quickly and with complex pose variations. In this study, we introduce a marker-based tracking and pose estimation method for rotating objects using a high-speed vision system. The method can obtain pose information at frequencies greater than 500 Hz, and can still estimate the pose when parts of the markers are lost during tracking. A robot catching experiment shows that the accuracy and frequency of this system are capable of high-speed tracking tasks.
29

Alrjebi, Mustafa M., Wanquan Liu, and Ling Li. "Face recognition against pose variations using multi-resolution multiple colour fusion." International Journal of Machine Intelligence and Sensory Signal Processing 1, no. 4 (2016): 304. http://dx.doi.org/10.1504/ijmissp.2016.085269.

30

Alrjebi, Mustafa M., Wanquan Liu, and Ling Li. "Face recognition against pose variations using multi-resolution multiple colour fusion." International Journal of Machine Intelligence and Sensory Signal Processing 1, no. 4 (2016): 304. http://dx.doi.org/10.1504/ijmissp.2016.10006096.

31

Tai, Ying, Jian Yang, Yigong Zhang, Lei Luo, Jianjun Qian, and Yu Chen. "Face Recognition With Pose Variations and Misalignment via Orthogonal Procrustes Regression." IEEE Transactions on Image Processing 25, no. 6 (June 2016): 2673–83. http://dx.doi.org/10.1109/tip.2016.2551362.

32

Jo, Jaeik, Heeseung Choi, Ig-Jae Kim, and Jaihie Kim. "Single-view-based 3D facial reconstruction method robust against pose variations." Pattern Recognition 48, no. 1 (January 2015): 73–85. http://dx.doi.org/10.1016/j.patcog.2014.07.013.

33

Choi, Sang-Il, Chong-Ho Choi, and Nojun Kwak. "Face recognition based on 2D images under illumination and pose variations." Pattern Recognition Letters 32, no. 4 (March 2011): 561–71. http://dx.doi.org/10.1016/j.patrec.2010.11.021.

34

Naser, Omer Abdulhaleem, Sharifah Mumtazah Syed Ahmad, Khairulmizam Samsudin, and Marsyita Hanafi. "Investigating the Impact of Yaw Pose Variation on Facial Recognition Performance." Advances in Artificial Intelligence and Machine Learning 03, no. 02 (2023): 1039–55. http://dx.doi.org/10.54364/aaiml.2023.1162.

Abstract:
Facial recognition systems often struggle with detecting faces in poses that deviate from the frontal view. Therefore, this paper investigates the impact of variations in yaw poses on the accuracy of facial recognition systems and presents a robust approach optimized to detect faces with pose variations ranging from 0° to ±90°. The proposed system integrates MTCNN, FaceNet, and SVC, and is trained and evaluated on the Taiwan dataset, which includes face images with diverse yaw poses. The training dataset consists of 89 subjects, with approximately 70 images per subject, and the testing dataset consists of 49 subjects, each with approximately 5 images. Our system achieved a training accuracy of 99.174% and a test accuracy of 96.970%, demonstrating its efficiency in detecting faces with pose variations. These findings suggest that the proposed approach can be a valuable tool in improving facial recognition accuracy in real-world scenarios.
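
The pipeline in this entry (a detector, a face-embedding network, then a linear classifier) can be written down in a few lines once detection and embedding are available. In the hedged sketch below, `detect_and_align` and `embed` are placeholders standing in for MTCNN and FaceNet rather than real APIs; only the SVC part uses an actual library call (scikit-learn).

```python
import numpy as np
from sklearn.svm import SVC

def detect_and_align(image):
    """Placeholder for an MTCNN-style detector returning an aligned face crop."""
    raise NotImplementedError

def embed(face_crop):
    """Placeholder for a FaceNet-style network returning a fixed-length embedding."""
    raise NotImplementedError

def train_classifier(images, labels):
    """Embed every training face and fit a linear SVC on the embeddings."""
    X = np.stack([embed(detect_and_align(img)) for img in images])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(X, labels)
    return clf

def identify(clf, image):
    """Return the predicted identity and its probability for one image."""
    x = embed(detect_and_align(image)).reshape(1, -1)
    return clf.predict(x)[0], clf.predict_proba(x).max()
```
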
35

Zhang, La, Haiyun Guo, Kuan Zhu, Honglin Qiao, Gaopan Huang, Sen Zhang, Huichen Zhang, Jian Sun, and Jinqiao Wang. "Hybrid Modality Metric Learning for Visible-Infrared Person Re-Identification." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 1s (February 28, 2022): 1–15. http://dx.doi.org/10.1145/3473341.

Abstract:
Visible-infrared person re-identification (Re-ID) has received increasing research attention for its great practical value in night-time surveillance scenarios. Due to the large variations in person pose, viewpoint, and occlusion in the same modality, as well as the domain gap brought by heterogeneous modality, this hybrid modality person matching task is quite challenging. Different from the metric learning methods for visible person re-ID, which only pose similarity constraints on class level, an efficient metric learning approach for visible-infrared person Re-ID should take both the class-level and modality-level similarity constraints into full consideration to learn sufficiently discriminative and robust features. In this article, the hybrid modality is divided into two types, within modality and cross modality. We first fully explore the variations that hinder the ranking results of visible-infrared person re-ID and roughly summarize them into three types: within-modality variation, cross-modality modality-related variation, and cross-modality modality-unrelated variation. Then, we propose a comprehensive metric learning framework based on four kinds of paired-based similarity constraints to address all the variations within and cross modality. This framework focuses on both class-level and modality-level similarity relationships between person images. Furthermore, we demonstrate the compatibility of our framework with any paired-based loss functions by giving a detailed implementation of combining it with triplet loss and contrastive loss separately. Finally, extensive experiments of our approach on SYSU-MM01 and RegDB demonstrate the effectiveness and superiority of our proposed metric learning framework for visible-infrared person Re-ID.
36

Beham, M. Parisa, S. M. Mansoor Roomi, J. Alageshan, and V. Kapileshwaran. "Performance Analysis of Pose Invariant Face Recognition Approaches in Unconstrained Environments." International Journal of Computer Vision and Image Processing 5, no. 1 (January 2015): 66–81. http://dx.doi.org/10.4018/ijcvip.2015010104.

Abstract:
Face recognition and authentication are two significant and dynamic research issues in computer vision applications. There are many factors that should be accounted for in face recognition; among them, pose variation is a major challenge which severely influences the performance of face recognition. In order to improve the performance, several research methods have been developed to perform face recognition under pose-invariant conditions in constrained and unconstrained environments. In this paper, the authors analyze the performance of popular texture descriptors, viz. Local Binary Pattern, Local Derivative Pattern and Histograms of Oriented Gradients, for the pose-invariant problem. State-of-the-art preprocessing techniques such as the Discrete Cosine Transform, Difference of Gaussians, Multi-Scale Retinex and Gradientface have also been applied before feature extraction. In the recognition phase, a K-nearest neighbor classifier is used to accomplish the classification task. To evaluate the efficiency of the pose-invariant face recognition algorithms, three publicly available databases, viz. the UMIST, ORL and LFW datasets, have been used. These databases have very wide pose variations, and it is shown that the state-of-the-art methods are efficient only in constrained situations.
37

Zulkarnain, Syavira Tiara, and Nanik Suciati. "Selective local binary pattern with convolutional neural network for facial expression recognition." International Journal of Electrical and Computer Engineering (IJECE) 12, no. 6 (December 1, 2022): 6724. http://dx.doi.org/10.11591/ijece.v12i6.pp6724-6735.

Abstract:
Variation in images in terms of head pose and illumination is a challenge in facial expression recognition. This research presents a hybrid approach that combines conventional and deep learning to improve facial expression recognition performance and aims to solve this challenge. We propose a selective local binary pattern (SLBP) method to obtain a more stable image representation fed to the learning process in a convolutional neural network (CNN). In the preprocessing stage, we use an adaptive gamma transformation to reduce illumination variability. The proposed SLBP selects the discriminant features in facial images with head pose variation using the median-based standard deviation of local binary pattern images. We experimented on the Karolinska directed emotional faces (KDEF) dataset, containing thousands of images with variations in head pose and illumination, and the Japanese female facial expression (JAFFE) dataset, containing seven facial expressions of Japanese females' frontal faces. The experiments show that the proposed method is superior to the other related approaches, with an accuracy of 92.21% on the KDEF dataset and 94.28% on the JAFFE dataset.
38

Lee, Jae-Hyeon, and Chang-Hwan Son. "Trap-Based Pest Counting: Multiscale and Deformable Attention CenterNet Integrating Internal LR and HR Joint Feature Learning." Remote Sensing 15, no. 15 (July 31, 2023): 3810. http://dx.doi.org/10.3390/rs15153810.

Abstract:
Pest counting, which predicts the number of pests in the early stage, is very important because it enables rapid pest control, reduces damage to crops, and improves productivity. In recent years, light traps have been increasingly used to lure and photograph pests for pest counting. However, pest images have a wide range of variability in pest appearance owing to severe occlusion, wide pose variation, and even scale variation. This makes pest counting more challenging. To address these issues, this study proposes a new pest counting model referred to as multiscale and deformable attention CenterNet (Mada-CenterNet) for internal low-resolution (LR) and high-resolution (HR) joint feature learning. Compared with the conventional CenterNet, the proposed Mada-CenterNet adopts a multiscale heatmap generation approach in a two-step fashion to predict LR and HR heatmaps adaptively learned to scale variations, that is, changes in the number of pests. In addition, to overcome the pose and occlusion problems, a new between-hourglass skip connection based on deformable and multiscale attention is designed to ensure internal LR and HR joint feature learning and incorporate geometric deformation, thereby resulting in improved pest counting accuracy. Through experiments, the proposed Mada-CenterNet is verified to generate the HR heatmap more accurately and improve pest counting accuracy owing to multiscale heatmap generation, joint internal feature learning, and deformable and multiscale attention. In addition, the proposed model is confirmed to be effective in overcoming severe occlusions and variations in pose and scale. The experimental results show that the proposed model outperforms state-of-the-art crowd counting and object detection models.
39

Saghafi, Mohammadali, Aini Hussain, Mohamad Hanif Md. Saad, Mohd Asyraf Zulkifley, Nooritawati Md Tahir, and Mohd Faisal Ibrahim. "Pose and Illumination Invariance of Attribute Detectors in Person Re-identification." International Journal of Engineering & Technology 7, no. 4.11 (October 2, 2018): 174. http://dx.doi.org/10.14419/ijet.v7i4.11.20796.

Abstract:
The use of attributes in person re-identification and video surveillance applications has grabbed the attention of many researchers in recent times. Attributes are suitable tools for the mid-level representation of a part or a region in an image, as they are closer to human perception than the quantitative nature of the usual visual feature descriptions of those parts. Hence, in this paper, preliminary experimental results evaluating the robustness of attribute detectors against pose and light variations, in contrast to the use of local appearance features, are discussed. The results attained prove that the attribute-based detectors are able to overcome the negative impact of pose and light variation on person re-identification activities. In addition, the degree of importance of different attributes in re-identification is evaluated and compared with other previous works in this field.
40

Chen, Si, Dong Yan, and Yan Yan. "Directional Correlation Filter Bank for Robust Head Pose Estimation and Face Recognition." Mathematical Problems in Engineering 2018 (October 21, 2018): 1–10. http://dx.doi.org/10.1155/2018/1923063.

Abstract:
During the past few decades, face recognition has been an active research area in pattern recognition and computer vision due to its wide range of applications. However, one of the most challenging problems encountered by face recognition is the difficulty of handling large head pose variations. Therefore, the efficient and effective head pose estimation is a critical step of face recognition. In this paper, a novel feature extraction framework, called Directional Correlation Filter Bank (DCFB), is presented for head pose estimation. Specifically, in the proposed framework, the 1-Dimensional Optimal Tradeoff Filters (1D-OTF) corresponding to different head poses are simultaneously and jointly designed in the low-dimensional linear subspace. Different from the traditional methods that heavily rely on the precise localization of the key facial feature points, our proposed framework exploits the frequency domain of the face images, which effectively captures the high-order statistics of faces. As a result, the obtained features are compact and discriminative. Experimental results on public face databases with large head pose variations show the superior performance obtained by the proposed framework on the tasks of both head pose estimation and face recognition.
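
The filter-bank approach above scores a face against pose-specific filters by correlating in the frequency domain. A generic sketch of that correlation step (standard FFT-based cross-correlation, not the paper's 1D-OTF design):

```python
import numpy as np

def correlation_response(image, template):
    """Circular cross-correlation of image and template via the FFT;
    the location and height of the peak indicate how well (and where)
    the template matches."""
    F = np.fft.fft2(image)
    H = np.fft.fft2(template, s=image.shape)   # zero-pad template to image size
    response = np.real(np.fft.ifft2(F * np.conj(H)))
    peak = np.unravel_index(np.argmax(response), response.shape)
    return response, peak

rng = np.random.default_rng(4)
img = rng.random((64, 64))
tmpl = img[10:26, 20:36]          # a 16x16 patch taken from the image itself
_, peak = correlation_response(img, tmpl)
print(peak)                        # expected near (10, 20), the patch location
```
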
41

Hajraoui, Abdellatif, and Mohamed Sabri. "Generic and Robust Method for Head Pose Estimation." Indonesian Journal of Electrical Engineering and Computer Science 4, no. 2 (November 1, 2016): 439. http://dx.doi.org/10.11591/ijeecs.v4.i2.pp439-446.

Abstract:
Head pose estimation has fascinated the research community due to its applications in facial motion capture, human-computer interaction and video conferencing. It is a prerequisite for gaze tracking, face recognition, and facial expression analysis. In this paper, we present a generic and robust method for model-based global 2D head pose estimation from a single RGB image. In our approach, we use Gabor filters to conceive a pose descriptor that is robust to illumination and facial expression variations and that targets the pose information. These descriptors are then classified using an SVM classifier. The approach has proved effective, in view of the rate of correct pose estimations obtained.
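
The pose descriptor in this entry is built from Gabor filter responses. A minimal sketch of the underlying 2D Gabor kernel in its standard textbook form; the paper's specific filter-bank parameters are not given here, so the values below are placeholders.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter: a Gaussian envelope multiplied by
    a cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xp =  x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xp**2 + (gamma * yp)**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xp / wavelength + psi)
    return envelope * carrier

# A small bank of 4 orientations, as typically used to build pose descriptors.
bank = [gabor_kernel(size=15, wavelength=8.0, theta=t, sigma=4.0)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape, len(bank))
```
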
42

Kour, Sukhbir, Ashish Choudhary, Azhar Malik, and Rudra Kaul. "Non surgical retreatment three rooted maxillary premolars: A case report." IP Indian Journal of Conservative and Endodontics 7, no. 2 (June 15, 2022): 98–102. http://dx.doi.org/10.18231/j.ijce.2022.021.

Abstract:
Effective and successful endodontic treatment requires dentists to have adequate information on the clinical variations in root canal anatomy. Maxillary premolars exhibit anatomical variations in the numbers of roots and canals, which pose a challenge during root canal therapy; these variations must be considered for successful endodontic therapy. Herein, we illustrate the diagnosis and clinical management of previously endodontically treated three-rooted maxillary premolars using Cone Beam Computed Tomography (CBCT).
43

Clemente, Carolina, Gonçalo Chambel, Diogo C. F. Silva, António Mesquita Montes, Joana F. Pinto, and Hugo Plácido da Silva. "Feasibility of 3D Body Tracking from Monocular 2D Video Feeds in Musculoskeletal Telerehabilitation." Sensors 24, no. 1 (December 29, 2023): 206. http://dx.doi.org/10.3390/s24010206.

Abstract:
Musculoskeletal conditions affect millions of people globally; however, conventional treatments pose challenges concerning price, accessibility, and convenience. Many telerehabilitation solutions offer an engaging alternative but rely on complex hardware for body tracking. This work explores the feasibility of a model for 3D Human Pose Estimation (HPE) from monocular 2D videos (MediaPipe Pose) in a physiotherapy context, by comparing its performance to ground truth measurements. MediaPipe Pose was investigated in eight exercises typically performed in musculoskeletal physiotherapy sessions, where the Range of Motion (ROM) of the human joints was the evaluated parameter. This model showed the best performance for shoulder abduction, shoulder press, elbow flexion, and squat exercises. Results have shown a MAPE ranging between 14.9% and 25.0%, Pearson’s coefficient ranging between 0.963 and 0.996, and cosine similarity ranging between 0.987 and 0.999. Some exercises (e.g., seated knee extension and shoulder flexion) posed challenges due to unusual poses, occlusions, and depth ambiguities, possibly related to a lack of training data. This study demonstrates the potential of HPE from monocular 2D videos, as a markerless, affordable, and accessible solution for musculoskeletal telerehabilitation approaches. Future work should focus on exploring variations of the 3D HPE models trained on physiotherapy-related datasets, such as the Fit3D dataset, and post-preprocessing techniques to enhance the model’s performance.
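
The Range of Motion evaluated in this study is, at its core, the angle at a joint formed by three estimated keypoints. A small sketch of that computation; the keypoint names and coordinates are illustrative, not MediaPipe's actual output format.

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` between the joint->proximal and
    joint->distal segments, e.g. shoulder-elbow-wrist for elbow flexion."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# Example: elbow angle from 3D keypoints (metres, arbitrary world frame).
shoulder, elbow, wrist = [0.0, 1.4, 0.0], [0.25, 1.15, 0.0], [0.45, 1.35, 0.05]
print(round(joint_angle(shoulder, elbow, wrist), 1))
# Range of Motion over an exercise = max(angle) - min(angle) across frames.
```
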
44

Ewaisha, Mahmoud, Marwa El Shawarby, Hazem Abbas, and Ibrahim Sobh. "End-to-End Multitask Learning for Driver Gaze and Head Pose Estimation." Electronic Imaging 2020, no. 16 (January 26, 2020): 110–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-108.

Abstract:
Modern automobile accidents occur mostly due to inattentive behavior of drivers, which is why driver gaze estimation is becoming a critical component in the automotive industry. Gaze estimation introduces many challenges due to the nature of the surrounding environment, such as changes in illumination, the driver's head motion, partial face occlusion, or the wearing of eye decorations. Previous work conducted in this field includes the explicit extraction of hand-crafted features such as eye corners and the pupil center to be used to estimate gaze, or appearance-based methods like Convolutional Neural Networks which implicitly extract features from an image and directly map them to the corresponding gaze angle. In this work, a multitask Convolutional Neural Network architecture is proposed to predict the subject's gaze yaw and pitch angles, along with the head pose as an auxiliary task, making the model robust to head pose variations without needing any complex preprocessing or hand-crafted feature extraction. The network's output is then clustered into nine gaze classes relevant to the driving scenario. The model achieves 95.8% accuracy on the test set and 78.2% accuracy in cross-subject testing, proving the model's generalization capability and robustness to head pose variation.
45

Sang, Gaoli, Jing Li, and Qijun Zhao. "Pose-Invariant Face Recognition via RGB-D Images." Computational Intelligence and Neuroscience 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/3563758.

Abstract:
Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
46

Tu, Huan, Gesang Duoji, Qijun Zhao, and Shuang Wu. "Improved Single Sample Per Person Face Recognition via Enriching Intra-Variation and Invariant Features." Applied Sciences 10, no. 2 (January 14, 2020): 601. http://dx.doi.org/10.3390/app10020601.

Abstract:
Face recognition using a single sample per person is a challenging problem in computer vision. In this scenario, due to the lack of training samples, it is difficult to distinguish between inter-class variations caused by identity and intra-class variations caused by external factors such as illumination, pose, etc. To address this problem, we propose a scheme to improve the recognition rate by both generating additional samples to enrich the intra-variation and eliminating external factors to extract invariant features. Firstly, a 3D face modeling module is proposed to recover the intrinsic properties of the input image, i.e., 3D face shape and albedo. To obtain the complete albedo, we come up with an end-to-end network to estimate the full albedo UV map from incomplete textures. The obtained albedo UV map not only eliminates the influence of the illumination, pose, and expression, but also retains the identity information. With the help of the recovered intrinsic properties, we then generate images under various illuminations, expressions, and poses. Finally, the albedo and the generated images are used to assist single sample per person face recognition. The experimental results on Face Recognition Technology (FERET), Labeled Faces in the Wild (LFW), Celebrities in Frontal-Profile (CFP) and other face databases demonstrate the effectiveness of the proposed method.
47

Zeng, Junying. "An Improved Sparse Representation Face Recognition Algorithm for Variations of Illumination and Pose." Journal of Information and Computational Science 12, no. 16 (November 1, 2015): 5987–94. http://dx.doi.org/10.12733/jics20106876.

48

Casasent, David. "Face recognition with pose and illumination variations using new SVRDM support-vector machine." Optical Engineering 43, no. 8 (August 1, 2004): 1804. http://dx.doi.org/10.1117/1.1763935.

49

Passalis, G., P. Perakis, T. Theoharis, and I. A. Kakadiaris. "Using Facial Symmetry to Handle Pose Variations in Real-World 3D Face Recognition." IEEE Transactions on Pattern Analysis and Machine Intelligence 33, no. 10 (October 2011): 1938–51. http://dx.doi.org/10.1109/tpami.2011.49.

50

Hegde, Ganapatikrishna P. "Real Time Voting System Using Face Recognition for Different Expressions and Pose Variations." International Journal of Research in Engineering and Technology 03, no. 07 (July 25, 2014): 381–84. http://dx.doi.org/10.15623/ijret.2014.0307065.

