To view other types of publications on this topic, follow the link: Facial identities.

Journal articles on the topic "Facial identities"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Consult the top 50 journal articles for your research on the topic "Facial identities".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, where these are available in the metadata.

Browse journal articles across many disciplines and compile your bibliography correctly.

1

Schweinberger, Stefan R., David Robertson, and Jürgen M. Kaufmann. "Hearing Facial Identities." Quarterly Journal of Experimental Psychology 60, no. 10 (October 2007): 1446–56. http://dx.doi.org/10.1080/17470210601063589.

Abstract:
While audiovisual integration is well known in speech perception, faces and speech are also informative with respect to speaker recognition. To date, audiovisual integration in the recognition of familiar people has never been demonstrated. Here we show systematic benefits and costs for the recognition of familiar voices when these are combined with time-synchronized articulating faces, of corresponding or noncorresponding speaker identity, respectively. While these effects were strong for familiar voices, they were smaller or nonsignificant for unfamiliar voices, suggesting that the effects depend on the previous creation of a multimodal representation of a person's identity. Moreover, the effects were reduced or eliminated when voices were combined with the same faces presented as static pictures, demonstrating that the effects do not simply reflect the use of facial identity as a “cue” for voice recognition. This is the first direct evidence for audiovisual integration in person recognition.
2

Mileva, Mila, Andrew W. Young, Robin S. S. Kramer, and A. Mike Burton. "Understanding facial impressions between and within identities." Cognition 190 (September 2019): 184–98. http://dx.doi.org/10.1016/j.cognition.2019.04.027.

3

Abir, Intiaz, Hasan Firdaus Mohd Zaki, and Azhar Mohd Ibrahim. "EVALUATION OF SIMULTANEOUS IDENTITY, AGE AND GENDER RECOGNITION FOR CROWD FACE MONITORING." ASEAN Engineering Journal 13, no. 1 (February 28, 2023): 11–20. http://dx.doi.org/10.11113/aej.v13.17612.

Abstract:
Nowadays, facial recognition combined with age estimation and gender prediction has become deeply involved in crowd monitoring, a task considered major and complex for humans. This paper proposes a unified facial recognition system based on already available deep learning and machine learning models (i.e., FaceNet, ResNet, Support Vector Machine, AgeNet and GenderNet) that automatically and simultaneously performs person identification, age estimation and gender prediction. The system is then evaluated on a newly proposed multi-face, realistic and challenging test dataset. Current face recognition technology primarily focuses on static datasets of known identities rather than novel identities, an approach that is unsuitable for continuous crowd monitoring. In our proposed system, whenever novel identities are found during inference, the system saves those novel identities with an appropriate label for each unique identity and is updated periodically in order to correctly recognise those identities in future inference iterations. However, extracting the facial features of the whole dataset whenever a new identity is detected is not an efficient solution. To address this issue, we propose an incremental feature extraction based training method which aims to reduce the computational load of feature extraction. When tested on the proposed test dataset, our proposed system correctly recognizes pre-trained identities, estimates age, and predicts gender with average accuracies of 49%, 66.5% and 93.54%, respectively. We conclude that the evaluated pre-trained models can be sensitive and not robust to uncontrolled environments (e.g., abrupt lighting conditions).
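
A minimal sketch of the incremental-enrollment loop described in this abstract, assuming a FaceNet-style `embed_face` callable; the gallery structure, labels, and the 0.7 cosine threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

gallery = {}  # label -> list of stored embeddings for that identity

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_or_enroll(face_img, embed_face, threshold=0.7):
    """Identify a detected face, or enroll it as a novel identity."""
    emb = embed_face(face_img)           # only the new face is embedded;
    best_label, best_score = None, -1.0  # stored embeddings are reused
    for label, embs in gallery.items():
        score = max(cosine(emb, e) for e in embs)
        if score > best_score:
            best_label, best_score = label, score
    if best_label is not None and best_score >= threshold:
        gallery[best_label].append(emb)  # reinforce a known identity
        return best_label
    new_label = f"novel_{len(gallery)}"  # save the novel identity with a label
    gallery[new_label] = [emb]
    return new_label
```

Reusing stored embeddings rather than re-extracting features for the whole gallery is the essence of the incremental feature extraction idea the authors describe.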
4

Aizawa, Hiroaki, Kimiya Murase, and Kunihito Kato. "Disentanglement Learning of Emotions and Identities from Facial Image." IEEJ Transactions on Electronics, Information and Systems 141, no. 9 (September 1, 2021): 962–68. http://dx.doi.org/10.1541/ieejeiss.141.962.

5

Li, Yongmin, Shaogang Gong, and Heather Liddell. "Recognising trajectories of facial identities using kernel discriminant analysis." Image and Vision Computing 21, no. 13-14 (December 2003): 1077–86. http://dx.doi.org/10.1016/j.imavis.2003.08.010.

6

Kim, Nayeon, Sukhee Cho, and Byungjun Bae. "SMaTE: A Segment-Level Feature Mixing and Temporal Encoding Framework for Facial Expression Recognition." Sensors 22, no. 15 (August 1, 2022): 5753. http://dx.doi.org/10.3390/s22155753.

Abstract:
Despite advanced machine learning methods, the implementation of emotion recognition systems based on real-world video content remains challenging. Videos may contain data such as images, audio, and text. However, applying multimodal models that use two or more types of data to real-world video media (CCTV, illegally filmed content, etc.) lacking sound or subtitles is difficult. Although facial expressions in image sequences can be utilized in emotion recognition, the diverse identities of individuals in real-world content limit computational models of relationships between facial expressions. This study proposed a transformation model which employs a video vision transformer to focus on facial expression sequences in videos. It effectively understands and extracts facial expression information from the identities of individuals, instead of fusing multimodal models. The design captures higher-quality facial expression information by mixed-token embedding of facial expression sequences, augmented via various methods, into a single data representation, and comprises two modules: spatial and temporal encoders. Further, a temporal position embedding focusing on relationships between video frames is proposed and applied to the temporal encoder module. The performance of the proposed algorithm was compared with that of conventional methods on two emotion recognition datasets of video content, with results demonstrating its superiority.
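
A temporal position embedding of the kind mentioned above can be pictured as a learnable per-frame offset added to frame tokens before the temporal encoder; a minimal PyTorch sketch with illustrative sizes, not the paper's exact design:

```python
import torch
import torch.nn as nn

class TemporalPositionEmbedding(nn.Module):
    def __init__(self, n_frames=16, dim=256):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, n_frames, dim))

    def forward(self, frame_tokens):    # frame_tokens: (batch, n_frames, dim)
        return frame_tokens + self.pos  # inject frame-order information

tokens = torch.randn(2, 16, 256)        # 2 clips, 16 frame tokens each
encoded = TemporalPositionEmbedding()(tokens)
```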
7

Tüttenberg, Simone C., and Holger Wiese. "Learning own- and other-race facial identities from natural variability." Quarterly Journal of Experimental Psychology 72, no. 12 (July 9, 2019): 2788–800. http://dx.doi.org/10.1177/1747021819859840.

Abstract:
Exposure to multiple varying face images of the same person encourages the formation of identity representations which are sufficiently robust to allow subsequent recognition from new, never-before-seen images. While recent studies suggest that identity information is initially harder to perceive in images of other- relative to own-race identities, it remains unclear whether these difficulties propagate to face learning, that is, to the formation of robust face representations. We report two experiments in which Caucasian and East Asian participants sorted multiple images of own- and other-race persons according to identity in an implicit learning task and subsequently either matched novel images of learnt and previously unseen faces for identity (Experiment 1) or made old/new decisions for new images of learnt and unfamiliar identities (Experiment 2). Caucasian participants demonstrated own-race advantages during sorting, matching, and old/new recognition, while corresponding effects were absent in East Asian participants with substantial other-race expertise. Surprisingly, East Asian participants showed enhanced learning for other-race identities during matching in Experiment 1, which may reflect their increased motivation to individuate other-race faces. Thus, our results highlight the importance of perceptual expertise for own- and other-race processing, but may also lend support to recent suggestions on how expertise and socio-cognitive factors can interact.
8

Shen, Chengxiao, Liping Qian, and Ningning Yu. "Adaptive Facial Imagery Clustering via Spectral Clustering and Reinforcement Learning." Applied Sciences 11, no. 17 (August 30, 2021): 8051. http://dx.doi.org/10.3390/app11178051.

Abstract:
In an era of big data, face images captured in social media and forensic investigations, etc., generally lack labels, while the number of identities (clusters) may range from a few dozen to thousands. Therefore, it is of practical importance to cluster a large number of unlabeled face images into an efficient range of identities or even the exact identities, which can avoid image labeling by hand. Here, we propose adaptive facial imagery clustering that involves face representations, spectral clustering, and reinforcement learning (Q-learning). First, we use a deep convolutional neural network (DCNN) to generate face representations, and we adopt a spectral clustering model to construct a similarity matrix and achieve clustering partition. Then, we use an internal evaluation measure (the Davies–Bouldin index) to evaluate the clustering quality. Finally, we adopt Q-learning as the feedback module to build a dynamic multiparameter debugging process. The experimental results on the ORL Face Database show the effectiveness of our method in terms of an optimal number of clusters of 39, which is almost the actual number of 40 clusters; our method can achieve 99.2% clustering accuracy. Subsequent studies should focus on reducing the computational complexity of dealing with more face images.
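
The loop the authors describe (cluster, score with the Davies–Bouldin index, adjust parameters) can be approximated without the Q-learning feedback module by an exhaustive sweep over the cluster count; a minimal scikit-learn sketch, assuming precomputed face embeddings `X`:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import davies_bouldin_score

def adaptive_cluster(X, k_min=2, k_max=60):
    """Pick the cluster count whose partition scores best on the DBI."""
    best = (None, np.inf, None)
    for k in range(k_min, k_max + 1):  # exhaustive sweep stands in for
        labels = SpectralClustering(   # the paper's Q-learning search
            n_clusters=k, affinity="nearest_neighbors"
        ).fit_predict(X)
        dbi = davies_bouldin_score(X, labels)  # lower = better separation
        if dbi < best[1]:
            best = (k, dbi, labels)
    return best[0], best[2]
```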
9

Martindale, Anne‐Marie, and Pamela Fisher. "Disrupted faces, disrupted identities? Embodiment, life stories and acquired facial ‘disfigurement’." Sociology of Health & Illness 41, no. 8 (June 26, 2019): 1503–19. http://dx.doi.org/10.1111/1467-9566.12973.

10

Hamza, Muhammad, Samabia Tehsin, Mamoona Humayun, Maram Fahaad Almufareh, and Majed Alfayad. "A Comprehensive Review of Face Morph Generation and Detection of Fraudulent Identities." Applied Sciences 12, no. 24 (December 7, 2022): 12545. http://dx.doi.org/10.3390/app122412545.

Abstract:
A robust facial recognition system that has soundness and completeness is essential for authorized control access to lawful resources. Due to the availability of modern image manipulation technology, the current facial recognition systems are vulnerable to different biometric attacks. Image morphing attack is one of these attacks. This paper compares and analyzes state-of-the-art morphing attack detection (MAD) methods. The performance of different MAD methods is also compared on a wide range of source image databases. Moreover, it also describes the morph image generation techniques along with the limitations, strengths, and drawbacks of each morphing technique. Results are investigated and compared with in-depth analysis providing insight into the vulnerabilities of existing systems. This paper provides vital information that is essential for building a next generation morph attack detection system.
11

Redfern, Annabelle S., and Christopher P. Benton. "Expressive Faces Confuse Identity." i-Perception 8, no. 5 (September 19, 2017): 204166951773111. http://dx.doi.org/10.1177/2041669517731115.

Abstract:
We used highly variable, so-called ‘ambient’ images to test whether expressions affect the identity recognition of real-world facial images. Using movie segments of two actors unknown to our participants, we created image pairs – each image within a pair being captured from the same film segment. This ensured that, within pairs, variables such as lighting were constant whilst expressiveness differed. We created two packs of cards, one containing neutral face images, the other, their expressive counterparts. Participants sorted the card packs into piles, one for each perceived identity. As with previous studies, the perceived number of identities was higher than the veridical number of two. Interestingly, when looking within piles, we found a strong difference between the expressive and neutral sorting tasks. With expressive faces, identity piles were significantly more likely to contain cards of both identities. This finding demonstrates that, over and above other image variables, expressiveness variability can cause identity confusion; evidently, expression is not disregarded or factored out when we classify facial identity in real-world images. Our results provide clear support for a face processing architecture in which both invariant and changeable facial information may be drawn upon to drive our decisions of identity.
12

Khairunnisa, Khairunnisa, Rismayanti Rismayanti, and Rully Alhari. "ANALISIS IDENTIFIKASI WAJAH MENGGUNAKAN GABOR FILTER DAN SKIN MODEL." JURNAL TEKNOLOGI INFORMASI 2, no. 2 (February 1, 2019): 150. http://dx.doi.org/10.36294/jurti.v2i2.430.

Abstract:
Identification of faces in digital images is a complex process and requires a combination of various methods. The complexity of facial identification is increasing along with the increasing need for high accuracy of facial images. This research analyzes the combination of the Skin Color Model and Gabor Filters in the process of identifying facial identities in digital images. The Skin Color Model method is used to separate the face area from facial images based on skin color values, and the face area is then extracted using a Gabor Filter. The highest accuracy achieved in this research was 93.6349% and the lowest was around 82.45%. The implementation of a combination of Skin Color Models and Gabor Filters can be an alternative method of identifying faces in digital images. Keywords - Digital Image, Face Identification, Skin Color Model, Gabor Filter.
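
A minimal OpenCV sketch of the two-stage idea described above; the YCrCb skin-color bounds and Gabor parameters are common illustrative values, not the paper's settings:

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")  # hypothetical input image

# 1) Skin Color Model: threshold in YCrCb space to isolate skin regions.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
skin = cv2.bitwise_and(img, img, mask=mask)

# 2) Gabor filter bank: extract texture features from the skin region.
gray = cv2.cvtColor(skin, cv2.COLOR_BGR2GRAY)
features = []
for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations
    kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
    features.append(cv2.filter2D(gray, cv2.CV_32F, kernel).mean())
```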
13

K. Naji, Saba, and Muthana H. Hamd. "HUMAN IDENTIFICATION BASED ON FACE RECOGNITION SYSTEM." Journal of Engineering and Sustainable Development 25, no. 01 (January 1, 2021): 80–91. http://dx.doi.org/10.31272/jeasd.25.1.7.

Abstract:
Due to rapid electronic development, which has reinforced the need to establish people's identities, different methods and databases for identifying people have emerged. In this paper, we compare the results of two texture analysis methods: Local Binary Pattern (LBP) and Local Ternary Pattern (LTP). The comparison is based on extracting facial texture features from 40 and 401 subjects taken from the ORL and UFI databases, respectively. The comparison also takes into account three distance measures: Manhattan Distance (MD), Euclidean Distance (ED), and Cosine Distance (CD). The maximum accuracy of the LBP method (99.23%) is obtained with Manhattan distance on the ORL database, while the LTP method attained 98.76% using the same distance and database. The UFI facial database is of lower quality, yielding recognition rates of 75.98% and 73.82% using LBP and LTP, respectively, with Manhattan distance.
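
A minimal sketch of LBP-histogram matching with the three distance measures compared above; the uniform LBP configuration and helper names are illustrative assumptions (LTP follows the same pattern with a ternary threshold):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_img, P=8, R=1):
    lbp = local_binary_pattern(gray_img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def manhattan(a, b):  # MD
    return np.abs(a - b).sum()

def euclidean(a, b):  # ED
    return np.linalg.norm(a - b)

def cosine_dist(a, b):  # CD
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# A probe is identified as the gallery subject at the smallest distance, e.g.:
# pred = min(gallery, key=lambda s: manhattan(lbp_histogram(probe), gallery[s]))
```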
14

Galinsky, Dina F., Ezgi Erol, Konstantina Atanasova, Martin Bohus, Annegret Krause-Utz, and Stefanie Lis. "Do I trust you when you smile? Effects of sex and emotional expression on facial trustworthiness appraisal." PLOS ONE 15, no. 12 (December 3, 2020): e0243230. http://dx.doi.org/10.1371/journal.pone.0243230.

Abstract:
Background: Trust is a prerequisite for successful social relations. People tend to form a first impression of others' trustworthiness based on their facial appearance. The sex of the judging person and its congruency with the sex of the judged people influence these appraisals. Moreover, trustworthiness and happiness share some facial features, which has led to studies investigating the interplay between both social judgments. Studies revealed high correlations in judging happiness and trustworthiness across different facial identities. However, studies are missing that investigate whether this relationship exists on a within-subject level and whether in-group biases such as the congruency between the sex of the judging and judged individual influence this relationship. In the present study, we addressed these questions. Methods: Data were collected in an online survey in two separate samples (N = 30, German sample; N = 107, Dutch sample). Subjects assessed the intensity of happiness and trustworthiness expressed in neutral and calm facial expressions of the same characters (50% males, 50% females). Statistical analyses comprised repeated-measures ANOVA designs based on rating scores and estimates of within-subject associations between both judgments. Results: Our findings replicate high correlations between happiness and trustworthiness ratings across facial identities based on average scores across participants. However, the strength of this association was strongly dependent on the methodological approach, and inter-subject variability was high. Our data revealed an in-group advantage for trustworthiness in women. Moreover, the faces' sex and emotional expressions differentially influenced the within-subject correlation between both judgments in men and women. Conclusion: Our findings replicate previous studies on the association between happiness and trustworthiness judgments. We extend our understanding of the link between both social judgments by uncovering that within-subject variability is high and influenced by sex and the availability and appraisal of positive emotional facial cues.
15

Schweinberger, Stefan R., Nadine Kloth, and David M. C. Robertson. "Hearing facial identities: Brain correlates of face–voice integration in person identification." Cortex 47, no. 9 (October 2011): 1026–37. http://dx.doi.org/10.1016/j.cortex.2010.11.011.

16

Bryce, Nadine. "Teacher Candidates’ Collaboration and Identity in Online Discussions." Journal of University Teaching and Learning Practice 11, no. 1 (January 1, 2014): 73–91. http://dx.doi.org/10.53761/1.11.1.7.

Abstract:
In an online context, without facial, verbal or gestural cues, establishing identities through naming social positions appeared essential to effective written communication for graduate pre-service teacher candidates enrolled in a course on literacy education for elementary students. As they engaged in small group asynchronous discussions about course readings, candidates named their identities and deferred to course authors more often than they referenced group identities, or attempted to bond with one another. They engaged least frequently in disagreeing with one another, or challenging the authority of course texts, creating polite, cordial exchanges in most groups. Male candidates challenged their group members more often, suggesting differences in communication styles shaped their responses. Dialogue journaling shows promise in facilitating learner connection and building a sense of community by facilitating dialogue and decreasing psychological distance between participants who are geographically and temporally separated.
17

Mo, Langyuan, Haokun Li, Chaoyang Zou, Yubing Zhang, Ming Yang, Yihong Yang, and Mingkui Tan. "Towards Accurate Facial Motion Retargeting with Identity-Consistent and Expression-Exclusive Constraints." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 2 (June 28, 2022): 1981–89. http://dx.doi.org/10.1609/aaai.v36i2.20093.

Abstract:
We address the problem of facial motion retargeting that aims to transfer facial motion from a 2D face image to 3D characters. Existing methods often formulate this problem as a 3D face reconstruction problem, which estimates the face attributes such as face identity and expression from face images. However, due to the lack of ground-truth labels for both identity and expression, most 3D-face reconstruction-based methods fail to capture the facial identity and expression accurately. As a result, these methods may not achieve promising performance. To address this, we propose an identity-consistent constraint to learn accurate identities by encouraging consistent identity prediction across multiple frames. Based on a more accurate identity, we are able to obtain a more accurate facial expression. Moreover, we further propose an expression-exclusive constraint to improve performance by avoiding the co-occurrence of contradictory expression units (e.g., "brow lower" vs. "brow raise"). Extensive experiments on facial motion retargeting and 3D face reconstruction tasks demonstrate the superiority of the proposed method over existing methods. Our code and supplementary materials are available at https://github.com/deepmo24/CPEM.
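
The two constraints can be pictured roughly as follows; a minimal PyTorch sketch in which the variance and co-activation penalties are illustrative stand-ins for the paper's exact formulations:

```python
import torch

def identity_consistency_loss(id_params):
    # id_params: (T, D) identity codes predicted across T frames of one
    # subject; pull every frame's prediction toward one shared identity.
    mean_id = id_params.mean(dim=0, keepdim=True)
    return ((id_params - mean_id) ** 2).mean()

def expression_exclusive_loss(expr, exclusive_pairs):
    # expr: (B, K) expression-unit activations; exclusive_pairs: indices
    # of contradictory units, e.g. brow lower / brow raise.
    loss = expr.new_zeros(())
    for i, j in exclusive_pairs:
        loss = loss + (expr[:, i] * expr[:, j]).mean()  # penalize co-activation
    return loss
```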
18

Schwartz, Emily, Kathryn O’Nell, Rebecca Saxe, and Stefano Anzellotti. "Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes." Brain Sciences 13, no. 2 (February 10, 2023): 296. http://dx.doi.org/10.3390/brainsci13020296.

Abstract:
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression are encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
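
The congruence coefficient mentioned above is Tucker's standard measure, a cosine-like similarity between two feature vectors; in numpy:

```python
import numpy as np

def congruence(x, y):
    """Tucker's congruence coefficient between two feature vectors."""
    return (x @ y) / np.sqrt((x @ x) * (y @ y))
```

Values near 0 indicate orthogonal feature directions, which is what the layer-by-layer disentanglement result describes.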
19

Redfern, Annabelle S., and Christopher P. Benton. "Expression Dependence in the Perception of Facial Identity." i-Perception 8, no. 3 (June 2017): 204166951771066. http://dx.doi.org/10.1177/2041669517710663.

Abstract:
We recognise familiar faces irrespective of their expression. This ability, crucial for social interactions, is a fundamental feature of face perception. We ask whether this constancy of facial identity may be compromised by changes in expression. This, in turn, addresses the issue of whether facial identity and expression are processed separately or interact. Using an identification task, participants learned the identities of two actors from naturalistic (so-called ambient) face images taken from movies. Training was either with neutral images or their expressive counterparts, perceived expressiveness having been determined experimentally. Expressive training responses were slower and more erroneous than neutral training responses. When tested with novel images of the actors that varied in expressiveness, neutrally trained participants gave slower and less accurate responses to images of high compared with low expressiveness. These findings clearly demonstrate that facial expressions impede the processing and learning of facial identity. Because this expression dependence is consistent with a late bifurcation model of face processing, in which changeable facial aspects and identity are coded in a common framework, it suggests that expressions are a part of facial identity representation.
20

Luo, Yifan, Feng Ye, Bin Weng, Shan Du, and Tianqiang Huang. "A Novel Defensive Strategy for Facial Manipulation Detection Combining Bilateral Filtering and Joint Adversarial Training." Security and Communication Networks 2021 (August 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/4280328.

Abstract:
Facial manipulation enables facial expressions to be tampered with or facial identities to be replaced in videos. The fake videos are so realistic that they are even difficult for human eyes to distinguish. This poses a great threat to social and public information security. A number of facial manipulation detectors have been proposed to address this threat. However, previous studies have shown that the accuracy of these detectors is sensitive to adversarial examples. The existing defense methods are very limited in terms of applicable scenes and defense effects. This paper proposes a new defense strategy for facial manipulation detectors, which combines a passive defense method, bilateral filtering, and a proactive defense method, joint adversarial training, to mitigate the vulnerability of facial manipulation detectors against adversarial examples. The bilateral filtering method is applied in the preprocessing stage of the model without any modification to denoise the input adversarial examples. The joint adversarial training starts from the training stage of the model, which mixes various adversarial examples and original examples to train the model. The introduction of joint adversarial training can train a model that defends against multiple adversarial attacks. The experimental results show that the proposed defense strategy positively helps facial manipulation detectors counter adversarial examples.
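
A minimal sketch of the two-pronged defense described above; the filter parameters and the attack callables are illustrative assumptions:

```python
import cv2

def denoise_input(bgr_img):
    # Passive defense: bilateral filtering smooths adversarial noise
    # while preserving edges, before the image reaches the detector.
    return cv2.bilateralFilter(bgr_img, 9, 75, 75)

def joint_adversarial_batch(clean_batch, attacks):
    # Proactive defense: mix original examples with examples crafted by
    # several attack types, so one model learns to resist all of them.
    batch = list(clean_batch)
    for attack in attacks:  # e.g., FGSM- and PGD-style perturbation functions
        batch += [attack(x) for x in clean_batch]
    return batch
```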
21

Tucker, Aaron. "The Citizen Question: Making Identities Visible Via Facial Recognition Software at the Border." IEEE Technology and Society Magazine 39, no. 4 (December 2020): 52–59. http://dx.doi.org/10.1109/mts.2020.3031847.

22

CHEN, CHING-WEN, and CHUNG-LIN HUANG. "HUMAN FACE RECOGNITION FROM A SINGLE FRONT VIEW." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 04 (October 1992): 571–93. http://dx.doi.org/10.1142/s021800149200031x.

Abstract:
This paper presents a face recognition system which can identify an unknown identity effectively using front-view facial features. In front-view facial feature extraction, we can capture the contours of the eyes and mouth with the deformable template model because of their analytically describable shapes. However, the shapes of the eyebrows, nostrils and face are difficult to model using a deformable template, so we extract them using the active contour model (snake). After the contours of all facial features have been captured, we calculate effective feature values from these extracted contours and construct databases for classifying unknown identities. In the database generation phase, 12 models are photographed, and feature vectors are calculated for each portrait. In the identification phase, if any one of these 12 persons has his picture taken again, the system can recognize his identity.
23

Thierry, S. M., A. C. Twele, and C. J. Mondloch. "Mandatory First Impressions: Happy Expressions Increase Trustworthiness Ratings of Subsequent Neutral Images." Perception 50, no. 2 (February 2021): 103–15. http://dx.doi.org/10.1177/0301006620987205.

Abstract:
First impressions of traits are formed rapidly and nonconsciously, suggesting an automatic process. We examined whether first impressions of trustworthiness are mandatory, another component of automaticity in face processing. In Experiment 1a, participants rated faces displaying subtle happy, subtle angry, and neutral expressions on trustworthiness. Happy faces were rated as more trustworthy than neutral faces; angry faces were rated as less trustworthy. In Experiment 1b, participants learned eight identities, half showing subtle happy and half showing subtle angry expressions. They then rated neutral images of these same identities (plus four novel neutral faces) on trustworthiness. Multilevel modeling analyses showed that identities previously shown with subtle expressions of happiness were rated as more trustworthy than novel identities. There was no effect of previously seen subtle angry expressions on ratings of trustworthiness. Mandatory first impressions based on subtle facial expressions were also reflected in two ratings designed to assess real-world outcomes. Participants indicated that they were more likely to vote for identities that had posed happy expressions and more likely to loan them money. These findings demonstrate that first impressions of trustworthiness based on previously seen subtle happy, but not angry, expressions are mandatory and are likely to have behavioral consequences.
24

Kramer, Robin S. S., and Ellen M. Gardner. "Facial Trustworthiness and Criminal Sentencing: A Comment on Wilson and Rule (2015)." Psychological Reports 123, no. 5 (November 22, 2019): 1854–68. http://dx.doi.org/10.1177/0033294119889582.

Abstract:
Our first impressions of others, whether accurate or unfounded, have real-world consequences in terms of how we judge and treat those people. Previous research has suggested that criminal sentencing is influenced by the perceived facial trustworthiness of defendants in murder trials. In real cases, those who appeared less trustworthy were more likely to receive death rather than life sentences. Here, we carried out several attempts to replicate this finding, utilizing the original set of stimuli (Study 1), multiple images of each identity (Study 2), and a larger sample of identities (Study 3). In all cases, we found little support for the association between facial trustworthiness and sentencing. Furthermore, there was clear evidence that the specific image chosen to depict each identity had a significant influence on subsequent judgments. Taken together, our findings suggest that perceptions of facial trustworthiness have no real-world influence on sentencing outcomes in serious criminal cases.
25

Evtimov, Ivan, Pascal Sturmfels, and Tadayoshi Kohno. "FoggySight: A Scheme for Facial Lookup Privacy." Proceedings on Privacy Enhancing Technologies 2021, no. 3 (April 27, 2021): 204–26. http://dx.doi.org/10.2478/popets-2021-0044.

Abstract:
Advances in deep learning algorithms have enabled better-than-human performance on face recognition tasks. In parallel, private companies have been scraping social media and other public websites that tie photos to identities and have built up large databases of labeled face images. Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users. In this work, we tackle the problem of providing privacy from such face recognition systems. We propose and evaluate FoggySight, a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media. FoggySight’s core feature is a community protection strategy where users acting as protectors of privacy for others upload decoy photos generated by adversarial machine learning algorithms. We explore different settings for this scheme and find that it does enable protection of facial privacy – including against a facial recognition service with unknown internals.
26

Martins, Pedro, José Silvestre Silva, and Alexandre Bernardino. "Multispectral Facial Recognition in the Wild." Sensors 22, no. 11 (June 1, 2022): 4219. http://dx.doi.org/10.3390/s22114219.

Abstract:
This work proposes a multi-spectral face recognition system in an uncontrolled environment, aiming to identify or authenticate identities (people) through their facial images. Face recognition systems in uncontrolled environments have shown impressive performance improvements over recent decades. However, most are limited to the use of a single spectral band in the visible spectrum. The use of multi-spectral images makes it possible to collect information that is not obtainable in the visible spectrum when certain occlusions exist (e.g., fog or plastic materials) and in low- or no-light environments. The proposed work uses the scores obtained by face recognition systems in different spectral bands to make a joint final decision in identification. The evaluation of different methods for each of the components of a face recognition system allowed the most suitable ones for a multi-spectral face recognition system in an uncontrolled environment to be selected. The experimental results, expressed in Rank-1 scores, were 99.5% and 99.6% in the TUFTS multi-spectral database with pose variation and expression variation, respectively, and 100.0% in the CASIA NIR-VIS 2.0 database, indicating that the use of multi-spectral images in an uncontrolled environment is advantageous when compared with the use of single spectral band images.
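
The score-level fusion across spectral bands described above can be pictured as a weighted sum over per-band match scores; a minimal sketch, with band names and equal weights as illustrative assumptions:

```python
import numpy as np

def fuse_scores(band_scores, weights=None):
    # band_scores: {"visible": ..., "nir": ..., "thermal": ...}, each an
    # array of match scores over the gallery identities for one band.
    mats = np.stack(list(band_scores.values()))
    w = np.full(len(mats), 1 / len(mats)) if weights is None else np.asarray(weights)
    fused = (w[:, None] * mats).sum(axis=0)  # weighted sum of per-band scores
    return int(np.argmax(fused))             # Rank-1 identity decision
```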
27

Campanella, S., C. Hanoteau, X. Seron, F. Joassin, and R. Bruyer. "Categorical perception of unfamiliar facial identities, the face-space metaphor, and the morphing technique." Visual Cognition 10, no. 2 (February 2003): 129–56. http://dx.doi.org/10.1080/713756676.

28

Mills, Melissa. "Being a Musician: Musical Identity and the Adolescent Singer." Bulletin of the Council for Research in Music Education, no. 186 (October 1, 2010): 43–54. http://dx.doi.org/10.2307/41110433.

Abstract:
This study investigated six adolescents’ (ages 12-14) perceptions of musical identity as influenced by participation in a community children’s choir. Research questions focused on the role of the conductor, peers, and ensemble participation on students’ musical identities. Data collection included focus group interviews and individual interviews with choristers, their parents, the choir conductor, and one former choir member. Through an embedded analysis of student definitions of musicianship, an interesting dichotomy emerged. Despite participating in a rich musical experience, choristers did not equate these experiences with improving their individual musicianship. Additional emergent themes included the chorister’s strong opinions on the connection between external (e.g., facial expressions) and internal (e.g., feeling the music) expressions of musicianship, as well as their desire to be perceived as "normal" while maintaining their emerging musical identities.
29

Liu, Leyuan, Rubin Jiang, Jiao Huo, and Jingying Chen. "Self-Difference Convolutional Neural Network for Facial Expression Recognition." Sensors 21, no. 6 (March 23, 2021): 2250. http://dx.doi.org/10.3390/s21062250.

Abstract:
Facial expression recognition (FER) is a challenging problem due to the intra-class variation caused by subject identities. In this paper, a self-difference convolutional network (SD-CNN) is proposed to address the intra-class variation issue in FER. First, the SD-CNN uses a conditional generative adversarial network to generate the six typical facial expressions for the same subject in the testing image. Second, six compact and light-weighted difference-based CNNs, called DiffNets, are designed for classifying facial expressions. Each DiffNet extracts a pair of deep features from the testing image and one of the six synthesized expression images, and compares the difference between the deep feature pair. In this way, any potential facial expression in the testing image has an opportunity to be compared with the synthesized “Self”—an image of the same subject with the same facial expression as the testing image. As most of the self-difference features of the images with the same facial expression gather tightly in the feature space, the intra-class variation issue is significantly alleviated. The proposed SD-CNN is extensively evaluated on two widely-used facial expression datasets: CK+ and Oulu-CASIA. Experimental results demonstrate that the SD-CNN achieves state-of-the-art performance with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA, respectively. Moreover, the model size of the online processing part of the SD-CNN is only 9.54 MB (1.59 MB ×6), which enables the SD-CNN to run on low-cost hardware.
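
A minimal sketch of the self-difference idea above, simplifying each DiffNet to a small head that emits one match score per synthesized expression; all names are illustrative assumptions, not the paper's interfaces:

```python
import torch

def classify_expression(test_img, synth_imgs, backbone, heads):
    # synth_imgs: six images of the same subject, one per typical expression,
    # produced by the conditional GAN; heads: one small scorer per DiffNet.
    f_test = backbone(test_img)                       # (B, D) deep features
    logits = [head(f_test - backbone(s))              # self-difference feature
              for s, head in zip(synth_imgs, heads)]  # one score per expression
    return torch.cat(logits, dim=1).argmax(dim=1)     # best-matching expression
```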
30

Cho, Durkhyun, Jin Han Lee, and Il Hong Suh. "CLEANIR: Controllable Attribute-Preserving Natural Identity Remover." Applied Sciences 10, no. 3 (February 7, 2020): 1120. http://dx.doi.org/10.3390/app10031120.

Abstract:
We live in an era of privacy concerns. As smart devices such as smartphones, service robots and surveillance cameras spread, preservation of our privacy becomes one of the major concerns in our daily life. Traditionally, the problem was resolved by simple approaches such as image masking or blurring. While these provide effective ways to remove identities from images, there are certain limitations when it comes to a matter of recognition from the processed images. For example, one may want to get ambient information from scenes even when privacy-related information such as facial appearance is removed or changed. To address the issue, our goal in this paper is not only to modify identity from faces but also keeps facial attributes such as color, pose and facial expression for further applications. We propose a novel face de-identification method based on a deep generative model in which we design the output vector from an encoder to be disentangled into two parts: identity-related part and the rest representing facial attributes. We show that by solely modifying the identity-related part from the latent vector, our method effectively modifies the facial identity to a completely new one while the other attributes that are loosely related to personal identity are preserved. To validate the proposed method, we provide results from experiments that measure two different aspects: effectiveness of personal identity modification and facial attribute preservation.
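
The de-identification step described above amounts to replacing only the identity-related slice of the latent code; a minimal PyTorch sketch, where `encoder`, `decoder`, and the split point are illustrative assumptions:

```python
import torch

def de_identify(face, encoder, decoder, id_dim):
    z = encoder(face)                          # (B, D) latent code
    z_id, z_attr = z[:, :id_dim], z[:, id_dim:]
    new_id = torch.randn_like(z_id)            # sample a completely new identity
    return decoder(torch.cat([new_id, z_attr], dim=1))  # pose/expression kept
```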
31

Wevers, Rosa. "Unmasking Biometrics’ Biases." TMG Journal for Media History 21, no. 2 (November 1, 2018): 89. http://dx.doi.org/10.18146/2213-7653.2018.368.

Abstract:
The article investigates the role of identity and the body in biometric technologies, contesting the conception that biometrics are neutral. It discusses biometrics’ exclusionary effects with regards to gender, race, class and ability, among others, by unveiling its historical links to nineteenth-century pseudoscientific practices. It does so through an analysis of Zach Blas’ Facial Weaponization Suite, an artistic critique of this dominant conception that draws attention to biometrics’ contested history and its current implications for marginalised identities.
32

Maniyar, Huzaifa, Suneeta Veerappa Budihal, and Saroja V. Siddamal. "Persons facial image synthesis from audio with Generative Adversarial Networks." ECTI Transactions on Computer and Information Technology (ECTI-CIT) 16, no. 2 (May 28, 2022): 135–41. http://dx.doi.org/10.37936/ecticit.2022162.246995.

Abstract:
This paper proposes a framework using Generative Adversarial Networks (GANs) to synthesize a person's facial image from audio input. Image and speech are the two main sources of information exchange between two entities. In some data-intensive applications, a large amount of audio has to be translated into an understandable image format by an automated system, without human interference. This paper provides an end-to-end model for intelligible image reconstruction from an audio signal. The model uses a GAN architecture, which generates image features from audio waveforms for image synthesis. Based on the training dataset, the model produces synthesized facial images of the individual speaker identities. The images of labelled persons are generated using excitation signals, and the method obtained an accuracy of 96.88% for ungrouped data and 93.91% for grouped data.
33

Tüttenberg, Simone C., and Holger Wiese. "Learning own- and other-race facial identities: Testing implicit recognition with event-related brain potentials." Neuropsychologia 134 (November 2019): 107218. http://dx.doi.org/10.1016/j.neuropsychologia.2019.107218.

34

Günther, Manuel, Stefan Böhringer, Dagmar Wieczorek, and Rolf P. Würtz. "Reconstruction of images from Gabor graphs with applications in facial image processing." International Journal of Wavelets, Multiresolution and Information Processing 13, no. 04 (July 2015): 1550019. http://dx.doi.org/10.1142/s0219691315500198.

Abstract:
Graphs labeled with complex-valued Gabor jets are one of the important data formats for face recognition and the classification of facial images into medically relevant classes like genetic syndromes. We here present an interpolation rule and an iterative algorithm for the reconstruction of images from these graphs. This is especially important if graphs have been manipulated for information processing. One such manipulation is averaging the graphs of a single syndrome, another one building a composite face from the features of various individuals. In reconstructions of averaged graphs of genetic syndromes, the patients' identities are suppressed, while the properties of the syndromes are emphasized. These reconstructions from average graphs have a much better quality than averaged images.
35

Moser, Lucio, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, and Doug Roble. "Semi-supervised video-driven facial animation transfer for production." ACM Transactions on Graphics 40, no. 6 (December 2021): 1–18. http://dx.doi.org/10.1145/3478513.3480515.

Abstract:
We propose a simple algorithm for automatic transfer of facial expressions, from videos to a 3D character, as well as between distinct 3D characters through their rendered animations. Our method begins by learning a common, semantically-consistent latent representation for the different input image domains using an unsupervised image-to-image translation model. It subsequently learns, in a supervised manner, a linear mapping from the character images' encoded representation to the animation coefficients. At inference time, given the source domain (i.e., actor footage), it regresses the corresponding animation coefficients for the target character. Expressions are automatically remapped between the source and target identities despite differences in physiognomy. We show how our technique can be used in the context of markerless motion capture with controlled lighting conditions, for one actor and for multiple actors. Additionally, we show how it can be used to automatically transfer facial animation between distinct characters without consistent mesh parameterization and without engineered geometric priors. We compare our method with standard approaches used in production and with recent state-of-the-art models on single camera face tracking.
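
The supervised stage described above, a linear map from latent codes to animation coefficients, can be fit with ordinary least squares; a minimal sketch with illustrative names:

```python
import numpy as np

def fit_linear_mapping(Z, C):
    # Z: (N, D) latent encodings of rendered character frames;
    # C: (N, K) corresponding animation coefficients (supervised stage).
    W, *_ = np.linalg.lstsq(Z, C, rcond=None)
    return W

# At inference, encode actor footage into the shared latent space and
# regress the target character's animation coefficients:
# coeffs = encode(actor_frame) @ W
```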
36

Molnar, Joseph A., Nicholas Walker, Thomas N. Steele, Christopher Craig, Jeffrey Williams, Jeffrey E. Carter, and James H. Holmes. "Initial Experience With Autologous Skin Cell Suspension for Treatment of Deep Partial-Thickness Facial Burns." Journal of Burn Care & Research 41, no. 5 (March 2, 2020): 1045–51. http://dx.doi.org/10.1093/jbcr/iraa037.

Abstract:
Facial burns present a challenge in burn care, as hypertrophic scarring and dyspigmentation can interfere with patients’ personal identities, ocular and oral functional outcomes, and have long-term deleterious effects. The purpose of this study is to evaluate our initial experience with non-cultured, autologous skin cell suspension (ASCS) for the treatment of deep partial-thickness (DPT) facial burns. Patients were enrolled at a single burn center during a multicenter, prospective, single-arm, observational study involving the compassionate use of ASCS for the treatment of large total BSA (TBSA) burns. Treatment decisions concerning facial burns were made by the senior author. Facial burns were initially excised and treated with allograft. The timing of ASCS application was influenced by an individual’s clinical status; however, all patients were treated within 30 days of injury. Outcomes included subjective cosmetic parameters and the number of reoperations within 3 months. Five patients (4 males, 1 female) were treated with ASCS for DPT facial burns. Age ranged from 2.1 to 40.7 years (mean 18.2 ± 17.3 years). Average follow-up was 231.2 ± 173.1 days (range 63–424 days). Two patients required reoperation for partial graft loss within 3 months in areas of full-thickness injury. There were no major complications and one superficial hematoma. Healing and cosmetic outcomes were equivalent to, and sometimes substantially better than, outcomes typical of split-thickness autografting. Non-cultured, ASCS was successfully used to treat DPT facial burns containing confluent dermis with remarkable cosmetic outcomes. Treatment of DPT burns with ASCS may be an alternative to current treatments, particularly in patients prone to dyspigmentation, scarring sequelae, and with limited donor sites.
37

Ackerman, Ada. "Redonner visage aux gueules cassées. Sculpture et chirurgie plastique pendant et après la Première Guerre mondiale." RACAR : Revue d'art canadienne 41, no. 1 (September 30, 2016): 5–21. http://dx.doi.org/10.7202/1037548ar.

Abstract:
During the First World War, French, British, and US sculptors dedicated their creative practice and knowledge to making masks for soldiers with facial injuries, thus allying art and science in an attempt to restore the most essential aspect of the soldiers’ identities. As artistic resources were mobilized to counter the destructive effects of the war, this new kind of sculpture engendered myths and fantasies about the artists’ power. This article argues that ultimately, though, the practice of mask-making was used as a strategy that benefitted the preservation of the prevailing economic and social order.
38

Cheie, Lavinia, and Laura Visu-Petra. "Relating Individual Differences in Trait-Anxiety to Memory Functioning in Young Children." Journal of Individual Differences 33, no. 2 (January 2012): 109–18. http://dx.doi.org/10.1027/1614-0001/a000079.

Abstract:
There is extensive evidence indicating cognitive biases at several stages of information processing in high-anxious children. Little research, however, has investigated a potential memory bias toward negative information in high-anxious young children. We studied immediate and delayed verbal recall as well as delayed visual recognition in a sample of high-trait-anxious (HA) and low-trait-anxious (LA) preschoolers (N = 76, mean age = 65 months), using stimuli containing task-irrelevant emotional valence (positive, negative, neutral). The findings revealed that, compared to their LA counterparts, HA preschoolers displayed (1) a tendency to be less accurate in the immediate verbal recall task, (2) poorer recall of negative words in the immediate condition and poorer recall of neutral words in the delayed condition, (3) impaired delayed recognition of identities with happy facial expressions and a tendency to better recognize identities expressing anger. Results are discussed considering the dynamic interplay between personality, emotion and cognitive factors during early development.
39

Temyrkanova, E. K., and A. Saurambekova. "YOLO NETWORK TRAINING FOR FACE RECOGNITION IN MEDICAL MASKS." PHYSICO-MATHEMATICAL SERIES 2, no. 336 (April 15, 2021): 125–30. http://dx.doi.org/10.32014/2021.2518-1726.31.

Abstract:
The detection of face masks is a very important issue for safety and the prevention of Covid-19. In the medical field, the mask reduces the potential risk of infection from an infected person, regardless of whether they have symptoms or not. Thus, the detection of masks on the face becomes a very important and complex task. The efficiency of facial recognition systems can deteriorate significantly due to occlusions such as medical masks, hats, facial hair, and sunglasses. Currently, there are a number of different methods for recognizing objects in an image; one of the most popular is convolutional neural networks and their modifications. This article provides a brief description of the YOLO network, an example of training it to detect faces with and without a mask, and the results of this work. The recognition model has been trained from different pre-trained object recognition models with the same data and evaluated in multiple environments to achieve good accuracy for limited identities.
40

Hadi, Hutama, Hasdi Radiles, Rika Susanti, and Mulyono Mulyono. "Human Face Identification Using Haar Cascade Classifier and LBPH Based on Lighting Intensity." Indonesian Journal of Artificial Intelligence and Data Mining 5, no. 1 (May 14, 2022): 13. http://dx.doi.org/10.24014/ijaidm.v5i1.15245.

Abstract:
The problem in implementing online learning during the Covid-19 era is the lack of internet access for video streaming, especially in small towns or villages. The solution idea is to minimize the video bandwidth quota by only showing emoticons. The first step of the process is that the system must be able to lock onto the face area to be translated. This study aims to identify areas of the human face based on camera captures. The research was conducted using the Haar cascade classifier algorithm to recognize the facial area of the captured image; the Local Binary Pattern Histogram algorithm then recognizes the identity of the face. Lighting scenarios were used as a distracting effect on the image. The results, based on 30 sets of test images, showed that the system was able to recognize facial identities with up to 62% accuracy in bright conditions, 51% in normal conditions, and 46% in dark conditions.
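
This two-stage pipeline maps directly onto OpenCV's stock APIs; a minimal sketch (the file paths are illustrative, and the LBPH recognizer must be trained on labeled grayscale crops before `predict` is called):

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()  # needs opencv-contrib-python
# recognizer.train(face_crops, labels)  # grayscale crops + integer ID labels

frame = cv2.imread("capture.jpg")  # hypothetical camera capture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5):
    label, distance = recognizer.predict(gray[y:y + h, x:x + w])
    print(label, distance)  # smaller distance = closer LBPH match
```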
41

Nakanishi, Yuko J., and Jeffrey Western. "Evaluation of Biometric Technologies for Access Control at Transportation Facilities and Border Crossings." Transportation Research Record: Journal of the Transportation Research Board 1938, no. 1 (January 2005): 1–8. http://dx.doi.org/10.1177/0361198105193800101.

Abstract:
To ensure that only authorized individuals (legitimate workers, travelers, and visitors) enter a transportation facility or border crossing, their identities must be ascertained. Because manual procedures are time-consuming, resource intensive, and vulnerable to human error and manipulation, the use of biometric technologies should be considered. This paper discusses several biometric technologies (fingerprint recognition, iris recognition, facial recognition, and hand geometry) and assesses their feasibility for use in access control at transportation facilities and border crossings. The advantages and disadvantages of the technologies are provided, as are cost, accuracy, and other performance data. Potential privacy and data issues are also discussed.
42

Cao, Chen, Tomas Simon, Jin Kyu Kim, Gabe Schwartz, Michael Zollhoefer, Shun-Suke Saito, Stephen Lombardi, et al. "Authentic volumetric avatars from a phone scan." ACM Transactions on Graphics 41, no. 4 (July 2022): 1–19. http://dx.doi.org/10.1145/3528223.3530143.

Abstract:
Creating photorealistic avatars of existing people currently requires extensive person-specific data capture, which is usually only accessible to the VFX industry and not the general public. Our work aims to address this drawback by relying only on a short mobile phone capture to obtain a drivable 3D head avatar that matches a person's likeness faithfully. In contrast to existing approaches, our architecture avoids the complex task of directly modeling the entire manifold of human appearance, aiming instead to generate an avatar model that can be specialized to novel identities using only small amounts of data. The model dispenses with low-dimensional latent spaces that are commonly employed for hallucinating novel identities, and instead, uses a conditional representation that can extract person-specific information at multiple scales from a high resolution registered neutral phone scan. We achieve high quality results through the use of a novel universal avatar prior that has been trained on high resolution multi-view video captures of facial performances of hundreds of human subjects. By fine-tuning the model using inverse rendering we achieve increased realism and personalize its range of motion. The output of our approach is not only a high-fidelity 3D head avatar that matches the person's facial shape and appearance, but one that can also be driven using a jointly discovered shared global expression space with disentangled controls for gaze direction. Via a series of experiments we demonstrate that our avatars are faithful representations of the subject's likeness. Compared to other state-of-the-art methods for lightweight avatar creation, our approach exhibits superior visual quality and animateability.
43

Cheie, Lavinia, Laura Visu-Petra, and Mircea Miclea. "Trait anxiety, visual search and memory for facial identities in preschoolers: An investigation using task-irrelevant emotional information." Procedia - Social and Behavioral Sciences 33 (2012): 622–26. http://dx.doi.org/10.1016/j.sbspro.2012.01.196.

44

Deruelle, Christine, and Joël Fagot. "Categorizing facial identities, emotions, and genders: Attention to high- and low-spatial frequencies by children and adults." Journal of Experimental Child Psychology 90, no. 2 (February 2005): 172–84. http://dx.doi.org/10.1016/j.jecp.2004.09.001.

45

Hendrawan, Aria, Basworo Ardi Pramono, and Whisnumurti Adhiwibowo. "PENGGUNAAN MODEL HIDDEN MARKOV DAN METODE NEURAL NETWORK SEBAGAI PENERAPAN TEKNOLOGI PENGENALAN WAJAH." ScientiCO : Computer Science and Informatics Journal 2, no. 1 (July 3, 2019): 13. http://dx.doi.org/10.22487/j26204118.2019.v2.i1.12173.

Abstract:
The human face recognition system is one of the fields that is quite developed at this time, with applications in security (e.g., permission to access a room), surveillance, and the search for individual identities in police databases. The face recognition approach aims to detect faces in 2-dimensional images and sequential video images, and includes many methods such as local, global, and hybrid approaches. The Hidden Markov Model (HMM) is another promising method that works well for images with different lighting variations, facial expressions, and orientations. An HMM is a set of statistical models used to characterize signal properties. An artificial neural network-based approach learns from image examples and relies on techniques from machine learning to find relevant facial image characteristics. The characteristics studied take the form of discriminant functions (i.e., non-linear decision surfaces), which are then used for face recognition. In this study, an application is developed to compare the Hidden Markov Model and the neural network method as face recognition algorithms.
APA, Harvard, Vancouver, ISO and other styles
46

Tsai, Yu-Shiuan, and Si-Jie Chen. "A Study on the Application of Walking Posture for Identifying Persons with Gait Recognition." Applied Sciences 12, no. 15 (August 7, 2022): 7909. http://dx.doi.org/10.3390/app12157909.

Full text of the source
Abstract:
Face recognition is currently the most commonly used technology for identifying people, and it achieves high accuracy. However, an image does not necessarily contain a face, and face recognition cannot be used when no face is visible. Even without facial information, we may still want to establish a person's identity, which requires features other than the face. Since each person's behavior differs somewhat, we aim to learn what distinguishes one specific human body from others and use that behavior for identification, an idea made practical by advances in deep learning. We therefore combined OpenPose with an LSTM for personal identification and found that a person's walking posture is a feasible cue for identifying their identity. At present, the environment in which judgments can be made is limited in terms of height, and there are restrictions on distance; using various angles and distances will be explored in the future. This method can also address half-body identification and is helpful for finding people.
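A minimal sketch of the pipeline the authors describe, assuming pose keypoints have already been extracted by OpenPose (here, 25 joints with x and y coordinates per frame): an LSTM consumes the keypoint sequence and outputs identity logits. The shapes and synthetic input are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GaitLSTM(nn.Module):
    def __init__(self, n_joints=25, hidden=128, n_ids=10):
        super().__init__()
        # Each frame: 25 joints x (x, y) = 50 input features.
        self.lstm = nn.LSTM(input_size=n_joints * 2, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_ids)

    def forward(self, seq):                  # seq: (batch, frames, 50)
        _, (h, _) = self.lstm(seq)
        return self.head(h[-1])              # logits over known identities

model = GaitLSTM()
walk = torch.randn(4, 60, 50)                # 4 clips, 60 frames of keypoints
logits = model(walk)
print(logits.argmax(dim=1))                   # predicted identity per clip
```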
APA, Harvard, Vancouver, ISO and other styles
47

Schwarz, Franziska, Klaus Schwarz, and Reiner Creutzburg. "Improving Detection of Manipulated Passport Photos - Training Course for Border Control Inspectors to Detect Morphed Facial Passport Photos - Part I: Introduction, State-of-the-Art and Preparatory Tests and Experiments." Electronic Imaging 2021, no. 3 (June 18, 2021): 136–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.3.mobmu-136.

Full text of the source
Abstract:
In recent years, ID controllers have observed an increase in the use of fraudulently obtained ID documents [1]. This often involves deception during the application process to obtain a genuine document with a manipulated passport photo. One of the methods used by fraudsters is the presentation of a morphed facial image. Face morphing is used to assign multiple identities to a biometric passport photo. It is possible to modify the photo so that two or more persons, usually the known applicant and one or more unknown companions, can use the passport to pass through border control [2]. In this way, persons prohibited from crossing a border can cross it unnoticed using a face morphing attack and thus acquire a different identity. The face morphing attack exploits a weakness in the identity card application process so that a genuine identity document is issued with a morphed facial image. A survey among experts at the Security Printers Conference revealed that a relevant number of at least 1,000 passports with morphed facial images had been detected in the last five years in Germany alone [1]. Furthermore, there are indications of a high number of unreported cases. This presumed high number of unreported cases can also be explained by the lack of capabilities for detecting morphed photographs. Such identity cards would be detected if controllers could recognize the morphed facial images. Various studies have shown that the human eye has a minimal ability to recognize morphed faces as such [2], [3], [4], [5], [6]. This work consists of two parts. Both parts are based on the complete development of a training course for passport control officers to detect morphed facial images. Part one covers the design and the first test trials of how the training course must be structured to achieve the desired goals and thus improve passport inspectors' detection of morphed facial images. The second part of this work will include the complete training course and the evaluation of its effectiveness.
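The core of such an attack can be illustrated with a deliberately simplified sketch: alpha-blending two aligned passport photos. Production-grade morphs additionally warp facial landmarks into correspondence before blending; the file names below are placeholders, and the images are assumed to be the same size and pre-aligned.

```python
import cv2

applicant = cv2.imread("applicant.png")    # assumed same size and aligned
accomplice = cv2.imread("accomplice.png")

alpha = 0.5                                # equal contribution of both faces
morph = cv2.addWeighted(applicant, alpha, accomplice, 1 - alpha, 0)
cv2.imwrite("morph.png", morph)
```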
APA, Harvard, Vancouver, ISO and other styles
48

Perez, Alfredo J., Sherali Zeadally, Scott Griffith, Luis Y. Matos Garcia, and Jaouad A. Mouloud. "A User Study of a Wearable System to Enhance Bystanders’ Facial Privacy." IoT 1, no. 2 (October 10, 2020): 198–217. http://dx.doi.org/10.3390/iot1020013.

Full text of the source
Abstract:
The privacy of users and information is becoming increasingly important with the growth and pervasive use of mobile devices such as wearables, mobile phones, drones, and Internet of Things (IoT) devices. Many of these mobile devices are now equipped with cameras, which enable users to take pictures and record videos whenever they wish. In many such cases, bystanders' privacy is not considered, and as a result, audio and video of bystanders are often captured without their consent. We present results from a user study in which 21 participants were asked to use a wearable system called FacePET, developed to enhance bystanders' facial privacy by providing a way for bystanders to protect their own privacy rather than relying on external systems for protection. While past work in the literature has focused on the privacy perceptions of bystanders photographed in public or shared spaces, there has been no research focused on user perceptions of bystander-based wearable devices for enhancing privacy. Thus, in this work, we focus on user perceptions of the FacePET device and similar wearables for enhancing bystanders' facial privacy. In our study, we found that 16 participants would use FacePET or similar devices to enhance their facial privacy, and 17 participants agreed that if smart glasses had features to conceal users' identities, they would become more popular.
APA, Harvard, Vancouver, ISO and other styles
49

Natu, Vaidehi S., Fang Jiang, Abhijit Narvekar, Shaiyan Keshvari, Volker Blanz, and Alice J. O'Toole. "Dissociable Neural Patterns of Facial Identity across Changes in Viewpoint." Journal of Cognitive Neuroscience 22, no. 7 (July 2010): 1570–82. http://dx.doi.org/10.1162/jocn.2009.21312.

Full text of the source
Abstract:
We examined the neural response patterns for facial identity independent of viewpoint and for viewpoint independent of identity. Neural activation patterns for identity and viewpoint were collected in an fMRI experiment. Faces appeared in identity-constant blocks, with variable viewpoint, and in viewpoint-constant blocks, with variable identity. Pattern-based classifiers were used to discriminate neural response patterns for all possible pairs of identities and viewpoints. To increase the likelihood of detecting distinct neural activation patterns for identity, we tested maximally dissimilar “face”–“antiface” pairs and normal face pairs. Neural response patterns for four of six identity pairs, including the “face”–“antiface” pairs, were discriminated at levels above chance. A behavioral experiment showed accord between perceptual and neural discrimination, indicating that the classifier tapped a high-level visual identity code. Neural activity patterns across a broad span of ventral temporal (VT) cortex, including fusiform gyrus and lateral occipital areas (LOC), were required for identity discrimination. For viewpoint, five of six viewpoint pairs were discriminated neurally. Viewpoint discrimination was most accurate with a broad span of VT cortex, but the neural and perceptual discrimination patterns differed. Less accurate discrimination of viewpoint, more consistent with human perception, was found in right posterior superior temporal sulcus, suggesting redundant viewpoint codes optimized for different functions. This study provides the first evidence that it is possible to dissociate neural activation patterns for identity and viewpoint independently.
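As an illustration of pairwise pattern classification of this kind, the following sketch trains a linear classifier to discriminate two identities from synthetic "voxel" response patterns using cross-validation. It is not the study's pipeline; real analyses operate on preprocessed fMRI data, and the shapes and offsets here are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_blocks, n_voxels = 20, 500
# Voxel patterns for identity A and identity B, with a small mean offset.
X = np.vstack([rng.normal(0.0, 1.0, (n_blocks, n_voxels)),
               rng.normal(0.2, 1.0, (n_blocks, n_voxels))])
y = np.array([0] * n_blocks + [1] * n_blocks)

# Above-chance cross-validated accuracy suggests discriminable patterns.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```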
APA, Harvard, Vancouver, ISO and other styles
50

Kramer, Robin S. S. "Within-person variability in men’s facial width-to-height ratio." PeerJ 4 (March 10, 2016): e1801. http://dx.doi.org/10.7717/peerj.1801.

Full text of the source
Abstract:
Background. In recent years, researchers have investigated the relationship between facial width-to-height ratio (FWHR) and a variety of threat and dominance behaviours. The majority of methods involved measuring FWHR from 2D photographs of faces. However, individuals can vary dramatically in their appearance across images, which poses an obvious problem for reliable FWHR measurement.
Methods. I compared the effect sizes due to the differences between images taken with unconstrained camera parameters (Studies 1 and 2) or varied facial expressions (Study 3) to the effect size due to identity, i.e., the differences between people. In Study 1, images of Hollywood actors were collected from film screenshots, providing the least amount of experimental control. In Study 2, controlled photographs, which only varied in focal length and distance to camera, were analysed. In Study 3, images of different facial expressions, taken in controlled conditions, were measured.
Results. Analyses revealed that simply varying the focal length and distance between the camera and face had a relatively small effect on FWHR, and therefore may prove less of a problem if uncontrolled in study designs. In contrast, when all camera parameters (including the camera itself) were allowed to vary, the effect size due to identity was greater than the effect of image selection, but the ranking of the identities was significantly altered by the particular image used. Finally, I found significant changes to FWHR when people posed with four of seven emotional expressions in comparison with neutral, and the effect size due to expression was larger than differences due to identity.
Discussion. The results of these three studies demonstrate that even when head pose is limited to forward facing, changes to the camera parameters and a person's facial expression have sizable effects on FWHR measurement. Therefore, analysing images that fail to constrain some of these variables can lead to noisy and unreliable results, and also to relationships caused by previously unconsidered confounds.
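For reference, FWHR is commonly computed from 2D landmarks as bizygomatic width divided by upper-face height (roughly, the vertical distance from the brow to the upper lip). A minimal sketch with placeholder pixel coordinates:

```python
def fwhr(left_zygion, right_zygion, brow, upper_lip):
    """Each argument is an (x, y) pixel coordinate on the face image."""
    width = abs(right_zygion[0] - left_zygion[0])   # bizygomatic width
    height = abs(upper_lip[1] - brow[1])            # upper-face height
    return width / height

print(fwhr((100, 250), (240, 250), (170, 180), (170, 280)))  # -> 1.4
```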
APA, Harvard, Vancouver, ISO and other styles