
Journal articles on the topic 'Identification faciale'


Consult the top 50 journal articles for your research on the topic 'Identification faciale.'


1

Demeule, Caroline. "Psychological Issues of Firearms Suicide Attempts." Recherches en psychanalyse 20, no. 2 (February 8, 2016): 140a–149a. http://dx.doi.org/10.3917/rep.020.0140a.

Abstract:
A suicide attempt by firearm attacks the face, the symbolic support of identity. The facial explosion can be understood as a monstrification, but it is above all a monstration, exposed to the gaze of the other. Drawing on a clinical analysis, the hypothesis of a partial melancholic identification with the dead face accounts for the fragility of the face's symbolic inscription and for the paradoxical psychic rebirth afforded by the facial shattering: it is by destroying the face, the object of a failing gaze in early life, that the subject can move beyond this identification and feel that he exists.
2

Zribi, Ahmed, and Jacques Faure. "Apport de l’analyse céphalométrique tridimensionnelle dans l’étude des déterminants morphologiques de l’esthétique faciale." L'Orthodontie Française 85, no. 1 (March 2014): 51–58. http://dx.doi.org/10.1051/orthodfr/2013071.

Abstract:
Studies of aesthetics are numerous, but few have benefited from the new 3D imaging techniques. The aim of this work is to determine which cephalometric criteria are the most decisive for facial aesthetics, by identifying the strongest correlations between the aesthetic score and the three-dimensional cephalometric values of the Cepha 3Dt analysis. A group of 91 patients (aged 10 to 60) was rated by 50 randomly selected judges (aged 12 to 65) using an analogue scale. The strongest correlations were then sought between the aesthetic scores and the 3D cephalometric values for the overall sample and for the Class II and Class III subsamples. Facial aesthetics thus appears to be linked above all to the anteroposterior dimension, to the maxillo-mandibular discrepancy, and to the relationships of the anterior zones (alveolar or basal), with the alveolar level taking priority over the basal level and especially over the overall architecture. In the Class II group, the sagittal discrepancy and mandibular divergence contribute equally to determining facial aesthetics.
3

Desbois, Claire. "La reconstitution faciale en identification médico-legale. Les critères objectifs de ressemblance." Études sur la mort 156, no. 2 (September 14, 2022): 83–96. http://dx.doi.org/10.3917/eslm.156.0083.

4

Peterson, Rex A., and Dean L. Johnston. "Facile Identification of the Facial Nerve Branches." Clinics in Plastic Surgery 14, no. 4 (October 1987): 785–88. http://dx.doi.org/10.1016/s0094-1298(20)31502-9.

5

Chandrasekhar, Tadi, and Ch Sumanth Kumar. "Improved Facial Identification Using Adaptive Neuro-Fuzzy Logic Inference System." Indian Journal Of Science And Technology 16, no. 13 (April 4, 2023): 1014–20. http://dx.doi.org/10.17485/ijst/v16i13.1833.

6

Anta, Juan Ángel. "Identificación facial de emociones: utilidad en los cuerpos y fuerzas de seguridad (policías locales)." Vox juris 33, no. 1 (June 30, 2017): 21–30. http://dx.doi.org/10.24265/voxjuris.2017.v33n1.03.

7

Salvadi, Shoba Rani, D. Nagendra Rao, and S. Vathsal. "Visual Mapping for Gender Identification from Facial Images using BAM and DON." Indian Journal Of Science And Technology 16, no. 17 (May 2, 2023): 1295–301. http://dx.doi.org/10.17485/ijst/v16i17.349.

8

Yoshino, Mineo. "Cranio-Facial Identification." Japanese journal of science and technology for identification 2, no. 2 (1997): 45–55. http://dx.doi.org/10.3408/jasti.2.45.

9

Pham, Van-Huy, Diem-Phuc Tran, and Van-Dung Hoang. "Personal Identification Based on Deep Learning Technique Using Facial Images for Intelligent Surveillance Systems." International Journal of Machine Learning and Computing 9, no. 4 (August 2019): 465–70. http://dx.doi.org/10.18178/ijmlc.2019.9.4.827.

10

Fiorentini, Chiara, Susanna Schmidt, and Paolo Viviani. "The Identification of Unfolding Facial Expressions." Perception 41, no. 5 (January 1, 2012): 532–55. http://dx.doi.org/10.1068/p7052.

Abstract:
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating individual diagnostic facial actions over time, and does not require perceiving the full apex configuration.
11

Stevanov, Zorica, and Suncica Zdravkovic. "Identification based on facial parts." Psihologija 40, no. 1 (2007): 37–56. http://dx.doi.org/10.2298/psi0701037s.

Abstract:
Two opposing views dominate the face identification literature, one suggesting that the face is processed as a whole and another suggesting analysis based on parts. Our research tried to establish which of these is the dominant strategy, and our results point to analysis based on parts. Faces were covered with a mask and participants uncovered different parts, one at a time, in an attempt to identify the person. Already at the level of a single facial feature, such as the mouth or an eye and the top of the nose, some observers were capable of establishing the identity of a familiar face. Identification is exceptionally successful when a small assembly of facial parts is visible, such as an eye, an eyebrow and the top of the nose. Some facial parts are not very informative on their own but do enhance recognition when given as part of such an assembly. A novel finding here is the importance of the top of the nose for face identification. Additionally, observers show a preference for the left side of the face. Typically, subjects view the elements in the following order: left eye, left eyebrow, right eye, lips, region between the eyes, right eyebrow, region between the eyebrows, left cheek, right cheek. When observers cannot see the eyes, eyebrows or top of the nose, they go first for the lips, then the region between the eyebrows, the region between the eyes, the left cheek, the right cheek and finally the chin.
12

Farrior, Jay B., and Hector Santini. "Facial Nerve Identification in Children." Otolaryngology–Head and Neck Surgery 93, no. 2 (April 1985): 173–76. http://dx.doi.org/10.1177/019459988509300209.

13

Lee, Eric, Thomas Whalen, John Sakalauskas, Glen Baigent, Chandra Bisesar, Andrew McCarthy, Glenda Reid, and Cynthia Wotton. "Suspect identification by facial features." Ergonomics 47, no. 7 (June 10, 2004): 719–47. http://dx.doi.org/10.1080/00140130310001629720.

14

Righini, C. A. "Facial nerve identification during parotidectomy." European Annals of Otorhinolaryngology, Head and Neck Diseases 129, no. 4 (August 2012): 214–19. http://dx.doi.org/10.1016/j.anorl.2011.12.002.

15

Saha, Priya, Gouri Nath, Mrinal Kanti Bhowmik, Debotosh Bhattacharjee, and Barin Kumar De. "NEI Facial Expressions for Identification." IERI Procedia 4 (2013): 358–63. http://dx.doi.org/10.1016/j.ieri.2013.11.051.

16

Weeratna, Jayanie, and Ajith Tennakoon. "Management of The Dead and Missing People in the Easter Bombings in Colombo, Sri Lanka." Journal of Clinical and Health Sciences 6, no. 1(Special) (June 30, 2021): 59. http://dx.doi.org/10.24191/jchs.v6i1(special).13163.

Abstract:
Identification of the victims is one of the most important initial steps in the management of a mass disaster. Comparison of ante-mortem and post-mortem fingerprints (ridgeology), dental data, and DNA profiles are the recognized primary identification methods for Disaster Victim Identification (DVI). However, facial recognition and personal belongings are the most widely used tools of identification in large disasters. A series of bombings hit Sri Lanka on the morning of 21st April 2019; around 131 people died in the city of Colombo. Most identifications were achieved through visual recognition, with a minor percentage by odontology, genetics, and fingerprints. This paper describes the procedure adopted in response to the disaster, highlighting the importance of advance preparedness, inter-institutional cooperation, an empathetic approach to caring for grieving families, and the procedure to adopt for visual recognition in DVI.
17

Taylor, Alisdair J. G., and Louise Bryant. "The Effect of Facial Attractiveness on Facial Expression Identification." Swiss Journal of Psychology 75, no. 4 (October 2016): 175–81. http://dx.doi.org/10.1024/1421-0185/a000183.

Abstract:
Emotion perception studies typically explore how judgments of facial expressions are influenced by invariant characteristics such as sex or by variant characteristics such as gaze. However, few studies have considered the importance of factors that are not easily categorized as invariant or variant. We investigated one such factor, attractiveness, and the role it plays in judgments of emotional expression. We asked 26 participants to categorize different facial expressions (happy, neutral, and angry) that varied with respect to facial attractiveness (attractive, unattractive). Participants were significantly faster when judging expressions on attractive as compared to unattractive faces, but there was no interaction between facial attractiveness and facial expression, suggesting that the attractiveness of a face does not play an important role in the judgment of happy or angry facial expressions.
18

Taylor, Alisdair James Gordon, and Maria Jose. "Physical Aggression and Facial Expression Identification." Europe’s Journal of Psychology 10, no. 4 (November 28, 2014): 650–59. http://dx.doi.org/10.5964/ejop.v10i4.816.

Abstract:
Social information processing theories suggest that aggressive individuals may exhibit hostile perceptual biases when interpreting others' behaviour. This hypothesis was tested in the present study, which investigated the effects of physical aggression on facial expression identification in a sample of healthy participants. Participants were asked to judge the expressions of faces presented to them and to complete a self-report measure of aggression. Relative to low physically aggressive participants, high physically aggressive participants were more likely to mistake non-angry facial expressions for angry facial expressions (misattribution errors), supporting the idea of a hostile predisposition. These differences were not explained by gender or response times. There were no differences between aggression groups in identifying angry expressions in general (misperceived errors). These findings add support to the idea that aggressive individuals exhibit hostile perceptual biases when interpreting facial expressions.
19

Alvergne, Alexandra, Fanny Perreau, Allan Mazur, Ulrich Mueller, and Michel Raymond. "Identification of visual paternity cues in humans." Biology Letters 10, no. 4 (April 2014): 20140063. http://dx.doi.org/10.1098/rsbl.2014.0063.

Abstract:
Understanding how individuals identify their relatives has implications for the evolution of social behaviour. Kinship cues might be based on familiarity, but in the face of paternity uncertainty and costly paternal investment, other mechanisms such as phenotypic matching may have evolved. In humans, paternal recognition of offspring and subsequent discriminative paternal investment have been linked to father–offspring facial phenotypic similarities. However, the extent to which paternity detection is impaired by environmentally induced facial information is unclear. We used 27 portraits of fathers and their adult sons to quantify the level of paternity detection under experimental treatments that manipulate the location, type and quantity of visible facial information. We found that (i) the lower part of the face, which changes most with development, does not contain paternity cues, (ii) paternity can be detected even if relational information within the face is disrupted and (iii) the signal depends on the presence of specific information rather than on its quantity. Taken together, the results support the view that environmental effects have little influence on the detection of paternity from facial similarities. This suggests that the cognitive dispositions enabling the facial detection of kinship relationships ignore genetically irrelevant facial information.
20

KOÇ UÇAR, Habibe, and Esra SARIGEÇİLİ. "Çocuklarda idiyopatik periferik fasiyal sinir paralizisinde steroid tedavisinin etkinliği ve prognostik faktörlerin belirlenmesi." Cukurova Medical Journal 47, no. 2 (June 30, 2022): 660–71. http://dx.doi.org/10.17826/cumj.1053502.

Abstract:
Purpose: This study investigates the aetiology and clinical features of children with idiopathic peripheral facial palsy (IPFP), identifies probable prognostic factors, and compares the efficacy of corticosteroid therapy. Materials and Methods: A total of 80 patients with newly diagnosed IPFP were included. Demographic and clinical features and laboratory findings were reviewed, including age, gender, House-Brackmann Facial Nerve Grading System (HBGS) grade at admission and follow-up, and the dosage and onset of steroid treatment. Patients were assigned to three groups: Group 1 received 1 mg/kg/day oral prednisolone, Group 2 received 2 mg/kg/day oral prednisolone, and Group 3 received no oral steroid treatment. Results: A total of 80 children (41 girls and 39 boys) with a median age of 11 years were included. Of all patients, 78.8% (n=63) showed complete recovery. Admission after more than 24 hours reduced the likelihood of early recovery (ER) by a factor of 10 (1/0.10), while patients with an HBGS grade of 5 were 33.3 times (1/0.03) less likely to achieve ER than patients with HBGS grades of 2 to 3. Finally, steroid treatment at 2 mg/kg/day increased the probability of early recovery 8.38-fold. Conclusion: The prognosis of IPFP in children was very good. The prognostic factors favouring early recovery were an HBGS grade of 2 or 3 on the 21st day and receiving steroid treatment at 2 mg/kg/day within the first 24 hours.
21

Kim, Seunghyun, Byeong Seon An, and Eui Chul Lee. "Comparative Analysis of AI-Based Facial Identification and Expression Recognition Using Upper and Lower Facial Regions." Applied Sciences 13, no. 10 (May 15, 2023): 6070. http://dx.doi.org/10.3390/app13106070.

Abstract:
The COVID-19 pandemic has significantly impacted society, having led to a lack of social skills in children who became used to interacting with others while wearing masks. To analyze this issue, we investigated the effects of masks on face identification and facial expression recognition, using deep learning models for these operations. The results showed that when using the upper or lower facial regions for face identification, the upper facial region allowed for an accuracy of 81.36%, and the lower facial region allowed for an accuracy of 55.52%. Regarding facial expression recognition, the upper facial region allowed for an accuracy of 39% compared to 49% for the lower facial region. Furthermore, our analysis was conducted for a number of facial expressions, and specific emotions such as happiness and contempt were difficult to distinguish using only the upper facial region. Because this study used a model trained on data generated from human labeling, it is assumed that the effects on humans would be similar. Therefore, this study is significant because it provides engineering evidence of a decline in facial expression recognition; however, wearing masks does not cause difficulties in identification.
22

., Abhilasha Shukla. "FACIAL EXPRESSION IDENTIFICATION BY USING FEATURES OF SALIENT FACIAL LANDMARKS." International Journal of Research in Engineering and Technology 05, no. 08 (August 25, 2016): 313–19. http://dx.doi.org/10.15623/ijret.2016.0508053.

23

K, Sneha, Manoj V, Darshan Gowda M S, Girish S, Harsha C, and Giri Gowrav R. "Advancements in Face Recognition through Machine Learning Techniques." Journal of Data Engineering and Knowledge Discovery 1, no. 1 (April 18, 2024): 26–31. http://dx.doi.org/10.46610/jodekd.2024.v01i01.004.

Abstract:
The majority of disciplines in the modern world rely heavily on face recognition. The identification of fraud and security is one of the most popular disciplines. The process of positioning the facial landmarks on the face to provide precise points for facial recognition is known as facial alignment. Identification and face detection are crucial for detecting fraud. Consequently, the identification of profile and semi-profile facial features is essential for security reasons. The facial-aligned dataset can be used to get the Hourglass model's face alignment, which improves face recognition accuracy when employing the Haar-Cascade Classifier. The precision and accuracy rates are used to gauge performance. When compared to facial recognition using Principal Component Analysis and Support Vector Machines (SVM), it produces better results. SVM, PCA, and deep learning approaches have been utilized in recent years for face recognition; nevertheless, the accuracy rate varies depending on the situation.
24

Yoshino, Mineo. "Recent Advances in Facial Image Identification." Japanese journal of science and technology for identification 7, no. 1 (2002): 1–17. http://dx.doi.org/10.3408/jasti.7.1.

25

Okamoto, Noriyoshi, and Osamu Nakamura. "Personal Identification by a Facial Image." IEEJ Transactions on Electronics, Information and Systems 122, no. 10 (2002): 1705–12. http://dx.doi.org/10.1541/ieejeiss1987.122.10_1705.

26

Chennamma, H. R., and Lalitha Rangarajan. "Mugshot Identification from Manipulated Facial Images." International Journal of Machine Intelligence 4, no. 1 (May 30, 2012): 407. http://dx.doi.org/10.9735/0975-2927.4.1.407-407.

27

Shapiro, Peter N., and Steven Penrod. "Meta-analysis of facial identification studies." Psychological Bulletin 100, no. 2 (1986): 139–56. http://dx.doi.org/10.1037/0033-2909.100.2.139.

28

Kovera, Margaret Bull, Steven D. Penrod, Carolyn Pappas, and Debra L. Thill. "Identification of computer-generated facial composites." Journal of Applied Psychology 82, no. 2 (1997): 235–46. http://dx.doi.org/10.1037/0021-9010.82.2.235.

29

Borude, Priyanka R., S. T. Gandhe, P. A. Dhulekar, and G. M. Phade. "Identification and Tracking of Facial Features." Procedia Computer Science 49 (2015): 2–10. http://dx.doi.org/10.1016/j.procs.2015.04.220.

30

Lynnerup, Niels, Marie Andersen, and Helle Petri Lauritsen. "Facial image identification using Photomodeler®." Legal Medicine 5, no. 3 (September 2003): 156–60. http://dx.doi.org/10.1016/s1344-6223(03)00054-3.

31

Oswald, Karl M., and Marjorie L. Coleman. "Memory demands on facial composite identification." Applied Cognitive Psychology 21, no. 3 (2007): 345–60. http://dx.doi.org/10.1002/acp.1276.

32

Liu, Xiaoqian, and Xiaoyang Wang. "Automatic Identification of a Depressive State in Primary Care." Healthcare 10, no. 12 (November 22, 2022): 2347. http://dx.doi.org/10.3390/healthcare10122347.

Abstract:
The Center for Epidemiologic Studies Depression Scale (CES-D) performs well in screening for depression in primary care. However, alternatives are being sought because it contains too many items. With the popularity of social media platforms, facial movement can be recorded ecologically. Given that nonverbal behaviours, including facial movement, are associated with a depressive state, this study aims to establish an automatic depression-recognition model that can easily be used in primary healthcare. We integrated facial activities and gaze behaviours to build a machine learning model (Kernel Ridge Regression, KRR), comparing different algorithms and feature sets to obtain the best model. The results showed that facial and gaze features together predicted better than facial features alone. Of all the models we tried, the ridge model with a periodic kernel performed best, with an R-squared (R²) value of 0.43 and a Pearson correlation coefficient (r) of 0.69 (p < 0.001). The most relevant variables (e.g., gaze directions and facial action units) were also identified in the present study.
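The kernel ridge regression with a periodic kernel mentioned in this abstract can be sketched in a few lines. This is a minimal illustration only: the kernel parameters, feature layout, and synthetic data below are assumptions for demonstration, not the authors' actual pipeline.

```python
import numpy as np

def periodic_kernel(A, B, period=1.0, length=1.0):
    """Periodic (ExpSineSquared-style) kernel matrix between rows of A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / length ** 2)

def krr_fit(X, y, lam=1e-2, **kw):
    # Closed-form kernel ridge solution: alpha = (K + lam*I)^-1 y
    K = periodic_kernel(X, X, **kw)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, **kw):
    return periodic_kernel(X_new, X_train, **kw) @ alpha

# Toy example: rows stand in for per-subject facial action unit / gaze
# features, the target for a depression scale score (both synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))               # 40 subjects, 6 behavioural features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
alpha = krr_fit(X, y)
pred = krr_predict(X, alpha, X)
r = np.corrcoef(pred, y)[0, 1]             # in-sample Pearson r
```

In practice the model would be evaluated with cross-validation rather than in-sample correlation, which is what the reported R² of 0.43 presumably reflects.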
33

Wu, Dan, Ming Fang, and Feiran Fu. "Person Re-Identification Net of Spindle Net Fusing Facial Feature." Xibei Gongye Daxue Xuebao/Journal of Northwestern Polytechnical University 37, no. 5 (October 2019): 1070–76. http://dx.doi.org/10.1051/jnwpu/20193751070.

Abstract:
In the field of person re-identification, feature extraction has focused mainly on the whole pedestrian or on limbs and torso, and facial features are rarely used. This work integrates facial features into the network to improve recognition accuracy: the MTCNN face-extraction network is introduced into the Spindle Net person re-identification framework, and the weight of facial features within the overall pedestrian representation is increased. The experimental results show that Rank-1 accuracy on the CUHK01, CUHK03, VIPeR, PRID, i-LIDS, and 3DPeS datasets is 7% higher than that of Spindle Net.
34

Duchovičová, Soňa, Barbora Zahradníková, and Peter Schreiber. "Facial Composite System Using Real Facial Features." Research Papers Faculty of Materials Science and Technology Slovak University of Technology 22, no. 35 (December 1, 2014): 9–15. http://dx.doi.org/10.2478/rput-2014-0029.

Abstract:
Facial feature point identification plays an important role in many facial image applications, such as face detection, face recognition, facial expression classification, etc. This paper describes the early stages of research in the field of evolving a facial composite, primarily the main steps of face detection and facial feature extraction. Technological issues are identified and possible strategies to solve some of the problems are proposed.
35

Khairunnisa, Khairunnisa, Rismayanti Rismayanti, and Rully Alhari. "ANALISIS IDENTIFIKASI WAJAH MENGGUNAKAN GABOR FILTER DAN SKIN MODEL." JURNAL TEKNOLOGI INFORMASI 2, no. 2 (February 1, 2019): 150. http://dx.doi.org/10.36294/jurti.v2i2.430.

Abstract:
Identification of faces in digital images is a complex process that requires a combination of methods, and its complexity grows with the increasing demand for high accuracy. This research analyses the combination of a Skin Color Model and Gabor Filters for identifying faces in digital images. The Skin Color Model is used to separate the face area from the image based on skin-colour values; the face area is then feature-extracted using a Gabor Filter. The highest accuracy obtained was 93.63% and the lowest about 82.45%. A combination of Skin Color Models and Gabor Filters can thus serve as an alternative method for identifying faces in digital images. Keywords: Digital Image, Face Identification, Skin Color Model, Gabor Filter.
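The two stages this abstract combines can be sketched as follows. The YCbCr thresholds and Gabor parameters below are common textbook defaults chosen for illustration; the paper's actual values are not given here.

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-coloured pixels using classic YCbCr thresholds."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real-valued Gabor kernel: an oriented Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# Toy usage: segment a synthetic image, then a bank of gabor_kernel() calls at
# several orientations would be convolved with the masked face region.
img = np.zeros((32, 32, 3))
img[8:24, 8:24] = (200, 140, 120)    # a roughly skin-coloured patch
mask = skin_mask(img)
k = gabor_kernel(theta=np.pi / 4)
```

A real pipeline would build a filter bank over several orientations and scales and feed the pooled responses to a classifier.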
36

Rodríguez-Azar, Paula Ivone, José Manuel Mejía-Muñoz, and Carlos Alberto Ochoa-Zezzatti. "Recognition of Facial Expressions Using Vision Transformer." Científica 26, no. 2 (December 2022): 1–9. http://dx.doi.org/10.46842/ipn.cien.v26n2a02.

Abstract:
The identification of emotions through the reading of non-verbal signals, such as gestures and facial expressions, has generated a new field of application in Facial Expression Recognition (FER) and human-computer interaction. Recognizing facial expressions could make industrial equipment safer through social intelligence, with clear applications in industrial security. This research therefore classifies images from the FER-2013 database, which covers seven emotions: anger, disgust, fear, joy, sadness, surprise, and neutral. A Vision Transformer architecture was implemented for expression recognition, obtaining 87% precision and a top test accuracy of 99%.
37

Qin, Bosheng, Letian Liang, Jingchao Wu, Qiyao Quan, Zeyu Wang, and Dongxiao Li. "Automatic Identification of Down Syndrome Using Facial Images with Deep Convolutional Neural Network." Diagnostics 10, no. 7 (July 17, 2020): 487. http://dx.doi.org/10.3390/diagnostics10070487.

Abstract:
Down syndrome is one of the most common genetic disorders. The distinctive facial features of Down syndrome provide an opportunity for automatic identification. Recent studies showed that facial recognition technologies have the capability to identify genetic disorders. However, there is a paucity of studies on the automatic identification of Down syndrome with facial recognition technologies, especially using deep convolutional neural networks. Here, we developed a Down syndrome identification method utilizing facial images and deep convolutional neural networks, which quantified the binary classification problem of distinguishing subjects with Down syndrome from healthy subjects based on unconstrained two-dimensional images. The network was trained in two main steps: First, we formed a general facial recognition network using a large-scale face identity database (10,562 subjects) and then trained (70%) and tested (30%) a dataset of 148 Down syndrome and 257 healthy images curated through public databases. In the final testing, the deep convolutional neural network achieved 95.87% accuracy, 93.18% recall, and 97.40% specificity in Down syndrome identification. Our findings indicate that the deep convolutional neural network has the potential to support the fast, accurate, and fully automatic identification of Down syndrome and could add considerable value to the future of precision medicine.
38

Bach, D. R., K. Buxtorf, D. Grandjean, and W. K. Strik. "The influence of emotion clarity on emotional prosody identification in paranoid schizophrenia." Psychological Medicine 39, no. 6 (November 12, 2008): 927–38. http://dx.doi.org/10.1017/s0033291708004704.

Abstract:
Background: Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit. Method: We used clarity-graded emotional prosodic stimuli without semantic content, and tested 25 in-patients with paranoid schizophrenia, 25 healthy control participants and 25 depressive in-patients on emotional prosody identification. Facial expression identification was used as a control task. Results: Patients with paranoid schizophrenia performed worse than both control groups in identifying emotional prosody, with no specific deficit in any individual emotion category. This deficit was present in high-clarity but not in low-clarity stimuli. Performance in facial control tasks was also impaired, with identification of emotional facial expression being a better predictor of emotional prosody identification than illness-related factors. Of those, negative symptoms emerged as the best predictor for emotional prosody identification. Conclusions: This study suggests a general deficit in identifying high-clarity emotional cues. This finding is in line with the hypothesis that schizophrenia is characterized by high noise in internal representations and by increased fluctuations in cerebral networks.
APA, Harvard, Vancouver, ISO, and other styles
39

Mathialagan, Arulalan. "POSTERIOR AURICULAR NERVE – A NOVEL LANDMARK FOR IDENTIFICATION OF THE FACIAL NERVE IN SUPERFICIAL PAROTIDECTOMY." UP STATE JOURNAL OF OTOLARYNGOLOGY AND HEAD AND NECK SURGERY Volume 9, upjohns/volume9/Issue2 (December 14, 2021): 10–14. http://dx.doi.org/10.36611/upjohns/volume9/issue2/2.

Full text
Abstract:
ABSTRACT Background: Facial nerve identification and preservation is the most critical step in parotid surgery. Though there are described landmarks to locate the facial nerve trunk, they have individual variations. The posterior auricular nerve (PAN) is a branch of the facial nerve and is always present; it can be followed to reach the facial nerve trunk. MATERIALS AND METHODS A retrospective cohort study analysing parotidectomies performed from January 2017 to November 2018 at our tertiary referral center. RESULTS A total of 23 parotidectomies were performed, of which 18 cases were pleomorphic adenoma. In four cases of pleomorphic adenoma we could clearly identify and preserve the PAN. Using the PAN as the landmark, the facial nerve trunk was located, and all its peripheral branches were dissected and preserved. PAN identification narrows down the target area of dissection to identify the facial nerve trunk. CONCLUSION The posterior auricular branch of the facial nerve can be used as a standard landmark in parotid surgery, as it almost always leads to the facial nerve trunk. CLINICAL SIGNIFICANCE Though identification of the PAN may be difficult in some cases, effort must be made to identify it under magnification. If done meticulously, the PAN can be an ideal landmark for identifying the facial nerve in parotid surgery. KEYWORDS Parotid surgery, Superficial parotidectomy, Posterior auricular nerve, Facial nerve.
APA, Harvard, Vancouver, ISO, and other styles
40

Thirumala, Chinmayi. "AI-Based Facial Emotion Recognition." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 22, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31413.

Full text
Abstract:
Facial expression, age, and gender detection have received a lot of interest in recent years due to their wide range of applications in fields including healthcare, security, marketing, and entertainment. With the proliferation of artificial intelligence (AI) techniques, especially deep learning, the accuracy and efficiency of facial analysis systems have improved significantly. This paper provides a comprehensive analysis of the most recent advances in AI-based facial expression, age, and gender identification algorithms. We explore the constraints and limitations of current techniques, such as the necessity for big annotated datasets, biases in training data, and interpretability issues in deep learning models. This study seeks to give academics, practitioners, and policymakers a thorough grasp of the most recent state-of-the-art methodologies and trends in AI-based facial emotion, age, and gender identification, thereby supporting future advancements and responsible usage of these technologies. Keywords: Facial expression, age, gender, identification, deep learning, artificial intelligence
APA, Harvard, Vancouver, ISO, and other styles
41

Koodalsamy, Banumalar, Manikandan Bairavan Veerayan, and Vanaja Narayanasamy. "Face Recognition using Deep Learning." E3S Web of Conferences 387 (2023): 05001. http://dx.doi.org/10.1051/e3sconf/202338705001.

Full text
Abstract:
Identifying a person primarily relies on their facial features, which distinguish even identical twins. As a result, facial recognition and identification become crucial for distinguishing individuals. Biometric authentication technology, specifically facial recognition systems, is utilized to verify one’s identity. This technology has gained popularity in modern applications, such as phone unlock systems, criminal identification systems, and home security systems. Because it relies on a facial image rather than external factors like a card or key, this method is considered more secure. The process of recognizing a person involves two primary steps: face detection and face identification. This article delves into the concept of developing a face recognition system utilizing Python’s OpenCV library through deep learning. Due to its exceptional accuracy, deep learning is an ideal method for facial recognition. The proposed approach uses the Haar cascade technique for face detection, followed by face identification. To begin with, facial features are extracted through a combination of CNN methods and the local binary pattern histogram (LBPH) algorithm. For attendance to be marked as “present,” the check-in and check-out times of the detected face must be legitimate. If not, the face will be displayed as “unknown.”
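The LBPH feature step described above can be illustrated with a plain-Python computation of the 8-neighbor local binary pattern code and its histogram. This is a simplified sketch of the underlying idea, not OpenCV's LBPH implementation:

```python
def lbp_code(img, y, x):
    """8-neighbor local binary pattern code for pixel (y, x).
    Each neighbor >= center contributes one bit, clockwise from top-left."""
    c = img[y][x]
    neighbors = [img[y-1][x-1], img[y-1][x], img[y-1][x+1],
                 img[y][x+1],   img[y+1][x+1], img[y+1][x],
                 img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of a
    grayscale image given as a list of rows of intensities."""
    hist = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

In LBPH face recognition, such histograms are computed per grid cell of the face region, concatenated, and compared between images; a perfectly flat patch yields the all-ones code 255 for its interior pixel.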
APA, Harvard, Vancouver, ISO, and other styles
42

Sadahide, Ayako, Hideki Itoh, Ken Moritou, Hirofumi Kameyama, Ryoya Oda, Hitoshi Tabuchi, and Yoshiaki Kiuchi. "A Clinical Trial Evaluating the Efficacy of Deep Learning-Based Facial Recognition for Patient Identification in Diverse Hospital Settings." Bioengineering 11, no. 4 (April 15, 2024): 384. http://dx.doi.org/10.3390/bioengineering11040384.

Full text
Abstract:
Background: Facial recognition systems utilizing deep learning techniques can improve the accuracy of facial recognition technology. However, it remains unclear whether these systems should be available for patient identification in a hospital setting. Methods: We evaluated a facial recognition system using deep learning and the built-in camera of an iPad to identify patients. We tested the system under different conditions to assess its authentication scores (AS) and determine its efficacy. Our evaluation included 100 patients in four postures: sitting, supine, and lateral positions, with and without masks, and under nighttime sleeping conditions. Results: Our results show that the unmasked certification rate of 99.7% was significantly higher than the masked rate of 90.8% (p < 0.0001). In addition, we found that the authentication rate exceeded 99% even during nighttime sleeping. Furthermore, the facial recognition system was safe and acceptable for patient identification within a hospital environment. Even for patients wearing masks, we achieved a 100% success rate for authentication regardless of illumination if they were sitting with their eyes open. Conclusions: This is the first systematical study to evaluate facial recognition among hospitalized patients under different situations. The facial recognition system using deep learning for patient identification shows promising results, proving its safety and acceptability, especially in hospital settings where accurate patient identification is crucial.
APA, Harvard, Vancouver, ISO, and other styles
43

Zherdev, I. Yu, and V. A. Barabanschikov. "Facial Expression Identification with Intrasaccadic Stimulus Substitution." Experimental Psychology (Russia) 14, no. 2 (2021): 68–84. http://dx.doi.org/10.17759/exppsy.2021140205.

Full text
Abstract:
An extreme temporal condition for a visual identification task is examined. A gaze-contingent eye-tracking study was used to assess how a presaccadic stimulus influences one presented during a reactive saccade. A strong forward masking effect is found. The identification rate of the second image is below chance, but still in accordance with previous studies where no masking was present. Identification rate, erratic responses, statistical connection with the alternative response (2AFC task), and physical properties of saccades are similar to a simple intrasaccadic identification task [3; 5]. Two aspects of transsaccadic visual perception, sharing a common temporal structure, were hypothesized: a sensory aspect (geometric primitive detection) and a gnostic one (naturalistically valid object identification).
APA, Harvard, Vancouver, ISO, and other styles
44

Phillips, P. Jonathon, Amy N. Yates, Ying Hu, Carina A. Hahn, Eilidh Noyes, Kelsey Jackson, Jacqueline G. Cavazos, et al. "Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms." Proceedings of the National Academy of Sciences 115, no. 24 (May 29, 2018): 6171–76. http://dx.doi.org/10.1073/pnas.1721355115.

Full text
Abstract:
Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.
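The fusion step described here, averaging examiners' rating-based identity judgments per face pair, can be sketched in a few lines (the rating scale and values below are hypothetical illustrations, not the study's data):

```python
def fuse_ratings(ratings_by_examiner):
    """Average per-item identity ratings across examiners.
    ratings_by_examiner: equal-length rating lists, one per examiner
    (e.g. a same/different-identity scale such as -3 .. +3)."""
    n = len(ratings_by_examiner)
    return [sum(item) / n for item in zip(*ratings_by_examiner)]

# Two hypothetical examiners rating three face pairs:
fused = fuse_ratings([[3, -2, 1], [1, -3, 2]])
print(fused)  # [2.0, -2.5, 1.5]
```

Averaging reduces the variance of individual judgments, which is why the paper reports that fusion both raises and stabilizes accuracy.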
APA, Harvard, Vancouver, ISO, and other styles
45

Pomarol-Clotet, E., F. Hynes, C. Ashwin, E. T. Bullmore, P. J. McKenna, and K. R. Laws. "Facial emotion processing in schizophrenia: a non-specific neuropsychological deficit?" Psychological Medicine 40, no. 6 (September 24, 2009): 911–19. http://dx.doi.org/10.1017/s0033291709991309.

Full text
Abstract:
Background: Identification of facial emotions has been found to be impaired in schizophrenia but there are uncertainties about the neuropsychological specificity of the finding. Method: Twenty-two patients with schizophrenia and 20 healthy controls were given tests requiring identification of facial emotion, judgement of the intensity of emotional expressions without identification, familiar face recognition and the Benton Facial Recognition Test (BFRT). The schizophrenia patients were selected to be relatively intellectually preserved. Results: The patients with schizophrenia showed no deficit in identifying facial emotion, although they were slower than the controls. They were, however, impaired on judging the intensity of emotional expression without identification. They showed impairment in recognizing familiar faces but not on the BFRT. Conclusions: When steps are taken to reduce the effects of general intellectual impairment, there is no deficit in identifying facial emotions in schizophrenia. There may, however, be a deficit in judging emotional intensity. The impairment found in naming familiar faces is consistent with other evidence of semantic memory impairment in the disorder.
APA, Harvard, Vancouver, ISO, and other styles
46

Kniaz, V. V., and Z. N. Smirnova. "MUSIC-ELICITED EMOTION IDENTIFICATION USING OPTICAL FLOW ANALYSIS OF HUMAN FACE." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-5/W6 (May 18, 2015): 27–32. http://dx.doi.org/10.5194/isprsarchives-xl-5-w6-27-2015.

Full text
Abstract:
Human emotion identification from image sequences is highly demanded nowadays. The range of possible applications can vary from an automatic smile shutter function of consumer grade digital cameras to Biofied Building technologies, which enable communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. Obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.
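The feature-speed estimate described above reduces to frame-to-frame displacement over time once the feature positions are tracked. A minimal finite-difference sketch (the coordinates and frame rate are hypothetical, not the authors' LBP-based tracker):

```python
import math

def feature_speeds(track, fps):
    """Per-frame speed (pixels/second) of one tracked facial feature.
    track: list of (x, y) positions, one per frame; fps: frames per second."""
    dt = 1.0 / fps
    return [math.hypot(x1 - x0, y1 - y0) / dt
            for (x0, y0), (x1, y1) in zip(track, track[1:])]

# A feature moving 3 px right and 4 px up between two frames at 25 fps:
print(feature_speeds([(10, 10), (13, 14)], fps=25))  # ~125.0 px/s
```

Stacking such speeds and positions across all tracked features gives a vector like the "output facial emotion vector" the abstract describes.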
APA, Harvard, Vancouver, ISO, and other styles
47

White, David, P. Jonathon Phillips, Carina A. Hahn, Matthew Hill, and Alice J. O'Toole. "Perceptual expertise in forensic facial image comparison." Proceedings of the Royal Society B: Biological Sciences 282, no. 1814 (September 7, 2015): 20151292. http://dx.doi.org/10.1098/rspb.2015.1292.

Full text
Abstract:
Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces.
APA, Harvard, Vancouver, ISO, and other styles
48

Kamel, Peter Victor, Ahmed Saad Ahmed, Usama Saeed Imam, Ahmed Safaa Ahmed, and Sherif El Prince Sayed. "Different techniques for identification of facial nerve during superficial parotidectomy." Egyptian Journal of Surgery 43, no. 2 (March 22, 2024): 510–14. http://dx.doi.org/10.4103/ejs.ejs_315_23.

Full text
Abstract:
Background Parotidectomy is a common surgical procedure for the treatment of benign and malignant lesions of the parotid gland. Identification of the facial nerve trunk is essential during surgery of the parotid gland to avoid facial nerve injury. A comprehensive knowledge of its anatomy and meticulous dissection are the keys for the identification of the facial nerve trunk and its branches. Aim To compare the traditional antegrade parotidectomy with the retrograde approach in identification of the facial nerve during superficial parotidectomy, and to determine the best anatomical landmark, the time to exploration of the facial nerve, outcomes, facial nerve complications, duration of surgery, patient satisfaction, as well as other complications. Methods Twelve patients diagnosed with parotid gland neoplasms who had undergone superficial parotidectomy were recruited and assessed for eligibility at the General Surgery Department, Beni-Suef University Hospital. Patients were divided according to the surgical technique into two equal groups, group A (the antegrade dissection group) and group B (the retrograde dissection group); follow-up was 6 months. Results There were no statistically significant differences between the two groups regarding pain, paresthesia and pathology postoperation (P value>0.05). A longer mean operation time was observed in the antegrade dissection group in comparison with the retrograde dissection group (2.06±0.75 and 1.61±0.31 h, respectively), which was statistically insignificant (P value>0.05). There was a statistically significant increase in facial nerve injury among patients in the antegrade dissection group in comparison with the retrograde dissection group (P value=0.046). There was no statistically significant difference between techniques regarding hospital stay duration and complications three months postoperation (P value>0.05).
Conclusion The retrograde facial nerve dissection technique performed better than the classical antegrade technique in superficial parotidectomy within this study.
APA, Harvard, Vancouver, ISO, and other styles
49

Kinney, Sam E., and Richard Prass. "Facial Nerve Dissection by use of Acoustic (Loudspeaker) Facial EMG Monitoring." Otolaryngology–Head and Neck Surgery 95, no. 4 (November 1986): 458–63. http://dx.doi.org/10.1177/019459988609500407.

Full text
Abstract:
The development of the surgical microscope in 1953, and the subsequent development of microsurgical instrumentation, signaled the beginning of modern-day acoustic neuroma surgery. Preservation of facial nerve function and total tumor removal are the goals of all acoustic neuroma surgery. The refinement of the translabyrinthine removal of acoustic neuromas by Dr. William House significantly improved preservation of facial nerve function. This is made possible by the anatomic identification of the facial nerve at the lateral end of the internal auditory canal. When the surgery is accomplished from a suboccipital or retrosigmoid approach, the facial nerve may be identified at the brain stem or within the internal auditory canal. Identifying the facial nerve from the posterior approach is not as anatomically precise as from the lateral approach through the labyrinth. The use of a facial nerve stimulator can greatly facilitate identification of the facial nerve in these procedures.
APA, Harvard, Vancouver, ISO, and other styles
50

Nelson, Monica A., and Megan M. Hodge. "Effects of Facial Paralysis and Audiovisual Information on Stop Place Identification." Journal of Speech, Language, and Hearing Research 43, no. 1 (February 2000): 158–71. http://dx.doi.org/10.1044/jslhr.4301.158.

Full text
Abstract:
This study investigated how listeners' perceptions of bilabial and lingua-alveolar voiced stops in auditory (A) and audiovisual (AV) presentation modes were influenced by articulatory function in a girl with bilateral facial paralysis (BFP) and a girl with normal facial movement (NFM). The Fuzzy Logic Model of Perception (FLMP) was used to make predictions about listeners' identifications of stop place based on assumptions about the nature (clear, ambiguous, or conflicting) of the A or AV cues produced by each child during /b/ and /d/ CV syllables. As predicted, (a) listeners' identification scores for NFM were very high and reliable, regardless of presentation mode or stop place, (b) listeners' identification scores for BFP were high for lingua-alveolar place, regardless of presentation mode, but more variable and less reliable than for NFM; significantly lower (overall at a chance level) for bilabial place in the A mode; and lowest for bilabial place in the AV mode. Conflicting visual cues for stop place for BFP's productions of /bV/ syllables influenced listeners' perceptions, resulting in most of her bilabial syllables being misidentified in the AV mode. F2 locus equations for each child's /bV/ and /dV/ syllables showed patterns similar to those reported by previous investigators, but with less differentiation between stop place for BFP than NFM. These acoustic results corresponded to the perceptual results obtained. (That is, when presented with only auditory information, on average, listeners perceived BFP's target /b/ syllables to be near the boundary between /b/ and /d/.)
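The FLMP referenced above combines the auditory and visual support for a speech category multiplicatively. A minimal sketch of the standard two-alternative form (the support values below are illustrative, not fitted to this study's data):

```python
def flmp_response(a, v):
    """Fuzzy Logic Model of Perception, two-alternative form.
    a, v: truth values in [0, 1] for the auditory and visual support
    of one alternative (e.g. /b/); the competitor (e.g. /d/) receives
    the complements (1-a) and (1-v). Returns the predicted probability
    of choosing the first alternative."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# A clear auditory /b/ cue paired with a conflicting visual cue
# pulls the predicted response toward chance:
print(round(flmp_response(0.9, 0.2), 3))  # 0.692
```

This multiplicative integration is why, in the study, conflicting visual cues from the speaker with facial paralysis could drive listeners' bilabial identifications below the auditory-only level.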
APA, Harvard, Vancouver, ISO, and other styles