A selection of scholarly literature on the topic "Face Analysi"

Format your source citation in APA, MLA, Chicago, Harvard, and other styles


Browse the lists of current articles, books, dissertations, conference papers, and other scholarly sources on the topic "Face Analysi".

Next to each work in the list of references you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Face Analysi"

1

Salmela, Viljami, Ilkka Muukkonen, Jussi Numminen, and Kaisu Ölander. "Spatiotemporal dynamics of face processing network studied with combined multivariate EEG and fMRI analysis." Journal of Vision 17, no. 10 (August 31, 2017): 1263. http://dx.doi.org/10.1167/17.10.1263.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Muqorobin, Muqorobin, and Nendy Akbar Rozaq Rais. "Analysis of the Role of Information Systems Technology in Lecture Learning during the Corona Virus Pandemic." International Journal of Computer and Information System (IJCIS) 1, no. 1 (August 27, 2020): 47–51. http://dx.doi.org/10.29040/ijcis.v1i2.15.

Full text of the source
Abstract:
At this time the spread of the Covid-19 coronavirus is sweeping the world, and Indonesia has also been affected, especially in education, where teaching and learning is usually carried out face-to-face in the classroom. As a result of the pandemic, teaching and learning must be conducted online, and information systems technology plays a significant role in lecture learning. This study aims to analyze a model of campus learning conditions and the role of information systems technology in college learning amid the Covid-19 pandemic at STMIK Sinar Nusantara Surakarta. The research method consists of observations and literature studies to obtain the data and information used in the research. The results indicate that information technology has a very important role in the implementation of online distance learning during the pandemic, through online media such as Google Classroom, WhatsApp, and Zoom. Among these, Google Classroom is the most widely used medium for sharing materials and assignments (55.9%), while for video-conference lectures Google Meet has the most users (70.6%). The analysis of the online learning score yields 44.1%. Based on these data, information systems technology plays an important role and supports the teaching and learning process amid the Covid-19 pandemic.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Wiskott, Laurenz. "Phantom faces for face analysis." Pattern Recognition 30, no. 6 (June 1997): 837–46. http://dx.doi.org/10.1016/s0031-3203(96)00132-x.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Michele de Oliveira, Sandi, and Nieves Hernández-Flores. "Desafíos interpretativos en el análisis de la imagen sociocultural." Textos en Proceso 1, no. 1 (December 1, 2015): 1–15. http://dx.doi.org/10.17710/tep.2015.1.1.1oli.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Koleva, Emiliya, and Neli Baeva. "A Comparative Analysis of Assessment Results From Face-To-Face and Online Exams." Mathematics and Informatics LXV, no. 4 (August 30, 2022): 335–43. http://dx.doi.org/10.53656/math2022-4-1-aco.

Full text of the source
Abstract:
In this study, a comparative analysis of students' performance on a face-to-face and an online exam is presented. The students involved in the research were trained and evaluated by the same examiner, and several statistical tests were performed using statistical analysis software. The research confirms the hypothesis that there is a difference between the two evaluations. A comparison of the grades from the two exams showed a linear relationship between them: the results of the two exams are dependent, and the results of the online exam are slightly higher than those of the face-to-face exam.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Nikolaievskyi, O. Yu, O. V. Skliarenko, and A. I. Sidorchuk. "ANALYSIS AND COMPARISON OF FACE DETECTION APIS." Telecommunication and information technologies, no. 4 (2019): 39–45. http://dx.doi.org/10.31673/2412-4338.2019.043945.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Phillips, Ian. "Object files and unconscious perception: a reply to Quilty-Dunn." Analysis 80, no. 2 (November 9, 2019): 293–301. http://dx.doi.org/10.1093/analys/anz046.

Full text of the source
Abstract:
A wealth of cases – most notably blindsight and priming under inattention or suppression – have convinced philosophers and scientists alike that perception occurs outside awareness. In recent work (Phillips 2016a, 2018; Phillips and Block 2017; Peters et al. 2017), I dispute this consensus, arguing that any putative case of unconscious perception faces a dilemma. The dilemma divides over how absence of awareness is established. If subjective reports are used, we face the problem of the criterion: the concern that such reports underestimate conscious experience (Eriksen 1960, Holender 1986, Peters and Lau 2015). If objective measures are used, we face the problem of attribution: the concern that the case does not involve genuine individual-level perception. Quilty-Dunn (2019) presents an apparently compelling example of unconscious perception due to Mitroff et al. (2005) which, he contends, evades this dilemma. The case is fascinating. However, as I here argue, it does not escape the dilemma’s clutches.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Scharp, Kevin. "Shrieking in the face of vengeance." Analysis 78, no. 3 (February 6, 2018): 454–63. http://dx.doi.org/10.1093/analys/anx163.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Moore, A. W. "Not to be Taken at Face Value." Analysis 69, no. 1 (January 1, 2009): 116–25. http://dx.doi.org/10.1093/analys/ann040.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Del Líbano, Mario, Manuel G. Calvo, Andrés Fernández-Martín, and Guillermo Recio. "Discrimination between smiling faces: Human observers vs. automated face analysis." Acta Psychologica 187 (June 2018): 19–29. http://dx.doi.org/10.1016/j.actpsy.2018.04.019.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Dissertations on the topic "Face Analysi"

1

DAGNES, NICOLE. "3D Human Face Analysis for recognition applications and motion capture." Doctoral thesis, Politecnico di Torino, 2020. http://hdl.handle.net/11583/2790163.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

OLIVETTI, ELENA CARLOTTA. "When 3D geometrical face analysis meets maxillofacial surgery-a methodology for patients affected by dental malocclusion." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2963954.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Lee, Jinho. "Synthesis and analysis of human faces using multi-view, multi-illumination image ensembles." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1133366279.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Kafetzi, Evi. "L'Ethos dans l'Argumentation : le cas du face à face Sarkozy / Royal 2007." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0053/document.

Full text of the source
Abstract:
In search of effectiveness and influence, every candidate who stands for presidential elections attempts to create and present to the audience a self-image consistent with the electors' expectations concerning a future head of state's profile. This attractive self-image created through discourse, called ethos in rhetoric, is an integral part of argumentation, alongside its other components, logos and pathos. Political discourse, as a vector of important stakes, constitutes the ground of identity construction par excellence. This work explores communication strategies in argumentative activity, and particularly in the televised political debate. The data consist of the televised face-to-face debate of 2 May 2007 between Nicolas Sarkozy and Ségolène Royal, on the eve of the second round of the French presidential election. What I propose in this work is to draw up the rules and mechanisms that govern the making of a televised self-image by politicians, those practitioners of persuasion, in order to achieve their ends. The linguistic tools that the two opponents use during the televised duel studied here, to present a self-image consistent with an "ideal" presidential model, are analysed one by one. In this way, with a better knowledge of what goes on behind the scenes of audiovisual rhetoric, the elector-viewer becomes master of his decision, takes responsibility for his choice, and learns to be wary of the feelings and impressions that practitioners of persuasion inspire in him.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Bordei, Cristina. "Face analysis using polynomials." Thesis, Poitiers, 2016. http://www.theses.fr/2016POIT2259/document.

Full text of the source
Abstract:
As one of the most active and visible research topics in computer vision, pattern recognition and biometrics, facial analysis has been extensively studied in the past two decades. The work in this thesis presents novel techniques that use polynomial-basis texture representations for facial analysis. The first part of the thesis is dedicated to the integration of polynomial bases into Active Appearance Models, a set of statistical tools that has proved very efficient in modeling faces. First we propose a way to use the coefficients obtained from polynomial projections in the appearance modeling. Then, in order to reduce model complexity, we propose to select and use the strongest polynomial coefficients as the texture representation. Finally we show how, in addition to serving as the texture representation, polynomial coefficients can be used in a gradient-descent algorithm, since polynomial decomposition is equivalent to a filter bank. The second part of the thesis concerns the use of polynomial bases for detecting interest points and areas, and as a descriptor for facial expression recognition. We start by presenting an algorithm for accurate image keypoint localization inspired by techniques for detecting singularities in a vector field. Our approach consists of two major steps, the computation of an image vector field of normals and the selection of keypoints within that field, both carried out in a multi-scale, multi-resolution scheme. Finally we show how polynomial bases can be used to extract information about facial expressions: since polynomial coefficients provide a precise multi-scale, multi-orientation analysis and handle redundancy efficiently, they are used as descriptors in a facial expression classification algorithm.
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Patrick, William Charles. "Investigation, Analysis, and Modeling of Longwall Face-to-Face Transfers." Diss., This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-06092008-112841/.

Full text of the source
Abstract:
Thesis (Ph. D.)--Virginia Polytechnic Institute and State University, 1993.
Vita. Abstract. Attached pocket for diagrams. Includes bibliographical references (leaves 155-162). Also available via the Internet.
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Al-Dahoud, Ahmad. "The computational face for facial emotion analysis: Computer based emotion analysis from the face." Thesis, University of Bradford, 2018. http://hdl.handle.net/10454/17384.

Full text of the source
Abstract:
Facial expressions are considered the most revealing way of understanding the human psychological state during face-to-face communication. It is believed that a more natural interaction between humans and machines can be achieved through a detailed understanding of the different facial expressions that mirror the manner in which humans communicate with each other. In this research, we study different aspects of facial emotion detection and analysis, and investigate possible hidden identity clues within facial expressions. We examine a deeper aspect of facial expressions by trying to identify gender and human identity, which can be considered a form of emotional biometric, using only the dynamic characteristics of the smile expression. Further, we present a statistical model for analysing the relationship between facial features and Duchenne (real) and non-Duchenne (posed) smiles, and we identify that the expressions in the eyes contain features that discriminate between Duchenne and non-Duchenne smiles. Our results indicate that facial expressions can be identified through facial movement analysis models, with an accuracy rate of 86% for classifying the six universal facial expressions and 94% for classifying the 18 common facial action units. Further, we successfully identify gender using only the dynamic characteristics of the smile expression, obtaining an 86% classification rate. Likewise, we present a framework for studying the possibility of using the smile as a biometric, showing that the human smile is unique and stable.
Al-Zaytoonah University
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Buchala, Samarasena. "Computational analysis of face images." Thesis, University of Hertfordshire, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431938.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Amin, Syed Hassan. "Analysis of 3D face reconstruction." Thesis, Imperial College London, 2009. http://hdl.handle.net/10044/1/6163.

Full text of the source
Abstract:
This thesis investigates the long-standing problem of 3D reconstruction from a single 2D face image, an ill-posed problem involving estimation of the intrinsic and extrinsic camera parameters, light parameters, shape parameters and texture parameters. The proposed approach has many potential applications in law enforcement, surveillance, medicine, computer games and the entertainment industries. The problem is addressed in an analysis-by-synthesis framework by reconstructing a 3D face model from identity photographs, which are a widely used medium for face identification and can be found on identity cards and passports. The novel contribution of this thesis is a new technique for creating 3D face models from a single 2D face image. The proposed method uses improved dense 3D correspondence obtained with rigid and non-rigid registration techniques, whereas existing reconstruction methods use optical flow to establish 3D correspondence. The resulting 3D face database is used to create a statistical shape model. Existing reconstruction algorithms recover shape by optimizing over all parameters simultaneously; the proposed algorithm simplifies the reconstruction problem with a stepwise approach, reducing the dimension of the parameter space and simplifying the optimization problem. In the alignment step, a generic 3D face is aligned with the given 2D face image using anatomical landmarks. The texture is then warped onto the 3D model using the spatial alignment obtained previously. The 3D shape is finally recovered by optimizing over the shape parameters while matching a texture-mapped model to the target image. This approach has a number of advantages. First, it simplifies the optimization requirements and makes the optimization more robust. Second, there is no need to accurately recover the illumination parameters. Third, there is no need to recover the texture parameters through texture synthesis. Fourth, quantitative analysis is used to improve the quality of reconstruction by improving the cost function; previous methods relied on qualitative measures, such as visual analysis and face recognition rates, to evaluate reconstruction accuracy. The improvement in the performance of the cost function results from an improvement in the feature space comprising the landmark and intensity features. Previously, the feature space had not been evaluated with respect to reconstruction accuracy, leading to inaccurate assumptions about its behaviour. The proposed approach also simplifies the reconstruction problem by using only identity images rather than placing effort on overcoming pose, illumination and expression (PIE) variations. This makes sense, as frontal face images under standard illumination conditions are widely available and can be utilized for accurate reconstruction; the reconstructed, textured 3D models can then be used to overcome the PIE variations.
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Wang, Wei. "Human Face and Behavior Analysis." Doctoral thesis, Università degli studi di Trento, 2018. https://hdl.handle.net/11572/367945.

Full text of the source
Abstract:
Human face and behavior analysis are important research topics in computer vision, with broad applications in everyday life. For instance, face alignment, face aging, facial expression analysis and action recognition have been well studied and applied in security and entertainment. With face-analysis techniques such as face aging, we can enhance the performance of cross-age face verification systems, which are now used by banks and electronic devices to recognize their clients. With the help of action recognition, we can better summarize user-uploaded videos or generate logs for surveillance videos, allowing videos to be retrieved more accurately and easily. Dictionary learning and neural networks are powerful machine learning models for these research tasks. We first focus on multi-view action recognition: a class-wise dictionary is pre-trained that encourages the sparse representations of between-class videos from different views to lie close by, and we then integrate the classifiers and the dictionary learning model into a unified model that learns the dictionary and classifiers jointly. For face alignment, we frame the standard cascaded face alignment problem as a recurrent process using a recurrent neural network; importantly, by combining a convolutional neural network with a recurrent one, we avoid hand-crafted features and learn task-specific features instead. The face aging model takes a single image as input and automatically outputs a series of aged faces. Since human face aging is a smooth progression, it is more appropriate to age the face through smooth transitional states, so that the intermediate aged faces between age groups can be generated. Towards this target, we employ a recurrent neural network whose hidden units are connected autoregressively, allowing the framework to age the person by referring to the previously aged faces. For smile video generation, one person may smile in different ways (e.g., closing or opening the eyes or mouth). This is a one-to-many image-to-video generation problem, and we introduce a deep neural architecture named conditional multi-mode network (CMM-Net) to approach it: a multi-mode recurrent generator is trained to induce diversity and generate K different sequences of video frames.
Styles: APA, Harvard, Vancouver, ISO, etc.

Books on the topic "Face Analysi"

1

Décrire la conversation en ligne: La face à face distanciel. Lyon: ENS, 2011.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Meneghini, Fabio. Clinical facial analysis: Elements, principles, and techniques. 2nd ed. Berlin: Springer, 2012.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Daoudi, Mohamed, Anuj Srivastava, and Remco Veltkamp, eds. 3D Face Modeling, Analysis and Recognition. Singapore: John Wiley & Sons Singapore Pte Ltd, 2013. http://dx.doi.org/10.1002/9781118592656.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4615-1637-8.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Newton, Elaine M., and Information Technology Laboratory (National Institute of Standards and Technology), Mathematical and Computational Sciences Division, eds. Meta-analysis of face recognition algorithms. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, Mathematics and Computational Sciences Division, National Institute of Standards and Technology, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Newton, Elaine, and Information Technology Laboratory (National Institute of Standards and Technology), Mathematical and Computational Sciences Division, eds. Meta-analysis of face recognition algorithms. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, Mathematics and Computational Sciences Division, National Institute of Standards and Technology, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Bartlett, Marian Stewart. Face Image Analysis by Unsupervised Learning. Boston, MA: Springer US, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Bartlett, Marian Stewart. Face image analysis by unsupervised learning. Boston: Kluwer Academic Publishers, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Newton, Elaine M., and Information Technology Laboratory (National Institute of Standards and Technology), Mathematical and Computational Sciences Division, eds. Meta-analysis of face recognition algorithms. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, Mathematics and Computational Sciences Division, National Institute of Standards and Technology, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Newton, Elaine M., and Information Technology Laboratory (National Institute of Standards and Technology), Mathematical and Computational Sciences Division, eds. Meta-analysis of face recognition algorithms. Gaithersburg, MD: U.S. Dept. of Commerce, Technology Administration, Mathematics and Computational Sciences Division, National Institute of Standards and Technology, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Face Analysi"

1

Wiskott, Laurenz. "Phantom faces for face analysis." In Computer Analysis of Images and Patterns, 480–87. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/3-540-63460-6_153.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Gopalan, Raghuraman, William R. Schwartz, Rama Chellappa, and Ankur Srivastava. "Face Detection." In Visual Analysis of Humans, 71–90. London: Springer London, 2011. http://dx.doi.org/10.1007/978-0-85729-997-0_5.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Gutta, Srinivas, and Harry Wechsler. "Partial Faces for Face Recognition: Left vs Right Half." In Computer Analysis of Images and Patterns, 630–37. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45179-2_77.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Ravaut, Frédéric, and Georges Stamon. "Face Image Processing Supporting Epileptic Seizure Analysis." In Face Recognition, 610–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_40.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Meneghini, Fabio, and Paolo Biondi. "The Aging Face." In Clinical Facial Analysis, 157–74. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27228-8_10.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zhao, Wenyi, Arvindh Krishnaswamy, Rama Chellappa, Daniel L. Swets, and John Weng. "Discriminant Analysis of Principal Components for Face Recognition." In Face Recognition, 73–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_4.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Copland, Fiona, and Helen Donaghue. "Face." In Analysing Discourses in Teacher Observation Feedback Conferences, 77–98. New York, NY: Routledge, 2021. http://dx.doi.org/10.4324/9781351184694-5.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Patras, Ioannis. "Face Pose Analysis." In Encyclopedia of Biometrics, 324–29. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-73003-5_191.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Patras, Ioannis. "Face Pose Analysis." In Encyclopedia of Biometrics, 462–67. Boston, MA: Springer US, 2015. http://dx.doi.org/10.1007/978-1-4899-7488-4_191.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Biederman, Irving, and Peter Kalocsai. "Neural and Psychophysical Analysis of Object and Face Recognition." In Face Recognition, 3–25. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_1.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Face Analysi"

1

Saxen, Frerk, Sebastian Handrich, Philipp Werner, Ehsan Othman, and Ayoub Al-Hamadi. "Detecting Arbitrarily Rotated Faces for Face Analysis." In 2019 IEEE International Conference on Image Processing (ICIP). IEEE, 2019. http://dx.doi.org/10.1109/icip.2019.8803631.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Yang, Libin. "Face liveness detection by focusing on frontal faces and image backgrounds." In 2014 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). IEEE, 2014. http://dx.doi.org/10.1109/icwapr.2014.6961297.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Kukharev, G., K. Maulenov, and N. Shchegoleva. "CAN I PROTECT MY FACE IMAGE FROM RECOGNITION?" In 9th International Conference "Distributed Computing and Grid Technologies in Science and Education". Crossref, 2021. http://dx.doi.org/10.54546/mlit.2021.30.41.001.

Full text of the source
Abstract:
The "Fawkes" procedure is discussed as a method of protection against unauthorized use and recognition of facial images from social networks. As an example, the results of an experiment are given confirming the low face image recognition performance of a CNN when the Fawkes procedure is applied with the parameter mode = "high". Based on a comparative analysis with the original face images, textural changes and graphical features of the structural destruction of images subjected to the Fawkes procedure are shown. In addition to this analysis, multilevel parametric estimates of these destructions are given and, on their basis, the reason why face images subjected to the Fawkes procedure cannot be recognized, nor used in deep learning problems, is explained. The structural similarity index (SSIM) and the phase correlation of images are used as quantitative assessment tools. It is also noted that facial images subjected to the Fawkes procedure are well recognized outside of deep learning methods. For this purpose, models of two simple systems for recognizing face images subjected to the Fawkes procedure are proposed, and the results of the experiments performed are presented. It is argued that the use of simple face image recognition systems in a computer complex with a CNN will make it possible to train such complexes and destroy the myth about the possibility of protecting face images. In conclusion, the question is posed as to whether it is possible to protect your face from recognition.
Styles: APA, Harvard, Vancouver, ISO, etc.
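The abstract above uses the structural similarity index (SSIM) to quantify the texture damage that the Fawkes procedure introduces. As a rough sketch of the idea, not the authors' implementation, a single-window SSIM between two grayscale images can be computed as follows; production libraries such as scikit-image use a sliding, Gaussian-weighted window instead:

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM (Wang et al. 2004) over two grayscale images.

    Returns 1.0 for identical images; perturbations lower the score.
    """
    # Standard stabilizing constants from the SSIM paper.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

An unmodified image scores 1.0 against itself, while a Fawkes-cloaked copy would score below 1, which is how such structural destruction can be measured.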
4

"Fake Face Image Detection Using Deep Learning-Based Local and Global Matching." In The 2nd Siberian Scientific Workshop on Data Analysis Technologies with Applications. CEUR-WS.org, 2021. http://dx.doi.org/10.47813/sibdata-2-2021-20.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

George, Giji, Rainu Boben, B. Radhakrishnan, and L. Padma Suresh. "Face recognition on surgically altered faces using principal component analysis." In 2017 International Conference on Circuit ,Power and Computing Technologies (ICCPCT). IEEE, 2017. http://dx.doi.org/10.1109/iccpct.2017.8074324.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Dente, Pasquale, Dennis Küster, and Eva Krumhuber. "Boxing the face." In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813857.

7

Iskra, Andrej, and Helena Gabrijelčič Tomc. "Analysis of observing and recognition profile facial images using eye tracking system." In 10th International Symposium on Graphic Engineering and Design. University of Novi Sad, Faculty of Technical Sciences, Department of Graphic Engineering and Design, 2020. http://dx.doi.org/10.24867/grid-2020-p54.

Abstract:
Facial images have been the subject of eye-tracking research for many years. However, most researchers concentrate on the frontal view of facial images; much less research has been done on faces shown at different angles or in profile view. In reality, of course, we often view faces from different angles and not just frontally. In our research we used a profile presentation of facial images and analyzed memory and recognition depending on the display time and dimensions of the facial images. Two tests were performed, an observation test and a recognition test, using the well-known yes/no detection paradigm. We used four different display times in the observation test (1, 2, 4 and 8 seconds) and two different dimensions of facial images (640 × 480 and 1280 × 960). All facial images were taken from the standardized Minear & Park face database. We measured recognition success, usually reported as the discrimination index A', incorrect recognition (FA, false alarm), and a time-spatial measure based on fixation duration and saccade length. In this setting, eye tracking provides objective results on how facial images are viewed. The results showed that extending the display time of facial images improves recognition performance, with a logarithmic dependence, while incorrect recognition decreased. Both parameters are independent of the dimensions of the facial images, a fact that other researchers have also demonstrated for frontal facial images. We also found that fixation duration and saccade length increased with display time. In all results we detected major changes at a display time of four seconds, which we interpret as the time at which subjects had looked over the whole face and their gaze returned to its center (in our case, the eyes and mouth).
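The discrimination index A' reported in this abstract comes from yes/no detection theory. A minimal sketch of the standard non-parametric formula, assuming a hit rate H and a false-alarm rate F as inputs (the function name is illustrative):

```python
def a_prime(hit_rate: float, fa_rate: float) -> float:
    """Non-parametric discrimination index A' from a yes/no recognition test.

    0.5 means chance performance; 1.0 means perfect discrimination.
    """
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance responding mirrors the formula.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

For example, a subject with a 90% hit rate and a 10% false-alarm rate scores A' ≈ 0.94, while equal hit and false-alarm rates give the chance value 0.5.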
8

Brick, Timothy R., Michael D. Hunter, and Jeffrey F. Cohn. "Get the FACS fast: Automated FACS face analysis benefits from the addition of velocity." In 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (ACII 2009). IEEE, 2009. http://dx.doi.org/10.1109/acii.2009.5349600.

9

Colombo, Alessandro, Claudio Cusano, and Raimondo Schettini. "Face^3 a 2D+3D Robust Face Recognition System." In 14th International Conference on Image Analysis and Processing (ICIAP 2007). IEEE, 2007. http://dx.doi.org/10.1109/iciap.2007.4362810.

10

Hemathilaka, Susith, and Achala Aponso. "An Analysis of Face Recognition under Face Mask Occlusions." In 2nd International Conference on Machine Learning Techniques and Data Science (MLDS 2021). Academy and Industry Research Collaboration Center (AIRCC), 2021. http://dx.doi.org/10.5121/csit.2021.111804.

Abstract:
The face mask has become an essential piece of sanitary wear in daily life during the pandemic period, and it poses a serious threat to current face recognition systems. Masks destroy many details across a large area of the face, making masked faces difficult to recognize even for humans; evaluation reports illustrate this difficulty well. Rapid developments and breakthroughs in deep learning in the recent past have produced highly promising results from face recognition algorithms. However, these algorithms still perform far from satisfactorily in unconstrained environments under challenges such as varying lighting conditions, low resolution, facial expressions, pose variation and occlusions. Facial occlusion is considered one of the most intractable of these problems, especially when the occlusion covers a large region of the face, because it destroys many facial features.

Organizational reports on the topic "Face Analysi"

1

Bays, J. Timothy, David L. King, and Molly J. O'Hagan. Carbon-Type Analysis and Comparison of Original and Reblended FACE Diesel Fuels (FACE 2, FACE 4, and FACE 7). Office of Scientific and Technical Information (OSTI), October 2012. http://dx.doi.org/10.2172/1118119.

2

Phillips, P. Jonathon, and Elaine M. Newton. Meta-analysis of face recognition algorithms. Gaithersburg, MD: National Institute of Standards and Technology, 2001. http://dx.doi.org/10.6028/nist.ir.6719.

3

Author, Not Given. Strategic Energy Analysis (Fact Sheet). Office of Scientific and Technical Information (OSTI), February 2014. http://dx.doi.org/10.2172/1122288.

4

Pollard, Kimberly A., Lamar Garrett, and Phuong Tran. Bone Conduction Systems for Full-Face Respirators: Speech Intelligibility Analysis. Fort Belvoir, VA: Defense Technical Information Center, April 2014. http://dx.doi.org/10.21236/ada600090.

5

Kampman, Christina M., Charles A. Mangio, Thomas L. Parry, and Bonnie J. Wilkinson. Framework for Analytic Cognition (FAC): A Guide for Doing All-Source Intelligence Analysis. Fort Belvoir, VA: Defense Technical Information Center, December 2011. http://dx.doi.org/10.21236/ada568691.

6

Socolinsky, Diego A., and Andrea Selinger. A Comparative Analysis of Face Recognition Performance With Visible and Thermal Infrared Imagery. Fort Belvoir, VA: Defense Technical Information Center, January 2002. http://dx.doi.org/10.21236/ada453159.

7

Kubiske, Mark E. Final Harvest of Above-Ground Biomass and Allometric Analysis of the Aspen FACE Experiment. Office of Scientific and Technical Information (OSTI), April 2013. http://dx.doi.org/10.2172/1073624.

8

Sun, Yipeng. Analysis on linac quadrupole misalignment in FACET commissioning 2012. Office of Scientific and Technical Information (OSTI), July 2012. http://dx.doi.org/10.2172/1045190.

9

Williams, Dean N. 3rd Annual Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Face-to-Face Meeting Report, December 2013. Office of Scientific and Technical Information (OSTI), February 2014. http://dx.doi.org/10.2172/1124881.

10

Ivarson, Kristine, and Craig Arola. Software for Support of Groundwater Contaminant Fate and Transport Analysis - 13345. Office of Scientific and Technical Information (OSTI), January 2013. http://dx.doi.org/10.2172/1658904.
