Academic literature on the topic "Facial video processing"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Facial video processing".
Journal articles on the topic "Facial video processing"
Muthanna Shibel, Ahmed, Sharifah Mumtazah Syed Ahmad, Luqman Hakim Musa, and Mohammed Nawfal Yahya. "DEEP LEARNING DETECTION OF FACIAL BIOMETRIC PRESENTATION ATTACK". LIFE: International Journal of Health and Life-Sciences 8 (October 23, 2023): 61–78. http://dx.doi.org/10.20319/lijhls.2022.8.6178.
Kroczek, Leon O. H., Angelika Lingnau, Valentin Schwind, Christian Wolff, and Andreas Mühlberger. "Angry facial expressions bias towards aversive actions". PLOS ONE 16, no. 9 (September 1, 2021): e0256912. http://dx.doi.org/10.1371/journal.pone.0256912.
Tej, Maddimsetty Bullaiaha. "Eye of Devil: Face Recognition in Real World Surveillance Video with Feature Extraction and Pattern Matching". International Journal for Research in Applied Science and Engineering Technology 9, no. 12 (December 31, 2021): 2334–37. http://dx.doi.org/10.22214/ijraset.2021.39711.
Mahalim, Vaishnavi Sanjay, Seema Goroba Admane, Divya Vinod Kundgar, and Ankit Hirday Narayan Singh. "Development of Real-Time Emotion Recognition System Using Facial Expressions". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 10 (October 1, 2023): 1–11. http://dx.doi.org/10.55041/ijsrem26415.
Blanes-Vidal, Victoria, Tomas Majtner, Luis David Avendaño-Valencia, Knud B. Yderstraede, and Esmaeil S. Nadimi. "Invisible Color Variations of Facial Erythema: A Novel Early Marker for Diabetic Complications?" Journal of Diabetes Research 2019 (September 2, 2019): 1–7. http://dx.doi.org/10.1155/2019/4583895.
Singh, Mr Ankit. "Real-Time Emotion Recognition System Using Facial Expressions". INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 04 (April 19, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem31021.
S, Manjunath, Banashree P, Shreya M, Sneha Manjunath Hegde, and Nischal H P. "Driver Drowsiness Detection System". International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 129–35. http://dx.doi.org/10.22214/ijraset.2022.42109.
Lee, Seongmin, Hyunse Yoon, Sohyun Park, Sanghoon Lee, and Jiwoo Kang. "Stabilized Temporal 3D Face Alignment Using Landmark Displacement Learning". Electronics 12, no. 17 (September 4, 2023): 3735. http://dx.doi.org/10.3390/electronics12173735.
Rocha Neto, Aluizio, Thiago P. Silva, Thais Batista, Flávia C. Delicato, Paulo F. Pires, and Frederico Lopes. "Leveraging Edge Intelligence for Video Analytics in Smart City Applications". Information 12, no. 1 (December 31, 2020): 14. http://dx.doi.org/10.3390/info12010014.
Selva, Selva, and Selva Kumar S. "Hybridization of Deep Sequential Network for Emotion Recognition Using Unconstraint Video Analysis". Journal of Cybersecurity and Information Management 13, no. 2 (2024): 109–23. http://dx.doi.org/10.54216/jcim.130209.
Theses on the topic "Facial video processing"
Söderström, Ulrik. "Very low bitrate facial video coding : based on principal component analysis". Licentiate thesis, Umeå University, Applied Physics and Electronics, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-895.
Texto completoThis thesis introduces a coding scheme for very low bitrate video coding through the aid of principal component analysis. Principal information of the facial mimic for a person can be extracted and stored in an Eigenspace. Entire video frames of this persons face can then be compressed with the Eigenspace to only a few projection coefficients. Principal component video coding encodes entire frames at once and increased frame size does not increase the necessary bitrate for encoding, as standard coding schemes do. This enables video communication with high frame rate, spatial resolution and visual quality at very low bitrates. No standard video coding technique provides these four features at the same time.
Theoretical bounds for using principal components to encode facial video sequences are presented. Two different theoretical bounds are derived: one describing the minimal distortion when a certain number of Eigenimages is used, and one describing the minimal distortion when a minimum number of bits is used.
We investigate how the reconstruction quality of the coding scheme is affected when the Eigenspace, mean image, and coefficients are compressed to enable efficient transmission. The Eigenspace and mean image are compressed through JPEG compression, while the coefficients are quantized. We show that high compression ratios can be used with almost no decrease in the reconstruction quality of the coding scheme.
Different ways of re-using an Eigenspace, extracted for a person from one video sequence, to encode other video sequences are examined. The most important factor is the positioning of the facial features in the video frames.
Through a user test we find that it is extremely important to consider secondary workloads and how users make use of video when experimental setups are designed.
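The Eigenspace coding scheme summarized in this abstract can be sketched with plain NumPy. This is an illustrative toy under assumed parameters (random stand-in frames, 5 retained Eigenimages), not Söderström's actual codec:

```python
import numpy as np

# Toy "video": 20 frames of a 16x16 face region, flattened to vectors.
rng = np.random.default_rng(0)
frames = rng.random((20, 256))

# Build the Eigenspace from training frames: mean image + principal
# components (Eigenimages) of the mean-removed frames, via SVD.
mean_image = frames.mean(axis=0)
centered = frames - mean_image
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
k = 5                          # number of Eigenimages kept (assumed)
eigenspace = Vt[:k]            # shape (k, 256)

# Encoding: an entire frame collapses to just k projection coefficients,
# independent of frame size -- the property highlighted in the abstract.
coeffs = centered @ eigenspace.T            # shape (20, k)

# Decoding: coefficients + Eigenspace + mean image reconstruct the frame.
reconstructed = coeffs @ eigenspace + mean_image

# Distortion falls as more Eigenimages are kept (the thesis bounds this).
err = np.mean((frames - reconstructed) ** 2)
```

In the real scheme the Eigenspace and mean image are transmitted once (JPEG-compressed, per the abstract) and only the quantized coefficients are sent per frame.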
Doyle, Jason Emory. "Automatic Dynamic Tracking of Horse Head Facial Features in Video Using Image Processing Techniques". Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/87582.
MS
Cheng, Xin. "Nonrigid face alignment for unknown subject in video". Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/65338/1/Xin_Cheng_Thesis.pdf.
Texto completoOuzar, Yassine. "Reconnaissance automatique sans contact de l'état affectif de la personne par fusion physio-visuelle à partir de vidéo du visage". Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0076.
Human affective state recognition remains a challenging topic due to the complexity of emotions, which involves experiential, behavioral, and physiological elements. Since it is difficult to describe emotion comprehensively in terms of single modalities, recent studies have focused on fusion strategies that exploit the complementarity of multimodal signals using artificial intelligence approaches. The main objective is to study the feasibility of a physio-visual fusion for the recognition of a person's affective state (emotions/stress) from facial videos. The fusion of facial expressions and physiological signals takes advantage of each modality: facial expressions are easy to acquire and provide an external view of the affective state, while physiological signals improve reliability and address the problem of falsified facial expressions. The research developed in this thesis lies at the intersection of artificial intelligence, affective computing, and biomedical engineering. Our contribution focuses on two points. First, we propose a new end-to-end approach for instantaneous pulse rate estimation directly from facial video recordings using the principle of imaging photoplethysmography (iPPG). This method is based on a deep spatio-temporal network (X-iPPGNet) that learns the iPPG concept from scratch, without incorporating prior knowledge or going through manual iPPG signal extraction. The second contribution focuses on a physio-visual fusion for spontaneous emotion and stress recognition from facial videos. The proposed model includes two pipelines to extract the features of each modality. The physiological pipeline is common to both the emotion and stress recognition systems; it is based on MTTS-CAN, a recent method for estimating the iPPG signal. Two distinct neural models were used to predict the person's emotions and stress from the visual information contained in the video (e.g. facial expressions): a spatio-temporal network combining the Squeeze-Excitation module and the Xception architecture for estimating the emotional state, and a transfer learning approach for estimating the stress level. This approach reduces development effort and overcomes the lack of data. A fusion of physiological and facial features is then performed to predict the emotional or stress states.
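The iPPG principle invoked in this abstract, recovering pulse rate from subtle color variations of facial skin, can be illustrated without any deep network. The synthetic green-channel trace, frame rate, and band limits below are assumptions for demonstration only; they stand in for real face-region averages and are unrelated to X-iPPGNet or MTTS-CAN:

```python
import numpy as np

FPS = 30.0
n_frames = 300                                  # 10 s of "video"
t = np.arange(n_frames) / FPS

# Synthetic skin-region green-channel average: a 72 bpm pulse (1.2 Hz)
# buried under illumination drift and sensor noise.
rng = np.random.default_rng(1)
green = (0.5 * np.sin(2 * np.pi * 1.2 * t)
         + 0.1 * t
         + 0.05 * rng.standard_normal(n_frames))

# Remove the linear drift, then look only at plausible heart rates.
trend = np.polyval(np.polyfit(t, green, 1), t)
spectrum = np.fft.rfft(green - trend)
freqs = np.fft.rfftfreq(n_frames, d=1.0 / FPS)
band = (freqs >= 0.7) & (freqs <= 4.0)          # 42-240 bpm

# Pulse rate = dominant in-band frequency, converted to beats per minute.
peak_freq = freqs[band][np.argmax(np.abs(spectrum[band]))]
bpm = 60.0 * peak_freq
```

A real pipeline would first detect and track the facial skin region per frame and average its green channel; the spectral step above is the same idea.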
Emir, Alkazhami. "Facial Identity Embeddings for Deepfake Detection in Videos". Thesis, Linköpings universitet, Datorseende, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170587.
Skalban, Yvonne. "Automatic generation of factual questions from video documentaries". Thesis, University of Wolverhampton, 2013. http://hdl.handle.net/2436/314607.
Texto completoBaccouche, Moez. "Apprentissage neuronal de caractéristiques spatio-temporelles pour la classification automatique de séquences vidéo". Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00932662.
Texto completoLin, Jie-Zhua y 林界專. "Driver Smoking Behavior Detection with Facial Video Processing". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/22527977596947216050.
National Chung Hsing University (國立中興大學), Department of Electrical Engineering, academic year 103 (2014).
Research and analysis by US traffic-safety experts show that the probability of a car accident for smokers is 1.5 times that for non-smokers. Experts in the United Kingdom and Germany believe that about 5% of car accidents are related to smoking while driving. According to the US experts, smoking can lead to three driving hazards: reduced vision, distraction, and odor stimulation. Driving requires a high degree of attention and clear vision, yet smoke may irritate the eyes and respiratory tract and blur vision, and the acrid smoke causes coughing and distraction. The influence of smoke on vision is particularly severe: statistics indicate that, at dusk, smoking four cigarettes within a short time can reduce vision by 20–30%, making driving in such circumstances very risky. Therefore, this thesis proposes cigarette detection algorithms based on facial and mouth region detection for recognizing driver smoking behavior. The proposed algorithms rely on four main techniques: face detection, face boundary detection, mouth positioning, and cigarette detection. Driver facial videos are captured through a camera, and the proposed algorithms use color information with separate statistical threshold conditions for different times of day. First, facial region detection is performed based on facial features, and the possible facial region is found. Next, based on the symmetry and concentration properties of the facial features, the facial boundaries are detected and the face size is estimated. Finally, a luminance threshold condition is used to detect a cigarette near the mouth region, indicating driver smoking behavior. In this thesis, the proposed algorithms use brightness information for cigarette detection.
The test video sequences are analyzed statistically, a threshold value is set to segment the cigarette, and a projection method is then used to locate the cigarette. On 29 test video sequences that we recorded ourselves, the proposed method II achieves the highest average detection accuracy and the lowest average false positive rate: 79.81% and 20.19% respectively for cigarette detection, and 95.71% and 4.29% respectively for non-cigarette detection. The experimental results show that the processing speed reaches 38 frames per second on a quad-core PC (2.6 GHz, 4 GB memory).
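The luminance-threshold and projection steps described in this abstract can be sketched as follows. The toy mouth-region crop, the bright streak standing in for the cigarette, and the threshold value are all assumptions; the thesis tunes its thresholds statistically per time of day:

```python
import numpy as np

# Toy grayscale mouth-region crop: dark background with a bright
# horizontal streak standing in for the white cigarette body.
img = np.full((40, 60), 30, dtype=np.uint8)
img[18:22, 10:50] = 220                 # hypothetical cigarette pixels

# Step 1: luminance threshold keeps only bright candidate pixels.
THRESH = 180                            # assumed value, not the thesis's
mask = img > THRESH

# Step 2: projection method -- summing the binary mask along rows and
# columns produces peaks that localize the cigarette's extent.
row_proj = mask.sum(axis=1)
col_proj = mask.sum(axis=0)
rows = np.nonzero(row_proj)[0]
cols = np.nonzero(col_proj)[0]
top, bottom = rows[0], rows[-1]
left, right = cols[0], cols[-1]
```

The recovered box (rows 18–21, columns 10–49) matches the synthetic streak; on real frames the projections would be noisier and need the statistical filtering the abstract mentions.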
Arpino, Vincent J. "An assessment of facial profile preference of surgical patients using video imaging". 1996. http://catalog.hathitrust.org/api/volumes/oclc/48079456.html.
Texto completoDahmane, Mohamed. "Analyse de mouvements faciaux à partir d'images vidéo". Thèse, 2011. http://hdl.handle.net/1866/7120.
In face-to-face conversation, language is supported by nonverbal communication, which plays a central role in human social behavior by adding cues to the meaning of speech, providing feedback, and managing synchronization. Information about the emotional state of a person is usually conveyed by facial attributes. In fact, 55% of a message is communicated by facial expressions, whereas only 7% is due to linguistic language and 38% to paralanguage. However, there are currently no established instruments to measure such behavior. The computer vision community is therefore interested in developing automated techniques for prototypic facial expression analysis, for applications in human-computer interaction, meeting video analysis, security, and clinical settings. To gather observable cues, we design in this research a framework that builds a relatively comprehensive source of visual information, capable of distinguishing facial deformations and thus of pointing out the presence or absence of a particular facial action. A detailed review of existing techniques led us to explore two different approaches. The first approach involves appearance modeling, in which we use gradient orientations to generate a dense representation of facial attributes. Besides the facial representation problem, the main difficulty for a system intended to be general is the implementation of a generic model independent of individual identity, face geometry, and size. We therefore introduce a concept of prototypic referential mapping through SIFT-flow registration, which demonstrates, in this thesis, its superiority over conventional eyes-based alignment. In a second approach, we use a geometric model through which the facial primitives are represented by Gabor filtering.
Motivated by the fact that facial expressions are not only ambiguous and inconsistent across humans but also dependent on the behavioral context, in this approach we present a personalized facial expression recognition system whose overall performance is directly related to the localization performance of a set of facial fiducial points. These points are tracked through a sequence of video frames by a modification of a fast Gabor phase-based disparity estimation technique. In this thesis, we revisit the confidence measure and introduce an iterative conditional procedure for displacement estimation that improves the robustness of the original methods.
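The phase-based disparity idea behind the fiducial-point tracker can be shown in one dimension: a spatial shift between two image patches appears as a phase difference in their Gabor responses, and dividing by the filter frequency recovers the displacement. The signals, filter frequency, and Gaussian window below are toy assumptions, not the thesis's modified tracker or its confidence measure:

```python
import numpy as np

def gabor_response(patch, freq):
    """Complex response of a single centered Gabor filter on a 1-D patch."""
    n = len(patch)
    x = np.arange(n) - n // 2
    gaussian = np.exp(-x**2 / (2.0 * (n / 4) ** 2))
    kernel = gaussian * np.exp(1j * 2 * np.pi * freq * x)
    return np.vdot(kernel, patch)       # sum(conj(kernel) * patch)

# Two rows of a patch where the pattern is shifted by a small displacement
# (here 2 px), mimicking a tracked fiducial point between frames.
true_shift = 2.0
freq = 1.0 / 16                         # probe frequency, cycles/pixel
x = np.arange(64, dtype=float)
row_a = np.cos(2 * np.pi * freq * x)
row_b = np.cos(2 * np.pi * freq * (x - true_shift))

# Disparity = (wrapped) phase difference / (2*pi*frequency).
dphi = np.angle(np.exp(1j * (np.angle(gabor_response(row_a, freq))
                             - np.angle(gabor_response(row_b, freq)))))
disparity = dphi / (2 * np.pi * freq)
```

Because only phases are compared, the estimate is subpixel-accurate and cheap, which is why phase-based schemes suit fast point tracking; the wrap in `dphi` limits each step to shifts under half a filter wavelength.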
Books on the topic "Facial video processing"
Xiong, Ziyou, and Thomas S. Huang, eds. Facial analysis from continuous video with applications to human-computer interface. Boston: Kluwer Academic Publishers, 2004.
Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 facial animation: The standard, implementation and applications. Chichester: J. Wiley, 2002.
Dobbs, Darris, ed. Animating facial features and expression. Rockland, Mass.: Charles River Media, 1999.
Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation. John Wiley & Sons, 2003.
Colmenarez, Antonio J. Facial Analysis from Continuous Video with Applications to Human-Computer Interface. Springer, 2013.
Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, 2003.
Forchheimer, Robert, and Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. John Wiley & Sons, 2007.
Pandzic, Igor S., and Robert Forchheimer, eds. MPEG-4 Facial Animation: The Standard, Implementation and Applications. Wiley, 2002.
Buscar texto completoForchheimer, Robert y Igor S. Pandzic. MPEG-4 Facial Animation: The Standard, Implementation and Applications. Wiley & Sons, Incorporated, John, 2003.
Dobbs, Darris, and Bill Fleming. Animating Facial Features & Expressions. Charles River Media, 1998.
Buscar texto completoCapítulos de libros sobre el tema "Facial video processing"
Boccignone, Giuseppe, Vittorio Cuculo, Giuliano Grossi, Raffaella Lanzarotti, and Raffaella Migliaccio. "Virtual EMG via Facial Video Analysis". In Image Analysis and Processing - ICIAP 2017, 197–207. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-68560-1_18.
Lin, Chenhan, Fei Long, Junfeng Yao, Ming-Ting Sun, and Jinsong Su. "Learning Spatiotemporal and Geometric Features with ISA for Video-Based Facial Expression Recognition". In Neural Information Processing, 435–44. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70090-8_45.
Clippingdale, Simon, and Mahito Fujii. "Face Recognition for Video Indexing: Randomization of Face Templates Improves Robustness to Facial Expression". In Visual Content Processing and Representation, 32–40. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39798-4_7.
Ouzar, Yassine, Lynda Lagha, Frédéric Bousefsaf, and Choubeila Maaoui. "Multimodal Stress State Detection from Facial Videos Using Physiological Signals and Facial Features". In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges, 139–50. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37745-7_10.
Wang, Chao, Yunhong Wang, and Zhaoxiang Zhang. "Incremental Learning of Patch-Based Bag of Facial Words Representation for Online Face Recognition in Videos". In Advances in Multimedia Information Processing – PCM 2012, 1–9. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34778-8_1.
Weismayer, Christian, and Ilona Pezenka. "Cross-Cultural Differences in Emotional Response to Destination Commercials". In Information and Communication Technologies in Tourism 2024, 43–54. Cham: Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-58839-6_5.
Waters, Keith, and Demetri Terzopoulos. "The computer synthesis of expressive faces". In Processing the Facial Image, 87–94. Oxford: Oxford University Press, 1992. http://dx.doi.org/10.1093/oso/9780198522614.003.0013.
Ji, Yi, and Khalid Idrissi. "Automatic Facial Expression Recognition by Facial Parts Location with Boosted-LBP". In Intelligent Computer Vision and Image Processing, 42–55. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-3906-5.ch004.
Uddin, Md Zia. "A local feature-based facial expression recognition system from depth video". In Emerging Trends in Image Processing, Computer Vision and Pattern Recognition, 407–19. Elsevier, 2015. http://dx.doi.org/10.1016/b978-0-12-802045-6.00026-0.
Fatima, Eram, Ankit Kumar, and Anil Kumar Singh. "Face Recognition using Convolutional Neural Network Algorithms". In Artificial intelligence and Multimedia Data Engineering, 60–69. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815196443123010007.
Texto completoActas de conferencias sobre el tema "Facial video processing"
Kalayci, Sacide, Hazim Kemal Ekenel, and Hatice Gunes. "Automatic analysis of facial attractiveness from video". In 2014 IEEE International Conference on Image Processing (ICIP). IEEE, 2014. http://dx.doi.org/10.1109/icip.2014.7025851.
Chen, Xin, Chen Cao, Zehao Xue, and Wei Chu. "Joint Audio-Video Driven Facial Animation". In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8461502.
Liu, Sijiang, Yifeng Zhou, Wensha Tian, Chengrong Yu, and Bo Jiang. "An automatic facial beautification method for video post-processing". In Third International Workshop on Pattern Recognition, edited by Xudong Jiang, Guojian Chen, and Zhenxiang Chen. SPIE, 2018. http://dx.doi.org/10.1117/12.2501849.
Modak, Masooda, Namrata Patel, and Kalyani Pampattiwar. "Facial Expression based Assistant for Interviewee using Video Processing". In 2023 6th International Conference on Advances in Science and Technology (ICAST). IEEE, 2023. http://dx.doi.org/10.1109/icast59062.2023.10454925.
Saeed, Usman, and Jean-Luc Dugelay. "Person Recognition Form Video using Facial Mimics". In 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 2007. http://dx.doi.org/10.1109/icassp.2007.366724.
Chen, Bo, and Klara Nahrstedt. "FIS: Facial Information Segmentation for Video Redaction". In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2019. http://dx.doi.org/10.1109/mipr.2019.00071.
Chen, Xu, Anustup Choudhury, Peter van Beek, and Andrew Segall. "Facial video super resolution using semantic exemplar components". In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351013.
Yu, Jiangang, and Bir Bhanu. "Super-resolution of deformed facial images in video". In 2008 15th IEEE International Conference on Image Processing. IEEE, 2008. http://dx.doi.org/10.1109/icip.2008.4711966.
Kim, Minsu, Hong Joo Lee, Sangmin Lee, and Yong Man Ro. "Robust Video Facial Authentication With Unsupervised Mode Disentanglement". In 2020 IEEE International Conference on Image Processing (ICIP). IEEE, 2020. http://dx.doi.org/10.1109/icip40778.2020.9191052.
Chatterjee, S., S. Banerjee, and K. K. Biswas. "Reconstruction of local features for facial video compression". In Proceedings of the 7th IEEE International Conference on Image Processing. IEEE, 2000. http://dx.doi.org/10.1109/icip.2000.899272.