Ready-made bibliography on the topic "FACIAL DATASET"
Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles.
Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "FACIAL DATASET".
Next to every work in the bibliography there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever such details are available in the publication's metadata.
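A side note for readers who want to script this step outside the site: every entry below carries a DOI, and DOI registration agencies such as Crossref and DataCite support content negotiation, which returns a reference already formatted in a chosen CSL style. A minimal Python sketch, assuming the DOI is registered with an agency that serves the text/x-bibliography media type:

import urllib.request

def fetch_citation(doi: str, style: str = "apa") -> str:
    """Ask the DOI resolver for a formatted reference via content
    negotiation (text/x-bibliography with a CSL style name)."""
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8").strip()

# Example with the first journal article below:
print(fetch_citation("10.3390/e24101475"))                                 # APA
print(fetch_citation("10.3390/e24101475", "modern-language-association"))  # MLA

Style identifiers ("apa", "modern-language-association", "chicago-author-date", ...) come from the Citation Style Language repository; coverage depends on the registering agency.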
Journal articles on the topic "FACIAL DATASET"
Xu, Xiaolin, Yuan Zong, Cheng Lu, and Xingxun Jiang. "Enhanced Sample Self-Revised Network for Cross-Dataset Facial Expression Recognition". Entropy 24, no. 10 (October 17, 2022): 1475. http://dx.doi.org/10.3390/e24101475.
Kim, Jung Hwan, Alwin Poulose, and Dong Seog Han. "The Extensive Usage of the Facial Image Threshing Machine for Facial Emotion Recognition Performance". Sensors 21, no. 6 (March 12, 2021): 2026. http://dx.doi.org/10.3390/s21062026.
Oliver, Miquel Mascaró, and Esperança Amengual Alcover. "UIBVFED: Virtual facial expression dataset". PLOS ONE 15, no. 4 (April 6, 2020): e0231266. http://dx.doi.org/10.1371/journal.pone.0231266.
Bodavarapu, Pavan Nageswar Reddy, and P. V. V. S. Srinivas. "Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques". Indian Journal of Science and Technology 14, no. 12 (March 27, 2021): 971–83. http://dx.doi.org/10.17485/ijst/v14i12.14.
Wang, Xiaoqing, Xiangjun Wang, and Yubo Ni. "Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks". Computational Intelligence and Neuroscience 2018 (July 9, 2018): 1–10. http://dx.doi.org/10.1155/2018/7208794.
Manikowska, Michalina, Damian Sadowski, Adam Sowinski, and Michal R. Wrobel. "DevEmo—Software Developers' Facial Expression Dataset". Applied Sciences 13, no. 6 (March 17, 2023): 3839. http://dx.doi.org/10.3390/app13063839.
Bordjiba, Yamina, Hayet Farida Merouani, and Nabiha Azizi. "Facial expression recognition via a jointly-learned dual-branch network". International Journal of Electrical and Computer Engineering Systems 13, no. 6 (September 1, 2022): 447–56. http://dx.doi.org/10.32985/ijeces.13.6.4.
Büdenbender, Björn, Tim T. A. Höfling, Antje B. M. Gerdes, and Georg W. Alpers. "Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions' prototypicality". PLOS ONE 18, no. 2 (February 10, 2023): e0281309. http://dx.doi.org/10.1371/journal.pone.0281309.
Yap, Chuin Hong, Ryan Cunningham, Adrian K. Davison, and Moi Hoon Yap. "Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer". Journal of Imaging 7, no. 8 (August 11, 2021): 142. http://dx.doi.org/10.3390/jimaging7080142.
Jin, Zhijia, Xiaolu Zhang, Jie Wang, Xiaolin Xu, and Jiangjian Xiao. "Fine-Grained Facial Expression Recognition in Multiple Smiles". Electronics 12, no. 5 (February 22, 2023): 1089. http://dx.doi.org/10.3390/electronics12051089.
Dissertations and theses on the topic "FACIAL DATASET"
Yu, Kaimin. "Towards Realistic Facial Expression Recognition". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.
Godavarthy, Sridhar. "Microexpression Spotting in Video Using Optical Strain". Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1642.
Kumar, Naveen. "Multimodal Hybrid Biometric Identification Using Facial and Electrocardiogram Features". Thesis, 2018. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16314.
Moreira, Gonçalo Rebelo de Almeida. "Neuromorphic Event-based Facial Identity Recognition". Master's thesis, 2021. http://hdl.handle.net/10316/98251.
Pełny tekst źródłaA investigação na área do reconhecimento facial existe já há mais de meio século. O grandeinteresse neste tópico advém do seu tremendo potencial para impactar várias indústrias, comoa de vídeovigilância, autenticação pessoal, investigação criminal, lazer, entre outras. A maioriados algoritmos estado da arte baseiam-se apenas na aparência facial, especificamente, estesmétodos utilizam as caraterísticas estáticas da cara humana (e.g., a distância entre os olhos,a localização do nariz, a forma do nariz) para determinar com bastante eficácia a identidadede um sujeito. Contudo, é também discutido o facto de que os humanos fazem uso de outrotipo de informação facial para identificar outras pessoas, nomeadamente, o movimento facialidiossincrático de uma pessoa. Este conjunto de dados faciais é relevante devido a ser difícil de replicar ou de falsificar, enquanto que a aparência é facilmente alterada com ajuda deferramentas computacionais baratas e disponíveis a qualquer um.Por outro lado, câmaras de eventos são dispositivos neuromórficos, bastante recentes, quesão ótimos a codificar informação da dinâmica de uma cena. Estes sensores são inspiradospelo modo de funcionamento biológico do olho humano. Em vez de detetarem as várias intensidades de luz de uma cena, estes captam as variações dessas intensidades no cenário. Demodo que, e comparando com câmaras standard, estes mecanismos sensoriais têm elevadaresolução temporal, não sofrendo de imagem tremida, e são de baixo consumo, entre outrosbenefícios. Algumas das suas aplicações são Localização e Mapeamento Simultâneo (SLAM)em tempo real, deteção de anomalias e reconhecimento de ações/gestos.Tomando tudo isto em conta, o foco principal deste trabalho é de avaliar a aptidão da tecnologia fornecida pelas câmaras de eventos para completar tarefas mais complexas, nestecaso, reconhecimento de identidade facial, e o quão fácil será a sua integração num sistemano mundo real. Adicionalmente, é também disponibilizado o Dataset criado no âmbito destadissertação (NVSFD Dataset) de modo a possibilitar investigação futura sobre o tópico.
Facial recognition research has been around for longer than a half-century, as of today. Thisgreat interest in the field stems from its tremendous potential to enhance various industries,such as video surveillance, personal authentication, criminal investigation, and leisure. Moststateoftheart algorithms rely on facial appearance, particularly, these methods utilize the staticcharacteristics of the human face (e.g., the distance between both eyes, nose location, noseshape) to determine the subject’s identity extremely accurately. However, it is further argued thathumans also make use of another type of facial information to identify other people, namely, one’s idiosyncratic facial motion. This kind of facial data is relevant due to being hardly replicableor forged, whereas appearance can be easily distorted by cheap software available to anyone.On another note, eventcameras are quite recent neuromorphic devices that are remarkable at encoding dynamic information in a scene. These sensors are inspired by the biologicaloperation mode of the human eye. Rather than detecting the light intensity, they capture lightintensity variations in the setting. Thus, in comparison to standard cameras, this sensing mechanism has a high temporal resolution, therefore it does not suffer from motion blur, and haslow power consumption, among other benefits. A few of its early applications have been realtime Simultaneous Localization And Mapping (SLAM), anomaly detection, and action/gesturerecognition.Taking it all into account, the main purpose of this work is to evaluate the aptitude of the technology offered by eventcameras for completing a more complex task, that being facialidentity recognition, and how easily it could be integrated into real world systems. Additionally, itis also provided the Dataset created in the scope of this dissertation (NVSFD Dataset) in orderto facilitate future third-party investigation on the topic.
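As context for the abstract above: the sensing principle it describes (reporting per-pixel brightness changes rather than absolute intensity) can be illustrated with a toy frame-differencing model. This is a simplified sketch assuming a per-pixel log-intensity contrast threshold; it is not the dissertation's method or a faithful sensor model:

import numpy as np

def events_from_frames(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Toy event-camera model: emit +1/-1 'events' wherever the per-pixel
    log-intensity change between two frames exceeds a contrast threshold."""
    eps = 1e-6  # guard against log(0)
    delta = np.log(curr + eps) - np.log(prev + eps)
    events = np.zeros_like(delta, dtype=np.int8)
    events[delta > threshold] = 1    # brightness increased
    events[delta < -threshold] = -1  # brightness decreased
    return events

# A bright square shifts one pixel to the right between two synthetic frames:
prev = np.zeros((6, 6)); prev[2:4, 1:3] = 1.0
curr = np.zeros((6, 6)); curr[2:4, 2:4] = 1.0
print(events_from_frames(prev, curr))  # -1 at vacated pixels, +1 at newly lit ones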
Cavalini, Diandre de Paula. "Image Sentiment Analysis of Social Media Data". Master's thesis, 2021. http://hdl.handle.net/10400.6/11847.
An image is often worth more than a thousand words, a short statement that captures one of the greatest challenges in classifying the sentiment conveyed by images. The main topic of this dissertation is the sentiment analysis of social media images, mainly from Twitter, so that situations representing risks can be identified (detection of negative situations) or anticipated before they become risks (prediction of negative situations). Despite the diversity of existing work on image sentiment analysis, it remains a challenging task. Several factors contribute to the difficulty: global factors such as sociocultural issues; issues specific to the field itself, such as the difficulty of finding reliable and properly labeled data; and factors faced during classification. For example, images with darker colors and low brightness are usually associated with negative sentiment, and most such images are indeed negative, but the cases that break this rule are precisely the ones that hurt the accuracy of the developed models. To work around these classification problems, a multitask model was developed that considers global information, salient areas of the images, facial expressions of the faces they contain, and textual information, so that each component complements the others during classification. The experiments showed that the proposed models can benefit image sentiment classification and even overcome some problems highlighted in existing work, such as textual irony. This dissertation therefore presents the state of the art and the study carried out, the proposed multitask model and its implementation, and the experiments and discussion of the results obtained, in order to verify the effectiveness of the proposed method. Finally, conclusions about the work done and future work are presented.
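To make the multitask design sketched in the abstract above more concrete, here is a schematic late-fusion classifier in Python. The branch feature dimensions, fusion layers, and three-class output are illustrative assumptions, not the author's architecture:

import torch
import torch.nn as nn

class LateFusionSentiment(nn.Module):
    """Schematic late-fusion classifier: four precomputed feature vectors
    (global image, salient region, face, text) are concatenated and classified."""
    def __init__(self, img_dim: int = 512, txt_dim: int = 256, n_classes: int = 3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(3 * img_dim + txt_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),  # e.g. negative / neutral / positive
        )

    def forward(self, global_f, salient_f, face_f, text_f):
        return self.head(torch.cat([global_f, salient_f, face_f, text_f], dim=-1))

# Random stand-in features for a batch of two posts:
model = LateFusionSentiment()
logits = model(torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 512), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 3])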
Triggiani, Maurizio. "Integration of machine learning techniques in chemometrics practices". Doctoral thesis, 2022. http://hdl.handle.net/11589/237998.
Book chapters on the topic "FACIAL DATASET"
Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Semi-automatic Facial Key-Point Dataset Creation". In Speech and Computer, 662–68. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_66.
Li, Yuezun, Pu Sun, Honggang Qi, and Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes". In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.
Feinland, Jacob, Jacob Barkovitch, Dokyu Lee, Alex Kaforey, and Umur Aybars Ciftci. "Poker Bluff Detection Dataset Based on Facial Analysis". In Image Analysis and Processing – ICIAP 2022, 400–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06433-3_34.
Jalal, Anand Singh, Dilip Kumar Sharma, and Bilal Sikander. "FFV: Facial Feature Vector Image Dataset with Facial Feature Analysis and Feature Ranking". In Smart Intelligent Computing and Applications, Volume 2, 393–401. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9705-0_38.
Zhu, Hao, Wayne Wu, Wentao Zhu, Liming Jiang, Siwei Tang, Li Zhang, Ziwei Liu, and Chen Change Loy. "CelebV-HQ: A Large-Scale Video Facial Attributes Dataset". In Lecture Notes in Computer Science, 650–67. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20071-7_38.
Wei, Sijie, Xiaojun Jing, Aoran Chen, Qianqian Chen, Junsheng Mu, and Bohan Li. "AffectRAF: A Dataset Designed Based on Facial Expression Recognition". In Lecture Notes in Electrical Engineering, 1044–50. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4775-9_135.
Matias, Jhennifer Cristine, Tobias Rossi Müller, Felipe Zago Canal, Gustavo Gino Scotton, Antonio Reis de Sa Junior, Eliane Pozzebon, and Antonio Carlos Sobieranski. "MIGMA: The Facial Emotion Image Dataset for Human Expression Recognition". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 153–62. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93420-0_15.
Singh, Shivendra, and Shajulin Benedict. "Indian Semi-Acted Facial Expression (iSAFE) Dataset for Human Emotions Recognition". In Communications in Computer and Information Science, 150–62. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4828-4_13.
Kumar, Vikas, Shivansh Rao, and Li Yu. "Noisy Student Training Using Body Language Dataset Improves Facial Expression Recognition". In Computer Vision – ECCV 2020 Workshops, 756–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_53.
Tiwari, Shubham, Yash Sethia, Ashwani Tanwar, Ritesh Kumar, and Rudresh Dwivedi. "FRLL-Beautified: A Dataset of Fun Selfie Filters with Facial Attributes". In Communications in Computer and Information Science, 456–65. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39059-3_30.
Conference papers on the topic "FACIAL DATASET"
Haibin Yan, Marcelo H. Ang, and Aun Neow Poo. "Cross-dataset facial expression recognition". In IEEE International Conference on Robotics and Automation. IEEE, 2011. http://dx.doi.org/10.1109/icra.2011.5979705.
Ghafourian, Sarvenaz, Ramin Sharifi, and Amirali Baniasadi. "Facial Emotion Recognition in Imbalanced Datasets". In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120920.
Haag, Kathrin, and Hiroshi Shimodaira. "The University of Edinburgh Speaker Personality and MoCap Dataset". In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813860.
Timoshenko, Denis, Konstantin Simonchik, Vitaly Shutov, Polina Zhelezneva, and Valery Grishkin. "Large Crowdcollected Facial Anti-Spoofing Dataset". In 2019 Computer Science and Information Technologies (CSIT). IEEE, 2019. http://dx.doi.org/10.1109/csitechnol.2019.8895208.
Principi, Filippo, Stefano Berretti, Claudio Ferrari, Naima Otberdout, Mohamed Daoudi, and Alberto Del Bimbo. "The Florence 4D Facial Expression Dataset". In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042606.
Huang, Jiajun, Xueyu Wang, Bo Du, Pei Du, and Chang Xu. "DeepFake MNIST+: A DeepFake Facial Animation Dataset". In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00224.
Varkarakis, Viktor, and Peter Corcoran. "Dataset Cleaning — A Cross Validation Methodology for Large Facial Datasets using Face Recognition". In 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020. http://dx.doi.org/10.1109/qomex48832.2020.9123123.
Galea, Nathan, and Dylan Seychell. "Facial Expression Recognition in the Wild: Dataset Configurations". In 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2022. http://dx.doi.org/10.1109/mipr54900.2022.00045.
Somanath, Gowri, MV Rohith, and Chandra Kambhamettu. "VADANA: A dense dataset for facial image analysis". In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE, 2011. http://dx.doi.org/10.1109/iccvw.2011.6130517.
Yan, Yanfu, Ke Lu, Jian Xue, Pengcheng Gao, and Jiayi Lyu. "FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation". In 2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2019. http://dx.doi.org/10.1109/icmew.2019.0-104.
Reports on the topic "FACIAL DATASET"
Kimura, Marcia L., Rebecca L. Erikson, and Nicholas J. Lombardo. Non-Cooperative Facial Recognition Video Dataset Collection Plan. Office of Scientific and Technical Information (OSTI), August 2013. http://dx.doi.org/10.2172/1126360.
Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг: КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.
Mackie, S. J., C. M. Furlong, P. K. Pedersen, and O. H. Ardakani. Stratigraphy, facies heterogeneities, and structure in the Montney Formation of northeastern British Columbia: relation to H2S distribution. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329796.
Tennant, David. Business Surveys on the Impact of COVID-19 on Jamaican Firms. Inter-American Development Bank, May 2021. http://dx.doi.org/10.18235/0003251.
Michalak, Julia, Josh Lawler, John Gross, and Caitlin Littlefield. A strategic analysis of climate vulnerability of national park resources and values. National Park Service, September 2021. http://dx.doi.org/10.36967/nrr-2287214.
Corriveau, L., J. F. Montreuil, O. Blein, E. Potter, M. Ansari, J. Craven, R. Enkin, et al. Metasomatic iron and alkali calcic (MIAC) system frameworks: a TGI-6 task force to help de-risk exploration for IOCG, IOA and affiliated primary critical metal deposits. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329093.
Projectile fluid penetration and flammability of respirators and other head/facial personal protective equipment (FPFPPE) (dataset). U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, June 2019. http://dx.doi.org/10.26616/nioshrd-1010-2019-1.