A ready-made bibliography on the topic "FACIAL DATASET"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles

Select a source type:

See lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "FACIAL DATASET".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in ".pdf" format and read its abstract online, if the relevant parameters are available in the metadata.

Journal articles on the topic "FACIAL DATASET"

1. Xu, Xiaolin, Yuan Zong, Cheng Lu, and Xingxun Jiang. "Enhanced Sample Self-Revised Network for Cross-Dataset Facial Expression Recognition". Entropy 24, no. 10 (October 17, 2022): 1475. http://dx.doi.org/10.3390/e24101475.

Abstract:
Recently, cross-dataset facial expression recognition (FER) has attracted wide attention from researchers. Thanks to the emergence of large-scale facial expression datasets, cross-dataset FER has made great progress. Nevertheless, facial images in large-scale datasets with low quality, subjective annotation, severe occlusion, and rare subject identity can lead to the existence of outlier samples in facial expression datasets. These outlier samples are usually far from the clustering center of the dataset in the feature space, thus resulting in considerable differences in feature distribution, which severely restricts the performance of most cross-dataset facial expression recognition methods. To eliminate the influence of outlier samples on cross-dataset FER, we propose the enhanced sample self-revised network (ESSRN), with a novel outlier-handling mechanism that first seeks out these outlier samples and then suppresses them in dealing with cross-dataset FER. To evaluate the proposed ESSRN, we conduct extensive cross-dataset experiments across the RAF-DB, JAFFE, CK+, and FER2013 datasets. Experimental results demonstrate that the proposed outlier-handling mechanism effectively reduces the negative impact of outlier samples on cross-dataset FER, and that ESSRN outperforms classic deep unsupervised domain adaptation (UDA) methods and recent state-of-the-art cross-dataset FER results.
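The core intuition above, that outliers sit far from the dataset's clustering center in feature space, is easy to prototype. Below is a minimal numpy sketch, not the authors' ESSRN mechanism, that flags feature vectors by their distance from the centroid and down-weights them; the z-score threshold and the weighting scheme are assumptions.

```python
import numpy as np

def flag_outliers(features, z_thresh=2.5):
    """Flag samples whose distance from the dataset centroid is unusually large.

    features: (n_samples, n_dims) array of per-image feature vectors.
    Returns a boolean mask that is True for suspected outlier samples.
    """
    centroid = features.mean(axis=0)
    dists = np.linalg.norm(features - centroid, axis=1)
    z = (dists - dists.mean()) / (dists.std() + 1e-8)  # standardized distances
    return z > z_thresh

# Example: down-weight suspected outliers instead of discarding them outright.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 128))
feats[:10] += 8.0  # inject a few far-away samples
weights = np.where(flag_outliers(feats), 0.1, 1.0)
print(weights[:12])  # the injected samples receive the reduced weight
```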
2. Kim, Jung Hwan, Alwin Poulose, and Dong Seog Han. "The Extensive Usage of the Facial Image Threshing Machine for Facial Emotion Recognition Performance". Sensors 21, no. 6 (March 12, 2021): 2026. http://dx.doi.org/10.3390/s21062026.

Abstract:
Facial emotion recognition (FER) systems play a significant role in identifying driver emotions. Accurate facial emotion recognition of drivers in autonomous vehicles reduces road rage. However, training even an advanced FER model without proper datasets causes poor performance in real-time testing. FER system performance is affected more heavily by the quality of the datasets than by the quality of the algorithms. To improve FER system performance for autonomous vehicles, we propose a facial image threshing (FIT) machine that uses advanced features of pre-trained facial recognition and training from the Xception algorithm. The FIT machine involves removing irrelevant facial images, collecting facial images, correcting misplaced face data, and merging original datasets on a massive scale, in addition to the data-augmentation technique. The final FER results of the proposed method improved the validation accuracy by 16.95% over the conventional approach with the FER 2013 dataset. The confusion matrix evaluation based on the unseen private dataset shows a 5% improvement over the original approach with the FER 2013 dataset, confirming the real-time testing.
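The "threshing" idea above, removing irrelevant facial images before training, can be approximated with any off-the-shelf face detector. The sketch below is a simplified stand-in, assuming a Haar cascade rather than the paper's pre-trained facial recognition and Xception features; `src_dir` and `dst_dir` are hypothetical paths.

```python
import os
import cv2

# Keep only images in which a face can actually be detected, dropping
# irrelevant or unreadable files. A Haar cascade is used purely for
# illustration of the filtering step.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def thresh_dataset(src_dir, dst_dir):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue  # unreadable file -> drop
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            cv2.imwrite(os.path.join(dst_dir, name), img)
```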
3. Oliver, Miquel Mascaró, and Esperança Amengual Alcover. "UIBVFED: Virtual facial expression dataset". PLOS ONE 15, no. 4 (April 6, 2020): e0231266. http://dx.doi.org/10.1371/journal.pone.0231266.

4. Bodavarapu, Pavan Nageswar Reddy, and P. V. V. S. Srinivas. "Facial expression recognition for low resolution images using convolutional neural networks and denoising techniques". Indian Journal of Science and Technology 14, no. 12 (March 27, 2021): 971–83. http://dx.doi.org/10.17485/ijst/v14i12.14.

Abstract:
Background/Objectives: Only limited research is being carried out on facial expression recognition for low resolution images. Most images in the real world are low resolution and may also contain noise, so this study designs a novel convolutional neural network model (FERConvNet) that can perform better on low resolution images. Methods: We proposed a model and compared it with state-of-the-art models on the FER2013 dataset. Since there is no publicly available dataset containing low resolution images for facial expression recognition (anger, sad, disgust, happy, surprise, neutral, fear), we created a Low Resolution Facial Expression (LRFE) dataset containing more than 6000 images of seven types of facial expressions. The existing FER2013 dataset and the LRFE dataset were used, divided in an 80:20 ratio for training and for testing and validation. A hybrid denoising method (HDM) is proposed, combining a Gaussian filter, a bilateral filter, and a non-local means denoising filter. This hybrid denoising method helps increase the performance of the convolutional neural network. The proposed model was then compared with the VGG16 and VGG19 models. Findings: The experimental results show that the proposed FERConvNet_HDM approach is more effective than VGG16 and VGG19 in facial expression recognition on both the FER2013 and LRFE datasets. The proposed FERConvNet_HDM approach achieved 85% accuracy on the FER2013 dataset, outperforming the VGG16 and VGG19 models, whose accuracies are 60% and 53% respectively; on the LRFE dataset it achieved 95% accuracy. Novelty/Applications: Combining HDM with convolutional neural networks increases the performance of convolutional neural networks in facial expression recognition. Keywords: facial expression recognition; facial emotion; convolutional neural network; deep learning; computer vision
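The abstract names the three components of the hybrid denoising method (HDM): a Gaussian filter, a bilateral filter, and a non-local means filter. One plausible reading is a sequential composition, sketched below with OpenCV; the paper does not spell out the exact order or parameters, so both are assumptions here.

```python
import cv2

def hybrid_denoise(gray_img):
    """Apply a Gaussian filter, a bilateral filter, and non-local means
    denoising in sequence. Expects an 8-bit single-channel image.
    Order and parameter values are assumptions, not the paper's exact HDM."""
    out = cv2.GaussianBlur(gray_img, (3, 3), 0)
    out = cv2.bilateralFilter(out, d=5, sigmaColor=50, sigmaSpace=50)
    out = cv2.fastNlMeansDenoising(out, None, h=10,
                                   templateWindowSize=7, searchWindowSize=21)
    return out
```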
5. Wang, Xiaoqing, Xiangjun Wang, and Yubo Ni. "Unsupervised Domain Adaptation for Facial Expression Recognition Using Generative Adversarial Networks". Computational Intelligence and Neuroscience 2018 (July 9, 2018): 1–10. http://dx.doi.org/10.1155/2018/7208794.

Abstract:
In the facial expression recognition task, a well-performing convolutional neural network (CNN) model trained on one dataset (the source dataset) usually performs poorly on another dataset (the target dataset). This is because the feature distribution of the same emotion varies across datasets. To improve the cross-dataset accuracy of the CNN model, we introduce an unsupervised domain adaptation method that is especially suitable for small, unlabelled target datasets. To address the lack of samples from the target dataset, we train a generative adversarial network (GAN) on the target dataset and use the GAN-generated samples to fine-tune the model pretrained on the source dataset. In the process of fine-tuning, we give the unlabelled GAN-generated samples distributed pseudolabels dynamically according to the current prediction probabilities. Our method can be easily applied to any existing convolutional neural network. We demonstrate the effectiveness of our method on four facial expression recognition datasets with two CNN structures and obtain inspiring results.
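The dynamic pseudo-labelling step, assigning labels to unlabelled GAN-generated samples from the model's current prediction probabilities, can be sketched in a few lines of PyTorch. This is a generic confidence-thresholded version, not necessarily the paper's exact distribution scheme; the threshold is an assumption.

```python
import torch
import torch.nn.functional as F

def pseudo_label_loss(logits, conf_thresh=0.8):
    """Derive pseudo-labels for unlabelled samples from the model's own
    current predictions, keeping only confident ones, and return a
    cross-entropy loss on that confident subset."""
    probs = F.softmax(logits.detach(), dim=1)   # stop gradients through labels
    conf, pseudo = probs.max(dim=1)
    mask = conf > conf_thresh
    if mask.sum() == 0:
        return logits.new_zeros(())             # no confident samples this batch
    return F.cross_entropy(logits[mask], pseudo[mask])
```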
6. Manikowska, Michalina, Damian Sadowski, Adam Sowinski, and Michal R. Wrobel. "DevEmo—Software Developers’ Facial Expression Dataset". Applied Sciences 13, no. 6 (March 17, 2023): 3839. http://dx.doi.org/10.3390/app13063839.

Abstract:
The COVID-19 pandemic has increased the relevance of remote activities and digital tools for education, work, and other aspects of daily life. This reality has highlighted the need for emotion recognition technology to better understand the emotions of computer users and provide support in remote environments. Emotion recognition can play a critical role in improving the remote experience and ensuring that individuals are able to effectively engage in computer-based tasks remotely. This paper presents a new dataset, DevEmo, that can be used to train deep learning models for the purpose of emotion recognition of computer users. The dataset consists of 217 video clips of 33 students solving programming tasks. The recordings were collected in the participants’ actual work environment, capturing the students’ facial expressions as they engaged in programming tasks. The DevEmo dataset is labeled to indicate the presence of the four emotions (anger, confusion, happiness, and surprise) and a neutral state. The dataset provides a unique opportunity to explore the relationship between emotions and computer-related activities, and has the potential to support the development of more personalized and effective tools for computer-based learning environments.
7. Bordjiba, Yamina, Hayet Farida Merouani, and Nabiha Azizi. "Facial expression recognition via a jointly-learned dual-branch network". International Journal of Electrical and Computer Engineering Systems 13, no. 6 (September 1, 2022): 447–56. http://dx.doi.org/10.32985/ijeces.13.6.4.

Abstract:
Human emotion recognition depends on facial expressions, and essentially on the extraction of relevant features. Accurate feature extraction is generally difficult due to the influence of external interference factors and the mislabelling of some datasets, such as the Fer2013 dataset. Deep learning approaches permit automatic and intelligent feature extraction based on the input database, but in the case of poor database distribution or insufficient diversity of database samples, the extracted features will be negatively affected. Furthermore, one of the main challenges for efficient facial feature extraction and accurate facial expression recognition is that facial expression datasets are usually considerably small compared to other image datasets. To solve these problems, this paper proposes a new approach based on a dual-branch convolutional neural network for facial expression recognition, formed by three modules: the first two carry out the feature engineering stage in two branches, and the third performs feature fusion and classification. In the first branch, an improved convolutional part of the VGG network is used to benefit from its known robustness; in the second branch, the transfer learning technique is applied with the EfficientNet network to improve the quality of limited training samples in datasets. Finally, to improve the recognition performance, a classification decision is made based on the fusion of both branches' feature maps. Based on the experimental results obtained on the Fer2013 and CK+ datasets, the proposed approach shows its superiority compared to several state-of-the-art results as well as to using one model at a time. Those results are very competitive, especially for the CK+ dataset, for which the proposed dual-branch model reaches an accuracy of 99.32%, while for the FER-2013 dataset the VGG-inspired CNN obtains an accuracy of 67.70%, which is considered acceptable given the difficulty of the images in this dataset.
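The three-module layout described above (two feature branches plus a fusion/classification head) maps naturally onto a small PyTorch module. The sketch below, assuming torchvision's stock VGG16 and EfficientNet-B0 backbones rather than the authors' modified versions, shows the fusion-by-concatenation pattern.

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBranchFER(nn.Module):
    """Two-branch feature extraction fused before classification. Backbone
    and dimension choices are illustrative assumptions, not the authors'
    exact architecture. Requires a recent torchvision (weights=None API)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.branch_a = models.vgg16(weights=None).features          # VGG-style branch
        self.branch_b = models.efficientnet_b0(weights=None).features  # EfficientNet branch
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(512 + 1280, n_classes)

    def forward(self, x):
        fa = self.pool(self.branch_a(x)).flatten(1)   # 512-d feature vector
        fb = self.pool(self.branch_b(x)).flatten(1)   # 1280-d feature vector
        return self.head(torch.cat([fa, fb], dim=1))  # fused decision

model = DualBranchFER()
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 7])
```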
8. Büdenbender, Björn, Tim T. A. Höfling, Antje B. M. Gerdes, and Georg W. Alpers. "Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions’ prototypicality". PLOS ONE 18, no. 2 (February 10, 2023): e0281309. http://dx.doi.org/10.1371/journal.pone.0281309.

Abstract:
Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures to infer emotion categorization from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason might be that AFC is typically trained with standardized and prototypical facial expression inventories. Because AFC would be useful for analyzing less prototypical research material as well, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting and tested them on identical or cross-over material. All machine learning models’ accuracies were comparable when trained and tested on held-out data from the same dataset (acc. = [83.4% to 92.5%]). Strikingly, we found a substantial drop in accuracies for models trained with the highly prototypical standardized dataset when tested on the unstandardized dataset (acc. = [52.8%; 69.8%]). However, when they were trained with unstandardized expressions and tested with standardized datasets, accuracies held up (acc. = [82.7%; 92.5%]). These findings demonstrate a strong impact of the training material’s prototypicality on AFC’s ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research or even naturalistic scenarios, future developments should include more naturalistic facial expressions for training. This approach will improve the generalizability of AFC to encode more naturalistic facial expressions and increase robustness for future applications of this promising technology.
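The cross-over protocol in this abstract, train on one kind of material and test on the other, is straightforward to reproduce. A hedged sklearn sketch follows, assuming precomputed Action Unit feature vectors and using an SVM as an illustrative stand-in for the algorithms the study compares.

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_material_eval(X_proto, y_proto, X_natural, y_natural):
    """Train on prototypical (standardized) expression features, then score
    both a held-out split of the same material and the unstandardized
    material, mirroring the paper's comparison."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_proto, y_proto, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC())
    clf.fit(X_tr, y_tr)
    same = clf.score(X_te, y_te)              # same-dataset, held-out accuracy
    cross = clf.score(X_natural, y_natural)   # cross-material accuracy
    return same, cross
```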
9. Yap, Chuin Hong, Ryan Cunningham, Adrian K. Davison, and Moi Hoon Yap. "Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer". Journal of Imaging 7, no. 8 (August 11, 2021): 142. http://dx.doi.org/10.3390/jimaging7080142.

Abstract:
Long video datasets of facial macro- and micro-expressions remain in strong demand with the current dominance of data-hungry deep learning methods. There are limited methods of generating long videos which contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation on two Action Units (AUs), i.e., AU12 and AU6, of the original and synthetic data, with Pearson’s correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original facial movements and the transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially for the spotting task.
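The quantitative check reported above, Pearson's correlation between AU intensity signals of the original and synthetic videos, takes one scipy call. The arrays below are placeholders for per-frame AU12 intensities exported from OpenFace, not the paper's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder per-frame AU12 intensity signals for an original clip and its
# style-transferred counterpart; real values would come from OpenFace output.
rng = np.random.default_rng(0)
au12_original = rng.random(300)
au12_synthetic = 0.9 * au12_original + 0.1 * rng.random(300)

r, p = pearsonr(au12_original, au12_synthetic)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")  # the paper reports r = 0.74 for AU12
```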
10. Jin, Zhijia, Xiaolu Zhang, Jie Wang, Xiaolin Xu, and Jiangjian Xiao. "Fine-Grained Facial Expression Recognition in Multiple Smiles". Electronics 12, no. 5 (February 22, 2023): 1089. http://dx.doi.org/10.3390/electronics12051089.

Abstract:
Smiling has often been incorrectly interpreted as “happy” in the popular facial expression datasets (AffectNet, RAF-DB, FERPlus). Smiling is the most complex human expression, with positive, neutral, and negative smiles. We focused on fine-grained facial expression recognition (FER) and built a new smiling face dataset, named Facial Expression Emotions. This dataset categorizes smiles into six classes, containing a total of 11,000 images labeled with the corresponding fine-grained facial expression classes. We propose Smile Transformer, a network architecture for FER based on the Swin Transformer, to enhance the local perception capability of the model and improve the accuracy of fine-grained face recognition. Moreover, a convolutional block attention module (CBAM) was designed to focus on important features of the face image and suppress unnecessary regional responses. For better classification results, an image quality evaluation module was used to assign different labels to images of different qualities. Additionally, a dynamic weight loss function was designed to assign different learning strategies according to the labels during training, focusing on hard yet recognizable samples and discarding unidentifiable samples, to achieve better recognition. Overall, we focused on (a) creating a novel dataset of smiling facial images from online annotated images, and (b) developing a method for improved FER in smiling images. Facial Expression Emotions achieved an accuracy of 88.56% and could serve as a new benchmark dataset for future research on fine-grained FER.
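CBAM, mentioned in the abstract, is a published building block (Woo et al., 2018): channel attention followed by spatial attention. A minimal PyTorch version is sketched below; the reduction ratio and kernel size are common defaults, not necessarily the values this paper used.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal convolutional block attention module: channel attention from
    average- and max-pooled descriptors, then spatial attention from a
    convolution over channel-wise average and max maps."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)   # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))          # spatial attention

feat = torch.randn(2, 64, 28, 28)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 28, 28])
```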

Doctoral dissertations on the topic "FACIAL DATASET"

1. Yu, Kaimin. "Towards Realistic Facial Expression Recognition". Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/9459.

Abstract:
Automatic facial expression recognition has attracted significant attention over the past decades. Although substantial progress has been achieved for certain scenarios (such as frontal faces in strictly controlled laboratory settings), accurate recognition of facial expression in realistic environments remains unsolved for the most part. The main objective of this thesis is to investigate facial expression recognition in unconstrained environments. As one major problem faced by the literature is the lack of realistic training and testing data, this thesis presents a web search based framework to collect a realistic facial expression dataset from the Web. By adopting an active learning based method to remove noisy images from text based image search results, the proposed approach minimizes the human efforts during the dataset construction and maximizes the scalability for future research. Various novel facial expression features are then proposed to address the challenges imposed by the newly collected dataset. Finally, a spectral embedding based feature fusion framework is presented to combine the proposed facial expression features to form a more descriptive representation. This thesis also systematically investigates how the number of frames of a facial expression sequence can affect the performance of facial expression recognition algorithms, since facial expression sequences may be captured under different frame rates in realistic scenarios. A facial expression keyframe selection method is proposed based on keypoint based frame representation. Comprehensive experiments have been performed to demonstrate the effectiveness of the presented methods.
2. Godavarthy, Sridhar. "Microexpression Spotting in Video Using Optical Strain". Scholar Commons, 2010. https://scholarcommons.usf.edu/etd/1642.

Abstract:
Microexpression detection plays a vital role in applications such as lie detection and psychological consultations. Current research is progressing in the direction of automating microexpression recognition by aiming at classifying the microexpressions in terms of FACS Action Units. Although high detection rates are being achieved, the datasets used for evaluation of these systems are highly restricted. They are limited in size (usually still pictures or extremely short videos), motion constrained, contain only a single microexpression, and do not contain negative cases where microexpressions are absent. Only a few of these systems run in real time and even fewer have been tested on real-life videos. This work proposes a novel method for automated spotting of facial microexpressions as a preprocessing step to existing microexpression recognition systems. By identifying and rejecting sequences that do not contain microexpressions, longer sequences can be converted into shorter, constrained, relevant sequences comprising only single microexpressions, which can then be passed as input to existing systems, improving their performance and efficiency. This method utilizes the small temporal extent of microexpressions for their identification. The extent is determined by the period for which strain, due to the non-rigid motion caused during facial movement, acts on the facial skin. The subject's face is divided into sub-regions, and facial strain is calculated for each of these regions. The strain patterns in individual regions are used to identify subtle changes which facilitate the detection of microexpressions. The strain magnitude is calculated using the central difference method over the robust and dense optical flow field of each subject's face. The computed strain is then thresholded using a variable threshold. If the duration for which the strain is above the threshold corresponds to the duration of a microexpression, a detection is reported. The datasets used for algorithm evaluation comprise a mix of natural and enacted microexpressions. The results were promising, with up to an 80% true detection rate. Increased false positive spots in the Canal 9 dataset can be attributed to talking by the subjects causing fine movements in the mouth region. Performing speech detection to identify sequences where the subject is talking and excluding the mouth region during those periods could help reduce the number of false positives.
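The strain computation described above, central differences over a dense optical flow field, can be prototyped with OpenCV and numpy. Farneback flow stands in here for the thesis's robust dense flow, and the parameters are assumptions.

```python
import cv2
import numpy as np

def strain_magnitude(prev_gray, next_gray):
    """Mean optical-strain magnitude between two 8-bit grayscale frames.
    Spatial derivatives use central differences (np.gradient), matching the
    method named in the abstract."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    exy = 0.5 * (du_dy + dv_dx)                        # shear component
    eps = np.sqrt(du_dx**2 + dv_dy**2 + 2.0 * exy**2)  # strain tensor norm
    return eps.mean()

# Spotting then amounts to thresholding the per-frame strain and reporting a
# detection when the above-threshold duration matches a microexpression.
```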
3

KUMAR, NAVEEN. "MULTIMODAL HYBRID BIOMETRIC IDENTIFICATION USING FACIAL AND ELECTROCARDIOGRAM FEATURES". Thesis, 2018. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16314.

Abstract:
A single biometric is always prone to errors and misleading results. The inclination has been towards employing two or more biometric traits in any schema to accomplish superior efficiency. Depending on the application, these schemas can be deployed for identification as well as for recognition, proving instrumental in many fields. The biometric traits examined in the proposed schema are the electrocardiogram and the face. For the face trait, entropy-biased identification is pursued, employing DCT before PPCA. The electrocardiogram signal is explored further, including diverse feature points and classifiers. Fiducial points and the temporal locale of these points, combined with the entire PQRST segment, prove to be an enriching feature set, resulting in much improved results. Score-level fusion is exercised along with normalization. The novel fusion schema for these two biometric traits does not put any bias on either trait but adds the end outputs of the two separate schemas, as if they were employed independently, the best of which is selected as the final identification of the person. The schema is validated using the Yale facial dataset and the ECG-ID set from PhysioNet. This addition has shown tremendous enrichment of our accuracy results, strengthening our methodology.
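The fusion rule sketched in this abstract, normalize each modality's scores, run both identifications independently, and keep the more confident decision, can be expressed in a few lines of numpy. Details such as min-max normalization are assumptions.

```python
import numpy as np

def min_max(scores):
    """Rescale a vector of match scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_identities(face_scores, ecg_scores):
    """Score-level fusion: each array holds one match score per enrolled
    subject. Both modalities decide independently; the decision with the
    higher normalized confidence wins."""
    f, e = min_max(face_scores), min_max(ecg_scores)
    face_id, ecg_id = int(f.argmax()), int(e.argmax())
    return face_id if f[face_id] >= e[ecg_id] else ecg_id

print(fuse_identities([0.2, 0.9, 0.1], [0.4, 0.3, 0.5]))  # -> 1 (face wins)
```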
4. Moreira, Gonçalo Rebelo de Almeida. "Neuromorphic Event-based Facial Identity Recognition". Master's thesis, 2021. http://hdl.handle.net/10316/98251.

Abstract:
Integrated Master's dissertation in Electrical and Computer Engineering, presented to the Faculdade de Ciências e Tecnologia.
Facial recognition research has been around for longer than a half-century, as of today. This great interest in the field stems from its tremendous potential to enhance various industries, such as video surveillance, personal authentication, criminal investigation, and leisure. Most state-of-the-art algorithms rely on facial appearance; particularly, these methods utilize the static characteristics of the human face (e.g., the distance between both eyes, nose location, nose shape) to determine the subject's identity extremely accurately. However, it is further argued that humans also make use of another type of facial information to identify other people, namely, one's idiosyncratic facial motion. This kind of facial data is relevant due to being hardly replicable or forged, whereas appearance can be easily distorted by cheap software available to anyone. On another note, event cameras are quite recent neuromorphic devices that are remarkable at encoding dynamic information in a scene. These sensors are inspired by the biological operation mode of the human eye. Rather than detecting the light intensity, they capture light intensity variations in the setting. Thus, in comparison to standard cameras, this sensing mechanism has a high temporal resolution, therefore it does not suffer from motion blur, and has low power consumption, among other benefits. A few of its early applications have been real-time Simultaneous Localization And Mapping (SLAM), anomaly detection, and action/gesture recognition. Taking it all into account, the main purpose of this work is to evaluate the aptitude of the technology offered by event cameras for completing a more complex task, that being facial identity recognition, and how easily it could be integrated into real-world systems. Additionally, the dataset created in the scope of this dissertation (NVSFD Dataset) is also provided, in order to facilitate future third-party investigation on the topic.
5. Cavalini, Diandre de Paula. "Image Sentiment Analysis of Social Media Data". Master's thesis, 2021. http://hdl.handle.net/10400.6/11847.

Abstract:
Often a picture is worth a thousand words, and this small statement represents one of the biggest challenges in the image sentiment analysis area. The main theme of this dissertation is image sentiment analysis of social media, mainly Twitter, so that situations that represent risks can be identified (identification of negative situations) or predicted before they become one (prediction of negative situations). Despite the diversity of work done in the area of image sentiment analysis, it is still a challenging task. Several factors contribute to the difficulty, both more global factors, such as sociocultural issues, and issues within the scope of image sentiment analysis itself, such as the difficulty in finding reliable and properly labeled data, as well as factors faced during classification; for example, it is normal to associate images with darker colors and low brightness with negative feelings, since most such images are, but some cases escape this rule, and it is these cases that affect the accuracy of the developed models. To overcome these problems faced in classification, a multitask model was developed, which considers the entire image information, information from the salient areas in the images, the facial expressions of faces contained in the images, and textual information, so that each component complements the others during classification. During the experiments it was possible to observe that the use of the proposed models can bring advantages for the classification of sentiment in images and even work around some problems evidenced in existing works, such as the irony of the text. Therefore, this work aims to present the state of the art and the study carried out, in order to enable the presentation and implementation of the proposed model, the carrying out of the experiments, and the discussion of the results obtained, so as to verify the effectiveness of what was proposed. Finally, conclusions about the work done and future work will be presented.
6. Triggiani, Maurizio. "Integration of machine learning techniques in chemometrics practices". Doctoral thesis, 2022. http://hdl.handle.net/11589/237998.

Abstract:
Food safety is a key objective in all the development plans of the European Union. To ensure the quality and the sustainability of agricultural production (both intensive and extensive), a well-designed analysis strategy is needed. Climate change, precision agriculture, the green revolution, and Industry 4.0 are areas of study that need innovative practices and approaches that aren’t possible without precise and constant process monitoring. The need for product quality assessment during the whole supply chain is paramount, and cost reduction is also a constant need. Non-targeted Nuclear Magnetic Resonance (NMR) analysis is still a second-choice approach for food analysis and monitoring; one of the problems of this approach is the big amount of information returned. This kind of data needs a new and improved method of handling and analysis. Classical chemometrics practices are not well suited for this new field of study. In this thesis, we approached the problem of food fingerprinting and discrimination by means of non-targeted NMR spectroscopy combined with modern machine learning algorithms and databases meant for correct and easy access to data. The introduction of machine learning techniques, alongside the clear benefits, adds a new layer of complexity regarding the need for trusted data sources for algorithm training and integrity; if this kind of approach proves its worth in the global market, we’ll need not only to create a good dataset, but also to be prepared to defend against more clever attacks like adversarial machine learning attacks. Comparing the machine learning results with the classic chemometric approach, we highlight the strengths and weaknesses of both approaches, and we use them to prepare the framework needed to tackle the challenges of future agricultural production.

Book chapters on the topic "FACIAL DATASET"

1. Hlaváč, Miroslav, Ivan Gruber, Miloš Železný, and Alexey Karpov. "Semi-automatic Facial Key-Point Dataset Creation". In Speech and Computer, 662–68. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-66429-3_66.

2. Li, Yuezun, Pu Sun, Honggang Qi, and Siwei Lyu. "Toward the Creation and Obstruction of DeepFakes". In Handbook of Digital Face Manipulation and Detection, 71–96. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-87664-7_4.

Abstract:
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for large-scale datasets. However, current DeepFake datasets suffer from low visual quality and do not resemble DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenges posed by Celeb-DF. Then we introduce Landmark Breaker, the first dedicated method to disrupt facial landmark extraction, and apply it to the obstruction of the generation of DeepFake videos. The experiments are conducted on three state-of-the-art facial landmark extractors using our Celeb-DF dataset.
3. Feinland, Jacob, Jacob Barkovitch, Dokyu Lee, Alex Kaforey, and Umur Aybars Ciftci. "Poker Bluff Detection Dataset Based on Facial Analysis". In Image Analysis and Processing – ICIAP 2022, 400–410. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-06433-3_34.

4. Jalal, Anand Singh, Dilip Kumar Sharma, and Bilal Sikander. "FFV: Facial Feature Vector Image Dataset with Facial Feature Analysis and Feature Ranking". In Smart Intelligent Computing and Applications, Volume 2, 393–401. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9705-0_38.

5. Zhu, Hao, Wayne Wu, Wentao Zhu, Liming Jiang, Siwei Tang, Li Zhang, Ziwei Liu, and Chen Change Loy. "CelebV-HQ: A Large-Scale Video Facial Attributes Dataset". In Lecture Notes in Computer Science, 650–67. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20071-7_38.

6. Wei, Sijie, Xiaojun Jing, Aoran Chen, Qianqian Chen, Junsheng Mu, and Bohan Li. "AffectRAF: A Dataset Designed Based on Facial Expression Recognition". In Lecture Notes in Electrical Engineering, 1044–50. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-4775-9_135.

7. Matias, Jhennifer Cristine, Tobias Rossi Müller, Felipe Zago Canal, Gustavo Gino Scotton, Antonio Reis de Sa Junior, Eliane Pozzebon, and Antonio Carlos Sobieranski. "MIGMA: The Facial Emotion Image Dataset for Human Expression Recognition". In Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 153–62. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93420-0_15.

8. Singh, Shivendra, and Shajulin Benedict. "Indian Semi-Acted Facial Expression (iSAFE) Dataset for Human Emotions Recognition". In Communications in Computer and Information Science, 150–62. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-4828-4_13.

9. Kumar, Vikas, Shivansh Rao, and Li Yu. "Noisy Student Training Using Body Language Dataset Improves Facial Expression Recognition". In Computer Vision – ECCV 2020 Workshops, 756–73. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66415-2_53.

10. Tiwari, Shubham, Yash Sethia, Ashwani Tanwar, Ritesh Kumar, and Rudresh Dwivedi. "FRLL-Beautified: A Dataset of Fun Selfie Filters with Facial Attributes". In Communications in Computer and Information Science, 456–65. Cham: Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39059-3_30.


Conference papers on the topic "FACIAL DATASET"

1. Yan, Haibin, Marcelo H. Ang, and Aun Neow Poo. "Cross-dataset facial expression recognition". In IEEE International Conference on Robotics and Automation. IEEE, 2011. http://dx.doi.org/10.1109/icra.2011.5979705.

2. Ghafourian, Sarvenaz, Ramin Sharifi, and Amirali Baniasadi. "Facial Emotion Recognition in Imbalanced Datasets". In 9th International Conference on Artificial Intelligence and Applications (AIAPP 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.120920.

Abstract:
Computer vision has seen wide usage in recent years. One of its areas of study is facial emotion recognition, which plays a crucial role in interpersonal communication. This paper tackles the problem of intraclass variance in the face images of emotion recognition datasets. We test the system on augmented datasets including CK+, EMOTIC, and KDEF dataset samples. After modifying our dataset using the SMOTETomek approach, we observe an improvement over the default method.
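SMOTETomek, the rebalancing approach the paper applies, is available in the imbalanced-learn package. The sketch below demonstrates it on synthetic features standing in for CK+/EMOTIC/KDEF feature vectors.

```python
from collections import Counter
from imblearn.combine import SMOTETomek
from sklearn.datasets import make_classification

# SMOTE oversampling followed by Tomek-link cleaning on an imbalanced
# three-class problem; the synthetic features stand in for real emotion data.
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=0)
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))  # class counts before and after
```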
3. Haag, Kathrin, and Hiroshi Shimodaira. "The University of Edinburgh Speaker Personality and MoCap Dataset". In FAA '15: Facial Analysis and Animation. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2813852.2813860.

4. Timoshenko, Denis, Konstantin Simonchik, Vitaly Shutov, Polina Zhelezneva, and Valery Grishkin. "Large Crowdcollected Facial Anti-Spoofing Dataset". In 2019 Computer Science and Information Technologies (CSIT). IEEE, 2019. http://dx.doi.org/10.1109/csitechnol.2019.8895208.

5. Principi, Filippo, Stefano Berretti, Claudio Ferrari, Naima Otberdout, Mohamed Daoudi, and Alberto Del Bimbo. "The Florence 4D Facial Expression Dataset". In 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG). IEEE, 2023. http://dx.doi.org/10.1109/fg57933.2023.10042606.

6. Huang, Jiajun, Xueyu Wang, Bo Du, Pei Du, and Chang Xu. "DeepFake MNIST+: A DeepFake Facial Animation Dataset". In 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021. http://dx.doi.org/10.1109/iccvw54120.2021.00224.

7. Varkarakis, Viktor, and Peter Corcoran. "Dataset Cleaning — A Cross Validation Methodology for Large Facial Datasets using Face Recognition". In 2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX). IEEE, 2020. http://dx.doi.org/10.1109/qomex48832.2020.9123123.

8. Galea, Nathan, and Dylan Seychell. "Facial Expression Recognition in the Wild: Dataset Configurations". In 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE, 2022. http://dx.doi.org/10.1109/mipr54900.2022.00045.

9. Somanath, Gowri, MV Rohith, and Chandra Kambhamettu. "VADANA: A dense dataset for facial image analysis". In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops). IEEE, 2011. http://dx.doi.org/10.1109/iccvw.2011.6130517.

10. Yan, Yanfu, Ke Lu, Jian Xue, Pengcheng Gao, and Jiayi Lyu. "FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation". In 2019 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). IEEE, 2019. http://dx.doi.org/10.1109/icmew.2019.0-104.


Organizational reports on the topic "FACIAL DATASET"

1. Kimura, Marcia L., Rebecca L. Erikson, and Nicholas J. Lombardo. Non-Cooperative Facial Recognition Video Dataset Collection Plan. Office of Scientific and Technical Information (OSTI), August 2013. http://dx.doi.org/10.2172/1126360.

2

Тарасова, Олена Юріївна, i Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Abstract:
Facial recognition technology is named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance, and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face. Commonly found landmarks are, for example, the eyes, nose, or mouth corners. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks. Some methods use only basic facial landmarks, while others bring out more detail. We use the 68-point facial markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCv and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system for recognizing faces in photos and identifying wrinkles on the face. An algorithm is implemented that determines the presence and location of various types of wrinkles and their geometry on the face.
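The face and landmark detection step behind the web application can be reproduced with the Dlib and OpenCV libraries the abstract names. A minimal sketch follows; the 68-point model file path and the input image are assumptions, and the wrinkle analysis itself is only indicated by a comment.

```python
import cv2
import dlib

# Locate faces with dlib's frontal detector and extract the 68-point facial
# markup mentioned in the abstract. The .dat model file must be downloaded
# separately; its path here is an assumption.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for face in detector(gray):
    shape = predictor(gray, face)
    points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    # Wrinkle analysis would operate on regions defined by these key points,
    # e.g. the forehead band above the eyebrow landmarks (indices 17-26).
    for (x, y) in points:
        cv2.circle(img, (x, y), 1, (0, 255, 0), -1)
```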
3. Mackie, S. J., C. M. Furlong, P. K. Pedersen, and O. H. Ardakani. Stratigraphy, facies heterogeneities, and structure in the Montney Formation of northeastern British Columbia: relation to H2S distribution. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/329796.

Abstract:
In the last decade, the Lower Triassic Montney Formation of the Western Canada Sedimentary Basin (WCSB) has undergone significant development, providing a rich dataset to study structural, stratigraphic, and facies control on the variations in hydrogen sulphide (H2S) gas content. Splitting the siltstone dominated Montney into the three regional members (i.e., Lower Montney, Middle Montney, and Upper Montney) allows for detailed analysis on the enrichment of H2S within a local-scale study area in northeastern British Columbia (BC). Within this study area, Upper Montney H2S content increases within individual parasequences both up-dip and towards the east. In addition to potential up-dip migration, there may be greater sulphur-bearing components in the east, allowing for the sulphate reduction required to generate H2S. The overlying Middle Triassic thins eastward, providing proximity to the overlying anhydrite-rich beds of the Upper Triassic Charlie Lake Formation. Further, the overlying Middle Triassic Sunset Prairie Formation has an erosional edge that corresponds with eastern elevated H2S concentrations within the Upper Montney unit. Mapped structures are syn-depositional to the Middle Triassic, potentially providing conduits for early sulphate-rich fluid migration. In the Middle and Lower Montney, elevated H2S generally occurs with proximity to the top of the Permian Belloy Formation. Within this study area, limited Lower Montney data is available and thus needs to be further corroborated with regional data. Both the Middle and Lower Montney display elevated H2S in trends that generally align with mapped faults. The faults may have acted as conduits for sulphate-rich fluids to migrate during early burial then migrate laterally through facies that may have been permeable during early burial, such as the carbonate-rich facies at the boundary between the Middle and Lower Montney. Further core and isotope analyses are required to fully understand this relationship.
4. Tennant, David. Business Surveys on the Impact of COVID-19 on Jamaican Firms. Inter-American Development Bank, May 2021. http://dx.doi.org/10.18235/0003251.

Abstract:
The datasets come from two surveys of Jamaican businesses conducted between May and June 2020. Two sets of self-administered surveys were conducted using Survey Monkey. A very small sample of financial institutions was surveyed to gain perspective on the challenges facing financiers as a result of the pandemic, and their efforts to respond to such challenges. Nine financial institutions completed this survey, and the results were used to complement the information derived from the second and major survey. The second survey targeted non-financial businesses operating in Jamaica. The sample of firms was selected from a list of all registered Jamaican firms, obtained from the Companies Office of Jamaica. A stratified random sample was used based on firm type, region, and sector. Some firms may have also participated in the study through contact made by their respective affiliations, which were approached to endorse the study and encourage their members to engage. A total of 390 firms completed the second survey. A significant degree of representation was achieved across size, type and age of business, sector and location of operation. Good gender representation was also achieved.
5. Michalak, Julia, Josh Lawler, John Gross, and Caitlin Littlefield. A strategic analysis of climate vulnerability of national park resources and values. National Park Service, September 2021. http://dx.doi.org/10.36967/nrr-2287214.

Abstract:
The U.S. national parks have experienced significant climate-change impacts and rapid, on-going changes are expected to continue. Despite the significant climate-change vulnerabilities facing parks, relatively few parks have conducted comprehensive climate-change vulnerability assessments, defined as assessments that synthesize vulnerability information from a wide range of sources, identify key climate-change impacts, and prioritize vulnerable park resources (Michalak et al. In review). In recognition that funding and planning capacity is limited, this project was initiated to identify geographies, parks, and issues that are high priorities for conducting climate-change vulnerability assessments (CCVA) and strategies to efficiently address the need for CCVAs across all U.S. National Park Service (NPS) park units (hereafter “parks”) and all resources. To help identify priority geographies and issues, we quantitatively assessed the relative magnitude of vulnerability factors potentially affecting park resources and values. We identified multiple vulnerability factors (e.g., temperature change, wildfire potential, number of at-risk species, etc.) and sought existing datasets that could be developed into indicators of these factors. To be included in the study, datasets had to be spatially explicit or already summarized for individual parks and provide consistent data for at least all parks within the contiguous U.S. (CONUS). The need for consistent data across such a large geographic extent limited the number of datasets that could be included, excluded some important drivers of climate-change vulnerability, and prevented adequate evaluation of some geographies. The lack of adequately-scaled data for many key vulnerability factors, such as freshwater flooding risks and increased storm activity, highlights the need for both data development and more detailed vulnerability assessments at local to regional scales where data for these factors may be available. In addition, most of the available data at this scale were related to climate-change exposures, with relatively little data available for factors associated with climate-change sensitivity or adaptive capacity. In particular, we lacked consistent data on the distribution or abundance of cultural resources or accessible data on infrastructure across all parks. We identified resource types, geographies, and critical vulnerability factors that lacked data for NPS’ consideration in addressing data gaps. Forty-seven indicators met our criteria, and these were combined into 21 climate-change vulnerability factors. Twenty-seven indicators representing 12 vulnerability factors addressed climate-change exposure (i.e., projected changes in climate conditions and impacts). A smaller number of indictors measured sensitivity (12 indicators representing 5 vulnerability factors). The sensitivity indicators often measured park or landscape characteristics which may make resources more or less responsive to climate changes (e.g., current air quality) as opposed to directly representing the sensitivity of specific resources within the park (e.g., a particular rare species or type of historical structure). Finally, 6 indicators representing 4 vulnerability factors measured external adaptive capacity for living resources (i.e., characteristics of the park and/or surrounding landscape which may facilitate or impede species adaptation to climate changes). 
We identified indicators relevant to three resource groups: terrestrial living, aquatic living (including living cultural resources such as culturally significant landscapes, plant, or animal species) and non-living resources (including infrastructure and non-living cultural resources such as historic buildings or archeological sites). We created separate indicator lists for each of these resource groups and analyzed them separately. To identify priority geographies within CONUS,...
6. Corriveau, L., J. F. Montreuil, O. Blein, E. Potter, M. Ansari, J. Craven, R. Enkin, et al. Metasomatic iron and alkali calcic (MIAC) system frameworks: a TGI-6 task force to help de-risk exploration for IOCG, IOA and affiliated primary critical metal deposits. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/329093.

Abstract:
Australia's and China's resources (e.g., the Olympic Dam Cu-U-Au-Ag and Bayan Obo REE deposits) highlight how discovery and mining of iron oxide copper-gold (IOCG), iron oxide±apatite (IOA) and affiliated primary critical metal deposits in metasomatic iron and alkali-calcic (MIAC) mineral systems can secure a long-term supply of critical metals for Canada and its partners. In Canada, MIAC systems comprise a wide range of undeveloped primary critical metal deposits (e.g., the NWT NICO Au-Co-Bi-Cu and Québec HREE-rich Josette deposits). Underexplored settings are parts of metallogenic belts that extend into Australia and the USA. Some settings, such as the Camsell River district explored by the Dene First Nations in the NWT, have infrastructure and hundreds of kilometres of historic drill core. Yet vocabularies for mapping MIAC systems are scanty, and the ability to identify metasomatic vectors to ore is fledgling. Deposit models based on host rock types, structural controls or metal associations underpin the identification of MIAC affinities, assessment of systems' full mineral potential and development of robust mineral exploration strategies. This workshop presentation reviews public geoscience research and tools developed by the Targeted Geoscience Initiative to establish the MIAC frameworks of prospective Canadian settings and global mining districts and to help de-risk exploration for IOCG, IOA and affiliated primary critical metal deposits. The knowledge also supports fundamental research, environmental baseline assessment and societal decisions. It fulfills objectives of the Canadian Mineral and Metal Plan and the Critical Mineral Mapping Initiative, among others. The GSC-led MIAC research team comprises members of the academic, private and public sectors from Canada, Australia, Europe, the USA, China and Dene First Nations. The team's novel alteration mapping protocols; geological, mineralogical, geochemical and geophysical framework tools; and holistic mineral systems and petrophysics models mitigate and solve some of the exploration and geoscience challenges posed by the intricacies of MIAC systems. The group pioneers the use of discriminant alteration diagrams and barcodes and the assembly of a vocabulary for mapping and core logging; field short courses, an atlas, photo collections and system-scale field, geochemical, rock physical property and geophysical datasets are in progress to synthesize the shared signatures of Canadian settings and global MIAC mining districts. Research on a metamorphosed MIAC system and metamorphic phase equilibria modelling of alteration facies will provide a foundation for framework mapping and exploration of high-grade metamorphic terranes, where surface and near-surface resources are still to be discovered and mined, as are those of non-metamorphosed MIAC systems.
7. Projectile fluid penetration and flammability of respirators and other head/facial personal protective equipment (FPFPPE) (dataset). U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, June 2019. http://dx.doi.org/10.26616/nioshrd-1010-2019-1.
