To view the other types of publications on this topic, follow this link: Face detection on thermal image.

Journal articles on the topic "Face detection on thermal image"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for research on the topic "Face detection on thermal image".

Next to every entry in the list, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication as a PDF and read its online abstract, provided the relevant parameters are included in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Seo, Jongwoo, und In-Jeong Chung. „Face Liveness Detection Using Thermal Face-CNN with External Knowledge“. Symmetry 11, Nr. 3 (10.03.2019): 360. http://dx.doi.org/10.3390/sym11030360.

Abstract:
Face liveness detection is important for ensuring security. However, because faces can be presented as photographs or on a display, it is difficult to detect a real face using face-shape features alone. In this paper, we propose a thermal face convolutional neural network (Thermal Face-CNN) that incorporates the external knowledge that the face temperature of a real person is 36–37 °C on average. First, we compared red, green, and blue (RGB) images with thermal images to identify the data most suitable for face liveness detection using a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a C-support vector machine (C-SVM). Next, we compared the performance of these algorithms and the newly proposed Thermal Face-CNN on a thermal image dataset. The experimental results show that the thermal image is more suitable than the RGB image for face liveness detection. Further, we also found that Thermal Face-CNN performs better than CNN, MLP, and C-SVM when precision is weighted slightly more heavily than recall in the F-measure.
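As a rough illustration of the general idea of feeding an external temperature prior into a thermal liveness classifier, here is a minimal PyTorch-style sketch. It is not the authors' Thermal Face-CNN: the architecture, the input size, and the way the 36–37 °C prior is encoded are all illustrative assumptions.

# Minimal sketch (assumption, not the paper's exact Thermal Face-CNN):
# a small CNN on a thermal face crop, with the mean-face-temperature
# prior (live skin is roughly 36-37 degC) appended as an extra scalar feature.
import torch
import torch.nn as nn

class ThermalLivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        # +1 input for the temperature-prior feature
        self.classifier = nn.Linear(32 * 4 * 4 + 1, 2)  # live vs. spoof

    def forward(self, thermal_crop, mean_face_temp_c):
        x = self.features(thermal_crop).flatten(1)
        # distance of the measured mean face temperature from ~36.5 degC
        prior = (mean_face_temp_c - 36.5).abs().unsqueeze(1)
        return self.classifier(torch.cat([x, prior], dim=1))

# usage: a batch of 64x64 thermal crops plus their mean face temperatures
model = ThermalLivenessNet()
crops = torch.randn(8, 1, 64, 64)    # stand-in thermal data
temps = torch.tensor([36.4] * 8)     # mean face temperature per crop, in degC
logits = model(crops, temps)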
2

Albar, Albar, Hendrick Hendrick und Rahmad Hidayat. „Segmentation Method for Face Modelling in Thermal Images“. Knowledge Engineering and Data Science 3, Nr. 2 (31.12.2020): 99. http://dx.doi.org/10.17977/um018v3i22020p99-105.

Abstract:
Face detection is mostly applied to RGB images, and object detection usually relies on deep learning for model creation. One way to deal with face spoofing is to use a thermal camera. Well-known object detection methods include YOLO, Fast R-CNN, Faster R-CNN, SSD, and Mask R-CNN. We proposed a segmentation method based on Mask R-CNN to create a face model from thermal images. This model was able to locate the face area in images. The dataset was established using 1600 images, created from direct capture and collected from an online dataset. The Mask R-CNN was configured to train with 5 epochs and 131 iterations. The final model predicted and located the face correctly on the test images.
3

Ma, Chao, Ngo Trung, Hideaki Uchiyama, Hajime Nagahara, Atsushi Shimada und Rin-ichiro Taniguchi. „Adapting Local Features for Face Detection in Thermal Image“. Sensors 17, Nr. 12 (27.11.2017): 2741. http://dx.doi.org/10.3390/s17122741.

4

Kowalski, Marcin, und Krzysztof Mierzejewski. „Detection of 3D face masks with thermal infrared imaging and deep learning techniques“. Photonics Letters of Poland 13, Nr. 2 (30.06.2021): 22. http://dx.doi.org/10.4302/plp.v13i2.1091.

Abstract:
Biometric systems are becoming more and more efficient due to the increasing performance of algorithms, but these systems are also vulnerable to various attacks. Presentation of a falsified identity to a biometric sensor is one of the most urgent challenges for recent biometric recognition systems. Exploiting specific properties of thermal infrared appears to be a comprehensive solution for detecting face presentation attacks. This letter presents the outcome of our study on detecting 3D face masks using thermal infrared imaging and deep learning techniques. We demonstrate the results of a two-step, neural-network-based method for detecting presentation attacks.
5

Cho, Se, Na Baek, Min Kim, Ja Koo, Jong Kim und Kang Park. „Face Detection in Nighttime Images Using Visible-Light Camera Sensors with Two-Step Faster Region-Based Convolutional Neural Network“. Sensors 18, Nr. 9 (07.09.2018): 2995. http://dx.doi.org/10.3390/s18092995.

Abstract:
Conventional nighttime face detection studies mostly use near-infrared (NIR) light cameras or thermal cameras, which are robust to environmental illumination variation and low illumination. However, for the NIR camera, it is difficult to adjust the intensity and angle of the additional NIR illuminator according to its distance from an object. As for the thermal camera, it is expensive to use as a surveillance camera. For these reasons, we propose a nighttime face detection method based on deep learning using a single visible-light camera. In a long-distance night image, it is difficult to detect faces directly from the entire image due to noise and image blur. Therefore, we propose Two-Step Faster region-based convolutional neural network (R-CNN) based on the image preprocessed by histogram equalization (HE). As a two-step scheme, our method sequentially performs the detectors of body and face areas, and locates the face inside a limited body area. By using our two-step method, the processing time by Faster R-CNN can be reduced while maintaining the accuracy of face detection by Faster R-CNN. Using a self-constructed database called Dongguk Nighttime Face Detection database (DNFD-DB1) and an open database of Fudan University, we proved that the proposed method performs better compared to other existing face detectors. In addition, the proposed Two-Step Faster R-CNN outperformed single Faster R-CNN and our method with HE showed higher accuracies than those without our preprocessing in nighttime face detection.
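To illustrate the two-step scheme described in the abstract (histogram equalization, a body detector, then a face detector restricted to each body region), here is a minimal sketch. The body_detector and face_detector callables are hypothetical placeholders, not the paper's trained Faster R-CNN models.

# Minimal sketch of the two-step idea (assumption: the detectors below are
# placeholders, not the authors' trained Faster R-CNN networks).
import cv2
import numpy as np

def equalize(gray_night_image: np.ndarray) -> np.ndarray:
    """Histogram equalization (HE) preprocessing of a low-light frame."""
    return cv2.equalizeHist(gray_night_image)

def two_step_detect(frame_gray, body_detector, face_detector):
    """Step 1: detect bodies on the equalized frame; step 2: detect faces
    only inside each body box, which limits the face search area."""
    enhanced = equalize(frame_gray)
    faces = []
    for (x, y, w, h) in body_detector(enhanced):            # step 1
        body_crop = enhanced[y:y + h, x:x + w]
        for (fx, fy, fw, fh) in face_detector(body_crop):   # step 2
            faces.append((x + fx, y + fy, fw, fh))           # back to frame coords
    return faces

Restricting the second detector to body crops is what reduces the processing time relative to running a single face detector over the whole frame.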
6

Latinović, Nikola, Tijana Vuković, Ranko Petrović, Miloš Pavlović, Marko Kadijević, Ilija Popadić und Mladen Veinović. „Implementation challenge and analysis of thermal image degradation on R-CNN face detection“. Telfor Journal 12, Nr. 2 (2020): 98–103. http://dx.doi.org/10.5937/telfor2002098l.

Abstract:
Face detection systems using color cameras have been evolving rapidly and are well researched. In environments with good visibility they can reach excellent accuracy, but changes in illumination conditions can degrade performance, which is one of the major limitations of visible-light face detection systems. A solution to this problem could be the use of thermal infrared cameras, since their operation does not depend on illumination. Recent studies have shown that deep learning methods achieve impressive performance on object detection tasks, and on face detection in particular. The goal of this paper is to find an effective way to take advantage of the thermal infrared spectrum and to analyze the influence of various kinds of image degradation on thermal face detection performance in an R-CNN-based system, with special emphasis on the implementation on vVSP, a hardware platform for video signal processing developed by the Vlatacom Institute.
7

Fitriyah, Hurriyatul, und Edita Rosana Widasari. „Face Detection of Thermal Images in Various Standing Body-Pose using Facial Geometry“. IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 14, Nr. 4 (31.10.2020): 407. http://dx.doi.org/10.22146/ijccs.59672.

Abstract:
Automatic frontal-view face detection in thermal images is a primary task in health systems (e.g., febrile identification) and in security systems (e.g., intruder recognition). In everyday situations, the scanned person does not always stay in a frontal face view. This paper develops an algorithm to identify a frontal face across various standing body poses. The algorithm uses image processing: it first segments the face based on human skin temperature. Because exposed non-face body parts can also be included in the segmentation result, discriminant features of a face are then applied. The shape features are based on the characteristics of a frontal face: (1) the size of a face, (2) the facial golden ratio, and (3) the oval shape of a face. The algorithm was tested on various standing body poses rotated through 360° at camera-to-object distances of 2 m and 4 m. The accuracy of face detection in a controlled environment is 95.8%. Faces were detected whether or not the person was wearing glasses.
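A minimal sketch of the pipeline described above: skin-temperature segmentation followed by simple shape checks. The temperature range, area bounds, and ratio tolerances below are illustrative assumptions, not the paper's tuned parameters.

# Minimal sketch: segment by skin temperature, then keep regions whose shape
# roughly matches a frontal face (thresholds here are assumptions).
import cv2
import numpy as np

GOLDEN_RATIO = 1.618

def detect_frontal_faces(temp_map: np.ndarray):
    """temp_map: 2D array of per-pixel temperatures in degrees Celsius."""
    # 1) segment candidate skin regions by temperature
    skin_mask = ((temp_map > 30.0) & (temp_map < 38.0)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(skin_mask, 8)

    faces = []
    for i in range(1, n):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 200:                        # too small to be a face
            continue
        ratio = h / float(w)
        if abs(ratio - GOLDEN_RATIO) > 0.4:   # height/width near the golden ratio
            continue
        # 3) oval check: filled area close to the area of the bounding ellipse
        ellipse_area = np.pi * (w / 2.0) * (h / 2.0)
        if area / ellipse_area < 0.7:
            continue
        faces.append((x, y, w, h))
    return faces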
8

van Doremalen, Rob F. M., Jaap J. van Netten, Jeff G. van Baal, Miriam M. R. Vollenbroek-Hutten und Ferdinand van der Heijden. „Infrared 3D Thermography for Inflammation Detection in Diabetic Foot Disease: A Proof of Concept“. Journal of Diabetes Science and Technology 14, Nr. 1 (14.06.2019): 46–54. http://dx.doi.org/10.1177/1932296819854062.

Abstract:
Background: Thermal assessment of the plantar surface of the foot using spot thermometers and thermal imaging has been proven effective in diabetic foot ulcer prevention. However, with traditional cameras this is limited to single spots or a two-dimensional (2D) view of the plantar side of foot, where only 50% of the ulcers occur. To improve ulcer detection, the view has to be extended beyond 2D. Our aim is to explore for proof of concept the combination of three-dimensional (3D) models with thermal imaging for inflammation detection in diabetic foot disease. Method: From eight participants with a current diabetic foot ulcer we simultaneously acquired a 3D foot model and three thermal infrared images using a high-resolution medical 3D imaging system aligned with three smartphone-based thermal infrared cameras. Using spatial transformations, we aimed to map thermal images onto the 3D model, to create the 3D visualizations. Expert clinicians assessed these for quality and face validity as +, +/-, -. Results: We could replace the texture maps (color definitions) of the 3D model with the thermal infrared images and created the first-ever 3D thermographs of the diabetic foot. We then converted these models to 3D PDF-files compatible with the hospital IT environment. Face validity was assessed as + in six and +/- in two cases. Conclusions: We have provided a proof of concept for the creation of clinically useful 3D thermal foot images to assess the diabetic foot skin temperature in 3D in a hospital IT environment. Future developments are expected to improve the image-processing techniques to result in easier, handheld applications and driving further research.
9

Bedoya-Echeverry, Sebastián, Hernán Belalcázar-Ramírez, Humberto Loaiza-Correa, Sandra Esperanza Nope-Rodríguez, Carlos Rafael Pinedo-Jaramillo und Andrés David Restrepo-Girón. „Detection of lies by facial thermal imagery analysis“. Revista Facultad de Ingeniería 26, Nr. 44 (25.01.2017): 45. http://dx.doi.org/10.19053/01211129.v26.n44.2017.5771.

Abstract:
An artificial vision system is presented for lie detection by analyzing face thermal image sequences. This system represents an alternative technique to the polygraph. Some of its features are: 1) it has no physical contact with the examinee, 2) it is non-intrusive, 3) it has a potential for private use, and 4) it can simultaneously analyze several persons. The proposed system is based on the detection of physiological changes in temperature in the lacrimal puncta area caused by the subtle increase in blood flow through the nearby vascular network. These changes take place when anxiety appears as a consequence of deception. Thus, the system segments the periorbital area, and tracks consecutive frames using the Kanade-Lucas-Tomasi algorithm. The results show a success rate of 79.2 % in detecting lies using a simple classification based on the comparison between the estimated temperatures in control questions, and the rest of the interrogation procedure. The performance of this system is comparable with previous works, where cameras with better specifications were used.
10

Kopaczka, Marcin, Lukas Breuer, Justus Schock und Dorit Merhof. „A Modular System for Detection, Tracking and Analysis of Human Faces in Thermal Infrared Recordings“. Sensors 19, Nr. 19 (24.09.2019): 4135. http://dx.doi.org/10.3390/s19194135.

Abstract:
We present a system that utilizes a range of image processing algorithms to allow fully automated thermal face analysis under both laboratory and real-world conditions. We implement methods for face detection, facial landmark detection, face frontalization and analysis, combining all of these into a fully automated workflow. The system is fully modular and allows implementing additional algorithms for improved performance or specialized tasks. Our suggested pipeline contains a histogram of oriented gradients support vector machine (HOG-SVM) based face detector and different landmark detection methods implemented using feature-based active appearance models, deep alignment networks and a deep shape regression network. Face frontalization is achieved by utilizing piecewise affine transformations. For the final analysis, we present an emotion recognition system that utilizes HOG features and a random forest classifier and a respiratory rate analysis module that computes average temperatures from an automatically detected region of interest. Results show that our combined system achieves a performance which is comparable to current stand-alone state-of-the-art methods for thermal face and landmark detection and a classification accuracy of 65.75% for four basic emotions.
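A minimal sketch of a HOG plus linear-SVM face/non-face classifier of the kind used as the detector stage above. The training data, window size, and HOG parameters here are assumptions, not the authors' configuration.

# Minimal HOG + linear SVM face/non-face classifier sketch.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(gray_patch: np.ndarray) -> np.ndarray:
    """64x64 grayscale/thermal patch -> HOG descriptor vector."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def train_face_classifier(face_patches, nonface_patches):
    """Train a linear SVM on HOG descriptors of face and non-face patches."""
    X = np.array([hog_features(p) for p in face_patches + nonface_patches])
    y = np.array([1] * len(face_patches) + [0] * len(nonface_patches))
    clf = LinearSVC(C=1.0)
    clf.fit(X, y)
    return clf

# At detection time the classifier would be slid over the thermal frame and
# each window scored with clf.decision_function(hog_features(window)).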
11

Marzec, Mariusz, Robert Koprowski, Zygmunt Wróbel, Agnieszka Kleszcz und Sławomir Wilczyński. „Automatic method for detection of characteristic areas in thermal face images“. Multimedia Tools and Applications 74, Nr. 12 (31.10.2013): 4351–68. http://dx.doi.org/10.1007/s11042-013-1745-9.

12

Nata Septiadi, Wayan, Ni Made Dian Sulistiowati und Abdul Wakhid. „THERMOGRAPHIC EVALUATION FOR THE DIVERSE STAGE OF ANXIETY ON FACE TEMPERATURE AT FRONTAL AND TEMPORAL USING THERMAL IMAGING“. Humanities & Social Sciences Reviews 7, Nr. 5 (05.11.2019): 1130–36. http://dx.doi.org/10.18510/hssr.2019.75149.

Abstract:
[…] that it is able to provide accurate results about the temperature picture. The purpose of this study was to examine whether there are differences in facial temperature, measured using thermal imaging, across anxiety conditions. Methodology: Eighty-one participants taking pre-clinical exams were selected according to the inclusion criteria and divided into four categories of anxiety (not anxious, mild anxiety, moderate anxiety, and severe anxiety) based on their scores on the General Anxiety Disorder (GAD-7) instrument. The participants' face temperature was measured using thermal imaging on the upper forehead (frontal) and the left and right forehead (temporal). The data were analyzed to characterize anxiety level in relation to frontal and temporal temperature. The anxiety conditions (not anxious, mildly anxious, moderately anxious, and severely anxious) differed in the facial temperatures measured by thermal imaging at the frontal and temporal regions. Main findings: The results showed that the higher the temporal and frontal face temperature, the more severe the anxiety. There is a significant negative relationship between face temperature and anxiety level (p < 0.05). Implications: These findings show that anxiety can be screened quickly with a thermal image. Further research is needed to determine the specificity and sensitivity of thermal imaging as an anxiety detection tool that is fast and requires no invasive action. Novelty: There are no previous studies that discuss the correlation between anxiety and face temperature using thermal imaging.
13

Sun, Pengcheng, Dan Zeng, Xiaoyan Li, Lin Yang, Liyuan Li, Zhouxia Chen und Fansheng Chen. „A 3D Mask Presentation Attack Detection Method Based on Polarization Medium Wave Infrared Imaging“. Symmetry 12, Nr. 3 (03.03.2020): 376. http://dx.doi.org/10.3390/sym12030376.

Abstract:
Facial recognition systems are often spoofed by presentation attack instruments (PAI), especially by the use of three-dimensional (3D) face masks. However, nonuniform illumination conditions and significant differences in facial appearance will lead to the performance degradation of existing presentation attack detection (PAD) methods. Based on conventional thermal infrared imaging, a PAD method based on the medium wave infrared (MWIR) polarization characteristics of the surface material is proposed in this paper for countering a flexible 3D silicone mask presentation attack. A polarization MWIR imaging system for face spoofing detection is designed and built, taking advantage of the fact that polarization-based MWIR imaging is not restricted by external light sources (including visible light and near-infrared light sources) in spite of facial appearance. A sample database of real face images and 3D face mask images is constructed, and the gradient amplitude feature extraction method, based on MWIR polarization facial images, is designed to better distinguish the skin of a real face from the material used to make a 3D mask. Experimental results show that, compared with conventional thermal infrared imaging, polarization-based MWIR imaging is more suitable for the PAD method of 3D silicone masks and shows a certain robustness in the change of facial temperature.
14

Rao, M. Sivasankara, K. Tejasree, P. Sathwik, P. Sandeep Kumar und M. Sailohith. „Real Time Face Mask Detection and Thermal Screening with Audio Response for COVID-19“. Revista Gestão Inovação e Tecnologias 11, Nr. 4 (22.07.2021): 2703–14. http://dx.doi.org/10.47059/revistageintec.v11i4.2311.

Abstract:
The coronavirus COVID-19 pandemic continues to spread everywhere on earth and is causing a severe health crisis. According to the World Health Organization (WHO), a helpful protective measure is wearing a face mask in all areas where people gather. Along with the face mask, body temperature checks and sanitization also play a vital role in staying safe. Thus, monitoring whether individuals are wearing a mask is important. In this paper, we propose a system that uses TensorFlow, Keras, MobileNetV2, and OpenCV to detect face masks. A dataset containing images of persons with and without masks, obtained from multiple sources, is used to train a deep learning model. Automatic temperature checking and sanitation are then performed. Finally, the proposed system gives an audio/voice output indicating whether a face mask is present and the person's body temperature. Our approach would be beneficial in reducing the spread of this infectious disease and will encourage people to use face masks; regular sanitization and temperature monitoring can keep the workplace safe.
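A minimal Keras sketch of a MobileNetV2-based mask/no-mask classifier of the kind named in the abstract. The classification head, input size, and hyperparameters are assumptions, not the authors' exact configuration.

# Minimal MobileNetV2 transfer-learning sketch for mask / no-mask classification.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2

def build_mask_classifier(input_shape=(224, 224, 3)) -> Model:
    base = MobileNetV2(input_shape=input_shape, include_top=False,
                       weights="imagenet")
    base.trainable = False                          # transfer learning: freeze the base
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # mask vs. no mask
    model = Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model.fit(...) would then be called on face crops extracted with OpenCV.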
15

Kakaraparthi, Vimal, Qijia Shao, Charles J. Carver, Tien Pham, Nam Bui, Phuc Nguyen, Xia Zhou und Tam Vu. „FaceSense“. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, Nr. 3 (09.09.2021): 1–27. http://dx.doi.org/10.1145/3478129.

Abstract:
Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touch is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face, rather than detecting actual touches. As such, these systems are prone to false positives upon hand or object movement in proximity to one's face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating them between sensitive/mucosal areas from other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals. Thermal sensors sense the thermal infrared signal emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. Processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of the touch. We fabricated prototypes using off-the-shelf hardware and conducted experiments with 14 participants while they perform various daily activities (e.g., drinking, talking). Results show a macro-F1-score of 83.4% for touch detection with leave-one-user-out cross-validation and a macro-F1-score of 90.1% for touch zone identification with a personalized model.
16

Jakkaew, Prasara, und Takao Onoye. „Non-Contact Respiration Monitoring and Body Movements Detection for Sleep Using Thermal Imaging“. Sensors 20, Nr. 21 (05.11.2020): 6307. http://dx.doi.org/10.3390/s20216307.

Abstract:
Monitoring respiration and body movements during sleep is part of screening for sleep disorders related to health status. Nowadays, thermal-based methods have been presented to monitor the sleeping person without any sensors attached to the body and to protect privacy. Non-contact respiration monitoring based on thermal videos usually requires visible facial landmarks such as the nostrils and mouth. The limitation of these techniques is the failure of face detection during sleep with a fixed camera position. This study presents a non-contact respiration monitoring approach that does not require facial landmark visibility in a natural sleep environment, which implies an uncontrolled sleep posture, darkness, and subjects covered with a blanket. Automatic region of interest (ROI) extraction by temperature detection and breathing motion detection are integrated through image processing to obtain the respiration signals. A signal processing technique was used to estimate respiration and body movement information from thermal video sequences. The proposed approach was tested on 16 volunteers, who carried out the video recordings themselves. The participants were also asked to wear the Go Direct respiratory belt for capturing reference data. The results revealed that the proposed respiratory rate measurement achieves a root mean square error (RMSE) of 1.82±0.75 bpm. The advantage of this approach lies in its simplicity and accessibility for users who need to monitor their respiration during sleep without direct contact.
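As a sketch of the signal processing step, the following estimates a respiration rate from the mean temperature of a pre-extracted ROI over time. The breathing frequency band and the plain FFT approach are illustrative assumptions, not the paper's exact pipeline.

# Minimal sketch: respiration rate from a per-frame mean ROI temperature signal.
import numpy as np

def respiration_rate_bpm(roi_means: np.ndarray, fps: float) -> float:
    """roi_means: mean ROI temperature per frame; fps: frames per second."""
    signal = roi_means - roi_means.mean()             # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # keep only plausible breathing frequencies (~6-40 breaths per minute)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                            # Hz -> breaths per minute

# example: 60 s of synthetic data at 8.7 fps with a 15 bpm oscillation
fps = 8.7
t = np.arange(0, 60, 1.0 / fps)
fake_roi = 33.0 + 0.2 * np.sin(2 * np.pi * (15 / 60.0) * t)
print(respiration_rate_bpm(fake_roi, fps))             # ~15.0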
17

Jamrozik, Wojciech, und Jacek Górka. „Detection of slag inclusions using infrared thermal imagining system“. MATEC Web of Conferences 338 (2021): 01012. http://dx.doi.org/10.1051/matecconf/202133801012.

Abstract:
Assuring high quality of welded joints is a vital task in many industrial branches, including when joints are made manually, as is the case in metal-arc welding with a covered electrode. One of the main imperfections that can occur in this process is slag inclusion. In this paper, a method for the detection of slag inclusions in multipass manual welding is proposed and validated. The key idea of the method is that small temperature disturbances will be noticeable in consecutive cross-sections of the joint during the cooling pass. The temperature distribution on the weld face was measured with a long-wave infrared (LWIR) camera. For consecutive cross-sections made in the IR representation of the joint, differences in mean temperature were calculated to assess the cooling rate directly after the elements were welded. This is possible because the whole joint is visible on each thermogram, so the position of the electrode over time can be easily marked. The results of slag inclusion detection were compared with radiographic images of the produced joints. In the future, additional studies will be performed in order to generalize the proposed method to a wider group of materials and to more complex welds.
18

Samadiani, Huang, Cai, Luo, Chi, Xiang und He. „A Review on Automatic Facial Expression Recognition Systems Assisted by Multimodal Sensor Data“. Sensors 19, Nr. 8 (18.04.2019): 1863. http://dx.doi.org/10.3390/s19081863.

Abstract:
Facial Expression Recognition (FER) can be widely applied to various research areas, such as mental diseases diagnosis and human social/physiological interaction detection. With the emerging advanced technologies in hardware and sensors, FER systems have been developed to support real-world application scenes, instead of laboratory environments. Although the laboratory-controlled FER systems achieve very high accuracy, around 97%, the technical transferring from the laboratory to real-world applications faces a great barrier of very low accuracy, approximately 50%. In this survey, we comprehensively discuss three significant challenges in the unconstrained real-world environments, such as illumination variation, head pose, and subject-dependence, which may not be resolved by only analysing images/videos in the FER system. We focus on those sensors that may provide extra information and help the FER systems to detect emotion in both static images and video sequences. We introduce three categories of sensors that may help improve the accuracy and reliability of an expression recognition system by tackling the challenges mentioned above in pure image/video processing. The first group is detailed-face sensors, which detect a small dynamic change of a face component, such as eye-trackers, which may help differentiate the background noise and the feature of faces. The second is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to visual dimension and improve the recognition reliability for example in illumination variation and position shift situation. The last is target-focused sensors, such as infrared thermal sensors, which can facilitate the FER systems to filter useless visual contents and may help resist illumination variation. Also, we discuss the methods of fusing different inputs obtained from multimodal sensors in an emotion system. We comparatively review the most prominent multimodal emotional expression recognition approaches and point out their advantages and limitations. We briefly introduce the benchmark data sets related to FER systems for each category of sensors and extend our survey to the open challenges and issues. Meanwhile, we design a framework of an expression recognition system, which uses multimodal sensor data (provided by the three categories of sensors) to provide complete information about emotions to assist the pure face image/video analysis. We theoretically analyse the feasibility and achievability of our new expression recognition system, especially for the use in the wild environment, and point out the future directions to design an efficient, emotional expression recognition system.
19

Rodriguez-Lozano, Francisco J., Fernando León-García, M. Ruiz de Adana, Jose M. Palomares und J. Olivares. „Non-Invasive Forehead Segmentation in Thermographic Imaging“. Sensors 19, Nr. 19 (22.09.2019): 4096. http://dx.doi.org/10.3390/s19194096.

Abstract:
The temperature of the forehead is known to be highly correlated with the internal body temperature. This area is widely used in thermal comfort systems, lie-detection systems, etc. However, there is a lack of tools to achieve the segmentation of the forehead using thermographic images and non-intrusive methods. In fact, this is usually segmented manually. This work proposes a simple and novel method to segment the forehead region and to extract the average temperature from this area solving this lack of non-user interaction tools. Our method is invariant to the position of the face, and other different morphologies even with the presence of external objects. The results provide an accuracy of 90% compared to the manual segmentation using the coefficient of Jaccard as a metric of similitude. Moreover, due to the simplicity of the proposed method, it can work with real-time constraints at 83 frames per second in embedded systems with low computational resources. Finally, a new dataset of thermal face images is presented, which includes some features which are difficult to find in other sets, such as glasses, beards, moustaches, breathing masks, and different neck rotations and flexions.
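The 90% agreement figure above is reported using the Jaccard coefficient between the automatic and manual forehead masks. A minimal sketch of that metric for binary segmentation masks:

# Jaccard similarity (intersection over union) of two boolean masks.
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / float(union)

# example with two overlapping rectangular masks
auto = np.zeros((100, 100), bool)
auto[20:60, 30:70] = True
manual = np.zeros((100, 100), bool)
manual[25:65, 35:75] = True
print(round(jaccard(auto, manual), 3))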
20

Laureti, Stefano, Muhammad Khalid Rizwan, Hamed Malekmohammadi, Pietro Burrascano, Maurizio Natali, Luigi Torre, Marco Rallini, Ivan Puri, David Hutchins und Marco Ricci. „Delamination Detection in Polymeric Ablative Materials Using Pulse-Compression Thermography and Air-Coupled Ultrasound“. Sensors 19, Nr. 9 (13.05.2019): 2198. http://dx.doi.org/10.3390/s19092198.

Abstract:
Ablative materials are used extensively in the aerospace industry for protection against high thermal stresses and temperatures, an example being glass/silicone composites. The extreme conditions faced and the cost-risk related to the production/operating stage of such high-tech materials indicate the importance of detecting any anomaly or defect arising from the manufacturing process. In this paper, two different non-destructive testing techniques, namely active thermography and ultrasonic testing, have been used to detect a delamination in a glass/silicone composite. It is shown that a frequency modulated chirp signal and pulse-compression can successfully be used in active thermography for detecting such a delamination. Moreover, the same type of input signal and post-processing can be used to generate an image using air-coupled ultrasound, and an interesting comparison between the two can be made to further characterise the defect.
21

Rahman, MD Abdur, M. Shamim Hossain, Nabil A. Alrajeh und B. B. Gupta. „A Multimodal, Multimedia Point-of-Care Deep Learning Framework for COVID-19 Diagnosis“. ACM Transactions on Multimedia Computing, Communications, and Applications 17, Nr. 1s (31.03.2021): 1–24. http://dx.doi.org/10.1145/3421725.

Abstract:
In this article, we share our experiences in designing and developing a suite of deep neural network–(DNN) based COVID-19 case detection and recognition framework. Existing pathological tests such as RT-PCR-based pathogen RNA detection from nasal swabbing seem to display low detection rates during the early stages of virus contraction. Moreover, the reliance on a few overburdened laboratories based around an epicenter capable of supplying large numbers of RT-PCR tests makes this testing method non-scalable when the rate of infections is high. Similarly, finding an effective drug or vaccine with which to combat COVID-19 requires a long time and many clinical trials. The development of pathological COVID-19 tests is hindered by shortages in the supply chain of chemical reagents necessary for testing on a large scale. This diminishes the speed of diagnosis and the ability to filter out COVID-19 positive patients from uninfected patients on a national level. Existing research has shown that DNN has been successful in identifying COVID-19 from radiological media such as CT scans and X-ray images, audio media such as cough sounds, optical coherence tomography to identify conjunctivitis and pink eye symptoms on the ocular surface, body temperature measurement using smartphone fingerprint sensors or thermal cameras, the use of live facial detection to identify safe social distancing practices from camera images, and face mask detection from camera images. We also investigate the utility of federated learning in diagnosis cases where private data can be trained via edge learning. These point-of-care modalities can be integrated with DNN-based RT-PCR laboratory test results to assimilate multiple modalities of COVID-19 detection and thereby provide more dimensions of diagnosis. Finally, we will present our initial test results, which are encouraging.
22

Zefri, Yahya, Achraf ElKettani, Imane Sebari und Sara Ait Lamallam. „Thermal Infrared and Visual Inspection of Photovoltaic Installations by UAV Photogrammetry—Application Case: Morocco“. Drones 2, Nr. 4 (23.11.2018): 41. http://dx.doi.org/10.3390/drones2040041.

Abstract:
Being sustainable, clean, and eco-friendly, photovoltaic technology is considered one of the most promising solutions to worldwide energy challenges. Morocco has joined this trend with the inauguration of numerous clean energy projects. However, one key factor in making photovoltaic installations a profitable investment is regular and effective inspection in order to detect defects that have occurred. Unmanned aerial vehicles (UAVs) are increasingly used in various inspection fields. In this respect, this work focuses on the use of thermal and visual imagery taken by UAV for the inspection of photovoltaic installations. Visual and thermal images of photovoltaic modules, obtained by UAV from different installations and with different acquisition conditions and parameters, were exploited to generate orthomosaics for inspection purposes. The methodology was tested on a dataset we acquired through a mission in Rabat (Morocco), and also on external datasets acquired in Switzerland. As final results, several visual defects, such as cracks, soiling, and hotspots, were detected in visual RGB and thermal orthomosaics. In addition, a procedure for semi-automatic hotspot extraction was developed and is presented within this work. Furthermore, various tests were conducted on the influence of some acquisition and processing parameters (image overlap, ground sampling distance, flying height, the use of ground control points, internal camera parameter optimization) on the detection of defects and on the quality of the generated visual and thermal orthomosaics. Finally, the potential of UAV thermal and visual imagery in the inspection of photovoltaic installations is discussed as a function of these parameters. On this basis, UAVs were concluded to be advantageous tools for this type of project, which supports the need for their adoption in this context.
23

Negishi, Toshiaki, Shigeto Abe, Takemi Matsui, He Liu, Masaki Kurosawa, Tetsuo Kirimoto und Guanghao Sun. „Contactless Vital Signs Measurement System Using RGB-Thermal Image Sensors and Its Clinical Screening Test on Patients with Seasonal Influenza“. Sensors 20, Nr. 8 (13.04.2020): 2171. http://dx.doi.org/10.3390/s20082171.

Abstract:
Background: In the last two decades, infrared thermography (IRT) has been applied in quarantine stations for the screening of patients with suspected infectious disease. However, the fever-based screening procedure employing IRT suffers from low sensitivity, because monitoring body temperature alone is insufficient for detecting infected patients. To overcome the drawbacks of fever-based screening, this study aims to develop and evaluate a multiple vital sign (i.e., body temperature, heart rate and respiration rate) measurement system using RGB-thermal image sensors. Methods: The RGB camera measures blood volume pulse (BVP) through variations in the light absorption from human facial areas. IRT is used to estimate the respiration rate by measuring the change in temperature near the nostrils or mouth accompanying respiration. To enable a stable and reliable system, the following image and signal processing methods were proposed and implemented: (1) an RGB-thermal image fusion approach to achieve highly reliable facial region-of-interest tracking, (2) a heart rate estimation method including a tapered window for reducing noise caused by the face tracker, reconstruction of a BVP signal with three RGB channels to optimize a linear function, thereby improving the signal-to-noise ratio and multiple signal classification (MUSIC) algorithm for estimating the pseudo-spectrum from limited time-domain BVP signals within 15 s and (3) a respiration rate estimation method implementing nasal or oral breathing signal selection based on signal quality index for stable measurement and MUSIC algorithm for rapid measurement. We tested the system on 22 healthy subjects and 28 patients with seasonal influenza, using the support vector machine (SVM) classification method. Results: The body temperature, heart rate and respiration rate measured in a non-contact manner were highly similarity to those measured via contact-type reference devices (i.e., thermometer, ECG and respiration belt), with Pearson correlation coefficients of 0.71, 0.87 and 0.87, respectively. Moreover, the optimized SVM model with three vital signs yielded sensitivity and specificity values of 85.7% and 90.1%, respectively. Conclusion: For contactless vital sign measurement, the system achieved a performance similar to that of the reference devices. The multiple vital sign-based screening achieved higher sensitivity than fever-based screening. Thus, this system represents a promising alternative for further quarantine procedures to prevent the spread of infectious diseases.
24

Awhad, Rahul, Saurabh Jayswal, Adesh More und Jyoti Kundale. „Fraudulent Face Image Detection“. ITM Web of Conferences 32 (2020): 03005. http://dx.doi.org/10.1051/itmconf/20203203005.

Abstract:
Due to the growing advancements in technology, many software applications are being developed to modify and edit images. Such software can be used to alter images. Nowadays, an altered image is so realistic that it becomes too difficult for a person to identify whether the image is fake or real. Such software applications can be used to alter the image of a person’s face also. So, it becomes very difficult to identify whether the image of the face is real or not. Our proposed system is used to identify whether the image of a face is fake or real. The proposed system makes use of machine learning. The system makes use of a convolution neural network and support vector classifier. Both these machine learning models are trained using real as well as fake images. Both these trained models will take an image as an input and will determine whether the image is fake or real.
25

Wen, Lilong, und Dan Xu. „Face Image Manipulation Detection“. IOP Conference Series: Materials Science and Engineering 533 (30.05.2019): 012054. http://dx.doi.org/10.1088/1757-899x/533/1/012054.

26

Nam, Amir Nobahar Sadeghi. „Face Detection“. Volume 5 - 2020, Issue 9 - September 5, Nr. 9 (29.09.2020): 688–92. http://dx.doi.org/10.38124/ijisrt20sep391.

Abstract:
Face detection is one of the challenging problems in image processing and a main part of automatic face recognition. Employing color and image segmentation procedures, a simple and effective algorithm is presented to detect human faces in the input image. To evaluate the performance, the results of the proposed methodology are compared with the Viola-Jones face detection method.
27

Wei, Gang, und Ishwar K. Sethi. „Face detection for image annotation“. Pattern Recognition Letters 20, Nr. 11-13 (November 1999): 1313–21. http://dx.doi.org/10.1016/s0167-8655(99)00100-2.

28

Melkani, Gaurav, und Sunil Maggu. „Image-Based Face Detection and Recognition“. International Journal for Modern Trends in Science and Technology 6, Nr. 12 (01.01.2021): 466–70. http://dx.doi.org/10.46501/ijmtst061290.

Abstract:
Face recognition from image or video is a popular topic in biometrics research. Many public places have surveillance cameras for video capture, and these cameras have significant value for security purposes. It is widely acknowledged that face recognition has played an important role in surveillance systems, as it does not need the subject's cooperation. The actual advantages of face-based identification over other biometrics are uniqueness and acceptance. As the human face is a dynamic object with a high degree of variability in its appearance, face detection is a difficult problem in computer vision. In this field, accuracy and speed of identification are the main issues. The goal of this paper is to evaluate various face detection and recognition methods and to provide a complete solution for image-based face detection and recognition with higher accuracy and better response rate as an initial step for video surveillance. The solution is proposed based on tests performed on various face-rich databases in terms of subjects, pose, emotions, race, and lighting.
29

Muhimmah, Izzati, Fadhillah Abriyani und Arrie Kurniawardhani. „Automatic wrinkles detection on face image“. IOP Conference Series: Materials Science and Engineering 482 (11.03.2019): 012026. http://dx.doi.org/10.1088/1757-899x/482/1/012026.

30

Oualla, Mohamed, Khalid Ounachad und Abdelalim Sadiq. „Building Face Detection with Face Divine Proportions“. International Journal of Online and Biomedical Engineering (iJOE) 17, Nr. 04 (06.04.2021): 63. http://dx.doi.org/10.3991/ijoe.v17i04.19149.

Abstract:
In this paper, we propose an algorithm for detecting multiple human faces in an image based on Haar-like features to represent the invariant characteristics of a face. The choice of relevant and more representative features is based on the divine proportions of a face. This technique, widely used in the world of beauty, especially in aesthetic medicine, allows the face to be divided into a set of specific regions according to known mathematical measures. We then used the AdaBoost algorithm for the learning phase. All of our work is based on the Viola and Jones algorithm, in particular their innovative technique called the integral image, which calculates the value of a Haar-like feature extracted from a face image. In the rest of this article, we show that our approach is promising and can achieve high detection rates of up to 99%.
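A minimal sketch of the integral-image trick mentioned above: after one pass over the image, the sum of any rectangle, and hence a two-rectangle Haar-like feature, can be evaluated in constant time. The window coordinates in the example are illustrative.

# Integral image (summed-area table) and a two-rectangle Haar-like feature.
import numpy as np

def integral_image(gray: np.ndarray) -> np.ndarray:
    """Summed-area table with a zero row/column prepended for easy indexing."""
    ii = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y), width w, height h."""
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

def two_rect_haar_feature(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature over a (2w x h) window."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

img = np.random.randint(0, 256, (24, 24))
ii = integral_image(img)
print(two_rect_haar_feature(ii, 4, 4, 6, 12))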
31

Wojtkowski, Maciej, Patrycjusz Stremplewski, Egidijus Auksorius und Dawid Borycki. „Spatio-Temporal Optical Coherence Imaging – a new tool for in vivo microscopy“. Photonics Letters of Poland 11, Nr. 2 (01.07.2019): 44. http://dx.doi.org/10.4302/plp.v11i2.905.

Abstract:
Optical Coherence Imaging (OCI), including Optical Coherence Tomography (OCT) and Optical Coherence Microscopy (OCM), uses interferometric detection to generate high-resolution volumetric images of the sample at high speeds. Such capabilities are significant for in vivo imaging, including ophthalmology, brain and intravascular imaging, as well as endoscopic examination. Instrumentation and software development have allowed the creation of many clinical instruments. Nevertheless, most OCI setups scan the incident light laterally. Hence, OCI can be further extended by wide-field illumination and detection. This approach, however, is very susceptible to so-called crosstalk-generated noise. Here, we describe our novel approach to overcome this issue with spatio-temporal optical coherence manipulation (STOC), which employs spatial phase modulation of the incident light.
32

Santhosh, Chella, M. Ravi Kumar, J. Lakshmi Prasanna, I. Ram Kumar, U. Vinay Kumar und S. Navya Sri. „Face Mask Detection Using LabView“. International Journal of Online and Biomedical Engineering (iJOE) 17, Nr. 06 (25.06.2021): 49. http://dx.doi.org/10.3991/ijoe.v17i06.21995.

Abstract:
The rapid worldwide spread of Coronavirus Disease 2019 (COVID-19) has resulted in a global pandemic. In the present scenario, the mask has become an important part of our lives, for our own safety as well as for the safety of others, so there is a need for efficient face mask detection applications in crowded areas such as shopping malls and public transportation, to ensure the safety of the people in the surroundings. In this project, a real-time face mask detection system is developed with NI LabVIEW to detect whether a person is wearing a mask by acquiring a real-time image through a camera. The main challenges in detecting the mask are that masks come in various colors and patterns, and that the background and light intensity also affect the result, so all these factors should be taken into consideration while developing the system in real time. The system used for this application is based on the Vision Development Module, which helps to develop machine vision and image processing applications and can be used with LabVIEW for real-time systems. A camera with good pixel quality is used for image acquisition. The captured image is in RGB format, which is difficult to analyze directly, so it undergoes color plane extraction, in which only a single plane of the image is considered; this separates the mask from the surroundings and results in a grayscale image for further processing. The image is then compared with a custom-made template dataset using the pattern matching algorithm from the Vision Assistant, which helps to detect the mask region. Overlaying techniques are used to highlight the mask region, which shows that the person is wearing the mask.
33

Zhang, Hua, Li Jia Wang, Zhen Jie Wang und Wei Yi Yuan. „View-Invariant Face Detection for Colorful Image“. Advanced Materials Research 945-949 (Juni 2014): 1880–84. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1880.

Abstract:
To overcome illumination changes and pose variations, a pose-invariant face detection method is presented. First, an illumination compensation method based on reference white is introduced to overcome lighting variations; the reference white is obtained from the Y component of the YCbCr color space. Then, a mixture face model is constructed from the Cb and Cr components of the YCbCr color space and the H component of the HSV color space to extract faces from a color image. Finally, an eye model is designed to locate the eyes in the detected face regions, which ultimately distinguishes the face from the neck and arms. The presented method is evaluated on the CASIA face database. The experimental results show that the method is robust to pose changes and illumination variations and achieves good performance.
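A minimal sketch of a skin mask built from Cb/Cr (YCbCr) and H (HSV) ranges, in the spirit of the mixture model described above. The numeric thresholds below are common illustrative values, not the paper's.

# Skin mask from YCbCr chrominance and HSV hue thresholds (values are assumptions).
import cv2
import numpy as np

def skin_mask(bgr: np.ndarray) -> np.ndarray:
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, cr, cb = cv2.split(ycrcb)
    h = hsv[:, :, 0]                       # OpenCV hue range: 0..179
    mask = ((cr > 133) & (cr < 173) &
            (cb > 77) & (cb < 127) &
            ((h < 25) | (h > 160)))        # reddish hues typical of skin
    return mask.astype(np.uint8) * 255

# Face candidates would then be the connected components of this mask,
# refined with the eye model to reject neck and arm regions.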
34

Jun, In-Ja, und Kyung-Yong Chung. „Image Enhancement Method Research for Face Detection“. Journal of the Korea Contents Association 9, Nr. 10 (28.10.2009): 13–21. http://dx.doi.org/10.5392/jkca.2009.9.10.013.

35

Mandhala, Venkata Naresh, Debnath Bhattacharyya und Tai-hoon Kim. „Face Detection using Image Morphology – A Review“. International Journal of Security and Its Applications 10, Nr. 4 (30.04.2016): 89–94. http://dx.doi.org/10.14257/ijsia.2016.10.4.10.

36

Chen, Chichyang, Chiun-Wen Hsu und Te-Lung Lin. „Image prediction using face detection and triangulation“. Pattern Recognition Letters 22, Nr. 13 (November 2001): 1347–57. http://dx.doi.org/10.1016/s0167-8655(01)00081-2.

37

Wen, Di, Hu Han und Anil K. Jain. „Face Spoof Detection With Image Distortion Analysis“. IEEE Transactions on Information Forensics and Security 10, Nr. 4 (April 2015): 746–61. http://dx.doi.org/10.1109/tifs.2015.2400395.

38

Lee, Joo-shin. „Face region detection algorithm of natural-image“. Journal of Korea Institute of Information, Electronics, and Communication Technology 7, Nr. 1 (31.03.2014): 55–60. http://dx.doi.org/10.17661/jkiiect.2014.7.1.055.

39

Goswami, Gaurav, Brian M. Powell, Mayank Vatsa, Richa Singh und Afzel Noore. „FaceDCAPTCHA: Face detection based color image CAPTCHA“. Future Generation Computer Systems 31 (Februar 2014): 59–68. http://dx.doi.org/10.1016/j.future.2012.08.013.

40

Kaminski, Jeremy Yrmeyahu, Dotan Knaan und Adi Shavit. „Single image face orientation and gaze detection“. Machine Vision and Applications 21, Nr. 1 (13.06.2008): 85–98. http://dx.doi.org/10.1007/s00138-008-0143-1.

41

Grudzień, Artur, Marcin Kowalski und Norbert Pałka. „Thermal Face Verification through Identification“. Sensors 21, Nr. 9 (10.05.2021): 3301. http://dx.doi.org/10.3390/s21093301.

Abstract:
This paper reports on a new approach to face verification in long-wavelength infrared radiation. Two face images were combined into one double image, which was then used as an input for a classification based on neural networks. For testing, we exploited two external and one homemade thermal face databases acquired in various variants. The method is reported to achieve a true acceptance rate of about 83%. We proved that the proposed method outperforms other studied baseline methods by about 20 percentage points. We also analyzed the issue of extending the performance of algorithms. We believe that the proposed double image method can also be applied to other spectral ranges and modalities different than the face.
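A minimal sketch of the "double image" idea above, in which two thermal face crops are combined into a single input so that verification can be posed as classification. Side-by-side concatenation is an assumption; the abstract does not specify the exact layout.

# Combine two equally sized thermal face crops into one joint input.
import numpy as np

def make_double_image(face_a: np.ndarray, face_b: np.ndarray) -> np.ndarray:
    assert face_a.shape == face_b.shape
    return np.concatenate([face_a, face_b], axis=1)  # side by side

pair = make_double_image(np.random.rand(64, 64), np.random.rand(64, 64))
print(pair.shape)  # (64, 128) -> fed to a CNN classifier: same person or not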
42

Li, Yong Gang, Hai Ming Yin und Xun Wei Gong. „A Beach Image Detection Method Based on Erotic Image Detection“. Applied Mechanics and Materials 278-280 (Januar 2013): 1282–86. http://dx.doi.org/10.4028/www.scientific.net/amm.278-280.1282.

Abstract:
In the process of erotic image detection, a certain kind of beach image is often falsely detected. To solve this problem, a beach image detection model is designed that divides the image into two parts for detection: the blue sky in the upper portion and the beach in the lower part. At the same time, a beach nude-body detection model is designed to handle beach images containing nudity, in which OpenCV is used to detect faces and bodies. The experimental results show that the proposed method can detect beach images accurately and effectively reduces the false detection rate on normal images.
43

Haidar, Ahmed M. A., Geoffrey O. Asiegbu, Kamarul Hawari und Faisal A. F. Ibrahim. „Electrical Defect Detection in Thermal Image“. Advanced Materials Research 433-440 (Januar 2012): 3366–70. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.3366.

Abstract:
Electrical and electronic objects, whose operating temperature is above absolute zero, emit infrared radiation. This radiation can be measured in the infrared band of the electromagnetic spectrum using thermal imaging. Faults in electrical systems are expensive in terms of plant downtime, damage, loss of production, or fire risk. If threshold temperatures are detected in time, electrical equipment failures can be avoided. This paper presents a straightforward approach to thermal analysis that examines power loads and large-area thermal characteristics. A thermal imaging camera was used to collect thermal pictures of the tested system under various operating conditions. These pictures are analyzed with a thermal diagnosis system in order to detect possible fault locations and improve inspection efficiency.
44

Sharma, Chandni, Shreya Patel, Abhishek More, Kevin Maisuria, Saheli Patel und Foram Shah. „Review of Face Detection based on Color Image and Binary Image“. International Journal of Computer Applications 134, Nr. 1 (15.01.2016): 22–26. http://dx.doi.org/10.5120/ijca2016907756.

45

Muhimmah, Izzati, Nurul Fatikah Muchlis und Arrie Kurniawardhani. „AUTOMATIC FACIAL REDNESS DETECTION ON FACE SKIN IMAGE“. IIUM Engineering Journal 22, Nr. 1 (04.01.2021): 68–77. http://dx.doi.org/10.31436/iiumej.v22i1.1495.

Abstract:
One facial skin problem is redness. On-site examination currently relies on direct observation by doctors and on the patient's medical history. However, some patients are reluctant to consult a doctor because of shame or prohibitive costs. This study attempts to utilize digital image processing algorithms to analyze the patient's facial skin condition automatically, in particular redness detection in the face image. The method used for detecting red objects on facial skin in this research is the Redness method. The output of the Redness method is optimized by feature selection based on area, the mean intensity in the RGB color space, and the mean Hue intensity. The dataset used in this research consists of 35 facial images. Sensitivity, specificity, and accuracy are used to measure the detection performance. The performance achieved 54%, 99.1%, and 96.2% for sensitivity, specificity, and accuracy, respectively, according to dermatologists, and 67.4%, 99.1%, and 97.7%, respectively, according to PT. AVO personnel. Based on these results, the system is good enough to detect redness in facial images.
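A minimal sketch of a per-pixel redness map followed by the area / mean-intensity / mean-hue feature filtering described above. The redness formula and all thresholds are illustrative assumptions, not the paper's tuned values.

# Redness map + region filtering by area, mean RGB intensity, and mean hue.
import cv2
import numpy as np

def redness_regions(bgr: np.ndarray):
    b, g, r = cv2.split(bgr.astype(np.float32))
    redness = r - (g + b) / 2.0                       # simple per-pixel redness score
    mask = (redness > 30).astype(np.uint8)
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]

    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, 8)
    kept = []
    for i in range(1, n):
        region = labels == i
        area = stats[i, cv2.CC_STAT_AREA]
        mean_rgb = bgr[region].mean()
        mean_hue = hue[region].mean()
        # feature selection: plausible area, bright enough, reddish hue
        if 50 < area < 5000 and mean_rgb > 60 and (mean_hue < 20 or mean_hue > 160):
            kept.append(i)
    return labels, kept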
46

Voloshyn, Mykola V. „THE IMAGE PREPROCESSING METHOD FOR FACE DETECTION PROBLEMS“. KPI Science News, Nr. 4 (15.10.2019): 17–23. http://dx.doi.org/10.20535/kpi-sn.2019.4.174381.

47

Powell, Brian M., Adam C. Day, Richa Singh, Mayank Vatsa und Afzel Noore. „Image-based face detection CAPTCHA for improved security“. International Journal of Multimedia Intelligence and Security 1, Nr. 3 (2010): 269. http://dx.doi.org/10.1504/ijmis.2010.037541.

48

Yang, Cheng-Yun, und Homer H. Chen. „Efficient Face Detection in the Fisheye Image Domain“. IEEE Transactions on Image Processing 30 (2021): 5641–51. http://dx.doi.org/10.1109/tip.2021.3087400.

49

al_galib, Shahad laith abd, Asma Abdulelah Abdulrahman und Fouad Shaker Tahir Al-azawi. „Face Detection for Color Image Based on MATLAB“. Journal of Physics: Conference Series 1879, Nr. 2 (01.05.2021): 022129. http://dx.doi.org/10.1088/1742-6596/1879/2/022129.

50

Chyad, Haitham Salman, Raniah Ali Mustafa und Zainab Yasser Mohamed. „Edge Detection for Face Image Using Multiple Filters“. International Journal of Engineering Research and Advanced Technology 07, Nr. 08 (2021): 28–41. http://dx.doi.org/10.31695/ijerat.2021.3736.
