Journal articles on the topic "Face and Object Matching"

To see the other types of publications on this topic, follow the link: Face and Object Matching.

Consult the top 50 journal articles for your research on the topic "Face and Object Matching".

Every work in the list of references has an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever these details are provided in the metadata.

Browse journal articles from many disciplines and compile your bibliography correctly.

1

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in periphery, match times were dependent on the angular deviations between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown in either the upright rotation or rotated by 120° from the upright. The target object was similarly rotated given four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the centre and target object were aligned and also when the centre object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made which gives an advantage to the canonical, upright view in peripheral vision.
2

Zhang, Ziyou, Ziliang Feng, Wei Wang, and Yanqiong Guo. "A 3D Face Modeling and Recognition Method Based on Binocular Stereo Vision and Depth-Sensing Detection." Journal of Sensors 2022 (July 15, 2022): 1–11. http://dx.doi.org/10.1155/2022/2321511.

Abstract:
The human face is an important channel for human interaction and is the most expressive part of the human body with personalized and diverse characteristics. To improve the modeling speed as well as recognition speed and accuracy of 3D faces, this paper proposes a laser scanning binocular stereo vision imaging method based on binocular stereo vision and depth-sensing detection method. Existing matching methods have a weak anti-interference capability, cannot identify the spacing of objects quickly and efficiently, and have too large errors. Using the technique mentioned in this paper can remedy these shortcomings by mainly using the laser lines scanned onto the object as strong feature cues for left and right views for binocular vision matching and thus for depth measurement perception of the object. The experiments in this paper first explore the effect of parameter variation of the laser scanning binocular vision imaging system on the accuracy of object measurement depth values in an indoor environment; the experimental data are selected from 68 face feature points for modeling, cameras that photograph faces, stereo calibration of the cameras using calibration methods to obtain the parameters of the binocular vision imaging system, and after stereo correction of the left and right camera images, the laser line scan light is used as a pixel matching strong features for binocular camera matching, which can achieve fast recognition and high accuracy and then select the system parameters with the highest accuracy to perform 3D reconstruction experiments on actual target objects in the indoor environment to achieve faster recognition.
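
The depth-from-disparity step underlying any such binocular pipeline can be sketched in a few lines of Python with OpenCV. This is a generic illustration rather than the authors' laser-line system; the image files, focal length, and baseline below are placeholders.

    # Illustrative sketch: depth from an already rectified stereo pair (not the paper's method).
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching produces a dense disparity map.
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

    focal_px = 1200.0   # focal length in pixels (assumed)
    baseline_m = 0.12   # camera baseline in metres (assumed)

    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
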
3

Biederman, Irving, and Peter Kalocsais. "Neurocomputational bases of object and face recognition." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (August 29, 1997): 1203–19. http://dx.doi.org/10.1098/rstb.1997.0103.

Abstract:
A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn–like pattern of activation onto a representation layer that preserves relative spatial filter values in a two–dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a ‘jet’) is centered on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non–accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel and Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces.
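
The Gabor "jet" representation referred to above can be illustrated with a short Python/OpenCV sketch: a column of filter responses is sampled at one fiducial point, and two jets are compared by a normalized dot product. The kernel size, wavelengths, and the use of real-valued kernels are simplifications for illustration, not the parameters of the von der Malsburg model.

    import cv2
    import numpy as np

    def gabor_jet(gray, point, wavelengths=(4, 8, 16), n_orient=8):
        """Sample a 'jet' of Gabor responses at one fiducial point (x, y)."""
        x, y = point
        jet = []
        for lam in wavelengths:
            for k in range(n_orient):
                theta = k * np.pi / n_orient
                kern = cv2.getGaborKernel((31, 31), sigma=lam / 2.0, theta=theta,
                                          lambd=lam, gamma=0.5)
                resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
                jet.append(abs(resp[y, x]))   # real-part magnitude as a stand-in for amplitude
        return np.array(jet)

    def jet_similarity(j1, j2):
        """Normalized dot product between two jets."""
        return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-9))
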
4

Sanyal, Soubhik, Sivaram Prasad Mudunuri, and Soma Biswas. "Discriminative pose-free descriptors for face and object matching." Pattern Recognition 67 (July 2017): 353–65. http://dx.doi.org/10.1016/j.patcog.2017.02.016.

5

Horwitz, Barry, Cheryl L. Grady, James V. Haxby, Mark B. Schapiro, Stanley I. Rapoport, Leslie G. Ungerleider, and Mortimer Mishkin. "Functional Associations among Human Posterior Extrastriate Brain Regions during Object and Spatial Vision." Journal of Cognitive Neuroscience 4, no. 4 (October 1992): 311–22. http://dx.doi.org/10.1162/jocn.1992.4.4.311.

Abstract:
Primate extrastriate visual cortex is organized into an occipitotemporal pathway for object vision and an occipitoparietal pathway for spatial vision. Correlations between normalized regional cerebral blood flow values (regional divided by global flows), obtained using H215O and positron emission tomography, were used to examine functional associations among posterior brain regions for these two pathways in 17 young men during performance of a face matching task and a dot-location matching task. During face matching, there was a significant correlation in the right hemisphere between an extrastriate occipital region that was equally activated during both the face matching and dot-location matching tasks and a region in inferior occipitotemporal cortex that was activated more during the face matching task. The corresponding correlation in the left hemisphere was not significantly different from zero. Significant intrahemispheric correlations among posterior regions were observed more often for the right than for the left hemisphere. During dot-location matching, many significant correlations were found among posterior regions in both hemispheres, but significant correlations between specific regions in occipital and parietal cortex shown to be reliably activated during this spatial vision test were found only in the right cerebral hemisphere. These results suggest that (1) correlational analysis of normalized rCBF can detect functional interactions between components of proposed brain circuits, and (2) face and dot-location matching depend primarily on functional interactions between posterior cortical areas in the right cerebral hemisphere. At the same time, left hemisphere cerebral processing may contribute more to dot-location matching than to face matching.
6

Su, Ching-Liang. "Manufacture Automation: Model and Object Recognition by Using Object Position Auto Locating Algorithm and Object Comparison Model." JALA: Journal of the Association for Laboratory Automation 5, no. 2 (April 2000): 61–65. http://dx.doi.org/10.1016/s1535-5535-04-00062-0.

Abstract:
This research uses the geometry matching technique to identify the different objects. The object is extracted from the background. The second moment is used to find the orientation and the center point of the extracted object. Since the second moment can find the orientations and the center point of the object, the perfect object and the test object can be aligned to the same orientation. Furthermore, these two images can be shifted to the same centroid. After this, the perfect object can be subtracted from the test face. By using the subtracted result, the objects can be classified. The techniques used in this research can very accurately classify different objects.
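
The moment-based alignment described above can be reconstructed roughly as follows with OpenCV image moments: recover the centroid and principal-axis orientation of each binary object mask, warp the test object onto the reference pose, and subtract. This is an illustrative sketch rather than the paper's implementation; the rotation sign may need flipping depending on the coordinate convention.

    import cv2
    import numpy as np

    def centroid_and_angle(mask):
        m = cv2.moments(mask, binaryImage=True)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        # Principal-axis orientation from the second-order central moments.
        angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
        return (cx, cy), np.degrees(angle)

    def align_and_subtract(ref_mask, test_mask):
        (rcx, rcy), ref_ang = centroid_and_angle(ref_mask)
        (tcx, tcy), test_ang = centroid_and_angle(test_mask)
        # Rotate the test object about its own centroid to the reference orientation...
        M = cv2.getRotationMatrix2D((tcx, tcy), test_ang - ref_ang, 1.0)
        # ...and shift its centroid onto the reference centroid.
        M[0, 2] += rcx - tcx
        M[1, 2] += rcy - tcy
        h, w = ref_mask.shape
        warped = cv2.warpAffine(test_mask, M, (w, h))
        return cv2.absdiff(ref_mask, warped)   # a small residual suggests the same object
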
7

Gauthier, Isabel, Marlene Behrmann, and Michael J. Tarr. "Can Face Recognition Really be Dissociated from Object Recognition?" Journal of Cognitive Neuroscience 11, no. 4 (July 1999): 349–70. http://dx.doi.org/10.1162/089892999563472.

Abstract:
We argue that the current literature on prosopagnosia fails to demonstrate unequivocal evidence for a disproportionate impairment for faces as compared to nonface objects. Two prosopagnosic subjects were tested for the discrimination of objects from several categories (face as well as nonface) at different levels of categorization (basic, subordinate, and exemplar levels). Several dependent measures were obtained including accuracy, signal detection measures, and response times. The results from Experiments 1 to 4 demonstrate that, in simultaneous-matching tasks, response times may reveal impairments with nonface objects in subjects whose error rates only indicate a face deficit. The results from Experiments 5 and 6 show that, given limited stimulus presentation times for face and nonface objects, the same subjects may demonstrate a deficit for both stimulus categories in sensitivity. In Experiments 7, 8 and 9, a match-to-sample task that places greater demands on memory led to comparable recognition sensitivity with both face and nonface objects. Regardless of object category, the prosopagnosic subjects were more affected by manipulations of the level of categorization than normal controls. This result raises questions regarding neuropsychological evidence for the modularity of face recognition, as well as its theoretical and methodological foundations.
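
For readers unfamiliar with the "signal detection measures" mentioned above, a standard sensitivity index is d', computed from hit and false-alarm rates. The sketch below shows one common way to obtain it (with a log-linear correction and made-up counts); it is not necessarily the exact procedure used in the paper.

    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps the z-transform finite at 0% or 100% rates.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical counts from a same/different matching block.
    print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
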
8

Wu, Xiao Kang, Cheng Gang Xie, and Qin Lu. "Algorithm of Video Decomposition and Video Abstraction Generation Based on Face Detection and Recognition." Applied Mechanics and Materials 644-650 (September 2014): 4620–23. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4620.

Abstract:
In order to let users quickly browse the behaviors and expressions of objects of interest in a video, the redundant information needs to be removed and the key frames related to those objects extracted. This paper uses fast face detection based on skin color and recognition technology using spectrum feature matching to decompose the coupled video, classify the frames related to each object into different sets, and generate a separate video abstraction for each object. Experimental results show that the algorithm has good practicability under different lighting conditions.
9

Morie, Takashi, and Teppei Nakano. "A face/object recognition system using coarse region segmentation and dynamic-link matching." International Congress Series 1269 (August 2004): 177–80. http://dx.doi.org/10.1016/j.ics.2004.05.132.

10

Grady, Cheryl L., James V. Haxby, Barry Horwitz, Mark B. Schapiro, Stanley I. Rapoport, Leslie G. Ungerleider, Mortimer Mishkin, Richard E. Carson, and Peter Herscovitch. "Dissociation of Object and Spatial Vision in Human Extrastriate Cortex: Age-Related Changes in Activation of Regional Cerebral Blood Flow Measured with [15 O]Water and Positron Emission Tomography." Journal of Cognitive Neuroscience 4, no. 1 (January 1992): 23–34. http://dx.doi.org/10.1162/jocn.1992.4.1.23.

Abstract:
We previously reported selective activation of regional cerebral blood flow (rCBF) in occipitotemporal cortex during a face matching task (object vision) and activation in superior parietal cortex during a dot-location matching task (spatial vision) in young subjects, The purpose of the present study was to determine the effects of aging on these extrastriate visual processing systems. Eleven young (mean age 27 ± 4 years) and nine old (mean age 72 ± 7 years) male subjects were studied. Positron emission tomographic scans were performed using a Scanditronix PC1024–7B tomograph and H215O to measure rCBF. To locate brain areas that were activated by the visual tasks, pixel-by-pixel difference images were computed between images from a control task and images from the face and dot-location matching tasks. Both young and old subjects showed rCBF activation during face matching primarily in occipitotemporal cortex, and activation of superior parietal cortex during dot-location matching. Statistical comparisons of these activations showed that the old subjects had more activation of occipitotemporal cortex during the spatial task and more activation of superior parietal cortex during the object task than did the young subjects. These results show less functional separation of the dorsal and ventral visual pathways in older subjects, and may reflect an age-related reduction in the processing efficiency of these visual cortical areas.
11

Liao, Xinpeng L., Pradip Chitrakar, Chengcui Zhang, and Gary Warner. "Object-of-Interest Retrieval in Social Media Image Databases for e-Crime Forum Detection." International Journal of Multimedia Data Engineering and Management 6, no. 3 (July 2015): 32–50. http://dx.doi.org/10.4018/ijmdem.2015070103.

Abstract:
Using object-of-interest matching to detect presence of e-Crime activities in low-duplicate social media images is an interesting yet challenging problem that involves many complications due to the dataset's inherent diversity. SURF-based (Speeded Up Robust Features) object matching, though claimed to be scale and rotation invariant, is not effective as expected in this domain. This paper approaches this problem by an extended paradigm of Generalized Hough Transform using shape matching applied to two types of object-of-interest, Guy Fawkes Mask and Credit Card. We propose an extended GHT that updates the best matching score and the sum up score simultaneously, combined with a face detector and circular magnitude ranker, for detecting Guy Fawkes; also proposed is an extended GHT capable of mining the directional property in Hough space, combined with optical character recognition and an edge density filter, for detecting credit cards. Experiments on two real world datasets indicate that our approach outperforms the baseline GHT and the SURF.
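
As a point of reference for the extensions proposed in the paper, a minimal translation-only Generalized Hough Transform can be sketched as follows; the R-table construction and voting are deliberately simplified, and the Canny/Sobel thresholds are arbitrary.

    import cv2
    import numpy as np

    def build_r_table(template, n_bins=36):
        """Store, per gradient-direction bin, displacements from edge points to a reference point."""
        edges = cv2.Canny(template, 50, 150)
        gx = cv2.Sobel(template, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(template, cv2.CV_32F, 0, 1)
        cy, cx = np.array(np.nonzero(edges)).mean(axis=1)   # reference point = edge centroid
        r_table = [[] for _ in range(n_bins)]
        for y, x in zip(*np.nonzero(edges)):
            phi = np.arctan2(gy[y, x], gx[y, x])
            b = int((phi + np.pi) / (2 * np.pi) * n_bins) % n_bins
            r_table[b].append((cy - y, cx - x))
        return r_table

    def accumulate(image, r_table, n_bins=36):
        """Vote for candidate reference-point locations of the template in the image."""
        edges = cv2.Canny(image, 50, 150)
        gx = cv2.Sobel(image, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(image, cv2.CV_32F, 0, 1)
        acc = np.zeros(image.shape, dtype=np.int32)
        for y, x in zip(*np.nonzero(edges)):
            phi = np.arctan2(gy[y, x], gx[y, x])
            b = int((phi + np.pi) / (2 * np.pi) * n_bins) % n_bins
            for dy, dx in r_table[b]:
                ry, rx = int(y + dy), int(x + dx)
                if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                    acc[ry, rx] += 1
        return acc   # acc.max() ~ the best matching score, acc.sum() ~ the summed-up score
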
12

De Gelder, Beatrice, and Paul Bertelson. "A Comparative Approach to Testing Face Perception: Face and object identification by adults in a simultaneous matching task." Psychologica Belgica 49, no. 2-3 (June 1, 2009): 177. http://dx.doi.org/10.5334/pb-49-2-3-177.

13

Westphal, Günter, and Rolf P. Würtz. "Combining Feature- and Correspondence-Based Methods for Visual Object Recognition." Neural Computation 21, no. 7 (July 2009): 1952–89. http://dx.doi.org/10.1162/neco.2009.12-07-675.

Abstract:
We present an object recognition system built on a combination of feature- and correspondence-based pattern recognizers. The feature-based part, called preselection network, is a single-layer feedforward network weighted with the amount of information contributed by each feature to the decision at hand. For processing arbitrary objects, we employ small, regular graphs whose nodes are attributed with Gabor amplitudes, termed parquet graphs. The preselection network can quickly rule out most irrelevant matches and leaves only the ambiguous cases, so-called model candidates, to be verified by a rudimentary version of elastic graph matching, a standard correspondence-based technique for face and object recognition. According to the model, graphs are constructed that describe the object in the input image well. We report the results of experiments on standard databases for object recognition. The method achieved high recognition rates on identity and pose. Unlike many other models, it can also cope with varying background, multiple objects, and partial occlusion.
14

Jurnal, Redaksi Tim. "IMPLEMENTASI METODE DETEKSI TEPI CANNY PADA OBJEK SEBAGAI MODEL KEAMANAN APLIKASI PADA SMARTPHONE ANDROID." Petir 9, no. 1 (January 4, 2019): 16–20. http://dx.doi.org/10.33322/petir.v9i1.187.

Abstract:
The development of technology push security system applications on android smartphone to develop one of its features that is detection of the object. The detection of Objects is a technology that allows us to identify or verify an object through a digital image by matching the texture of the object with the curve of the data objects stored in the database. For example, to match the curve of the face such as the nose, eyes and chin. There are several methods to support the work of object detection among which edge detection. Edge detection can represent the objects contained in the image of the shape and size as well as information about the texture of an object. the best method of edge detection is canny edge detection which has the minimum error rate compared with other edge detection methods. Canny edge detection will generate the image that has been processed into a new image. The new image will be stored on a database that will be matched to the image of a new object that is used as the opening applications on android smartphone.
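
A minimal sketch of the Canny step the abstract describes, assuming OpenCV is available: edge maps are extracted from the enrolled image and from a newly captured probe, and a crude overlap score stands in for the matching step. File names, thresholds, and the acceptance criterion are placeholders.

    import cv2
    import numpy as np

    enrolled = cv2.imread("enrolled_object.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
    probe = cv2.imread("probe_object.png", cv2.IMREAD_GRAYSCALE)

    # Smooth, then extract edges with Canny (thresholds are illustrative).
    enrolled_edges = cv2.Canny(cv2.GaussianBlur(enrolled, (5, 5), 0), 50, 150)
    probe_edges = cv2.Canny(cv2.GaussianBlur(probe, (5, 5), 0), 50, 150)

    # Crude overlap score between the two binary edge maps (assumes equal image sizes).
    overlap = np.logical_and(enrolled_edges > 0, probe_edges > 0).sum()
    score = overlap / max(1, (enrolled_edges > 0).sum())
    unlock = score > 0.6   # placeholder acceptance threshold
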
15

Liao, B., and H. F. Wang. "The Optimization of SIFT Feature Matching Algorithm on Face Recognition Based on BP Neural Network." Applied Mechanics and Materials 743 (March 2015): 359–64. http://dx.doi.org/10.4028/www.scientific.net/amm.743.359.

Abstract:
In the field of object recognition, the SIFT feature is known to be a very successful local invariant descriptor and has wide application in different domains. However it also has some limitations, for example, in the case of facial illumination variation or under large tilt angle, the identification rate of the SIFT algorithm drops quickly. In order to reduce the probability of mismatching pairs, and improve the matching efficiency of SIFT algorithm, this paper proposes a novel feature matching algorithm. The basic idea is taking the successful-matched SIFT feature points as the training samples to establish a space mapping model based on BP neural network. Then, with the help of this model, the estimated coordinate of the corresponding SIFT feature point in the candidate image is predicted. Finally search the possible matching points around the coordinate. The experiment results show that using the prediction model, the number of mismatching points can be reduced effectively and the number of correct matching pairs increases at the same time
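
The idea of predicting where a SIFT correspondence should fall can be sketched as below. scikit-learn's MLPRegressor stands in for the paper's BP network, the ratio-test threshold and network size are guesses, and cv2.SIFT_create requires OpenCV 4.4 or later.

    import cv2
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder files
    img2 = cv2.imread("candidate.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Reliable seed matches via Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])

    # Learn the coordinate mapping from the seed matches.
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(src, dst)

    # For any further reference keypoint, predict roughly where its partner should lie
    # in the candidate image and accept only matches that fall near that prediction.
    predicted_xy = model.predict(np.float32([kp1[0].pt]))
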
16

Takita, Kiyoshi, Takeshi Nagayasu, Hidetsugu Asano, Kenji Terabayashi, and Kazunori Umeda. "Mouth Movement Recognition Using Template Matching and its Implementation in an Intelligent Room." Journal of Robotics and Mechatronics 24, no. 2 (April 20, 2012): 311–19. http://dx.doi.org/10.20965/jrm.2012.p0311.

Abstract:
This paper proposes a method of recognizing movements of the mouth from images and implements the method in an intelligent room. The proposed method uses template matching and recognizes mouth movements for the purpose of indicating a target object in an intelligent room. First, the operator’s face is detected. Then, the mouth region is extracted from the facial region using the result of template matching with a template image of the lips. Dynamic Programming (DP) matching is applied to a similarity measure that is obtained by template matching. The effectiveness of the proposed method is evaluated through experiments to recognize several names of common home appliances and operations.
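
The template-matching step at the core of the method can be sketched with OpenCV as follows; the DP matching over frame-by-frame similarity scores is only indicated in a comment, and the image files are placeholders.

    import cv2

    face_roi = cv2.imread("face_region.png", cv2.IMREAD_GRAYSCALE)       # detected face crop
    lip_template = cv2.imread("lip_template.png", cv2.IMREAD_GRAYSCALE)  # reference lip image

    # Normalized cross-correlation between the template and every position in the face crop.
    result = cv2.matchTemplate(face_roi, lip_template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    h, w = lip_template.shape
    mouth_box = (max_loc[0], max_loc[1], w, h)   # x, y, width, height of the best match
    # A sequence of max_val scores over successive frames is the kind of similarity measure
    # that DP matching would then compare against reference utterances.
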
17

Alexander, Gene E., Marc J. Mentis, John D. Van Horn, Cheryl L. Grady, Karen F. Berman, Maura L. Furey, Pietro Pietrini, Stanley I. Rapoport, Mark B. Schapiro, and James R. Moeller. "Individual differences in PET activation of object perception and attention systems predict face matching accuracy." NeuroReport 10, no. 9 (June 1999): 1965–71. http://dx.doi.org/10.1097/00001756-199906230-00032.

18

E. Widjaja, Andree, Hery Hery, and David Habsara Hareva. "The Office Room Security System Using Face Recognition Based on Viola-Jones Algorithm and RBFN." INTENSIF: Jurnal Ilmiah Penelitian dan Penerapan Teknologi Sistem Informasi 5, no. 1 (February 1, 2021): 1–12. http://dx.doi.org/10.29407/intensif.v5i1.14435.

Abstract:
The university as an educational institution can apply technology in the campus environment. Currently, the security system for office space that is integrated with digital data has been somewhat limited. The main problem is that office space security items are not guaranteed as there might be outsiders who can enter the office. Therefore, this study aims to develop a system using biometric (face) recognition based on Viola-Jones and Radial Basis Function Network (RBFN) algorithm to ensure office room security. Based on the results, the system developed shows that object detection can work well with an object detection rate of 80%. This system has a pretty good accuracy because the object matching success is 73% of the object detected. The final result obtained from this study is a prototype development for office security using face recognition features that are useful to improve safety and comfort for occupants of office space (due to the availability of access rights) so that not everyone can enter the office.
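
The Viola-Jones detection stage can be reproduced with OpenCV's bundled Haar cascade as sketched below; the RBFN recognition stage of the paper is not shown, and the frame file and detection parameters are placeholders.

    import cv2

    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    frame = cv2.imread("door_camera_frame.png")          # placeholder capture
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(60, 60))
    for (x, y, w, h) in faces:
        face_crop = gray[y:y + h, x:x + w]   # handed to the recognizer in the full system
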
19

Yang, Yu-Xin, Chang Wen, Kai Xie, Fang-Qing Wen, Guan-Qun Sheng, and Xin-Gong Tang. "Face Recognition Using the SR-CNN Model." Sensors 18, no. 12 (December 3, 2018): 4237. http://dx.doi.org/10.3390/s18124237.

Abstract:
In order to solve the problem of face recognition in complex environments being vulnerable to illumination change, object rotation, occlusion, and so on, which leads to the imprecision of target position, a face recognition algorithm with multi-feature fusion is proposed. This study presents a new robust face-matching method named SR-CNN, combining the rotation-invariant texture feature (RITF) vector, the scale-invariant feature transform (SIFT) vector, and the convolution neural network (CNN). Furthermore, a graphics processing unit (GPU) is used to parallelize the model for an optimal computational performance. The Labeled Faces in the Wild (LFW) database and self-collection face database were selected for experiments. It turns out that the true positive rate is improved by 10.97–13.24% and the acceleration ratio (the ratio between central processing unit (CPU) operation time and GPU time) is 5–6 times for the LFW face database. For the self-collection, the true positive rate increased by 12.65–15.31%, and the acceleration ratio improved by a factor of 6–7.
20

Hu, Yongli, Mingquan Zhou, and Zhongke Wu. "A Dense Point-to-Point Alignment Method for Realistic 3D Face Morphing and Animation." International Journal of Computer Games Technology 2009 (2009): 1–9. http://dx.doi.org/10.1155/2009/609350.

Abstract:
We present a new point matching method to overcome the dense point-to-point alignment of scanned 3D faces. Instead of using the rigid spatial transformation in the traditional iterative closest point (ICP) algorithm, we adopt the thin plate spline (TPS) transformation to model the deformation of different 3D faces. Because TPS is a non-rigid transformation with good smooth property, it is suitable for formulating the complex variety of human facial morphology. A closest point searching algorithm is proposed to keep one-to-one mapping, and to get good efficiency the point matching method is accelerated by a KD-tree method. Having constructed the dense point-to-point correspondence of 3D faces, we create 3D face morphing and animation by key-frames interpolation and obtain realistic results. Comparing with ICP algorithm and the optical flow method, the presented point matching method can achieve good matching accuracy and stability. The experiment results have shown that our method is efficient for dense point objects registration.
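
The KD-tree-accelerated closest-point search mentioned above can be sketched with SciPy. The point clouds here are random stand-ins for two scanned faces, the one-to-one filtering is a simple greedy pass, and the TPS deformation itself is omitted.

    import numpy as np
    from scipy.spatial import cKDTree

    source = np.random.rand(5000, 3)   # placeholder point clouds
    target = np.random.rand(5000, 3)

    tree = cKDTree(target)
    dists, idx = tree.query(source, k=1)   # nearest target point for every source point

    # Greedy one-to-one filtering: keep only the best source point per target point.
    order = np.argsort(dists)
    used, pairs = set(), []
    for s in order:
        t = int(idx[s])
        if t not in used:
            used.add(t)
            pairs.append((int(s), t))
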
21

Wojciulik, Ewa, Nancy Kanwisher, and Jon Driver. "Covert Visual Attention Modulates Face-Specific Activity in the Human Fusiform Gyrus: fMRI Study." Journal of Neurophysiology 79, no. 3 (March 1, 1998): 1574–78. http://dx.doi.org/10.1152/jn.1998.79.3.1574.

Abstract:
Several lines of evidence demonstrate that faces undergo specialized processing within the primate visual system. It has been claimed that dedicated modules for such biologically significant stimuli operate in a mandatory fashion whenever their triggering input is presented. However, the possible role of covert attention to the activating stimulus has never been examined for such cases. We used functional magnetic resonance imaging to test whether face-specific activity in the human fusiform face area (FFA) is modulated by covert attention. The FFA was first identified individually in each subject as the ventral occipitotemporal region that responded more strongly to visually presented faces than to other visual objects under passive central viewing. This then served as the region of interest within which attentional modulation was tested independently, using active tasks and a very different stimulus set. Subjects viewed brief displays each comprising two peripheral faces and two peripheral houses (all presented simultaneously). They performed a matching task on either the two faces or the two houses, while maintaining central fixation to equate retinal stimulation across tasks. Signal intensity was reliably stronger during face-matching than house matching in both right- and left-hemisphere predefined FFAs. These results show that face-specific fusiform activity is reduced when stimuli appear outside (vs. inside) the focus of attention. Despite the modular nature of the FFA (i.e., its functional specificity and anatomic localization), face processing in this region nonetheless depends on voluntary attention.
22

JAMHARI, ARDI. "A Perancangan Sistem Pengenalan Wajah Secara Real-Time pada CCTV dengan Metode Eigenface:." Journal of Informatics, Information System, Software Engineering and Applications (INISTA) 2, no. 2 (May 21, 2020): 20–32. http://dx.doi.org/10.20895/inista.v2i2.117.

Abstract:
The development of times and curiosity in a condition become a reason for people to continue to develop security systems at home, one of which is by CCTV. Basically, CCTV security systems only function as recording devices on the scene. Therefore, the security level of the CCTV is still low. For that we need a system that can be a security solution. The system can detect objects in the form of faces as image input. To insert image objects into the system, the system requires a camera. The object detected by the camera will do a matching face with the face image contained in the dataset class. The system is the application of Computer Vision in the security system. Brain memory will provide a picture of a face that we have known before. The analogy can be likened to a machine or device that has the same ability as humans to recognize individuals through facial images. Through this research a comparison of facial image recognition with eigenface algorithm using feature extraction, PCA and LDA will be implemented on a real-time computer platform. The library used in Eigenface is OpenCV. The purpose of this study is to find out which method has a high degree of accuracy in performing facial image recognition by comparing between the two methods used. The problem faced by the author when performing accuracy tests is the different light levels between the dataset and the test subject, and changes in attributes such as hair and beard can affect the resulting accuracy. Based on the test results it is known that the accuracy produced by the Eigenface PCA is better than the LDA eigenface. The best accuracy on eigenface was obtained with a PCA combination of 98.06%.
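
The eigenface (PCA) branch of the comparison can be sketched with scikit-learn as below. This is a generic nearest-neighbour-in-subspace illustration, not the OpenCV-based implementation evaluated in the paper; the data files and the number of components are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA

    # train_faces: one flattened grayscale face per row; train_labels: the person IDs.
    train_faces = np.load("train_faces.npy")     # placeholder arrays
    train_labels = np.load("train_labels.npy")

    pca = PCA(n_components=50).fit(train_faces)  # the eigenfaces
    train_proj = pca.transform(train_faces)

    def identify(probe_face):
        """Return the label of the nearest training face in eigenface space."""
        probe_proj = pca.transform(probe_face.reshape(1, -1))
        dists = np.linalg.norm(train_proj - probe_proj, axis=1)
        return train_labels[int(np.argmin(dists))]
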
23

MIYAJIMA, Koji, Toshio NORITA, and Anca RALESCU. "MANAGEMENT OF UNCERTAINTY IN TOP-DOWN, FUZZY LOGIC-BASED IMAGE UNDERSTANDING OF NATURAL OBJECTS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 01, no. 02 (December 1993): 183–92. http://dx.doi.org/10.1142/s0218488593000103.

Abstract:
This paper is concerned with the integration of knowledge intensive methods with low level image processing, in order to achieve a top-down image processing system. The knowledge part contains information about the object to be recognized, the model, expressed here as a part-of hierarchy, and expert knowledge on image processing. Both of these bodies of knowledge are allowed to be imprecise and/or incomplete, in which case fuzzy logic based methods are used for representation and inference. As a consequence, uncertainty management issues, such as partial matching, and evidence combination must be addressed. The approach proposed is most suitable for complex/natural objects and is illustrated for the task of recognizing a human face.
24

Seo Song, Eun, Gil Sang Yoo, and Sung Dae Hong. "A study on the solid image projection mapping system in the curved display of the face shape for the performance art." International Journal of Engineering & Technology 7, no. 3.3 (June 8, 2018): 111. http://dx.doi.org/10.14419/ijet.v7i2.33.13865.

Abstract:
Background/Objectives: Aims to study the projection mapping technology which will project the multi-phase solid image in the curved display real time in the performance. Methods/Statistical analysis: Analyze the shape of the object to be mapped for the project and based on the analyzed characteristics, the structure of the installation of the applicable project to the actual object and the projection mapping production tool shall be developed and be applied to the elastic shape. Findings: According to the analysis result of the curved display, one projector did not suit the image and the distortion of the image occurred. According, it was divided into the four sections and projection mapping was made to minimize the problems, and the projection production tool which applied the geometric matching technology, edge blending technology, UDP telecommunication method etc based on the grid for the image matching using the Max/Msp was developed and the actual face shaped curved display was applied. Improvements/Applications: In order to conduct projection map precisely to the actuator moving real time, Z-depth should be considered and the advanced technology which matches the three dimensional mapping image should be applied.
25

Zhang, Ke, and Zhao Gao. "Face 3D Modeling Based on Projective Rectification." Key Engineering Materials 620 (August 2014): 181–86. http://dx.doi.org/10.4028/www.scientific.net/kem.620.181.

Abstract:
Face 3D modeling is the difficulty problem in the field of computer graphics, computer vision and artificial intelligence. In recent years, it has become the most active research focus both at home and broad. 3D modeling of face is the key step to realize face recognition, and the technique of face 3D modeling has obtained extensive applications in many fields, such as film, animation, interactive games, video conference, human-computer interaction, reverse engineering, medical and public safety. In this paper, the technology of face 3D modeling based on projective rectification is presented and the reconstruction of face 3D digital model can be achieved by it. Firstly, BP neural network is used to simulate the mapping relationship between the 3D object and its images. Then, the rectification on the left and right images acquired by stereo vision system is implemented according to the principle of epipolar line constraint. On the left and right rectified image planes, the match researching of corresponding points are reduced from 2D plane to the horizontal lines, so the image matching and face 3D modeling can be implemented efficiently.
26

Rossion, Bruno, Laurence Dricot, Anne Devolder, Jean-Michel Bodart, Marc Crommelinck, Beatrice de Gelder, and Richard Zoontjes. "Hemispheric Asymmetries for Whole-Based and Part-Based Face Processing in the Human Fusiform Gyrus." Journal of Cognitive Neuroscience 12, no. 5 (September 2000): 793–802. http://dx.doi.org/10.1162/089892900562606.

Abstract:
Behavioral studies indicate a right hemisphere advantage for processing a face as a whole and a left hemisphere superiority for processing based on face features. The present PET study identifies the anatomical localization of these effects in well-defined regions of the middle fusiform gyri of both hemispheres. The right middle fusiform gyrus, previously described as a face-specific region, was found to be more activated when matching whole faces than face parts whereas this pattern of activity was reversed in the left homologous region. These lateralized differences appeared to be specific to faces since control objects processed either as wholes or parts did not induce any change of activity within these regions. This double dissociation between two modes of face processing brings new evidence regarding the lateralized localization of face individualization mechanisms in the human brain.
27

Abdullah, Ihab Amer, and Jane Jaleel Stephan. "A Survey of Face Recognition Systems." Ibn AL- Haitham Journal For Pure and Applied Sciences 34, no. 2 (April 20, 2021): 144–60. http://dx.doi.org/10.30526/34.2.2620.

Abstract:
With the quick grow of multimedia contents, from among this content, face recognition has got a lot of significant, specifically in latest little years. The face as object formed of various recognition characteristics for detect; so, it is still the most challenge research domain for researchers in area of image processing and computer vision. In this survey article, tried to solve the most demanding facial features like illuminations, aging, pose variation, partial occlusion and facial expression. Therefore, it indispensable factors in the system of facial recognition when performed on facial pictures. This paper study the most advanced facial detection techniques too, approaches: Hidden Markov Models, Principal Component Analysis (PCA), Elastic Cluster Plot Matching, Support Vector Machines (SVM), Gabor Waves, Artificial Neural Networks (ANN), Eigen Face, Independent Component Analysis (ICA) and 3D Morphable Model. Additionally to the above works, mentioned various testing facial databases including JAFEE, FEI, Yale, LFW, AT&T(formerly termed as ORL) and AR (Aleix Martinez and Robert Benavente) etc to analyze the results. Even so, the goal of this survey is to present a comprehensive literature review for the face recognition besides its applications after a deepness discussion, some of the experimental results was introduced in the end.
28

Elhashash, M., and R. Qin. "INVESTIGATING SPHERICAL EPIPOLAR RECTIFICATION FOR MULTI-VIEW STEREO 3D RECONSTRUCTION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (May 17, 2022): 47–52. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-47-2022.

Abstract:
Abstract. Multi-view stereo (MVS) reconstruction is essential for creating 3D models. The approach involves applying epipolar rectification followed by dense matching for disparity estimation. However, existing approaches face challenges in applying dense matching for images with different viewpoints primarily due to large differences in object scale. In this paper, we propose a spherical model for epipolar rectification to minimize distortions caused by differences in principal rays. We evaluate the proposed approach using two aerial-based datasets consisting of multi-camera head systems. We show through qualitative and quantitative evaluation that the proposed approach performs better than frame-based epipolar correction by enhancing the completeness of point clouds by up to 4.05% while improving the accuracy by up to 10.23% using LiDAR data as ground truth.
29

Dong, Ziqi, Furong Tian, Hua Yang, Tao Sun, Wenchuan Zhang, and Dan Ruan. "A Framework with Elaborate Feature Engineering for Matching Face Trajectory and Mobile Phone Trajectory." Electronics 12, no. 6 (March 13, 2023): 1372. http://dx.doi.org/10.3390/electronics12061372.

Abstract:
The advances in positioning techniques have generated massive trajectory data that represent the mobility of objects, e.g., pedestrians and mobile phones. It is important to integrate information from various modalities for subject tracking or trajectory prediction. Our work attempts to match a face with a corresponding mobile phone based on the heterogeneous trajectories. We propose a framework which associates face trajectories with their corresponding mobile phone trajectories using elaborate and explainable features. Our solution includes two stages: an initial selection of phone trajectories for a given face trajectory and a subsequent identification of which phone trajectory provides an exact match to the given face trajectory. In the first stage, we propose a Multi-Granularity SpatioTemporal Window Searching (MGSTWS) algorithm to select candidate mobile phones that are spatiotemporally close to a given face. In the second stage, we first build an affinity function to score face–phone trajectory point pairs selected by MGSTWS, and construct a feature set for building a face–phone trajectory matching determinator which determines whether a phone trajectory matches a given face trajectory. Our well-designed features guarantee high model simplicity and interpretability. Among the feature set, BGST intelligently leverages disassociation between a face and a mobile phone even if there exists some co-occurence for a non-matching face–phone pair. Based on the feature set, we represent the face–phone matching task as a binary classification problem and train various models, among which LightGBM achieves the best performance with 92.6% accuracy, 96.9% precision, 88.5% recall, and 92.5% F1. Our framework is acceptable in most application scenarios and may benefit some downstream tasks. The preselection-refining architecture of our framework guarantees the applicability and efficiency of the face–phone trajectory pair matching frame.
30

Sakib Hosen Himel, Md. Ashik Iqbal, Mahidul Islam Rana, and Tusar Mozumder. "Face-App: A real-time face recognition e-attendance system for digital learning." Global Journal of Engineering and Technology Advances 11, no. 1 (April 30, 2022): 013–24. http://dx.doi.org/10.30574/gjeta.2022.11.1.0049.

Abstract:
Attendance is essential for every organization e.g., schools, colleges, universities, and companies. It is exhausting taking attendance in every period. General attendance management system in any organization is a very long process as well as time-consuming, which can piss students/staff off. Nowadays, biometric attendance systems are also accessible. This system may offer various advantages to enterprises, however also has many drawbacks; time-consuming, costs, data breaches, false positives and inaccuracy, no remote access, and many more. This paper deals with the process of taking the attendance using a face recorded camera and Face-App software tool will be complete the further attendance makes the process for staffs and students in an easy and simple manner within a short time. This proposed system (Face-App) uses face detection for the identification of faces from objects (e.g., students and staff), and a face recognizer for matching the faces from stored database images (authentication), and marks attendance according to the matched face images. Face-App system, which can be, controlled using mobile or computer according to requirements. Automated systems help to reduce the need for manual labor and can correct errors on attendance sheets.
31

Dunn, James D., Stephanie Summersby, Alice Towler, Josh P. Davis, and David White. "UNSW Face Test: A screening tool for super-recognizers." PLOS ONE 15, no. 11 (November 16, 2020): e0241747. http://dx.doi.org/10.1371/journal.pone.0241747.

Abstract:
We present a new test–the UNSW Face Test (www.unswfacetest.com)–that has been specifically designed to screen for super-recognizers in large online cohorts and is available free for scientific use. Super-recognizers are people that demonstrate sustained performance in the very top percentiles in tests of face identification ability. Because they represent a small proportion of the population, screening large online cohorts is an important step in their initial recruitment, before confirmatory testing via standardized measures and more detailed cognitive testing. We provide normative data on the UNSW Face Test from 3 cohorts tested via the internet (combined n = 23,902) and 2 cohorts tested in our lab (combined n = 182). The UNSW Face Test: (i) captures both identification memory and perceptual matching, as confirmed by correlations with existing tests of these abilities; (ii) captures face-specific perceptual and memorial abilities, as confirmed by non-significant correlations with non-face object processing tasks; (iii) enables researchers to apply stricter selection criteria than other available tests, which boosts the average accuracy of the individuals selected in subsequent testing. Together, these properties make the test uniquely suited to screening for super-recognizers in large online cohorts.
32

Paul, Okuwobi Idowu, and Yong Hua Lu. "Facial Prediction and Recognition Using Wavelets Transform Algorithm and Technique." Applied Mechanics and Materials 666 (October 2014): 251–55. http://dx.doi.org/10.4028/www.scientific.net/amm.666.251.

Abstract:
An efficient facial representation is a crucial step for successful and effective performance of cognitive tasks such as object recognition, fixation, facial recognition system, etc. This paper demonstrates the use of Gabor wavelets transform for efficient facial representation and recognition. Facial recognition is influenced by several factors such as shape, reflectance, pose, occlusion and illumination which make it even more difficult. Gabor wavelet transform is used for facial features vector construction due to its powerful representation of the behavior of receptive fields in human visual system (HVS). The method is based on selecting peaks (high-energized points) of the Gabor wavelet responses as feature points. This paper work introduces the use of Gabor wavelets transform for efficient facial representation and recognition. Compare to predefined graph nodes of elastic graph matching, the approach used in this paper has better representative capability for Gabor wavelets transform. The feature points are automatically extracted using the local characteristics of each individual face in order to decrease the effect of occluded features. Based on the experiment, the proposed method performs better compared to the graph matching and eigenface based methods. The feature points are automatically extracted using the local characteristics of each individual face in order to decrease the effect of occluded features. The proposed system is validated using four different face databases of ORL, FERRET, Purdue and Stirling database.
33

LIU, CHENGUANG, HENGDA CHENG, and ARAVIND DASU. "SCALE ROBUST HEAD POSE ESTIMATION BASED ON RELATIVE HOMOGRAPHY TRANSFORMATION." New Mathematics and Natural Computation 10, no. 01 (March 2014): 69–90. http://dx.doi.org/10.1142/s1793005714500045.

Abstract:
Head pose estimation has been widely studied in recent decades due to many significant applications. Different from most of the current methods which utilize face models to estimate head position, we develop a relative homography transformation based algorithm which is robust to the large scale change of the head. In the proposed method, salient Harris corners are detected on a face, and local binary pattern features are extracted around each of the corners. And then, relative homography transformation is calculated by using RANSAC optimization algorithm, which applies homography to a region of interest (ROI) on an image and calculates the transformation of a planar object moving in the scene relative to a virtual camera. By doing so, the face center initialized in the first frame will be tracked frame by frame. Meanwhile, a head shoulder model based Chamfer matching method is proposed to estimate the head centroid. With the face center and the detected head centroid, the head pose is estimated. The experiments show the effectiveness and robustness of the proposed algorithm.
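
The geometric core — corner features plus a RANSAC homography between frames — can be sketched with OpenCV as follows. Sparse optical flow is used here as a simple stand-in for the paper's LBP-based correspondences, the head-shoulder Chamfer matching is omitted, and the file names and parameters are placeholders.

    import cv2

    prev = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_t_plus_1.png", cv2.IMREAD_GRAYSCALE)

    # Harris-based corner selection (ideally restricted to the face region of interest).
    pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                                       minDistance=7, useHarrisDetector=True)

    # Track the corners into the next frame with sparse optical flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]

    # Robustly estimate the homography of the (approximately planar) face between frames.
    H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
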
34

Katsulai, Hiroshi, and Hirotaka Niwa. "A Region-Based Stereo." Journal of Robotics and Mechatronics 8, no. 2 (April 20, 1996): 171–76. http://dx.doi.org/10.20965/jrm.1996.p0171.

Abstract:
The stereo, which is a method of obtaining depth information of scene from images obtained from at least two different directions, plays a very important role in applications to robots and similar equipments. The most difficult task in stereo method is to match individual parts of a two-dimensional projected image to those of another image [1, 2]. With respect to the method of matching, many studies have been conducted, and various techniques have been proposed. The stereo method based on features has attracted attention in recent years. However, it often fails in matching parts when attempting matching using points as features [3] because it is difficult to specify points in images. On the other hand, proposals have been made for using line segments, which are easier than points to extract, as opposed to points for matching individual parts [6]. Furthermore, methods have been developed which use regional features as an extension of the method which uses line segments [7]. The method that uses regional features is considered to have a higher probability of success in matching images than the methods that use points or straight line segments because regional features contain a relatively large amount of description. However, no sufficient studies have been made yet on the region-based stereo. This study situation makes it necessary to conduct basic studies on region-based stereo. This paper employs regions as features for matching, describes the stereo algorithm that directly employs region segmentation, and investigates the appropriateness of the algorithm by means of computer simulations. It is assumed that the three-dimensional object is a polyhedron, and each face of the object is projected onto a two-dimensional projection plane with uniform brightness using central projection. Region segmentation is delicate and does not necessarily ensure stable results. However, it is considered that a pair of two-dimensional projected images does not contain very large differences if the same scene is to be observed from slightly different directions. This paper uses the centroid of region, which represents the position of region, region shape, and the gray level of region as features for matching. Some consideration is taken on the matching technique to increase the accuracy of matching by performing an operation that is almost equal to enumerating all regional elements, using the sum of the similarity values of regional features as the evaluation function. A three-dimensional plane can be calculated from two matching regions by matching the two boundary points at the same height in the two projected images.
35

Liu, Yi, Taiyong Bi, Bei Zhang, Qijie Kuang, Haijing Li, Kunlun Zong, Jingping Zhao, Yuping Ning, Shenglin She, and Yingjun Zheng. "Face and object visual working memory deficits in first-episode schizophrenia correlate with multiple neurocognitive performances." General Psychiatry 34, no. 1 (February 2021): e100338. http://dx.doi.org/10.1136/gpsych-2020-100338.

Abstract:
Background: Working memory (WM) deficit is considered a core feature and cognitive biomarker in patients with schizophrenia. Several studies have reported prominent object WM deficits in patients with schizophrenia, suggesting that visual WM in these patients extends to non-spatial domains. However, whether non-spatial WM is similarly affected remains unclear. Aim: This study primarily aimed to identify the processing of visual object WM in patients with first-episode schizophrenia. Methods: The study included 36 patients with first-episode schizophrenia and 35 healthy controls. Visual object WM capacity, including face and house WM capacity, was assessed by means of delayed matching-to-sample visual WM tasks, in which participants must distribute memory so that they can discriminate a target sample. We specifically examined their anhedonia experience by the Temporal Experience of Pleasure Scale and the Snaith-Hamilton Pleasure Scale. Cognitive performance was measured by the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS). Results: Both face and house WM capacity was significantly impaired in patients with schizophrenia. For both tasks, the performance of all the subjects was worse under the high-load condition than under the low-load condition. We found that WM capacity was highly positively correlated with the performance on RBANS total scores (r=−0.528, p=0.005), RBANS delayed memory scores (r=−0.470, p=0.013), RBANS attention scores (r=−0.584, p=0.001), RBANS language scores (r=−0.448, p=0.019), Trail-Making Test: Part A raw scores (r=0.465, p=0.015) and simple IQ total scores (r=−0.538, p=0.005), and correlated with scores of the vocabulary test (r=−0.490, p=0.011) and scores of the Block Diagram Test (r=−0.426, p=0.027) in schizophrenia. No significant correlations were observed between WM capacity and Positive and Negative Syndrome Scale symptoms. Conclusions: Our research found that visual object WM capacity is dramatically impaired in patients with schizophrenia and is strongly correlated with other measures of cognition, suggesting a mechanism that is critical in explaining a portion of the broad cognitive deficits observed in schizophrenia.
36

Nederhouser, M., M. C. Mangini, and K. Okada. "Invariance to contrast inversion when matching objects with face-like surface structure and pigmentation." Journal of Vision 3, no. 9 (March 18, 2010): 93. http://dx.doi.org/10.1167/3.9.93.

37

Bouhou, Lhoussaine, Rachid El Ayachi, Mohamed Baslam, and Mohamed Oukessou. "Face Detection in a Mixed-Subject Document." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 2828. http://dx.doi.org/10.11591/ijece.v6i6.12725.

Abstract:
Before you recognize anyone, it is essential to identify various characteristics variations from one person to another. among of this characteristics, we have those relating to the face. Nowadays the detection of skin regions in an image has become an important research topic for the location of a face in the image. In this research study, unlike previous research studies related to this topic which have focused on images inputs data faces, we are more interested to the fields face detection in mixed-subject documents (text + images). The face detection system developed is based on the hybrid method to distinguish two categories of objects from the mixed document. The first category is all that is text or images containing figures having no skin color, and the second category is any figure with the same color as the skin. In the second phase the detection system is based on Template Matching method to distinguish among the figures of the second category only those that contain faces to detect them. To validate this study, the system developed is tested on the various documents which including text and image.
38

Bouhou, Lhoussaine, Rachid El Ayachi, Mohamed Baslam, and Mohamed Oukessou. "Face Detection in a Mixed-Subject Document." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 2828. http://dx.doi.org/10.11591/ijece.v6i6.pp2828-2835.

Abstract:
Before you recognize anyone, it is essential to identify various characteristics variations from one person to another. among of this characteristics, we have those relating to the face. Nowadays the detection of skin regions in an image has become an important research topic for the location of a face in the image. In this research study, unlike previous research studies related to this topic which have focused on images inputs data faces, we are more interested to the fields face detection in mixed-subject documents (text + images). The face detection system developed is based on the hybrid method to distinguish two categories of objects from the mixed document. The first category is all that is text or images containing figures having no skin color, and the second category is any figure with the same color as the skin. In the second phase the detection system is based on Template Matching method to distinguish among the figures of the second category only those that contain faces to detect them. To validate this study, the system developed is tested on the various documents which including text and image.
APA, Harvard, Vancouver, ISO, and other styles
39

Frassinetti, Francesca, Manule Maini, Sabrina Romualdi, Emanuela Galante, and Stefano Avanzi. "Is it Mine? Hemispheric Asymmetries in Corporeal Self-recognition." Journal of Cognitive Neuroscience 20, no. 8 (August 2008): 1507–16. http://dx.doi.org/10.1162/jocn.2008.20067.

Full text of the source
Abstract:
The aim of this study was to investigate whether the recognition of “self body parts” is independent from the recognition of other people's body parts. If this is the case, the ability to recognize “self body parts” should be selectively impaired after lesion involving specific brain areas. To verify this hypothesis, patients with lesion of the right (right brain-damaged [RBD]) or left (left brain-damaged [LBD]) hemisphere and healthy subjects were submitted to a visual matching-to-sample task in two experiments. In the first experiment, stimuli depicted their own body parts or other people's body parts. In the second experiment, stimuli depicted parts of three categories: objects, bodies, and faces. In both experiments, participants were required to decide which of two vertically aligned images (the upper or the lower one) matched the central target stimulus. The results showed that the task indirectly tapped into bodily self-processing mechanisms, in that both LBD patients and normal subjects performed the task better when they visually matched their own, as compared to others', body parts. In contrast, RBD patients did not show such an advantage for self body parts. Moreover, they were more impaired than LBD patients and normal subjects when visually matching their own body parts, whereas this difference was not evident in performing the task with other people's body parts. RBD patients' performance for the other stimulus categories (face, body, object), although worse than LBD patients' and normal subjects' performance, was comparable across categories. These findings suggest that the right hemisphere may be involved in the recognition of self body parts, through a fronto-parietal network.
APA, Harvard, Vancouver, ISO, and other styles
40

Kisku, Dakshina Ranjan, and Srinibas Rana. "Multithread Face Recognition in Cloud." Journal of Sensors 2016 (2016): 1–21. http://dx.doi.org/10.1155/2016/2575904.

Full text of the source
Abstract:
Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Recently, biometric systems have proven to be essential security tools, in which bulk matching of enrolled people against watch lists is performed every day. To facilitate this process, organizations with large computing facilities need to maintain these facilities. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who can maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points extracted from each face. To achieve high interclass and low intraclass variances, a set of six PCA-characterized face instances is computed on the columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules, and a novel cohort selection technique is applied to increase overall performance. The proposed protomodel is tested on the BioID and FEI face databases, and the efficacy of the system is demonstrated by the obtained results. We also compare the proposed method with other well-known methods.
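As a rough illustration of the matching side only, the sketch below extracts SIFT descriptors with OpenCV and fuses match counts with sum and max rules; the PCA-characterized instances, the cohort selection, and the cloud deployment described in the paper are not reproduced, and the file names are placeholders:

import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def sift_descriptors(gray):
    """SIFT descriptors of a grayscale face crop."""
    _, desc = sift.detectAndCompute(gray, None)
    return desc

def match_score(desc_a, desc_b, ratio=0.75):
    """Count of ratio-test matches between two descriptor sets."""
    if desc_a is None or desc_b is None:
        return 0
    good = 0
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

# Hypothetical file names; any grayscale face crops will do.
probe = cv2.imread("probe_face.png", cv2.IMREAD_GRAYSCALE)
gallery = [cv2.imread(name, cv2.IMREAD_GRAYSCALE)
           for name in ("enrolled_1.png", "enrolled_2.png")]

probe_desc = sift_descriptors(probe)
scores = [match_score(probe_desc, sift_descriptors(g)) for g in gallery]
print("sum-rule fused score:", sum(scores))
print("max-rule fused score:", max(scores))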
APA, Harvard, Vancouver, ISO, and other styles
41

Yang, Xu, and Li Zhang. "Research on the Influence of Time-honored Cross-border Product Market Matching Degree on Purchase Intention." Frontiers in Business, Economics and Management 7, no. 1 (December 26, 2022): 167–73. http://dx.doi.org/10.54097/fbem.v7i1.3967.

Full text of the source
Abstract:
With the development of society, time-honored brands face increasingly diverse consumer demands, and fierce market competition leads to a trend of product homogeneity. To preserve the advantage of their products or services and bring new added value, these brands have begun to adopt a "crossover" mode, using it to give consumers a multi-dimensional experience, meet their demands, and enhance brand awareness. This paper combines theory with empirical analysis. The concepts of cross-border products, market matching degree, brand recognition and purchase intention of time-honored brands are reviewed and organized, the structure of the research model is determined, and the relationship between the cross-border market matching degree of time-honored brands and the purchase intention for cross-border products is constructed, with brand cognition introduced as a mediating variable and hypotheses proposed. Then, a reasonable scale was developed based on the existing literature and the characteristics of the research object, and a questionnaire survey was used for data collection. Finally, the study used SPSS 22.0, AMOS and other software to test the proposed hypotheses, draw conclusions, and make relevant suggestions.
APA, Harvard, Vancouver, ISO, and other styles
42

Krosman, Kazimierz, and Janusz Sosnowski. "Correlating Time Series Signals and Event Logs in Embedded Systems." Sensors 21, no. 21 (October 27, 2021): 7128. http://dx.doi.org/10.3390/s21217128.

Full text of the source
Abstract:
In many embedded systems, we face the problem of correlating signals characterising device operation (e.g., performance parameters, anomalies) with events describing internal device activities. This leads to the investigation of two types of data: time series, representing signal periodic samples in a background of noise, and sporadic event logs. The correlation process must take into account clock inconsistencies between the data acquisition and monitored devices, which provide time series signals and event logs, respectively. The idea of the presented solution is to classify event logs based on the introduced similarity metric and deriving their distribution in time. The identified event log sequences are matched with time intervals corresponding to specified sample patterns (objects) in the registered signal time series. The matching (correlation) process involves iterative time offset adjustment. The paper presents original algorithms to investigate correlation problems using the object-oriented data models corresponding to two monitoring sources. The effectiveness of this approach has been verified in power consumption analysis using real data collected from the developed Holter device. It is quite universal and can be easily adapted to other device optimisation problems.
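A minimal numpy sketch of the offset-adjustment idea is given below; it assumes the signal is a uniformly sampled power trace and the log is a list of event timestamps, and it leaves out the similarity-based log classification described in the paper:

import numpy as np

def event_series(timestamps, n_samples, dt):
    """Rasterise event timestamps onto the signal's sampling grid."""
    s = np.zeros(n_samples)
    idx = np.clip((np.asarray(timestamps) / dt).astype(int), 0, n_samples - 1)
    s[idx] = 1.0
    return s

def best_offset(signal, events, dt, max_shift_s=5.0):
    """Clock offset (in seconds) that maximises the cross-correlation."""
    max_shift = int(max_shift_s / dt)
    sig = (signal - signal.mean()) / (signal.std() + 1e-12)
    best, best_shift = -np.inf, 0
    for shift in range(-max_shift, max_shift + 1):
        score = float(np.dot(sig, np.roll(events, shift)))
        if score > best:
            best, best_shift = score, shift
    return best_shift * dt

dt = 0.01                                   # 100 Hz sampling rate (assumed)
t = np.arange(0, 60, dt)
power = 0.2 * np.random.randn(t.size)       # synthetic power-consumption trace
true_events = [5.0, 17.3, 42.8]             # timestamps taken from the event log
for ts in true_events:                      # each event raises consumption briefly,
    power[(t >= ts + 1.2) & (t < ts + 1.7)] += 1.0   # with a simulated 1.2 s clock skew
logged = event_series(true_events, t.size, dt)
print("estimated clock offset (s):", round(best_offset(power, logged, dt), 2))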
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Hui Jie, Chen Wang, Hong Guang Sun, Yan Wen Li, and Wei Zhou Guan. "Manifold Recognition Based on Discriminative-Analysis of Canonical Correlations." Advanced Materials Research 271-273 (July 2011): 185–90. http://dx.doi.org/10.4028/www.scientific.net/amr.271-273.185.

Full text of the source
Abstract:
Nowadays, the idea of recognition based on image sets looms large in real-world applications. From the viewpoint of manifold learning, each image set is commonly regarded as a manifold, and we formulate the problem of set recognition as manifold recognition (MR). Since it is impossible to directly compute the distance between nonlinear manifolds, we focus on constructing local linear subspaces. Among subspace matching methods, canonical correlations have recently drawn intensive attention. For the task of MR, we propose a method of Manifold Recognition Based on Discriminative-Analysis of Canonical Correlations (MRDCC). The proposed method is evaluated on two datasets: the Honda/UCSD face video database and the ETH-80 object database. Comprehensive comparisons and results demonstrate the effectiveness of our method.
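The core quantity behind such methods, the canonical correlations (cosines of the principal angles) between two image-set subspaces, can be computed in a few lines of numpy; the discriminative learning step of MRDCC is not included in this sketch, and the synthetic data are only for illustration:

import numpy as np

def subspace_basis(image_set, dim=10):
    """Orthonormal basis of the subspace spanned by a set of vectorised images
    (each column of image_set is one image)."""
    X = image_set - image_set.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]

def canonical_correlations(Q1, Q2):
    """Cosines of the principal angles between span(Q1) and span(Q2)."""
    return np.linalg.svd(Q1.T @ Q2, compute_uv=False)

rng = np.random.default_rng(0)
set_a = rng.normal(size=(400, 60))                           # 60 images, 400 pixels each
mixing = rng.normal(size=(60, 60))
set_b = set_a @ mixing + 0.1 * rng.normal(size=(400, 60))    # same subspace plus noise
corr = canonical_correlations(subspace_basis(set_a), subspace_basis(set_b))
print("leading canonical correlations:", np.round(corr[:5], 3))
# A basic set-to-set similarity is the sum (or mean) of these correlations;
# MRDCC additionally learns a discriminative transform before comparing them.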
APA, Harvard, Vancouver, ISO, and other styles
44

Prasad Singothu, Babu Rajendra, and Bolem Sai Chandana. "Objects and Action Detection of Human Faces through Thermal Images Using ANU-Net." Sensors 22, no. 21 (October 27, 2022): 8242. http://dx.doi.org/10.3390/s22218242.

Full text of the source
Abstract:
Thermal cameras, as opposed to RGB cameras, work effectively in extremely low illumination situations and can record data outside of the human visual spectrum. For surveillance and security applications, thermal images have several benefits. However, due to the little visual information in thermal images and the intrinsic similarity of facial heat maps, completing face identification tasks in the thermal realm is particularly difficult. It can be difficult to attempt identification across modalities, such as when trying to identify a face in thermal images using the ground truth database for the matching visible light domain, or vice versa. In this paper, we propose a method for detecting objects and actions on thermal human face images, based on the classification of five different features (hat, glasses, rotation, normal, and hat with glasses). The model is presented in five steps. To improve the results of feature extraction, in the pre-processing step we first resize the images, convert them to grayscale, and apply a median filter. Features are then extracted from the pre-processed images using principal component analysis (PCA), and the horse herd optimization algorithm (HOA) is employed for feature selection. Next, the LeNet-5 method is used to detect the human face in thermal images and to locate objects and actions in face areas. Finally, we classify the objects and actions on faces using the ANU-Net approach with the Monarch butterfly optimization (MBO) algorithm to achieve higher classification accuracy. According to experiments using the Terravic Facial Infrared Database, the proposed method outperforms state-of-the-art methods for face recognition in thermal images. Additionally, the results for several facial recognition tasks demonstrate good precision.
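Only the pre-processing and PCA feature-extraction stages lend themselves to a short sketch; the HOA feature selection, LeNet-5 detection and ANU-Net/MBO classification stages are omitted, and the image size, component count and file names below are assumptions rather than values from the paper:

import cv2
import numpy as np

def preprocess(path, size=(64, 64)):
    """Resize, convert to grayscale and median-filter a thermal frame."""
    img = cv2.imread(path)                       # hypothetical thermal image file
    img = cv2.resize(img, size)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.medianBlur(gray, 3).astype(np.float32).ravel()

def pca_fit(X, n_components=32):
    """Mean vector and top principal axes of the training matrix (rows = images)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_features(x, mean, axes):
    """Project one pre-processed image onto the principal axes."""
    return (x - mean) @ axes.T

train_paths = ["thermal_000.png", "thermal_001.png"]     # hypothetical training files
X = np.stack([preprocess(p) for p in train_paths])
mean, axes = pca_fit(X, n_components=min(32, X.shape[0]))
features = pca_features(preprocess("thermal_query.png"), mean, axes)
print("PCA feature vector length:", features.shape[0])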
APA, Harvard, Vancouver, ISO, and other styles
45

Damjanović, Sanja, Ferdinand van der Heijden, and Luuk J. Spreeuwers. "Local Stereo Matching Using Adaptive Local Segmentation." ISRN Machine Vision 2012 (August 23, 2012): 1–11. http://dx.doi.org/10.5402/2012/163285.

Full text of the source
Abstract:
We propose a new dense local stereo matching framework for gray-level images based on an adaptive local segmentation using a dynamic threshold. We define a new validity domain of the frontoparallel assumption based on the local intensity variations in the four neighborhoods of the matching pixel. The preprocessing step smoothes low-textured areas and sharpens texture edges, whereas the postprocessing step detects and recovers occluded and unreliable disparities. The algorithm achieves high stereo reconstruction quality in regions with uniform intensities as well as in textured regions. The algorithm is robust against local radiometric differences and successfully recovers disparities around object edges, disparities of thin objects, and disparities in occluded regions. Moreover, our algorithm intrinsically prevents errors caused by occlusion from propagating into non-occluded regions. It has only a small number of parameters. The performance of our algorithm is evaluated on the Middlebury test bed stereo images. It ranks highly on the evaluation list, outperforming many local and global stereo algorithms that use color images. Among the local algorithms relying on the frontoparallel assumption, our algorithm is the best-ranked one. We also demonstrate that our algorithm works well on practical examples, such as disparity estimation for a tomato seedling and a 3D reconstruction of a face.
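For contrast with this adaptive approach, a bare-bones local block matcher for rectified gray-level pairs looks as follows (the adaptive local segmentation, dynamic threshold, and the pre- and postprocessing of the paper are deliberately left out of this sketch):

import cv2
import numpy as np

def sad_disparity(left, right, max_disp=32, radius=4):
    """Winner-take-all disparity via window-aggregated absolute differences."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    k = 2 * radius + 1
    best_cost = np.full(left.shape, np.inf, dtype=np.float32)
    disparity = np.zeros(left.shape, dtype=np.int32)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # shift the right image by d pixels
        cost = cv2.boxFilter(np.abs(left - shifted), -1, (k, k), normalize=False)
        better = cost < best_cost                    # keep the cheapest disparity per pixel
        disparity[better] = d
        best_cost[better] = cost[better]
    return disparity

# Usage on a rectified pair (file names are placeholders):
# left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
# right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
# disp = sad_disparity(left, right)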
APA, Harvard, Vancouver, ISO, and other styles
46

LUO, YUAN, MARINA L. GAVRILOVA, and PATRICK S. P. WANG. "FACIAL METAMORPHOSIS USING GEOMETRICAL METHODS FOR BIOMETRIC APPLICATIONS." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 03 (May 2008): 555–84. http://dx.doi.org/10.1142/s0218001408006399.

Full text of the source
Abstract:
Facial expression modeling has been a popular topic in biometrics for many years. One of the emerging recent trends is capturing subtle details such as wrinkles, creases and minor imperfections that are highly important for biometric modeling as well as matching. In this paper, we suggest a novel approach to the problem of expression modeling and morphing based on a geometry-based paradigm. In 2D image space, a distance-based morphing system is utilized to create a line drawing style facial animation from two input images representing frontal and profile views of the face. Aging wrinkles and expression lines are extracted and mapped back to the synthesized facial NPR (nonphotorealistic) sketches. In 3D object space, we present a metamorphosis system that combines the traditional free-form deformation (FFD) model with data interpolation techniques based on the proximity preserving Voronoi diagram. With feature points selected from two images of the target face, the proposed system generates the 3D target facial model by transforming a generic model. Experimental results demonstrate that morphing sequences generated by our systems are of convincing quality.
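A heavily simplified stand-in for the morphing stage is sketched below: corresponding feature points are linearly interpolated and already-aligned images are cross-dissolved; the FFD deformation and Voronoi-based interpolation of the paper are not reproduced, and the landmark coordinates are invented for illustration:

import numpy as np

def morph_points(src_pts, dst_pts, t):
    """Intermediate landmark positions for a morph parameter t in [0, 1]."""
    return (1.0 - t) * src_pts + t * dst_pts

def cross_dissolve(img_a, img_b, t):
    """Pixel-wise blend of two (already aligned) images."""
    return ((1.0 - t) * img_a.astype(np.float32)
            + t * img_b.astype(np.float32)).astype(np.uint8)

# Hypothetical corresponding landmarks (eyes, mouth) on two frontal face images.
src = np.array([[120.0, 90.0], [180.0, 92.0], [150.0, 160.0]])
dst = np.array([[118.0, 95.0], [184.0, 97.0], [149.0, 170.0]])
for t in (0.0, 0.5, 1.0):
    print("t =", t, "->", morph_points(src, dst, t).round(1).tolist())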
APA, Harvard, Vancouver, ISO, and other styles
47

POONKUNTRAN, SHANMUGAM, R. S. RAJESH, and PERUMAL ESWARAN. "ENERGY AWARE FUZZY COLOR SEGMENTATION ALGORITHM — AN APPLICATION TO CRIMINAL IDENTIFICATION USING MOBILE DEVICES." International Journal of Wavelets, Multiresolution and Information Processing 06, no. 05 (September 2008): 707–18. http://dx.doi.org/10.1142/s0219691308002604.

Full text of the source
Abstract:
Since its advent, the use of digital cameras in mobile phones has become increasingly popular, and information retrieval based on the visual appearance of an object is very useful when specific parameters for the object are not known. This popularity, however, calls for energy-aware algorithms to carry out tasks such as segmentation and feature extraction. In this paper, a new energy-aware fuzzy color segmentation algorithm is proposed and applied to face segmentation in criminal identification using mobile devices. The criminals in the application fall into three classes: New Criminal (NC), Suspected Criminal (SC) and Confirmed Criminal (CC). The application is essentially a mobile image-based content search engine that takes photographs of criminals as image queries and finds their relevant contents by matching them to similar contents in the criminal databases. The energy-aware fuzzy color segmentation is used to obtain the most significant parts of an image, the facial regions of the persons, which are then used to build image-based queries to the databases. The content search methodology in the application is also improved through fuzzy modeling to make the application more flexible and simpler. The experiments conducted show that the proposed color segmentation algorithm is more robust and reduces the computational time of the search process by minimizing the number of false cases. It could detect faces in images where other known algorithms failed.
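To make the idea of soft (fuzzy) colour segmentation concrete, the sketch below evaluates a trapezoidal "skin" membership over the Cr chrominance channel; the membership breakpoints and the thresholds are assumptions of this sketch, not values from the paper, and its energy-aware aspects are not modelled:

import cv2
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b] and falling on [c, d]."""
    x = x.astype(np.float32)
    rise = np.clip((x - a) / max(b - a, 1e-6), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-6), 0.0, 1.0)
    return np.minimum(rise, fall)

def fuzzy_skin_map(bgr):
    """Per-pixel degree of membership in the fuzzy set 'skin-coloured'."""
    cr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 1]
    return trapezoid(cr, 130, 140, 165, 175)

# image = cv2.imread("query_photo.png")            # hypothetical query image
# skin = fuzzy_skin_map(image)                     # membership values in [0, 1]
# face_candidates = skin > 0.5                     # defuzzify with an alpha-cut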
APA, Harvard, Vancouver, ISO, and other styles
48

Fu, Xuhui. "Design of Facial Recognition System Based on Visual Communication Effect." Computational Intelligence and Neuroscience 2021 (December 9, 2021): 1–9. http://dx.doi.org/10.1155/2021/1539596.

Full text of the source
Abstract:
At present, facial recognition is a cutting-edge technology and has become a very active research branch. This research first summarizes the state of facial recognition and related technologies based on visual communication, and then uses the OpenCV open-source vision library, together with the designed system architecture and the installed hardware, to implement the face detection and image matching programs and thus a complete OpenCV-based face recognition system. The experimental results show that the hardware and software system can capture images and perform online recognition; the subjects of the evaluation are the system's testers. In general, the OpenCV-based face recognition system can reliably, stably, and quickly perform face detection and recognition for the testers in this setting, and facial recognition works well.
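A minimal OpenCV detection-plus-matching loop in the spirit of such a system might look as follows; the Haar cascade shipped with OpenCV and normalized template matching stand in for internal details the abstract does not specify, and the enrolled-image file name is a placeholder:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    """Face bounding boxes from the stock frontal-face Haar cascade."""
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def matches_enrolled(face_gray, enrolled_gray, threshold=0.7):
    """Very simple matching: normalized correlation against one enrolled image."""
    face = cv2.resize(face_gray, (enrolled_gray.shape[1], enrolled_gray.shape[0]))
    score = cv2.matchTemplate(face, enrolled_gray, cv2.TM_CCOEFF_NORMED)
    return float(score.max()) >= threshold

cap = cv2.VideoCapture(0)                                            # webcam capture
enrolled = cv2.imread("enrolled_tester.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detect_faces(gray):
        recognised = matches_enrolled(gray[y:y + h, x:x + w], enrolled)
        print("face at", (x, y, w, h), "recognised:", recognised)
cap.release()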
APA, Harvard, Vancouver, ISO, and other styles
49

Nakayama, Ken, Shinsuke Shimojo, and Gerald H. Silverman. "Stereoscopic Depth: Its Relation to Image Segmentation, Grouping, and the Recognition of Occluded Objects." Perception 18, no. 1 (February 1989): 55–68. http://dx.doi.org/10.1068/p180055.

Full text of the source
Abstract:
Image regions corresponding to partially hidden objects are enclosed by two types of bounding contour: those inherent to the object itself (intrinsic) and those defined by occlusion (extrinsic). Intrinsic contours provide useful information regarding object shape, whereas extrinsic contours vary arbitrarily depending on accidental spatial relationships in scenes. Because extrinsic contours can only degrade the process of surface description and object recognition, it is argued that they must be removed prior to a stage of template matching. This implies that the two types of contour must be distinguished relatively early in visual processing and we hypothesize that the encoding of depth is critical for this task. The common border is attached to and regarded as intrinsic to the closer region, and detached from and regarded as extrinsic to the farther region. We also suggest that intrinsic borders aid in the segmentation of image regions and thus prevent grouping, whereas extrinsic borders provide a linkage to other extrinsic borders and facilitate grouping. Support for these views is found in a series of demonstrations, and also in an experiment where the expected superiority of recognition was found when partially sampled faces were seen in a back rather than a front stereoscopic depth plane.
APA, Harvard, Vancouver, ISO, and other styles
50

Wan, Guoyang, Guofeng Wang, and Yunsheng Fan. "A Robotic grinding station based on an industrial manipulator and vision system." PLOS ONE 16, no. 3 (March 24, 2021): e0248993. http://dx.doi.org/10.1371/journal.pone.0248993.

Full text of the source
Abstract:
Due to ever-increasing precision and automation demands in robotic grinding, the automatic and robust robotic grinding workstation has become a research hot-spot. This work proposes a grinding workstation consisting of machine vision and an industrial manipulator to solve the difficulty of positioning rough metal cast objects and grinding them automatically. Faced with the complex characteristics of the industrial environment, such as weak contrast and nonuniform or scarce lighting, a coarse-to-fine two-step localization strategy was used for obtaining the object position. A deep neural network and a template matching method were employed to determine the object position precisely in the presence of ambient light. Subsequently, edge extraction and contour fitting techniques were used to measure the position of the object contour and to locate the main burr on its surface after eliminating the influence of burrs. The grid method was employed for detecting the main burrs, and the offline grinding trajectory of the industrial manipulator was planned with the guidance of the coordinate transformation method. The system greatly improves automation through the entire process of loading, grinding and unloading. It can determine the object position and target the robotic grinding trajectory by the shape of the burr on the surface of an object. The measurements indicate that this system can work stably and efficiently, and the experimental results demonstrate the high accuracy and high efficiency of the proposed method. Meanwhile, it can overcome the influence of the work piece material, scratches and rust.
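A compressed illustration of the coarse-to-fine localisation idea is given below: template matching yields a rough position, and edge/contour extraction refines the outline inside that region; the deep-network stage, the burr grid analysis and the robot coordinate transformation are not included, and the file names are placeholders:

import cv2
import numpy as np

def coarse_locate(scene_gray, template_gray):
    """Rough top-left corner of the work piece via normalized correlation."""
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return max_loc

def fine_contour(scene_gray, top_left, size):
    """Largest contour inside the coarse region of interest."""
    x, y = top_left
    roi = scene_gray[y:y + size[1], x:x + size[0]]
    edges = cv2.Canny(roi, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

# scene = cv2.imread("workstation_view.png", cv2.IMREAD_GRAYSCALE)      # placeholder
# template = cv2.imread("cast_part_template.png", cv2.IMREAD_GRAYSCALE) # placeholder
# tl = coarse_locate(scene, template)
# outline = fine_contour(scene, tl, (template.shape[1], template.shape[0]))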
APA, Harvard, Vancouver, ISO, and other styles