
Journal articles on the topic "Face and Object Recognition"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other styles


Consult the 50 best scholarly journal articles on the topic "Face and Object Recognition".

An "Add to bibliography" button is available next to every work in the list. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online whenever the relevant parameters are available in the metadata.

Browse journal articles from many disciplines and compile an accurate bibliography.

1

Gülbetekin, Evrim, Seda Bayraktar, Özlenen Özkan, Hilmi Uysal, and Ömer Özkan. "Face Perception in Face Transplant Patients". Facial Plastic Surgery 35, no. 05 (20.08.2019): 525–33. http://dx.doi.org/10.1055/s-0038-1666786.

Abstract:
The authors tested face discrimination, face recognition, object discrimination, and object recognition in two face transplantation patients (FTPs) who had had facial injuries since infancy, a patient who had undergone facial surgery due to a recent wound, and two control subjects. In Experiment 1, the authors showed them original faces and morphed forms of those faces and asked them to rate the similarity between the two. In Experiment 2, they showed old, new, and implicit faces and asked whether they recognized them or not. In Experiment 3, they showed them original objects and morphed forms of those objects and asked them to rate the similarity between the two. In Experiment 4, they showed old, new, and implicit objects and asked whether they recognized them or not. Object discrimination and object recognition performance did not differ between the FTPs and the controls. However, the face discrimination performance of FTP2 and the face recognition performance of FTP1 were poorer than those of the controls. Therefore, the authors concluded that the structure of the face might affect face processing.
2

Biederman, Irving, and Peter Kalocsais. "Neurocomputational bases of object and face recognition". Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (29.08.1997): 1203–19. http://dx.doi.org/10.1098/rstb.1997.0103.

Abstract:
A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn–like pattern of activation onto a representation layer that preserves relative spatial filter values in a two–dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a ‘jet’) is centered on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non–accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel and Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces.
3

Gauthier, Isabel, Marlene Behrmann, and Michael J. Tarr. "Can Face Recognition Really be Dissociated from Object Recognition?" Journal of Cognitive Neuroscience 11, no. 4 (July 1999): 349–70. http://dx.doi.org/10.1162/089892999563472.

Abstract:
We argue that the current literature on prosopagnosia fails to demonstrate unequivocal evidence for a disproportionate impairment for faces as compared to nonface objects. Two prosopagnosic subjects were tested for the discrimination of objects from several categories (face as well as nonface) at different levels of categorization (basic, subordinate, and exemplar levels). Several dependent measures were obtained including accuracy, signal detection measures, and response times. The results from Experiments 1 to 4 demonstrate that, in simultaneous-matching tasks, response times may reveal impairments with nonface objects in subjects whose error rates only indicate a face deficit. The results from Experiments 5 and 6 show that, given limited stimulus presentation times for face and nonface objects, the same subjects may demonstrate a deficit for both stimulus categories in sensitivity. In Experiments 7, 8 and 9, a match-to-sample task that places greater demands on memory led to comparable recognition sensitivity with both face and nonface objects. Regardless of object category, the prosopagnosic subjects were more affected by manipulations of the level of categorization than normal controls. This result raises questions regarding neuropsychological evidence for the modularity of face recognition, as well as its theoretical and methodological foundations.
4

Campbell, Alison, and James W. Tanaka. "Inversion Impairs Expert Budgerigar Identity Recognition: A Face-Like Effect for a Nonface Object of Expertise". Perception 47, no. 6 (24.04.2018): 647–59. http://dx.doi.org/10.1177/0301006618771806.

Abstract:
The face-inversion effect is the finding that picture-plane inversion disproportionately impairs face recognition compared to object recognition and is now attributed to greater orientation-sensitivity of holistic processing for faces but not common objects. Yet, expert dog judges have shown similar recognition deficits for inverted dogs and inverted faces, suggesting that holistic processing is not specific to faces but to the expert recognition of perceptually similar objects. Although processing changes in expert object recognition have since been extensively documented, no other studies have observed the distinct recognition deficits for inverted objects-of-expertise that people, as face experts, show for faces. However, few studies have examined experts who recognize individual objects similar to how people recognize individual faces. Here we tested experts who recognize individual budgerigar birds. The effect of inversion on viewpoint-invariant budgerigar and face recognition was compared for experts and novices. Consistent with the face-inversion effect, novices showed recognition deficits for inverted faces but not for inverted budgerigars. By contrast, experts showed equal recognition deficits for inverted faces and budgerigars. The results are consistent with the hypothesis that processes underlying the face-inversion effect are specific to the expert individuation of perceptually similar objects.
5

Moscovitch, Morris, Gordon Winocur, and Marlene Behrmann. "What Is Special about Face Recognition? Nineteen Experiments on a Person with Visual Object Agnosia and Dyslexia but Normal Face Recognition". Journal of Cognitive Neuroscience 9, no. 5 (October 1997): 555–604. http://dx.doi.org/10.1162/jocn.1997.9.5.555.

Abstract:
In order to study face recognition in relative isolation from visual processes that may also contribute to object recognition and reading, we investigated CK, a man with normal face recognition but with object agnosia and dyslexia caused by a closed-head injury. We administered recognition tests of upright faces, of family resemblance, of age-transformed faces, of caricatures, of cartoons, of inverted faces, of face features, of disguised faces, of perceptually degraded faces, of fractured faces, of face parts, and of faces whose parts were made of objects. We compared CK's performance with that of at least 12 control participants. We found that CK performed as well as controls as long as the face was upright and retained the configurational integrity among the internal facial features, the eyes, nose, and mouth. This held regardless of whether the face was disguised or degraded and whether the face was represented as a photo, a caricature, a cartoon, or a face composed of objects. In the last case, CK perceived the face but, unlike controls, was rarely aware that it was composed of objects. When the face, or just the internal features, were inverted or when the configurational gestalt was broken by fracturing the face or misaligning the top and bottom halves, CK's performance suffered far more than that of controls. We conclude that face recognition normally depends on two systems: (1) a holistic, face-specific system that is dependent on orientation-specific coding of second-order relational features (internal), which is intact in CK and (2) a part-based object-recognition system, which is damaged in CK and which contributes to face recognition when the face stimulus does not satisfy the domain-specific conditions needed to activate the face system.
6

McGugin, Rankin W., Ana E. Van Gulick, and Isabel Gauthier. "Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance". Journal of Cognitive Neuroscience 28, no. 2 (February 2016): 282–94. http://dx.doi.org/10.1162/jocn_a_00891.

Abstract:
The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to nonface objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects but rather living and nonliving objects.
7

Duchaine, Brad, and Ken Nakayama. "Dissociations of Face and Object Recognition in Developmental Prosopagnosia". Journal of Cognitive Neuroscience 17, no. 2 (February 2005): 249–61. http://dx.doi.org/10.1162/0898929053124857.

Abstract:
Neuropsychological studies with patients suffering from prosopagnosia have provided the main evidence for the hypothesis that the recognition of faces and objects rely on distinct mechanisms. Yet doubts remain, and it has been argued that no case demonstrating an unequivocal dissociation between face and object recognition exists due in part to the lack of appropriate response time measurements (Gauthier et al., 1999). We tested seven developmental prosopagnosics to measure their accuracy and reaction times with multiple tests of face recognition and compared this with a larger battery of object recognition tests. For our systematic comparison, we used an old/new recognition memory paradigm involving memory tests for cars, tools, guns, horses, natural scenes, and houses in addition to two separate tests for faces. Developmental prosopagnosic subjects performed very poorly with the face memory tests as expected. Four of the seven prosopagnosics showed a very strong dissociation between the face and object tests. Systematic comparison of reaction time measurements for all tests indicates that the dissociations cannot be accounted for by differences in reaction times. Contrary to an account based on speed accuracy tradeoffs, prosopagnosics were systematically faster in nonface tests than in face tests. Thus, our findings demonstrate that face and nonface recognition can dissociate over a wide range of testing conditions. This is further support for the hypothesis that face and nonface recognition relies on separate mechanisms and that developmental prosopagnosia constitutes a disorder separate from developmental agnosia.
8

Stevanović, Dušan. "OBJECT DETECTION USING VIOLA-JONES ALGORITHM". Knowledge International Journal 28, no. 4 (10.12.2018): 1349–54. http://dx.doi.org/10.35120/kij28041349d.

Abstract:
This paper describes and applies a method for detecting the face and face parts in images using the Viola-Jones algorithm. The work is based on computer vision systems, a branch of artificial intelligence that deals with the recognition of two-dimensional or three-dimensional objects. Once the Cascade Object Detector script is trained, multimedia content is assigned for recognition. In this work the content is in the form of images, where the program has the task of recognizing the objects in the images, separating the parts of the images in the head area and, on each detected face, separately marking the area around the eyes, nose and mouth. The detection and recognition algorithm is based on scanning and analyzing the front part of the human head. Common uses of face detection and recognition can be found in biometrics, photography, the autofocus function implemented in professional cameras, and smile detectors (Keller, 2007). Marketing is another popular field where face detection and recognition can be used. For example, web cameras built into TVs can detect every face in the nearby area; by computing various algorithms and parameters based on sex, age and ethnicity, such a system can play precisely segmented television commercials and campaigns. An example of this kind of system is OptimEyes (Strasburger, 2013). In other words, every algorithm whose main goal is to detect and recognize a face in an image should report whether any face is present and, if the answer is positive, where it is located in the image. In order to achieve acceptable performance, the algorithm should minimize false recognitions, that is, the cases in which the algorithm ignores and fails to recognize a real object in the image and, vice versa, in which a wrong object is recognized as real. One of the algorithms frequently applied in this area of research is the Viola-Jones algorithm. This algorithm works in real time, meaning that besides detection it is also possible to track faces in video material. The problem analyzed in this paper is facial image detection. A human can do this task very easily, but for a computer to do the same, a range of precise and accurate information, formulas, methods and techniques is necessary. To maximize the precision of face recognition in images using the Viola-Jones algorithm, it is desirable that the subjects in the images directly face the image-taking device, as shown through experiments.
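A minimal Python sketch of this kind of cascade detection, using OpenCV's bundled Haar cascades rather than the MATLAB Cascade Object Detector script mentioned in the abstract (the cascade file names, detection parameters and input image are assumptions):

    # Illustrative Viola-Jones detection with OpenCV's bundled Haar cascades.
    # Cascade file names and detection parameters are assumptions, not taken from the paper.
    import cv2

    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    img = cv2.imread("portrait.jpg")             # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Detect frontal faces, then search for eyes only inside each face region.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 0, 0), 2)

    cv2.imwrite("detections.jpg", img)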
9

Yuille, Alan L. "Deformable Templates for Face Recognition". Journal of Cognitive Neuroscience 3, no. 1 (January 1991): 59–70. http://dx.doi.org/10.1162/jocn.1991.3.1.59.

Abstract:
We describe an approach for extracting facial features from images and for determining the spatial organization between these features using the concept of a deformable template. This is a parameterized geometric model of the object to be recognized together with a measure of how well it fits the image data. Variations in the parameters correspond to allowable deformations of the object and can be specified by a probabilistic model. After the extraction stage the parameters of the deformable template can be used for object description and recognition.
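As a toy illustration of the idea of a parameterized template scored by how well it fits the image data (this is not Yuille's actual eye or mouth template; the energy term, optimizer and synthetic image are assumptions), one can fit a circular template by minimizing an energy over its parameters:

    # Minimal deformable-template sketch: fit a circle (cx, cy, r) to a grayscale image
    # by maximizing the mean edge strength sampled along the template boundary.
    # The energy definition and optimizer settings are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.ndimage import sobel

    def fit_circle_template(gray, init=(32.0, 32.0, 10.0)):
        # Edge-strength map used as the "image data" term of the energy.
        edges = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
        h, w = gray.shape
        angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

        def energy(params):
            cx, cy, r = params
            xs = np.clip((cx + r * np.cos(angles)).round().astype(int), 0, w - 1)
            ys = np.clip((cy + r * np.sin(angles)).round().astype(int), 0, h - 1)
            return -edges[ys, xs].mean()     # low energy = template lies on strong edges

        res = minimize(energy, np.array(init), method="Nelder-Mead")
        return res.x                          # fitted (cx, cy, r)

    # Example on a synthetic image containing a bright disc.
    img = np.zeros((64, 64))
    yy, xx = np.mgrid[0:64, 0:64]
    img[(xx - 40) ** 2 + (yy - 28) ** 2 <= 12 ** 2] = 1.0
    print(fit_circle_template(img, init=(32.0, 32.0, 8.0)))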
10

Jiang, Hairong, Juan P. Wachs, and Bradley S. Duerstock. "Integrated vision-based system for efficient, semi-automated control of a robotic manipulator". International Journal of Intelligent Computing and Cybernetics 7, no. 3 (5.08.2014): 253–66. http://dx.doi.org/10.1108/ijicc-09-2013-0042.

Abstract:
Purpose – The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system was developed specially for individuals with upper-level spinal cord injuries, including object tracking and face recognition, to function as an efficient, hands-free WMRM controller. Design/methodology/approach – Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures and locate the operator's face for object positioning, and then send those as commands to control the WMRM. The other sensor was used to automatically recognize different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut enabling the positioning of objects close to the subject's face. Findings – The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects. Originality/value – Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.
11

Dong, X. C., and V. I. Ionin. "Using object-oriented databases in face recognition". «System analysis and applied information science», no. 2 (18.08.2020): 54–60. http://dx.doi.org/10.21122/2309-4923-2020-2-54-60.

Abstract:
The aim of the work is to develop an algorithm functioning by a face recognition system using object-oriented databases. The system provides automatic identification of the desired object or identifies someone using a digital photo or video frame from a video source. The technology includes comparing pre-scanned face elements from the resulting image with prototypes of faces stored in the database. Modern packages of object-oriented databases give the user the opportunity to create a new class with the specified attributes and methods, obtain classes that inherit attributes and methods from super classes, create instances of the class, each of which has a unique object identifier, extract these instances one by one or in groups, and also download and perform these procedures. Using a convolutional neural network in the algorithm allows the transition from specific features of the image to more abstract details.
12

Tanaka, James W., and Martha J. Farah. "Parts and Wholes in Face Recognition". Quarterly Journal of Experimental Psychology Section A 46, no. 2 (May 1993): 225–45. http://dx.doi.org/10.1080/14640749308401045.

Abstract:
Are faces recognized using more holistic representations than other types of stimuli? Taking holistic representation to mean representation without an internal part structure, we interpret the available evidence on this issue and then design new empirical tests. Based on previous research, we reasoned that if a portion of an object corresponds to an explicitly represented part in a hierarchical visual representation, then when that portion is presented in isolation it will be identified relatively more easily than if it did not have the status of an explicitly represented part. The hypothesis that face recognition is holistic therefore predicts that a part of a face will be disproportionately more easily recognized in the whole face than as an isolated part, relative to recognition of the parts and wholes of other kinds of stimuli. This prediction was borne out in three experiments: subjects were more accurate at identifying the parts of faces, presented in the whole object, than they were at identifying the same part presented in isolation, even though both parts and wholes were tested in a forced-choice format and the whole faces differed only by one part. In contrast, three other types of stimuli – scrambled faces, inverted faces, and houses – did not show this advantage for part identification in whole object recognition.
13

Wang, Panqu, Isabel Gauthier, and Garrison Cottrell. "Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration". Journal of Cognitive Neuroscience 28, no. 4 (April 2016): 558–74. http://dx.doi.org/10.1162/jocn_a_00919.

Abstract:
Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and to an inverse correlation in abilities? We explain this conundrum using our neurocomputational model of face and object processing [“The Model”, TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a “spreading transform” for faces (separating them in representational space) that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise [Tong, M. H., Joyce, C. A., & Cottrell, G. W. Why is the fusiform face area recruited for novel categories of expertise? A neurocomputational investigation. Brain Research, 1202, 14–24, 2008].
14

Shakeshaft, Nicholas G., and Robert Plomin. "Genetic specificity of face recognition". Proceedings of the National Academy of Sciences 112, no. 41 (28.09.2015): 12887–92. http://dx.doi.org/10.1073/pnas.1421881112.

Abstract:
Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.
15

McGugin, R., J. Richler, G. Herzmann, M. Speegle, and I. Gauthier. "The contribution of general object recognition abilities to face recognition". Journal of Vision 12, no. 9 (10.08.2012): 810. http://dx.doi.org/10.1167/12.9.810.

16

Brooks, Brian E., Luke J. Rosielle, and Eric E. Cooper. "The Priming of Face Recognition after Metric Transformations". Perception 31, no. 3 (March 2002): 297–313. http://dx.doi.org/10.1068/p3283.

Abstract:
Four experiments were performed to test whether the perceptual priming of face recognition would show invariance to changes in size, position, reflectional orientation (mirror reversal), and picture-plane rotation. In all experiments, subjects recognized faces in two blocks of trials; in the second block, some of the faces were identical to those in the first, and others had undergone metric transformations. The results show that subjects were equally fast to recognize faces whether or not the faces had changed in size, position, or reflectional orientation between the first and second presentations of the faces. In contrast, subjects were slower to recognize both faces and objects when they were planar-rotated between the first and second presentations. The results suggest that the same metric invariances are shown by both face recognition and basic-level object recognition.
17

Moses, Yael, Shimon Ullman, and Shimon Edelman. "Generalization to Novel Images in Upright and Inverted Faces". Perception 25, no. 4 (April 1996): 443–61. http://dx.doi.org/10.1068/p250443.

Abstract:
An image of a face depends not only on its shape, but also on the viewpoint, illumination conditions, and facial expression. A face recognition system must overcome the changes in face appearance induced by these factors. Two related questions were investigated: the capacity of the human visual system to generalize the recognition of faces to novel images, and the level at which this generalization occurs. This problem was approached by comparing the identification and generalization capacity for upright and inverted faces. For upright faces, remarkably good generalization to novel conditions was found. For inverted faces, the generalization to novel views was significantly worse for both new illumination and viewpoint, although the performance on the training images was similar to that on the upright condition. The results indicate that at least some of the processes that support generalization across viewpoint and illumination are neither universal (because subjects did not generalize as easily for inverted faces as for upright ones) nor strictly object specific (because in upright faces nearly perfect generalization was possible from a single view, by itself insufficient for building a complete object-specific model). It is proposed that generalization in face recognition occurs at an intermediate level that is applicable to a class of objects, and that at this level upright and inverted faces initially constitute distinct object classes.
18

Liu, Yuhan. "Enhancing Face Recognition Accuracy Using Data Pre-processing Method and YOLO". Applied and Computational Engineering 8, no. 1 (1.08.2023): 667–74. http://dx.doi.org/10.54254/2755-2721/8/20230291.

Abstract:
The recognition of objects is an essential aspect of visual perception and finds extensive usage in diverse fields such as self-driving vehicles, security, robotics, and image retrieval. In this study, we investigate the performance of the YOLOv5 (You Only Look Once) algorithm for object detection on the VOC2007 dataset. The YOLOv5 model achieved a moderate overall accuracy and precision, demonstrating its potential for object detection tasks. However, the performance varied across different categories, with lower accuracy observed for less frequent categories and difficulties in distinguishing between closely related categories. We identify potential improvements to the YOLOv5 model's performance, including class balancing using weighted sampling and data augmentation, which may help the model to better learn to detect objects from under-represented categories and improve its ability to distinguish between similar objects. The results of our study imply that the YOLO algorithm has potential for object detection and classification projects in computer vision, however further study and refinement are necessary to broaden its efficacy across a greater variety of object classes and real-world scenarios.
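An inference sketch for the kind of YOLOv5 run evaluated in the study, assuming the public ultralytics/yolov5 hub model; the image path and confidence threshold are placeholders, and this is not the authors' training or evaluation setup:

    # Illustrative YOLOv5 inference via torch.hub; weights, image path and
    # confidence threshold are assumptions, not the paper's configuration.
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
    model.conf = 0.25                      # confidence threshold (assumed value)

    results = model("street_scene.jpg")    # hypothetical test image
    results.print()                        # summary of detected classes
    df = results.pandas().xyxy[0]          # detections as a pandas DataFrame
    print(df[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])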
19

Rodrigues, João, Roberto Lam, and Hans du Buf. "Cortical 3D Face and Object Recognition Using 2D Projections". International Journal of Creative Interfaces and Computer Graphics 3, no. 1 (January 2012): 45–62. http://dx.doi.org/10.4018/jcicg.2012010104.

Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. In cortical area V1 exist double-opponent colour blobs, also simple, complex and end-stopped cells which provide input for a multiscale line/edge representation, keypoints for dynamic feature routing, and saliency maps for Focus-of-Attention. All these combined allow faces to be segregated. Events of different facial views are stored in memory and combined to identify the view and recognise a face, including its expression. In this paper, the authors show that with five 2D views and their cortical representations it is possible to determine the left-right and frontal-lateral-profile views, achieving a view-invariant recognition rate of 91%. The authors also show that the same principle with eight views can be applied to 3D object recognition when they are mainly rotated about the vertical axis.
20

Fry, Regan, Jeremy Wilmer, Isabella Xie, Mieke Verfaellie, and Joseph DeGutis. "Evidence for normal novel object recognition abilities in developmental prosopagnosia". Royal Society Open Science 7, no. 9 (September 2020): 200988. http://dx.doi.org/10.1098/rsos.200988.

Abstract:
The issue of the face specificity of recognition deficits in developmental prosopagnosia (DP) is fundamental to the organization of high-level visual memory and has been increasingly debated in recent years. Previous DP investigations have found some evidence of object recognition impairments, but have almost exclusively used familiar objects (e.g. cars), where performance may depend on acquired object-specific experience and related visual expertise. An object recognition test not influenced by experience could provide a better, less contaminated measure of DPs' object recognition abilities. To investigate this, in the current study we tested 30 DPs and 30 matched controls on a novel object memory test (NOMT Ziggerins) and the Cambridge Face Memory Test (CFMT). DPs with severe impairment on the CFMT showed no differences in accuracy or reaction times compared with controls on the NOMT. We found similar results when comparing DPs with a larger sample of 274 web-based controls. Additional individual analyses demonstrated that the rate of object recognition impairment in DPs did not differ from the rate of impairment in either control group. Together, these results demonstrate unimpaired object recognition in DPs for a class of novel objects that serves as a powerful index for broader novel object recognition capacity.
21

Мельник, Р. А., Р. І. Квіт, and Т. М. Сало. "Face image profiles features extraction for recognition systems". Scientific Bulletin of UNFU 31, no. 1 (4.02.2021): 117–21. http://dx.doi.org/10.36930/40310120.

Abstract:
The object of research is the piecewise linear approximation algorithm as applied to the selection of facial features and the compression of face images. One of the problem areas is obtaining the optimal trade-off between the degree of compression and the accuracy of image reproduction, as well as the accuracy of the obtained facial features, which can be used to search for people in databases. The main characteristics of the face image are the coordinates and sizes of the eyes, mouth, nose and other objects of attention. Their dimensions, the distances between them, and their relationships also form a set of characteristics. A piecewise linear approximation algorithm is used to identify and determine these features. First, it is used to approximate the face image to obtain a graph of the silhouette from right to left and, second, to approximate fragments of the face to obtain silhouettes of the face from top to bottom. The purpose of the next stage is to implement multilevel segmentation of the approximated images to cover them with rectangles of different intensity; because of their shape, these are called barcodes. After these three stages of the algorithm, the face is represented by two barcode images, one vertical and one horizontal. This material is used to calculate facial features. The mean intensity function of a row or column is used to form the object of approximation and as a tool to measure the values of facial image characteristics. Additionally, the widths of the barcodes and the distances between them are calculated. Experimental results with faces from known databases are presented. The piecewise linear approximation is also used to compress facial images, and experiments have shown how the accuracy of the approximation changes with the degree of compression of the image. The algorithm has linear complexity in the number of pixels in the image, which allows it to be tested on large data. Finding the coordinates of a synchronized object, such as the eyes, allows all the distances between the objects of attention on the face to be calculated in relative form. The developed software has control parameters for conducting research.
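The row- and column-wise mean-intensity profiles that the method approximates can be sketched as below; this greedy piecewise-linear fit with a fixed tolerance is a simplified stand-in for the authors' algorithm, and the tolerance value and synthetic data are assumptions:

    # Sketch: piecewise-linear approximation of the column-wise mean-intensity profile
    # of a face image, using greedy splitting with an assumed error tolerance.
    import numpy as np

    def piecewise_linear(profile, tol=2.0):
        """Return breakpoint indices such that each segment's least-squares line
        deviates from the profile by at most `tol` gray levels (simplified greedy scheme)."""
        breaks = [0]
        start = 0
        for end in range(2, len(profile) + 1):
            x = np.arange(start, end)
            seg = profile[start:end]
            a, b = np.polyfit(x, seg, 1)                 # least-squares line for the segment
            if np.max(np.abs(a * x + b - seg)) > tol:
                breaks.append(end - 2)                   # close the segment before the violation
                start = end - 2
        breaks.append(len(profile) - 1)
        return breaks

    # Example with a stand-in image: the profile is the mean intensity per column.
    gray = np.random.default_rng(0).integers(0, 255, size=(128, 96)).astype(float)
    profile = gray.mean(axis=0)
    print(piecewise_linear(profile, tol=5.0))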
22

Bhange, Prof Anup. "Face Detection System with Face Recognition". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (31.01.2022): 1095–100. http://dx.doi.org/10.22214/ijraset.2022.39976.

Abstract:
The face is one of the easiest ways to distinguish an individual's identity. Face recognition is a personal identification system that uses the personal characteristics of a person to determine that person's identity. Nowadays, human face detection and recognition have become a major field of interest in current research because there is no deterministic algorithm for finding faces in a given image. The human face recognition procedure basically consists of two phases: face detection, a process that takes place very rapidly in humans except when the object is located only a short distance away, followed by recognition, which identifies a face as an individual (by comparing the face with a stored picture or with an image captured through a webcam). The face detection and recognition technology used here is introduced mainly through the OpenCV method. Face recognition is one of the most studied biometric technologies and has been developed by experts. The area of this project, a face detection system with face recognition, is image processing. The software requirement for this project is Python. Keywords: face detection, face recognition, cascade_classifier, LBPH.
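The cascade-plus-LBPH pipeline named in the keywords can be sketched with opencv-contrib-python as follows; the training-image layout, label mapping and probe image are assumptions, not the project's actual code:

    # Sketch of Haar-cascade detection followed by LBPH recognition.
    # Requires opencv-contrib-python for cv2.face; the file layout below is hypothetical.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.LBPHFaceRecognizer_create()

    def crop_face(path):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]   # first detected face
        return cv2.resize(gray[y:y + h, x:x + w], (100, 100))

    # Hypothetical enrollment set: two people, two photos each.
    train = {0: ["alice_1.jpg", "alice_2.jpg"], 1: ["bob_1.jpg", "bob_2.jpg"]}
    faces, labels = [], []
    for label, paths in train.items():
        for p in paths:
            faces.append(crop_face(p))
            labels.append(label)
    recognizer.train(faces, np.array(labels))

    # Recognize a new probe image (a webcam frame or a file).
    label, distance = recognizer.predict(crop_face("probe.jpg"))
    print("predicted person id:", label, "LBPH distance:", distance)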
23

Oruganti, Rakesh, and Namratha P. "Cascading Deep Learning Approach for Identifying Facial Expression YOLO Method". ECS Transactions 107, no. 1 (24.04.2022): 16649–58. http://dx.doi.org/10.1149/10701.16649ecst.

Abstract:
Face detection is one of the biggest tasks in finding objects, and it is usually the first stage of facial recognition and identity verification. In recent years, deep learning algorithms have changed object detection dramatically. These algorithms can usually be divided into two groups, namely two-stage detectors such as Faster R-CNN and single-stage detectors such as YOLO. While YOLO and its variants are less accurate than the two-stage detection systems, they outperform them in speed. When faced with standard-sized objects, YOLO works well, but it cannot capture smaller objects. A face recognition system that uses AI (Artificial Intelligence) identifies or verifies a person's identity by analyzing their face. In this project, a single neural network predicts bounding boxes and class probabilities directly from the full image in a single evaluation.
24

Gauthier, Isabel, and Kim M. Curby. "A Perceptual Traffic Jam on Highway N170". Current Directions in Psychological Science 14, no. 1 (February 2005): 30–33. http://dx.doi.org/10.1111/j.0963-7214.2005.00329.x.

Abstract:
Whether face processing is modular or not has been the topic of a lively empirical and theoretical debate. In expert observers, the perception of nonface objects in their domain of expertise is remarkably similar to their perception of faces, in patterns of both behavioral performance and brain activation, providing some evidence against the modularity of face perception. However, the studies that have yielded these results do not rule out the possibility that object expertise and face processing occur in spatially overlapping, but functionally independent, brain regions. Recent research using an interference paradigm reveals that expert object (car) processing interferes with face processing. The level of interference was proportional to an individual's level of car expertise. These results may provide the most direct evidence to date that face and object recognition are not functionally independent.
25

Salama, Ramiz, and Mohamed Nour. "Security Technologies Using Facial Recognition". Global Journal of Computer Sciences: Theory and Research 13, no. 1 (31.03.2023): 01–27. http://dx.doi.org/10.18844/gjcs.v13i1.8294.

Abstract:
Faces are one of the simplest means of determining a person's identity. Face recognition is a unique identification method that uses an individual's traits to determine that individual's identity. The proposed recognition process is divided into two stages: face recognition and object recognition. Unless the item is very close, this procedure is very rapid for humans. The recognition of human faces is introduced next; this stage is then reproduced and used as a model for facial image recognition (face recognition). It is one of the professionally developed and well-researched biometric procedures. The Eigenface approach and the Fisherface method are two common face recognition pattern algorithms that have been developed. For the recognition of facial images, the Eigenface approach is based on reducing the dimensionality of the face space for facial traits using Principal Component Analysis (PCA). The major goal of applying PCA to face recognition is to generate eigenfaces (the face space) by identifying the eigenvectors corresponding to the face images' largest eigenvalues. Image processing and security systems are the areas of interest in this research, with face recognition integrated into a security system. Keywords: face recognition, security systems, camera, Python.
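The eigenface construction described above, projecting mean-centred faces onto the leading principal components, can be sketched in NumPy; the image size, component count and toy data are assumptions:

    # Minimal eigenface sketch: PCA on flattened, mean-centred face images.
    # Training data, image size and component count are illustrative assumptions.
    import numpy as np

    def build_eigenfaces(face_matrix, n_components=20):
        """face_matrix: (n_samples, n_pixels) array of flattened grayscale faces."""
        mean_face = face_matrix.mean(axis=0)
        centred = face_matrix - mean_face
        # SVD of the centred data gives the eigenvectors of the covariance matrix.
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        eigenfaces = vt[:n_components]                 # (n_components, n_pixels)
        weights = centred @ eigenfaces.T               # each face expressed in "face space"
        return mean_face, eigenfaces, weights

    def recognize(probe, mean_face, eigenfaces, weights, labels):
        w = (probe - mean_face) @ eigenfaces.T
        distances = np.linalg.norm(weights - w, axis=1)
        return labels[int(np.argmin(distances))]       # nearest neighbour in face space

    # Toy usage with random stand-in data (100 faces of 64x64 pixels).
    rng = np.random.default_rng(0)
    data = rng.random((100, 64 * 64))
    labels = np.arange(100)
    mean_face, eigenfaces, weights = build_eigenfaces(data)
    print(recognize(data[3], mean_face, eigenfaces, weights, labels))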
26

Holzinger, Yael, Shimon Ullman, Daniel Harari, Marlene Behrmann, and Galia Avidan. "Minimal Recognizable Configurations Elicit Category-selective Responses in Higher Order Visual Cortex". Journal of Cognitive Neuroscience 31, no. 9 (September 2019): 1354–67. http://dx.doi.org/10.1162/jocn_a_01420.

Abstract:
Visual object recognition is performed effortlessly by humans notwithstanding the fact that it requires a series of complex computations, which are, as yet, not well understood. Here, we tested a novel account of the representations used for visual recognition and their neural correlates using fMRI. The rationale is based on previous research showing that a set of representations, termed “minimal recognizable configurations” (MIRCs), which are computationally derived and have unique psychophysical characteristics, serve as the building blocks of object recognition. We contrasted the BOLD responses elicited by MIRC images, derived from different categories (faces, objects, and places), sub-MIRCs, which are visually similar to MIRCs, but, instead, result in poor recognition and scrambled, unrecognizable images. Stimuli were presented in blocks, and participants indicated yes/no recognition for each image. We confirmed that MIRCs elicited higher recognition performance compared to sub-MIRCs for all three categories. Whereas fMRI activation in early visual cortex for both MIRCs and sub-MIRCs of each category did not differ from that elicited by scrambled images, high-level visual regions exhibited overall greater activation for MIRCs compared to sub-MIRCs or scrambled images. Moreover, MIRCs and sub-MIRCs from each category elicited enhanced activation in corresponding category-selective regions including fusiform face area and occipital face area (faces), lateral occipital cortex (objects), and parahippocampal place area and transverse occipital sulcus (places). These findings reveal the psychological and neural relevance of MIRCs and enable us to make progress in developing a more complete account of object recognition.
27

Wu, Xiao Kang, Cheng Gang Xie, and Qin Lu. "Algorithm of Video Decomposition and Video Abstraction Generation Based on Face Detection and Recognition". Applied Mechanics and Materials 644-650 (September 2014): 4620–23. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4620.

Abstract:
In order to let users quickly browse the behaviors and expressions of objects of interest in a video, redundant information needs to be removed and key frames related to the object of interest extracted. This paper uses fast face detection based on skin color and recognition technology using spectrum feature matching to decompose the coupled video, classify the frames related to each object into different sets, and generate a separate video abstraction for each object. Experimental results show that the algorithm remains practical under different lighting conditions. Keywords: face detection, face recognition, key frame, video abstraction.
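A rough sketch of the skin-colour detection step mentioned in the abstract, thresholding in the YCrCb space; the threshold values are common heuristics assumed here, not the authors' tuned ones:

    # Illustrative skin-colour segmentation in YCrCb, often used as a fast
    # first pass for face detection. Threshold values are assumed heuristics.
    import cv2
    import numpy as np

    frame = cv2.imread("video_frame.jpg")                 # hypothetical key frame
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

    lower = np.array([0, 133, 77], dtype=np.uint8)        # assumed Cr/Cb skin bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)

    # Clean the mask and keep large connected blobs as candidate face regions.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 2000]
    print("candidate face regions (x, y, w, h):", candidates)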
28

Barry, Christopher, Robert A. Johnston, and Lesley C. Scanlan. "Are Faces "Special" Objects? Associative and Semantic Priming of Face and Object Recognition and Naming". Quarterly Journal of Experimental Psychology A 51, no. 4 (1.11.1998): 853–82. http://dx.doi.org/10.1080/027249898391422.

29

Barry, Christopher, Robert A. Johnston, and Lesley C. Scanlan. "Are Faces “Special” Objects? Associative and Semantic Priming of Face and Object Recognition and Naming". Quarterly Journal of Experimental Psychology Section A 51, no. 4 (November 1998): 853–82. http://dx.doi.org/10.1080/713755783.

30

McGugin, Rankin, Ana Van Gulick, and Isabel Gauthier. "Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance". Journal of Vision 15, no. 12 (1.09.2015): 428. http://dx.doi.org/10.1167/15.12.428.

31

Budiarsa, Rahmat, Retantyo Wardoyo, and Aina Musdholifah. "Face recognition for occluded face with mask region convolutional neural network and fully convolutional network: a literature review". International Journal of Electrical and Computer Engineering (IJECE) 13, no. 5 (1.10.2023): 5662. http://dx.doi.org/10.11591/ijece.v13i5.pp5662-5673.

Abstract:
Face recognition technology has been used in many ways, such as in authentication and identification processes. The object addressed here is a face image that does not carry complete facial information (an occluded face), which can result from acquisition from a different point of view or shooting the face from a different angle. This object was chosen because it can affect the detection and identification performance for the face image as a whole. Deep learning methods can be used to solve face recognition problems. Previous research focused more on face detection and recognition based on resolution and on detection of the face. The mask region convolutional neural network (mask R-CNN) method still has a deficiency in its segmentation stage, which lowers the accuracy of identifying faces with incomplete facial information. The segmentation used in mask R-CNN is a fully convolutional network (FCN). In this research, many FCN parameters will be explored and modified using the CNN backbone pooling layer, mask R-CNN will be modified for face identification, and, in addition, the bounding box regressor will be modified. It is expected that the modifications can provide the best recommendations based on accuracy.
32

Angadi, Shanmukhappa A., and Sanjeevakumar M. Hatture. "Face Recognition Through Symbolic Modeling of Face Graphs and Texture". International Journal of Pattern Recognition and Artificial Intelligence 33, no. 12 (November 2019): 1956008. http://dx.doi.org/10.1142/s0218001419560081.

Abstract:
Face recognition helps in authentication of the user using remotely acquired facial information. The dynamic nature of face images like pose, illumination, expression, occlusion, aging, etc. degrades the performance of the face recognition system. In this paper, a new face recognition system using facial images with illumination variation, pose variation and partial occlusion is presented. The facial image is described as a collection of three complete connected graphs and these graphs are represented as symbolic objects. The structural characteristics, i.e. graph spectral properties, energy of graph, are extracted and embedded in a symbolic object. The texture features from the cheeks portions are extracted using center symmetric local binary pattern (CS-LBP) descriptor. The global features of the face image, i.e. length and width, are also extracted. Further symbolic data structure is constructed using the above features, namely, the graph spectral properties, energy of graph, global features and texture features. User authentication is performed using a new symbolic similarity metric. The performance is investigated by conducting the experiments with AR face database and VTU-BEC-DB multimodal database. The experimental results demonstrate an identification rate of 95.97% and 97.20% for the two databases.
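The center-symmetric LBP (CS-LBP) texture descriptor used for the cheek regions can be sketched directly in NumPy; this radius-1, 8-neighbour variant with a 16-bin histogram is an assumption about the exact configuration used in the paper:

    # Sketch of a radius-1, 8-neighbour center-symmetric LBP (CS-LBP) descriptor.
    # Each interior pixel gets a 4-bit code from its four opposing neighbour pairs.
    import numpy as np

    def cs_lbp(gray, threshold=0.0):
        g = gray.astype(float)
        # Four centre-symmetric neighbour pairs around each interior pixel.
        pairs = [
            (g[:-2, :-2], g[2:, 2:]),     # top-left  vs bottom-right
            (g[:-2, 1:-1], g[2:, 1:-1]),  # top       vs bottom
            (g[:-2, 2:], g[2:, :-2]),     # top-right vs bottom-left
            (g[1:-1, 2:], g[1:-1, :-2]),  # right     vs left
        ]
        codes = np.zeros(g[1:-1, 1:-1].shape, dtype=np.uint8)
        for bit, (a, b) in enumerate(pairs):
            codes = codes | (((a - b) > threshold).astype(np.uint8) << bit)
        # A 16-bin histogram of the codes is the texture feature for the patch.
        hist, _ = np.histogram(codes, bins=16, range=(0, 16))
        return hist / max(hist.sum(), 1)

    patch = np.random.default_rng(0).integers(0, 256, size=(32, 32))  # stand-in cheek patch
    print(cs_lbp(patch))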
33

Dixon, Mike J., Daniel N. Bub, and Martin Arguin. "Semantic and Visual Determinants of Face Recognition in a Prosopagnosic Patient". Journal of Cognitive Neuroscience 10, no. 3 (May 1998): 362–76. http://dx.doi.org/10.1162/089892998562799.

Abstract:
Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josée Chouinard— three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.
34

Duchaine, B. C., G. Yovel, and K. Nakayama. "Severe acquired impairment of face detection and recognition with normal object recognition". Journal of Vision 5, no. 8 (16.03.2010): 39. http://dx.doi.org/10.1167/5.8.39.

35

Germine, Laura, Nathan Cashdollar, Emrah Düzel, and Bradley Duchaine. "A new selective developmental deficit: Impaired object recognition with normal face recognition". Cortex 47, no. 5 (May 2011): 598–607. http://dx.doi.org/10.1016/j.cortex.2010.04.009.

36

Wallis, Guy. "Temporal Order in Human Object Recognition Learning". Journal of Biological Systems 06, no. 03 (September 1998): 299–313. http://dx.doi.org/10.1142/s0218339098000200.

Abstract:
The view based approach to object recognition relies upon the co-activation of 2-D pictorial elements or features. This approach is limited to generalising recognition across transformations of objects in which considerable physical similarity is present in the stored 2-D images to which the object is being compared. It is, therefore, unclear how completely novel views of objects might correctly be assigned to known views of an object so as to allow correct recognition from any viewpoint. The answer to this problem may lie in the fact that in the real world we are presented with a further cue as to how we should associate these images, namely that we tend to view objects over extended periods of time. In this paper, neural network and human psychophysics data on face recognition are presented which support the notion that recognition learning can be affected by the order in which images appear, as well as their spatial similarity.
37

Kasthuri, S. "OBJECT DETECTION USING REAL TIME ALGORITHM WITH FACE RECOGNITION". International Journal of Research in Engineering and Technology 03, no. 02 (25.02.2014): 256–59. http://dx.doi.org/10.15623/ijret.2014.0302044.

38

Wallis, Guy, and Edmund T. Rolls. "INVARIANT FACE AND OBJECT RECOGNITION IN THE VISUAL SYSTEM". Progress in Neurobiology 51, no. 2 (February 1997): 167–94. http://dx.doi.org/10.1016/s0301-0082(96)00054-8.

39

Martínez-Horta, Saul, Andrea Horta-Barba, Jesús Perez-Perez, Mizar Antoran, Javier Pagonabarraga, Frederic Sampedro, and Jaime Kulisevsky. "Impaired face-like object recognition in premanifest Huntington's disease". Cortex 123 (February 2020): 162–72. http://dx.doi.org/10.1016/j.cortex.2019.10.015.

40

Su, Ching-Liang. "Manufacture Automation: Model and Object Recognition by Using Object Position Auto Locating Algorithm and Object Comparison Model". JALA: Journal of the Association for Laboratory Automation 5, no. 2 (April 2000): 61–65. http://dx.doi.org/10.1016/s1535-5535-04-00062-0.

Abstract:
This research uses the geometry matching technique to identify the different objects. The object is extracted from the background. The second moment is used to find the orientation and the center point of the extracted object. Since the second moment can find the orientations and the center point of the object, the perfect object and the test object can be aligned to the same orientation. Furthermore, these two images can be shifted to the same centroid. After this, the perfect object can be subtracted from the test face. By using the subtracted result, the objects can be classified. The techniques used in this research can very accurately classify different objects.
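The second-moment step described above maps onto standard image moments; a minimal sketch follows (the alignment and subtraction that come next are omitted, and the test mask is a made-up example):

    # Sketch: centroid and principal orientation of a binary object from image moments,
    # as used to align a test object with a reference before subtraction.
    import numpy as np

    def centroid_and_orientation(mask):
        """mask: 2-D binary array with the extracted object set to 1."""
        ys, xs = np.nonzero(mask)
        cx, cy = xs.mean(), ys.mean()                        # first moments -> centroid
        x, y = xs - cx, ys - cy
        mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
        theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)    # second moments -> orientation
        return (cx, cy), theta

    # Toy usage: an axis-aligned rectangle, so the orientation comes out near zero.
    mask = np.zeros((64, 64), dtype=np.uint8)
    mask[20:30, 10:50] = 1
    (cx, cy), theta = centroid_and_orientation(mask)
    print("centroid:", (round(cx, 1), round(cy, 1)), "orientation (rad):", round(float(theta), 3))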
41

Nursari, Sri Rezeki Candra, and Rizki Rahmatunisa. "Face Duplication Identifier Using Artificial Nerves". bit-Tech 6, no. 1 (25.08.2023): 78–86. http://dx.doi.org/10.32877/bt.v6i1.899.

Abstract:
The facial recognition system developed here is a basic identity verification system based on the natural features of human faces. The study covers duplicate passport identification, which checks each person's face against a sample of facial data. The data used in this study were 180 face samples in the training stage and 30 face samples in the testing stage. The face samples taken are forward-facing faces that are not obstructed by any object. Face image recognition in this study combines the GLCM method, color moments, shape extraction and a backpropagation algorithm. The recognition rate in the testing process is 78.83%.
42

Kitada, Ryo, Ingrid S. Johnsrude, Takanori Kochiyama, and Susan J. Lederman. "Functional Specialization and Convergence in the Occipito-temporal Cortex Supporting Haptic and Visual Identification of Human Faces and Body Parts: An fMRI Study". Journal of Cognitive Neuroscience 21, no. 10 (October 2009): 2027–45. http://dx.doi.org/10.1162/jocn.2009.21115.

Abstract:
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
Styles: APA, Harvard, Vancouver, ISO, etc.
43

Firasari, Elly, F. Lia Dwi Cahyanti, Fajar Sarasati and Widiastuti Widiastuti. "COMPARISON OF EIGENFACE AND FISHERFACE METHODS FOR FACE RECOGNITION". Jurnal Techno Nusa Mandiri 19, no. 2 (30.09.2022): 125–30. http://dx.doi.org/10.33480/techno.v19i2.3470.

Full text source
Abstract:
Abstract— Biometric information systems have been widely used in government, shopping centers, education, and even security, offering biological authentication so that systems can recognize their users more quickly. Parts of the human body with unique and specific characteristics are identified by a biometric system, one of which is the face. Matching facial images deals with objects that are never exactly the same, because parts of the face can change. These changes are caused by facial expressions, light intensity, shooting angle, or changes in facial accessories. Consequently, the same object with several differences must still be recognized as the same object. In this study, the data used were 388 face images and the test data consisted of 30 face images. Before a face is tested, preprocessing and feature extraction are carried out using the Haar Cascade Classifier, and detection is then performed using Eigenface and Fisherface. Based on the results, the Fisherface method is more accurate and efficient than the Eigenface algorithm: Fisherface achieves an accuracy of 88%, while Eigenface achieves 76%. Keywords – Haar Cascade Classifier, Eigenface, Fisherface.
Styles: APA, Harvard, Vancouver, ISO, etc.
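A sketch of the Haar Cascade preprocessing plus Eigenface/Fisherface comparison described above, using OpenCV. The cv2.face module ships with the opencv-contrib-python build; the crop size and detector parameters are assumptions:

```python
import cv2
import numpy as np

# Haar Cascade for the face-cropping preprocessing step.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(gray, size=(100, 100)):
    """Detect the largest face in a grayscale image and resize it."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], size)

def evaluate(recognizer, train, test):
    """Train a recognizer on (face image, label) pairs and report accuracy."""
    imgs, labels = zip(*train)
    recognizer.train(list(imgs), np.array(labels))
    correct = sum(recognizer.predict(img)[0] == label for img, label in test)
    return correct / len(test)

# The two methods compared in the paper.
eigen = cv2.face.EigenFaceRecognizer_create()
fisher = cv2.face.FisherFaceRecognizer_create()
```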
44

Edelman, Shimon. "Spanning the Face Space". Journal of Biological Systems 06, no. 03 (September 1998): 265–79. http://dx.doi.org/10.1142/s0218339098000182.

Full text source
Abstract:
The paper outlines a computational approach to face representation and recognition, inspired by two major features of biological perceptual systems: graded-profile overlapping receptive fields, and object-specific responses in the higher visual areas. This approach, according to which a face is ultimately represented by its similarities to a number of reference faces, led to the development of a comprehensive theory of object representation in biological vision, and to its subsequent psychophysical exploration and computational modeling.
Styles: APA, Harvard, Vancouver, ISO, etc.
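An illustrative NumPy sketch of the idea of representing a face by its similarities to a set of reference faces and then comparing faces in that similarity space. The Gaussian-kernel similarity and nearest-neighbour decision rule are assumptions, not the paper's model:

```python
import numpy as np

def similarity(x, y, sigma=1.0):
    """Graded, overlapping response of a 'reference face' unit:
    a Gaussian kernel over feature-space distance (an assumed choice)."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def face_space_code(face, reference_faces, sigma=1.0):
    """Represent a face by its vector of similarities to the references."""
    return np.array([similarity(face, r, sigma) for r in reference_faces])

def recognize(probe, gallery, reference_faces):
    """Nearest neighbour in the similarity space spanned by the references.
    'gallery' maps identity labels to stored feature vectors."""
    probe_code = face_space_code(probe, reference_faces)
    codes = {name: face_space_code(f, reference_faces)
             for name, f in gallery.items()}
    return min(codes, key=lambda n: np.linalg.norm(codes[n] - probe_code))
```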
45

Junaid, Mohd Wasiuddin. "Image Captioning with Face Recognition using Transformers". International Journal for Research in Applied Science and Engineering Technology 10, no. 1 (31.01.2022): 1426–32. http://dx.doi.org/10.22214/ijraset.2022.40057.

Full text source
Abstract:
Abstract: The process of generating text from images is called image captioning. It requires not only recognition of the objects and the scene but also the ability to analyze their state and identify the relationships among them. Image captioning therefore integrates the fields of computer vision and natural language processing. We introduce a novel image captioning model that is capable of recognizing human faces in a given image using a transformer model. The proposed Faster R-CNN-Transformer architecture comprises feature extraction from images, extraction of semantic keywords from captions, and encoder-decoder transformers. Faster R-CNN is implemented for face recognition, and image features are extracted using InceptionV3. The model aims to identify and recognize the known faces in the images. The Faster R-CNN module creates a bounding box around each face, which helps in better interpretation of the image and its caption. The dataset used in this model contains images of celebrity faces with captions that include the celebrity names, covering 232 celebrities in total. Due to the small size of the dataset, we augmented the images and added 100 images with their corresponding captions to increase the vocabulary size of our model. BLEU and METEOR scores were generated to evaluate the accuracy and quality of the generated captions. Keywords: Image Captioning, Faster R-CNN, Transformers, BLEU score, METEOR score.
Styles: APA, Harvard, Vancouver, ISO, etc.
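A hedged sketch of the detection and feature-extraction front end described above, using torchvision (≥ 0.13 assumed). The stock Faster R-CNN weights are trained on COCO, so a face-tuned checkpoint is assumed; the transformer caption decoder that would consume these outputs is not shown:

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Detector for face bounding boxes (a face-tuned checkpoint is assumed).
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# InceptionV3 backbone for global image features, as in the abstract.
inception = torchvision.models.inception_v3(weights="DEFAULT").eval()
inception.fc = torch.nn.Identity()  # keep the 2048-d pooled features

@torch.no_grad()
def encode_image(image):
    """image: float tensor (3, H, W) with values in [0, 1].
    Returns face boxes and a global feature vector; both would be fed,
    together with caption tokens, to an encoder-decoder transformer."""
    boxes = detector([image])[0]["boxes"]                    # (N, 4)
    resized = F.interpolate(image.unsqueeze(0), size=(299, 299),
                            mode="bilinear", align_corners=False)
    features = inception(resized).squeeze(0)                 # (2048,)
    return boxes, features
```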
46

Avidan, Galia, Michal Harel, Talma Hendler, Dafna Ben-Bashat, Ehud Zohary and Rafael Malach. "Contrast Sensitivity in Human Visual Areas and Its Relationship to Object Recognition". Journal of Neurophysiology 87, no. 6 (1.06.2002): 3102–16. http://dx.doi.org/10.1152/jn.2002.87.6.3102.

Full text source
Abstract:
An important characteristic of visual perception is the fact that object recognition is largely immune to changes in viewing conditions. This invariance is obtained within a sequence of ventral stream visual areas beginning in area V1 and ending in high order occipito-temporal object areas (the lateral occipital complex, LOC). Here we studied whether this transformation could be observed in the contrast response of these areas. Subjects were presented with line drawings of common objects and faces in five different contrast levels (0, 4, 6, 10, and 100%). Our results show that indeed there was a gradual trend of increasing contrast invariance moving from area V1, which manifested high sensitivity to contrast changes, to the LOC, which showed a significantly higher degree of invariance at suprathreshold contrasts (from 10 to 100%). The trend toward increased invariance could be observed for both face and object images; however, it was more complete for the face images, while object images still manifested substantial sensitivity to contrast changes. Control experiments ruled out the involvement of attention effects or hemodynamic “ceiling” in producing the contrast invariance. The transition from V1 to LOC was gradual with areas along the ventral stream becoming increasingly contrast-invariant. These results further stress the hierarchical and gradual nature of the transition from early retinotopic areas to high order ones, in the build-up of abstract object representations.
Styles: APA, Harvard, Vancouver, ISO, etc.
47

Dodson, C. T. J., John Soldera and Jacob Scharcanski. "Some Information Geometric Aspects of Cyber Security by Face Recognition". Entropy 23, no. 7 (9.07.2021): 878. http://dx.doi.org/10.3390/e23070878.

Full text source
Abstract:
Secure user access to devices and datasets is widely enabled by fingerprint or face recognition. Organization of the necessarily large secure digital object datasets, with objects having content that may consist of images, text, video or audio, involves efficient classification and feature retrieval processing. This usually will require multidimensional methods applicable to data that is represented through a family of probability distributions. Then information geometry is an appropriate context in which to provide for such analytic work, whether with maximum likelihood fitted distributions or empirical frequency distributions. The important provision is of a natural geometric measure structure on families of probability distributions by representing them as Riemannian manifolds. Then the distributions are points lying in this geometrical manifold, different features can be identified and dissimilarities computed, so that neighbourhoods of objects nearby a given example object can be constructed. This can reveal clustering and projections onto smaller eigen-subspaces which can make comparisons easier to interpret. Geodesic distances can be used as a natural dissimilarity metric applied over data described by probability distributions. Exploring this property, we propose a new face recognition method which scores dissimilarities between face images by multiplying geodesic distance approximations between 3-variate RGB Gaussians representative of colour face images, and also obtaining joint probabilities. The experimental results show that this new method is more successful in recognition rates than published comparative state-of-the-art methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
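The paper scores dissimilarity by combining geodesic-distance approximations between 3-variate RGB Gaussians. As a deliberately simplified illustration (not the paper's approximation), the sketch below treats the three colour channels as independent univariate Gaussians, for which the Fisher-Rao geodesic distance has a closed form, and multiplies the per-channel distances:

```python
import numpy as np

def fisher_rao_gaussian_1d(mu1, sigma1, mu2, sigma2):
    """Closed-form Fisher-Rao geodesic distance between two univariate
    normals N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    num = (mu1 - mu2) ** 2 + 2.0 * (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (4.0 * sigma1 * sigma2))

def channel_gaussians(rgb):
    """Fit an independent Gaussian to each RGB channel of a face image."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    return pixels.mean(axis=0), pixels.std(axis=0) + 1e-6

def face_dissimilarity(rgb_a, rgb_b):
    """Combine the per-channel geodesic distances into one score.
    (The paper uses 3-variate Gaussian approximations instead; the
    per-channel product here is only an illustrative simplification.)"""
    mu_a, sd_a = channel_gaussians(rgb_a)
    mu_b, sd_b = channel_gaussians(rgb_b)
    d = [fisher_rao_gaussian_1d(mu_a[c], sd_a[c], mu_b[c], sd_b[c])
         for c in range(3)]
    return float(np.prod(d))
```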
48

Cui, Wei, and Wei Qi Yan. "A Scheme for Face Recognition in Complex Environments". International Journal of Digital Crime and Forensics 8, no. 1 (January 2016): 26–36. http://dx.doi.org/10.4018/ijdcf.2016010102.

Full text source
Abstract:
In this paper, the authors propose a scheme for human face recognition in complex environments. The proposed scheme consists of three phases: moving object removal, face detection, and face recognition. It could be applied to specific environments such as computer users in offices, shopping malls, and reception areas, or pokie machine gamblers in casinos. In these environments, the target human face to be recognized is treated as the foreground and the moving objects (such as cars, walking persons, etc.) as the background. The objective of this paper is to implement a scheme for human face recognition that improves recognition precision and reduces false alarms. The scheme can be applied to prevent computer users or gamblers from sitting too long in front of screens in offices or pokie machines in casinos. To the best of the authors' knowledge, this is the first time face recognition in complex environments has been taken into consideration.
Styles: APA, Harvard, Vancouver, ISO, etc.
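A sketch of how the three phases could be chained with OpenCV: background subtraction flags moving regions, a Haar cascade finds faces, and only faces in largely static regions are passed on to the recognizer. The choice of MOG2, the motion threshold, and the recognition step are assumptions for illustration, not the authors' exact scheme:

```python
import cv2

# Phase 1: background subtraction marks moving regions so they can be ignored.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

# Phase 2: face detector for the (relatively static) foreground target.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def static_faces(frame, motion_thresh=0.05):
    """Return face boxes whose region shows little motion, suppressing
    moving distractors such as cars or passers-by in the background."""
    motion = subtractor.apply(frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        moving_ratio = cv2.countNonZero(motion[y:y + h, x:x + w]) / float(w * h)
        if moving_ratio < motion_thresh:
            faces.append((x, y, w, h))
    # Phase 3 (not shown): pass the cropped faces to a face recognizer.
    return faces
```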
49

Rahul, G. Sai. "Face Recognition based Attendance System". International Journal for Research in Applied Science and Engineering Technology 9, no. VI (30.06.2021): 4448–55. http://dx.doi.org/10.22214/ijraset.2021.35859.

Full text source
Abstract:
The human face is a natural trait and a crucial part of the human body that can uniquely identify an individual. In the current system, roll numbers are called out by teachers and presence or absence is marked accordingly, which is time consuming and ambiguous, leading to inaccurate and inefficient attendance marking. The productive time of the class can be used far more efficiently by implementing an automated attendance system. The main purpose of this project is to build a face recognition-based attendance monitoring system for any educational institution or organization where attendance marking is a demanding task. It upgrades the current attendance system into one that is more efficient and effective than before. The attendance system uses Haar Cascade, a machine learning object detection algorithm, to identify faces in an image or a real-time video, and the Local Binary Pattern Histogram (LBPH) face recognizer algorithm to extract and compare features, implemented with Python and the OpenCV libraries; it saves time, identifies students efficiently, and eliminates the chance of proxy attendance. The model integrates a camera that captures an input image, and a training database is created by training the system with the faces of the authorized students.
Styles: APA, Harvard, Vancouver, ISO, etc.
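A minimal sketch of the Haar Cascade + LBPH pipeline the abstract describes, using OpenCV (the cv2.face module ships with opencv-contrib-python). The crop size, confidence threshold, and enrolment format are assumptions:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def detect_face(gray):
    """Return the largest detected face region resized to 200x200, or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], (200, 200))

def enroll(samples):
    """samples: list of (grayscale image, student_id) pairs for training."""
    pairs = [(detect_face(img), sid) for img, sid in samples]
    faces, ids = zip(*[(f, sid) for f, sid in pairs if f is not None])
    recognizer.train(list(faces), np.array(ids))

def mark_attendance(frame, threshold=70.0):
    """Predict the student present in a camera frame; lower LBPH
    confidence values mean a closer match."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = detect_face(gray)
    if face is None:
        return None
    student_id, confidence = recognizer.predict(face)
    return student_id if confidence < threshold else None
```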
50

Yu, Jimin, Xin Zhang, Tao Wu, Huilan Pan and Wei Zhang. "A Face Detection and Standardized Mask-Wearing Recognition Algorithm". Sensors 23, no. 10 (10.05.2023): 4612. http://dx.doi.org/10.3390/s23104612.

Full text source
Abstract:
In the era of coronavirus disease (COVID-19), wearing a mask could effectively protect people from the risk of infection and largely reduce transmission in public places. To prevent the spread of the virus, instruments are needed in public places to monitor whether people are wearing masks, which places higher requirements on the accuracy and speed of detection algorithms. To meet the demand for high accuracy and real-time monitoring, we propose a single-stage approach based on YOLOv4 to detect the face and determine whether the mask is worn in the standardized way. In this approach, we propose a new feature pyramid network based on the attention mechanism to reduce the loss of object information that can be caused by sampling and pooling in convolutional neural networks. The network is able to deeply mine the feature map for spatial and channel factors, and the multi-scale feature fusion equips the feature map with location and semantic information. Based on the complete intersection over union (CIoU), a penalty function based on the norm is proposed to improve positioning accuracy, which is more accurate for the detection of small objects; the new bounding box regression function is called Norm CIoU (NCIoU). This function is applicable to various object-detection bounding box regression tasks. A combination of the two functions is used to calculate the confidence loss, mitigating the algorithm's bias towards determining that no objects are present in the image. Moreover, we provide a dataset for recognizing faces and masks (RFM) that includes 12,133 realistic images. The dataset contains three categories: face, standardized mask, and non-standardized mask. Experiments conducted on the dataset demonstrate that the proposed approach achieves 69.70% mAP@.5:.95 and 73.80% AP75, outperforming the compared methods.
Styles: APA, Harvard, Vancouver, ISO, etc.
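The paper's NCIoU adds a norm-based penalty to CIoU, but the abstract does not specify that penalty, so the sketch below implements only the standard CIoU regression loss it builds on, with boxes given as (x1, y1, x2, y2):

```python
import math

def ciou_loss(box_p, box_g, eps=1e-9):
    """Standard Complete-IoU loss between a predicted and a ground-truth
    box, each given as (x1, y1, x2, y2). Returns 1 - CIoU."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # Intersection over union.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_g = (gx2 - gx1) * (gy2 - gy1)
    iou = inter / (area_p + area_g - inter + eps)

    # Normalised squared distance between the box centres (DIoU term),
    # measured against the diagonal of the smallest enclosing box.
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0

    # Aspect-ratio consistency term.
    v = (4.0 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1 + eps))
                                - math.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
    alpha = v / (1.0 - iou + v + eps)

    return 1.0 - (iou - rho2 / c2 - alpha * v)
```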
