Academic literature on the topic 'Face and Object Matching'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Face and Object Matching.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Face and Object Matching"

1

Newell, F. N. "Searching for Objects in the Visual Periphery: Effects of Orientation." Perception 25, no. 1_suppl (August 1996): 110. http://dx.doi.org/10.1068/v96l1111.

Full text
Abstract:
Previous studies have found that the recognition of familiar objects is dependent on the orientation of the object in the picture plane. Here the time taken to locate rotated objects in the periphery was examined. Eye movements were also recorded. In all experiments, familiar objects were arranged in a clock face display. In experiment 1, subjects were instructed to locate a match to a central, upright object from amongst a set of randomly rotated objects. The target object was rotated in the frontoparallel plane. Search performance was dependent on rotation, yielding the classic ‘M’ function found in recognition tasks. When matching a single object in the periphery, match times were dependent on the angular deviation between the central and target objects and showed no advantage for the upright (experiment 2). In experiment 3 the central object was shown either in the upright orientation or rotated by 120° from the upright. The target object was similarly rotated, giving four different match conditions. Distractor objects were aligned with the target object. Search times were faster when the central and target objects were aligned, and also when the central object was rotated and the target was upright. Search times were slower when matching a central upright object to a rotated target object. These results suggest that in simple tasks matching is based on image characteristics. However, in complex search tasks a contribution from the object's representation is made, which gives an advantage to the canonical, upright view in peripheral vision.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Ziyou, Ziliang Feng, Wei Wang, and Yanqiong Guo. "A 3D Face Modeling and Recognition Method Based on Binocular Stereo Vision and Depth-Sensing Detection." Journal of Sensors 2022 (July 15, 2022): 1–11. http://dx.doi.org/10.1155/2022/2321511.

Full text
Abstract:
The human face is an important channel for human interaction and is the most expressive part of the human body, with personalized and diverse characteristics. To improve the modeling speed as well as the recognition speed and accuracy of 3D faces, this paper proposes a laser-scanning binocular stereo vision imaging method based on binocular stereo vision and depth-sensing detection. Existing matching methods have weak anti-interference capability, cannot identify the spacing of objects quickly and efficiently, and produce overly large errors. The technique presented in this paper remedies these shortcomings, mainly by using the laser lines scanned onto the object as strong feature cues for matching the left and right views in binocular vision, and thus for measuring the depth of the object. The experiments first explore the effect of parameter variation in the laser-scanning binocular vision imaging system on the accuracy of measured object depth values in an indoor environment. For the experimental data, 68 facial feature points are selected for modeling; the cameras that photograph the faces are stereo-calibrated to obtain the parameters of the binocular vision imaging system; and, after stereo correction of the left and right camera images, the scanned laser lines are used as strong pixel-matching features for binocular camera matching, which achieves fast recognition with high accuracy. The system parameters with the highest accuracy are then selected to perform 3D reconstruction experiments on actual target objects in the indoor environment.
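As background for the depth-sensing step described in this abstract, depth in a rectified binocular pair follows from triangulating the disparity of matched pixels (here, pixels on the projected laser line). The following Python sketch illustrates the relation; the focal length and baseline values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth for a rectified stereo pair.

    disparity_px:     left-image x minus right-image x for matched pixels (pixels)
    focal_length_px:  focal length expressed in pixels
    baseline_m:       distance between the two camera centres (metres)
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full(disparity_px.shape, np.nan)
    valid = disparity_px > 0          # zero disparity means no match (or a point at infinity)
    depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
    return depth

# Example: a laser-line pixel matched at x = 412 (left) and x = 396 (right),
# with an assumed focal length of 800 px and a 12 cm baseline.
print(depth_from_disparity([412 - 396], 800.0, 0.12))   # -> [6.] metres
```

Because depth resolution at a given disparity improves with longer baselines and focal lengths, the choice of system parameters matters, which is consistent with the parameter exploration reported in the paper.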
APA, Harvard, Vancouver, ISO, and other styles
3

Biederman, Irving, and Peter Kalocsais. "Neurocomputational bases of object and face recognition." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1358 (August 29, 1997): 1203–19. http://dx.doi.org/10.1098/rstb.1997.0103.

Full text
Abstract:
A number of behavioural phenomena distinguish the recognition of faces and objects, even when members of a set of objects are highly similar. Because faces have the same parts in approximately the same relations, individuation of faces typically requires specification of the metric variation in a holistic and integral representation of the facial surface. The direct mapping of a hypercolumn-like pattern of activation onto a representation layer that preserves relative spatial filter values in a two-dimensional (2D) coordinate space, as proposed by C. von der Malsburg and his associates, may account for many of the phenomena associated with face recognition. An additional refinement, in which each column of filters (termed a ‘jet’) is centered on a particular facial feature (or fiducial point), allows selectivity of the input into the holistic representation to avoid incorporation of occluding or nearby surfaces. The initial hypercolumn representation also characterizes the first stage of object perception, but the image variation for objects at a given location in a 2D coordinate space may be too great to yield sufficient predictability directly from the output of spatial kernels. Consequently, objects can be represented by a structural description specifying qualitative (typically, non-accidental) characterizations of an object's parts, the attributes of the parts, and the relations among the parts, largely based on orientation and depth discontinuities (as shown by Hummel and Biederman). A series of experiments on the name priming or physical matching of complementary images (in the Fourier domain) of objects and faces documents that whereas face recognition is strongly dependent on the original spatial filter values, evidence from object recognition indicates strong invariance to these values, even when distinguishing among objects that are as similar as faces.
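The 'jet' mentioned in this abstract (a column of spatial filter responses centred on a fiducial point) can be sketched generically. The Python fragment below compares Gabor-jet magnitudes at a landmark; the kernel size, wavelengths, and orientations are arbitrary illustrative choices, not those of von der Malsburg's model.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel: a sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.exp(1j * 2.0 * np.pi * xr / wavelength)

def jet(image, cx, cy, wavelengths=(4, 8, 16), n_orientations=4, size=21, sigma=5.0):
    """Vector of Gabor magnitude responses (a 'jet') centred on the point (cx, cy)."""
    half = size // 2
    patch = image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    responses = []
    for lam in wavelengths:
        for theta in np.linspace(0.0, np.pi, n_orientations, endpoint=False):
            responses.append(np.abs(np.sum(patch * gabor_kernel(size, lam, theta, sigma))))
    return np.array(responses)

def jet_similarity(j1, j2):
    """Normalised dot product between two jets (1.0 = identical up to scale)."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

# Toy example: compare jets at two nearby points of a synthetic image.
img = np.random.default_rng(0).random((128, 128))
print(jet_similarity(jet(img, 64, 64), jet(img, 66, 64)))
```

In elastic-graph approaches, jets like these are computed at many fiducial points and the per-point similarities are combined, which is what makes the representation sensitive to the original spatial filter values.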
APA, Harvard, Vancouver, ISO, and other styles
4

Sanyal, Soubhik, Sivaram Prasad Mudunuri, and Soma Biswas. "Discriminative pose-free descriptors for face and object matching." Pattern Recognition 67 (July 2017): 353–65. http://dx.doi.org/10.1016/j.patcog.2017.02.016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Horwitz, Barry, Cheryl L. Grady, James V. Haxby, Mark B. Schapiro, Stanley I. Rapoport, Leslie G. Ungerleider, and Mortimer Mishkin. "Functional Associations among Human Posterior Extrastriate Brain Regions during Object and Spatial Vision." Journal of Cognitive Neuroscience 4, no. 4 (October 1992): 311–22. http://dx.doi.org/10.1162/jocn.1992.4.4.311.

Full text
Abstract:
Primate extrastriate visual cortex is organized into an occipitotemporal pathway for object vision and an occipitoparietal pathway for spatial vision. Correlations between normalized regional cerebral blood flow values (regional divided by global flows), obtained using H₂¹⁵O and positron emission tomography, were used to examine functional associations among posterior brain regions for these two pathways in 17 young men during performance of a face matching task and a dot-location matching task. During face matching, there was a significant correlation in the right hemisphere between an extrastriate occipital region that was equally activated during both the face matching and dot-location matching tasks and a region in inferior occipitotemporal cortex that was activated more during the face matching task. The corresponding correlation in the left hemisphere was not significantly different from zero. Significant intrahemispheric correlations among posterior regions were observed more often for the right than for the left hemisphere. During dot-location matching, many significant correlations were found among posterior regions in both hemispheres, but significant correlations between specific regions in occipital and parietal cortex shown to be reliably activated during this spatial vision test were found only in the right cerebral hemisphere. These results suggest that (1) correlational analysis of normalized rCBF can detect functional interactions between components of proposed brain circuits, and (2) face and dot-location matching depend primarily on functional interactions between posterior cortical areas in the right cerebral hemisphere. At the same time, left hemisphere cerebral processing may contribute more to dot-location matching than to face matching.
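The correlational method described here reduces to a Pearson correlation, computed across subjects, between regional flows that have each been divided by the subject's global flow. A minimal Python sketch of that computation (the array names and values are placeholders, not data from the study):

```python
import numpy as np

def normalized_rcbf(regional_flow, global_flow):
    """Divide each subject's regional flow by that subject's global flow."""
    return np.asarray(regional_flow, dtype=float) / np.asarray(global_flow, dtype=float)

def regional_correlation(region_a_flow, region_b_flow, global_flow):
    """Pearson correlation, across subjects, between two normalised regions."""
    a = normalized_rcbf(region_a_flow, global_flow)
    b = normalized_rcbf(region_b_flow, global_flow)
    return float(np.corrcoef(a, b)[0, 1])

# Example with made-up flows for five subjects.
occipital = [52.0, 48.5, 50.2, 47.9, 51.3]
temporal  = [49.1, 45.0, 47.8, 44.6, 48.9]
global_f  = [50.0, 46.0, 48.5, 45.5, 49.0]
print(regional_correlation(occipital, temporal, global_f))
```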
APA, Harvard, Vancouver, ISO, and other styles
6

Su, Ching-Liang. "Manufacture Automation: Model and Object Recognition by Using Object Position Auto Locating Algorithm and Object Comparison Model." JALA: Journal of the Association for Laboratory Automation 5, no. 2 (April 2000): 61–65. http://dx.doi.org/10.1016/s1535-5535-04-00062-0.

Full text
Abstract:
This research uses a geometry matching technique to identify different objects. The object is extracted from the background. The second moment is used to find the orientation and the center point of the extracted object. Since the second moment can find the orientation and the center point of the object, the perfect object and the test object can be aligned to the same orientation. Furthermore, these two images can be shifted to the same centroid. After this, the perfect object can be subtracted from the test face. By using the subtracted result, the objects can be classified. The techniques used in this research can classify different objects very accurately.
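As a rough illustration of the moment-based alignment idea summarized above (not the paper's exact procedure), the centroid and principal-axis orientation of a binary shape can be computed from second-order moments, both shapes brought to a canonical pose, and the aligned masks subtracted:

```python
import numpy as np
from scipy import ndimage

def centroid_and_orientation(mask):
    """Centroid and principal-axis angle of a binary object mask, from image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()              # second central moments
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

def align_to_canonical(mask):
    """Rotate the principal axis onto the x-axis and centre the centroid in the frame."""
    _, theta = centroid_and_orientation(mask)
    rotated = ndimage.rotate(mask.astype(float), np.degrees(theta),
                             reshape=False, order=0) > 0.5
    (cx, cy), _ = centroid_and_orientation(rotated)
    shift = (mask.shape[0] / 2.0 - cy, mask.shape[1] / 2.0 - cx)
    return ndimage.shift(rotated.astype(float), shift, order=0) > 0.5

def mismatch(reference_mask, test_mask):
    """Pixels in the symmetric difference after alignment; small values = same object.

    Assumes both masks share the same shape and the object sits away from the border.
    """
    return int(np.count_nonzero(align_to_canonical(reference_mask)
                                ^ align_to_canonical(test_mask)))
```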
APA, Harvard, Vancouver, ISO, and other styles
7

Gauthier, Isabel, Marlene Behrmann, and Michael J. Tarr. "Can Face Recognition Really be Dissociated from Object Recognition?" Journal of Cognitive Neuroscience 11, no. 4 (July 1999): 349–70. http://dx.doi.org/10.1162/089892999563472.

Full text
Abstract:
We argue that the current literature on prosopagnosia fails to demonstrate unequivocal evidence for a disproportionate impairment for faces as compared to nonface objects. Two prosopagnosic subjects were tested for the discrimination of objects from several categories (face as well as nonface) at different levels of categorization (basic, subordinate, and exemplar levels). Several dependent measures were obtained including accuracy, signal detection measures, and response times. The results from Experiments 1 to 4 demonstrate that, in simultaneous-matching tasks, response times may reveal impairments with nonface objects in subjects whose error rates only indicate a face deficit. The results from Experiments 5 and 6 show that, given limited stimulus presentation times for face and nonface objects, the same subjects may demonstrate a deficit for both stimulus categories in sensitivity. In Experiments 7, 8 and 9, a match-to-sample task that places greater demands on memory led to comparable recognition sensitivity with both face and nonface objects. Regardless of object category, the prosopagnosic subjects were more affected by manipulations of the level of categorization than normal controls. This result raises questions regarding neuropsychological evidence for the modularity of face recognition, as well as its theoretical and methodological foundations.
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Xiao Kang, Cheng Gang Xie, and Qin Lu. "Algorithm of Video Decomposition and Video Abstraction Generation Based on Face Detection and Recognition." Applied Mechanics and Materials 644-650 (September 2014): 4620–23. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4620.

Full text
Abstract:
In order to let users quickly browse the behaviors and expressions of objects of interest in a video, redundant information must be removed and the key frames related to the object of interest extracted. This paper uses fast face detection based on skin color together with a recognition technique based on spectrum feature matching to decompose the coupled video, classify the frames related to each object into different sets, and generate a separate video abstraction for each object. Experimental results show that the algorithm has good practicability under different lighting conditions.
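Skin-colour face detection of the kind used in this paper is commonly implemented by thresholding the chrominance channels of the YCrCb colour space. A minimal OpenCV sketch, with widely cited illustrative threshold values rather than the authors' own:

```python
import numpy as np
import cv2

def skin_mask(frame_bgr):
    """Rough skin-colour segmentation; large blobs in the mask are face candidates."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used Cr/Cb bounds for skin tones (illustrative, not the paper's values).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes speckle so that face-sized blobs remain.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```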
APA, Harvard, Vancouver, ISO, and other styles
9

Morie, Takashi, and Teppei Nakano. "A face/object recognition system using coarse region segmentation and dynamic-link matching." International Congress Series 1269 (August 2004): 177–80. http://dx.doi.org/10.1016/j.ics.2004.05.132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Grady, Cheryl L., James V. Haxby, Barry Horwitz, Mark B. Schapiro, Stanley I. Rapoport, Leslie G. Ungerleider, Mortimer Mishkin, Richard E. Carson, and Peter Herscovitch. "Dissociation of Object and Spatial Vision in Human Extrastriate Cortex: Age-Related Changes in Activation of Regional Cerebral Blood Flow Measured with [15O]Water and Positron Emission Tomography." Journal of Cognitive Neuroscience 4, no. 1 (January 1992): 23–34. http://dx.doi.org/10.1162/jocn.1992.4.1.23.

Full text
Abstract:
We previously reported selective activation of regional cerebral blood flow (rCBF) in occipitotemporal cortex during a face matching task (object vision) and activation in superior parietal cortex during a dot-location matching task (spatial vision) in young subjects. The purpose of the present study was to determine the effects of aging on these extrastriate visual processing systems. Eleven young (mean age 27 ± 4 years) and nine old (mean age 72 ± 7 years) male subjects were studied. Positron emission tomographic scans were performed using a Scanditronix PC1024-7B tomograph and H₂¹⁵O to measure rCBF. To locate brain areas that were activated by the visual tasks, pixel-by-pixel difference images were computed between images from a control task and images from the face and dot-location matching tasks. Both young and old subjects showed rCBF activation during face matching primarily in occipitotemporal cortex, and activation of superior parietal cortex during dot-location matching. Statistical comparisons of these activations showed that the old subjects had more activation of occipitotemporal cortex during the spatial task and more activation of superior parietal cortex during the object task than did the young subjects. These results show less functional separation of the dorsal and ventral visual pathways in older subjects, and may reflect an age-related reduction in the processing efficiency of these visual cortical areas.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Face and Object Matching"

1

Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia, 2006. http://theses.library.uwa.edu.au/adt-WU2007.0046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.

Full text
Abstract:
[Truncated abstract] The aim of visual recognition is to identify objects in a scene and estimate their pose. Object recognition from 2D images is sensitive to illumination, pose, clutter and occlusions. Object recognition from range data on the other hand does not suffer from these limitations. An important paradigm of recognition is model-based whereby 3D models of objects are constructed offline and saved in a database, using a suitable representation. During online recognition, a similar representation of a scene is matched with the database for recognizing objects present in the scene . . . The tensor representation is extended to automatic and pose invariant 3D face recognition. As the face is a non-rigid object, expressions can significantly change its 3D shape. Therefore, the last part of this thesis investigates representations and matching techniques for automatic 3D face recognition which are robust to facial expressions. A number of novelties are proposed in this area along with their extensive experimental validation using the largest available 3D face database. These novelties include a region-based matching algorithm for 3D face recognition, a 2D and 3D multimodal hybrid face recognition algorithm, fully automatic 3D nose ridge detection, fully automatic normalization of 3D and 2D faces, a low cost rejection classifier based on a novel Spherical Face Representation, and finally, automatic segmentation of the expression insensitive regions of a face.
APA, Harvard, Vancouver, ISO, and other styles
3

Tewes, Andreas H. "A Flexible Object Model for Encoding and Matching Human Faces." Aachen: Shaker, 2006. http://d-nb.info/1170529097/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Malla, Amol Man. "Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/2111.

Full text
Abstract:
A device capable of continuously monitoring an individual’s levels of alertness in real-time is highly desirable for preventing drowsiness and lapse related accidents. This thesis presents the development of a non-intrusive and light-insensitive video-based system that uses computer-vision methods to localize face, eye, and eyelid positions to measure level of eye closure within an image, which, in turn, can be used to identify visible facial signs associated with drowsiness and behavioural microsleeps. The system was developed to be non-intrusive and light-insensitive to make it practical and end-user compliant. To non-intrusively monitor the subject without constraining their movement, the video was collected by placing a camera, a near-infrared (NIR) illumination source, and an NIR-pass optical filter at an eye-to-camera distance of 60 cm from the subject. The NIR-illumination source and filter make the system insensitive to lighting conditions, allowing it to operate in both ambient light and complete darkness without visually distracting the subject. To determine the image characteristics and to quantitatively evaluate the developed methods, reference videos of nine subjects were recorded under four different lighting conditions with the subjects exhibiting several levels of eye closure, head orientations, and eye gaze. For each subject, a set of 66 frontal face reference images was selected and manually annotated with multiple face and eye features. The eye-closure measurement system was developed using a top-down passive feature-detection approach, in which the face region of interest (fROI), eye regions of interest (eROIs), eyes, and eyelid positions were sequentially localized. The fROI was localized using an existing Haar-object detection algorithm. In addition, a Kalman filter was used to stabilize and track the fROI in the video. The left and the right eROIs were localized by scaling the fROI with corresponding proportional anthropometric constants. The position of an eye within each eROI was detected by applying a template-matching method in which a pre-formed eye-template image was cross-correlated with the sub-images derived from the eROI. Once the eye position was determined, the positions of the upper and lower eyelids were detected using a vertical integral-projection of the eROI. The detected positions of the eyelids were then used to measure eye closure. The detection of fROI and eROI was very reliable for frontal-face images, which was considered sufficient for an alertness monitoring system as subjects are most likely facing straight ahead when they are drowsy or about to have a microsleep. Estimation of the y-coordinates of the eye, upper eyelid, and lower eyelid positions showed average median errors of 1.7, 1.4, and 2.1 pixels and average 90th percentile (worst-case) errors of 3.2, 2.7, and 6.9 pixels, respectively (1 pixel ≈ 1.3 mm in reference images). The average height of a fully open eye in the reference database was 14.2 pixels. The average median and 90th percentile errors of the eye and eyelid detection methods were reasonably low except for the 90th percentile error of the lower eyelid detection method. Poor estimation of the lower eyelid was the primary limitation for accurate eye-closure measurement.
The median error of fractional eye-closure (EC) estimation (i.e., the ratio of closed portions of an eye to average height when the eye is fully open) was 0.15, which was sufficient to distinguish between the eyes being fully open, half closed, or fully closed. However, compounding errors in the facial-feature detection methods resulted in a 90th percentile EC estimation error of 0.42, which was too high to reliably determine extent of eye-closure. The eye-closure measurement system was relatively robust to variation in facial-features except for spectacles, for which reflections can saturate much of the eye-image. Therefore, in its current state, the eye-closure measurement system requires further development before it could be used with confidence for monitoring drowsiness and detecting microsleeps.
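Two steps of the pipeline described above translate directly into short code: locating the eye by cross-correlating a pre-formed eye template with the eye region of interest, and converting eyelid positions into a fractional eye-closure (EC) value. The Python sketch below is illustrative only; the function names and the simplified row-projection eyelid step are assumptions, not the thesis's exact implementation.

```python
import numpy as np
import cv2

def locate_eye(eroi_gray, eye_template_gray):
    """Find the eye inside an eye region of interest by normalised cross-correlation."""
    scores = cv2.matchTemplate(eroi_gray, eye_template_gray, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, top_left = cv2.minMaxLoc(scores)
    h, w = eye_template_gray.shape[:2]
    centre = (top_left[0] + w // 2, top_left[1] + h // 2)
    return centre, best_score

def eyelid_rows(eye_patch_gray):
    """Crude eyelid estimate from a row-wise integral projection of the eye patch."""
    darkness = 255.0 - eye_patch_gray.mean(axis=1)      # dark rows (lashes/iris) score high
    rows = np.where(darkness > darkness.mean() + darkness.std())[0]
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max())             # upper and lower eyelid rows

def eye_closure(upper_row, lower_row, open_eye_height):
    """Fractional eye closure (EC): 0.0 for a fully open eye, 1.0 for a closed one."""
    return 1.0 - (lower_row - upper_row) / float(open_eye_height)
```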
APA, Harvard, Vancouver, ISO, and other styles
5

Morris, Ryan L. "Hand/Face/Object." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent155655052646378.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lennartsson, Mattias. "Object Recognition with Cluster Matching." Thesis, Linköping University, Department of Electrical Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-51494.

Full text
Abstract:

Within this thesis an algorithm for object recognition called Cluster Matching has been developed, implemented and evaluated. The image information is sampled at arbitrary sample points, instead of interest points, and local image features are extracted. These sample points are used as a compact representation of the image data and can quickly be searched for prior known objects. The algorithm is evaluated on a test set of images and the result is surprisingly reliable and time efficient.

APA, Harvard, Vancouver, ISO, and other styles
7

Havard, Catriona. "Eye movement strategies during face matching." Thesis, University of Glasgow, 2007. http://theses.gla.ac.uk/91/.

Full text
Abstract:
Although there is a large literature on face recognition, less is known about the process of face matching, i.e., deciding whether two photographs depict the same person. The research described here examines viewers’ strategies for matching faces, and addresses the issue of which parts of a face are important for this task. Consistent with previous research, several eye-tracking experiments demonstrated a bias to the eye region when looking at faces. In some studies, there was a scanning strategy whereby only one eye on each face was viewed (the left eye on the right face and the right eye on the left face). However, viewing patterns and matching performance could be influenced by manipulating the way the face pair was presented: through face inversion, changing the distance between the two faces and varying the layout. There was a strong bias to look at the face on the left first, and then to look at the face on the right. A left visual field bias for individual faces has been found in a number of previous studies, but this is the first time it has been reported using pairs of faces in a matching task. The bias to look first at the item on the left was also found when trying to match pairs of similar line drawings of objects and therefore is not specific to face stimuli. Finally, the experiments in this thesis suggest that the way face pairs are presented can influence viewers’ accuracy on a matching task, as well as the way in which these faces are viewed. This suggests that the layout of face pairs for matching might be important in real world settings, such as the attempt to identify criminals from security cameras.
APA, Harvard, Vancouver, ISO, and other styles
8

Dowsett, Andrew James. "Methods for improving unfamiliar face matching." Thesis, University of Aberdeen, 2015. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=228194.

Full text
Abstract:
Matching unfamiliar faces is known to be a very difficult task. Yet, despite this, we frequently rely on this method to verify people's identity in high security situations, such as at the airport. Because of such security implications, recent research has focussed on investigating methods to improve our ability to match unfamiliar faces. This has involved methods for improving the document itself, such that photographic-ID presents a better representation of an individual, or training matchers to be better at the task. However, to date, no method has demonstrated significant improvements that would allow the technique to be put into practice in the real world. The experiments in this thesis therefore further explore methods to improve unfamiliar face matching. In the first two chapters both variability and feedback are examined to determine if these previously used techniques do produce reliable improvements. Results show that variability is only of use when training to learn a specific identity, and feedback only leads to improvements when the task is difficult. In the final chapter, collaboration is explored as a new method for improving unfamiliar face matching in general. Asking two people to perform the task together did produce consistent accuracy improvements, and importantly, also demonstrated individual training benefits. Overall, the results further demonstrate that unfamiliar face matching is difficult, and although finding methods to improve this is not straightforward, collaboration does appear to be successful and worth exploring further. The findings are discussed in relation to previous attempts at improving unfamiliar face matching, and the effect these may have on real world applications.
APA, Harvard, Vancouver, ISO, and other styles
9

Harvard, Catriona. "Eye movements strategies during face matching." Thesis, University of Glasgow, 2007. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.502694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Whitney, Hannah L. "Object agnosia and face processing." Thesis, University of Southampton, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548326.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Face and Object Matching"

1

Information routing, correspondence finding, and object recognition in the brain. Berlin: Springer-Verlag, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lamdan, Yehezkel. Object recognition by affine invariant matching. New York: Courant Institute of Mathematical Sciences, New York University, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bastuscheck, C. Marc. Object recognition by 3-dimensional curve matching. New York: Courant Institute of Mathematical Sciences, New York University, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dawson, K. M. Implicit model matching as an approach to three-dimensional object recognition. Dublin: Trinity College, Department of Computer Science, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Raymond Shu Tak. Invariant object recognition based on elastic graph matching: Theory and applications. Amsterdam: IOS Press, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Wiskott, Laurenz. Labeled graphs and dynamic link matching for face recognition and scene analysis. Thun: Deutsch, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vision and separation: Between mother and baby. London: Free Association Books, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vision and separation: Between mother and baby. Northvale, N.J: J. Aronson, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Wright, Kenneth. Vision and separation: Between mother and baby. Northvale, N.J.: Jason Aronson, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Postoutenko, Kirill, ed. Totalitarian Communication. Bielefeld, Germany: transcript Verlag, 2010. http://dx.doi.org/10.14361/9783839413937.

Full text
Abstract:
Totalitarianism has been an object of extensive communicative research since its heyday: already in the late 1930s, such major cultural figures as George Orwell or Hannah Arendt were busy describing the visual and verbal languages of Stalinism and Nazism. After the war, many fashionable trends in social sciences and humanities (ranging from Begriffsgeschichte and Ego-Documentology to Critical Linguistics and Critical Discourse Analysis) were called upon to continue this media-centered trend in the face of increasing political determination of the burgeoning field. Nevertheless, the integration of historical, sociological and linguistic knowledge about totalitarian society on a firm factual ground remains a task for the future. This book is the first step in this direction. By using history and theory of communication as an integrative methodological device, it reaches out to those properties of totalitarian society which appear to be beyond the grasp of specific disciplines. Furthermore, this functional approach makes it possible to extend the analysis of communicative practices commonly associated with fascist Italy, Nazi Germany and the Soviet Union to other locations (France, the United States of America and Great Britain in the 1930s) or historical contexts (post-Soviet developments in Russia or Kyrgyzstan). This, in turn, leads to the revaluation of the very term »totalitarian«: no longer an ideological label or a stock attribute of historical narration, it gets a life of its own, defining a specific constellation of hierarchies, codes and networks within a given society.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Face and Object Matching"

1

Śluzek, Andrzej, Mariusz Paradowski, and Duanduan Yang. "Reinforcement of Keypoint Matching by Co-segmentation in Object Retrieval: Face Recognition Case Study." In Neural Information Processing, 34–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-34500-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Jin Ok, Jun Yeong Jang, and Chin Hyun Chung. "On a Face Detection with an Adaptive Template Matching and an Efficient Cascaded Object Detection." In Lecture Notes in Computer Science, 414–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11538356_43.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chang, Chun Young, and Jun Hwang. "On the Face Detection with Adaptive Template Matching and Cascaded Object Detection for Ubiquitous Computing Environment." In Computational Science and Its Applications – ICCSA 2005, 1204–12. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11424758_127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nastar, Chahab. "Face Recognition Using Deformable Matching." In Face Recognition, 206–29. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bennamoun, M., and G. J. Mamic. "Object Representation and Feature Matching." In Object Recognition, 161–94. London: Springer London, 2002. http://dx.doi.org/10.1007/978-1-4471-3722-1_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Tiebe, Oliver, Cong Yang, Muhammad Hassan Khan, Marcin Grzegorzek, and Dominik Scarpin. "Stripes-Based Object Matching." In Computer and Information Science, 59–72. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-40171-3_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Treiber, Marco. "Flexible Shape Matching." In An Introduction to Object Recognition, 117–43. London: Springer London, 2010. http://dx.doi.org/10.1007/978-1-84996-235-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Griffin, Jason W., and Natalie V. Motta-Mena. "Face and Object Recognition." In Encyclopedia of Evolutionary Psychological Science, 1–8. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-319-16999-6_2762-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Griffin, Jason W., and Natalie V. Motta-Mena. "Face and Object Recognition." In Encyclopedia of Evolutionary Psychological Science, 2876–83. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-319-19650-3_2762.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Moghaddam, Baback, and Alex Pentland. "Beyond Linear Eigenspaces: Bayesian Matching for Face Recognition." In Face Recognition, 230–43. Berlin, Heidelberg: Springer Berlin Heidelberg, 1998. http://dx.doi.org/10.1007/978-3-642-72201-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Face and Object Matching"

1

Sanyal, Soubhik, Sivaram Prasad Mudunuri, and Soma Biswas. "Discriminative Pose-Free Descriptors for Face and Object Matching." In 2015 IEEE International Conference on Computer Vision (ICCV). IEEE, 2015. http://dx.doi.org/10.1109/iccv.2015.437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dave, Parag, and Hiroshi Sakurai. "Maximal Volume Decomposition and its Application to Feature Recognition." In ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium collocated with the ASME 1995 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/cie1995-0788.

Full text
Abstract:
A method has been developed that decomposes an object having both planar and curved faces into volumes, called maximal volumes, using the halfspaces of the object. A maximal volume has as few concave edges as possible without introducing additional halfspaces. The object is first decomposed into minimal cells by extending the faces of the object. These minimal cells are then composed to form maximal volumes. The combinations of such minimal cells that result in maximal volumes are searched efficiently by examining the relationships among those minimal cells. With this decomposition method, a delta volume, which is the volume difference between the raw material and the finished part, is decomposed into maximal volumes. By subtracting maximal volumes from each other in different orders and applying graph matching to the resulting volumes, multiple interpretations of features can be generated.
APA, Harvard, Vancouver, ISO, and other styles
3

Sakurai, Hiroshi, and Chia-Wei Chin. "Defining and Recognizing Cavity and Protrusion by Volumes." In ASME 1993 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1993. http://dx.doi.org/10.1115/cie1993-0008.

Full text
Abstract:
In design and manufacturing, cavity features, such as holes and pockets, and protrusion features, such as bosses and ribs, are commonly used. In this work, cavity and protrusion in a solid object were defined with the volumes enclosed by the faces of the object and their extensions. These definitions of cavity and protrusion match our intuitive notions of cavity and protrusion better than the commonly used definitions that consider the convexity and concavity of edges. Together with an algorithm called “spatial decomposition and composition”, the definitions provide a method to find cavities and protrusions in solid models. By applying graph matching commonly used in feature recognition to the volumes of cavity and protrusion, all the features in a solid model can be recognized whether they intersect or not.
APA, Harvard, Vancouver, ISO, and other styles
4

Webb, Robert H. "Confocal scanning laser ophthalmoscope." In OSA Annual Meeting. Washington, D.C.: Optica Publishing Group, 1986. http://dx.doi.org/10.1364/oam.1986.tuo5.

Full text
Abstract:
A confocal scanning imager moves an illumination spot over the object and moves a (virtual) detector synchronously over the image. In the confocal scanning laser ophthalmoscope this is accomplished by reusing the source optics for detection. The common optical elements are all mirrors—either flat or spherical—and the scanners are positioned to compensate astigmatism due to mirror tilt. The source beam aperture at the horizontal scanner is small. Light returning from the eye is processed by the same elements but now the polygon’s facet is overfilled. A solid state (high quantum efficiency) detector may be at either pupillary or retinal conjugate plane in the descanned beam and still have proper throughput matching. Our 1-mm avalanche photodiode at a pupillary plane is preceded by interchangeable stops at an image (retinal) plane. Not only can we reject scattered light to a degree unusual for viewing the retina, but we choose selectively among direct and scattered components of the light returning from the eye. The retinal view is surprising. One (of many) consequences is that this ophthalmoscope gives crisp and complete retinal images in He-Ne light without dilation of the pupil.
APA, Harvard, Vancouver, ISO, and other styles
5

Hancock, Peter J. B., Alex H. McIntyre, and Josef Kittler. "Caricaturing to Improve Face Matching." In 2009 Symposium on Bio-inspired Learning and Intelligent Systems for Security (BLISS). IEEE, 2009. http://dx.doi.org/10.1109/bliss.2009.17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

ter Haar, Frank B., and Remco C. Veltkamp. "A 3D face matching framework." In 2008 IEEE International Conference on Shape Modeling and Applications (SMI). IEEE, 2008. http://dx.doi.org/10.1109/smi.2008.4547956.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Zhu, Jun-Yong, Wei-Shi Zheng, and Jianhuang Lai. "Transductive VIS-NIR face matching." In 2012 19th IEEE International Conference on Image Processing (ICIP 2012). IEEE, 2012. http://dx.doi.org/10.1109/icip.2012.6467140.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kroon, Bart, Alan Hanjalic, and Sander M. P. Maas. "Eye localization for face matching." In the 2008 international conference. New York, New York, USA: ACM Press, 2008. http://dx.doi.org/10.1145/1386352.1386401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Balikai, Anupriya, and Peter Hall. "Depiction Invariant Object Matching." In British Machine Vision Conference 2012. British Machine Vision Association, 2012. http://dx.doi.org/10.5244/c.26.56.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zardetto, Diego, Monica Scannapieco, and Tiziana Catarci. "Effective automated Object Matching." In 2010 IEEE 26th International Conference on Data Engineering (ICDE 2010). IEEE, 2010. http://dx.doi.org/10.1109/icde.2010.5447904.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Face and Object Matching"

1

Gyaourova, A., C. Kamath, and S. Cheung. Block Matching for Object Tracking. Office of Scientific and Technical Information (OSTI), October 2003. http://dx.doi.org/10.2172/15009731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Smith, David. Parallel approximate string matching applied to occluded object recognition. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.5608.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cass, Todd A. Feature Matching for Object Localization in the Presence of Uncertainty. Fort Belvoir, VA: Defense Technical Information Center, May 1990. http://dx.doi.org/10.21236/ada231405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Luo, Ming, Daniel DeMenthon, Xiaodong Yu, and David Doermann. SOFTCBIR: Object Searching in Videos Combining Keypoint Matching and Graduated Assignment. Fort Belvoir, VA: Defense Technical Information Center, May 2006. http://dx.doi.org/10.21236/ada448477.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Asari, Vijayan, Paheding Sidike, Binu Nair, Saibabu Arigela, Varun Santhaseelan, and Chen Cui. PR-433-133700-R01 Pipeline Right-of-Way Automated Threat Detection by Advanced Image Analysis. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), December 2015. http://dx.doi.org/10.55274/r0010891.

Full text
Abstract:
A novel algorithmic framework for the robust detection and classification of machinery threats and other potentially harmful objects intruding onto a pipeline right-of-way (ROW) is designed from three perspectives: visibility improvement, context-based segmentation, and object recognition/classification. In the first part of the framework, an adaptive image enhancement algorithm is utilized to improve the visibility of aerial imagery to aid in threat detection. In this technique, a nonlinear transfer function is developed to enhance the processing of aerial imagery with extremely non-uniform lighting conditions. In the second part of the framework, context-based segmentation is developed to eliminate regions from imagery that are not considered to be a threat to the pipeline. Context-based segmentation makes use of a cascade of pre-trained classifiers to search for regions that are not threats. The context-based segmentation algorithm accelerates threat identification and improves object detection rates. The last phase of the framework is an efficient object detection model. Efficient object detection follows a three-stage approach which includes extraction of the local phase in the image and the use of local phase characteristics to locate machinery threats. The local phase is an image feature extraction technique which partially removes the lighting variance and preserves the edge information of the object. Multiple orientations of the same object are matched and the correct orientation is selected using feature matching by histogram of local phase in a multi-scale framework. The classifier outputs locations of threats to the pipeline. The advanced automatic image analysis system is intended to be capable of detecting construction equipment along the ROW of pipelines with a very high degree of accuracy in comparison with manual threat identification by a human analyst.
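Local phase of the kind referred to in this report is typically obtained from a quadrature pair of band-pass filters, and a histogram of phase values can then be compared between a template and a candidate window. A rough Python sketch of that idea follows; the filter parameters, bin count, and histogram-intersection score are illustrative assumptions rather than the report's implementation.

```python
import numpy as np
from scipy import ndimage

def quadrature_kernels(size=15, wavelength=8.0, sigma=4.0, theta=0.0):
    """Even (cosine) and odd (sine) Gabor kernels forming a quadrature pair."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return (envelope * np.cos(2 * np.pi * xr / wavelength),
            envelope * np.sin(2 * np.pi * xr / wavelength))

def local_phase(image, **kernel_kwargs):
    """Per-pixel local phase: the angle between odd and even filter responses."""
    even_k, odd_k = quadrature_kernels(**kernel_kwargs)
    even = ndimage.convolve(image.astype(float), even_k, mode="reflect")
    odd = ndimage.convolve(image.astype(float), odd_k, mode="reflect")
    return np.arctan2(odd, even)

def phase_histogram(phase_patch, bins=16):
    """Normalised histogram of local phase over a patch."""
    hist, _ = np.histogram(phase_patch, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def histogram_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical normalised histograms."""
    return float(np.minimum(h1, h2).sum())
```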
APA, Harvard, Vancouver, ISO, and other styles
6

Schwartz, William Alexander. The Rise of the Far Right and the Domestication of the War on Terror. Goethe-Universität, Institut für Humangeographie, March 2022. http://dx.doi.org/10.21248/gups.62762.

Full text
Abstract:
Today in the United States, the notion that ‘the rise of the far right’ poses the greatest threat to democratic values, and by extension, to the nation itself, has slowly entered into common sense. The antecedent of this development is the object of our study. Explored through the prism of what we refer to as the domestication of the War on Terror, this publication adopts and updates the theoretical approach first forwarded in Policing the Crisis: Mugging, the State, the Law and Order (Hall et al. 1978). Drawing on this seminal work, a sequence of three disparate media events is explored as they unfold in the United States in mid-2015: the rise of the Trump campaign; the release of an op-ed in The New York Times warning of a rise in right-wing extremism; and a mass shooting at a historic African American church in Charleston, South Carolina. By the end of 2015, as these disparate events converge into what we call the public face of the rise of the far right phenomenon, we subsequently turn our attention to its origins in policing and the law in the wake of the global War on Terror and the Great Recession. It is only from there that we turn our attention to the political class struggle as expressed in the rise of 'populism' on the one hand, and the domestication of the War on Terror on the other, and in doing so, attempt to situate the role of the rise of the far right phenomenon within it.
APA, Harvard, Vancouver, ISO, and other styles
7

The Oil Industry Challenges and Strategic Responses. Universidad de Deusto, 2018. http://dx.doi.org/10.18543/fwgz8427.

Full text
Abstract:
Oil and gas prices and uncertainty in the main global markets are likely to have a profound effect on the decisions made by O&G companies regarding exploration, appraisal, development and operations. In addition to commodity prices, there has been increasing volatility in the relationships between industry, government policy makers and communities. Hence, the general object of this study is to analyze the evolution of the industry within the new landscape and to assess the challenges the O&G industry will face in the coming years and the strategies for responding to them. In addition, this is complemented by a description of the value chain operations and market aspects, as support and to facilitate comprehension. In summary, this document presents the in-depth, strategy-focused conclusions that can be drawn from critically reviewing the current value chain. In this document, Chapter 2 first analyzes the new landscape and challenges that O&G companies are facing with respect to the four subject areas considered to make up the new landscape: climate change policies and challenges, social concerns and new market trends, technological developments and applications, and regulations. Within each of these categories, a number of key developments and trends have been defined and described, along with the multiple challenges and decisions that industry players will face. The dynamics of demand and supply are discussed in Chapter 3, along with the future uncertainties and factors that will have a profound effect on this balance. Within this chapter, the evolution of investments in E&P is also discussed, leading on to aspects of investments with regard to refining, and subsequently portfolio management. As a kind of conclusion, Chapter 4 pairs the new landscape issues identified in Chapter 2 with seven general challenges and related strategies for the industry. Furthermore, a second level of challenge and response granularity has been identified, which companies will need to address in order to remain competitive in the new era of the O&G industry. These two chapters, which deal with the strategic responses and business models, should be read jointly, as they look at the current situation and future perspectives of the O&G industry, and how industry players may respond with different strategies, be they of a general or a more specific nature.
APA, Harvard, Vancouver, ISO, and other styles