Dissertations / Theses on the topic 'Face and Object Recognition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Face and Object Recognition.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.
Gathers, Ann D. "Developmental fMRI Study: Face and Object Recognition." Lexington, Ky. : [University of Kentucky Libraries], 2005. http://lib.uky.edu/ETD/ukyanne2005d00276/etd.pdf.
Title from document title page (viewed on November 4, 2005). Document formatted into pages; contains xi, 152 p. : ill. Includes abstract and vita. Includes bibliographical references (p. 134-148).
Nilsson, Linus. "Object Tracking and Face Recognition in Video Streams." Thesis, Umeå universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-58076.
Banarse, D. S. "A generic neural network architecture for deformation invariant object recognition." Thesis, Bangor University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.362146.
Collin, Charles Alain. "Effects of spatial frequency overlap on face and object recognition." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=36896.
Full textA second question that is examined concerns the effect of calibration of stimuli on recognition of spatially filtered images. Past studies using non-calibrated presentation methods have inadvertently introduced aberrant frequency content to their stimuli. The effect this has on recognition performance has not been examined, leading to doubts about the comparability of older and newer studies. Examining the impact of calibration on recognition is an ancillary goal of this dissertation.
Seven experiments examining the above questions are reported here. Results suggest that spatial frequency overlap had a strong effect on face recognition and a lesser effect on object recognition. Indeed, contrary to much previous research, it was found that the band of frequencies occupied by a face image had little effect on recognition, but that small variations in overlap had significant effects. This suggests that the overlap factor is important in understanding various phenomena in visual recognition. Overlap effects likely contribute to the apparent superiority of certain spatial bands for different recognition tasks, and to the inferiority of line drawings in face recognition. Results concerning the mnemonic representation of faces and objects suggest that both are encoded in a format that retains spatial frequency information, and do not support certain proposed fundamental differences in how these two stimulus classes are stored. Data on calibration generally show that non-calibration has little impact on visual recognition, suggesting that moderate confidence can be placed in the results of older studies.
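To make the band-overlap manipulation concrete, here is a minimal numpy sketch of band-pass filtering an image in the Fourier domain and of measuring the overlap, in octaves, between a study band and a test band. The band edges, the hard-edged filter and the octave-based overlap measure are illustrative assumptions made for this listing, not Collin's actual stimulus parameters or analysis.

```python
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """Keep only spatial frequencies between low_cpi and high_cpi (cycles/image)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt(xx ** 2 + yy ** 2)        # radial frequency in cycles/image
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def band_overlap(band_a, band_b):
    """Shared width of two frequency bands, expressed in octaves."""
    lo, hi = max(band_a[0], band_b[0]), min(band_a[1], band_b[1])
    return 0.0 if hi <= lo else float(np.log2(hi / lo))

# Example: a study face filtered to 8-16 cycles/image and a test face to 11-22,
# giving partial overlap between the learned and the probed frequency content.
rng = np.random.default_rng(0)
face = rng.random((128, 128))                  # stand-in for a face image
study, test = bandpass(face, 8, 16), bandpass(face, 11, 22)
print("overlap (octaves):", band_overlap((8, 16), (11, 22)))
```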
Higgs, David Robert. "Parts-based object detection using multiple views." Link to online version, 2005. https://ritdml.rit.edu/dspace/handle/1850/1000.
Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0046.
Mian, Ajmal Saeed. "Representations and matching techniques for 3D free-form object and face recognition." Connect to this title, 2006. http://theses.library.uwa.edu.au/adt-WU2007.0046.
Holub, Alex David. "Discriminative vs. generative object recognition: objects, faces, and the web." Advisor: Pietro Perona. Diss., Pasadena, Calif.: California Institute of Technology, 2007. http://resolver.caltech.edu/CaltechETD:etd-05312007-204007.
Full textVilaplana, Besler Verónica. "Region-based face detection, segmentation and tracking. framework definition and application to other objects." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/33330.
Full textUn dels problemes més importants en l'àrea de visió artificial és el reconeixement automàtic de classes d'objectes. En particular, la detecció de la classe de cares humanes és un problema que genera especial interès degut al gran nombre d'aplicacions que requereixen com a primer pas detectar les cares a l'escena. A aquesta tesis s'analitza el problema de detecció de cares com un problema conjunt de detecció i segmentació, per tal de localitzar de manera precisa les cares a l'escena amb màscares que arribin a precisions d'un píxel. Malgrat l'objectiu principal de la tesi és aquest, en el procés de trobar una solució s'ha intentat crear un marc de treball general i tan independent com fos possible del tipus d'objecte que s'està buscant. Amb aquest propòsit, la tècnica proposada fa ús d'un model jeràrquic d'imatge basat en regions, l'arbre binari de particions (BPT: Binary Partition Tree), en el qual els objectes s'obtenen com a unió de regions que provenen d'una partició de la imatge. En aquest treball, s'ha optimitzat el model per a les tasques de detecció i segmentació de cares. Per això, es proposen diferents criteris de fusió i de parada, els quals es comparen en un conjunt ampli d'experiments. En el sistema proposat, la variabilitat dins de la classe cara s'estudia dins d'un marc de treball d'aprenentatge automàtic. La classe cara es caracteritza fent servir un conjunt de descriptors, que es mesuren en els nodes de l'arbre, així com un conjunt de classificadors d'una única classe. El sistema està format per dos classificadors forts. Primer s'utilitza una cascada de classificadors binaris que realitzen una simplificació de l'espai de cerca i, posteriorment, s'aplica un conjunt de classificadors més complexes que produeixen la classificació final dels nodes de l'arbre. El sistema es testeja de manera exhaustiva sobre diferents bases de dades de cares, sobre les quals s'obtenen segmentacions precises provant així la robustesa del sistema en front a variacions d'escala, posició, orientació, condicions d'il·luminació i complexitat del fons de l'escena. A aquesta tesi es mostra també que la tècnica proposada per cares pot ser fàcilment adaptable a la detecció i segmentació d'altres classes d'objectes. Donat que la construcció del model d'imatge no depèn de la classe d'objecte que es pretén buscar, es pot detectar i segmentar diferents classes d'objectes fent servir, sobre el mateix model d'imatge, el model d'objecte apropiat. Nous models d'objecte poden ser fàcilment construïts mitjançant la selecció i l'entrenament d'un conjunt adient de descriptors i classificadors. Finalment, es proposa un mecanisme de seguiment. Aquest mecanisme combina l'eficiència de l'algorisme mean-shift amb l'ús de regions per fer el seguiment i segmentar les cares al llarg d'una seqüència de vídeo a la qual tant la càmera com la cara es poden moure. Aquest mètode s'estén al cas de seguiment d'altres objectes deformables, utilitzant una versió basada en regions de la tècnica de graph-cut per obtenir la segmentació final de l'objecte a cada imatge. Els experiments realitzats mostren que les dues versions del sistema de seguiment basat en l'algorisme mean-shift produeixen segmentacions acurades, fins i tot en entorns complicats com ara quan l'objecte i el fons de l'escena presenten colors similars o quan es produeix un moviment ràpid, ja sigui de la càmera o de l'objecte.
Gunn, Steve R. "Dual active contour models for image feature extraction." Thesis, University of Southampton, 1996. https://eprints.soton.ac.uk/250089/.
Full textFasel, Ian Robert. "Learning real-time object detectors probabilistic generative approaches /." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3216357.
Full textTitle from first page of PDF file (viewed July 24, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 87-91).
Clausen, Sally. "I never forget a face! : memory for faces and individual differences in spatial ability and gender." Honors in the Major Thesis, University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1394.
Full textBachelors
Sciences
Psychology
Kramer, Annika. "Model based methods for locating, enhancing and recognising low resolution objects in video." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/585.
Full textParkhi, Omkar Moreshwar. "Features and methods for improving large scale face recognition." Thesis, University of Oxford, 2015. https://ora.ox.ac.uk/objects/uuid:7704244a-b327-4e5c-a58e-7bfe769ed988.
Full textReiss, Jason Edward. "Object substitution masking what is the neural fate of the unreportable target? /." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 200 p, 2007. http://proquest.umi.com/pqdweb?did=1397916081&sid=9&Fmt=2&clientId=8331&RQT=309&VName=PQD.
Full textŠajboch, Antonín. "Sledování a rozpoznávání lidí na videu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255441.
Full textMoore, Viviene M. "The effects of age of acquisition in processing people's faces and names." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4836/.
Full textHolm, Linus. "Predictive eyes precede retrieval : visual recognition as hypothesis testing." Doctoral thesis, Umeå : Department of Psychology, Umeå University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1179.
Full textBichsel, Martin. "Strategies of robust object recognition for the automatic identification of human faces /." [S.l.] : [s.n.], 1991. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=9467.
Full textPapageorgiou, Constantine P. "A Trainable System for Object Detection in Images and Video Sequences." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/5566.
Full textRajnoha, Martin. "Určování podobnosti objektů na základě obrazové informace." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-437979.
Full textWang, Zeng. "Laser-based detection and tracking of dynamic objects." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:c7f2da08-fa1e-4121-b06b-31aad16ecddd.
Full textMorris, Ryan L. "Hand/Face/Object." Kent State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=kent155655052646378.
Full textHanafi, Marsyita. "Face recognition from face signatures." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/10566.
Full textHelmer, Scott. "Embodied object recognition." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42481.
Full textWells, William Mercer. "Statistical object recognition." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12606.
Full textIncludes bibliographical references (p. 169-177).
by William Mercer Wells, III.
Ph.D.
Figueroa, Flores Carola. "Visual Saliency for Object Recognition, and Object Recognition for Visual Saliency." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671964.
Full textEl reconocimiento de objetos para los seres humanos es un proceso instantáneo, preciso y extremadamente adaptable. Además, tenemos la capacidad innata de aprender nuevas categorias de objetos a partir de unos pocos ejemplos. El cerebro humano reduce la complejidad de los datos entrantes filtrando parte de la información y procesando las cosas que captan nuestra atención. Esto, combinado con nuestra predisposición biológica a responder a determinadas formas o colores, nos permite reconocer en una simple mirada las regiones más importantes o destacadas de una imagen. Este mecanismo se puede observar analizando en qué partes de las imágenes los sujetos ponen su atención; por ejemplo donde fijan sus ojos cuando se les muestra una imagen. La forma más precisa de registrar este comportamiento es rastrear los movimientos de los ojos mientras se muestran imágenes. La estimación computacional del ‘saliency’, tiene como objetivo diseñar algoritmos que, dada una imagen de entrada, estimen mapas de ‘saliency’. Estos mapas se pueden utilizar en una variada gama de aplicaciones, incluida la detección de objetos, la compresión de imágenes y videos y el seguimiento visual. La mayoría de la investigación en este campo se ha centrado en estimar automáticamente estos mapas de ‘saliency’, dada una imagen de entrada. En cambio, en esta tesis, nos propusimos incorporar la estimación de ‘saliency’ en un procedimiento de reconocimiento de objeto, puesto que, queremos investigar si los mapas de ‘saliency’ pueden mejorar los resultados de la tarea de reconocimiento de objetos. En esta tesis, identificamos varios problemas relacionados con la estimación del ‘saliency’ visual. Primero, pudimos determinar en qué medida se puede aprovechar la estimación del ‘saliency’ para mejorar el entrenamiento de un modelo de reconocimiento de objetos cuando se cuenta con escasos datos de entrenamiento. Para resolver este problema, diseñamos una red de clasificación de imágenes que incorpora información de ‘saliency’ como entrada. Esta red procesa el mapa de ‘saliency’ a través de una rama de red dedicada y utiliza las características resultantes para modular las características visuales estándar ascendentes de la entrada de la imagen original. Nos referiremos a esta técnica como clasificación de imágenes moduladas por prominencia (SMIC en inglés). En numerosos experimentos realizando sobre en conjuntos de datos de referencia estándar para el reconocimiento de objetos ‘fine-grained’, mostramos que nuestra arquitectura propuesta puede mejorar significativamente el rendimiento, especialmente en conjuntos de datos con datos con escasos datos de entrenamiento. Luego, abordamos el principal inconveniente del problema anterior: es decir, SMIC requiere explícitamente un algoritmo de ‘saliency’, el cual debe entrenarse en un conjunto de datos de ‘saliency’. Para resolver esto, implementamos un mecanismo de alucinación que nos permite incorporar la rama de estimación de ‘saliency’ en una arquitectura de red neuronal entrenada de extremo a extremo que solo necesita la imagen RGB como entrada. Un efecto secundario de esta arquitectura es la estimación de mapas de ‘saliency’. En varios experimentos, demostramos que esta arquitectura puede obtener resultados similares en el reconocimiento de objetos como SMIC pero sin el requisito de mapas de ‘saliency’ para entrenar el sistema. Finalmente, evaluamos la precisión de los mapas de ‘saliency’ que ocurren como efecto secundario del reconocimiento de objetos. 
Para ello, utilizamos un de conjuntos de datos de referencia para la evaluación de la prominencia basada en experimentos de seguimiento ocular. Sorprendentemente, los mapas de ‘saliency’ estimados son muy similares a los mapas que se calculan a partir de experimentos de seguimiento ocular humano. Nuestros resultados muestran que estos mapas de ‘saliency’ pueden obtener resultados competitivos en mapas de ‘saliency’ de referencia.
For humans, the recognition of objects is an almost instantaneous, precise and extremely adaptable process. Furthermore, we have the innate capability to learn new object classes from only a few examples. The human brain lowers the complexity of the incoming data by filtering out part of the information and only processing those things that capture our attention. This, combined with our biological predisposition to respond to certain shapes or colors, allows us to recognize in a simple glance the most important or salient regions of an image. This mechanism can be observed by analyzing on which parts of images subjects place attention, that is, where they fix their eyes when an image is shown to them. The most accurate way to record this behavior is to track eye movements while displaying images. Computational saliency estimation aims to identify to what extent regions or objects stand out with respect to their surroundings to human observers. Saliency maps can be used in a wide range of applications including object detection, image and video compression, and visual tracking. The majority of research in the field has focused on automatically estimating saliency maps given an input image. Instead, in this thesis, we set out to incorporate saliency maps in an object recognition pipeline: we want to investigate whether saliency maps can improve object recognition results.
In this thesis, we identify several problems related to visual saliency estimation. First, to what extent can the estimation of saliency be exploited to improve the training of an object recognition model when scarce training data is available? To solve this problem, we design an image classification network that incorporates saliency information as input. This network processes the saliency map through a dedicated network branch and uses the resulting characteristics to modulate the standard bottom-up visual characteristics of the original image input. We refer to this technique as saliency-modulated image classification (SMIC). In extensive experiments on standard benchmark datasets for fine-grained object recognition, we show that our proposed architecture can significantly improve performance, especially on datasets with scarce training data.
Next, we address the main drawback of the above pipeline: SMIC requires an explicit saliency algorithm that must be trained on a saliency dataset. To solve this, we implement a hallucination mechanism that allows us to incorporate the saliency estimation branch in an end-to-end trained neural network architecture that only needs the RGB image as input. A side effect of this architecture is the estimation of saliency maps. In experiments, we show that this architecture can obtain results on object recognition similar to SMIC, but without requiring ground-truth saliency maps to train the system.
Finally, we evaluate the accuracy of the saliency maps that occur as a side effect of object recognition. For this purpose, we use a set of benchmark datasets for saliency evaluation based on eye-tracking experiments. Surprisingly, the estimated saliency maps are very similar to the maps that are computed from human eye-tracking experiments. Our results show that these saliency maps can obtain competitive results on benchmark saliency maps. On one synthetic saliency dataset, this method even obtains the state of the art without ever having seen an actual saliency image for training.
Universitat Autònoma de Barcelona. Programa de Doctorat en Informàtica
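The saliency-modulated classification idea above lends itself to a compact illustration. The toy PyTorch sketch below (PyTorch is an assumed framework, and every layer size as well as the sigmoid gating are illustrative choices, not the SMIC architecture itself) shows one way a dedicated saliency branch can produce gates that modulate the bottom-up image features before classification.

```python
import torch
import torch.nn as nn

class SaliencyModulatedClassifier(nn.Module):
    """Toy saliency-modulated image classifier (illustrative, not the thesis model)."""

    def __init__(self, num_classes=200):
        super().__init__()
        # Bottom-up visual branch for the RGB image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Saliency branch: turns a 1-channel saliency map into per-location gates.
        self.saliency_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 64, 3, stride=2, padding=1), nn.Sigmoid(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, image, saliency):
        features = self.backbone(image)
        gates = self.saliency_branch(saliency)   # same spatial size as features
        return self.head(features * gates)       # saliency modulates the features

# Example forward pass with random tensors standing in for images and saliency maps.
model = SaliencyModulatedClassifier(num_classes=10)
logits = model(torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```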
Zhou, Shaohua. "Unconstrained face recognition." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1800.
Full textThesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Ustun, Bulend. "3D Face Recognition." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609075/index.pdf.
Full textWong, Vincent. "Human face recognition /." Online version of thesis, 1994. http://hdl.handle.net/1850/11882.
Full textLee, Colin K. "Infrared face recognition." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Jun%5FLee%5FColin.pdf.
Full textThesis advisor(s): Monique P. Fargues, Gamani Karunasiri. Includes bibliographical references (p. 135-136). Also available online.
Furesjö, Fredrik. "Multiple cue object recognition." Licentiate thesis, KTH, Numerical Analysis and Computer Science, NADA, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277.
Full textNature is rich in examples of how vision can be successfully used for sensing and perceiving the world and how the gathered information can be utilized to perform a variety of different objectives. The key to successful vision is the internal representations of the visual agent, which enable the agent to successfully perceive properties about the world. Humans perceive a multitude of properties of the world through our visual sense, such as motion, shape, texture, and color. In addition we also perceive the world to be structured into objects which are clustered into different classes - categories. For such a rich perception of the world many different internal representations that can be combined in different ways are necessary. So far much work in computer vision has been focused on finding new and, out of some perspective, better descriptors and not much work has been done on how to combine different representations.
In this thesis a purposive approach in the context of a visual agent to object recognition is taken. When considering object recognition from this view point the situatedness in form of the context and task of the agent becomes central. Further a multiple feature representation of objects is proposed, since a single feature might not be pertinent to the task at hand nor be robust in a given context.
The first contribution of this thesis is an evaluation of single feature object representations that have previously been used in computer vision for object recognition. In the evaluation different interest operators combined with different photometric descriptors are tested together with a shape representation and a statistical representation of the whole appearance. Further a color representation, inspired from human color perception, is presented and used in combination with the shape descriptor to increase the robustness of object recognition in cluttered scenes.
In the last part, which contains the second contribution, of this thesis a vision system for object recognition based on multiple feature object representation is presented together with an architecture of the agent that utilizes the proposed representation. By taking a system perspective to object recognition we will consider the representations performance under a given context and task. The scenario considered here is derived from a fetch scenario performed by a service robot.
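To illustrate the kind of cue combination discussed above, here is a minimal numpy sketch that scores a query view against stored object models by combining a coarse colour-histogram cue with a gradient-orientation (shape) cue through histogram intersection. The particular histograms and the equal weighting are illustrative assumptions, not the representations evaluated in the thesis.

```python
import numpy as np

def colour_hist(image, bins=8):
    """Normalised joint RGB histogram (coarse colour cue)."""
    h, _ = np.histogramdd(image.reshape(-1, 3), bins=bins, range=[(0, 1)] * 3)
    return h.ravel() / h.sum()

def edge_orientation_hist(gray, bins=9):
    """Normalised, magnitude-weighted histogram of gradient orientations (crude shape cue)."""
    gy, gx = np.gradient(gray)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx)
    h, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return h / (h.sum() + 1e-9)

def match_score(query, model, w_colour=0.5, w_shape=0.5):
    """Weighted combination of per-cue histogram-intersection similarities."""
    s_colour = np.minimum(colour_hist(query), colour_hist(model)).sum()
    s_shape = np.minimum(edge_orientation_hist(query.mean(2)),
                         edge_orientation_hist(model.mean(2))).sum()
    return w_colour * s_colour + w_shape * s_shape

# Example: rank two stored object models against a query view (random stand-in images).
rng = np.random.default_rng(1)
query, models = rng.random((64, 64, 3)), [rng.random((64, 64, 3)) for _ in range(2)]
best = max(range(len(models)), key=lambda i: match_score(query, models[i]))
print("best matching model:", best)
```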
Karlsen, Mats-Gøran. "Android object recognition framework." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-19219.
Full textFuresjö, Fredrik. "Multiple cue object recognition /." Stockholm : KTH Numerical Analysis and Computer Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277.
Full textFergus, Robert. "Visual object category recognition." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425029.
Full textLavoie, Matt J. "Three dimensional object recognition." Honors in the Major Thesis, University of Central Florida, 1991. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/3.
Full textBachelors
Arts and Sciences
Computer Sciences
Meng, Meng. "Human Object Interaction Recognition." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10008/document.
Full textIn this thesis, we have investigated the human object interaction recognition by using the skeleton data and local depth information provided by RGB-D sensors. There are two main applications we address in this thesis: human object interaction recognition and abnormal activity recognition. First, we propose a spatio-temporal modeling of human-object interaction videos for on-line and off-line recognition. In the spatial modeling of human object interactions, we propose low-level feature and object related distance feature which adopted on on-line human object interaction recognition and abnormal gait detection. Then, we propose object feature, a rough description of the object shape and size as new features to model human-object interactions. This object feature is fused with the low-level feature for online human object interaction recognition. In the temporal modeling of human object interactions, we proposed a shape analysis framework based on low-level feature and object related distance feature for full sequence-based off-line recognition. Experiments carried out on two representative benchmarks demonstrate the proposed method are effective and discriminative for human object interaction analysis. Second, we extend the study to abnormal gait detection by using the on-line framework of human object interaction classification. The experiments conducted following state-of-the-art settings on the benchmark shows the effectiveness of proposed method. Finally, we collected a multi-view human object interaction dataset involving abnormal and normal human behaviors by RGB-D sensors. We test our model on the new dataset and evaluate the potential of the proposed approach
Baker, Jonathan D. (Jonathan Daniel). "Multiresolution statistical object recognition." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37721.
Full textIncludes bibliographical references (leaves 105-108).
by Jonathan D. Baker.
M.S.
Cox, David Daniel. "Reverse engineering object recognition." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42042.
Full textThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Page 95 blank.
Includes bibliographical references (p. 83-94).
Any given object in the world can cast an effectively infinite number of different images onto the retina, depending on its position relative to the viewer, the configuration of light sources, and the presence of other objects in the visual field. In spite of this, primates can robustly recognize a multitude of objects in a fraction of a second, with no apparent effort. The computational mechanisms underlying these amazing abilities are poorly understood. This thesis presents a collection of work from human psychophysics, monkey electrophysiology, and computational modelling in an effort to reverse-engineer the key computational components that enable this amazing ability in the primate visual system.
by David Daniel Cox.
Ph.D.
Wallenberg, Marcus. "Embodied Visual Object Recognition." Doctoral thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-132762.
Full textEmbodied Visual Object Recognition
FaceTrack
Matas, J. "Colour-based object recognition." Thesis, University of Surrey, 1995. http://epubs.surrey.ac.uk/843934/.
Full textJohnson, Taylor Christine. "Object Recognition and Classification." Thesis, The University of Arizona, 2012. http://hdl.handle.net/10150/243970.
Full textLee, Yeongseon. "Bayesian 3D multiple people tracking using multiple indoor cameras and microphones." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29668.
Full textCommittee Chair: Rusell M. Mersereau; Committee Member: Biing Hwang (Fred) Juang; Committee Member: Christopher E. Heil; Committee Member: Georgia Vachtsevanos; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
Whitney, Hannah L. "Object agnosia and face processing." Thesis, University of Southampton, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548326.
Full textQu, Yawe, and Mingxi Yang. "Online Face Recognition Game." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-248.
Full textThe purpose of this project is to test and improve people’s ability of face recognition.
Although there are some tests on the internet with the same purpose, the problem is that people
may feel bored and give up before finishing the tests. Consequently they may not benefit from
testing nor from training. To solve this problem, face recognition and online game are put
together in this project. The game is supposed to provide entertainment when people are playing,
so that more people can take the test and improve their abilities of face recognition.
In the game design, the game is assumed to take place in the face recognition lab, which is
an imaginary lab. The player plays the main role in this game and asked to solve a number of
problems. There are several scenarios waiting for the player, which mainly need face recognition
skills from the player. At the end the player obtains the result of evaluation of her/his skills in
face recognition.
Batur, Aziz Umit. "Illumination-robust face recognition." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/15440.
Full textGraham, Daniel B. "Pose-varying face recognition." Thesis, University of Manchester, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.488288.
Full textZhou, Mian. "Gobor-boosting face recognition." Thesis, University of Reading, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494814.
Full textAbi, Antoun Ramzi. "Pose-Tolerant Face Recognition." Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/244.
Full textLincoln, Michael C. "Pose-independent face recognition." Thesis, University of Essex, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250063.