Dissertations / Theses on the topic 'Low Resolution Face Recognition'

Consult the top 50 dissertations / theses for your research on the topic 'Low Resolution Face Recognition.'


1

Arachchige, Somi Ruwan Budhagoda. "Face recognition in low resolution video sequences using super resolution /." Online version of thesis, 2008. http://hdl.handle.net/1850/7770.

2

Roeder, James Roger. "Assessment of super-resolution for face recognition from very-low resolution images." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2009. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

3

Kramer, Annika. "Model based methods for locating, enhancing and recognising low resolution objects in video." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/585.

Abstract:
Visual perception is our most important sense, enabling us to detect and recognise objects even in low-detail video scenes. While humans are able to perform such object detection and recognition tasks reliably, most computer vision algorithms struggle with wide-angle surveillance videos, where low resolution and poorly detailed objects make automatic processing difficult. Additional problems arise from varying pose and lighting conditions as well as non-cooperative subjects. All these constraints pose problems for automatic scene interpretation of surveillance video, including object detection, tracking and object recognition. The aim of this thesis is therefore to detect, enhance and recognise objects by incorporating a priori information and by using model-based approaches. Motivated by the increasing demand for automatic methods for object detection, enhancement and recognition in video surveillance, different aspects of the video processing task are investigated with a focus on human faces. In particular, the challenge of fully automatic face pose and shape estimation by fitting a deformable 3D generic face model under varying pose and lighting conditions is tackled. Principal Component Analysis (PCA) is used to build an appearance model that is then used within a particle filter based approach to fit the 3D face mask to the image; this recovers face pose and person-specific shape information simultaneously. Experiments demonstrate its use at different resolutions and under varying pose and lighting conditions. Following that, a combined tracking and super-resolution approach enhances the quality of poorly detailed video objects. A 3D object mask is subdivided such that every mask triangle is smaller than a pixel when projected into the image, and is then used for model-based tracking. The mask subdivision then allows super-resolution of the object by combining several video frames, and this approach achieves better results than traditional super-resolution methods without the use of interpolation or deblurring. Lastly, object recognition is performed in two different ways. The first recognition method is applied to characters and used for license plate recognition. A novel character model is proposed to create different appearances, which are then matched with the image of unknown characters for recognition; this allows simultaneous character segmentation and recognition, and high recognition rates are achieved for low-resolution characters down to only five pixels in size. While this approach is only feasible for objects with a limited number of different appearances, like characters, the second recognition method is applicable to any object, including human faces. A generic 3D face model is therefore automatically fitted to an image of a human face and recognition is performed at mask level rather than image level. This approach requires neither an initial pose estimate nor the selection of feature points; the face alignment is provided implicitly by the mask fitting process.
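A minimal sketch of a PCA appearance model of the kind described in this abstract, assuming a stack of aligned, equally sized face crops; the component count and the reconstruction-error score used as a particle weight are illustrative choices, not taken from the thesis:

```python
import numpy as np
from sklearn.decomposition import PCA

def build_appearance_model(face_crops, n_components=30):
    """Build a PCA appearance model from aligned face crops.

    face_crops: array of shape (n_samples, height, width), already aligned
    and equally sized (an assumption of this sketch). The fitted model's
    components_ span the low-dimensional appearance space used when
    fitting a 3D face mask.
    """
    n, h, w = face_crops.shape
    X = face_crops.reshape(n, h * w).astype(np.float64)
    pca = PCA(n_components=n_components)
    pca.fit(X)
    return pca

def appearance_score(pca, candidate_crop):
    """Score a candidate crop by its reconstruction error in the
    appearance space (lower error = better fit), as a particle filter
    weight could use."""
    x = candidate_crop.reshape(1, -1).astype(np.float64)
    recon = pca.inverse_transform(pca.transform(x))
    return -np.linalg.norm(x - recon)
```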
4

SILVA, José Ivson Soares da. "Reconhecimento facial em imagens de baixa resolução." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16367.

Abstract:
The use of computational systems to recognize people from biometric data has grown in recent decades, and the methods used to perform such recognition have evolved accordingly. The biometric trait used for recognition can be the face, voice, fingerprint or any other physical feature that distinguishes one person from another. Changes caused by surgery, aging or scars do not necessarily alter facial features significantly, so a human can still recognize a person after such intentional or unintentional changes in appearance; for automatic recognition systems, however, these changes are a challenge. Beyond physical changes, other factors in image acquisition influence face recognition, such as image resolution, the position of the face relative to the camera, ambient illumination, occlusion and facial expression. The distance at which a person appears in the scene changes the resolution of the face region: a person farther from the camera has a face image at a lower resolution than one who is closer, and the goal of systems designed for this context is to minimize the influence of resolution on recognition rates. Face recognition systems perform worse on low-resolution face images. One stage of a recognition system is feature extraction, which processes the input data and produces a more representative description of the images. At this stage, the training patterns are expected to have the same dimension, that is, images at the same resolution. If the available training images have different resolutions, or the test images differ in resolution from the training images, resolution must be handled in a preprocessing step, either by increasing the resolution of the smaller images or by reducing the resolution of the larger ones; increasing resolution, however, does not guarantee an information gain that improves system performance. In this work, two methods are developed within an Eigenface-based feature extraction stage, in which the feature vectors are resized to a smaller scale by interpolation, similarly to image resizing. In the first method, after feature extraction, both the feature vectors and the training images are resized, and the training and test images are then projected into the feature space using the reduced-dimension vectors. In the second method, only the feature vectors are resized and multiplied by a compensation factor; the training images are projected using the original vectors and the test images are projected using the resized vectors into the same space. The proposed methods were tested on four face recognition databases exhibiting illumination variation, facial expression variation, the presence of glasses and variation in head pose.
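A minimal sketch of the second method's core idea (resizing Eigenface basis vectors by interpolation so low-resolution test images can be projected into the same feature space); the interpolation order and the exact form of the compensation factor are assumptions, not values from the dissertation:

```python
import numpy as np
from scipy.ndimage import zoom

def resize_eigenfaces(eigenfaces, shape, new_shape):
    """Resize Eigenface basis vectors by interpolation.

    eigenfaces: (k, h*w) array whose rows are eigenvectors learned at the
    training resolution `shape` = (h, w); `new_shape` is the lower test
    resolution. Returns resized vectors scaled by an assumed compensation
    factor (ratio of pixel counts) so projections keep a comparable scale.
    """
    h, w = shape
    nh, nw = new_shape
    resized = np.stack([
        zoom(v.reshape(h, w), (nh / h, nw / w), order=1).ravel()
        for v in eigenfaces
    ])
    compensation = (h * w) / (nh * nw)  # assumed form of the factor
    return resized * compensation

# Training images would be projected with the original eigenfaces and
# low-resolution test images with the resized, compensated ones, so both
# land in the same feature space for nearest-neighbour matching.
```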
5

Prado, Kelvin Salton do. "Comparação de técnicas de reconhecimento facial para identificação de presença em um ambiente real e semicontrolado." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-07012018-222531/.

Abstract:
Face recognition is a task that human beings perform naturally every day and almost effortlessly; for a machine, however, the process is not so simple. The increasing computational power of current machines has created great interest in digital image and video processing, with applications in the most diverse areas of knowledge. This work compares face recognition techniques already known in the literature in order to identify which performs best in a real, semi-controlled environment. As a secondary objective, it evaluates the possibility of using one or more face recognition techniques to automatically identify the presence of students in a martial arts classroom using images from the surveillance cameras installed in the room, taking into account important aspects such as low image sharpness, non-ideal illumination, constant movement of the students, and the fact that the cameras are mounted at a fixed angle. The work is related to the areas of Image Processing and Pattern Recognition and is part of the "Presence Monitoring" research line of the project "Education and Monitoring of Physical Activities using Artificial Intelligence Techniques" (Process 2014.1.923.86.4, published in DOE 125(45) on 10 March 2015), carried out jointly by the University of São Paulo, Faculdade Campo Limpo Paulista and the Central Kungfu-Wushu Academy. The experiments presented in this work show that, among the face recognition methods evaluated, Local Binary Patterns performed best in the proposed environment, while Eigenfaces performed worst. They also show that reliable automatic presence detection is not feasible in the proposed environment, since the face recognition rate was relatively low compared with state-of-the-art work, which generally uses friendlier, but less commonly encountered, test environments. The objectives proposed by the work were achieved, and it contributes to the current state of the art in computer vision, more precisely in face recognition. Finally, some future work is suggested as a starting point for the continuation of this research or for new research related to this topic.
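A small sketch of how the two methods compared in this dissertation, Eigenfaces and Local Binary Patterns, can be benchmarked side by side; it uses OpenCV's contrib face module (an assumption about tooling, not necessarily what the author used), and assumes grayscale face crops of equal size with integer identity labels:

```python
import cv2
import numpy as np

def compare_recognizers(train_imgs, train_labels, test_imgs, test_labels):
    """Compare Eigenfaces and LBPH recognition rates on equal-sized
    grayscale face crops. Requires opencv-contrib-python (cv2.face)."""
    recognizers = {
        "eigenfaces": cv2.face.EigenFaceRecognizer_create(),
        "lbph": cv2.face.LBPHFaceRecognizer_create(),
    }
    labels = np.array(train_labels)
    results = {}
    for name, rec in recognizers.items():
        rec.train(train_imgs, labels)
        hits = 0
        for img, true_label in zip(test_imgs, test_labels):
            pred, _confidence = rec.predict(img)
            hits += int(pred == true_label)
        results[name] = hits / len(test_imgs)
    return results
```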
6

Bilson, Amy Jo. "Image size and resolution in face recognition /." Thesis, Connect to this title online; UW restricted, 1987. http://hdl.handle.net/1773/9166.

7

Lin, Frank Chi-Hao. "Super-resolution image processing with application to face recognition." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/16703/1/Frank_Lin_Thesis.pdf.

Abstract:
Subject identification from surveillance imagery has become an important task for forensic investigation. Good quality images of the subjects are essential for the surveillance footage to be useful; however, surveillance videos are of low resolution due to data storage requirements, and subjects typically occupy a small portion of a camera's field of view. Faces, which are of primary interest, occupy an even smaller array of pixels. For reliable face recognition from surveillance video, there is a need to generate higher-resolution images of the subject's face from low-resolution video. Super-resolution image reconstruction is a signal-processing-based approach that aims to reconstruct a high-resolution image by combining a number of low-resolution images. Low-resolution images that differ by a sub-pixel shift contain complementary information, as they are different "snapshots" of the same scene; once geometrically registered onto a common high-resolution grid, they can be merged into a single image of higher resolution. As super-resolution is a computationally intensive process, traditional reconstruction-based super-resolution methods simplify the problem by restricting the correspondence between low-resolution frames to global motion such as translational and affine transformations. Surveillance footage, however, consists of independently moving non-rigid objects such as faces, and applying global registration methods results in registration errors that lead to artefacts that adversely affect recognition. The human face also presents additional problems, such as self-occlusion and reflectance variation, that even local registration methods find difficult to model. In this dissertation, a robust optical-flow-based super-resolution technique is proposed to overcome these difficulties. Real surveillance footage and the Terrascope database were used to compare the reconstruction quality of the proposed method against interpolation and existing super-resolution algorithms; results show that the proposed robust optical-flow-based method consistently produced more accurate reconstructions. The dissertation also outlines a systematic investigation of how super-resolution affects automatic face recognition algorithms, with an emphasis on comparing reconstruction- and learning-based super-resolution approaches. While reconstruction-based super-resolution approaches like the proposed method attempt to recover the aliased high-frequency information, learning-based methods synthesise it instead. Learning-based methods are able to synthesise plausible high-frequency detail at high magnification ratios, but the appearance of the face may change to the extent that the person no longer looks like him/herself. Although super-resolution has been applied to facial imagery, very little has been reported on measuring the performance changes from super-resolved images. Intuitively, super-resolution improves image fidelity and hence should improve the ability to distinguish between faces, and consequently automatic face recognition accuracy. This is the first study to comprehensively investigate the effect of super-resolution on face recognition. Since super-resolution is a computationally intensive process, it is important to understand the benefits in relation to the computational trade-off. A framework for testing face recognition algorithms with multi-resolution images was proposed, using the XM2VTS database as a sample implementation. Results show that super-resolution offers a small improvement over bilinear interpolation in recognition performance in the absence of noise, and that super-resolution is more beneficial when the input images are noisy, since noise is attenuated during the frame fusion process.
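A simplified shift-and-add sketch of the reconstruction-based fusion idea described above: registered low-resolution frames are placed onto a common high-resolution grid and averaged. The per-frame shifts are assumed known here, whereas the dissertation estimates dense correspondences with robust optical flow:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Fuse registered low-resolution frames onto a high-resolution grid.

    frames: list of (h, w) arrays; shifts: per-frame (dy, dx) offsets in
    low-resolution pixels (assumed known for this sketch). Returns the
    average of all contributions falling on each high-resolution pixel.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-resolution sample to its high-resolution location.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(weight, (hy, hx), 1.0)
    weight[weight == 0] = 1.0
    return acc / weight
```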
8

Lin, Frank Chi-Hao. "Super-resolution image processing with application to face recognition." Queensland University of Technology, 2008. http://eprints.qut.edu.au/16703/.

9

Naim, Mamoun. "New techniques in the recognition of very low resolution images." Thesis, University of Reading, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.266343.

10

Li, Kai Chee. "Object identification from a low resolution laser radar system." Thesis, University of Surrey, 1992. http://epubs.surrey.ac.uk/844536/.

Abstract:
Range is a very important and useful physical property, and most of the physical features of an object can be extracted from a 3-D image. This thesis is about analysing range images taken from a low resolution laser radar system. The objective of this research is to locate and attempt to identify obstacles in the surroundings so that an unmanned small tracked vehicle can find its way. A short range (less than 30 metres) laser radar range finder, provided by the Ministry of Defence, gathered range images around the vehicle. Trees, rocks and walls are classified as obstacles; roads, grassland and bushes are classified as passable objects. In cases where objects cannot be identified, steepness is used as a guideline for deciding whether to classify the object as an obstacle. Simple image processing techniques are applied to analyse the range images and satisfactory results are obtained: obstacles can be located in the range images. The images are first segmented by three methods. Firstly, the range gating method is applied, which segments the images according to the information in their range histograms. Secondly, the gradient thresholding method is applied, which distinguishes steep obstacles from non-steep objects. Thirdly, spatial isolation is applied, which isolates each individual object. The only information contained in a range image is the three dimensions of the object, so the analysis concentrates on physical properties. Besides size and shape, the texture of an object can also be extracted. Texture reflects what type of object we are looking at: walls, plains and other flat objects have fine textures, while trees and bushes have rough textures. Various textural properties derived from the co-occurrence matrix have been investigated. Another important physical property is the gradient, because a high gradient always implies an obstacle, and these are things which an unmanned vehicle must avoid. The classification method uses a distance function to classify objects. Finally, the algorithm is implemented on an array of transputers and promising results were observed. By implementing the algorithm on an array of transputers, the processing time was reduced, and the obstacles can be identified from the range images.
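A tiny sketch of the gradient thresholding step described above, flagging steep regions of a range image as obstacles; the sample spacing and slope threshold are illustrative values, not taken from the thesis:

```python
import numpy as np

def steep_obstacle_mask(range_image, pixel_pitch=0.1, slope_threshold=0.6):
    """Flag steep regions of a range image via gradient thresholding.

    range_image: (h, w) array of distances in metres; pixel_pitch is the
    assumed spacing between neighbouring samples in metres, and
    slope_threshold the rise/run above which a cell counts as steep.
    """
    gy, gx = np.gradient(range_image, pixel_pitch)
    slope = np.hypot(gx, gy)
    return slope > slope_threshold
```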
11

Peyrard, Clément. "Single image super-resolution based on neural networks for text and face recognition." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI083/document.

Abstract:
This thesis focuses on super-resolution (SR) methods for improving automatic recognition systems (optical character recognition, face recognition) in realistic contexts. SR methods make it possible to generate high-resolution (HR) images from low-resolution (LR) ones; unlike upsampling methods such as interpolation, they restore spatial high frequencies and compensate for artefacts such as blur or jagged edges. In particular, example-based approaches learn and model the relationship between low- and high-resolution spaces from pairs of low- and high-resolution images, and artificial neural networks are among the most efficient systems for addressing this problem. This work demonstrates the value of neural-network-based SR methods for improved automatic recognition systems: by adapting the data, such machine learning algorithms can be trained to produce high-resolution images. Convolutional neural networks are especially efficient, as they are trained to simultaneously extract relevant non-linear two-dimensional features while learning the mapping between low- and high-resolution spaces. On document text images, the proposed method improves OCR accuracy by +7.85 points compared with simple interpolation. The creation of an annotated image dataset and the organisation of an international competition (ICDAR2015) highlighted the interest and relevance of such approaches. Moreover, if a priori knowledge is available, it can be exploited by a suitable network architecture. For facial images, face features are critical for automatic recognition; a two-step method is therefore proposed in which image resolution is first improved globally, followed by specialised models that focus on the essential features. An off-the-shelf face verification system has its performance improved by +6.91 to +8.15 points. Finally, to handle low-resolution images in real-world conditions, deep neural networks make it possible to absorb the diversity of the blurring kernels that characterise low-resolution images: with a single model, high-resolution images are produced with natural image statistics, without any knowledge of the actual observation model of the low-resolution image.
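A minimal sketch of a convolutional SR network of the general kind discussed above, mapping an interpolated low-resolution image to a high-resolution estimate; the three-layer layout and filter sizes follow the classic SRCNN design and are not necessarily those used in the thesis:

```python
import torch
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-layer CNN: patch extraction, non-linear mapping, reconstruction.
    Input is a bicubically upsampled low-resolution image."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

# Training would minimise a pixel-wise loss between the network output and
# the ground-truth high-resolution image, e.g.:
# loss = nn.MSELoss()(model(lr_upsampled), hr_target)
```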
12

Zoetgnandé, Yannick. "Fall detection and activity recognition using stereo low-resolution thermal imaging." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S073.

Abstract:
Nowadays, it is essential to find solutions to detect and prevent falls among seniors. We proposed a low-cost device based on a pair of thermal sensors. The counterpart of these low-cost sensors is their low resolution (80x60 pixels), low refresh rate, noise and halo effects, and we proposed several approaches to work around these drawbacks. First, we proposed a calibration method with a grid adapted to thermal imaging and a framework that keeps parameter estimation robust despite the low resolution. Then, for 3D vision, we proposed a threefold sub-pixel stereo matching framework (called ST, for Subpixel Thermal): 1) a robust feature extraction method based on phase congruency, 2) matching of these features at pixel precision, and 3) refined matching at sub-pixel accuracy based on local phase correlation. We also proposed a super-resolution method called Edge Focused Thermal Super-resolution (EFTS), which includes an edge extraction module that forces the neural network to focus on edges in the images. For fall detection, we then proposed a new method (called TSFD, for Thermal Stereo Fall Detection) based on stereo point matching without calibration and on classifying matches as on the ground or not on the ground. Finally, for monitoring the activities of seniors, we explored many deep-learning-based approaches to classifying activities from a limited amount of training data.
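A generic phase-correlation sketch of the kind of local refinement step mentioned above (estimating the shift between two patches from the cross-power spectrum); it returns integer-precision shifts and is only the general idea, not the thesis's exact sub-pixel method:

```python
import numpy as np

def phase_correlation_shift(patch_a, patch_b):
    """Estimate the translation (dy, dx) aligning patch_b to patch_a via
    phase correlation. Sub-pixel accuracy could be obtained by
    interpolating around the correlation peak."""
    fa = np.fft.fft2(patch_a)
    fb = np.fft.fft2(patch_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    # Wrap shifts larger than half the patch size to negative offsets.
    for i, size in enumerate(corr.shape):
        if shifts[i] > size // 2:
            shifts[i] -= size
    return tuple(shifts)
```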
13

Al-Hassan, Nadia. "Mathematically inspired approaches to face recognition in uncontrolled conditions : super resolution and compressive sensing." Thesis, University of Buckingham, 2014. http://bear.buckingham.ac.uk/6/.

Abstract:
Face recognition under uncontrolled conditions using surveillance cameras is becoming essential for establishing the identity of a person at a distance from the camera and for providing safety and security against terrorist attack, robbery and crime. The performance of face recognition on low-resolution, degraded images of low quality, matched against images of high quality and good resolution/size, is therefore considered one of the most challenging tasks and constitutes the focus of this thesis. The work in this thesis is designed to further investigate these issues, with the following main aim: “To investigate face identification from a distance and under uncontrolled conditions by primarily addressing the problem of low-resolution images using existing/modified mathematically inspired super resolution schemes that are based on the emerging new paradigm of compressive sensing and non-adaptive dictionaries based super resolution.” We shall firstly investigate and develop compressive sensing (CS) based sparse representation of a sample image to reconstruct a high-resolution image for face recognition, by taking different approaches to constructing CS-compliant dictionaries, such as the Gaussian Random Matrix and the Toeplitz Circular Random Matrix. In particular, our focus is on constructing CS non-adaptive dictionaries (independent of face image information), in contrast to existing image-learnt dictionaries, that satisfy some form of the Restricted Isometry Property (RIP), which is sufficient to comply with the CS theorem regarding the recovery of sparsely represented images. We shall demonstrate that the CS dictionary techniques for resolution enhancement are able to support scalable face recognition schemes under uncontrolled conditions and at a distance. Secondly, we shall compare the strength of the sufficient CS property for the various types of dictionaries and demonstrate that image-learnt dictionaries are far from satisfying the RIP for compressive sensing. Thirdly, we propose dictionaries based on the high-frequency coefficients of the training set and investigate the impact of using these dictionaries on the space of feature vectors of the low-resolution image for face recognition when applied in the wavelet domain. Finally, we test the performance of the developed schemes on CCTV images with an unknown model of degradation, and show that these schemes significantly outperform existing techniques developed for such a challenging task. However, the performance is still not comparable to what can be achieved in a controlled environment, and hence we identify remaining challenges to be investigated in the future.
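A small sketch of sparse recovery with a non-adaptive Gaussian random dictionary of the kind discussed above, using orthogonal matching pursuit as the solver; dictionary size, sparsity level and the fixed seed (so sensing and recovery can share the same dictionary) are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def recover_sparse_patch(measurements, n_atoms=256, n_nonzero=10, seed=0):
    """Recover a sparse coefficient vector x from measurements y, assuming
    y = D @ x for a column-normalised i.i.d. Gaussian dictionary D, which
    satisfies the RIP with high probability for sparse x."""
    rng = np.random.default_rng(seed)
    m = measurements.shape[0]
    D = rng.standard_normal((m, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, measurements)
    return omp.coef_, D
```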
14

da, Silva Gomes Joao Paulo. "Brain inspired approach to computational face recognition." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3544.

Abstract:
Face recognition that is invariant to pose and illumination is a problem solved effortlessly by the human brain, but the computational details that underlie such efficient recognition are still far from clear. This thesis draws on research from psychology and neuroscience about face and object recognition and the visual system in order to develop a novel computational method for face detection, feature selection and representation, and a memory structure for recall. A biologically plausible framework for developing a face recognition system is presented. This framework can be divided into four parts: 1) A face detection system. This is an improved version of a biologically inspired feedforward neural network that has modifiable connections and reflects the hierarchical and elastic structure of the visual system. The face detection system can detect whether a face is present in an input image and determine the region that contains that face; it is also capable of detecting the pose of the face. 2) A face region selection mechanism. This mechanism is used to determine the Gabor-style features corresponding to the detected face, i.e. the features from the region of interest. This region of interest is selected using a feedback mechanism that connects the higher-level layer of the feedforward neural network, where the face is ultimately detected, to an intermediate level where the Gabor-style features are detected. 3) A face recognition system based on the binary encoding of the Gabor-style features selected to represent a face. Two alternative coding schemes are presented, using 2 and 4 bits to represent the winning orientation at each location. The effectiveness of the Gabor-style features and the different coding schemes in discriminating faces from different classes is evaluated using the Yale B Face Database; the results show that this representation performs close to other published results on the same database. 4) A theoretical approach to a memory system capable of memorising sequences of poses. A basic network for memorising and recalling sequences of labels has been implemented, and from this a memory model is extrapolated that could use this ability to memorise and recall sequences to assist face recognition by memorising sequences of poses. Finally, the capabilities of the detection and recognition parts of the system are demonstrated using a demo application that can learn and recognise faces from a webcam.
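A sketch of the 2-bit coding idea described above: each pixel is encoded by the index of the Gabor orientation with the strongest response (four orientations, hence two bits). The Gabor filter parameters are illustrative, not those used in the thesis:

```python
import cv2
import numpy as np

def winning_orientation_code(gray, ksize=21, sigma=4.0, lambd=10.0):
    """Return a per-pixel 2-bit code (values 0-3) giving the index of the
    Gabor orientation with the strongest absolute response."""
    thetas = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]  # 4 orientations
    responses = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, 0.5, 0)
        responses.append(np.abs(cv2.filter2D(gray.astype(np.float32),
                                             cv2.CV_32F, kernel)))
    return np.argmax(np.stack(responses), axis=0).astype(np.uint8)
```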
15

Herrmann, Christian [Verfasser]. "Video-to-Video Face Recognition for Low-Quality Surveillance Data / Christian Herrmann." Karlsruhe : KIT Scientific Publishing, 2018. http://www.ksp.kit.edu.

16

Zhang, Yan. "Low-Cost, Real-Time Face Detection, Tracking and Recognition for Human-Robot Interactions." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1307548707.

17

Abraham, Ashley N. "Word Recognition in High and Low Skill Spellers: Context effects on Lexical Ambiguity Resolution." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1493035902158255.

18

PONTES, BRUNO SILVA. "HUMAN POSTURE RECOGNITION PRESERVING PRIVACY: A CASE STUDY USING A LOW RESOLUTION ARRAY THERMAL SENSOR." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=29776@1.

Abstract:
Posture recognition is one of the challenges of human sensing and supports the monitoring of people in ambient assisted living environments. Such environments, in turn, assist doctors in diagnosing their patients' health, mainly through real-time recognition of activities of daily living, which is seen in the medical field as one of the best ways to anticipate critical health situations. In addition, the ageing of the world's population, the shortage of hospital resources to serve everyone and rising health care costs drive the development of systems to support ambient assisted living. Preserving privacy in environments monitored by sensors is a critical factor for user acceptance, so there is demand for solutions that do not require images. This work demonstrates the use of a low-resolution thermal array sensor for human sensing, showing that it is feasible to detect presence and recognise human postures using only the data from this sensor.
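A minimal sketch of presence detection on a low-resolution thermal array by background subtraction; the grid size, temperature threshold and minimum blob size are assumptions for illustration, not the values used in the dissertation:

```python
import numpy as np

def detect_presence(frame, background, delta_celsius=1.5, min_pixels=3):
    """Detect a warm body in a low-resolution thermal frame.

    frame, background: (8, 8) arrays of temperatures in degrees Celsius;
    the background can be a running average of empty-room frames.
    Returns (presence_flag, boolean mask of warm pixels).
    """
    warm = (frame - background) > delta_celsius
    return int(np.count_nonzero(warm)) >= min_pixels, warm
```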
19

Nguyen, Thanh Kien. "Human identification at a distance using iris and face." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/62876/1/Kien_Nguyen%20Thanh_Thesis.pdf.

Abstract:
This research has successfully applied super-resolution and multiple modality fusion techniques to address the major challenges of human identification at a distance using face and iris. The outcome of the research is useful for security applications.
20

Youmaran, Richard. "Algorithms to Process and Measure Biometric Information Content in Low Quality Face and Iris Images." Thesis, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/19729.

Abstract:
Biometric systems allow identification of human persons based on physiological or behavioral characteristics, such as voice, handprint, iris or facial characteristics. The use of face and iris recognition as a way to authenticate user’s identities has been a topic of research for years. Present iris recognition systems require that subjects stand close (<2m) to the imaging camera and look for a period of about three seconds until the data are captured. This cooperative behavior is required in order to capture quality images for accurate recognition. This will eventually restrict the amount of practical applications where iris recognition can be applied, especially in an uncontrolled environment where subjects are not expected to cooperate such as criminals and terrorists, for example. For this reason, this thesis develops a collection of methods to deal with low quality face and iris images and that can be applied for face and iris recognition in a non-cooperative environment. This thesis makes the following main contributions: I. For eye and face tracking in low quality images, a new robust method is developed. The proposed system consists of three parts: face localization, eye detection and eye tracking. This is accomplished using traditional image-based passive techniques such as shape information of the eye and active based methods which exploit the spectral properties of the pupil under IR illumination. The developed method is also tested on underexposed images where the subject shows large head movements. II. For iris recognition, a new technique is developed for accurate iris segmentation in low quality images where a major portion of the iris is occluded. Most existing methods perform generally quite well but tend to overestimate the occluded regions, and thus lose iris information that could be used for identification. This information loss is potentially important in the covert surveillance applications we consider in this thesis. Once the iris region is properly segmented using the developed method, the biometric feature information is calculated for the iris region using the relative entropy technique. Iris biometric feature information is calculated using two different feature decomposition algorithms based on Principal Component Analysis (PCA) and Independent Component Analysis (ICA). III. For face recognition, a new approach is developed to measure biometric feature information and the changes in biometric sample quality resulting from image degradations. A definition of biometric feature information is introduced and an algorithm to measure it proposed, based on a set of population and individual biometric features, as measured by a biometric algorithm under test. Examples of its application were shown for two different face recognition algorithms based on PCA (Eigenface) and Fisher Linear Discriminant (FLD) feature decompositions.
21

Tang, Yinhang. "Contributions to biometrics : curvatures, heterogeneous cross-resolution FR and anti spoofing." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC060/document.

Abstract:
Face is one of the best biometrics for person recognition, because identifying a person by the face is an instinctive human habit and facial data acquisition is natural, non-intrusive and socially well accepted. In contrast to traditional appearance-based 2D face recognition, shape-based 3D face recognition is theoretically more stable and more robust to illumination variance, small head pose changes and facial cosmetics. Curvatures are the most important geometric attributes for describing the shape of a smooth surface; they are beneficial to facial shape characterisation, which makes it possible to decrease the impact of environmental variances. However, existing curvature measures are only defined on smooth surfaces, so it is necessary to generalise such notions to discrete meshed surfaces, e.g. 3D face scans, and to evaluate their performance in 3D face recognition. Furthermore, even though a number of 3D FR algorithms with high accuracy are available, they all require high-resolution 3D scans whose acquisition cost is too high for real-life applications; a major question is thus how to leverage existing 3D FR algorithms with the low-resolution 3D face scans that are readily available from an increasing number of consumer depth cameras, e.g. Kinect. The last but not least problem is the security threat posed by spoofing attacks on 3D face recognition systems. This thesis is dedicated to studying geometric attributes, namely principal curvature measures suitable for triangle meshes, and 3D face recognition schemes involving these principal curvature measures. Based on these approaches, we propose a heterogeneous cross-resolution 3D FR scheme, evaluate the anti-spoofing performance of a shape-analysis-based 3D face recognition system, and design a supplementary hand-dorsa vein recognition system based on liveness detection with discriminative power. In 3D shape-based face recognition, we introduce the generalisation of the conventional point-wise principal curvatures and principal directions to the triangle mesh case, and present the concepts of principal curvature measures and principal curvature vectors. Based on these generalised curvatures, we design two 3D face descriptions and recognition frameworks. With the first feature description, named the Local Principal Curvature Measures Pattern descriptor (LPCMP), we generate three curvature faces corresponding to the three principal curvature measures and encode them following the Local Binary Pattern method; the local shape information of the 3D facial surface is described comprehensively by concatenating a set of histograms calculated from small patches in the encoded curvature faces. In the second, registration-free feature description, named the Principal Curvature Measures based meshSIFT descriptor (PCM-meshSIFT), the principal curvature measures are first computed in the Gaussian scale space and the extrema of the Difference of Curvature (DoC) are defined as keypoints. We then employ the three principal curvature measures and their corresponding principal curvature vectors to build three rotation-invariant local 3D shape descriptors for each keypoint, and adopt a sparse-representation-based classifier for keypoint matching. Comprehensive experimental results on the FRGCv2 and Bosphorus databases demonstrate that the proposed 3D face recognition schemes are effective and robust to pose and occlusion variations, and that combining the complementary shape information described by the three principal curvature measures significantly improves the recognition ability of the system. To deal with heterogeneous cross-resolution 3D FR, we continue to adopt the PCM-meshSIFT-based feature descriptor to perform the related 3D face recognition. [...]
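A sketch of the LBP-and-histogram step used by descriptors like LPCMP, applied to one "curvature face" image; the patch grid and LBP parameters are illustrative assumptions, and the curvature face itself is taken as a precomputed 2D array:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_patch_histograms(curvature_face, grid=(8, 8), n_points=8, radius=1):
    """Encode an image with uniform LBP and concatenate per-patch
    normalised histograms into one descriptor vector."""
    codes = local_binary_pattern(curvature_face, n_points, radius,
                                 method="uniform")
    n_bins = n_points + 2  # number of uniform LBP labels
    h, w = codes.shape
    gh, gw = grid
    hists = []
    for i in range(gh):
        for j in range(gw):
            patch = codes[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            hist, _ = np.histogram(patch, bins=n_bins, range=(0, n_bins))
            hists.append(hist / max(patch.size, 1))
    return np.concatenate(hists)
```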
22

Ali, Afiya. "Recognition of facial affect in individuals scoring high and low in psychopathic personality characteristics." The University of Waikato, 2007. http://adt.waikato.ac.nz/public/adt-uow20070129.190938/index.html.

23

Lirussi, Igor. "Human-Robot interaction with low computational-power humanoids." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19120/.

Full text
Abstract:
This article investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project was carried out during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users with natural-language communication, catching and understanding the person's needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes it possible to understand their needs. To feel natural, the whole experience is independent of the GUI, which is used only as an auxiliary instrument. Furthermore, the humanoid can communicate through gestures, touch, and visual perception and feedback. This creates a new type of interaction in which the robot is not just a machine to use, but a figure to interact and talk with: a social robot.
APA, Harvard, Vancouver, ISO, and other styles
24

Rajnoha, Martin. "Určování podobnosti objektů na základě obrazové informace." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2021. http://www.nusl.cz/ntk/nusl-437979.

Full text
Abstract:
Monitoring of public areas and its automatic real-time processing have become increasingly significant due to the changing security situation in the world. The problem, however, is the analysis of low-quality records, where even state-of-the-art methods fail in some cases. This work investigates an important area of image similarity: biometric identification based on face images. It deals primarily with face super-resolution from a sequence of low-resolution images and compares this approach with single-frame methods, which are still considered the most accurate. A new dataset was created for this purpose, designed directly for multi-frame face super-resolution from a low-resolution input sequence and comparable in size with the leading datasets worldwide. The results were evaluated both by a survey of human perception and by objective metrics, and the comparison confirmed the hypothesis that multi-frame methods achieve better results than single-frame methods. The architectures, source code, and dataset were released, establishing a basis for future research in this field.
APA, Harvard, Vancouver, ISO, and other styles
25

Hallum, Luke Edward Graduate School of Biomedical Engineering Faculty of Engineering UNSW. "Prosthetic vision : Visual modelling, information theory and neural correlates." Publisher:University of New South Wales. Graduate School of Biomedical Engineering, 2008. http://handle.unsw.edu.au/1959.4/41450.

Full text
Abstract:
Electrical stimulation of the retina affected by photoreceptor loss (e.g., cases of retinitis pigmentosa) elicits the perception of luminous spots (so-called phosphenes) in the visual field. This phenomenon, attributed to the relatively high survival rates of neurons comprising the retina's inner layer, serves as the cornerstone of efforts to provide a microelectronic retinal prosthesis -- a device analogous to the cochlear implant. This thesis concerns phosphenes -- their elicitation and modulation, and, in turn, image analysis for use in a prosthesis. This thesis begins with a comparative review of visual modelling of electrical epiretinal stimulation and analogous acoustic modelling of electrical cochlear stimulation. The latter models involve coloured noise played to normal listeners so as to investigate speech processing and electrode design for use in cochlear implants. Subsequently, four experiments (three psychophysical and one numerical), and two statistical analyses, are presented. Intrinsic signal optical imaging in cerebral cortex is covered in an appendix. The first experiment describes a visual tracking task administered to 20 normal observers afforded simulated prosthetic vision. Fixation, saccade, and smooth pursuit, and the effect of practice, were assessed. Further, an image analysis scheme is demonstrated that, compared to existing approaches, assisted fixation and pursuit (but not saccade) accuracy (35.8% and 6.8%, respectively), and required less phosphene array scanning. Subsequently, (numerical) information-theoretic reasoning is provided for the scheme's superiority. This reasoning was then employed to further optimise the scheme (resulting in a filter comprising overlapping Gaussian kernels), and may be readily extended to arbitrary arrangements of many phosphenes. A face recognition study, wherein stimuli comprised either size- or intensity-modulated phosphenes, is then presented. The study involved unpracticed observers (n=85), and showed no 'size'-versus-'intensity' effect. Overall, a 400-phosphene (100-phosphene) image afforded subjects 89.0% (64.0%) correct recognition (two-interval forced-choice paradigm) when five seconds' scanning was allowed. Performance fell (64.5%) when the 400-phosphene image was stabilised on the retina and presented briefly. Scanning was similar in 400- and 100-phosphene tasks. The final chapter presents the statistical effects of sampling and rendering jitter on the phosphene image. These results may generalise to low-resolution imaging systems involving loosely packed pixels.
APA, Harvard, Vancouver, ISO, and other styles
26

Chen, Bo-hua, and 陳柏樺. "Discriminant Coupled Subspace Learning for Low-Resolution Face Recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/40872631830289967323.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Department of Computer Science and Information Engineering
99
This study proposes a discriminant coupled subspace to deal with the low-resolution face image set recognition problem. Traditional super-resolution methods require a preprocessing step that synthesizes a high-resolution image set from low-resolution images before identification, which is time-consuming. By designing a joint subspace instead, the proposed method models the relationship between the features of the high-resolution and low-resolution face image sets directly and avoids that cost. In the discriminant joint subspace, the goal is to make the high-resolution and low-resolution training image sets as similar as possible. In addition, because low-resolution face images lose high-frequency information, the high-resolution set can contribute additional relational information that reduces identification errors. The subspace is further designed to minimize misclassification between classes, giving the learned subspace better discriminative power. Experiments on the Yale B face database and the Honda/UCSD video database verify the correctness of the method.
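The coupled-subspace idea, in its simplest linear form, learns mappings so that paired high- and low-resolution features become directly comparable. The sketch below is a heavily simplified stand-in (a ridge-regression mapping from LR features to the HR feature space followed by nearest-neighbour matching); it is not the thesis's discriminant formulation, and all names are illustrative.

```python
import numpy as np

def learn_lr_to_hr_map(X_lr, X_hr, lam=1e-3):
    """Ridge regression W so that X_lr @ W approximates X_hr (rows = samples)."""
    d = X_lr.shape[1]
    W = np.linalg.solve(X_lr.T @ X_lr + lam * np.eye(d), X_lr.T @ X_hr)
    return W

def classify(x_lr_probe, W, X_hr_gallery, labels):
    """Project an LR probe into the HR feature space, then nearest neighbour."""
    x_proj = x_lr_probe @ W
    dists = np.linalg.norm(X_hr_gallery - x_proj, axis=1)
    return labels[np.argmin(dists)]
```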
APA, Harvard, Vancouver, ISO, and other styles
27

Yip, Andrew, and Pawan Sinha. "Role of color in face recognition." 2001. http://hdl.handle.net/1721.1/7266.

Full text
Abstract:
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang-Ting Chou and 周暘庭. "Low Resolution Face Recognition Using Image Data Multi-Extraction Approaches." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/04560411143087203272.

Full text
Abstract:
Doctoral dissertation
National Cheng Kung University
Institute of Computer and Communication Engineering
104
Machine learning and computer vision have been widely applied in our daily lives. In this dissertation, we focus on face recognition algorithms that remain effective under adverse conditions such as varying environments, limited image information, and irregular capture situations; in particular, the low-resolution problem in face recognition arises in video surveillance applications. Because detailed information is lost, low resolution degrades recognition performance dramatically. To overcome this problem, we propose novel face recognition systems based on image data multi-extraction techniques, including multi-size discrete cosine transform, multi-component generalized linear regression, and kernel regression classification. First, in order to extract more information from a low-resolution face image, we propose extracting feature vectors with multi-size discrete cosine transforms (mDCT) and recognizing with selective Gaussian mixture models (sGMM). The mDCT extracts sufficient visual features from low-resolution face images, while the sGMM excludes unreliable observation features in the recognition phase; together they greatly improve the recognition rate under low-resolution conditions. Experiments are carried out on the GT and AR face databases at image resolutions of 16×16 and 12×12 pixels, and the simulation results show that the proposed system outperforms existing methods for low-resolution face recognition. Secondly, we propose a generalized linear regression classification (GLRC) to fully exploit the information in the multiple components of input images, since image capture devices routinely acquire color information. The proposed GLRC achieves a globally adaptive weighted optimization of linear regression classification, which automatically exploits the most distinctive components for recognition. For color identity recognition, we also suggest several similarity measures for the proposed GLRC, tested in different color spaces. Experiments are conducted on two object datasets and two face databases at an image size of 20×20 pixels: COIL-100, SOIL-47, SDUMLA-HMT, and FEI. For performance comparison, the GLRC approach is compared with contemporary popular methods including color PCA, color LDA, color CCA, LRC, RLRC, SRC, color LRC, color RLRC, and color SRC; simulation results demonstrate that the proposed GLRC method achieves the best performance in multi-component identity recognition. Finally, a novel class-specific kernel regression classification is proposed for face recognition under very low resolution and severe illumination variation. Since the low-resolution problem coupled with illumination variation makes the data distribution ill-posed, the nonlinear projection rendered by a kernel function enhances the modeling capability of linear regression for such data. Explicit knowledge of the nonlinear mapping function is avoided by using the kernel trick, and to reduce nonlinear redundancy a low rank-r approximation is suggested to make the kernel projection feasible for classification. With the proposed class-specific kernel projection combined with linear regression classification, the class label is determined by calculating the minimum projection error.
Experiments on 8×8 and 8×6 images down-sampled from extended Yale B, FERET and AR facial databases reveal that the proposed algorithm outperforms the state-of-the-art methods under severe illumination variation and very low resolution conditions.
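Both GLRC and the class-specific kernel regression classifier build on the basic linear regression classification (LRC) rule: represent the probe as a linear combination of each class's gallery images and choose the class with the smallest reconstruction error. Below is a minimal LRC sketch on vectorised, down-sampled faces; it is illustrative only, and the dissertation's colour-weighted and kernelised variants are not reproduced.

```python
import numpy as np

def lrc_predict(probe, gallery_by_class):
    """Class-specific least-squares reconstruction; return the best class label.

    probe:            1-D vector (a vectorised, e.g. 8x8, face image)
    gallery_by_class: dict label -> matrix whose columns are that class's
                      vectorised training images
    """
    best_label, best_err = None, np.inf
    for label, X in gallery_by_class.items():
        # least-squares coefficients beta minimising ||probe - X beta||
        beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
        err = np.linalg.norm(probe - X @ beta)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```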
APA, Harvard, Vancouver, ISO, and other styles
29

Jarudi, Izzat N., and Pawan Sinha. "Relative Contributions of Internal and External Features to Face Recognition." 2003. http://hdl.handle.net/1721.1/7274.

Full text
Abstract:
The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
APA, Harvard, Vancouver, ISO, and other styles
30

Yang-Ting Chou and 周暘庭. "Low Resolution Face Recognition by Using Variable Block DCT and Selective Likelihood GMM." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/72985187521051226601.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
100
The low-resolution problem in face recognition, which often occurs in video surveillance applications, degrades recognition performance dramatically. To overcome it, this thesis proposes a novel face recognition system that collects observation vectors extracted with a variable block discrete cosine transform (VB_DCT) and recognizes the identity using selective likelihood Gaussian mixture modeling (SL_GMM). The VB_DCT extends the observation vectors from small local views to global views of low-resolution faces, while the SL_GMM excludes insignificant local features during the recognition phase, improving detection performance significantly. Experiments carried out on the ORL and AR databases, subsampled to 12×12 pixels, show that the proposed method achieves better performance for low-resolution face recognition, even under partial occlusion.
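A rough sketch of multi-size block DCT feature extraction in the spirit of VB_DCT is shown below. SciPy's DCT is assumed; the block sizes and the number of retained coefficients are illustrative, and the simple row-major coefficient selection stands in for a proper zig-zag scan.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def variable_block_dct_features(img, block_sizes=(4, 6, 12), n_coeffs=9):
    """Low-frequency DCT coefficients from blocks of several sizes."""
    h, w = img.shape
    feats = []
    for b in block_sizes:
        for y in range(0, h - b + 1, b):
            for x in range(0, w - b + 1, b):
                coeffs = dct2(img[y:y + b, x:x + b])
                # keep a few low-order coefficients (simplified selection)
                feats.append(coeffs.flatten()[:n_coeffs])
    return np.array(feats)   # one observation vector per block

# toy usage on a 12x12 "face"
obs = variable_block_dct_features(np.random.rand(12, 12))
print(obs.shape)
```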
APA, Harvard, Vancouver, ISO, and other styles
31

Sanyal, Soubhik. "Discriminative Descriptors for Unconstrained Face and Object Recognition." Thesis, 2017. http://etd.iisc.ac.in/handle/2005/4177.

Full text
Abstract:
Face and object recognition is a challenging problem in the field of computer vision. It deals with identifying faces or objects from an image or video. Due to its numerous applications in biometrics, security, multimedia processing, on-line shopping, psychology and neuroscience, automated vehicle parking systems, autonomous driving and machine inspection, it has drawn attention from a lot of researchers. Researchers have studied different aspects of this problem. Among them, pose-robust matching is a very important problem with applications such as recognizing faces and objects in uncontrolled scenarios, in which images appear in a wide variety of pose and illumination conditions, often at low resolution. In this thesis, we propose three discriminative pose-free descriptors, the Subspace Point Representation (DPF-SPR), Layered Canonical Correlated (DPF-LCC) and Aligned Discriminative Pose Robust (ADPR) descriptors, for matching faces and objects across pose. They are also robust for recognition at low resolution and under varying illumination. We use training examples at very few poses to generate virtual intermediate pose subspaces. An image is represented by a feature set obtained by projecting its low-level features onto these subspaces. In this way we gather more information about unseen poses by generating synthetic data, making our features more robust to unseen pose variations. We then apply a discriminative transform to make this feature set suitable for recognition, which generates two of our descriptors, DPF-SPR and DPF-LCC. In one approach, we transform the feature set to a vector using a subspace-to-point representation technique, generating the DPF-SPR descriptor. In the second approach, layered structures of canonically correlated subspaces are formed, onto which the feature set is projected, generating the DPF-LCC descriptor. In a third approach, we first align the remaining subspaces with the frontal one before learning the discriminative metric and concatenate the aligned discriminative projected features to generate ADPR. Experiments on recognizing faces and objects across varying pose are conducted on the MultiPIE and Surveillance Cameras Face databases for face recognition and on the COIL-20 and RGB-D datasets for object recognition. We show that our approaches can even improve the recognition rate over state-of-the-art deep learning approaches. We also perform extensive analysis of the three descriptors to obtain a better qualitative understanding, and compare with the state of the art to show the effectiveness of the proposed approaches.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Tai-Yun, and 李黛雲. "Face recognition under low illumination." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/77802921778252892191.

Full text
Abstract:
Master's thesis
National Chengchi University
Department of Computer Science
91
The main objective of this thesis is to develop a face recognition system that can recognize human faces even when the surrounding environment is totally dark. Images of objects in total darkness can be captured using a relatively low-cost camcorder with the NightShot® function. By overcoming the illumination factor, a face recognition system can continue to function independently of the surrounding lighting conditions. However, the acquired images exhibit non-uniformity due to irregular illumination, so current face recognition systems cannot be used on them directly. In this thesis, we first investigate the characteristics of NIR images and propose an image formation model. A homomorphic processing technique built upon this image model is then developed to reduce the artifacts in the captured images. We then conduct experiments showing that existing holistic face recognition systems perform poorly on NIR images. Finally, a more robust feature-based method is proposed to achieve a better recognition rate under low illumination. A nearest-neighbor classifier using the Euclidean distance is employed to recognize familiar faces from a database. The feature-based recognition method we developed achieves a recognition rate of 75% on a database of 32 people, with one sample image per subject.
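Homomorphic processing of the kind mentioned above is commonly implemented by filtering in the log domain so that slowly varying illumination is suppressed while reflectance detail is boosted. The sketch below is a generic version of that idea; the transfer function and parameters are illustrative, not those used in the thesis.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=1.5, sigma=30.0):
    """Suppress slowly varying illumination, boost reflectance detail."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    log_img = np.log(img)
    F = np.fft.fftshift(np.fft.fft2(log_img))

    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2) ** 2 + (x - w / 2) ** 2
    # high-emphasis transfer function: gamma_l at DC, gamma_h at high freqs
    H = (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * sigma ** 2))) + gamma_l

    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    out = np.exp(filtered) - 1.0
    return (out - out.min()) / (out.max() - out.min())   # rescale to [0, 1]
```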
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Shih-Rong, and 王仕融. "Low-cost face recognition system." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/qu53ce.

Full text
Abstract:
Master's thesis
Chien Hsin University of Science and Technology
Master's Program, Department of Computer Science and Information Engineering
106
Face recognition is one of the most widely used information security management solutions today. A face recognition system usually includes image capture, face region extraction, face feature vector calculation, training and construction of a face feature vector database, and finally face recognition. In general applications, high hardware performance is needed to extract and analyze face features quickly, so it is difficult for low-end hardware devices to support face recognition applications such as access control. This study uses the free face recognition service provided by Microsoft: face photos are sent to the cloud platform over the Internet, which computes and returns the face feature vectors, greatly reducing the local image processing load. Therefore, a Raspberry Pi can serve as the terminal hardware device that realizes the functions required by the face recognition system. The system hardware consists of a Raspberry Pi 3, a Raspberry Pi Camera Module, and network devices; the software stack comprises Raspbian OS, Python 3, and OpenCV as the development environment. The implemented functions include face capture, network access to the Microsoft facial recognition service, face feature extraction and analysis, face feature vector database construction, and face recognition with access control management, covering the data collection, training, testing, and deployment stages required for machine learning.
APA, Harvard, Vancouver, ISO, and other styles
34

Chang, C. K., and 張嘉鍇. "Contour Recognition on Low-resolution Hexagonal Images." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/00277939775797570818.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Mechanical Engineering
87
PC imaging systems can now easily display resolutions of 800×600, 1024×768, and even 1280×1024, so most image research emphasizes high resolution, for example face and fingerprint identification. However, there is still room for development at low resolution; low storage requirements are one of the advantages of a low-resolution system. Research on hexagonal grids shows that, at a microscopic scale, the angular resolution and connectivity of a hexagonal grid are better than those of a rectangular grid, so hexagonal grid images also have better quality. In contrast, when the spatial resolution is high there is little difference between the two systems in display and processing. We therefore suggest using the hexagonal grid for low-resolution images. We also develop a Curve Bend Function suited to hexagonal grid images and use it to extract object contour features, and we discuss its use on low-resolution images to support further development of the Curve Bend Function on low-resolution hexagonal images. Finally, we compare low-resolution images on rectangular and hexagonal grids.
APA, Harvard, Vancouver, ISO, and other styles
35

Lin, Horng-Horng, and 林泓宏. "Recognition of Printed Digits of Low Resolution." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/38980470503213984112.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
87
In this paper, we develop an on-line inspection system for invoice printing and introduce techniques to automatically recognize the printed digits on invoices. The poor quality of an invoice image, the textured background, and the low resolution of each printed digit make the task challenging. To overcome these problems, a robust method based on minimal gray-level analysis is proposed to preprocess the invoice images; the preprocessing includes number block extraction, textured background removal, and digit image enhancement. For each 8×8-pixel digit image, a row-based method is developed to extract digit features, which are then fed to a tree classifier for recognition. For the prototype system developed in the laboratory, a recognition rate of 95% is achieved.
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Jun-Hao, and 陳鈞豪. "Recognition of Very Low Resolution License Plate." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/62735437293716715114.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's Program, Department of Electrical Engineering
98
Current license plate recognition systems are mostly applied to stolen-vehicle management, inspections, stolen-vehicle investigation, and electronic toll collection, but they are rarely used on vehicles involved in crimes. The main reason is that when a major case occurs, the police can rely only on roadside cameras to review the recorded video, and such footage is usually of low resolution; often even the human eye cannot resolve the plate numbers, while license plate recognition systems are limited to clear characters and cannot distinguish very blurry ones. This thesis therefore focuses on how to effectively raise the recognition rate of extremely blurry license plates. A license plate recognition system is usually divided into two closely related parts, a plate localization system and a character recognition system, and a failure in either part reduces the overall recognition rate. First, since most camera footage is too blurry for the characters to be read directly, the human eye can still discern the approximate extent of the plate; the plate region is therefore cropped manually and resized to a suitable magnification using image-editing software. Second, convolving a simulated license plate image with a two-dimensional point spread function (PSF) yields a blurry simulated plate, which can then be compared with the actual photographed plate to measure their difference. The recognition results show that there is still room to improve the recognition rate of mixed digit-and-letter combinations, but the recognition rate for digit combinations reaches about 90%. How to search effectively for the best point spread function and how to improve the recognition rate of mixed digit-and-letter combinations will be important topics for future research.
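The central idea, blurring a clean plate or character template with a candidate point spread function and comparing it with the observed low-resolution plate, can be sketched as follows. A Gaussian PSF and normalised correlation are used here purely for illustration; the thesis's actual PSF search strategy is not reproduced.

```python
import numpy as np
import cv2

def gaussian_psf(size=7, sigma=2.0):
    """Normalised 2-D Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def score_template(blurry_plate, clean_template, sigma):
    """Blur the clean template with a candidate PSF and correlate with the observation."""
    sim = cv2.filter2D(clean_template, -1, gaussian_psf(sigma=sigma))
    sim = cv2.resize(sim, (blurry_plate.shape[1], blurry_plate.shape[0]))
    a = (blurry_plate - blurry_plate.mean()).ravel()
    b = (sim - sim.mean()).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```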
APA, Harvard, Vancouver, ISO, and other styles
37

Wang, Yu-Chun, and 王佑鈞. "Low Resolution Feature Evaluation and Appliance Recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/33705527791850047823.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Engineering Science and Ocean Engineering
99
Appliance state recognition distinguishes the status of each appliance through smart meters and reduces energy consumption by providing residents with energy information. However, most studies extract features without evaluating them and may not achieve the best efficiency of their algorithms. In addition, costly sensors and difficult deployment frustrate residents and reduce usability. In this thesis, appliance power-consumption features are evaluated with four evaluation functions (Euclidean distance measure, fuzzy entropy, Max-Relevance, and mRMR) to find the best low-resolution feature for appliance state recognition. To reduce cost, low-resolution feature data are used as the input of a non-intrusive load monitoring (NILM) system, and a method for predicting appliance-combination data is provided to avoid exhaustive training and decrease the training effort required of the user. To improve accuracy, the weight parameters in the algorithm are adjusted by comparison with the previous result. The experimental results show that the variance of current in the frequency domain performs best when a single feature is used. For multi-dimensional features, the subset composed of the variance of current in the frequency domain, the minimum variance ratio of inactive power in the time domain, the average power factor in the frequency domain, and the average apparent power in the frequency domain scores highest in the feature evaluation. In appliance state recognition, the proposed algorithm reaches about 80% joint accuracy on two datasets using the average active power and average apparent power as the feature subset.
APA, Harvard, Vancouver, ISO, and other styles
38

Tsai, Chung-Song, and 蔡忠松. "Low Resolution Infrared Image Recognition for Home Security." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/37092771919627391161.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Electrical Engineering
88
This thesis presents a recognition algorithm for low-resolution (64×64 pixel) infrared images applied to a home security system. The recognition targets are humans, dogs, and cats, in keeping with the home security application. The algorithm comprises three stages: pre-processing, feature extraction, and statistical pattern recognition. In the pre-processing stage, we threshold the image and label the object; the purpose is to filter out noise and extract the object to be recognized. Next, we extract three features from that object: the standard deviations of the vertical and horizontal projection histograms and the ratio of area to perimeter. Finally, we build a Gaussian probability model and use it to recognize the object with a statistical pattern recognition method. The recognition results show that the algorithm can effectively discriminate humans from dogs or cats. We also design a digital-signal-processor-based circuit to meet the needs of the home security system and implement the recognition algorithm on it.
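The three features named above can be computed directly from the thresholded object mask. A small sketch follows (OpenCV 4 or later is assumed for the contour call, and all names are illustrative):

```python
import numpy as np
import cv2

def ir_object_features(mask):
    """mask: binary image (0/1) containing a single labelled object."""
    # projection histograms: number of object pixels per column / per row
    vert_proj = mask.sum(axis=0)
    horz_proj = mask.sum(axis=1)

    # contour of the blob gives the perimeter (OpenCV >= 4 return signature)
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    cnt = max(contours, key=cv2.contourArea)
    area = float(mask.sum())
    perimeter = cv2.arcLength(cnt, True)

    return np.array([vert_proj.std(), horz_proj.std(), area / (perimeter + 1e-9)])
```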
APA, Harvard, Vancouver, ISO, and other styles
39

Liao, Yu-Chia, and 廖昱嘉. "Recognition of Low-resolution Vehicle License Plate Images." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/51874237864667469639.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Multimedia Engineering
101
Although many studies have addressed vehicle license plate (VLP) image recognition in recent years, recognition of low-resolution VLP images is still deficient. The proposed method focuses on recognizing low-resolution VLP images; it can handle very small images, and only a single VLP image is needed. First, hyphen detection and character position estimation are applied to a manually cropped, severely blurred VLP image. Then single-character template matching is performed at the estimated positions. Finally, the recognition results are refined by expanding the single-character templates into multiple-character templates. Experimental results show that the proposed method is effective for recognizing low-resolution VLP images, which is helpful for locating a suspect vehicle in low-resolution footage during crime investigation.
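Single-character template matching at the estimated positions can be sketched with OpenCV's normalised cross-correlation; the multi-character refinement step of the thesis is omitted, and the helper below is purely illustrative.

```python
import cv2

def best_character(char_patch, templates):
    """templates: dict character -> grayscale (uint8) template image.
    Returns the character whose template correlates best with the patch."""
    scores = {}
    for ch, tmpl in templates.items():
        # resize so the template and the observed patch are directly comparable
        tmpl = cv2.resize(tmpl, (char_patch.shape[1], char_patch.shape[0]))
        res = cv2.matchTemplate(char_patch, tmpl, cv2.TM_CCOEFF_NORMED)
        scores[ch] = float(res.max())
    return max(scores, key=scores.get)
```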
APA, Harvard, Vancouver, ISO, and other styles
40

Chan, Ming-Da, and 詹明達. "False reduction and super-resolution for face detection and recognition." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/37082202051549081823.

Full text
Abstract:
Master's thesis
National Kaohsiung University of Applied Sciences
Institute of Photonics and Communications
96
Face detection has been receiving extensive attention with the increasing demand for locating faces correctly. One difficulty in face detection is that performance is significantly influenced by the many variables involved in locating faces, e.g., lighting, pose, facial expression, glasses, and cluttered backgrounds, which vary with the environmental factors of face image acquisition. Since a false acceptance resulting from a false declaration causes more damage than a false rejection, this thesis presents a novel false-face reducer based on the facial T-shape region, formed by the eyes, nose, and mouth, to reduce the false acceptance rate. The basic steps of the technique are lighting compensation, normalization, and T-eigenface analysis. We then genetically select the discriminative T-eigenface subset with the proposed fitness function, using leave-one-out cross-validation and a Mahalanobis classifier. The thesis describes a set of experiments on the BANCA G1, BioID, JAFFE, ABC news photo, and our own picture datasets to evaluate the recall and precision rates with and without false-face reduction. The experimental results are as follows: (1) On BANCA G1, the recall rate decreases from 97.67% to 95.89%, while the precision rate improves from 86.81% to 97.97%, with 188 false acceptances removed. (2) On BioID, the recall rate decreases from 94.93% to 88.09%, while the precision rate improves from 89.36% to 98.84%, with 150 false acceptances removed. (3) On JAFFE, the recall and precision rates improve from 100% and 99.53% to both 100%, with no false acceptance. (4) On the ABC dataset, the recall rate decreases from 91.46% to 69.4%, while the precision rate improves from 91.83% to 95.39%, with 141 false acceptances removed. (5) On our own pictures, 11 images with 65 faces, the recall rate decreases from 84.61% to 81.53%, while the precision rate improves from 35.35% to 92.98%, with 97 false acceptances removed. This performance shows that the reducer largely excludes false acceptances during face detection and demonstrates the potential for disseminating the research. Although the reducer may cause some true faces to be falsely rejected, this cost is worthwhile given the reduction in false acceptances. Compared with well-known related methodologies such as Betaface, Pittsburgh Pattern Recognition, and IDIAP, the high reliability of the approach is validated by further experiments.
APA, Harvard, Vancouver, ISO, and other styles
41

Huang, Ching-Ning, and 黃靖甯. "Face Recognition Based on Low-Rank Matrices Recovery." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/90490420768963192498.

Full text
Abstract:
Doctoral dissertation
Da-Yeh University
Department of Electrical Engineering
103
Research on face recognition began in the 1960s, and performance has greatly improved with the development of semiconductor processes and optical imaging techniques. The key to successful face recognition lies in well-designed algorithms that achieve a satisfactory recognition rate at a practical recognition speed. Numerous classical algorithms have been developed to meet both requirements; however, most still cannot deal well with scenarios in which the training and/or testing probe images are damaged or imperfect. In recent years, with the progress of mathematical methods, linear representation (linear combination) methods have also advanced significantly, giving such algorithms the ability to handle face images that are impaired or contaminated by undesirable noise. The most popular of these algorithms are sparse representation based classification (SRC) and collaborative representation based classification (CRC). SRC uses a sparse representation with L1-norm minimization and therefore achieves a relatively better recognition rate but poor recognition speed, whereas CRC uses a collaborative representation with L2-norm minimization and therefore achieves a satisfactory recognition speed but a less satisfactory recognition rate, particularly when face images are impaired or contaminated by undesirable noise. Thus neither SRC nor CRC can simultaneously satisfy both requirements when face images are damaged or contain undesirable outliers. To solve these problems, this thesis employs CRC as the classification method and proposes a novel method called adaptable dense representation (ADR), which uses both sparse representation and low-rank matrix recovery to represent the training samples, thereby compensating for CRC's poor performance when face images are imperfect. Experimental results on the AR database with variations in expression, illumination, and disguise show a recognition rate of 68.5% when CRC alone is used, and 90.6% when both CRC and ADR are adopted, indicating that the proposed method effectively enhances CRC's ability in identity authentication when face images are impaired or contain undesirable outliers.
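CRC codes a probe over the whole training dictionary with an L2-regularised (ridge) fit, which has a closed-form solution, and then assigns the class with the smallest class-wise reconstruction residual. A compact sketch of that baseline follows; the ADR / low-rank recovery step proposed in the thesis is not shown, and all names are illustrative.

```python
import numpy as np

def crc_predict(probe, D, labels, lam=1e-3):
    """D: dictionary with one training sample per column;
    labels: 1-D NumPy array with one class label per column of D."""
    n = D.shape[1]
    # closed-form ridge (collaborative) coding of the probe
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ probe)

    best_label, best_res = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        residual = np.linalg.norm(probe - D[:, idx] @ alpha[idx])
        if residual < best_res:
            best_label, best_res = c, residual
    return best_label
```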
APA, Harvard, Vancouver, ISO, and other styles
42

Lee, Chen-han, and 李承翰. "An Active Human-Machine Interface based on Multi-Resolution Face Recognition." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/7pktxw.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
99
We propose to perform face recognition with an active, multi-resolution approach. The design target is to provide a real-time, natural (non-contact) human interface for an IPTV system. For face image features, both the local binary pattern and the local directional pattern are adopted for robustness. These features and their histogram statistics are extracted from a face image decomposed into nine blocks of different sizes; the decomposed block regions are determined from an average face, from which fast, robust features for recognition can be obtained. Histogram statistics are extracted from each region individually and then concatenated to yield the final feature vector, and a weighted chi-square measure is used for face recognition. Experiments verify that the proposed active face recognition method is insensitive to changing facial expressions and achieves higher recognition accuracy and a lower false positive rate; it can also be applied to gender recognition. To develop an active human-machine interface in which face recognition must be carried out in a multi-resolution manner, we propose a three-dimensional (3D) histogram comprising histogram statistics across both the time and space dimensions. The most distinctive feature of the proposed method is that it performs face recognition well when people are at different distances from the camera. In addition, the weighting factors are updated during the recognition process and the discriminative features are selected through a learning algorithm, so the system can maintain stable recognition accuracy.
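The weighted chi-square comparison of concatenated block histograms can be sketched as follows; the per-block weights here are illustrative, whereas in the thesis they are learned and updated during recognition.

```python
import numpy as np

def weighted_chi_square(h1, h2, block_weights, block_size):
    """h1, h2: concatenated per-block histograms; one weight per block."""
    dist = 0.0
    for b, w in enumerate(block_weights):
        a = h1[b * block_size:(b + 1) * block_size]
        c = h2[b * block_size:(b + 1) * block_size]
        # chi-square distance for this block, weighted by its importance
        dist += w * np.sum((a - c) ** 2 / (a + c + 1e-10))
    return dist
```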
APA, Harvard, Vancouver, ISO, and other styles
43

Shih, Yu-chun, and 施佑駿. "Face Recognition System Based on Local Regions and Multi-resolution Analysis." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/14143728708260194735.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
98
In recent years, with the development of biometric applications, identity recognition using biometric information has become one of the hottest research fields. Face recognition has attracted more and more attention because of its unobtrusive, non-contact nature and its wide range of applications, such as security monitoring and human-computer interaction. However, face recognition remains a most challenging research area because, in uncontrolled environments, the appearance of a face is deformed by variations in illumination, expression, pose, occlusion, and so on. In this research, we mainly use a local-region and multi-resolution analysis strategy to reduce the impact of illumination, expression, and pose variations. The feature extraction method is based on Local Binary Patterns (LBP); furthermore, we propose using Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) to extract features that include both spatial-domain and multi-scale information, so that the extracted features are discriminative and robust. For feature matching, the histogram-based similarity metric is replaced with a local distance transform, which further improves performance and, in some cases, reduces the feature dimension. The proposed method is evaluated on the ORL database and a self-built database, and experimental results demonstrate its good performance.
APA, Harvard, Vancouver, ISO, and other styles
44

"A generative learning method for low-resolution character recognition." Thesis, 2009. http://hdl.handle.net/2237/11663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ishida, Hiroyuki, and 皓之 石田. "A generative learning method for low-resolution character recognition." Thesis, 2009. http://hdl.handle.net/2237/11663.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Chia-Chih 1979. "Recognizing human activities from low-resolution videos." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-12-4621.

Full text
Abstract:
Human activity recognition is one of the intensively studied areas in computer vision. Most existing works do not assume video resolution to be a problem due to the general applications of interest. However, with continuous concerns about global security and emerging needs for intelligent video analysis tools, activity recognition from low-resolution and low-quality videos has become a crucial topic for further research. In this dissertation, we present a series of approaches which are developed specifically to address the related issues regarding low-level image preprocessing, single person activity recognition, and human-vehicle interaction reasoning from low-resolution surveillance videos. Human cast shadows are one of the major issues which adversely affect the performance of an activity recognition system. This is because human shadow direction varies depending on the time of the day and the date of the year. To better resolve this problem, we propose a shadow removal technique which effectively eliminates a human shadow cast from a light source of unknown direction. A multi-cue shadow descriptor is employed to characterize the distinctive properties of shadows. Our approach detects, segments, and then removes shadows. We propose two different methods to recognize single person actions and activities from low-resolution surveillance videos. The first approach adopts a joint feature histogram based representation, which is the concatenation of subspace projected gradient and optical flow features in time. However, in this problem, the use of low-resolution, coarse, pixel-level features alone limits the recognition accuracy. Therefore, in the second work, we contribute a novel mid-level descriptor, which converts an activity sequence into simultaneous temporal signals at body parts. With our representation, activities are recognized through both the local video content and the short-time spectral properties of body parts' movements. We draw analogies between activity and speech recognition and show that our speech-like representation and recognition scheme improves recognition performance in several low-resolution datasets. To complete the research on this subject, we also tackle the challenging problem of recognizing human-vehicle interactions from low-resolution aerial videos. We present a temporal logic based approach which does not require training from event examples. At the low level, we employ dynamic programming to perform fast model fitting between the tracked vehicle and the rendered 3-D vehicle models. At the semantic level, given the localized event region of interest (ROI), we verify the time series of human-vehicle spatial relationships with the pre-specified event definitions in a piecewise fashion. Our framework can be generalized to recognize any type of human-vehicle interaction from aerial videos.
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Yang-Kai, and 張揚凱. "A Fast Facial Expression Recognition Method at Low-Resolution Images." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/22771910057319138983.

Full text
Abstract:
Master's thesis
Chung Hua University
Master's Program, Department of Computer Science and Information Engineering
94
In this thesis, we propose a novel image-based facial expression recognition method called "expression transition" to identify six kinds of facial expressions (anger, fear, happiness, neutral, sadness, and surprise) in low-resolution images. Two approaches, direct mapping and singular value decomposition (SVD), are applied to calculate the expression transition matrices. Boosted tree classifiers and template matching are used to locate and crop the effective face region that characterizes facial expressions. The facial images transformed with the set of expression transition matrices are then compared to identify the facial expressions. The experimental results show that the proposed facial expression recognition system recognizes 120 test facial images from the Cohn-Kanade facial expression database with high accuracy and efficiency.
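An expression-transition matrix of the direct-mapping kind can be estimated by least squares between paired face vectors, with the SVD entering through the pseudo-inverse. The following is a much-simplified sketch with illustrative names only, not the thesis's exact formulation.

```python
import numpy as np

def learn_transition(X_neutral, X_expr):
    """Columns of X_neutral / X_expr are paired vectorised face images.
    Returns T such that T @ X_neutral approximates X_expr (least squares)."""
    return X_expr @ np.linalg.pinv(X_neutral)   # pinv is computed via SVD

def classify_expression(probe_neutral, probe_expr, transitions):
    """Pick the expression whose transition matrix best explains the change."""
    errs = {name: np.linalg.norm(T @ probe_neutral - probe_expr)
            for name, T in transitions.items()}
    return min(errs, key=errs.get)
```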
APA, Harvard, Vancouver, ISO, and other styles
48

Pan, Yi-An, and 潘奕安. "Automatic Facial Expression Recognition System in Low Resolution Image Sequence." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/98430729414018113883.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and Doctoral Program)
92
According to predictions by computer science professionals in PC Magazine, "computers will become more human" is expected to be among the foremost developments in the computing industry in the near future, and automatic facial expression recognition is a key technology for approaching this goal. A fully automatic facial expression recognition system is proposed in this thesis, consisting of four parts: color- and geometry-based face detection, facial feature extraction including point and regional features, optical-flow-based key-frame selection, and expression recognition by a fuzzy neural network. For face detection, instead of the conventional method of detecting an elliptic skin-color region, a novel approach that searches for facial features and examines their triangular geometric relationship is proposed to confirm the exact facial area. For facial feature extraction, a multi-feature mechanism is presented, including optical flow (describing motion), feature points (describing the feature distribution), and invariant moments (representing regional information), to achieve high identification efficiency. By combining fuzzy logic with a neural network, a fuzzy neural network is proposed for the classification process. Experimental results show that, owing to the proposed key-frame selection mechanism, the recognition system operates only when the expression changes rather than frame by frame, which greatly reduces recognition time. Moreover, since a key frame appears only at the maximal intensity of a facial expression, it raises the recognition rate and mitigates misclassification caused by indistinct features.
APA, Harvard, Vancouver, ISO, and other styles
49

Baptista, Renato Manuel Lemos. "Face Recognition in Low Quality Video Images via Sparse Encoding." Master's thesis, 2013. http://hdl.handle.net/10316/40440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Freitas, Tiago Daniel Santos. "3D Face Recognition Under Unconstrained settings using Low-Cost Sensors." Master's thesis, 2016. https://repositorio-aberto.up.pt/handle/10216/84513.

Full text
APA, Harvard, Vancouver, ISO, and other styles