Dissertations / Theses on the topic 'Eye detection'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Eye detection.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Hossain, Akdas, and Emma Miléus. "Eye Movement Event Detection for Wearable Eye Trackers." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129616.
Trejo Guerrero, Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.
In this thesis we present a system that extracts the eye motion from a video stream containing a human face and applies this eye motion to a virtual character. By eye motion estimation we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion in the virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, detection and extraction of the eyes is applied; this step is based on edge detection. The iris centre is then determined by applying image preprocessing and region segmentation using edge features on the extracted eye image.
Once the eye motion has been extracted, it is translated, using MPEG-4 Facial Animation, into Facial Animation Parameters (FAPs). In this way we can improve the quality and quantity of the facial animation expressions that can be synthesized in a virtual character.
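As an illustration of the iris-localization step described in the abstract above, the following is a minimal sketch of one common way to estimate an iris centre inside an already-cropped eye region (threshold the darkest pixels and take their centroid). It assumes OpenCV and NumPy; the function name, threshold and fallback are illustrative and not taken from the thesis, whose own method relies on edge features and region segmentation.

```python
# Illustrative sketch only: estimate an iris centre inside an already-cropped
# eye image by thresholding the darkest pixels and taking their centroid.
import cv2
import numpy as np

def iris_center(eye_gray: np.ndarray) -> tuple[int, int]:
    """Return (x, y) of the estimated iris centre in a grayscale eye crop."""
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)           # suppress noise
    # Keep the darkest ~10% of pixels; the iris/pupil is usually darkest.
    thresh = np.percentile(blurred, 10)
    mask = (blurred <= thresh).astype(np.uint8)
    moments = cv2.moments(mask, binaryImage=True)
    if moments["m00"] == 0:                                   # nothing dark enough
        return eye_gray.shape[1] // 2, eye_gray.shape[0] // 2
    return int(moments["m10"] / moments["m00"]), int(moments["m01"] / moments["m00"])
```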
Miao, Yufan. "Landmark Detection for Mobile Eye Tracking." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-301499.
Bandara, Indrachapa Buwaneka. "Driver drowsiness detection based on eye blink." Thesis, Bucks New University, 2009. http://bucks.collections.crest.ac.uk/9782/.
Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.
Anderson, Travis M. "Motion detection algorithm based on the common housefly eye." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400965531&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.
Samadzadegan, Sepideh. "Automatic and Adaptive Red Eye Detection and Removal : Investigation and Implementation." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77977.
Full textVidal, Diego Armando Benavides. "A Kernel matching approach for eye detection in surveillance images." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/24112.
Full textSubmitted by Raquel Almeida (raquel.df13@gmail.com) on 2017-06-27T13:16:54Z No. of bitstreams: 1 2016_DiegoArmandoBenavidesVidal.pdf: 6256311 bytes, checksum: 032b7fb7441d8dc32be590f67a1be876 (MD5)
Approved for entry into archive by Raquel Viana (raquelviana@bce.unb.br) on 2017-08-15T10:55:01Z (GMT) No. of bitstreams: 1 2016_DiegoArmandoBenavidesVidal.pdf: 6256311 bytes, checksum: 032b7fb7441d8dc32be590f67a1be876 (MD5)
Made available in DSpace on 2017-08-15T10:55:01Z (GMT). No. of bitstreams: 1 2016_DiegoArmandoBenavidesVidal.pdf: 6256311 bytes, checksum: 032b7fb7441d8dc32be590f67a1be876 (MD5) Previous issue date: 2017-08-15
Eye detection is an open research problem to be solved efficiently by face detection and human surveillance systems. Accuracy and computational cost must both be considered for a successful approach. We describe an integrated approach that takes the ROIs output by a Viola and Jones detector, constructs HOG features on them, and learns a special function to map these features to a higher-dimensional space where detection achieves better accuracy. This mapping follows the efficient kernel-matching approach, which was shown to be possible but had not been applied to this problem before. A linear SVM is then used as the classifier for eye detection on the mapped features. Extensive experiments on different databases show that the proposed method achieves higher accuracy than the Viola and Jones detector with little added computational cost. The approach can also be extended to deal with other appearance models.
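The pipeline sketched below follows the sequence this abstract describes (Viola and Jones ROI, HOG features, an explicit kernel feature map, then a linear SVM). It assumes scikit-image and scikit-learn; the Nystroem approximation is used as a generic stand-in for the kernel-matching step and is not necessarily the mapping used in the dissertation, and all parameter values are illustrative.

```python
# Illustrative sketch: ROI -> HOG -> approximate kernel feature map -> linear SVM.
import numpy as np
from skimage.feature import hog
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def hog_features(patches: list[np.ndarray]) -> np.ndarray:
    """Compute HOG descriptors for a list of equally sized grayscale ROIs."""
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_eye_classifier(patches, labels):
    """patches: candidate eye / non-eye ROIs cut from face detections; labels: 1 or 0."""
    X = hog_features(patches)
    model = make_pipeline(
        Nystroem(kernel="rbf", n_components=500, random_state=0),  # kernel feature map
        LinearSVC(C=1.0),                                          # linear SVM on mapped features
    )
    return model.fit(X, labels)
```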
Ignat, Simon, and Filip Mattsson. "Eye Blink Detection and Brain-Computer Interface for Health Care Applications." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200571.
Tesárek, Viktor. "Detekce mrkání a rozpoznávání podle mrkání očí [Blink detection and recognition based on eye blinking]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217560.
Malla, Amol Man. "Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/2111.
Mergenthaler, Konstantin K. "The control of fixational eye movements." PhD thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2939/.
Full textWährend des alltäglichen Sehens führen wir große (Sakkaden) und Miniatur- oder fixationale Augenbewegungen durch. Die visuelle Wahrnehmung unserer Umwelt geschieht jedoch maßgeblich während des sogenannten Fixierens, obwohl das Auge auch in dieser Zeit ständig in Bewegung ist. Es ist bekannt, dass die fixationalen Augenbewegungen durch die gestellten Aufgaben und die Sichtbedingungen verändert werden. Trotzdem sind die Fixationsbewegungen noch sehr schlecht verstanden, besonders auch wegen ihrer zwei konträren Hauptfunktionen: Das stabilisieren des Bildes und das Vermeiden der Ermüdung retinaler Rezeptoren. In der vorliegenden Dissertation untersuchen wir die zeitlichen und räumlichen Eigenschaften der Fixationsbewegungen, die mit hoher zeitlicher und räumlicher Präzision aufgezeichnet wurden, während die Versuchspersonen entweder einen sichtbaren Punkt oder aber den Ort eines verschwundenen Punktes in völliger Dunkelheit fixieren sollten. Zunächst führen wir einen verbesserten Algorithmus ein, der die Aufspaltung in schnelle (Mikrosakkaden) und langsame (Drift) Fixationsbewegungen ermöglicht. Den beiden Typen von Fixationsbewegungen werden unterschiedliche Beiträge zur Wahrnehmung zugeschrieben. Anschließend wird für die Zeitreihen mit und ohne Mikrosakkaden das zeitliche Skalenverhalten untersucht. Für die Fixationsbewegung während des Fixierens auf den Punkt konnten wir feststellen, dass diese sich nicht durch Brownsche Molekularbewegung beschreiben lässt. Stattdessen fanden wir persistentes Verhalten auf den kurzen und antipersistentes Verhalten auf den längeren Zeitskalen. Während die Position des Übergangspunktes für Zeitreihen mit oder ohne Mikrosakkaden gleich ist, unterscheidet sie sich generell zwischen horizontaler und vertikaler Komponente der Augen. Weitere Analysen zielen auf Eigenschaften der Mikrosakkadenrate und -amplitude, sowie Auslösemechanismen von Mikrosakkaden durch bestimmte Eigenschaften der vorhergehenden Drift ab. Mittels eines Kästchenzählalgorithmus konnten wir die zufällige Generierung (Poisson Prozess) ausschließen. Des weiteren setzten wir ein Modell auf der Grundlage einer Zufallsbewegung mit zeitverzögerter Rückkopplung für den langsamen Teil der Augenbewegung auf. Dies erlaubt uns durch den Vergleich mit den erhobenen Daten die Dauer des Kontrollkreislaufes zu bestimmen. Interessanterweise unterscheiden sich die Dauern für vertikale und horizontale Augenbewegungen, was sich jedoch dadurch erklären lässt, dass das Modell auch durch die bekannte Neurophysiologie der Sakkadengenerierung, die sich räumlich wie auch strukturell zwischen vertikaler und horizontaler Komponente unterscheiden, motiviert ist. Die erhaltenen Dauern legen für die horizontale Komponente einen externen und für die vertikale Komponente einen internen Kontrollkreislauf dar. Ein interner Kontrollkreislauf ist nur für die vertikale Kompoente bekannt. Schließlich wird das Skalenverhalten des Modells noch semianalytisch bestätigt. Zusammenfassend waren wir in der Lage, unterschiedliche Eigenschaften von Teilen der Fixationsbewegung zu identifizieren und ein Modell zu entwerfen, welches auf der bekannten Neurophysiologie aufbaut und bekannte Einschränkungen der Kontrolle der Fixationsbewegung beinhaltet.
CUBA, GYLLENSTEN OLLANTA. "Evaluation of classification algorithms for smooth pursuit eye movements : Evaluating current algorithms for smooth pursuit detection on Tobii Eye Trackers." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155899.
Full textSung, Wei-Hong. "Investigating minimal Convolution Neural Networks (CNNs) for realtime embedded eye feature detection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281338.
With the rapid rise of neural networks, many tasks that used to be difficult to solve with traditional methods can now be solved well, especially in the computer vision field. However, as the tasks we need to solve have become more and more complex, the neural networks we use become deeper and larger. Therefore, even though some embedded systems are quite powerful today, most embedded systems still suffer from memory and computational limitations, which means it is hard to deploy large neural networks on these embedded devices. This project aims to explore different methods to compress an original large model. That is, we first train a baseline model, YOLOv3[1], which is a well-known object detection network, and then use two methods to compress the baseline model. The first method is pruning by means of sparsity training, followed by channel pruning according to the scaling-factor values obtained after sparsity training. Based on this method, we carry out three explorations. First, we adopt a union mask strategy to solve the dimension problem of the shortcut-related layers in YOLOv3[1]. Second, we try to absorb the information of the shifting factors into subsequent layers. Finally, we implement layer pruning and combine it with channel pruning. The second method is pruning with NAS, which uses a deep reinforcement-learning framework to automatically find the best compression ratio for each layer. At the end of this report, we analyse the main results and conclusions of our experiments and point to future work that could potentially improve the project.
Einestam, Ragnar, and Karl Casserfelt. "PiEye in the Wild: Exploring Eye Contact Detection for Small Inexpensive Hardware." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20696.
Eye contact detection sensors have the possibility of inferring user attention, which can be utilized by a system in a multitude of different ways, including supporting human-computer interaction and measuring human attention patterns. In this thesis we attempt to build a versatile eye contact sensor using a Raspberry Pi that is suited for real-world practical usage. In order to ensure practicality, we constructed a set of criteria for the system based on previous implementations. To meet these criteria, we opted to use an appearance-based machine learning method where we train a classifier with training images in order to infer if users look at the camera or not. Our aim was to investigate how well we could detect eye contacts on the Raspberry Pi in terms of accuracy, speed and range. After extensive testing on combinations of four different feature extraction methods, we found that Linear Discriminant Analysis compression of pixel data provided the best overall accuracy, but Principal Component Analysis compression performed the best when tested on images from the same dataset as the training data. When investigating the speed of the system, we found that down-scaling input images had a huge effect on the speed, but also lowered the accuracy and range. While we managed to mitigate the effects the scale had on the accuracy, the range of the system is still relative to the scale of input images and by extension speed.
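A minimal sketch of the appearance-based approach described above, in which raw pixel vectors are compressed with PCA or LDA and fed to a classifier; scikit-learn is assumed, and the classifier choice and component counts are illustrative rather than the authors' exact configuration.

```python
# Illustrative sketch: compress flattened grayscale eye/face crops with PCA or LDA,
# then classify eye contact vs. no eye contact.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_model(compression: str = "lda"):
    """X is expected as flattened grayscale crops, y as {0, 1} labels."""
    reducer = (LinearDiscriminantAnalysis(n_components=1) if compression == "lda"
               else PCA(n_components=50))
    return make_pipeline(reducer, SVC(kernel="linear"))

# Usage: model = build_model("pca"); model.fit(X_train, y_train); model.predict(X_test)
```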
Harms Looström, Julia, and Emma Frisk. "Bird's-eye view vision-system for heavy vehicles with integrated human-detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54527.
Barbieri, Gillian Sylvia Anna-Stasia. "The role of spatial derivatives in feature detection." Thesis, University of Birmingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368742.
Pesin, Jimy. "Detection and removal of eyeblink artifacts from EEG using wavelet analysis and independent component analysis /." Online version of thesis, 2007. http://hdl.handle.net/1850/8952.
Patel, Brindal A. "R-Eye: An image processing-based embedded system for face detection and tracking." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10141532.
The current project presents the development of R-Eye, a face detection and tracking system implemented as an embedded device based on the Arduino microcontroller. The system is programmed in Python using the Viola-Jones algorithm for image processing. Several experiments designed to measure and compare the performance of the system under various conditions show that the system performs well when used with an integrated camera, reaching a 93% face recognition accuracy for a clear face. The accuracy is lower when detecting a face with accessories, such as a pair of eyeglasses (80%), or when a low-resolution low-quality camera is used. Experimental results also show that the system is capable of detecting and tracking a face within a frame containing multiple faces.
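For reference, a minimal Viola-Jones detection loop of the kind the project describes, assuming OpenCV with its bundled Haar cascade; the camera index, cascade file and detection parameters are illustrative, not the project's exact settings.

```python
# Illustrative Viola-Jones face detection loop using OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                       # integrated or USB camera (assumed index 0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                  # one box per detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```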
Carroll, Joshua Adam. "Eye-safe UV stand-off Raman spectroscopy for explosive detection in the field." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/80879/1/Joshua_Carroll_Thesis.pdf.
Giesel, M., A. Yakovleva, Marina Bloj, A. R. Wade, A. M. Norcia, and J. M. Harris. "Relative contributions to vergence eye movements of two binocular cues for motion-in-depth." Springer Nature Group, 2019. http://hdl.handle.net/10454/17514.
When we track an object moving in depth, our eyes rotate in opposite directions. This type of “disjunctive” eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal.
Supported by NIH EY018875 (AMN), BBSRC grants BB/M001660/1 (JH), BB/M002543/1 (AW), and BB/MM001210/1 (MB).
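For readers unfamiliar with the two cues, the standard definitions, written here in generic notation rather than the paper's own, are:

```latex
% Standard definitions of the two binocular cues to motion-in-depth.
% alpha_L and alpha_R are the directions of the target in the left and right eyes.
\begin{align}
  \delta(t)     &= \alpha_L(t) - \alpha_R(t)
                  && \text{binocular disparity} \\
  \mathrm{CD}   &= \frac{d\delta}{dt}
                  && \text{changing-disparity cue} \\
  \mathrm{IOVD} &= \frac{d\alpha_L}{dt} - \frac{d\alpha_R}{dt}
                  && \text{inter-ocular velocity difference}
\end{align}
% The two expressions are mathematically equal for a rigid target; the cues differ
% in whether disparity is computed before or after differentiation over time.
```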
Richards, Othello Lennox. "When Eyes and Ears Compete: Eye Tracking How Television News Viewers Read and Recall Pull Quote Graphics." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6801.
Chaudhuri, Matthew Alan. "Optimization of a hardware/software coprocessing platform for EEG eyeblink detection and removal /." Online version of thesis, 2008. http://hdl.handle.net/1850/8967.
Husseini Orabi, Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.
Shojaeizadeh, Mina. "Automatic Detection of Cognitive Load and User's Age Using a Machine Learning Eye Tracking System." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/476.
Coetzer, Reinier Casper. "Development of a robust active infrared-based eye tracking system." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/26399.
Dissertation (MEng)--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
Dybäck, Matilda, and Johanna Wallgren. "Pupil dilation as an indicator for auditory signal detection : Towards an objective hearing test based on eye tracking." Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192703.
Early detection of hearing loss in children is important for the child's speech and language development. For children aged 3-6 months there is no reliable method for measuring hearing and determining hearing thresholds. A hearing test based on the pupil response to sound, measured with an eye tracker, relies on an automatic physiological reaction and could be used instead of the objective tests in use today. To date, pupil responses to speech have been demonstrated, but there are no studies of possible responses to pure (sinusoidal) tones. The aim of this thesis was to investigate whether there is a consistent pupil response to the different frequencies of pure tones commonly used in hearing tests. A further aim was to establish a reliable time window for the pupil response. Four different types of tests were performed. The pupil response to pure tones at four frequencies (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz) and four sound levels (silence, 30 dB, 50 dB and 70 dB) was examined in a test with adult participants (N=20, 15 women, 5 men). Different light levels and distractions on the eye-tracker screen were examined in three tests (N=5, 4 women, 1 man). Differences between sound levels and frequencies were assessed with statistical tests. The results showed that the pupil response to pure tones occurred consistently between 300 ms and 2000 ms, with individual variations; this response occurs earlier than for speech sounds. A statistically significant difference between silence and the different sound levels was only found for the 4000 Hz frequency. No statistical difference was measured between the different sound levels, or when there were distractions on the eye-tracker screen. The results suggest that pupil responses to pure tones in adults are a possible method for identifying hearing thresholds, at least at 4000 Hz. Larger studies are needed to confirm this, and a more detailed investigation is needed for the other frequencies.
Bediz, Yusuf. "Automatic Eye Tracking And Intermediate View Reconstruction For 3d Imaging Systems." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607528/index.pdf.
Yekhshatyan, Lora. "Detecting distraction and degraded driver performance with visual behavior metrics." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/910.
Full textInce, Kutalmis Gokalp. "Computer Simulation And Implementation Of A Visual 3-d Eye Gaze Tracker For Autostreoscopic Displays." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611343/index.pdf.
Full textduring simulations and approximately 1°
for the experimental setup. 3-D estimation inaccuracy of the system along x- and y-axis is obtained as smaller than 2°
during the simulations and the experiments. However, estimation accuracy along z-direction is significantly sensitive to pupil detection and head pose estimation errors. For typical error levels, 20cm inaccuracy along z-direction is observed during simulations, whereas this inaccuracy reaches 80cm in the experimental setup.
Donovan, Tim. "Performance changes in wrist fracture detection and lung nodule perception following the perceptual feedback ot eye movements." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524759.
Gabbard, Ryan Dwight. "Identifying the Impact of Noise on Anomaly Detection through Functional Near-Infrared Spectroscopy (fNIRS) and Eye-tracking." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1501711461736129.
Freeman, Jason Robert. "The Rise of the Listicle: Using Eye-Tracking and Signal Detection Theory to Measure This Growing Phenomenon." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6803.
Chen, Lihui. "Towards an efficient, unsupervised and automatic face detection system for unconstrained environments." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/8132.
Escorcia Gutierrez, José. "Image Segmentation Methods for Automatic Detection of the Anatomical Structure of the Eye in People with Diabetic Retinopathy." Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/671543.
Full textEsta tesis se enmarca dentro del plan integral de prevención contra la Retinopatía Diabética (RD), ejecutado por el Gobierno de España alineado a las políticas de la Organización Mundial de la Salud para promover iniciativas que conciencien a la población con diabetes sobre la importancia de exámenes oculares de manera periódica. Para poder determinar el nivel de retinopatía diabética hace falta localizar e identificar diferentes tipos de lesiones en la retina. Para conseguirlo primero se han de eliminar de la imagen las estructures anatómicas normales del ojo (vasos sanguíneos, disco óptico y fóvea) para hacer visibles las anomalías. Esta tesis se ha centrado en este paso de limpieza de la imagen. En primer lugar, esta tesis propone un novedoso enfoque para la segmentación rápida y automática del disco óptico basado en la Teoría de Portafolio de Markowitz. En base a esta teoría se propone un innovador modelo de fusión de color capaz de soportar cualquier metodología de segmentación en el campo de las imágenes médicas. Este enfoque se estructura como una etapa de preprocesamiento potente y en tiempo real que podría integrarse en la práctica clínica diaria para acelerar el diagnóstico de RD debido a su simplicidad, rendimiento y velocidad. La segunda contribución de esta tesis es un método para segmentar simultáneamente los vasos sanguíneos y detectar la zona avascular foveal, reduciendo considerablemente el tiempo de procesamiento para tal tarea. Adicionalmente, la primera componente del espacio de color xyY (que representa los valores de crominancia) es la que predomina del estudio de las diferentes componentes de color realizado en esta tesis para la segmentación de vasos sanguíneos y la detección de la fóvea. Finalmente, se propone una recolección automática de muestras para interpolarlas basadas en la información estadística de color y que a su vez son la base del algoritmo Convexity Shape Prior. La tesis también propone otro método de segmentación de vasos sanguíneos basado en una selección efectiva de características soportada en árboles de decisión. Se ha conseguido encontrar las 5 características más relevantes para la segmentación de estas estructuras oculares. La validación utilizando tres técnicas de clasificación (árbol de decisión, red neuronal artificial y máquina de soporte vectorial).
This thesis is framed within the comprehensive plan for early prevention of Diabetic Retinopathy (DR) launched by the Spanish government, following the World Health Organization, to promote initiatives that raise awareness of the importance of regular eye exams among people with diabetes. To determine the level of diabetic retinopathy, we need to find and identify different types of lesions in the eye fundus. First, the normal anatomic structures of the eye (blood vessels, optic disc and fovea) must be removed from the image in order to make the abnormalities visible. This thesis has focused on this image-cleaning step. It proposes a novel framework for fast and fully automatic optic disc segmentation based on Markowitz's Modern Portfolio Theory, used to generate an innovative color fusion model capable of supporting any segmentation methodology in the medical imaging field. This approach acts as a powerful, real-time pre-processing stage that could be integrated into daily clinical practice to accelerate the diagnosis of DR thanks to its simplicity, performance, and speed. The thesis's second contribution is a method that simultaneously performs blood vessel segmentation and foveal avascular zone detection, considerably reducing the required image processing time. In addition, the first component of the xyY color space, which represents the chrominance values, is the best supported by the approach developed in this thesis for blood vessel segmentation and fovea detection. Finally, several samples are collected for a color interpolation procedure based on statistical color information and are used by the well-known Convexity Shape Prior segmentation algorithm. The thesis also proposes another blood vessel segmentation method that relies on effective feature selection based on decision tree learning; this method is validated using three different classification techniques (Decision Tree, Artificial Neural Network, and Support Vector Machine).
Vineela, Sanampudi. "Eye Corner Detection." Thesis, 2015. http://ethesis.nitrkl.ac.in/7682/1/2015_Eye_Vineela.pdf.
Full textChang, Hui-Yin, and 張惠茵. "Algorithm Design of Eye Status Detection for Close Eye Alert Systems." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/27226090604110195496.
Full text國立中興大學
電機工程學系所
102
In this thesis, we discuss several methods of eye-state detection for closed-eye alert systems. The proposed closed-eye alert system, based on tracking eye states, was implemented; if the system detects a closed-eye state, it shows a warning message. The system contains two parts: eye tracking and eye-state detection. First, we pre-process the original facial images to make later processing easier, since the images contain noise. The pre-processing steps are as follows: 1. convert the color space to the YCbCr format; 2. use a mean filter to smooth the images; 3. use a Sobel filter to obtain the image edges; 4. use the Self Quotient Image algorithm to remove the influence of different lighting conditions. We then use templates of different sizes to detect the candidate eye areas. Next, we propose four methods to detect the eye states: 1. use skin color based on a threshold on the Cr channel; 2. use skin color based on normalized RGB pixels; 3. use a search window to find the minimum gray-level value; 4. use a cross filter to find the minimum gray-level value. Closed-eye states are then detected by comparing the color variations of the first image with those of the present image. In our experiments, we use a personal computer with 2.66 GHz quad-core CPUs for simulations. There are up to 15 video clips in the video database for four different individuals, including clips with a frontal view without glasses, clips with a frontal view and thin-rim glasses, clips with a frontal view and black-frame glasses, and clips with an upward view without glasses. The experimental results show that our methods can detect the eye states in various situations, such as when glasses are worn. Under the premise that the eyes are open in the first frame, the proposed search-window method gives better detection accuracy without glasses, and the proposed cross-filter method gives better detection accuracy when glasses are worn.
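A minimal sketch of the search-window idea described above: find the minimum gray level inside a window around the last eye position and flag a closed eye when that minimum changes strongly relative to the open-eye reference frame. NumPy is assumed, and the window size, ratio and function names are illustrative rather than the thesis's exact values.

```python
# Illustrative sketch of a search-window minimum-gray-level check for eye state.
import numpy as np

def darkest_point(gray: np.ndarray, center: tuple[int, int], half: int = 15):
    """Return (x, y, value) of the minimum gray level inside a search window."""
    cx, cy = center
    y0, y1 = max(cy - half, 0), min(cy + half, gray.shape[0])
    x0, x1 = max(cx - half, 0), min(cx + half, gray.shape[1])
    window = gray[y0:y1, x0:x1]
    iy, ix = np.unravel_index(np.argmin(window), window.shape)
    return x0 + ix, y0 + iy, int(window[iy, ix])

def is_eye_closed(current_min: int, open_eye_min: int, ratio: float = 1.5) -> bool:
    """Closed-eye heuristic: the dark pupil disappears, so the local minimum rises."""
    return current_min > ratio * max(open_eye_min, 1)
```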
Wang, Sheng-Wen, and 王聖文. "Automatic Eye Detection and Glasses Removal." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56a3j8.
Full text國立交通大學
電機與控制工程系所
92
This thesis presents an algorithm to automatically detect the eye locations in a given face image and to remove the glasses when the subject is wearing them. Our system consists of three modules: face segmentation, eye detection, and eyeglasses removal. First, we use a universal skin-color map to detect the face regions, which ensures sufficient adaptability to ambient lighting conditions. Then, a special filter, called the circle-frequency filter, is used to locate the eye regions because of its invariance over a wide range of face orientations and rotations. Finally, since glasses are so widely worn, we propose a novel method to remove the eyeglasses automatically based on edge detection and a modified fuzzy rule-based (MFRB) filter. The simulation results demonstrate that our approach detects eye locations efficiently and produces, after glasses removal, a facial image with high fidelity to one without glasses.
Chou, Ta-Feng, and 周達峰. "A COMPARISON OF EYE DETECTION ALGORITHMS." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/74749524768101402959.
Full text國立交通大學
電機學院碩士在職專班電子與光電組
94
This thesis addresses the problem of detecting people's eyes in the face. Eyes convey rich information: when we are tired and sleepy, the eyes lose their alertness, and they open wide when we are energetic. If we can watch a driver's eyes, we can design a driving-security system that reminds the driver automatically. Eyes are also important for identifying people: if we set up a face-image database of the staff of a company, we can identify a person by facial features. The eyes in particular are the window of the soul, and their size, position in the face, and iris are crucial for determining a person's identity. There are many methods for detecting eyes in a face, for example: 1. rough eye-outline prediction using RCER (Rough Contour Estimation Routine), mathematical morphology and a deformable template model; 2. edge detection techniques. We make use of these techniques to find the position and shape of the eyes, and then compute the accuracy of each method. This thesis aims to find an efficient method to detect the eyes and their shapes. Comparisons among these methods are made, and their advantages and disadvantages are noted.
Panda, Deepti Ranjan. "Eye Detection Using Wavelets and ANN." Thesis, 2007. http://ethesis.nitrkl.ac.in/54/1/dipti.pdf.
Lin, Hui-Wen, and 林慧雯. "Detection of Eye Blinking of a Driver." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/92275145791570371441.
Full text國立臺灣師範大學
資訊工程研究所
94
Since car accidents may be caused by a driver's drowsiness, assistance systems have been designed to warn the driver before he falls asleep. Such a system can monitor the driver's eyes to detect blinking; if the blinking becomes too frequent or prolonged, it may indicate drowsiness. We propose a vision-based eye-blinking detection system in this thesis. Four steps, extracting the human face, detecting the eye location, tracking the eyes, and detecting blinking, are developed for our system. Using a video camera mounted in the vehicle, the image of the driver's face is first extracted based on skin color. Second, the location of the driver's eyes is detected according to eye features. Third, we track the eyes' location in the next frame based on the shape of human eyes and their location relative to the face. Finally, we detect eye blinking of the driver. Our system has been tested during actual driving and shows that it can adapt to changing illumination while the car is in motion.
Weng, Chung-Ren, and 翁崇荏. "A Fast Algorithm of Eye Blink Detection." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/81350316070926319375.
Full text國立交通大學
資訊科學系所
93
Information about eye blinks reveals a person's fatigue, so detecting blinks is useful for monitoring or warning systems; automatically detecting driver fatigue may prevent accidents. We propose a fast algorithm for eye location and eye blink detection. The algorithm is adapted to complex backgrounds and to situations in which people wear glasses. First, we classify pixels as skin color or non-skin color to build a skin mask of the original image. Second, we patch the skin mask with morphological operations and locate the face bounding block. By applying a horizontal projection to the bounding block and filtering with facial-feature conditions, we locate the eye region. After gathering statistics of the pixels in the eye region, we can find out which frames contain eye blinks.
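A minimal sketch of the steps listed above (skin mask, morphological patching, face bounding block, horizontal projection to find the eye band), assuming OpenCV and NumPy; the YCrCb skin range and other thresholds are common illustrative values rather than those used in the thesis.

```python
# Illustrative sketch: skin mask -> morphology -> face bounding box -> horizontal projection.
import cv2
import numpy as np

def eye_band(frame_bgr: np.ndarray):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))     # rough skin-color range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)       # patch holes in the mask
    ys, xs = np.nonzero(skin)
    if xs.size == 0:
        return None                                              # no skin found
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()      # face bounding block
    face = skin[y0:y1, x0:x1]
    rows_nonskin = (face == 0).sum(axis=1)                       # horizontal projection
    upper = rows_nonskin[: max(face.shape[0] // 2, 1)]           # eyes lie in the upper half
    return y0 + int(np.argmax(upper))                            # image row of the eye band
```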
Chang, Chia-Wei, and 張家瑋. "Human Detection Using Single Fish-eye Camera." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/8w6ade.
Full text國立交通大學
資訊科學與工程研究所
106
This thesis proposes a new algorithm for human detection using a single downward-viewing fish-eye camera. In recent years, methods for human detection using perspective cameras have been studied extensively, but research on human detection in fish-eye camera images is very limited, the existing methods are time-consuming, and most of them have been used only in simple, uncluttered environments. An advantage of fish-eye lenses is that they can cover a very wide area with only one camera; when people are close together or partially occluded, a fish-eye camera provides a better view than other cameras. The main purpose of this thesis is to propose a new method that can detect, track and estimate the number of people in more complicated scenes and that is expected to be applicable in real time. Our detection algorithm makes use of elliptic templates and HOG features, and then applies a set of support vector machines (SVMs) to decide whether there are people in each template. Meanwhile, we track people's positions by applying color features and analyzing their movement over a specific time.
Kumar, Rahul. "DataBase Generation for Eye Detection with Spectacles." Thesis, 2018. http://ethesis.nitrkl.ac.in/9652/1/2018_MT_216EE1268_RKumar_Database.pdf.
Rae, Robert Andrew. "Detection of eyelid position in eye-tracking systems." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=95039&T=F.
Chang, Ting-Hsuan, and 張庭瑄. "Automatic Red-Eye Reduction with Fast Face Detection." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/82482374910385611330.
Full text國立臺灣大學
資訊工程學研究所
96
Red-eye reduction technology addresses a common artifact in still images and helps users obtain better photo prints. In recent years, digital cameras and camera phones have become more popular and people can take many pictures easily; photos taken in the dark with a flash often exhibit red-eye artifacts, so this subject has become more important. In this thesis, we propose a fast face detection and automatic red-eye reduction method that eliminates the red-eye effect automatically, increasing the processing speed and decreasing the false-alarm rate to produce a better photo print.
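Once an eye region has been located, a typical correction step flags pixels whose red channel dominates and tones the red down, as in the sketch below; the redness rule and thresholds are common heuristics used for illustration, not the thesis's exact criterion.

```python
# Illustrative red-eye correction inside an already-detected eye region (RGB image).
import numpy as np

def remove_red_eye(eye_rgb: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    eye = eye_rgb.astype(np.float32)
    r, g, b = eye[..., 0], eye[..., 1], eye[..., 2]
    redness = r / (r + g + b + 1e-6)                     # fraction of pixel energy in red
    mask = (redness > threshold) & (r > 60)              # strong, bright red only
    corrected = eye.copy()
    corrected[..., 0][mask] = (g[mask] + b[mask]) / 2.0  # tone the red channel down
    return corrected.astype(np.uint8)
```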
Hsu, Yi-Cheng, and 徐亦澂. "Circular Deformable Template Application in Eye Openness Detection." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/19755713531899394541.
Full text國立交通大學
電機與控制工程系所
96
Sleepiness and driving are a dangerous combination, and drowsy driving can be fatal. Accordingly, it is necessary to develop a drowsy-driver awareness system. To avoid interrupting the driver, the system should be non-invasive and non-contact, and an image-processing system suits this requirement. Hence, we judge the drowsiness state by observing the eye status of operators via eye video. In this thesis, we use a CCD camera as the image source and use a skin-color map to segment the skin region. We then use a PCA algorithm to find the eye region for the circular-template search. The circular template locates the iris region, and finally we analyze this region to classify the eye-openness state. By numerical simulation, we have obtained high accuracy on eye-openness detection, which should be helpful for a drowsiness detection system.
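A minimal sketch of a circular search for the iris inside an eye region, assuming OpenCV; cv2.HoughCircles is used here as a simpler stand-in for the circular deformable template of the thesis, and the parameter values are illustrative.

```python
# Illustrative circular search for the iris; no circle found is treated as a likely closed eye.
import cv2
import numpy as np

def find_iris_circle(eye_gray: np.ndarray):
    blurred = cv2.medianBlur(eye_gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
        param1=80, param2=20, minRadius=5, maxRadius=30)
    if circles is None:
        return None                     # no visible iris -> likely a closed eye
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r                      # centre and radius of the best circle
```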
Lin, Yu-Sheng, and 林祐聖. "Automatic Eye Detection and Reflection Separation within Glasses." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/45673049838875420176.
Full text國立交通大學
電機與控制工程系所
94
Eye detection has been applied in many areas, for instance human or face recognition, eye-gaze detection, and drowsiness detection. However, eye detection often fails because of the interference caused by glasses when the subject wears spectacles. This thesis presents an algorithm to automatically detect the eye location in a given face image and separate reflections on the glasses when the subject is wearing them. Our system consists of three modules: face segmentation, optic-area detection, and the separation of glasses reflections. First, we use a universal skin-color map to detect the face regions, which ensures sufficient adaptability to ambient lighting conditions. Then, we propose a novel method to detect the eye region and separate the reflection within the glasses based on edge detection, corner detection, and an anisotropic diffusion transform. The separation principle is that the correct decomposition of the reflection image is the one whose total number of corners and edges is smallest among all possible decompositions. The simulation results demonstrate that this principle can be applied effectively to reflections on glasses and yields good reflection separation.
Chang, Ting-Hsuan. "Automatic Red-Eye Reduction with Fast Face Detection." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1606200815074300.
Wang, Yu-Hui, and 王玉輝. "Eye Detection Using the Color Sequence Fuzzy Automata." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/98003077095482736700.
Full text聖約翰科技大學
電子工程系碩士班
96
Eyes are one of the most important facial features, and eye detection plays an important role in many applications, such as face detection, face recognition, facial expression analysis, and eye-gaze tracking systems. In this thesis, the authors introduce an eye localization system based on the line color sequence (LCS) in color images. The algorithm not only identifies color types but also determines the spatial relationships between the colors. The novelty of this work is that the algorithm works without complete face localization. With the support of fuzzy automata, low computational cost is an advantage of the proposed method.