To see the other types of publications on this topic, follow the link: Eye detection.

Dissertations / Theses on the topic 'Eye detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Eye detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hossain, Akdas, and Emma Miléus. "Eye Movement Event Detection for Wearable Eye Trackers." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129616.

Full text
Abstract:
Eye tracking research is a growing area, and the fields in which eye tracking could be used in research are large. To understand eye tracking data, different filters are used to classify the measured eye movements. To obtain accurate classification, this thesis has investigated the possibility of measuring both head movements and eye movements in order to improve the estimated gaze point. The thesis investigates the difference between using head-movement compensation with a velocity-based filter, the I-VT filter, and using the same filter without head-movement compensation. Furthermore, different velocity thresholds are tested to find where the performance of the filter is best. The study is made with a mobile eye tracker, where this problem exists since there is no absolute frame of reference, as opposed to when using remote eye trackers. The head-movement compensation shows promising results, with higher precision overall.
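The I-VT classifier named in the abstract reduces to a single velocity threshold. A minimal pure-Python sketch, with an illustrative 30 deg/s threshold and made-up sample data (neither taken from the thesis):

```python
# Minimal I-VT (velocity-threshold) sketch: classify each inter-sample
# interval as fixation or saccade by comparing point-to-point velocity
# to a threshold. Threshold and data are illustrative assumptions.

def ivt_classify(x, y, t, threshold=30.0):
    """x, y in degrees, t in seconds; returns one label per interval."""
    labels = []
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        dist = ((x[i] - x[i - 1]) ** 2 + (y[i] - y[i - 1]) ** 2) ** 0.5
        velocity = dist / dt  # degrees per second
        labels.append("saccade" if velocity > threshold else "fixation")
    return labels

# Slow drift followed by a fast jump:
x = [0.0, 0.1, 0.2, 5.0]
y = [0.0, 0.0, 0.0, 0.0]
t = [0.00, 0.01, 0.02, 0.03]
print(ivt_classify(x, y, t))  # -> ['fixation', 'fixation', 'saccade']
```

Head-movement compensation, as investigated in the thesis, would subtract the head's angular velocity from the gaze velocity before thresholding.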
APA, Harvard, Vancouver, ISO, and other styles
2

Trejo Guerrero, Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.

Full text
Abstract:

In this thesis we present a system to extract the eye motion from a video stream containing a human face and apply this eye motion to a virtual character. By eye motion estimation, we mean the information that describes the location of the eyes in each frame of the video stream. By applying this eye motion estimation to a virtual character, the virtual face moves its eyes in the same way as the human face, synthesizing eye motion in a virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction from a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the detection and extraction of the eyes is applied, based on edge detection. The iris center is then determined by applying different image preprocessing steps and region segmentation using edge features on the extracted eye picture.

Once we have extracted the eye motion, it is translated into MPEG-4 Facial Animation Parameters (FAPs). Thus we can improve the quality and quantity of the facial animation expressions that we can synthesize in a virtual character.
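As an illustration of the kind of iris localization the abstract describes, here is a toy sketch that estimates the iris centre as the centroid of the darkest pixels in a cropped eye patch. The thesis's actual method uses edge detection and region segmentation; the darkness threshold below is an assumption:

```python
# Toy iris-centre sketch (not the thesis's algorithm): centroid of the
# darkest pixels in a grayscale eye patch. Threshold is illustrative.
def iris_center(patch, dark_threshold=60):
    """patch: 2-D list of gray values (0-255). Returns (row, col) or None."""
    rows, cols, n = 0.0, 0.0, 0
    for r, line in enumerate(patch):
        for c, v in enumerate(line):
            if v < dark_threshold:
                rows += r
                cols += c
                n += 1
    if n == 0:
        return None
    return (rows / n, cols / n)

patch = [
    [200, 200, 200, 200, 200],
    [200,  30,  30,  30, 200],
    [200,  30,  10,  30, 200],
    [200,  30,  30,  30, 200],
    [200, 200, 200, 200, 200],
]
print(iris_center(patch))  # centroid of the dark 3x3 block -> (2.0, 2.0)
```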

APA, Harvard, Vancouver, ISO, and other styles
3

Miao, Yufan. "Landmark Detection for Mobile Eye Tracking." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-301499.

Full text
Abstract:
Mobile eye tracking studies in urban environments can provide important insights into several processes of human behavior, ranging from wayfinding to human-environment interaction. The analysis of this kind of eye tracking data is based on a semi-manual, or sometimes even completely manual, process, consuming immense post-processing time. In this thesis, we propose an approach based on computer vision methods that allows fully automatic analysis of eye tracking data captured in an urban environment. We present our approach, as well as the results of three experiments that were conducted in order to evaluate the robustness of the system in open as well as narrow spaces. Furthermore, we give directions towards computation-time optimization in order to achieve on-the-fly analysis of the captured eye tracking data, opening the way for real-time human-environment interaction.
APA, Harvard, Vancouver, ISO, and other styles
4

Bandara, Indrachapa Buwaneka. "Driver drowsiness detection based on eye blink." Thesis, Bucks New University, 2009. http://bucks.collections.crest.ac.uk/9782/.

Full text
Abstract:
Accidents caused by drivers’ drowsiness behind the steering wheel have a high fatality rate because of the discernible decline in the driver’s perception, recognition, and vehicle-control abilities while sleepy. Preventing such accidents is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain maximum performance. The main objective of this research study is to develop a reliable metric and system for the detection of driver impairment due to drowsiness. More specifically, the goal of the research is to develop the best possible metric for detection of drowsiness, based on measures that can be detected during driving. This thesis describes the new studies that have been performed to develop, validate, and refine such a metric. A computer vision system is used to monitor the driver’s physiological eye blink behaviour. The novel application of green LED illumination overcame one of the major difficulties of the eye sclera segmentation problem, namely illumination changes. Experimentation in a driving simulator revealed various visual cues, typically characterizing the level of alertness of the driver, and these cues were combined to infer the drowsiness level of the driver. Analysis of the data revealed that eye blink duration and eye blink frequency were important parameters in detecting drowsiness. From these measured parameters, a continuous measure of drowsiness, the New Drowsiness Scale (NDS), is derived. The NDS ranges from one to ten, where a decrease in NDS corresponds to an increase in drowsiness. Based upon previous research into the effects of drowsiness on driving performance, measures relating to the lateral placement of the vehicle within the lane are of particular interest in this study. Standard deviations of average deviations were measured continuously throughout the study.
The NDS scale, based upon the gradient of the linear regression of standard deviation of average blink frequency and duration, is demonstrated as a reliable method for identifying the development of drowsiness in drivers. Deterioration of driver performance (reflected by increasingly severe lane deviation) is correlated with a decreasing NDS score. The final experimental results show the validity of the proposed model for driver drowsiness detection.
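The two blink parameters the thesis found most informative, blink duration and blink frequency, can be read directly off a per-frame eye-closed signal. A hedged sketch with made-up data (this is not the NDS computation itself):

```python
# Extract blink count and mean blink duration from a binary eye-closed
# signal sampled at a fixed frame rate. Signal below is illustrative.
def blink_stats(closed, fps):
    """closed: list of 0/1 per frame (1 = eye closed).
    Returns (blink_count, mean_duration_seconds)."""
    durations, run = [], 0
    for c in closed + [0]:          # sentinel flushes a trailing blink
        if c:
            run += 1
        elif run:
            durations.append(run / fps)
            run = 0
    count = len(durations)
    mean = sum(durations) / count if count else 0.0
    return count, mean

signal = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0]   # two blinks at 10 fps
print(blink_stats(signal, fps=10))            # -> (2, 0.25)
```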
APA, Harvard, Vancouver, ISO, and other styles
5

Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.

Full text
Abstract:
We can detect faces more rapidly and efficiently than non-face object categories (Bell et al., 2008; Crouzet, 2011), even when only partial information is visible (Tang et al., 2014). Face inversion impairs our ability to recognise faces. The key to understanding this effect is to determine which face features are processed and how the coding of these features is affected by face inversion. Previous studies from our lab showed coding of the contralateral eye in an upright face detection task, which was maximal around the N170 recorded at posterior-lateral electrodes (Ince et al., 2016b; Rousselet et al., 2014). In chapter 2, we used the Bubbles technique to determine whether brain responses also reflect the processing of eyes in inverted faces, and how they do so, in a simple face detection task. The results suggest that in upright and inverted faces alike the N170 reflects coding of the contralateral eye, but face inversion quantitatively weakens the early processing of the contralateral eye, specifically in the transition between the P1 and the N170, and delays this local feature coding. Group and individual results support this claim. First, regardless of face orientation, the N170 coded the eye contralateral to the posterior-lateral electrodes, which was the case in all participants. Second, face inversion delayed coding of contralateral eye information. Third, time course analysis of contralateral eye coding revealed weaker contralateral eye coding for inverted compared to upright faces in the transition between the P1 and the N170. Fourth, single-trial EEG responses were driven by the corresponding single-trial visibility of the left eye. The N170 amplitude was larger and latency shorter as the left eye visibility increased in upright and upside-down faces for the majority of participants.
However, for images of faces, eye position and face orientation were confounded, i.e., the upper visual field usually contains the eyes in upright faces, whereas in upside-down faces the lower visual field contains the eyes. Thus, the impaired processing of the contralateral eye under inversion might simply be attributed to face inversion moving the eyes away from the upper visual field. In chapter 3, we manipulated the vertical location of the images so that the eyes were presented in the upper, centre, or lower visual field relative to the fixation cross (the centre of the screen); thus, in both upright and inverted faces, the eyes could shift from the upper to the lower visual field. We used a similar technique to that of chapter 2 during a face detection task. First, we found that regardless of face orientation and position, the modulations of ERPs recorded at the posterior-lateral electrodes were associated with the contralateral eye. This suggests that coding of the contralateral eye underlies the N170. Second, face inversion delayed processing of the contralateral eye when the eyes of faces were presented in the same position, Above, Below, or at the Centre of the screen. Also, in the early N170, most of our participants showed weakened contralateral eye sensitivity under inversion of faces whose eyes appeared in the same position. The results suggest that face-inversion-related changes in processing of the contralateral eye cannot simply be considered the result of differences in eye position. The scan-paths traced by human eye movements are similar to the low-level saliency maps produced by contrast-based computer vision algorithms (Itti et al., 1998). This evidence leads us to the question of whether the coding of the eyes is due to the saliency of the eye regions. In chapter 4, we aim to answer this question.
We introduced two altered versions of the original faces in a simple face detection task: normalised-contrast faces, removing eye saliency (Simoncelli and Olshausen, 2001), and reversed-contrast faces, reversing face contrast polarity (Gilad et al., 2009). In each face condition, we observed that ERPs recorded at contralateral posterior-lateral electrodes were sensitive to the eye regions. Both contrast manipulations delayed and reduced eye sensitivity during the rising part of the N170, roughly 120–160 ms post-stimulus onset. Also, there were no such differences between the two contrast-manipulated faces. These results were observed in the majority of participants. They suggest that the processing of the contralateral eye is partially due to low-level factors and may reflect feature processing in the early N170.
APA, Harvard, Vancouver, ISO, and other styles
6

Anderson, Travis M. "Motion detection algorithm based on the common housefly eye." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400965531&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Samadzadegan, Sepideh. "Automatic and Adaptive Red Eye Detection and Removal : Investigation and Implementation." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77977.

Full text
Abstract:
The red-eye artifact is the most prevalent problem in flash photography, especially with compact cameras with a built-in flash, and it bothers both amateur and professional photographers. Hence, removing the affected red-eye pixels has become an important skill. This thesis work presents a completely automatic approach to red-eye detection and removal, consisting of two modules: detection and correction of the red-eye pixels in an individual eye, and detection of the two red eyes in an individual face. The approach combines some of the previous attempts in the area of red-eye removal with some minor and major modifications and novel ideas. The detection procedure is based on redness histogram analysis followed by two adaptive methods, a general and a specific approach, in order to find a threshold point. The correction procedure is a four-step algorithm which does not rely solely on the detected red-eye pixels. It also applies further pixel checking, such as enlarging the search area and neighbourhood checking, to improve the reliability of the whole procedure by reducing the risk of image degradation. The second module is based on a skin-likelihood detection algorithm. A completely novel approach which utilizes the Golden Ratio to segment the face area into specific regions is implemented in the second module. The proposed method is applied to more than 40 sample images; considering some requirements and constraints, the achieved results are satisfactory.
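A toy version of the redness-based detection idea: score each pixel's redness and threshold it. The redness formula and the fixed cut-off below are assumptions for illustration; the thesis uses an adaptive histogram-based threshold:

```python
# Illustrative red-eye pixel detection: a simple per-pixel redness score
# and a fixed threshold. Both formula and cut-off are assumptions.
def redness(r, g, b):
    """Redness score: how much red exceeds the other channels, in [0, 1]."""
    return max(0.0, (r - max(g, b)) / 255.0)

def redeye_mask(pixels, cutoff=0.3):
    """pixels: list of (r, g, b) tuples. Returns a boolean mask."""
    return [redness(r, g, b) > cutoff for (r, g, b) in pixels]

eye_patch = [(220, 40, 50), (200, 60, 70), (90, 80, 85), (40, 35, 200)]
print(redeye_mask(eye_patch))  # -> [True, True, False, False]
```

An adaptive variant, as in the thesis, would pick the cutoff from the redness histogram of the eye region rather than fixing it.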
APA, Harvard, Vancouver, ISO, and other styles
8

Vidal, Diego Armando Benavides. "A Kernel matching approach for eye detection in surveillance images." reponame:Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/24112.

Full text
Abstract:
Dissertação (mestrado)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2016.
Eye detection is an open research problem to be solved efficiently in face detection and human surveillance systems. Accuracy and computational cost must both be considered for a successful approach. We describe an integrated approach that takes the ROI output by a Viola–Jones detector, constructs HOG features on it, and learns a special function mapping these features to a higher-dimensional space where detection achieves better accuracy. This mapping follows the efficient kernel-matching approach, which had been shown to be possible but had not been applied to this problem before. A linear SVM is then used as the classifier for eye detection on the mapped features. Extensive experiments on different databases show that the proposed method achieves higher accuracy with low added computational cost compared to the Viola–Jones detector alone. The approach can also be extended to deal with other appearance models.
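As background for the pipeline sketched in the abstract, here is a minimal single-cell HOG-style descriptor in pure Python. The learned kernel feature map and the linear SVM are omitted, and the bin count and test patch are assumptions:

```python
import math

# Minimal HOG-style descriptor sketch: one cell, unsigned gradient
# orientations binned into a magnitude-weighted histogram.
def hog_cell(patch, bins=9):
    """patch: 2-D list of gray values; returns a normalized histogram."""
    hist = [0.0] * bins
    h, w = len(patch), len(patch[0])
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = patch[r][c + 1] - patch[r][c - 1]   # central differences
            gy = patch[r + 1][c] - patch[r - 1][c]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi        # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    total = sum(hist)
    return [v / total for v in hist] if total else hist

ramp = [[c * 10 for c in range(8)] for _ in range(8)]  # horizontal gradient
h = hog_cell(ramp)
print(h.index(max(h)))  # all energy falls in the first (0-degree) bin -> 0
```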
APA, Harvard, Vancouver, ISO, and other styles
9

Ignat, Simon, and Filip Mattsson. "Eye Blink Detection and Brain-Computer Interface for Health Care Applications." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tesárek, Viktor. "Detekce mrkání a rozpoznávání podle mrkání očí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217560.

Full text
Abstract:
This master's thesis deals with the issue of eye blink recognition from video. The main task is to analyse algorithms for detecting persons and to write a program that can recognize eye blinks. Analysis of these algorithms and their problems forms the first part of this thesis. In the second part, the design and properties of my program are described. The realization of the program is based on a motion-detection method using an accumulated difference frame, which helps to identify the eye areas. The eye blink detection algorithm tests the match between a thresholded pattern of the eye area taken from the current frame and the previous frame. The decision whether an eye blink happened or not is based on the level of the match. The algorithm is designed for watching a sitting person who is moving slightly. The background can be slightly dynamic as well. An average-quality video with a moderator and a dynamic background was used as the test subject.
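The accumulated-difference-frame idea used above to find moving eye regions can be sketched on tiny synthetic frames (values and sizes are illustrative):

```python
# Accumulated difference frame: sum absolute frame-to-frame differences
# so that moving regions (e.g. blinking eyes) stand out. Toy-sized frames.
def accumulated_difference(frames):
    """frames: list of 2-D lists of gray values. Returns per-pixel motion sum."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for r in range(h):
            for c in range(w):
                acc[r][c] += abs(cur[r][c] - prev[r][c])
    return acc

still = [[10, 10], [10, 10]]
blink = [[10, 10], [10, 90]]           # one pixel changes, then reverts
acc = accumulated_difference([still, blink, still])
print(acc)   # -> [[0, 0], [0, 160]]: motion accumulated at the "eye" pixel
```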
APA, Harvard, Vancouver, ISO, and other styles
11

Malla, Amol Man. "Automated video-based measurement of eye closure using a remote camera for detecting drowsiness and behavioural microsleeps." Thesis, University of Canterbury. Electrical and Computer Engineering, 2008. http://hdl.handle.net/10092/2111.

Full text
Abstract:
A device capable of continuously monitoring an individual’s level of alertness in real-time is highly desirable for preventing drowsiness- and lapse-related accidents. This thesis presents the development of a non-intrusive and light-insensitive video-based system that uses computer-vision methods to localize the face, eye, and eyelid positions and measure the level of eye closure within an image, which, in turn, can be used to identify visible facial signs associated with drowsiness and behavioural microsleeps. The system was developed to be non-intrusive and light-insensitive to make it practical and end-user compliant. To non-intrusively monitor the subject without constraining their movement, the video was collected by placing a camera, a near-infrared (NIR) illumination source, and an NIR-pass optical filter at an eye-to-camera distance of 60 cm from the subject. The NIR illumination source and filter make the system insensitive to lighting conditions, allowing it to operate in both ambient light and complete darkness without visually distracting the subject. To determine the image characteristics and to quantitatively evaluate the developed methods, reference videos of nine subjects were recorded under four different lighting conditions with the subjects exhibiting several levels of eye closure, head orientations, and eye gaze. For each subject, a set of 66 frontal face reference images was selected and manually annotated with multiple face and eye features. The eye-closure measurement system was developed using a top-down passive feature-detection approach, in which the face region of interest (fROI), eye regions of interest (eROIs), eyes, and eyelid positions were sequentially localized. The fROI was localized using an existing Haar-object detection algorithm. In addition, a Kalman filter was used to stabilize and track the fROI in the video. The left and right eROIs were localized by scaling the fROI with corresponding proportional anthropometric constants.
The position of an eye within each eROI was detected by applying a template-matching method in which a pre-formed eye-template image was cross-correlated with the sub-images derived from the eROI. Once the eye position was determined, the positions of the upper and lower eyelids were detected using a vertical integral-projection of the eROI. The detected positions of the eyelids were then used to measure eye closure. The detection of the fROI and eROIs was very reliable for frontal-face images, which was considered sufficient for an alertness monitoring system as subjects are most likely facing straight ahead when they are drowsy or about to have a microsleep. Estimation of the y-coordinates of the eye, upper eyelid, and lower eyelid positions showed average median errors of 1.7, 1.4, and 2.1 pixels and average 90th percentile (worst-case) errors of 3.2, 2.7, and 6.9 pixels, respectively (1 pixel ≈ 1.3 mm in reference images). The average height of a fully open eye in the reference database was 14.2 pixels. The average median and 90th percentile errors of the eye and eyelid detection methods were reasonably low except for the 90th percentile error of the lower eyelid detection method. Poor estimation of the lower eyelid was the primary limitation for accurate eye-closure measurement. The median error of fractional eye-closure (EC) estimation (i.e., the ratio of the closed portion of an eye to its average height when fully open) was 0.15, which was sufficient to distinguish between the eyes being fully open, half closed, or fully closed. However, compounding errors in the facial-feature detection methods resulted in a 90th percentile EC estimation error of 0.42, which was too high to reliably determine the extent of eye closure. The eye-closure measurement system was relatively robust to variation in facial features except for spectacles, whose reflections can saturate much of the eye image.
Therefore, in its current state, the eye-closure measurement system requires further development before it could be used with confidence for monitoring drowsiness and detecting microsleeps.
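The fractional eye-closure (EC) measure defined above is a simple ratio. A one-function sketch, using the thesis's reported average open-eye height of 14.2 pixels in the example call (the eyelid coordinates are made up):

```python
# Fractional eye closure: EC = 1 when fully closed, 0 when fully open.
# Image y-coordinates grow downward, so aperture = lower_lid - upper_lid.
def eye_closure(upper_lid_y, lower_lid_y, open_height):
    """Return EC clamped to [0, 1]."""
    aperture = max(0.0, lower_lid_y - upper_lid_y)
    return max(0.0, min(1.0, 1.0 - aperture / open_height))

# Half-closed eye with the thesis's 14.2-pixel average open height:
print(round(eye_closure(20.0, 27.1, 14.2), 3))  # -> 0.5
```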
APA, Harvard, Vancouver, ISO, and other styles
12

Mergenthaler, Konstantin K. "The control of fixational eye movements." Phd thesis, Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2009/2939/.

Full text
Abstract:
In normal everyday viewing, we perform large eye movements (saccades) and miniature or fixational eye movements. Most of our visual perception occurs while we are fixating. However, our eyes are perpetually in motion. Properties of these fixational eye movements, which are partly controlled by the brainstem, change depending on the task and the visual conditions. Currently, fixational eye movements are poorly understood because they serve the two contradictory functions of gaze stabilization and counteraction of retinal fatigue. In this dissertation, we investigate the spatial and temporal properties of time series of eye position acquired from participants staring at a tiny fixation dot or at a completely dark screen (with the instruction to fixate a remembered stimulus); these time series were acquired with high spatial and temporal resolution. First, we suggest an advanced algorithm to separate the slow phases (named drift) and fast phases (named microsaccades) of these movements, which are considered to play different roles in perception. On the basis of this identification, we investigate and compare the temporal scaling properties of the complete time series and those time series where the microsaccades are removed. For the time series obtained during fixations on a stimulus, we were able to show that they deviate from Brownian motion. On short time scales, eye movements are governed by persistent behavior and on longer time scales, by anti-persistent behavior. The crossover point between these two regimes remains unchanged by the removal of microsaccades but is different in the horizontal and the vertical components of the eyes. Other analyses target the properties of the microsaccades, e.g., the rate and amplitude distributions, and we investigate whether microsaccades are triggered dynamically, as a result of earlier events in the drift, or completely randomly.
The results obtained from using a simple box-count measure contradict the hypothesis of a purely random generation of microsaccades (Poisson process). Second, we set up a model for the slow part of the fixational eye movements. The model is based on a delayed random walk approach within the velocity related equation, which allows us to use the data to determine control loop durations; these durations appear to be different for the vertical and horizontal components of the eye movements. The model is also motivated by the known physiological representation of saccade generation; the difference between horizontal and vertical components concurs with the spatially separated representation of saccade generating regions. Furthermore, the control loop durations in the model suggest an external feedback loop for the horizontal but not for the vertical component, which is consistent with the fact that an internal feedback loop in the neurophysiology has only been identified for the vertical component. Finally, we confirmed the scaling properties of the model by semi-analytical calculations. In conclusion, we were able to identify several properties of the different parts of fixational eye movements and propose a model approach that is in accordance with the described neurophysiology and described limitations of fixational eye movement control.
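The delayed-random-walk idea behind the drift model can be simulated in a few lines. The parameters below are illustrative, not the fitted control-loop durations from the dissertation:

```python
import random

# Toy delayed random walk for the drift component: the velocity is nudged
# back toward zero based on the position a fixed delay ago, plus noise.
# delay, gain, and noise level are illustrative assumptions.
def delayed_random_walk(steps, delay=10, gain=0.1, noise=0.5, seed=1):
    rng = random.Random(seed)
    pos, history = 0.0, [0.0] * delay
    trace = []
    for _ in range(steps):
        feedback = -gain * history[0]        # control acts on delayed position
        vel = feedback + rng.gauss(0.0, noise)
        pos += vel
        history = history[1:] + [pos]        # slide the delay buffer
        trace.append(pos)
    return trace

trace = delayed_random_walk(1000)
print(len(trace))  # -> 1000
```

Varying `delay` for the horizontal and vertical components separately is, in spirit, how the model distinguishes the two control loops.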
APA, Harvard, Vancouver, ISO, and other styles
13

Cuba Gyllensten, Ollanta. "Evaluation of classification algorithms for smooth pursuit eye movements : Evaluating current algorithms for smooth pursuit detection on Tobii Eye Trackers." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155899.

Full text
Abstract:
Eye tracking is a field that has been growing immensely over the last decade. Accompanying this growth is a need for simplified and automatic analysis of eye tracking data. A part of that analysis is eye movement classification, and while there are many adequate classification methods for fixations and saccades, the tools for smooth pursuit classification are still lacking. This thesis gives an overview of the field, and analyses five different methods for classifying smooth pursuits, fixations, and saccades. The analysis also explores evaluation methods that avoid the laborious way of manually tagging data to get a reference classification. Despite earlier reports of decent performance, the overall results for all the analysed algorithms are poor. In particular, the slowest pursuits are consistently misclassified. Most certainly, the inclusion of the slow pursuits has skewed the results, but even disregarding them doesn’t yield particularly impressive results. This begs the question of what concessions one has to make in terms of prerequisites on the data, or qualifiers for the resulting analysis, to achieve adequate performance, and given those, when would such a classification be preferred to something tailored to the problem at hand?
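A crude dual-threshold velocity classifier illustrates why slow pursuits are hard: they sit just above the fixation threshold. The thresholds below are assumptions, and this is far simpler than the five algorithms the thesis evaluates:

```python
# Dual-threshold velocity classifier (illustrative thresholds in deg/s):
# below `low` -> fixation, above `high` -> saccade, in between -> pursuit.
def classify_sample(velocity, low=5.0, high=100.0):
    if velocity < low:
        return "fixation"
    if velocity > high:
        return "saccade"
    return "pursuit"

print([classify_sample(v) for v in (1.0, 20.0, 300.0)])
# -> ['fixation', 'pursuit', 'saccade']
```

A pursuit at, say, 6 deg/s lands just past the fixation threshold, so measurement noise alone can flip its label, which matches the thesis's finding that the slowest pursuits are consistently misclassified.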
APA, Harvard, Vancouver, ISO, and other styles
14

Sung, Wei-Hong. "Investigating minimal Convolution Neural Networks (CNNs) for realtime embedded eye feature detection." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281338.

Full text
Abstract:
With the rapid rise of neural networks, many tasks that used to be difficult to complete with traditional methods can now be solved well, especially in the computer vision field. However, as the tasks we have to solve have become more and more complex, the neural networks we use are becoming deeper and larger. Although some embedded systems are powerful nowadays, most still suffer from memory and computation limitations, which means it is hard to deploy large neural networks on these embedded devices. This project aims to explore different methods to compress an original large model. That is, we first train a baseline model, YOLOv3 [1], which is a well-known object detection network, and then we use two methods to compress it. The first method is pruning by sparsity training, followed by channel pruning according to the scaling factor values learned during sparsity training. Based on this method, we have made three explorations. Firstly, we take a union-mask strategy to solve the dimension problem of the shortcut-related layers in YOLOv3 [1]. Secondly, we try to absorb the shifting-factor information into subsequent layers. Finally, we implement layer pruning and combine it with channel pruning. The second method is pruning with Neural Architecture Search (NAS), which uses a deep reinforcement learning framework to automatically find the best compression ratio for each layer. At the end of this report, we analyze the key findings and conclusions of our experiments and propose future work that could potentially improve our project.
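Channel pruning by scaling factor, as described above, amounts to keeping the channels whose |gamma| exceeds a global quantile of all batch-norm scale factors. A pure-Python sketch with an illustrative pruning ratio and a naive quantile:

```python
# Channel pruning by batch-norm scaling factor: after sparsity training,
# channels with small |gamma| contribute little and can be removed.
# The 0.5 pruning ratio and the gamma values below are illustrative.
def prune_mask(gammas, prune_ratio=0.5):
    """gammas: list of BN scale factors. Returns a keep-mask (True = keep)."""
    mags = sorted(abs(g) for g in gammas)
    threshold = mags[int(prune_ratio * len(mags)) - 1]  # naive quantile
    return [abs(g) > threshold for g in gammas]

g = [0.01, 0.8, 0.02, 1.2, 0.05, 0.9]
print(prune_mask(g))  # keeps the three largest-|gamma| channels
```

For shortcut-connected layers, as in YOLOv3, the masks of the layers feeding a shortcut must be reconciled (e.g. by a union of their keep-masks) so that the summed tensors keep matching channel dimensions.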
Med den snabba ökningen av neurala nätverk kan många uppgifter som brukade vara svåra att utföra i traditionella metoder nu lösas bra, särskilt inom datorsynsfältet. Men eftersom uppgifterna vi måste lösa har blivit mer och mer komplexa, blir de neurala nätverken vi använder djupare och större. Därför, även om vissa inbäddade system är kraftfulla för närvarande, lider de flesta inbäddade system fortfarande av minnes- och beräkningsbegränsningar, vilket innebär att det är svårt att distribuera våra stora neurala nätverk på dessa inbäddade enheter. Projektet syftar till att utforska olika metoder för att komprimera den ursprungliga stora modellen. Det vill säga, vi tränar först en baslinjemodell, YOLOv3[1], som är ett berömt objektdetekteringsnätverk, och sedan använder vi två metoder för att komprimera basmodellen. Den första metoden är beskärning med hjälp av sparsity training, och vi kanalskärning enligt skalningsfaktorvärdet efter sparsity training. Baserat på idén om denna metod har vi gjort tre utforskningar. För det första tar vi unionens maskstrategi för att lösa dimensionsproblemet för genvägsrelaterade lager i YOLOv3[1]. För det andra försöker vi absorbera informationen om skiftande faktorer i efterföljande lager. Slutligen implementerar vi lagerskärningen och kombinerar det med kanalbeskärning. Den andra metoden är beskärning med NAS, som använder en djup förstärkningsram för att automatiskt hitta det bästa kompressionsförhållandet för varje lager. I slutet av denna rapport analyserar vi de viktigaste resultaten och slutsatserna i vårt experiment och syftar till det framtida arbetet som potentiellt kan förbättra vårt projekt.
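The channel-pruning step described above, selecting channels by the magnitude of their batch-normalization scaling factors against one global threshold, can be sketched as follows (pure NumPy with illustrative names; a simplification of the method, not code from the thesis):

```python
import numpy as np

def channel_prune_masks(gammas, prune_ratio=0.5):
    """Build per-layer keep-masks from batch-norm scaling factors.
    gammas: list of 1-D arrays, one per prunable layer, holding each
    channel's scaling factor after sparsity training.  A single global
    quantile threshold decides which channels survive."""
    all_g = np.concatenate([np.abs(g) for g in gammas])
    thr = np.quantile(all_g, prune_ratio)
    return [np.abs(g) > thr for g in gammas]

def union_mask(mask_a, mask_b):
    """Union strategy for shortcut-connected layers: keep a channel if
    either side keeps it, so the element-wise addition in the shortcut
    stays dimension-consistent."""
    return mask_a | mask_b
```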
APA, Harvard, Vancouver, ISO, and other styles
15

Einestam, Ragnar, and Karl Casserfelt. "PiEye in the Wild: Exploring Eye Contact Detection for Small Inexpensive Hardware." Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20696.

Full text
Abstract:
Ögonkontakt-sensorer skapar möjligheten att tolka användarens uppmärksamhet, vilket kan användas av system på en mängd olika vis. Dessa inkluderar att skapa nya möjligheter för människa-dator-interaktion och mäta mönster i uppmärksamhet hos individer. I den här uppsatsen gör vi ett försök till att konstruera en ögonkontakt-sensor med hjälp av en Raspberry Pi, med målet att göra den praktisk i verkliga scenarion. För att fastställa att den är praktisk satte vi upp ett antal kriterier baserat på tidigare användning av ögonkontakt-sensorer. För att möta dessa kriterier valde vi att använda en maskininlärningsmetod för att träna en klassificerare med bilder för att lära systemet att upptäcka om en användare har ögonkontakt eller ej. Vårt mål var att undersöka hur god prestanda vi kunde uppnå gällande precision, hastighet och avstånd. Efter att ha testat kombinationer av fyra olika metoder för feature extraction kunde vi fastslå att den bästa övergripande precisionen uppnåddes genom att använda LDA-komprimering på pixeldatan från varje bild, medan PCA-komprimering var bäst när input-bilderna liknade de från träningen. När vi undersökte systemets hastighet fann vi att nedskalning av bilder hade en stor effekt på hastigheten, men detta sänkte också både precision och maximalt avstånd. Vi lyckades minska den negativa effekten som en minskad skala hos en bild hade på precisionen, men det maximala avståndet som sensorn fungerade på var fortfarande relativt till skalan och i förlängningen hastigheten.
Eye contact detection sensors have the possibility of inferring user attention, which can be utilized by a system in a multitude of different ways, including supporting human-computer interaction and measuring human attention patterns. In this thesis we attempt to build a versatile eye contact sensor using a Raspberry Pi that is suited for real world practical usage. In order to ensure practicality, we constructed a set of criteria for the system based on previous implementations. To meet these criteria, we opted to use an appearance-based machine learning method where we train a classifier with training images in order to infer if users look at the camera or not. Our aim was to investigate how well we could detect eye contacts on the Raspberry Pi in terms of accuracy, speed and range. After extensive testing on combinations of four different feature extraction methods, we found that Linear Discriminant Analysis compression of pixel data provided the best overall accuracy, but Principal Component Analysis compression performed the best when tested on images from the same dataset as the training data. When investigating the speed of the system, we found that down-scaling input images had a huge effect on the speed, but also lowered the accuracy and range. While we managed to mitigate the effects the scale had on the accuracy, the range of the system is still relative to the scale of input images and by extension speed.
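The PCA pixel-data compression compared in the thesis reduces each flattened eye image to a handful of coordinates before classification; a minimal NumPy sketch of that step (LDA works analogously but uses the eye-contact/no-eye-contact labels to pick its directions):

```python
import numpy as np

def pca_compress(X, k):
    """Project flattened image vectors onto the top-k principal components.
    X: (n_samples, n_pixels).  Returns the projections, the components,
    and the mean used for centering."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the principal directions, ordered by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:k]
    return Xc @ comps.T, comps, mean
```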
APA, Harvard, Vancouver, ISO, and other styles
16

Harms, Looström Julia, and Emma Frisk. "Bird's-eye view vision-system for heavy vehicles with integrated human-detection." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-54527.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Barbieri, Gillian Sylvia Anna-Stasia. "The role of spatial derivatives in feature detection." Thesis, University of Birmingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.368742.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Pesin, Jimy. "Detection and removal of eyeblink artifacts from EEG using wavelet analysis and independent component analysis /." Online version of thesis, 2007. http://hdl.handle.net/1850/8952.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Patel, Brindal A. "R-Eye| An image processing-based embedded system for face detection and tracking." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10141532.

Full text
Abstract:

The current project presents the development of R-Eye, a face detection and tracking system implemented as an embedded device based on the Arduino microcontroller. The system is programmed in Python using the Viola-Jones algorithm for image processing. Several experiments designed to measure and compare the performance of the system under various conditions show that the system performs well when used with an integrated camera, reaching a 93% face recognition accuracy for a clear face. The accuracy is lower when detecting a face with accessories, such as a pair of eyeglasses (80%), or when a low-resolution low-quality camera is used. Experimental results also show that the system is capable of detecting and tracking a face within a frame containing multiple faces.
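At the core of the Viola-Jones algorithm used here is the integral image, which turns every rectangular Haar-feature sum into a constant-time operation; a small sketch of that building block (illustrative, not the project's code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[r, c] holds the sum of img[:r+1, :c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] via four look-ups in the table."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

In practice, a Python implementation like the one described would typically call OpenCV's `cv2.CascadeClassifier` with a pretrained frontal-face cascade rather than evaluate Haar features by hand.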

APA, Harvard, Vancouver, ISO, and other styles
20

Carroll, Joshua Adam. "Eye-safe UV stand-off Raman spectroscopy for explosive detection in the field." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/80879/1/Joshua_Carroll_Thesis.pdf.

Full text
Abstract:
This project focused on maximising the detection range of an eye-safe stand-off Raman system for use in detecting explosives. Investigation of the effect on detection range through differing laser parameters in this thesis provided optimal laser settings to achieve the largest possible detection range of explosives, while still remaining under the eye-safe limit.
APA, Harvard, Vancouver, ISO, and other styles
21

Giesel, M., A. Yakovleva, Marina Bloj, A. R. Wade, A. M. Norcia, and J. M. Harris. "Relative contributions to vergence eye movements of two binocular cues for motion-in-depth." Springer Nature Group, 2019. http://hdl.handle.net/10454/17514.

Full text
Abstract:
When we track an object moving in depth, our eyes rotate in opposite directions. This type of “disjunctive” eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal.
Supported by NIH EY018875 (AMN), BBSRC grants BB/M001660/1 (JH), BB/M002543/1 (AW), and BB/MM001210/1 (MB).
APA, Harvard, Vancouver, ISO, and other styles
22

Richards, Othello Lennox. "When Eyes and Ears Compete: Eye Tracking How Television News Viewers Read and Recall Pull Quote Graphics." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6801.

Full text
Abstract:
This study applied dual processing theory, the theory of working memory, and the theory of cue summation to examine how the video and audio in a television news story interact with or against each other when the story uses pull quote graphics to convey key information to viewers. Using eye-tracking, the study produced visual depictions of exactly what viewers look at on the screen when the words in the reporter's voice track match the text in the pull quote graphic verbatim, when the reporter summarizes the text in the graphic, and when the reporter's voice track ignores the text in the pull quote. The study tested the effect on recall when viewers were presented with these three story conditions: high redundancy, medium redundancy, and low redundancy, respectively. Key findings included the following: first, stories with low redundancy resulted in lower recall and memory sensitivity scores (a measure of memory strength) than stories in which the reporter either summarized the pull quote or read it verbatim on the air. Second, neither high-redundancy nor medium-redundancy stories were superior or inferior to the other in their effect on recall and memory sensitivity. And finally, in high-, medium-, and low-redundancy conditions, subjects stated that they relied more on the reporter's narration than the pull quote to get information. The study states possible implications for news producers and reporters and suggests future research in the broadcast television news industry.
APA, Harvard, Vancouver, ISO, and other styles
23

Chaudhuri, Matthew Alan. "Optimization of a hardware/software coprocessing platform for EEG eyeblink detection and removal /." Online version of thesis, 2008. http://hdl.handle.net/1850/8967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Husseini, Orabi Ahmed. "Multi-Modal Technology for User Interface Analysis including Mental State Detection and Eye Tracking Analysis." Thesis, Université d'Ottawa / University of Ottawa, 2017. http://hdl.handle.net/10393/36451.

Full text
Abstract:
We present a set of easy-to-use methods and tools to analyze human attention, behaviour, and physiological responses. A potential application of our work is evaluating user interfaces being used in a natural manner. Our approach is designed to be scalable and to work remotely on regular personal computers using inexpensive and noninvasive equipment. The data sources our tool processes are nonintrusive, and captured from video, i.e. eye tracking and facial expressions. For video data retrieval, we use a basic webcam. We investigate combinations of observation modalities to detect and extract affective and mental states. Our tool provides a pipeline-based approach that 1) collects observational data, 2) incorporates and synchronizes the signal modalities mentioned above, 3) detects users' affective and mental state, 4) records user interaction with applications and pinpoints the parts of the screen users are looking at, and 5) analyzes and visualizes results. We describe the design, implementation, and validation of a novel multimodal signal fusion engine, the Deep Temporal Credence Network (DTCN). The engine uses Deep Neural Networks to 1) provide a generative and probabilistic inference model, and 2) handle multimodal data such that its performance does not degrade due to the absence of some modalities. We report on the recognition accuracy of basic emotions for each modality. Then, we evaluate our engine in terms of its effectiveness in recognizing the six basic emotions and six mental states, namely agreeing, concentrating, disagreeing, interested, thinking, and unsure. Our principal contributions include 1) the implementation of a multimodal signal fusion engine, 2) real-time recognition of affective and primary mental states from nonintrusive and inexpensive modalities, and 3) novel mental state-based visualization techniques: 3D heatmaps, 3D scanpaths, and widget heatmaps that find parts of the user interface where users are perhaps unsure, annoyed, frustrated, or satisfied.
APA, Harvard, Vancouver, ISO, and other styles
25

Shojaeizadeh, Mina. "Automatic Detection of Cognitive Load and User's Age Using a Machine Learning Eye Tracking System." Digital WPI, 2018. https://digitalcommons.wpi.edu/etd-dissertations/476.

Full text
Abstract:
As the amount of information captured about users increased over the last decade, interest in personalized user interfaces has surged in the HCI and IS communities. Personalization is an effective means of accommodating differences between individuals. The fundamental idea behind personalization rests on the notion that if a system can gather useful information about the user, generate a relevant user model and apply it appropriately, it would be possible to adapt the behavior of a system and its interface to the user at the individual level. Personalization of user interface features can enhance usability. With recent technological advances, personalization can be achieved automatically and unobtrusively. A user interface can deploy a NeuroIS technology such as eye-tracking that learns from the user's visual behavior to provide users an experience unique to them. The advantage of eye-tracking technology is that subjects cannot consciously manipulate their responses, since these responses are not readily subject to manipulation. The objective of this dissertation is to develop a theoretical framework for user personalization during reading comprehension tasks based on two machine learning (ML) models. The proposed ML-based profiling process consists of user age characterization and user cognitive load detection while the user reads text. To this end, detection of cognitive load through eye-movement features was investigated during different cognitive tasks (see Chapters 3, 4 and 6) with different task conditions. Furthermore, in separate studies (see Chapters 5 and 6) the relationship between users' eye movements and their age group (e.g., younger and older generations) was investigated during a reading comprehension task. A Tobii X300 eye tracking device was used to record the eye movement data for all studies. Eye-movement data was acquired via Tobii eye tracking software, and then preprocessed and analyzed in R for the aforementioned studies.
Machine learning techniques were used to build predictive models. The aggregated results of the studies indicate that machine learning, accompanied by a NeuroIS tool like eye-tracking, can be used to model user characteristics like age and user mental states like cognitive load, automatically and implicitly, with accuracy above chance (in the range of 70-92%). The results of this dissertation can be used in a more general framework to adaptively modify content to better serve users' mental and age-related needs. Text simplification and modification techniques might be developed for use in various scenarios.
APA, Harvard, Vancouver, ISO, and other styles
26

Coetzer, Reinier Casper. "Development of a robust active infrared-based eye tracking system." Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/26399.

Full text
Abstract:
Eye tracking has a number of useful applications, ranging from monitoring a vehicle driver for possible signs of fatigue and providing an interface to enable severely disabled people to communicate with others, to a number of medical applications. Most eye tracking applications require a non-intrusive way of tracking the eyes, making a camera-based approach a natural choice. However, although significant progress has been made in recent years, modern eye tracking systems still have not overcome a number of challenges, including eye occlusions, variable ambient lighting conditions and inter-subject variability. This thesis describes the complete design and implementation of a real-time camera-based eye tracker, which was developed mainly for indoor applications. The developed eye tracker relies on the so-called bright/dark pupil effect for both the eye detection and eye tracking phases. The bright/dark pupil effect was realised by the development of specialised hardware and near-infrared illumination, which were interfaced with a machine vision camera. For the eye detection phase, the performance of three different types of classifiers, namely neural networks, SVMs and AdaBoost, was directly compared on a dataset consisting of 17 individual subjects from different ethnic backgrounds. For the actual tracking of the eyes, a Kalman filter was combined with the mean-shift tracking algorithm. A PC application with a graphical user interface (GUI) was also developed to integrate the various aspects of the eye tracking system, allowing the user to easily configure and use the system. Experimental results have shown the eye detection phase to be very robust, while the eye tracking phase was also able to accurately track the eyes from frame to frame in real time, given a few constraints.
AFRIKAANS : Oogvolging het ’n beduidende aantal toepassings wat wissel van die deteksie van bestuurderuitputting, die voorsiening van ’n rekenaarintervlak vir ernstige fisies gestremde mense, tot ’n groot aantal mediese toepassings. Die meeste toepassings van oogvolging vereis ’n nie-indringende manier om die oë te volg, wat ’n kamera-gebaseerde benadering ’n natuurlike keuse maak. Alhoewel daar alreeds aansienlike vordering gemaak is in die afgelope jare, het moderne oogvolgingstelsels egter nogsteeds verskeie uitdagings nie oorkom nie, insluitende oog okklusies, veranderlike beligtingsomstandighede en variansies tussen gebruikers. Die verhandeling beskryf die volledige ontwerp en implementering van ’n kamera-gebaseerde oogvolgingsstelsel wat in reële tyd werk. Die ontwikkeling van die oogvolgingsstelsel maak staat op die sogenaamde helder/donker pupil effek vir beide die oogdeteksie en oogvolging fases. Die helder/donker pupil effek was moontlik gemaak deur die ontwikkeling van gespesialiseerde hardeware en naby-infrarooi illuminasie. Vir die oogdeteksie fase was die akkuraatheid van drie verskillende tipes klassifiseerders getoets en direk vergelyk, insluitende neurale netwerke, SVMs en AdaBoost. Die datastel waarmee die klassifiseerders getoets was, het bestaan uit 17 individuele toetskandidate van verskillende etniese groepe. Vir die oogvolgings fase was ’n Kalman filter gekombineer met die gemiddelde-verskuiwings algoritme. ’n Rekenaar program met ’n grafiese gebruikersintervlak was ontwikkel vir ’n persoonlike rekenaar, sodat al die verskillende aspekte van die oogvolgingsstelsel met gemak opgestel kon word. Eksperimentele resultate het getoon dat die oogdeteksie fase uiters akkuraat en robuust was, terwyl die oogvolgings fase ook hoogs akuraat die oë gevolg het, binne sekere beperkinge.
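The Kalman-filter half of the frame-to-frame tracking can be sketched as one predict/update cycle of a constant-velocity model, with the mean-shift tracker supplying the measurement; the noise values below are illustrative assumptions, not the thesis's tuning:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a constant-velocity Kalman filter.
    x: state [px, py, vx, vy]; P: 4x4 covariance; z: measured [px, py]
    (e.g. the eye position reported by mean-shift for the current frame)."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # constant-velocity transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe position only
    Q = q * np.eye(4); R = r * np.eye(2)
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```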
Dissertation (MEng)--University of Pretoria, 2011.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
27

Dybäck, Matilda, and Johanna Wallgren. "Pupil dilation as an indicator for auditory signal detection : Towards an objective hearing test based on eye tracking." Thesis, KTH, Skolan för teknik och hälsa (STH), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-192703.

Full text
Abstract:
An early detection of hearing loss in children is important for the child's speech and language development. For children between 3-6 months, a reliable method to measure hearing and determine hearing thresholds is missing. A hearing test based on the pupillary response to auditory signal detection as measured by eye tracking is based on an automatic physiological response. This hearing test could be used instead of the objective hearing tests used today. The presence of pupillary response has been shown in response to speech, but it is unstudied in response to sinus tones. The objective of this thesis was to study whether there is a consistent pupillary response to different sinus tone frequencies commonly used in hearing tests and if yes, to determine reliably the time window of this response. Four different tests were done. The adult pupillary response in regard to sinus tone stimuli with four frequency levels (500 Hz, 1000 Hz, 2000 Hz and 4000 Hz), and four loudness levels (silence, 30 dB, 50 dB and 70 dB) was tested (N=20, 15 females, 5 males). Different brightness levels and distractions on the eye tracking screen were investigated in three substudies (N=5, 4 females, 1 male). Differences between silence and loudness levels within frequency levels were tested for statistical significance. A pupillary response in regard to sinus tones occurred consistently between 300 ms and 2000 ms with individual variation, i.e. earlier than for speech sounds. Differences between silence and loudness levels were only statistically significant for 4000 Hz. No statistical difference was shown between different brightness levels or if there were distractions present on the eye tracker screen. The conclusion is that pupillary response to pure sinus tones in adults is a possible measure of hearing threshold for at least 4000 Hz. Larger studies are needed to confirm this, and also to more thoroughly investigate the other frequencies.
En tidig upptäckt av hörselnedsättning hos barn är viktig för barnets tal- och språkutveckling. För barn mellan 3-6 månader saknas det en tillförlitlig metod för att mäta hörsel och bestämma hörtrösklar. Ett hörseltest baserad på pupillreaktion på ljud som mäts med en eye tracker bygger på en automatisk fysiologisk reaktion och skulle kunna användas istället för de objektiva test som används idag. Hitintills har pupillreaktion på tal påvisats, men det saknas studier som studerat eventuella reaktioner på sinustoner. Syftet med denna uppsats var att undersöka om det finns en enhetlig pupillreaktion på de olika frekvenserna av sinustoner som vanligen används i hörseltest. Vidare var studiens syfte att fastställa ett tillförlitligt tidsfönster för pupillreaktion. Fyra olika typer av tester utfördes. Pupillreaktionen mot sinustoner med fyra olika frekvensnivåer (500 Hz, 1000 Hz, 2000 Hz och 4000 Hz), och fyra olika ljudnivåer (tystnad, 30 dB, 50 dB och 70 dB) undersöktes i ett test på vuxna deltagare (N=20, 15 kvinnor, 5 män). Olika ljusnivåer och distraktioner på eye tracker-skärmen undersöktes i tre test (N=5, 4 kvinnor, 1 man). Skillnaderna mellan ljudnivåer och frekvensnivåer testades med statistiska tester. Resultaten visade att pupillreaktion på sinustoner inträffade konsekvent mellan 300 ms och 2000 ms med individuella variationer. Denna reaktionstid inträffar tidigare än för taljud. En statistisk signifikant skillnad mellan tystnad och olika ljudnivåer kunde endast ses för frekvensnivån 4000 Hz. Ingen statistisk skillnad uppmättes mellan olika ljudnivåer eller om det fanns distraktioner på eye tracker-skärmen. De i studien framkomna resultaten tyder på att pupillreaktioner mot rena sinustoner hos vuxna är en möjlig metod för att identifiera hörseltrösklar för åtminstone 4000 Hz. Större studier behöver göras för att fastställa detta och en noggrannare undersökning behöver genomföras för de andra frekvenserna.
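The reported 300-2000 ms response window suggests a simple per-trial measure: average the baseline-corrected pupil diameter inside that window (the 500 ms pre-stimulus baseline below is an assumption for illustration, not a parameter from the thesis):

```python
import numpy as np

def pupil_response(trace, t, baseline=(-0.5, 0.0), window=(0.3, 2.0)):
    """Mean baseline-corrected pupil diameter in the response window.
    trace: pupil diameter samples; t: sample times in seconds relative
    to tone onset.  A clearly positive value indicates dilation."""
    base = trace[(t >= baseline[0]) & (t < baseline[1])].mean()
    resp = trace[(t >= window[0]) & (t <= window[1])].mean()
    return resp - base
```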
APA, Harvard, Vancouver, ISO, and other styles
28

Bediz, Yusuf. "Automatic Eye Tracking And Intermediate View Reconstruction For 3d Imaging Systems." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607528/index.pdf.

Full text
Abstract:
In recent years, the utilization of 3D display systems has become popular in many application areas. One of the most important issues in the utilization of these systems is rendering the correct view to the observer based on his/her position. In this thesis, we propose and implement a single-user view rendering system for autostereoscopic/stereoscopic displays. The system can easily be installed on a standard PC together with an autostereoscopic display or stereoscopic glasses (shutter, polarized, Pulfrich, and anaglyph) with an appropriate video card. The proposed system is composed of three main blocks: view point detection, view point tracking and intermediate view reconstruction. The Haar object detection method, which is based on a boosted cascade of simple feature classifiers, is utilized for view point detection. After detection, feature points are found on the detected region and fed to the feature tracker. The view point of the observer is calculated using the tracked position of the observer in the image. The correct stereoscopic view is then rendered on the display. A 3D warping-based method is utilized as the intermediate view reconstruction method. The system is implemented on a computer with a Pentium IV 3.0 GHz processor, using E-D 3D shutter glasses and a Creative NX Webcam.
APA, Harvard, Vancouver, ISO, and other styles
29

Yekhshatyan, Lora. "Detecting distraction and degraded driver performance with visual behavior metrics." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/910.

Full text
Abstract:
Driver distraction contributes to approximately 43% of motor-vehicle crashes and 27% of near-crashes. Rapidly developing in-vehicle technology and electronic devices place additional demands on drivers, which might lead to distraction and diminished capacity to perform driving tasks. This situation threatens safe driving. Technology that can detect and mitigate distraction by alerting drivers could play a central role in maintaining safety. Correctly identifying driver distraction in real time is a critical challenge in developing distraction mitigation systems, and this function has not been well developed. Moreover, the greatest benefit may be from real-time distraction detection in advance of dangerous breakdowns in driver performance. Based on driver performance, two types of distraction - visual and cognitive - are identified. These types of distraction have very different effects on visual behavior and driving performance; therefore, they require different algorithms for detection. Distraction detection algorithms typically rely on either eye measures or driver performance measures because the effect of distraction on the coordination of measures has not been established. Combining both eye glance and vehicle data could enhance the ability of algorithms to detect and differentiate visual and cognitive distraction. The goal of this research is to examine whether poor coordination between visual behavior and vehicle control can identify diminished attention to driving in advance of breakdowns in lane keeping. The primary hypothesis of this dissertation is that detection of changes in eye-steering relationship caused by distraction could provide a prospective indication of vehicle state changes. Three specific aims are pursued to test this hypothesis. The first aim examines the effect of distracting activity on eye and steering movements to assess the degree to which the correlation parameters are indicative of distraction. 
The second aim applies a control-theoretic system identification approach to the eye movement and steering data to distinguish between distracted and non-distracted conditions. The third aim examines whether changes of eye-steering coordination associated with distraction provide a prospective indication of breakdowns in driver performance, i.e., lane departures. Together, the three aims show that a combination of visual and steering behavior, i.e., an eye-steering model, can differentiate between the non-distracted and distracted states. This model revealed sensitivity to distraction associated with off-road glances. The models derived for different drivers have similar structure and fit data from other drivers reasonably well. In addition, the differences in model order and model coefficients indicate the variability in driving behavior: some people generate more complex behavior than others. As expected, eye-steering correlation on straight roads is not as strong as that observed on curvy roads. However, eye-steering correlation, measured through the correlation coefficient and the time delay between the two movements, is sensitive to different types of distraction. The time delay mediates changes in lane position, and the eye-steering system predicts breakdowns in lane keeping. This dissertation contributes to developing a distraction detection system that integrates visual and steering behavior. More broadly, these results suggest that integrating eye and steering data can be helpful in detecting and mitigating impairments beyond distraction, such as those associated with alcohol, fatigue, and aging.
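One simple way to obtain correlation-coefficient and time-delay parameters of the kind the dissertation relies on is cross-correlation of the two signals; a minimal sketch, assuming evenly sampled, detrended data:

```python
import numpy as np

def eye_steering_delay(eye, steer, dt):
    """Estimate the lag (in seconds) at which steering best follows the
    eyes, plus the normalized correlation at that lag.  A positive delay
    means steering trails the eye movements."""
    eye = eye - eye.mean()
    steer = steer - steer.mean()
    xc = np.correlate(steer, eye, mode="full")
    lag = np.argmax(xc) - (len(eye) - 1)
    corr = xc.max() / (np.linalg.norm(eye) * np.linalg.norm(steer))
    return lag * dt, corr
```

A shrinking correlation or a drifting delay would then serve as the kind of distraction signature the dissertation describes.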
APA, Harvard, Vancouver, ISO, and other styles
30

Ince, Kutalmis Gokalp. "Computer Simulation And Implementation Of A Visual 3-d Eye Gaze Tracker For Autostreoscopic Displays." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611343/index.pdf.

Full text
Abstract:
In this thesis, a visual 3-D eye gaze tracker is designed, implemented, and tested via computer simulations and on an experimental setup. The proposed tracker is designed to examine human perception on autostereoscopic displays when the viewer is 3 m away from such displays. Two different methods are proposed for calibrating personal parameters and gaze estimation, namely line of gaze (LoG) and line of sight (LoS) solutions. 2-D and 3-D estimation performances of the proposed system are observed both in computer simulations and on the experimental setup. In terms of 2-D and 3-D performance criteria, the LoS solution generates slightly better results than LoG on the experimental setup, and their performances are comparable in simulations. The 2-D estimation inaccuracy of the system is smaller than 0.5° in simulations and approximately 1° on the experimental setup. The 3-D estimation inaccuracy along the x- and y-axes is smaller than 2° in both the simulations and the experiments. However, estimation accuracy along the z-direction is significantly sensitive to pupil detection and head pose estimation errors. For typical error levels, 20 cm of inaccuracy along the z-direction is observed in simulations, whereas this inaccuracy reaches 80 cm in the experimental setup.
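A common way to turn two calibrated lines of sight into a single 3-D gaze point, and to see why the z-estimate is so error-sensitive, is the midpoint of closest approach of the two rays (a generic sketch, not the thesis's exact LoS solution):

```python
import numpy as np

def gaze_point_3d(o_l, d_l, o_r, d_r):
    """Midpoint of closest approach of two gaze rays.
    o_l, o_r: 3-D eye positions; d_l, d_r: unit gaze directions.
    Nearly parallel rays make the solve ill-conditioned, which is where
    large z-errors come from."""
    # Minimise |(o_l + s*d_l) - (o_r + t*d_r)| over the ray parameters s, t.
    A = np.array([[d_l @ d_l, -(d_l @ d_r)],
                  [d_l @ d_r, -(d_r @ d_r)]])
    b = np.array([(o_r - o_l) @ d_l, (o_r - o_l) @ d_r])
    s, t = np.linalg.solve(A, b)
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
```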
APA, Harvard, Vancouver, ISO, and other styles
31

Donovan, Tim. "Performance changes in wrist fracture detection and lung nodule perception following the perceptual feedback ot eye movements." Thesis, Lancaster University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.524759.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Gabbard, Ryan Dwight. "Identifying the Impact of Noise on Anomaly Detection through Functional Near-Infrared Spectroscopy (fNIRS) and Eye-tracking." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1501711461736129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Freeman, Jason Robert. "The Rise of the Listicle: Using Eye-Tracking and Signal Detection Theory to Measure This Growing Phenomenon." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6803.

Full text
Abstract:
As online technology continues to progress, the modes of communication through which content can be shared have grown exponentially. These include advances in navigational options for presenting information and news online. Though the listicle has been around for centuries, the internet has proliferated its growth, as content producers rely on its structure as a vehicle for sharing information. This research shows that, in the case of listicles, format had no direct effect on recall; however, participants who had a greater interest in the content showed significantly higher levels of memory sensitivity. This critical finding suggests that news outlets and content producers should concern themselves with ensuring that their content is interesting and relevant to their audience, rather than worrying about whether the listicle is in clickable or scrollable form. This first attempt to examine listicles by comparing their navigational differences in terms of recall performance lays a framework for future research on listicles.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Lihui. "Towards an efficient, unsupervised and automatic face detection system for unconstrained environments." Thesis, Loughborough University, 2006. https://dspace.lboro.ac.uk/2134/8132.

Full text
Abstract:
Nowadays, there is growing interest in face detection applications for unconstrained environments. The increasing need for public and national security motivated our research on automatic face detection systems. For public security surveillance applications, the face detection system must be able to cope with unconstrained environments, including cluttered backgrounds and complicated illumination. Supervised approaches give very good results in constrained environments, but in unconstrained environments even obtaining all the training samples needed is sometimes impractical. This limitation of supervised approaches impels us to turn to unsupervised approaches. In this thesis, we present an efficient, unsupervised, feature- and configuration-based face detection system. It combines geometric feature detection and local appearance feature extraction to increase the stability and performance of the detection process. It also contains a novel adaptive lighting compensation approach to normalize the complicated illumination of real-life environments. We aim to develop a system that makes as few assumptions as possible from the very beginning, is robust, and exploits accuracy/complexity trade-offs as much as possible. Although our attempt is ambitious for such an ill-posed problem, we manage to tackle it in the end with very few assumptions.
APA, Harvard, Vancouver, ISO, and other styles
35

Escorcia, Gutierrez José. "Image Segmentation Methods for Automatic Detection of the Anatomical Structure of the Eye in People with Diabetic Retinopathy." Doctoral thesis, Universitat Rovira i Virgili, 2021. http://hdl.handle.net/10803/671543.

Full text
Abstract:
This thesis is framed within the comprehensive plan for early prevention of Diabetic Retinopathy (DR) launched by the Spanish government, following World Health Organization recommendations to promote initiatives that raise awareness of the importance of regular eye exams among people with diabetes. To determine the level of diabetic retinopathy, different types of lesions must be located and identified in the eye fundus. First, the normal anatomical structures of the eye (blood vessels, optic disc and fovea) must be removed from the image in order to make the abnormalities visible. This thesis has focused on this image-cleaning step. The thesis first proposes a novel framework for fast and fully automatic optic disc segmentation based on Markowitz's Modern Portfolio Theory, used to generate an innovative color fusion model capable of admitting any segmentation methodology in the medical imaging field. This approach acts as a powerful, real-time pre-processing stage that could be integrated into daily clinical practice to accelerate the diagnosis of DR thanks to its simplicity, performance, and speed. The thesis's second contribution is a method that simultaneously performs blood vessel segmentation and foveal avascular zone detection, considerably reducing the required image processing time. In addition, among the color components studied in this thesis for blood vessel segmentation and fovea detection, the first component of the xyY color space, representing the chrominance values, proved the most suitable. Finally, samples are collected automatically for a color interpolation procedure based on statistical color information and are used by the well-known Convexity Shape Prior segmentation algorithm.
The thesis also proposes another blood vessel segmentation method that relies on effective feature selection based on decision tree learning; the five most relevant features for segmenting these ocular structures were identified. This method is validated using three different classification techniques (Decision Tree, Artificial Neural Network, and Support Vector Machine).
APA, Harvard, Vancouver, ISO, and other styles
36

Vineela, Sanampudi. "Eye Corner Detection." Thesis, 2015. http://ethesis.nitrkl.ac.in/7682/1/2015_Eye_Vineela.pdf.

Full text
Abstract:
Detection of the corners of the eye is a good research topic. It plays an important role in multiple tasks performed in the field of Computer Vision, and it also plays a key role in biometric systems. In this thesis, the existing corner detection methods are first discussed. Using the Hough transform, lines, circles and ellipses are found in a given image. In the proposed work, the eye region in a given face image is found using a template matching method. A rectangle is then fitted to the matched eye region, and the corners of the rectangle are taken as approximations of the corners of the eye.
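The template-matching step described above can be sketched as an exhaustive sum-of-squared-differences search; this is a hypothetical illustration (the thesis's actual matching score and template are not specified here):

```python
import numpy as np

def match_template(image, template):
    """Exhaustive template matching: return the top-left (row, col) of the
    window with the smallest sum of squared differences to the template."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def rectangle_corners(top_left, th, tw):
    """Approximate the eye corners by the corners of the matched rectangle."""
    r, c = top_left
    return [(r, c), (r, c + tw - 1), (r + th - 1, c), (r + th - 1, c + tw - 1)]

# Toy example: plant a 2x2 template inside a blank image and recover it.
img = np.zeros((8, 8))
tpl = np.array([[1., 2.], [3., 4.]])
img[3:5, 2:4] = tpl
print(match_template(img, tpl))   # → (3, 2)
```

In practice a normalized correlation score is usually preferred over raw SSD, since it is less sensitive to overall brightness changes.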
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, Hui-Yin, and 張惠茵. "Algorithm Design of Eye Status Detection for Close Eye Alert Systems." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/27226090604110195496.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
102 (2013)
In this thesis, we discuss several methods of eye-status detection for closed-eye alert systems. The proposed closed-eye alert system, based on tracking eye states, was implemented; if the system detects a closed-eye state, it shows a warning message as an alert. The proposed system contains two parts: eye tracking and eye-state detection. First, because of noise in the images, we pre-process the original facial images to ease later processing, as follows: 1. convert the color space to the YCbCr format; 2. use a mean filter to smooth the images; 3. use a Sobel filter to obtain the image edges; 4. use the "Self Quotient Image" algorithm to remove the influence of different lighting conditions. We then use templates of different sizes to detect candidate eye areas. Next, we propose four methods to detect the eye states: 1. use skin color based on a threshold on the Cr component; 2. use skin color based on normalized RGB pixels; 3. use a search window to find the minimum gray-level value; 4. use a cross filter to find the minimum gray-level value. We can then detect closed-eye states by comparing the color variations of the first image with those of the present image. In our experiments, we use a personal computer with a 2.66 GHz quad-core CPU for simulations. The video database contains 15 video clips of four different individuals: clips with a frontal view and no glasses, with a frontal view and thin-rim glasses, with a frontal view and black-frame glasses, and with an upward view and no glasses. The experimental results show that our methods can detect the eye states in various situations, such as when glasses are worn.
Under the premise that the eyes are open in the first frame, and compared with the other methods, the proposed search-window method has better detection accuracy for eye-state detection without glasses, and the proposed cross-filter method has better detection accuracy with glasses.
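The search-window idea (method 3 above) amounts to locating the darkest pixel, presumably the pupil, inside a window, and declaring the eye closed when that dark minimum disappears. A minimal sketch under those assumptions (the threshold and window handling are illustrative, not the thesis's values):

```python
import numpy as np

def darkest_point(gray, window):
    """Return the (row, col) of the minimum gray level inside a search
    window given as (top, left, height, width)."""
    t, l, h, w = window
    patch = gray[t:t+h, l:l+w]
    idx = np.unravel_index(np.argmin(patch), patch.shape)
    return int(t + idx[0]), int(l + idx[1])

def eye_closed(gray, window, open_min, margin=30):
    """Declare the eye closed when the darkest value in the window has
    risen well above the open-eye reference (pupil no longer visible)."""
    t, l, h, w = window
    current_min = int(gray[t:t+h, l:l+w].min())
    return current_min > open_min + margin

# Toy frame: bright skin with one dark pupil pixel.
frame = np.full((10, 10), 200)
frame[4, 5] = 20
print(darkest_point(frame, (2, 2, 6, 6)))   # pupil location
```

Here `open_min` would be calibrated from the first frame, matching the stated premise that the eyes are open at the start of the clip.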
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Sheng-Wen, and 王聖文. "Automatic Eye Detection and Glasses Removal." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/56a3j8.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
92 (2003)
This thesis presents an algorithm to automatically detect the eye locations in a given face image and to remove the glasses when the subject is wearing them. Our system consists of three modules: face segmentation, eye detection, and eyeglasses removal when glasses are worn. First, we use the universal skin-color map to detect the face regions, which ensures sufficient adaptability to ambient lighting conditions. Then, a special filter, called the circle-frequency filter, is used to locate the eye regions because of its invariance over a wide range of face orientations and rotations. Finally, because glasses are so widely worn, we propose a novel method to remove the eyeglasses automatically based on edge detection and a modified fuzzy rule-based (MFRB) filter. The simulations and results demonstrate that our approach detects eye locations efficiently and, after glasses removal, produces a facial image with high fidelity to a glasses-free face.
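A "universal skin-color map" is typically a fixed box in the CbCr chrominance plane. The ranges below are the commonly cited ones from Chai and Ngan's skin-color map and may differ from the exact bounds used in this thesis:

```python
import numpy as np

# Commonly cited CbCr ranges for the "universal" skin-colour map
# (Chai & Ngan, 1999); treat these bounds as an assumption here.
CB_RANGE = (77, 127)
CR_RANGE = (133, 173)

def skin_mask(cb, cr):
    """Boolean mask of skin-coloured pixels from Cb/Cr channel arrays.

    Working in chrominance only is what gives the map its robustness to
    ambient lighting: luminance (Y) changes are ignored."""
    return ((cb >= CB_RANGE[0]) & (cb <= CB_RANGE[1]) &
            (cr >= CR_RANGE[0]) & (cr <= CR_RANGE[1]))

# Two pixels: one inside the skin box, one with too-low Cb.
cb = np.array([[100, 60]])
cr = np.array([[150, 150]])
print(skin_mask(cb, cr))
```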
APA, Harvard, Vancouver, ISO, and other styles
39

Chou, Ta-Feng, and 周達峰. "A COMPARISON OF EYE DETECTION ALGORITHMS." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/74749524768101402959.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Electronics and Electro-Optics Group, In-service Master's Program, College of Electrical and Computer Engineering
94 (2005)
This thesis investigates the detection of people's eyes in the face. Eyes convey rich information: when we are tired and sleepy, the eyes lose alertness, while they open wide when we are energetic. If we can monitor a driver's eyes, we can design a driving-security system that reminds the driver automatically. Eyes are also important for establishing a person's identity. If we can set up a face-image database of the staff of a company, then we can identify a person by his or her facial features. The eyes, in particular, are the window of the soul; their size, their position in the face, and the iris are crucial for determining a person's identity. There are many methods for detecting eyes in a face, for example: 1. rough eye-outline prediction by RCER (Rough Contour Estimation Routine), mathematical morphology, and the deformable template model; 2. edge detection technologies. We make use of these techniques to find the position and shape of the eyes, and then compute the accuracy of each method. This thesis aims at finding an efficient method to detect the eyes and their shapes. Comparisons among these methods are made, and their advantages and disadvantages are finally noted.
APA, Harvard, Vancouver, ISO, and other styles
40

Panda, Deepti Ranjan. "Eye Detection Using Wavelets and ANN." Thesis, 2007. http://ethesis.nitrkl.ac.in/54/1/dipti.pdf.

Full text
Abstract:
A biometric system provides identification of an individual based on a unique biological feature or characteristic possessed by a person, such as a fingerprint, handwriting, heart beat, the face, or the eyes. Among these, eye detection is a strong approach, since the human eye does not change throughout the life of an individual; it is regarded as the most reliable and accurate biometric identification system available. In this project we develop a system for eye detection using wavelets and an ANN, simulated with the MATLAB 7.0 toolbox, in order to verify the uniqueness of the human eyes and their performance as a biometric. Eye detection involves first extracting the eye from a digital face image, and then encoding the unique patterns of the eye in such a way that they can be compared with preregistered eye patterns. The eye detection system consists of an automatic segmentation stage based on the wavelet transform; wavelet analysis is then used as a pre-processor for a back-propagation neural network with conjugate gradient learning. The inputs to the neural network are the wavelet-maxima neighborhood coefficients of face images at a particular scale. The output of the neural network is the classification of the input as an eye or non-eye region. An accuracy of 81% is observed for test images under environment conditions not included during training.
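The wavelet pre-processing stage can be illustrated with one level of the 2-D Haar transform, whose sub-band coefficients would then be flattened into the network's input vector. This is a generic sketch, not the thesis's exact wavelet or scale:

```python
import numpy as np

def haar2d(block):
    """One level of the 2-D Haar transform: returns the approximation (LL)
    and detail (LH, HL, HH) sub-bands of an even-sized gray block."""
    a = (block[:, 0::2] + block[:, 1::2]) / 2.0   # row-pair averages
    d = (block[:, 0::2] - block[:, 1::2]) / 2.0   # row-pair differences
    LL = (a[0::2, :] + a[1::2, :]) / 2.0
    LH = (a[0::2, :] - a[1::2, :]) / 2.0
    HL = (d[0::2, :] + d[1::2, :]) / 2.0
    HH = (d[0::2, :] - d[1::2, :]) / 2.0
    return LL, LH, HL, HH

def wavelet_features(block):
    """Flatten all sub-band coefficients into one feature vector, the kind
    of input a small back-propagation network could classify."""
    return np.concatenate([b.ravel() for b in haar2d(block)])
```

A constant image patch yields a constant LL band and all-zero detail bands, which is a quick sanity check that the transform separates average intensity from edge detail.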
APA, Harvard, Vancouver, ISO, and other styles
41

Lin, Hui-Wen, and 林慧雯. "Detection of Eye Blinking of a Driver." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/92275145791570371441.

Full text
Abstract:
Master's thesis
National Taiwan Normal University
Institute of Computer Science and Information Engineering
94 (2005)
Since car accidents may be caused by a driver's drowsiness, assistance systems have been designed to warn the driver before he falls asleep. Such a system can monitor the driver's eyes to detect blinking; if the blinking becomes too frequent or prolonged, it may indicate drowsiness. We propose a vision-based eye-blink detection system in this paper, developed in four steps: extracting the human face, detecting the eye locations, tracking the eyes, and detecting blinks. Using a video camera mounted in the car, the image of the driver's face is first extracted based on skin color. Second, the locations of the driver's eyes are detected according to eye features. Third, we track the eyes' locations in the next frame based on the shape of human eyes and their location relative to the face. Finally, we detect the driver's eye blinks. Our system has been tested during actual driving and shows that it can adapt to changing illumination while the car is in motion.
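The first step, extracting the face from a skin-color mask, reduces to taking the bounding box of the skin pixels. A minimal sketch (assuming a boolean mask has already been produced by some skin-color classifier):

```python
import numpy as np

def bounding_box(mask):
    """Tight bounding box (top, left, bottom, right) of the True pixels in
    a skin mask, i.e. the candidate face region; None if the mask is empty."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])
```

Real skin masks are noisy, so a morphological opening/closing pass is usually applied before taking the box.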
APA, Harvard, Vancouver, ISO, and other styles
42

Weng, Chung-Ren, and 翁崇荏. "A Fast Algorithm of Eye Blink Detection." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/81350316070926319375.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Computer and Information Science
93 (2004)
Information about eye blinks reveals fatigue, so blink detection is useful for monitoring or warning systems; automatically detecting driver fatigue may save drivers from accidents. We propose a fast algorithm for eye location and eye-blink detection. This algorithm is adapted to complex backgrounds and to situations where people wear glasses. First, we classify pixels as skin-colored or non-skin-colored to build a skin mask of the original image. Second, we patch the skin mask with morphological operations and locate the face bounding block. Applying horizontal projection to the bounding block and filtering with face-feature conditions, we locate the eye region. By gathering statistics on the pixels in the eye region, we can find out which frames contain eye blinks.
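The horizontal-projection step exploits the fact that the eye row of a face is darker than its surroundings. A toy sketch of that idea (the restriction to the upper half of the face and the band width are assumptions for illustration):

```python
import numpy as np

def eye_row_band(face_gray, band=2):
    """Locate a candidate eye band by horizontal projection: take the mean
    intensity of each row in the upper half of the face and pick the
    darkest row; return a (top, bottom) band around it."""
    upper = face_gray[: face_gray.shape[0] // 2]
    profile = upper.mean(axis=1)        # one mean intensity per row
    row = int(np.argmin(profile))
    return max(0, row - band), min(face_gray.shape[0] - 1, row + band)

# Toy face: uniform skin with one dark "eye" row.
face = np.full((10, 10), 200.0)
face[3, :] = 50.0
print(eye_row_band(face))
```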
APA, Harvard, Vancouver, ISO, and other styles
43

Chang, Chia-Wei, and 張家瑋. "Human Detection Using Single Fish-eye Camera." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/8w6ade.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
106 (2017)
This paper proposes a new algorithm for human detection using a single downward-viewing fish-eye camera. In recent years, methods for human detection with projective cameras have been studied extensively, but research on human detection from fish-eye camera images is very limited and time-consuming, and most of it applies only to simple, uncluttered environments. An advantage of fish-eye lenses is that they can cover a very wide area with a single camera; when people are close together, or may occlude one another, a fish-eye camera offers a better view than other cameras. The main purpose of this paper is to propose a new method that can detect and track people and estimate their number in more complicated scenes, with the aim of real-time application. Our detection algorithm makes use of elliptic templates and HOG features, and then applies a set of support vector machines (SVMs) to decide whether there are people in the template. Meanwhile, we track people's positions by applying color features and analyzing their movement over a specific time.
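The HOG descriptor fed to such SVMs is built from per-cell orientation histograms. Below is a toy version of a single HOG cell; the bin count and unsigned-gradient convention follow the usual HOG recipe, not necessarily this paper's exact parameters:

```python
import numpy as np

def cell_hog(cell, bins=9):
    """Magnitude-weighted, unsigned-gradient orientation histogram for one
    HOG cell: a toy version of the per-cell descriptor."""
    gy, gx = np.gradient(cell.astype(float))     # image gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist = np.zeros(bins)
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    np.add.at(hist, idx.ravel(), mag.ravel())     # accumulate magnitudes
    return hist

# A vertical step edge: all gradient energy lands in the 0-degree bin.
cell = np.tile([0.0, 0.0, 100.0, 100.0], (4, 1))
print(cell_hog(cell).argmax())
```

A full descriptor would tile the detection window into cells, normalize histograms over blocks, and concatenate everything before the SVM.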
APA, Harvard, Vancouver, ISO, and other styles
44

Kumar, Rahul. "DataBase Generation for Eye Detection with Spectacles." Thesis, 2018. http://ethesis.nitrkl.ac.in/9652/1/2018_MT_216EE1268_RKumar_Database.pdf.

Full text
Abstract:
Face detection, and subsequently eye detection, is a primary step in facial image processing. In most such applications, faces are considered without spectacles, which poses a challenge to detecting ocular features. In this work, we aim to create a facial database with and without spectacles to facilitate the validation of algorithms for face and eye detection. We designed an experiment to incorporate variability in the several parameters to be detected and measured. We have generated ground truth to provide a platform for researchers to quantitatively evaluate the performance of their algorithms on this database. Eye movement, in terms of view angle and velocity, is measured and correlated empirically with the stimulus movement on the screen.
APA, Harvard, Vancouver, ISO, and other styles
45

Rae, Robert Andrew. "Detection of eyelid position in eye-tracking systems." 2004. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=95039&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Chang, Ting-Hsuan, and 張庭瑄. "Automatic Red-Eye Reduction with Fast Face Detection." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/82482374910385611330.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Computer Science and Information Engineering
96 (2007)
Red-eye reduction technology corrects a common artifact in still images and can help the user obtain a better photo print. In recent years, digital cameras and camera phones have become more popular, and people can take many pictures easily; photos taken in the dark with a flash often show the red-eye artifact, so this subject has become more important. In this thesis, we propose a fast face detection and automatic red-eye reduction method, which eliminates the red-eye effect automatically, increasing the processing speed and decreasing the false-alarm rate to produce a better photo print.
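Once a face (and hence an eye region) is found, red-eye candidates are commonly flagged by a redness measure: pixels whose red channel strongly dominates green and blue. The thresholds below are illustrative assumptions, not the thesis's values:

```python
import numpy as np

def redeye_mask(rgb, ratio=1.8, min_red=90):
    """Flag pixels whose red channel strongly dominates green and blue,
    a simple cue for red-eye artifacts inside a detected eye region.

    rgb: H x W x 3 array; returns a boolean mask of candidate pixels."""
    r = rgb[..., 0].astype(float)
    gb = rgb[..., 1:].astype(float).mean(axis=-1)   # mean of green and blue
    return (r >= min_red) & (r >= ratio * (gb + 1.0))

# One glowing-red pixel next to one skin-toned pixel.
px = np.array([[[200, 40, 40], [200, 180, 170]]], dtype=np.uint8)
print(redeye_mask(px))
```

Correction would then desaturate the flagged pixels toward a dark pupil color while keeping specular highlights intact.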
APA, Harvard, Vancouver, ISO, and other styles
47

Hsu, Yi-Cheng, and 徐亦澂. "Circular Deformable Template Application in Eye Openness Detection." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/19755713531899394541.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
96 (2007)
Sleepiness and driving are a dangerous combination, and drowsy driving can be fatal; accordingly, it is necessary to develop a drowsy-driver awareness system. To avoid disturbing the driver, the system must be non-invasive and non-contact, and an image processing system is well suited to this requirement. Hence, we judge the drowsiness state by observing the eye status of operators in eye video. In this thesis, we use a CCD camera as the image source and a skin-color map to segment the skin region. We then use the PCA algorithm to find the eye region for circular template searching. The circular template locates the iris region, and finally we analyze this region to classify the eye-openness state. In numerical simulations we obtained high accuracy in eye-openness detection, which should be helpful for a drowsiness detection system.
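A circular template search for the iris can be scored by the contrast between a dark disc and its brighter surrounding ring; an open eye exposes the dark iris, a closed one does not. This is a hypothetical sketch of that idea, not the thesis's exact template or energy:

```python
import numpy as np

def circle_darkness(gray, cy, cx, r):
    """Contrast score for a circular iris template: mean of the bright
    surrounding ring minus mean of the dark disc (higher = more iris-like)."""
    yy, xx = np.ogrid[:gray.shape[0], :gray.shape[1]]
    d2 = (yy - cy) ** 2 + (xx - cx) ** 2
    inside = d2 <= r * r
    ring = (d2 > r * r) & (d2 <= (r + 2) ** 2)
    return gray[ring].mean() - gray[inside].mean()

def find_iris(gray, r):
    """Exhaustive search for the circle centre maximising the contrast."""
    best, best_c = -np.inf, (0, 0)
    for cy in range(r, gray.shape[0] - r):
        for cx in range(r, gray.shape[1] - r):
            s = circle_darkness(gray, cy, cx, r)
            if s > best:
                best, best_c = s, (cy, cx)
    return best_c
```

Thresholding the best score over time then gives an open/closed signal per frame.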
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Yu-Sheng, and 林祐聖. "Automatic Eye Detection and Reflection Separation within Glasses." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/45673049838875420176.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Department of Electrical and Control Engineering
94 (2005)
Eye detection has been applied in many applications, for instance human or face recognition, eye-gaze detection, and drowsiness detection. However, eye detection is often misled by the interference caused by glasses when the subject wears spectacles. This thesis presents an algorithm to automatically detect the eye locations in a given face image and to separate reflections within the glasses when glasses are worn. Our system consists of three modules: face segmentation, optic-area detection, and the separation of glasses reflections. First, we use the universal skin-color map to detect the face regions, which ensures sufficient adaptability to ambient lighting conditions. Then, we propose a novel method to detect the eye region and separate the reflection within the glasses based on edge detection, corner detection, and the anisotropic diffusion transform. The principle of reflection separation is that the correct decomposition of the reflection image is the one whose total number of corners and edges is the smallest among all possible decompositions. The simulations and results demonstrate that this principle can be applied effectively to reflections within glasses and yields good reflection separation.
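The minimum-corners-and-edges criterion can be caricatured with a gradient-energy proxy: among candidate two-layer decompositions of the observed image, prefer the pair whose layers jointly contain the least structure. A rough sketch under that simplification (the thesis counts detected corners and edges, not raw gradient energy):

```python
import numpy as np

def structure_cost(img):
    """Total gradient magnitude, a crude stand-in for counting the corners
    and edges present in one layer of a decomposition."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.abs(gx).sum() + np.abs(gy).sum())

def best_decomposition(candidates):
    """Among candidate (background, reflection) layer pairs, pick the pair
    with the smallest combined structure cost."""
    return min(candidates,
               key=lambda p: structure_cost(p[0]) + structure_cost(p[1]))
```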
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Ting-Hsuan. "Automatic Red-Eye Reduction with Fast Face Detection." 2008. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1606200815074300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

WANG, YU-HUI, and 王玉輝. "Eye Detection Using the Color Sequence Fuzzy Automata." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/98003077095482736700.

Full text
Abstract:
Master's thesis
St. John's University (Taiwan)
Master's Program, Department of Electronic Engineering
96 (2007)
Eyes are among the most important facial features, and eye detection plays an important role in many applications, such as face detection, face recognition, facial expression analysis, and eye-gaze tracking systems. In this paper, the authors introduce an eye localization system based on the line color sequence (LCS) in color images. The algorithm not only identifies color types but also determines the spatial relationships between the colors. The novelty of this work is that the algorithm works without complete face localization. With fuzzy automata support, the low mathematical computation cost is the main advantage of the proposed method.
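A line-color-sequence matcher is essentially an automaton run over the color labels of one pixel row: an eye line looks like skin, then sclera, then iris, then sclera, then skin. The sketch below is a crisp (non-fuzzy) toy version, so the fuzzy membership machinery of the paper is deliberately omitted:

```python
def eye_line_match(colors):
    """Accept a pixel-row label sequence of the form
    skin+ white+ dark+ white+ skin+ (skin, sclera, iris, sclera, skin).
    A crisp automaton; the paper's version uses fuzzy transitions."""
    expected = ["skin", "white", "dark", "white", "skin"]
    if not colors or colors[0] != "skin":
        return False
    state = 0
    for c in colors:
        if c == expected[state]:
            continue                       # stay in the current state
        elif state + 1 < len(expected) and c == expected[state + 1]:
            state += 1                     # advance to the next colour
        else:
            return False                   # out-of-order colour: reject
    return state == len(expected) - 1

print(eye_line_match(["skin", "skin", "white", "dark", "dark", "white", "skin"]))
```

A fuzzy variant would replace the equality tests with membership degrees per color class and accumulate a confidence instead of a hard accept/reject.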
APA, Harvard, Vancouver, ISO, and other styles