Academic literature on the topic 'Eye detection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Eye detection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever that information is available in the metadata.

Journal articles on the topic "Eye detection"

1

Patil, Vaibhavi, Sakshi Patil, Krishna Ganjegi, and Pallavi Chandratre. "Face and Eye Detection for Interpreting Malpractices in Examination Hall." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (2022): 1119–23. http://dx.doi.org/10.22214/ijraset.2022.41456.

Full text
Abstract:
One of the most difficult problems in computer vision is detecting faces and eyes. The purpose of this work is to give a review of the available literature on face and eye detection, as well as assessment of gaze. With the growing popularity of systems based on face and eye detection in a range of disciplines in recent years, academia and industry have paid close attention to this topic. Face and eye identification has been the subject of numerous investigations. Face and eye detection systems have made significant progress despite numerous challenges such as varying illumination conditions, wearing glasses, having facial hair or a moustache, and varying orientation poses or occlusion of the face. We categorize face detection models and look at basic face detection methods in this paper. Then we go through eye detection and gaze estimation techniques. Keywords: Image Processing, Face Detection, Eye Detection, Gaze Estimation
APA, Harvard, Vancouver, ISO, and other styles
2

S. Priyadharsini. "Prevention from Road Accidents by Detecting Driver Drowsiness." Recent Trends in Information Technology and its Application 5, no. 2 (2022): 1–13. https://doi.org/10.5281/zenodo.6789736.

Full text
Abstract:
Driver lethargy is one of the main explanations for traffic accidents and the associated fiscal losses. Existing drowsiness detection techniques do not concentrate on all the key factors of drowsy driving. The proposed system for the analysis and detection of drowsiness uses visual features: the eye state, eye blinking frequency, eye closure duration, redness level, mouth state, and yawning frequency are the key factors for detecting drowsiness. Systems that use this technique usually monitor eye states and the position of the iris over a specific time period to estimate the eye blinking frequency and the eye closure duration. On the other hand, mouth analysis and tracking the yawning frequency of a driver is an alternative way of detecting a drowsy driver. These techniques identify the drowsy state of the driver and, if the driver is drowsy, an alert message is sent stating that the driver is no longer capable of driving the vehicle safely, thus preventing accidents.
APA, Harvard, Vancouver, ISO, and other styles
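As a rough, hedged illustration of the eye-state factors the abstract above lists (not the authors' implementation), the sketch below computes an eye aspect ratio (EAR) from six eye landmarks and tallies blinks and closure time. The landmark source (e.g. a facial landmark detector) and the 0.21 threshold are assumptions.

```python
# Hypothetical sketch: eye-aspect-ratio (EAR) blink and closure tracking from
# per-frame eye landmarks (the landmark detector is assumed, not shown).
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_stats(ear_series, closed_thresh=0.21, fps=30):
    """Count blinks and total eye-closure time from a per-frame EAR series."""
    blinks, closed_frames, in_blink = 0, 0, False
    for ear in ear_series:
        if ear < closed_thresh:
            closed_frames += 1
            if not in_blink:
                blinks += 1
                in_blink = True
        else:
            in_blink = False
    return blinks, closed_frames / fps   # blink count, seconds closed
```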
3

Journal, IJSREM. "Eye Disease Detection Portal." International Journal of Scientific Research in Engineering and Management 8, no. 1 (2024): 1–13. http://dx.doi.org/10.55041/ijsrem28164.

Full text
Abstract:
Eye diseases and cancer, affecting millions of people in the developing world, can lead to vision loss. Tomography, a type of X-ray technique, is used for their detection, but symptoms like pain, blurriness, and redness may go unnoticed. Limited access to expertise in metropolitan areas poses a challenge for accurate diagnosis, despite the availability of scanning centers in many towns. By utilizing Deep Learning techniques, we have revolutionized the detection of eye diseases and cancer. This project involves extensive datasets obtained from previously scanned tomography scans, which are subjected to preprocessing steps to ensure optimal quality. The trained models learn from these datasets, enabling them to accurately classify eye conditions. To facilitate ease of use, we have developed an intuitive interface that allows ophthalmologists to input eye images from scans. Leveraging the power of Deep Learning algorithms, the system swiftly analyzes the images and generates comprehensive reports indicating the presence of various eye diseases and cancer. This approach addresses the limitations of traditional diagnostic methods, as it significantly reduces the time and effort required for disease identification. By incorporating advanced Deep Learning techniques, our system achieves the highest levels of accuracy in detecting and diagnosing eye diseases, ensuring prompt and effective treatment for patients. Overall, our project showcases the potential of Deep Learning in revolutionizing the detection and diagnosis of eye diseases and cancer, ensuring that patients receive prompt and accurate treatment regardless of their geographical location.
APA, Harvard, Vancouver, ISO, and other styles
4

Mohammed, Anes J., and Dr.A.R.JayaSudha. "Driver Drowsiness Detection System." Advanced Innovations in Computer Programming Languages 5, no. 2 (2023): 8–15. https://doi.org/10.5281/zenodo.8037360.

Full text
Abstract:
Drowsy drivers cause several accidents every year, and drowsiness is a major contributor to vehicular mishaps in the modern era. According to recent data, driver fatigue is a leading cause of accidents: thousands of people lose their lives every year in vehicle accidents brought on by sleepy drivers, and drowsiness contributes to almost 30% of all accidents. A system that can detect driver fatigue and provide an alarm in time to avert an accident is essential. In this study, we provide a method for identifying sleepy drivers. In this system, the driver is constantly watched over by a camera. The driver's face and eyes are the primary targets of the image processing used in this model. The device takes a picture of the driver's face and uses eye tracking data to guess when he or she will blink. To quantify PERCLOS, we use an algorithm to follow and analyse the driver's face and eyes. A warning tone is played if the blink rate is too high.
APA, Harvard, Vancouver, ISO, and other styles
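Since the abstract above mentions quantifying PERCLOS, here is a minimal sketch of that idea under stated assumptions: the fraction of recent frames with closed eyes is tracked in a sliding window, and an alarm is raised above a chosen level. The window length and the 0.15 alarm level are placeholders, not the paper's calibration.

```python
# Hypothetical PERCLOS-style check: fraction of frames in a sliding window
# during which the eyes are judged closed.
from collections import deque

class PerclosMonitor:
    def __init__(self, window_frames=900, alarm_level=0.15):
        self.window = deque(maxlen=window_frames)   # e.g. 30 s at 30 fps
        self.alarm_level = alarm_level

    def update(self, eye_closed: bool) -> bool:
        """Feed one per-frame closed/open decision; return True to raise an alarm."""
        self.window.append(1 if eye_closed else 0)
        if len(self.window) < self.window.maxlen:
            return False                            # wait for a full window
        perclos = sum(self.window) / len(self.window)
        return perclos > self.alarm_level
```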
5

B N, Ramya, MSJ Navaneeth Charan, Vishruth S, Suman R, and Thanmayi B. "Human Eye Disease Detection System Using Deep Learning." International Journal of Research Publication and Reviews 6, no. 5 (2025): 16212–17. https://doi.org/10.55248/gengpi.6.0525.19100.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fogelton, Andrej, and Wanda Benesova. "Eye blink completeness detection." Computer Vision and Image Understanding 176-177 (November 2018): 78–85. http://dx.doi.org/10.1016/j.cviu.2018.09.006.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hilal, Pranali Pandurang. "Eye Disease Detection System." International Journal for Research in Applied Science and Engineering Technology 12, no. 3 (2024): 2610–13. http://dx.doi.org/10.22214/ijraset.2024.59188.

Full text
Abstract:
Nowadays a lot of people have eye disease problems, and to learn their disease they have to wait a long time because of the machine systems in hospitals. To resolve that issue, we have developed an eye disease detection model using machine learning technology which will help patients learn their disease as early as possible. The eye disease detection model is trained on a huge number of parameters so that it can predict eye disease quickly.
APA, Harvard, Vancouver, ISO, and other styles
8

R., Manikandan, Abilash S., Agilakalanchian C., and Tamilselvan P. "DRIVER DROWSINESS DETECTION SYSTEM USING OPEN COMPUTER VISION." International Journal of Current Research and Modern Education 3, no. 1 (2018): 410–14. https://doi.org/10.5281/zenodo.1218681.

Full text
Abstract:
In recent years driver fatigue has been one of the major causes of vehicle accidents in the world. A direct way of measuring driver fatigue is measuring the state of the driver, i.e. drowsiness. So it is very important to detect the drowsiness of the driver to save life and property. This project is aimed towards developing a prototype of a drowsiness detection system. This is a real-time system which captures images continuously, measures the state of the eye according to the specified algorithm, and gives a warning if required. Though there are several methods for measuring drowsiness, this approach is completely non-intrusive and does not affect the driver in any way, hence giving the exact condition of the driver. For detection of drowsiness, the eye closure value is considered; when the closure of the eye exceeds a certain amount, the driver is identified as sleepy. For implementing this system several OpenCV libraries are used, including Haar cascades. The entire system is implemented using a microcontroller.
APA, Harvard, Vancouver, ISO, and other styles
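The abstract above describes an OpenCV pipeline built on Haar cascades; a minimal sketch of that face-then-eyes search follows. The cascade files are the ones shipped with OpenCV, and the "no eyes detected means possibly closed" heuristic is an assumption for illustration, not the authors' exact algorithm.

```python
# Minimal OpenCV Haar-cascade sketch: detect a face, then look for eyes
# inside the face region of interest.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_visible(frame_bgr):
    """Return True if at least one eye is found inside a detected face,
    False otherwise (a run of False frames suggests closed eyes)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        if len(eye_cascade.detectMultiScale(roi)) > 0:
            return True
    return False

# Usage sketch: feed camera frames and count consecutive "no eye" frames.
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# if ok and not eyes_visible(frame):
#     pass  # increment a closed-frame counter; warn past a chosen limit
```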
9

Patel, Mitesh, Sara Lal, Diarmuid Kavanagh, and Peter Rossiter. "Fatigue Detection Using Computer Vision." International Journal of Electronics and Telecommunications 56, no. 4 (2010): 457–61. http://dx.doi.org/10.2478/v10177-010-0062-8.

Full text
Abstract:
Long duration driving is a significant cause of fatigue related accidents of cars, airplanes, trains and other means of transport. This paper presents a design of a detection system which can be used to detect fatigue in drivers. The system is based on computer vision with main focus on eye blink rate. We propose an algorithm for eye detection that is conducted through a process of extracting the face image from the video image followed by evaluating the eye region and then eventually detecting the iris of the eye using the binary image. The advantage of this system is that the algorithm works without any constraint of the background as the face is detected using a skin segmentation technique. The detection performance of this system was tested using video images which were recorded under laboratory conditions. The applicability of the system is discussed in light of fatigue detection for drivers.
APA, Harvard, Vancouver, ISO, and other styles
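A hedged sketch of the skin-segmentation idea described above (not the paper's exact algorithm): a YCrCb skin mask yields a face region, and dark pixels in the upper half of that region are kept as iris candidates in a binary image. The channel bounds and the grey-level threshold of 50 are illustrative values.

```python
# Hypothetical skin-segmentation face localisation followed by a dark-pixel
# (iris candidate) threshold in the upper face band.
import cv2
import numpy as np

def detect_face_and_iris_candidates(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # rough skin mask
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    eye_band = cv2.cvtColor(bgr[y:y + h // 2, x:x + w], cv2.COLOR_BGR2GRAY)
    _, iris_mask = cv2.threshold(eye_band, 50, 255, cv2.THRESH_BINARY_INV)
    return (x, y, w, h), iris_mask   # face box and binary iris-candidate mask
```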
10

Nadella, Bhargavi. "Eye Detection and Tracking and Eye Gaze Estimation." Asia-pacific Journal of Convergent Research Interchange 1, no. 2 (2015): 25–42. http://dx.doi.org/10.21742/apjcri.2015.06.04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Eye detection"

1

Hossain, Akdas, and Emma Miléus. "Eye Movement Event Detection for Wearable Eye Trackers." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129616.

Full text
Abstract:
Eye tracking research is a growing area and the fields where eye tracking could be used in research are large. To understand the eye tracking data, different filters are used to classify the measured eye movements. To get accurate classification, this thesis has investigated the possibility of measuring both head movements and eye movements in order to improve the estimated gaze point. The thesis investigates the difference between using head movement compensation with a velocity based filter, the I-VT filter, and using the same filter without head movement compensation. Further on, different velocity thresholds are tested to find where the performance of the filter is best. The study is made with a mobile eye tracker, where this problem exists since you have no absolute frame of reference, as opposed to when using remote eye trackers. The head movement compensation shows promising results with higher precision overall.
APA, Harvard, Vancouver, ISO, and other styles
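A minimal sketch of the velocity-based I-VT idea the thesis builds on, under stated assumptions: each gaze sample is labelled a saccade when its point-to-point velocity exceeds a threshold, otherwise a fixation. The 30 deg/s threshold and degree-valued input are illustrative, and the thesis's head-movement compensation is not included.

```python
# Hypothetical I-VT (velocity threshold) classification of gaze samples.
import numpy as np

def ivt_classify(gaze_deg, timestamps, threshold_deg_s=30.0):
    """gaze_deg: (N, 2) gaze angles in degrees; timestamps: (N,) seconds."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    disp = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)   # per-step displacement
    vel = disp / np.diff(t)                                    # degrees per second
    labels = np.where(vel > threshold_deg_s, "saccade", "fixation")
    return np.concatenate([[labels[0]], labels])               # keep length N
```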
2

Trejo, Guerrero Sandra. "Model-Based Eye Detection and Animation." Thesis, Linköping University, Department of Electrical Engineering, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-7059.

Full text
Abstract:
In this thesis we present a system to extract the eye motion from a video stream containing a human face and apply this eye motion to a virtual character. By eye motion estimation, we mean the information which describes the location of the eyes in each frame of the video stream. Applying this eye motion estimation to a virtual character, we achieve that the virtual face moves the eyes in the same way as the human face, synthesizing eye motion in a virtual character. In this study, a system capable of face tracking, eye detection and extraction, and finally iris position extraction using a video stream containing a human face has been developed. Once an image containing a human face is extracted from the current frame of the video stream, the detection and extraction of the eyes is applied. The detection and extraction of the eyes is based on edge detection. Then the iris center is determined by applying different image preprocessing and region segmentation using edge features on the extracted eye picture.

Once we have extracted the eye motion, using MPEG-4 Facial Animation, this motion is translated into Facial Animation Parameters (FAPs). Thus we can improve the quality and quantity of Facial Animation expressions that we can synthesize for a virtual character.
APA, Harvard, Vancouver, ISO, and other styles
3

Miao, Yufan. "Landmark Detection for Mobile Eye Tracking." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-301499.

Full text
Abstract:
Mobile eye tracking studies in urban environments can provide important insights into several processes of human behavior, ranging from wayfinding to human-environment interaction. The analysis of this kind of eye tracking data is based on a semi-manual or sometimes even completely manual process, consuming immense post-processing time. In this thesis, we propose an approach based on computer vision methods that allows fully automatic analysis of eye tracking data captured in an urban environment. We present our approach, as well as the results of three experiments that were conducted in order to evaluate the robustness of the system in open as well as in narrow spaces. Furthermore, we give directions towards computation time optimization in order to achieve on-the-fly analysis of the captured eye tracking data, opening the way for human-environment interaction in real time.
APA, Harvard, Vancouver, ISO, and other styles
4

Bandara, Indrachapa Buwaneka. "Driver drowsiness detection based on eye blink." Thesis, Bucks New University, 2009. http://bucks.collections.crest.ac.uk/9782/.

Full text
Abstract:
Accidents caused by drivers’ drowsiness behind the steering wheel have a high fatality rate because of the discernible decline in the driver’s perception, recognition, and vehicle control abilities while sleepy. Preventing such accidents caused by drowsiness is highly desirable but requires techniques for continuously detecting, estimating, and predicting the level of alertness of drivers and delivering effective feedback to maintain maximum performance. The main objective of this research study is to develop a reliable metric and system for the detection of driver impairment due to drowsiness. More specifically, the goal of the research is to develop the best possible metric for detection of drowsiness, based on measures that can be detected during driving. This thesis describes the new studies that have been performed to develop, validate, and refine such a metric. A computer vision system is used to monitor the driver’s physiological eye blink behaviour. The novel application of green LED illumination overcame one of the major difficulties of the eye sclera segmentation problem due to illumination changes. Experimentation in a driving simulator revealed various visual cues, typically characterizing the level of alertness of the driver, and these cues were combined to infer the drowsiness level of the driver. Analysis of the data revealed that eye blink duration and eye blink frequency were important parameters in detecting drowsiness. From these measured parameters, a continuous measure of drowsiness, the New Drowsiness Scale (NDS), is derived. The NDS ranges from one to ten, where a decrease in NDS corresponds to an increase in drowsiness. Based upon previous research into the effects of drowsiness on driving performance, measures relating to the lateral placement of the vehicle within the lane are of particular interest in this study. Standard deviations of average deviations were measured continuously throughout the study. The NDS scale, based upon the gradient of the linear regression of the standard deviation of average blink frequency and duration, is demonstrated as a reliable method for identifying the development of drowsiness in drivers. Deterioration of driver performance (reflected by increasingly severe lane deviation) is correlated with a decreasing NDS score. The final experimental results show the validity of the proposed model for driver drowsiness detection.
APA, Harvard, Vancouver, ISO, and other styles
5

Yi, Fei. "Robust eye coding mechanisms in humans during face detection." Thesis, University of Glasgow, 2018. http://theses.gla.ac.uk/31011/.

Full text
Abstract:
We can detect faces more rapidly and efficiently compared to non-face object categories (Bell et al., 2008; Crouzet, 2011), even when only partial information is visible (Tang et al., 2014). Face inversion impairs our ability to recognise faces. The key to understanding this effect is to determine what special face features are processed and how coding of these features is affected by face inversion. Previous studies from our lab showed coding of the contralateral eye in an upright face detection task, which was maximal around the N170 recorded at posterior-lateral electrodes (Ince et al., 2016b; Rousselet et al., 2014). In chapter 2, we used the Bubble technique to determine whether brain responses also reflect the processing of eyes in inverted faces, and how they do so, in a simple face detection task. The results suggest that in upright and inverted faces alike the N170 reflects coding of the contralateral eye, but face inversion quantitatively weakens the early processing of the contralateral eye, specifically in the transition between the P1 and the N170, and delays this local feature coding. Group and individual results support this claim. First, regardless of face orientation, the N170 coded the eyes contralateral to the posterior-lateral electrodes, which was the case in all participants. Second, face inversion delayed coding of contralateral eye information. Third, time course analysis of contralateral eye coding revealed weaker contralateral eye coding for inverted compared to upright faces in the transition between the P1 and the N170. Fourth, single-trial EEG responses were driven by the corresponding single-trial visibility of the left eye. The N170 amplitude was larger and its latency shorter as the left eye visibility increased in upright and upside-down faces for the majority of participants. However, for images of faces, eye position and face orientation were confounded, i.e., the upper visual field usually contains the eyes in upright faces, whereas in upside-down faces the lower visual field contains the eyes. Thus, the impaired processing of the contralateral eye under inversion might simply be attributed to face inversion moving the eyes away from the upper visual field. In chapter 3, we manipulated the vertical location of the images so that the eyes were presented in the upper, centre or lower visual field relative to the fixation cross (the centre of the screen); in this way, in upright and inverted faces the eyes could shift from the upper to the lower visual field. We used a similar technique as in chapter 2 during a face detection task. First, we found that regardless of face orientation and position, the modulations of ERPs recorded at the posterior-lateral electrodes were associated with the contralateral eye. This suggests that coding of the contralateral eye underlies the N170. Second, face inversion delayed processing of the contralateral eye when the eyes were presented in the same position, Above, Below or at the Centre of the screen. Also, in the early N170, most of our participants showed contralateral eye sensitivity weakened by inversion of faces whose eyes appeared in the same position. The results suggest that face inversion related changes in processing of the contralateral eye cannot simply be considered the result of differences in eye position. The scan-paths traced by human eye movements are similar to the low-level computational saliency maps produced by contrast based computer vision algorithms (Itti et al., 1998).
This evidence leads us to the question of whether the coding of the eyes is driven by the saliency of the eye regions. In chapter 4, we aim to answer this question. We introduced two altered versions of the original faces, normalised and reversed contrast faces, removing eye saliency (Simoncelli and Olshausen, 2001) and reversing face contrast polarity (Gilad et al., 2009), in a simple face detection task. In each face condition, we observed that ERPs recorded at contralateral posterior-lateral electrodes were sensitive to the eye regions. Both contrast manipulations delayed and reduced eye sensitivity during the rising part of the N170, roughly 120–160 ms post-stimulus onset. Also, there were no such differences between the two contrast-manipulated faces. These results were observed in the majority of participants. They suggest that the processing of the contralateral eye is due partially to low-level factors and may reflect feature processing in the early N170.
APA, Harvard, Vancouver, ISO, and other styles
6

Anderson, Travis M. "Motion detection algorithm based on the common housefly eye." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400965531&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Samadzadegan, Sepideh. "Automatic and Adaptive Red Eye Detection and Removal : Investigation and Implementation." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77977.

Full text
Abstract:
Redeye artifact is the most prevalent problem in flash photography, especially when using compact cameras with a built-in flash, and it bothers both amateur and professional photographers. Hence, removing the affected redeye pixels has become an important skill. This thesis work presents a completely automatic approach for redeye detection and removal, consisting of two modules: detection and correction of the redeye pixels in an individual eye, and detection of two red eyes in an individual face. This approach combines some of the previous attempts in the area of redeye removal with some minor and major modifications and novel ideas. The detection procedure is based on redness histogram analysis followed by two adaptive methods, general and specific approaches, in order to find a threshold point. The correction procedure is a four step algorithm which does not rely solely on the detected redeye pixels. It also applies some more pixel checking, such as enlarging the search area and neighborhood checking, to improve the reliability of the whole procedure by reducing the risk of image degradation. The second module is based on a skin-likelihood detection algorithm. A completely novel approach utilizing the Golden Ratio to segment the face area into specific regions is implemented in the second module. The proposed method is applied to more than 40 sample images; considering some requirements and constraints, the achieved results are satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
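A hedged sketch of the redness-thresholding idea summarised above (not the thesis's adaptive general/specific method): score each pixel in an eye region by how much red exceeds the green/blue average, threshold that score, and desaturate the flagged pixels. The redness formula and the threshold of 50 are illustrative.

```python
# Hypothetical red-eye correction: threshold a simple redness score and
# replace the red channel of flagged pixels with the green/blue average.
import numpy as np

def correct_redeye(eye_bgr, redness_thresh=50):
    eye = eye_bgr.astype(np.int32)
    b, g, r = eye[..., 0], eye[..., 1], eye[..., 2]
    redness = r - (g + b) // 2                 # how strongly red dominates
    mask = redness > redness_thresh
    fixed = eye_bgr.copy()
    fixed[..., 2][mask] = ((g + b) // 2)[mask].astype(np.uint8)
    return fixed, mask                          # corrected patch and red-eye mask
```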
8

Vidal, Diego Armando Benavides. "A Kernel matching approach for eye detection in surveillance images." Repositório Institucional da UnB, 2016. http://repositorio.unb.br/handle/10482/24112.

Full text
Abstract:
Eye detection is an open research problem to be solved efficiently by face detection and human surveillance systems. Features such as accuracy and computational cost are to be considered for a successful approach. We describe an integrated approach that takes the ROI output by a Viola and Jones detector, constructs HOG features on it, and learns a special function to map these features to a higher dimensional space where detection achieves better accuracy. This mapping follows the efficient kernel matching approach, which was shown to be possible but had not been applied to this problem before. A linear SVM is then used as the classifier for eye detection using those mapped features. Extensive experiments are shown with different databases, and the proposed method achieves higher accuracy with low added computational cost compared to the Viola and Jones detector. The approach can also be extended to deal with other appearance models.
APA, Harvard, Vancouver, ISO, and other styles
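The pipeline shape described above (HOG features, an explicit kernel feature map, then a linear SVM) can be sketched with off-the-shelf tools; here scikit-learn's Nystroem approximation stands in for the dissertation's kernel matching function, and X_train/y_train (labelled eye and non-eye patches) are assumed to exist.

```python
# Hypothetical HOG + approximate kernel map + linear SVM pipeline.
import numpy as np
from skimage.feature import hog
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def hog_features(rois):
    """rois: iterable of equally sized grayscale patches (e.g. 32x32)."""
    return np.array([hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for p in rois])

# model = make_pipeline(Nystroem(kernel="rbf", n_components=300), LinearSVC())
# model.fit(hog_features(X_train), y_train)      # y_train: 1 = eye, 0 = not eye
# preds = model.predict(hog_features(X_test))
```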
9

Ignat, Simon, and Filip Mattsson. "Eye Blink Detection and Brain-Computer Interface for Health Care Applications." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-200571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Tesárek, Viktor. "Detekce mrkání a rozpoznávání podle mrkání očí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217560.

Full text
Abstract:
This master thesis deals with the issues of eye blink recognition from video. The main task is to analyse algorithms dealing with the detection of persons and to make a program that can recognize eye blinks. Analysis of these algorithms and their problems is in the first part of this thesis. In the second part, the design and properties of my program are described. The realization of the program is based on the method of move detection using an accumulated difference frame, which helps to identify the eye areas. The eye blink detection algorithm tests the match between a thresholded pattern of the eye area taken from the actual frame and the frame before. The decision whether an eye blink happened or not is based on the level of the match. The algorithm is designed for watching a sitting man who is slightly moving. The background can be a little dynamic as well. An average quality video with a moderator and a dynamic background was used as the test subject.
APA, Harvard, Vancouver, ISO, and other styles
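A rough sketch of the accumulated-difference-frame and pattern-matching ideas described above, with illustrative thresholds (not the thesis's implementation): frame differences are accumulated to highlight the moving eye areas, and a blink is flagged when the Otsu-thresholded eye patch from the current frame disagrees strongly with the previous one.

```python
# Hypothetical accumulated difference frame and blink test on grayscale frames.
import cv2
import numpy as np

def accumulate_motion(gray_frames):
    acc = np.zeros_like(gray_frames[0], dtype=np.float32)
    for prev, curr in zip(gray_frames, gray_frames[1:]):
        acc += cv2.absdiff(curr, prev).astype(np.float32)
    return acc   # bright regions correspond to frequently moving areas (eyes, eyelids)

def blink_happened(prev_eye_patch, curr_eye_patch, mismatch_thresh=0.25):
    _, a = cv2.threshold(prev_eye_patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, b = cv2.threshold(curr_eye_patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mismatch = np.count_nonzero(a != b) / a.size
    return mismatch > mismatch_thresh
```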
More sources

Books on the topic "Eye detection"

1

Douthwaite, W. A., and Mark A. Hurst, eds. Cataract: Detection, measurement in optometric practice, and management. Butterworth Heinemann, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sommer, Alfred, and World Health Organization, eds. Vitamin A deficiency and its consequences: A field guide to detection and control. 3rd ed. World Health Organization, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Muir, Frank. Eye for an eye. Soho Crime, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Randisi, Robert J., and Private Eye Writers of America, eds. The Eyes have it: The first Private Eye Writers of America anthology. Mysterious Press, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lee, Vincent C. E. Eye mouse system: Mouse control technique for detecting and tracking of eyes in visual images. UMIST, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Clark, Douglas. Jewelled eye. Perennial Library, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Brockmole, James R., and Michi Matsukura. Eye movements and change detection. Oxford University Press, 2011. http://dx.doi.org/10.1093/oxfordhb/9780199539789.013.0031.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Rae, Robert Andrew. Detection of eyelid position in eye-tracking systems. 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Huh, Jinny. Arresting Eye: Race and the Anxiety of Detection. University of Virginia Press, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

The arresting eye: Race and the anxiety of detection. 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Eye detection"

1

Rathgeb, Christian, Andreas Uhl, and Peter Wild. "Eye Detection." In Advances in Information Security. Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-5571-4_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bessonova, Yulia V., and Alexander A. Oboznov. "Eye Movements and Lie Detection." In Intelligent Human Systems Integration. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-73888-8_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Park, Kang Ryoung, Jeong Jun Lee, and Jaihie Kim. "Facial and Eye Gaze Detection." In Biologically Motivated Computer Vision. Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-36181-2_37.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yan, Hongjin, and Yonglin Zhang. "Medical Eye Image Detection System." In Communications in Computer and Information Science. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23223-7_80.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shidnekoppa, Rekha A., Manjunath Kammar, and K. S. Shreedhar. "Liveness Detection Based on Eye Flicker." In Communications in Computer and Information Science. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-9059-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chen, Shuo, and Chengjun Liu. "Various Discriminatory Features for Eye Detection." In Intelligent Systems Reference Library. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-28457-1_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Meng, Chunning, and Taining Zhang. "Human Eye Detection via Sparse Representation." In Proceedings of the 2015 International Conference on Communications, Signal Processing, and Systems. Springer Berlin Heidelberg, 2016. http://dx.doi.org/10.1007/978-3-662-49831-6_71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pradhan, Ashis, Jhuma Sunuwar, Sabna Sharma, and Kunal Agarwal. "Fatigue Detection Based on Eye Tracking." In Advances in Intelligent Systems and Computing. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8237-5_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Puri, Karuna, and Preeti Mulay. "Hawk Eye: A Plagiarism Detection System." In Advances in Intelligent Systems and Computing. Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2517-1_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Telse, Vrushabh, Rahul, and Joydeep Sengupta. "Eye Gaze Tracking and Blinking Detection." In Algorithms for Intelligent Systems. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4862-2_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Eye detection"

1

Ji, Fan, Jun Ya Wang, Jun Chang, and Yun Qiang Zhang. "Research on multilayer lobster eye detection technology." In Tenth Symposium on Novel Optoelectronic Detection Technology and Applications, edited by Chen Ping. SPIE, 2025. https://doi.org/10.1117/12.3056878.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Vengurlekar, Mrunal S., M. Toufeeq Nadaf, Nelton N. Fernandes, and K. M. Chaman Kumar. "Conjunctivitis Eye Detection using Deep Learning." In 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC). IEEE, 2024. http://dx.doi.org/10.1109/icesc60852.2024.10689984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Shijie, Zhiqi Zhang, and Juan Liu. "Holographic Maxwellian near-eye display with continuous eyebox replication." In Digital Holography and Three-Dimensional Imaging. Optica Publishing Group, 2024. http://dx.doi.org/10.1364/dh.2024.tu1b.2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Nasra, Parul, Sheifali Gupta, and Gotte Ranjith Kumar. "Infrared Eye Image Classification: A Deep Learning Approach for Enhanced Eye State Detection." In 2025 3rd International Conference on Advancement in Computation & Computer Technologies (InCACCT). IEEE, 2025. https://doi.org/10.1109/incacct65424.2025.11011466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rodrigues, Fredson Costa, Anselmo C. de Paiva, João Dalysson S. de Almeida, Geraldo Braz Júnior, Aristófanes Corrêa, and André Castelo Branco Soares. "Computational Methodology for Iris Segmentation and Detection in Images from the Eyes Region Using Convolutional Neural Networks." In Workshop de Visão Computacional. Sociedade Brasileira de Computação - SBC, 2021. http://dx.doi.org/10.5753/wvc.2021.18910.

Full text
Abstract:
Eye tracking is an application of computer vision responsible for detecting the iris and pupil in the eye region. This tracking contributes to research that assesses cognitive aspects through pupillary reactions identified in these detected regions. Another application of this task is iris recognition in digital biometrics. This study aims to carry out the verification and detection of the iris in images of the eye region occluded by eyelashes, eyelids and specular reflections, using a deep neural network called At-Unet in this article. In order to assist eye tracking, this method achieves a coefficient of 95.32% when segmenting the iris of the eyes, indicating the efficiency of this methodology.
APA, Harvard, Vancouver, ISO, and other styles
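The segmentation figure quoted above is an overlap score between predicted and ground-truth masks; as a small illustration (assuming a Dice-style coefficient, since the network itself is out of scope here), such a score can be computed as follows.

```python
# Hypothetical Dice coefficient between a predicted and a ground-truth
# binary iris mask (assumed to be what the quoted percentage measures).
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```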
6

Munteanu, Mihai, Alina Magda, Radu Ciorap, Corneliu Rusu, and Luige Vladareanu. "EOG SIGNAL PROCESSING ALGORITHM USED IN EYE MOVEMENT DETECTION." In eLSE 2017. Carol I National Defence University Publishing House, 2017. http://dx.doi.org/10.12753/2066-026x-17-255.

Full text
Abstract:
The EOG signal is based on the potential difference between the cornea and retina (also known as the corneo-retinal potential) and can be measured by using electrodes placed around the eyes. The amplitude of the EOG signal varies between 50 and 3500 µV. The eye can be seen as a dipole with a positive pole in the retina and a negative one in the cornea. Thus, an electric potential field is created that changes proportionally with the rotation of the eye. The movements can be assessed from a single eye, but because for a healthy human being the eye movements are coupled, in this paper the signal obtained from both eyes is used. Furthermore, this method allows assessing both vertical and horizontal signals simultaneously. One of the most complex algorithms used to detect steep slopes and peaks within medical signals is the Pan-Tompkins algorithm, but it is especially used for ECG signals. This is the reason why the previous studies focused on the detection of abrupt discontinuities in the EOG signal from the wavelet transform perspective. In this manner, the points of discontinuity determined by eye movement will generate high wavelet coefficients, regardless of scale. Starting from previous results regarding different techniques used for eye movement investigation and the electrooculogram (EOG) signal, the paper presents a robust algorithm that is capable of performing real-time eye movement detection. This algorithm was implemented in LabVIEW, based on virtual instruments, and it was successfully tested in the Biomedical Engineering laboratory of the Technical University of Cluj-Napoca.
APA, Harvard, Vancouver, ISO, and other styles
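As a hedged illustration of the slope-based detection the abstract discusses (not the paper's LabVIEW/wavelet implementation), the sketch below flags abrupt EOG discontinuities by thresholding the first derivative of the horizontal and vertical channels; the sampling-rate argument and the 100 µV/s threshold are placeholders.

```python
# Hypothetical derivative-threshold detector for abrupt EOG changes.
import numpy as np

def detect_eye_movements(eog_uV, fs_hz, slope_thresh_uV_s=100.0):
    """eog_uV: (N, 2) array with horizontal and vertical EOG channels."""
    eog = np.asarray(eog_uV, dtype=float)
    slope = np.abs(np.diff(eog, axis=0)) * fs_hz        # microvolts per second
    events = np.any(slope > slope_thresh_uV_s, axis=1)  # either channel exceeds threshold
    return np.flatnonzero(events)                        # sample indices of movements
```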
7

Choi, Inho, Seungchul Han, and Daijin Kim. "Eye Detection and Eye Blink Detection Using AdaBoost Learning and Grouping." In 2011 20th International Conference on Computer Communications and Networks - ICCCN 2011. IEEE, 2011. http://dx.doi.org/10.1109/icccn.2011.6005896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Liting, Xiaoqing Ding, Chi Fang, Changsong Liu, and Kongqiao Wang. "Eye blink detection based on eye contour extraction." In IS&T/SPIE Electronic Imaging, edited by Jaakko T. Astola, Karen O. Egiazarian, Nasser M. Nasrabadi, and Syed A. Rizvi. SPIE, 2009. http://dx.doi.org/10.1117/12.804916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Abdulin, Evgeniy, and Oleg Komogortsev. "User Eye Fatigue Detection via Eye Movement Behavior." In CHI '15: CHI Conference on Human Factors in Computing Systems. ACM, 2015. http://dx.doi.org/10.1145/2702613.2732812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bin Li and Jian Wang. "Human eye characteristics in IR-based eye detection." In 2013 IEEE ICCE-China Workshop (ICCE-China Workshop). IEEE, 2013. http://dx.doi.org/10.1109/icce-china.2013.6780866.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Eye detection"

1

Scassellati, Brian. Eye Finding via Face Detection for a Foveated, Active Vision System. Defense Technical Information Center, 1998. http://dx.doi.org/10.21236/ada455661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Okuda, Koji, Michimasa Itoh, Bunji Inagaki, Shin Yamamoto, and Satoshi Mori. Detection of Eye Blink and Gaze Direction to Estimate Driver's Condition. SAE International, 2005. http://dx.doi.org/10.4271/2005-08-0045.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Ting Wei, Wei-Ting Luo, Yu-Kang Tu, Yu-Bai Chou, and Yu-Te Wu. Diagnostic Accuracy of Eye Art for Fundus-Based Detection of Diabetic Retinopathy: A Systematic Review and Meta-analysis. INPLASY - International Platform of Registered Systematic Review and Meta-analysis Protocols, 2025. https://doi.org/10.37766/inplasy2025.5.0079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Petrich, Jacob W. Development of Methods for the Real-Time and Rapid Identification and Detection of TSE in Living Animals Using Fluorescence Spectroscopy of the Eye. Defense Technical Information Center, 2006. http://dx.doi.org/10.21236/ada484430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Petrich, Jacob W. Development of Methods for the Real-Time and Rapid Identification and Detection of TSE in Living Animals Using Fluorescence Spectroscopy of the Eye. Defense Technical Information Center, 2004. http://dx.doi.org/10.21236/ada427875.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

McIntire, Lindsey, Chuck Goodyear, Justin Nelson, R. A. McKinley, and John McIntire. Eye Metrics: An Alternative Vigilance Detector. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada585834.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

McIntire, Lindsey, Chuck Goodyear, Nathaniel Bridges, et al. Eye-Tracking: An Alternative Vigilance Detector. Defense Technical Information Center, 2011. http://dx.doi.org/10.21236/ada559743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

McIntire, Lindsey, R. A. McKinley, John McIntire, Chuck Goodyear, and Justin Nelson. Eye Metrics: An Alternative Vigilance Detector for Military Cyber Operators. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada590398.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Smit, Amelia, Kate Dunlop, Nehal Singh, Diona Damian, Kylie Vuong, and Anne Cust. Primary prevention of skin cancer in primary care settings. The Sax Institute, 2022. http://dx.doi.org/10.57022/qpsm1481.

Full text
Abstract:
Overview Skin cancer prevention is a component of the new Cancer Plan 2022–27, which guides the work of the Cancer Institute NSW. To lessen the impact of skin cancer on the community, the Cancer Institute NSW works closely with the NSW Skin Cancer Prevention Advisory Committee, comprising governmental and non-governmental organisation representatives, to develop and implement the NSW Skin Cancer Prevention Strategy. Primary Health Networks and primary care providers are seen as important stakeholders in this work. To guide improvements in skin cancer prevention and inform the development of the next NSW Skin Cancer Prevention Strategy, an up-to-date review of the evidence on the effectiveness and feasibility of skin cancer prevention activities in primary care is required. A research team led by the Daffodil Centre, a joint venture between the University of Sydney and Cancer Council NSW, was contracted to undertake an Evidence Check review to address the questions below. Evidence Check questions This Evidence Check aimed to address the following questions: Question 1: What skin cancer primary prevention activities can be effectively administered in primary care settings? As part of this, identify the key components of such messages, strategies, programs or initiatives that have been effectively implemented and their feasibility in the NSW/Australian context. Question 2: What are the main barriers and enablers for primary care providers in delivering skin cancer primary prevention activities within their setting? Summary of methods The research team conducted a detailed analysis of the published and grey literature, based on a comprehensive search. We developed the search strategy in consultation with a medical librarian at the University of Sydney and the Cancer Institute NSW team, and implemented it across the databases Embase, MEDLINE, PsycInfo, Scopus, Cochrane Central and CINAHL. Results were exported and uploaded to Covidence for screening and further selection. The search strategy was designed according to the SPIDER tool for Qualitative and Mixed-Methods Evidence Synthesis, which is a systematic strategy for searching qualitative and mixed-methods research studies. The SPIDER tool facilitates rigour in research by defining key elements of non-quantitative research questions. We included peer-reviewed and grey literature that included skin cancer primary prevention strategies/ interventions/ techniques/ programs within primary care settings, e.g. involving general practitioners and primary care nurses. The literature was limited to publications since 2014, and for studies or programs conducted in Australia, the UK, New Zealand, Canada, Ireland, Western Europe and Scandinavia. We also included relevant systematic reviews and evidence syntheses based on a range of international evidence where also relevant to the Australian context. To address Question 1, about the effectiveness of skin cancer prevention activities in primary care settings, we summarised findings from the Evidence Check according to different skin cancer prevention activities. To address Question 2, about the barriers and enablers of skin cancer prevention activities in primary care settings, we summarised findings according to the Consolidated Framework for Implementation Research (CFIR). 
The CFIR is a framework for identifying important implementation considerations for novel interventions in healthcare settings and provides a practical guide for systematically assessing potential barriers and facilitators in preparation for implementing a new activity or program. We assessed study quality using the National Health and Medical Research Council (NHMRC) levels of evidence. Key findings We identified 25 peer-reviewed journal articles that met the eligibility criteria and we included these in the Evidence Check. Eight of the studies were conducted in Australia, six in the UK, and the others elsewhere (mainly other European countries). In addition, the grey literature search identified four relevant guidelines, 12 education/training resources, two Cancer Care pathways, two position statements, three reports and five other resources that we included in the Evidence Check. Question 1 (related to effectiveness) We categorised the studies into different types of skin cancer prevention activities: behavioural counselling (n=3); risk assessment and delivering risk-tailored information (n=10); new technologies for early detection and accompanying prevention advice (n=4); and education and training programs for general practitioners (GPs) and primary care nurses regarding skin cancer prevention (n=3). There was good evidence that behavioural counselling interventions can result in a small improvement in sun protection behaviours among adults with fair skin types (defined as ivory or pale skin, light hair and eye colour, freckles, or those who sunburn easily), which would include the majority of Australians. It was found that clinicians play an important role in counselling patients about sun-protective behaviours, and recommended tailoring messages to the age and demographics of target groups (e.g. high-risk groups) to have maximal influence on behaviours. Several web-based melanoma risk prediction tools are now available in Australia, mainly designed for health professionals to identify patients’ risk of a new or subsequent primary melanoma and guide discussions with patients about primary prevention and early detection. Intervention studies have demonstrated that use of these melanoma risk prediction tools is feasible and acceptable to participants in primary care settings, and there is some evidence, including from Australian studies, that using these risk prediction tools to tailor primary prevention and early detection messages can improve sun-related behaviours. Some studies examined novel technologies, such as apps, to support early detection through skin examinations, including a very limited focus on the provision of preventive advice. These novel technologies are still largely in the research domain rather than recommended for routine use but provide a potential future opportunity to incorporate more primary prevention tailored advice. There are a number of online short courses available for primary healthcare professionals specifically focusing on skin cancer prevention. Most education and training programs for GPs and primary care nurses in the field of skin cancer focus on treatment and early detection, though some programs have specifically incorporated primary prevention education and training. A notable example is the Dermoscopy for Victorian General Practice Program, in which 93% of participating GPs reported that they had increased preventive information provided to high-risk patients and during skin examinations. 
Question 2 (related to barriers and enablers) Key enablers of performing skin cancer prevention activities in primary care settings included: • Easy access and availability of guidelines and point-of-care tools and resources • A fit with existing workflows and systems, so there is minimal disruption to flow of care • Easy-to-understand patient information • Using the waiting room for collection of risk assessment information on an electronic device such as an iPad/tablet where possible • Pairing with early detection activities • Sharing of successful programs across jurisdictions. Key barriers to performing skin cancer prevention activities in primary care settings included: • Unclear requirements and lack of confidence (self-efficacy) about prevention counselling • Limited availability of GP services especially in regional and remote areas • Competing demands, low priority, lack of time • Lack of incentives.
APA, Harvard, Vancouver, ISO, and other styles
10

Kulhandjian, Hovannes. Detecting Driver Drowsiness with Multi-Sensor Data Fusion Combined with Machine Learning. Mineta Transportation Institute, 2021. http://dx.doi.org/10.31979/mti.2021.2015.

Full text
Abstract:
In this research work, we develop a drowsy driver detection system through the application of visual and radar sensors combined with machine learning. The system concept was derived from the desire to achieve a high level of driver safety through the prevention of potentially fatal accidents involving drowsy drivers. According to the National Highway Traffic Safety Administration, drowsy driving resulted in 50,000 injuries across 91,000 police-reported accidents, and a death toll of nearly 800 in 2017. The objective of this research work is to provide a working prototype of Advanced Driver Assistance Systems that can be installed in present-day vehicles. By integrating two modes of visual surveillance to examine a biometric expression of drowsiness, a camera and a micro-Doppler radar sensor, our system offers high reliability, with over 95% accuracy in its drowsy driver detection capabilities. The camera is used to monitor the driver’s eyes, mouth and head movement and recognize when a discrepancy occurs in the driver’s blinking pattern, yawning incidence, and/or head drop, thereby signaling that the driver may be experiencing fatigue or drowsiness. The micro-Doppler sensor allows the driver’s head movement to be captured both during the day and at night. Through data fusion and deep learning, the ability to quickly analyze and classify a driver’s behavior under various conditions such as lighting, pose-variation, and facial expression in a real-time monitoring system is achieved.
APA, Harvard, Vancouver, ISO, and other styles
