Journal articles on the topic 'EEG, eye-tracking, Human Computer Interface'

1

Dekihara, Hiroyuki, and Tatsuya Iwaki. "Development of human computer interface based on eye movement using Emotiv EEG headset." International Journal of Psychophysiology 94, no. 2 (November 2014): 188. http://dx.doi.org/10.1016/j.ijpsycho.2014.08.787.

2

Sotnikov, P. I. "Feature Construction Methods for the Electroencephalogram Signal Analysis in Hybrid “Eye-Brain-Computer” Interface." Mathematics and Mathematical Modeling, no. 2 (May 21, 2018): 33–52. http://dx.doi.org/10.24108/mathm.0218.0000118.

Abstract:
The hybrid “eye-brain-computer” interface is a new approach to human-machine interaction. It allows the user to select an object of interest on a screen by tracking the user’s gaze direction. At the same time, the user’s intent to give a command is determined by registering and decoding brain activity. The interface operation is based on the fact that control gaze fixations can be distinguished from spontaneous fixations using the electroencephalogram (EEG) signal. The article discusses the recognition of EEG patterns that correspond to spontaneous and control gaze fixations. To improve the classification accuracy, we suggest using relatively new feature construction methods for time series analysis. These methods include a selection of optimal frequency bands of the multivariate EEG signal and a modified method of shapelets. The first method constructs the optimal feature space using prior information on the difference in frequency components of the multivariate signal for different classes. The second method uses a genetic algorithm to select fragments of the multivariate time series that reflect, as much as possible, the properties of one or more classes of such time series. Calculating the distances between a time series and a set of k top-best shapelets then provides its feature description. The article consists of five sections. The first provides a mathematical formulation of the multivariate time-series classification problem. The second section gives a formal description of the proposed feature construction methods. The third section describes the test data, which include EEG records from six users of the hybrid “eye-brain-computer” interface. In the fourth section, we evaluate the efficiency of the proposed methods in comparison with other known feature extraction techniques, which include: 1) calculation of average EEG amplitude values in overlapping windows; 2) estimation of power spectral density in specified frequency bands; 3) selection of the most informative features using a genetic algorithm. In the fifth section, we conduct a statistical analysis of the results obtained. It is shown that the feature construction method based on the selection of optimal frequency bands of the EEG signal significantly outperforms the other techniques considered and opens up the possibility of reducing the number of false positives of the hybrid interface.
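To make the shapelet idea above concrete: each shapelet maps a time series to one feature, its minimum sliding-window distance to that shapelet. A minimal single-channel sketch in Python (the shapelets, lengths, and data here are hypothetical, not taken from the paper):

```python
import numpy as np

def shapelet_features(series, shapelets):
    """For each shapelet, the feature is the minimum Euclidean distance
    between the shapelet and any sliding window of the series."""
    feats = []
    for s in shapelets:
        windows = np.lib.stride_tricks.sliding_window_view(series, len(s))
        feats.append(np.linalg.norm(windows - s, axis=1).min())
    return np.array(feats)

# Hypothetical example: one shapelet cut from the signal, one random
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.standard_normal(500)
print(shapelet_features(x, [x[40:90].copy(), rng.standard_normal(50)]))
# first distance is near zero (an exact match exists), second is larger
```

Classification would then proceed on these distance features with any standard classifier.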
3

Narejo, Sanam, Eros Pasero, and Farzana Kulsoom. "EEG Based Eye State Classification using Deep Belief Network and Stacked AutoEncoder." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 3131. http://dx.doi.org/10.11591/ijece.v6i6.12967.

Abstract:
A Brain-Computer Interface (BCI) provides an alternative communication channel between the human brain and a computer. Electroencephalogram (EEG) signals are acquired and processed, and machine learning algorithms are then applied to extract useful information. During EEG acquisition, artifacts are induced by involuntary eye movements or eye blinks, casting adverse effects on system performance. The aim of this research is to predict eye states from EEG signals using deep learning architectures and to present improved classifier models. Recent studies show that deep neural networks are among the state-of-the-art machine learning approaches. Therefore, the current work presents implementations of a Deep Belief Network (DBN) and Stacked AutoEncoders (SAE) as classifiers with encouraging performance. One of the designed SAE models outperforms the DBN and the models presented in existing research, with an error rate of 1.1% on the test set, bearing an accuracy of 98.9%. The findings of this study may contribute towards state-of-the-art performance on the problem of EEG-based eye state classification.
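As a sketch of what such a stacked-autoencoder classifier looks like (a minimal illustration, not the authors' model; the layer sizes, training settings, and the 14-feature input standing in for EEG samples are all assumptions):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def pretrain_layer(inputs, n_hidden, epochs=10):
    """Greedily train one autoencoder layer; return codes and the encoder."""
    dim = inputs.shape[1]
    inp = keras.Input(shape=(dim,))
    enc = layers.Dense(n_hidden, activation="sigmoid")(inp)
    dec = layers.Dense(dim, activation="linear")(enc)
    ae = keras.Model(inp, dec)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(inputs, inputs, epochs=epochs, batch_size=64, verbose=0)
    encoder = keras.Model(inp, enc)
    return encoder.predict(inputs, verbose=0), encoder

# Toy data standing in for EEG feature vectors; labels: eye open/closed
X = np.random.rand(1024, 14).astype("float32")
y = np.random.randint(0, 2, size=1024)

h1, enc1 = pretrain_layer(X, 10)   # first hidden layer
h2, enc2 = pretrain_layer(h1, 6)   # second hidden layer

# Stack the pretrained encoders, add a softmax head, and fine-tune
clf = keras.Sequential([enc1, enc2, layers.Dense(2, activation="softmax")])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(X, y, epochs=5, batch_size=64, verbose=0)
```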
4

Antoniou, Evangelos, Pavlos Bozios, Vasileios Christou, Katerina D. Tzimourta, Konstantinos Kalafatakis, Markos G. Tsipouras, Nikolaos Giannakeas, and Alexandros T. Tzallas. "EEG-Based Eye Movement Recognition Using Brain–Computer Interface and Random Forests." Sensors 21, no. 7 (March 27, 2021): 2339. http://dx.doi.org/10.3390/s21072339.

Abstract:
Discrimination of eye movements and visual states is a flourishing field of research, and there is an urgent need for non-manual EEG-based wheelchair control and navigation systems. This paper presents a novel system that utilizes a brain–computer interface (BCI) to capture electroencephalographic (EEG) signals from human subjects during eye movements and subsequently classify them into six categories by applying a random forests (RF) classification algorithm. RF is an ensemble learning method that constructs a series of decision trees where each tree gives a class prediction, and the class with the highest number of predictions becomes the model’s prediction. The categories of the proposed random forests brain–computer interface (RF-BCI) are defined according to the position of the subject’s eyes: open, closed, left, right, up, and down. The purpose of RF-BCI is to be utilized as an EEG-based control system for driving an electromechanical wheelchair (rehabilitation device). The proposed approach has been tested using a dataset containing 219 records taken from 10 different patients. The BCI used the EPOC Flex head cap system, which includes 32 saline felt sensors for capturing the subjects’ EEG signals. Each sensor captured four different brain waves (delta, theta, alpha, and beta) per second. These signals were split into 4-second windows, resulting in 512 samples per record, and the band energy was extracted for each EEG rhythm. The proposed system was compared with naïve Bayes, Bayes Network, k-nearest neighbors (K-NN), multilayer perceptron (MLP), support vector machine (SVM), J48-C4.5 decision tree, and Bagging classification algorithms. The experimental results showed that the RF algorithm outperformed the other approaches, and high levels of accuracy (85.39%) were obtained for 6-class classification. This method exploits the high spatial information acquired from the Emotiv EPOC Flex wearable EEG recording device and successfully examines the potential of this device to be used for BCI wheelchair technology.
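As a rough illustration of the band-energy-plus-random-forest pipeline summarized above (not the authors' code; the band edges, sampling rate, and synthetic data are placeholders):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.ensemble import RandomForestClassifier

FS = 128  # assumed sampling rate, Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_energy_features(window):
    """window: (n_channels, n_samples) -> energy per channel and band."""
    feats = []
    for lo, hi in BANDS.values():
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        feats.append(np.sum(sosfiltfilt(sos, window, axis=-1) ** 2, axis=-1))
    return np.concatenate(feats)

# Toy dataset: 60 four-second, 32-channel windows, six eye-state classes
rng = np.random.default_rng(1)
X = np.stack([band_energy_features(rng.standard_normal((32, 4 * FS)))
              for _ in range(60)])
y = rng.integers(0, 6, size=60)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```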
5

Tran, Dang-Khoa, Thanh-Hai Nguyen, and Thanh-Nghia Nguyen. "Detection of EEG-Based Eye-Blinks Using A Thresholding Algorithm." European Journal of Engineering and Technology Research 6, no. 4 (May 11, 2021): 6–12. http://dx.doi.org/10.24018/ejeng.2021.6.4.2438.

Abstract:
In electroencephalography (EEG) studies, eye blinks are a commonly known type of ocular artifact that appears frequently in any EEG measurement. The artifact can be seen as spiking electrical potentials whose time-frequency properties vary across individuals. Their presence can negatively impact various medical or scientific analyses, or be helpful when applied to brain-computer interface applications. Hence, in this paper, detecting eye-blink signals is used to determine the correlation between the human brain and eye movement. The paper presents a simple, fast, and automated eye-blink detection algorithm that does not require user training before execution. EEG signals were smoothed and filtered before eye-blink detection. We conducted experiments with ten volunteers and collected three different eye-blink datasets over three trials using the Emotiv EPOC+ headset. The proposed method performed consistently and successfully detected the spiking activities of eye blinks with a mean accuracy of over 96%.
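A thresholding detector in the spirit of the one described can be sketched as follows; the amplitude threshold, smoothing width, and refractory period are assumed values, not the paper's:

```python
import numpy as np

def detect_blinks(signal, fs, threshold=80.0, smooth_ms=50, refractory_s=0.3):
    """Return sample indices of detected blink onsets in a 1-D trace (uV)."""
    k = max(1, int(fs * smooth_ms / 1000))
    smoothed = np.convolve(signal, np.ones(k) / k, mode="same")  # moving avg
    above = np.flatnonzero(smoothed > threshold)
    blinks, last = [], -np.inf
    for idx in above:
        if idx - last > refractory_s * fs:  # a new event, not the same spike
            blinks.append(idx)
        last = idx
    return blinks

# Toy trace: baseline noise with two inserted blink-like spikes
fs = 128
x = np.random.randn(10 * fs) * 5.0
for center in (2 * fs, 6 * fs):
    x[center - 10:center + 10] += 120.0
print(detect_blinks(x, fs))  # two events, near samples 246 and 758
```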
6

Wang, Xiashuang, Guanghong Gong, and Ni Li. "Multimodal fusion of EEG and fMRI for epilepsy detection." International Journal of Modeling, Simulation, and Scientific Computing 09, no. 02 (March 20, 2018): 1850010. http://dx.doi.org/10.1142/s1793962318500101.

Abstract:
Brain–computer interface (BCI) technology provides a new way of communication and control without language or physical action. Brain signal tracking and positioning is the basis of BCI research, while brain modeling directly affects the analysis of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). This paper proposes a human ellipsoid brain modeling method. We then use a non-parametric spectral estimation method of time–frequency analysis to process simulated and real EEG recordings of epilepsy patients, which utilizes both high spatial and high temporal resolution to improve the doctor’s diagnostic efficiency.
7

Sahat, Norasyimah, Afishah Alias, and Fouziah Md Yassin. "Wheelchair controlled by human brainwave using brain-computer interface system for paralyzed patient." Bulletin of Electrical Engineering and Informatics 10, no. 6 (December 1, 2021): 3032–41. http://dx.doi.org/10.11591/eei.v10i6.3200.

Abstract:
An integrated wheelchair controlled by human brainwaves using a brain-computer interface (BCI) system was designed to help disabled people. The work aims to improve the development of an integrated wheelchair using a BCI system, depending on the individual's brain attention level. An electroencephalography (EEG) device called Mindwave Mobile Plus (MW+) was employed to obtain the attention value for wheelchair movement, with eye blinks changing the mode of the wheelchair to move forward (F), to the right (R), backward (B), and to the left (L). Stop mode (S) is selected upon eyebrow movement, when a signal quality value of 26 or 51 is produced. The development of this brainwave-controlled wheelchair shows the efficiency of the system, improved using the human attention value, eye blink detection, and eyebrow movement. An analysis of the attention value across gender and age categories was also performed to improve the accuracy of the brainwave-integrated wheelchair. The attention threshold value is 60 for male children, 70 for male teenagers, and 40 for male adults, while it is 50 for female children, 50 for female teenagers, and 30 for female adults.
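The mode-switching logic described above lends itself to a small state machine; the following is an illustration of that scheme rather than the authors' implementation, with the threshold taken from the reported male-children setting:

```python
MODES = ["F", "R", "B", "L"]  # forward, right, backward, left

class WheelchairController:
    def __init__(self, attention_threshold=60):  # e.g., 60 for male children
        self.threshold = attention_threshold
        self.mode_idx = 0
        self.stopped = False

    def on_blink(self):          # eye blink cycles the movement mode
        self.stopped = False
        self.mode_idx = (self.mode_idx + 1) % len(MODES)

    def on_eyebrow(self):        # eyebrow movement selects stop mode (S)
        self.stopped = True

    def command(self, attention):
        if self.stopped or attention < self.threshold:
            return "S"
        return MODES[self.mode_idx]

ctrl = WheelchairController()
print(ctrl.command(75))  # 'F': attention above threshold, initial mode
ctrl.on_blink()
print(ctrl.command(75))  # 'R': blink advanced the mode
ctrl.on_eyebrow()
print(ctrl.command(90))  # 'S': stopped regardless of attention
```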
8

Kubacki, Arkadiusz. "Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items." Sensors 21, no. 21 (October 30, 2021): 7244. http://dx.doi.org/10.3390/s21217244.

Abstract:
Research focused on signals derived from the human organism is becoming increasingly popular. In this field, a special role is played by brain-computer interfaces based on brainwaves, which are becoming increasingly popular due to the downsizing of EEG signal recording devices and ever-lower prices. Unfortunately, such systems are substantially limited in terms of the number of generated commands; this especially applies to sets that are not medical devices. This article proposes a hybrid brain-computer system based on the Steady-State Visual Evoked Potential (SSVEP), EOG, eye tracking, and a force feedback system. Such an expanded system eliminates many of the individual systems' shortcomings and provides much better results. The first part of the paper presents information on the methods applied in the hybrid brain-computer system. The system was tested in terms of the operator's ability to place the robot's tip at a designated position. A virtual model of an industrial robot was proposed and used in the testing, and the tests were repeated on a real-life industrial robot. The positioning accuracy of the system was verified with the feedback system both enabled and disabled. The results of tests conducted both on the model and on the real object clearly demonstrate that force feedback improves the positioning accuracy of the robot's tip when controlled by the operator. In addition, the results for the model and the real-life industrial robot are very similar. In the next stage, research was carried out on the possibility of sorting items using the BCI system, again on both the model and the real robot. The results show that it is possible to sort items using biosignals from the human body.
9

McMullen, David P., Guy Hotson, Kapil D. Katyal, Brock A. Wester, Matthew S. Fifer, Timothy G. McGee, Andrew Harris, et al. "Demonstration of a Semi-Autonomous Hybrid Brain–Machine Interface Using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic." IEEE Transactions on Neural Systems and Rehabilitation Engineering 22, no. 4 (July 2014): 784–96. http://dx.doi.org/10.1109/tnsre.2013.2294685.

10

Vortmann, Lisa-Marie, and Felix Putze. "Exploration of Person-Independent BCIs for Internal and External Attention-Detection in Augmented Reality." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 5, no. 2 (June 23, 2021): 1–27. http://dx.doi.org/10.1145/3463507.

Abstract:
Adding attention-awareness to an Augmented Reality setting by using a Brain-Computer Interface promises many interesting new applications and improved usability. The possibly complicated setup and relatively long training period of EEG-based BCIs, however, reduce this positive effect immensely. In this study, we aim at finding solutions for person-independent, training-free BCI integration into AR to classify internally and externally directed attention. We assessed several different classifier settings on a dataset of 14 participants consisting of simultaneously recorded EEG and eye tracking data. For this, we compared the classification accuracies of a linear algorithm, a non-linear algorithm, and a neural net that were trained on a specifically generated feature set, as well as a shallow neural net for raw EEG data. With a real-time system in mind, we also tested different window lengths of the data, aiming at the best payoff between short window length and high classification accuracy. Our results showed that the shallow neural net based on 4-second raw EEG data windows was best suited for real-time person-independent classification. The accuracy for the binary classification of internal and external attention periods reached up to 88% with a model that was trained on a set of selected participants. On average, the person-independent classification rate reached 60%. Overall, high individual differences could be seen in the results. In the future, further datasets are necessary to compare these results before optimizing a real-time person-independent attention classifier for AR.
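The 4-second windowing of raw EEG reported as best suited for real-time use can be sketched as below; the sampling rate, step size, and channel count are assumptions rather than the study's parameters:

```python
import numpy as np

def epoch_windows(eeg, fs, win_s=4.0, step_s=1.0):
    """eeg: (n_channels, n_samples) -> (n_windows, n_channels, win_samples)."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

fs = 250
eeg = np.random.randn(32, 60 * fs)   # one minute of 32-channel EEG
print(epoch_windows(eeg, fs).shape)  # (57, 32, 1000)
```

Each resulting window would then be fed to the classifier (here, the shallow neural net).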
11

Lin, Chin-Teng, Chi-Hsien Liu, Po-Sheng Wang, Jung-Tai King, and Lun-De Liao. "Design and Verification of a Dry Sensor-Based Multi-Channel Digital Active Circuit for Human Brain Electroencephalography Signal Acquisition Systems." Micromachines 10, no. 11 (October 25, 2019): 720. http://dx.doi.org/10.3390/mi10110720.

Abstract:
A brain–computer interface (BCI) is a type of interface/communication system that can help users interact with their environments. Electroencephalography (EEG) has become the most common application of BCIs and provides a way for disabled individuals to communicate. While wet sensors are the most commonly used sensors for traditional EEG measurements, they require considerable preparation time, including the time needed to prepare the skin and to use the conductive gel. Additionally, the conductive gel dries over time, leading to degraded performance. Furthermore, requiring patients to wear wet sensors to record EEG signals is considered highly inconvenient. Here, we report a wireless 8-channel digital active-circuit EEG signal acquisition system that uses dry sensors. Active-circuit systems for EEG measurement allow people to engage in daily life while using these systems, and the advantages of these systems can be further improved by utilizing dry sensors. Moreover, the use of dry sensors can help both disabled and healthy people enjoy the convenience of BCIs in daily life. To verify the reliability of the proposed system, we designed three experiments in which we evaluated eye blinking and teeth gritting, measured alpha waves, and recorded event-related potentials (ERPs) to compare our developed system with a standard Neuroscan EEG system.
12

Amin, Abdullah Al. "A Feasibility Study of Employing EOG Signal in Combination with EEG Based BCI System for Improved Control of a Wheelchair." Bangladesh Journal of Medical Physics 10, no. 1 (December 3, 2018): 47–58. http://dx.doi.org/10.3329/bjmp.v10i1.39150.

Abstract:
For a fully paralysed person, an EEG (electroencephalogram) based Brain Computer Interface (BCI) holds great promise for controlling electromechanical equipment such as a wheelchair. An EOG (electrooculography) based human-machine interface system also provides a possibility. Individually, neither of these methods is capable of giving fully error-free, reliable, and safe control, but an appropriate combination may provide better reliability, which is the aim of the present work. Here we use EEG data to classify two classes corresponding to left and right hand movement, and EOG data to classify two classes corresponding to left and right eyeball movement. We use these classifications independently first and then combine them with different weightages to find whether a better and more reliable control is possible. For this purpose, offline classification of motor imagery EEG data of a subject was carried out, extracting features using Common Spatial Patterns (CSP) and classifying using Linear Discriminant Analysis. The independent EEG motor imagery classification resulted in 89.8% accuracy in 10-fold leave-one-out cross-validation. The EOG eyeball movement produces distinctive signals of opposite polarities and is classified using a simple discriminant-type classifier, resulting in 100% accuracy. However, using EOG alone is not acceptable, as there will always be unintentional eye movements giving false commands. Combining EEG and EOG with different weightages for the two classifications produced varying degrees of improvement. A 50% weightage for both resulted in 100% accuracy without any error, and this may be accepted as a practical solution because the chance of unintentional false commands will be very rare. Therefore, a combination of EOG and BCI may lead to greater reliability in terms of avoidance of undesired control signals.
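The weighted EEG/EOG combination evaluated above can be illustrated with a tiny sketch: per-class scores from the two classifiers are mixed with a weight w, and the larger combined score wins. The probability values below are placeholders, not the paper's numbers:

```python
import numpy as np

def fuse(p_eeg, p_eog, w=0.5):
    """p_eeg, p_eog: class-score vectors for (left, right); w weights EEG."""
    combined = w * np.asarray(p_eeg) + (1.0 - w) * np.asarray(p_eog)
    return ["left", "right"][int(np.argmax(combined))]

# EEG (CSP + LDA) is unsure, EOG is confident: fusion follows the evidence
print(fuse([0.55, 0.45], [0.05, 0.95]))  # 'right'
```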
13

Tian, Peiyuan, Guanghua Xu, Chengcheng Han, Xiaowei Zheng, Kai Zhang, Chenghang Du, Fan Wei, and Sicong Zhang. "Effects of Paradigm Color and Screen Brightness on Visual Fatigue in Light Environment of Night Based on Eye Tracker and EEG Acquisition Equipment." Sensors 22, no. 11 (May 27, 2022): 4082. http://dx.doi.org/10.3390/s22114082.

Abstract:
Nowadays, more people tend to go to bed late and spend their sleep time with various electronic devices. At the same time, BCI (brain–computer interface) rehabilitation equipment uses a visual display, so it is necessary to evaluate visual fatigue to avoid an impact on the training effect. Therefore, it is very important to understand the impact of using electronic devices in a dark environment at night on human visual fatigue. This paper uses MATLAB to write different color paradigm stimulations, uses a 4K display with adjustable screen brightness to jointly design the experiment, and uses an eye tracker and g.tec electroencephalogram (EEG) equipment to collect the signals; data processing and analysis then yield the influence of combinations of different colors and screen brightnesses on human visual fatigue in a dark environment. In this study, subjects were asked to report their subjective perception (Likert scale), and objective signals (pupil diameter, θ + α frequency band data) were collected in a dark environment (<3 lx). The Likert scale showed that a low screen brightness in the dark environment could reduce the visual fatigue of the subjects, and participants preferred blue to red. The pupil data revealed that visual perception sensitivity was more vulnerable to stimulation at medium and high screen brightness, which more readily deepens visual fatigue. The EEG frequency band data showed no significant difference between paradigm colors and screen brightnesses on visual fatigue. On this basis, this paper puts forward a new index—the visual anti-fatigue index, which provides a valuable reference for the optimization of the indoor living environment, the improvement of satisfaction with electronic equipment and BCI rehabilitation equipment, and the protection of human eyes.
14

Miah, Abu Saleh Musa, Md Abdur Rahim, and Jungpil Shin. "Motor-Imagery Classification Using Riemannian Geometry with Median Absolute Deviation." Electronics 9, no. 10 (September 27, 2020): 1584. http://dx.doi.org/10.3390/electronics9101584.

Abstract:
Motor imagery (MI) from human brain signals can diagnose or aid specific physical activities for rehabilitation, recreation, device control, and technology assistance. It is a dynamic state in learning and practicing movement tracking when a person mentally imitates physical activity. Recently, it has been determined that a brain–computer interface (BCI) can support this kind of neurological rehabilitation or mental practice of action. In this context, MI data have been captured via non-invasive electroencephalogram (EEGs), and EEG-based BCIs are expected to become clinically and recreationally ground-breaking technology. However, determining a set of efficient and relevant features for the classification step was a challenge. In this paper, we specifically focus on feature extraction, feature selection, and classification strategies based on MI-EEG data. In an MI-based BCI domain, covariance metrics can play important roles in extracting discriminatory features from EEG datasets. To explore efficient and discriminatory features for the enhancement of MI classification, we introduced a median absolute deviation (MAD) strategy that calculates the average sample covariance matrices (SCMs) to select optimal accurate reference metrics in a tangent space mapping (TSM)-based MI-EEG. Furthermore, all data from SCM were projected using TSM according to the reference matrix that represents the featured vector. To increase performance, we reduced the dimensions and selected an optimum number of features using principal component analysis (PCA) along with an analysis of variance (ANOVA) that could classify MI tasks. Then, the selected features were used to develop linear discriminant analysis (LDA) training for classification. The benchmark datasets were considered for the evaluation and the results show that it provides better accuracy than more sophisticated methods.
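A covariance-to-tangent-space pipeline of the kind this entry builds on can be sketched with the pyriemann package (assumed installed); this generic sketch omits the paper's MAD-based reference selection and the PCA/ANOVA feature reduction:

```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Toy MI data: 40 trials, 8 channels, 256 samples, 2 classes
rng = np.random.default_rng(2)
X = rng.standard_normal((40, 8, 256))
y = rng.integers(0, 2, size=40)

pipe = make_pipeline(
    Covariances(estimator="scm"),    # sample covariance matrix per trial
    TangentSpace(metric="riemann"),  # project SCMs into the tangent space
    LinearDiscriminantAnalysis(),    # LDA on the tangent-space vectors
)
pipe.fit(X, y)
print(pipe.score(X, y))
```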
15

Saravanakumar, S., and N. Selvaraju. "Eye Tracking and Blink Detection for Human Computer Interface." International Journal of Computer Applications 2, no. 2 (May 10, 2010): 7–9. http://dx.doi.org/10.5120/634-873.

16

Simon, Judy. "Human Computer Interface using Eye Gazing with error fixation in Smooth and Saccadic Eye Movement." Journal of Innovative Image Processing 3, no. 4 (December 22, 2021): 336–46. http://dx.doi.org/10.36548/jiip.2021.4.005.

Abstract:
Human Computer Interface (HCI) requires proper coordination and definition of features that serve as input to the system. The parameters of a saccadic and smooth eye movement tracking are observed and a comparison is drawn for HCI. This methodology is further incorporated with Pupil, OpenCV and Microsoft Visual Studio for image processing to identify the position of the pupil and observe the pupil movement direction in real-time. Once the direction is identified, it is possible to determine the accurate cruise position which moves towards the target. To quantify the differences between the step-change tracking of saccadic eye movement and incremental tracking of smooth eye movement, the test was conducted on two users. With the help of incremental tracking of smooth eye movement, an accuracy of 90% is achieved. It is found that the incremental tracking requires an average time of 7.21s while the time for step change tracking is just 2.82s. Based on the observations, it is determined that, when compared to the saccadic eye movement tracking, the smooth eye movement tracking is over four times more accurate. Therefore, the smooth eye tracking was found to be more accurate, precise, reliable, and predictable to use with the mouse cursor than the saccadic eye movement tracking.
17

Stula, Tomas, Antonino Proto, Jan Kubicek, Lukas Peter, Martin Cerny, and Marek Penhaker. "A MATLAB-Based GUI for Remote Electrooculography Visual Examination." Lékař a technika - Clinician and Technology 50, no. 3 (March 19, 2021): 101–13. http://dx.doi.org/10.14311/ctj.2020.3.04.

Abstract:
In this work, a MATLAB-based graphical user interface is proposed for the visual examination of several eye movements. The proposed solution is algorithm-based: it localizes the area of the eye movement, removes artifacts, and calculates the view trajectory in terms of direction and orb deviation. To run the algorithm, a five-electrode configuration is needed. The performance of the proposed MATLAB-based graphical user interface was validated at the Clinic of Child Neurology of University Hospital of Ostrava against the EEG Wave Program, which was considered the “gold standard” test. The proposed solution can help physicians in studying cerebral diseases, or be used for the development of human-machine interfaces useful in today's digital era.
18

Tan, J. K., W. J. Chew, and S. K. Phang. "The application of image processing for Human-Computer Interface (HCI) using the Eye." Journal of Physics: Conference Series 2120, no. 1 (December 1, 2021): 012030. http://dx.doi.org/10.1088/1742-6596/2120/1/012030.

Abstract:
The field of Human-Computer Interaction (HCI) has been developing tremendously over the past decade. The existence of smartphones and modern computers is already a norm in society these days, utilizing touch, voice, and typing as means of input. To further increase the variety of interaction, the human eyes are a good candidate for another form of HCI. The amount of information which the human eyes contain is extremely useful; hence, various methods and algorithms for eye gaze tracking are implemented in multiple sectors. However, some eye-tracking methods require infrared rays to be projected into the eye of the user, which could potentially cause enzyme denaturation when the eye is subjected to those rays under extreme exposure. Therefore, to avoid the potential harm of eye-tracking methods that utilize infrared rays, this paper proposes an image-based eye tracking system using the Viola-Jones algorithm and the Circular Hough Transform (CHT) algorithm. The proposed method uses visible light instead of infrared rays to control the mouse pointer using the eye gaze of the user. This research aims to implement the proposed algorithm for people with hand disabilities to interact with computers using their eye gaze.
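The Viola-Jones plus Circular Hough Transform pipeline described above might look roughly like this in OpenCV; the cascade file ships with OpenCV, while all Hough parameters are assumptions rather than the paper's settings:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def find_pupil(frame_gray):
    """Detect an eye region (Viola-Jones), then the pupil circle (CHT)."""
    eyes = cascade.detectMultiScale(frame_gray, scaleFactor=1.1,
                                    minNeighbors=5)
    for (x, y, w, h) in eyes:
        roi = frame_gray[y:y + h, x:x + w]
        circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=max(1, w // 2), param1=100,
                                   param2=20, minRadius=w // 10,
                                   maxRadius=w // 3)
        if circles is not None:
            cx, cy, r = circles[0][0]
            return (x + cx, y + cy, r)  # pupil center in frame coordinates
    return None

# Usage: gray = cv2.cvtColor(cv2.imread("face.png"), cv2.COLOR_BGR2GRAY)
#        print(find_pupil(gray))
```

The returned pupil position would then be mapped to screen coordinates to drive the mouse pointer.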
19

Modi, Nandini, and Jaiteg Singh. "Role of Eye Tracking in Human Computer Interaction." ECS Transactions 107, no. 1 (April 24, 2022): 8211–18. http://dx.doi.org/10.1149/10701.8211ecst.

Abstract:
With the invention of computers arose the need for an interface for users, and interacting with a computer has become a natural practice. For all the opportunities a machine can bring, it is now a limiting factor for humans and their interaction with machines. This has given rise to a significant amount of research in the area of human computer interaction to make it more intuitive, simpler, and efficient. Human interaction with computers is no longer confined to printers and keyboards. Traditional input devices give way to natural inputs like voice, gestures, and visual computing using eye tracking. This paper provides useful insights into the use of eye gaze tracking technology for human-machine interaction. A case study was conducted with 15 participants to analyze eye movements on an educational website. Visual attention was measured using eye gaze fixation data, and heat maps were utilized to illustrate the results.
20

Szajerman, Dominik, Piotr Napieralski, and Jean-Philippe Lecointe. "Joint analysis of simultaneous EEG and eye tracking data for video images." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 37, no. 5 (September 3, 2018): 1870–84. http://dx.doi.org/10.1108/compel-07-2018-0281.

Abstract:
Purpose: Technological innovation has made it possible to review how a film cues particular reactions on the part of the viewers. The purpose of this paper is to capture and interpret visual perception and attention by the simultaneous use of eye tracking and electroencephalography (EEG) technologies. Design/methodology/approach: The authors have developed a method for joint analysis of EEG and eye tracking. To achieve this goal, an algorithm was implemented to capture and interpret visual perception and attention by the simultaneous use of eye tracking and EEG technologies. All parameters have been measured as a function of the relationship between the tested signals, which, in turn, allowed for a more accurate validation of hypotheses by appropriately selected calculations. Findings: The results of this study revealed a coherence between EEG and eye tracking that is of particular relevance for human perception. Practical implications: This paper endeavors both to capture and interpret visual perception and attention by the simultaneous use of eye tracking and EEG technologies. Eye tracking provides a powerful real-time measure of the viewer's region of interest, while EEG provides data regarding the viewer's emotional states while watching the movie. Originality/value: The approach in this paper is distinct from similar studies because it takes into account the integration of the eye tracking and EEG technologies. This paper provides a method for determining a fully functional video introspection system.
21

Kubacki, Arkadiusz, and Arkadiusz Jakubowski. "Controlling the industrial robot model with the hybrid BCI based on EOG and eye tracking." MATEC Web of Conferences 252 (2019): 06004. http://dx.doi.org/10.1051/matecconf/201925206004.

Abstract:
The article describes the design process of a hybrid brain-computer interface based on electrooculography (EOG) and eye tracking. In the first section, the authors present theoretical information about electroencephalography (EEG), electrooculography (EOG), and eye tracking, and give an overview of the literature concerning hybrid BCIs. The interface was built with bioactive sensors mounted on the head. Movement of an industrial robot model was triggered by eye-movement signals obtained via EOG and eye tracking. The built interface was tested in three experiments, each involving three people aged 25-35, with 30 attempts recorded per scenario and a 1-minute break between attempts. The participants attempted to move a cube from one table to the other.
22

Mannan, Malik M. Naeem, M. Ahmad Kamran, Shinil Kang, Hak Soo Choi, and Myung Yung Jeong. "A Hybrid Speller Design Using Eye Tracking and SSVEP Brain–Computer Interface." Sensors 20, no. 3 (February 7, 2020): 891. http://dx.doi.org/10.3390/s20030891.

Abstract:
Steady-state visual evoked potentials (SSVEPs) have been extensively utilized to develop brain–computer interfaces (BCIs) due to their advantages of robustness, a large number of commands, high classification accuracies, and high information transfer rates (ITRs). However, the use of several simultaneous flickering stimuli often causes high levels of user discomfort, tiredness, annoyance, and fatigue. Here we propose a stimuli-responsive hybrid speller that uses electroencephalography (EEG) and video-based eye-tracking to increase user comfort when presented with large numbers of simultaneously flickering stimuli. A canonical correlation analysis (CCA)-based framework was used to identify the target frequency from a 1 s flickering signal. Our proposed BCI-speller uses only six frequencies to classify forty-eight targets, thus achieving a greatly increased ITR, whereas basic SSVEP BCI-spellers use a number of frequencies equal to the number of targets. Using this speller, we obtained an average classification accuracy of 90.35 ± 3.597% with an average ITR of 184.06 ± 12.761 bits per minute in a cued-spelling task and an ITR of 190.73 ± 17.849 bits per minute in a free-spelling task. Consequently, our proposed speller is superior to other spellers in terms of targets classified, classification accuracy, and ITR, while producing less fatigue, annoyance, tiredness, and discomfort. Together, our proposed hybrid eye-tracking and SSVEP BCI-based system will ultimately enable a truly high-speed communication channel.
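For readers unfamiliar with the CCA framework mentioned above, the standard approach correlates the EEG segment with sine/cosine reference templates for each candidate frequency and picks the best match. This is a generic sketch, not the authors' implementation; the sampling rate, harmonic count, and data are assumed:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS, DUR, HARMONICS = 250, 1.0, 2
t = np.arange(int(FS * DUR)) / FS

def reference(freq):
    """Sine/cosine templates at the stimulus frequency and its harmonics."""
    refs = []
    for h in range(1, HARMONICS + 1):
        refs += [np.sin(2 * np.pi * h * freq * t),
                 np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def detect(eeg, freqs):
    """eeg: (n_samples, n_channels); return the best-matching frequency."""
    scores = []
    for f in freqs:
        xs, ys = CCA(n_components=1).fit_transform(eeg, reference(f))
        scores.append(np.corrcoef(xs[:, 0], ys[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]

# Toy 1 s segment flickering at 10 Hz plus noise on four channels
eeg = np.sin(2 * np.pi * 10 * t)[:, None] + 0.5 * np.random.randn(len(t), 4)
print(detect(eeg, [8.0, 10.0, 12.0]))  # 10.0
```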
23

Xu, Baoguo, Wenlong Li, Deping Liu, Kun Zhang, Minmin Miao, Guozheng Xu, and Aiguo Song. "Continuous Hybrid BCI Control for Robotic Arm Using Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking." Mathematics 10, no. 4 (February 17, 2022): 618. http://dx.doi.org/10.3390/math10040618.

Abstract:
The controlling of robotic arms based on brain–computer interface (BCI) can revolutionize the quality of life and living conditions for individuals with physical disabilities. Invasive electroencephalography (EEG)-based BCI has been able to control multiple degrees of freedom (DOFs) robotic arms in three dimensions. However, it is still hard to control a multi-DOF robotic arm to reach and grasp the desired target accurately in complex three-dimensional (3D) space by a noninvasive system mainly due to the limitation of EEG decoding performance. In this study, we propose a noninvasive EEG-based BCI for a robotic arm control system that enables users to complete multitarget reach and grasp tasks and avoid obstacles by hybrid control. The results obtained from seven subjects demonstrated that motor imagery (MI) training could modulate brain rhythms, and six of them completed the online tasks using the hybrid-control-based robotic arm system. The proposed system shows effective performance due to the combination of MI-based EEG, computer vision, gaze detection, and partially autonomous guidance, which drastically improve the accuracy of online tasks and reduce the brain burden caused by long-term mental activities.
24

Zhang, Xuebai, Xiaolong Liu, Shyan-Ming Yuan, and Shu-Fan Lin. "Eye Tracking Based Control System for Natural Human-Computer Interaction." Computational Intelligence and Neuroscience 2017 (December 18, 2017): 1–9. http://dx.doi.org/10.1155/2017/5739301.

Abstract:
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes, and its usage flow is designed to closely follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (article searching and multimedia web browsing) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures were used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
25

Bozomitu, Radu Gabriel, Alexandru Păsărică, Daniela Tărniceriu, and Cristian Rotariu. "Development of an Eye Tracking-Based Human-Computer Interface for Real-Time Applications." Sensors 19, no. 16 (August 20, 2019): 3630. http://dx.doi.org/10.3390/s19163630.

Abstract:
In this paper, the development of an eye-tracking-based human–computer interface for real-time applications is presented. To identify the most appropriate pupil detection algorithm for the proposed interface, we analyzed the performance of eight algorithms, six of which we developed based on the most representative pupil center detection techniques. The accuracy of each algorithm was evaluated for different eye images from four representative databases and for video eye images using a new testing protocol for a scene image. For all video recordings, we determined the detection rate within a circular target 50-pixel area placed in different positions in the scene image, cursor controllability and stability on the user screen, and running time. The experimental results for a set of 30 subjects show a detection rate over 84% at 50 pixels for all proposed algorithms, and the best result (91.39%) was obtained with the circular Hough transform approach. Finally, this algorithm was implemented in the proposed interface to develop an eye typing application based on a virtual keyboard. The mean typing speed of the subjects who tested the system was higher than 20 characters per minute.
26

Vortmann, Lisa-Marie, Leonid Schwenke, and Felix Putze. "Using Brain Activity Patterns to Differentiate Real and Virtual Attended Targets during Augmented Reality Scenarios." Information 12, no. 6 (May 26, 2021): 226. http://dx.doi.org/10.3390/info12060226.

Abstract:
Augmented reality is the fusion of virtual components and our real surroundings. The simultaneous visibility of generated and natural objects often requires users to direct their selective attention to a specific target that is either real or virtual. In this study, we investigated whether this target is real or virtual by using machine learning techniques to classify electroencephalographic (EEG) and eye tracking data collected in augmented reality scenarios. A shallow convolutional neural net classified 3 second EEG data windows from 20 participants in a person-dependent manner with an average accuracy above 70% if the testing data and training data came from different trials. This accuracy could be significantly increased to 77% using a multimodal late fusion approach that included the recorded eye tracking data. Person-independent EEG classification was possible above chance level for 6 out of 20 participants. Thus, the reliability of such a brain–computer interface is high enough for it to be treated as a useful input mechanism for augmented reality applications.
27

Park, K. S., and C. J. Lim. "A simple vision-based head tracking method for eye-controlled human/computer interface." International Journal of Human-Computer Studies 54, no. 3 (March 2001): 319–32. http://dx.doi.org/10.1006/ijhc.2000.0444.

28

Cheng, Shiwei, and Ying Liu. "Eye-tracking based adaptive user interface: implicit human-computer interaction for preference indication." Journal on Multimodal User Interfaces 5, no. 1-2 (September 10, 2011): 77–84. http://dx.doi.org/10.1007/s12193-011-0064-6.

29

Kim, Byung Hyung, Minho Kim, and Sungho Jo. "Quadcopter flight control using a low-cost hybrid interface with EEG-based classification and eye tracking." Computers in Biology and Medicine 51 (August 2014): 82–92. http://dx.doi.org/10.1016/j.compbiomed.2014.04.020.

30

Jambon, Francis, and Jean Vanderdonckt. "UsyBus: A Communication Framework among Reusable Agents integrating Eye-Tracking in Interactive Applications." Proceedings of the ACM on Human-Computer Interaction 6, EICS (June 14, 2022): 1–36. http://dx.doi.org/10.1145/3532207.

Abstract:
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually, exhaustively and redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus allows multiple heterogeneous eye-trackers as input, provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to gaze interaction applications design. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
31

Wojciechowski, A., and K. Fornalczyk. "Single web camera robust interactive eye-gaze tracking method." Bulletin of the Polish Academy of Sciences Technical Sciences 63, no. 4 (December 1, 2015): 879–86. http://dx.doi.org/10.1515/bpasts-2015-0100.

Abstract:
Eye-gaze tracking is an aspect of human-computer interaction still growing in popularity. Tracking the human gaze point can help control user interfaces and may help evaluate graphical user interfaces. At the same time, professional eye-trackers are very expensive and thus unavailable to most user interface researchers and small companies. The paper presents a very effective, low-cost, computer vision based, interactive eye-gaze tracking method. In contrast to other authors' results, the method achieves very high precision (about 1.5 deg horizontally and 2.5 deg vertically) at 20 fps performance, exploiting a simple HD web camera with reasonable environmental restrictions. The paper describes the algorithms used in the eye-gaze tracking method and the results of experimental tests, both static absolute point-of-interest estimation and dynamic functional gaze-controlled cursor steering.
32

Krishnan, Chetana, Vijay Jeyakumar, and Alex Noel Joseph Raj. "Real-Time Eye Tracking Using Heat Maps." Malaysian Journal of Computer Science 35, no. 4 (October 30, 2022): 339–58. http://dx.doi.org/10.22452/mjcs.vol35no4.3.

Abstract:
Communication in modern days has developed a lot, including wireless networks, Artificial Intelligence (AI) interaction, and human-computer interfaces. People with paralysis and immobility disorders face daily difficulties communicating with others and with gadgets. Eye tracking has proven to offer accessible and accurate interaction compared to other complex automatic interactions. The project aims to develop an electronic eye blinker that integrates with the experimental setup to determine clinical pupil redundancy. The proposed solution is an eye-tracking tool using the laptop's built-in webcam that tracks the eye's pupil within the given screen dimensions and generates heat maps at the tracked locations. These heat maps can denote a letter (in the case of eye writing), an indication to click on that location (in the case of gadget communication), or serve blinking analysis. The proposed method achieves a near-perfect F-measure score of 0.998 to 1.000, which is more accurate and efficient than existing technologies. The solution also provides an effective method to determine the eye's refractive error, which can replace complex refractometers. Further, the spatially tracked coordinates obtained during the experiment can be used to analyze the patient's blinking pattern, which, in turn, can detect retinal disorders and their progress during medication. One application of the project is to integrate the derived model with a brain-computer interface system to allow fast communication for the disabled.
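A gaze heat map of the kind generated here can be sketched by binning gaze coordinates into a 2-D histogram over the screen and smoothing it; the screen resolution, bin size, and Gaussian width below are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(xs, ys, screen=(1920, 1080), cell=20, sigma=2.0):
    """Accumulate gaze samples into screen-aligned bins, then blur."""
    nx, ny = screen[0] // cell, screen[1] // cell
    hist, _, _ = np.histogram2d(xs, ys, bins=[nx, ny],
                                range=[[0, screen[0]], [0, screen[1]]])
    return gaussian_filter(hist, sigma=sigma)  # smoothed dwell density

# Toy gaze trace clustered around the screen centre
rng = np.random.default_rng(3)
hm = gaze_heatmap(rng.normal(960, 120, 2000), rng.normal(540, 90, 2000))
print(hm.shape, np.unravel_index(hm.argmax(), hm.shape))  # peak near centre
```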
33

Naresh, B., S. Rambabu, and D. Khalandar Basha. "ARM Controller and EEG based Drowsiness Tracking and Controlling during Driving." International Journal of Reconfigurable and Embedded Systems (IJRES) 6, no. 3 (May 28, 2018): 127. http://dx.doi.org/10.11591/ijres.v6.i3.pp127-132.

Abstract:
This paper discusses EEG-based drowsiness tracking during distracted driving, based on brain-computer interfaces (BCIs). BCIs are systems that can bypass conventional channels of communication (i.e., muscles and thoughts) to provide direct communication and control between the human brain and physical devices by translating different patterns of brain activity into commands through a controller device in real time. From these brain signals, the signal spectrum is analyzed in MATLAB to estimate the driver's concentration and meditation levels. If another vehicle is near, a voice alert is given to the driver; if the driver is falling asleep, a voice alert is issued using a voice chip; and information about traffic signal indications is provided using RFID. The patterns of interaction between neurons are represented as thoughts and emotional states. According to the person's feelings, this pattern changes, producing different electrical waves; a muscle contraction also generates a unique electrical signal. All these electrical waves are sensed by the brainwave sensor, which converts the data into packets and transmits them over Bluetooth. A level analyzer unit (LAU) receives the raw data from the brainwave sensor and extracts and processes the signal on the MATLAB platform. Information about nearby vehicles is obtained through ultrasonic sensors with voice alerts, and the traffic signal state is detected through RF technology.
34

Liu, Xiao Guang, Jing Tao Hu, He Chun Hu, Xiao Ping Bai, and Lei Gao. "Design and Implementation of Visible Human-Machine Interface for Trajectory Tracking in Agriculture Vehicle Navigation." Advanced Materials Research 466-467 (February 2012): 631–35. http://dx.doi.org/10.4028/www.scientific.net/amr.466-467.631.

Abstract:
This paper designs a visible human-machine interface for the field computer in an agriculture vehicle navigation control system. The field computer, with functions for system configuration, vehicle configuration, steering configuration, job management, path planning, and map views, is the human-machine interface of the navigation control system. This paper introduces the design of the map views, including the field view and the machine view, based on coordinate conversion. The field view gives a bird's-eye view of the map: the agriculture vehicle moves while the map keeps stationary. The machine view keeps the agriculture vehicle in the center of the screen, while the map moves in the reverse direction of the vehicle.
35

Sun, Jianxiang, and Yadong Liu. "A Hybrid Asynchronous Brain–Computer Interface Based on SSVEP and Eye-Tracking for Threatening Pedestrian Identification in Driving." Electronics 11, no. 19 (October 2, 2022): 3171. http://dx.doi.org/10.3390/electronics11193171.

Abstract:
A brain–computer interface (BCI) based on steady-state visual evoked potentials (SSVEP) has achieved remarkable performance in the field of automatic driving. Prolonged SSVEP stimuli can cause driver fatigue and reduce the efficiency of interaction. In this paper, a multi-modal hybrid asynchronous BCI system combining eye-tracking and EEG signals is proposed for dynamic threatening-pedestrian identification in driving. Stimulus arrows of different frequencies and directions are randomly superimposed on pedestrian targets. Subjects scan the stimuli according to the direction of the arrows until the threatening pedestrian is selected. Thresholds determined in offline experiments are used to distinguish between working and idle states in the asynchronous online experiments. Subjects judge and select potentially threatening pedestrians in the online experiments according to their own subjective experience. The three proposed decisions filter out results with low confidence and effectively improve the selection accuracy of the hybrid BCI. The experimental results of six subjects show that the proposed hybrid asynchronous BCI system achieves better performance compared with a single SSVEP-BCI, with an average selection time of 1.33 s, an average selection accuracy of 95.83%, and an average information transfer rate (ITR) of 67.50 bits/min. These results indicate that our hybrid asynchronous BCI has great application potential for dynamic threatening-pedestrian identification in driving.
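The ITR figure quoted above is conventionally computed with the Wolpaw formula; the sketch below shows that calculation for illustrative values (target counts and timing conventions differ between studies, so this does not reproduce the entry's exact figure):

```python
from math import log2

def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
    """Wolpaw ITR: bits per selection scaled to selections per minute."""
    p, n = accuracy, n_classes
    bits = log2(n) + p * log2(p)
    if p < 1.0:
        bits += (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

# e.g., a hypothetical 4-class selection at 95.83% accuracy every 1.33 s
print(round(itr_bits_per_min(4, 0.9583, 1.33), 1))  # ~76 bits/min
```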
36

Diwan, Nihal, Dhiraj Kalyankar, and R. R. Keole. "Review Paper on Live Eye Gaze Tracking to Control Mouse Pointer Movement." International Journal for Research in Applied Science and Engineering Technology 10, no. 8 (August 31, 2022): 411–13. http://dx.doi.org/10.22214/ijraset.2022.46149.

Abstract:
With recent advances in technology, modern computer systems are becoming more flexible. Modern computers are capable of processing millions of instructions per second; in comparison, traditional input devices such as a mouse or keyboard are relatively slow. In this paper, we describe a system through which this limitation of human interaction with the computer can be overcome. With innovation and development in technology, motion sensors are able to capture the position and natural movements of the human body, which has made possible a new way of communicating with computers. Keeping all this in mind, we propose a touch-free and fast communication system. This system is able to capture the movements of the eyeball, which are responsible for cursor control. The system processes the data in the camera feed and calibrates the interface parameters according to the user. The system then runs algorithms to determine the location of the pupil and uses the eyes to implement natural eye-computer interaction.
APA, Harvard, Vancouver, ISO, and other styles
39

Pantanowitz, Adam, Kimoon Kim, Chelsey Chewins, Isabel N. K. Tollman, and David M. Rubin. "Addressing the eye fixation problem in gaze tracking for human computer interface using the vestibulo-ocular reflex." Informatics in Medicine Unlocked 21 (2020): 100488. http://dx.doi.org/10.1016/j.imu.2020.100488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Darisipudi, Ashok, Sushil K. Sharma, Jeff Zhang, Tom Harris, and Sheila Smith. "Usability Testing of an Interactive Online Movie Download Service." International Journal of E-Adoption 5, no. 4 (October 2013): 48–71. http://dx.doi.org/10.4018/ijea.2013100104.

Full text
Abstract:
Human-Computer Interaction (HCI) is gaining momentum as more and more people use technology tools and devices for their daily activities. Users expect highly effective and easy-to-learn interfaces, and developers and designers now realize the crucial role the user interface plays. HCI and system usability design are especially significant in media applications, where usability problems can adversely affect a large population of users, depending on the overall usability of the system design and the user interface design. This study was conducted to gather rich and detailed feedback on users' personal experiences with, and the usability of, a new movie-download software application and subscription service. This was achieved through a different approach: using eye-tracking methodology in conjunction with usability software for usability testing. The study yielded rich quantitative data from the eye-tracking and usability software, enabling better analysis of the product.
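Eye-tracking usability analyses of this kind typically reduce raw gaze samples to dwell times over areas of interest (AOIs). A minimal sketch, in which the AOIs and the sampling rate are invented for illustration:

    SAMPLE_DT = 1 / 60.0   # hypothetical 60 Hz eye tracker

    # hypothetical areas of interest: name -> (x0, y0, x1, y1) in pixels
    AOIS = {
        "download_button": (100, 500, 300, 560),
        "movie_list": (350, 100, 900, 480),
        "search_box": (100, 40, 500, 90),
    }

    def dwell_times(gaze_samples):
        """gaze_samples: iterable of (x, y) points. Returns seconds of
        gaze accumulated inside each area of interest."""
        totals = {name: 0.0 for name in AOIS}
        for x, y in gaze_samples:
            for name, (x0, y0, x1, y1) in AOIS.items():
                if x0 <= x <= x1 and y0 <= y <= y1:
                    totals[name] += SAMPLE_DT
                    break
        return totals

    print(dwell_times([(150, 520), (160, 530), (400, 200), (700, 300)]))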
APA, Harvard, Vancouver, ISO, and other styles
41

Mead, Patrick, David Keller, and Megan Kozub. "Point with your eyes not with your hands." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, no. 1 (September 2016): 835–39. http://dx.doi.org/10.1177/1541931213601190.

Full text
Abstract:
The emergence of gesture-based controls like the Microsoft Kinect provides new opportunities for creative and innovative methods of human computer interaction. However, such devices are not without their limitations. The gross-motor movements of gestural interaction present physical limitations that may negatively affect interaction speed, accuracy, and workload, and subsequently affect the design of system interfaces and inputs. Conversely, interaction methods such as eye tracking require little physical effort, leveraging the unconscious and natural behaviors of human eye movements as inputs. Unfortunately, eye tracking is, in most cases, limited to a simple pointing device. This research shows that by combining these interactions into gaze-based gestural controls, it is possible to overcome the limitations of each method, improving interaction performance by associating gestural commands with interface elements within a user's field of view.
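The fusion the authors describe — gaze supplies the target, the gesture supplies the command — can be sketched as a simple event-binding loop; the gesture vocabulary and target layout below are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        x: float
        y: float

    @dataclass
    class GestureEvent:
        name: str   # e.g. "swipe_left", "grab" (hypothetical vocabulary)

    # hypothetical on-screen targets: name -> bounding box
    TARGETS = {"play": (0, 0, 200, 100), "volume": (220, 0, 420, 100)}

    def target_under_gaze(gaze):
        for name, (x0, y0, x1, y1) in TARGETS.items():
            if x0 <= gaze.x <= x1 and y0 <= gaze.y <= y1:
                return name
        return None

    def fuse(gaze, gesture):
        """Bind the gestural command to whatever interface element the
        user is currently looking at, so the gesture itself can stay
        coarse and low-effort."""
        target = target_under_gaze(gaze)
        return None if target is None else f"{gesture.name} -> {target}"

    print(fuse(GazeSample(100, 50), GestureEvent("grab")))   # "grab -> play"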
APA, Harvard, Vancouver, ISO, and other styles
42

Sergeev, Sergey, Andrey Gubanov, and Daniil Kirillov. "Radar operator’s oculomotor activity at working with group targets." Ergodesign 2021, no. 4 (December 30, 2021): 283–87. http://dx.doi.org/10.30987/2658-4026-2021-4-283-287.

Full text
Abstract:
Computer oculography (eye-tracking) technologies make it possible to study the human oculomotor system effectively within operator activity in man-machine systems. The data obtained in this way help to increase system efficiency, especially under the extreme conditions of activities involving vital and emotional stress, where errors carry a high cost. As part of the ergonomic design of the interface of a radar complex for detection and tracking, the features of the human oculomotor system are examined for work with group targets.
APA, Harvard, Vancouver, ISO, and other styles
43

Szynkiewicz, Wojciech, Włodzimierz Kasprzak, Cezary Zieliński, Wojciech Dudek, Maciej Stefańczyk, Artur Wilkowski, and Maksym Figat. "Utilisation of Embodied Agents in the Design of Smart Human–Computer Interfaces—A Case Study in Cyberspace Event Visualisation Control." Electronics 9, no. 6 (June 11, 2020): 976. http://dx.doi.org/10.3390/electronics9060976.

Full text
Abstract:
The goal of the research reported here was to investigate whether the design methodology utilising embodied agents can be applied to produce a multi-modal human–computer interface for cyberspace event visualisation control. This methodology requires that the designed system structure be defined in terms of cooperating agents having well-defined internal components exhibiting specified behaviours. System activities are defined in terms of finite state machines and behaviours parameterised by transition functions. In the investigated case the multi-modal interface is a component of the Operational Centre, which is a part of the National Cybersecurity Platform. Embodied agents have been successfully used in the design of robotic systems. However, robots operate in physical environments, while cyberspace event visualisation involves cyberspace, so the applied design methodology required a different definition of the environment: it had to encompass the physical environment in which the operator acts and the computer screen where the results of those actions are presented. Smart human–computer interaction (HCI) is a time-aware, dynamic process in which two parties communicate via different modalities, e.g., voice, gesture, eye movement. The use of computer vision and machine intelligence techniques is essential when the human is carrying out an exhausting, concentration-demanding activity. The main role of this interface is to support security analysts and operators controlling visualisation of cyberspace events like incidents or cyber attacks, especially when manipulating graphical information. Visualisation control modalities include visual gesture- and voice-based commands.
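The behaviour model named here — finite state machines parameterised by transition functions — can be sketched generically; the states, predicates, and actions below are illustrative and not the Operational Centre's actual design:

    # generic FSM skeleton for an agent behaviour, in the spirit of the
    # embodied-agent methodology (states and transitions are illustrative)
    class AgentFSM:
        def __init__(self, initial_state, transitions):
            """transitions: {state: [(predicate, next_state, action), ...]}"""
            self.state = initial_state
            self.transitions = transitions

        def step(self, inputs):
            for predicate, next_state, action in self.transitions.get(self.state, []):
                if predicate(inputs):
                    action(inputs)
                    self.state = next_state
                    return
            # no predicate fired: remain in the current state

    transitions = {
        "idle": [(lambda i: i.get("voice") == "show map",
                  "visualising", lambda i: print("opening cyberspace map"))],
        "visualising": [(lambda i: i.get("gesture") == "close",
                         "idle", lambda i: print("closing view"))],
    }

    fsm = AgentFSM("idle", transitions)
    fsm.step({"voice": "show map"})
    fsm.step({"gesture": "close"})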
APA, Harvard, Vancouver, ISO, and other styles
44

Planke, Lars J., Alessandro Gardi, Roberto Sabatini, Trevor Kistan, and Neta Ezer. "Online Multimodal Inference of Mental Workload for Cognitive Human Machine Systems." Computers 10, no. 6 (June 16, 2021): 81. http://dx.doi.org/10.3390/computers10060081.

Full text
Abstract:
With increasingly higher levels of automation in aerospace decision support systems, it is imperative that the human operator maintains a high level of situational awareness in different operational conditions and a central role in the decision-making process. While current aerospace systems and interfaces are limited in their adaptability, a Cognitive Human Machine System (CHMS) aims to perform dynamic, real-time system adaptation by estimating the cognitive states of the human operator. Nevertheless, to reliably drive system adaptation of current and emerging aerospace systems, there is a need to accurately and repeatably estimate cognitive states, particularly Mental Workload (MWL), in real time. As part of this study, two sessions were performed during a Multi-Attribute Task Battery (MATB) scenario: a session for offline calibration and validation and a session for online validation of eleven multimodal inference models of MWL. The multimodal inference models implemented an Adaptive Neuro Fuzzy Inference System (ANFIS), used in different configurations to fuse data from an Electroencephalogram (EEG) model's output, four eye-activity features, and a control-input feature. The results from the online validation showed that five of the ANFIS models (containing different combinations of eye-activity and control-input features) demonstrated good results, with the best-performing model (containing all four eye-activity features and the control-input feature) achieving an average Mean Absolute Error (MAE) of 0.67 ± 0.18 and a Correlation Coefficient (CC) of 0.71 ± 0.15. The remaining six ANFIS models included data from the EEG model's output, which had an offset discrepancy that produced an equivalent offset in the online multimodal fusion. Nonetheless, the efficacy of these models could be seen in their pairwise correlation with the task level, where one model demonstrated a CC of 0.77 ± 0.06, the highest among all the ANFIS models tested. Hence, this study demonstrates the ability of online multimodal fusion of features extracted from EEG signals, eye activity, and control inputs to produce an accurate and repeatable inference of MWL.
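ANFIS has no canonical implementation in mainstream Python libraries, so the sketch below substitutes a generic regressor to show the same offline-calibrate/online-infer fusion pattern; the feature names mirror the abstract, but the data and the model choice are stand-ins:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor  # stand-in for ANFIS

    rng = np.random.default_rng(1)
    n = 200
    # invented features: EEG model output, four eye-activity features,
    # and one control-input feature, as in the paper's richest model
    X = np.column_stack([
        rng.normal(size=n),   # EEG workload model output
        rng.normal(size=n),   # blink rate
        rng.normal(size=n),   # fixation duration
        rng.normal(size=n),   # pupil diameter
        rng.normal(size=n),   # saccade rate
        rng.normal(size=n),   # control-input activity
    ])
    y = X @ np.array([0.5, -0.2, 0.3, 0.4, -0.1, 0.3]) + rng.normal(0, 0.1, n)

    # offline calibration session ...
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:150], y[:150])
    # ... then online inference on new multimodal samples
    pred = model.predict(X[150:])
    print(f"MAE on held-out data: {np.mean(np.abs(pred - y[150:])):.2f}")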
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Peng, Xiaoliang Bai, Mark Billinghurst, Shusheng Zhang, Weiping He, Dechuan Han, Yue Wang, Haitao Min, Weiqi Lan, and Shu Han. "Using a Head Pointer or Eye Gaze: The Effect of Gaze on Spatial AR Remote Collaboration for Physical Tasks." Interacting with Computers 32, no. 2 (March 2020): 153–69. http://dx.doi.org/10.1093/iwcomp/iwaa012.

Full text
Abstract:
This paper investigates the effect of using augmented reality (AR) annotations and two different gaze visualizations, head pointer (HP) and eye gaze (EG), in an AR system for remote collaboration on physical tasks. First, we developed a spatial AR remote collaboration platform that supports sharing the remote expert’s HP or EG cues. Then the prototype system was evaluated with a user study comparing three conditions for sharing non-verbal cues: (1) a cursor pointer (CP), (2) HP and (3) EG with respect to task performance, workload assessment and user experience. We found that there was a clear difference between these three conditions in the performance time but no significant difference between the HP and EG conditions. When considering the perceived collaboration quality, the HP/EG interface was statistically significantly higher than the CP interface, but there was no significant difference for workload assessment between these three conditions. We used low-cost head tracking for the HP cue and found that this served as an effective referential pointer. This implies that in some circumstances, HP could be a good proxy for EG in remote collaboration. Head pointing is more accessible and cheaper to use than more expensive eye-tracking hardware and paves the way for multi-modal interaction based on HP and gesture in AR remote collaboration.
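A head pointer of the kind evaluated here is commonly computed by intersecting the head-pose ray with the task surface; a minimal sketch under that assumption (the geometry is illustrative, not taken from the paper):

    import numpy as np

    def head_pointer(head_pos, head_dir, plane_point, plane_normal):
        """Intersect the head-pose ray with a task-surface plane to get
        a pointer location. Returns a 3-D point, or None if the ray is
        parallel to or points away from the plane."""
        head_dir = head_dir / np.linalg.norm(head_dir)
        denom = np.dot(plane_normal, head_dir)
        if abs(denom) < 1e-9:
            return None
        t = np.dot(plane_normal, plane_point - head_pos) / denom
        return head_pos + t * head_dir if t > 0 else None

    # head half a metre back, looking slightly down at a table at z = 0
    print(head_pointer(np.array([0.0, 0.3, 0.5]),
                       np.array([0.0, -0.4, -1.0]),
                       np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0])))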
APA, Harvard, Vancouver, ISO, and other styles
46

Açık, Alper, Duygun Erol Barkana, Gökhan Akgün, Asım Evren Yantaç, and Çağla Aydın. "Evaluation of a surgical interface for robotic cryoablation task using an eye-tracking system." International Journal of Human-Computer Studies 95 (November 2016): 39–53. http://dx.doi.org/10.1016/j.ijhcs.2016.07.004.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hudák, Martin, Radovan Madleňák, and Veronika Brezániová. "THE IMPACT OF ADVERTISEMENT ON CONSUMER’S PERCEPTION." CBU International Conference Proceedings 5 (September 22, 2017): 187–91. http://dx.doi.org/10.12955/cbup.v5.923.

Full text
Abstract:
Marketing can be described as a tool for companies to influence consumer perception in the desired direction. The current market situation is characterized by dynamism, growing consumer power, and intense competition. Consumer perception and behavior are changing and therefore need to be constantly monitored and measured. The aim of this article is to capture and measure the consumer's perception while watching a video advertisement. In this experiment, eye-tracking technology was used to capture the consumer's gaze. The central part of the research measures the consumer's brain activity with EEG (electroencephalography): the EMOTIV Epoc+, a 14-channel wireless EEG headset designed for contextualized research and advanced brain-computer interface applications. Advertising campaigns from four different mobile operators were used for this purpose. In the conclusion of this article, consumers' perceptions of the different advertising campaigns are compared and evaluated.
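Comparisons of EEG activity across advertisements are often reduced to band power per segment; the sketch below uses SciPy's Welch estimator on invented data and is not EMOTIV's proprietary metric:

    import numpy as np
    from scipy.signal import welch

    FS = 128   # EMOTIV Epoc+ nominal sampling rate (Hz)

    def band_power(eeg, fs, lo, hi):
        """Mean power in [lo, hi] Hz across channels of an
        (n_samples, n_channels) EEG segment."""
        f, pxx = welch(eeg, fs=fs, nperseg=fs * 2, axis=0)
        band = (f >= lo) & (f <= hi)
        return pxx[band].mean()

    # invented segments recorded while two different ads were shown
    rng = np.random.default_rng(2)
    ad_a = rng.normal(size=(FS * 10, 14))   # 10 s, 14 channels
    ad_b = rng.normal(size=(FS * 10, 14))

    for name, seg in [("ad A", ad_a), ("ad B", ad_b)]:
        print(name, "alpha power:", band_power(seg, FS, 8, 13))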
APA, Harvard, Vancouver, ISO, and other styles
48

Tchanou, Armel Quentin, Pierre-Majorique Léger, Jared Boasen, Sylvain Senecal, Jad Adam Taher, and Marc Fredette. "Collaborative Use of a Shared System Interface: The Role of User Gaze—Gaze Convergence Index Based on Synchronous Dual-Eyetracking." Applied Sciences 10, no. 13 (June 29, 2020): 4508. http://dx.doi.org/10.3390/app10134508.

Full text
Abstract:
Gaze convergence of multiuser eye movements during simultaneous collaborative use of a shared system interface has been proposed as an important albeit sparsely explored construct in human-computer interaction literature. Here, we propose a novel index for measuring the gaze convergence of user dyads and address its validity through two consecutive eye-tracking studies. Eye-tracking data of user dyads were synchronously recorded while they simultaneously performed tasks on shared system interfaces. Results indicate the validity of the proposed gaze convergence index for measuring the gaze convergence of dyads. Moreover, as expected, our gaze convergence index was positively associated with dyad task performance and negatively associated with dyad cognitive load. These results suggest the utility of (theoretical or practical) applications such as synchronized gaze convergence displays in diverse settings. Further research perspectives, particularly into the construct’s nomological network, are warranted.
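The authors' index is defined in the paper itself; purely as an illustration of the idea, a toy convergence score can be built from the distance between the two users' time-aligned gaze points:

    import numpy as np

    def convergence_index(gaze_a, gaze_b, screen_diag):
        """Toy gaze-convergence score for a dyad: 1 when both users look
        at the same point, approaching 0 as their synchronous gaze points
        drift apart. gaze_a/gaze_b: (n, 2) arrays of time-aligned screen
        coordinates. Not the paper's index."""
        d = np.linalg.norm(np.asarray(gaze_a) - np.asarray(gaze_b), axis=1)
        return float(np.mean(1.0 - np.clip(d / screen_diag, 0.0, 1.0)))

    a = [(100, 100), (200, 150), (300, 300)]
    b = [(110, 105), (400, 380), (310, 295)]
    print(convergence_index(a, b, screen_diag=2203))   # 1920x1080 diagonal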
APA, Harvard, Vancouver, ISO, and other styles
49

Kryuchkov, B. I., V. M. Usov, V. A. Chertopolokhov, A. L. Ronzhin, and A. A. Karpov. "SIMULATION OF THE «COSMONAUT-ROBOT» SYSTEM INTERACTION ON THE LUNAR SURFACE BASED ON METHODS OF MACHINE VISION AND COMPUTER GRAPHICS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W4 (May 10, 2017): 129–33. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w4-129-2017.

Full text
Abstract:
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One factor in safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g., a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" (master/slave) mode, a robot uses onboard tools to track the cosmonaut's position and movements and builds its itinerary on the basis of these data. Interaction in the cosmonaut-robot system on the lunar surface differs significantly from that on the Earth's surface. For example, a person dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for cosmonauts, and a tired human performs movements less accurately and makes mistakes more often. All this leads to new requirements for the usability of a man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at each task stage and for gesture recognition. New tools and techniques for space missions must first be examined in laboratory conditions and then in field tests (proof tests at the site of application). The article analyzes methods for detecting and tracking the cosmonaut's movements and recognizing gestures during EVA, which can be used in the design of the human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves environment visualization and modeling the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
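The "Follow Me" (master/slave) mode can be sketched as a standoff-distance controller driven by the tracked cosmonaut position; the gain, standoff distance, and time step below are illustrative:

    import math

    STANDOFF = 3.0   # illustrative following distance (m)
    GAIN = 0.5       # illustrative proportional gain

    def follow_step(robot_xy, cosmonaut_xy, dt=0.1):
        """One control step: move the robot toward the tracked cosmonaut,
        stopping at a safe standoff distance."""
        dx = cosmonaut_xy[0] - robot_xy[0]
        dy = cosmonaut_xy[1] - robot_xy[1]
        dist = math.hypot(dx, dy)
        if dist <= STANDOFF:
            return robot_xy                     # close enough: hold position
        speed = GAIN * (dist - STANDOFF)        # proportional to the error
        return (robot_xy[0] + speed * dx / dist * dt,
                robot_xy[1] + speed * dy / dist * dt)

    pos = (0.0, 0.0)
    for _ in range(5):
        pos = follow_step(pos, (10.0, 5.0))
    print(pos)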
APA, Harvard, Vancouver, ISO, and other styles
50

Tresanchez, Marcel, Tomàs Pallejà, and Jordi Palacín. "Optical Mouse Sensor for Eye Blink Detection and Pupil Tracking: Application in a Low-Cost Eye-Controlled Pointing Device." Journal of Sensors 2019 (December 10, 2019): 1–19. http://dx.doi.org/10.1155/2019/3931713.

Full text
Abstract:
In this paper, a new application of the optical mouse sensor is presented. The optical mouse sensor is used as the main low-cost infrared vision system of a newly proposed head-mounted human-computer interaction (HCI) device controlled by eye movements. The sensor's default lens and illumination source are replaced to improve its field of view and capture entire eye images. A complementary 8-bit microcontroller acquires and processes these images with two optimized algorithms that detect forced eye blinks and pupil displacements, which are translated into computer pointer actions. This proposal introduces an inexpensive and approachable plug-and-play (PnP) device for people with severe disability in the upper extremities, neck, and head. The presented pointing device performs standard computer mouse actions with no extra software required. It uses the human interface device (HID) standard class of the universal serial bus (USB), increasing its compatibility with most computer platforms. This new device aims to improve on the comfort and portability of current commercial devices, with simple installation and calibration. Several performance tests were carried out with volunteer users, obtaining an average pupil-detection error of 0.34 pixels and successful detection of 82.6% of all mouse events requested by means of pupil tracking.
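A pupil-displacement algorithm that fits an 8-bit microcontroller essentially reduces to a dark-pixel centroid over a tiny grayscale frame; a pure-Python sketch of that idea (the threshold and frame size are illustrative, not the paper's values):

    THRESHOLD = 50   # illustrative: pixels darker than this count as pupil

    def pupil_centroid(frame):
        """frame: list of rows of 8-bit intensities. Returns the (x, y)
        centroid of dark pixels, or None when none are found (a blink can
        be flagged when the dark area vanishes or floods the frame)."""
        sx = sy = count = 0
        for y, row in enumerate(frame):
            for x, px in enumerate(row):
                if px < THRESHOLD:
                    sx += x
                    sy += y
                    count += 1
        if count == 0:
            return None
        return sx / count, sy / count

    # 8x8 toy frame: bright background with a dark 2x2 'pupil'
    frame = [[200] * 8 for _ in range(8)]
    for y in (3, 4):
        for x in (5, 6):
            frame[y][x] = 20
    print(pupil_centroid(frame))   # (5.5, 3.5)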
APA, Harvard, Vancouver, ISO, and other styles