Ready-made bibliography on the topic "Audio-EEG analysis"

Create a correct reference in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Audio-EEG analysis".

An "Add to bibliography" button is available next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a .pdf file and read its abstract online, provided the relevant details are available in the work's metadata.

Journal articles on the topic "Audio-EEG analysis"

1

Reddy Katthi, Jaswanth, and Sriram Ganapathy. "Deep Correlation Analysis for Audio-EEG Decoding". IEEE Transactions on Neural Systems and Rehabilitation Engineering 29 (2021): 2742–53. http://dx.doi.org/10.1109/tnsre.2021.3129790.

2

Geng, Bingrui, Ke Liu, and Yiping Duan. "Human Perception Intelligent Analysis Based on EEG Signals". Electronics 11, no. 22 (November 17, 2022): 3774. http://dx.doi.org/10.3390/electronics11223774.

Abstract:
The research on brain cognition provides theoretical support for intelligence and cognition in computational intelligence, and it is further applied in various fields of scientific and technological innovation, production and life. Use of the 5G network and intelligent terminals has also brought diversified experiences to users. This paper studies human perception and cognition in the quality of experience (QoE) through audio noise. It proposes a novel method to study the relationship between human perception and audio noise intensity using electroencephalogram (EEG) signals. This kind of physiological signal can be used to analyze the user’s cognitive process through transformation and feature calculation, so as to overcome the deficiency of traditional subjective evaluation. Experimental and analytical results show that the EEG signals in frequency domain can be used for feature learning and calculation to measure changes in user-perceived audio noise intensity. In the experiment, the user’s noise tolerance limit for different audio scenarios varies greatly. The noise power spectral density of soothing audio is 0.001–0.005, and the noise spectral density of urgent audio is 0.03. The intensity of information flow in the corresponding brain regions increases by more than 10%. The proposed method explores the possibility of using EEG signals and computational intelligence to measure audio perception quality. In addition, the analysis of the intensity of information flow in different brain regions invoked by different tasks can also be used to study the theoretical basis of computational intelligence.
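For readers who want to experiment with the kind of frequency-domain EEG features this abstract refers to, the sketch below estimates per-band power with Welch's method. It is an illustrative stand-in, not the authors' code; the 250 Hz sampling rate, channel count, and band edges are assumptions.

```python
# Minimal sketch: per-band EEG power from Welch's PSD estimate (assumed parameters).
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """eeg: (n_channels, n_samples) array; returns mean power per band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2, axis=-1)  # PSD per channel
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        # integrate PSD over the band, then average across channels
        powers[name] = float(np.trapz(psd[:, mask], freqs[mask], axis=-1).mean())
    return powers

# usage with synthetic data standing in for a noisy-audio listening segment
rng = np.random.default_rng(0)
segment = rng.standard_normal((32, FS * 10))  # 32 channels, 10 s
print(band_powers(segment))
```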
3

Dasenbrock, Steffen, Sarah Blum, Stefan Debener, Volker Hohmann, and Hendrik Kayser. "A Step towards Neuro-Steered Hearing Aids: Integrated Portable Setup for Time-Synchronized Acoustic Stimuli Presentation and EEG Recording". Current Directions in Biomedical Engineering 7, no. 2 (October 1, 2021): 855–58. http://dx.doi.org/10.1515/cdbme-2021-2218.

Abstract:
Aiming to provide a portable research platform to develop algorithms for neuro-steered hearing aids, a joint hearing aid - EEG measurement setup was implemented in this work. The setup combines the miniaturized electroencephalography sensor technology cEEGrid with a portable hearing aid research platform - the Portable Hearing Laboratory. The different components of the system are connected wirelessly, using the lab streaming layer framework for synchronization of audio and EEG data streams. Our setup was shown to be suitable for simultaneous recording of audio and EEG signals used in a pilot study (n=5) to perform an auditory Oddball experiment. The analysis showed that the setup can reliably capture typical event-related potential responses. Furthermore, linear discriminant analysis was successfully applied for single-trial classification of P300 responses. The study showed that time-synchronized audio and EEG data acquisition is possible with the Portable Hearing Laboratory research platform.
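The lab streaming layer workflow mentioned here can be approximated with the pylsl bindings. The following is a minimal sketch under assumed stream names ("AudioMarkers", a generic "EEG" stream already being published on the network); it is not the paper's setup code.

```python
# Minimal sketch: time-aligning an audio-marker stream with EEG via lab streaming layer.
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop, local_clock

# stimulus side: publish an onset marker for each audio presentation
marker_info = StreamInfo(name="AudioMarkers", type="Markers", channel_count=1,
                         nominal_srate=0, channel_format="string", source_id="stim01")
marker_outlet = StreamOutlet(marker_info)
marker_outlet.push_sample(["oddball_tone"], local_clock())  # timestamp on the shared LSL clock

# recording side: subscribe to an EEG stream (assumed to exist) and keep LSL timestamps
eeg_streams = resolve_byprop("type", "EEG", timeout=5.0)
inlet = StreamInlet(eeg_streams[0])
sample, timestamp = inlet.pull_sample(timeout=1.0)
# EEG samples and markers can later be merged on their common LSL time base
```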
4

Lee, Yi Yeh, Aaron Raymond See, Shih Chung Chen, and Chih Kuo Liang. "Effect of Music Listening on Frontal EEG Asymmetry". Applied Mechanics and Materials 311 (February 2013): 502–6. http://dx.doi.org/10.4028/www.scientific.net/amm.311.502.

Abstract:
Frontal EEG asymmetry has been recognized as a useful method in determining emotional states and psychophysiological conditions. For the current research, resting prefrontal EEG was measured before, during and after listening to sad music video. Data were recorded and analyzed using a wireless EEG module with digital results sent via Bluetooth to a remote computer for further analysis. The relative alpha power was utilized to determine EEG asymmetry indexes. The results indicated that even if a person had a stronger right hemisphere in the initial phase a significant shift first occurred during audio-video stimulation and was followed by a further inclination to left EEG asymmetry as measured after the stimulation. Furthermore the current research was able to use prefrontal EEG to produce results that were mostly measured at the frontal lobe. It was also able to provide significant changes in results using audio and video stimulation as to previous experiments that made use of audio stimulation. In the future, more experiments can be conducted to obtain a better understanding of a person’s appreciation or dislike toward a certain video, commercial or other multimedia contents through the aid of convenient EEG module.
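As a concrete illustration of the asymmetry index mentioned above, the sketch below computes a relative-alpha asymmetry score, ln(right) − ln(left), from two prefrontal channels. The channel pairing, sampling rate, and band limits are assumptions rather than the paper's exact settings.

```python
# Minimal sketch: frontal alpha-asymmetry index from relative alpha power (assumed setup).
import numpy as np
from scipy.signal import welch

def relative_alpha_power(x, fs=250):
    """Relative alpha (8-13 Hz) power of one channel; x: (n_samples,)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    broad = (freqs >= 1) & (freqs <= 40)
    alpha = (freqs >= 8) & (freqs <= 13)
    return np.trapz(psd[alpha], freqs[alpha]) / np.trapz(psd[broad], freqs[broad])

def frontal_asymmetry(right_ch, left_ch, fs=250):
    # positive values = relatively more right-hemisphere alpha,
    # conventionally read as relatively greater left-hemisphere activation
    return np.log(relative_alpha_power(right_ch, fs)) - np.log(relative_alpha_power(left_ch, fs))

# usage with synthetic channels standing in for Fp2/Fp1 recordings
rng = np.random.default_rng(1)
print(frontal_asymmetry(rng.standard_normal(2500), rng.standard_normal(2500)))
```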
5

Hadjidimitriou, Stelios K., Asteris I. Zacharakis, Panagiotis C. Doulgeris, Konstantinos J. Panoulas, Leontios J. Hadjileontiadis, and Stavros M. Panas. "Revealing Action Representation Processes in Audio Perception Using Fractal EEG Analysis". IEEE Transactions on Biomedical Engineering 58, no. 4 (April 2011): 1120–29. http://dx.doi.org/10.1109/tbme.2010.2047016.

6

Reshetnykov, Denys S. "EEG Analysis of Person Familiarity with Audio-Video Data Assessing Task". Upravlâûŝie sistemy i mašiny, no. 4 (276) (August 2018): 70–83. http://dx.doi.org/10.15407/usim.2018.04.070.

7

Kumar, Himanshu, Subha D. Puthankattil, and Ramakrishnan Swaminathan. "Analysis of EEG Response for Audio-Visual Stimuli in Frontal Electrodes at Theta Frequency Band Using the Topological Features". Biomedical Sciences Instrumentation 57, no. 2 (April 1, 2021): 333–39. http://dx.doi.org/10.34107/yhpn9422.04333.

Abstract:
Emotions are the fundamental intellectual capacity of humans characterized by perception, attention, and behavior. Emotions are characterized by psychophysiological expressions. Studies have been performed by analyzing Electroencephalogram (EEG) responses from various lobes of the brain under all frequency bands. In this work, the EEG response of the theta band in the frontal lobe is analyzed extracting topological features during audio-visual stimulation. This study is carried out using the EEG signals from the public domain database. In this method, the signals are projected in higher dimensional space to find out the geometrical properties. Features, namely the center of gravity and perimeter of the boundary space, are used to quantify the changes in the geometrical properties of the signal, and the features are subject to the Wilcoxon rank-sum test for statistical significance. Different electrodes in the frontal region under the same audio-visual stimulus showed similar variations in the geometry of the boundary in higher-dimensional space. Further, the electrodes, Fp1 and F3, showed a statistical significance of p < 0.05 in differentiating arousal states, and the Fp1 electrode showed a statistical significance in differentiating valence emotional state. Thus, the topological features extracted from the frontal electrodes in theta band could differentiate arousal and valence emotional states and be of significant clinical relevance.
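A hedged sketch of the general idea described above: embed a theta-band epoch in a low-dimensional space, take simple geometric descriptors of the point cloud (centre of gravity, boundary perimeter), and test group differences with a Wilcoxon rank-sum test. The embedding delay and the synthetic data are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch: geometric features of a 2-D delay embedding + rank-sum test.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import ranksums

def geometric_features(x, delay=5):
    """2-D delay embedding of one epoch; returns (centre_of_gravity, perimeter)."""
    pts = np.column_stack([x[:-delay], x[delay:]])
    hull = ConvexHull(pts)
    cog = pts.mean(axis=0)        # centre of gravity of the embedded point cloud
    perimeter = hull.area         # for 2-D hulls, .area is the boundary length
    return cog, perimeter

# compare perimeters between two synthetic emotional conditions
rng = np.random.default_rng(2)
low_arousal = [geometric_features(rng.standard_normal(512))[1] for _ in range(20)]
high_arousal = [geometric_features(1.5 * rng.standard_normal(512))[1] for _ in range(20)]
stat, p = ranksums(low_arousal, high_arousal)
print(f"rank-sum statistic {stat:.2f}, p = {p:.4f}")
```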
8

Ribeiro, Estela, and Carlos Eduardo Thomaz. "A Whole Brain EEG Analysis of Musicianship". Music Perception 37, no. 1 (September 1, 2019): 42–56. http://dx.doi.org/10.1525/mp.2019.37.1.42.

Abstract:
The neural activation patterns provoked in response to music listening can reveal whether a subject did or did not receive music training. In the current exploratory study, we have approached this two-group (musicians and nonmusicians) classification problem through a computational framework composed of the following steps: Acoustic features extraction; Acoustic features selection; Trigger selection; EEG signal processing; and Multivariate statistical analysis. We are particularly interested in analyzing the brain data on a global level, considering its activity registered in electroencephalogram (EEG) signals on a given time instant. Our experiment's results—with 26 volunteers (13 musicians and 13 nonmusicians) who listened the classical music Hungarian Dance No. 5 from Johannes Brahms—have shown that is possible to linearly differentiate musicians and nonmusicians with classification accuracies that range from 69.2% (test set) to 93.8% (training set), despite the limited sample sizes available. Additionally, given the whole brain vector navigation method described and implemented here, our results suggest that it is possible to highlight the most expressive and discriminant changes in the participants brain activity patterns depending on the acoustic feature extracted from the audio.
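The two-group linear classification step described here can be sketched with a linear discriminant on whole-brain EEG vectors and simple cross-validation. The 64-feature vectors, fold count, and synthetic data below are placeholders, not the study's framework.

```python
# Minimal sketch: linear separation of musicians vs. non-musicians from brain-state vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((26, 64))          # 26 subjects x 64-channel brain-state vector (toy)
y = np.array([1] * 13 + [0] * 13)          # musicians vs. non-musicians

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=13)  # simple cross-validated accuracy estimate
print(f"mean accuracy: {scores.mean():.3f}")
```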
9

Gao, Chenguang, Hirotaka Uchitomi, and Yoshihiro Miyake. "Influence of Multimodal Emotional Stimulations on Brain Activity: An Electroencephalographic Study". Sensors 23, no. 10 (May 16, 2023): 4801. http://dx.doi.org/10.3390/s23104801.

Abstract:
This study aimed to reveal the influence of emotional valence and sensory modality on neural activity in response to multimodal emotional stimuli using scalp EEG. In this study, 20 healthy participants completed the emotional multimodal stimulation experiment for three stimulus modalities (audio, visual, and audio-visual), all of which are from the same video source with two emotional components (pleasure or unpleasure), and EEG data were collected using six experimental conditions and one resting state. We analyzed power spectral density (PSD) and event-related potential (ERP) components in response to multimodal emotional stimuli, for spectral and temporal analysis. PSD results showed that the single modality (audio only/visual only) emotional stimulation PSD differed from multi-modality (audio-visual) in a wide brain and band range due to the changes in modality and not from the changes in emotional degree. The most pronounced N200-to-P300 potential shifts occurred in monomodal rather than multimodal emotional stimulations. This study suggests that emotional saliency and sensory processing efficiency perform a significant role in shaping neural activity during multimodal emotional stimulation, with the sensory modality being more influential in PSD. These findings contribute to our understanding of the neural mechanisms involved in multimodal emotional stimulation.
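To make the temporal (ERP) part of such an analysis concrete, here is a minimal epoching-and-averaging sketch in plain NumPy. The sampling rate, baseline window, and the N200/P300 windows are assumptions; this is not the authors' pipeline.

```python
# Minimal sketch: epoch EEG around stimulus onsets, average to an ERP, read window means.
import numpy as np

FS = 500  # assumed sampling rate (Hz)

def erp(eeg, onsets, tmin=-0.2, tmax=0.8, fs=FS):
    """eeg: (n_samples,) single channel; onsets: stimulus onset sample indices."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets])
    epochs = epochs - epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
    return epochs.mean(axis=0)                                     # average ERP

def window_mean(avg, lo_s, hi_s, tmin=-0.2, fs=FS):
    lo, hi = int((lo_s - tmin) * fs), int((hi_s - tmin) * fs)
    return avg[lo:hi].mean()

# usage with synthetic data: one channel, a stimulus every 2 s
rng = np.random.default_rng(4)
eeg = rng.standard_normal(FS * 60)
onsets = np.arange(2 * FS, 58 * FS, 2 * FS)
avg = erp(eeg, onsets)
print(window_mean(avg, 0.18, 0.25), window_mean(avg, 0.25, 0.50))  # ~N200, ~P300 windows
```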
10

Lee, Tae-Ju, Seung-Min Park, and Kwee-Bo Sim. "Electroencephalography Signal Grouping and Feature Classification Using Harmony Search for BCI". Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/754539.

Abstract:
This paper presents a heuristic method for electroencephalography (EEG) grouping and feature classification using harmony search (HS) for improving the accuracy of the brain-computer interface (BCI) system. EEG, a noninvasive BCI method, uses many electrodes on the scalp, and a large number of electrodes make the resulting analysis difficult. In addition, traditional EEG analysis cannot handle multiple stimuli. On the other hand, the classification method using the EEG signal has a low accuracy. To solve these problems, we use a heuristic approach to reduce the complexities in multichannel problems and classification. In this study, we build a group of stimuli using the HS algorithm. Then, the features from common spatial patterns are classified by the HS classifier. To confirm the proposed method, we perform experiments using 64-channel EEG equipment. The subjects are subjected to three kinds of stimuli: audio, visual, and motion. Each stimulus is applied alone or in combination with the others. The acquired signals are processed by the proposed method. The classification results in an accuracy of approximately 63%. We conclude that the heuristic approach using the HS algorithm on the BCI is beneficial for EEG signal analysis.
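The harmony-search element can be sketched generically as a small loop that improvises candidate solutions (here, binary EEG channel subsets) and keeps the best-scoring ones. The HMS/HMCR/PAR values and the toy fitness function are illustrative assumptions, not the paper's configuration; in practice the fitness would be a classifier's cross-validated accuracy.

```python
# Minimal sketch: harmony search over binary channel-selection vectors (assumed parameters).
import numpy as np

def harmony_search(fitness, n_bits, hms=10, hmcr=0.9, par=0.3, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    memory = rng.integers(0, 2, size=(hms, n_bits))            # harmony memory
    scores = np.array([fitness(h) for h in memory], dtype=float)
    for _ in range(iters):
        new = np.empty(n_bits, dtype=int)
        for j in range(n_bits):
            if rng.random() < hmcr:                            # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                         # pitch adjustment = bit flip
                    new[j] ^= 1
            else:                                              # random improvisation
                new[j] = rng.integers(0, 2)
        s = fitness(new)
        worst = scores.argmin()
        if s > scores[worst]:                                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    return memory[scores.argmax()], scores.max()

# toy fitness: prefer ~16 of 64 channels (stands in for classifier accuracy)
best, score = harmony_search(lambda h: -abs(h.sum() - 16), n_bits=64)
print(best.sum(), score)
```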

Doctoral dissertations on the topic "Audio-EEG analysis"

1

Katthi, Jaswanth Reddy. "Deep Learning Methods For Audio EEG Analysis". Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5734.

Abstract:
The perception of speech and audio is one of the defining features of humans. Much of the brain’s underlying processes as we listen to acoustic signals are unknown, and significant research efforts are needed to unravel them. The non-invasive recordings capturing the brain activations like electroencephalogram (EEG) and magnetoencephalogram (MEG) are commonly deployed to capture the brain responses to auditory stimuli. But these non-invasive techniques capture artifacts and signals not related to the stimuli, which distort the stimulus-response analysis. The effect of the artifacts becomes more evident for naturalistic stimuli. To reduce the inter-subject redundancies and amplify the components related to the stimuli, the EEG responses from multiple subjects listening to a common naturalistic stimulus need to be normalized. The currently used normalization and pre-processing methods are the canonical correlation analysis (CCA) models and the temporal response function based forward/backward models. However, these methods assume a simplistic linear relationship between the audio features and the EEG responses and therefore, may not alleviate the recording artifacts and interfering signals in EEG. This thesis proposes novel methods using machine learning advances to improve the audio-EEG analysis. We propose a deep learning framework for audio-EEG analysis in intra-subject and inter-subject settings. The deep learning based intra-subject analysis methods are trained with a Pearson correlation-based cost function between the stimuli and EEG responses. This model allows the transformation of the audio and EEG features that are maximally correlated. The correlation-based cost function can be optimized with the learnable parameters of the model trained using standard gradient descent-based methods. This model is referred to as the deep CCA (DCCA) model. Several experiments are performed on the EEG data recorded when the subjects are listening to naturalistic speech and music stimuli. We show that the deep methods obtain better representations than the linear methods and results in statistically significant improvements in correlation values. Further, we propose a neural network model with shared encoders that align the EEG responses from multiple subjects listening to the same audio stimuli. This inter-subject model boosts the signals common across the subjects and suppresses the subject-specific artifacts. The impact of improving stimulus-response correlations are highlighted based on multi-subject EEG data from speech and music tasks. This model is referred to as the deep multi-way canonical correlation analysis (DMCCA). The combination of inter-subject analysis using DMCCA and intra-subject analysis using DCCA is shown to provide the best stimulus-response in audio-EEG experiments. We highlight how much of the audio signal can be recovered purely from the non-invasive EEG recordings with modern machine learning methods, and conclude with a discussion on future challenges in audio-EEG analysis.
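A minimal PyTorch sketch of the correlation-driven training idea summarized above: two small encoders map audio features and EEG responses into a shared space and are trained to maximize their Pearson correlation. This is a stand-in for illustration only; the layer sizes and feature dimensions are assumptions, not the thesis architecture.

```python
# Minimal sketch: correlation-based (DCCA-style) training of paired audio/EEG encoders.
import torch
import torch.nn as nn

def pearson_loss(x, y, eps=1e-8):
    """Negative mean Pearson correlation across latent dimensions; x, y: (T, d)."""
    xc = x - x.mean(dim=0)
    yc = y - y.mean(dim=0)
    corr = (xc * yc).sum(dim=0) / (xc.norm(dim=0) * yc.norm(dim=0) + eps)
    return -corr.mean()

audio_enc = nn.Sequential(nn.Linear(21, 32), nn.ReLU(), nn.Linear(32, 4))
eeg_enc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(list(audio_enc.parameters()) + list(eeg_enc.parameters()), lr=1e-3)

audio_feats = torch.randn(1000, 21)    # e.g. stimulus envelope/spectral features over time
eeg_feats = torch.randn(1000, 128)     # e.g. multi-channel EEG features over time
for _ in range(100):
    opt.zero_grad()
    loss = pearson_loss(audio_enc(audio_feats), eeg_enc(eeg_feats))
    loss.backward()
    opt.step()
print(float(-loss))  # mean stimulus-response correlation in the learned space
```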

Book chapters on the topic "Audio-EEG analysis"

1

Zucco, Chiara, Barbara Calabrese, and Mario Cannataro. "Methods and Techniques for Recognizing Emotions: Sentiment Analysis and Biosignal Analysis with Applications in Neurosciences". In Future Trends of HPC in a Disruptive Scenario. IOS Press, 2019. http://dx.doi.org/10.3233/apc190018.

Abstract:
In recent years an extensive use of social networking platforms has been registered, coupled with the increasing popularity of wearable devices, which is expected to double within the next four years (Source: Gartner August 2017). Physical tracking activities and the publishing of peoples own images, emoticons, audio files and texts on social platforms have become daily practices, with an increase in the availability of data, and therefore potential information, for each user. To extract knowledge from this data, new computational technologies such as Sentiment Analysis (SA) and Affective Computing (AC) have found applications in fields such as marketing, politics, social sciences, cognitive sciences, medical sciences, etc. Such technologies aim to automatically extract emotions from heterogeneous data sources such as text, images, audio, video, and a plethora of biosignals such as voice, facial expression, electroencephalographic signals (EEG), gestures, etc. and find application in various fields. The paper introduces main concepts of Sentiment Analysis and Affective Computing and presents an overview of the primary methodologies and techniques used to recognize emotions from the analysis of various data sources such as text, images, voice signals, EEG. Finally, the paper discusses various applications of those techniques to neurosciences and underlines the high-performance issues of SA and AC, as well as challenges and future trends.
2

Rastogi, Rohit, Devendra Kumar Chaturvedi, and Mayank Gupta. "Use of IoT and Different Biofeedback to Measure TTH". In Handbook of Research on Disease Prediction Through Data Analytics and Machine Learning, 486–525. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-2742-9.ch025.

Abstract:
This chapter applied the random sampling in selection of the subjects suffering with headache, and care was taken that they ensure to fulfill the International Headache Society criteria. Subjects under consideration were assigned the two groups of GSR-integrated audio-visual feedback, GSR (audio-visual)- and EMG (audio-visual)-integrated feedback groups. In 10 sessions, the subjects experienced the GSR and EMG BF therapy for 15 minutes. Twenty subjects were subjected to EEG therapy. The variables for stress (pain) and SF-36 (quality of life) scores were recorded at starting point, 30 days, and 90 days after the starting of GSR and EMG-BF therapy. To reduce the anxiety and depression in day-to-day routine, the present research work is shown as evidence in favor of the mindful meditation. The physical, mental, and total scores increased over the time duration of SF-36 scores after 30- and 90-days recordings (p<0.05). Intergroup analysis has demonstrated the improvement. EMG-audio visual biofeedback group also showed highest improvement in SF-36 scores at first and third month follow up. EEG measures the Alpha waves for the subjects after meditation. GSR, EMG, and EEG-integrated auditory-visual biofeedback are efficient in solution of stress due to TTH with most advantage seen.
3

Jeswani, Jahanvi, Praveen Kumar Govarthan, Abirami Selvaraj, Amalin Prince, John Thomas, Mohanavelu Kalathe, Sreeraj V S, Venkat Subramaniam, and Jac Fredo Agastinose Ronickom. "Low Valence Low Arousal Stimuli: An Effective Candidate for EEG-Based Biometrics Authentication System". In Caring is Sharing – Exploiting the Value in Data for Health and Innovation. IOS Press, 2023. http://dx.doi.org/10.3233/shti230114.

Abstract:
Electroencephalography (EEG) has recently gained popularity in user authentication systems since it is unique and less impacted by fraudulent interceptions. Although EEG is known to be sensitive to emotions, understanding the stability of brain responses to EEG-based authentication systems is challenging. In this study, we compared the effect of different emotion stimuli for the application in the EEG-based biometrics system (EBS). Initially, we pre-processed audio-visual evoked EEG potentials from the ‘A Database for Emotion Analysis using Physiological Signals’ (DEAP) dataset. A total of 21 time-domain and 33 frequency-domain features were extracted from the considered EEG signals in response to Low valence Low arousal (LVLA) and High valence low arousal (HVLA) stimuli. These features were fed as input to an XGBoost classifier to evaluate the performance and identify the significant features. The model performance was validated using leave-one-out cross-validation. The pipeline achieved high performance with multiclass accuracy of 80.97% and a binary-class accuracy of 99.41% with LVLA stimuli. In addition, it also achieved recall, precision and F-measure scores of 80.97%, 81.58% and 80.95%, respectively. For both the cases of LVLA and LVHA, skewness was the stand-out feature. We conclude that boring stimuli (negative experience) that fall under the LVLA category can elicit a more unique neuronal response than its counterpart the LVHA (positive experience). Thus, the proposed pipeline involving LVLA stimuli could be a potential authentication technique in security applications.
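A hedged sketch of the evaluation protocol described above: an XGBoost classifier on pre-extracted time- and frequency-domain features with leave-one-out validation. The feature matrix and labels below are synthetic placeholders, not the DEAP-derived data.

```python
# Minimal sketch: XGBoost on EEG feature vectors with leave-one-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(5)
X = rng.standard_normal((40, 54))        # 40 trials x (21 time + 33 frequency) features (toy)
y = rng.integers(0, 2, size=40)          # binary labels (toy)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

print(f"leave-one-out accuracy: {accuracy_score(y, preds):.3f}")
```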
4

Oveisi, Farid, Shahrzad Oveisi, Abbas Efranian, and Ioannis Patras. "Nonlinear Independent Component Analysis for EEG-Based Brain-Computer Interface Systems". In Independent Component Analysis for Audio and Biosignal Applications. InTech, 2012. http://dx.doi.org/10.5772/39092.


Conference papers on the topic "Audio-EEG analysis"

1

Manochitra, S., and Ravinthiran Partheepan. "EEG Analysis of Human Perception based on Video-Audio Stimuli". In 2021 Smart Technologies, Communication and Robotics (STCR). IEEE, 2021. http://dx.doi.org/10.1109/stcr51658.2021.9588981.

2

Abtahi, Farnaz, Tony Ro, Wei Li, and Zhigang Zhu. "Emotion Analysis Using Audio/Video, EMG and EEG: A Dataset and Comparison Study". In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2018. http://dx.doi.org/10.1109/wacv.2018.00008.

3

Ryabinin, Konstantin, Svetlana Chuprina, and Ivan Labutin. "Ontology-Driven Toolset for Audio-Visual Stimuli Representation in EEG-Based BCI Research". In 31st International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-223-234.

Abstract:
In the last decade, the recent advances in software and hardware facilitate the increase of interest in conducting experiments in the field of neurosciences, especially related to human-machine interaction. There are many mature and popular platforms leveraging experiments in this area including systems for representing the stimuli. However, these solutions often lack high-level adaptability to specific conditions, specific experiment setups, and third-party software and hardware, which may be involved in the experimental pipelines. This paper presents an adaptable solution based on ontology engineering that allows creating and tuning the EEG-based brain-computer interfaces. This solution relies on the ontology-driven SciVi visual analytics platform developed earlier. In the present work, we introduce new capabilities of SciVi, which enable organizing the pipeline for neuroscience-related experiments, including the representation of audio-visual stimuli, as well as retrieving, processing, and analyzing the EEG data. The distinctive feature of our approach is utilizing the ontological description of both the neural interface and processing tools used. This increases the semantic power of experiments, simplifies the reuse of pipeline parts between different experiments, and allows automatic distribution of data acquisition, storage, processing, and visualization on different computing nodes in the network to balance the computation load and to allow utilizing various hardware platforms, EEG devices, and stimuli controllers.
4

Du, Ruoyu, and Hyo Jong Lee. "Power spectral performance analysis of EEG during emotional auditory experiment". In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009758.

5

Bo, Hongjian, Haifeng Li, Lin Ma, and Bo Yu. "A Constant Q Transform based approach for robust EEG spectral analysis". In 2014 International Conference on Audio, Language and Image Processing (ICALIP). IEEE, 2014. http://dx.doi.org/10.1109/icalip.2014.7009757.

6

Ribeiro, Estela, and Carlos Eduardo Thomaz. "A multivariate statistical analysis of EEG signals for differentiation of musicians and non-musicians". In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4442.

Abstract:
It is possible to reveal whether a subject received musical training through the neural activation patterns induced in response to music listening. We are particularly interested in analyzing the brain data on a global level, considering its activity registered in electroencephalogram electrodes signals. Our experiments results, with 13 musicians and 12 non-musicians who listened the song Hungarian Dance No 5 from Johannes Brahms, have shown that is possible to differentiate musicians and non-musicians with high classification accuracy (88%). Given this multivariate statistical framework, it has also been possible to highlight the most expressive and discriminant changes in the participants brain according to the acoustic features extracted from the audio.

Reports on the topic "Audio-EEG analysis"

1

Hamlin, Alexandra, Erik Kobylarz, James Lever, Susan Taylor, and Laura Ray. Assessing the feasibility of detecting epileptic seizures using non-cerebral sensor. Engineer Research and Development Center (U.S.), December 2021. http://dx.doi.org/10.21079/11681/42562.

Abstract:
This paper investigates the feasibility of using non-cerebral, time-series data to detect epileptic seizures. Data were recorded from fifteen patients (7 male, 5 female, 3 not noted, mean age 36.17 yrs), five of whom had a total of seven seizures. Patients were monitored in an inpatient setting using standard video electroencephalography (vEEG), while also wearing sensors monitoring electrocardiography, electrodermal activity, electromyography, accelerometry, and audio signals (vocalizations). A systematic and detailed study was conducted to identify the sensors and the features derived from the non-cerebral sensors that contribute most significantly to separability of data acquired during seizures from non-seizure data. Post-processing of the data using linear discriminant analysis (LDA) shows that seizure data are strongly separable from non-seizure data based on features derived from the signals recorded. The mean area under the receiver operator characteristic (ROC) curve for each individual patient that experienced a seizure during data collection, calculated using LDA, was 0.9682. The features that contribute most significantly to seizure detection differ for each patient. The results show that a multimodal approach to seizure detection using the specified sensor suite is promising in detecting seizures with both sensitivity and specificity. Moreover, the study provides a means to quantify the contribution of each sensor and feature to separability. Development of a non-electroencephalography (EEG) based seizure detection device would give doctors a more accurate seizure count outside of the clinical setting, improving treatment and the quality of life of epilepsy patients.
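The separability analysis described above can be illustrated with a small scikit-learn sketch: a linear discriminant trained on windowed sensor features and scored by the area under the ROC curve. The synthetic feature matrix and class balance below are assumptions, not the study's data.

```python
# Minimal sketch: LDA separability of seizure vs. non-seizure feature windows, scored by ROC AUC.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0.0, 1, (500, 12)),      # non-seizure windows (toy features)
               rng.normal(1.0, 1, (50, 12))])      # seizure windows (toy features)
y = np.r_[np.zeros(500), np.ones(50)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
scores = lda.decision_function(X_te)               # continuous separability scores
print(f"ROC AUC: {roc_auc_score(y_te, scores):.3f}")
```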
