Journal articles on the topic 'Audio-EEG analysis'

Consult the top 50 journal articles for your research on the topic 'Audio-EEG analysis.'


1

Reddy Katthi, Jaswanth, and Sriram Ganapathy. "Deep Correlation Analysis for Audio-EEG Decoding." IEEE Transactions on Neural Systems and Rehabilitation Engineering 29 (2021): 2742–53. http://dx.doi.org/10.1109/tnsre.2021.3129790.

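Deep correlation analysis extends classical canonical correlation analysis (CCA) with neural networks; as background, a plain linear CCA between an audio feature and multichannel EEG can be sketched in NumPy. The synthetic data, variable names, and regularization below are illustrative, not taken from the paper.

```python
import numpy as np

def linear_cca(X, Y, reg=1e-6):
    """Top canonical correlation between data matrices X (n, dx) and Y (n, dy).

    Whiten each view via the inverse square root of its covariance, then take
    the largest singular value of the whitened cross-covariance.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # symmetric positive-definite inverse matrix square root
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    s = np.linalg.svd(T, compute_uv=False)
    return float(s[0])  # top canonical correlation, in [0, 1]

# Toy check: a shared latent "stimulus" drives one column of each view
rng = np.random.default_rng(0)
z = rng.standard_normal(2000)
audio = np.column_stack([z, rng.standard_normal(2000)])
eeg = np.column_stack([0.8 * z + 0.2 * rng.standard_normal(2000),
                       rng.standard_normal(2000),
                       rng.standard_normal(2000)])
rho = linear_cca(audio, eeg)
```

The deep variant replaces the linear projections with learned nonlinear encoders but optimizes essentially the same correlation objective.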
2

Geng, Bingrui, Ke Liu, and Yiping Duan. "Human Perception Intelligent Analysis Based on EEG Signals." Electronics 11, no. 22 (November 17, 2022): 3774. http://dx.doi.org/10.3390/electronics11223774.

Abstract:
The research on brain cognition provides theoretical support for intelligence and cognition in computational intelligence, and it is further applied in various fields of scientific and technological innovation, production and life. Use of the 5G network and intelligent terminals has also brought diversified experiences to users. This paper studies human perception and cognition in the quality of experience (QoE) through audio noise. It proposes a novel method to study the relationship between human perception and audio noise intensity using electroencephalogram (EEG) signals. This kind of physiological signal can be used to analyze the user’s cognitive process through transformation and feature calculation, so as to overcome the deficiency of traditional subjective evaluation. Experimental and analytical results show that the EEG signals in frequency domain can be used for feature learning and calculation to measure changes in user-perceived audio noise intensity. In the experiment, the user’s noise tolerance limit for different audio scenarios varies greatly. The noise power spectral density of soothing audio is 0.001–0.005, and the noise spectral density of urgent audio is 0.03. The intensity of information flow in the corresponding brain regions increases by more than 10%. The proposed method explores the possibility of using EEG signals and computational intelligence to measure audio perception quality. In addition, the analysis of the intensity of information flow in different brain regions invoked by different tasks can also be used to study the theoretical basis of computational intelligence.
3

Dasenbrock, Steffen, Sarah Blum, Stefan Debener, Volker Hohmann, and Hendrik Kayser. "A Step towards Neuro-Steered Hearing Aids: Integrated Portable Setup for Time-Synchronized Acoustic Stimuli Presentation and EEG Recording." Current Directions in Biomedical Engineering 7, no. 2 (October 1, 2021): 855–58. http://dx.doi.org/10.1515/cdbme-2021-2218.

Abstract:
Aiming to provide a portable research platform to develop algorithms for neuro-steered hearing aids, a joint hearing aid - EEG measurement setup was implemented in this work. The setup combines the miniaturized electroencephalography sensor technology cEEGrid with a portable hearing aid research platform - the Portable Hearing Laboratory. The different components of the system are connected wirelessly, using the lab streaming layer framework for synchronization of audio and EEG data streams. Our setup was shown to be suitable for simultaneous recording of audio and EEG signals used in a pilot study (n=5) to perform an auditory Oddball experiment. The analysis showed that the setup can reliably capture typical event-related potential responses. Furthermore, linear discriminant analysis was successfully applied for single-trial classification of P300 responses. The study showed that time-synchronized audio and EEG data acquisition is possible with the Portable Hearing Laboratory research platform.
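Lab-streaming-layer setups like the one above time-stamp both streams on a shared clock; the subsequent alignment step, putting one stream onto the other's sampling grid, can be sketched with linear interpolation. The stream rates, names, and toy envelope below are illustrative, not the authors' code.

```python
import numpy as np

def align_to_eeg_clock(eeg_t, aud_t, aud_x):
    """Resample an audio-feature stream onto the EEG sample times.

    Both streams are assumed to carry timestamps on a common clock (as LSL
    provides); linear interpolation puts the audio feature on the EEG grid.
    """
    return np.interp(eeg_t, aud_t, aud_x)

# EEG sampled at 500 Hz, audio envelope at 125 Hz, over the same 2 s window
eeg_t = np.linspace(0.0, 2.0, 1001)
aud_t = np.linspace(0.0, 2.0, 251)
aud_x = np.sin(2 * np.pi * 3 * aud_t)   # toy 3 Hz amplitude envelope
env_on_eeg = align_to_eeg_clock(eeg_t, aud_t, aud_x)
```

In a real recording the timestamps are irregular and clock drift must be corrected first; `np.interp` handles the irregular spacing without change.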
4

Lee, Yi Yeh, Aaron Raymond See, Shih Chung Chen, and Chih Kuo Liang. "Effect of Music Listening on Frontal EEG Asymmetry." Applied Mechanics and Materials 311 (February 2013): 502–6. http://dx.doi.org/10.4028/www.scientific.net/amm.311.502.

Abstract:
Frontal EEG asymmetry has been recognized as a useful method for determining emotional states and psychophysiological conditions. For the current research, resting prefrontal EEG was measured before, during, and after listening to a sad music video. Data were recorded and analyzed using a wireless EEG module, with digital results sent via Bluetooth to a remote computer for further analysis. The relative alpha power was used to determine EEG asymmetry indexes. The results indicated that even if a person had a stronger right hemisphere in the initial phase, a significant shift first occurred during audio-video stimulation, followed by a further inclination toward left EEG asymmetry as measured after the stimulation. Furthermore, the current research was able to use prefrontal EEG to reproduce results that have mostly been measured at the frontal lobe, and to obtain significant changes using combined audio and video stimulation, in contrast to previous experiments that used audio stimulation alone. In the future, more experiments can be conducted to obtain a better understanding of a person’s appreciation of or dislike toward a certain video, commercial, or other multimedia content through the aid of a convenient EEG module.
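The asymmetry index used in work of this kind is conventionally a difference of log alpha power between homologous right and left frontal electrodes. A minimal sketch on synthetic signals; the sampling rate, amplitudes, and electrode labels are illustrative, not the paper's settings:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean spectral power of x in [lo, hi] Hz via a plain FFT periodogram."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    sel = (f >= lo) & (f <= hi)
    return p[sel].mean()

def alpha_asymmetry(left, right, fs):
    """ln(right alpha) - ln(left alpha).

    Positive values mean relatively more right-hemisphere alpha, which is
    conventionally read as relatively greater LEFT cortical activation,
    since alpha power is inversely related to activation.
    """
    return np.log(band_power(right, fs, 8, 13)) - np.log(band_power(left, fs, 8, 13))

fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(1)
left = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)   # Fp1-like
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)  # Fp2-like
ai = alpha_asymmetry(left, right, fs)
```

With four times the alpha power on the right, the index comes out clearly positive (about ln 4).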
5

Hadjidimitriou, Stelios K., Asteris I. Zacharakis, Panagiotis C. Doulgeris, Konstantinos J. Panoulas, Leontios J. Hadjileontiadis, and Stavros M. Panas. "Revealing Action Representation Processes in Audio Perception Using Fractal EEG Analysis." IEEE Transactions on Biomedical Engineering 58, no. 4 (April 2011): 1120–29. http://dx.doi.org/10.1109/tbme.2010.2047016.

6

Reshetnykov, Denys S. "EEG Analysis of Person Familiarity with Audio-Video Data Assessing Task." Upravlâûŝie sistemy i mašiny, no. 4 (276) (August 2018): 70–83. http://dx.doi.org/10.15407/usim.2018.04.070.

7

Kumar, Himanshu, Subha D. Puthankattil, and Ramakrishnan Swaminathan. "Analysis of EEG Response for Audio-Visual Stimuli in Frontal Electrodes at Theta Frequency Band Using the Topological Features." Biomedical Sciences Instrumentation 57, no. 2 (April 1, 2021): 333–39. http://dx.doi.org/10.34107/yhpn9422.04333.

Abstract:
Emotions are a fundamental intellectual capacity of humans, characterized by perception, attention, and behavior, and expressed psychophysiologically. Studies have analyzed electroencephalogram (EEG) responses from various lobes of the brain across all frequency bands. In this work, the EEG response of the theta band in the frontal lobe is analyzed by extracting topological features during audio-visual stimulation. The study is carried out using EEG signals from a public-domain database. In this method, the signals are projected into a higher-dimensional space to find their geometrical properties. Features, namely the center of gravity and the perimeter of the boundary space, are used to quantify changes in the geometrical properties of the signal, and the features are subjected to the Wilcoxon rank-sum test for statistical significance. Different electrodes in the frontal region under the same audio-visual stimulus showed similar variations in the geometry of the boundary in higher-dimensional space. Further, the electrodes Fp1 and F3 showed statistical significance (p < 0.05) in differentiating arousal states, and the Fp1 electrode showed statistical significance in differentiating the valence emotional state. Thus, the topological features extracted from the frontal electrodes in the theta band can differentiate arousal and valence emotional states and may be of significant clinical relevance.
8

Ribeiro, Estela, and Carlos Eduardo Thomaz. "A Whole Brain EEG Analysis of Musicianship." Music Perception 37, no. 1 (September 1, 2019): 42–56. http://dx.doi.org/10.1525/mp.2019.37.1.42.

Abstract:
The neural activation patterns provoked in response to music listening can reveal whether a subject did or did not receive music training. In the current exploratory study, we have approached this two-group (musicians and nonmusicians) classification problem through a computational framework composed of the following steps: acoustic feature extraction; acoustic feature selection; trigger selection; EEG signal processing; and multivariate statistical analysis. We are particularly interested in analyzing the brain data on a global level, considering its activity registered in electroencephalogram (EEG) signals at a given time instant. Our experiment's results—with 26 volunteers (13 musicians and 13 nonmusicians) who listened to Hungarian Dance No. 5 by Johannes Brahms—have shown that it is possible to linearly differentiate musicians and nonmusicians, with classification accuracies that range from 69.2% (test set) to 93.8% (training set), despite the limited sample sizes available. Additionally, given the whole-brain vector navigation method described and implemented here, our results suggest that it is possible to highlight the most expressive and discriminant changes in the participants' brain activity patterns depending on the acoustic feature extracted from the audio.
9

Gao, Chenguang, Hirotaka Uchitomi, and Yoshihiro Miyake. "Influence of Multimodal Emotional Stimulations on Brain Activity: An Electroencephalographic Study." Sensors 23, no. 10 (May 16, 2023): 4801. http://dx.doi.org/10.3390/s23104801.

Abstract:
This study aimed to reveal the influence of emotional valence and sensory modality on neural activity in response to multimodal emotional stimuli using scalp EEG. Twenty healthy participants completed an emotional multimodal stimulation experiment for three stimulus modalities (audio, visual, and audio-visual), all derived from the same video source with two emotional components (pleasant or unpleasant), and EEG data were collected under six experimental conditions and one resting state. We analyzed power spectral density (PSD) and event-related potential (ERP) components in response to the multimodal emotional stimuli, for spectral and temporal analysis. The PSD results showed that single-modality (audio-only/visual-only) emotional stimulation differed from multimodal (audio-visual) stimulation over a wide range of brain regions and frequency bands, due to changes in modality rather than changes in emotional degree. The most pronounced N200-to-P300 potential shifts occurred in monomodal rather than multimodal emotional stimulation. This study suggests that emotional saliency and sensory processing efficiency play a significant role in shaping neural activity during multimodal emotional stimulation, with the sensory modality being more influential in PSD. These findings contribute to our understanding of the neural mechanisms involved in multimodal emotional stimulation.
10

Lee, Tae-Ju, Seung-Min Park, and Kwee-Bo Sim. "Electroencephalography Signal Grouping and Feature Classification Using Harmony Search for BCI." Journal of Applied Mathematics 2013 (2013): 1–9. http://dx.doi.org/10.1155/2013/754539.

Abstract:
This paper presents a heuristic method for electroencephalography (EEG) grouping and feature classification using harmony search (HS) to improve the accuracy of a brain-computer interface (BCI) system. EEG, a noninvasive BCI method, uses many electrodes on the scalp, and the large number of electrodes makes the resulting analysis difficult. In addition, traditional EEG analysis cannot handle multiple stimuli, and classification methods based on the EEG signal have low accuracy. To solve these problems, we use a heuristic approach to reduce the complexity of the multichannel and classification problems. In this study, we build a grouping of stimuli using the HS algorithm. Then, features from common spatial patterns are classified by an HS classifier. To validate the proposed method, we performed experiments using 64-channel EEG equipment. The subjects were presented with three kinds of stimuli: audio, visual, and motion. Each stimulus was applied alone or in combination with the others. The acquired signals were processed by the proposed method, and classification reached an accuracy of approximately 63%. We conclude that the heuristic approach using the HS algorithm is beneficial for EEG signal analysis in BCI.
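Harmony search itself is a simple population-based metaheuristic. A minimal, generic implementation applied to a toy objective (not to EEG grouping; all parameter values are illustrative) shows the improvisation loop the paper builds on:

```python
import random

def harmony_search(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimize f over [lo, hi]^dim with a basic harmony search.

    Keep a memory of candidate solutions; improvise a new one note-by-note
    (pick from memory with probability hmcr, optionally pitch-adjust by
    +-bw, otherwise draw at random); replace the worst member whenever the
    new harmony is better.
    """
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    cost = [f(h) for h in mem]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = mem[rng.randrange(hms)][d]          # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)           # pitch adjustment
            else:
                x = rng.uniform(lo, hi)                 # random selection
            new.append(min(hi, max(lo, x)))             # clamp to bounds
        c = f(new)
        worst = max(range(hms), key=cost.__getitem__)
        if c < cost[worst]:
            mem[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return mem[best], cost[best]

sphere = lambda v: sum(x * x for x in v)  # toy objective, minimum 0 at origin
best, val = harmony_search(sphere, dim=4, lo=-5.0, hi=5.0)
```

In the paper's setting, the "notes" would encode stimulus group assignments or classifier parameters rather than continuous coordinates.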
11

Zhou, Tie Hua, Wenlong Liang, Hangyu Liu, Ling Wang, Keun Ho Ryu, and Kwang Woo Nam. "EEG Emotion Recognition Applied to the Effect Analysis of Music on Emotion Changes in Psychological Healthcare." International Journal of Environmental Research and Public Health 20, no. 1 (December 26, 2022): 378. http://dx.doi.org/10.3390/ijerph20010378.

Abstract:
Music therapy is increasingly being used to promote physical health. Emotion semantic recognition based on electroencephalogram (EEG) signals is more objective and provides direct awareness of the real emotional state. We therefore propose a music therapy method that performs emotion semantic matching between the EEG signal and the music audio signal, which can improve the reliability of emotional judgments and, furthermore, deeply mine the potential correlations between music and emotions. Our proposed EER model (EEG-based Emotion Recognition Model) could identify 20 types of emotions based on 32 EEG channels, with average recognition accuracies above 90% and 80%, respectively. Our proposed music-based emotion classification model (MEC model) could classify eight typical emotion types of music based on nine music feature combinations, with an average classification accuracy above 90%. In addition, the semantic mapping was analyzed according to the influence of different music types on emotional changes from different perspectives based on the two models. The results showed that joyful music videos could shift fear, disgust, mania, and trust emotions toward surprise or intimacy, while sad music videos could shift intimacy toward fear.
12

Andrusiak, V., and V. Kravchenko. "Comparative EEG analysis of learning effectiveness using paper books, e-books, and audio books." Bulletin of Taras Shevchenko National University of Kyiv. Series: Biology 74, no. 2 (2017): 39–46. http://dx.doi.org/10.17721/1728_2748.2017.74.39-46.

Abstract:
In this work, the peculiarities of reading comprehension from electronic devices, audio devices, and hard copies were studied through a comparative analysis of learning accuracy and the electrical activity of the brain while reading or listening to text. Eighty students took part in the research. They were offered two passages of text, from fiction and popular-scientific literature, presented in the form of an e-book, an MP3 audio file, and a printed copy. The level of comprehension and assimilation of the material was checked by testing based on the content of the text immediately after reading and again two weeks later. The comparative EEG analysis did not reveal significant differences in the spectral power of the studied ranges between reading a paper book and an e-book. Differences were found when listening to audiobooks compared to reading. In general, the effectiveness of text learning does not depend on the way it is presented; sex and individual traits, such as preferred learning style and extraversion level, are more important.
13

Chen, Wei, and Guobin Wu. "A Multimodal Convolutional Neural Network Model for the Analysis of Music Genre on Children’s Emotions Influence Intelligence." Computational Intelligence and Neuroscience 2022 (August 29, 2022): 1–11. http://dx.doi.org/10.1155/2022/5611456.

Abstract:
This paper designs a multimodal convolutional neural network model for the intelligent analysis of the influence of music genres on children's emotions. Considering the diversity of music genre features in the audio power spectrogram, the Mel filtering method is used in the feature extraction stage: dimensional reduction of the Mel-filtered signal ensures effective retention of the genre attributes of the audio signal and deepens the differences between the features extracted from different genres. To reduce the input size and expand the model training scale, the audio power spectrogram obtained by feature extraction is cut into segments at the model input stage. The MSCN-LSTM consists of two modules: a multiscale convolutional-kernel neural network (MSCNN) and a long short-term memory (LSTM) network. The MSCNN network is used to extract EEG signal features, the LSTM network is used to extract the temporal characteristics of the eye-movement signal, and feature fusion is performed at the feature level. The multimodal signal yields higher emotion classification accuracy than the unimodal signal: the average accuracy of four-class emotion classification based on a 6-channel EEG signal and the children's multimodal signal reaches 97.94%. After pretraining with the MSD (Million Song Dataset), the model improved further and significantly: the accuracy of the Dense Inception network rose to 91.0% and 89.91% on the GTZAN and ISMIR2004 datasets, respectively, demonstrating its effectiveness and advancement.
14

Bischoff, M., H. Gebhardt, CR Blecker, K. Zentgraf, D. Vaitl, and G. Sammer. "EEG-guided fMRI-analysis reveals involvement of the superior temporal sulcus in audio-visual binding." NeuroImage 47 (July 2009): S130. http://dx.doi.org/10.1016/s1053-8119(09)71270-4.

15

Masood, Naveen, and Humera Farooq. "Investigating EEG Patterns for Dual-Stimuli Induced Human Fear Emotional State." Sensors 19, no. 3 (January 26, 2019): 522. http://dx.doi.org/10.3390/s19030522.

Abstract:
Most electroencephalography (EEG) based emotion recognition systems make use of videos and images as stimuli. Few have used sounds, and even fewer studies involve self-induced emotions. Furthermore, most studies rely on a single stimulus to evoke emotions. The question of whether different stimuli for the same emotion elicitation generate any subject-independent correlations remains unanswered. This paper introduces a dual-modality emotion elicitation paradigm to investigate whether emotions induced with different stimuli can be classified. A method based on common spatial patterns (CSP) and linear discriminant analysis (LDA) is proposed to analyze human brain signals for fear emotions evoked with two different stimuli: self-induced emotional imagery is one of the considered stimuli, while audio/video clips are the other. The method extracts features with the CSP algorithm, and LDA performs the classification. To investigate the associated EEG correlations, a spectral analysis was performed. To further improve performance, CSP was compared with other, regularized techniques. Critical EEG channels are identified based on the spatial filter weights. To the best of our knowledge, our work provides the first assessment of EEG correlations for self-induced versus video-induced emotions captured with a commercial-grade EEG device.
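The CSP step of such a pipeline can be sketched as a generalized eigendecomposition of the two class covariances. The toy data below is synthetic and illustrative (one channel more active per class); real pipelines band-pass filter first and feed the log-variances of the filtered signals to LDA.

```python
import numpy as np

def csp_filters(trials_a, trials_b, reg=1e-9):
    """Common spatial patterns from two classes of EEG trials.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Whitens the composite covariance Ca + Cb, then diagonalizes the whitened
    Ca; the eigenvectors at the two extremes maximize variance for one class
    while minimizing it for the other.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # trace-normalize each trial covariance
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    w, V = np.linalg.eigh(Ca + Cb + reg * np.eye(Ca.shape[0]))
    P = np.diag(1.0 / np.sqrt(w)) @ V.T    # whitening transform
    w2, U = np.linalg.eigh(P @ Ca @ P.T)   # eigenvalues ascending, in (0, 1)
    W = U.T @ P                            # rows = spatial filters
    return W, w2

# Toy data: channel 0 is strong in class A, channel 2 in class B
rng = np.random.default_rng(2)
a = rng.standard_normal((30, 3, 200)) * np.array([3.0, 1.0, 1.0])[None, :, None]
b = rng.standard_normal((30, 3, 200)) * np.array([1.0, 1.0, 3.0])[None, :, None]
W, eig = csp_filters(a, b)
```

The last row of `W` should pass much more variance for class-A trials than class-B trials, and the first row the reverse.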
16

Sokhadze, E. M., B. Hillard, M. Eng, A. S. El-Baz, A. Tasman, and L. Sears. "ELECTROENCEPHALOGRAPHIC BIOFEEDBACK IMPROVES FOCUSED ATTENTION IN ATTENTION DEFICIT/HYPERACTIVITY DISORDER." Bulletin of Siberian Medicine 12, no. 2 (April 28, 2013): 182–94. http://dx.doi.org/10.20538/1682-0363-2013-2-182-194.

Abstract:
EEG biofeedback (so-called neurofeedback) is considered an efficacious treatment for ADHD. We propose that operant conditioning of the EEG in neurofeedback training mode, aimed at mitigating inattention and low arousal in ADHD, will be accompanied by changes in the relative power of the EEG bands. The patients were 18 children diagnosed with ADHD. The neurofeedback protocol used to train them (“Focus/Alertness” by Peak Achievement Trainer, Neurotek, KY) has a focused-attention training procedure which, according to its specifications, represents wide-band EEG amplitude suppression training. Quantitative EEG analysis was completed on each of twelve 25-minute sessions to determine the relative power of each EEG band of interest throughout each session, and from the first session to the last. Additional statistical analysis determined significant changes in relative power within sessions (from minute 1 to minute 25) and between sessions (from session 1 to session 12) for an individual patient. We analyzed the relative power of Theta, Alpha, Low and High Beta, and the Theta/Alpha, Theta/Beta, Theta/Low Beta, and Theta/High Beta ratios, as well as the relationship between the “Focus” measure and changes in the relative power of the above EEG rhythms and their ratios. Additional secondary measures of patients' post-neurofeedback outcomes were assessed using an audio-visual selective attention test (IVA + Plus) and behavioral evaluation scores from the Aberrant Behavior Checklist. We found that, as expected, the Theta/Low Beta and Theta/Alpha ratios decreased significantly from session 1 to session 12 and from minute 1 to minute 25 within sessions. The “Focus” measure of the protocol showed a high negative correlation with both the Theta/Alpha and Theta/Beta ratios. The findings regarding EEG changes resulting from self-regulation training, along with the behavioral evaluations, will help elucidate the neural mechanisms of neurofeedback aimed at improving focused attention and alertness in ADHD.
17

Browarska, Natalia, Aleksandra Kawala-Sterniuk, Jaroslaw Zygarlicki, Michal Podpora, Mariusz Pelc, Radek Martinek, and Edward Gorzelańczyk. "Comparison of Smoothing Filters’ Influence on Quality of Data Recorded with the Emotiv EPOC Flex Brain–Computer Interface Headset during Audio Stimulation." Brain Sciences 11, no. 1 (January 13, 2021): 98. http://dx.doi.org/10.3390/brainsci11010098.

Abstract:
Off-the-shelf, consumer-grade EEG equipment is nowadays becoming the first-choice equipment for many scientists when it comes to recording brain waves for research purposes. On one hand, this is perfectly understandable due to its availability and relatively low cost (especially in comparison to some clinical-level EEG devices); on the other hand, the quality of the recorded signals is gradually increasing, reaching levels that just a few years ago were offered only by much more expensive devices used in medicine for diagnostic purposes. In many cases, a well-designed filter and/or a well-thought-out signal acquisition method improves the signal quality to the point where it becomes good enough for further analysis, allowing valid scientific theories to be formulated and far-reaching conclusions about human brain operation to be drawn. In this paper, we propose a smoothing filter based upon the Savitzky–Golay filter for the purpose of EEG signal filtering. Additionally, we provide a summary and comparison of the applied filter against some other approaches to EEG data filtering. All the analyzed signals were acquired, using Emotiv EPOC Flex equipment, from subjects performing visually involving high-concentration tasks with audio stimuli.
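A Savitzky–Golay filter fits a low-order polynomial in a sliding window, smoothing broadband noise while preserving slow oscillatory structure. Applying SciPy's implementation to a noisy synthetic trace looks like the following; the window length, polynomial order, and toy signal are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

fs = 128
t = np.arange(fs * 2) / fs
rng = np.random.default_rng(3)
clean = np.sin(2 * np.pi * 6 * t)                  # toy 6 Hz "EEG" component
noisy = clean + 0.5 * rng.standard_normal(t.size)  # broadband measurement noise

# window_length and polyorder are the filter's two knobs: a longer window
# smooths more but distorts faster oscillations
smooth = savgol_filter(noisy, window_length=15, polyorder=3)

err_raw = float(np.mean((noisy - clean) ** 2))
err_sg = float(np.mean((smooth - clean) ** 2))
```

The mean squared error against the clean component drops noticeably after filtering, which is the kind of comparison the paper makes against other smoothing approaches.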
18

Cui, Gao Chao, and Jian Ting Cao. "P300 Oddball Task and Classification Based on Support Vector Machine for BCI System." Applied Mechanics and Materials 397-400 (September 2013): 2187–90. http://dx.doi.org/10.4028/www.scientific.net/amm.397-400.2187.

Abstract:
The P300 oddball task is the most popular paradigm in existing BCI systems. Recently, the use of auditory stimuli in the P300 oddball task has attracted interest, since it gives the BCI user more freedom. In this paper, we present a novel BCI paradigm using P300 and P100 responses. Since P300 and P100 responses occur in the frontal lobe and the temporal lobe, respectively, both responses can be evoked by an audio stimulus within a single task. The main advantage of our paradigm is that we can obtain two different kinds of responses in a single-trial EEG task. In the EEG data analysis, we first employ the multivariate empirical mode decomposition (MEMD) algorithm to extract the P300 and P100 components, and then employ a support vector machine (SVM) for feature classification.
19

Giroldini, William, Luciano Pederzoli, Marco Bilucaglia, Patrizio Caini, Alessandro Ferrini, Simone Melloni, Elena Prati, and Patrizio E. Tressoldi. "EEG correlates of social interaction at distance." F1000Research 4 (August 3, 2015): 457. http://dx.doi.org/10.12688/f1000research.6755.1.

Abstract:
This study investigated EEG correlates of social interaction at distance between twenty-five pairs of participants who were not connected by any traditional channels of communication. Each session involved the application of 128 stimulations separated by intervals of random duration ranging from 4 to 6 seconds. One of the pair received a one-second stimulation from a light signal produced by an arrangement of red LEDs, and a simultaneous 500 Hz sinusoidal audio signal of the same length. The other member of the pair sat in an isolated sound-proof room, such that any sensory interaction between the pair was impossible. An analysis of the event-related potentials associated with sensory stimulation using traditional averaging methods showed a distinct peak at approximately 300 ms, but only in the EEG activity of subjects who were directly stimulated. However, when a new algorithm based on the correlation between signals from all active electrodes was applied to the EEG activity, a weak but robust response was also detected in the EEG activity of the passive member of the pair, particularly within 9–10 Hz in the alpha range. Using the bootstrap method and Monte Carlo emulation, this signal was found to be statistically significant.
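The traditional averaging method mentioned above amounts to cutting stimulus-locked epochs, baseline-correcting each, and averaging. A sketch on synthetic data; the ~4.5 s inter-stimulus spacing mirrors the study's 4–6 s range, while everything else (sampling rate, bump shape, epoch window) is illustrative:

```python
import numpy as np

def erp_average(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Cut stimulus-locked epochs from a continuous 1-D EEG trace, subtract
    each epoch's pre-stimulus baseline, and average across epochs."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in onsets:
        if s - pre < 0 or s + post > eeg.size:
            continue                      # skip epochs falling off the record
        ep = eeg[s - pre:s + post].copy()
        ep -= ep[:pre].mean()             # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)

# Toy record: a fixed bump ~300 ms after each stimulus, buried in noise
fs = 250
rng = np.random.default_rng(4)
eeg = rng.standard_normal(fs * 120)
onsets = np.arange(fs, eeg.size - fs, int(4.5 * fs))
bump_t = np.arange(int(0.6 * fs)) / fs
bump = np.exp(-((bump_t - 0.3) ** 2) / (2 * 0.05 ** 2))
for s in onsets:
    eeg[s:s + bump.size] += bump

erp = erp_average(eeg, onsets, fs)
peak_ms = (int(np.argmax(erp)) - int(0.1 * fs)) * 1000 / fs
```

Averaging cancels the noise (which is uncorrelated with stimulus onset) while the time-locked response survives, so the peak latency of the averaged trace lands near the true 300 ms.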
20

Giroldini, William, Luciano Pederzoli, Marco Bilucaglia, Patrizio Caini, Alessandro Ferrini, Simone Melloni, Elena Prati, and Patrizio E. Tressoldi. "EEG correlates of social interaction at distance." F1000Research 4 (November 9, 2015): 457. http://dx.doi.org/10.12688/f1000research.6755.2.

21

Giroldini, William, Luciano Pederzoli, Marco Bilucaglia, Patrizio Caini, Alessandro Ferrini, Simone Melloni, Elena Prati, and Patrizio E. Tressoldi. "EEG correlates of social interaction at distance." F1000Research 4 (January 14, 2016): 457. http://dx.doi.org/10.12688/f1000research.6755.3.

22

Giroldini, William, Luciano Pederzoli, Marco Bilucaglia, Patrizio Caini, Alessandro Ferrini, Simone Melloni, Elena Prati, and Patrizio E. Tressoldi. "EEG correlates of social interaction at distance." F1000Research 4 (February 2, 2016): 457. http://dx.doi.org/10.12688/f1000research.6755.4.

23

Giroldini, William, Luciano Pederzoli, Marco Bilucaglia, Patrizio Caini, Alessandro Ferrini, Simone Melloni, Elena Prati, and Patrizio E. Tressoldi. "EEG correlates of social interaction at distance." F1000Research 4 (February 10, 2016): 457. http://dx.doi.org/10.12688/f1000research.6755.5.

Full text
Abstract:
This study investigated EEG correlates of social interaction at distance between twenty-five pairs of participants who were not connected by any traditional channels of communication. Each session involved the application of 128 stimulations separated by intervals of random duration ranging from 4 to 6 seconds. One member of the pair received a one-second stimulation from a light signal produced by an arrangement of red LEDs, together with a simultaneous 500 Hz sinusoidal audio signal of the same length. The other member of the pair sat in an isolated sound-proof room, such that any sensory interaction between the pair was impossible. An analysis of the event-related potentials associated with sensory stimulation using traditional averaging methods showed a distinct peak at approximately 300 ms, but only in the EEG activity of subjects who were directly stimulated. However, when a new algorithm based on the correlation between signals from all active electrodes was applied to the EEG activity, a weak but robust response was also detected in the EEG activity of the passive member of the pair, particularly within 9–10 Hz in the alpha range. Using the bootstrap method and Monte Carlo simulation, this signal was found to be statistically significant.
APA, Harvard, Vancouver, ISO, and other styles
24

Rajashekhar, U., and Neelappa Neelappa. "Development of Automated BCI System to Assist the Physically Challenged Person Through Audio Announcement With Help of EEG Signal." WSEAS TRANSACTIONS ON SYSTEMS AND CONTROL 16 (May 26, 2021): 302–14. http://dx.doi.org/10.37394/23203.2021.16.26.

Full text
Abstract:
Individuals face numerous challenges when living with multiple disorders, particularly visually impaired wheelchair users, for whom even simple activities become difficult. Confined patients are treated with methods adapted to their specific medical needs, and independent navigation must be secured for individuals with visual and motor disabilities. This navigation scenario requires communication, which justifies the use of virtual reality (VR); for effective integration, locomotion must additionally be under natural guidance. Electroencephalography (EEG), which records spontaneous brain impulses, has made significant progress in the field of health. This study demonstrates, through an experiment, an automated audio announcement system that combines VR and EEG for locomotion training and individualized interaction of wheelchair users with visual disability, enabling patients who were otherwise deemed incapacitated to participate in social activities through efficient connections. The project was founded on the principles of natural control, feedback, stimuli, and protection. Via properly conducted experiments, a multilayer computer rehabilitation system was created that integrated natural interaction assisted by EEG, enabling movements in both the virtual environment and a real wheelchair. The study set out an appropriate methodology for blind wheelchair operator patients, and the outcomes showed that VR combined with EEG signals has the potential to improve the quality of life and independence of blind wheelchair users.
To protect soldiers' lives and report such incidents immediately, a military system requires a high-speed, precise, portable prototype device for monitoring soldier health, recognizing soldier location, and sharing health reports with the responsible unit. An FPGA-based soldier health monitoring and position recognition system is proposed in this paper, in which soldier health is observed systematically through the heart rate derived from EEG signals. The complete design was developed in the Verilog HDL programming language within the Vivado Design Suite and implemented on an Artix-7 FPGA development board (part XC7ACSG100t). The proposed architecture comprises artifact elimination, abnormality identification based on feature extraction, classification of different abnormalities, and cloud storage of the EEG together with the type of abnormality. Irregular conditions are detected by the developed prototype, which alerts the physically challenged (PHC) individual via an audio announcement. An effective method for eradicating motion artifacts from EEG signals containing anomalies in the PHC person's brain has been established, and the resulting portable device can report variations in brain-signal intensity. Artifact removal proceeds in two stages: first the EEG signals are acquired and the unwanted artifacts removed, then features are extracted using the discrete wavelet transform (DWT). Once features have been extracted using a combination of HMM and GMM, anomalies in the signal are detected and recognized using multirate SVM classifiers. For reliable reporting of the actions taken by a blind person, the resulting signals are stored on storage devices and conveyed to the controller, and simulating daily motion schedules allows the affected EEG signals to be captured.
For validation of the planned system, a database of numerous recorded EEG signals was used. Quantitative analysis shows that the proposed strategy performs better at restoring the theta, delta, alpha, and beta (TDAB) complexes of the original EEG, with less distortion and a higher signal-to-noise ratio (SNR). Both Verilog HDL and MATLAB software were used for the design and verification of the results. The achieved results show a 32% enhancement in SNR, a 14% improvement in MSE, and a 65% enhancement in the recognition of anomalies; the design was thus successfully verified against standard EEG signal datasets on the FPGA.
APA, Harvard, Vancouver, ISO, and other styles
25

Puchkova, A. N., O. N. Tkachenko, I. P. Trapeznikov, I. A. Piletskaya, E. V. Tiunova, M. M. Sazonova, A. O. Taranov, S. S. Gruzdeva, and V. B. Dorokhov. "Assessment of potential capabilities of Dreem: An ambulatory device for EEG phase-locked acoustic stimulation during sleep." SOCIALNO-ECOLOGICHESKIE TECHNOLOGII 9, no. 1 (2019): 96–112. http://dx.doi.org/10.31862/2500-2961-2019-9-1-96-112.

Full text
Abstract:
Sleep disorders are one of the significant problems in modern society. Current research is looking for nonpharmacological ways to improve sleep quality and the slow-wave brain activity that plays a crucial role in homeostasis and cognitive functions. One of the promising approaches is acoustic stimulation phase-locked to deep-sleep EEG rhythms; it has already been shown that such stimulation improves slow-wave brain activity. This article describes Dreem: a wireless consumer device that performs acoustic sleep stimulation in home conditions. The device has dry EEG electrodes, a photo sensor for pulse oximetry, and an accelerometer. The inbuilt software detects deep sleep, performs audio stimulation on the ascending slope of the delta wave, and does automatic sleep staging. In the pilot study of the device, three subjects made 10 to 24 recordings of night sleep with EEG recording and stimulation. The raw data recorded by the device are available to the user and are sufficient for sleep staging and basic sleep analysis. Automatic hypnograms reflect the structure of normal night sleep. EEGs averaged by the stimulation markers demonstrated the high efficacy of the slow-wave detectors and of the placement of stimulations on the ascending slope of a delta wave. The Dreem device is of interest to sleep researchers as an easy-to-use tool for out-of-lab data acquisition.
APA, Harvard, Vancouver, ISO, and other styles
26

Davis, Jeffrey Jonathan (Joshua), Chin-Teng Lin, Grant Gillett, and Robert Kozma. "An Integrative Approach to Analyze Eeg Signals and Human Brain Dynamics in Different Cognitive States." Journal of Artificial Intelligence and Soft Computing Research 7, no. 4 (October 1, 2017): 287–99. http://dx.doi.org/10.1515/jaiscr-2017-0020.

Full text
Abstract:
Electroencephalograph (EEG) data provide insight into the interconnections and relationships between various cognitive states and their corresponding brain dynamics by demonstrating dynamic connections between brain regions at different frequency bands. While sensory input tends to stimulate neural activity in different frequency bands, peaceful states of being and self-induced meditation tend to produce activity in the mid-range (Alpha). These studies were conducted with the aim of: (a) testing different equipment in order to assess two different EEG technologies together with their benefits and limitations and (b) forming an initial impression of the different brain states associated with different experimental modalities and tasks, by analyzing the spatial and temporal power spectrum and applying our movie-making methodology to engage in qualitative exploration via the art of encephalography. This study complements our previous study of measuring multichannel EEG brain dynamics using MINDO48 equipment associated with three experimental modalities measured both in the laboratory and the natural environment. Together with Hilbert analysis, we conjecture, the results will provide us with the tools to engage with more complex brain dynamics and mental states, such as Meditation, Mathematical Audio Lectures, Music-Induced Meditation, and Mental Arithmetic Exercises. This paper focuses on open-eye and closed-eye conditions, as well as meditation states, in laboratory conditions. We assess similarities and differences between experimental modalities and their associated brain states, as well as differences between the different tools for analysis and equipment.
APA, Harvard, Vancouver, ISO, and other styles
27

Kimmatkar, Nisha Vishnupant, and B. Vijaya Babu. "Novel Approach for Emotion Detection and Stabilizing Mental State by Using Machine Learning Techniques." Computers 10, no. 3 (March 19, 2021): 37. http://dx.doi.org/10.3390/computers10030037.

Full text
Abstract:
The aim of this research study is to detect emotional state by processing electroencephalography (EEG) signals and to test the effect of meditation music therapy in stabilizing the mental state. The study identifies 12 subtle emotions grouped as angry (annoying, angry, nervous), calm (calm, peaceful, relaxed), happy (excited, happy, pleased), and sad (sleepy, bored, sad). A total of 120 emotion signals were collected using an Emotiv 14-channel EEG headset. Emotions were elicited using three types of stimuli: thoughts, audio, and video. The system was trained on the captured database of emotion signals, which includes 30 signals for each emotion class. A total of 24 features were extracted by performing the Chirplet transform, with band power ranked as the most prominent feature. A multimodel classification approach was used, and classification accuracy was tested for K-nearest neighbor (KNN), convolutional neural network (CNN), recurrent neural network (RNN), and deep neural network (DNN) classifiers. The system was tested to detect the emotions of intellectually disabled people, and meditation music therapy was used to stabilize the mental state. The therapy was found to shift the emotions of both intellectually disabled and normal participants from an annoyed state to a relaxed state, with a 75% positive transformation of mental state obtained through the music therapy. This research study presents a novel approach for detailed analysis of brain EEG signals for emotion detection and for stabilizing the mental state.
APA, Harvard, Vancouver, ISO, and other styles
28

Chen, Qicheng, and Boon Giin Lee. "Deep Learning Models for Stress Analysis in University Students: A Sudoku-Based Study." Sensors 23, no. 13 (July 2, 2023): 6099. http://dx.doi.org/10.3390/s23136099.

Full text
Abstract:
Due to the phenomenon of “involution” in China, the current generation of college and university students are experiencing escalating levels of stress, both academically and within their families. Extensive research has shown a strong correlation between heightened stress levels and overall well-being decline. Therefore, monitoring students’ stress levels is crucial for improving their well-being in educational institutions and at home. Previous studies have primarily focused on recognizing emotions and detecting stress using physiological signals like ECG and EEG. However, these studies often relied on video clips to induce various emotional states, which may not be suitable for university students who already face additional stress to excel academically. In this study, a series of experiments were conducted to evaluate students’ stress levels by engaging them in playing Sudoku games under different distracting conditions. The collected physiological signals, including PPG, ECG, and EEG, were analyzed using enhanced models such as LRCN and self-supervised CNN to assess stress levels. The outcomes were compared with participants’ self-reported stress levels after the experiments. The findings demonstrate that the enhanced models presented in this study exhibit a high level of proficiency in assessing stress levels. Notably, when subjects were presented with Sudoku-solving tasks accompanied by noisy or discordant audio, the models achieved an impressive accuracy rate of 95.13% and an F1-score of 93.72%. Additionally, when subjects engaged in Sudoku-solving activities with another individual monitoring the process, the models achieved a commendable accuracy rate of 97.76% and an F1-score of 96.67%. Finally, under comforting conditions, the models achieved an exceptional accuracy rate of 98.78% with an F1-score of 95.39%.
APA, Harvard, Vancouver, ISO, and other styles
29

Masood, Naveen, and Humera Farooq. "Comparing Neural Correlates of Human Emotions across Multiple Stimulus Presentation Paradigms." Brain Sciences 11, no. 6 (May 25, 2021): 696. http://dx.doi.org/10.3390/brainsci11060696.

Full text
Abstract:
Most electroencephalography (EEG)-based emotion recognition systems rely on a single stimulus to evoke emotions, making use of videos, sounds, or images as stimuli. Few studies have considered self-induced emotions, and the question of whether different stimulus-presentation paradigms for the same emotion produce any subject- and stimulus-independent neural correlates remains unanswered. Furthermore, while publicly available datasets are used in a large number of studies targeting EEG-based human emotional-state recognition, one of the major concerns and contributions of this work is classifying emotions while subjects experience different stimulus-presentation paradigms, which required new experiments. This paper presents a novel experimental study that recorded EEG data for three emotional states (fear, neutral, and joy) evoked with four different stimulus-presentation paradigms: emotional imagery, pictures, sounds, and audio-video movie clips. Features were extracted from the recorded EEG data with the common spatial pattern (CSP) method and classified through linear discriminant analysis (LDA). Experiments were conducted with twenty-five participants. Classification performance in the different paradigms was evaluated, considering different spectral bands. With a few exceptions, all paradigms showed the best emotion recognition for higher frequency spectral ranges. Interestingly, joy emotions were classified more strongly than fear. The average neural patterns for fear vs. joy emotional states are presented with topographical maps based on spatial filters obtained with CSP for averaged band-power changes for all four paradigms. With respect to the spectral bands, beta and alpha oscillation responses produced the highest number of significant results for the paradigms under consideration.
With respect to brain region, the frontal lobe produced the most significant results irrespective of paradigm and spectral band. The temporal site also played an effective role in generating statistically significant findings. To the best of our knowledge, no previous study has considered four different stimulus paradigms for EEG emotion recognition. This work contributes towards designing an EEG-based system for human emotion recognition that could work effectively in different real-time scenarios.
APA, Harvard, Vancouver, ISO, and other styles
30

Sahni, Pooja, and Jyoti Kumar. "Effect of Nature Experience on Fronto-Parietal Correlates of Neurocognitive Processes Involved in Directed Attention: An ERP Study." Annals of Neurosciences 27, no. 3-4 (July 2020): 136–47. http://dx.doi.org/10.1177/0972753121990143.

Full text
Abstract:
Background: Several studies have demonstrated that brief interactions with natural environments can improve cognitive functioning. However, the neurocognitive processes that are affected by natural surroundings are not yet fully understood. It is argued that the “elements” in the natural environment evoke “effortless” involuntary attention and may affect the neural mechanisms underlying the inhibition control central to directed attention. Methods: The present study used electroencephalography (EEG) to investigate the effects of nature experience on neurocognitive processes involved in directed attention. During EEG recordings, participants (n = 53) were presented nature audio/video stimuli to evoke a nature experience, and a flanker task was administered both before and after the nature experience. An open-eye rest condition was included randomly either before or after the nature-experience cognitive task as a control condition. Results: The event-related potential analysis demonstrated a significant improvement in response time after the nature experience. The analysis also demonstrated a significant difference for the inhibitory control process in fronto-parietal N2 (P < .01) and P3 (P < .05) for incongruent trials subsequent to the nature experience. The spectral analysis also found an increase in alpha in all five brain regions (all Ps < .01) and in fronto-central theta power (P < .01). Conclusion: The findings suggest that improved inhibitory control processes could be one aspect of enhanced directed attention after nature experience. Increased alpha along with theta indicates a relaxed yet alert state of mind after nature experience.
APA, Harvard, Vancouver, ISO, and other styles
31

Shah, Syed Yaseen, Hadi Larijani, Ryan M. Gibson, and Dimitrios Liarokapis. "Random Neural Network Based Epileptic Seizure Episode Detection Exploiting Electroencephalogram Signals." Sensors 22, no. 7 (March 23, 2022): 2466. http://dx.doi.org/10.3390/s22072466.

Full text
Abstract:
Epileptic seizures are caused by abnormal electrical activity in the brain that manifests itself in a variety of ways, including confusion and loss of awareness. Correct identification of epileptic seizures is critical in the treatment and management of patients with epileptic disorders. One in four patients presents resistance to treatment of seizure episodes, and such patients are in dire need of continuous monitoring to detect these critical events and manage the disease. Epileptic seizures can be identified by reliably and accurately monitoring the patients’ neural and muscle activities, cardiac activity, and oxygen saturation level using state-of-the-art sensing techniques, including electroencephalograms (EEGs), electromyography (EMG), electrocardiograms (ECGs), and motion or audio/video recording focused on the human head and body. EEG analysis provides a prominent means of distinguishing between signals associated with epileptic episodes and normal signals; therefore, this work leverages the latest EEG dataset, applying cutting-edge deep learning algorithms such as the random neural network (RNN), convolutional neural network (CNN), extremely random tree (ERT), and residual neural network (ResNet), to classify multiple variants of epileptic seizures from non-seizures. The results highlight that the RNN outperformed all other algorithms used, providing an overall accuracy of 97%, which was slightly improved after cross-validation.
APA, Harvard, Vancouver, ISO, and other styles
32

Wei, Wei, Qingxuan Jia, Yongli Feng, and Gang Chen. "Emotion Recognition Based on Weighted Fusion Strategy of Multichannel Physiological Signals." Computational Intelligence and Neuroscience 2018 (July 5, 2018): 1–9. http://dx.doi.org/10.1155/2018/5296523.

Full text
Abstract:
Emotion recognition is an important pattern recognition problem that has inspired researchers in several areas. Various kinds of human data for emotion recognition have been explored, including visual, audio, and physiological signals. This paper proposes a decision-level weight fusion strategy for emotion recognition in multichannel physiological signals. Firstly, we selected four kinds of physiological signals: electroencephalography (EEG), electrocardiogram (ECG), respiration amplitude (RA), and galvanic skin response (GSR), and various analysis domains were used in extracting physiological emotion features. Secondly, we adopted a feedback strategy for weight definition, according to the recognition rate of each emotion for each physiological signal, based on a Support Vector Machine (SVM) classifier applied independently. Finally, we introduced weights at the decision level by linearly fusing the weight matrix with the classification result of each SVM classifier. Experiments on the MAHNOB-HCI database show the highest accuracy. The results also provide evidence and suggest a way towards a more specialized emotion recognition system based on multichannel data using a weight fusion strategy.
APA, Harvard, Vancouver, ISO, and other styles
33

Abdul Halim, S. F., S. A. Awang, and S. Mohamaddan. "An Investigation of Brain Signal Characteristics between Hafiz/Hafizah Subjects and Non-Hafiz/Hafizah Subjects." Journal of Physics: Conference Series 2071, no. 1 (October 1, 2021): 012037. http://dx.doi.org/10.1088/1742-6596/2071/1/012037.

Full text
Abstract:
Tahfiz education has gained popularity among Malaysians, expanding the circle of hafiz and hafizah across the country. This study investigates the effect of memorizing the Al-Quran by determining the difference in focus between hafiz/hafizah subjects and non-hafiz/hafizah subjects using brain-signal characteristics. Ten subjects (5 hafiz/hafizah and 5 non-hafiz/hafizah) participated in this study. EEG data were recorded using EegoSport (ANT Neuro, ES-230, The Netherlands) while subjects listened to no music, rock music, instrumental music, and Al-Quran audio during a Continuous Performance Task (CPT). Classification was performed using machine learning; the Decision Tree method obtained the highest accuracy (96.63%) for PSD Burg using the beta wave. The findings show that the hafiz/hafizah group was more focused in all given tasks compared with the non-hafiz/hafizah group. Statistical analysis using the Wilcoxon signed-rank test found the designed methodology significant at a 95% confidence interval.
APA, Harvard, Vancouver, ISO, and other styles
34

Basu, Medha, SHANKHA SANYAL, Archi Banerjee, Kumardeb Banerjee, and Dipak Ghosh. "Does musical training affect neuro-cognition of emotions? An EEG study with instrumental Indian classical music." Journal of the Acoustical Society of America 151, no. 4 (April 2022): A60. http://dx.doi.org/10.1121/10.0010655.

Full text
Abstract:
Music across all genres evokes a variety of emotions, irrespective of its timbre and tempo, and Indian classical music (ICM) is no exception. Although ICM is biased towards vocal styles, instrumental music forms one of its broad sections. In this study, we compare the neural responses of music practitioners and non-musicians towards different emotions using audio clips from two popular plucked string instruments used in ICM, the sitar and the sarod. From pre-recorded performances of two eminent maestros, 20 clips of approximately 30 s duration were selected from the Alaap sections (the initial introductory section without any rhythmic accompaniment) of different Raagas played on the two instruments. From an audience response assessment with 100 participants, the eight clips having maximum arousal for happy and sad emotions were identified among the 20, and these were used to collect EEG (electroencephalography) recordings from five musicians and five non-musicians. The robust nonlinear multifractal detrended fluctuation analysis (MFDFA) technique was applied to quantitatively measure the brain-state changes in different lobes for both categories of participants. In essence, this study attempts to encapsulate if, and how, prior musical training influences brain responses towards two basic musical emotions in ICM using two instruments of the same family.
APA, Harvard, Vancouver, ISO, and other styles
35

Mercier, Manuel R., John J. Foxe, Ian C. Fiebelkorn, John S. Butler, Theodore H. Schwartz, and Sophie Molholm. "Auditory modulation of oscillatory activity in extra-striate visual cortex and its contribution to audio–visual multisensory integration: A human intracranial EEG study." Seeing and Perceiving 25 (2012): 198. http://dx.doi.org/10.1163/187847612x648279.

Full text
Abstract:
Investigations have traditionally focused on activity in the sensory cortices as a function of their respective sensory inputs. However, converging evidence from multisensory research has shown that neural activity in a given sensory region can be modulated by stimulation of other, so-called ancillary, sensory systems. Both electrophysiology and functional imaging support the occurrence of multisensory processing in human sensory cortex, based on the latency of multisensory effects and their precise anatomical localization. Still, due to inherent methodological limitations, direct evidence of the precise mechanisms by which multisensory integration occurs within human sensory cortices is lacking. Using intracranial recordings in epileptic patients undergoing presurgical evaluation, we investigated the neurophysiological basis of multisensory integration in visual cortex. Subdural electrical brain activity was recorded while patients performed a simple detection task of randomly ordered Auditory alone (A), Visual alone (V), and Audio–Visual (AV) stimuli. We then performed time-frequency analysis: first we investigated each condition separately to evaluate responses compared to baseline, then we indexed multisensory integration using both the maximum criterion model (AV vs. V) and the additive model (AV vs. A+V). Our results show that auditory input significantly modulates neuronal activity in visual cortex by resetting the phase of ongoing oscillatory activity. This in turn leads to multisensory integration when auditory and visual stimuli are presented simultaneously.
APA, Harvard, Vancouver, ISO, and other styles
36

Tardón, Lorenzo J., Ignacio Rodríguez-Rodríguez, Niels T. Haumann, Elvira Brattico, and Isabel Barbancho. "Music with Concurrent Saliences of Musical Features Elicits Stronger Brain Responses." Applied Sciences 11, no. 19 (October 1, 2021): 9158. http://dx.doi.org/10.3390/app11199158.

Full text
Abstract:
Brain responses are often studied under strictly experimental conditions in which electroencephalograms (EEGs) are recorded to capture reactions to short and repetitive stimuli. However, in real life, aural stimuli are continuously mixed and cannot be found in isolation, such as when listening to music. In this audio context, the acoustic features in music related to brightness, loudness, noise, and spectral flux, among others, change continuously; thus, significant values of these features can occur nearly simultaneously. Such situations are expected to give rise to an increased brain reaction compared with a case in which they appear in isolation. To test this, EEG signals recorded while listening to a tango piece were considered. The focus was on the amplitude and latency of the negative deflection (N100) and positive deflection (P200) after the stimuli, which were defined on the basis of the selected music-feature saliences, in order to perform a statistical analysis intended to test the initial hypothesis. Differences in brain reactions can be identified depending on the concurrence (or not) of significant values of different features, showing that concurrent increments in several qualities of music influence and modulate the strength of brain responses.
APA, Harvard, Vancouver, ISO, and other styles
37

Ajenaghughrure, Ighoyota Ben, Sonia Da Costa Sousa, and David Lamas. "Measuring Trust with Psychophysiological Signals: A Systematic Mapping Study of Approaches Used." Multimodal Technologies and Interaction 4, no. 3 (September 1, 2020): 63. http://dx.doi.org/10.3390/mti4030063.

Full text
Abstract:
Trust plays an essential role in all human relationships. However, measuring trust remains a challenge for researchers exploring psychophysiological signals. Therefore, this article aims to systematically map the approaches used in studies assessing trust with psychophysiological signals. In particular, we examine the number and frequency of combined psychophysiological signals, the primary outcomes of previous studies, and the types and most commonly used data-analysis techniques for analyzing psychophysiological data to infer a trust state. For this purpose, we employ a systematic mapping review method, through which we analyze 51 carefully selected articles (studies focused on trust using psychophysiology). The significant findings are as follows: (1) Psychophysiological signals from the EEG (electroencephalogram) and ECG (electrocardiogram), monitoring the peripheral and central nervous systems, are the most frequently used to measure trust, while audio and EOG (electro-oculography) signals are the least commonly used; moreover, the maximum number of psychophysiological signals combined so far is three. (2) Most of these are peripheral-nervous-system monitoring signals that are low in spatial resolution. (3) Regarding outcomes, only one tool has been proposed for assessing trust in an interpersonal context, excluding trust in a technology context, and no stable and accurate ensemble models have been developed to assess trust; all prior attempts led to unstable but fairly accurate models or did not satisfy the conditions for combining several algorithms (ensemble). In conclusion, the extent to which trust can be assessed using psychophysiological measures during user interactions (in real time) remains unknown, as there are several issues, such as the lack of a stable and accurate ensemble trust-classifier model, that require urgent research attention. Although this topic is relatively new, much work has been done; however, more remains to be done to provide clarity on this topic.
APA, Harvard, Vancouver, ISO, and other styles
38

Jeon, Jin Yong, Haram Lee, and Yunjin Lee. "Psychophysiological effect according to restoration factors of audio-visual environment." Journal of the Acoustical Society of America 152, no. 4 (October 2022): A128. http://dx.doi.org/10.1121/10.0015776.

Full text
Abstract:
This study investigated the effects of psychophysiological restoration due to environmental factors while experiencing the city, waterfront, and green space, using various psychological scales and physiological measurement tools. The environments were experienced using virtual reality technology, and the subjects' responses were collected through surveys and EEG (electroencephalography) and HRV (heart rate variability) measurements. HRV responses were characterized by parameters such as total power (TP), SDNN, and TSI. In the case of EEG, power spectral density (PSD) analysis indicated a relatively high restoration effect in the natural environment and an increase in the alpha-beta ratio. Based on functional connectivity, graph theory analysis showed that the limbic system network of the non-restoration group was hyperactive. When comparing HRV responses (increase of TP and SDNN, reduction of TSI), the restoration group was found to have higher global network efficiency. In fact, in urban space, there were fewer psychological restoration responses in the group with high noise sensitivity.
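The abstract above mentions a PSD-based alpha-beta ratio as an EEG restoration marker but gives no implementation details. Below is a minimal sketch of how such a ratio might be computed from a Welch PSD estimate (Python with NumPy/SciPy); the band limits, sampling rate, and synthetic signal are illustrative assumptions, not taken from the study:

```python
import numpy as np
from scipy.signal import welch

def alpha_beta_ratio(eeg, fs):
    """Ratio of alpha (8-13 Hz) to beta (13-30 Hz) band power,
    estimated from a Welch power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    df = freqs[1] - freqs[0]          # frequency resolution of the PSD
    alpha = psd[(freqs >= 8) & (freqs < 13)].sum() * df
    beta = psd[(freqs >= 13) & (freqs < 30)].sum() * df
    return alpha / beta

# Synthetic check: a dominant 10 Hz oscillation should push the ratio above 1.
fs = 256
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
ratio = alpha_beta_ratio(eeg, fs)
```

An increase in this ratio, as the study reports for the natural environment, would show up as a larger value whenever alpha activity dominates beta activity.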
APA, Harvard, Vancouver, ISO, and other styles
39

Samadiani, Huang, Cai, Luo, Chi, Xiang, and He. "A Review on Automatic Facial Expression Recognition Systems Assisted by Multimodal Sensor Data." Sensors 19, no. 8 (April 18, 2019): 1863. http://dx.doi.org/10.3390/s19081863.

Full text
Abstract:
Facial Expression Recognition (FER) can be widely applied in various research areas, such as mental disease diagnosis and human social/physiological interaction detection. With emerging advanced technologies in hardware and sensors, FER systems have been developed to support real-world application scenarios, instead of laboratory environments. Although laboratory-controlled FER systems achieve very high accuracy, around 97%, the transfer of this technology from the laboratory to real-world applications faces a great barrier of very low accuracy, approximately 50%. In this survey, we comprehensively discuss three significant challenges in unconstrained real-world environments, namely illumination variation, head pose, and subject-dependence, which may not be resolved by only analysing images/videos in the FER system. We focus on those sensors that may provide extra information and help FER systems to detect emotion in both static images and video sequences. We introduce three categories of sensors that may help improve the accuracy and reliability of an expression recognition system by tackling the challenges mentioned above in pure image/video processing. The first group is detailed-face sensors, which detect small dynamic changes of a face component, such as eye-trackers, which may help differentiate background noise from the features of faces. The second is non-visual sensors, such as audio, depth, and EEG sensors, which provide extra information in addition to the visual dimension and improve recognition reliability, for example, in illumination variation and position shift situations. The last is target-focused sensors, such as infrared thermal sensors, which can facilitate the FER systems in filtering useless visual content and may help resist illumination variation. Also, we discuss methods of fusing different inputs obtained from multimodal sensors in an emotion system. 
We comparatively review the most prominent multimodal emotional expression recognition approaches and point out their advantages and limitations. We briefly introduce the benchmark datasets related to FER systems for each category of sensors and extend our survey to the open challenges and issues. Meanwhile, we design a framework of an expression recognition system that uses multimodal sensor data (provided by the three categories of sensors) to provide complete information about emotions to assist pure face image/video analysis. We theoretically analyse the feasibility and achievability of our new expression recognition system, especially for use in wild environments, and point out future directions for designing an efficient emotional expression recognition system.
APA, Harvard, Vancouver, ISO, and other styles
40

Mortazavi, Mo, David Oakley, Jon Minor, Prem Kumar Thirunagari, Nassar Koucheki, and Nitin Prabhaker. "FUNCTIONAL NEUROCOGNITIVE DEFICITS AND CORTICAL EVOKED POTENTIALS IN PEDIATRIC PATIENTS WITH PROLONGED POST CONCUSSIVE SYMPTOMS." Orthopaedic Journal of Sports Medicine 8, no. 4_suppl3 (April 1, 2020): 2325967120S0026. http://dx.doi.org/10.1177/2325967120s00261.

Full text
Abstract:
Background: Prolonged neurophysiological impairments have been demonstrated after brain injury utilizing tools such as auditory evoked response potentials. The objectivity and sensitivity of such tools can provide clinicians a unique perspective on patients with prolonged post-concussive symptoms (PPCS) and help determine optimal management strategies. Limited research currently exists on evoked potentials and pediatric PPCS. Purpose: To illustrate the clinical utility of abnormal evoked potentials in adolescents with PPCS. Methods and Study Design: The study is a retrospective cross-sectional analysis of pediatric PPCS cases from 7/2018 to 7/2019 at a private concussion clinic in Tucson, AZ. Patients were included if they had prolonged symptoms beyond 30 days from the date of injury, were 13-16 years old, and were diagnosed with PPCS based on SCAT5 assessment and clinical evaluation. All patients with PPCS had moderate to severe clinical symptoms with limited academic and exertional tolerance. Patients were excluded if they had a history of learning disorders, seizure disorder, or complex concussions with a skull fracture or intracranial hemorrhage. Patients were tested using the standard oddball audio P300 EEG protocol. Measures extracted included P300 latency, P300 amplitude, and coherence frequency on mapping. The PPCS cohort (n=71) was then compared to age-matched references (n=73) obtained during baseline testing of non-injured athletes using the same protocol (and hardware) from a multitude of sites. Results: There was a significant correlation between PPCS and reduced P300 amplitude along with increased coherence. The PPCS group had an average P300 amplitude of 13.7 µV compared to 16.9 µV in the reference group (p<0.001, Cohen's d=0.51). Increased coherence (in the alpha frequency) was observed: the average PPCS patient had 9.1 connected pairs (scalp sites) with coherences >2σ, compared to the expected 3.1 such connections in the reference group (p<0.001, Cohen's d=0.86). There was no significant difference in P300 latency between the groups. Conclusions: Pediatric patients with PPCS showed decreased P300 amplitude and increased coherence on average compared to the reference group. Reduced voltages and/or increased coherence likely represent ongoing primary brain dysfunction, adaptability, and recovery. These findings suggest the need for ongoing care and should guide a conservative approach toward return to high-risk activities. Evoked potentials may be a critical tool to help identify primary neurologic dysfunction in PPCS and help guide management strategies. This tool may also aid in differentiating ongoing neurophysiologic recovery from commonly reported secondary psychosocial symptoms after PPCS. [Figure: see text]
APA, Harvard, Vancouver, ISO, and other styles
41

Reid, Malcolm S., Leslie S. Prichep, Debra Ciplet, Siobhan O'Leary, MeeLee Tom, Bryant Howard, John Rotrosen, and E. Roy John. "Quantitative Electroencephalographic Studies of Cue-Induced Cocaine Craving." Clinical Electroencephalography 34, no. 3 (July 2003): 110–23. http://dx.doi.org/10.1177/155005940303400305.

Full text
Abstract:
Quantitative electroencephalographic (qEEG) profiles were studied in cocaine-dependent patients in response to cocaine cue exposure. Using neurometric analytical methods, the spectral power of each primary bandwidth was computed and topographically mapped. Additional measures of cue-reactivity included cocaine craving, anxiety and related subjective ratings, and physiological measures of skin conductance, skin temperature, heart rate, and plasma cortisol and HVA levels. Twenty-four crack cocaine-dependent subjects were tested for their response to tactile, visual, and audio cues related to crack cocaine or neutral items. All measures were analyzed for significant differences by comparing cocaine versus neutral cue conditions. An increase in cocaine craving, anxiety and related subjective ratings, elevated plasma cortisol levels, and a decrease in skin temperature were induced by cocaine cue exposure. Distinct qEEG profiles were found during the paraphernalia handling and video viewing (eyes-open) and guided imagery (eyes-closed) phases of cocaine cue exposure. During paraphernalia handling and video viewing, there was an increase in beta activity accompanied by a drop in delta power in the frontal cortex, and an increase in beta mean frequency in the occipital cortex. In contrast, during guided imagery there was an increase in theta and delta power in the frontal cortex, and an increase in beta power in the occipital cortex. Correlation analyses revealed that cue-induced anxiety during paraphernalia handling and video viewing was associated with reduced high-frequency and enhanced low-frequency EEG activity. These findings demonstrated that EEG activation during cue-induced cocaine craving may be topographically mapped and subsequently analyzed for functional relevance.
APA, Harvard, Vancouver, ISO, and other styles
42

Boyarkina, Iren. "Positive influence of certain sports on learning and second language acquisition processes." Zbornik radova Filozofskog fakulteta u Pristini 51, no. 1 (2021): 309–20. http://dx.doi.org/10.5937/zrffp51-30724.

Full text
Abstract:
Various studies have convincingly demonstrated the positive influence of certain sports and physical exercises on the brain and brain functions in general, and on cognitive functions in particular. As has been demonstrated, efficient and well-developed cognitive functions enhance all human activities and are of crucial importance for learning. In particular, this paper focuses on the positive correlation between certain sports and language learning and its relevance to Second Language Acquisition (SLA) studies. In SLA, students' ability to process input strongly depends on their cognitive abilities, namely, their abilities to process audio and visual input. As various researchers have demonstrated, some sports enhance athletes' abilities of audio and visual perception; these developed cognitive abilities may, in turn, enhance the SLA process. The paper analyses data collected during experiments with three groups of students, aged 20-23: basketball players, judo wrestlers, and a control group whose members practiced sports only 2 h/week. The three groups were exposed to three types of stimuli (reversing checkerboard pattern, tone click, flash of light); their brain activity was monitored by EEG. The obtained results demonstrated that basketball players manifested better-developed cognitive abilities responsible for the perception of audio and video stimuli. They were also faster in making decisions on the basis of the video/audio stimuli perceived. Similar results were obtained by American psychologists (University of Illinois, Urbana-Champaign). Their study involved 87 Brazilian volleyball players and 67 people who do not practice sports; the results were published in Frontiers in Psychology. The experiments showed that athletes are usually better at controlling their reactions and are able to slow down their reactions if necessary. 
According to Professor of Psychology Arthur Kramer, one of the authors of the research, "Athletes can perceive information faster and switch quicker between different tasks than those who don't practice sports." During the experiment, all the participants were asked to perform tasks testing their cognitive abilities (information perception, memory, reactions). The most interesting discovery was that the athletes had significant cognitive advantages over women and men who had not practiced sports. Volleyball players were faster in reactions, in noticing differences in pictures, and in identifying missing details in puzzles. The process of auditory and visual perception and processing involves many neural structures: from primary sensory signal-processing units to higher levels of processing responsible for stimulus recognition and decision-making. As is known, individual waves of cognitive evoked potentials reflect the involvement of certain neural mechanisms of the visual and auditory systems in the processing and recognition of stimuli of the corresponding modality. Wave characteristics can indirectly indicate the working speed of specific units of the neural circuit and the number of neurons involved; this may allow measuring the influence of various sports training loads on the development of neural chains involved in audio/video stimulus perception, processing, and decision-making. These aspects are also considered important for input processing in SLA. The paper analyses the positive contribution of certain sports to learning in general, and Second Language Acquisition in particular, using neuroscience, psychological, and socio-cultural approaches.
APA, Harvard, Vancouver, ISO, and other styles
43

Zeng, Chengcheng, Wei Lin, Nian Li, Ya Wen, Yanxin Wang, Wenyuan Jiang, Jialing Zhang, et al. "Electroencephalography (EEG)-Based Neural Emotional Response to the Vegetation Density and Integrated Sound Environment in a Green Space." Forests 12, no. 10 (October 10, 2021): 1380. http://dx.doi.org/10.3390/f12101380.

Full text
Abstract:
Emotion plays an important role in physical and mental health. Green space is an environment conducive to physical and mental recovery and influences human emotions through visual and auditory stimulation. Both the visual environment and sound environment of a green space are important factors affecting its quality. Most of the previous relevant studies have focused solely on the visual or sound environment of green spaces and its impacts. This study focused on the combination of vegetation density (VD) and integrated sound environment (ISE) based on neural emotional evaluation criteria. VD was used as the visual variable, with three levels: high (H), moderate (M) and low (L). ISE was used as the sound variable, with four levels: low-decibel natural and low-decibel artificial sounds (LL), low-decibel natural and high-decibel artificial sounds (LH), high-decibel natural and low-decibel artificial sounds (HL) and high-decibel natural and high-decibel artificial sounds (HH). These two variables were combined into 12 unique groups. A total of 360 volunteer college students were recruited and randomly assigned to the 12 groups (N = 30). All 12 groups underwent the same 5 min high-pressure learning task (pretest baseline), followed by a 5 min audio-visual recovery (posttest). Six indicators of neural emotion (engagement, excitement, focus, interest, relaxation and stress) were dynamically measured by an Emotiv EPOC X device during the pretest and posttest. Analysis of covariance was used to determine the main and coupled effects of the variables. (1) VD and ISE have significant effects on human neural emotions. In moderate- and high-VD spaces, artificial sound levels may have a positive effect on excitement. (2) A higher VD is more likely to result in excitatory neural emotion expression. (3) Low-VD and high-VD spaces have a higher degree of visual continuity. 
Both extremely low and extremely high VDs result in a higher expression of stressful emotions than observed for a moderate VD. (4) High-decibel artificial sounds are more likely to attract attention, possibly because artificial sounds are easier to recognize than natural sounds. However, when both the natural and artificial sounds are low, it is difficult to induce higher tones, and the lower the artificial sound decibel level, the easier it is to relax. Additionally, under the influence of an ISE, attention recovery and stress recovery may be negatively correlated. The results show that an appropriate combination of VD and ISE can improve the health benefits of a green space and thus the well-being of visitors.
APA, Harvard, Vancouver, ISO, and other styles
44

Hölle, Daniel, Sarah Blum, Sven Kissner, Stefan Debener, and Martin G. Bleichner. "Real-Time Audio Processing of Real-Life Soundscapes for EEG Analysis: ERPs Based on Natural Sound Onsets." Frontiers in Neuroergonomics 3 (February 4, 2022). http://dx.doi.org/10.3389/fnrgo.2022.793061.

Full text
Abstract:
With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed a stable lag and a small jitter (< 3 ms), indicating a high temporal precision of the system. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
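As a rough illustration of the ERP computation described (epoching EEG around detected sound onsets and averaging), here is a sketch in Python; the epoch window, sampling rate, and synthetic data are assumptions for demonstration, not the apps' actual processing:

```python
import numpy as np

def erp_from_onsets(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to sound-onset samples,
    baseline-corrected on the pre-stimulus interval."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        if onset - pre < 0 or onset + post > len(eeg):
            continue  # skip onsets too close to the recording edges
        epoch = eeg[onset - pre:onset + post]
        epochs.append(epoch - epoch[:pre].mean())
    return np.mean(epochs, axis=0)

# Synthetic single-channel recording with one onset every 2 s.
fs = 500
eeg = np.random.randn(fs * 60)
onsets = np.arange(fs, len(eeg) - fs, 2 * fs)
erp = erp_from_onsets(eeg, onsets, fs)  # 600 ms trace (300 samples at 500 Hz)
```

In a real pipeline the onsets would come from the audio feature stream (the apps' onset detector), and the P1-N1-P2 complex would emerge in the averaged trace.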
APA, Harvard, Vancouver, ISO, and other styles
45

Cai, Hanshu, Zhenqin Yuan, Yiwen Gao, Shuting Sun, Na Li, Fuze Tian, Han Xiao, et al. "A multi-modal open dataset for mental-disorder analysis." Scientific Data 9, no. 1 (April 19, 2022). http://dx.doi.org/10.1038/s41597-022-01211-x.

Full text
Abstract:
According to the WHO, the number of mental disorder patients, especially depression patients, has grown rapidly, becoming a leading contributor to the global burden of disease. With the rise of tools such as artificial intelligence, using physiological data to explore possible new physiological indicators of mental disorders and creating new applications for mental disorder diagnosis has become a hot research topic. We present a multi-modal open dataset for mental-disorder analysis. The dataset includes EEG and recordings of spoken language from clinically depressed patients and matching normal controls, who were carefully diagnosed and selected by professional psychiatrists in hospitals. The EEG dataset includes data collected using a traditional 128-electrode elastic cap and a wearable 3-electrode EEG collector for pervasive-computing applications. The 128-electrode EEG signals of 53 participants were recorded both in the resting state and while performing dot-probe tasks; the 3-electrode EEG signals of 55 participants were recorded in the resting state; the audio data of 52 participants were recorded during interviewing, reading, and picture description.
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Guijun, Xueying Zhang, Jing Zhang, Fenglian Li, and Shufei Duan. "A novel brain-computer interface based on audio-assisted visual evoked EEG and spatial-temporal attention CNN." Frontiers in Neurorobotics 16 (September 30, 2022). http://dx.doi.org/10.3389/fnbot.2022.995552.

Full text
Abstract:
Objective: Brain-computer interfaces (BCIs) can translate intentions directly into instructions and greatly improve the interaction experience for disabled people or for specific interactive applications. To improve the efficiency of BCI, the objective of this study is to explore the feasibility of an audio-assisted visual BCI speller and a deep learning-based single-trial event-related potential (ERP) decoding strategy. Approach: In this study, a two-stage BCI speller combining the motion-onset visual evoked potential (mVEP) and a semantically congruent audio-evoked ERP was designed to output target characters. In the first stage, different groups of characters were presented at different locations of the visual field simultaneously, and the stimuli were coded to the mVEP based on a new space-division multiple-access scheme. The target character could then be output based on the audio-assisted mVEP in the second stage. Meanwhile, a spatial-temporal attention-based convolutional neural network (STA-CNN) was proposed to recognize the single-trial ERP components. The CNN can learn two-dimensional features, including the spatial information of different activated channels and the time dependence among ERP components. In addition, the STA mechanism can enhance discriminative event-related features by adaptively learning probability weights. Main results: The performance of the proposed two-stage audio-assisted visual BCI paradigm and the STA-CNN model was evaluated using electroencephalogram (EEG) data recorded from 10 subjects. The average classification accuracy of the proposed STA-CNN reached 59.6% and 77.7% for the first and second stages, respectively, which was always significantly higher than that of the comparison methods (p < 0.05). Significance: The proposed two-stage audio-assisted visual paradigm showed great potential for use as a BCI speller. 
Moreover, analysis of the attention weights over time sequences and spatial topographies showed that STA-CNN could effectively extract interpretable spatiotemporal EEG features.
APA, Harvard, Vancouver, ISO, and other styles
47

Fauchon, Camille, David Meunier, Isabelle Faillenot, Florence B. Pomares, Hélène Bastuji, Luis Garcia-Larrea, and Roland Peyron. "The Modular Organization of Pain Brain Networks: An fMRI Graph Analysis Informed by Intracranial EEG." Cerebral Cortex Communications 1, no. 1 (2020). http://dx.doi.org/10.1093/texcom/tgaa088.

Full text
Abstract:
Intracranial EEG (iEEG) studies have suggested that the conscious perception of pain builds up from successive contributions of brain networks in less than 1 s. However, the functional organization of cortico-subcortical connections at the multisecond time scale, and its accordance with iEEG models, remains unknown. Here, we used graph theory with modular analysis of fMRI data from 60 healthy participants experiencing noxious heat stimuli, of whom 36 also received audio stimulation. Brain connectivity during pain was organized in four modules matching those identified through iEEG, namely: 1) sensorimotor (SM), 2) medial fronto-cingulo-parietal (default mode-like), 3) posterior parietal-latero-frontal (central executive-like), and 4) amygdalo-hippocampal (limbic). Intrinsic overlaps existed between the pain and audio conditions in high-order areas, but pain was also specifically associated with higher small-worldness and connectivity within the sensorimotor module. Neocortical modules were interrelated via "connector hubs" in dorsolateral frontal, posterior parietal, and anterior insular cortices, the antero-insular connector being most predominant during pain. These findings provide a mechanistic picture of the brain network architecture and support fractal-like similarities between the micro- and macrotemporal dynamics associated with pain. The anterior insula appears to play an essential role in information integration, possibly by determining priorities for the processing of information and subsequent entrance into other points of the brain connectome.
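The modular analysis mentioned above rests on a modularity measure for a partition of a connectivity graph. A small, self-contained sketch of Newman's modularity Q is shown below (this is not the authors' fMRI pipeline; the toy adjacency matrix and partition are hypothetical):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a node partition.
    A: symmetric adjacency matrix (undirected, unweighted);
    labels: community label per node."""
    k = A.sum(axis=1)                 # node degrees
    two_m = A.sum()                   # twice the number of edges
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two disconnected 3-cliques: the natural partition gives Q = 0.5.
A = np.zeros((6, 6))
A[:3, :3] = 1
A[3:, 3:] = 1
np.fill_diagonal(A, 0)
labels = np.array([0, 0, 0, 1, 1, 1])
q = modularity(A, labels)
```

A module assignment that groups densely interconnected regions (here, the two cliques) maximizes Q; a study such as this one would apply an optimization of this quantity to thresholded functional-connectivity matrices.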
APA, Harvard, Vancouver, ISO, and other styles
48

Varshney, Yash V., and Azizuddin Khan. "Imagined Speech Classification Using Six Phonetically Distributed Words." Frontiers in Signal Processing 2 (March 25, 2022). http://dx.doi.org/10.3389/frsip.2022.760643.

Full text
Abstract:
Imagined speech can be used to send commands without any muscle movement or emission of audio. Research in this area is at an early stage, and there is a shortage of open-access datasets for imagined speech analysis. In this work, we propose an openly accessible electroencephalography (EEG) dataset for six imagined words. We selected six phonetically distributed, monosyllabic, and emotionally neutral words from the W-22 CID word lists. The phonetic distribution of the words covered different places of consonant articulation and different positions of tongue advancement for vowel pronunciation. The selected words were "could," "yard," "give," "him," "there," and "toe." The experiment was performed with 15 subjects, who performed the overt and imagined speech task for the displayed word. Each word was presented 50 times in random order. EEG signals were recorded during the experiment using a 64-channel EEG acquisition system with a sampling rate of 2,048 Hz. A preliminary analysis of the recorded data is presented by performing classification of the EEGs corresponding to the imagined words. The achieved accuracy is above chance level for all subjects, which suggests that the recorded EEGs contain distinctive information about the imagined words.
APA, Harvard, Vancouver, ISO, and other styles
49

Dauer, Tysen, Duc T. Nguyen, Nick Gang, Jacek P. Dmochowski, Jonathan Berger, and Blair Kaneshiro. "Inter-subject Correlation While Listening to Minimalist Music: A Study of Electrophysiological and Behavioral Responses to Steve Reich's Piano Phase." Frontiers in Neuroscience 15 (December 9, 2021). http://dx.doi.org/10.3389/fnins.2021.702067.

Full text
Abstract:
Musical minimalism utilizes the temporal manipulation of restricted collections of rhythmic, melodic, and/or harmonic materials. One example, Steve Reich's Piano Phase, offers listeners readily audible formal structure with unpredictable events at the local level. For example, pattern recurrences may generate strong expectations which are violated by small temporal and pitch deviations. A hyper-detailed listening strategy prompted by these minute deviations stands in contrast to the type of listening engagement typically cultivated around functional tonal Western music. Recent research has suggested that the inter-subject correlation (ISC) of electroencephalographic (EEG) responses to natural audio-visual stimuli objectively indexes a state of "engagement," demonstrating the potential of this approach for analyzing music listening. But can ISCs capture engagement with minimalist music, which features less obvious expectation formation and has historically received a wide range of reactions? To approach this question, we collected EEG and continuous behavioral (CB) data while 30 adults listened to an excerpt from Steve Reich's Piano Phase, as well as three controlled manipulations and a popular-music remix of the work. Our analyses reveal that EEG and CB ISC are highest for the remix stimulus and lowest for our most repetitive manipulation; that there are no statistical differences in overall EEG ISC between our most musically meaningful manipulations and Reich's original piece; and that compositional features drove engagement in time-resolved ISC analyses. We also found that aesthetic evaluations corresponded well with overall EEG ISC. Finally, we highlight co-occurrences between stimulus events and time-resolved EEG and CB ISC. We offer the CB paradigm as a useful analysis measure and note the value of minimalist compositions as a limit case for the neuroscientific study of music listening. 
Overall, our participants' neural, continuous behavioral, and questionnaire responses showed strong similarities that may help refine our understanding of the type of engagement indexed by ISC for musical stimuli.
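The ISC method referenced in this line of work typically uses correlated components analysis; a simplified leave-one-out variant conveys the core idea of correlating each subject's response with the average of the others. The sketch below uses synthetic data and is illustrative only:

```python
import numpy as np

def isc(data):
    """Leave-one-out inter-subject correlation.
    data: (n_subjects, n_samples) responses to the same stimulus.
    Returns one Pearson r per subject versus the mean of the others."""
    rs = []
    for i in range(data.shape[0]):
        others = np.delete(data, i, axis=0).mean(axis=0)
        rs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(rs)

# A shared stimulus-driven signal plus subject-specific noise yields positive ISC.
rng = np.random.default_rng(1)
shared = rng.normal(size=5000)
data = shared + 0.5 * rng.normal(size=(10, 5000))
scores = isc(data)
```

The more of each subject's response that is driven by the common stimulus rather than idiosyncratic activity, the higher the per-subject correlations, which is what licenses the interpretation of ISC as an engagement index.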
APA, Harvard, Vancouver, ISO, and other styles
50

Choy, Chi S., Qiang Fang, Katrina Neville, Bingrui Ding, Akshay Kumar, Seedahmed S. Mahmoud, Xudong Gu, Jianming Fu, and Beth Jelfs. "Virtual reality and motor imagery for early post-stroke rehabilitation." BioMedical Engineering OnLine 22, no. 1 (July 5, 2023). http://dx.doi.org/10.1186/s12938-023-01124-9.

Full text
Abstract:
Background: Motor impairment is a common consequence of stroke, causing difficulty in independent movement. The first month of post-stroke rehabilitation is the most effective period for recovery. Movement imagination, known as motor imagery, in combination with virtual reality may provide a way for stroke patients with severe motor disabilities to begin rehabilitation. Methods: The aim of this study is to verify whether motor imagery and virtual reality help to activate stroke patients' motor cortex. 16 acute/subacute (< 6 months) stroke patients participated in this study. All participants performed motor imagery of basketball shooting, which involved the following tasks: listening to audio instruction only, watching a basketball shooting animation in 3D with audio, and also performing motor imagery afterwards. Electroencephalogram (EEG) data were recorded for analysis of motor-related features of the brain, such as power spectra in the α and β frequency bands and spectral entropy. 18 EEG channels over the motor cortex were used for all stroke patients. Results: All results are normalised relative to all tasks for each participant. The power spectral densities peak near the α band for all participants, and also near the β band for some participants. Tasks with instructions during motor imagery generally show greater power spectral peaks. The p-values of the Wilcoxon signed-rank test for band power comparison across the 18 EEG channels between different pairs of tasks indicate, at the 0.01 significance level, that the band powers differ for most tasks performed by the stroke subjects. The motor cortex of most stroke patients is more active when virtual reality is involved during motor imagery, as indicated by their respective scalp maps of band power and spectral entropy. 
Conclusion: The activation of stroke patients' motor cortices observed in this study provides evidence that it is induced by imagination of movement and that virtual reality supports motor imagery. The framework of the current study also provides an efficient way to investigate motor imagery and virtual reality during post-stroke rehabilitation.
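The band power, spectral entropy, and channel-wise Wilcoxon signed-rank comparison described in the abstract can be sketched as follows; all signal parameters and the synthetic two-condition data are assumptions for illustration, not the study's recordings:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import wilcoxon

def band_power_and_entropy(eeg, fs, band=(8, 13)):
    """Band power (alpha by default) and normalized spectral entropy
    from a Welch PSD estimate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    power = psd[mask].sum() * (freqs[1] - freqs[0])
    p = psd / psd.sum()               # PSD as a probability distribution
    entropy = -np.sum(p * np.log2(p + 1e-12)) / np.log2(len(p))
    return power, entropy

# Paired comparison of alpha power over 18 channels in two synthetic conditions.
rng = np.random.default_rng(2)
fs, n_ch = 256, 18
t = np.arange(0, 8, 1 / fs)
task_a = [np.sin(2 * np.pi * 10 * t) + rng.normal(size=len(t)) for _ in range(n_ch)]
task_b = [0.2 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=len(t)) for _ in range(n_ch)]
pow_a = [band_power_and_entropy(x, fs)[0] for x in task_a]
pow_b = [band_power_and_entropy(x, fs)[0] for x in task_b]
stat, p_value = wilcoxon(pow_a, pow_b)  # paired, channel-wise comparison
```

Pairing the 18 channel values per task, as the study does, lets the signed-rank test detect a consistent band-power shift between conditions without assuming normality.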
APA, Harvard, Vancouver, ISO, and other styles
