To view other types of publications on this topic, follow the link: Classification of biomedical time series.

Journal articles on the topic "Classification of biomedical time series"


Consult the top 50 journal articles for research on the topic "Classification of biomedical time series".

Next to each work in the list there is an "Add to bibliography" option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scientific publication in PDF format and read its online abstract, provided the relevant parameters are available in the metadata.

Browse journal articles across a wide range of disciplines and compile your bibliography correctly.

1

Ramanujam, E., and S. Padmavathi. "Genetic time series motif discovery for time series classification." International Journal of Biomedical Engineering and Technology 31, no. 1 (2019): 47. http://dx.doi.org/10.1504/ijbet.2019.101051.

2

Jin, Lin-peng, and Jun Dong. "Ensemble Deep Learning for Biomedical Time Series Classification." Computational Intelligence and Neuroscience 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/6212684.

Abstract:
Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.
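The "Simple Average" fusion mentioned in this abstract is easy to illustrate. The sketch below (generic NumPy code with made-up probabilities, not the authors' implementation) averages the class probabilities of several ensemble members and picks the class with the highest mean score.

```python
import numpy as np

# Hypothetical softmax outputs of three ensemble members for 4 ECG segments
# and 2 classes (rows: segments, columns: class probabilities).
member_probs = [
    np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]]),
    np.array([[0.8, 0.2], [0.55, 0.45], [0.3, 0.7], [0.6, 0.4]]),
    np.array([[0.85, 0.15], [0.35, 0.65], [0.25, 0.75], [0.65, 0.35]]),
]

# Simple Average fusion: mean over members, then argmax per segment.
avg_probs = np.mean(member_probs, axis=0)
predicted_class = np.argmax(avg_probs, axis=1)
print(predicted_class)  # e.g. [0 1 1 0]
```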
3

Ivaturi, Praharsh, Matteo Gadaleta, Amitabh C. Pandey, Michael Pazzani, Steven R. Steinhubl, and Giorgio Quer. "A Comprehensive Explanation Framework for Biomedical Time Series Classification." IEEE Journal of Biomedical and Health Informatics 25, no. 7 (July 2021): 2398–408. http://dx.doi.org/10.1109/jbhi.2021.3060997.

4

Wang, Jin, Ping Liu, Mary F. H. She, Saeid Nahavandi, and Abbas Kouzani. "Bag-of-words representation for biomedical time series classification." Biomedical Signal Processing and Control 8, no. 6 (November 2013): 634–44. http://dx.doi.org/10.1016/j.bspc.2013.06.004.

5

Ku-Maldonado, Carlos Alejandro, and Erik Molino-Minero-Re. "Performance Evaluation of Biomedical Time Series Transformation Methods for Classification Tasks." Revista Mexicana de Ingeniería Biomédica 44, no. 4 (August 17, 2023): 105–16. http://dx.doi.org/10.17488/rmib.44.4.7.

Abstract:
The extraction of time series features is essential across various fields, yet it remains a challenging endeavor. Therefore, it's crucial to identify appropriate methods capable of extracting pertinent information that can significantly enhance classification performance. Among these methods are those that translate time series into different domains. This study investigates three distinct time series transformation approaches for addressing time series classification challenges within biomedical data. The first method involves a response vector transformation, while the other two employ image transformation techniques: RandOm Convolutional KErnel Transform (ROCKET), Gramian Angular Fields, and Markov Transition Fields. These transformation methods were applied to five biomedical datasets, exploring various format configurations to ascertain the optimal representation technique and configuration for input, which in turn improves classification performance. Evaluations were conducted on the effectiveness of these methods in conjunction with two classification algorithms. The outcomes underscore the significance of these time series transformation techniques as facilitators for enhanced classification algorithms documented in current literature.
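To make the image-transformation idea concrete, here is a minimal Gramian Angular Summation Field sketch in NumPy. It is generic illustration code, not the implementation evaluated in the paper: the series is rescaled to [-1, 1], mapped to polar angles, and expanded into a 2D matrix that an image classifier can consume.

```python
import numpy as np

def gramian_angular_summation_field(x):
    """Encode a 1D time series as a GASF image (values in [-1, 1])."""
    x = np.asarray(x, dtype=float)
    # Rescale the series to [-1, 1].
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    # Polar encoding: angle phi = arccos(x_scaled).
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # GASF(i, j) = cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

signal = np.sin(np.linspace(0, 4 * np.pi, 128))  # toy biomedical-like signal
image = gramian_angular_summation_field(signal)
print(image.shape)  # (128, 128)
```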
6

Gupta, R., A. Mittal, K. Singh, V. Narang, and S. Roy. "Time-series approach to protein classification problem." IEEE Engineering in Medicine and Biology Magazine 28, no. 4 (July 2009): 32–37. http://dx.doi.org/10.1109/memb.2009.932903.

7

Wang, Will Ke, Ina Chen, Leeor Hershkovich, Jiamu Yang, Ayush Shetty, Geetika Singh, Yihang Jiang, et al. "A Systematic Review of Time Series Classification Techniques Used in Biomedical Applications." Sensors 22, no. 20 (October 20, 2022): 8016. http://dx.doi.org/10.3390/s22208016.

Abstract:
Background: Digital clinical measures collected via various digital sensing technologies such as smartphones, smartwatches, wearables, and ingestible and implantable sensors are increasingly used by individuals and clinicians to capture the health outcomes or behavioral and physiological characteristics of individuals. Time series classification (TSC) is very commonly used for modeling digital clinical measures. While deep learning models for TSC are very common and powerful, there exist some fundamental challenges. This review presents the non-deep learning models that are commonly used for time series classification in biomedical applications that can achieve high performance. Objective: We performed a systematic review to characterize the techniques that are used in time series classification of digital clinical measures throughout all the stages of data processing and model building. Methods: We conducted a literature search on PubMed, as well as the Institute of Electrical and Electronics Engineers (IEEE), Web of Science, and SCOPUS databases using a range of search terms to retrieve peer-reviewed articles that report on the academic research about digital clinical measures from a five-year period between June 2016 and June 2021. We identified and categorized the research studies based on the types of classification algorithms and sensor input types. Results: We found 452 papers in total from four different databases: PubMed, IEEE, Web of Science Database, and SCOPUS. After removing duplicates and irrelevant papers, 135 articles remained for detailed review and data extraction. Among these, engineered features using time series methods that were subsequently fed into widely used machine learning classifiers were the most commonly used technique, and also most frequently achieved the best performance metrics (77 out of 135 articles). Statistical modeling (24 out of 135 articles) algorithms were the second most common and also the second-best classification technique. Conclusions: In this review paper, summaries of the time series classification models and interpretation methods for biomedical applications are summarized and categorized. While high time series classification performance has been achieved in digital clinical, physiological, or biomedical measures, no standard benchmark datasets, modeling methods, or reporting methodology exist. There is no single widely used method for time series model development or feature interpretation, however many different methods have proven successful.
8

Lemus, Mariano, João P. Beirão, Nikola Paunković, Alexandra M. Carvalho, and Paulo Mateus. "Information-Theoretical Criteria for Characterizing the Earliness of Time-Series Data." Entropy 22, no. 1 (December 30, 2019): 49. http://dx.doi.org/10.3390/e22010049.

Abstract:
Biomedical signals constitute time-series that sustain machine learning techniques to achieve classification. These signals are complex with measurements of several features over, eventually, an extended period. Characterizing whether the data can anticipate prediction is an essential task in time-series mining. The ability to obtain information in advance by having early knowledge about a specific event may be of great utility in many areas. Early classification arises as an extension of the time-series classification problem, given the need to obtain a reliable prediction as soon as possible. In this work, we propose an information-theoretic method, named Multivariate Correlations for Early Classification (MCEC), to characterize the early classification opportunity of a time-series. Experimental validation is performed on synthetic and benchmark data, confirming the ability of the MCEC algorithm to perform a trade-off between accuracy and earliness in a wide-spectrum of time-series data, such as those collected from sensors, images, spectrographs, and electrocardiograms.
9

Athavale, Yashodhan, Sridhar Krishnan, and Aziz Guergachi. "Pattern Classification of Signals Using Fisher Kernels." Mathematical Problems in Engineering 2012 (2012): 1–15. http://dx.doi.org/10.1155/2012/467175.

Abstract:
The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research work has indicated that Fisher kernels have shown substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we will exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we have collected financial time-series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher Kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into feature space, which is created as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher Kernel usage fits really well for both kinds of signals, with low classification error rates.
10

Carreiro, André V., Orlando Anunciação, João A. Carriço, and Sara C. Madeira. "Prognostic Prediction through Biclustering-Based Classification of Clinical Gene Expression Time Series." Journal of Integrative Bioinformatics 8, no. 3 (December 1, 2011): 73–89. http://dx.doi.org/10.1515/jib-2011-175.

Abstract:
Summary The constant drive towards a more personalized medicine led to an increasing interest in temporal gene expression analyzes. It is now broadly accepted that considering a temporal perspective represents a great advantage to better understand disease progression and treatment results at a molecular level. In this context, biclustering algorithms emerged as an important tool to discover local expression patterns in biomedical applications, and CCC-Biclustering arose as an efficient algorithm relying on the temporal nature of data to identify all maximal temporal patterns in gene expression time series. In this work, CCC-Biclustering was integrated in new biclustering-based classifiers for prognostic prediction. As case study we analyzed multiple gene expression time series in order to classify the response of Multiple Sclerosis patients to the standard treatment with Interferon-β, to which nearly half of the patients reveal a negative response. In this scenario, using an effective predictive model of a patient’s response would avoid useless and possibly harmful therapies for the non-responder group. The results revealed interesting potentialities to be further explored in classification problems involving other (clinical) time series.
11

Piepjohn, Patricia, Christin Bald, Gregor Kuhlenbäumer, Jos Steffen Becktepe, Günther Deuschl, and Gerhard Schmidt. "Real-time classification of movement patterns of tremor patients." Biomedical Engineering / Biomedizinische Technik 67, no. 2 (February 24, 2022): 119–30. http://dx.doi.org/10.1515/bmt-2021-0140.

Abstract:
Abstract The process of diagnosing tremor patients often leads to misdiagnoses. Therefore, existing technical methods for analysing tremor are needed to more effectively distinguish between different diseases. For this purpose, a system has been developed that classifies measured tremor signals in real time. To achieve this, the hand tremor of 561 subjects has been measured in different hand positions. Acceleration and surface electromyography are recorded during the examination. For this study, data from subjects with Parkinson’s Disease, Essential Tremor, and physiological tremor are considered. In a first signal analysis feature extraction is performed, and the resulting features are examined for their discriminative value. In a second step, three classification models based on different pattern recognition techniques are developed to classify the subjects with respect to their tremor type. With a trained decision tree, the three tremor types can be classified with a relative diagnostic accuracy of 83.14%. A neural network achieves 84.24% and the combination of both classifiers yields a relative diagnostic accuracy of 85.76%. The approach is promising and involving more features of the recorded time series will improve the discriminative value.
12

Fulcher, Ben D., Max A. Little, and Nick S. Jones. "Highly comparative time-series analysis: the empirical structure of time series and their methods." Journal of The Royal Society Interface 10, no. 83 (June 6, 2013): 20130048. http://dx.doi.org/10.1098/rsif.2013.0048.

Abstract:
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording and analysing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series, and over 9000 time-series analysis algorithms are analysed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heartbeat intervals, speech signals and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
13

Gamidullaeva, Leyla Ayvarovna, and Vsevolod Chernyshenko. "Using Decision-Making Block of Computer-Based Intelligent Biomedical Avatar for Applied Research in Bioinformatics." International Journal of Applied Research in Bioinformatics 9, no. 2 (July 2019): 24–34. http://dx.doi.org/10.4018/ijarb.2019070102.

Abstract:
A biomedical task in which the definitions and properties of applied research indicators under study in bioinformatics is formalized. A wide range of traditional approaches used for predicting medical time series were reviewed. Advanced algorithms for predicting moments of reversals of biomedical trends based on machine learning tools were investigated as well. The effectiveness of different kinds of approaches was discussed, and related examples are given. An original securities price dynamics trend classification algorithm, based on the use of the sliding window methodology and biomedical avatar, is described. A general scheme of the classification algorithm to identify biomedical market phases is analyzed and results of computer modelling are presented. Selection of initial and resulting metrics is grounded.
14

Alarcón, Ángel Serrano, Natividad Martínez Madrid, Ralf Seepold, and Juan Antonio Ortega Ramirez. "Main requirements of end-to-end deep learning models for biomedical time series classification in healthcare environments." Procedia Computer Science 207 (2022): 3038–46. http://dx.doi.org/10.1016/j.procs.2022.09.532.

15

Carreiro, André V., Artur J. Ferreira, Mário A. T. Figueiredo, and Sara C. Madeira. "Towards a Classification Approach using Meta-Biclustering: Impact of Discretization in the Analysis of Expression Time Series." Journal of Integrative Bioinformatics 9, no. 3 (December 1, 2012): 105–20. http://dx.doi.org/10.1515/jib-2012-207.

Abstract:
Summary Biclustering has been recognized as a remarkably effective method for discovering local temporal expression patterns and unraveling potential regulatory mechanisms, essential to understanding complex biomedical processes, such as disease progression and drug response. In this work, we propose a classification approach based on meta-biclusters (a set of similar biclusters) applied to prognostic prediction. We use real clinical expression time series to predict the response of patients with multiple sclerosis to treatment with Interferon-β. As compared to previous approaches, the main advantages of this strategy are the interpretability of the results and the reduction of data dimensionality, due to biclustering. This would allow the identification of the genes and time points which are most promising for explaining different types of response profiles, according to clinical knowledge. We assess the impact of different unsupervised and supervised discretization techniques on the classification accuracy. The experimental results show that, in many cases, the use of these discretization methods improves the classification accuracy, as compared to the use of the original features.
16

Zhang, Yinghui, Fengyuan Zhang, Yantong Cui, and Ruoci Ning. "CLASSIFICATION OF BIOMEDICAL IMAGES USING CONTENT BASED IMAGE RETRIEVAL SYSTEMS." International Journal of Engineering Technologies and Management Research 5, no. 2 (February 8, 2020): 181–89. http://dx.doi.org/10.29121/ijetmr.v5.i2.2018.161.

Abstract:
Because of the numerous application of Content-based image retrieval (CBIR) system in various areas, it has always remained a topic of keen interest by the researchers. Fetching of the most similar image from the complete repository by comparing it to the input image in the minimum span of time is the main task of the CBIR. The purpose of the CBIR can vary from different types of requirements like a diagnosis of the illness by the physician, crime investigation, product recommendation by the e-commerce companies, etc. In the present work, CBIR is used for finding the similar patients having Breast cancer. Gray-Level Co-Occurrence Matrix along with histogram and correlation coefficient is used for creating CBIR system. Comparing the images of the area of interest of a present patient with the complete series of the image of a past patient can help in early diagnosis of the disease. CBIR is so much effective that even when the symptoms are not shown by the body the disease can be diagnosed from the sample images.
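A minimal sketch of the histogram-plus-correlation retrieval step described above, assuming grayscale images stored as NumPy arrays. This is illustrative only and omits the GLCM texture features used in the paper.

```python
import numpy as np

def histogram_signature(image, bins=64):
    """Normalised grey-level histogram used as a simple image signature."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255), density=True)
    return hist

def retrieve_most_similar(query, database):
    """Index of the database image whose histogram correlates best with the query."""
    q = histogram_signature(query)
    scores = [np.corrcoef(q, histogram_signature(img))[0, 1] for img in database]
    return int(np.argmax(scores))

# Toy example with random "images".
rng = np.random.default_rng(0)
database = [rng.integers(0, 256, size=(128, 128)) for _ in range(5)]
query = database[3] + rng.normal(0, 5, size=(128, 128))
print(retrieve_most_similar(query, database))  # most likely 3
```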
17

Lipponen, Jukka A., and Mika P. Tarvainen. "A robust algorithm for heart rate variability time series artefact correction using novel beat classification." Journal of Medical Engineering & Technology 43, no. 3 (April 3, 2019): 173–81. http://dx.doi.org/10.1080/03091902.2019.1640306.

18

Jackson, Rhydon, Debra Knisley, Cecilia McIntosh, and Phillip Pfeiffer. "Predicting Flavonoid UGT Regioselectivity." Advances in Bioinformatics 2011 (June 30, 2011): 1–15. http://dx.doi.org/10.1155/2011/506583.

Abstract:
Machine learning was applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary sequence. Novel indices characterizing graphical models of residues were proposed and found to be widely distributed among existing amino acid indices and to cluster residues appropriately. UGT subsequences biochemically linked to regioselectivity were modeled as sets of index sequences. Several learning techniques incorporating these UGT models were compared with classifications based on standard sequence alignment scores. These techniques included an application of time series distance functions to protein classification. Time series distances defined on the index sequences were used in nearest neighbor and support vector machine classifiers. Additionally, Bayesian neural network classifiers were applied to the index sequences. The experiments identified improvements over the nearest neighbor and support vector machine classifications relying on standard alignment similarity scores, as well as strong correlations between specific subsequences and regioselectivities.
19

JO, YONG-UN, and DO-CHANG OH. "REAL-TIME HAND GESTURE CLASSIFICATION USING CRNN WITH SCALE AVERAGE WAVELET TRANSFORM." Journal of Mechanics in Medicine and Biology 20, no. 10 (December 2020): 2040028. http://dx.doi.org/10.1142/s021951942040028x.

Abstract:
It is very useful in the human computer interface to quickly and accurately recognize human hand movements in real time. In this paper, we aimed to robustly recognize hand gestures in real time using Convolutional Recurrent Neural Network (CRNN) with pre-processing and overlapping window. The CRNN is a deep learning model that combines Long Short-Term Memory (LSTM) for time-series information classification and Convolutional Neural Network (CNN) for feature extraction. The sensor for hand gesture detection uses Myo-armband, and six hand gestures are recognized and classified, including two grips, three hand signs, and one rest. As the essential pre-processing due to the characteristics of EMG data, the existing Short Time Fourier Transform (STFT), Continuous-time Wavelet Transform (CWT), and newly proposed Scale Average Wavelet Transform (SAWT) are used, and thus, the SAWT showed relatively high accuracy in the stationary environmental test. The CRNN with overlapping window has been proposed that can improve the degradation of real-time prediction accuracy, which is caused by inconsistent start time and hand motion speed when acquiring the EMG signal. In the stationary environmental test, the CRNN model with SAWT and overlapping window showed the highest accuracy of 92.5%. In the real-time environmental test, for all subjects learning, 80% accuracy and 0.99 s time delay were obtained on average, and for individual learning, 91.5% accuracy and 0.32 s time delay were obtained on average. As a result, in both stationary and real-time tests, the CRNN with SAWT and overlapping window showed better performance than the other methods.
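The overlapping-window step can be sketched as follows; the window and step sizes are assumptions for illustration, not the parameters used in the paper. Because consecutive windows share most of their samples, the classifier produces predictions far more often than once per window length.

```python
import numpy as np

def overlapping_windows(signal, window_len, step):
    """Split a (samples, channels) EMG array into overlapping windows."""
    windows = []
    for start in range(0, signal.shape[0] - window_len + 1, step):
        windows.append(signal[start:start + window_len])
    return np.stack(windows)

emg = np.random.randn(2000, 8)                             # 2000 samples, 8 channels (toy data)
batch = overlapping_windows(emg, window_len=200, step=50)  # 75% overlap
print(batch.shape)                                         # (37, 200, 8), one prediction per window
```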
20

Gao, Yongxiang, Zhi Zhao, Yimin Chen, Gehendra Mahara, Jialing Huang, Zhuochen Lin, and Jinxin Zhang. "Automatic epileptic seizure classification in multichannel EEG time series with linear discriminant analysis." Technology and Health Care 28, no. 1 (January 13, 2020): 23–33. http://dx.doi.org/10.3233/thc-181548.

21

Chambon, Stanislas, Mathieu N. Galtier, Pierrick J. Arnal, Gilles Wainrib, and Alexandre Gramfort. "A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series." IEEE Transactions on Neural Systems and Rehabilitation Engineering 26, no. 4 (April 2018): 758–69. http://dx.doi.org/10.1109/tnsre.2018.2813138.

22

Arami, Arash, Antonios Poulakakis-Daktylidis, Yen F. Tai, and Etienne Burdet. "Prediction of Gait Freezing in Parkinsonian Patients: A Binary Classification Augmented With Time Series Prediction." IEEE Transactions on Neural Systems and Rehabilitation Engineering 27, no. 9 (September 2019): 1909–19. http://dx.doi.org/10.1109/tnsre.2019.2933626.

23

Dursun, Gizem, Dunja Bijelić, Neşe Ayşit, Burcu Kurt Vatandaşlar, Lidija Radenović, Abdulkerim Çapar, Bilal Ersen Kerman, Pavle R. Andjus, Andrej Korenić, and Ufuk Özkaya. "Combined segmentation and classification-based approach to automated analysis of biomedical signals obtained from calcium imaging." PLOS ONE 18, no. 2 (February 6, 2023): e0281236. http://dx.doi.org/10.1371/journal.pone.0281236.

Abstract:
Automated screening systems in conjunction with machine learning-based methods are becoming an essential part of the healthcare systems for assisting in disease diagnosis. Moreover, manually annotating data and hand-crafting features for training purposes are impractical and time-consuming. We propose a segmentation and classification-based approach for assembling an automated screening system for the analysis of calcium imaging. The method was developed and verified using the effects of disease IgGs (from Amyotrophic Lateral Sclerosis patients) on calcium (Ca2+) homeostasis. From 33 imaging videos we analyzed, 21 belonged to the disease and 12 to the control experimental groups. The method consists of three main steps: projection, segmentation, and classification. The entire Ca2+ time-lapse image recordings (videos) were projected into a single image using different projection methods. Segmentation was performed by using a multi-level thresholding (MLT) step and the Regions of Interest (ROIs) that encompassed cell somas were detected. A mean value of the pixels within these boundaries was collected at each time point to obtain the Ca2+ traces (time-series). Finally, a new matrix called feature image was generated from those traces and used for assessing the classification accuracy of various classifiers (control vs. disease). The mean value of the segmentation F-score for all the data was above 0.80 throughout the tested threshold levels for all projection methods, namely maximum intensity, standard deviation, and standard deviation with linear scaling projection. Although the classification accuracy reached up to 90.14%, interestingly, we observed that achieving better scores in segmentation results did not necessarily correspond to an increase in classification performance. Our method takes the advantage of the multi-level thresholding and of a classification procedure based on the feature images, thus it does not have to rely on hand-crafted training parameters of each event. It thus provides a semi-autonomous tool for assessing segmentation parameters which allows for the best classification accuracy.
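The projection and trace-extraction stages can be outlined in a few lines of NumPy (hypothetical array sizes; a single percentile threshold stands in for the paper's multi-level thresholding):

```python
import numpy as np

# Toy calcium-imaging stack: (frames, height, width).
video = np.random.rand(300, 64, 64)

# Project the whole recording into single images.
max_proj = video.max(axis=0)   # maximum-intensity projection
std_proj = video.std(axis=0)   # standard-deviation projection

# Crude segmentation: keep the brightest pixels of the projection.
mask = max_proj > np.percentile(max_proj, 95)

# Ca2+ trace: mean intensity inside the mask at every time point.
trace = video[:, mask].mean(axis=1)
print(trace.shape)  # (300,)
```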
24

Liu, Chenxi, Israel Cohen, Rotem Vishinkin, and Hossam Haick. "Nanomaterial-Based Sensor Array Signal Processing and Tuberculosis Classification Using Machine Learning." Journal of Low Power Electronics and Applications 13, no. 2 (May 29, 2023): 39. http://dx.doi.org/10.3390/jlpea13020039.

Abstract:
Tuberculosis (TB) has long been recognized as a significant health concern worldwide. Recent advancements in noninvasive wearable devices and machine learning (ML) techniques have enabled rapid and cost-effective testing for the real-time detection of TB. However, small datasets are often encountered in biomedical and chemical engineering domains, which can hinder the success of ML models and result in overfitting issues. To address this challenge, we propose various data preprocessing methods and ML approaches, including long short-term memory (LSTM), convolutional neural network (CNN), Gramian angular field-CNN (GAF-CNN), and multivariate time series with MinCutPool (MT-MinCutPool), for classifying a small TB dataset consisting of multivariate time series (MTS) sensor signals. Our proposed methods are compared with state-of-the-art models commonly used in MTS classification (MTSC) tasks. We find that lightweight models are more appropriate for small-dataset problems. Our experimental results demonstrate that the average performance of our proposed models outperformed the baseline methods in all aspects. Specifically, the GAF-CNN model achieved the highest accuracy of 0.639 and the highest specificity of 0.777, indicating its superior effectiveness for MTSC tasks. Furthermore, our proposed MT-MinCutPool model surpassed the baseline MTPool model in all evaluation metrics, demonstrating its viability for MTSC tasks.
25

Arunachalam, S. P., S. Kapa, S. K. Mulpuru, P. A. Friedman, and E. G. Tolkacheva. "Improved Multiscale Entropy Technique with Nearest-Neighbor Moving-Average Kernel for Nonlinear and Nonstationary Short-Time Biomedical Signal Analysis." Journal of Healthcare Engineering 2018 (2018): 1–13. http://dx.doi.org/10.1155/2018/8632436.

Abstract:
Analysis of biomedical signals can yield invaluable information for prognosis, diagnosis, therapy evaluation, risk assessment, and disease prevention which is often recorded as short time series data that challenges existing complexity classification algorithms such as Shannon entropy (SE) and other techniques. The purpose of this study was to improve previously developed multiscale entropy (MSE) technique by incorporating nearest-neighbor moving-average kernel, which can be used for analysis of nonlinear and non-stationary short time series physiological data. The approach was tested for robustness with respect to noise analysis using simulated sinusoidal and ECG waveforms. Feasibility of MSE to discriminate between normal sinus rhythm (NSR) and atrial fibrillation (AF) was tested on a single-lead ECG. In addition, the MSE algorithm was applied to identify pivot points of rotors that were induced in ex vivo isolated rabbit hearts. The improved MSE technique robustly estimated the complexity of the signal compared to that of SE with various noises, discriminated NSR and AF on single-lead ECG, and precisely identified the pivot points of ex vivo rotors by providing better contrast between the rotor core and the peripheral region. The improved MSE technique can provide efficient complexity analysis of variety of nonlinear and nonstationary short-time biomedical signals.
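For orientation, a compact sketch of the standard multiscale-entropy pipeline, coarse-graining followed by sample entropy. This is the textbook variant with non-overlapping averaging, not the nearest-neighbor moving-average kernel proposed in the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1D signal (brute force, for short series)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Matches within tolerance, excluding self-comparisons.
        return (dist <= r).sum() - len(templates)
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5), m=2):
    """Coarse-grain by non-overlapping averaging, then compute SampEn per scale."""
    x = np.asarray(x, dtype=float)
    values = []
    for s in scales:
        coarse = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        values.append(sample_entropy(coarse, m=m, r=0.2 * x.std()))
    return values

ecg_like = np.random.randn(1000)  # placeholder for a short ECG segment
print(multiscale_entropy(ecg_like))
```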
26

Tripathy, R. K., and U. Rajendra Acharya. "Use of features from RR-time series and EEG signals for automated classification of sleep stages in deep neural network framework." Biocybernetics and Biomedical Engineering 38, no. 4 (2018): 890–902. http://dx.doi.org/10.1016/j.bbe.2018.05.005.

27

Cuesta-Frau, David, Juan Pablo Murillo-Escobar, Diana Alexandra Orrego, and Edilson Delgado-Trejos. "Embedded Dimension and Time Series Length. Practical Influence on Permutation Entropy and Its Applications." Entropy 21, no. 4 (April 10, 2019): 385. http://dx.doi.org/10.3390/e21040385.

Abstract:
Permutation Entropy (PE) is a time series complexity measure commonly used in a variety of contexts, with medicine being the prime example. In its general form, it requires three input parameters for its calculation: time series length N, embedded dimension m, and embedded delay τ. Inappropriate choices of these parameters may potentially lead to incorrect interpretations. However, there are no specific guidelines for an optimal selection of N, m, or τ, only general recommendations such as N >> m!, τ = 1, or m = 3, …, 7. This paper deals specifically with the study of the practical implications of N >> m!, since long time series are often not available, or non-stationary, and other preliminary results suggest that low N values do not necessarily invalidate PE usefulness. Our study analyses the PE variation as a function of the series length N and embedded dimension m in the context of a diverse experimental set, both synthetic (random, spikes, or logistic model time series) and real-world (climatology, seismic, financial, or biomedical time series), and the classification performance achieved with varying N and m. The results seem to indicate that shorter lengths than those suggested by N >> m! are sufficient for a stable PE calculation, and even very short time series can be robustly classified based on PE measurements before the stability point is reached. This may be due to the fact that there are forbidden patterns in chaotic time series, not all the patterns are equally informative, and differences among classes are already apparent at very short lengths.
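For reference, a minimal permutation-entropy implementation with the three parameters discussed in the abstract (N is simply the length of the input series; generic code, not the authors'):

```python
import numpy as np
from math import factorial
from collections import Counter

def permutation_entropy(x, m=4, tau=1, normalize=True):
    """Permutation entropy of a 1D series with embedding dimension m and delay tau."""
    x = np.asarray(x, dtype=float)
    n_vectors = len(x) - (m - 1) * tau
    # Count ordinal patterns of each embedded vector.
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n_vectors)
    )
    p = np.array(list(patterns.values()), dtype=float) / n_vectors
    pe = -np.sum(p * np.log(p))
    return pe / np.log(factorial(m)) if normalize else pe

# Even a fairly short series gives a usable value for small m (cf. the paper's point).
rng = np.random.default_rng(1)
print(permutation_entropy(rng.standard_normal(200), m=4, tau=1))      # close to 1 (random)
print(permutation_entropy(np.sin(np.arange(200) * 0.3), m=4, tau=1))  # much lower (periodic)
```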
28

Wang, Jialing, Shiwei Cheng, Jieming Tian, and Yuefan Gao. "A 2D CNN-LSTM hybrid algorithm using time series segments of EEG data for motor imagery classification." Biomedical Signal Processing and Control 83 (May 2023): 104627. http://dx.doi.org/10.1016/j.bspc.2023.104627.

29

BAI, G. MERCY, and P. VENKADESH. "TAYLOR–MONARCH BUTTERFLY OPTIMIZATION-BASED SUPPORT VECTOR MACHINE FOR ACUTE LYMPHOBLASTIC LEUKEMIA CLASSIFICATION WITH BLOOD SMEAR MICROSCOPIC IMAGES." Journal of Mechanics in Medicine and Biology 21, no. 06 (June 21, 2021): 2150041. http://dx.doi.org/10.1142/s021951942150041x.

Abstract:
Acute lymphoblastic leukemia (ALL) is a serious hematological neoplasis that is characterized by the development of immature and abnormal growth of lymphoblasts. However, microscopic examination of bone marrow is the only way to achieve leukemia detection. Various methods are developed for automatic leukemia detection, but these methods are costly and time-consuming. Hence, an effective leukemia detection approach is designed using the proposed Taylor–monarch butterfly optimization-based support vector machine (Taylor–MBO-based SVM). However, the proposed Taylor–MBO is designed by integrating the Taylor series and MBO, respectively. The sparking process is designed to perform the automatic segmentation of blood smear images by estimating optimal threshold values. By extracting the features, such as texture features, statistical, and grid-based features from the segmented smear image, the performance of classification is increased with less training time. The kernel function of SVM is enabled to perform the leukemia classification such that the proposed Taylor–MBO algorithm accomplishes the training process of SVM. However, the proposed Taylor–MBO-based SVM obtained better performance using the metrics, such as accuracy, sensitivity, and specificity, with 94.5751, 95.526, and 94.570%, respectively.
30

Chang, Yuan-Hsiang, Kuniya Abe, Hideo Yokota, Kazuhiro Sudo, Yukio Nakamura, and Ming-Dar Tsai. "HUMAN INDUCED PLURIPOTENT STEM CELL REGION DETECTION IN BRIGHT-FIELD MICROSCOPY IMAGES USING CONVOLUTIONAL NEURAL NETWORKS." Biomedical Engineering: Applications, Basis and Communications 31, no. 02 (April 2019): 1950009. http://dx.doi.org/10.4015/s1016237219500091.

Abstract:
Human induced pluripotent stem (iPS) cells represent an ideal source for patient specific cell-based regenerative medicine. For practical uses of iPS cells, large-scale, cost- and time-effective production of fully reprogrammed iPS cells from a number of patients should be achieved. To achieve this goal, culture protocols for inducing iPS cells as well as methods for selecting fully reprogrammed iPS cells in a mixture of cells which are still in reprogramming and non-iPS differentiated cells, should be improved. This paper proposes a convolutional neural network (CNN) structure to classify a bright-field microscopy image as respective probability images. Each probability image represents regions of differentiated cells, fully reprogrammed iPS cells or cells still in reprogramming, respectively. The CNN classifier was trained by multiple types of image patches which represent differentiated, reprogramming and reprogrammed iPS cells, etc. Classification of an image containing the confirmed iPS cells by the trained CNN classifier shows that high classification accuracy can be achieved. Classifications of sets of time-lapse microscopy images show that growth and transition from CD34+ human cord blood cells through reprogramming to reprogrammed iPS cells can be visualized and quantitatively analyzed by the output time-series probability images. These experiment results show our CNN structure yields a potential tool to detect the differentiated cells that possibly undergo reprogramming to iPS cells for screening reagents or culture conditions in human iPS induction, and ultimately further understand the ideal culturing conditions for practical use in regenerative medicine.
31

Resta, Michele, Anna Monreale, and Davide Bacciu. "Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals." Entropy 23, no. 8 (August 17, 2021): 1064. http://dx.doi.org/10.3390/e23081064.

Abstract:
The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research dealing with the development of machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial aspect to be addressed in order to increase the trust and adoption of AI technology. In this paper, we propose a model agnostic explanation method, based on occlusion, that enables the learning of the input’s influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models that are typically used to deal with data of such nature, i.e., recurrent neural networks. Our approach is able to provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make aware decisions. A wide experimentation on different physiological data demonstrates the effectiveness of our approach both in classification and regression tasks.
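The underlying occlusion idea can be sketched in a framework-agnostic way; the model callable, window size, and fill value below are placeholders rather than details from the paper. Each time window is masked in turn, and the change in the model's output score indicates how influential that segment was.

```python
import numpy as np

def occlusion_importance(model, x, window=25, fill_value=0.0):
    """Relevance of each window of a 1D signal for a scalar model output.

    model: any callable mapping a 1D array to a prediction score (placeholder).
    """
    baseline = model(x)
    importance = np.zeros(len(x))
    for start in range(0, len(x), window):
        occluded = x.copy()
        occluded[start:start + window] = fill_value
        # A large score change means the occluded segment mattered for the prediction.
        importance[start:start + window] = abs(baseline - model(occluded))
    return importance

# Toy model: "predicts" the energy in the second half of the signal.
toy_model = lambda s: float(np.sum(s[len(s) // 2:] ** 2))
signal = np.concatenate([np.zeros(100), np.ones(100)])
print(occlusion_importance(toy_model, signal, window=50)[::50])  # nonzero only in 2nd half
```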
32

Dissanayake, W. M. N. D., and Maheshi B. Dissanayake. "A Novel LSTM-based Data Synthesis Approach for Performance Improvement in Detecting Epileptic Seizures." WSEAS TRANSACTIONS ON BIOLOGY AND BIOMEDICINE 20 (October 10, 2023): 132–39. http://dx.doi.org/10.37394/23208.2023.20.13.

Abstract:
Bio-electrical time signals play a significant role in assisting non-invasive observational procedures in healthcare. These bioelectrical signals are weak signals with inherently low voltage and low frequency, hidden mostly under relatively large high-voltage noise signals. Hence it is extra challenging to analyze them. In modern clinical data analysis, these signals could be further analyzed using conventional machine learning (ML) methods. Also, in the recent past, two-dimensional spectrum-based classification, predominantly with Convolutional Neural Networks (CNN), has been tried with time-series data. One of the objectives of this study is to find which approach would suit better for biomedical signal analysis when data are scarce and signals are weak. Also, in bio-medical signal analysis data is scarce. Yet, to effectively train either an ML or a deep learning (DL) model, a sample clinical dataset of a significant size is required. Hence, the second objective of this research is to present a novel data synthesis method to address data scarcity. With these objectives, the study compares the performance of the time-series-based classification with traditional ML approaches, against the 2D spectrum-based classification for bio-electrical signal classification. For this purpose the study utilizes the learning models Multi-layer Perceptron (MLP), Recurrent Neural Network (RNN), Gated Recurrent Unit (GRU), Long Short-Term Memory Networks (LSTMs), Auto Encoder (AE), and Convolutional Neural Network (CNN). Also, the authors propose a novel data synthesis method based on LSTMs to improve the sample size of the standard CHB-MIT Scalp EEG dataset. The results show that with the expanded dataset, the two-dimensional spectrum-based classification architecture was able to achieve a precision level of 85% at the classification. The conventional ML-based methods showed on average a precision level of 82%. In conclusion, with the proposed virtual sample generation approach, 2D spectrum-based classification with Convolutional Neural Networks showed promising performances.
33

Zhu, Mengyun, Ximin Fan, Weijing Liu, Jianying Shen, Wei Chen, Yawei Xu, and Xuejing Yu. "Artificial Intelligence-Based Echocardiographic Left Atrial Volume Measurement with Pulmonary Vein Comparison." Journal of Healthcare Engineering 2021 (December 6, 2021): 1–11. http://dx.doi.org/10.1155/2021/1336762.

Abstract:
This paper combines echocardiographic signal processing and artificial intelligence technology to propose a deep neural network model adapted to echocardiographic signals to achieve left atrial volume measurement and automatic assessment of pulmonary veins efficiently and quickly. Based on the echocardiographic signal generation mechanism and detection method, an experimental scheme for the echocardiographic signal acquisition was designed. The echocardiographic signal data of healthy subjects were measured in four different experimental states, and a database of left atrial volume measurements and pulmonary veins was constructed. Combining the correspondence between ECG signals and echocardiographic signals in the time domain, a series of preprocessing such as denoising, feature point localization, and segmentation of the cardiac cycle was realized by wavelet transform and threshold method to complete the data collection. This paper proposes a comparative model based on artificial intelligence, adapts to the characteristics of one-dimensional time-series echocardiographic signals, automatically extracts the deep features of echocardiographic signals, effectively reduces the subjective influence of manual feature selection, and realizes the automatic classification and evaluation of human left atrial volume measurement and pulmonary veins under different states. The experimental results show that the proposed BP neural network model has good adaptability and classification performance in the tasks of LV volume measurement and pulmonary vein automatic classification evaluation and achieves an average test accuracy of over 96.58%. The average root-mean-square error percentage of signal compression is only 0.65% by extracting the coding features of the original echocardiographic signal through the convolutional autoencoder, which completes the signal compression with low loss. Comparing the training time and classification accuracy of the LSTM network with the original signal and encoded features, the experimental results show that the AI model can greatly reduce the model training time cost and achieve an average accuracy of 97.97% in the test set and increase the real-time performance of the left atrial volume measurement and pulmonary vein evaluation as well as the security of the data transmission process, which is very important for the comparison of left atrial volume measurement and pulmonary vein. It is of great practical importance to compare left atrial volume measurements with pulmonary veins.
34

Szigeti, Balázs, Ajinkya Deogade, and Barbara Webb. "Searching for motifs in the behaviour of larval Drosophila melanogaster and Caenorhabditis elegans reveals continuity between behavioural states." Journal of The Royal Society Interface 12, no. 113 (December 2015): 20150899. http://dx.doi.org/10.1098/rsif.2015.0899.

Abstract:
We present a novel method for the unsupervised discovery of behavioural motifs in larval Drosophila melanogaster and Caenorhabditis elegans . A motif is defined as a particular sequence of postures that recurs frequently. The animal's changing posture is represented by an eigenshape time series, and we look for motifs in this time series. To find motifs, the eigenshape time series is segmented, and the segments clustered using spline regression. Unlike previous approaches, our method can classify sequences of unequal duration as the same motif. The behavioural motifs are used as the basis of a probabilistic behavioural annotator, the eigenshape annotator (ESA). Probabilistic annotation avoids rigid threshold values and allows classification uncertainty to be quantified. We apply eigenshape annotation to both larval Drosophila and C. elegans and produce a good match to hand annotation of behavioural states. However, we find many behavioural events cannot be unambiguously classified. By comparing the results with ESA of an artificial agent's behaviour, we argue that the ambiguity is due to greater continuity between behavioural states than is generally assumed for these organisms.
35

Chatterjee, Shre Kumar, Saptarshi Das, Koushik Maharatna, Elisa Masi, Luisa Santopolo, Stefano Mancuso, and Andrea Vitaletti. "Exploring strategies for classification of external stimuli using statistical features of the plant electrical response." Journal of The Royal Society Interface 12, no. 104 (March 2015): 20141225. http://dx.doi.org/10.1098/rsif.2014.1225.

Abstract:
Plants sense their environment by producing electrical signals which in essence represent changes in underlying physiological processes. These electrical signals, when monitored, show both stochastic and deterministic dynamics. In this paper, we compute 11 statistical features from the raw non-stationary plant electrical signal time series to classify the stimulus applied (causing the electrical signal). By using different discriminant analysis-based classification techniques, we successfully establish that there is enough information in the raw electrical signal to classify the stimuli. In the process, we also propose two standard features which consistently give good classification results for three types of stimuli—sodium chloride (NaCl), sulfuric acid (H2SO4) and ozone (O3). This may facilitate reduction in the complexity involved in computing all the features for online classification of similar external stimuli in future.
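A hedged sketch of the general recipe, a statistical feature vector per raw signal followed by a discriminant-analysis classifier, using scikit-learn; the feature set below is a small generic subset, not the paper's eleven features.

```python
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def signal_features(x):
    """A few generic statistical descriptors of a raw 1D signal."""
    return [np.mean(x), np.std(x), stats.skew(x), stats.kurtosis(x),
            np.mean(np.abs(np.diff(x)))]

rng = np.random.default_rng(0)
# Toy dataset: class 0 = white noise, class 1 = smoother (cumulative) noise.
signals = [rng.standard_normal(500) for _ in range(40)] + \
          [np.cumsum(rng.standard_normal(500)) * 0.1 for _ in range(40)]
X = np.array([signal_features(s) for s in signals])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, y, cv=5).mean())  # close to 1.0 on this easy toy data
```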
36

Uyulan, Caglar, Türker Tekin Ergüzel, and Nevzat Tarhan. "Entropy-based feature extraction technique in conjunction with wavelet packet transform for multi-mental task classification." Biomedical Engineering / Biomedizinische Technik 64, no. 5 (September 25, 2019): 529–42. http://dx.doi.org/10.1515/bmt-2018-0105.

Abstract:
Abstract Event-related mental task information collected from electroencephalography (EEG) signals, which are functionally related to different brain areas, possesses complex and non-stationary signal features. It is essential to be able to classify mental task information through the use in brain-computer interface (BCI) applications. This paper proposes a wavelet packet transform (WPT) technique merged with a specific entropy biomarker as a feature extraction tool to classify six mental tasks. First, the data were collected from a healthy control group and the multi-signal information comprised six mental tasks which were decomposed into a number of subspaces spread over a wide frequency spectrum by projecting six different wavelet basis functions. Later, the decomposed subspaces were subjected to three entropy-type statistical measure functions to extract the feature vectors for each mental task to be fed into a backpropagation time-recurrent neural network (BPTT-RNN) model. Cross-validated classification results demonstrated that the model could classify with 85% accuracy through a discrete Meyer basis function coupled with a Renyi entropy biomarker. The classifier model was finally tested in the Simulink platform to demonstrate the Fourier series representation of periodic signals by tracking the harmonic pattern. In order to boost the model performance, ant colony optimization (ACO)-based feature selection method was employed. The overall accuracy increased to 88.98%. The results underlined that the WPT combined with an entropy uncertainty measure methodology is both effective and versatile to discriminate the features of the signal localized in a time-frequency domain.
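The wavelet-packet plus entropy feature idea can be sketched with PyWavelets; the wavelet family, decomposition level, and entropy variant are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import pywt

def wpt_entropy_features(signal, wavelet="db4", level=4):
    """Subband energies plus the Shannon entropy of their normalised distribution."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")           # terminal subbands, low to high frequency
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    p = energies / energies.sum()
    subband_entropy = -np.sum(p * np.log(p + 1e-12))    # one global uncertainty feature
    return np.append(energies, subband_entropy)

eeg_like = np.random.randn(1024)  # placeholder for one EEG channel / mental-task epoch
features = wpt_entropy_features(eeg_like)
print(features.shape)  # (17,) = 16 subband energies + 1 entropy value
```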
37

Makhir, Abdelmalek, My Hachem El Yousfi Alaoui, and Larbi Belarbi. "Comprehensive Cardiac Ischemia Classification Using Hybrid CNN-Based Models." International Journal of Online and Biomedical Engineering (iJOE) 20, no. 03 (February 27, 2024): 154–65. http://dx.doi.org/10.3991/ijoe.v20i03.45769.

Abstract:
This study addresses the critical issue of classifying cardiac ischemia, a disease with significant global health implications that contributes to the global mortality rate. In our study, we tackle the classification of ischemia using six diverse electrocardiogram (ECG) datasets and a convolutional neural network (CNN) as the primary methodology. We combined six separate datasets to gain a more comprehensive understanding of cardiac electrical activity, utilizing 12 leads to obtain a broader perspective. A discrete wavelet transform (DWT) preprocessing was used to eliminate irrelevant information from the signals, aiming to improve classification results. Focusing on accuracy and minimizing false negatives (FN) in ischemia detection, we enhance our study by incorporating various machine learning models into our base model. These models include multilayer perceptron (MLP), support vector machines (SVM), random forest (RF), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM), allowing us to leverage the strengths of each algorithm. The CNN-BiLSTM model achieved the highest accuracy of 99.23% and demonstrated good sensitivity of 98.53%, effectively reducing false negative cases in the overall tests. The CNN-BiLSTM model demonstrated the ability to effectively identify abnormalities, misclassifying only 25 out of 1,673 ischemic cases in the test set as normal. This is due to the BiLSTM’s efficiency in capturing long-range dependencies and sequential patterns, making it suitable for tasks involving time-series data such as ECG signals. In addition, CNNs are well-suited for hierarchical feature learning and complex pattern recognition in ECG data.
38

Cuesta-Frau, David, Jakub Schneider, Eduard Bakštein, Pavel Vostatek, Filip Spaniel, and Daniel Novák. "Classification of Actigraphy Records from Bipolar Disorder Patients Using Slope Entropy: A Feasibility Study." Entropy 22, no. 11 (November 1, 2020): 1243. http://dx.doi.org/10.3390/e22111243.

Abstract:
Bipolar Disorder (BD) is an illness with high prevalence and a huge social and economic impact. It is recurrent, with a long-term evolution in most cases. Early treatment and continuous monitoring have proven to be very effective in mitigating the causes and consequences of BD. However, no tools are currently available for a massive and semi-automatic BD patient monitoring and control. Taking advantage of recent technological developments in the field of wearables, this paper studies the feasibility of a BD episodes classification analysis while using entropy measures, an approach successfully applied in a myriad of other physiological frameworks. This is a very difficult task, since actigraphy records are highly non-stationary and corrupted with artifacts (no activity). The method devised uses a preprocessing stage to extract epochs of activity, and then applies a quantification measure, Slope Entropy, recently proposed, which outperforms the most common entropy measures used in biomedical time series. The results confirm the feasibility of the approach proposed, since the three states that are involved in BD, depression, mania, and remission, can be significantly distinguished.
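A rough sketch of the slope-symbolisation idea behind Slope Entropy, assuming the usual two thresholds (gamma for steep slopes, delta for near-flat ones); the parameter values and details here are illustrative and may differ from the paper's exact definition.

```python
import numpy as np
from collections import Counter

def slope_entropy(x, m=4, gamma=1.0, delta=0.001):
    """Shannon entropy of symbolic slope patterns (sketch of the SlopEn idea)."""
    x = np.asarray(x, dtype=float)
    d = np.diff(x)
    # Symbolise each increment into one of five slope classes.
    symbols = np.select(
        [d > gamma, d > delta, np.abs(d) <= delta, d >= -gamma],
        [2, 1, 0, -1],
        default=-2,
    )
    # Count patterns of m-1 consecutive slope symbols (subsequences of length m).
    patterns = Counter(tuple(symbols[i:i + m - 1]) for i in range(len(symbols) - m + 2))
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

activity = np.abs(np.random.randn(500))  # stand-in for one actigraphy epoch
print(slope_entropy(activity, m=4))
```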
39

Khorasani, Abed, Mohammad Reza Daliri, and Mohammad Pooyan. "Recognition of amyotrophic lateral sclerosis disease using factorial hidden Markov model." Biomedical Engineering / Biomedizinische Technik 61, no. 1 (February 1, 2016): 119–26. http://dx.doi.org/10.1515/bmt-2014-0089.

Abstract:
Abstract Amyotrophic lateral sclerosis (ALS) is a common disease among neurological disorders that can change the pattern of gait in human. One of the effective methods for recognition and analysis of gait patterns in ALS patients is utilizing stride interval time series. With proper preprocessing for removing unwanted artifacts from the raw stride interval times and then extracting meaningful features from these data, the factorial hidden Markov model (FHMM) was used to distinguish ALS patients from healthy subjects. The results of classification accuracy evaluated using the leave-one-out (LOO) cross-validation algorithm showed that the FHMM method provides better recognition of ALS and healthy subjects compared to standard HMM. Moreover, comparing our method with a state-of-the art method named least square support vector machine (LS-SVM) showed the efficiency of the FHMM in distinguishing ALS subjects from healthy ones.
40

BALOGLU, ULAS BARAN, and ÖZAL YILDIRIM. "CONVOLUTIONAL LONG-SHORT TERM MEMORY NETWORKS MODEL FOR LONG DURATION EEG SIGNAL CLASSIFICATION." Journal of Mechanics in Medicine and Biology 19, no. 01 (February 2019): 1940005. http://dx.doi.org/10.1142/s0219519419400050.

Abstract:
Background and objective: Deep learning structures have recently achieved remarkable success in the field of machine learning. Convolutional neural networks (CNN) in image processing and long-short term memory (LSTM) in the time-series analysis are commonly used deep learning algorithms. Healthcare applications of deep learning algorithms provide important contributions for computer-aided diagnosis research. In this study, convolutional long-short term memory (CLSTM) network was used for automatic classification of EEG signals and automatic seizure detection. Methods: A new nine-layer deep network model consisting of convolutional and LSTM layers was designed. The signals processed in the convolutional layers were given as an input to the LSTM network whose outputs were processed in densely connected neural network layers. The EEG data is appropriate for a model having 1-D convolution layers. A bidirectional model was employed in the LSTM layer. Results: Bonn University EEG database with five different datasets was used for experimental studies. In this database, each dataset contains 100 single channel EEG segments of 23.6 s duration, which consist of 4097 dimensional samples (173.61 Hz). Eight two-class and three three-class clinical scenarios were examined. When the experimental results were evaluated, it was seen that the proposed model had high accuracy on both binary and ternary classification tasks. Conclusions: The proposed end-to-end learning structure showed a good performance without using any hand-crafted feature extraction or shallow classifiers to detect the seizures. The model does not require filtering, and also automatically learns to filter the input as well. As a result, the proposed model can process long duration EEG signals without applying segmentation, and can detect epileptic seizures automatically by using the correlation of ictal and interictal signals of raw data.
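The convolution-into-bidirectional-LSTM layout described above can be outlined with Keras; the layer sizes below are placeholders and do not reproduce the paper's nine-layer configuration.

```python
from tensorflow.keras import layers, models

def build_clstm(input_len=4097, n_classes=2):
    """Small 1-D CNN + bidirectional LSTM classifier for single-channel EEG segments."""
    model = models.Sequential([
        layers.Input(shape=(input_len, 1)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Bidirectional(layers.LSTM(64)),   # summarises the convolutional feature sequence
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_clstm()
model.summary()
```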
APA, Harvard, Vancouver, ISO and other citation styles
41

Bogdanov, M. R., G. R. Shakhmametova, and N. N. Oskin. "Possibility of Using the Attention Mechanism in Multimodal Recognition of Cardiovascular Diseases". Programmnaya Ingeneria 15, No. 11 (November 18, 2024): 578–88. http://dx.doi.org/10.17587/prin.15.578-588.

Full text of the source
Annotation:
This paper studies the possibility of using the attention mechanism in diagnosing various cardiovascular diseases. Biomedical data were presented in different modalities (text, images, and time series). Five attention-based transformers (the Dosovitskiy transformer, a compact convolutional transformer, a transformer with external attention, a transformer based on tokenization with patch shift and local self-attention, and a transformer based on multiple deep attention) were compared with the Xception convolutional neural network, three fully connected networks (MLP-Mixer, FNet, and gMLP), and the YOLO architecture on a multi-class classification problem (16 classes of dangerous arrhythmias). The transformers proved highly effective in diagnosing cardiac diseases, and the transformer based on tokenization with patch shift and local self-attention performed best.
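The following sketch shows, in generic form, the kind of patch-tokenized self-attention ECG classifier the study compares; it reproduces none of the five specific transformer variants, and the patch length and model width are assumptions (only the 16-class output follows the abstract).

import torch
import torch.nn as nn

class ECGTransformer(nn.Module):
    def __init__(self, n_leads=12, patch_len=50, d_model=64, n_classes=16):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(n_leads * patch_len, d_model)   # tokenize patches
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(d_model, n_classes)

    def forward(self, x):                     # x: (batch, n_leads, T)
        b, c, t = x.shape
        n = t // self.patch_len
        x = x[:, :, :n * self.patch_len]
        # split the signal into non-overlapping patches and flatten each one
        tokens = x.reshape(b, c, n, self.patch_len).permute(0, 2, 1, 3).reshape(b, n, -1)
        z = self.encoder(self.embed(tokens))  # self-attention over patches
        return self.cls(z.mean(dim=1))        # mean-pool tokens -> logits

logits = ECGTransformer()(torch.randn(2, 12, 5000))  # -> (2, 16)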
APA, Harvard, Vancouver, ISO and other citation styles
42

Amarantidis, Lampros Chrysovalantis, and Daniel Abásolo. "Interpretation of Entropy Algorithms in the Context of Biomedical Signal Analysis and Their Application to EEG Analysis in Epilepsy". Entropy 21, No. 9 (August 27, 2019): 840. http://dx.doi.org/10.3390/e21090840.

Full text of the source
Annotation:
Biomedical signals are measurable time series that describe a physiological state of a biological system. Entropy algorithms have previously been used to quantify the complexity of biomedical signals, but there is a need to understand the relationship of entropy to signal processing concepts. In this study, ten synthetic signals that represent widely encountered signal structures in the field of signal processing were created to interpret permutation, modified permutation, sample, quadratic sample and fuzzy entropies. Subsequently, the entropy algorithms were applied to two different databases containing electroencephalogram (EEG) signals from epilepsy studies. Transitions from randomness to periodicity were successfully detected in the synthetic signals, while significant differences in EEG signals were observed between different regions and states of the brain. In addition, using results from one entropy algorithm as features and the k-nearest neighbours algorithm, maximum classification accuracies in the first EEG database ranged from 63% to 73.5%, and these values increased by approximately 20% when two different entropies were used as features. For the second database, maximum classification accuracy reached 62.5% with one entropy algorithm, and using two algorithms as features increased it by a further 10%. Embedding entropies (sample, quadratic sample and fuzzy entropies) were found to outperform the rest of the algorithms in terms of sensitivity and show greater potential given the fine-tuning possibilities they offer. On the other hand, permutation and modified permutation entropies are more consistent across different input parameter values and considerably faster to calculate.
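As a concrete example of one of the measures studied, the sketch below computes normalised permutation entropy of a signal and uses it as a feature for a k-nearest-neighbours classifier; the order, delay and k values are illustrative assumptions, not the paper's settings.

import numpy as np
from math import factorial
from itertools import permutations
from sklearn.neighbors import KNeighborsClassifier

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    x = np.asarray(x)
    patterns = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        patterns[tuple(int(r) for r in np.argsort(window))] += 1  # count ordinal patterns
    p = np.array([c for c in patterns.values() if c > 0]) / n
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(order)))

# One entropy value per EEG segment as a feature, then k-NN classification:
# X_train = np.array([[permutation_entropy(seg)] for seg in train_segments])
# acc = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train).score(X_test, y_test)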
APA, Harvard, Vancouver, ISO and other citation styles
43

Zhu, Lingxia, Zhiping Xu, and Ting Fang. "Analysis of Cardiac Ultrasound Images of Critically Ill Patients Using Deep Learning". Journal of Healthcare Engineering 2021 (October 27, 2021): 1–8. http://dx.doi.org/10.1155/2021/6050433.

Full text of the source
Annotation:
Cardiovascular disease remains a substantial cause of morbidity and mortality in the developed world and is becoming an increasingly important cause of death in developing countries as well. While current cardiovascular treatments can help to reduce the risk of this disease, a large number of patients still retain a high risk of experiencing a life-threatening cardiovascular event. Thus, the advent of new treatment methods capable of reducing this residual risk remains an important healthcare objective. This paper proposes a deep learning-based method for section recognition in cardiac ultrasound images of critically ill cardiac patients. A convolutional neural network (CNN) is used to classify the standard ultrasound video data. The ultrasound video data are parsed into static images, and InceptionV3 and ResNet50 networks are used to classify eight standard ultrasound sections; ResNet50, which achieves better classification accuracy, is selected as the standard classification network. The correlation between the ultrasound video frames is then used to construct a ResNet50 + LSTM model: the time-series features of the two-dimensional image sequence are extracted and the classification of the ultrasound section video data is realized. Experimental results show that the proposed cardiac ultrasound image recognition model performs well and can meet the requirements of clinical section classification accuracy.
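The two-stage idea (frame-level ResNet50 features followed by an LSTM over the frame sequence) can be sketched as follows; the weights, clip length and eight-class head are placeholders, not the authors' trained model.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class UltrasoundVideoNet(nn.Module):
    def __init__(self, n_sections=8, hidden=256):
        super().__init__()
        backbone = resnet50(weights=None)          # or ImageNet-pretrained weights
        backbone.fc = nn.Identity()                # keep the 2048-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_sections)

    def forward(self, clips):                      # clips: (batch, frames, 3, H, W)
        b, f = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)) # run ResNet50 on every frame
        feats = feats.view(b, f, -1)               # back to (batch, frames, 2048)
        out, _ = self.lstm(feats)
        return self.cls(out[:, -1])                # logits over the sections

logits = UltrasoundVideoNet()(torch.randn(2, 16, 3, 224, 224))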
APA, Harvard, Vancouver, ISO and other citation styles
44

Jing, Junyuan, Jing Zhang, Aiping Liu, Min Gao, Ruobing Qian, and Xun Chen. "ECG-Based Multiclass Arrhythmia Classification Using Beat-Level Fusion Network". Journal of Healthcare Engineering 2023 (November 29, 2023): 1–10. http://dx.doi.org/10.1155/2023/1755121.

Full text of the source
Annotation:
Cardiovascular disease (CVD) is one of the most severe diseases threatening human life. The electrocardiogram (ECG) is an effective way to detect CVD. In recent years, many methods have been proposed to detect arrhythmia using the 12-lead ECG; in particular, deep learning methods have proven effective and are widely used. The attention mechanism has attracted extensive interest across many fields and many deep learning methods. However, off-the-shelf solutions based on deep learning and attention for ECG classification mostly assign weights to individual time points; none of the existing methods apply the attention mechanism to ECG signals at the level of heartbeats. In this paper, we propose a beat-level fusion net (BLF-Net) for multiclass arrhythmia classification that assigns weights at the heartbeat level, according to each heartbeat's contribution to the diagnostic result. The algorithm consists of three steps: (1) segmenting the long ECG signal into short beats; (2) using a neural network to extract features from the heartbeats; and (3) assigning weights to the extracted heartbeat features using an attention mechanism. We test our algorithm on the PTB-XL database and achieve performance superior to the state of the art on six classification tasks. In addition, the principle of the architecture is illustrated by visualizing the attention weights. The proposed BLF-Net provides an effective network structure for arrhythmia classification and can aid cardiologists in arrhythmia diagnosis.
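A minimal sketch of the beat-level attention idea is given below: each heartbeat is encoded into a feature vector, a small attention module scores the beats, and the weighted features are fused before classification. The per-beat encoder and all dimensions are assumptions, not the published BLF-Net.

import torch
import torch.nn as nn

class BeatAttentionClassifier(nn.Module):
    def __init__(self, n_leads=12, d=128, n_classes=5):
        super().__init__()
        self.beat_encoder = nn.Sequential(            # per-beat feature extractor
            nn.Conv1d(n_leads, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, d), nn.ReLU())
        self.attn = nn.Linear(d, 1)                    # one score per beat
        self.cls = nn.Linear(d, n_classes)

    def forward(self, beats):                          # beats: (batch, n_beats, n_leads, beat_len)
        b, n = beats.shape[:2]
        h = self.beat_encoder(beats.flatten(0, 1)).view(b, n, -1)
        w = torch.softmax(self.attn(h), dim=1)         # attention weights over beats
        fused = (w * h).sum(dim=1)                     # weighted fusion of beat features
        return self.cls(fused)

logits = BeatAttentionClassifier()(torch.randn(2, 10, 12, 250))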
APA, Harvard, Vancouver, ISO and other citation styles
45

Heo, Suncheol, Jae Yong Yu, Eun Ae Kang, Hyunah Shin, Kyeongmin Ryu, Chungsoo Kim, Yebin Chegal et al. "Development and Verification of Time-Series Deep Learning for Drug-Induced Liver Injury Detection in Patients Taking Angiotensin II Receptor Blockers: A Multicenter Distributed Research Network Approach". Healthcare Informatics Research 29, No. 3 (July 31, 2023): 246–55. http://dx.doi.org/10.4258/hir.2023.29.3.246.

Full text of the source
Annotation:
Objectives: The objective of this study was to develop and validate a multicenter-based, multi-model, time-series deep learning model for predicting drug-induced liver injury (DILI) in patients taking angiotensin receptor blockers (ARBs). The study leveraged a national-level multicenter approach, utilizing electronic health records (EHRs) from six hospitals in Korea. Methods: A retrospective cohort analysis was conducted using EHRs from the six hospitals, comprising a total of 10,852 patients whose data were converted to the Common Data Model. The study assessed the incidence rate of DILI among patients taking ARBs and compared it to a control group. Temporal patterns of important variables were analyzed using an interpretable time-series model. Results: The overall incidence rate of DILI among patients taking ARBs was 1.09%. The incidence rates varied by specific ARB drug and institution, with valsartan having the highest rate (1.24%) and olmesartan the lowest (0.83%). The DILI prediction models showed varying performance, measured by the average area under the receiver operating characteristic curve, with telmisartan (0.93), losartan (0.92), and irbesartan (0.90) exhibiting higher classification performance. The aggregated attention scores from the models highlighted the importance of variables such as hematocrit, albumin, prothrombin time, and lymphocytes in predicting DILI. Conclusions: Implementing a multicenter-based time-series classification model provided evidence that could be valuable to clinicians regarding the temporal patterns associated with DILI in ARB users. This information supports informed decisions regarding appropriate drug use and treatment strategies.
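To illustrate the kind of interpretable time-series model described, without claiming to reproduce the study's architecture, the sketch below runs a GRU over a patient's lab-value sequence and uses a softmax attention layer over the input variables, whose weights can be read as per-variable importance scores; the variable count, time horizon and dimensions are placeholders.

import torch
import torch.nn as nn

class VariableAttentionGRU(nn.Module):
    def __init__(self, n_vars=20, hidden=64):
        super().__init__()
        self.var_attn = nn.Linear(n_vars, n_vars)     # one score per lab variable
        self.gru = nn.GRU(n_vars, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                             # x: (batch, time, n_vars)
        scores = torch.softmax(self.var_attn(x.mean(dim=1)), dim=-1)  # (batch, n_vars)
        weighted = x * scores.unsqueeze(1)            # re-weight each variable's series
        _, h = self.gru(weighted)
        return torch.sigmoid(self.out(h[-1])), scores # DILI probability + importances

prob, importance = VariableAttentionGRU()(torch.randn(8, 48, 20))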
APA, Harvard, Vancouver, ISO and other citation styles
46

Nigat, Tsedenya Debebe, Tilahun Melak Sitote, and Berihun Molla Gedefaw. "Fungal Skin Disease Classification Using the Convolutional Neural Network". Journal of Healthcare Engineering 2023 (May 30, 2023): 1–9. http://dx.doi.org/10.1155/2023/6370416.

Full text of the source
Annotation:
Skin is the outer cover of our body, protecting vital organs from harm. This important body part is often affected by infections caused by fungi, bacteria and viruses, as well as by allergies and dust. Millions of people suffer from skin diseases, which are among the common causes of infection in sub-Saharan Africa. Skin disease can also be a cause of stigma and discrimination. Early and accurate diagnosis of skin disease can be vital for effective treatment. Laser- and photonics-based technologies are used for the diagnosis of skin disease, but they are expensive and not affordable, especially for resource-limited countries like Ethiopia. Hence, image-based methods can be effective in reducing cost and time. There are previous studies on image-based diagnosis of skin disease; however, there are few scientific studies on tinea pedis and tinea corporis. In this study, a convolutional neural network (CNN) was used to classify fungal skin disease. The classification was carried out on the four most common fungal skin diseases: tinea pedis, tinea capitis, tinea corporis, and tinea unguium. The dataset consisted of a total of 407 fungal skin lesions collected from Dr. Gerbi Medium Clinic, Jimma, Ethiopia. Normalization of image size, conversion of RGB to grayscale, and balancing of image intensity were carried out. Images were normalized to three sizes: 120 × 120, 150 × 150, and 224 × 224. Then, augmentation was applied. The developed model classified the four common fungal skin diseases with 93.3% accuracy. Comparisons were made with similar CNN architectures, MobileNetV2 and ResNet50, and the proposed model was superior to both. This study may be an important addition to the very limited work on the detection of fungal skin disease and can be used to build an automated image-based screening system for dermatology at an initial stage.
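The preprocessing and plain CNN pipeline described above can be sketched as follows; the 150 × 150 size is one of the three tested in the paper, while the layer sizes and the particular augmentation are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import transforms

# preprocessing applied to each PIL image when building the dataset
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # RGB -> grayscale
    transforms.Resize((150, 150)),                 # one of the sizes tested
    transforms.RandomHorizontalFlip(),             # simple augmentation
    transforms.ToTensor(),
])

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 4),                              # tinea pedis/capitis/corporis/unguium
)
logits = model(torch.randn(8, 1, 150, 150))        # placeholder batch of preprocessed images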
APA, Harvard, Vancouver, ISO and other citation styles
47

Cuesta-Frau, David, Daniel Novák, Vacláv Burda, Daniel Abasolo, Tricia Adjei, Manuel Varela, Borja Vargas et al. "Influence of Duodenal–Jejunal Implantation on Glucose Dynamics: A Pilot Study Using Different Nonlinear Methods". Complexity 2019 (February 14, 2019): 1–10. http://dx.doi.org/10.1155/2019/6070518.

Full text of the source
Annotation:
Diabetes is a disease of great and rising prevalence, with the obesity epidemic being a significant contributing risk factor. The duodenal–jejunal bypass liner (DJBL) is a reversible implant that mimics the effects of more aggressive surgical procedures, such as gastric bypass, to induce weight loss. We hypothesized that the DJBL also influences glucose dynamics in type II diabetes, based on the changes it has already been shown to induce in other physiological characteristics and parameters. To assess the validity of this assumption, we conducted a quantitative analysis based on several nonlinear algorithms (Lempel–Ziv complexity, sample entropy, permutation entropy, and modified permutation entropy), which are well suited to the characterization of biomedical time series. We applied them to glucose records drawn from the two extreme cases available of DJBL implantation: before implantation and 10 months afterwards. The results confirmed the hypothesis, and an accuracy of 86.4% was achieved with modified permutation entropy. Other metrics also yielded significant classification accuracy results, all above 70%, provided a suitable parameter configuration was chosen. With the leave-one-out method, the results were very similar, between 72% and 82% classification accuracy. There was also a decrease in the entropy of the glycaemia records during the time interval studied. These findings provide a solid foundation for assessing how glucose metabolism may be influenced by DJBL implantation and open a new line of research in this field.
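As an example of one of the nonlinear measures used, the following sketch computes a normalised Lempel–Ziv complexity of a glucose record after binarising it around its median; the simple dictionary-based phrase count and the median threshold follow common practice and are assumptions, not necessarily the paper's exact implementation.

import numpy as np

def lempel_ziv_complexity(signal):
    """Normalised Lempel-Ziv complexity via a simple dictionary phrase count."""
    s = "".join("1" if v > np.median(signal) else "0" for v in signal)
    phrases, i, n = set(), 0, len(s)
    while i < n:
        j = i + 1
        while s[i:j] in phrases and j <= n:   # grow the phrase until it is new
            j += 1
        phrases.add(s[i:j])
        i = j
    return len(phrases) * np.log2(n) / n      # normalise by record length

# complexity_before = lempel_ziv_complexity(glucose_before_implant)
# complexity_after  = lempel_ziv_complexity(glucose_after_implant)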
APA, Harvard, Vancouver, ISO and other citation styles
48

Ahammed, Kawser, and Mosabber Uddin Ahmed. "Quantification of Mental Stress Using Complexity Analysis of EEG Signals". Biomedical Engineering: Applications, Basis and Communications 32, No. 02 (April 2020): 2050011. http://dx.doi.org/10.4015/s1016237220500118.

Full text of the source
Annotation:
Detection of mental stress has been receiving great attention from researchers for many years. Many studies have analyzed electroencephalogram signals in order to estimate mental stress using linear methods. In this paper, a novel nonlinear stress assessment method based on multivariate multiscale entropy is introduced. Since the multivariate multiscale entropy method characterizes the complexity of nonlinear time series, this research determines the mental stress of humans during cognitive workload using the complexity of electroencephalogram (EEG) signals. To perform this work, 36 subjects, comprising 9 men and 27 women, participated in a cognitive workload experiment. The multivariate multiscale entropy method was applied to the electroencephalogram data collected from these subjects to estimate mental stress in terms of complexity. The complexity of the EEG signals collected during resting and cognitive workload showed statistically significant differences across brain regions and mental tasks, which can be exploited practically for building a stress detection system. In addition, the complexity profile of the EEG signals showed that higher stress is reflected in good counting compared to bad counting. Moreover, a support vector machine (SVM) achieved promising classification between resting and mental counting states, providing 80% sensitivity, 100% specificity and 90% classification accuracy.
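The multiscale part of the analysis can be sketched as below: each multichannel EEG epoch is coarse-grained at several scales, an entropy value is computed per scale, and the resulting profile is fed to an SVM. For simplicity, the entropy estimator here is a per-channel sample entropy averaged over channels, which is only a rough stand-in for the full multivariate multiscale entropy used in the paper; all parameter values are assumptions.

import numpy as np
from sklearn.svm import SVC

def coarse_grain(x, scale):
    """x: (channels, samples) -> non-overlapping window means at the given scale."""
    n = x.shape[1] // scale
    return x[:, :n * scale].reshape(x.shape[0], n, scale).mean(axis=2)

def sample_entropy(u, m=2, r=0.2):
    u = (u - u.mean()) / u.std()                       # r is then a fraction of the SD
    def count(mm):
        templates = np.array([u[i:i + mm] for i in range(len(u) - mm)])
        d = np.abs(templates[:, None] - templates[None, :]).max(axis=2)
        return (d < r).sum() - len(templates)          # exclude self-matches
    return -np.log(count(m + 1) / count(m))

def mse_profile(epoch, scales=range(1, 6)):
    # one averaged entropy value per scale -> feature vector for the classifier
    return [np.mean([sample_entropy(ch) for ch in coarse_grain(epoch, s)]) for s in scales]

# X = np.array([mse_profile(e) for e in epochs]); clf = SVC(kernel="rbf").fit(X, y)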
APA, Harvard, Vancouver, ISO and other citation styles
49

Alrowais, Fadwa, Faiz Abdullah Alotaibi, Abdulkhaleq Q. A. Hassan, Radwa Marzouk, Mrim M. Alnfiai, and Ahmed Sayed. "Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification on Breast Histopathology Images". Biomimetics 8, No. 7 (November 10, 2023): 538. http://dx.doi.org/10.3390/biomimetics8070538.

Full text of the source
Annotation:
Breast cancer (BC) is a prevalent disease worldwide, and accurate diagnosis is vital for successful treatment. Histopathological (HI) inspection, particularly the detection of mitotic nuclei, plays a pivotal role in the prognosis and diagnosis of BC. It includes the detection and classification of mitotic nuclei within breast tissue samples. Conventionally, the detection of mitotic nuclei has been a subjective task that is time-consuming for pathologists to perform manually. Automatic classification using computer algorithms, especially deep learning (DL) algorithms, has been developed as a beneficial alternative. DL methods, and convolutional neural networks (CNNs) in particular, have shown outstanding performance in different image classification tasks, including mitotic nuclei classification. CNNs can learn intricate hierarchical features from HI images, making them suitable for detecting subtle patterns related to mitotic nuclei. In this article, we present an Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification (EPOADL-MNC) technique for breast HI. The EPOADL-MNC system examines histopathology images to classify mitotic and non-mitotic cells. In this technique, the ShuffleNet model is employed for feature extraction, and the EPOA is used to tune the hyperparameters of the ShuffleNet model. Finally, an adaptive neuro-fuzzy inference system (ANFIS) is used for the classification and detection of mitotic cell nuclei in histopathology images. A series of simulations was carried out to validate the improved detection performance of the EPOADL-MNC technique. The comprehensive results highlighted its advantage over existing DL techniques, with a maximum accuracy of 97.83%.
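The feature-extraction stage can be illustrated as follows: a ShuffleNet backbone turns each histopathology patch into a 1024-dimensional feature vector. The pelican-optimisation hyperparameter search and the ANFIS classifier have no standard library implementations, so a logistic-regression head and random placeholder data stand in purely for illustration.

import numpy as np
import torch
import torch.nn as nn
from torchvision.models import shufflenet_v2_x1_0
from sklearn.linear_model import LogisticRegression

backbone = shufflenet_v2_x1_0(weights=None)
backbone.fc = nn.Identity()                         # 1024-d features instead of class logits
backbone.eval()

with torch.no_grad():
    patches = torch.randn(16, 3, 224, 224)          # placeholder histopathology patches
    feats = backbone(patches).numpy()
labels = np.arange(len(feats)) % 2                  # placeholder mitotic / non-mitotic labels
clf = LogisticRegression(max_iter=1000).fit(feats, labels)   # stand-in for the ANFIS classifier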
APA, Harvard, Vancouver, ISO and other citation styles
50

Maheshwari, Saumil, Aman Agarwal, Anupam Shukla, and Ritu Tiwari. "A comprehensive evaluation for the prediction of mortality in intensive care units with LSTM networks: patients with cardiovascular disease". Biomedical Engineering / Biomedizinische Technik 65, No. 4 (August 27, 2020): 435–46. http://dx.doi.org/10.1515/bmt-2018-0206.

Full text of the source
Annotation:
Intensive care units (ICUs) generate a wealth of useful data in the form of electronic health records. We aimed to build a mortality prediction model on the Medical Information Mart for Intensive Care (MIMIC-III) database and to assess whether deep learning techniques such as long short-term memory (LSTM) networks can effectively utilize the temporal relations among clinical variables. The models were built on the dynamics of clinical variables during the first 48 h of ICU admission, using 12,550 records from the MIMIC-III database. A total of 36 variables, including 33 time-series variables and three static variables, were used for the prediction. We present the application of LSTM and LSTM-attention (LSTM-AT) models for mortality prediction on a dataset with such a large number of clinical variables. For training and validation, we used International Classification of Diseases, 9th edition (ICD-9) codes to extract patients with cardiovascular disease and with infectious and parasitic diseases, respectively. The effectiveness of the LSTM model over non-recurrent baseline models such as naïve Bayes, logistic regression (LR), the support vector machine and the multilayer perceptron (MLP) is demonstrated by state-of-the-art results (area under the curve [AUC], 0.852). Next, by providing attention at each time stamp, we developed the LSTM-AT model, which exhibits even better performance (AUC, 0.876).
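A minimal PyTorch sketch of the two models compared is given below: an LSTM over the first 48 hours of ICU time series, and the same network with an attention layer over time steps (LSTM-AT). The 36 input variables and 48-step horizon follow the abstract; the hidden size and single-logit output are illustrative assumptions.

import torch
import torch.nn as nn

class MortalityLSTM(nn.Module):
    def __init__(self, n_vars=36, hidden=64, attention=False):
        super().__init__()
        self.attention = attention
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # used only in the attention variant
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, 48, 36)
        h, _ = self.lstm(x)
        if self.attention:                       # LSTM-AT: weight every time stamp
            w = torch.softmax(self.score(h), dim=1)
            summary = (w * h).sum(dim=1)
        else:                                    # plain LSTM: last hidden state
            summary = h[:, -1]
        return torch.sigmoid(self.out(summary))  # probability of in-ICU mortality

p_lstm = MortalityLSTM()(torch.randn(4, 48, 36))
p_attn = MortalityLSTM(attention=True)(torch.randn(4, 48, 36))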
APA, Harvard, Vancouver, ISO and other citation styles