Journal articles on the topic 'Automatic classification Statistical methods'

Consult the top 50 journal articles for your research on the topic 'Automatic classification Statistical methods.'

1

Couvreur, Christophe, and Yoram Bresler. "Automatic classification of environmental noise sources by statistical methods." Noise Control Engineering Journal 46, no. 4 (1998): 167. http://dx.doi.org/10.3397/1.2828469.

2

Garnsey, Margaret R. "Automatic Classification of Financial Accounting Concepts." Journal of Emerging Technologies in Accounting 3, no. 1 (January 1, 2006): 21–39. http://dx.doi.org/10.2308/jeta.2006.3.1.21.

Abstract:
Information and standards overload are part of the current business environment. In accounting, this is exacerbated by the variety of users and the evolving nature of accounting language. This article describes a research project that determines the feasibility of using statistical methods to automatically group related accounting concepts together. Starting with the frequencies of words in documents and modifying them for local and global weighting, Latent Semantic Indexing (LSI) and agglomerative clustering were used to derive clusters of related accounting concepts. Resultant clusters were compared to terms generated randomly and terms identified by individuals to determine if related terms are identified. A recognition test was used to determine if providing individuals with lists of automatically generated terms allowed them to identify additional relevant terms. Results showed that both clusters obtained from the weighted term-document matrix and clusters from an LSI matrix based on 50 dimensions contained significant numbers of related terms. There was no statistical difference in the number of related terms found by the two methods. However, the LSI clusters contained terms that were of lower frequency in the corpus. This finding may have significance in using cluster terms to assist in retrieval. When given a specific term and asked for related terms, providing individuals with a list of potential terms significantly increased the number of related terms they were able to identify compared to free recall.
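
As a rough illustration of the pipeline this abstract describes (weighted term-document matrix, LSI, agglomerative clustering), the following sketch uses scikit-learn; the toy corpus and parameter values are illustrative assumptions, not the paper's data.

    # Sketch: weighted term-document matrix -> LSI -> agglomerative clustering.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import AgglomerativeClustering

    docs = [
        "revenue recognition deferred income",
        "deferred income tax liability",
        "goodwill impairment intangible assets",
        "intangible assets amortization goodwill",
    ]

    # Local/global weighting of raw term frequencies (TF-IDF as a stand-in).
    X = TfidfVectorizer().fit_transform(docs)

    # Latent Semantic Indexing: project onto k latent dimensions.
    k = 2  # toy-sized corpus; the study used 50 dimensions
    Z = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)

    # Group related concepts with bottom-up (agglomerative) clustering.
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(Z)
    print(labels)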
3

Christlieb, N., L. Wisotzki, and G. Graßhoff. "Statistical methods of automatic spectral classification and their application to the Hamburg/ESO Survey." Astronomy & Astrophysics 391, no. 1 (July 29, 2002): 397–406. http://dx.doi.org/10.1051/0004-6361:20020830.

4

Ştefan, Raluca-Mariana, Măriuţa Şerban, Iulian-Ion Hurloiu, and Bianca-Florentina Rusu. "Kernel Methods for Data Classification." International conference KNOWLEDGE-BASED ORGANIZATION 22, no. 3 (June 1, 2016): 572–75. http://dx.doi.org/10.1515/kbo-2016-0098.

Abstract:
In the past decades, the exponential growth of digital data collection for macroeconomic databases has caused a huge increase in their volume. As a consequence, the automatic organization and classification of macroeconomic data have significant practical value. Various categorization techniques are used to classify macroeconomic data according to the classes they belong to. Since the manual construction of some classifiers is difficult and time-consuming, classifiers that learn from examples are preferred, an approach known as supervised classification. One way of solving the data classification problem is to use kernel methods. These methods represent a class of algorithms used in the automatic analysis and classification of information. Most algorithms of this kind reduce to solving convex optimization problems and computing eigenvalues. They are efficient in terms of computation time and very stable statistically. Shawe-Taylor, J. and Cristianini, N. have demonstrated that this type of approach to data classification is robust and efficient at detecting stable patterns in a finite set of data. Thus, in a modular manner, the data are embedded into a space in which linear relationships can be identified.
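
A minimal sketch of a kernel-method classifier of the kind discussed above, assuming an SVM with an RBF kernel on synthetic data (not the authors' experiment):

    # The RBF kernel implicitly maps the data into a space where a linear
    # separator corresponds to a nonlinear boundary in the original space.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)  # not linearly separable

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # convex optimization inside
    print(cross_val_score(clf, X, y, cv=5).mean())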
5

Siracusano, Giulio, Francesca Garescì, Giovanni Finocchio, Riccardo Tomasello, Francesco Lamonaca, Carmelo Scuro, Mario Carpentieri, Massimo Chiappini, and Aurelio La Corte. "Automatic Crack Classification by Exploiting Statistical Event Descriptors for Deep Learning." Applied Sciences 11, no. 24 (December 17, 2021): 12059. http://dx.doi.org/10.3390/app112412059.

Abstract:
In modern building infrastructures, the chance to devise adaptive and unsupervised data-driven structural health monitoring (SHM) systems is gaining in popularity. This is due to the large availability of big data from low-cost sensors with communication capabilities and advanced modeling tools such as deep learning. A promising method suitable for smart SHM is the analysis of acoustic emissions (AEs), i.e., ultrasonic waves generated by internal ruptures of the concrete when it is stressed. The advantage with respect to traditional ultrasonic measurement methods is the absence of an emitter and the suitability for continuous monitoring. The main purpose of this paper is to combine deep neural networks with bidirectional long short-term memory and advanced statistical analysis involving instantaneous frequency and spectral kurtosis to develop an accurate classification tool for tensile, shear and mixed modes originated from AE events (cracks). We investigated effective event descriptors to capture the unique characteristics of the different types of modes. Tests on experimental results confirm that this method achieves promising classification among different crack events and can impact the design of future SHM technologies. This approach is effective in classifying incipient damage with 92% accuracy, which is advantageous for planning maintenance.
6

GHOSH, ANIL KUMAR, and SMARAJIT BOSE. "FEATURE EXTRACTION FOR CLASSIFICATION USING STATISTICAL NETWORKS." International Journal of Pattern Recognition and Artificial Intelligence 21, no. 07 (November 2007): 1103–26. http://dx.doi.org/10.1142/s0218001407005855.

Abstract:
In a classification problem, quite often the dimension of the measurement vector is large. Some of these measurements may not be important for separating the classes. Removal of these measurement variables not only reduces the computational cost but also leads to better understanding of class separability. There are some methods in the existing literature for reducing the dimensionality of a classification problem without losing much of the separability information. However, these dimension reduction procedures usually work well for linear classifiers. In the case where competing classes are not linearly separable, one has to look for ideal "features" which could be some transformations of one or more measurements. In this paper, we attempt to tackle both problems, dimension reduction and feature extraction, by considering a projection pursuit regression model. The single hidden layer perceptron model and some other popular models can be viewed as special cases of this model. An iterative algorithm based on backfitting is proposed to select the features dynamically, and the cross-validation method is used to select the ideal number of features. We carry out an extensive simulation study to show the effectiveness of this fully automatic method.
7

AMARO-CAMARGO, ERIKA, CARLOS A. REYES-GARCÍA, EMILIO ARCH-TIRADO, and MARIO MANDUJANO-VALDÉS. "STATISTICAL VECTORS OF ACOUSTIC FEATURES FOR THE AUTOMATIC CLASSIFICATION OF INFANT CRY." International Journal of Information Acquisition 04, no. 04 (December 2007): 347–55. http://dx.doi.org/10.1142/s0219878907001423.

Abstract:
With the objective of helping diagnose some pathologies in recently born babies, we present the experiments and results obtained in the classification of infant cry using a variety of single classifiers and ensembles formed from combinations of them. Three kinds of cry were classified: normal, hypoacoustic (deaf), and asphyxia. The feature vectors were formed by the extraction of Mel Frequency Cepstral Coefficients (MFCC). The vectors were then processed and reduced through the application of five statistical operations, namely: minimum, maximum, average, standard deviation and variance. LDA, a data reduction technique, is implemented for the purpose of comparing the results of our proposed method. Five supervised machine learning methods are used: Support Vector Machines, Neural Networks, J48, Random Forest and Naive Bayes. The ensembles tested were combinations of these under different approaches such as Majority Vote, Stacking, Bagging and Boosting.
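
A minimal sketch of the feature construction described above, assuming the librosa library for MFCC extraction; the file name is a placeholder, not data from the study:

    # MFCC matrix -> five statistics per coefficient (min, max, mean, std, var).
    import numpy as np
    import librosa

    signal, sr = librosa.load("cry.wav", sr=None)  # hypothetical recording
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)  # (13, n_frames)

    # Collapse the time axis with the five statistical operations named above.
    stats = [np.min, np.max, np.mean, np.std, np.var]
    features = np.concatenate([f(mfcc, axis=1) for f in stats])  # 13 * 5 values
    print(features.shape)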
8

Hazaa, Muneer A. S., Nazlia Omar, Fadl Mutaher Ba-Alwi, and Mohammed Albared. "Automatic Extraction Of Malay Compound Nouns Using A Hybrid Of Statistical And Machine Learning Methods." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 3 (June 1, 2016): 925. http://dx.doi.org/10.11591/ijece.v6i3.9663.

Abstract:
Identification of compound nouns is important for a wide spectrum of applications in the field of natural language processing, such as machine translation and information retrieval. Extraction of compound nouns requires deep or shallow syntactic preprocessing tools and large corpora. This paper investigates several methods for extracting noun compounds from Malay text corpora. First, we present the empirical results of sixteen statistical association measures for Malay <N+N> compound noun extraction. Second, we introduce the possibility of integrating multiple association measures. Third, this work also provides a standard dataset intended to serve as a common platform for evaluating research on the identification of compound nouns in the Malay language. The standard dataset contains 7,235 unique N-N candidates, 2,970 of which are N-N compound noun collocations. The extraction algorithms are evaluated against this reference dataset. The experimental results demonstrate that a group of association measures (T-test, Piatetsky-Shapiro (PS), C-value, FGM and the rank combination method) performs best and outperforms the other association measures for <N+N> collocations in the Malay corpus. Finally, we describe several classification methods for combining the scores of the basic association measures, followed by their evaluation. Evaluation results show that classification algorithms significantly outperform individual association measures. The experimental results obtained are quite satisfactory in terms of Precision, Recall and F-score.
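
For illustration, one of the association measures named above, the t-score of a candidate N+N bigram, can be computed from corpus counts as follows (the counts and the example pair are hypothetical, not the paper's corpus):

    # Collocation t-score: (observed - expected) / sqrt(variance estimate).
    import math

    N = 1_000_000        # total bigrams in the corpus (assumed)
    f_xy = 30            # frequency of the candidate pair, e.g. "air mata"
    f_x, f_y = 500, 400  # frequencies of the component nouns

    observed = f_xy / N
    expected = (f_x / N) * (f_y / N)  # under an independence assumption
    t_score = (observed - expected) / math.sqrt(observed / N)
    print(round(t_score, 3))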
9

Protopapas, Pavlos. "Workshop on Algorithms for Time-Series Analysis." Proceedings of the International Astronomical Union 7, S285 (September 2011): 271. http://dx.doi.org/10.1017/s1743921312000737.

Abstract:
Summary: This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.
10

Cherian, Aswathy K., Poovammal E, and Malathy C. "AUTOMATIC FEATURE EXTRACTION FOR BREAST DENSITY SEGMENTATION AND CLASSIFICATION." Asian Journal of Pharmaceutical and Clinical Research 10, no. 12 (December 1, 2017): 111. http://dx.doi.org/10.22159/ajpcr.2017.v10i12.19699.

Abstract:
Objective: Cancer is the uncontrollable multiplication of cells in the human body. The expansion of cancerous cells in the breast area of women is identified as breast cancer. It is mostly identified among women aged above 40. With the current advancement in the medical field, various automatic tests are available for the identification of cancerous tissues. The cancerous cells are spotted by taking a photo imprint in the form of an X-ray comprising the breast area of the woman. Such images are called mammograms. Segmentation of mammograms is the primary step toward diagnosis. It involves the pre-processing of the image to identify the region of interest (ROI). Later, features are extracted from the image, including learned features that may be statistical and textural. When these features are used as input to a simple classifier, they help to predict the risk of cancer. The support vector machine (SVM) classifier was shown to produce a better accuracy percentage with the features extracted. Methods: The mammograms are subjected to a pre-processing stage, where the images are processed to identify the ROI. Next, statistical and textural features are extracted from these images. Finally, these features are used as input to a simple classifier to predict the risk of cancer. Results: The SVM classifier produced a maximum accuracy of about 88.67% considering 13 features, including both statistical and textural features. The features taken for the study are mean, inverse difference moment, energy, entropy, root mean square, correlation, homogeneity, variance, skewness, range, contrast, kurtosis, and smoothness. Conclusion: Computer-aided diagnosis is one of the most common methods of detecting cancer with mammograms, and it involves minor human intervention. The dataset of mammograms was analyzed, and SVM provided the highest accuracy of 88.67%. Research in this field continues to expand, as cancer remains a serious threat to human life.
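
As an illustration of the kind of statistical and textural features listed above, here is a sketch using gray-level co-occurrence matrices from scikit-image; the random array stands in for a mammogram ROI and is not the study's data:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    roi = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

    # Textural features from a gray-level co-occurrence matrix (GLCM).
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    features = {p: graycoprops(glcm, p)[0, 0]
                for p in ("contrast", "homogeneity", "energy", "correlation")}

    # A couple of the statistical features named in the abstract.
    features["mean"] = roi.mean()
    features["skewness"] = ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3
    print(features)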
11

Remeseiro, B., M. Penas, A. Mosquera, J. Novo, M. G. Penedo, and E. Yebra-Pimentel. "Statistical Comparison of Classifiers Applied to the Interferential Tear Film Lipid Layer Automatic Classification." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/207315.

Abstract:
The tear film lipid layer is heterogeneous among the population. Its classification depends on its thickness and can be done using the interference pattern categories proposed by Guillon. The interference phenomena can be characterised as a colour texture pattern, which can be automatically classified into one of these categories. From a photograph of the eye, a region of interest is detected and its low-level features are extracted, generating a feature vector that describes it, to be finally classified into one of the target categories. This paper presents an exhaustive study of the problem at hand using different texture analysis methods in three colour spaces and different machine learning algorithms. All these methods and classifiers have been tested on a dataset composed of 105 images from healthy subjects and the results have been statistically analysed. As a result, the manual process done by experts can be automated with the benefits of being faster and unaffected by subjective factors, with maximum accuracy over 95%.
12

Branisavljević, Nemanja, Zoran Kapelan, and Dušan Prodanović. "Improved real-time data anomaly detection using context classification." Journal of Hydroinformatics 13, no. 3 (January 6, 2011): 307–23. http://dx.doi.org/10.2166/hydro.2011.042.

Abstract:
The number of automated measuring and reporting systems used in water distribution and sewer systems is dramatically increasing and, as a consequence, so is the volume of data acquired. Since real-time data is likely to contain a certain amount of anomalous values and data acquisition equipment is not perfect, it is essential to equip the SCADA (Supervisory Control and Data Acquisition) system with automatic procedures that can detect the related problems and assist the user in monitoring and managing the incoming data. A number of different anomaly detection techniques and methods exist and can be used with varying success. To improve the performance, these methods must be fine-tuned according to crucial aspects of the process monitored and the contexts in which the data are classified. The aim of this paper is to explore if the data context classification and pre-processing techniques can be used to improve the anomaly detection methods, especially in fully automated systems. The methodology developed is tested on sets of real-life data, using different standard and experimental anomaly detection procedures including statistical, model-based and data-mining approaches. The results obtained clearly demonstrate the effectiveness of the suggested anomaly detection methodology.
13

Piera, J., V. Parisi-Baradad, E. García-Ladona, A. Lombarte, L. Recasens, and J. Cabestany. "Otolith shape feature extraction oriented to automatic classification with open distributed data." Marine and Freshwater Research 56, no. 5 (2005): 805. http://dx.doi.org/10.1071/mf04163.

Abstract:
The present study reviewed some of the critical pre-processing steps required for otolith shape characterisation for automatic classification with heterogeneous distributed data. A common procedure for optimising automatic classification is to apply data pre-processing in order to reduce the dimension of vector inputs. One of the key aspects of these pre-processing methods is the type of codification method used for describing the otolith contour. Two types of codification methods (Cartesian and Polar) were evaluated, and the limitations (loss of information) and the benefits (invariance to affine transformations) associated with each method were pointed out. The comparative study was developed using four types of shape descriptors (morphological, statistical, spectral and multiscale), and focused on data codification techniques and their effects on extracting shape features for automatic classification. A new method derived from the Karhunen–Loève transformation was proposed as the main procedure for standardising the codification of the otolith contours.
14

Wongsirichot, Thakerng, and Anantaporn Hanskunatai. "A MULTI-LAYER HYBRID MACHINE LEARNING MODEL FOR AUTOMATIC SLEEP STAGE CLASSIFICATION." Biomedical Engineering: Applications, Basis and Communications 30, no. 06 (November 29, 2018): 1850041. http://dx.doi.org/10.4015/s1016237218500412.

Abstract:
Sleep Stage Classification (SSC) is a standard process in Polysomnography (PSG) for studying sleep patterns and events. The SSC provides sleep stage information of a patient throughout an entire sleep test. A physician uses results from SSCs to diagnose sleep disorder symptoms. However, the SSC data processing is time-consuming and requires trained sleep technicians to complete the task. Over the years, researchers attempted to find alternative methods, known as Automatic Sleep Stage Classification (ASSC), to perform the task faster and more efficiently. Proposed ASSC techniques are usually derived from existing statistical methods and machine learning (ML) techniques. The objective of this study is to develop a new hybrid ASSC technique, the Multi-Layer Hybrid Machine Learning Model (MLHM), for classifying sleep stages. The MLHM blends two baseline ML techniques, Decision Tree (DT) and Support Vector Machine (SVM). It operates on a newly developed multi-layer architecture consisting of three layers that classify the sleep stages at different epoch lengths. Our experiment design compares MLHM with baseline ML techniques and other research works. The dataset used in this study was derived from the ISRUC-Sleep database comprising 100 subjects. The classification performances were thoroughly reviewed using the hold-out and the 10-fold cross-validation methods in both subject-specific and subject-independent classifications. The MLHM achieved satisfactory classification results: an accuracy of 0.694 ± 0.22 in subject-specific classification and 0.942 ± 0.02 in subject-independent classification. The pros and cons of the MLHM with the multi-layer architecture were thoroughly discussed, and the effect of class imbalance on the classification results was rationally discussed as well.
15

Soto-Murillo, Manuel A., Jorge I. Galván-Tejada, Carlos E. Galván-Tejada, Jose M. Celaya-Padilla, Huizilopoztli Luna-García, Rafael Magallanes-Quintanar, Tania A. Gutiérrez-García, and Hamurabi Gamboa-Rosales. "Automatic Evaluation of Heart Condition According to the Sounds Emitted and Implementing Six Classification Methods." Healthcare 9, no. 3 (March 12, 2021): 317. http://dx.doi.org/10.3390/healthcare9030317.

Abstract:
The main cause of death in Mexico and the world is heart disease, and according to data from the World Health Organization (WHO) and the National Institute of Statistics and Geography (INEGI) it will continue to lead the death rate in the next decade. Therefore, the objective of this work is to implement, compare and evaluate machine learning algorithms that are capable of classifying normal and abnormal heart sounds. Three different sounds were analyzed in this study: normal heart sounds, heart murmur sounds and extrasystolic sounds, which were labeled as healthy sounds (normal sounds) and unhealthy sounds (murmur and extrasystolic sounds). From these sounds, fifty-two features were calculated to create a numerical dataset: thirty-six statistical features, eight Linear Predictive Coding (LPC) coefficients and eight Mel-Frequency Cepstral Coefficients (MFCC). From this dataset two more were created: one normalized and one standardized. These datasets were analyzed with six classifiers: k-Nearest Neighbors, Naive Bayes, Decision Trees, Logistic Regression, Support Vector Machine and Artificial Neural Networks, all of them evaluated with six metrics: accuracy, specificity, sensitivity, ROC curve, precision and F1-score. The performances of all the models were statistically significant, but the models that performed best for this problem were logistic regression for the standardized dataset, with a specificity of 0.7500 and a ROC curve of 0.8405; logistic regression for the normalized dataset, with a specificity of 0.7083 and a ROC curve of 0.8407; and Support Vector Machine with a linear kernel for the non-normalized data, with a specificity of 0.6842 and a ROC curve of 0.7703. Both of these metrics are of utmost importance in evaluating the performance of computer-assisted diagnostic systems.
16

Sabbar, Bayan M. "Quantum Inspired Genetic Algorithm Model based Automatic Modulation Classification." Webology 18, Special Issue 04 (September 30, 2021): 1070–85. http://dx.doi.org/10.14704/web/v18si04/web18182.

Abstract:
Automatic modulation classification (AMC) has become popular in recent years owing to its many advantages. In communication systems, the reliability of an AMC is critical, and increasing the number of signals exponentially increases the cost of using it. Precise classification methods such as neural networks, in which either the network parameters or the dimensions of the input and output variables are modified dynamically, do not succeed in obtaining highly accurate results. To improve the accuracy of modulation classification, this study employs a "QIGA" feature selection model based on a quantum-inspired genetic algorithm (QIGA). QIGA is used to choose the relevant features and to limit the number of examples that must be learned, so that the overall system time is shortened and the computing cost is reduced. Feature selection is enhanced via quantum computing, which is done to lower the complexity of the solutions. The internal validation results demonstrated that the QIGA model significantly improved the statistical match quality and significantly outperformed the other models.
17

Yang, Kaixu, and Tapabrata Maiti. "Statistical Aspects of High-Dimensional Sparse Artificial Neural Network Models." Machine Learning and Knowledge Extraction 2, no. 1 (January 2, 2020): 1–19. http://dx.doi.org/10.3390/make2010001.

Abstract:
An artificial neural network (ANN) is an automatic way of capturing linear and nonlinear correlations, and spatial and other structural dependence among features. Such networks perform well in many application areas such as classification and prediction from magnetic resonance imaging, spatial data and computer vision tasks. Most commonly used ANNs assume the availability of large training data compared to the dimension of the feature vector. However, in modern applications, as mentioned above, the training sample sizes are often low, and may be even lower than the dimension of the feature vector. In this paper, we consider a single layer ANN classification model that is suitable for analyzing high-dimensional low sample-size (HDLSS) data. We investigate the theoretical properties of the sparse group lasso regularized neural network and show that under mild conditions, the classification risk converges to the optimal Bayes classifier's risk (universal consistency). Moreover, we propose a variation on the regularization term. A few examples in popular research fields are also provided to illustrate the theory and methods.
18

Akbal, Erhan. "An automated environmental sound classification methods based on statistical and textural feature." Applied Acoustics 167 (October 2020): 107413. http://dx.doi.org/10.1016/j.apacoust.2020.107413.

19

FAIZA, NUR, I. WAYAN SUMARJAYA, and I. GUSTI AYU MADE SRINADI. "METODE QUEST DAN CHAID PADA KLASIFIKASI KARAKTERISTIK NASABAH KREDIT [QUEST and CHAID methods for the classification of credit customer characteristics]." E-Jurnal Matematika 4, no. 4 (November 24, 2015): 163. http://dx.doi.org/10.24843/mtk.2015.v04.i04.p106.

Abstract:
The aim of this research is to determine the classification results and to compare the magnitude of misclassification of the QUEST and CHAID methods in classifying the characteristics of credit customers of the Adira Kredit Elektronik branch in Denpasar. QUEST (Quick, Unbiased, Efficient Statistical Trees) and CHAID (Chi-squared Automatic Interaction Detection) are nonparametric methods that produce tree diagrams which are easy to interpret. The QUEST and CHAID classification methods lead to the following conclusions: 1) the QUEST method produces three groups which predict customers into the current category, whereas the CHAID method produces four groups which also predict customers into the current category; 2) both methods generate the highest classification accuracy for customers in the current category, who share similar characteristics; 3) both methods also have the same degree of accuracy in classifying the customer data of the Adira Kredit Elektronik branch in Denpasar.
20

Uzar, Melis, and Naci Yastikli. "Automatic building extraction using LiDAR and aerial photographs." Boletim de Ciências Geodésicas 19, no. 2 (June 2013): 153–71. http://dx.doi.org/10.1590/s1982-21702013000200001.

Abstract:
This paper presents an automatic building extraction approach using LiDAR data and aerial photographs from a multi-sensor system positioned on the same platform. The automatic building extraction approach consists of segmentation, analysis and classification steps based on object-based image analysis. The chessboard, contrast split and multi-resolution segmentation methods were used in the segmentation step. The object primitives determined during segmentation, such as scale parameter, shape, completeness, brightness, and statistical parameters, were used to set threshold values for classification in the analysis step. The rule-based classification was carried out with decision rules defined on the basis of the determined object primitives and fuzzy rules. In this study, hierarchical classification was preferred. First, the vegetation and ground classes were generated; the building class was then extracted. The NDVI, slope and Hough images were generated and used to avoid confusing the building class with other classes. The intensity images generated from the LiDAR data and morphological operations were utilized to improve the accuracy of the building class. The proposed approach achieved an overall accuracy of approximately 93% for the target class in a suburban neighborhood, which was the study area. Moreover, completeness (96.73%) and correctness (95.02%) analyses were performed by comparing the automatically extracted buildings with reference data.
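
For reference, the NDVI image mentioned above is computed as (NIR - Red) / (NIR + Red); below is a sketch with stand-in bands, where the 0.3 threshold is an illustrative choice rather than the paper's value:

    import numpy as np

    rng = np.random.default_rng(0)
    nir = rng.uniform(0.0, 1.0, size=(256, 256))  # stand-in near-infrared band
    red = rng.uniform(0.0, 1.0, size=(256, 256))  # stand-in red band

    ndvi = (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero
    vegetation_mask = ndvi > 0.3             # high NDVI indicates vegetation
    print(vegetation_mask.mean())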
21

Васильева, Ирина Карловна, and Анатолий Владиславович Попов. "МЕТОД АВТОМАТИЧЕСКОЙ КЛАСТЕРИЗАЦИИ ДАННЫХ ДИСТАНЦИОННОГО ЗОНДИРОВАНИЯ [A method for automatic clustering of remote sensing data]." Aerospace technic and technology, no. 3 (July 15, 2019): 64–75. http://dx.doi.org/10.32620/aktt.2019.3.08.

Abstract:
The subject matter of the article is the methods of automatic clustering of remote sensing data under conditions of a priori uncertainty regarding the number of observed object classes and the statistical characteristics of the class signatures. The aim is to develop a method for approximating multimodal empirical distributions of observational data to construct decision rules for pixel-by-pixel statistical classification procedures, as well as to investigate the effectiveness of this method for automatically classifying objects in synthesized and real images. The tasks to be solved are: to develop and implement a procedure for splitting a mixture of basic distributions while ensuring the following requirements: the absence of a preliminary data analysis stage for selecting optimal initial approximations; good convergence of the method; and the ability to automatically refine the list of classes by combining indistinguishable or poorly distinguishable components of the mixture into a single cluster; to synthesize test images with a specified number of objects and known data distributions for each object; to evaluate the effectiveness of the developed method for automatic classification by the criterion of the probability of correct recognition; and to evaluate the results of automatic clustering of real images. The methods used are methods of stochastic simulation, methods of approximation of empirical distributions, statistical methods of recognition, and methods of probability theory and mathematical statistics. The following results have been obtained. A method for automatically splitting a mixture of Gaussian distributions to construct decision thresholds according to the maximum a posteriori probability criterion was proposed. The results of automatically forming the list of classes and their probabilistic descriptions, as well as the results of clustering both test and satellite images, are given. It is shown that the developed method is quite effective and can be used to determine the number of object classes as well as the mathematical description of their stochastic characteristics for pattern recognition and cluster analysis tasks. Conclusions. The scientific novelty of the results obtained is that the proposed approach makes it possible, directly during the "unsupervised" training procedure, to evaluate the distinguishability of classes and exclude indistinguishable objects from the list of classes.
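
The general idea behind the method, fitting a Gaussian mixture and labelling samples by the maximum a posteriori criterion, can be sketched with scikit-learn; the BIC loop below is a simplified stand-in for the authors' own procedure for refining the number of components:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(5, 1, (300, 2))])

    # Choose the number of components by BIC (the paper instead merges
    # poorly distinguishable components during "unsupervised" training).
    best = min(
        (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)),
        key=lambda m: m.bic(X),
    )
    labels = best.predict(X)  # MAP assignment to mixture components
    print(best.n_components, np.bincount(labels))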
22

Pluto-Kossakowska, Joanna. "Review on Multitemporal Classification Methods of Satellite Images for Crop and Arable Land Recognition." Agriculture 11, no. 10 (October 13, 2021): 999. http://dx.doi.org/10.3390/agriculture11100999.

Abstract:
This paper presents a review of the research conducted in the field of multitemporal classification methods used for the automatic identification of crops and arable land using optical satellite images. The review and systematization of these methods in terms of the effectiveness and accuracy of the obtained results allows for planning further development in this area. The state-of-the-art analysis concerns various methodological approaches, including the selection of data in terms of spatial resolution, the selection of algorithms, as well as external conditions related to arable land use, especially the structure of crops. The results achieved with various approaches and classifiers and subsequently reported in the literature vary depending on the crops and area of analysis and the sources of satellite data. Hence, their review and systematic conclusions are needed, especially in the context of the growing interest in automatic processes for identifying crops for statistical purposes or monitoring changes in arable land. The results of this study show no significant difference between the accuracies achieved by different machine learning algorithms, yet on average artificial neural network classifiers produce results that are better by a few percent than the others. For very fragmented regions, better results were achieved using Sentinel-2 or SPOT-5 rather than Landsat images, but the level of accuracy can still be improved. For areas with large plots there is no difference in the level of accuracy achieved from any HR images.
23

Faghri, Ardeshir, Martin Glaubitz, and Janaki Parameswaran. "Development of Integrated Traffic Monitoring System for Delaware." Transportation Research Record: Journal of the Transportation Research Board 1536, no. 1 (January 1996): 40–44. http://dx.doi.org/10.1177/0361198196153600106.

Abstract:
The establishment of a comprehensive statewide traffic counting program is discussed. The program comprises automatic traffic recorder (ATR), automatic vehicle classification (AVC), and weigh-in-motion (WIM) sites for the state of Delaware. The program was undertaken to review, establish, and implement effective statistical and procedural methods. The second phase of a two-phase project, which implements the methodologies derived in the first phase, is presented. Using descriptive analysis and seasonal grouping, the number and location of sites needed for each of the three types of traffic monitoring devices were determined. Existing field data from Delaware's current ATR locations allowed for a statistical determination of the necessary number and road-type group distribution of the ATR sites. The absence of field data for AVC and WIM sites, however, necessitated alternative methods for determining the number and location of the traffic monitoring devices. As a result, a combination of statistical analysis and engineering judgment must be used for the establishment of any statewide traffic monitoring system.
24

Přibil, Jiří, Anna Přibilová, and Ivan Frollo. "Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment." Measurement Science Review 17, no. 6 (December 20, 2017): 257–63. http://dx.doi.org/10.1515/msr-2017-0031.

Abstract:
The paper focuses on two methods for evaluating the success of speech signal enhancement recorded in the open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach enables a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.
25

AHMED, PERVEZ, and C. Y. SUEN. "COMPUTER RECOGNITION OF TOTALLY UNCONSTRAINED HANDWRITTEN ZIP CODES." International Journal of Pattern Recognition and Artificial Intelligence 01, no. 01 (April 1987): 1–15. http://dx.doi.org/10.1142/s0218001487000023.

Abstract:
This paper deals with the application of automatic sorting of envelopes with totally unconstrained handwritten numeric postal ZIP codes and presents a complete model (including preprocessing, feature extraction and classification modules) of a ZIP code reader/sorter. Different recognition methods, including statistical, structural and combined, were developed and their performance on real-life ZIP code samples (8540 numerals) was measured. The statistical recognition method was used as a front-end recognizer and predictor of an unknown character. Based on edge classification, a new technique was implemented to define and extract the structural features. In the combined recognition method, unknown characters were identified either by the statistical or the structural method. Its recognition reliability was found to be in the interval (95.94%, 96.29%), with substitution and rejection rates between (3.45%, 3.96%) and (2.36%, 7.01%) respectively.
26

Kaklamanis, Eleftherios, Purnima Ratilal, and Nicholas C. Makris. "Optimal Automatic Wide-Area Discrimination of Fish Shoals from Seafloor Geology with Multi-Spectral Ocean Acoustic Waveguide Remote Sensing in the Gulf of Maine." Remote Sensing 15, no. 2 (January 11, 2023): 437. http://dx.doi.org/10.3390/rs15020437.

Abstract:
Ocean Acoustic Waveguide Remote Sensing (OAWRS) enables fish population density distributions to be instantaneously quantified and continuously monitored over wide areas. Returns from seafloor geology can also be received as background or clutter by OAWRS when insufficient fish populations are present in any region. Given the large spatial regions that fish inhabit and roam over, it is important to develop automatic methods for determining whether fish are present at any pixel in an OAWRS image so that their population distributions, migrations and behaviour can be efficiently analyzed and monitored in large data sets. Here, a statistically optimal automated approach for distinguishing fish from seafloor geology in OAWRS imagery is demonstrated with Neyman–Pearson hypothesis testing, which provides the highest true-positive classification rate for a given false-positive rate. Multispectral OAWRS images of large herring shoals during spawning migration to Georges Bank are analyzed. Automated Neyman–Pearson hypothesis testing is shown to accurately distinguish fish from seafloor geology through their differing spectral responses at any space and time pixel in OAWRS imagery. These spectral differences are most dramatic in the vicinity of swimbladder resonances of the fish probed by OAWRS. When such significantly different spectral dependencies exist between fish and geologic scattering, the approach presented provides an instantaneous, reliable and statistically optimal means of automatically distinguishing fish from seafloor geology at any spatial pixel in wide-area OAWRS images. Employing Kullback–Leibler divergence, or the relative entropy in bits from Information Theory, is shown to also enable automatic discrimination of fish from seafloor by their distinct statistical scattering properties across sensing frequency, but without the statistically optimal properties of the Neyman–Pearson approach.
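
A Neyman–Pearson detector of the kind described reduces to thresholding a likelihood ratio so that a chosen false-positive rate is met; below is a sketch under the simplifying assumption of one-dimensional Gaussian class models, not the paper's multispectral models:

    import numpy as np
    from scipy.stats import norm

    fish = norm(loc=5.0, scale=1.0)      # assumed scattering model for "fish"
    seafloor = norm(loc=3.0, scale=1.0)  # assumed scattering model for "seafloor"

    # Calibrate the threshold on seafloor-only data for a 1% false-positive rate.
    x0 = seafloor.rvs(100_000, random_state=0)
    llr0 = fish.logpdf(x0) - seafloor.logpdf(x0)  # log likelihood ratio
    threshold = np.quantile(llr0, 0.99)

    # Empirical detection (true-positive) probability at that threshold.
    x1 = fish.rvs(100_000, random_state=1)
    llr1 = fish.logpdf(x1) - seafloor.logpdf(x1)
    print((llr1 > threshold).mean())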
27

Maciel, Susanne, and Ricardo Biloti. "A statistics-based descriptor for automatic classification of scatterers in seismic sections." GEOPHYSICS 85, no. 5 (September 1, 2020): O83–O96. http://dx.doi.org/10.1190/geo2018-0673.1.

Abstract:
Discontinuities and small structures induce diffractions on seismic or ground-penetrating radar (GPR) acquisitions. Therefore, diffraction images can be used as a tool to access valuable information concerning subsurface scattering features, such as pinch outs, fractures, and edges. Usually, diffraction-imaging methods operate on diffraction events previously detected. Pattern-recognition methods are efficient to detect, image, and characterize diffractions. The use of this kind of approach, though, requires a numerical description of image points on a seismic section or radargram. We have investigated a new descriptor for seismic/GPR data that distinguishes diffractions from reflections. The descriptor consists of a set of statistical measures from diffraction operators sensitive to kinematic and dynamic aspects of an event. We develop experiments in which the proposed descriptor was incorporated into a pattern-recognition routine for diffraction imaging. The obtained method is useful for performing the automatic classification of image points using supervised and unsupervised algorithms, as a complementary step to Kirchhoff imaging. We also develop a new type of filtering, designed to address anomalies on the diffraction operators caused by interfering events. We evaluate the method using synthetic seismic data and real GPR data. Our results indicate that the descriptor correctly discriminates diffractions and shows promising results for low signal-to-noise-ratio situations.
28

Hanser, F., B. Pfeifer, M. Seger, C. Hintermüller, R. Modre, B. Tilg, T. Trieb, et al. "A Signal Processing Pipeline for Noninvasive Imaging of Ventricular Preexcitation." Methods of Information in Medicine 44, no. 04 (2005): 508–15. http://dx.doi.org/10.1055/s-0038-1634001.

Abstract:
Objectives: Noninvasive imaging of the cardiac activation sequence in humans could guide interventional curative treatment of cardiac arrhythmias by catheter ablation. Highly automated signal processing tools are desirable for clinical acceptance. The developed signal processing pipeline reduces user interactions to a minimum, which eases operation by the staff in the catheter laboratory and increases the reproducibility of the results. Methods: A previously described R-peak detector was modified for automatic detection of all possible targets (beats) using the information of all leads in the ECG map. A direct method was applied for signal classification. The algorithm was tuned to distinguish beats with an adenosine-induced AV-nodal block from baseline morphology in Wolff-Parkinson-White (WPW) patients. Furthermore, an automatic identification of the QRS-interval borders was implemented. Results: The software was tested with data from eight patients having overt ventricular preexcitation. The R-peak detector captured all QRS-complexes with no false positive detections. The automatic classification was verified by demonstrating adenosine-induced prolongation of ventricular activation with statistical significance (p < 0.001) in all patients. This also demonstrates the performance of the automatic detection of QRS-interval borders. Furthermore, all ectopic or paced beats were automatically separated from sinus rhythm. Computed activation maps are shown for one patient, localizing the accessory pathway with an accuracy of 1 cm. Conclusions: The implemented signal processing pipeline is a powerful tool for selecting target beats for noninvasive activation imaging in WPW patients. It robustly identifies and classifies beats. The small beat-to-beat variations in the automatic QRS-interval detection indicate accurate identification of the time window of interest.
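
A generic stand-in for the R-peak detection step (the paper uses its own multi-lead detector) can be sketched with scipy on a synthetic ECG-like trace:

    import numpy as np
    from scipy.signal import find_peaks

    fs = 500                                 # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 63  # sharp periodic spikes, ~72 bpm

    # Require a minimum height and a refractory distance between beats.
    peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
    print(len(peaks), "beats detected")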
29

Christou, Vasileios, Andreas Miltiadous, Ioannis Tsoulos, Evaggelos Karvounis, Katerina D. Tzimourta, Markos G. Tsipouras, Nikolaos Anastasopoulos, Alexandros T. Tzallas, and Nikolaos Giannakeas. "Evaluating the Window Size’s Role in Automatic EEG Epilepsy Detection." Sensors 22, no. 23 (November 27, 2022): 9233. http://dx.doi.org/10.3390/s22239233.

Abstract:
Electroencephalography is one of the most commonly used methods for extracting information about the brain's condition and can be used for diagnosing epilepsy. The EEG signal's wave shape contains vital information about the brain's state, which can be challenging for a human observer to analyse and interpret. Moreover, the characteristic waveforms of epilepsy (sharp waves, spikes) can occur randomly through time. Considering all the above reasons, automatic EEG signal extraction and analysis using computers can significantly impact the successful diagnosis of epilepsy. This research explores the impact of different window sizes on the classification accuracy of EEG signals using four machine learning classifiers. The machine learning methods included a neural network with ten hidden nodes trained using three different training algorithms and the k-nearest neighbours classifier. The neural network training methods included the Broyden–Fletcher–Goldfarb–Shanno algorithm, the multistart method for global optimization problems, and a genetic algorithm. The current research utilized the University of Bonn dataset containing EEG data, divided into epochs having 50% overlap and window lengths ranging from 1 to 24 s. Then, statistical and spectral features were extracted and used to train the above four classifiers. The outcome of the above experiments showed that large window sizes, of about 21 s, had a positive impact on the classification accuracy of the compared methods.
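
The windowing scheme described, fixed-length epochs with 50% overlap, can be sketched as follows; the sampling rate and segment length follow the published Bonn dataset description, while the signal itself is synthetic:

    import numpy as np

    fs = 173.61  # sampling rate of the Bonn recordings (Hz)
    signal = np.random.default_rng(0).normal(size=int(fs * 23.6))

    def epochs(x, fs, win_sec):
        """Yield windows of win_sec seconds with 50% overlap."""
        win = int(win_sec * fs)
        step = win // 2
        for start in range(0, len(x) - win + 1, step):
            yield x[start:start + win]

    print(sum(1 for _ in epochs(signal, fs, win_sec=4)))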
30

Cabral, Frederico Soares, Hidekazu Fukai, and Satoshi Tamura. "Feature Extraction Methods Proposed for Speech Recognition Are Effective on Road Condition Monitoring Using Smartphone Inertial Sensors." Sensors 19, no. 16 (August 9, 2019): 3481. http://dx.doi.org/10.3390/s19163481.

Abstract:
The objective of our project is to develop an automatic survey system for road condition monitoring using smartphone devices. One of the main tasks of our project is the classification of paved and unpaved roads. Since, in practice, recordings will be acquired from various types of vehicle suspension systems and at various speeds, we use the multiple sensors found in smartphones and state-of-the-art machine learning techniques for signal processing. Although it is usually not paid much attention, the result of the classification depends on the feature extraction step. Therefore, we have to carefully choose not only the classification method but also the feature extraction method and its parameters. Simple statistics-based features are most commonly used to extract road surface information from acceleration data. In this study, we evaluated the mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction coefficients (PLP) as a feature extraction step to improve the accuracy of paved and unpaved road classification. Although both MFCC and PLP were developed in the human speech recognition field, we found that modified MFCC and PLP can be used to improve on the commonly used statistical method.
31

Gentillon, Hugues, Ludomir Stefańczyk, Michał Strzelecki, and Maria Respondek-Liberska. "Texture analysis of the developing human brain using customization of a knowledge-based system." F1000Research 6 (January 12, 2017): 40. http://dx.doi.org/10.12688/f1000research.10401.1.

Abstract:
Background: Pattern recognition software originally designed for geospatial and other technical applications could be trained by physicians and used as texture-analysis tools for evidence-based practice, in order to improve diagnostic imaging examination during pregnancy. Methods: Various machine-learning techniques and customized datasets were assessed for training of an integrable knowledge-based system (KBS), to determine a hypothetical methodology for texture classification of closely-related anatomical structures in fetal brain magnetic resonance (MR) images. Samples were manually categorized according to the magnetic field of the MRI scanner (i.e. 1.5-tesla (1.5T), 3-tesla (3T)), rotational planes (i.e. coronal, sagittal and axial), and signal weighting (i.e. spin-lattice, spin-spin, relaxation, proton density). In the machine-learning sessions, the operator manually selected relevant regions of interest (ROI) in 1.5/3T MR images. Semi-automatic procedures in MaZda/B11 were performed to determine optimal parameter sets for ROI classification. Four classes were defined: ventricles, thalamus, grey matter, and white matter. Various textures analysis methods were tested. The KBS performed automatic data pre-processing and semi-automatic classification of ROIs. Results: After testing 3456 ROIs, statistical binary classification revealed that combination of reduction techniques with linear discriminant algorithms (LDA) or nonlinear discriminant algorithms (NDA) yielded the best scoring in terms of sensitivity (both 100%, 95% CI: 99.79-100), specificity (both 100%, 95% CI: 99.79-100) and Fisher coefficient (≈E+4, ≈E+5, respectively). Conclusions: LDA and NDA in MaZda can be useful data mining tools for screening a population of interest subjected to a clinical test.
32

Gentillon, Hugues, Ludomir Stefańczyk, Michał Strzelecki, and Maria Respondek-Liberska. "Texture analysis of the developing human brain using customization of a knowledge-based system." F1000Research 6 (September 11, 2017): 40. http://dx.doi.org/10.12688/f1000research.10401.2.

Abstract:
Background: Pattern recognition software originally designed for geospatial and other technical applications could be trained by physicians and used as texture analysis tools for evidence-based practice, in order to improve diagnostic imaging examination during pregnancy. Methods: Various machine-learning techniques and customized datasets were assessed for training of an integrable knowledge-based system (KBS) to determine a hypothetical methodology for texture classification of closely related anatomical structures in fetal brain magnetic resonance (MR) images. Samples were manually categorized according to the magnetic field of the MRI scanner (i.e., 1.5-tesla [1.5T], 3-tesla [3T]), rotational planes (i.e., coronal, sagittal, and axial), and signal weighting (i.e., spin-lattice, spin-spin, relaxation, and proton density). In the machine-learning sessions, the operator manually selected relevant regions of interest (ROI) in 1.5/3T MR images. Semi-automatic procedures in MaZda/B11 were performed to determine optimal parameter sets for ROI classification. Four classes were defined: ventricles, thalamus, gray matter, and white matter. Various texture analysis methods were tested. The KBS performed automatic data preprocessing and semi-automatic classification of ROI. Results: After testing 3456 ROI, statistical binary classification revealed that the combination of reduction techniques with linear discriminant algorithms (LDA) or nonlinear discriminant algorithms (NDA) yielded the best scoring in terms of sensitivity (both 100%, 95% CI: 99.79–100), specificity (both 100%, 95% CI: 99.79–100), and Fisher coefficient (≈E+4 and ≈E+5, respectively). Conclusions: LDA and NDA in MaZda can be useful data mining tools for screening a population of interest subjected to a clinical test.
33

Du, R., M. A. Elbestawi, and S. M. Wu. "Automated Monitoring of Manufacturing Processes, Part 1: Monitoring Methods." Journal of Engineering for Industry 117, no. 2 (May 1, 1995): 121–32. http://dx.doi.org/10.1115/1.2803286.

Abstract:
This paper presents a systematic study of various monitoring methods suitable for automated monitoring of manufacturing processes. In general, monitoring is composed of two phases: learning and classification. In the learning phase, the key issue is to establish the relationship between monitoring indices (selected signature features) and the process conditions. Based on this relationship and the current sensor signals, the process condition is then estimated in the classification phase. The monitoring methods discussed in this paper include pattern recognition, fuzzy systems, decision trees, expert systems and neural networks. A brief review of signal processing techniques commonly used in monitoring, such as statistical analysis, spectral analysis, system modeling, bi-spectral analysis and time-frequency distribution, is also included.
34

Sarra, Raniya R., Ahmed M. Dinar, Mazin Abed Mohammed, and Karrar Hameed Abdulkareem. "Enhanced Heart Disease Prediction Based on Machine Learning and χ2 Statistical Optimal Feature Selection Model." Designs 6, no. 5 (September 29, 2022): 87. http://dx.doi.org/10.3390/designs6050087.

Full text
Abstract:
Automatic heart disease prediction is a major global health concern. Effective cardiac treatment requires an accurate heart disease prognosis. Therefore, this paper proposes a new heart disease classification model based on the support vector machine (SVM) algorithm for improved heart disease detection. To increase prediction accuracy, the χ2 statistical optimal feature selection technique was used. The suggested model’s performance was then validated by comparing it to traditional models using several performance measures. The proposed model increased accuracy from 85.29% to 89.7%. Additionally, the computational load was reduced by half. This result indicates that our system outperformed other state-of-the-art methods in predicting heart disease.
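A minimal sketch of the χ2-selection-plus-SVM idea, using scikit-learn on a stand-in dataset; the paper's actual features, dataset, and hyperparameters may differ.

```python
# Minimal sketch: chi-squared feature selection followed by an RBF SVM.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in clinical dataset
# chi2 requires non-negative inputs, hence the MinMaxScaler
model = make_pipeline(MinMaxScaler(), SelectKBest(chi2, k=10), SVC(kernel="rbf"))
print(cross_val_score(model, X, y, cv=5).mean())
```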
APA, Harvard, Vancouver, ISO, and other styles
36

Gil-Rios, Miguel-Angel, Igor V. Guryev, Ivan Cruz-Aceves, Juan Gabriel Avina-Cervantes, Martha Alicia Hernandez-Gonzalez, Sergio Eduardo Solorio-Meza, and Juan Manuel Lopez-Hernandez. "Automatic Feature Selection for Stenosis Detection in X-ray Coronary Angiograms." Mathematics 9, no. 19 (October 3, 2021): 2471. http://dx.doi.org/10.3390/math9192471.

Full text
Abstract:
The automatic detection of coronary stenosis is a very important task in computer-aided diagnosis systems in the cardiology area. The main contribution of this paper is the identification of a suitable subset of 20 features that allows for the classification of stenosis cases in X-ray coronary images with a high performance, outperforming different state-of-the-art classification techniques including deep learning strategies. The automatic feature selection stage was driven by the Univariate Marginal Distribution Algorithm and carried out through statistical comparison between five metaheuristics exploring a search space of O(2^49) candidate subsets. Moreover, the proposed method is compared with six state-of-the-art classification methods, demonstrating its effectiveness in terms of the Accuracy and Jaccard Index evaluation metrics. All the experiments were performed using two X-ray image databases of coronary angiograms. The first database contains 500 instances and the second one 250 images. In the experimental results, the proposed method achieved an Accuracy rate of 0.89 and 0.88 and a Jaccard Index of 0.80 and 0.79, respectively. Finally, the average computational time of the proposed method to classify stenosis cases was ≈0.02 s, which makes it highly suitable for use in clinical practice.
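The Univariate Marginal Distribution Algorithm driving the feature selection can be sketched as follows: binary masks encode feature subsets, and the marginal selection probabilities are re-estimated each generation from the fittest masks. The fitness function and data below are hypothetical stand-ins.

```python
# Minimal sketch of UMDA-based binary feature selection.
import numpy as np

rng = np.random.default_rng(0)
n_features, pop_size, n_gen, elite = 49, 60, 40, 20

def fitness(mask, X, y):
    # Placeholder: in practice, the cross-validated accuracy of a classifier
    # trained on the selected feature columns would be used here.
    return -abs(mask.sum() - 20) + rng.normal(scale=0.1)

X, y = None, None                    # hypothetical dataset placeholders
p = np.full(n_features, 0.5)         # initial marginal probabilities
for _ in range(n_gen):
    pop = (rng.random((pop_size, n_features)) < p).astype(int)
    scores = np.array([fitness(m, X, y) for m in pop])
    best = pop[np.argsort(scores)[-elite:]]    # truncation selection
    p = best.mean(axis=0).clip(0.05, 0.95)     # re-estimate marginals
print("selected features:", np.flatnonzero(p > 0.5))
```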
APA, Harvard, Vancouver, ISO, and other styles
37

Quintana, J. M., A. Urkaregi, and I. Arostegui. "Use of Statistical Techniques to Synthesize Explicit Criteria Developed by an Expert Panel." Methods of Information in Medicine 45, no. 06 (2006): 622–30. http://dx.doi.org/10.1055/s-0038-1634126.

Full text
Abstract:
Summary Objectives: Methodology based on expert panels has been commonly used to evaluate the appropriateness of interventions. An important issue is the adequate synthesis of the generated information in a way that is applicable to clinical decision making. This paper shows how statistical procedures can help synthesize the results of an expert panel. Methods: Three statistical techniques were applied to an expert panel that developed explicit criteria to assess the appropriateness of total hip joint replacement: a classification tree, a regression tree, and multiple correspondence analysis combined with automatic classification. Results: The results provided by the three models were shown in graphical displays and compared to the original panel results using crude and weighted probabilities of misclassification. The results were also applied to real interventions to assess the implications of misclassification for real patients. Conclusions: The statistical techniques help summarize data from expert panels and provide useful decision models for clinical practice, especially when the number of indications is large. However, the degree of misclassification and its implications should be taken into account.
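A minimal sketch of the classification-tree technique mentioned here, with hypothetical indication variables predicting a panel rating and a crude misclassification estimate; the panel's real criteria are not reproduced.

```python
# Minimal sketch: a classification tree synthesizing panel ratings.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical indication variables: age group, pain level, functional limitation
X = rng.integers(0, 3, size=(300, 3))
y = rng.integers(0, 3, size=300)  # 0=inappropriate, 1=uncertain, 2=appropriate

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# Crude probability of misclassification against the original panel ratings
print("misclassification:", 1 - tree.score(X, y))
print(export_text(tree, feature_names=["age", "pain", "function"]))
```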
APA, Harvard, Vancouver, ISO, and other styles
38

Nabil, Dib, Radhwane Benali, and Fethi Bereksi Reguig. "Epileptic seizure recognition using EEG wavelet decomposition based on nonlinear and statistical features with support vector machine classification." Biomedical Engineering / Biomedizinische Technik 65, no. 2 (April 28, 2020): 133–48. http://dx.doi.org/10.1515/bmt-2018-0246.

Full text
Abstract:
Epileptic seizure (ES) is a neurological brain dysfunction. ES can be detected using the electroencephalogram (EEG) signal. However, visual inspection of ES using long-time EEG recordings is a difficult, time-consuming and costly procedure. Thus, automatic epilepsy recognition is of primary importance. In this paper, a new method is proposed for automatic ES recognition using short-time EEG recordings. The method is based on first decomposing the EEG signals into sub-signals using the discrete wavelet transform. Then, from the obtained sub-signals, different nonlinear parameters such as approximate entropy (ApEn), the largest Lyapunov exponent (LLE) and statistical parameters are determined. These parameters, along with phase entropies calculated through higher-order spectrum analysis, are used as the input vector of a multi-class support vector machine (MSVM) for ES recognition. The proposed method is evaluated using the standard EEG database developed by the Department of Epileptology, University of Bonn, Germany. The evaluation is carried out through a large number of classification experiments. Different statistical metrics, namely sensitivity (Se), specificity (Sp) and classification accuracy (Ac), are calculated and compared to those obtained in the scientific research literature. The obtained results show that the proposed method achieves high accuracies, which are as good as the best existing state-of-the-art methods studied using the same EEG database.
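A minimal sketch of the DWT decomposition and per-sub-band feature extraction; the synthetic signal stands in for a real EEG segment, and the paper's nonlinear features (ApEn, LLE, phase entropies) and MSVM training are omitted.

```python
# Minimal sketch: wavelet decomposition of an EEG segment into sub-signals
# and simple statistical features per sub-band.
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

eeg = np.random.default_rng(0).normal(size=1024)   # stand-in EEG segment
coeffs = pywt.wavedec(eeg, "db4", level=4)         # A4, D4, D3, D2, D1

features = []
for c in coeffs:
    features += [c.mean(), c.std(), skew(c), kurtosis(c)]
print(len(features), "features per segment")
# Such vectors would then train e.g. sklearn.svm.SVC for multi-class ES recognition.
```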
APA, Harvard, Vancouver, ISO, and other styles
39

Ali, Ahmed K., and Ergun Erçelebi. "An M-QAM Signal Modulation Recognition Algorithm in AWGN Channel." Scientific Programming 2019 (May 12, 2019): 1–17. http://dx.doi.org/10.1155/2019/6752694.

Full text
Abstract:
Computing distinctive features from the input data prior to classification contributes to the complexity of automatic modulation classification (AMC), which is essentially a pattern recognition problem. Algorithms focusing on multilevel quadrature amplitude modulation (M-QAM) under different channel scenarios have been well detailed. However, a search of the literature revealed that few studies have addressed the classification of high-order M-QAM modulation schemes such as 128-QAM, 256-QAM, 512-QAM, and 1024-QAM. This work investigates the capability of natural logarithmic properties and the extraction of higher-order cumulant (HOC) features from raw received data. The HOC features were extracted under an additive white Gaussian noise (AWGN) channel, and four effective parameters were defined to distinguish the modulation types from the set 4-QAM to 1024-QAM. This approach makes the classifier more intelligent and improves the classification success rate. The simulation results show that a very good classification rate is achieved at a low SNR of 5 dB under statistically modeled noisy channels, demonstrating the potential of the logarithmic classifier model for M-QAM signal classification. Furthermore, most results were promising and showed that the logarithmic classifier works well under both AWGN and different fading channels, and that it can achieve a reliable recognition rate even at signal-to-noise ratios below 0 dB. It can be considered an integrated automatic modulation classification (AMC) system for identifying high-order M-QAM signals, with a unique logarithmic classifier offering high versatility. Hence, it outperforms previous work on automatic modulation identification systems.
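A minimal sketch of estimating fourth-order cumulants (HOC) from a noisy complex baseband signal, using the standard cumulant formulas for zero-mean signals; the 16-QAM source and 5 dB AWGN below are illustrative, and the paper's logarithmic classifier itself is not reproduced.

```python
# Minimal sketch: fourth-order cumulant (HOC) estimation for M-QAM features.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 16-QAM symbols plus AWGN at roughly 5 dB SNR
levels = np.array([-3, -1, 1, 3], dtype=float)
sym = rng.choice(levels, 4096) + 1j * rng.choice(levels, 4096)
sym /= np.sqrt((np.abs(sym) ** 2).mean())            # normalize to unit power
noise_var = 10 ** (-5 / 10)                          # 5 dB SNR, unit signal power
x = sym + np.sqrt(noise_var / 2) * (rng.normal(size=4096) + 1j * rng.normal(size=4096))

# Standard moments and cumulants for a zero-mean complex signal
M20, M21 = (x ** 2).mean(), (np.abs(x) ** 2).mean()
M40, M42 = (x ** 4).mean(), (np.abs(x) ** 4).mean()
C40 = M40 - 3 * M20 ** 2
C42 = M42 - np.abs(M20) ** 2 - 2 * M21 ** 2
print("C40 =", C40, " C42 =", C42)
```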
APA, Harvard, Vancouver, ISO, and other styles
40

Almela, Ángela. "A Corpus-Based Study of Linguistic Deception in Spanish." Applied Sciences 11, no. 19 (September 23, 2021): 8817. http://dx.doi.org/10.3390/app11198817.

Full text
Abstract:
In the last decade, fields such as psychology and natural language processing have devoted considerable attention to automating deception detection, developing and employing a wide array of automated and computer-assisted methods for this purpose. Similarly, another emerging research area is focusing on computer-assisted deception detection using linguistics, with promising results. Accordingly, in the present article, the reader is first provided with an overall review of the state of the art in corpus-based research exploring linguistic cues to deception, as well as an overview of several approaches to the study of deception and of previous research into its linguistic detection. In an effort to promote corpus-based research in this context, this study explores linguistic cues to deception in written Spanish with the aid of an automatic text classification tool, by means of an ad hoc corpus containing ground-truth data. Interestingly, the key findings reveal that, although there is a set of linguistic cues that contributes to the global statistical classification model, there are some discursive differences across the subcorpora, yielding better classification results for the subcorpus containing emotionally loaded language.
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Xiashuang, Guanghong Gong, and Ni Li. "Automated Recognition of Epileptic EEG States Using a Combination of Symlet Wavelet Processing, Gradient Boosting Machine, and Grid Search Optimizer." Sensors 19, no. 2 (January 9, 2019): 219. http://dx.doi.org/10.3390/s19020219.

Full text
Abstract:
Automatic recognition methods for non-stationary electroencephalogram (EEG) data collected from EEG sensors play an essential role in neurological detection. The integrated approach proposed in this study consists of Symlet wavelet processing, a gradient boosting machine, and a grid search optimizer for a three-class classification scheme covering normal subjects, intermittent epilepsy, and continuous epilepsy. Fourth-order Symlet wavelets are adopted to decompose the EEG data into five frequency sub-bands, namely gamma, beta, alpha, theta, and delta, whose statistical features are computed and used as classification features. The grid search optimizer is used to automatically find the optimal parameters for training the classifier. The classification accuracy of the gradient boosting machine was compared with that of a conventional support vector machine and a random forest classifier constructed according to previous descriptions. Multiple performance indices were used to evaluate the proposed classification scheme, which provided better classification accuracy and detection effectiveness than recently reported in other studies on three-class classification of EEG data.
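A minimal sketch of the Symlet-4 decomposition, sub-band statistics, gradient boosting, and grid search, using PyWavelets and scikit-learn on stand-in data; the actual feature set and parameter grid may differ.

```python
# Minimal sketch: sym4 sub-band features + gradient boosting + grid search.
import numpy as np
import pywt
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)

def subband_features(sig):
    # 4-level sym4 decomposition -> 5 coefficient arrays (delta..gamma range)
    return np.array([f(c) for c in pywt.wavedec(sig, "sym4", level=4)
                     for f in (np.mean, np.std)])

X = np.array([subband_features(rng.normal(size=512)) for _ in range(120)])
y = rng.integers(0, 3, size=120)   # stand-in labels: normal / intermittent / continuous

grid = GridSearchCV(GradientBoostingClassifier(),
                    {"n_estimators": [100, 200], "learning_rate": [0.05, 0.1]},
                    cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```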
APA, Harvard, Vancouver, ISO, and other styles
42

Joy, R. Catherine, S. Thomas George, A. Albert Rajan, M. S. P. Subathra, N. J. Sairamya, J. Prasanna, Mazin Abed Mohammed, Alaa S. Al-Waisy, Mustafa Musa Jaber, and Mohammed Nasser Al-Andoli. "Detection and Classification of ADHD from EEG Signals Using Tunable Q-Factor Wavelet Transform." Journal of Sensors 2022 (September 15, 2022): 1–17. http://dx.doi.org/10.1155/2022/3590973.

Full text
Abstract:
The automatic identification of Attention Deficit Hyperactivity Disorder (ADHD) is essential for developing ADHD diagnosis tools that assist healthcare professionals. Recently, there has been considerable interest in ADHD detection from EEG signals because it appears to be a rapid method for identifying and treating this disorder. This paper proposes a technique for detecting ADHD from EEG signals using nonlinear features extracted with the tunable Q-factor wavelet transform (TQWT). The 16 channels of EEG signal data are decomposed into the optimal number of time-frequency sub-bands using the TQWT filter banks. Unique feature vectors are evaluated using the Katz and Higuchi nonlinear fractal dimension methods at each decomposition level. An Artificial Neural Network classifier with 10-fold cross-validation is found to be an effective classifier for discriminating between ADHD and normal subjects. Different performance metrics reveal that the proposed technique can effectively classify ADHD and normal subjects with the highest accuracy. The statistical analysis showed that the Katz and Higuchi nonlinear feature estimation methods provide potential features that can be classified with high accuracy, sensitivity, and specificity and are suitable for automatic detection of ADHD. The proposed system is capable of accurately distinguishing between ADHD and non-ADHD subjects with a maximum accuracy of 100%.
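One of the nonlinear features mentioned here, the Katz fractal dimension, can be sketched in a common simplified formulation for sampled time series; in the paper it would be computed per TQWT sub-band before feeding the ANN classifier.

```python
# Minimal sketch: Katz fractal dimension of a sampled signal
# (simplified form KFD = log10(n) / (log10(n) + log10(d/L))).
import numpy as np

def katz_fd(x):
    x = np.asarray(x, dtype=float)
    dists = np.abs(np.diff(x))        # successive point distances (unit x-step)
    L = dists.sum()                   # total curve length
    d = np.abs(x - x[0]).max()        # max distance from the first point
    n = len(x) - 1                    # number of steps
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

sig = np.random.default_rng(0).normal(size=1024)   # stand-in sub-band signal
print(katz_fd(sig))
```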
APA, Harvard, Vancouver, ISO, and other styles
43

Lata, K. Ramani, P. Penczek, and J. Frank. "Automatic particle picking from electron micrographs." Proceedings, annual meeting, Electron Microscopy Society of America 52 (1994): 122–23. http://dx.doi.org/10.1017/s0424820100168347.

Full text
Abstract:
The present-day interactive manual selection of biological molecules from digitized micrographs for single particle averaging and reconstruction requires substantial effort and time. Thus a computer algorithm capable of recognition of structural content and selection of particles would be desirable. A few approaches have been proposed in the past. The method by Frank and Wagenknecht is based on the principle of correlation search. Van Heel's method is based on the computation of the local variance over a small area around each point of the image field. The method by Harauz and Fong-Lochovsky is based on edge detection. The present work focussed on the detection and classification of particles by exploiting the standard statistical methods of discriminant analysis. The proposed technique is described in the block diagram (Fig. 1). As illustrated in the figure, the program consists of three distinct segments devoted to, respectively, preparation of the data, a training session, and automatic selection based on a discriminant function set up in the training. In the data preparation segment, the micrograph is (i) reduced four-fold in size, (ii) low-pass filtered and (iii) run through a peak search algorithm.
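A minimal sketch of the data-preparation segment just described — four-fold reduction, low-pass filtering, and a peak search — using SciPy as a stand-in for the original implementation.

```python
# Minimal sketch: micrograph preparation for automatic particle picking.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, zoom

img = np.random.default_rng(0).random((1024, 1024))   # stand-in micrograph
small = zoom(img, 0.25)                               # (i) four-fold size reduction
smooth = gaussian_filter(small, sigma=2.0)            # (ii) low-pass filter
# (iii) peak search: points equal to the local maximum over a window,
# kept only if they stand out from the background
peaks = np.argwhere((smooth == maximum_filter(smooth, size=15)) &
                    (smooth > smooth.mean() + 2 * smooth.std()))
print(len(peaks), "candidate particle locations")
```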
APA, Harvard, Vancouver, ISO, and other styles
44

Prakasam, P., and M. Madheswaran. "Digital Modulation Identification Model Using Wavelet Transform and Statistical Parameters." Journal of Computer Systems, Networks, and Communications 2008 (2008): 1–8. http://dx.doi.org/10.1155/2008/175236.

Full text
Abstract:
A generalized modulation identification scheme is developed and presented. With the help of this scheme, the automatic modulation classification and recognition of wireless communication signals with a priori unknown parameters is possible and effective. The special features of the procedure are the possibility to adapt it dynamically to nearly all modulation types, and the capability to identify them. The developed scheme, based on the wavelet transform and statistical parameters, has been used to identify M-ary PSK, M-ary QAM, GMSK, and M-ary FSK modulations. The simulated results show that correct modulation identification is possible down to a lower bound of 5 dB SNR. The identification percentage has been analyzed based on the confusion matrix. When the SNR is above 5 dB, the probability of detection of the proposed system is more than 0.968. The performance of the proposed scheme has been compared with existing methods and found to identify all digital modulation schemes at low SNR.
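A minimal sketch of deriving per-class identification percentages from a confusion matrix, as in the analysis above; the matrix values are hypothetical.

```python
# Minimal sketch: identification percentage from a confusion matrix.
import numpy as np

# rows = true modulation, cols = identified modulation (e.g., PSK/QAM/GMSK/FSK)
cm = np.array([[97, 1, 1, 1],
               [2, 96, 1, 1],
               [1, 1, 97, 1],
               [0, 1, 2, 97]])
per_class = cm.diagonal() / cm.sum(axis=1)   # probability of detection per class
print("per-class:", per_class, " overall:", cm.diagonal().sum() / cm.sum())
```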
APA, Harvard, Vancouver, ISO, and other styles
45

BAI, G. MERCY, and P. VENKADESH. "TAYLOR–MONARCH BUTTERFLY OPTIMIZATION-BASED SUPPORT VECTOR MACHINE FOR ACUTE LYMPHOBLASTIC LEUKEMIA CLASSIFICATION WITH BLOOD SMEAR MICROSCOPIC IMAGES." Journal of Mechanics in Medicine and Biology 21, no. 06 (June 21, 2021): 2150041. http://dx.doi.org/10.1142/s021951942150041x.

Full text
Abstract:
Acute lymphoblastic leukemia (ALL) is a serious hematological neoplasm characterized by the development of immature and abnormal lymphoblasts. However, microscopic examination of bone marrow is the only way to achieve leukemia detection. Various methods have been developed for automatic leukemia detection, but these methods are costly and time-consuming. Hence, an effective leukemia detection approach is designed using the proposed Taylor–monarch butterfly optimization-based support vector machine (Taylor–MBO-based SVM), where the proposed Taylor–MBO is designed by integrating the Taylor series and MBO. Automatic segmentation of blood smear images is performed by estimating optimal threshold values. By extracting features such as texture, statistical, and grid-based features from the segmented smear image, classification performance is increased with less training time. The kernel function of the SVM performs the leukemia classification, while the proposed Taylor–MBO algorithm carries out the training of the SVM. The proposed Taylor–MBO-based SVM obtained better performance in terms of accuracy, sensitivity, and specificity, with 94.5751%, 95.526%, and 94.570%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
46

Ting, Jo-Anne, Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. "Efficient Learning and Feature Selection in High-Dimensional Regression." Neural Computation 22, no. 4 (April 2010): 831–86. http://dx.doi.org/10.1162/neco.2009.02-08-702.

Full text
Abstract:
We present a novel algorithm for efficient learning and feature selection in high-dimensional regression problems. We arrive at this model through a modification of the standard regression model, enabling us to derive a probabilistic version of the well-known statistical regression technique of backfitting. Using the expectation-maximization algorithm, along with variational approximation methods to overcome intractability, we extend our algorithm to include automatic relevance determination of the input features. This variational Bayesian least squares (VBLS) approach retains its simplicity as a linear model, but offers a novel, statistically robust, black-box approach to generalized linear regression with high-dimensional inputs. It can be easily extended to nonlinear regression and classification problems. In particular, we derive the framework of sparse Bayesian learning, the relevance vector machine, with VBLS at its core, offering significant computational and robustness advantages for this class of methods. The iterative nature of VBLS makes it most suitable for real-time incremental learning, which is crucial especially in the application domains of robotics, brain-machine interfaces, and neural prosthetics, where real-time learning of models for control is needed. We evaluate our algorithm on synthetic and neurophysiological data sets, as well as on standard regression and classification benchmark data sets, comparing it with other competitive statistical approaches and demonstrating its suitability as a drop-in replacement for other generalized linear regression techniques.
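To make the automatic relevance determination (ARD) step concrete, the standard ARD formulation used in sparse Bayesian learning places an individual Gaussian prior precision on each regression weight. The following is a sketch of the usual quantities, following Tipping's relevance vector machine (which the abstract says is derived with VBLS at its core); the paper's own variational updates differ in detail.

```latex
% Standard ARD prior over weights and the resulting Gaussian posterior
% for a linear model t = \Phi w + \epsilon with noise precision \beta.
p(\mathbf{w}\mid\boldsymbol{\alpha})
  = \prod_{i=1}^{d}\mathcal{N}\!\left(w_i \,\middle|\, 0,\ \alpha_i^{-1}\right),
\qquad
\boldsymbol{\Sigma}
  = \left(\operatorname{diag}(\boldsymbol{\alpha})
          + \beta\,\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}\right)^{-1},
\qquad
\boldsymbol{\mu} = \beta\,\boldsymbol{\Sigma}\,\boldsymbol{\Phi}^{\top}\mathbf{t}.
% Re-estimation drives the precisions of irrelevant inputs to infinity,
% pruning the corresponding features:
\alpha_i \leftarrow \frac{\gamma_i}{\mu_i^{2}},
\qquad
\gamma_i = 1 - \alpha_i\,\Sigma_{ii}.
```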
APA, Harvard, Vancouver, ISO, and other styles
47

Li, Mi, Lei Cao, Qian Zhai, Peng Li, Sa Liu, Richeng Li, Lei Feng, Gang Wang, Bin Hu, and Shengfu Lu. "Method of Depression Classification Based on Behavioral and Physiological Signals of Eye Movement." Complexity 2020 (January 14, 2020): 1–9. http://dx.doi.org/10.1155/2020/4174857.

Full text
Abstract:
This paper presents a method of depression recognition based on direct measurement of affective disorder. Firstly, visual emotional stimuli are used to obtain eye movement behavior signals and physiological signals directly related to mood. Then, in order to eliminate noise and redundant information and obtain better classification features, statistical methods (FDR corrected t-test) and principal component analysis (PCA) are used to select features of eye movement behavior and physiological signals. Finally, based on feature extraction, we use kernel extreme learning machine (KELM) to recognize depression based on PCA features. The results show that, on the one hand, the classification performance based on the fusion features of eye movement behavior and physiological signals is better than using a single behavior feature and a single physiological feature; on the other hand, compared with previous methods, the proposed method for depression recognition achieves better classification results. This study is of great value for the establishment of an automatic depression diagnosis system for clinical use.
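A minimal sketch of a kernel extreme learning machine (KELM) with an RBF kernel, following the standard closed-form kernel-ELM formulation (output weights beta = (I/C + K)^-1 · T); the data are stand-ins, and the eye-movement features and PCA step are omitted.

```python
# Minimal sketch: RBF-kernel extreme learning machine (KELM) classifier.
import numpy as np

rng = np.random.default_rng(0)
X, Xte = rng.normal(size=(100, 10)), rng.normal(size=(20, 10))
y = rng.integers(0, 2, size=100)
T = np.eye(2)[y]                                   # one-hot targets

def rbf(A, B, gamma=0.1):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

C = 10.0
K = rbf(X, X)                                      # training kernel matrix
beta = np.linalg.solve(np.eye(len(X)) / C + K, T)  # closed-form output weights
pred = rbf(Xte, X) @ beta
print(pred.argmax(axis=1))
```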
APA, Harvard, Vancouver, ISO, and other styles
48

BAIRY, G. MURALIDHAR, U. C. NIRANJAN, and SUBHA D. PUTHANKATTIL. "AUTOMATED CLASSIFICATION OF DEPRESSION EEG SIGNALS USING WAVELET ENTROPIES AND ENERGIES." Journal of Mechanics in Medicine and Biology 16, no. 03 (May 2016): 1650035. http://dx.doi.org/10.1142/s0219519416500354.

Full text
Abstract:
Depression is a mental disorder that relates to a state of sadness and dejection. It also affects the emotional and physical state of a person. Currently, there are no standard diagnostic tests for depression that are able to produce conclusive results, and moreover the symptoms of depression are hard to diagnose. Many people who are suffering from depression are unaware of their illness. Electroencephalographic (EEG) signals can be used to detect alterations in the brain’s electrochemical potential. The present work is based on the automated classification of normal and depression EEG signals. Signal processing methods are used to extract hidden information from the EEG signals. In this work, normal and depression EEG signals are used, and the discrete wavelet transform (DWT) is performed up to two levels. The features (skewness, energy, kurtosis, standard deviation (SD), mean and entropy) are extracted at the various detail coefficient levels of the DWT. The extracted features then undergo statistical analysis using Student’s t-test, which determines the significance of differences in the features. A Support Vector Machine classifier with a radial basis kernel function (SVM-RBF) was used, and a classification accuracy of 88.9237% was obtained. Hence, this proposed automatic classification system can serve as a useful diagnostic and monitoring tool for the detection of depression.
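A minimal sketch of the Student's t-test screening step: retain only DWT features that differ significantly between the normal and depression groups before SVM-RBF classification. The feature matrices below are stand-ins.

```python
# Minimal sketch: t-test screening of DWT features before classification.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
normal_feats = rng.normal(0.0, 1.0, size=(30, 12))     # 12 stand-in DWT features
depress_feats = rng.normal(0.3, 1.0, size=(30, 12))

_, p = ttest_ind(normal_feats, depress_feats, axis=0)  # per-feature p-values
significant = np.flatnonzero(p < 0.05)
print("significant feature indices:", significant)
# The retained columns would then train e.g. sklearn.svm.SVC(kernel="rbf").
```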
APA, Harvard, Vancouver, ISO, and other styles
49

Záhorská, Renáta, Ladislav Nozdrovický, and Ľudovít Mikulášik. "Implementation of Statistical Methods and SWOT Analysis for Evaluation of Metal Waste Management in Engineering Company." Acta Technologica Agriculturae 19, no. 4 (December 1, 2016): 89–95. http://dx.doi.org/10.1515/ata-2016-0018.

Full text
Abstract:
This paper presents the results of waste management research in a selected engineering company, RIBE Slovakia, k. s., Nitra factory. Within its manufacturing programme, the factory uses a wide range of manufacturing technologies (cutting operations, metal cold-forming, thread rolling, metal surface finishing, automatic sorting, metrology, assembly) to produce its final products: connecting components (fasteners) delivered to many industrial fields (agricultural machinery manufacturers, the car industry, etc.). Data characterizing the production technologies and the range of manufactured products were obtained. Particular attention is paid to the classification of waste produced by the engineering production and to waste management within the company. Within the research, data characterizing the time course of production of various waste types were obtained and evaluated by statistical methods using STATGRAPHICS. SWOT analysis was applied to objectively assess the company's waste management in terms of strengths and weaknesses, as well as to determine opportunities and potential threats. The results of the SWOT analysis led to the conclusion that the company RIBE Slovakia, k. s., Nitra factory has a well-organized waste management system. The fact that the waste management system is incorporated into the company management system can be considered an advantage.
APA, Harvard, Vancouver, ISO, and other styles
50

Schaufelberger, Matthias, Reinald Kühle, Andreas Wachter, Frederic Weichel, Niclas Hagen, Friedemann Ringwald, Urs Eisenmann, et al. "A Radiation-Free Classification Pipeline for Craniosynostosis Using Statistical Shape Modeling." Diagnostics 12, no. 7 (June 21, 2022): 1516. http://dx.doi.org/10.3390/diagnostics12071516.

Full text
Abstract:
Background: Craniosynostosis is a condition caused by the premature fusion of skull sutures, leading to irregular growth patterns of the head. Three-dimensional photogrammetry is a radiation-free alternative to diagnosis using computed tomography. While statistical shape models have been proposed to quantify head shape, no shape-model-based classification approach has been presented yet. Methods: We present a classification pipeline that enables an automated diagnosis of three types of craniosynostosis. The pipeline is based on a statistical shape model built from photogrammetric surface scans. We made the model and pathology-specific submodels publicly available, making it the first publicly available craniosynostosis-related head model, as well as the first focusing on infants younger than 1.5 years. To the best of our knowledge, we performed the largest classification study for craniosynostosis to date. Results: Our classification approach yields an accuracy of 97.8%, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Regarding the statistical shape model, we demonstrate that our model performs similarly to other statistical shape models of the human head. Conclusion: We present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis. Our publicly available shape model enables the assessment of craniosynostosis on realistic and synthetic data.
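A minimal sketch of shape-model-based classification: PCA over flattened surface-scan coordinates acts as the statistical shape model, and the resulting shape coefficients feed a classifier. The data, dimensions, and classifier choice are stand-ins, not the published pipeline.

```python
# Minimal sketch: PCA shape coefficients as features for diagnosis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# 80 heads x (500 landmarks * 3 coordinates), 4 classes
# (3 hypothetical synostosis types + control)
X = rng.normal(size=(80, 1500))
y = rng.integers(0, 4, size=80)

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.score(X, y))
```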
APA, Harvard, Vancouver, ISO, and other styles