
Journal articles on the topic 'Evaluation of extreme classifiers'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Evaluation of extreme classifiers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Balasubramanian, Kishore, and N. P. Ananthamoorthy. "Analysis of hybrid statistical textural and intensity features to discriminate retinal abnormalities through classifiers." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 233, no. 5 (March 20, 2019): 506–14. http://dx.doi.org/10.1177/0954411919835856.

Abstract:
Retinal image analysis relies on the effectiveness of computational techniques to discriminate various abnormalities of the eye, such as diabetic retinopathy, macular degeneration and glaucoma. In the case of glaucoma, the onset of the disease often goes unnoticed, and its effect is felt only at a later stage; such degenerative diseases warrant early diagnosis and treatment. In this work, the performance of statistical and textural features in retinal vessel segmentation is evaluated through classifiers such as the extreme learning machine, support vector machine and random forest. The fundus images are first preprocessed for noise reduction, image enhancement and contrast adjustment. Two-dimensional Gabor wavelets and partition clustering are applied to the preprocessed image to extract the blood vessels. Finally, the combined hybrid features extracted from the image, comprising statistical textural, intensity and vessel morphological features, are used to detect glaucomatous abnormality through the classifiers. A crisp decision can be taken depending on the classification rates of the classifiers. The public databases RIM-ONE and high-resolution fundus (HRF), along with a local dataset, are used for evaluation with threefold cross-validation. Evaluation is based on accuracy, sensitivity and specificity. The hybrid features obtained an overall accuracy of 97% when tested using the classifiers. The support vector machine classifier achieves an accuracy of 93.33% on HRF, 93.8% on RIM-ONE and 95.3% on the local dataset. For the extreme learning machine classifier, the accuracy is 95.1% on HRF, 97.8% on RIM-ONE and 96.8% on the local dataset. Accuracies of 94.5% on HRF, 92.5% on RIM-ONE and 94.2% on the local dataset are obtained for the random forest classifier. Validation of the experimental results indicates that the hybrid features can be deployed in supervised classifiers to discriminate retinal abnormalities effectively.
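The accuracy, sensitivity and specificity reported in the abstract above follow directly from a binary confusion matrix; a minimal sketch with hypothetical counts (not taken from the cited study):

```python
# Illustrative binary confusion counts for a glaucoma screen
# (hypothetical numbers, not from the cited study).
tp, fn = 45, 3   # glaucomatous images: correctly / wrongly classified
tn, fp = 50, 2   # healthy images: correctly / wrongly classified

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate on diseased eyes
specificity = tn / (tn + fp)   # true-negative rate on healthy eyes
```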
2

Michau, Gabriel, Yang Hu, Thomas Palmé, and Olga Fink. "Feature learning for fault detection in high-dimensional condition monitoring signals." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 234, no. 1 (August 24, 2019): 104–15. http://dx.doi.org/10.1177/1748006x19868335.

Abstract:
Complex industrial systems are continuously monitored by a large number of heterogeneous sensors. The diversity of their operating conditions and of the possible fault types makes it impossible to collect enough data for learning all the possible fault patterns. This article proposes an integrated automatic unsupervised feature learning and one-class classification approach for fault detection that uses data on healthy conditions only for its training. The approach is based on stacked extreme learning machines (namely, hierarchical extreme learning machines) and comprises an autoencoder, performing unsupervised feature learning, stacked with a one-class classifier that monitors the distance of the test data to the healthy training class, thereby assessing the health of the system. This study provides a comprehensive evaluation of the fault detection capability of hierarchical extreme learning machines compared to other machine learning approaches: stand-alone one-class classifiers (extreme learning machines and support vector machines), the same one-class classifiers combined with a traditional dimensionality reduction method (principal component analysis), and a deep belief network. The performance is first evaluated on a synthetic dataset that encompasses typical characteristics of condition monitoring data. Subsequently, the approach is evaluated on a real case study of a power plant fault. The proposed algorithm for fault detection, combining feature learning with the one-class classifier, demonstrates better performance, particularly in cases where the condition monitoring data contain several non-informative signals.
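A minimal sketch of the one-class idea described above: train on healthy data only, then flag test points whose distance to the healthy class exceeds a threshold. Synthetic Gaussian data stands in for condition monitoring signals; this is an illustration, not the paper's hierarchical ELM:

```python
import numpy as np

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 4))   # training: healthy conditions only

mu = healthy.mean(axis=0)                       # centroid of the healthy class
train_dist = np.linalg.norm(healthy - mu, axis=1)
threshold = np.quantile(train_dist, 0.99)       # tolerate 1% of healthy points

def health_score(x):
    """Distance of a test point to the healthy class; above threshold => fault."""
    return np.linalg.norm(x - mu)

faulty_point = np.full(4, 6.0)                  # far from the healthy operating region
is_fault = health_score(faulty_point) > threshold
```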
3

Raza, Ali, Furqan Rustam, Hafeez Ur Rehman Siddiqui, Isabel de la Torre Diez, Begoña Garcia-Zapirain, Ernesto Lee, and Imran Ashraf. "Predicting Genetic Disorder and Types of Disorder Using Chain Classifier Approach." Genes 14, no. 1 (December 26, 2022): 71. http://dx.doi.org/10.3390/genes14010071.

Abstract:
Genetic disorders are the result of mutations in the deoxyribonucleic acid (DNA) sequence, which can be developed or inherited from parents. Such mutations may lead to fatal diseases such as Alzheimer's, cancer, hemochromatosis, etc. Recently, the use of artificial intelligence-based methods has shown superb success in the prediction and prognosis of different diseases. The potential of such methods can be utilized to predict genetic disorders at an early stage using genome data for timely treatment. This study focuses on the multi-label multi-class problem and makes two major contributions to genetic disorder prediction. First, a novel feature engineering approach is proposed in which the class probabilities from an extra tree (ET) and a random forest (RF) are joined to make a feature set for model training. Secondly, the study utilizes the classifier chain approach, where multiple classifiers are joined in a chain and the predictions from all the preceding classifiers are used by the succeeding classifiers to make the final prediction. Because of the multi-label multi-class data, macro accuracy, Hamming loss, and the α-evaluation score are used to evaluate the performance. Results suggest that extreme gradient boosting (XGB) produces the best scores, with a 92% α-evaluation score and an 84% macro accuracy score. The performance of XGB is much better than that of state-of-the-art approaches, in terms of both performance and computational complexity.
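The multi-label metrics named above (Hamming loss, macro accuracy) can be sketched on toy predictions; the α-evaluation score is omitted, and the labels below are hypothetical:

```python
# y_true / y_pred: rows = samples, columns = binary labels (hypothetical data).
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]

n, L = len(y_true), len(y_true[0])

# Hamming loss: fraction of individual label slots predicted wrongly.
hamming_loss = sum(t != p for row_t, row_p in zip(y_true, y_pred)
                   for t, p in zip(row_t, row_p)) / (n * L)

# Macro accuracy: per-label accuracy, averaged over the labels.
per_label_acc = [sum(y_true[i][j] == y_pred[i][j] for i in range(n)) / n
                 for j in range(L)]
macro_accuracy = sum(per_label_acc) / L
```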
4

Afolabi, Hassan A., and Aburas A. Abdurazzag. "Statistical performance assessment of supervised machine learning algorithms for intrusion detection system." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (March 1, 2024): 266. http://dx.doi.org/10.11591/ijai.v13.i1.pp266-277.

Abstract:
Several studies have shown that an ensemble classifier's effectiveness is directly correlated with the diversity of its members. However, the choice of algorithms used to build the base learners is one of the issues encountered when using a stacking ensemble. Given the number of options, choosing the best ones can be challenging. In this study, we selected some of the most extensively applied supervised machine learning algorithms and performed a performance evaluation in terms of well-known metrics and validation methods using two internet of things (IoT) intrusion detection datasets, namely the network-based anomaly internet of things (N-BaIoT) dataset and the internet of things intrusion detection dataset (IoTID20). Friedman's and Dunn's tests are used to statistically examine the significant differences between the classifier groups. The goal of this study is to encourage security researchers to develop intrusion detection systems (IDSs) using ensemble learning and to propose an appropriate method for selecting diverse base classifiers for a stacking-type ensemble. The performance results indicate that the adaptive boosting (AdaBoost), gradient boosting (GB), gradient boosting machine (GBM), light gradient boosting machine (LGBM), extreme gradient boosting (XGB) and deep neural network (DNN) classifiers exhibit a better trade-off between the performance parameters and classification time, making them ideal choices for developing anomaly-based IDSs.
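The validation methods used in such comparisons can be sketched as an index splitter; this is a generic k-fold illustration, not the paper's exact protocol:

```python
def k_fold_indices(n_samples, k):
    """Split range(n_samples) into k contiguous, near-equal folds,
    returning (test_idx, train_idx) pairs -- a minimal stand-in for the
    k-fold validation used when comparing candidate base classifiers."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        folds.append((test, train))
        start += size
    return folds

splits = k_fold_indices(10, 5)
```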
5

Thiamchoo, Nantarika, and Pornchai Phukpattaranont. "Evaluation of feature projection techniques in object grasp classification using electromyogram signals from different limb positions." PeerJ Computer Science 8 (May 6, 2022): e949. http://dx.doi.org/10.7717/peerj-cs.949.

Abstract:
A myoelectric prosthesis is manipulated using electromyogram (EMG) signals from the existing muscles for performing the activities of daily living. A feature vector formed by concatenating data from many EMG channels may result in a high-dimensional space, which can cause prolonged computation time, redundancy, and irrelevant information. We evaluated feature projection techniques, namely principal component analysis (PCA), linear discriminant analysis (LDA), t-distributed stochastic neighbor embedding (t-SNE), and the spectral regression extreme learning machine (SRELM), applied to object grasp classification. These cover feature projections of both linear and nonlinear, and supervised and unsupervised, types. All pairs of the four feature projections and seven types of classifiers were evaluated, with data from six EMG channels and an IMU sensor for nine upper limb positions in the transverse plane. The results showed that SRELM outperformed LDA among supervised feature projections, and t-SNE was superior to PCA among unsupervised feature projections. The classification errors from SRELM and t-SNE paired with the seven classifiers ranged from 1.50% to 2.65% and from 1.27% to 17.15%, respectively. A one-way ANOVA test revealed no statistically significant difference by classifier type when using the SRELM projection, which is a nonlinear supervised feature projection (p = 0.334). On the other hand, an appropriate classifier must be selected carefully for use with t-SNE, which is a nonlinear unsupervised feature projection. We achieved the lowest classification error, 1.27%, using t-SNE paired with a k-nearest neighbors classifier. For SRELM, the lowest classification error, 1.50%, was obtained when paired with a neural network classifier.
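PCA, the unsupervised linear projection evaluated above, reduces a feature vector to its top principal components; a minimal SVD-based sketch on synthetic data, with six features standing in for six EMG channels:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))          # synthetic features, e.g. 6 EMG channels

Xc = X - X.mean(axis=0)                # centre the features
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                     # project onto the top-2 principal axes
```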
6

Kamaruddin, Ami Shamril, Mohd Fikri Hadrawi, Yap Bee Wah, and Sharifah Aliman. "An evaluation of nature-inspired optimization algorithms and machine learning classifiers for electricity fraud prediction." Indonesian Journal of Electrical Engineering and Computer Science 32, no. 1 (October 1, 2023): 468. http://dx.doi.org/10.11591/ijeecs.v32.i1.pp468-477.

Abstract:
This study evaluated nature-inspired optimization algorithms for improving classification involving imbalanced class problems. Particle swarm optimization (PSO) and the grey wolf optimizer (GWO) were used to adaptively balance the class distribution, and then four supervised machine learning classifiers, artificial neural network (ANN), support vector machine (SVM), extreme gradient-boosted trees (XGBoost), and random forest (RF), were applied to maximize the classification performance for electricity fraud prediction. The imbalanced data were balanced using random undersampling (RUS) and the two nature-inspired techniques (PSO and GWO). Results showed that for the data balanced using random undersampling, ANN (Sentest = 50.31%) and XGBoost (Sentest = 66.32%) have better sensitivity than SVM (Sentest = 23.61%), while RF exhibits overfitting (Sentrain = 100%, Sentest = 71.25%). The classification performance of the RF model hybridized with PSO improved tremendously (Acctest = 96.98%, Sentest = 94.87%, Spectest = 99.16%, Pretest = 99.14%, F1 score = 96.96%, and area under the curve (AUC) = 0.989). This was closely followed by the hybrid of XGBoost with PSO. Moreover, RF and XGBoost hybridized with GWO also showed improvements and promising results. This study has shown that nature-inspired optimization algorithms (PSO and GWO) are effective methods for addressing imbalanced datasets.
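Random undersampling (RUS), one of the balancing techniques compared above, can be sketched in a few lines; the labels here are hypothetical:

```python
import random

random.seed(42)

# Hypothetical imbalanced labels: 0 = normal customer, 1 = fraud.
labels = [0] * 950 + [1] * 50
majority = [i for i, y in enumerate(labels) if y == 0]
minority = [i for i, y in enumerate(labels) if y == 1]

# Random undersampling: keep all minority cases, sample an equal number of
# majority cases without replacement, so the classifier trains on a balanced set.
kept_majority = random.sample(majority, len(minority))
balanced_idx = kept_majority + minority
```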
7

Tian, Zhang, Chen, Geng, and Wang. "Selective Ensemble Based on Extreme Learning Machine for Sensor-Based Human Activity Recognition." Sensors 19, no. 16 (August 8, 2019): 3468. http://dx.doi.org/10.3390/s19163468.

Abstract:
Sensor-based human activity recognition (HAR) has attracted interest in both academic and applied fields, and can be utilized in health-related areas, fitness, sports training, etc. With a view to improving the performance of sensor-based HAR and optimizing the generalizability and diversity of the base classifiers of the ensemble system, a novel HAR approach (pairwise diversity measure and glowworm swarm optimization-based selective ensemble learning, DMGSOSEN) that utilizes ensemble learning with differentiated extreme learning machines (ELMs) is proposed in this paper. Firstly, the bootstrap sampling method is utilized to independently train multiple base ELMs, which make up the initial base classifier pool. Secondly, the initial pool is pre-pruned by calculating the pairwise diversity measure of each base ELM, which eliminates similar base ELMs and enhances the performance of the HAR system by balancing diversity and accuracy. Then, glowworm swarm optimization (GSO) is utilized to search for the optimal sub-ensemble among the base ELMs after pre-pruning. Finally, majority voting is utilized to combine the results of the selected base ELMs. For the evaluation of the proposed method, we collected a dataset from different locations on the body, including the chest, waist, left wrist, left ankle and right arm. The experimental results show that, compared with traditional ensemble algorithms such as Bagging and AdaBoost, and with other state-of-the-art pruning algorithms, the proposed approach is able to achieve better performance (96.7% accuracy and F1-score from the wrist) with fewer base classifiers.
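A common form of the pairwise diversity measure used for pre-pruning is the disagreement rate between two base classifiers' predictions; the sketch below uses that form on toy outputs and may differ from the paper's exact formulation:

```python
def disagreement(pred_a, pred_b):
    """Fraction of samples on which two classifiers' predictions differ;
    zero means the classifiers are redundant and one can be pruned."""
    return sum(a != b for a, b in zip(pred_a, pred_b)) / len(pred_a)

elm1 = [1, 0, 1, 1, 0, 1]
elm2 = [1, 1, 1, 0, 0, 1]
elm3 = [1, 0, 1, 1, 0, 1]   # identical to elm1 => zero diversity, prune one

div_12 = disagreement(elm1, elm2)
div_13 = disagreement(elm1, elm3)
```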
8

Guo, Weian, Yan Zhang, Ming Chen, Lei Wang, and Qidi Wu. "Fuzzy performance evaluation of Evolutionary Algorithms based on extreme learning classifier." Neurocomputing 175 (January 2016): 371–82. http://dx.doi.org/10.1016/j.neucom.2015.10.069.

9

Al-Gethami, Khalid M., Mousa T. Al-Akhras, and Mohammed Alawairdhi. "Empirical Evaluation of Noise Influence on Supervised Machine Learning Algorithms Using Intrusion Detection Datasets." Security and Communication Networks 2021 (January 15, 2021): 1–28. http://dx.doi.org/10.1155/2021/8836057.

Abstract:
Optimizing the detection of intrusions is becoming more crucial due to the continuously rising rate and ferocity of cyber threats and attacks. One popular method of optimizing the accuracy of intrusion detection systems (IDSs) is to employ machine learning (ML) techniques. However, many factors affect the accuracy of ML-based IDSs. One of these factors is noise, which can take the form of mislabelled instances, outliers, or extreme values. Determining the extent of the effect of noise helps in designing and building more robust ML-based IDSs. This paper empirically examines the extent of the effect of noise on the accuracy of ML-based IDSs by conducting a wide set of experiments. The ML algorithms used are decision tree (DT), random forest (RF), support vector machine (SVM), artificial neural networks (ANNs), and Naïve Bayes (NB). The experiments are conducted on two widely used intrusion datasets, NSL-KDD and UNSW-NB15. Moreover, the paper also investigates the use of these ML algorithms as base classifiers with two ensemble learning methods, bagging and boosting. The detailed results and findings are illustrated and discussed in this paper.
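Noise in the form of mislabelled instances can be injected as below; this is a simple illustration of how such experiments perturb training labels, not the paper's exact procedure:

```python
import random

random.seed(7)

def inject_label_noise(labels, noise_rate, classes=(0, 1)):
    """Flip a given fraction of labels to a different class -- a simple way
    to study how mislabelled instances degrade classifier accuracy."""
    noisy = list(labels)
    n_flip = int(len(labels) * noise_rate)
    for i in random.sample(range(len(labels)), n_flip):
        others = [c for c in classes if c != noisy[i]]
        noisy[i] = random.choice(others)
    return noisy

clean = [0] * 80 + [1] * 20
noisy = inject_label_noise(clean, 0.10)
```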
10

Okwonu, Friday Zinzendoff, Nor Aishah Ahad, Nicholas Oluwole Ogini, Innocent Ejiro Okoloko, and Wan Zakiyatussariroh Wan Husin. "COMPARATIVE PERFORMANCE EVALUATION OF EFFICIENCY FOR HIGH DIMENSIONAL CLASSIFICATION METHODS." Journal of Information and Communication Technology 21, No.3 (July 17, 2022): 437–64. http://dx.doi.org/10.32890/jict2022.21.3.6.

Abstract:
This paper aimed to determine the efficiency of classifiers for high-dimensional classification methods. It also investigated whether an extremely low misclassification rate translates into robust efficiency. To ensure an acceptable procedure, a benchmark evaluation threshold (BETH) was proposed as a metric for analyzing the comparative performance of high-dimensional classification methods. A simplified performance metric was derived to show the efficiency of different classification methods. To achieve the objectives, the existing probability of correct classification (PCC), or classification accuracy, reported in five different articles was used to generate the BETH value. Then, a comparative analysis was performed between the application of the BETH value and the well-established PCC value, derived from the confusion matrix. The analysis indicated that the BETH procedure had a minimum misclassification rate, unlike the Optimal method. The results also revealed that as the PCC inclined toward unity, the difference in misclassification rate between the two methods (BETH and PCC) became negligible. The study revealed that the BETH method was invariant to the performance established by the classifiers using the PCC criterion but demonstrated more relevant aspects of robustness and a minimum misclassification rate compared to the PCC method. In addition, the comparative analysis affirmed that the BETH method exhibited more robust efficiency than the Optimal method. The study concluded that a minimum misclassification rate yields robust performance efficiency.
11

Tariq, Muhammad Arham, Allah Bux Sargano, Muhammad Aksam Iftikhar, and Zulfiqar Habib. "Comparing Different Oversampling Methods in Predicting Multi-Class Educational Datasets Using Machine Learning Techniques." Cybernetics and Information Technologies 23, no. 4 (November 1, 2023): 199–212. http://dx.doi.org/10.2478/cait-2023-0044.

Abstract:
Predicting students' academic performance is a critical research area, yet imbalanced educational datasets, characterized by unequal representation of academic levels, present challenges for classifiers. While prior research has addressed imbalance in binary-class datasets, this study focuses on multi-class datasets. A comparison of ten resampling methods (SMOTE, ADASYN, Distance SMOTE, Borderline-SMOTE, KMeans-SMOTE, SVM-SMOTE, LN-SMOTE, MWSMOTE, Safe-Level SMOTE, and SMOTETomek) is conducted alongside nine classification models: K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), Support Vector Machine (SVM), Logistic Regression (LR), Extra Trees (ET), Random Forest (RF), Extreme Gradient Boosting (XGB), and AdaBoost (AdaB). Following a rigorous evaluation, including hyperparameter tuning and 10-fold cross-validation, KNN with SMOTETomek attains the highest accuracy, 83.7%, as demonstrated through an ablation study. These results emphasize SMOTETomek's effectiveness in mitigating class imbalance in educational datasets and highlight KNN's potential as an educational data mining classifier.
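SMOTE-style oversampling, common to most of the methods compared above, synthesizes minority points by interpolating between a minority sample and a near neighbour; a simplified numpy sketch (nearest neighbour taken over all minority points, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(3)
minority = rng.normal(loc=5.0, size=(20, 2))    # minority-class points

def smote_sample(X, rng):
    """SMOTE-style synthesis: interpolate between a random minority point
    and its nearest minority neighbour (simplified from k-NN SMOTE)."""
    i = rng.integers(len(X))
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                               # exclude the point itself
    j = int(d.argmin())                         # nearest neighbour
    lam = rng.random()                          # interpolation factor in [0, 1)
    return X[i] + lam * (X[j] - X[i])

new_point = smote_sample(minority, rng)
```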
12

Jafarzadeh, Hamid, Masoud Mahdianpari, Eric Gill, Fariba Mohammadimanesh, and Saeid Homayouni. "Bagging and Boosting Ensemble Classifiers for Classification of Multispectral, Hyperspectral and PolSAR Data: A Comparative Evaluation." Remote Sensing 13, no. 21 (November 2, 2021): 4405. http://dx.doi.org/10.3390/rs13214405.

Abstract:
In recent years, several powerful machine learning (ML) algorithms have been developed for image classification, especially those based on ensemble learning (EL). In particular, the Extreme Gradient Boosting (XGBoost) and Light Gradient Boosting Machine (LightGBM) methods have attracted researchers' attention in data science due to their superior results compared to other commonly used ML algorithms. Despite their popularity within the computer science community, they have not yet been examined in detail in the field of Earth Observation (EO) for satellite image classification. As such, this study investigates the capability of different EL algorithms, generally known as bagging and boosting algorithms, including Adaptive Boosting (AdaBoost), Gradient Boosting Machine (GBM), XGBoost, LightGBM, and Random Forest (RF), for the classification of Remote Sensing (RS) data. In particular, different classification scenarios were designed to compare the performance of these algorithms on three different types of RS data, namely high-resolution multispectral, hyperspectral, and Polarimetric Synthetic Aperture Radar (PolSAR) data. Moreover, a single Decision Tree (DT) classifier is considered as a baseline to evaluate classification accuracy. The experimental results demonstrated that the RF and XGBoost methods for the multispectral image, the LightGBM and XGBoost methods for hyperspectral data, and the XGBoost and RF algorithms for PolSAR data produced higher classification accuracies compared to the other ML techniques. This demonstrates the great capability of the XGBoost method for the classification of different types of RS data.
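Bagging, one of the two EL families compared above, trains each base learner on a bootstrap resample and combines them by majority vote; a toy sketch with threshold "stumps" on 1-D data (illustrative only, not a remote-sensing pipeline):

```python
import random

random.seed(0)

def bootstrap(data):
    """Sample len(data) items with replacement."""
    return [random.choice(data) for _ in data]

def train_stump(samples):
    """Toy 1-D learner: threshold at the midpoint of the two class means."""
    m0 = [x for x, y in samples if y == 0]
    m1 = [x for x, y in samples if y == 1]
    if not m0 or not m1:              # degenerate bootstrap: fixed fallback
        return lambda x: int(x > 1.0)
    t = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x: int(x > t)

# Hypothetical separable data: class 0 around [0, 1), class 1 around [1, 2).
data = [(x / 10, 0) for x in range(10)] + [(1 + x / 10, 1) for x in range(10)]
ensemble = [train_stump(bootstrap(data)) for _ in range(15)]

def vote(x):
    """Majority vote across the bagged stumps."""
    return int(sum(clf(x) for clf in ensemble) > len(ensemble) / 2)
```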
13

Tomita, Katsuyuki, Akira Yamasaki, Ryohei Katou, Tomoyuki Ikeuchi, Hirokazu Touge, Hiroyuki Sano, and Yuji Tohda. "Construction of a Diagnostic Algorithm for Diagnosis of Adult Asthma Using Machine Learning with Random Forest and XGBoost." Diagnostics 13, no. 19 (September 27, 2023): 3069. http://dx.doi.org/10.3390/diagnostics13193069.

Abstract:
An evidence-based diagnostic algorithm for adult asthma is necessary for effective treatment and management. We present a diagnostic algorithm that utilizes a random forest (RF) and an optimized eXtreme Gradient Boosting (XGBoost) classifier to diagnose adult asthma as an auxiliary tool. Data were gathered from the medical records of 566 adult outpatients who visited Kindai University Hospital with complaints of nonspecific respiratory symptoms. Specialists made a thorough diagnosis of asthma based on symptoms, physical indicators, and objective testing, including airway hyperresponsiveness. We used two decision-tree classifiers to identify the diagnostic algorithms: RF and XGBoost. Bayesian optimization was used to optimize the hyperparameters of RF and XGBoost. Accuracy and area under the curve (AUC) were used as evaluation metrics. The XGBoost classifier outperformed the RF classifier, with an accuracy of 81% and an AUC of 85%. A combination of symptom-physical signs and lung function tests was successfully used to construct a diagnostic algorithm based on the important features for diagnosing adult asthma. These results indicate that the proposed model can be reliably used to construct diagnostic algorithms with selected features from objective tests in different settings.
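The AUC metric used above equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (ties count half); a minimal sketch with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of positive/negative pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical asthma-probability scores from a classifier.
asthma = [0.9, 0.8, 0.7, 0.4]
non_asthma = [0.6, 0.3, 0.2, 0.1]
score = auc(asthma, non_asthma)
```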
14

Walavalkar, Praniket, Ansh Dasrapuria, Meghna Sarda, and Lynette Dmello. "A Token-based Approach to Detect Fraud in Ethereum Transactions." International Journal for Research in Applied Science and Engineering Technology 12, no. 4 (April 30, 2024): 34–42. http://dx.doi.org/10.22214/ijraset.2024.59690.

Abstract:
As a consequence of the mass unemployment that was a byproduct of COVID-19, people around the world turned to investment in cryptocurrency as a means of tackling their declining financial condition. Subsequently, the prominence of Ethereum as a platform for crypto transactions also gave rise to fraudulent transactions. The need to detect these frauds exists even today. This study proposes a token-based approach to detecting fraud in Ethereum transactions incorporating the ERC20 standard, by employing machine learning techniques. After cleaning and preprocessing the dataset, the transaction data were fed to Random Forest (RF), AdaBoost, Extra Trees (ET), Gradient Boosting (GB) and Extreme Gradient Boosting (XGB) classifiers in search of the most suitable model for fraud detection. Meticulous evaluation revealed that the RF, ET and XGB classifiers yielded the highest accuracy of 95%. The proposed token-based approach hence presents a novel and efficient solution for fraud detection, with room for improvement and scalability.
15

FAUST, OLIVER, U. RAJENDRA ACHARYA, LIM CHOO MIN, and BERNHARD H. C. SPUTH. "AUTOMATIC IDENTIFICATION OF EPILEPTIC AND BACKGROUND EEG SIGNALS USING FREQUENCY DOMAIN PARAMETERS." International Journal of Neural Systems 20, no. 02 (April 2010): 159–76. http://dx.doi.org/10.1142/s0129065710002334.

Abstract:
The analysis of electroencephalograms continues to be a problem due to our limited understanding of the signal origin. This limited understanding leads to ill-defined models, which in turn make it hard to design effective evaluation methods. Despite these shortcomings, electroencephalogram analysis is a valuable tool in the evaluation of neurological disorders and of overall cerebral activity. We compared different model-based power spectral density estimation methods and different classification methods. Specifically, we used the autoregressive moving average model, as well as the Yule-Walker and Burg methods, to extract the power density spectrum from representative signal samples. Local maxima and minima were detected from these spectra, and in this paper the locations of these extrema are used as input to different classifiers. The three classifiers we used were: Gaussian mixture model, artificial neural network, and support vector machine. The classification results are documented with confusion matrices and compared with receiver operating characteristic curves. We found that Burg's method for spectrum estimation together with a support vector machine classifier yields the best classification results. This combination reaches a classification rate of 93.33%; the sensitivity is 98.33% and the specificity is 96.67%.
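Detecting the local maxima and minima of a sampled power spectrum, as described above, can be sketched by comparing each interior point with its two neighbours (toy spectrum values, not EEG data):

```python
def local_extrema(psd):
    """Indices of local maxima and minima in a sampled power spectrum."""
    maxima = [i for i in range(1, len(psd) - 1)
              if psd[i] > psd[i - 1] and psd[i] > psd[i + 1]]
    minima = [i for i in range(1, len(psd) - 1)
              if psd[i] < psd[i - 1] and psd[i] < psd[i + 1]]
    return maxima, minima

spectrum = [0.1, 0.5, 0.3, 0.2, 0.6, 0.4]   # hypothetical PSD samples
maxima, minima = local_extrema(spectrum)
```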
16

Kuntiyellannagari, Bhagyalaxmi, Bhoopalan Dwarakanath, and Panuganti VijayaPal Reddy. "Hybrid model for brain tumor detection using convolution neural networks." Indonesian Journal of Electrical Engineering and Computer Science 33, no. 3 (March 1, 2024): 1775. http://dx.doi.org/10.11591/ijeecs.v33.i3.pp1775-1781.

Abstract:
The development of abnormal cells in the brain, some of which may turn out to be cancerous, is known as a brain tumor. Magnetic resonance imaging (MRI) is the most common technique for detecting brain tumors, as information about abnormal tissue growth in the brain is visible in MRI scans. In most research papers, machine learning (ML) and deep learning (DL) algorithms are applied to detect brain tumors, and this prediction allows the radiologist to make speedy decisions. The proposed work creates a hybrid model of a convolutional neural network (CNN) and logistic regression (LR). The pre-trained Visual Geometry Group 16 (VGG16) model is used for feature extraction. To reduce complexity, we eliminated the last eight layers of VGG16. From this transformed model, the features are extracted as a vector array. These features are fed into different ML classifiers, such as support vector machine (SVM), Naïve Bayes (NB), LR, extreme gradient boosting (XGBoost), AdaBoost, and random forest, for training and testing, and the performance of the different classifiers is compared. The CNN-LR hybrid combination outperformed the remaining classifiers. The recall, precision, F1-score, and accuracy of the proposed CNN-LR model are 94%, 94%, 94%, and 91%, respectively.
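Logistic regression, the classifier chosen for the hybrid model above, can be fitted by plain gradient descent on the log-loss; the sketch below uses a hypothetical one-dimensional "CNN feature" per image rather than real VGG16 features:

```python
import math
import random

random.seed(5)

# Hypothetical 1-D feature per image; label 1 = tumour, 0 = healthy.
data = ([(random.gauss(-2, 1), 0) for _ in range(50)]
        + [(random.gauss(2, 1), 1) for _ in range(50)])

w, b, lr = 0.0, 0.0, 0.1
for _ in range(300):                       # gradient descent on mean log-loss
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))   # sigmoid prediction
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

def predict(x):
    return int(1 / (1 + math.exp(-(w * x + b))) > 0.5)
```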
17

Sakri, Sapiah, and Shakila Basheer. "Fusion Model for Classification Performance Optimization in a Highly Imbalance Breast Cancer Dataset." Electronics 12, no. 5 (February 28, 2023): 1168. http://dx.doi.org/10.3390/electronics12051168.

Abstract:
Accurate diagnosis of breast cancer using automated algorithms continues to be a challenge in the literature. Although researchers have conducted a great deal of work to address this issue, no definitive answer has yet been discovered. This challenge is aggravated further by the fact that most available datasets have imbalanced class issues, meaning that the number of cases in one class vastly outnumbers those of the other. The goals of this study were to (i) develop a reliable machine-learning-based prediction model for breast cancer based on the combination of a resampling technique and a classifier, which we called a 'fusion model'; (ii) deal with a typical high class imbalance problem, which arises because the breast cancer patients' class is significantly smaller than the healthy class; and (iii) interpret the model output to understand the decision-making mechanism. In a comparative analysis with three well-known classifiers representing classical learning, ensemble learning, and deep learning, the effectiveness of the proposed machine-learning-based approach was investigated in terms of metrics related to both generalization capability and prediction accuracy. Based on the comparative analysis, the fusion model (random oversampling dataset + extreme gradient boosting classifier) achieved the highest value, 99.9%, for accuracy, precision, recall, and F1-score. On the other hand, for ROC evaluation, the oversampling and hybrid-sampling datasets combined with extreme gradient boosting achieved 100% performance compared to the models combined with the undersampling datasets. Thus, the proposed predictive model based on the fusion strategy can optimize the performance of breast cancer diagnosis classification.
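Random oversampling, the resampling half of the winning fusion model above, duplicates minority-class samples until the classes are balanced; a sketch with hypothetical sample indices:

```python
import random

random.seed(9)

# Hypothetical indices: 0-29 = breast-cancer patients (minority),
# 30-299 = healthy cases (majority).
idx_minority = list(range(0, 30))
idx_majority = list(range(30, 300))

# Random oversampling: duplicate minority indices (with replacement) until
# both classes are the same size, then shuffle the combined index list.
extra = random.choices(idx_minority, k=len(idx_majority) - len(idx_minority))
balanced = idx_majority + idx_minority + extra
random.shuffle(balanced)
```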
18

Deng, Weiquan, Bo Ye, Jun Bao, Guoyong Huang, and Jiande Wu. "Classification and Quantitative Evaluation of Eddy Current Based on Kernel-PCA and ELM for Defects in Metal Component." Metals 9, no. 2 (February 1, 2019): 155. http://dx.doi.org/10.3390/met9020155.

Abstract:
Eddy current testing technology is widely used in the defect detection of metal components and the integrity evaluation of critical components. At present, however, the evaluation and analysis of defect signals are still mostly performed manually, so the evaluation of defects is often subjectively affected by human factors, which may lead to a lack of objectivity, accuracy, and reliability. In this paper, feature extraction of the non-linear signals is first carried out using the kernel-based principal component analysis (KPCA) algorithm. Secondly, based on the feature vectors of the defects, classification of different defects by an extreme learning machine (ELM) is studied. Compared with traditional classifiers, such as the artificial neural network (ANN) and support vector machine (SVM), ELM is more advantageous in accuracy and rapidity. Based on the accurate classification of defects, linear least-squares fitting is used to further quantitatively evaluate the defects. Finally, the experimental results have verified the effectiveness of the proposed method, which involves automatic defect classification and quantitative analysis.
APA, Harvard, Vancouver, ISO, and other styles
19

Bibi, Ruqia, Zahid Mehmood, Asmaa Munshi, Rehan Mehmood Yousaf, and Syed Sohail Ahmed. "Deep features optimization based on a transfer learning, genetic algorithm, and extreme learning machine for robust content-based image retrieval." PLOS ONE 17, no. 10 (October 3, 2022): e0274764. http://dx.doi.org/10.1371/journal.pone.0274764.

Full text
Abstract:
The recent era has witnessed exponential growth in the production of multimedia data, which prompts the exploration and expansion of certain domains that will have an overwhelming impact on human society in the near future. One of the domains explored in this article is content-based image retrieval (CBIR), in which images are mostly encoded using hand-crafted approaches that employ different descriptors and their fusions. Although these approaches have yielded outstanding results, their performance in terms of the semantic gap, computational cost, and appropriate fusion for the problem domain is still debatable. In this article, a novel CBIR method is proposed based on the transfer-learning-based visual geometry group (VGG-19) model, a genetic algorithm (GA), and an extreme learning machine (ELM) classifier. In the proposed method, instead of using hand-crafted feature extraction approaches, features are extracted automatically using a transfer-learning-based VGG-19 model to consider both local and global information of an image for robust image retrieval. As deep features are of high dimension, the proposed method reduces the computational expense by passing the extracted features through the GA, which returns a reduced set of optimal features. For image classification, an extreme learning machine classifier is incorporated, which is much simpler in terms of parameter tuning and learning time compared with other traditional classifiers. The performance of the proposed method is evaluated on five datasets, and the results highlight its superiority in terms of evaluation metrics over state-of-the-art image retrieval methods. Statistical analysis through a nonparametric Wilcoxon matched-pairs signed-rank test also confirms the significance of the improvement.
APA, Harvard, Vancouver, ISO, and other styles
20

Al-Awadi, Jhan Yahya Rbat, Hadeel K. Aljobouri, and Ali M. Hasan. "MRI Brain Scans Classification Using Extreme Learning Machine on LBP and GLCM." International Journal of Online and Biomedical Engineering (iJOE) 19, no. 02 (February 16, 2023): 134–49. http://dx.doi.org/10.3991/ijoe.v19i02.33987.

Full text
Abstract:
The primary goal of this study is to predict the presence of a brain tumor using MRI brain images. These images are first pre-processed to remove the boundary borders and undesired regions. The Gray-Level Co-Occurrence Matrix (GLCM) and Local Binary Pattern (LBP) approaches are combined to extract multiple local and global features. The best features are selected using the ANOVA statistical approach, which is based on the largest variance. Then, the selected features are applied to several state-of-the-art classifiers as well as to an Extreme Learning Machine (ELM) neural network model, where the weights are optimized via the regularization of RELM using a suitable ratio of Cross Validation (CV), for the images' classification into one of two classes, namely normal (benign) and abnormal (malignant). The proposed ELM algorithm was trained and tested with 800 images from the BRATS 2015 dataset, and the experimental results demonstrated that this approach performs better on several evaluation criteria, including accuracy, stability, and speed. It reaches 98.87% accuracy with an extremely low classification time. ELM improves classification performance by raising accuracy by more than 2% and speeds up the algorithm by a factor of 10, averaged over 20 trials.
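The variance-based ANOVA feature selection mentioned in this abstract amounts to ranking features by the one-way F statistic (between-class variance over within-class variance); a NumPy sketch on illustrative toy data, not the BRATS pipeline:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic per feature: larger values mean the
    feature's class means differ more relative to within-class spread."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_b, df_w = len(classes) - 1, len(X) - len(classes)
    return (between / df_b) / (within / df_w + 1e-12)

rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)
X = rng.normal(size=(40, 3))
X[:, 0] += 3.0 * y          # feature 0 carries the class signal
scores = anova_f_scores(X, y)
print(int(np.argmax(scores)))  # → 0
```

Keeping the top-k features by this score is the selection step; scikit-learn's `f_classif` computes the same statistic.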
APA, Harvard, Vancouver, ISO, and other styles
21

Leng, Qian, Honggang Qi, Jun Miao, Wentao Zhu, and Guiping Su. "One-Class Classification with Extreme Learning Machine." Mathematical Problems in Engineering 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/412957.

Full text
Abstract:
The one-class classification problem has been investigated thoroughly over the past decades. As one of the most effective neural network approaches for one-class classification, the autoencoder has been successfully applied in many applications. However, this classifier relies on traditional learning algorithms such as backpropagation to train the network, which is quite time-consuming. To tackle the slow learning speed of the autoencoder neural network, we propose a simple and efficient one-class classifier based on the extreme learning machine (ELM). The essence of ELM is that the hidden layer need not be tuned and the output weights can be analytically determined, which leads to much faster learning. The experimental evaluation conducted on several real-world benchmarks shows that the ELM-based one-class classifier can learn hundreds of times faster than the autoencoder and is competitive with a variety of one-class classification methods.
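The "analytically determined output weights" that make ELM fast reduce to a single least-squares solve; a NumPy sketch with an arbitrary hidden-layer size (an illustration of the general ELM idea, not code from the paper):

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, seed=0):
    """ELM training: the hidden layer (W, b) is random and never tuned;
    only the output weights beta are solved for, via a pseudoinverse,
    so there is no backpropagation loop."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # fixed random feature map
    beta = np.linalg.pinv(H) @ T    # analytic least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# XOR-style toy targets: not linearly separable in the input space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([-1., 1., 1., -1.])
W, b, beta = elm_fit(X, T)
print(np.sign(elm_predict(X, W, b, beta)))  # → [-1.  1.  1. -1.]
```

Because training is one matrix solve rather than an iterative loop, the speed advantage over backpropagation-trained autoencoders reported in the abstract follows directly.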
APA, Harvard, Vancouver, ISO, and other styles
22

R P, Prawin. "Performance Evaluation and Comparative Analysis of Several Machine Learning Classification Techniques Using a Data-driven Approach in Predicting Renal Failure." International Journal for Research in Applied Science and Engineering Technology 11, no. 6 (June 30, 2023): 3522–30. http://dx.doi.org/10.22214/ijraset.2023.54343.

Full text
Abstract:
Renal failure is characterized by progressive kidney function loss over time. It is a serious medical condition that affects millions of people worldwide and is caused by the inability of the kidneys to properly filter waste and excess fluids from the blood. Renal failure can be a consequence of chronic kidney disease, a long-term condition that causes the kidneys to gradually lose function over time. If chronic kidney disease is not adequately managed, kidney function may continue to decline, leading to renal failure; it is therefore essential to monitor and manage chronic kidney disease to prevent renal failure from developing. This research paper presents an approach for predicting renal failure using several machine-learning classification techniques. The study evaluates the performance of various classifiers, such as Decision Tree, Naive Bayes, Extreme Gradient Boosting, Logistic Regression, and Support Vector Machines, using evaluation metrics including accuracy, precision, recall, and F1-score. The proposed method can be useful for early diagnosis and treatment of renal failure, thus reducing the complications and costs associated with the disease. By comparing and evaluating the performance of these models, we aim to identify the most effective approach for predicting renal failure and provide valuable insights for clinical practice.
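The evaluation metrics named in this abstract derive directly from confusion-matrix counts; a minimal sketch with made-up counts (not values from the study):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts:
    precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 8 true positives, 2 false positives, 2 false negatives
# gives precision = recall = F1 = 0.8 (up to floating-point rounding).
print(prf1(8, 2, 2))
```

Accuracy additionally needs the true-negative count: (TP + TN) / (TP + TN + FP + FN).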
APA, Harvard, Vancouver, ISO, and other styles
23

K., Bhagyalaxmi, and B. Dwarakanath. "Hybrid model for detection of brain tumor using convolution neural networks." Computer Science and Information Technologies 5, no. 1 (March 1, 2024): 78–84. http://dx.doi.org/10.11591/csit.v5i1.p78-84.

Full text
Abstract:
The development of aberrant brain cells, some of which may turn cancerous, is known as a brain tumor. Magnetic resonance imaging (MRI) scans are the most common technique for finding brain tumors. Information about the aberrant tissue growth in the brain is discernible from the MRI scans. In numerous research papers, machine learning and deep learning algorithms are used to detect brain tumors. It takes extremely little time to forecast a brain tumor when these algorithms are applied to MRI images, and better accuracy makes it easier to treat patients; this forecast allows the radiologist to make speedy decisions. The proposed work creates a hybrid convolution neural network (CNN) model using a CNN for feature extraction and logistic regression (LR). The pre-trained model visual geometry group 16 (VGG16) is used for feature extraction. To reduce the complexity and the number of parameters to train, we eliminated the last eight layers of VGG16. From this transformed model, the features are extracted in the form of a vector array. These features are fed into different machine learning classifiers, such as support vector machine (SVM), naïve bayes (NB), LR, extreme gradient boosting (XGBoost), AdaBoost, and random forest, for training and testing. The performance of the different classifiers is compared. The CNN-LR hybrid combination outperformed the remaining classifiers. The evaluation measures of the proposed CNN-LR model, namely recall, precision, F1-score, and accuracy, are 94%, 94%, 94%, and 91%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
25

Ding, Hu, Jiaming Na, Shangjing Jiang, Jie Zhu, Kai Liu, Yingchun Fu, and Fayuan Li. "Evaluation of Three Different Machine Learning Methods for Object-Based Artificial Terrace Mapping—A Case Study of the Loess Plateau, China." Remote Sensing 13, no. 5 (March 8, 2021): 1021. http://dx.doi.org/10.3390/rs13051021.

Full text
Abstract:
Artificial terraces are of great importance for agricultural production and soil and water conservation. Automatic high-accuracy mapping of artificial terraces is the basis of monitoring and related studies. Previous research achieved artificial terrace mapping based on high-resolution digital elevation models (DEMs) or imagery. As a result of the importance of the contextual information for terrace mapping, object-based image analysis (OBIA) combined with machine learning (ML) technologies are widely used. However, the selection of an appropriate classifier is of great importance for the terrace mapping task. In this study, the performance of an integrated framework using OBIA and ML for terrace mapping was tested. A catchment, Zhifanggou, in the Loess Plateau, China, was used as the study area. First, optimized image segmentation was conducted. Then, features from the DEMs and imagery were extracted, and the correlations between the features were analyzed and ranked for classification. Finally, three different commonly-used ML classifiers, namely, extreme gradient boosting (XGBoost), random forest (RF), and k-nearest neighbor (KNN), were used for terrace mapping. The comparison with the ground truth, as delineated by field survey, indicated that random forest performed best, with a 95.60% overall accuracy (followed by 94.16% and 92.33% for XGBoost and KNN, respectively). The influence of class imbalance and feature selection is discussed. This work provides a credible framework for mapping artificial terraces.
APA, Harvard, Vancouver, ISO, and other styles
26

Yotsawat, Wirot, Pakaket Wattuya, and Anongnart Srivihok. "Improved credit scoring model using XGBoost with Bayesian hyper-parameter optimization." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5477. http://dx.doi.org/10.11591/ijece.v11i6.pp5477-5487.

Full text
Abstract:
Several credit-scoring models have been developed using ensemble classifiers in order to improve the accuracy of assessment. However, among the ensemble models, little consideration has been given to the hyper-parameter tuning of base learners, although this is crucial to constructing ensemble models. This study proposes an improved credit scoring model based on the extreme gradient boosting (XGB) classifier using Bayesian hyper-parameter optimization (XGB-BO). The model comprises two steps. First, data pre-processing is utilized to handle missing values and scale the data. Second, Bayesian hyper-parameter optimization is applied to tune the hyper-parameters of the XGB classifier and train the model. The model is evaluated on four widely used public datasets, i.e., the German, Australian, lending club, and Polish datasets. Several state-of-the-art classification algorithms are implemented for predictive comparison with the proposed method. The proposed model showed promising results, with an improvement in accuracy of 4.10%, 3.03%, and 2.76% on the German, lending club, and Australian datasets, respectively. According to the evaluation results, the proposed model outperformed commonly used techniques, e.g., decision tree, support vector machine, neural network, logistic regression, random forest, and bagging. The experimental results confirmed that the XGB-BO model is suitable for assessing the creditworthiness of applicants.
APA, Harvard, Vancouver, ISO, and other styles
27

He, Qingshan, Jianping Yang, Hongju Chen, Jun Liu, Qin Ji, Yanxia Wang, and Fan Tang. "Evaluation of Extreme Precipitation Based on Three Long-Term Gridded Products Over the Qinghai-Tibet Plateau." Remote Sensing 13, no. 15 (July 30, 2021): 3010. http://dx.doi.org/10.3390/rs13153010.

Full text
Abstract:
Accurate estimates of extreme precipitation events play an important role in climate change studies and natural disaster risk assessments. This study aimed to evaluate the capability of the China Meteorological Forcing Dataset (CMFD), Asian Precipitation-Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE), and Climate Hazards Group Infrared Precipitation with Station data (CHIRPS) to detect the spatiotemporal patterns of extreme precipitation events over the Qinghai-Tibet Plateau (QTP) in China, from 1981 to 2014. Compared to the gauge-based precipitation dataset obtained from 101 stations across the region, 12 indices of extreme precipitation were employed and classified into three categories: fixed threshold, station-related threshold, and non-threshold indices. Correlation coefficient (CC), root mean square error (RMSE), mean absolute error (MAE), and Kling–Gupta efficiency (KGE) were used to assess the accuracy of extreme precipitation estimation; indices including probability of detection (POD), false alarm ratio (FAR), and critical success index (CSI) were adopted to evaluate the gridded products’ ability to detect rain occurrences. The results indicated that all three gridded datasets showed acceptable representation of the extreme precipitation events over the QTP. CMFD and APHRODITE tended to slightly underestimate extreme precipitation indices (except for consecutive wet days), whereas CHIRPS overestimated most indices. Overall, CMFD outperformed the other datasets for capturing the spatiotemporal pattern of most extreme precipitation indices over the QTP. Although CHIRPS had lower levels of accuracy, the generated data had a higher spatial resolution, and with correction, it may be considered for small-scale studies in future research.
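The evaluation scores used in this study have compact closed forms: KGE combines correlation, variability ratio, and bias ratio, while POD, FAR, and CSI come from a 2x2 contingency table of rain occurrence. A minimal sketch with illustrative counts (not values from the study):

```python
import numpy as np

def kge(sim, obs):
    """Kling–Gupta efficiency: correlation r, variability ratio alpha,
    and bias ratio beta, combined so that KGE = 1 is a perfect match."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def detection_scores(hits, misses, false_alarms):
    """Rain-occurrence detection scores: POD = detected fraction of
    observed events, FAR = wrong fraction of detections, CSI = hits
    over all event cases."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

obs = np.array([1.0, 2.0, 4.0, 3.0, 5.0])
print(round(kge(obs, obs), 6))                      # perfect simulation
pod, far, csi = detection_scores(hits=40, misses=10, false_alarms=20)
print(round(pod, 3), round(far, 3), round(csi, 3))  # → 0.8 0.333 0.571
```

All four scores are evaluated per grid cell or station and then summarized spatially in studies like this one.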
APA, Harvard, Vancouver, ISO, and other styles
28

Photiadou, C. S., A. H. Weerts, and B. J. J. M. van den Hurk. "Evaluation of two precipitation data sets for the Rhine River using streamflow simulations." Hydrology and Earth System Sciences 15, no. 11 (November 8, 2011): 3355–66. http://dx.doi.org/10.5194/hess-15-3355-2011.

Full text
Abstract:
This paper presents an extended version of a widely used precipitation data set and evaluates it, along with a recently released precipitation data set, using streamflow simulations. First, the existing precipitation data set issued by the Commission for the Hydrology of the Rhine basin (CHR), originally covering the period 1961–1995, was extended until 2008 using a number of additional precipitation data sets. Next, the extended version of the CHR (hereafter CHR08), together with E-OBS Version 4 (the ECA & D gridded data set), was evaluated for performance in the Rhine basin for extreme events. Finally, the two aforementioned precipitation data sets and a meteorological reanalysis data set were used to force a hydrological model, evaluating the influence of different precipitation forcings on the annual mean and extreme discharges compared with observed discharges for the period from 1990 until 2008. CHR08 showed good agreement in terms of the mean annual cycle, extreme discharge (both high and low flows), and spatial distribution of correlations with observed discharge. E-OBS performed well with respect to extreme discharge; however, its representation of the mean annual cycle was rather poor in winter and remarkably good in summer. CHR08 also outperformed E-OBS in terms of temporal correlations in most of the analyzed sub-catchment means. The length extension of the CHR and the even longer record of E-OBS permit the assessment of extreme discharge and precipitation values with lower uncertainty for longer return periods. This assessment classifies both of the presented precipitation data sets as possible reference data sets for future studies in hydrological applications.
APA, Harvard, Vancouver, ISO, and other styles
29

Nida, Nudrat, Muhammad Haroon Yousaf, Aun Irtaza, and Sergio A. Velastin. "Instructor Activity Recognition through Deep Spatiotemporal Features and Feedforward Extreme Learning Machines." Mathematical Problems in Engineering 2019 (April 30, 2019): 1–13. http://dx.doi.org/10.1155/2019/2474865.

Full text
Abstract:
Human action recognition has the potential to predict the activities of an instructor within the lecture room. Evaluation of lecture delivery can help teachers analyze shortcomings and plan lectures more effectively. However, manual or peer evaluation is time-consuming and tedious, and it is sometimes difficult to remember all the details of the lecture. Automating lecture delivery evaluation can therefore significantly improve teaching style. In this paper, we propose a feedforward learning model for instructor activity recognition in the lecture room. The proposed scheme represents a video sequence in the form of a single frame to capture the motion profile of the instructor by observing the spatiotemporal relation within the video frames. First, we segment the instructor silhouettes from input videos using graph-cut segmentation and generate a motion profile. These motion profiles are centered by obtaining the largest connected components and are normalized. They are then represented in the form of feature maps by a deep convolutional neural network. Next, an extreme learning machine (ELM) classifier is trained over the obtained feature representations to recognize eight different activities of the instructor within the classroom. For the evaluation of the proposed method, we created an instructor activity video (IAVID-1) dataset and compared our method against different state-of-the-art activity recognition methods. Furthermore, two standard datasets, MuHAVI and IXMAS, were also considered for the evaluation of the proposed scheme.
APA, Harvard, Vancouver, ISO, and other styles
30

Alshammari, Khaznah, Shah Muhammad Hamdi, and Soukaina Filali Boubrahimi. "Identifying Flare-indicative Photospheric Magnetic Field Parameters from Multivariate Time-series Data of Solar Active Regions." Astrophysical Journal Supplement Series 271, no. 2 (March 19, 2024): 39. http://dx.doi.org/10.3847/1538-4365/ad21e4.

Full text
Abstract:
Photospheric magnetic field parameters are frequently used to analyze and predict solar events. Observation of these parameters over time, i.e., representing solar events by multivariate time-series (MVTS) data, can determine relationships between magnetic field states in active regions and extreme solar events, e.g., solar flares. We can improve our understanding of these events by selecting the most relevant parameters that give the highest predictive performance. In this study, we propose a two-step incremental feature selection method for MVTS data using a deep-learning model based on long short-term memory (LSTM) networks. First, each MVTS feature (magnetic field parameter) is evaluated individually by a univariate sequence classifier utilizing an LSTM network. Then, the top performing features are combined to produce input for an LSTM-based multivariate sequence classifier. Finally, we tested the discrimination ability of the selected features by training downstream classifiers, e.g., Minimally Random Convolutional Kernel Transform and support vector machine. We performed our experiments using a benchmark data set for flare prediction known as Space Weather Analytics for Solar Flares. We compared our proposed method with three other baseline feature selection methods and demonstrated that our method selects more discriminatory features compared to other methods. Due to the imbalanced nature of the data, primarily caused by the rarity of minority flare classes (e.g., the X and M classes), we used the true skill statistic as the evaluation metric. Finally, we reported the set of photospheric magnetic field parameters that give the highest discrimination performance in predicting flare classes.
APA, Harvard, Vancouver, ISO, and other styles
31

Pinki, Farhana Tazmim, Md Abdul Awal, Khondoker Mirazul Mumenin, Md Shahadat Hossain, Jabed Al Faysal, Rajib Rana, Latifah Almuqren, Amel Ksibi, and Md Abdus Samad. "HGSOXGB: Hunger-Games-Search-Optimization-Based Framework to Predict the Need for ICU Admission for COVID-19 Patients Using eXtreme Gradient Boosting." Mathematics 11, no. 18 (September 18, 2023): 3960. http://dx.doi.org/10.3390/math11183960.

Full text
Abstract:
Millions of people died in the COVID-19 pandemic, which pressured hospitals and healthcare workers into keeping up with the speed and intensity of the outbreak, resulting in a scarcity of ICU beds for COVID-19 patients. Therefore, researchers have developed machine learning (ML) algorithms to assist in identifying patients at increased risk of requiring an ICU bed. However, many of these studies used state-of-the-art ML algorithms with arbitrary or default hyperparameters to control the learning process. Hyperparameter optimization is essential in enhancing classification effectiveness and ensuring the optimal use of ML algorithms. Therefore, this study utilized an improved Hunger Games Search Optimization (HGSO) algorithm coupled with a robust extreme gradient boosting (XGB) classifier to predict a COVID-19 patient’s need for ICU transfer. To further mitigate the random initialization inherent in HGSO and facilitate an efficient convergence toward optimal solutions, the Metropolis–Hastings (MH) method is proposed for integration with HGSO. In addition, population diversity was reintroduced to effectively escape local optima. To evaluate the efficacy of the MH-based HGSO algorithm, the proposed method was compared with the original HGSO algorithm using the Congress on Evolutionary Computation benchmark functions. The analysis revealed that the proposed algorithm converges better than the original method, with statistical significance. Consequently, the proposed algorithm optimizes the XGB hyperparameters to predict the need for ICU transfer for COVID-19 patients. Various evaluation metrics, including the receiver operating characteristic (ROC) curve, precision–recall curve, bootstrap ROC, and recall vs. decision boundary, were used to estimate the effectiveness of the proposed HGSOXGB model. The model achieves the highest accuracy of 97.39% and an area under the ROC curve of 99.10% compared with other classifiers. Additionally, the features that most significantly affect the prediction of ICU transfer need with XGB were identified.
APA, Harvard, Vancouver, ISO, and other styles
32

Ghorbani, Aida, Amir Daneshvar, Ladan Riazi, and Reza Radfar. "Friend Recommender System for Social Networks Based on Stacking Technique and Evolutionary Algorithm." Complexity 2022 (August 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/5864545.

Full text
Abstract:
In recent years, social networks have made significant progress, and the number of people who use them to communicate is increasing day by day. The vast amount of information available on social networks has led to the importance of using friend recommender systems to discover knowledge about future communications. It is challenging to choose the best machine learning approach for the recommender system problem, since there are several strategies with various benefits and drawbacks. In light of this, a solution based on the stacking approach is proposed in this study to build a friend recommender system for social networks. In addition, the large amount of available information and the inefficiency of some features caused a decrease in system performance; to solve this problem, a particle swarm optimization (PSO) algorithm was used in our proposed method to select the most informative features. To learn the model in the objective function of the particle swarm algorithm, a hybrid system based on stacking is proposed, in which two random forests and Extreme Gradient Boosting (XGBoost) are used as the base classifiers and their outputs are fed into a logistic regression algorithm applied as the final stage. The proposed approach effectively addresses the problem by combining the advantages of the applied strategies. The results of the implementation and evaluation of the proposed system show the appropriate efficiency of this method compared with other studied techniques.
APA, Harvard, Vancouver, ISO, and other styles
33

Hao, Yong Xing, Ya Mei Han, Hai Tao Cheng, and Hua Ying Guo. "The Stability Evaluation of Radial Ring Rolling." Advanced Materials Research 482-484 (February 2012): 1229–32. http://dx.doi.org/10.4028/www.scientific.net/amr.482-484.1229.

Full text
Abstract:
In the Non-stability radial ring rolling process, the ring may collide with the guide roll, making the ring become a polygon. In extreme cases the rolling process could be terminated and the ring be scrapped. The stability evaluation of radial ring rolling has great theory and practice significance. In this article, based on the kinematics theory, a classified research on the dynamic phenomenon of radial ring rolling was done, and a stability evaluation method during the radial ring rolling was put forward. The evaluation provided a good base for the future ring rolling dynamic research.
APA, Harvard, Vancouver, ISO, and other styles
34

Qin, Chao, Yunfeng Zhang, Fangxun Bao, Caiming Zhang, Peide Liu, and Peipei Liu. "XGBoost Optimized by Adaptive Particle Swarm Optimization for Credit Scoring." Mathematical Problems in Engineering 2021 (March 23, 2021): 1–18. http://dx.doi.org/10.1155/2021/6655510.

Full text
Abstract:
Personal credit scoring is a challenging issue. In recent years, research has shown that machine learning achieves satisfactory performance in credit scoring. Because of their capacity for feature combination and feature selection, decision trees can match credit data, which have high dimensionality and complex correlations. However, decision trees tend to overfit. eXtreme Gradient Boosting is an advanced gradient-boosted tree method that overcomes this shortcoming by integrating tree models. The structure of the model is determined by hyperparameters; because manual tuning is time-consuming and laborious, an optimization method is employed for tuning. As particle swarm optimization describes the particle state and its motion law as continuous real numbers, the hyperparameters applicable to eXtreme Gradient Boosting can find their optimal values in a continuous search space. However, classical particle swarm optimization tends to fall into local optima. To solve this problem, this paper proposes an eXtreme Gradient Boosting credit scoring model based on adaptive particle swarm optimization. A swarm split, based on the clustering idea and two kinds of learning strategies, is employed to guide the particles and improve the diversity of the subswarms, in order to prevent the algorithm from falling into a local optimum. In the experiment, several traditional machine learning algorithms and popular ensemble learning classifiers, as well as four hyperparameter optimization methods (grid search, random search, tree-structured Parzen estimator, and particle swarm optimization), are considered for comparison. Experiments were performed on four credit datasets and seven KEEL benchmark datasets using five popular evaluation measures: accuracy, error rate (type I error and type II error), Brier score, and F1 score. Results demonstrate that the proposed model outperforms the other models on average. Moreover, adaptive particle swarm optimization performs better than the other hyperparameter optimization strategies.
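The classical global-best PSO loop that the adaptive variant builds on can be sketched in plain Python. Here a 1-D toy objective stands in for a continuous XGBoost hyper-parameter, and the inertia and acceleration constants are common textbook defaults, not values from the paper:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=60, seed=0):
    """Global-best PSO over a 1-D interval: each particle is pulled
    toward its personal best and the swarm's global best."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest = list(x)
    pbest_f = [f(xi) for xi in x]
    g = pbest[min(range(n_particles), key=pbest_f.__getitem__)]
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration constants
    for _ in range(iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * rng.random() * (pbest[i] - x[i])
                    + c2 * rng.random() * (g - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)  # clamp to the interval
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i], fx
                if fx < f(g):
                    g = x[i]
    return g

# Toy objective with its minimum at x = 3, standing in for a
# cross-validation loss over one hyper-parameter.
best = pso_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
print(round(best, 3))
```

The adaptive variant in the cited paper additionally splits the swarm into clustered subswarms with different learning strategies to escape local optima; that machinery is omitted here.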
APA, Harvard, Vancouver, ISO, and other styles
35

Shahi, T. B., C. Sitaula, and N. Paudel. "A Hybrid Feature Extraction Method for Nepali COVID-19-Related Tweets Classification." Computational Intelligence and Neuroscience 2022 (March 9, 2022): 1–11. http://dx.doi.org/10.1155/2022/5681574.

Full text
Abstract:
COVID-19 is one of the deadliest viruses, having killed millions of people around the world to this date. People's deaths are linked not only to the infection itself but also to the mental states and sentiments triggered by fear of the virus. People's sentiments, which are predominantly available in the form of posts/tweets on social media, can be interpreted using two kinds of information: syntactical and semantic. Herein, we propose to analyze people's sentiment using both kinds of information (syntactical and semantic) on the COVID-19-related Twitter dataset available in the Nepali language. For this, we first use two widely used text representation methods, TF-IDF and FastText, and then combine them to obtain hybrid features that capture highly discriminating information. Second, we implement nine widely used machine learning classifiers (Logistic Regression, Support Vector Machine, Naive Bayes, K-Nearest Neighbor, Decision Trees, Random Forest, Extreme Tree classifier, AdaBoost, and Multilayer Perceptron) based on the three feature representation methods: TF-IDF, FastText, and hybrid. To evaluate our methods, we use a publicly available Nepali COVID-19 tweets dataset, NepCov19Tweets, which consists of Nepali tweets categorized into three classes (Positive, Negative, and Neutral). The evaluation results on NepCov19Tweets show that the hybrid feature extraction method not only outperforms the other two individual feature extraction methods across all nine machine learning algorithms but also provides excellent performance when compared with state-of-the-art methods.
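The hybrid feature construction described above (sparse, syntactic TF-IDF vectors concatenated with dense embeddings) can be sketched on a toy English corpus; the 4-dimensional "embedding" below is a stand-in for a real FastText vector, and none of this is the paper's code:

```python
import math

def tfidf_vectors(docs):
    """Plain TF-IDF over a toy corpus (smoothed idf)."""
    vocab = sorted({w for d in docs for w in d.split()})
    n = len(docs)
    df = {w: sum(w in d.split() for d in docs) for w in vocab}
    idf = {w: math.log((1 + n) / (1 + df[w])) + 1.0 for w in vocab}
    vecs = []
    for d in docs:
        words = d.split()
        vecs.append([words.count(w) / len(words) * idf[w] for w in vocab])
    return vocab, vecs

docs = ["corona virus fear", "virus vaccine hope", "hope and fear"]
vocab, tfidf = tfidf_vectors(docs)

# Hybrid feature: concatenate the sparse TF-IDF vector with a dense
# embedding (4 dims here; FastText typically gives 300).
embedding = [0.1, -0.2, 0.05, 0.3]
hybrid = tfidf[0] + embedding
print(len(vocab), len(hybrid))  # → 6 10
```

The concatenated vector is then what each of the nine classifiers is trained on.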
APA, Harvard, Vancouver, ISO, and other styles
36

Pyko, N. S., E. D. Orandarenko, and M. I. Bogachev. "Statistical Analysis of Local Extrema in Rough Sea Surfaces Based on Computer Simulation." Journal of the Russian Universities. Radioelectronics 26, no. 5 (November 28, 2023): 99–111. http://dx.doi.org/10.32603/1993-8985-2023-26-5-99-111.

Full text
Abstract:
Introduction. Generalized extreme value (GEV) distributions represent a universal description of the limiting distribution of the normalized local maxima statistics for independent and identically distributed data series. Extreme value distributions are commonly classified into three different types representing different functional forms and thus varying in shape, also known as types I, II, and III. Thus, attribution of some observational data series to a particular type of its local maxima distribution, as well as fitting of the distribution parameters, provides certain information about the laws governing the underlying natural or technogenic process. Radar-based remote sensing techniques represent a ubiquitous tool for analyzing large patterns of the sea surface and determining the parameters of the waves. In turn, understanding the laws governing the extreme values in the rough sea surface obtained from their radar images followed by evaluation of their distribution parameters, depending on the wind speed and direction, as well as the presence of surface currents and swells, can be useful for predicting wave height. Aim. Analysis of the functional forms governing the local extreme value distributions in a rough sea surface for the given wind and swell parameters based on computer simulations. Materials and methods. For the rough sea surface simulated by an additive harmonic synthesis procedure, the local extreme value distribution was fitted using the least-mean-squares technique. The fitted parameters were then used for their classification according to the three predetermined types. Results. Computer simulations of a rough sea surface with combined wind and swell waves were performed. It is shown that the distribution of local maxima in the absence of swell waves could be well approximated by the Weibull (type III GEV) distribution, with the parameters explicitly depending on the wind speed.
At the same time, no significant dependence on the sea depth was observed. On the contrary, in the presence of additional swell waves, the distribution of local extrema could be rather attributed to the Fréchet (type II GEV) distribution, with the parameters additionally depending on the angle between the wind and swell waves. Conclusion. The laws governing the distributions of local wave extrema in rough seas are in a good agreement with the theoretical GEV approximations, with the distribution parameters being deductible from the key features of the waves. This indicates the predictability of wave height extrema from sea surface measurements, which can be performed based on remote radar observations.
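The classification step described above (attributing fitted extrema to type I, II, or III) can be sketched with SciPy. The simulated data here are generic stand-ins, not sea-surface extrema; note that SciPy's `genextreme` shape parameter `c` equals the negative of the usual GEV shape, so `c > 0` corresponds to Weibull (type III) and `c < 0` to Fréchet (type II).

```python
# Hedged sketch: fit a GEV distribution to simulated block maxima and
# attribute it to type I/II/III by the sign of the fitted shape parameter.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Block maxima of exponential samples converge to the Gumbel (type I) limit;
# they serve only as stand-in "local extrema" data here.
maxima = rng.exponential(size=(2000, 50)).max(axis=1)

c, loc, scale = genextreme.fit(maxima)  # c = -xi in the usual GEV convention
if abs(c) < 0.05:
    gev_type = "Gumbel (type I)"
elif c < 0:
    gev_type = "Frechet (type II)"
else:
    gev_type = "Weibull (type III)"
```

In the paper's setting, the fitted shape and scale would additionally be tracked against wind speed and the wind/swell angle.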
APA, Harvard, Vancouver, ISO, and other styles
37

Molla, Shourav, F. M. Javed Mehedi Shamrat, Raisul Islam Rafi, Umme Umaima, Md Ariful Islam Arif, Shahed Hossain, and Imran Mahmud. "A predictive analysis framework of heart disease using machine learning approaches." Bulletin of Electrical Engineering and Informatics 11, no. 5 (October 1, 2022): 2705–16. http://dx.doi.org/10.11591/eei.v11i5.3942.

Full text
Abstract:
Heart disease is among the leading causes of death globally. Thus, early identification and treatment are indispensable to prevent the disease. In this work, we propose a framework based on machine learning algorithms to tackle such problems through the identification of risk variables associated with this disease. To ensure the success of our proposed model, influential data pre-processing and data transformation strategies are used to generate accurate data for the training model, which utilizes the five most popular datasets (Hungarian, Statlog, Switzerland, Long Beach VA, and Cleveland) from UCI. The univariate feature selection technique is applied to identify essential features, and during the training phase, classifiers, namely extreme gradient boosting (XGBoost), support vector machine (SVM), random forest (RF), gradient boosting (GB), and decision tree (DT), are deployed. Subsequently, various performance evaluations are measured to demonstrate accurate predictions using the introduced algorithms. The results with univariate selection indicated that the DT classifier achieves a comparatively higher accuracy of around 97.75% than the others. Thus, a machine learning approach is recognized that can predict heart disease with high accuracy. Furthermore, the 10 chosen attributes are used to analyze the explainability of the model's outcomes, indicating which attributes are more significant in the model's outcome.
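The pipeline described above (univariate feature selection followed by one of the listed classifiers) can be sketched as below. A synthetic binary-outcome dataset stands in for the combined UCI heart-disease data, and the decision tree stands in for the best-performing model.

```python
# Hedged sketch: univariate (ANOVA F-test) feature selection keeps the
# 10 most informative attributes, then a decision tree is trained on them.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the merged heart-disease datasets.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           random_state=42)
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)  # keep 10 attributes

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=42)
acc = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr).score(X_te, y_te)
```

Swapping `DecisionTreeClassifier` for XGBoost, SVM, RF, or GB reproduces the comparison structure of the study.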
APA, Harvard, Vancouver, ISO, and other styles
38

Jayaraman, Senthil Kumar, Venkataraman Venkatachalam, Marwa M. Eid, Kannan Krithivasan, Sekar Kidambi Raju, Doaa Sami Khafaga, Faten Khalid Karim, and Ayman Em Ahmed. "Enhancing Cyclone Intensity Prediction for Smart Cities Using a Deep-Learning Approach for Accurate Prediction." Atmosphere 14, no. 10 (October 16, 2023): 1567. http://dx.doi.org/10.3390/atmos14101567.

Full text
Abstract:
Accurate cyclone intensity prediction is crucial for smart cities to effectively prepare and mitigate the potential devastation caused by these extreme weather events. Traditional meteorological models often face challenges in accurately forecasting cyclone intensity due to cyclonic systems’ complex and dynamic nature. Predicting the intensity of cyclones is a challenging task in meteorological research, as it requires expertise in extracting spatio-temporal features. To address this challenge, a new technique, called linear support vector regressive gradient descent Jaccardized deep multilayer perceptive classifier (LEGEMP), has been proposed to improve the accuracy of cyclone intensity prediction. This technique utilizes a dataset that contains various attributes. It employs the Herfindahl correlative linear support vector regression feature selection to identify the most important characteristics for enhancing cyclone intensity forecasting accuracy. The selected features are then used in conjunction with the Nesterov gradient descent jeopardized deep multilayer perceptive classifier to predict the intensity classes of cyclones, including depression, deep depression, cyclone, severe cyclone, very severe cyclone, and extremely severe cyclone. Experimental results have demonstrated that LEGEMP outperforms conventional methods in terms of cyclone intensity prediction accuracy, requiring minimum time, error rate, and memory consumption. By leveraging advanced techniques and feature selection, LEGEMP provides more reliable and precise predictions for cyclone intensity, enabling better preparedness and response strategies to mitigate the impact of these destructive storms. The LEGEMP technique offers an improved approach to cyclone intensity prediction, leveraging advanced classifiers and feature selection methods to enhance accuracy and reduce error rates. 
We demonstrate the effectiveness of our approach through rigorous evaluation and comparison with conventional prediction methods, showcasing significant improvements in prediction accuracy. Integrating our enhanced prediction model into smart city disaster management systems can substantially enhance preparedness and response strategies, ultimately contributing to the safety and resilience of communities in cyclone-prone regions.
APA, Harvard, Vancouver, ISO, and other styles
39

Kek, Tomaž, Primož Potočnik, Martin Misson, Zoran Bergant, Mario Sorgente, Edvard Govekar, and Roman Šturm. "Characterization of Biocomposites and Glass Fiber Epoxy Composites Based on Acoustic Emission Signals, Deep Feature Extraction, and Machine Learning." Sensors 22, no. 18 (September 13, 2022): 6886. http://dx.doi.org/10.3390/s22186886.

Full text
Abstract:
This study presents the results of acoustic emission (AE) measurements and characterization in the loading of biocomposites at room and low temperatures that can be observed in the aviation industry. The fiber optic sensors (FOS) that can outperform electrical sensors in challenging operational environments were used. Standard features were extracted from AE measurements, and a convolutional autoencoder (CAE) was applied to extract deep features from AE signals. Different machine learning methods including discriminant analysis (DA), neural networks (NN), and extreme learning machines (ELM) were used for the construction of classifiers. The analysis is focused on the classification of extracted AE features to classify the source material, to evaluate the predictive importance of extracted features, and to evaluate the ability of used FOS for the evaluation of material behavior under challenging low-temperature environments. The results show the robustness of different CAE configurations for deep feature extraction. The combination of classic and deep features always significantly improves classification accuracy. The best classification accuracy (80.9%) was achieved with a neural network model and generally, more complex nonlinear models (NN, ELM) outperform simple models (DA). In all the considered models, the selected combined features always contain both classic and deep features.
APA, Harvard, Vancouver, ISO, and other styles
40

Korvink, Michael, John Martin, and Michael Long. "Real-Time Identification of Patients Included in the CMS Bundled Payment Care Improvement (BPCI) Program." Infection Control & Hospital Epidemiology 41, S1 (October 2020): s367—s368. http://dx.doi.org/10.1017/ice.2020.993.

Full text
Abstract:
Background: The Bundled Payment Care Improvement Program is a CMS initiative designed to encourage greater collaboration across settings of care, especially as it relates to an initial set of targeted clinical episodes, which include sepsis and pneumonia. As with many CMS incentive programs, performance evaluation is retrospective in nature, resulting in after-the-fact changes in operational processes to improve both efficiency and quality. Although retrospective performance evaluation is informative, care providers would ideally identify a patient's potential clinical cohort during the index stay and implement care management procedures as necessary to prevent or reduce the severity of the condition. The primary challenges for real-time identification of a patient's clinical cohort are that CMS-targeted cohorts are based on either MS-DRG (a grouping of ICD-10 codes) or HCPCS coding, both of which occur after discharge by clinical abstractors, and that many informative data elements in the EHR lack standardization, so no simple and reliable heuristic rules can be employed to meaningfully identify those cohorts without human review. Objective: To share the results of an ensemble statistical model to predict patient risks of sepsis and pneumonia during their hospital (i.e., index) stay. Methods: The predictive model uses a combination of Bernoulli Naïve Bayes natural language processing (NLP) classifiers, to reduce text dimensionality into a single probability value, and an eXtreme Gradient Boosting (XGBoost) algorithm as a meta-model to collectively evaluate both standardized clinical elements and the NLP-based text probabilities. Results: Bernoulli Naïve Bayes classifiers have proven to perform well on short text strings and allow highly explanatory unstructured or semistructured text fields (e.g., reason for visit, culture results) to be used in both a comparative and a generalizable way within the larger XGBoost model.
Conclusions: The choice of XGBoost as the meta-model has the benefits of mitigating concerns of nonlinearity among clinical features and reducing the potential for overfitting, while allowing missing values to exist within the data. Both the Bayesian classifier and the meta-model were trained using a patient-level integrated dataset extracted from a patient-billing and EHR data warehouse maintained by Premier. The dataset, joined by patient admission date, medical record number, date of birth, and hospital entity code, allows both the coded clinical cohort (derived from the MS-DRG) and the explanatory features in the EHR to exist within a single patient encounter record. The resulting model produced F1 performance scores of 0.65 for the sepsis population and 0.61 for the pneumonia population. Funding: None. Disclosures: None.
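The stacking design described in this entry can be sketched as below. This is an illustrative approximation on synthetic data: `GradientBoostingClassifier` stands in for XGBoost, and the clinical notes and vitals are invented.

```python
# Hedged sketch: a Bernoulli naive Bayes text classifier reduces a free-text
# field to one probability, which is appended to structured features and fed
# to a boosting meta-model (XGBoost in the original; sklearn's GB here).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB

notes = ["fever cough sepsis suspected", "routine visit", "pneumonia on x-ray",
         "no acute findings", "septic shock workup", "well child check"]
y = np.array([1, 0, 1, 0, 1, 0])
vitals = np.random.default_rng(1).normal(size=(6, 3))  # structured stand-ins

# Stage 1: NLP classifier collapses the text field to a single probability.
X_text = CountVectorizer(binary=True).fit_transform(notes)
text_prob = BernoulliNB().fit(X_text, y).predict_proba(X_text)[:, 1]

# Stage 2: meta-model sees structured features plus the text probability.
X_meta = np.column_stack([vitals, text_prob])
meta = GradientBoostingClassifier(random_state=0).fit(X_meta, y)
```

In practice the two stages would be fit on disjoint folds to avoid leaking the stage-1 training labels into the meta-model.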
APA, Harvard, Vancouver, ISO, and other styles
41

Sampath, Akila, Uma S. Bhatt, Peter A. Bieniek, Robert Ziel, Alison York, Heidi Strader, Sharon Alden, et al. "Evaluation of Seasonal Forecasts for the Fire Season in Interior Alaska." Weather and Forecasting 36, no. 2 (April 2021): 601–13. http://dx.doi.org/10.1175/waf-d-19-0225.1.

Full text
Abstract:
In this study, seasonal forecasts from the National Centers for Environmental Prediction (NCEP) Climate Forecast System, version 2 (CFSv2), are compared with station observations to assess their usefulness in producing accurate buildup index (BUI) forecasts for the fire season in Interior Alaska. These comparisons indicate that the CFSv2 June–July–August (JJA) climatology (1994–2017) produces negatively biased BUI forecasts because of negative temperature and positive precipitation biases. With quantile mapping (QM) correction, the temperature and precipitation forecasts better match the observations. The long-term JJA mean BUI improves from 12 to 42 when computed using the QM-corrected forecasts. Further postprocessing of the QM-corrected BUI forecasts using the quartile classification method shows anomalously high values for the 2004 fire season, which was the worst on record in terms of the area burned by wildfires. These results suggest that the QM-corrected CFSv2 forecasts can be used to predict extreme fire events. An assessment of the classified BUI ensemble members at the subseasonal scale shows that persistently occurring BUI forecasts exceeding 150 in the cumulative drought season can be used as an indicator that extreme fire events will occur during the upcoming season. This study demonstrates the ability of QM-corrected CFSv2 forecasts to predict the potential fire season in advance. This information could, therefore, assist fire managers in resource allocation and disaster response preparedness.
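The quantile-mapping correction mentioned above can be sketched with a minimal empirical implementation. The arrays here are synthetic stand-ins for CFSv2 forecasts and station observations, not the study's data.

```python
# Hedged sketch of empirical quantile mapping: each forecast value is mapped
# to the observed value at the same empirical quantile, removing systematic
# bias in the mean (and, more generally, in the whole distribution).
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, size=1000)    # station "observations"
fcst = rng.normal(12.0, 2.0, size=1000)   # cold-biased "forecasts"

def quantile_map(x, model_clim, obs_clim):
    """Map value x through the model climatology's CDF into the observed one."""
    q = (model_clim < x).mean()           # empirical quantile of x in the model
    return np.quantile(obs_clim, q)       # observed value at that quantile

corrected = np.array([quantile_map(v, fcst, obs) for v in fcst[:100]])
bias_before = abs(fcst[:100].mean() - obs.mean())
bias_after = abs(corrected.mean() - obs.mean())
```

The corrected forecasts would then feed the BUI computation in place of the raw model output.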
APA, Harvard, Vancouver, ISO, and other styles
42

Alitalesi, Atefe, Hamid Jazayeriy, and Javad Kazemitabar. "Wi-Fi fingerprinting-based floor detection using adaptive scaling and weighted autoencoder extreme learning machine." Computer Science and Information Technologies 3, no. 2 (July 1, 2022): 104–15. http://dx.doi.org/10.11591/csit.v3i2.p104-115.

Full text
Abstract:
In practical applications, accurate floor determination in multi-building/floor environments is particularly useful and plays an increasingly crucial role in the performance of location-based services. Accurate and robust building and floor detection can reduce the location search space and improve positioning and wayfinding accuracy. As an efficient solution, this paper proposes a floor identification method that exploits statistical properties of wireless access point propagated signals to exponentiate the received signal strength (RSS) in the radio map. Then, main feature extraction and dimensionality reduction are implemented using a single-layer extreme learning machine-weighted autoencoder (ELM-WAE). Finally, an ELM-based classifier is trained over the new feature space to determine the floor level. For the efficiency evaluation of our proposed model, we utilized three different datasets captured in real scenarios. The evaluation results show that the proposed model achieves state-of-the-art performance and improves the accuracy of floor detection compared with multiple recent techniques. In this way, the floor level can be identified with 97.30%, 95.32%, and 96.39% accuracy on the UJIIndoorLoc, Tampere, and UTSIndoorLoc datasets, respectively.
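The extreme learning machine at the core of this approach can be sketched in a few lines. This is a generic ELM classifier on synthetic data; the paper's autoencoder-based feature weighting and RSS preprocessing are omitted.

```python
# Hedged ELM sketch: a fixed random hidden layer (never trained) followed by
# a closed-form least-squares output layer, which is what makes ELM fast.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
Y = np.eye(2)[y]                                  # one-hot targets

rng = np.random.default_rng(0)
W = rng.normal(size=(20, 100))                    # random input weights (fixed)
b = rng.normal(size=100)                          # random biases (fixed)
H = np.tanh(X @ W + b)                            # hidden-layer activations

beta, *_ = np.linalg.lstsq(H, Y, rcond=None)      # closed-form output weights
pred = (H @ beta).argmax(axis=1)
train_acc = (pred == y).mean()
```

In the paper, `X` would be the autoencoder-compressed RSS fingerprints and the output classes would be floor labels.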
APA, Harvard, Vancouver, ISO, and other styles
43

Atefe Alitaleshi, Hamid Jazayeriy, and Javad Kazemitabar. "Wi-Fi fingerprinting-based floor detection using adaptive scaling and weighted autoencoder extreme learning machine." Computer Science and Information Technologies 3, no. 2 (July 1, 2022): 104–15. http://dx.doi.org/10.11591/csit.v3i2.pp104-115.

Full text
Abstract:
In practical applications, accurate floor determination in multi-building/floor environments is particularly useful and plays an increasingly crucial role in the performance of location-based services. Accurate and robust building and floor detection can reduce the location search space and improve positioning and wayfinding accuracy. As an efficient solution, this paper proposes a floor identification method that exploits statistical properties of wireless access point propagated signals to exponentiate the received signal strength (RSS) in the radio map. Then, main feature extraction and dimensionality reduction are implemented using a single-layer extreme learning machine-weighted autoencoder (ELM-WAE). Finally, an ELM-based classifier is trained over the new feature space to determine the floor level. For the efficiency evaluation of our proposed model, we utilized three different datasets captured in real scenarios. The evaluation results show that the proposed model achieves state-of-the-art performance and improves the accuracy of floor detection compared with multiple recent techniques. In this way, the floor level can be identified with 97.30%, 95.32%, and 96.39% accuracy on the UJIIndoorLoc, Tampere, and UTSIndoorLoc datasets, respectively.
APA, Harvard, Vancouver, ISO, and other styles
44

Hüllermeier, Eyke, Marcel Wever, Eneldo Loza Mencia, Johannes Fürnkranz, and Michael Rapp. "A flexible class of dependence-aware multi-label loss functions." Machine Learning 111, no. 2 (January 13, 2022): 713–37. http://dx.doi.org/10.1007/s10994-021-06107-2.

Full text
Abstract:
The idea to exploit label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology that allows to analyze the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single label sets or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.
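The two extreme members of the loss family discussed above can be made concrete in a few lines. Hamming loss scores each label independently, while subset 0/1 loss gives credit only for an exactly correct label set; the paper's non-additive measures interpolate between these.

```python
# The two boundary cases of the dependence-aware loss family, in NumPy.
import numpy as np

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],   # one label wrong in the first instance
                   [0, 1, 0]])  # second instance exactly correct

hamming = (y_true != y_pred).mean()                 # fraction of wrong labels
subset01 = (y_true != y_pred).any(axis=1).mean()    # all-or-nothing per row
```

Here the single wrong label costs 1/6 under Hamming loss but a full 1/2 under subset 0/1 loss, which is exactly the sensitivity-to-label-subsets trade-off the parametrized family controls.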
APA, Harvard, Vancouver, ISO, and other styles
45

Kodati, Dr Sarangam, Dr M. Dhasaratham, Veldandi Srikanth, and K. Meenendranath Reddy. "Classification of SARS Cov-2 and Non-SARS Cov-2 Pneumonia Using CNN." Journal of Prevention, Diagnosis and Management of Human Diseases, no. 36 (November 23, 2023): 32–40. http://dx.doi.org/10.55529/jpdmhd.36.32.40.

Full text
Abstract:
Both patients and medical professionals will benefit from precise identification of the coronavirus responsible for the COVID-19 outbreak, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). In countries where diagnostic tools are not easily accessible, knowledge of the disease's impact on the lungs is of utmost importance. The goal of this research was to demonstrate that high-resolution chest X-ray images could be used in conjunction with extensive training data to reliably differentiate COVID-19. The evaluation included the training of deep learning and AI classifiers using publicly available X-ray images (1092 healthy, 1345 pneumonia, and 3616 confirmed COVID-19). There were 38 experiments conducted using convolutional neural networks, 10 experiments using 5 AI models, and 14 experiments using state-of-the-art pre-trained models for transfer learning. In the first stages, the performance of the models was assessed using an eightfold cross-validation scheme with accompanying visual and data analysis. The average area under the receiver operating characteristic curve was 96.51%, with 93.84% sensitivity, 98.18% specificity, and 98.50% precision. COVID-19 may be detected in a small number of imbalanced chest X-ray images using a convolutional neural network with few layers and no pre-processing.
APA, Harvard, Vancouver, ISO, and other styles
46

Sugiharti, Endang, Riza Arifudin, Dian Tri Wiyanti, and Arief Broto Susilo. "Integration of convolutional neural network and extreme gradient boosting for breast cancer detection." Bulletin of Electrical Engineering and Informatics 11, no. 2 (April 1, 2022): 803–13. http://dx.doi.org/10.11591/eei.v11i2.3562.

Full text
Abstract:
With the most recent advances in technology, computer programming has reached the capabilities of human brain to decide things for almost all healthcare systems. The implementation of Convolutional Neural Network (CNN) and Extreme Gradient Boosting (XGBoost) is expected to improve the accurateness of breast cancer detection. The aims of this research were to; i) determine the stages of CNN-XGBoost integration in diagnosis of breast cancer and ii) calculate the accuracy of the CNN-XGBoost integration in breast cancer detection. By combining transfer learning and data augmentation, CNN with XGBoost as a classifier was used. After acquiring accuracy results through transfer learning, this reasearch connects the final layer to the XGBoost classifier. Furthermore, the interface design for the evaluation process was established using the Python programming language and the Django platform. The results: i) the stages of CNN-XGBoost integration on histopathology images for breast cancer detection were discovered. ii) Achieved a higher level of accuracy as a result of the CNN-XGBoost integration for breast cancer detection. In conclusion, breast cancer detection was revealed through the integration of CNN-XGBoost through histopathological images. The combination of CNN and XGBoost can enhance the accuracy of breast cancer detection.
APA, Harvard, Vancouver, ISO, and other styles
47

Omar Franklin Molina, Zeila Coelho Santos, Bruno Ricardo Huber Simião, Rógerio Ferreira Marchezan, Natalia de Paula e Silva, and Karla Regina Gama. "A comprehensive method to classify subgroups of bruxers in temporomandibular disorders (TMDs) individuals: frequency, clinical and psychological implications." RSBO 10, no. 1 (March 28, 2014): 11–9. http://dx.doi.org/10.21726/rsbo.v10i1.888.

Full text
Abstract:
Bruxism is an oral phenomenon described as a parafunctional activity involving nocturnal and/or diurnal tooth clenching and/or grinding, which may cause tooth wear, fatigue, pain in the muscles and temporomandibular joints, and limitations in mandibular movements. Objective: To classify bruxers into four different subgroups. Material and methods: Evaluation of 162 individuals presenting temporomandibular disorders (TMDs) referred consecutively over a period of six years. Chief complaint, history of signs/symptoms, and clinical examination were used to gather data. Individuals were classified as having TMDs if they were seeking active treatment for the following complaints: pain in the masticatory muscles and/or temporomandibular joints (TMJs), difficulties in performing normal jaw movements, tenderness to palpation of muscles and joints, and joint noises. Patients were classified as mild, moderate, severe, and extreme bruxers if they presented 3 to 5, 6 to 10, 11 to 15, or 16 to 25 signs and symptoms of bruxing behavior, respectively. Data were submitted to the chi-square test for independence and Fisher's exact test (p < 0.05). Results: Frequencies of 16.1%, 29.6%, 31.5%, and 22.8% of mild, moderate, severe, and extreme bruxing behavior were found in this study. Moderate and severe bruxing behavior occurred more frequently than mild and extreme bruxing behavior (p < 0.0001). Conclusion: All four groups of bruxers were represented in this study, with mild and extreme bruxing behavior showing the lowest frequencies.
APA, Harvard, Vancouver, ISO, and other styles
48

M, Duraipandian, and Vinothkanna R. "Smart Digital Mammographic Screening System for Bulk Image Processing." December 2020 2, no. 4 (February 22, 2021): 156–61. http://dx.doi.org/10.36548/jeea.2020.4.003.

Full text
Abstract:
Treating breast cancer is easier at early stages; however, proper diagnosis is essential for this purpose. Mammography helps in the early detection of cancer cells. The existence of masses and calcifications in mammograms is the evidence that helps radiologists in early cancer identification. This paper proposes a smart digital mammographic screening system for processing images in large volumes irrespective of the nature of the images. In this technique, watershed segmentation is performed based on appropriate selection of internal and external markers using multiple-threshold extended maxima transformations. Distinguishing between healthy breast tissue and masses can be performed efficiently using a two-stage classifier. An Extreme Learning Machine-based single-layer feedforward network along with a Bayesian classifier is used for reducing false-positive areas. Feature vectors with features such as texture and contrast are calculated using these approaches. A Digital Mammography Screening database (DMS) with 100 mammographic images was created for the purpose of evaluation. Further, online databases such as the Breast Cancer Database (BCDB) and BreakHis are also used for analysis. The overall sensitivity on the datasets using the Bayesian classifier and the Extreme Learning Machine is found to be 85% and 90%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
49

Acosta-Coll, Melisa, Abel Morales, Ronald Zamora-Musa, and Shariq Aziz Butt. "Cross-Evaluation of Reflectivity from NEXRAD and Global Precipitation Mission during Extreme Weather Events." Sensors 22, no. 15 (August 2, 2022): 5773. http://dx.doi.org/10.3390/s22155773.

Full text
Abstract:
During extreme events such as tropical cyclones, the precision of sensors used to sample the meteorological data is vital to feed weather and climate models for storm path forecasting, quantitative precipitation estimation, and other atmospheric parameters. For this reason, periodic data comparison between several sensors used to monitor these phenomena such as ground-based and satellite instruments, must maintain a high degree of correlation in order to issue alerts with an accuracy that allows for timely decision making. This study presents a cross-evaluation of the radar reflectivity from the dual-frequency precipitation radar (DPR) onboard the Global Precipitation Measurement Mission (GPM) and the U.S. National Weather Service (NWS) Next-Generation Radar (NEXRAD) ground-based instrument located in the Caribbean island of Puerto Rico, USA, to determine the correlation degree between these two sensors’ measurements during extreme weather events and normal precipitation events during 2015–2019. GPM at Ku-band and Ka-band and NEXRAD at S-band overlapping scanning regions data of normal precipitation events during 2015–2019, and the spiral rain bands of four extreme weather events, Irma (Category 5 Hurricane), Beryl (Tropical Storm), Dorian (Category 1 hurricane), and Karen (Tropical Storm), were processed using the GPM Ground Validation System (GVS). In both cases, data were classified and analyzed statistically, paying particular attention to variables such as elevation angle mode and precipitation type (stratiform and convective). Given that ground-based radar (GR) has better spatial and temporal resolution, the NEXRAD was used as ground-truth. The results revealed that the correlation coefficient between the data of both instruments during the analyzed extreme weather events was moderate to low; for normal precipitation events, the correlation is lower than that of studies that compared GPM and NEXRAD reflectivity located in other regions of the USA. 
Only Tropical Storm Karen obtained similar results to other comparative studies in terms of the correlation coefficient. Furthermore, the GR elevation angle and precipitation type have a substantial impact on how well the rain reflectivity correlates between the two sensors. It was found that the Ku-band channel possesses the least bias and variability when compared to the NEXRAD instrument’s reflectivity and should therefore be considered more reliable for future tropical storm tracking and tropical region precipitation estimates in regions with no NEXRAD coverage.
APA, Harvard, Vancouver, ISO, and other styles
50

T R, Prajwala. "NON-PARAMETRIC RANDOMIZED TREE CLASSIFIER FOR DETECTION OF AUTISM DISORDER IN TODDLERS." International Journal of Research -GRANTHAALAYAH 9, no. 10 (November 3, 2021): 205–10. http://dx.doi.org/10.29121/granthaalayah.v9.i10.2021.4341.

Full text
Abstract:
Autism is a behavioral disorder seen in toddlers and adolescents. It is a disorder that concerns the child's behavior, speech, and social interaction, and the child's nonverbal communication is also affected. The parents of affected children find it very cumbersome to manage the child. Detection of such anomalies is really important at early stages. This paper mainly focuses on the early detection of autistic behavior in toddlers. Among the various machine learning and deep learning algorithms, the non-parametric extremely randomized tree classifier is one technique that helps in the early detection of autistic behavior in toddlers. The performance evaluation metrics used are the Jaccard score, ROC curves, and mean squared error. Feature selection is done using Spearman correlation to identify the features affecting the child most, represented in the form of a heat map. The extra-trees classifier proves to be a better algorithm for detecting autism at early stages of child development.
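The setup described above can be sketched as follows. The data are synthetic stand-ins for the toddler screening questionnaire; Spearman correlation ranks the features against the label, and an extremely randomized trees classifier is scored with the Jaccard metric.

```python
# Hedged sketch: Spearman-based feature ranking plus an extra-trees
# classifier evaluated with the Jaccard score, on synthetic data.
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import jaccard_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=7)

# Rank each screening feature by |Spearman correlation| with the label
# (these values would populate the heat map described in the abstract).
corr = [abs(spearmanr(X[:, j], y)[0]) for j in range(X.shape[1])]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)
model = ExtraTreesClassifier(random_state=7).fit(X_tr, y_tr)
jac = jaccard_score(y_te, model.predict(X_te))
```

Unlike random forests, extra-trees also randomizes the split thresholds, which is the "extremely randomized" property the paper relies on.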
APA, Harvard, Vancouver, ISO, and other styles