Journal articles on the topic 'KNN CLASSIFIER'

Consult the top 50 journal articles for your research on the topic 'KNN CLASSIFIER.'

1

Demidova, Liliya A. "Two-Stage Hybrid Data Classifiers Based on SVM and kNN Algorithms." Symmetry 13, no. 4 (April 7, 2021): 615. http://dx.doi.org/10.3390/sym13040615.

Abstract:
The paper considers the problem of developing two-stage hybrid SVM-kNN classifiers that aim to increase data classification quality by refining the classification decisions near the class boundary defined by the SVM classifier. In the first stage, an SVM classifier with default parameter values is developed; the training dataset is designed on the basis of the initial dataset, and either a binary SVM algorithm or a one-class SVM algorithm is used. Based on the results of training the SVM classifier, two variants of the training dataset are formed for developing the kNN classifier: one uses all objects from the original training dataset located inside the strip dividing the classes, and the other uses only those objects from the initial training dataset located inside the area containing all misclassified objects from the class-dividing strip. In the second stage, the kNN classifier is developed using this new training dataset, with its parameter values determined during training so as to maximize classification quality. The classification quality of the two-stage hybrid SVM-kNN classifier was assessed using various indicators on the test dataset. When the kNN classifier improves the quality of classification near the class boundary defined by the SVM classifier, the two-stage hybrid SVM-kNN classifier is recommended for further use. Experimental results obtained on various datasets confirm the feasibility of using two-stage hybrid SVM-kNN classifiers in data classification problems.
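The two-stage scheme described in this abstract can be sketched in a few lines of scikit-learn. This is a hedged illustration, not the paper's implementation: the dataset is synthetic, and the "class-dividing strip" is approximated here as the region where the SVM decision function has magnitude below 1.

```python
# Hedged sketch of a two-stage hybrid SVM-kNN classifier. Assumptions: the
# "strip dividing the classes" is taken as |decision_function| < 1 (inside the
# SVM margin), and synthetic overlapping blobs stand in for the paper's data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=400, centers=[[0, 0], [3, 0]],
                  cluster_std=2.0, random_state=0)
X_train, y_train, X_test, y_test = X[:300], y[:300], X[300:], y[300:]

# Stage 1: SVM classifier with default parameter values.
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Training objects inside the margin strip form the kNN training set.
strip = np.abs(svm.decision_function(X_train)) < 1.0
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[strip], y_train[strip])

# Stage 2: refine only the decisions near the class boundary with kNN.
pred = svm.predict(X_test)
near = np.abs(svm.decision_function(X_test)) < 1.0
if near.any():
    pred[near] = knn.predict(X_test[near])
print((pred == y_test).mean())
```

Whether the hybrid actually beats the plain SVM on a given dataset is exactly what the paper's quality indicators are meant to decide.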
2

Hu, Juan, Hong Peng, Jun Wang, and Wenping Yu. "kNN-P: A kNN classifier optimized by P systems." Theoretical Computer Science 817 (May 2020): 55–65. http://dx.doi.org/10.1016/j.tcs.2020.01.001.

3

PAO, TSANG-LONG, YUN-MAW CHENG, YU-TE CHEN, and JUN-HENG YEH. "PERFORMANCE EVALUATION OF DIFFERENT WEIGHTING SCHEMES ON KNN-BASED EMOTION RECOGNITION IN MANDARIN SPEECH." International Journal of Information Acquisition 04, no. 04 (December 2007): 339–46. http://dx.doi.org/10.1142/s021987890700140x.

Abstract:
Since emotion influences cognition and the perception of daily activities such as learning, communication, and even rational decision-making, it must be considered in human-computer interaction. In this paper, we compare four different weighting functions in weighted KNN-based classifiers for recognizing five emotions, namely anger, happiness, sadness, neutral, and boredom, from Mandarin emotional speech. The classifiers studied include weighted KNN, weighted CAP, and weighted D-KNN, with the traditional KNN classifier serving as the baseline performance measure. The experimental results show that the Fibonacci weighting function outperforms the others in all weighted classifiers; the highest accuracy, 81.4%, is achieved with the weighted D-KNN classifier.
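The exact Fibonacci weighting scheme is not spelled out in this abstract; one plausible reading, sketched below, gives the closest of the k neighbors the largest Fibonacci number and the farthest the smallest, then takes a weighted vote. The data and the weighting order here are illustrative assumptions, not the paper's.

```python
# Hedged sketch of a Fibonacci-weighted kNN vote (weighting order assumed).
import numpy as np

def fibonacci_weights(k):
    # First k Fibonacci numbers, reversed so rank 0 (closest) weighs most.
    fib = [1, 1]
    while len(fib) < k:
        fib.append(fib[-1] + fib[-2])
    return np.array(fib[:k][::-1], dtype=float)

def weighted_knn_predict(X_train, y_train, x, k=5):
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)[:k]           # indices of the k nearest neighbors
    w = fibonacci_weights(k)
    votes = {}
    for rank, idx in enumerate(order):  # accumulate weighted votes per class
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + w[rank]
    return max(votes, key=votes.get)

X = np.array([[0.0, 0], [0.1, 0], [0.2, 0], [5.0, 5], [5.1, 5]])
y = np.array([0, 0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.05, 0.0]), k=3))  # → 0
```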
4

Murugan, S., Ganesh Babu T R, and Srinivasan C. "Underwater Object Recognition Using KNN Classifier." International Journal of MC Square Scientific Research 9, no. 3 (December 17, 2017): 48. http://dx.doi.org/10.20894/ijmsr.117.009.003.007.

5

Mohamed, Taha M. "Pulsar selection using fuzzy knn classifier." Future Computing and Informatics Journal 3, no. 1 (June 2018): 1–6. http://dx.doi.org/10.1016/j.fcij.2017.11.001.

6

Khan, Asfandyar, Abdullah Khan, Muhammad Muntazir Khan, Kamran Farid, Muhammad Mansoor Alam, and Mazliham Bin Mohd Su’ud. "Cardiovascular and Diabetes Diseases Classification Using Ensemble Stacking Classifiers with SVM as a Meta Classifier." Diagnostics 12, no. 11 (October 26, 2022): 2595. http://dx.doi.org/10.3390/diagnostics12112595.

Abstract:
Cardiovascular disease includes coronary artery disease (CAD), which covers angina and myocardial infarction (commonly known as a heart attack), and coronary heart disease (CHD), which is marked by the buildup of a waxy material called plaque inside the coronary arteries. Heart attacks remain the main cause of death worldwide, and if not treated promptly they can cause major health problems, such as diabetes. If ignored, diabetes can in turn result in a variety of health problems, including heart disease, stroke, blindness, and kidney failure. Machine learning methods can be used to identify and diagnose diabetes and other illnesses, and both diabetes and cardiovascular disease can be diagnosed using several classifier types. Naive Bayes, K-nearest neighbor (KNN), linear regression, decision trees (DT), and support vector machines (SVM) are among the classifiers previously employed, although all of these models had poor accuracy; new research is therefore required to diagnose diabetes and cardiovascular disease. This study developed an ensemble approach called a "stacking classifier" to improve the performance of the integrated individual classifiers and decrease the likelihood of misclassifying a single instance. Naive Bayes, KNN, Linear Discriminant Analysis (LDA), and Decision Tree (DT) classifiers are used as base learners, with Random Forest and SVM as meta-classifiers. For diagnosing diabetes, the suggested stacking classifier obtains a superior accuracy of 97.35%, compared with 76.46%, 74.60%, 78.57%, and 77.35% for Naive Bayes, KNN, DT, and LDA, respectively. For cardiovascular disease, the suggested stacking classifier likewise performed better, with an accuracy of 88.71% compared with 83.77%, 82.56%, 84.26%, 85.23%, and 84.72% for KNN, NB, DT, LDA, and SVM, respectively.
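The stacking setup described above maps directly onto scikit-learn's StackingClassifier. The sketch below is an illustrative approximation (base learners NB, KNN, LDA, and DT with an SVM meta-classifier) on a stand-in dataset, not the paper's diabetes or cardiovascular data.

```python
# Minimal stacking-classifier sketch: NB, KNN, LDA, DT base learners feed an
# SVM meta-classifier via out-of-fold predictions. Dataset is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("lda", LinearDiscriminantAnalysis()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=SVC(),   # SVM as the meta-classifier
    cv=5)                    # out-of-fold predictions train the meta-classifier
print(stack.fit(X_tr, y_tr).score(X_te, y_te))
```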
7

Widyadhana, Arya, Cornelius Bagus Purnama Putra, Rarasmaya Indraswari, and Agus Zainal Arifin. "A Bonferroni Mean Based Fuzzy K Nearest Centroid Neighbor Classifier." Jurnal Ilmu Komputer dan Informasi 14, no. 1 (February 28, 2021): 65–71. http://dx.doi.org/10.21609/jiki.v14i1.959.

Abstract:
K-nearest neighbor (KNN) is an effective nonparametric classifier that determines the neighbors of a point based only on distance proximity. The classification performance of KNN is disadvantaged by the presence of outliers in small sample size datasets and its performance deteriorates on datasets with class imbalance. We propose a local Bonferroni Mean based Fuzzy K-Nearest Centroid Neighbor (BM-FKNCN) classifier that assigns class label of a query sample dependent on the nearest local centroid mean vector to better represent the underlying statistic of the dataset. The proposed classifier is robust towards outliers because the Nearest Centroid Neighborhood (NCN) concept also considers spatial distribution and symmetrical placement of the neighbors. Also, the proposed classifier can overcome class domination of its neighbors in datasets with class imbalance because it averages all the centroid vectors from each class to adequately interpret the distribution of the classes. The BM-FKNCN classifier is tested on datasets from the Knowledge Extraction based on Evolutionary Learning (KEEL) repository and benchmarked with classification results from the KNN, Fuzzy-KNN (FKNN), BM-FKNN and FKNCN classifiers. The experimental results show that the BM-FKNCN achieves the highest overall average classification accuracy of 89.86% compared to the other four classifiers.
8

Zheng, Shuai, and Chris Ding. "A group lasso based sparse KNN classifier." Pattern Recognition Letters 131 (March 2020): 227–33. http://dx.doi.org/10.1016/j.patrec.2019.12.020.

9

Wang, Zhiping, Junying Na, and Baoyou Zheng. "An Improved kNN Classifier for Epilepsy Diagnosis." IEEE Access 8 (2020): 100022–30. http://dx.doi.org/10.1109/access.2020.2996946.

10

Taguelmimt, Redha, and Rachid Beghdad. "DS-kNN." International Journal of Information Security and Privacy 15, no. 2 (April 2021): 131–44. http://dx.doi.org/10.4018/ijisp.2021040107.

Abstract:
On one hand, there are many proposed intrusion detection systems (IDSs) in the literature. On the other hand, many studies try to deduce the features that can best detect attacks. This paper presents a new and easy-to-implement approach to intrusion detection, named distance sum-based k-nearest neighbors (DS-kNN), which is an improved version of the k-NN classifier. Given a data sample to classify, DS-kNN computes the distance sum of the k nearest neighbors of the data sample in each of the possible classes of the dataset; the data sample is then assigned to the class having the smallest sum. The experimental results show that the DS-kNN classifier performs better than the original k-NN algorithm in terms of accuracy, detection rate, false positives, and attack classification. The authors mainly compare DS-kNN to CANN, but also to SVM, S-NDAE, and DBN. The obtained results also show that the approach is very competitive.
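The DS-kNN rule as described in the abstract is simple to sketch: for each class, sum the distances to that class's k nearest training samples and assign the class with the smallest sum. The toy data below is illustrative.

```python
# Sketch of the DS-kNN decision rule described in the abstract.
import numpy as np

def ds_knn_predict(X_train, y_train, x, k=3):
    sums = {}
    for c in np.unique(y_train):
        d = np.linalg.norm(X_train[y_train == c] - x, axis=1)
        sums[c] = np.sort(d)[:k].sum()   # distance sum of class c's k nearest
    return min(sums, key=sums.get)       # class with the smallest distance sum

X = np.array([[0.0, 0], [1, 0], [0, 1], [8, 8], [9, 8], [8, 9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(ds_knn_predict(X, y, np.array([0.5, 0.5]), k=2))  # → 0
```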
11

Mu, Xuchuan. "Implementation of Music Genre Classifier Using KNN Algorithm." Highlights in Science, Engineering and Technology 34 (February 28, 2023): 149–54. http://dx.doi.org/10.54097/hset.v34i.5439.

Abstract:
As music history has grown, music has diversified into different genres. This study aims to implement a music genre classifier using the KNN algorithm together with a faster method. The KNN algorithm is accurate but has a long execution time, so this study implements a new method, inspired by the K-means clustering algorithm, that can speed up the KNN process. The dataset is preprocessed using this idea: the program selects the song that is the centroid of each genre and uses the KNN method to return the closest genre based on the distance from the test sample to the centroids. In conclusion, the new method did not perform well in accuracy but did speed up the program. This study provides a useful reference for the music genre classification problem in the machine learning domain, and it investigates an ultimately infeasible method of preprocessing data for KNN optimization.
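The centroid-based speed-up described above reduces, in the limit, to a nearest-centroid decision: each genre is represented by the mean of its songs, and a test sample is assigned to the closest centroid. The sketch below uses invented two-dimensional "features" purely for illustration.

```python
# Sketch of the centroid speed-up: replace each genre's songs by their
# centroid, then run a 1-NN decision against the centroids.
import numpy as np

def fit_centroids(X, y):
    # One centroid per genre: the mean feature vector of its songs.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    # Nearest-centroid decision, i.e. kNN with k=1 over the centroids.
    return min(centroids, key=lambda c: np.linalg.norm(centroids[c] - x))

X = np.array([[1.0, 2], [1.2, 1.8], [6, 7], [5.8, 7.2]])
y = np.array(["jazz", "jazz", "rock", "rock"])
cents = fit_centroids(X, y)
print(predict(cents, np.array([1.1, 2.0])))  # → jazz
```

This is why the method trades accuracy for speed: each query compares against one point per genre instead of every song.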
12

Ahmed, Ismail Taha, Baraa Tareq Hammad, and Norziana Jamil. "Forgery detection algorithm based on texture features." Indonesian Journal of Electrical Engineering and Computer Science 24, no. 1 (October 1, 2021): 226. http://dx.doi.org/10.11591/ijeecs.v24.i1.pp226-235.

Abstract:
Any researcher's goal is to improve detection accuracy with a limited feature vector dimension. In this paper, we therefore attempt to discover the types of texture features and classifiers best suited to copy-move forgery detection (CMFD). Segmentation-based fractal texture analysis (SFTA), local binary pattern (LBP), and Haralick are the texture features chosen; K-nearest neighbors (KNN), naïve Bayes, and logistic classifiers are the classifiers chosen. The SFTA, LBP, and Haralick feature vectors are fed to the KNN, naïve Bayes, and logistic classifiers. The experimental outcomes indicate that the SFTA texture feature surpassed all other texture features across all classifiers, making it the best texture feature for forgery detection; the Haralick feature performed second best in all classifiers, while performance with the LBP feature was lower than with the other texture features. The results also show that the KNN classifier outperformed the other two in terms of accuracy, whereas the logistic classifier had the lowest accuracy. The proposed SFTA-based KNN method is compared to other state-of-the-art techniques in terms of feature dimension and detection accuracy, and it outperforms the current techniques.
13

Mahanya, G. B., and S. Nithyaselvakumari. "Analysis And Comparison Of Ventricular Cardiac Arrhythmia Classification Using Calcium Channel Parameters With KNN And ANN Classifier." CARDIOMETRY, no. 25 (February 14, 2023): 919–25. http://dx.doi.org/10.18137/cardiometry.2022.25.919925.

Abstract:
Aim: The aim of this research is to analyze and compare ventricular cardiac arrhythmia classification using calcium channel parameters with Artificial Neural Network (ANN) and K-Nearest Neighbour (KNN) classifiers. Materials and Methods: For the classification of arrhythmias, the A. V. Panfilov (AVP) model is used; the THVCM contains well-defined calcium channel dynamics and properties. Sample size was calculated keeping the threshold at 0.05, G Power at 80%, the confidence interval at 95%, and the enrolment ratio at 1. Twenty samples are considered for each analysis and imported into the K-Nearest Neighbour (KNN) and Artificial Neural Network (ANN) classifiers to compare accuracy. Finally, the results (accuracy) are validated using Statistical Package for the Social Sciences (SPSS) software. Results: The results obtained from the normal, tachycardia, and bradycardia data are imported into the ANN and KNN classifiers. KNN shows an accuracy of 12.3950% (standard deviation 0.96490, standard error mean 0.21576), while ANN shows an accuracy of 35.3400% (standard deviation 3.22285, standard error mean 0.72065). Conclusion: From the results, it is concluded that ANN produces better results than KNN classification in terms of accuracy.
14

Man Kwon, Young, Min Gu Son, Dong Keun Chung, and Myung Jae Lim. "A Study on the Performance of Feature Extraction Methods According to the Size of N-Gram." International Journal of Engineering & Technology 7, no. 3.33 (August 29, 2018): 23. http://dx.doi.org/10.14419/ijet.v7i3.33.18516.

Abstract:
In this paper, we studied the performance of feature extraction methods according to the size of the N-gram for malware detection. Features are extracted from PE files by three methods: using Opcode only, both Opcode and API, and API only. We measure their performance indirectly through the AUC score and accuracy of the resulting classifiers, experimenting with different N sizes and several classifiers, namely DT, SVM, KNN, and BNB. As a result, we reached the following conclusions: when the N-gram technique is used, our experiments recommend the Opcode-only method, and the instance-based classifier KNN and the model-based classifier DT perform better than SVM and BNB.
15

Sameera, N. "Protocol-Specific Intrusion Detection System using KNN Classifier." International Journal for Research in Applied Science and Engineering Technology 6, no. 5 (May 31, 2018): 292–99. http://dx.doi.org/10.22214/ijraset.2018.5049.

16

Rathi, Rakesh, Ravi Krishan Pandey, Mahesh Jangid, and Vikas Chaturvedi. "Offline Handwritten Devanagari Vowels Recognition using KNN Classifier." International Journal of Computer Applications 49, no. 23 (July 31, 2012): 11–16. http://dx.doi.org/10.5120/7942-1270.

17

Jamshidi, Yazdan, and Vassilis G. Kaburlasos. "gsaINknn: A GSA optimized, lattice computing knn classifier." Engineering Applications of Artificial Intelligence 35 (October 2014): 277–85. http://dx.doi.org/10.1016/j.engappai.2014.06.018.

18

TAN, S. "An effective refinement strategy for KNN text classifier." Expert Systems with Applications 30, no. 2 (February 2006): 290–98. http://dx.doi.org/10.1016/j.eswa.2005.07.019.

19

Narayan, Yogendra. "Motor-Imagery based EEG Signals Classification using MLP and KNN Classifiers." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 11, 2021): 3345–50. http://dx.doi.org/10.17762/turcomat.v12i2.2394.

Abstract:
The classification of electroencephalogram (EEG) signals plays a major role in developing assistive rehabilitation devices for physically disabled persons. In this context, EEG data were acquired from 20 healthy humans, followed by pre-processing and feature extraction. After extracting 12 time-domain features, two well-known classifiers, namely K-nearest neighbor (KNN) and multi-layer perceptron (MLP), were employed. A fivefold cross-validation approach was utilized for dividing the data into training and testing sets. The results indicated that the MLP classifier performed better than the KNN classifier, achieving the best classification accuracy of 95%. The outcome of this study would be very useful for the online development of EEG classification models as well as for designing EEG-based wheelchairs.
20

Shaul, Hayim, Dan Feldman, and Daniela Rus. "Secure k-ish Nearest Neighbors Classifier." Proceedings on Privacy Enhancing Technologies 2020, no. 3 (July 1, 2020): 42–61. http://dx.doi.org/10.2478/popets-2020-0045.

Abstract:
The k-nearest neighbors (kNN) classifier predicts the class of a query, q, by taking the majority class of its k neighbors in an existing (already classified) database, S. In secure kNN, q and S are owned by two different parties and q is classified without sharing data. In this work we present a classifier based on kNN that is more efficient to implement with homomorphic encryption (HE). The efficiency of our classifier comes from a relaxation: we consider the κ nearest neighbors for κ ≈ k, with a probability that increases as the statistical distance between the Gaussian distribution and the distribution of the distances from q to S decreases. We call our classifier k-ish Nearest Neighbors (k-ish NN). For the implementation we introduce a double-blinded coin toss where the bias and output of the toss are encrypted. We use it to approximate the average and variance of the distances from q to S in a scalable circuit whose depth is independent of |S|. We believe these to be of independent interest. We implemented our classifier in an open source library based on HElib and tested it on a breast tumor database. Our classifier has accuracy and running time comparable to the current state-of-the-art (non-HE) MPC solutions, which have better running time but worse communication complexity. It also has communication complexity similar to a naive HE implementation, which has worse running time.
21

Mahanya, G. B., and S. Nithyaselvakumari. "Analysis And Comparison Of Ventricular Cardiac Arrhythmia Classification Using Sodium Channel Parameters With ANN And KNN Classifier." CARDIOMETRY, no. 25 (February 14, 2023): 911–18. http://dx.doi.org/10.18137/cardiometry.2022.25.911918.

Abstract:
Aim: The aim of this research is to analyze and compare ventricular Cardiac Arrhythmia (CA) classification using Sodium Channel (Na+) parameters with Artificial Neural Network (ANN) and K-Nearest Neighbour (KNN) classifiers. Materials and Methods: The Ten Tusscher Human Ventricular Cell Model (THVCM), which has well-defined sodium (Na+) channel dynamics, is used for arrhythmia classification. Sample size was calculated keeping the threshold at 0.05, G Power at 80%, the confidence interval at 95%, and the enrolment ratio at 1. Twenty samples are considered for each analysis and imported into the K-Nearest Neighbour (KNN) and Artificial Neural Network (ANN) classifiers to compare accuracy. Finally, the results (accuracy) are validated using Statistical Package for the Social Sciences (SPSS) software. Result: Ventricular normal, tachycardia, and bradycardia data are fed into the ANN and KNN classifiers, and the results obtained for 20 samples are fed to SPSS. ANN shows an accuracy of 35.6% (standard deviation 3.17822, standard error mean 0.71067), while KNN produces an accuracy of 18.05% (standard deviation 1.19593, standard error mean 0.26739). Conclusion: The results clearly show that ANN has better classification accuracy than KNN.
22

Zou, Xiuguo, Chenyang Wang, Manman Luo, Qiaomu Ren, Yingying Liu, Shikai Zhang, Yungang Bai, Jiawei Meng, Wentian Zhang, and Steven W. Su. "Design of Electronic Nose Detection System for Apple Quality Grading Based on Computational Fluid Dynamics Simulation and K-Nearest Neighbor Support Vector Machine." Sensors 22, no. 8 (April 14, 2022): 2997. http://dx.doi.org/10.3390/s22082997.

Abstract:
Apples are one of the most widely planted fruits in the world, with an extremely high annual production. Several issues in the quality grading of apples should be addressed, such as the long detection period, the inability to detect internal quality, and the risk of damaging samples during grading. In this study, an electronic nose (e-nose) detection system for apple quality grading based on the K-nearest neighbor support vector machine (KNN-SVM) was designed, and the nasal cavity structure of the e-nose was optimized by computational fluid dynamics (CFD) simulation. A KNN-SVM classifier was also proposed to overcome the shortcomings of traditional SVMs. The performance of the developed device was verified experimentally as follows. The apples were divided into three groups according to their external and internal quality. The e-nose data were pre-processed before feature extraction, and then Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) were used to reduce the dimension of the datasets. The recognition accuracy of the PCA–KNN-SVM classifier was 96.45%, and the LDA–KNN-SVM classifier achieved 97.78%. Compared with other commonly used classifiers (traditional KNN, SVM, Decision Tree, and Random Forest), KNN-SVM is more efficient in terms of training time and classification accuracy. Overall, the apple grading system can be used to evaluate the quality of apples during storage.
23

Gweon, Hyukjun, Matthias Schonlau, and Stefan H. Steiner. "The k conditional nearest neighbor algorithm for classification and class probability estimation." PeerJ Computer Science 5 (May 13, 2019): e194. http://dx.doi.org/10.7717/peerj-cs.194.

Abstract:
The k nearest neighbor (kNN) approach is a simple and effective nonparametric algorithm for classification. One of the drawbacks of kNN is that the method can only give coarse estimates of class probabilities, particularly for low values of k. To avoid this drawback, we propose a new nonparametric classification method based on nearest neighbors conditional on each class: the proposed approach calculates the distance between a new instance and the kth nearest neighbor from each class, estimates posterior probabilities of class memberships using the distances, and assigns the instance to the class with the largest posterior. We prove that the proposed approach converges to the Bayes classifier as the size of the training data increases. Further, we extend the proposed approach to an ensemble method. Experiments on benchmark data sets show that both the proposed approach and the ensemble version of the proposed approach on average outperform kNN, weighted kNN, probabilistic kNN and two similar algorithms (LMkNN and MLM-kHNN) in terms of the error rate. A simulation shows that kCNN may be useful for estimating posterior probabilities when the class distributions overlap.
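The kCNN decision rule can be sketched from the abstract: measure the distance from the query to the k-th nearest neighbor of each class, then convert those distances into class posteriors. The inverse-distance normalization below is an illustrative stand-in for the paper's actual posterior estimate, and the toy data is invented.

```python
# Sketch of the k conditional nearest neighbor (kCNN) idea: per class, take
# the distance to that class's k-th nearest neighbor, then form pseudo-
# posteriors (inverse-distance normalization here is an assumption).
import numpy as np

def kcnn_posteriors(X_train, y_train, x, k=3):
    classes = np.unique(y_train)
    # Distance from x to the k-th nearest training sample of each class.
    dk = np.array([np.sort(np.linalg.norm(X_train[y_train == c] - x, axis=1))[k - 1]
                   for c in classes])
    inv = 1.0 / (dk + 1e-12)            # smaller k-th distance -> larger posterior
    return dict(zip(classes, inv / inv.sum()))

X = np.array([[0.0, 0], [1, 0], [0, 1], [8, 8], [9, 8], [8, 9]])
y = np.array([0, 0, 0, 1, 1, 1])
post = kcnn_posteriors(X, y, np.array([0.2, 0.2]), k=2)
print(max(post, key=post.get))  # → 0
```

The instance is then assigned to the class with the largest posterior, as the abstract describes.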
24

Asghar, Ali, Saad Jawaid Khan, Fahad Azim, Choudhary Sobhan Shakeel, Amatullah Hussain, and Imran Khan Niazi. "Inter-classifier comparison for upper extremity EMG signal at different hand postures and arm positions using pattern recognition." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 236, no. 2 (October 22, 2021): 228–38. http://dx.doi.org/10.1177/09544119211053669.

Abstract:
The utilization of surface EMG and intramuscular EMG signals has been observed to create significant improvement in pattern recognition approaches and myoelectric control. However, less data is available for different arm positions and hand postures, which tend to affect combined surface and intramuscular EMG signal acquisition in terms of classifier accuracy. Hence, this study aimed to find a robust classifier for two scenarios: (1) fixed arm position (FAP), where classifiers distinguish different hand postures, and (2) fixed hand posture (FHP), where classifiers distinguish different arm positions. A total of 20 healthy male participants (30.62 ± 3.87 years old) were recruited. They were asked to perform five motion classes (hand grasp, hand open, rest, hand extension, and hand flexion) at four different arm positions: 0°, 45°, 90°, and 135°. SVM, KNN, and LDA classifiers were deployed, and statistical analysis in the form of pairwise comparisons was carried out using SPSS. There was no significant difference among the three classifiers. SVM gave the highest accuracies of 75.35% and 58.32% at FAP and FHP, respectively, for each motion classification, while KNN yielded the highest accuracies of 69.11% and 79.04% when pooled data were classified at different arm positions and at different hand postures, respectively. The results exhibited no significant effect of changing arm position or hand posture on classifier accuracy.
25

Sandhu, Gaurav, Amandeep Singh, Puneet Singh Lamba, Deepali Virmani, and Gopal Chaudhary. "Modified Euclidean-Canberra blend distance metric for kNN classifier." Intelligent Decision Technologies 17, no. 2 (May 15, 2023): 527–41. http://dx.doi.org/10.3233/idt-220233.

Abstract:
In today's world, regression and classification algorithms of machine learning are applied to many different datasets. One classification algorithm is k-nearest neighbor (kNN), which computes distances between the rows of a dataset. The performance of kNN is evaluated based on the K value (the total count of neighboring elements) and the distance metric used. Many different distance metrics have been used in the literature, one of which is the Canberra distance. In this paper, the performance of kNN based on the Canberra distance metric is measured on different datasets; further, a modified Canberra distance metric, namely the Modified Euclidean-Canberra Blend Distance (MECBD) metric, is applied to the kNN algorithm, improving class prediction efficiency on the same datasets as measured by accuracy, precision, recall, and F1-score for different values of k. The use of the MECBD metric improved accuracy from 80.4% to 90.3%, from 80.6% to 85.4%, and from 70.0% to 77.0% on the various datasets used. ROC curves and AUC values for k = 5 also show the improvement in kNN model prediction: AUC increased from 0.873 to 0.958 for the Spine (2 classes) dataset; from 0.857 to 0.940, 0.983 to 0.983 (no change), and 0.910 to 0.957 for the DH, SL, and NO classes of the Spine (3 classes) dataset; and from 0.651 to 0.742 for Haberman's dataset.
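The MECBD formula itself is not reproduced in the abstract, so the sketch below shows only the baseline it modifies: plugging the plain Canberra distance, sum_i |a_i - b_i| / (|a_i| + |b_i|), into scikit-learn's kNN via the metric parameter. The dataset is a stand-in for the ones used in the paper.

```python
# Baseline for the paper's experiment: kNN with the Canberra distance metric.
# The MECBD blend itself is not specified in the abstract, so it is not shown.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Canberra distance: sum_i |a_i - b_i| / (|a_i| + |b_i|)
knn = KNeighborsClassifier(n_neighbors=5, metric="canberra").fit(X_tr, y_tr)
print(knn.score(X_te, y_te))
```

Swapping in a custom blended metric would only require passing a callable (or a different metric string) as `metric`.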
26

Abdulrahman, Noora, and Wala Abedalkhader. "KNN Classifier and Naive Bayse Classifier for Crime Prediction in San Francisco Context." International Journal of Database Management Systems 9, no. 4 (August 31, 2017): 1–9. http://dx.doi.org/10.5121/ijdms.2017.9401.

27

Rajeswari, R. P., Kavitha Juliet, and Aradhana. "Text Classification for Student Data Set using Naive Bayes Classifier and KNN Classifier." International Journal of Computer Trends and Technology 43, no. 1 (January 25, 2017): 8–12. http://dx.doi.org/10.14445/22312803/ijctt-v43p103.

28

Guharoy, Rabel, Nanda Dulal Jana, and Suparna Biswas. "An Efficient Epileptic Seizure Detection Technique using Discrete Wavelet Transform and Machine Learning Classifiers." Journal of Physics: Conference Series 2286, no. 1 (July 1, 2022): 012013. http://dx.doi.org/10.1088/1742-6596/2286/1/012013.

Abstract:
This paper presents an epilepsy detection method based on the discrete wavelet transform (DWT) with machine learning classifiers. DWT is used for feature extraction, as it provides a better decomposition of the signals into different frequency bands. First, DWT is applied to the EEG signal to extract the detail and approximate coefficients of the different sub-bands. After the extraction of the coefficients, principal component analysis (PCA) is applied to the sub-bands, and a feature-level fusion technique is then used to extract the main features in a low-dimensional feature space. Three classifiers, namely the Support Vector Machine (SVM), K-Nearest-Neighbor (KNN), and Naive Bayes (NB) classifiers, are used in the proposed work for classifying the EEG signals. The proposed method is tested on the Bonn databases and provides a maximum of 100% recognition accuracy for the KNN, SVM, and NB classifiers.
29

Pagidirayi, Anil Kumar, and Anuradha Bhuma. "Speech Emotion Recognition Using Machine Learning Techniques." Revue d'Intelligence Artificielle 36, no. 2 (April 30, 2022): 271–78. http://dx.doi.org/10.18280/ria.360211.

Abstract:
The Mel Frequency Cepstral Coefficient (MFCC) method is a feature extraction technique for speech signals. In machine learning systems, the Random Subspace Method (RSM), also known as attribute bagging or feature bagging, is used to classify complete feature sets. In this paper, an innovative method combining RSM and the kNN algorithm, known as the Subspace-kNN (S-kNN) classifier, is proposed. The classifier operates on features extracted with MFCC to recognize angry, sad, fear, disgust, calm, happiness, surprise, and neutral speech emotions in a Speech Emotion Recognition (SER) system. Furthermore, the performance metrics of the proposed method, namely accuracy, Positive Predictive Value (PPV) rate, and training time, are evaluated on male and female voice signals and compared with previous classifiers such as SVM and bagged trees.
APA, Harvard, Vancouver, ISO, and other styles
30

Neira-Rodado, Dionicio, Chris Nugent, Ian Cleland, Javier Velasquez, and Amelec Viloria. "Evaluating the Impact of a Two-Stage Multivariate Data Cleansing Approach to Improve to the Performance of Machine Learning Classifiers: A Case Study in Human Activity Recognition." Sensors 20, no. 7 (March 27, 2020): 1858. http://dx.doi.org/10.3390/s20071858.

Full text
Abstract:
Human activity recognition (HAR) is a popular field of study. The outcomes of projects in this area have the potential to improve the quality of life of people with conditions such as dementia. HAR focuses primarily on applying machine learning classifiers to data from low-level sensors such as accelerometers. The performance of these classifiers can be improved through an adequate training process. To improve the training process, multivariate outlier detection was used to raise the quality of the data in the training set and, subsequently, the performance of the classifier. The impact of the technique was evaluated with KNN and random forest (RF) classifiers. In the case of KNN, the accuracy of the classifier improved from 55.9% to 63.59%.
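One common form of multivariate outlier detection before training is a robust covariance fit; a hedged sketch with scikit-learn's `EllipticEnvelope` on synthetic data follows. The planted "corrupted" points, the contamination rate, and the support fraction are all assumptions, and the paper's actual cleansing procedure may differ:

```python
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Two Gaussian classes plus 10 corrupted training points far from both.
X0 = rng.normal(0.0, 1.0, (100, 3))
X1 = rng.normal(3.0, 1.0, (100, 3))
X_train = np.vstack([X0, X1, rng.normal(10.0, 0.5, (10, 3))])
y_train = np.concatenate([np.zeros(100), np.ones(100), np.zeros(10)])

# Multivariate outlier detection: drop points far from the covariance fit.
env = EllipticEnvelope(contamination=0.05, support_fraction=0.9, random_state=0)
mask = env.fit_predict(X_train) == 1          # 1 = inlier, -1 = outlier
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[mask], y_train[mask])

X_test = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y_test = np.concatenate([np.zeros(50), np.ones(50)])
acc = knn.score(X_test, y_test)
```

Note that `EllipticEnvelope` assumes roughly elliptical data; setting a large `support_fraction` here keeps both class clusters inside the fit so only the planted points look extreme.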
APA, Harvard, Vancouver, ISO, and other styles
31

Amer, Abdullah Yahya Abdullah, and Tamanna Siddiqu. "A novel algorithm for sarcasm detection using supervised machine learning approach." AIMS Electronics and Electrical Engineering 6, no. 4 (2022): 345–69. http://dx.doi.org/10.3934/electreng.2022021.

Full text
Abstract:
Sarcasm means expressing the opposite of what you intend, particularly to insult a person. Sarcasm detection in social networks (SNs) such as Twitter is a significant task, as it supports the study of tweets using NLP. Many existing methods focus only on content-based features of sarcastic words, leaving lexical-based and context-based features aside, which loses the semantics of terms in a sarcastic expression. This study proposes an improved model to detect sarcasm in SNs, engineering three feature sets: context-based features, sarcastic features, and lexical features. The model uses two novel algorithms in two stages: the first performs preprocessing, and the second handles the feature sets. Various supervised machine learning (ML) classifiers, namely the k-nearest neighbor (KNN), naïve Bayes (NB), support vector machine (SVM), and Random Forest (RF) classifiers, were applied with a TF-IDF feature extraction representation of the data. Model performance was evaluated using precision, accuracy, recall, and F1 score. The lexical features with KNN achieved 89.19% accuracy, higher than the other classifiers. Combining two feature sets (sarcastic and lexical) gave a slight improvement with the same KNN classifier, reaching 90.00% accuracy; combining all three feature sets (sarcastic, lexical, and context) gave a further slight improvement to 90.51%, again with KNN. The model was evaluated with the feature sets individually, in pairs, and all together; combining all feature sets achieved the best accuracy with the KNN classifier.
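The TF-IDF plus KNN combination the abstract reports can be sketched as a scikit-learn pipeline. The tiny corpus below is invented for illustration and is not the paper's dataset, and the n-gram range and k are assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus (invented examples).
texts = ["oh great, another monday meeting",
         "wow, i just love being stuck in traffic",
         "sure, because waiting in line is so much fun",
         "fantastic, my flight got delayed again",
         "the meeting is scheduled for monday",
         "traffic was heavy on the highway today",
         "we waited in line for twenty minutes",
         "my flight departs at nine tomorrow"]
labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = sarcastic, 0 = literal

# TF-IDF representation feeding a KNN classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(texts, labels)
pred = model.predict(["oh wonderful, the printer is jammed again"])
```

The same pipeline shape accommodates the paper's other classifiers (NB, SVM, RF) by swapping the final step.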
APA, Harvard, Vancouver, ISO, and other styles
32

Sharma, Krishna Gopal, and Yashpal Singh. "Predicting Intrusion in a Network Traffic Using Variance of Neighboring Object’s Distance." International Journal of Computer Network and Information Security 15, no. 2 (April 8, 2023): 73–84. http://dx.doi.org/10.5815/ijcnis.2023.02.06.

Full text
Abstract:
Activities in network traffic can be broadly classified into two categories: normal and malicious. Malicious activities are harmful, and their detection is necessary for security reasons. The intrusion detection process monitors network traffic to identify malicious activities in the system. Any algorithm that divides objects into two categories, such as good or bad, is a binary class predictor or binary classifier. In this paper, we utilized the Nearest Neighbor Distance Variance (NNDV) classifier for the prediction of intrusion. NNDV is a binary class predictor that uses the concept of variance on the distances between objects. We used the KDD CUP 99 dataset to evaluate NNDV and compared its predictive accuracy with that of the KNN or K Nearest Neighbor classifier. KNN is an efficient general-purpose classifier, but we considered only its binary aspect. The results show that NNDV is comparable to KNN, and in many cases its performance is better. We experimented with normalized and unnormalized data for NNDV and found that accuracy is generally better with normalized data. We also compared the accuracy of different cross-validation techniques, namely 2-fold, 5-fold, 10-fold, and leave-one-out, for NNDV on the KDD CUP 99 dataset. Cross-validation results can help in determining the parameters of the algorithm.
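The normalized-versus-raw and k-fold comparisons the abstract describes can be sketched for a KNN baseline as follows. KDD CUP 99 requires a download, so a bundled binary-class dataset stands in here, and the scaler and k values are assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import MinMaxScaler

# Stand-in binary classification data (KDD CUP 99 is not bundled).
X, y = load_breast_cancer(return_X_y=True)
X_norm = MinMaxScaler().fit_transform(X)   # normalised variant

knn = KNeighborsClassifier(n_neighbors=5)
scores = {}
for k in (2, 5, 10):                       # the k-fold settings compared
    cv = KFold(n_splits=k, shuffle=True, random_state=0)
    scores[f"{k}-fold"] = cross_val_score(knn, X_norm, y, cv=cv).mean()

# Unnormalised features for comparison.
raw_acc = cross_val_score(knn, X, y, cv=10).mean()
```

For a production experiment the scaler would be fit inside each fold rather than on the full dataset; it is fit once here only to keep the sketch short.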
APA, Harvard, Vancouver, ISO, and other styles
33

Rezaeijo, Seyed Masoud, Razzagh Abedi-Firouzjah, Mohammadreza Ghorvei, and Samad Sarnameh. "Screening of COVID-19 based on the extracted radiomics features from chest CT images." Journal of X-Ray Science and Technology 29, no. 2 (March 11, 2021): 229–43. http://dx.doi.org/10.3233/xst-200831.

Full text
Abstract:
BACKGROUND AND OBJECTIVE: Radiomics has been widely used in quantitative analysis of medical images for disease diagnosis and prognosis assessment. The objective of this study is to test a machine-learning (ML) method based on radiomics features extracted from chest CT images for screening COVID-19 cases. METHODS: The study is carried out on two groups of patients, including 138 patients with confirmed and 140 patients with suspected COVID-19. We focus on distinguishing pneumonia caused by COVID-19 from the suspected cases by segmentation of the whole lung volume and extraction of 86 radiomics features. Following feature extraction, nine feature-selection procedures are used to identify valuable features. Then, ten ML classifiers are applied to classify and predict COVID-19 cases. Each ML model is trained and tested using a ten-fold cross-validation method. The predictive performance of each ML model is evaluated using the area under the curve (AUC) and accuracy. RESULTS: The range of accuracy and AUC is from 0.32 (recursive feature elimination [RFE]+Multinomial Naive Bayes [MNB] classifier) to 0.984 (RFE+bagging [BAG], RFE+decision tree [DT] classifiers) and 0.27 (mutual information [MI]+MNB classifier) to 0.997 (RFE+k-nearest neighborhood [KNN] classifier), respectively. There is no direct correlation among the number of selected features, accuracy, and AUC; however, as the number of selected features changes, the accuracy and AUC values change as well. The RFE+BAG and RFE+DT procedures achieve the highest prediction accuracy (0.984), followed by the MI+Gaussian Naive Bayes (GNB) and logistic regression (LGR)+DT classifiers (0.976). The RFE+KNN procedure achieves the highest AUC (0.997), followed by RFE+BAG (0.991) and RFE+gradient boosting decision tree (GBDT) (0.99).
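An RFE+KNN procedure of the kind reported here can be sketched with scikit-learn. Since RFE needs a ranking model with coefficients (which KNN does not provide), a linear model drives the elimination and the selected subset feeds the KNN classifier; the stand-in dataset, the linear ranker, and the subset size are assumptions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the 86 radiomics features (the CT data is not public here).
X, y = load_breast_cancer(return_X_y=True)

# RFE ranks features via a linear model, then KNN classifies the survivors.
model = make_pipeline(
    StandardScaler(),
    RFE(LogisticRegression(max_iter=2000), n_features_to_select=10),
    KNeighborsClassifier(n_neighbors=5),
)
auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
```

Putting the selector inside the pipeline keeps the ten-fold cross-validation honest: features are re-selected on each training fold.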
CONCLUSION: This study demonstrates that the ML model based on RFE+KNN classifier achieves the highest performance to differentiate patients with a confirmed infection caused by COVID-19 from the suspected cases.
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Yuchun. "Handwritten Digit Recognition Using K Nearest-Neighbor, Radial-Basis Function, and Backpropagation Neural Networks." Neural Computation 3, no. 3 (September 1991): 440–49. http://dx.doi.org/10.1162/neco.1991.3.3.440.

Full text
Abstract:
Results of recent research suggest that carefully designed multilayer neural networks with local “receptive fields” and shared weights may be unique in providing low error rates on handwritten digit recognition tasks. This study, however, demonstrates that these networks, radial basis function (RBF) networks, and k nearest-neighbor (kNN) classifiers all provide similar low error rates on a large handwritten digit database. The backpropagation network is overall superior in memory usage and classification time but can provide “false positive” classifications when the input is not a digit. The backpropagation network also has the longest training time. The RBF classifier requires more memory and more classification time, but less training time. When high accuracy is warranted, the RBF classifier can generate a more effective confidence judgment for rejecting ambiguous inputs. The simple kNN classifier can also perform handwritten digit recognition, but requires a prohibitively large amount of memory and is much slower at classification. Nevertheless, the simplicity of the algorithm and fast training characteristics make the kNN classifier an attractive candidate in hardware-assisted classification tasks. These results on a large, high input dimensional problem demonstrate that practical constraints including training time, memory usage, and classification time often constrain classifier selection more strongly than small differences in overall error rate.
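The trade-off described here, that kNN trains almost instantly because "training" is just storing the data while classification does all the distance work, can be demonstrated on a small bundled digit set. The 8x8 scikit-learn digits stand in for the large database the paper used:

```python
import time

from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)          # 1797 8x8 digit images
X_tr, y_tr, X_te, y_te = X[:1000], y[:1000], X[1000:], y[1000:]

knn = KNeighborsClassifier(n_neighbors=3)

t0 = time.perf_counter()
knn.fit(X_tr, y_tr)                          # "training" = storing the data
train_time = time.perf_counter() - t0

t0 = time.perf_counter()
acc = knn.score(X_te, y_te)                  # the distance work happens here
test_time = time.perf_counter() - t0
```

The memory cost is equally direct: the fitted model retains all 1000 training vectors, which is the "prohibitively large amount of memory" the abstract refers to at larger scales.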
APA, Harvard, Vancouver, ISO, and other styles
35

Ajao, Jumoke Falilat, David Olufemi Olawuyi, and Odetunji Ode Odejobi. "Yoruba Handwritten Character Recognition using Freeman Chain Code and K-Nearest Neighbor Classifier." Jurnal Teknologi dan Sistem Komputer 6, no. 4 (October 31, 2018): 129–34. http://dx.doi.org/10.14710/jtsiskom.6.4.2018.129-134.

Full text
Abstract:
This work presents a recognition system for offline Yoruba character recognition using the Freeman chain code and K-Nearest Neighbor (KNN). Most Latin word and character recognition systems have used the k-nearest neighbor classifier and other classification algorithms; this research explores the same recognition capability on Yoruba characters. Data were collected from adult indigenous writers, and the scanned images were subjected to preprocessing to enhance the quality of the digitized images. The Freeman chain code was used to extract the features of the digitized images, and KNN was used to classify the characters based on the feature space. The performance of KNN was compared with other classification algorithms, Support Vector Machine (SVM) and the Bayes classifier, for recognition of Yoruba characters. The recognition accuracy of the KNN classification algorithm with the Freeman chain code is 87.7%, which outperformed the other classifiers used on Yoruba characters.
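The Freeman chain code encodes a traced boundary as a sequence of 8-connected direction numbers; a minimal sketch follows (the square boundary and the histogram feature are illustrative assumptions, not the paper's data):

```python
# 8-connected Freeman directions: 0=E, 1=NE, 2=N, ..., 7=SE, as (dx, dy)
# with y growing downward in image coordinates.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def freeman_chain_code(points):
    """Encode an ordered boundary point list as a Freeman chain code."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        code.append(DIRS.index((x1 - x0, y1 - y0)))
    return code

# Boundary of a 2x2 square traced clockwise, returning to the start.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
          (1, 2), (0, 2), (0, 1), (0, 0)]
code = freeman_chain_code(square)

# A direction histogram is one simple fixed-length feature for a KNN classifier.
hist = [code.count(d) for d in range(8)]
```

The histogram step matters because KNN needs equal-length feature vectors, while raw chain codes vary in length from character to character.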
APA, Harvard, Vancouver, ISO, and other styles
36

Dar, Basra Farooq, Malik Sajjad Ahmed Nadeem, Samina Khalid, Farzana Riaz, Yasir Mahmood, and Ghias Hameed. "Improving the Classification Ability of Delegating Classifiers Using Different Supervised Machine Learning Algorithms." Computer and Information Science 16, no. 3 (August 23, 2023): 22. http://dx.doi.org/10.5539/cis.v16n3p22.

Full text
Abstract:
Cancer classification and prediction is vitally important for cancer diagnosis. DNA microarray technology has provided genetic data that has helped scientists perform cancer research. Traditional methods of classification have certain limitations; for example, a proposed DSS traditionally uses a single classifier at a time for classification or prediction. To increase classification accuracy, reject option classifiers have been proposed in the machine learning literature. In a reject option classifier, a rejection region is defined, and samples that fall in that region are not classified; these unclassifiable samples are rejected in order to improve the classifier's accuracy. However, the rejections also lower the prediction rate of the classifier. To overcome the low prediction rates caused by a single classifier rejecting many samples, "delegating classifiers" provide better accuracy at a lower rejection rate. Different classifiers, such as Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and K Nearest Neighbor (KNN), have been proposed. Moreover, the same learner is traditionally used as both the "delegated" and the "delegating" classifier. This research investigated the effects of using different machine learning classifiers in the delegated and delegating roles, and the results showed that the proposed method gives high accuracy and increases the prediction rate.
APA, Harvard, Vancouver, ISO, and other styles
37

Mahanya, G. B., and S. Nithyaselvakumari. "Analysis And Comparison Of Ventricular Cardiac Arrhythmia Classification Using Potassium Channel Parameters With ANN And KNN Classifier." CARDIOMETRY, no. 25 (February 14, 2023): 926–33. http://dx.doi.org/10.18137/cardiometry.2022.25.926933.

Full text
Abstract:
Aim: The motive of this research is to analyze and compare ventricular cardiac arrhythmia (CA) classification using potassium channel (K+) parameters with Artificial Neural Network (ANN) and K-Nearest Neighbor (KNN) classifiers. Materials and Methods: The D Noble model for human ventricular tissue (DNFHVT) is used for the classification. DNFHVT is a mathematical model of the action potential focusing on major ionic currents such as K+, Na+ and Ca+. The sample size was calculated keeping the threshold at 0.05, G power at 80%, confidence interval at 95% and enrolment ratio at 1; 20 samples are considered. These data are imported into the KNN and ANN classifiers to find which gives better accuracy. The accuracy of the ANN and KNN classifiers over the 20 samples is obtained by alternating the cross-fold validation. These results are imported into Statistical Package for the Social Sciences (SPSS) software to identify the overall accuracy of each classifier. Results: ANN shows an accuracy of 13.14% with standard deviation 1.6800 and standard error mean 0.3757; KNN produces an accuracy of 7.19% with standard deviation 1.6902 and standard error mean 0.377. Conclusion: The results clearly show that ANN has better classification accuracy than KNN.
APA, Harvard, Vancouver, ISO, and other styles
38

Bayraktar, Rabia, Batur Alp Akgul, and Kadir Sercan Bayram. "Colour recognition using colour histogram feature extraction and K-nearest neighbour classifier." New Trends and Issues Proceedings on Advances in Pure and Applied Sciences, no. 12 (April 30, 2020): 08–14. http://dx.doi.org/10.18844/gjpaas.v0i12.4981.

Full text
Abstract:
K-nearest neighbours (KNN) is a widely used machine learning classification algorithm. Recently, it has been used in the neural network and digital image processing fields. In this study, the KNN classifier is used to distinguish 12 colours: black, blue, brown, forest green, green, navy, orange, pink, red, violet, white and yellow. Colour histogram feature extraction, one of the image processing techniques, determines the features that distinguish these colours; these features increase the effectiveness of the KNN classifier. The training data consist of saved frames, and the test data are obtained from the video camera in real time. The video consists of consecutive frames of size 100 × 70. Each frame is tested with K = 3, 5, 7 and 9, and the obtained results are recorded. In general, the best results are obtained with K = 5. Keywords: KNN algorithm, classifier, application, neural network, image processing, developed, colour, dataset, colour recognition.
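The histogram-feature-plus-KNN pipeline can be sketched on synthetic frames. The frame generator, the 8-bin histograms, and the three-colour subset are assumptions made to keep the example self-contained:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def colour_histogram(frame, bins=8):
    """Concatenated per-channel histogram of an RGB frame (values 0-255)."""
    return np.concatenate([
        np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float) / frame[..., 0].size

rng = np.random.default_rng(0)

def make_frame(rgb, noise=20):
    """A 100x70 frame of roughly one colour, matching the paper's frame size."""
    base = np.tile(np.array(rgb, float), (70, 100, 1))
    return np.clip(base + rng.normal(0, noise, base.shape), 0, 255)

colours = {"red": (220, 30, 30), "green": (30, 200, 30), "blue": (30, 30, 220)}
X = [colour_histogram(make_frame(c)) for c in colours.values() for _ in range(10)]
y = [name for name in colours for _ in range(10)]

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pred = knn.predict([colour_histogram(make_frame((200, 40, 40)))])[0]
```

The histogram collapses each 100 x 70 x 3 frame into a 24-value vector, which is what makes real-time per-frame KNN classification practical.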
APA, Harvard, Vancouver, ISO, and other styles
39

Gomes, Eduardo, Luciano Bertini, Wagner Rangel Campos, Ana Paula Sobral, Izabela Mocaiber, and Alessandro Copetti. "Machine Learning Algorithms for Activity-Intensity Recognition Using Accelerometer Data." Sensors 21, no. 4 (February 9, 2021): 1214. http://dx.doi.org/10.3390/s21041214.

Full text
Abstract:
In pervasive healthcare monitoring, activity recognition is critical information for adequate management of the patient. Despite the great number of studies on this topic, a contextually relevant parameter that has received less attention is intensity recognition. In the present study, we investigated the potential advantage of coupling activity and intensity, namely Activity-Intensity, in accelerometer data to improve the description of the daily activities of individuals. We further tested two alternatives for supervised classification: in the first, activity and intensity are inferred together by a single classifier; in the second, activity and intensity are classified separately. In both cases, the algorithms used for classification are k-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Random Forest (RF). The results showed the viability of the classification with good accuracy for Activity-Intensity recognition. The best single-classifier approach was KNN, with 79% accuracy. Using two classifiers, the results were 97% accuracy for activity recognition (Random Forest) and 80% for intensity recognition (KNN), giving 78% for the coupled Activity-Intensity. These findings have potential applications in improving the contextualized evaluation of movement by health professionals in the form of a decision system with expert rules.
APA, Harvard, Vancouver, ISO, and other styles
40

Marianingsih, Susi, Widodo Widodo, Marla Sheilamita S. Pieter, Evanita Veronica Manullang, and Hendry Y. Nanlohy. "Machine Vision for the Various Road Surface Type Classification Based on Texture Feature." Journal of Mechanical Engineering Science and Technology (JMEST) 6, no. 1 (July 19, 2022): 40. http://dx.doi.org/10.17977/um016v6i12022p040.

Full text
Abstract:
The automated ability to identify the road surface type is key information for autonomous transportation machine navigation, for example in wheelchairs and smart cars. In the present work, features are extracted from the object based on structure and surface evidence using the Gray Level Co-occurrence Matrix (GLCM). A K-Nearest Neighbor (K-NN) classifier was then built to classify road surface images into three classes: asphalt, gravel, and pavement. A comparison of KNN and Naïve Bayes (NB) was performed in the present study. We constructed a road image dataset of 450 samples from real-world images of asphalt, gravel, and pavement roads. Experimental results show that the classification accuracy of the K-NN classifier is 78%, better than the Naïve Bayes classifier's 72%. The pavement class has the lowest accuracy under both classifier methods. The two classifiers have nearly the same computing time: 3.459 seconds for the KNN classifier and 3.464 seconds for the Naive Bayes classifier.
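GLCM texture features of the kind used here can be computed directly: count co-occurring grey-level pairs at a fixed pixel offset, then summarize the normalised matrix with classic statistics. This is a minimal sketch on synthetic textures; the offset, quantisation levels, and chosen statistics are assumptions:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset, normalised."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def texture_features(img):
    """Classic GLCM statistics: contrast, homogeneity, energy."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + abs(i - j))).sum()
    energy = (p ** 2).sum()
    return np.array([contrast, homogeneity, energy])

rng = np.random.default_rng(0)
smooth = rng.normal(0.5, 0.02, (32, 32))   # "asphalt-like": uniform surface
rough = rng.uniform(0.0, 1.0, (32, 32))    # "gravel-like": high variation
f_smooth, f_rough = texture_features(smooth), texture_features(rough)
```

The resulting feature vectors would then be fed to a KNN or Naïve Bayes classifier exactly as the abstract describes; rough textures score higher contrast, smooth textures higher homogeneity.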
APA, Harvard, Vancouver, ISO, and other styles
41

Gupta, Shweta, and . "Classification of Heart Disease Hungarian Data Using Entropy, Knnga Based Classifier and Optimizer." International Journal of Engineering & Technology 7, no. 4.5 (September 22, 2018): 292. http://dx.doi.org/10.14419/ijet.v7i4.5.20092.

Full text
Abstract:
Data mining plays an imperative role in mining useful information from massive medical databases. In data mining, classification (supervised learning) can be used to design a model describing significant data classes, where the class attribute is involved in the construction of the classifier. In this work, we propose a methodology that uses the KNN classifier, a simple, popular, and efficient algorithm for pattern recognition. Samples from medical databases, which are massive in nature and contain irrelevant and redundant attributes, are classified on the basis of nearest neighbors. Since the KNN classifier alone produces less accurate results, we use a hybrid approach of KNN and a genetic algorithm (GA) to obtain more accurate results. To evaluate the performance of the proposed approach, the Hungarian dataset (UCI repository) is used to classify the attributes of heart disease. The genetic algorithm performs a global search on complex, large, and multimodal landscapes to provide near-minimal solutions in the search space. The experimental accuracy of the proposed approach is higher and more efficient than that of the existing approach.
APA, Harvard, Vancouver, ISO, and other styles
42

Dhiman, Akashdeep, and Dinesh Kumar. "Sentiment Analysis Approach based N-gram and KNN Classifier." International Journal of Computer Applications 182, no. 4 (July 16, 2018): 29–32. http://dx.doi.org/10.5120/ijca2018917513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Tripathi, Garima. "SENTIMENT ANALYSIS APPROACH BASED N-GRAM AND KNN CLASSIFIER." International Journal of Advanced Research in Computer Science 9, no. 3 (June 20, 2018): 209–12. http://dx.doi.org/10.26483/ijarcs.v9i3.5976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Poongothai, E., and A. Suruliandi. "Person re-identification using kNN classifier-based fusion approach." International Journal of Advanced Intelligence Paradigms 16, no. 2 (2020): 113. http://dx.doi.org/10.1504/ijaip.2020.10027874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Poongothai, E., and A. Suruliandi. "Person re-identification using kNN classifier-based fusion approach." International Journal of Advanced Intelligence Paradigms 16, no. 2 (2020): 113. http://dx.doi.org/10.1504/ijaip.2020.107009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bhavani, R. R., and G. Wiselin Jiji. "Image registration for varicose ulcer classification using KNN classifier." International Journal of Computers and Applications 40, no. 2 (October 31, 2017): 88–97. http://dx.doi.org/10.1080/1206212x.2017.1395108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

SINGH, SINAM AJITKUMAR, and SWANIRBHAR MAJUMDER. "CLASSIFICATION OF UNSEGMENTED HEART SOUND RECORDING USING KNN CLASSIFIER." Journal of Mechanics in Medicine and Biology 19, no. 04 (June 2019): 1950025. http://dx.doi.org/10.1142/s0219519419500258.

Full text
Abstract:
Due to low physical workout, high-calorie intake, and poor behavioral character, people are affected by cardiological disorders; at every instant, one out of four deaths is due to heart-related ailments, so early diagnosis of the heart is essential. Most approaches for automated classification of heart sound need segmentation of the phonocardiogram (PCG) signal. The main aim of this study was to avoid the segmentation process and to estimate the utility of short unsegmented PCG recordings for accurate and detailed classification. Based on wavelet decomposition, the Hilbert transform, homomorphic filtering, and power spectral density (PSD), features were obtained from the first 5 seconds of each PCG recording. The extracted features were classified using nearest neighbors with Euclidean distances for different values of [Formula: see text] by bootstrapping 50% of the PCG recordings for training and 50% for testing over 100 iterations. Overall accuracies of 100%, 85%, 80.95%, 81.4%, and 98.13% were achieved for five different datasets using KNN classifiers. The classification performance over the whole set of datasets is 90% accuracy with 93% sensitivity and 90% specificity. Classification of unsegmented PCG recordings based on efficient feature extraction is necessary; this paper presents a promising classification performance with less complexity and shorter time compared with state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
48

Wasnik, Sakshi. "Detection of Cancerous Nodule in Lung Using KNN Classifier." HELIX 9, no. 6 (December 31, 2019): 5779–83. http://dx.doi.org/10.29042/2019-5779-5783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kosasih, Rifki. "Face Recognition Using Isomap, KNN and Naïve Bayes Classifier." CogITo Smart Journal 9, no. 1 (June 30, 2023): 38–47. http://dx.doi.org/10.31154/cogito.v9i1.473.38-47.

Full text
Abstract:
A face recognition system is a system that can recognize a person's face with the help of a computer. To recognize a face, facial features are extracted first. In this study, the Isomap method is used to extract facial features; Isomap is a method that can reduce high-dimensional images to low-dimensional features. The data used are face images obtained from 6 people, each with 4 variations of facial expression. After the facial features are extracted, classification is performed using the K Nearest Neighbor (KNN) method and the Naive Bayes Classifier. Based on the results, the KNN method achieves its best accuracy with K = 2 neighbors: 87.5% accuracy, 81.25% weighted average precision, and 87.5% weighted average recall. The Naive Bayes Classifier achieves 50% accuracy, 62% weighted average precision, and 50% weighted average recall.
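The Isomap-then-classify pipeline can be sketched with scikit-learn. The face dataset is not bundled, so the small digits set stands in for it, and the neighbourhood size and component count are assumptions:

```python
from sklearn.datasets import load_digits
from sklearn.manifold import Isomap
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Digits as a stand-in for the 6-person face images (same pipeline shape).
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

# Isomap: non-linear reduction of high-dimensional images to few features.
iso = Isomap(n_neighbors=10, n_components=10).fit(X_tr)
Z_tr, Z_te = iso.transform(X_tr), iso.transform(X_te)

knn_acc = KNeighborsClassifier(n_neighbors=2).fit(Z_tr, y_tr).score(Z_te, y_te)
nb_acc = GaussianNB().fit(Z_tr, y_tr).score(Z_te, y_te)
```

Fitting Isomap on the training split and transforming the test split mirrors the study's setup of extracting features once and then comparing the two classifiers on them.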
APA, Harvard, Vancouver, ISO, and other styles
50

McHUGH, E. S., A. P. SHINN, and J. W. KAY. "Discrimination of the notifiable pathogen Gyrodactylus salaris from G. thymalli (Monogenea) using statistical classifiers applied to morphometric data." Parasitology 121, no. 3 (September 2000): 315–23. http://dx.doi.org/10.1017/s0031182099006381.

Full text
Abstract:
The identification and discrimination of 2 closely related and morphologically similar species of Gyrodactylus, G. salaris and G. thymalli, were assessed using the statistical classification methodologies Linear Discriminant Analysis (LDA) and k-Nearest Neighbours (KNN). These statistical methods were applied to morphometric measurements made on the gyrodactylid attachment hooks. The mean estimated classification percentages of correctly identifying each species were 98·1% (LDA) and 97·9% (KNN) for G. salaris and 99·9% (LDA) and 73·2% (KNN) for G. thymalli. The analysis was expanded to include another 2 closely related species and the new classification efficiencies were 94·6% (LDA) and 98·0% (KNN) for G. salaris; 98·2% (LDA) and 72·6% (KNN) for G. thymalli; 86·7% (LDA) and 91·8% (KNN) for G. derjavini; and 76·5% (LDA) and 77·7% (KNN) for G. truttae. The higher correct classification scores of G. salaris and G. thymalli by the LDA classifier in the 2-species analysis over the 4-species analysis suggested the development of a 2-stage classifier. The mean estimated correct classification scores were 99·97% (LDA) and 99·99% (KNN) for the G. salaris–G. thymalli pairing and 99·4% (LDA) and 99·92% (KNN) for the G. derjavini–G. truttae pairing. Assessment of the 2-stage classifier using only marginal hook data was very good with classification efficiencies of 100% (LDA) and 99·6% (KNN) for the G. salaris–G. thymalli pairing and 97·2% (LDA) and 99·2% (KNN) for the G. derjavini–G. truttae pairing. Paired species were then discriminated individually in the second stage of the classifier using data from the full set of hooks. These analyses demonstrate that using the methods of LDA and KNN statistical classification, the discrimination of closely related and pathogenic species of Gyrodactylus may be achieved using data derived from light microscope studies.
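The core LDA-versus-KNN comparison this study performs on morphometric measurements can be sketched generically; the bundled iris measurements stand in for the gyrodactylid hook data, and the fold count and k are assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Iris: 4 morphometric measurements per specimen, 3 species, as a stand-in.
X, y = load_iris(return_X_y=True)

lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10).mean()
knn_acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10).mean()
```

The study's two-stage variant would simply apply this comparison twice: once to assign a specimen to a species pair, then again within the pair using the full hook measurement set.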
APA, Harvard, Vancouver, ISO, and other styles