Scientific literature on the topic "KNN classification"

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles

Select a source:

Consult the topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "KNN classification."

Next to each source in the reference list there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and consult its abstract online when this information is included in the metadata.

Journal articles on the topic "KNN classification"

1

Gweon, Hyukjun, Matthias Schonlau, and Stefan H. Steiner. "The k conditional nearest neighbor algorithm for classification and class probability estimation." PeerJ Computer Science 5 (May 13, 2019): e194. http://dx.doi.org/10.7717/peerj-cs.194.

Abstract:
The k nearest neighbor (kNN) approach is a simple and effective nonparametric algorithm for classification. One of the drawbacks of kNN is that the method can only give coarse estimates of class probabilities, particularly for low values of k. To avoid this drawback, we propose a new nonparametric classification method based on nearest neighbors conditional on each class: the proposed approach calculates the distance between a new instance and the kth nearest neighbor from each class, estimates posterior probabilities of class memberships using the distances, and assigns the instance to the class with the largest posterior. We prove that the proposed approach converges to the Bayes classifier as the size of the training data increases. Further, we extend the proposed approach to an ensemble method. Experiments on benchmark data sets show that both the proposed approach and the ensemble version of the proposed approach on average outperform kNN, weighted kNN, probabilistic kNN and two similar algorithms (LMkNN and MLM-kHNN) in terms of the error rate. A simulation shows that kCNN may be useful for estimating posterior probabilities when the class distributions overlap.
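The conditional-neighbor procedure in the abstract (measure the distance from a new instance to the k-th nearest neighbor within each class, then turn those distances into posterior estimates) can be sketched as follows. The inverse-distance normalization that maps distances to probabilities is an illustrative assumption, not necessarily the paper's estimator:

```python
import math

def kcnn_posteriors(x, X, y, k=2):
    """Sketch of k conditional NN: for each class, find the distance from x
    to its k-th nearest neighbor *within that class*, then convert those
    distances into class scores (smaller distance -> larger posterior)."""
    classes = sorted(set(y))
    kth_dist = {}
    for c in classes:
        d = sorted(math.dist(x, xi) for xi, yi in zip(X, y) if yi == c)
        kth_dist[c] = d[k - 1]  # distance to the k-th nearest same-class point
    # Illustrative choice: inverse-distance normalization (eps avoids div by 0).
    inv = {c: 1.0 / (kth_dist[c] + 1e-12) for c in classes}
    z = sum(inv.values())
    return {c: inv[c] / z for c in classes}

X = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (3.0, 3.0), (3.1, 3.0), (2.9, 3.2)]
y = [0, 0, 0, 1, 1, 1]
posteriors = kcnn_posteriors((0.1, 0.1), X, y, k=2)
prediction = max(posteriors, key=posteriors.get)
```

Unlike plain kNN with small k, the scores vary smoothly with the distances, so the posterior estimates are not restricted to coarse fractions of k.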
2

Zhang, Shichao. "Cost-sensitive KNN classification." Neurocomputing 391 (May 2020): 234–42. http://dx.doi.org/10.1016/j.neucom.2018.11.101.
3

Zhao, Puning, and Lifeng Lai. "Efficient Classification with Adaptive KNN." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (May 18, 2021): 11007–14. http://dx.doi.org/10.1609/aaai.v35i12.17314.

Abstract:
In this paper, we propose an adaptive kNN method for classification, in which different k are selected for different test samples. Our selection rule is easy to implement since it is completely adaptive and does not require any knowledge of the underlying distribution. The convergence rate of the risk of this classifier to the Bayes risk is shown to be minimax optimal for various settings. Moreover, under some special assumptions, the convergence rate is especially fast and does not decay with the increase of dimensionality.
4

Zhang, Shichao, Xuelong Li, Ming Zong, Xiaofeng Zhu, and Debo Cheng. "Learning k for kNN Classification." ACM Transactions on Intelligent Systems and Technology 8, no. 3 (April 22, 2017): 1–19. http://dx.doi.org/10.1145/2990508.
5

Khairina, Nurul, Theofil Tri Saputra Sibarani, Rizki Muliono, Zulfikar Sembiring, and Muhathir Muhathir. "Identification of Pneumonia using The K-Nearest Neighbors Method using HOG Fitur Feature Extraction." JOURNAL OF INFORMATICS AND TELECOMMUNICATION ENGINEERING 5, no. 2 (January 26, 2022): 562–68. http://dx.doi.org/10.31289/jite.v5i2.6216.

Abstract:
Pneumonia is a wet lung disease, generally caused by viruses, bacteria, or fungi, and it can be fatal. The K-Nearest Neighbors method is a classification method that assigns the majority class among the k closest training samples. People are often not worried about pneumonia because its symptoms resemble an ordinary cough, yet fast and accurate information from health experts is necessary so that symptoms can be recognized early and handled sooner. In this study, the researchers diagnose pneumonia to obtain information quickly about its symptoms, adopting human knowledge into a computer designed to solve the identification problem. The K-Nearest Neighbors method is combined with HOG feature extraction to identify pneumonia more accurately. The KNN classifiers used are Fine KNN, Cosine KNN, and Cubic KNN, evaluated by accuracy, precision, recall, and F1-score. The results show that classification runs well with all three methods: Fine KNN reaches an accuracy of 80.67, Cosine KNN 84.93333, and Cubic KNN 83.13333. Fine KNN has precision, recall, and F1-score values of 0.794842, 0.923706, and 0.854442; Cosine KNN has 0.803048, 0.954039, and 0.872056; Cubic KNN has 0.73388, 0.964561, and 0.833555. From the test results, positive and negative identification of pneumonia was most accurate with the Cosine KNN classifier, which reached 84.93333.
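As a rough illustration of the HOG-plus-KNN pipeline described above, the sketch below computes a heavily simplified orientation histogram (a single histogram over the whole image, with no cells or block normalization, unlike real HOG) and feeds it to a nearest-neighbor classifier. All data and names here are invented for the example:

```python
import math

def hog_descriptor(img, bins=4):
    """Toy HOG-style descriptor: finite-difference gradients accumulated
    into unsigned-orientation bins, L1-normalized. Illustrative only."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = img[i][j + 1] - img[i][j - 1]
            gy = img[i + 1][j] - img[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.atan2(gy, gx) % math.pi          # unsigned orientation
            hist[min(int(ang / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def knn_predict(x, X, y, k=1):
    """Plain majority-vote kNN on feature vectors."""
    nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

vertical = [[float(j) for j in range(6)] for i in range(6)]    # ramp along x: vertical edges
horizontal = [[float(i) for j in range(6)] for i in range(6)]  # ramp along y: horizontal edges
train_X = [hog_descriptor(vertical), hog_descriptor(horizontal)]
train_y = ["vertical", "horizontal"]
test_img = [[2.0 * j for j in range(6)] for i in range(6)]     # brighter, same orientation
prediction = knn_predict(hog_descriptor(test_img), train_X, train_y, k=1)
```

The orientation histogram is invariant to the brightness scaling, which is why the rescaled test image still lands next to the vertical-edge training example.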
6

Raeisi Shahraki, Hadi, Saeedeh Pourahmad, and Najaf Zare. "K Important Neighbors: A Novel Approach to Binary Classification in High Dimensional Data." BioMed Research International 2017 (2017): 1–9. http://dx.doi.org/10.1155/2017/7560807.

Abstract:
K nearest neighbors (KNN) is known as one of the simplest nonparametric classifiers, but in high dimensional settings the accuracy of KNN is affected by nuisance features. In this study, we proposed the K important neighbors (KIN) as a novel approach for binary classification in high dimensional problems. To avoid the curse of dimensionality, we implemented smoothly clipped absolute deviation (SCAD) logistic regression at the initial stage and considered the importance of each feature in the construction of the dissimilarity measure by imposing feature contributions, as a function of the SCAD coefficients, on the Euclidean distance. The nature of this hybrid dissimilarity measure, which combines information from both features and distances, enjoys the good properties of SCAD penalized regression and KNN simultaneously. In comparison to KNN, simulation studies showed that KIN has good performance in terms of both accuracy and dimension reduction. The proposed approach was found to be capable of eliminating nearly all of the noninformative features because it exploits the oracle property of SCAD penalized regression in the construction of the dissimilarity measure. In very sparse settings, KIN also outperforms support vector machine (SVM) and random forest (RF), the best competing classifiers.
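The abstract's idea of folding feature importance into the dissimilarity measure can be illustrated as below. Weighting each feature's squared difference by the absolute value of a given coefficient is a stand-in: the paper derives its weights from SCAD-penalized logistic regression, whose exact functional form is not reproduced here, and the data are invented:

```python
import math

def weighted_dist(a, b, beta):
    """Dissimilarity weighted by feature importance: a feature whose
    coefficient was shrunk to zero contributes nothing to the distance."""
    return math.sqrt(sum(abs(w) * (u - v) ** 2 for u, v, w in zip(a, b, beta)))

def kin_predict(x, X, y, beta, k=1):
    """kNN under the importance-weighted dissimilarity."""
    nearest = sorted(zip(X, y), key=lambda p: weighted_dist(x, p[0], beta))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

# Feature 0 is informative; feature 1 is pure noise that misleads plain kNN.
X = [(0.0, 5.0), (0.1, -4.0), (1.0, 0.2), (0.9, -0.1)]
y = [0, 0, 1, 1]
x = (0.05, 0.0)
pred_weighted = kin_predict(x, X, y, beta=(1.0, 0.0), k=1)  # noise feature eliminated
pred_plain = kin_predict(x, X, y, beta=(1.0, 1.0), k=1)     # ordinary Euclidean distance
```

With the noise feature zeroed out the query is correctly matched to class 0, while the unweighted distance is dominated by the nuisance dimension and picks class 1.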
7

Yang, Zhida, Peng Liu, and Yi Yang. "Convective/Stratiform Precipitation Classification Using Ground-Based Doppler Radar Data Based on the K-Nearest Neighbor Algorithm." Remote Sensing 11, no. 19 (September 29, 2019): 2277. http://dx.doi.org/10.3390/rs11192277.

Abstract:
Stratiform and convective rain types are associated with different cloud physical processes, vertical structures, thermodynamic influences and precipitation types. Distinguishing convective and stratiform systems is beneficial to meteorology research and weather forecasting. However, there is no clear boundary between stratiform and convective precipitation. In this study, a machine learning algorithm, K-nearest neighbor (KNN), is used to classify precipitation types. Six Doppler radar (WSR-98D/SA) data sets from Jiangsu, Guangzhou and Anhui Provinces in China were used as training and classification samples, and the 2A23 product of the Tropical Rainfall Measuring Mission (TRMM) was used to obtain the training labels and evaluate the classification performance. Classifying precipitation types using KNN requires three steps. First, features are selected from the radar data by comparing the range of each variable for different precipitation types. Second, the same unclassified samples are classified with different k values to choose the best-performing k. Finally, the unclassified samples are put into the KNN algorithm with the best k to classify precipitation types, and the classification performance is evaluated. Three types of cases, squall line, embedded convective and stratiform cases, are classified by KNN. The KNN method can accurately classify the location and area of stratiform and convective systems. For stratiform classifications, KNN has a 95% probability of detection, 8% false alarm rate, and 87% cumulative success index; for convective classifications, KNN yields a 78% probability of detection, a 13% false alarm rate, and a 69% cumulative success index. These results imply that KNN can correctly classify almost all stratiform precipitation and most convective precipitation types. This result suggests that KNN has great potential in classifying precipitation types.
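Step two of the workflow above, scoring candidate values of k on held-out samples and keeping the best performer, might look like this minimal sketch (toy two-dimensional data standing in for the radar features):

```python
import math

def knn_predict(x, X, y, k):
    """Plain majority-vote kNN."""
    nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

def best_k(X_train, y_train, X_val, y_val, candidates):
    """Score each candidate k on a held-out validation set; keep the best."""
    def accuracy(k):
        hits = sum(knn_predict(x, X_train, y_train, k) == t
                   for x, t in zip(X_val, y_val))
        return hits / len(y_val)
    return max(candidates, key=accuracy)

# (0.5, 0.5) is a mislabeled noise point: k=1 trips over it, k=3 does not.
X_train = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (0.5, 0.5),
           (5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
y_train = [0, 0, 0, 1, 1, 1, 1]
X_val = [(0.4, 0.4), (5.5, 5.5)]
y_val = [0, 1]
k_star = best_k(X_train, y_train, X_val, y_val, candidates=[1, 3])
```

A larger k averages over the label noise near the query, which is exactly why the validation step in the paper's pipeline matters.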
8

Ganatra, Dhimant. "Improving classification accuracy: The KNN approach." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 4 (August 25, 2020): 6147–50. http://dx.doi.org/10.30534/ijatcse/2020/287942020.
9

Su, Yixin, and Sheng-Uei Guan. "Density and Distance Based KNN Approach to Classification." International Journal of Applied Evolutionary Computation 7, no. 2 (April 2016): 45–60. http://dx.doi.org/10.4018/ijaec.2016040103.

Abstract:
The KNN algorithm is a simple and efficient algorithm developed to solve classification problems. However, it encounters problems when classifying datasets with non-uniform density distributions. The existing KNN voting mechanism may lose essential information by considering the majority only, and its performance degrades when a dataset has an uneven distribution. The other drawback comes from the way KNN treats all participating candidates equally when judging a test datum. To overcome these weaknesses, a Region of Influence Based KNN (RI-KNN) is proposed. RI-KNN computes for each training datum region-of-influence information based on its nearby data (i.e., locality information), so that each training datum can encode some locality information from its region. Information coming from both the training and testing stages contributes to the formation of a weighting formula. By solving these two problems, RI-KNN is shown to outperform KNN on several artificial and real datasets without sacrificing much time cost in nearly all tested datasets.
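A minimal illustration of weighted kNN voting, in the spirit of (but much simpler than) RI-KNN: closer neighbors get larger votes, so a single near neighbor can outvote a more distant majority. The region-of-influence precomputation from the paper is not reproduced here; inverse-distance weights are a generic stand-in:

```python
import math

def weighted_vote_knn(x, X, y, k=3):
    """kNN with inverse-distance weighted voting instead of a plain
    majority count, so closer neighbors count for more."""
    nearest = sorted(((math.dist(x, xi), yi) for xi, yi in zip(X, y)))[:k]
    scores = {}
    for d, lab in nearest:
        scores[lab] = scores.get(lab, 0.0) + 1.0 / (d + 1e-12)
    return max(scores, key=scores.get)

# One very close "A" neighbor vs. two distant "B" neighbors:
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
y = ["A", "B", "B"]
prediction = weighted_vote_knn((0.1, 0.0), X, y, k=3)  # plain majority would say "B"
```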
10

Boyko, Nataliya I., and Mykhaylo V. Muzyka. "Methods of analysis of multimodal data to increase the accuracy of classification." Applied Aspects of Information Technology 5, no. 2 (July 4, 2022): 147–60. http://dx.doi.org/10.15276/aait.05.2022.11.

Abstract:
This paper proposes methods for analyzing multimodal data that help improve the overall accuracy of K-Nearest Neighbor (KNN) classification and minimize its risk. The mechanism for increasing the accuracy of KNN classification is considered. The research methods used in this work are comparison, analysis, induction, and experiment. This work aimed to improve the accuracy of KNN classification by comparing existing algorithms and applying new methods. Many literature and media sources on classification by the k nearest neighbors algorithm were analyzed, and the most interesting variations of the algorithm were selected. Emphasis is placed on achieving maximum classification accuracy by comparing existing methods and improving the methods for choosing the number k and finding the nearest class. Algorithms with and without data analysis and preprocessing are also compared. All the strategies discussed in this article are evaluated empirically: an experimental classification by k nearest neighbors with different variations was performed on two datasets of various sizes, with different values of k and different test-sample sizes as classification arguments. The paper studies three variants of the k nearest neighbors algorithm: the classical KNN, KNN with the lowest average, and hybrid KNN. These algorithms are compared for different test-sample sizes and different numbers k, and the data are analyzed before classification. As for selecting the number k, no simple method gives the maximum result with great accuracy. The essence of the algorithm is to find the k objects closest to the sample among objects already classified into predefined and numbered classes. Then, among these k objects, one counts how often each class occurs and assigns the most common class to the selected object. If the occurrences of two classes are both the largest and the same, the class with the smaller number is assigned.
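The decision rule spelled out at the end of the abstract (majority vote, with ties broken in favor of the smaller class number) can be written directly:

```python
import math
from collections import Counter

def knn_smallest_class_tiebreak(x, X, y, k):
    """kNN as described in the abstract: count class occurrences among the
    k nearest neighbors; on a tie, assign the class with the smaller number."""
    nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
    counts = Counter(lab for _, lab in nearest)
    top = max(counts.values())
    return min(c for c, n in counts.items() if n == top)

# Exact 2-vs-2 tie between classes 1 and 2 -> class 1 wins.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [1, 1, 2, 2]
prediction = knn_smallest_class_tiebreak((0.5, 0.5), X, y, k=4)
```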

Theses on the topic "KNN classification"

1

Mestre, Ricardo Jorge Palheira. "Improvements on the KNN classifier." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10923.

Abstract:
Dissertation for the degree of Master in Informatics Engineering.
Object classification is an important area within artificial intelligence, and its applications extend to various fields, within science and beyond. Among classifiers, the K-nearest neighbor (KNN) is among the simplest and most accurate, especially in environments where the data distribution is unknown or apparently not parameterizable. This algorithm assigns the classifying element the majority class among its K nearest neighbors. According to the original algorithm, this classification implies calculating the distances between the classifying instance and each of the training objects. If, on the one hand, having an extensive training set is important in order to obtain high accuracy, on the other hand it makes the classification of each object slower due to the algorithm's lazy-learning nature. Indeed, the algorithm does not provide any means of storing information about previously calculated classifications, making the recalculation of the classification of two equal instances mandatory. In a way, it may be said that this classifier does not learn. This dissertation focuses on this lazy-learning fragility and proposes a solution that transforms the KNN into an eager-learning classifier. In other words, it is intended that the algorithm learn effectively from the training set, thus avoiding redundant calculations. In the context of the proposed change, it is important to highlight the attributes that most characterize the objects according to their discriminating power. In this framework, the implementation of these transformations is studied on data of different types: continuous and/or categorical.
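One simple response to the lazy-learning criticism raised in the abstract is to memoize query results, so that two equal instances are never classified twice. The dissertation's actual eager transformation goes further; the sketch below only caches queries, which already removes the redundant calculation the abstract describes:

```python
import math
from functools import lru_cache

def make_cached_knn(X, y, k=1):
    """Build a kNN classifier whose results are memoized: a repeated query
    is answered from the cache instead of re-scanning the training set."""
    X = [tuple(p) for p in X]  # queries and training points must be hashable

    @lru_cache(maxsize=None)
    def classify(x):
        nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
        votes = [lab for _, lab in nearest]
        return max(set(votes), key=votes.count)

    return classify

classify = make_cached_knn([(0.0, 0.0), (1.0, 1.0)], ["a", "b"], k=1)
first = classify((0.1, 0.2))   # computed against the training set
second = classify((0.1, 0.2))  # identical instance: served from the cache
hits = classify.cache_info().hits
```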
2

Hanson, Sarah Elizabeth. "Classification of ADHD Using Heterogeneity Classes and Attention Network Task Timing." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83610.

Abstract:
Throughout the 1990s ADHD diagnosis and medication rates have increased rapidly, and this trend continues today. These sharp increases have been met with both public and clinical criticism, detractors stating over-diagnosis is a problem and healthy children are being unnecessarily medicated and labeled as disabled. However, others say that ADHD is being under-diagnosed in some populations. Critics often state that there are multiple factors that introduce subjectivity into the diagnosis process, meaning that a final diagnosis may be influenced by more than the desire to protect a patient's wellbeing. Some of these factors include standardized testing, legislation affecting special education funding, and the diagnostic process. In an effort to circumvent these extraneous factors, this work aims to further develop a potential method of using EEG signals to accurately discriminate between ADHD and non-ADHD children using features that capture spectral and perhaps temporal information from evoked EEG signals. KNN has been shown in prior research to be an effective tool in discriminating between ADHD and non-ADHD, therefore several different KNN models are created using features derived in a variety of fashions. One takes into account the heterogeneity of ADHD, and another one seeks to exploit differences in executive functioning of ADHD and non-ADHD subjects. The results of this classification method vary widely depending on the sample used to train and test the KNN model. With unfiltered Dataset 1 data over the entire ANT1 period, the most accurate EEG channel pair achieved an overall vector classification accuracy of 94%, and the 5th percentile of classification confidence was 80%. These metrics suggest that using KNN of EEG signals taken during the ANT task would be a useful diagnosis tool. However, the most accurate channel pair for unfiltered Dataset 2 data achieved an overall accuracy of 65% and a 5th percentile of classification confidence of 17%. 
The same method that worked so well for Dataset 1 did not work well for Dataset 2, and no conclusive reason for this difference was identified, although several methods to remove possible sources of noise were used. Using target time linked intervals did appear to marginally improve results in both Dataset 1 and Dataset 2. However, the changes in accuracy of intervals relative to target presentation vary between Dataset 1 and Dataset 2. Separating subjects into heterogeneity classes does appear to result in good (up to 83%) classification accuracy for some classes, but results are poor (about 50%) for other heterogeneity classes. A much larger data set is necessary to determine whether or not the very positive results found with Dataset 1 extend to a wide population.
Master of Science
3

Bel Haj Ali, Wafa. "Minimisation de fonctions de perte calibrée pour la classification des images." PhD thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00934062.

Abstract:
Image classification is today a challenge of considerable scale, since it concerns, on the one hand, the millions or even billions of images found all over the web and, on the other, images for critical real-time applications. This classification generally calls on learning methods and classifiers that must deliver both accuracy and speed. These learning problems now touch a large number of application domains: the web (profiling, targeting, social networks, search engines), "Big Data", and of course computer vision, such as object recognition and image classification. This thesis falls into the latter category and presents supervised learning algorithms based on the minimization of so-called "calibrated" loss functions for two types of classifiers: k-Nearest Neighbors (kNN) and linear classifiers. These learning methods were tested on large image databases and then applied to biomedical images. The thesis first reformulates a boosting algorithm for kNN, and then presents a second learning method for these NN classifiers using a Newton-descent approach for faster convergence. In a second part, the thesis introduces a new stochastic Newton-descent learning algorithm for linear classifiers, which are known for their simplicity and computational speed. Finally, these three methods were used in a medical application concerning the classification of cells in biology and pathology.
4

Lopez Marcano, Juan L. "Classification of ADHD and non-ADHD Using AR Models and Machine Learning Algorithms." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/73688.

Abstract:
As of 2016, diagnosis of ADHD in the US is controversial. Diagnosis of ADHD is based on subjective observations, and treatment is usually done through stimulants, which can have negative side-effects in the long term. Evidence shows that the probability of diagnosing a child with ADHD not only depends on the observations of parents, teachers, and behavioral scientists, but also on state-level special education policies. In light of these facts, unbiased, quantitative methods are needed for the diagnosis of ADHD. This problem has been tackled since the 1990s, and has resulted in methods that have not made it past the research stage and methods for which claimed performance could not be reproduced. This work proposes a combination of machine learning algorithms and signal processing techniques applied to EEG data in order to classify subjects with and without ADHD with high accuracy and confidence. More specifically, the K-nearest Neighbor algorithm and Gaussian-Mixture-Model-based Universal Background Models (GMM-UBM), along with autoregressive (AR) model features, are investigated and evaluated for the classification problem at hand. In this effort, classical KNN and GMM-UBM were also modified in order to account for uncertainty in diagnoses. Some of the major findings reported in this work include classification performance as high, if not higher, than those of the highest performing algorithms found in the literature. One of the major findings reported here is that activities that require attention help the discrimination of ADHD and Non-ADHD subjects. Mixing in EEG data from periods of rest or during eyes closed leads to loss of classification performance, to the point of approximating guessing when only resting EEG data is used.
Master of Science
5

Li, Sichu. "Application of Machine Learning Techniques for Real-time Classification of Sensor Array Data." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/913.

Abstract:
There is a significant need to identify approaches for classifying chemical sensor array data with high success rates that would enhance sensor detection capabilities. The present study attempts to fill this need by investigating six machine learning methods to classify a dataset collected using a chemical sensor array: K-Nearest Neighbor (KNN), Support Vector Machine (SVM), Classification and Regression Trees (CART), Random Forest (RF), Naïve Bayes Classifier (NB), and Principal Component Regression (PCR). A total of 10 predictors that are associated with the response from 10 sensor channels are used to train and test the classifiers. A training dataset of 4 classes containing 136 samples is used to build the classifiers, and a dataset of 4 classes with 56 samples is used for testing. The results generated with the six different methods are compared and discussed. The RF, CART, and KNN are found to have success rates greater than 90%, and to outperform the other methods.
6

Do, Cao Tri. "Apprentissage de métrique temporelle multi-modale et multi-échelle pour la classification robuste de séries temporelles par plus proches voisins." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM028/document.

Abstract:
The definition of a metric between time series is inherent to several data analysis and mining tasks, including clustering, classification and forecasting. Time series naturally present several characteristics, called modalities, covering their amplitude, behavior or frequential spectrum, that may be expressed with varying delays and at different temporal granularities and localizations - exhibited globally or locally. Combining several modalities at multiple temporal scales to learn a holistic metric is a key challenge for many real temporal data applications. This PhD proposes a Multi-modal and Multi-scale Temporal Metric Learning (M2TML) approach for robust time series nearest neighbors classification. The solution is based on the embedding of pairs of time series into a pairwise dissimilarity space, in which a large margin optimization process is performed to learn the metric. The M2TML solution is proposed for both linear and non linear contexts, and is studied for different regularizers. A sparse and interpretable variant of the solution shows the ability of the learned temporal metric to localize accurately discriminative modalities as well as their temporal scales. A wide range of 30 public and challenging datasets, encompassing images, traces and ECG data, that are linearly or non linearly separable, are used to show the efficiency and the potential of M2TML for time series nearest neighbors classification.
7

Villa, Medina Joe Luis. « Reliability of classification and prediction in k-nearest neighbours ». Doctoral thesis, Universitat Rovira i Virgili, 2013. http://hdl.handle.net/10803/127108.

Texte intégral
Résumé :
This doctoral thesis develops the calculation of classification reliability and prediction reliability using the k-nearest neighbours (kNN) method and bootstrap-based resampling strategies. Two new classification methods, Probabilistic Bootstrap k-Nearest Neighbours (PBkNN) and Bagged k-Nearest Neighbours (Bagged kNN), and a new prediction method, Direct Orthogonalization kNN (DOkNN), were also developed. In all cases, the results obtained with the new methods were comparable to or better than those obtained using classical methods of classification and multivariate calibration.
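A bootstrap-ensemble kNN in the spirit of Bagged kNN can be sketched as below: kNN is fit on many bootstrap resamples of the training set, and the fraction of resamples agreeing with the majority prediction doubles as a crude reliability estimate. The thesis' estimators may differ in detail; the data are invented:

```python
import math
import random
from collections import Counter

def knn_predict(x, X, y, k=1):
    """Plain majority-vote kNN."""
    nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

def bagged_knn(x, X, y, k=1, n_boot=25, seed=0):
    """Predict with kNN on n_boot bootstrap resamples; return the majority
    prediction and the fraction of resamples that agree with it."""
    rng = random.Random(seed)
    n = len(X)
    preds = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        preds.append(knn_predict(x, [X[i] for i in idx], [y[i] for i in idx], k))
    label, votes = Counter(preds).most_common(1)[0]
    return label, votes / n_boot

X = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (0.2, 0.2),
     (5.0, 5.0), (5.2, 5.0), (5.0, 5.2), (5.2, 5.2)]
y = [0, 0, 0, 0, 1, 1, 1, 1]
label, agreement = bagged_knn((0.1, 0.1), X, y, k=1)
```

Near-unanimous agreement across resamples signals a reliable classification; agreement close to 0.5 flags an instance whose label is sensitive to the training sample.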
8

Ozsakabasi, Feray. "Classification Of Forest Areas By K Nearest Neighbor Method: Case Study, Antalya." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609548/index.pdf.

Abstract:
Among the various remote sensing methods that can be used to map forest areas, the K Nearest Neighbor (KNN) supervised classification method is becoming increasingly popular for creating forest inventories in some countries. In this study, the utility of the KNN algorithm is evaluated for forest/non-forest/water stratification. Antalya is selected as the study area. The data used are composed of Landsat TM and Landsat ETM satellite images, acquired in 1987 and 2002, respectively, SRTM 90 meters digital elevation model (DEM) and land use data from the year 2003. The accuracies of different modifications of the KNN algorithm are evaluated using Leave One Out, which is a special case of K-fold cross-validation, and traditional accuracy assessment using error matrices. The best parameters are found to be Euclidean distance metric, inverse distance weighting, and k equal to 14, while using bands 4, 3 and 2. With these parameters, the cross-validation error is 0.009174, and the overall accuracy is around 86%. The results are compared with those from the Maximum Likelihood algorithm. KNN results are found to be accurate enough for practical applicability of this method for mapping forest areas.
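The Leave One Out procedure used for the accuracy assessment above can be sketched as follows: each sample is classified by a model trained on all the other samples, and the error rate is accumulated (toy two-class data stand in for the forest/non-forest strata):

```python
import math

def knn_predict(x, X, y, k=1):
    """Plain majority-vote kNN."""
    nearest = sorted(zip(X, y), key=lambda p: math.dist(x, p[0]))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

def loo_error(X, y, k=1):
    """Leave-One-Out error: classify each sample with the model trained on
    all remaining samples, and report the misclassification rate."""
    errors = 0
    for i in range(len(X)):
        X_rest = X[:i] + X[i + 1:]
        y_rest = y[:i] + y[i + 1:]
        if knn_predict(X[i], X_rest, y_rest, k) != y[i]:
            errors += 1
    return errors / len(X)

X = [(0.0, 0.0), (0.2, 0.0), (0.0, 0.2), (5.0, 5.0), (5.2, 5.0), (5.0, 5.2)]
y = ["forest", "forest", "forest", "water", "water", "water"]
error = loo_error(X, y, k=1)
```

Because every point's nearest remaining neighbor shares its class, the error here is zero; on real imagery the same loop yields figures like the 0.009174 cross-validation error reported above.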
9

Joseph, Katherine Amanda. "Comparison of Segment and Pixel Based Non-Parametric Classification of Land Cover in the Amazon Region of Brazil Using Multitemporal Landsat TM/ETM+ Imagery." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/32802.

Abstract:
This study evaluated the ability of segment-based classification paired with non-parametric methods (CART and kNN) to classify a chronosequence of Landsat TM/ETM+ imagery spanning from 1992 to 2002 within the state of Rondônia, Brazil. Pixel-based classification was also implemented for comparison. Interannual multitemporal composites were used in each classification in an attempt to increase the separation of primary forest, cleared, and re-vegetated classes within a given year. The kNN and CART classification methods, with the integration of multitemporal data, performed equally well with overall accuracies ranging from 77% to 91%. Pixel-based CART classification, although not different in terms of mean or median overall accuracy, did have significantly lower variability than all other techniques (3.2% vs. an average of 13.2%), and thus provided more consistent results. Segmentation did not improve classification success over pixel-based methods and was therefore an unnecessary processing step with the used dataset. Through the appropriate band selection methods of the respective non-parametric classifiers, multitemporal bands were chosen in 38 of the 44 total classifications, strongly suggesting the utility of interannual multitemporal data for the separation of cleared, re-vegetated, and primary forest classes. The separation of the primary forest class from the cleared and re-vegetated classes was particularly successful and may be a possible result of the incorporation of multitemporal data. The land cover maps from this study allow for an accurate annualized analysis of land cover and can be coupled with household data to gain a better understanding of landscape change in the region.
Master of Science
APA, Harvard, Vancouver, ISO, etc. styles
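The kNN classifier compared against CART in the thesis above is the standard majority-vote rule: a sample is assigned the most common label among its k nearest training samples. A minimal plain-Python sketch, with hypothetical two-band pixel features purely for illustration (not the thesis's actual implementation):

```python
from collections import Counter
import math

def knn_classify(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training samples under Euclidean distance (standard kNN)."""
    dists = sorted((math.dist(t, x), y) for t, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical two-band "pixel" features for three land-cover classes.
train = [(0.1, 0.8), (0.2, 0.7), (0.8, 0.2), (0.9, 0.1), (0.5, 0.5), (0.6, 0.4)]
labels = ["forest", "forest", "cleared", "cleared", "re-vegetated", "re-vegetated"]

print(knn_classify(train, labels, (0.15, 0.75), k=3))  # forest
```

With k odd and greater than 1, ties between classes are rarer and single mislabeled samples are outvoted, which is one reason k = 3 or 5 is a common starting point.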
10

Buani, Bruna Elisa Zanchetta. "Aplicação da Lógica Fuzzy kNN e análises estatísticas para seleção de características e classificação de abelhas". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-10012011-085835/.

Full text
Abstract:
This work proposes a solution to the bee-species classification problem by implementing an algorithm based on Geometric Morphometrics and the shape analysis of landmarks generated from bee wing images. The algorithm is based on the k-Nearest Neighbor (kNN) algorithm and Fuzzy kNN applied to the analysis and selection of two-dimensional data points corresponding to landmarks. This work is part of the Architecture Reference Model for an Automatic Identification and Taxonomic Classification System of Stingless Bees using Wing Morphometry. The study includes selection and ordering methods for the landmarks used in the algorithm, developing a mathematical model to represent their order of significance and generating the most significant mathematical landmarks as input variables for the Fuzzy kNN. The main objective is to develop a classification system for bee species. The knowledge involved includes an overview of feature selection, unsupervised clustering and data mining, data pre-processing, statistical approaches for estimation and prediction, the study of shape, Procrustes Analysis on data from Geometric Morphometrics, and a modification of the k-Nearest Neighbors algorithm together with Fuzzy kNN. The results show that classification among samples of the same bee species achieves an accuracy above 90%, depending on the species, and that classification between bee species reaches accuracies of 97%.
APA, Harvard, Vancouver, ISO, etc. styles
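The Fuzzy kNN used in the thesis above replaces kNN's hard vote with graded class memberships: each of the k nearest neighbors contributes to every class in proportion to an inverse-distance weight. A minimal sketch in the spirit of the classic Fuzzy kNN rule; the landmark coordinates are hypothetical, and this is not the thesis's exact algorithm (which additionally orders landmarks by significance):

```python
from collections import defaultdict
import math

def fuzzy_knn(train, labels, x, k=3, m=2):
    """Fuzzy kNN: return a membership degree per class (summing to 1),
    computed from inverse-distance weights of the k nearest neighbors.
    m > 1 is the fuzzifier; larger m smooths the memberships."""
    neighbors = sorted((math.dist(t, x), y) for t, y in zip(train, labels))[:k]
    weights = defaultdict(float)
    for d, y in neighbors:
        # Inverse-distance weight; epsilon guards against division by zero
        # when x coincides with a training point.
        weights[y] += 1.0 / (d ** (2 / (m - 1)) + 1e-12)
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

# Hypothetical 2-D landmark features for two bee "species" A and B.
train = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
labels = ["A", "A", "B", "B"]
memberships = fuzzy_knn(train, labels, (0.05, 0.05), k=3)
print(memberships)  # class "A" receives the largest membership
```

The class with the largest membership gives the crisp prediction, while the membership values themselves indicate how ambiguous the sample is, which a hard-voting kNN cannot express.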

Books on the topic "KNN classification"

1

Chōsakai, Kanagawa-ken Shokubutsushi. Kanagawa-ken shokubutsushi 1988. Yokohama-shi: Kanagawa Kenritsu Hakubutsukan, 1988.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Zhongguo tu shu guan tu shu fen lei fa bian ji wei yuan hui. Zhongguo tu shu guan tu shu fen lei fa: Qi kan fen lei fa. Beijing: Shu mu wen xian chu ban she, 1987.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
3

Kankyōka, Shimane-ken (Japan) Shizen. Kaitei Shimane reddo dēta bukku 2013: Shimane-ken no zetsumetsu no osore no aru yasei shokubutsu: Shokubutsu-hen = Shimane red data book 2013. Shimane-ken Matsue-shi: Shimane-ken Kankyō Seikatsubu Shizen Kankyōka, 2013.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Kankyōka, Shimane-ken (Japan) Shizen. Kaitei Shimane reddo dēta bukku 2014: Shimane-ken no zetsumetsu no osore no aru yasei dōbutsu: Dōbutsu hen = Shimane red data book 2014. Shimane-ken Matsue-shi: Shimane-ken Kankyō Seikatsubu Shizen Kankyōka, 2014.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Navajo weaving in the late twentieth century: Kin, community, and collectors. Tucson: University of Arizona Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Scheffler, Harold W. Australian Kin Classification. Cambridge University Press, 2011.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
7

Scheffler, Harold W. Australian Kin Classification. Cambridge University Press, 2009.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Scheffler, Harold W. Australian Kin Classification (Cambridge Studies in Social and Cultural Anthropology). Cambridge University Press, 2007.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Taisetsu ni shitai Nara-ken no yasei dōshokubutsu: Nara-kenban reddo dēta bukku: 2008. Nara-shi: Nara-ken Nōrinbu Shinrin Hozenka, 2008.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
10

"Zhongguo tu shu guan tu shu fen lei fa, qi kan fen lei biao" shi yong zhi nan. Beijing tu shu guan chu ban she, 1998.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles

Book chapters on the topic "KNN classification"

1

Ishii, Naohiro, Tsuyoshi Murai, Takahiro Yamada and Yongguang Bao. "Classification by Weighting, Similarity and kNN". In Intelligent Data Engineering and Automated Learning – IDEAL 2006, 57–64. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11875581_7.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Guo, Gongde, Hui Wang, David Bell, Yaxin Bi and Kieran Greer. "KNN Model-Based Approach in Classification". In On The Move to Meaningful Internet Systems 2003: CoopIS, DOA, and ODBASE, 986–96. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-39964-3_62.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
3

Zhuang, Jiaxin, Jiabin Cai, Ruixuan Wang, Jianguo Zhang and Wei-Shi Zheng. "Deep kNN for Medical Image Classification". In Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 127–36. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59710-8_13.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Ishii, Naohiro, Yuichi Morioka, Hiroaki Kimura and Yongguang Bao. "Classification by Multiple Reducts-kNN with Confidence". In Intelligent Data Engineering and Automated Learning – IDEAL 2010, 94–101. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15381-5_12.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Ashai, Mariyam, Rhea Gautam Mukherjee, Sanjana P. Mundharikar, Vinayak Dev Kuanr and R. Harikrishnan. "Classification of Astronomical Objects using KNN Algorithm". In Smart Intelligent Computing and Applications, Volume 1, 377–87. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9669-5_34.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Costa, Bruno G., Jean Carlos Arouche Freire, Hamilton S. Cavalcante, Marcia Homci, Adriana R. G. Castro, Raimundo Viegas, Bianchi S. Meiguins and Jefferson M. Morais. "Fault Classification on Transmission Lines Using KNN-DTW". In Computational Science and Its Applications – ICCSA 2017, 174–87. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-62392-4_13.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
7

Beryl Princess, P. Joyce, Salaja Silas and Elijah Blessing Rajsingh. "Classification of Road Accidents Using SVM and KNN". In Advances in Intelligent Systems and Computing, 27–41. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3514-7_3.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Orczyk, Tomasz, Rafal Doroz and Piotr Porwik. "Combined kNN Classifier for Classification of Incomplete Data". In Advances in Intelligent Systems and Computing, 21–26. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-19738-4_3.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Bhattacharya, Gautam, Koushik Ghosh and Ananda S. Chowdhury. "kNN Classification with an Outlier Informative Distance Measure". In Lecture Notes in Computer Science, 21–27. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69900-4_3.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Gong, An, and Yanan Liu. "Improved KNN Classification Algorithm by Dynamic Obtaining K". In Advanced Research on Electronic Commerce, Web Application, and Communication, 320–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20367-1_51.

Full text
APA, Harvard, Vancouver, ISO, etc. styles

Conference papers on the topic "KNN classification"

1

Deivasikamani, Ganeshkumar, Akshay C, Ananthakrishnan T and Rohith C. Manoj. "Covid Cough Classification using KNN Classification Algorithm". In 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC). IEEE, 2022. http://dx.doi.org/10.1109/icaaic53929.2022.9793198.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Wang, Zonghu, and Zhijing Liu. "Graph-based KNN text classification". In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE, 2010. http://dx.doi.org/10.1109/fskd.2010.5569866.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
3

Pichardo-Morales, Francisco D., Marco A. Acevedo-Mosqueda and Sandra L. Gomez-Coronel. "Classification of Gunshots with KNN Classifier". In EATIS '18: Euro American Conference on Telematics and Information Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3293614.3293656.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Anagnostou, Panagiotis, Petros Barbas, Aristidis G. Vrahatis and Sotiris K. Tasoulis. "Approximate kNN Classification for Biomedical Data". In 2020 IEEE International Conference on Big Data (Big Data). IEEE, 2020. http://dx.doi.org/10.1109/bigdata50022.2020.9378126.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Thejaswini, B. M., T. Y. Satheesha and Sathish Bhairannawar. "EEG Classification Using Modified KNN Algorithm". In 2023 International Conference on Applied Intelligence and Sustainable Computing (ICAISC). IEEE, 2023. http://dx.doi.org/10.1109/icaisc58445.2023.10200104.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
6

Jyothi, R., Sujit Hiwale and Parvati V. Bhat. "Classification of labour contractions using KNN classifier". In 2016 International Conference on Systems in Medicine and Biology (ICSMB). IEEE, 2016. http://dx.doi.org/10.1109/icsmb.2016.7915100.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
7

Li, Huijuan, He Jiang, Dongyuan Wang and Bing Han. "An Improved KNN Algorithm for Text Classification". In 2018 Eighth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC). IEEE, 2018. http://dx.doi.org/10.1109/imccc.2018.00225.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
8

Xie, Huahua, Dong Liang, Zhaojing Zhang, Hao Jin, Chen Lu and Yi Lin. "A Novel Pre-Classification Based kNN Algorithm". In 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE, 2016. http://dx.doi.org/10.1109/icdmw.2016.0182.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
9

Sonar, Poonam, Udhav Bhosle and Chandrajit Choudhury. "Mammography classification using modified hybrid SVM-KNN". In 2017 International Conference on Signal Processing and Communication (ICSPC). IEEE, 2017. http://dx.doi.org/10.1109/cspc.2017.8305858.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
10

Nadeem, Humaira, Imran Mujaddid Rabbani, Muhammad Aslam and Martinez Enriquez A. M. "KNN-fuzzy classification for cloud service selection". In ICFNDS'18: International Conference on Future Networks and Distributed Systems. New York, NY, USA: ACM, 2018. http://dx.doi.org/10.1145/3231053.3231133.

Full text
APA, Harvard, Vancouver, ISO, etc. styles