Journal articles on the topic "Method of k-means"

To see other types of publications on this topic, follow the link: Method of k-means.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 50 journal articles for your research on the topic "Method of k-means".

Next to each work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, if the corresponding parameters are present in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Hedar, Abdel-Rahman, Abdel-Monem Ibrahim, Alaa Abdel-Hakim, and Adel Sewisy. "K-Means Cloning: Adaptive Spherical K-Means Clustering." Algorithms 11, no. 10 (October 6, 2018): 151. http://dx.doi.org/10.3390/a11100151.

Full text of the source
Abstract:
We propose a novel method for adaptive K-means clustering. The proposed method overcomes the problems of the traditional K-means algorithm. Specifically, the proposed method does not require prior knowledge of the number of clusters. Additionally, the initial identification of the cluster elements has no negative impact on the final generated clusters. Inspired by cell cloning in microorganism cultures, each added data sample causes the existing cluster ‘colonies’ to evaluate, together with the other clusters, various merging or splitting actions in order to reach the optimum cluster set. The proposed algorithm is adequate for clustering data in isolated or overlapped compact spherical clusters. Experimental results support the effectiveness of this clustering algorithm.
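The colony-style merge/split behaviour described in the abstract can be sketched in a toy form. This is not the authors' algorithm: the fixed `split_radius` and `merge_dist` thresholds, the function name, and the demo data are all illustrative assumptions.

```python
import numpy as np

def adaptive_spherical_clusters(X, split_radius=2.0, merge_dist=1.0):
    """Toy split/merge clustering: colonies absorb arriving samples,
    split when they grow too wide, and merge when their centers meet."""
    centers, members = [X[0]], [[X[0]]]
    for x in X[1:]:
        i = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
        members[i].append(x)
        centers[i] = np.mean(members[i], axis=0)
        pts = np.array(members[i])
        dists = np.linalg.norm(pts - centers[i], axis=1)
        if dists.max() > split_radius and len(pts) >= 4:   # split a too-wide colony
            seed = pts[dists.argmax()]
            near = np.linalg.norm(pts - seed, axis=1) < dists
            if near.any() and (~near).any():
                members.append(list(pts[near]))
                centers.append(pts[near].mean(axis=0))
                members[i] = list(pts[~near])
                centers[i] = pts[~near].mean(axis=0)
        merged = True                                      # merge colonies that met
        while merged and len(centers) > 1:
            merged = False
            for a in range(len(centers)):
                for b in range(a + 1, len(centers)):
                    if np.linalg.norm(centers[a] - centers[b]) < merge_dist:
                        members[a] += members[b]
                        centers[a] = np.mean(members[a], axis=0)
                        del members[b], centers[b]
                        merged = True
                        break
                if merged:
                    break
    return np.array(centers)

# two compact spherical clusters, samples arriving interleaved
X = np.array([[0, 0], [6, 6], [0.2, 0], [6.2, 6],
              [0, 0.2], [6, 6.2], [0.2, 0.2], [6.2, 6.2]], float)
print(len(adaptive_spherical_clusters(X)))  # → 2: K was never specified
```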
APA, Harvard, Vancouver, ISO, and other citation styles
2

Maldonado, Sebastián, Emilio Carrizosa, and Richard Weber. "Kernel Penalized K-means: A feature selection method based on Kernel K-means." Information Sciences 322 (November 2015): 150–60. http://dx.doi.org/10.1016/j.ins.2015.06.008.

3

Litvinenko, Natalya, Orken Mamyrbayev, Assem Shayakhmetova, and Mussa Turdalyuly. "Clusterization by the K-means method when K is unknown." ITM Web of Conferences 24 (2019): 01013. http://dx.doi.org/10.1051/itmconf/20192401013.

Abstract:
There are various methods of object clusterization used in different areas of machine learning. Among the vast number of clusterization methods, the K-means method is one of the most popular. This method has both pros and cons. Its main advantage is the rather high speed of object clusterization; the main disadvantage is the necessity to know the number of clusters before the experiment. This paper describes a new method of clusterization based on the K-means method. The suggested method is also quite fast in terms of processing speed; however, it does not require the user to know in advance the exact number of clusters to be processed. The user only has to define the range within which the number of clusters lies. Besides, the suggested method makes it possible to limit the radius of clusters, which allows finding objects that express the criteria of one cluster in the most distinctive and accurate way, and also allows limiting the number of objects in each cluster to a certain range.
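The idea of searching a user-given range of K with a cap on cluster radius can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the deterministic maximin seeding and the `max_radius` acceptance rule are assumptions of this sketch.

```python
import numpy as np

def kmeans(X, k, iters=100):
    centers = [X[0]]
    for _ in range(1, k):                  # deterministic maximin seeding
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                 # Lloyd iterations
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def kmeans_in_range(X, k_min, k_max, max_radius):
    """Return the smallest k in [k_min, k_max] whose clusters all fit
    inside a ball of radius max_radius around their center."""
    for k in range(k_min, k_max + 1):
        centers, labels = kmeans(X, k)
        radii = [np.linalg.norm(X[labels == j] - centers[j], axis=1).max()
                 for j in range(k) if (labels == j).any()]
        if max(radii) <= max_radius:
            return k, centers
    return k_max, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.2, (30, 2)) for m in [(0, 0), (5, 0), (0, 5)]])
k, _ = kmeans_in_range(X, k_min=2, k_max=6, max_radius=1.0)
print(k)  # → 3: found without specifying the exact number of clusters
```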
4

Hämäläinen, Joonas, Tommi Kärkkäinen, and Tuomo Rossi. "Improving Scalable K-Means++." Algorithms 14, no. 1 (December 27, 2020): 6. http://dx.doi.org/10.3390/a14010006.

Abstract:
Two new initialization methods for K-means clustering are proposed. Both proposals are based on applying a divide-and-conquer approach to the K-means‖ type of initialization strategy. The second proposal also uses multiple lower-dimensional subspaces produced by the random projection method for the initialization. The proposed methods are scalable and can be run in parallel, which makes them suitable for initializing large-scale problems. In the experiments, the proposed methods are compared to the K-means++ and K-means‖ methods using an extensive set of reference and synthetic large-scale datasets. Concerning the latter, a novel high-dimensional clustering data generation algorithm is given. The experiments show that the proposed methods compare favorably to the state-of-the-art by improving clustering accuracy and the speed of convergence. We also observe that the currently most popular K-means++ initialization behaves like random initialization in the very high-dimensional cases.
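For reference, the baseline K-means++ seeding that the paper benchmarks against samples each new center with probability proportional to its squared distance from the nearest already-chosen center. A minimal sketch (the demo data are an assumption):

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """K-means++ seeding: D^2-weighted sampling of initial centers."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.05, (50, 2))
               for m in [(0, 0), (10, 0), (0, 10), (10, 10)]])
seeds = kmeans_pp_init(X, 4, rng)
```

On well-separated blobs the D²-weighting makes each new seed land in a yet-uncovered blob with overwhelming probability, which is exactly the behaviour the paper reports degrading in very high dimensions.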
5

Kim, Ga-On, Gang-Seong Lee, and Sang-Hun Lee. "An Edge Extraction Method Using K-means Clustering In Image." Journal of Digital Convergence 12, no. 11 (November 28, 2014): 281–88. http://dx.doi.org/10.14400/jdc.2014.12.11.281.

6

Arthur, David, Bodo Manthey, and Heiko Röglin. "Smoothed Analysis of the k-Means Method." Journal of the ACM 58, no. 5 (October 2011): 1–31. http://dx.doi.org/10.1145/2027216.2027217.

7

Sarma, T. Hitendra, P. Viswanath, and B. Eswara Reddy. "Single pass kernel k-means clustering method." Sadhana 38, no. 3 (June 2013): 407–19. http://dx.doi.org/10.1007/s12046-013-0143-3.

8

Har-Peled, Sariel, and Bardia Sadri. "How Fast Is the k-Means Method?" Algorithmica 41, no. 3 (December 8, 2004): 185–202. http://dx.doi.org/10.1007/s00453-004-1127-9.

9

Indriyanti, A. D., D. R. Prehanto, and T. Z. Vitadiar. "K-means method for clustering learning classes." Indonesian Journal of Electrical Engineering and Computer Science 22, no. 2 (May 1, 2021): 835. http://dx.doi.org/10.11591/ijeecs.v22.i2.pp835-841.

Abstract:
A learning class is a collection of several students in an educational institution. At the beginning of every school year the educational institution conducts a class-grouping test. However, sometimes the class grouping is not in accordance with the students' abilities. For this reason, a system is needed that can assess students' abilities according to the desired parameters. The weighting of test scores is determined using the K-Means method as a grouping method. The iteration process in the K-Means method is very important because the weight values may still change; the repetition is therefore carried out until the values no longer change and can be used to determine students' ability levels. The results of the class-grouping test scores reflect the students' abilities. The K-Means method is applied in building an information system for grouping student admissions in an educational institution. Admitted students are grouped into 3 learning classes. Testing of the system, which applies the K-Means method to admission data for prospective students from educational institutions, shows very high accuracy, with an error rate of 0.074.
10

Cho, Young-Sung, Mi-Sug Gu, and Keun-Ho Ryu. "Development of Personalized Recommendation System using RFM method and k-means Clustering." Journal of the Korea Society of Computer and Information 17, no. 6 (June 30, 2012): 163–72. http://dx.doi.org/10.9708/jksci.2012.17.6.163.

11

Pham, D. T., S. S. Dimov, and C. D. Nguyen. "An Incremental K-means algorithm." Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science 218, no. 7 (July 1, 2004): 783–95. http://dx.doi.org/10.1243/0954406041319509.

Abstract:
Data clustering is an important data exploration technique with many applications in engineering, including parts family formation in group technology and segmentation in image processing. One of the most popular data clustering methods is K-means clustering because of its simplicity and computational efficiency. The main problem with this clustering method is its tendency to converge at a local minimum. In this paper, the cause of this problem is explained and an existing solution involving a cluster centre jumping operation is examined. The jumping technique alleviates the problem with local minima by enabling cluster centres to move in such a radical way as to reduce the overall cluster distortion. However, the method is very sensitive to errors in estimating distortion. A clustering scheme that is also based on distortion reduction through cluster centre movement but is not so sensitive to inaccuracies in distortion estimation is proposed in this paper. The scheme, which is an incremental version of the K-means algorithm, involves adding cluster centres one by one as clusters are being formed. The paper presents test results to demonstrate the efficacy of the proposed algorithm.
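The incremental idea of adding centres one at a time can be sketched as follows. This is an illustrative reading, not Pham et al.'s exact insertion rule: here each new centre is placed at the point currently contributing most to the distortion, then Lloyd's iterations re-converge.

```python
import numpy as np

def lloyd(X, centers, iters=100):
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

def incremental_kmeans(X, K):
    centers = X.mean(0, keepdims=True)              # start from a single center
    centers, labels = lloyd(X, centers)
    while len(centers) < K:
        d2 = ((X - centers[labels]) ** 2).sum(1)    # per-point distortion
        centers = np.vstack([centers, X[d2.argmax()]])  # insert a center there
        centers, labels = lloyd(X, centers)
    return centers, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in [(0, 0), (6, 0), (0, 6)]])
centers, labels = incremental_kmeans(X, 3)
```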
12

Aldahdooh, Raed T., and Wesam Ashour. "DIMK-means “Distance-based Initialization Method for K-means Clustering Algorithm”." International Journal of Intelligent Systems and Applications 5, no. 2 (January 3, 2013): 41–51. http://dx.doi.org/10.5815/ijisa.2013.02.05.

13

Sobolewski, Marek, and Andrzej Sokołowski. "CLUSTERING USING K-MEANS METHOD WITH COHERENCE PROPERTY." Prace Naukowe Uniwersytetu Ekonomicznego we Wrocławiu, no. 468 (2017): 215–21. http://dx.doi.org/10.15611/pn.2017.468.22.

14

陈, 建文. "Time and Season Recognition Method via K-Means." Journal of Image and Signal Processing 07, no. 01 (2018): 57–64. http://dx.doi.org/10.12677/jisp.2018.71007.

15

He Yibo, 贺一波, 陈冉丽 Chen Ranli, 吴侃 Wu Kan, and 段志鑫 Duan Zhixin. "Point Cloud Simplification Method Based on k-Means Clustering." Laser & Optoelectronics Progress 56, no. 9 (2019): 091002. http://dx.doi.org/10.3788/lop56.091002.

16

Chi, Jocelyn T., Eric C. Chi, and Richard G. Baraniuk. "k-POD: A Method for k-Means Clustering of Missing Data." American Statistician 70, no. 1 (January 2, 2016): 91–99. http://dx.doi.org/10.1080/00031305.2015.1086685.

17

Yuan, Chunhui, and Haitao Yang. "Research on K-Value Selection Method of K-Means Clustering Algorithm." J 2, no. 2 (June 18, 2019): 226–35. http://dx.doi.org/10.3390/j2020016.

Abstract:
Among many clustering algorithms, the K-means clustering algorithm is widely used because of its simplicity and fast convergence. However, the K-value of clustering needs to be given in advance, and the choice of K-value directly affects the convergence result. To solve this problem, we mainly analyze four K-value selection algorithms, namely the Elbow Method, Gap Statistic, Silhouette Coefficient, and Canopy; give the pseudocode of each algorithm; and use the standard data set Iris for experimental verification. Finally, the verification results are evaluated, the advantages and disadvantages of the above four algorithms in K-value selection are given, and the clustering range of the data set is pointed out.
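Of the four selection rules the paper reviews, the Elbow Method is the simplest to sketch: run K-means over a range of K, record the total within-cluster sum of squares (SSE), and pick the K where the curve bends most sharply. A minimal illustration (the second-difference bend rule and the maximin seeding are assumptions of this sketch, not the paper's pseudocode):

```python
import numpy as np

def kmeans_sse(X, k, iters=100):
    centers = [X[0]]
    for _ in range(1, k):                  # deterministic maximin seeding
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return ((X - centers[labels]) ** 2).sum()

def elbow_k(X, k_max=6):
    sse = [kmeans_sse(X, k) for k in range(1, k_max + 1)]
    return int(np.argmax(np.diff(sse, 2))) + 2   # sharpest bend in the SSE curve

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in [(0, 0), (6, 0), (0, 6)]])
print(elbow_k(X))  # → 3
```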
18

Liu, Bowen, Ting Zhang, Yujian Li, Zhaoying Liu, and Zhilin Zhang. "Kernel Probabilistic K-Means Clustering." Sensors 21, no. 5 (March 8, 2021): 1892. http://dx.doi.org/10.3390/s21051892.

Abstract:
Kernel fuzzy c-means (KFCM) is a significantly improved version of fuzzy c-means (FCM) for processing linearly inseparable datasets. However, for fuzzification parameter m=1, the problem of KFCM cannot be solved by Lagrangian optimization. To solve this problem, an equivalent model, called kernel probabilistic k-means (KPKM), is proposed here. The novel model relates KFCM to kernel k-means (KKM) in a unified mathematical framework. Moreover, the proposed KPKM can be addressed by the active gradient projection (AGP) method, which is a nonlinear programming technique with constraints of linear equalities and linear inequalities. To accelerate the AGP method, a fast AGP (FAGP) algorithm was designed. The proposed FAGP uses a maximum-step strategy to estimate the step length, and uses an iterative method to update the projection matrix. Experiments demonstrated the effectiveness of the proposed method through a performance comparison of KPKM with KFCM, KKM, FCM and k-means. Experiments showed that the proposed KPKM is able to find nonlinearly separable structures in synthetic datasets. Ten real UCI datasets were used in this study, and KPKM had better clustering performance on at least six datasets. The proposed fast AGP requires less running time than the original AGP, and it reduced running time by 76–95% on real datasets.
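Kernel k-means (KKM), the model KPKM is related to, replaces Euclidean distances with distances in a kernel-induced feature space, computed from a precomputed Gram matrix alone. A minimal sketch (the RBF width, the seed-point initialization, and the toy data are assumptions of this sketch):

```python
import numpy as np

def kernel_kmeans(K, k, labels, iters=50):
    """Hard-assignment kernel k-means on a precomputed kernel matrix K:
    ||phi(x_i) - m_c||^2 = K_ii - 2*avg_j K_ij + avg_jl K_jl over cluster c."""
    n = K.shape[0]
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue                           # empty cluster stays at inf
            dist[:, c] = (np.diag(K) - 2 * K[:, mask].sum(1) / m
                          + K[np.ix_(mask, mask)].sum() / m ** 2)
        new = dist.argmin(1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 0.05, (30, 2)),
               rng.normal((5, 5), 0.05, (30, 2))])
K = np.exp(-2.0 * ((X[:, None] - X[None]) ** 2).sum(-1))   # RBF kernel, gamma=2
init = np.zeros(60, dtype=int)
init[-1] = 1                                               # one seed point for cluster 1
labels = kernel_kmeans(K, 2, init)
```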
19

Febrianti, Fitria, Moh Hafiyusholeh, and Ahmad Hanif Asyhar. "PERBANDINGAN PENGKLUSTERAN DATA IRIS MENGGUNAKAN METODE K-MEANS DAN FUZZY C-MEANS." Jurnal Matematika "MANTIK" 2, no. 1 (October 30, 2016): 7. http://dx.doi.org/10.15642/mantik.2016.2.1.7-13.

Abstract:
Indonesia, with its abundant natural resources, certainly has innumerable plants. To classify the plants into different clusters, several methods can be used. The methods used here are K-Means and Fuzzy C-Means. However, these methods differ not only in terms of their algorithms but also in the calculation of the root mean square error (RMSE). To calculate the RMSE value, two indicators are required, namely the training data and the checking data. From the discussion, the Fuzzy C-Means method has a smaller RMSE value than the K-Means method, namely on 80 training data and 70 checking data with an RMSE value of 2.2122E-14. This indicates that the Fuzzy C-Means method has a higher level of accuracy than the K-Means method.
20

Mawati, Rose, I. Made Sumertajaya, and Farit Mochamad Afendi. "Modified Centroid Selection Method of K-Means Clustering." IOSR Journal of Mathematics 10, no. 2 (2014): 49–53. http://dx.doi.org/10.9790/5728-10234953.

21

Alqadi, Ziad, Ghazi M. Qaryouti, and Mohammad Abuzalata. "Enhancing Color Image Clustering using K-Means Method." IJARCCE 9, no. 1 (January 30, 2020): 78–84. http://dx.doi.org/10.17148/ijarcce.2020.9115.

22

Ilhan, Sevinc, Nevcihan Duru, and Esref Adali. "Improved Fuzzy Art Method for Initializing K-means." International Journal of Computational Intelligence Systems 3, no. 3 (2010): 274. http://dx.doi.org/10.2991/ijcis.2010.3.3.3.

23

Li, Youguo, and Haiyan Wu. "A Clustering Method Based on K-Means Algorithm." Physics Procedia 25 (2012): 1104–9. http://dx.doi.org/10.1016/j.phpro.2012.03.206.

24

Pietrzykowski, Marcin. "Mini-model method based on k-means clustering." Przegląd Elektrotechniczny 1, no. 1 (January 5, 2017): 75–78. http://dx.doi.org/10.15199/48.2017.01.18.

25

Sai, L. Nitya, M. Sai Shreya, A. Anjan Subudhi, B. Jaya Lakshmi, and K. B. Madhuri. "Optimal K-Means Clustering Method Using Silhouette Coefficient." International Journal of Applied Research on Information Technology and Computing 8, no. 3 (2017): 335. http://dx.doi.org/10.5958/0975-8089.2017.00030.6.

26

Li, You Guo. "A Clustering Method Based on K-Means Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1697–700. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1697.

Abstract:
In this paper we combine the largest-minimum-distance algorithm and the traditional K-Means algorithm to propose an improved K-Means clustering algorithm. This improved algorithm can make up for the shortcoming of the traditional K-Means algorithm in determining the initial focal points. The improved K-Means algorithm effectively addresses two disadvantages of the traditional algorithm: its strong dependence on the choice of the initial focal points, and its tendency to be trapped in local minima [1][2].
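The largest-minimum-distance seeding the authors combine with K-Means can be sketched in a few lines (an illustration of the general maximin rule; the demo data are an assumption):

```python
import numpy as np

def largest_min_distance_init(X, k):
    """Pick the first focal point arbitrarily, then repeatedly pick the point
    whose distance to its nearest already-chosen focal point is largest."""
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.1, (25, 2)) for m in [(0, 0), (4, 0), (2, 3)]])
seeds = largest_min_distance_init(X, 3)
```

Because each new seed maximizes its distance to all previous seeds, well-separated groups each receive exactly one initial focal point, which is what removes the dependence on a random start.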
27

Ilhan, Sevinc, Nevcihan Duru, and Esref Adali. "Improved Fuzzy Art Method for Initializing K-means." International Journal of Computational Intelligence Systems 3, no. 3 (September 2010): 274–79. http://dx.doi.org/10.1080/18756891.2010.9727698.

28

Rini, D. S., I. Sriliana, P. Novianti, S. Nugroho, and P. Jana. "Spherical K-Means method to determine earthquake clusters." Journal of Physics: Conference Series 1823, no. 1 (March 1, 2021): 012043. http://dx.doi.org/10.1088/1742-6596/1823/1/012043.

29

Rahayuni, Sri, Indra Gunawan, and Ika Okta Kirana. "Material Sales Clustering Using the K-Means Method." JOMLAI: Journal of Machine Learning and Artificial Intelligence 1, no. 1 (March 18, 2022): 85–94. http://dx.doi.org/10.55123/jomlai.v1i1.177.

Abstract:
Along with the increasing growth of technology and the development of science, business competition is also getting faster, and we are required to keep developing a business in order to survive the competition. Family Gypsum is a store whose sales system is the same as a supermarket's: buyers pick up the goods they want to purchase themselves. As a result, data on sales, purchases of goods, and unexpected expenses are not structured properly, so the data only serve as an archive. In this research, data mining is applied using the K-Means calculation process, which provides a standard process for using data mining in various fields, for clustering, because the results of this method are easy to understand and interpret. The results obtained from the K-Means method, implemented in RapidMiner, produce 3 clusters: clusters that do not sell, clusters that sell, and clusters that sell very well. The red cluster contains 2 items, the green cluster 28 items, and the blue cluster 30 items. The results of this study can inform the Family Gypsum store, Jl. H. Ulakma Sinaga, Red Rambung, about which sales deserve more attention based on the clustering performed.
30

Smyrlis, P. N., D. C. Tsouros, and M. G. Tsipouras. "Constrained K-Means Classification." Engineering, Technology & Applied Science Research 8, no. 4 (August 18, 2018): 3203–8. http://dx.doi.org/10.48084/etasr.2149.

Abstract:
Classification-via-clustering (CvC) is a widely used method, using a clustering procedure to perform classification tasks. In this paper, a novel K-Means-based CvC algorithm is presented, analysed and evaluated. Two additional techniques are employed to reduce the effects of the limitations of K-Means. A hypercube of constraints is defined for each centroid, and weights are acquired for each attribute of each class, for the use of a weighted Euclidean distance as the similarity criterion in the clustering procedure. Experiments are made with 42 well-known classification datasets. The experimental results demonstrate that the proposed algorithm outperforms CvC with simple K-Means.
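The plain CvC baseline the paper improves on can be sketched as: cluster the training set with K-Means, label each centroid by majority vote of its members, then classify queries by nearest centroid. This sketch omits the paper's hypercube constraints and attribute weights; the seeding and data are assumptions.

```python
import numpy as np

def cvc_fit(X, y, k, iters=100):
    centers = [X[0]]
    for _ in range(1, k):                  # deterministic maximin seeding
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                 # plain K-Means
        lab = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([X[lab == j].mean(0) if (lab == j).any() else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    votes = np.array([np.bincount(y[lab == j]).argmax() if (lab == j).any() else 0
                      for j in range(k)])  # majority class label per cluster
    return centers, votes

def cvc_predict(Xq, centers, votes):
    return votes[((Xq[:, None] - centers[None]) ** 2).sum(-1).argmin(1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 0.2, (30, 2)), rng.normal((5, 5), 0.2, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
centers, votes = cvc_fit(X, y, k=2)
pred = cvc_predict(np.array([[0.1, -0.1], [4.9, 5.2]]), centers, votes)
```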
31

Belhaouari, Samir Brahim, Shahnawaz Ahmed, and Samer Mansour. "Optimized K-Means Algorithm." Mathematical Problems in Engineering 2014 (2014): 1–14. http://dx.doi.org/10.1155/2014/506480.

Abstract:
The localization of the region of interest (ROI) containing the face is the first step in any automatic recognition system and a special case of face detection. However, face localization from an input image is a challenging task due to possible variations in location, scale, pose, occlusion, illumination, facial expression, and cluttered background. In this paper we introduce a new optimized k-means algorithm that finds the optimal centers for each cluster, corresponding to the global minimum of the k-means objective. The method was tested by locating faces in input images based on image segmentation; it separates the input image into two classes: faces and non-faces. To evaluate the proposed algorithm, the MIT-CBCL, BioID, and Caltech datasets are used. The results show significant localization accuracy.
32

Qona'ah, Niswatul, Alvita Rachma Devi, and I. Made Gde Meranggi Dana. "Laboratory Clustering using K-Means, K-Medoids, and Model-Based Clustering." Indonesian Journal of Applied Statistics 3, no. 1 (July 23, 2020): 64. http://dx.doi.org/10.13057/ijas.v3i1.40823.

Abstract:
Institut Teknologi Sepuluh Nopember (ITS) is an institute with about 100 laboratories that support academic activities such as teaching, research, and community service. This study clusters the laboratories at ITS based on their productivity in carrying out their functions. The methods used to group the laboratories are K-Means, K-Medoids, and Model-Based Clustering. K-Means and K-Medoids are non-hierarchical clustering methods for which the number of clusters is specified in advance. The difference between them is that K-Medoids selects data points as centers (medoids or exemplars), whereas K-Means uses means. Model-Based Clustering, in contrast, is a clustering method based on statistical models; this statistical approach is quite developed and is considered to have advantages over other classical methods. The cluster methods are compared using the Integrated Convergent Divergent Random (ICDR). The best method based on ICDR is Model-Based Clustering.

Keywords: K-Means, K-Medoids, Laboratory, Model-Based Clustering
33

Kao, Hsiou Hen, Li Ching Huang, Miin Jye Wen, and Kuo Lung Wu. "The K-Exemplars Clustering Method." Applied Mechanics and Materials 376 (August 2013): 224–30. http://dx.doi.org/10.4028/www.scientific.net/amm.376.224.

Abstract:
In order to apply the concepts of k-means with arbitrary specified dissimilarity measures, we propose a k-exemplars clustering method that modifies k-means by restricting the cluster centers to data points. The proposed method not only achieves clustering accuracy similar to k-means but also converges faster. According to the definition of the objective function of k-exemplars, the proposed method can be used to deal with a relational data set, and the cluster centers (exemplars) of each cluster will also be extracted. Hence, k-exemplars can work in an environment with specified dissimilarity measures.
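The exemplar-restricted update can be sketched on a precomputed dissimilarity matrix: the assignment step is unchanged, while the update step picks, inside each cluster, the member with the smallest total dissimilarity to the others. This is a k-medoids-style sketch of the idea; the naive initialization and demo data are assumptions.

```python
import numpy as np

def k_exemplars(D, k, iters=50):
    """k-means-style clustering with centers restricted to data points,
    driven entirely by a dissimilarity matrix D (no vector means needed)."""
    exemplars = list(range(k))                     # naive init: first k points
    labels = np.argmin(D[:, exemplars], axis=1)
    for _ in range(iters):
        new = []
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                new.append(exemplars[j])
                continue
            sub = D[np.ix_(members, members)]
            new.append(int(members[sub.sum(1).argmin()]))  # most central member
        if new == exemplars:
            break
        exemplars = new
        labels = np.argmin(D[:, exemplars], axis=1)
    return np.array(exemplars), labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal((0, 0), 0.1, (20, 2)), rng.normal((6, 0), 0.1, (20, 2))])
D = np.linalg.norm(X[:, None] - X[None], axis=-1)  # any dissimilarity works here
exemplars, labels = k_exemplars(D, 2)
```

Because only D is consulted, the same code runs unchanged on relational data where no coordinate representation (and hence no mean) exists.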
34

Oktarina, Cahyani, Khairil Anwar Notodiputro, and Indahwati Indahwati. "COMPARISON OF K-MEANS CLUSTERING METHOD AND K-MEDOIDS ON TWITTER DATA." Indonesian Journal of Statistics and Its Applications 4, no. 1 (February 28, 2020): 189–202. http://dx.doi.org/10.29244/ijsa.v4i1.599.

Abstract:
The presidential election is a political event that occurs in Indonesia once every five years. Public satisfaction and dissatisfaction with political issues have led to an increase in the number of political opinion tweets. The purpose of this study is to examine the performance of the k-means and k-medoids methods on Twitter data, specifically tweets about the 2019 presidential election. The data used in this study are primary data taken from Muhyi's research, which were then text-mined. Because these data had been processed by Muhyi to analyze the electability of the 2019 presidential candidate pairs, preprocessing was carried out for this study to analyze the tendency of tweets to side with candidate pair one or two. The difference in the preprocessing of this research from the previous research is the cleaning of duplicate data and normalization. The results of this study indicate that the optimal numbers of clusters resulting from the k-means method and the k-medoids method differ.
35

Assegaf, Alwi, Moch Abdul Mukid, and Abdul Hoyyi. "Analisis Kesehatan Bank Menggunakan Local Mean K-Nearest Neighbor dan Multi Local Means K-Harmonic Nearest Neighbor." Jurnal Gaussian 8, no. 3 (August 30, 2019): 343–55. http://dx.doi.org/10.14710/j.gauss.v8i3.26679.

Abstract:
Classification methods continue to develop in order to obtain more accurate classification results. The purpose of this research is to compare two k-Nearest Neighbor (KNN) methods that have been developed, namely the Local Mean k-Nearest Neighbor (LMKNN) and the Multi Local Means k-Harmonic Nearest Neighbor (MLM-KHNN), taking as a case study the complete financial statements of listed banks recorded at Bank Indonesia in 2017. LMKNN is a method that aims to improve classification performance and reduce the influence of outliers, and MLM-KHNN is a method that aims to reduce sensitivity to a single value. This study uses seven indicators to measure the soundness of a bank: the Capital Adequacy Ratio, Non-Performing Loans, Loan to Deposit Ratio, Return on Assets, Return on Equity, Net Interest Margin, and Operating Expenses to Operating Income, with bank health status classified as very good (class 1), good (class 2), quite good (class 3), or poor (class 4). The measure of accuracy of the classification results used is the Apparent Error Rate (APER). The best classification result of the LMKNN method is obtained with 80% training data and 20% test data and k=7, which produces the smallest APER of 0.0556 and an accuracy of 94.44%, while the best classification result of the MLM-KHNN method is obtained with 80% training data and 20% test data and k=3, which produces the smallest APER of 0.1667 and an accuracy of 83.33%. Based on the APER calculation, the LMKNN method is better than MLM-KHNN at classifying the health status of banks in Indonesia.

Keywords: Classification, Local Mean k-Nearest Neighbor (LMKNN), Multi Local Means k-Harmonic Nearest Neighbor (MLM-KHNN), Measure of accuracy of classification
36

Cha, Su-Ram, Jeong-Tae Kim, and Min-Seok Kim. "Automatic Dynamic Range Improvement Method using Histogram Modification and K-means Clustering." Journal of Broadcast Engineering 16, no. 6 (November 30, 2011): 1047–57. http://dx.doi.org/10.5909/jeb.2011.16.6.1047.

37

Gaddam, Shekhar R., Vir V. Phoha, and Kiran S. Balagani. "K-Means+ID3: A Novel Method for Supervised Anomaly Detection by Cascading K-Means Clustering and ID3 Decision Tree Learning Methods." IEEE Transactions on Knowledge and Data Engineering 19, no. 3 (March 2007): 345–54. http://dx.doi.org/10.1109/tkde.2007.44.

38

Dyczkowska, Joanna. "Usefulness of K-means Method in Detection Corporate Crisis." European Financial and Accounting Journal 5, no. 2 (June 1, 2010): 53–70. http://dx.doi.org/10.18267/j.efaj.49.

39

Kumar, Narander, Vishal Verma, and Vipin Saxena. "Cluster Analysis in Data Mining using K-Means Method." International Journal of Computer Applications 76, no. 12 (August 23, 2013): 11–14. http://dx.doi.org/10.5120/13298-0748.

40

Wang, Fangzheng. "Method of Fruit Image Segmentation by Improved K-Means." Advance Journal of Food Science and Technology 10, no. 11 (April 15, 2016): 838–40. http://dx.doi.org/10.19026/ajfst.10.2271.

41

Kumar, Ajay, and Shishir Kumar. "Density Based Initialization Method for K-Means Clustering Algorithm." International Journal of Intelligent Systems and Applications 9, no. 10 (October 8, 2017): 40–48. http://dx.doi.org/10.5815/ijisa.2017.10.05.

42

Dong, Shi, Wei Ding, Jian Gong, and Dingding Zhou. "Flow cluster algorithm based on improved K-means method." IETE Journal of Research 59, no. 4 (2013): 326. http://dx.doi.org/10.4103/0377-2063.118021.

43

Xue, Huiwen, Haochen Li, and Yanfei Wang. "An Improved K-means Method with Density Distribution Analysis." MATEC Web of Conferences 176 (2018): 01019. http://dx.doi.org/10.1051/matecconf/201817601019.

Abstract:
In this paper, a novel K-means clustering algorithm is proposed. Before running the traditional K-means, the cluster centers have to be randomly selected, which influences both the time cost and the accuracy. To solve this problem, we apply density distribution analysis to the traditional K-means. A reasonable cluster should have a dense inside structure, meaning the points in the same cluster should tightly surround the center while being separated from other cluster centers. Based on this assumption, two quantities are first introduced: the local density of a cluster center ρi and its separation degree δi; then reasonable cluster center candidates are selected from the original data. We ran our algorithm on three synthetic datasets and a real bank business dataset to evaluate its accuracy and efficiency. Compared with traditional K-means and K-means++, the results demonstrate that the improved method performs better.
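The two quantities can be sketched directly: a local density ρ_i (here a Gaussian kernel sum) and the distance δ_i to the nearest point of higher density; points scoring high on ρ_i·δ_i make good center candidates. This is a sketch in the spirit of density-peaks selection; the cutoff `dc` and the demo data are assumptions.

```python
import numpy as np

def center_candidates(X, dc, k):
    D = np.linalg.norm(X[:, None] - X[None], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(1)            # local density rho_i
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = rho > rho[i]                      # points denser than i
        delta[i] = D[i, higher].min() if higher.any() else D[i].max()
    return np.argsort(rho * delta)[-k:]            # top-k candidate indices

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.2, (40, 2)) for m in [(0, 0), (5, 0), (0, 5)]])
idx = center_candidates(X, dc=0.5, k=3)
```

Only a cluster's density peak combines large ρ with large δ (its nearest denser point lies in another cluster), so the top-scoring candidates land one per cluster, replacing the random initialization.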
44

Meng, Chun, Yuanyuan Lv, Long You, and Yuchen Yue. "Intrusion Detection Method Based on Improved K-Means Algorithm." Journal of Physics: Conference Series 1302 (August 2019): 032011. http://dx.doi.org/10.1088/1742-6596/1302/3/032011.

45

Wu, Zhijun, Rong Li, and Changliang Li. "Adaptive Speech Information Hiding Method Based on K-Means." IEEE Access 8 (2020): 23308–16. http://dx.doi.org/10.1109/access.2020.2970194.

46

Güngör, Zülal, and Alper Ünler. "K-Harmonic means data clustering with tabu-search method." Applied Mathematical Modelling 32, no. 6 (June 2008): 1115–25. http://dx.doi.org/10.1016/j.apm.2007.03.011.

47

Gourevitch, B., and R. Le Bouquin-Jeannes. "K-means clustering method for auditory evoked potentials selection." Medical and Biological Engineering and Computing 41, no. 4 (July 2003): 397–402. http://dx.doi.org/10.1007/bf02348081.

48

Rocci, Roberto, Stefano Antonio Gattone, and Maurizio Vichi. "A New Dimension Reduction Method: Factor Discriminant K-means." Journal of Classification 28, no. 2 (July 2011): 210–26. http://dx.doi.org/10.1007/s00357-011-9085-9.

49

Saputra, Dhio. "Goods Stock Management using the K-Means Algorithm Method." Jurnal Teknologi 10, no. 1 (April 30, 2020): 22–45. http://dx.doi.org/10.35134/jitekin.v9i2.15.

Abstract:
The grouping of Mazaya products at PT. Bougenville Anugrah is still done manually when calculating purchases, sales, and product inventories, which requires time and data. For this reason, research is needed to optimize the inventory of Mazaya goods through computerization. The method used in this research is K-Means clustering on sales data of Mazaya products. The data processed are the purchases, sales, and remaining inventory of 40 Mazaya products from March to July 2019. The data are grouped into 3 clusters: cluster 0 for the non-selling criterion, cluster 1 for the selling criterion, and cluster 2 for the best-selling criterion. The test results obtained are cluster 0 with 13 data points, cluster 1 with 25 data points, and cluster 2 with 2 data points. Thus, to optimize inventory, goods in cluster 2 should be stocked in greater quantity, saving the cost of managing Mazaya products that are unavailable. The K-Means clustering method can be used for data processing with data mining to group data according to criteria.
50

Gay, Robin, Jérémie Lecoutre, Nicolas Menouret, Arthur Morillon, and Pascal Monasse. "Bilateral K-Means for Superpixel Computation (the SLIC Method)." Image Processing On Line 12 (April 21, 2022): 72–91. http://dx.doi.org/10.5201/ipol.2022.373.
