Academic literature on the topic 'K-means clustering'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'K-means clustering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "K-means clustering"

1. Hedar, Abdel-Rahman, Abdel-Monem Ibrahim, Alaa Abdel-Hakim, and Adel Sewisy. "K-Means Cloning: Adaptive Spherical K-Means Clustering." Algorithms 11, no. 10 (October 6, 2018): 151. http://dx.doi.org/10.3390/a11100151.

Abstract: We propose a novel method for adaptive K-means clustering that overcomes the problems of the traditional K-means algorithm. Specifically, the proposed method does not require prior knowledge of the number of clusters, and the initial identification of the cluster elements has no negative impact on the final generated clusters. Inspired by cell cloning in microorganism cultures, each added data sample causes the existing cluster 'colonies' to evaluate, together with the other clusters, various merging or splitting actions in order to reach the optimum cluster set. The proposed algorithm is suited to clustering data in isolated or overlapping compact spherical clusters. Experimental results support the effectiveness of this clustering algorithm.

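For context, the sketch below (illustrative Python, not the paper's method) shows the standard fixed-k Lloyd's algorithm whose limitations the adaptive cloning approach above targets: k must be supplied in advance, and random initialization can affect the final clusters. The two-blob dataset is invented.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: the number of clusters k must be chosen up front."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(n_iter):
        # Assignment step: nearest centroid under squared Euclidean distance.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: move each centroid to the mean of its members.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two synthetic blobs; with the wrong k this baseline cannot recover them.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels, centroids = lloyd_kmeans(X, k=2)
```
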
2. Jhun, Myoungshic. "Bootstrapping K-Means Clustering." Journal of the Japanese Society of Computational Statistics 3, no. 1 (1990): 1–14. http://dx.doi.org/10.5183/jjscs1988.3.1.

3. Timmerman, Marieke E., Eva Ceulemans, Kim De Roover, and Karla Van Leeuwen. "Subspace K-means clustering." Behavior Research Methods 45, no. 4 (March 23, 2013): 1011–23. http://dx.doi.org/10.3758/s13428-013-0329-y.

4. Xiao, Ethan. "Comprehensive K-Means Clustering." Journal of Computer and Communications 12, no. 3 (2024): 146–59. http://dx.doi.org/10.4236/jcc.2024.123009.

5. Yu, Hengjun, Kohei Inoue, Kenji Hara, and Kiichi Urahama. "A Robust K-Means for Document Clustering." Journal of the Institute of Industrial Applications Engineers 6, no. 2 (April 25, 2018): 60–65. http://dx.doi.org/10.12792/jiiae.6.60.

6. Madhuri, K., and K. Srinivasa Rao. "Social Media Analysis using Optimized K-Means Clustering." International Journal of Trend in Scientific Research and Development 3, no. 2 (February 28, 2019): 953–57. http://dx.doi.org/10.31142/ijtsrd21558.

7. Ravindran, R. Malathi, and Antony Selvadoss Thanamani. "K-Means Document Clustering using Vector Space Model." Bonfring International Journal of Data Mining 5, no. 2 (July 31, 2015): 10–14. http://dx.doi.org/10.9756/bijdm.8076.

8. Hua, C., Q. Chen, H. Wu, and T. Wada. "RK-Means Clustering: K-Means with Reliability." IEICE Transactions on Information and Systems E91-D, no. 1 (January 1, 2008): 96–104. http://dx.doi.org/10.1093/ietisy/e91-d.1.96.

9. Jain, Preeti, and Bala Buksh. "Accelerated K-means Clustering Algorithm." International Journal of Information Technology and Computer Science 8, no. 10 (October 8, 2016): 39–46. http://dx.doi.org/10.5815/ijitcs.2016.10.05.

10. Farzizadeh, Mohammad, and Ali Abdolahi. "Clustering Students By K-means." International Journal of Computer Applications Technology and Research 5, no. 8 (July 26, 2016): 530–32. http://dx.doi.org/10.7753/ijcatr0508.1006.

Dissertations / Theses on the topic "K-means clustering"

1. Buchta, Christian, Martin Kober, Ingo Feinerer, and Kurt Hornik. "Spherical k-Means Clustering." American Statistical Association, 2012. http://epub.wu.ac.at/4000/1/paper.pdf.

Abstract: Clustering text documents is a fundamental task in modern data analysis, requiring approaches which perform well both in terms of solution quality and computational efficiency. Spherical k-means clustering is one approach to address both issues, employing cosine dissimilarities to perform prototype-based partitioning of term weight representations of the documents. This paper presents the theory underlying the standard spherical k-means problem and suitable extensions, and introduces the R extension package skmeans which provides a computational environment for spherical k-means clustering featuring several solvers: a fixed-point and genetic algorithm, and interfaces to two external solvers (CLUTO and Gmeans). Performance of these solvers is investigated by means of a large-scale benchmark experiment.

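The following is a minimal Python sketch of the spherical k-means idea the abstract describes: points and prototypes live on the unit sphere and assignment maximizes cosine similarity. It is a generic illustration, not the skmeans package's fixed-point or genetic solvers.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # project rows onto the unit sphere
    prototypes = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = (X @ prototypes.T).argmax(axis=1)     # assign by maximum cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                prototypes[j] = m / np.linalg.norm(m)  # normalized mean direction
    return labels, prototypes
```
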
2. Musco, Cameron N. "Dimensionality Reduction for k-Means Clustering." S.M. thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101473.

Abstract: In this thesis we study dimensionality reduction techniques for approximate k-means clustering. Given a large dataset, we consider how to quickly compress to a smaller dataset (a sketch), such that solving the k-means clustering problem on the sketch will give an approximately optimal solution on the original dataset. First, we provide an exposition of technical results of [CEM+15], which show that provably accurate dimensionality reduction is possible using common techniques such as principal component analysis, random projection, and random sampling. We next present empirical evaluations of dimensionality reduction techniques to supplement our theoretical results. We show that our dimensionality reduction algorithms, along with heuristics based on these algorithms, indeed perform well in practice. Finally, we discuss possible extensions of our work to neurally plausible algorithms for clustering and dimensionality reduction. This thesis is based on joint work with Michael Cohen, Samuel Elder, Nancy Lynch, Christopher Musco, and Madalina Persu.

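As a rough illustration of one sketching technique the thesis surveys, the snippet below compresses the data onto its top k principal components before clustering. It assumes scikit-learn's PCA and KMeans; the matrix sizes are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))        # tall data matrix, d = 200
k = 10
X_sketch = PCA(n_components=k).fit_transform(X)   # compress to k dimensions
labels = KMeans(n_clusters=k, n_init=10).fit_predict(X_sketch)
```
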
3. Persu, Elena-Mădălina. "Approximate k-Means Clustering through Random Projections." S.M. thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99847.

Abstract: Using random row projections, we show how to approximate a data matrix A with a much smaller sketch Ã that can be used to solve a general class of constrained k-rank approximation problems to within (1 + ε) error. Importantly, this class of problems includes k-means clustering. By reducing data points to just O(k) dimensions, our methods generically accelerate any exact, approximate, or heuristic algorithm for these ubiquitous problems. For k-means dimensionality reduction, we provide (1 + ε) relative error results for random row projections which improve on the (2 + ε) prior known constant factor approximation associated with this sketching technique, while preserving the number of dimensions. For k-means clustering, we show how to achieve a (9 + ε) approximation by Johnson-Lindenstrauss projecting data points to just O(log k/ε²) dimensions. This gives the first result that leverages the specific structure of k-means to achieve dimension independent of input size and sublinear in k.

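A hedged sketch of the random-projection recipe in the abstract: project the points onto roughly O(log k / ε²) Gaussian directions, then run any k-means solver on the sketch. The constants and dataset are arbitrary illustrations, not the thesis's tuned parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 512))
k, eps = 20, 0.5
m = int(np.ceil(np.log(k) / eps**2))                # target dimension ~ O(log k / eps^2)
G = rng.normal(size=(X.shape[1], m)) / np.sqrt(m)   # scaled Gaussian projection matrix
labels = KMeans(n_clusters=k, n_init=10).fit_predict(X @ G)
```
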
4. Xiang, Chongyuan. "Private k-Means Clustering: Algorithms and Applications." M.Eng. thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106394.

Abstract: Today is a new era of big data. We contribute our personal data for the common good simply by using our smart phones, searching the web and doing online transactions. Researchers, companies and governments use the collected data to learn various user behavior patterns and make impactful decisions based on that. Is it possible to publish and run queries on those databases without disclosing information about any specific individual? Differential privacy is a strong notion of privacy which guarantees that very little will be learned about individual records in the database, no matter what the attackers already know or wish to learn. Still, there is no practical system applying differential privacy algorithms for clustering points on real databases. This thesis describes the construction of small coresets for computing k-means clustering of a set of points while preserving differential privacy. As a result, it gives the first k-means clustering algorithm that is both differentially private and has an approximation error that depends sub-linearly on the data's dimension d; previous results introduced errors that are exponential in d. The thesis implements this algorithm and uses it to create differentially private location data from GPS tracks. Specifically, the algorithm allows clustering GPS databases generated from mobile nodes, while letting the user control the noise introduced for privacy. The thesis also provides experimental results for the system and algorithms, and compares them to existing techniques. To the best of my knowledge, this is the first practical system that enables differentially private clustering on real data.

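The snippet below is a deliberately simplified, toy differentially-private k-means update, not the thesis's coreset construction: each cluster's point-sum and count are perturbed with Laplace noise before the centroid is computed. It assumes coordinates pre-scaled to [0, 1], and the noise scales and epsilon accounting are illustrative only.

```python
import numpy as np

def dp_centroid_update(X, labels, k, epsilon, rng):
    """One noisy centroid update (toy sketch; assumes X scaled to [0, 1]^d)."""
    d = X.shape[1]
    centroids = np.zeros((k, d))
    for j in range(k):
        members = X[labels == j]
        # Perturb the per-cluster sum and count with Laplace noise.
        noisy_sum = members.sum(axis=0) + rng.laplace(scale=d / epsilon, size=d)
        noisy_count = max(1.0, len(members) + rng.laplace(scale=1.0 / epsilon))
        centroids[j] = noisy_sum / noisy_count
    return centroids
```
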
5. Nelson, Joshua. "On K-Means Clustering Using Mahalanobis Distance." Thesis, North Dakota State University, 2012. https://hdl.handle.net/10365/26766.

Abstract: A problem that arises quite frequently in statistics is that of identifying groups, or clusters, of data within a population or sample. The most widely used procedure to identify clusters in a set of observations is known as K-Means. The main limitation of this algorithm is that it uses the Euclidean distance metric to assign points to clusters; hence, it operates well only if the covariance structures of the clusters are nearly spherical and homogeneous in nature. To remedy this shortfall in the K-Means algorithm, the Mahalanobis distance metric was used to capture the variance structure of the clusters. The issue with using Mahalanobis distances is that the accuracy of the distance is sensitive to initialization. If this method serves as a significant improvement over its competitors, it will provide a useful tool for analyzing clusters.

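A minimal sketch of the central idea: replace the Euclidean assignment step with per-cluster Mahalanobis distances so that elongated covariance structure is respected. The initialization and the covariance regularization term are naive assumptions for illustration, not the thesis's choices.

```python
import numpy as np

def mahalanobis_kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centroids = X[rng.choice(n, size=k, replace=False)]
    covs = np.array([np.eye(d) for _ in range(k)])
    for _ in range(n_iter):
        # Assignment: per-cluster Mahalanobis distance (x - c)' S^{-1} (x - c).
        invs = np.linalg.inv(covs)
        diff = X[:, None, :] - centroids[None, :, :]           # shape (n, k, d)
        dist = np.einsum('nkd,kde,nke->nk', diff, invs, diff)
        labels = dist.argmin(axis=1)
        # Update: recompute means and covariances from cluster members.
        for j in range(k):
            members = X[labels == j]
            if len(members) > d:          # enough points for a stable estimate
                centroids[j] = members.mean(axis=0)
                covs[j] = np.cov(members.T) + 1e-6 * np.eye(d)
    return labels, centroids
```
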
6. Li, Yanjun. "High Performance Text Document Clustering." Wright State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=wright1181005422.

7. Eliasson, Philip, and Niklas Rosén. "Efficient K-means Clustering and the Importance of Seeding." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134910.

Abstract: Data clustering is the process of grouping data elements based on some aspect of similarity between the elements in the group. Clustering has many applications such as data compression, data mining, pattern recognition and machine learning, and there are many different clustering methods. This paper examines the k-means method of clustering and how the choice of initial seeding affects the result. Lloyd's algorithm is used as a baseline and is compared to an improved algorithm utilizing kd-trees. Two different methods of seeding are compared: random seeding and partial clustering seeding.

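The thesis's partial-clustering seeding is not reproduced here; as a generic illustration of how seeding plugs into k-means, the sketch below contrasts uniform random seeding with the well-known k-means++ rule (each new seed is sampled with probability proportional to its squared distance from the seeds already chosen).

```python
import numpy as np

def seed_random(X, k, rng):
    """Pick k distinct data points uniformly at random."""
    return X[rng.choice(len(X), size=k, replace=False)]

def seed_kmeanspp(X, k, rng):
    """k-means++ seeding: spread seeds out proportionally to squared distance."""
    seeds = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([((X - s) ** 2).sum(axis=1) for s in seeds], axis=0)
        seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(seeds)
```

Either function can supply the initial centroids for a Lloyd-style iteration like the baseline sketched earlier on this page; k-means++ tends to spread the seeds and reduce the chance of bad local minima.
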
8. Kondo, Yumi. "Robustification of the Sparse K-means Clustering Algorithm." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/37093.

Abstract: Searching a dataset for the "natural grouping/clustering" is an important exploratory technique for understanding complex multivariate datasets. One might expect that the true underlying clusters present in a dataset differ only with respect to a small fraction of the features. Furthermore, one might fear that the dataset contains potential outliers. Through simulation studies, we find that an existing sparse clustering method can be severely affected by a single outlier. In this thesis, we develop a robust clustering method that is also able to perform variable selection: we robustified sparse K-means (Witten and Tibshirani [28]), based on the idea of trimmed K-means introduced by Gordaliza [7] and Gordaliza [8]. Since high dimensional datasets often contain quite a few missing observations, we made our proposed method capable of handling datasets with missing values. The performance of the proposed robust sparse K-means is assessed in various simulation studies and two data analyses. The simulation studies show that robust sparse K-means performs better than other competing algorithms in terms of both the selection of features and the selection of a partition when datasets are contaminated. The analysis of a microarray dataset shows that robust sparse K-means best reflects the oestrogen receptor status of the patients among all competing algorithms. We also adapt Clest (Dudoit and Fridlyand [5]) to our robust sparse K-means to provide an automatic robust procedure for selecting the number of clusters. Our proposed methods are implemented in the R package RSKC.

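A small sketch of the trimming idea the robustification builds on (trimmed K-means in the sense of Gordaliza): at each iteration, the fraction alpha of points farthest from their centroids is excluded from the mean updates, so isolated outliers cannot drag centroids away. This is illustrative Python; the full robust sparse method, with feature weights and missing-value handling, is in the R package RSKC.

```python
import numpy as np

def trimmed_kmeans(X, k, alpha=0.1, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels, dist = d.argmin(axis=1), d.min(axis=1)
        keep = dist <= np.quantile(dist, 1 - alpha)   # trim the alpha worst-fitting points
        for j in range(k):
            members = X[keep & (labels == j)]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return labels, centroids
```
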
9. Chowuraya, Tawanda. "Online Content Clustering Using Variant K-Means Algorithms." MTech thesis, Cape Peninsula University of Technology, 2019. http://hdl.handle.net/20.500.11838/3089.

Abstract: We live at a time when so much information is created, and much of it is redundant. There is a huge amount of online information in the form of news articles that discuss similar stories, and the number of articles is projected to grow. This growth makes it difficult for a person to process all that information in order to stay current on a subject. There is a need for a solution that can organize this similar information into specific themes. The solution is a branch of artificial intelligence (AI) called machine learning (ML), using clustering algorithms: similar information is grouped into containers, so that people can be presented with information on their subject of interest grouped together, and the information in a group can be further processed into a summary. This research focuses on unsupervised learning. The literature indicates that K-Means is one of the most widely used unsupervised clustering algorithms: it is easy to learn, easy to implement, and efficient. However, there are many variants of K-Means, and the research seeks to find a variant that can cluster duplicate or similar news articles into correct semantic groups with acceptable performance. The research is an experiment. News articles were collected from the internet using gocrawler, a program that takes Uniform Resource Locators (URLs) as arguments and collects a story from the website each URL points to; the URLs are read from a repository. The stories arrive riddled with adverts and images from the web page ("dirty text") and are sanitized, that is, cleaned of adverts and images. The clean text is stored in a repository and serves as input for the algorithms, along with the K value that all K-Means based variants require to define the number of clusters to be produced. The stories were manually classified, and each was labelled with the class to which it belongs so that the accuracy of machine clustering could be checked. The data collection process itself was not unsupervised, but the clustering algorithms are entirely unsupervised. A total of 45 stories were collected and 9 manual clusters were identified; under each manual cluster there are sub-clusters of stories about one specific event. The performance of all the variants was compared to find the one with the best clustering results, checking each algorithm's output against the manual classification. Each K-Means variant was run on the same data set of 45 stories with the same settings:
• dimensionality of the feature vectors,
• window size (maximum distance between the current and predicted word in a sentence),
• minimum word frequency,
• a specified range of words to ignore,
• number of threads used to train the model,
• the training algorithm, either distributed memory (PV-DM) or distributed bag of words (PV-DBOW),
• the initial learning rate, which decreases to a minimum alpha as training progresses,
• number of iterations per cycle,
• final learning rate,
• number of clusters to form,
• the number of times the algorithm is run,
• the method used for initialization.
The results obtained show that K-Means can perform better than K-Modes; they are tabulated and presented in graphs in chapter six. Clustering can be improved by incorporating Named Entity Recognition (NER) into the K-Means algorithms, or by implementing a multi-stage clustering technique in which initial clusters are clustered again to achieve finer results.

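A compressed version of the embed-and-cluster pipeline the abstract describes, assuming gensim 4's Doc2Vec API (whose parameters correspond to the settings listed above) and scikit-learn's KMeans; the four-line toy corpus is invented.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

stories = ["election results announced today",
           "team wins the championship final",
           "parliament debates the new election law",
           "coach praises the winning team"]
docs = [TaggedDocument(words=s.split(), tags=[i]) for i, s in enumerate(stories)]

# vector_size/window/min_count/workers/dm/alpha/min_alpha/epochs mirror the
# settings enumerated in the abstract (dm=1 selects PV-DM, dm=0 PV-DBOW).
model = Doc2Vec(docs, vector_size=50, window=5, min_count=1,
                workers=2, dm=1, alpha=0.025, min_alpha=0.0001, epochs=40)
vectors = [model.dv[i] for i in range(len(stories))]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
```
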
10. Li, Songzi. "K-groups: A Generalization of K-means by Energy Distance." Bowling Green State University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1428583805.

Books on the topic "K-means clustering"

1. Wu, Junjie. Advances in K-means Clustering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29807-3.

2. Roy, Falguni. Seismic Signal Detection Using K-means Clustering Algorithm. Mumbai: Bhabha Atomic Research Centre, 2009.

3. SpringerLink (Online service), ed. Advances in K-means Clustering: A Data Mining Thinking. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

4. Schreiber, Thomas. A Voronoi Diagram Based Adaptive K-means-type Clustering Algorithm for Multidimensional Weighted Data. Kaiserslautern: Universität Kaiserslautern, 1991.

5. Kokula Krishna Hari, K., and K. Saravanan, eds. Identification of Brain Regions Related to Alzheimers' Diseases using MRI Images Based on Eigenbrain and K-means Clustering. Tiruppur, Tamil Nadu, India: Association of Scientists, Developers and Faculties, 2016.

6. Wu, Junjie. Advances in K-means Clustering: A Data Mining Thinking. Springer, 2012.

7. Wu, Junjie. Advances in K-Means Clustering: A Data Mining Thinking. Springer Berlin / Heidelberg, 2014.

8. Kaur, Arvind, and Nancy Nancy. Comparative Analysis of Hybrid Clustering Algorithm with K-Means. Independently Published, 2018.

9. Wong, M. Anthony. Using the K-Means Clustering Method As a Density Estimation Procedure. Creative Media Partners, LLC, 2018.

10. Tuckfield, Bradford, and Alok Malik. Applied Unsupervised Learning with R: Uncover Hidden Relationships and Patterns with K-Means Clustering, Hierarchical Clustering, and PCA. Packt Publishing, Limited, 2019.

Book chapters on the topic "K-means clustering"

1. Zhou, Hong. "K-Means Clustering." In Learn Data Mining Through Excel, 35–47. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5982-5_3.

2. Ng, Annalyn, and Kenneth Soo. "k-Means-Clustering." In Data Science – was ist das eigentlich?!, 19–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2018. http://dx.doi.org/10.1007/978-3-662-56776-0_2.

3. Dinov, Ivo D. "k-Means Clustering." In Data Science and Predictive Analytics, 443–73. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-72347-1_13.

4. Mannor, Shie, Xin Jin, Jiawei Han, and Xinhua Zhang. "K-Means Clustering." In Encyclopedia of Machine Learning, 563–64. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_425.

5. Stanberry, Larissa. "Clustering, k-Means." In Encyclopedia of Systems Biology, 430–31. New York, NY: Springer New York, 2013. http://dx.doi.org/10.1007/978-1-4419-9863-7_1189.

6. Sreevalsan-Nair, Jaya. "K-Means Clustering." In Encyclopedia of Mathematical Geosciences, 1–3. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-26050-7_171-1.

7. Jin, Xin, and Jiawei Han. "K-Means Clustering." In Encyclopedia of Machine Learning and Data Mining, 1–3. Boston, MA: Springer US, 2016. http://dx.doi.org/10.1007/978-1-4899-7502-7_431-1.

8. Jin, Xin, and Jiawei Han. "K-Means Clustering." In Encyclopedia of Machine Learning and Data Mining, 695–97. Boston, MA: Springer US, 2017. http://dx.doi.org/10.1007/978-1-4899-7687-1_431.

9. Zhou, Hong. "K-Means Clustering." In Learn Data Mining Through Excel, 37–52. Berkeley, CA: Apress, 2023. http://dx.doi.org/10.1007/978-1-4842-9771-1_3.

10. Sreevalsan-Nair, Jaya. "K-Means Clustering." In Encyclopedia of Mathematical Geosciences, 695–97. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-85040-1_171.

Conference papers on the topic "K-means clustering"

1. Di Fatta, Giuseppe, Francesco Blasa, Simone Cafiero, and Giancarlo Fortino. "Epidemic K-Means Clustering." In 2011 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2011. http://dx.doi.org/10.1109/icdmw.2011.76.

2. Agarwal, Pankaj K., and Nabil H. Mustafa. "k-Means Projective Clustering." In Proceedings of the Twenty-Third ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. New York: ACM Press, 2004. http://dx.doi.org/10.1145/1055558.1055581.

3. Arandjelovic, Ognjen. "Discriminative k-Means Clustering." In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6707038.

4. Asgharbeygi, Nima, and Arian Maleki. "Geodesic K-means Clustering." In 2008 19th International Conference on Pattern Recognition (ICPR). IEEE, 2008. http://dx.doi.org/10.1109/icpr.2008.4761241.

5. Borgelt, Christian, and Olha Yarikova. "Initializing k-Means Clustering." In 9th International Conference on Data Science, Technology and Applications. SCITEPRESS - Science and Technology Publications, 2020. http://dx.doi.org/10.5220/0009872702600267.

6. Goel, Anurag, and Angshul Majumdar. "Transformed K-means Clustering." In 2021 29th European Signal Processing Conference (EUSIPCO). IEEE, 2021. http://dx.doi.org/10.23919/eusipco54536.2021.9616177.

7. Na, Shi, Liu Xumin, and Guan Yong. "Research on k-means Clustering Algorithm: An Improved k-means Clustering Algorithm." In 2010 Third International Symposium on Intelligent Information Technology and Security Informatics (IITSI). IEEE, 2010. http://dx.doi.org/10.1109/iitsi.2010.74.

8. Dashti, Hesam T., Tiago Simas, Rita A. Ribeiro, Amir Assadi, and Andre Moitinho. "MK-means - Modified K-means Clustering Algorithm." In 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, 2010. http://dx.doi.org/10.1109/ijcnn.2010.5596300.

9. Qi, Jianpeng, Yanwei Yu, Lihong Wang, and Jinglei Liu. "K*-Means: An Effective and Efficient K-Means Clustering Algorithm." In 2016 IEEE International Conferences on Big Data and Cloud Computing (BDCloud), Social Computing and Networking (SocialCom), Sustainable Computing and Communications (SustainCom). IEEE, 2016. http://dx.doi.org/10.1109/bdcloud-socialcom-sustaincom.2016.46.

10. Singh, Vivek Kumar, Nisha Tiwari, and Shekhar Garg. "Document Clustering Using K-Means, Heuristic K-Means and Fuzzy C-Means." In 2011 International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2011. http://dx.doi.org/10.1109/cicn.2011.62.

Reports on the topic "K-means clustering"

1. Kanungo, T., D. M. Mount, N. S. Netanyahu, C. Piatko, R. Silverman, and A. Y. Wu. The Analysis of a Simple k-Means Clustering Algorithm. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada458738.

2. Cordeiro de Amorim, Renato. A Survey on Feature Weighting Based K-Means Algorithms. Web of Open Science, December 2020. http://dx.doi.org/10.37686/ser.v1i2.79.

Abstract: In a real-world data set there is always the possibility, rather high in our opinion, that different features have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.

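To make the surveyed idea concrete, here is a hedged sketch of a feature-weighted assignment step, d(x, c) = Σ_j w_j^β (x_j − c_j)², with the weights treated as fixed inputs; the algorithms in the survey differ precisely in how such weights are learned.

```python
import numpy as np

def weighted_assign(X, centroids, w, beta=2.0):
    """Assign points by weighted squared Euclidean distance to each centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2 * w**beta).sum(axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
centroids = X[:4]
w = np.array([0.7, 0.25, 0.05])   # hypothetical feature weights summing to 1
labels = weighted_assign(X, centroids, w)
```
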
3. Kryzhanivs'kyi, Evstakhii, Liliana Horal, Iryna Perevozova, Vira Shyiko, Nataliia Mykytiuk, and Maria Berlous. Fuzzy Cluster Analysis of Indicators for Assessing the Potential of Recreational Forest Use. [s.n.], October 2020. http://dx.doi.org/10.31812/123456789/4470.

Abstract: The article provides a cluster analysis of the efficiency of recreational forest use in the region, by separate components of the recreational forest use potential. The main stages of the cluster analysis of the recreational forest use level, based on the predetermined components, are determined. Among the agglomerative methods of cluster analysis intended for grouping and combining the objects of study, three types are most common: the hierarchical (tree clustering) method, the K-means clustering method, and the two-step aggregation method. For the correct selection of clusters, a comparative analysis of several methods was performed: arithmetic mean ranks, hierarchical methods followed by dendrogram construction, and the K-means method, a reference method in which the number of groups is specified by the user. The cluster analysis of forestries on twenty analytical features was not confirmed by analysis of variance, so certain objects were re-clustered according to the nine most significant analytical features. As a result, the forestries were grouped into four clusters. The cluster analysis conducted with different methods shows that their combination helps to select reasonable groupings, clearly illustrate the clustering procedure, and rank the obtained forestry clusters.

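As a generic illustration of combining the methods named in the abstract, this sketch runs hierarchical (Ward) clustering and K-means on the same standardized indicator matrix and reads off both four-group partitions; the 30×9 matrix is random stand-in data, not the forestry indicators.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
indicators = StandardScaler().fit_transform(rng.random((30, 9)))  # 30 objects x 9 features

hier = fcluster(linkage(indicators, method="ward"), t=4, criterion="maxclust")
km = KMeans(n_clusters=4, n_init=10).fit_predict(indicators)
print(hier, km)   # compare the two partitions
```
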
4. Herrera, Allen, and Alexander Heifetz. Detection of Anomalies in Gamma Background Radiation Data with K-Means and Self-Organizing Map Clustering Algorithms - Consortium on Nuclear Security Technologies (CONNECT) Q1 Report. Office of Scientific and Technical Information (OSTI), December 2021. http://dx.doi.org/10.2172/1841591.

5. Eshed-Williams, Leor, and Daniel Zilberman. Genetic and Cellular Networks Regulating Cell Fate at the Shoot Apical Meristem. United States Department of Agriculture, January 2014. http://dx.doi.org/10.32747/2014.7699862.bard.

Abstract: The shoot apical meristem establishes plant architecture by continuously producing new lateral organs such as leaves, axillary meristems and flowers throughout the plant life cycle. This unique capacity is achieved by a group of self-renewing pluripotent stem cells that give rise to founder cells, which can differentiate into multiple cell and tissue types in response to environmental and developmental cues. Cell fate specification at the shoot apical meristem is programmed primarily by transcription factors acting in a complex gene regulatory network. In this project we proposed to provide significant understanding of meristem maintenance and cell fate specification by studying four transcription factors acting at the meristem. Our original aim was to identify the direct target genes of the WUS, STM, KNAT6 and CNA transcription factors on a genome-wide scale and the manner in which they regulate their targets. We generated transgenic plants carrying the four TFs with two different tags and performed chromatin immunoprecipitation (ChIP) assays to identify the TFs' direct target genes. Due to unforeseen obstacles we have been delayed in achieving this aim but hope to accomplish it soon. Using the GR-inducible system, a genetic approach and transcriptome analysis (mRNA-seq), we provided a new look at meristem activity and its regulation of morphogenesis and phyllotaxy, and we propose a coherent framework for the role of many factors acting in meristem development and maintenance. We provided evidence for three different mechanisms for the regulation of WUS expression: DNA methylation, a second receptor pathway (the ERECTA receptor), and the CNA TF, which negatively regulates WUS expression in its own domain, the organizing center. We found that once the WUS expression level surpasses a certain threshold it alters cell identity at the periphery of the inflorescence meristem from floral meristem to carpel fate, and when WUS expression is highly elevated in the floral meristem, the meristem becomes indeterminate. We showed that WUS activates cytokinin, inhibits auxin response and represses the genes required for root identity fate, and that a gradual increase in WUSCHEL activity leads to gradual meristem enlargement that affects phyllotaxis. We also propose a model in which the direction of WUS domain expansion, laterally or upward, affects meristem structure differently. We performed mRNA-seq on meristems of different size and structure, followed by k-means clustering, and identified groups of genes that are expressed in specific domains of the meristem. We will integrate these data with the ChIP-seq of the four TFs to add another layer to the genetic network regulating meristem activity.

6. Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.

Abstract: As a critical power source, the diesel engine is widely used in various situations, and diesel engine failure may lead to serious property losses and even accidents. Fault detection can improve the safety of diesel engines and reduce economic loss. The surface vibration signal is often used in non-disassembly fault diagnosis because of its convenient measurement and stability. This paper proposes a novel method for engine fault detection based on vibration signals using variational mode decomposition (VMD), K-means, and a genetic algorithm. The mode number of VMD dramatically affects the accuracy of extracting signal components, so a method based on spectral energy distribution is proposed to determine this parameter, and the quadratic penalty term is optimized according to the SNR. The results show that the optimized VMD can adaptively extract the vibration signal components of the diesel engine. In actual fault diagnosis cases it is difficult to obtain labeled data; a clustering algorithm can complete the classification without labeled data, but is limited by low accuracy. In this paper, the optimized VMD is used to decompose and standardize the vibration signal, a correlation-based feature selection method is then applied to reduce dimensionality, and the results are input into a classifier combining K-means with a genetic algorithm (GA). By introducing and optimizing the genetic algorithm, the number of classes can be selected automatically and the accuracy is significantly improved. This method can carry out adaptive detection of multiple diesel engine faults without labeled data, and compared with many supervised learning algorithms it also achieves high accuracy.

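The paper's GA-based selection of the class count is not reproduced here; as a simpler stand-in illustrating automatic choice of k, the sketch below sweeps candidate values and keeps the one with the best silhouette score, on random stand-in features rather than real VMD output.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 8))      # stand-in for the selected vibration features

best_k, best_score = None, -1.0
for k in range(2, 10):                    # candidate numbers of fault classes
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    score = silhouette_score(features, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```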