Academic literature on the topic 'Clustering spectral'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference papers, reports, and other scholarly sources on the topic 'Clustering spectral.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Clustering spectral"

1

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Full text
Abstract:
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SPECTACL: a method combining the advantages of both approaches, while solving the two mentioned drawbacks. Our method is easy to implement, like Spectral Clustering, and is theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
APA, Harvard, Vancouver, ISO, and other styles
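
The two paradigms contrasted in this abstract are easy to place side by side. Below is a minimal sketch, not the paper's SpectACl method, that runs scikit-learn's SpectralClustering (minimum cut) and DBSCAN (maximum density) on a standard nonconvex benchmark; the n_neighbors, eps and min_samples values are illustrative choices.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.cluster import SpectralClustering, DBSCAN
    from sklearn.metrics import adjusted_rand_score

    # Two interleaved half-moons: a classic nonconvex clustering benchmark.
    X, y_true = make_moons(n_samples=500, noise=0.08, random_state=0)

    # Minimum-cut paradigm: spectral clustering on a nearest-neighbor graph.
    spectral = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                                  n_neighbors=10, random_state=0).fit(X)

    # Maximum-density paradigm: DBSCAN with a hand-tuned neighborhood radius.
    dbscan = DBSCAN(eps=0.15, min_samples=5).fit(X)

    print("spectral ARI:", adjusted_rand_score(y_true, spectral.labels_))
    print("DBSCAN ARI:", adjusted_rand_score(y_true, dbscan.labels_))
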
2

Li, Hongmin, Xiucai Ye, Akira Imakura, and Tetsuya Sakurai. "LSEC: Large-scale spectral ensemble clustering." Intelligent Data Analysis 27, no. 1 (January 30, 2023): 59–77. http://dx.doi.org/10.3233/ida-216240.

Full text
Abstract:
A fundamental problem in machine learning is ensemble clustering, that is, combining multiple base clusterings to obtain an improved clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks owing to efficiency bottlenecks. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to balance efficiency and effectiveness. In LSEC, a large-scale spectral clustering-based efficient ensemble generation framework is designed to generate various base clusterings with low computational complexity. Thereafter, all the base clusterings are combined using a bipartite graph partition-based consensus function to obtain improved consensus clustering results. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets demonstrate the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and the experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
APA, Harvard, Vancouver, ISO, and other styles
3

Zhuang, Xinwei, and Sean Hanna. "Space Frame Optimisation with Spectral Clustering." International Journal of Machine Learning and Computing 10, no. 4 (July 2020): 507–12. http://dx.doi.org/10.18178/ijmlc.2020.10.4.965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sun, Gan, Yang Cong, Qianqian Wang, Jun Li, and Yun Fu. "Lifelong Spectral Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5867–74. http://dx.doi.org/10.1609/aaai.v34i04.6045.

Full text
Abstract:
In the past decades, spectral clustering (SC) has become one of the most effective clustering algorithms. However, most previous studies focus on spectral clustering tasks with a fixed task set, and cannot incorporate a new spectral clustering task without access to previously learned tasks. In this paper, we aim to explore the problem of spectral clustering in a lifelong machine learning framework, i.e., Lifelong Spectral Clustering (L2SC). Its goal is to efficiently learn a model for a new spectral clustering task by selectively transferring previously accumulated experience from a knowledge library. Specifically, the knowledge library of L2SC contains two components: 1) an orthogonal basis library, capturing latent cluster centers among the clusters in each pair of tasks; and 2) a feature embedding library, embedding the feature manifold information shared among multiple related tasks. When a new spectral clustering task arrives, L2SC first transfers knowledge from both the basis library and the feature library to obtain the encoding matrix, and further redefines the library bases over time to maximize performance across all the clustering tasks. Meanwhile, a general online update formulation is derived to alternately update the basis library and the feature library. Finally, empirical experiments on several real-world benchmark datasets demonstrate that our L2SC model can effectively improve clustering performance when compared with other state-of-the-art spectral clustering algorithms.
APA, Harvard, Vancouver, ISO, and other styles
5

Ling, Ping, Xiangsheng Rong, and Yongquan Dong. "Incremental Spectral Clustering." Journal of Convergence Information Technology 7, no. 15 (August 31, 2012): 286–93. http://dx.doi.org/10.4156/jcit.vol7.issue15.34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kim, Jaehwan, and Seungjin Choi. "Semidefinite spectral clustering." Pattern Recognition 39, no. 11 (November 2006): 2025–35. http://dx.doi.org/10.1016/j.patcog.2006.05.021.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Challa, Aditya, Sravan Danda, B. S. Daya Sagar, and Laurent Najman. "Power Spectral Clustering." Journal of Mathematical Imaging and Vision 62, no. 9 (July 11, 2020): 1195–213. http://dx.doi.org/10.1007/s10851-020-00980-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huang, Jin, Feiping Nie, and Heng Huang. "Spectral Rotation versus K-Means in Spectral Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 431–37. http://dx.doi.org/10.1609/aaai.v27i1.8683.

Full text
Abstract:
Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-Means, to get the final clusters. The potential flaw of this common practice is that the obtained relaxed continuous spectral solution can severely deviate from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-Means and improves the clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, one that is not significantly more costly than K-Means. We also establish the connection between our method and K-Means to provide theoretical motivation for our method. Experimental results show that our algorithm consistently reaches better cuts and outperforms classic spectral clustering methods on clustering metrics.
APA, Harvard, Vancouver, ISO, and other styles
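
To make the discretization step concrete, here is a simplified sketch of a spectral-rotation-style procedure in the spirit of Yu and Shi's multiclass discretization: it alternates a discrete assignment step with an orthogonal Procrustes step. It assumes F is the row-normalized n-by-k matrix of the leading eigenvectors of the normalized affinity; this illustrates the idea rather than reproducing the authors' exact algorithm.

    import numpy as np

    def spectral_rotation(F, n_iter=50):
        # Alternately reduce ||Y - F R||_F over a discrete indicator matrix Y
        # and an orthonormal rotation R.
        n, k = F.shape
        R = np.eye(k)
        for _ in range(n_iter):
            # Discretization: each row picks its largest rotated coordinate.
            idx = np.argmax(F @ R, axis=1)
            Y = np.zeros((n, k))
            Y[np.arange(n), idx] = 1.0
            # Orthogonal Procrustes: the best orthonormal R for the current Y.
            U, _, Vt = np.linalg.svd(F.T @ Y)
            R = U @ Vt
        return idx  # discrete cluster labels

Unlike a K-Means post-processing of F, the rotation R stays orthonormal throughout, which mirrors the orthonormal constraint discussed in the abstract.
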
9

Jin, Hui-zhen. "Multilevel spectral clustering with ascertainable clustering number." Journal of Computer Applications 28, no. 5 (October 17, 2008): 1229–31. http://dx.doi.org/10.3724/sp.j.1087.2008.01229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Huang, Dong, Chang-Dong Wang, Jian-Sheng Wu, Jian-Huang Lai, and Chee-Keong Kwoh. "Ultra-Scalable Spectral Clustering and Ensemble Clustering." IEEE Transactions on Knowledge and Data Engineering 32, no. 6 (June 1, 2020): 1212–26. http://dx.doi.org/10.1109/tkde.2019.2903410.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Clustering spectral"

1

Shortreed, Susan. "Learning in Spectral Clustering." Thesis, University of Washington, 2006. http://hdl.handle.net/1773/8977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Larson, Ellis, and Nelly Åkerblom. "Spectral clustering for Meteorology." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297760.

Full text
Abstract:
Climate is a tremendously complex topic, affecting many aspects of human activity and constantly changing. Defining some structures and rules for how it works is therefore of the utmost importance, even though they might only cover a small part of the complexity. Cluster analysis is a tool developed in data analysis that is able to categorize data into groups of similar type. In this paper, data from the Swedish Meteorological and Hydrological Institute (SMHI) is clustered to find a partitioning. The method used is spectral clustering, a family of methods that makes use of the spectral properties of graphs. Concrete groupings of the climate over Sweden were found.
APA, Harvard, Vancouver, ISO, and other styles
3

Gaertler, Marco. "Clustering with spectral methods." [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10101213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Masum, Mohammad. "Vertex Weighted Spectral Clustering." Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3266.

Full text
Abstract:
Spectral clustering is often used to partition a data set into a specified number of clusters. Both the unweighted and the vertex-weighted approaches use eigenvectors of the Laplacian matrix of a graph. Our focus is on using vertex-weighted methods to refine the clustering of observations. An eigenvector corresponding to the second smallest eigenvalue of the Laplacian matrix of a graph is called a Fiedler vector. Coefficients of a Fiedler vector are used to partition the vertices of a given graph into two clusters. A vertex of a graph is classified as unassociated if its Fiedler coefficient is close to zero compared to the largest Fiedler coefficient of the graph. We propose a vertex-weighted spectral clustering algorithm which incorporates a vector of weights for each vertex of a given graph to form a vertex-weighted graph. The proposed algorithm predicts the association of equidistant or nearly equidistant data points from both clusters, while the unweighted clustering does not provide such association. Finally, we implemented both the unweighted and the vertex-weighted spectral clustering algorithms on several data sets to show that the proposed algorithm works in general.
APA, Harvard, Vancouver, ISO, and other styles
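
The Fiedler-vector partition described above is compact enough to sketch directly. The following minimal, unweighted version uses the unnormalized Laplacian L = D - A; the 10% threshold for flagging unassociated vertices is an illustrative choice, not the thesis's exact rule.

    import numpy as np

    def fiedler_partition(A, tol=0.1):
        # L = D - A is the graph Laplacian of the adjacency matrix A.
        L = np.diag(A.sum(axis=1)) - A
        vals, vecs = np.linalg.eigh(L)      # eigenvalues in ascending order
        f = vecs[:, 1]                      # Fiedler vector (2nd smallest)
        labels = np.where(f >= 0, 0, 1)     # partition by sign
        # Coefficients near zero, relative to the largest coefficient,
        # mark vertices that associate weakly with either cluster.
        unassociated = np.abs(f) < tol * np.max(np.abs(f))
        return labels, unassociated

    # Two triangles joined by a single bridge edge split into two clusters;
    # the bridge endpoints have the Fiedler coefficients closest to zero.
    A = np.array([[0, 1, 1, 0, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1],
                  [0, 0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 1, 0]], dtype=float)
    print(fiedler_partition(A))
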
5

Larsson, Johan, and Isak Ågren. "Numerical Methods for Spectral Clustering." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275701.

Full text
Abstract:
The aviation industry is important to the European economy and its development; a study of the sensitivity of the European flight network is therefore of interest. If clusters exist within the network, they could indicate possible vulnerabilities or bottlenecks, since a cluster would represent a group of airports poorly connected to other parts of the network. In this paper, a cluster analysis using spectral clustering is performed with flight data from 34 different European countries. The report also looks at how to implement the spectral clustering algorithm for large data sets. After performing the spectral clustering, the European flight network does not appear to be clustered, and thus does not appear to be sensitive.
APA, Harvard, Vancouver, ISO, and other styles
6

Rossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Darke, Felix, and Linus Below Blomkvist. "Categorization of songs using spectral clustering." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297763.

Full text
Abstract:
A direct consequence of the world becoming more digital is that the amount of available data grows, which presents great opportunities for organizations, researchers and institutions alike. However, this places a huge demand on efficient and understandable algorithms for analyzing vast datasets. This project is centered around using one of these algorithms for identifying groups of songs in a public dataset released by Spotify in 2018. This problem is part of a larger problem class, where one wishes to assign data into groups without preexisting knowledge of what makes the different groups special, or of how many different groups there are. This is typically solved using unsupervised machine learning. The overall goal of this project was to use spectral clustering (a specific algorithm in the unsupervised machine learning family) to assign 50 704 songs from the dataset into different categories, where each category would be made up of similar songs. The algorithm rests upon graph theory, and a large emphasis was placed upon actually understanding the mathematical foundation and motivation behind the method before the actual implementation, which is reflected in the report. The results achieved through applying spectral clustering were one large group consisting of 40 718 songs in combination with 22 smaller groups, all larger than 100 songs, with an average size of 430 songs. The groups found were not examined in depth, but the analysis done hints that certain groups were clearly different from the data as a whole in terms of their musical features. For instance, one group was deemed to be 54% more likely to be acoustic than the dataset as a whole. As a conclusion, the largest cluster was deemed to be an artefact of sampling: a sample of songs listened to on Spotify is likely to consist mainly of popular songs. This would explain the homogeneity that placed most songs in the same group, which also limited the success of spectral clustering for this specific project.
APA, Harvard, Vancouver, ISO, and other styles
8

Marotta, Serena. "Alcuni metodi matriciali per lo Spectral Clustering." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14122/.

Full text
Abstract:
The aim of this thesis is to analyze in detail a set of data analysis techniques for selecting and grouping homogeneous elements, so that the techniques can easily interface with one another and be simpler to use for practitioners in the field. The main clustering methods are introduced: linkage, k-means, and in particular spectral clustering, the central topic of my thesis.
APA, Harvard, Vancouver, ISO, and other styles
9

Alshammari, Mashaan. "Graph Filtering and Automatic Parameter Selection for Efficient Spectral Clustering." Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24091.

Full text
Abstract:
Spectral clustering is usually used to detect non-convex clusters. Despite being an effective method for detecting this type of cluster, spectral clustering has two deficiencies that have made it less attractive to the pattern recognition community. First, the graph Laplacian has to pass through eigen-decomposition to find the embedding space, which has been shown to be a computationally expensive process when the number of points is large. Second, spectral clustering uses parameters that highly influence its outcome, and tuning these parameters manually would be a tedious process when examining different datasets. This thesis introduces solutions to these two problems of spectral clustering. For computational efficiency, we proposed approximated graphs with a reduced number of graph vertices. Consequently, eigen-decomposition is performed on a matrix of reduced size, which makes it faster. Unfortunately, reducing graph vertices could lead to a loss of local information that affects clustering accuracy. Thus, we proposed another graph where the number of edges is reduced significantly while keeping the same number of vertices to maintain local information. This reduces the matrix size, making it computationally efficient while maintaining good clustering accuracy. Regarding influential parameters, we proposed cost functions that test a range of values and decide on the optimum value. Cost functions were used to estimate the number of embedding space dimensions and the number of clusters. We also observed in the literature that the graph reduction step requires manual tuning of parameters. Therefore, we developed a graph reduction framework that does not require any parameters.
APA, Harvard, Vancouver, ISO, and other styles
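
The edge-reduction idea in this abstract can be sketched with standard tools: keep every vertex but connect each point only to its k nearest neighbors, so the affinity matrix is sparse and its eigen-decomposition cheap. This is a hedged illustration of the general idea, not the thesis's parameter-free framework; n_neighbors=10 is an arbitrary choice here.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.neighbors import kneighbors_graph
    from sklearn.cluster import SpectralClustering

    X, _ = make_moons(n_samples=2000, noise=0.05, random_state=0)

    # Sparse k-NN connectivity graph: O(n*k) edges instead of O(n^2).
    A = kneighbors_graph(X, n_neighbors=10, mode="connectivity",
                         include_self=False)
    A = 0.5 * (A + A.T)  # symmetrize so the graph is undirected

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(A)
    print(np.bincount(labels))  # cluster sizes
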
10

Azam, Nadia Farhanaz. "Spectral clustering: An explorative study of proximity measures." Thesis, University of Ottawa (Canada), 2009. http://hdl.handle.net/10393/28238.

Full text
Abstract:
In cluster analysis, data are clustered into meaningful groups so that the objects in the same group are very similar, and the objects residing in two different groups are different from one another. One such cluster analysis algorithm is the spectral clustering algorithm, which originated in the area of graph partitioning. The input, in this case, is a similarity matrix, constructed from the pair-wise similarity between data objects. The algorithm uses the eigenvalues and eigenvectors of a normalized similarity matrix to partition the data. The pair-wise similarity between the objects is calculated from proximity (e.g. similarity or distance) measures. In any clustering task, the proximity measures often play a crucial role; in fact, one of the early and fundamental steps in a clustering process is the selection of a suitable proximity measure. A number of such measures may be used for this task. However, the success of a clustering algorithm partially depends on the selection of the proximity measure. While the majority of prior research on the spectral clustering algorithm emphasizes algorithm-specific issues, little research has been performed on evaluating the performance of the proximity measures. To this end, we perform a comparative and exploratory analysis of several existing proximity measures to evaluate their performance when applying the spectral clustering algorithm to a number of diverse data sets. To accomplish this task, we use a ten-fold cross-validation technique and assess the clustering results using several external cluster evaluation measures. The performances of the proximity measures are then compared using the quantitative results from the external evaluation measures and analyzed further to determine the probable causes that may have led to such results. In essence, our experimental evaluation indicates that the proximity measures, in general, yield comparable results; that is, no measure is clearly superior, or inferior, to the others in its group. However, among the six similarity measures considered for binary data, one measure (the Russell and Rao similarity coefficient) frequently performed more poorly than the others. For numeric data, our study shows that the distance measures based on relative distances (i.e. the Pearson correlation coefficient and the angular distance) generally performed better than the distance measures based on absolute distances (e.g. the Euclidean or Manhattan distance). When considering the proximity measures for mixed data, our results indicate that the choice of distance measure for the numeric data has the highest impact on the final outcome.
APA, Harvard, Vancouver, ISO, and other styles
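
A small experiment in the spirit of this study can be sketched by swapping the proximity measure behind the affinity matrix, here an absolute distance (Euclidean) against a relative one (correlation). The Iris data and the Gaussian bandwidth are illustrative assumptions, not the thesis's actual setup.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from sklearn.datasets import load_iris
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics import adjusted_rand_score

    X, y = load_iris(return_X_y=True)

    for metric in ("euclidean", "correlation"):
        # Pairwise distances under the chosen proximity measure.
        D = squareform(pdist(X, metric=metric))
        # Gaussian affinity with a crude bandwidth set from the distances.
        W = np.exp(-(D ** 2) / (2 * D.std() ** 2))
        labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                                    random_state=0).fit_predict(W)
        print(metric, adjusted_rand_score(y, labels))
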

Books on the topic "Clustering spectral"

1

Bolla, Marianna, ed. Spectral Clustering and Biclustering. Chichester, UK: John Wiley & Sons, Ltd, 2013. http://dx.doi.org/10.1002/9781118650684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Shandarin, Sergei F., David Hal Weinberg, and United States National Aeronautics and Space Administration, eds. A Test of the Adhesion Approximation for Gravitational Clustering. Washington, D.C.: National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bolla, Marianna. Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. John Wiley & Sons, Incorporated, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Chennubhotla, Srinivas Chakra. Spectral methods for multi-scale feature extraction and data clustering. 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bolla, Marianna. Spectral Clustering and Biclustering of Networks: Large Graphs and Contingency Tables. John Wiley & Sons, Limited, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Definitions and concepts. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0002.

Full text
Abstract:
A network is specified by its links and nodes. However, it can be described by a much wider range of interesting and important topological features. This chapter introduces how a network can be characterized by its microscopic and macroscopic topological features. The microscopic features introduced are degrees and clustering coefficients. The macroscopic features introduced are the degree distribution, the correlation between degrees of connected nodes, modularity, and the eigenvalue spectrum (which counts the number of closed paths in the graph).
APA, Harvard, Vancouver, ISO, and other styles
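
The parenthetical claim about the eigenvalue spectrum can be checked in a few lines: the number of closed paths (walks that return to their starting node) of length k equals trace(A^k), which in turn equals the sum of the k-th powers of the adjacency eigenvalues. A minimal sketch:

    import numpy as np

    A = np.array([[0, 1, 1],   # adjacency matrix of a triangle
                  [1, 0, 1],
                  [1, 1, 0]], dtype=float)

    eigvals = np.linalg.eigvalsh(A)  # spectrum of the triangle: 2, -1, -1
    for k in (2, 3):
        via_trace = np.trace(np.linalg.matrix_power(A, k))
        via_spectrum = (eigvals ** k).sum()
        print(k, via_trace, round(via_spectrum, 10))
    # k = 3 gives 6: the two orientations of the triangle from each of
    # its 3 vertices.
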

Book chapters on the topic "Clustering spectral"

1

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering." In Encyclopedia of Database Systems, 1–5. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_606-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wierzchoń, Sławomir T., and Mieczysław A. Kłopotek. "Spectral Clustering." In Studies in Big Data, 181–259. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69308-8_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering." In Encyclopedia of Database Systems, 2748–52. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Martin, Eric, Samuel Kaski, Fei Zheng, Geoffrey I. Webb, Xiaojin Zhu, Ion Muslea, Kai Ming Ting, et al. "Spectral Clustering." In Encyclopedia of Machine Learning, 907. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tripathy, B. K., S. Anveshrithaa, and Shrusti Ghela. "Spectral Clustering." In Unsupervised Learning Approaches for Dimensionality Reduction and Data Visualization, 99–107. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003190554-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering." In Encyclopedia of Database Systems, 3660–65. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_606.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Anselin, Luc. "Spectral Clustering." In An Introduction to Spatial Data Science with GeoDa, 121–30. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781032713175-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Liang, Christopher Leckie, Kotagiri Ramamohanarao, and James Bezdek. "Approximate Spectral Clustering." In Advances in Knowledge Discovery and Data Mining, 134–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01307-2_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jiang, Wenhao, and Fu-lai Chung. "Transfer Spectral Clustering." In Machine Learning and Knowledge Discovery in Databases, 789–803. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33486-3_50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gong, Yun-Chao, and Chuanliang Chen. "Locality Spectral Clustering." In AI 2008: Advances in Artificial Intelligence, 348–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89378-3_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Clustering spectral"

1

Chakeri, Alireza, Hamidreza Farhidzadeh, and Lawrence O. Hall. "Spectral sparsification in spectral clustering." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899979.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xiang, and Ian Davidson. "Active Spectral Clustering." In 2010 IEEE 10th International Conference on Data Mining (ICDM). IEEE, 2010. http://dx.doi.org/10.1109/icdm.2010.119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhao, Bin, and Changshui Zhang. "Compressed Spectral Clustering." In 2009 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2009. http://dx.doi.org/10.1109/icdmw.2009.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yoo, Shinjae, Hao Huang, and Shiva Prasad Kasiviswanathan. "Streaming spectral clustering." In 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016. http://dx.doi.org/10.1109/icde.2016.7498277.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Hongfu, Tongliang Liu, Junjie Wu, Dacheng Tao, and Yun Fu. "Spectral Ensemble Clustering." In KDD '15: The 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2783258.2783287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Blaschko, Matthew B., and Christoph H. Lampert. "Correlational spectral clustering." In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587353.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hunter, Blake, Thomas Strohmer, Theodore E. Simos, George Psihoyios, and Ch Tsitouras. "Compressive Spectral Clustering." In ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010. AIP, 2010. http://dx.doi.org/10.1063/1.3498187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yu, Stella X., and Jianbo Shi. "Multiclass spectral clustering." In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238361.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ladikos, Alexander, Slobodan Ilic, and Nassir Navab. "Spectral camera clustering." In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Palit, Biswaroop, Rakesh Nigam, Keren Perlmutter, and Sharon Perlmutter. "Spectral face clustering." In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457700.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Clustering spectral"

1

Neville, Jennifer, Micah Adler, and David Jensen. Spectral Clustering with Links and Attributes. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada472209.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Blakely, Logan. Spectral Clustering for Electrical Phase Identification Using Advanced Metering Infrastructure Voltage Time Series. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6567.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.

Full text
Abstract:
As a critical power source, the diesel engine is widely used in various situations. Diesel engine failure may lead to serious property losses and even accidents. Fault detection can improve the safety of diesel engines and reduce economic loss. Surface vibration signals are often used in non-disassembly fault diagnosis because of their convenient measurement and stability. This paper proposes a novel method for engine fault detection based on vibration signals using variational mode decomposition (VMD), K-means, and a genetic algorithm. The mode number of VMD dramatically affects the accuracy of extracting signal components; therefore, a method based on the spectral energy distribution is proposed to determine this parameter, and the quadratic penalty term is optimized according to the SNR. The results show that the optimized VMD can adaptively extract the vibration signal components of the diesel engine. In practical fault diagnosis, it is difficult to obtain labeled data. A clustering algorithm can complete the classification without labeled data but is limited by low accuracy. In this paper, the optimized VMD is used to decompose and standardize the vibration signal. Then a correlation-based feature selection method is applied to obtain dimensionality-reduced features. Finally, the results are input into a classifier combining K-means and a genetic algorithm (GA). By introducing and optimizing the genetic algorithm, the number of classes can be selected automatically, and the accuracy is significantly improved. This method can carry out adaptive detection of multiple diesel engine faults without labeled data. Compared with many supervised learning algorithms, the proposed method also achieves high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
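
As a rough illustration of the final stage of this pipeline, the sketch below assumes a feature matrix has already been extracted (a synthetic placeholder stands in for the VMD-derived, correlation-selected vibration features) and selects the number of classes automatically. The report uses a genetic algorithm for that search; this simplified stand-in scans candidate k values with a silhouette score instead.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Hypothetical placeholder for VMD-derived vibration features
    # (three simulated engine states, 60 samples each, 4 features).
    features = np.vstack([rng.normal(loc=c, scale=0.3, size=(60, 4))
                          for c in (0.0, 2.0, 4.0)])
    features = StandardScaler().fit_transform(features)

    best_k, best_score = None, -1.0
    for k in range(2, 8):  # candidate numbers of fault classes
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(features)
        score = silhouette_score(features, labels)
        if score > best_score:
            best_k, best_score = k, score
    print("selected number of classes:", best_k)
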