Selected scientific literature on the topic "Clustering spectral"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings and other scientific sources relevant to the topic "Clustering spectral".

Next to each source in the reference list there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication as a .pdf file and read the abstract of the work online, if it is present in the metadata.

Journal articles on the topic "Clustering spectral"

1

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Abstract:
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SPECTACL: a method combining the advantages of both approaches, while solving the two mentioned drawbacks. Our method is easy to implement, such as Spectral Clustering, and theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
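The two paradigms named in the abstract can be contrasted directly on a toy nonconvex dataset. The sketch below is not the authors' SPECTACL code, just the standard scikit-learn baselines they compare against, with illustrative parameter values chosen by me:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering, DBSCAN
from sklearn.metrics import adjusted_rand_score

# Two interleaved half-moons: nonconvex clusters that defeat plain k-means
X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

# Minimum-cut paradigm: spectral clustering on a k-nearest-neighbor graph
spectral = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0)
labels_spectral = spectral.fit_predict(X)

# Maximum-density paradigm: DBSCAN with a fixed neighborhood radius
dbscan = DBSCAN(eps=0.15, min_samples=5)
labels_dbscan = dbscan.fit_predict(X)

print(adjusted_rand_score(y, labels_spectral))  # close to 1.0 on this easy case
```

On this clean example both paradigms recover the moons; the paper's point is about the harder cases (noise for minimum cut, varying densities for DBSCAN) where they diverge.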
2

Li, Hongmin, Xiucai Ye, Akira Imakura, and Tetsuya Sakurai. "LSEC: Large-scale spectral ensemble clustering". Intelligent Data Analysis 27, no. 1 (January 30, 2023): 59–77. http://dx.doi.org/10.3233/ida-216240.

Abstract:
A fundamental problem in machine learning is ensemble clustering, that is, combining multiple base clusterings to obtain improved clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks owing to efficiency bottlenecks. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to balance efficiency and effectiveness. In LSEC, a large-scale spectral clustering-based efficient ensemble generation framework is designed to generate various base clusterings with low computational complexity. Thereafter, all the base clusterings are combined using a bipartite graph partition-based consensus function to obtain improved consensus clustering results. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets demonstrate the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
3

Zhuang, Xinwei, and Sean Hanna. "Space Frame Optimisation with Spectral Clustering". International Journal of Machine Learning and Computing 10, no. 4 (July 2020): 507–12. http://dx.doi.org/10.18178/ijmlc.2020.10.4.965.

4

Sun, Gan, Yang Cong, Qianqian Wang, Jun Li, and Yun Fu. "Lifelong Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5867–74. http://dx.doi.org/10.1609/aaai.v34i04.6045.

Abstract:
In the past decades, spectral clustering (SC) has become one of the most effective clustering algorithms. However, most previous studies focus on spectral clustering tasks with a fixed task set, which cannot incorporate with a new spectral clustering task without accessing to previously learned tasks. In this paper, we aim to explore the problem of spectral clustering in a lifelong machine learning framework, i.e., Lifelong Spectral Clustering (L2SC). Its goal is to efficiently learn a model for a new spectral clustering task by selectively transferring previously accumulated experience from knowledge library. Specifically, the knowledge library of L2SC contains two components: 1) orthogonal basis library: capturing latent cluster centers among the clusters in each pair of tasks; 2) feature embedding library: embedding the feature manifold information shared among multiple related tasks. As a new spectral clustering task arrives, L2SC firstly transfers knowledge from both basis library and feature library to obtain encoding matrix, and further redefines the library base over time to maximize performance across all the clustering tasks. Meanwhile, a general online update formulation is derived to alternatively update the basis library and feature library. Finally, the empirical experiments on several real-world benchmark datasets demonstrate that our L2SC model can effectively improve the clustering performance when comparing with other state-of-the-art spectral clustering algorithms.
5

Ling Ping, Rong Xiangsheng, and Dong Yongquan. "Incremental Spectral Clustering". Journal of Convergence Information Technology 7, no. 15 (August 31, 2012): 286–93. http://dx.doi.org/10.4156/jcit.vol7.issue15.34.

6

Kim, Jaehwan, and Seungjin Choi. "Semidefinite spectral clustering". Pattern Recognition 39, no. 11 (November 2006): 2025–35. http://dx.doi.org/10.1016/j.patcog.2006.05.021.

7

Challa, Aditya, Sravan Danda, B. S. Daya Sagar, and Laurent Najman. "Power Spectral Clustering". Journal of Mathematical Imaging and Vision 62, no. 9 (July 11, 2020): 1195–213. http://dx.doi.org/10.1007/s10851-020-00980-7.

8

Huang, Jin, Feiping Nie, and Heng Huang. "Spectral Rotation versus K-Means in Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 431–37. http://dx.doi.org/10.1609/aaai.v27i1.8683.

Abstract:
Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-Means, to get the final clusters. The potential flaw of this common practice is that the obtained relaxed continuous spectral solution could severely deviate from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-Means and improves the clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, which is not significantly more costly than K-Means. We also establish the connection between our method and K-Means to provide theoretical motivation for our method. Experimental results show that our algorithm consistently reaches better cuts and outperforms classic spectral clustering methods on clustering metrics.
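The common practice the abstract criticizes, discretizing the relaxed spectral solution with K-Means, can be sketched as follows. This is the classic normalized spectral clustering recipe (Ng-Jordan-Weiss style), not the authors' spectral-rotation method; the toy graph is my own illustrative choice:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_embed_and_kmeans(A, k):
    """Classic spectral clustering: embed points with the k smallest eigenvectors
    of the symmetric normalized Laplacian, then discretize with K-Means."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    U = vecs[:, :k]                          # relaxed continuous solution
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

# Toy graph: a 4-clique (nodes 0-3) and a 5-clique (nodes 4-8) joined by a bridge
A = np.zeros((9, 9))
for group in (range(0, 4), range(4, 9)):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0
labels = spectral_embed_and_kmeans(A, 2)  # should separate the two cliques
```

The paper's proposal replaces the final K-Means step with a rotation of U toward a discrete cluster-indicator matrix.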
9

JIN, Hui-zhen. "Multilevel spectral clustering with ascertainable clustering number". Journal of Computer Applications 28, no. 5 (October 17, 2008): 1229–31. http://dx.doi.org/10.3724/sp.j.1087.2008.01229.

10

Huang, Dong, Chang-Dong Wang, Jian-Sheng Wu, Jian-Huang Lai, and Chee-Keong Kwoh. "Ultra-Scalable Spectral Clustering and Ensemble Clustering". IEEE Transactions on Knowledge and Data Engineering 32, no. 6 (June 1, 2020): 1212–26. http://dx.doi.org/10.1109/tkde.2019.2903410.


Theses / dissertations on the topic "Clustering spectral"

1

Shortreed, Susan. "Learning in spectral clustering /". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/8977.

2

Larson, Ellis, and Nelly Åkerblom. "Spectral clustering for Meteorology". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297760.

Abstract:
Climate is a tremendously complex topic, affecting many aspects of human activity and constantly changing. Defining some structures and rules for how it works is therefore of the utmost importance, even though it might only cover a small part of the complexity. Cluster analysis is a tool developed in data analysis that is able to categorize data into groups of similar type. In this paper, data from the Swedish Meteorological and Hydrological Institute (SMHI) is clustered to find a partitioning. The cluster analysis used is called spectral clustering, which is a family of methods making use of the spectral properties of graphs. Concrete results over different groupings of climate over Sweden were found.
3

Gaertler, Marco. "Clustering with spectral methods". [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10101213.

4

Masum, Mohammad. "Vertex Weighted Spectral Clustering". Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3266.

Abstract:
Spectral clustering is often used to partition a data set into a specified number of clusters. Both the unweighted and the vertex-weighted approaches use eigenvectors of the Laplacian matrix of a graph. Our focus is on using vertex-weighted methods to refine clustering of observations. An eigenvector corresponding with the second smallest eigenvalue of the Laplacian matrix of a graph is called a Fiedler vector. Coefficients of a Fiedler vector are used to partition vertices of a given graph into two clusters. A vertex of a graph is classified as unassociated if the Fiedler coefficient of the vertex is close to zero compared to the largest Fiedler coefficient of the graph. We propose a vertex-weighted spectral clustering algorithm which incorporates a vector of weights for each vertex of a given graph to form a vertex-weighted graph. The proposed algorithm predicts association of equidistant or nearly equidistant data points from both clusters while the unweighted clustering does not provide association. Finally, we implemented both the unweighted and the vertex-weighted spectral clustering algorithms on several data sets to show that the proposed algorithm works in general.
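The Fiedler-vector partitioning step described in the abstract can be sketched in a few lines. This is a minimal unweighted illustration on a toy graph of my own choosing (not the thesis' vertex-weighted algorithm, and with a plain zero threshold rather than the thesis' unassociated-vertex test):

```python
import numpy as np

# Toy graph: two 4-node cliques (0-3 and 4-7) joined by one bridge edge
A = np.zeros((8, 8))
for group in (range(0, 4), range(4, 8)):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

L = np.diag(A.sum(axis=1)) - A  # unnormalized graph Laplacian

# The Fiedler vector is the eigenvector for the second-smallest eigenvalue
eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
fiedler = eigvecs[:, 1]

# Signs of the Fiedler coefficients split the vertices into two clusters
cluster = (fiedler > 0).astype(int)
```

For this graph the sign split recovers the two cliques; the thesis' contribution is what to do when a coefficient is close to zero (an "unassociated" vertex) and how vertex weights refine that decision.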
5

Larsson, Johan, and Isak Ågren. "Numerical Methods for Spectral Clustering". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275701.

Abstract:
The Aviation industry is important to the European economy and development, therefore a study of the sensitivity of the European flight network is interesting. If clusters exist within the network, that could indicate possible vulnerabilities or bottlenecks, since that would represent a group of airports poorly connected to other parts of the network. In this paper a cluster analysis using spectral clustering is performed with flight data from 34 different European countries. The report also looks at how to implement the spectral clustering algorithm for large data sets. After performing the spectral clustering it appears as if the European flight network is not clustered, and thus does not appear to be sensitive.
6

Rossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.

7

Darke, Felix, and Blomkvist Linus Below. "Categorization of songs using spectral clustering". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297763.

Abstract:
A direct consequence of the world becoming more digital is that the amount of available data grows, which presents great opportunities for organizations, researchers and institutions alike. However, this places a huge demand on efficient and understandable algorithms for analyzing vast datasets. This project is centered around using one of these algorithms for identifying groups of songs in a public dataset released by Spotify in 2018. This problem is part of a larger problem class, where one wishes to assign data into groups without preexisting knowledge of what makes the different groups special, or how many different groups there are. This is typically solved using unsupervised machine learning. The overall goal of this project was to use spectral clustering (a specific algorithm in the unsupervised machine learning family) to assign 50 704 songs from the dataset into different categories, where each category would be made up of similar songs. The algorithm rests upon graph theory, and a large emphasis was placed upon actually understanding the mathematical foundation and motivation behind the method before the actual implementation, which is reflected in the report. The results achieved through applying spectral clustering were one large group consisting of 40 718 songs in combination with 22 smaller groups, all larger than 100 songs, with an average size of 430 songs. The groups found were not examined in depth, but the analysis done hints that certain groups were clearly different from the data as a whole in terms of their musical features. For instance, one group was deemed to be 54% more likely to be acoustic than the dataset as a whole. In conclusion, the largest cluster was deemed to be an artefact of the fact that when a sample of songs listened to on Spotify is taken, the likelihood of these songs mainly being popular songs is high. This would explain the homogeneity that resulted in most songs being assigned to the same group, which also resulted in the limited success of spectral clustering for this specific project.
8

Marotta, Serena. "Alcuni metodi matriciali per lo Spectral Clustering". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14122/.

Abstract:
The aim of this thesis is to analyze in detail a set of data-analysis techniques for selecting and grouping homogeneous elements, so that they can easily be combined with one another and be simpler to use for practitioners in the field. The main clustering methods are introduced: linkage, k-means and, in particular, spectral clustering, the central topic of my thesis.
9

Alshammari, Mashaan. "Graph Filtering and Automatic Parameter Selection for Efficient Spectral Clustering". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24091.

Abstract:
Spectral clustering is usually used to detect non-convex clusters. Despite being an effective method to detect this type of clusters, spectral clustering has two deficiencies that made it less attractive for the pattern recognition community. First, the graph Laplacian has to pass through eigen-decomposition to find the embedding space. This has been proved to be a computationally expensive process when the number of points is large. Second, spectral clustering used parameters that highly influence its outcome. Tuning these parameters manually would be a tedious process when examining different datasets. This thesis introduces solutions to these two problems of spectral clustering. For computational efficiency, we proposed approximated graphs with a reduced number of graph vertices. Consequently, eigen-decomposition will be performed on a matrix with reduced size which makes it faster. Unfortunately, reducing graph vertices could lead to a loss in local information that affects clustering accuracy. Thus, we proposed another graph where the number of edges was reduced significantly while keeping the same number of vertices to maintain local information. This would reduce the matrix size, making it computationally efficient and maintaining good clustering accuracy. Regarding influential parameters, we proposed cost functions that test a range of values and decide on the optimum value. Cost functions were used to estimate the number of embedding space dimensions and the number of clusters. We also observed in the literature that the graph reduction step requires manual tuning of parameters. Therefore, we developed a graph reduction framework that does not require any parameters.
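The thesis' cost functions are not reproduced here, but a widely used related heuristic for picking the number of clusters automatically, the eigengap of the graph Laplacian, can be sketched as follows (the toy graph and `k_max` cap are my illustrative assumptions):

```python
import numpy as np

def estimate_k(eigvals, k_max=10):
    """Pick k at the largest gap among the smallest Laplacian eigenvalues."""
    gaps = np.diff(eigvals[:k_max + 1])
    return int(np.argmax(gaps)) + 1

# The Laplacian of a graph with 3 disconnected cliques has eigenvalue 0
# with multiplicity 3, so the largest gap sits after the third eigenvalue.
A = np.kron(np.eye(3), np.ones((4, 4)) - np.eye(4))  # 3 disjoint 4-cliques
L = np.diag(A.sum(axis=1)) - A
eigvals = np.sort(np.linalg.eigvalsh(L))
print(estimate_k(eigvals))  # 3 for this graph
```

On real graphs the gap is less clean, which is why cost functions that score a whole range of candidate values, as in the thesis, can be more robust.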
10

Azam, Nadia Farhanaz. "Spectral clustering: An explorative study of proximity measures". Thesis, University of Ottawa (Canada), 2009. http://hdl.handle.net/10393/28238.

Abstract:
In cluster analysis, data are clustered into meaningful groups so that the objects in the same group are very similar, and the objects residing in two different groups are different from one another. One such cluster analysis algorithm is the spectral clustering algorithm, which originated in the area of graph partitioning. The input, in this case, is a similarity matrix, constructed from the pair-wise similarity between data objects. The algorithm uses the eigenvalues and eigenvectors of a normalized similarity matrix to partition the data. The pair-wise similarity between the objects is calculated from proximity (e.g. similarity or distance) measures. In any clustering task, the proximity measures often play a crucial role. In fact, one of the early and fundamental steps in a clustering process is the selection of a suitable proximity measure. A number of such measures may be used for this task. However, the success of a clustering algorithm partially depends on the selection of the proximity measure. While the majority of prior research on the spectral clustering algorithm emphasizes algorithm-specific issues, little research has been performed on evaluating the performance of the proximity measures. To this end, we perform a comparative and exploratory analysis of several existing proximity measures to evaluate their performance when applying the spectral clustering algorithm to a number of diverse data sets. To accomplish this task, we use a ten-fold cross validation technique, and assess the clustering results using several external cluster evaluation measures. The performances of the proximity measures are then compared using the quantitative results from the external evaluation measures and analyzed further to determine the probable causes that may have led to such results. In essence, our experimental evaluation indicates that the proximity measures, in general, yield comparable results. That is, no measure is clearly superior, or inferior, to the others in its group. However, among the six similarity measures considered for the binary data, one measure (the Russell and Rao similarity coefficient) frequently performed more poorly than the others. For numeric data, our study shows that the distance measures based on relative distances (i.e. the Pearson correlation coefficient and the Angular distance) generally performed better than the distance measures based on absolute distances (e.g. the Euclidean or Manhattan distance). When considering the proximity measures for mixed data, our results indicate that the choice of distance measure for the numeric data has the highest impact on the final outcome.
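The distinction the abstract draws between absolute and relative distance measures can be seen in a small sketch, using SciPy's distance functions on illustrative vectors of my own choosing:

```python
import numpy as np
from scipy.spatial.distance import euclidean, correlation

a = np.array([1.0, 2.0, 3.0, 4.0])
b = a * 10          # same "shape" (trend), very different scale
c = a[::-1].copy()  # same scale, opposite trend

# Absolute measure: Euclidean distance between a and b is large
# Relative measure: correlation distance (1 - Pearson r) between a and b is 0
print(euclidean(a, b), correlation(a, b))
print(euclidean(a, c), correlation(a, c))
```

A correlation-based affinity would place `a` and `b` in the same cluster while a Euclidean one would not, which is exactly the kind of effect the thesis evaluates.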

Books on the topic "Clustering spectral"

1

Bolla, Marianna, ed. Spectral Clustering and Biclustering. Chichester, UK: John Wiley & Sons, Ltd, 2013. http://dx.doi.org/10.1002/9781118650684.

2

Shandarin, Sergei F., David Hal Weinberg, and United States National Aeronautics and Space Administration, eds. A test of the adhesion approximation for gravitational clustering. [Washington, D.C.]: National Aeronautics and Space Administration, 1995.

3

Bolla, Marianna. Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley & Sons, Incorporated, John, 2013.

4

Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley, 2013.

5

Chennubhotla, Srinivas Chakra. Spectral methods for multi-scale feature extraction and data clustering. 2004.

6

Bolla, Marianna. Spectral Clustering and Biclustering of Networks: Large Graphs and Contingency Tables. Wiley & Sons, Limited, John, 2013.

7

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Definitions and concepts. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0002.

Abstract:
A network is specified by its links and nodes. However, it can be described by a much wider range of interesting and important topological features. This chapter introduces how a network can be characterized by its microscopic topological features and macroscopic topological features. Microscopic features introduced are degree and clustering coefficients. Macroscopic topological features introduced are the degree distribution; correlation between degrees of connected nodes; modularity; and, the eigenvalue spectrum (which counts the number of closed paths in the graph).

Book chapters on the topic "Clustering spectral"

1

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 1–5. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_606-2.

2

Wierzchoń, Sławomir T., and Mieczysław A. Kłopotek. "Spectral Clustering". In Studies in Big Data, 181–259. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69308-8_5.

3

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 2748–52. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_606.

4

Martin, Eric, Samuel Kaski, Fei Zheng, Geoffrey I. Webb, Xiaojin Zhu, Ion Muslea, Kai Ming Ting et al. "Spectral Clustering". In Encyclopedia of Machine Learning, 907. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_771.

5

Tripathy, B. K., S. Anveshrithaa, and Shrusti Ghela. "Spectral Clustering". In Unsupervised Learning Approaches for Dimensionality Reduction and Data Visualization, 99–107. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003190554-10.

6

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 3660–65. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_606.

7

Anselin, Luc. "Spectral Clustering". In An Introduction to Spatial Data Science with GeoDa, 121–30. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781032713175-8.

8

Wang, Liang, Christopher Leckie, Kotagiri Ramamohanarao, and James Bezdek. "Approximate Spectral Clustering". In Advances in Knowledge Discovery and Data Mining, 134–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01307-2_15.

9

Jiang, Wenhao, and Fu-lai Chung. "Transfer Spectral Clustering". In Machine Learning and Knowledge Discovery in Databases, 789–803. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33486-3_50.

10

Gong, Yun-Chao, and Chuanliang Chen. "Locality Spectral Clustering". In AI 2008: Advances in Artificial Intelligence, 348–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89378-3_34.


Conference papers on the topic "Clustering spectral"

1

Chakeri, Alireza, Hamidreza Farhidzadeh, and Lawrence O. Hall. "Spectral sparsification in spectral clustering". In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899979.

2

Wang, Xiang, and Ian Davidson. "Active Spectral Clustering". In 2010 IEEE 10th International Conference on Data Mining (ICDM). IEEE, 2010. http://dx.doi.org/10.1109/icdm.2010.119.

3

Zhao, Bin, and Changshui Zhang. "Compressed Spectral Clustering". In 2009 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2009. http://dx.doi.org/10.1109/icdmw.2009.22.

4

Yoo, Shinjae, Hao Huang, and Shiva Prasad Kasiviswanathan. "Streaming spectral clustering". In 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016. http://dx.doi.org/10.1109/icde.2016.7498277.

5

Liu, Hongfu, Tongliang Liu, Junjie Wu, Dacheng Tao, and Yun Fu. "Spectral Ensemble Clustering". In KDD '15: The 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2783258.2783287.

6

Blaschko, Matthew B., and Christoph H. Lampert. "Correlational spectral clustering". In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587353.

7

Hunter, Blake, Thomas Strohmer, Theodore E. Simos, George Psihoyios, and Ch Tsitouras. "Compressive Spectral Clustering". In ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010. AIP, 2010. http://dx.doi.org/10.1063/1.3498187.

8

Yu and Shi. "Multiclass spectral clustering". In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238361.

9

Ladikos, Alexander, Slobodan Ilic, and Nassir Navab. "Spectral camera clustering". In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457537.

10

Palit, Biswaroop, Rakesh Nigam, Keren Perlmutter, and Sharon Perlmutter. "Spectral face clustering". In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457700.


Organization reports on the topic "Clustering spectral"

1

Neville, Jennifer, Micah Adler, and David Jensen. Spectral Clustering with Links and Attributes. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada472209.

2

Blakely, Logan. Spectral Clustering for Electrical Phase Identification Using Advanced Metering Infrastructure Voltage Time Series. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6567.

3

Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.

Abstract:
As a critical power source, the diesel engine is widely used in various situations. Diesel engine failure may lead to serious property losses and even accidents. Fault detection can improve the safety of diesel engines and reduce economic loss. Surface vibration signal is often used in non-disassembly fault diagnosis because of its convenient measurement and stability. This paper proposed a novel method for engine fault detection based on vibration signals using variational mode decomposition (VMD), K-means, and genetic algorithm. The mode number of VMD dramatically affects the accuracy of extracting signal components. Therefore, a method based on spectral energy distribution is proposed to determine the parameter, and the quadratic penalty term is optimized according to SNR. The results show that the optimized VMD can adaptively extract the vibration signal components of the diesel engine. In the actual fault diagnosis case, it is difficult to obtain the data with labels. The clustering algorithm can complete the classification without labeled data, but it is limited by the low accuracy. In this paper, the optimized VMD is used to decompose and standardize the vibration signal. Then the correlation-based feature selection method is implemented to obtain the feature results after dimensionality reduction. Finally, the results are input into the classifier combined by K-means and genetic algorithm (GA). By introducing and optimizing the genetic algorithm, the number of classes can be selected automatically, and the accuracy is significantly improved. This method can carry out adaptive multiple fault detection of a diesel engine without labeled data. Compared with many supervised learning algorithms, the proposed method also has high accuracy.