A ready bibliography on the topic "Spectral clustering"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles

Select the source type:

Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "Spectral clustering".

An "Add to bibliography" button is available next to each work in the listing. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of a scholarly publication as a ".pdf" file and read its abstract online, when these details are available in the work's metadata.

Journal articles on the topic "Spectral clustering"

1

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Abstract:
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum-cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SPECTACL: a method combining the advantages of both approaches while solving the two mentioned drawbacks. Our method is as easy to implement as Spectral Clustering and is theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
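The minimum-cut versus centroid-based contrast the abstract draws can be seen on a toy example. The sketch below uses scikit-learn's standard spectral clustering (not the paper's SPECTACL method) on the synthetic two-moons data, where K-Means struggles because the clusters are nonconvex; the dataset and parameter choices are illustrative assumptions.

```python
from sklearn.datasets import make_moons
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.metrics import adjusted_rand_score

# Two interleaving half-circles: nonconvex clusters with a clear separation.
X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

# Minimum-cut paradigm: spectral clustering on a k-nearest-neighbour graph.
spectral = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0).fit(X)

# Centroid paradigm: K-Means cuts each moon in half instead.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("spectral ARI:", adjusted_rand_score(y, spectral.labels_))
print("k-means  ARI:", adjusted_rand_score(y, kmeans.labels_))
```

The adjusted Rand index (ARI) compares the recovered labels against the known generator labels; the graph-based cut recovers the moons while the centroid-based partition does not.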
2

Li, Hongmin, Xiucai Ye, Akira Imakura, and Tetsuya Sakurai. "LSEC: Large-scale spectral ensemble clustering". Intelligent Data Analysis 27, no. 1 (January 30, 2023): 59–77. http://dx.doi.org/10.3233/ida-216240.

Abstract:
A fundamental problem in machine learning is ensemble clustering, that is, combining multiple base clusterings to obtain an improved clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks owing to efficiency bottlenecks. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to balance efficiency and effectiveness. In LSEC, a large-scale spectral-clustering-based efficient ensemble generation framework is designed to generate various base clusterings with low computational complexity. Thereafter, all the base clusterings are combined through a bipartite-graph-partition-based consensus function to obtain an improved consensus clustering result. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets demonstrate the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and the experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
3

Zhuang, Xinwei, and Sean Hanna. "Space Frame Optimisation with Spectral Clustering". International Journal of Machine Learning and Computing 10, no. 4 (July 2020): 507–12. http://dx.doi.org/10.18178/ijmlc.2020.10.4.965.

4

Sun, Gan, Yang Cong, Qianqian Wang, Jun Li, and Yun Fu. "Lifelong Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5867–74. http://dx.doi.org/10.1609/aaai.v34i04.6045.

Abstract:
In the past decades, spectral clustering (SC) has become one of the most effective clustering algorithms. However, most previous studies focus on spectral clustering tasks with a fixed task set, and cannot incorporate a new spectral clustering task without access to the previously learned tasks. In this paper, we explore spectral clustering in a lifelong machine learning framework, i.e., Lifelong Spectral Clustering (L2SC). Its goal is to efficiently learn a model for a new spectral clustering task by selectively transferring previously accumulated experience from a knowledge library. Specifically, the knowledge library of L2SC contains two components: 1) an orthogonal basis library, capturing latent cluster centers among the clusters in each pair of tasks; and 2) a feature embedding library, embedding the feature manifold information shared among multiple related tasks. As a new spectral clustering task arrives, L2SC first transfers knowledge from both the basis library and the feature library to obtain the encoding matrix, and further redefines the library bases over time to maximize performance across all the clustering tasks. Meanwhile, a general online update formulation is derived to alternately update the basis library and the feature library. Finally, empirical experiments on several real-world benchmark datasets demonstrate that our L2SC model can effectively improve clustering performance compared with other state-of-the-art spectral clustering algorithms.
5

Ling Ping, Rong Xiangsheng, and Dong Yongquan. "Incremental Spectral Clustering". Journal of Convergence Information Technology 7, no. 15 (August 31, 2012): 286–93. http://dx.doi.org/10.4156/jcit.vol7.issue15.34.

6

Kim, Jaehwan, and Seungjin Choi. "Semidefinite spectral clustering". Pattern Recognition 39, no. 11 (November 2006): 2025–35. http://dx.doi.org/10.1016/j.patcog.2006.05.021.

7

Challa, Aditya, Sravan Danda, B. S. Daya Sagar, and Laurent Najman. "Power Spectral Clustering". Journal of Mathematical Imaging and Vision 62, no. 9 (July 11, 2020): 1195–213. http://dx.doi.org/10.1007/s10851-020-00980-7.

8

Huang, Jin, Feiping Nie, and Heng Huang. "Spectral Rotation versus K-Means in Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 431–37. http://dx.doi.org/10.1609/aaai.v27i1.8683.

Abstract:
Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-Means, to get the final clusters. The potential flaw of this common practice is that the obtained relaxed continuous spectral solution could severely deviate from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-Means and improves the clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, which is not significantly more costly than K-Means. We also establish the connection between our method and K-Means to provide a theoretical motivation for our method. Experimental results show that our algorithm consistently reaches better cuts and outperforms classic spectral clustering methods on clustering metrics.
9

JIN, Hui-zhen. "Multilevel spectral clustering with ascertainable clustering number". Journal of Computer Applications 28, no. 5 (October 17, 2008): 1229–31. http://dx.doi.org/10.3724/sp.j.1087.2008.01229.

10

Huang, Dong, Chang-Dong Wang, Jian-Sheng Wu, Jian-Huang Lai, and Chee-Keong Kwoh. "Ultra-Scalable Spectral Clustering and Ensemble Clustering". IEEE Transactions on Knowledge and Data Engineering 32, no. 6 (June 1, 2020): 1212–26. http://dx.doi.org/10.1109/tkde.2019.2903410.


Doctoral dissertations on the topic "Spectral clustering"

1

Shortreed, Susan. "Learning in spectral clustering /". Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/8977.

2

Larson, Ellis, and Nelly Åkerblom. "Spectral clustering for Meteorology". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297760.

Abstract:
Climate is a tremendously complex topic, affecting many aspects of human activity and constantly changing. Defining some structures and rules for how it works is therefore of the utmost importance, even though these might only cover a small part of the complexity. Cluster analysis is a tool developed in data analysis that is able to categorize data into groups of similar type. In this paper, data from the Swedish Meteorological and Hydrological Institute (SMHI) is clustered to find a partitioning. The cluster analysis used is called spectral clustering, a family of methods making use of the spectral properties of graphs. Concrete results on different groupings of climate over Sweden were found.
3

Gaertler, Marco. "Clustering with spectral methods". [S.l. : s.n.], 2002. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10101213.

4

Masum, Mohammad. "Vertex Weighted Spectral Clustering". Digital Commons @ East Tennessee State University, 2017. https://dc.etsu.edu/etd/3266.

Abstract:
Spectral clustering is often used to partition a data set into a specified number of clusters. Both the unweighted and the vertex-weighted approaches use eigenvectors of the Laplacian matrix of a graph. Our focus is on using vertex-weighted methods to refine the clustering of observations. An eigenvector corresponding to the second-smallest eigenvalue of the Laplacian matrix of a graph is called a Fiedler vector. The coefficients of a Fiedler vector are used to partition the vertices of a given graph into two clusters. A vertex of a graph is classified as unassociated if its Fiedler coefficient is close to zero compared to the largest Fiedler coefficient of the graph. We propose a vertex-weighted spectral clustering algorithm which incorporates a vector of weights for each vertex of a given graph to form a vertex-weighted graph. The proposed algorithm predicts the association of equidistant or nearly equidistant data points with one of the two clusters, while unweighted clustering provides no such association. Finally, we implemented both the unweighted and the vertex-weighted spectral clustering algorithms on several data sets to show that the proposed algorithm works in general.
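The Fiedler-vector bipartition that the abstract builds on can be sketched in a few lines. The graph below is a made-up example (two triangles joined by a bridge edge), not data from the thesis; the partition is read off the signs of the Fiedler coefficients.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by one bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
fiedler = eigvecs[:, 1]               # eigenvector of the second-smallest eigenvalue

# The sign of each Fiedler coefficient assigns the vertex to one of two clusters.
cluster = (fiedler > 0).astype(int)
print(cluster)  # vertices 0-2 land in one cluster, 3-5 in the other
```

The bridge endpoints (vertices 2 and 3) have the smallest-magnitude coefficients, which is exactly the "close to zero, weakly associated" situation the vertex-weighted refinement targets.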
5

Larsson, Johan, and Isak Ågren. "Numerical Methods for Spectral Clustering". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-275701.

Abstract:
The aviation industry is important to the European economy and development; a study of the sensitivity of the European flight network is therefore of interest. If clusters exist within the network, that could indicate possible vulnerabilities or bottlenecks, since a cluster would represent a group of airports poorly connected to other parts of the network. In this paper, a cluster analysis using spectral clustering is performed with flight data from 34 different European countries. The report also looks at how to implement the spectral clustering algorithm for large data sets. After performing the spectral clustering, it appears that the European flight network is not clustered, and thus does not appear to be sensitive.
6

Rossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.

7

Darke, Felix, and Linus Below Blomkvist. "Categorization of songs using spectral clustering". Thesis, KTH, Skolan för teknikvetenskap (SCI), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297763.

Abstract:
A direct consequence of the world becoming more digital is that the amount of available data grows, which presents great opportunities for organizations, researchers, and institutions alike. However, this places a huge demand on efficient and understandable algorithms for analyzing vast datasets. This project is centered around using one of these algorithms to identify groups of songs in a public dataset released by Spotify in 2018. This problem is part of a larger problem class in which one wishes to assign data into groups without preexisting knowledge of what makes the different groups special, or of how many different groups there are. This is typically solved using unsupervised machine learning. The overall goal of this project was to use spectral clustering (a specific algorithm in the unsupervised machine learning family) to assign 50,704 songs from the dataset into different categories, where each category would be made up of similar songs. The algorithm rests upon graph theory, and a large emphasis was placed upon actually understanding the mathematical foundation and motivation behind the method before the actual implementation, which is reflected in the report. The results achieved through applying spectral clustering were one large group consisting of 40,718 songs in combination with 22 smaller groups, all larger than 100 songs, with an average size of 430 songs. The groups found were not examined in depth, but the analysis done hints that certain groups were clearly different from the data as a whole in terms of their musical features. For instance, one group was deemed to be 54% more likely to be acoustic than the dataset as a whole. In conclusion, the largest cluster was deemed to be an artefact of the fact that when a sample of songs listened to on Spotify is taken, these songs are likely to be mainly popular songs. This would explain the homogeneity that resulted in most songs being assigned to the same group, which also resulted in the limited success of spectral clustering for this specific project.
8

Marotta, Serena. "Alcuni metodi matriciali per lo Spectral Clustering". Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14122/.

Abstract:
The aim of this thesis is to analyze in detail a set of data analysis techniques for selecting and grouping homogeneous elements, so that they can easily interface with one another and be simpler to use for practitioners in the field. The main clustering methods are introduced: linkage, k-means, and in particular spectral clustering, the central topic of my thesis.
9

Alshammari, Mashaan. "Graph Filtering and Automatic Parameter Selection for Efficient Spectral Clustering". Thesis, University of Sydney, 2020. https://hdl.handle.net/2123/24091.

Abstract:
Spectral clustering is usually used to detect non-convex clusters. Despite being effective at detecting this type of cluster, spectral clustering has two deficiencies that have made it less attractive to the pattern recognition community. First, the graph Laplacian has to pass through eigendecomposition to find the embedding space, which has been shown to be a computationally expensive process when the number of points is large. Second, spectral clustering uses parameters that strongly influence its outcome, and tuning these parameters manually is a tedious process when examining different datasets. This thesis introduces solutions to these two problems. For computational efficiency, we proposed approximated graphs with a reduced number of graph vertices; consequently, eigendecomposition is performed on a smaller matrix, which makes it faster. Unfortunately, reducing graph vertices can lead to a loss of local information that affects clustering accuracy. Thus, we proposed another graph where the number of edges is reduced significantly while the number of vertices is kept the same to maintain local information. This reduces the matrix size, making the method computationally efficient while maintaining good clustering accuracy. Regarding influential parameters, we proposed cost functions that test a range of values and decide on the optimum value. Cost functions were used to estimate the number of embedding space dimensions and the number of clusters. We also observed in the literature that the graph reduction step requires manual tuning of parameters; therefore, we developed a graph reduction framework that does not require any parameters.
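One widely known device for automatically choosing the number of clusters, in the spirit of the cost functions the abstract describes (though not necessarily the one used in the thesis), is the eigengap heuristic: pick k where the gap between consecutive eigenvalues of the normalized Laplacian is largest. A minimal sketch on synthetic blobs, with an assumed RBF affinity width of 1:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances

# Three well-separated synthetic clusters (an assumed toy dataset).
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [6, 0], [0, 6]],
                  cluster_std=0.5, random_state=0)

# RBF affinity W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) with sigma = 1.
W = np.exp(-pairwise_distances(X) ** 2 / 2.0)
np.fill_diagonal(W, 0.0)

# Symmetric normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}.
d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
L_sym = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

# Eigengap heuristic: k is where the largest gap between consecutive
# small eigenvalues occurs (the first k eigenvalues sit near zero).
eigvals = np.linalg.eigvalsh(L_sym)[:10]
k = int(np.argmax(np.diff(eigvals))) + 1
print("estimated number of clusters:", k)
```

For well-separated data the affinity graph is nearly disconnected into k components, so k eigenvalues are close to zero and the gap to the (k+1)-th is large; on overlapping clusters the heuristic becomes less reliable, which is why parameter-free frameworks like the thesis's are of interest.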
10

Azam, Nadia Farhanaz. "Spectral clustering: An explorative study of proximity measures". Thesis, University of Ottawa (Canada), 2009. http://hdl.handle.net/10393/28238.

Abstract:
In cluster analysis, data are clustered into meaningful groups so that the objects in the same group are very similar, and the objects residing in two different groups are different from one another. One such cluster analysis algorithm is the spectral clustering algorithm, which originated in the area of graph partitioning. The input, in this case, is a similarity matrix, constructed from the pair-wise similarity between data objects. The algorithm uses the eigenvalues and eigenvectors of a normalized similarity matrix to partition the data. The pair-wise similarity between the objects is calculated from proximity (e.g. similarity or distance) measures. In any clustering task, the proximity measures often play a crucial role; in fact, one of the early and fundamental steps in a clustering process is the selection of a suitable proximity measure. A number of such measures may be used for this task; however, the success of a clustering algorithm partially depends on the selection of the proximity measure. While the majority of prior research on the spectral clustering algorithm emphasizes algorithm-specific issues, little research has been performed on evaluating the performance of the proximity measures. To this end, we perform a comparative and exploratory analysis of several existing proximity measures to evaluate their performance when applying the spectral clustering algorithm to a number of diverse data sets. To accomplish this task, we use a ten-fold cross-validation technique and assess the clustering results using several external cluster evaluation measures. The performances of the proximity measures are then compared using the quantitative results from the external evaluation measures and analyzed further to determine the probable causes that may have led to such results. In essence, our experimental evaluation indicates that the proximity measures, in general, yield comparable results. That is, no measure is clearly superior, or inferior, to the others in its group. However, among the six similarity measures considered for the binary data, one measure (the Russell and Rao similarity coefficient) frequently performed more poorly than the others. For numeric data, our study shows that the distance measures based on relative distances (i.e. the Pearson correlation coefficient and the angular distance) generally performed better than the distance measures based on absolute distances (e.g. the Euclidean or Manhattan distance). When considering proximity measures for mixed data, our results indicate that the choice of distance measure for the numeric data has the highest impact on the final outcome.
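Since the abstract stresses that spectral clustering takes a similarity matrix as input, the proximity measure is an explicit, swappable choice. A small sketch of that idea (the synthetic data and parameter values are assumptions, and scikit-learn is used rather than the thesis's own setup), feeding two different precomputed affinities to the same algorithm:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.pairwise import rbf_kernel, cosine_similarity

# Two synthetic groups with known labels (made-up stand-in data).
X, y = make_blobs(n_samples=200, centers=[[0, 0], [8, 8]],
                  cluster_std=1.0, random_state=1)

# Two candidate proximity measures turned into nonnegative affinity matrices.
affinities = {
    "rbf (Euclidean-based)": rbf_kernel(X, gamma=0.1),
    "cosine": np.clip(cosine_similarity(X), 0.0, None),  # affinities must stay nonnegative
}

for name, W in affinities.items():
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(W)
    print(name, "ARI:", adjusted_rand_score(y, labels))
```

On this data the Euclidean-based RBF affinity matches the generating structure, whereas the cosine measure only compares directions from the origin; printing both external scores makes the thesis's point that the proximity measure itself changes the outcome.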

Books on the topic "Spectral clustering"

1

Bolla, Marianna, ed. Spectral Clustering and Biclustering. Chichester, UK: John Wiley & Sons, Ltd, 2013. http://dx.doi.org/10.1002/9781118650684.

2

F, Shandarin Sergei, Weinberg David Hal, and United States. National Aeronautics and Space Administration, eds. A test of the adhesion approximation for gravitational clustering. [Washington, D.C.]: National Aeronautics and Space Administration, 1995.

3

Bolla, Marianna. Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley & Sons, Incorporated, John, 2013.

4

Bolla, Marianna. Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley & Sons, Incorporated, John, 2013.

5

Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley, 2013.

6

Bolla, Marianna. Spectral Clustering and Biclustering: Learning Large Graphs and Contingency Tables. Wiley & Sons, Incorporated, John, 2013.

7

Chennubhotla, Srinivas Chakra. Spectral methods for multi-scale feature extraction and data clustering. 2004.

8

Bolla, Marianna. Spectral Clustering and Biclustering of Networks: Large Graphs and Contingency Tables. Wiley & Sons, Limited, John, 2013.

9

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Definitions and concepts. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0002.

Abstract:
A network is specified by its links and nodes, but it can be described by a much wider range of interesting and important topological features. This chapter introduces how a network can be characterized by its microscopic and macroscopic topological features. The microscopic features introduced are degrees and clustering coefficients. The macroscopic features introduced are the degree distribution; the correlation between the degrees of connected nodes; modularity; and the eigenvalue spectrum (which counts the number of closed paths in the graph).

Book chapters on the topic "Spectral clustering"

1

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 1–5. New York, NY: Springer New York, 2016. http://dx.doi.org/10.1007/978-1-4899-7993-3_606-2.

2

Wierzchoń, Sławomir T., and Mieczysław A. Kłopotek. "Spectral Clustering". In Studies in Big Data, 181–259. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-69308-8_5.

3

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 2748–52. Boston, MA: Springer US, 2009. http://dx.doi.org/10.1007/978-0-387-39940-9_606.

4

Martin, Eric, Samuel Kaski, Fei Zheng, Geoffrey I. Webb, Xiaojin Zhu, Ion Muslea, Kai Ming Ting, et al. "Spectral Clustering". In Encyclopedia of Machine Learning, 907. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_771.

5

Tripathy, B. K., S. Anveshrithaa, and Shrusti Ghela. "Spectral Clustering". In Unsupervised Learning Approaches for Dimensionality Reduction and Data Visualization, 99–107. Boca Raton: CRC Press, 2021. http://dx.doi.org/10.1201/9781003190554-10.

6

Theodoridis, Sergios, and Konstantinos Koutroumbas. "Spectral Clustering". In Encyclopedia of Database Systems, 3660–65. New York, NY: Springer New York, 2018. http://dx.doi.org/10.1007/978-1-4614-8265-9_606.

7

Anselin, Luc. "Spectral Clustering". In An Introduction to Spatial Data Science with GeoDa, 121–30. Boca Raton: Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781032713175-8.

8

Wang, Liang, Christopher Leckie, Kotagiri Ramamohanarao, and James Bezdek. "Approximate Spectral Clustering". In Advances in Knowledge Discovery and Data Mining, 134–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01307-2_15.

9

Jiang, Wenhao, and Fu-lai Chung. "Transfer Spectral Clustering". In Machine Learning and Knowledge Discovery in Databases, 789–803. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33486-3_50.

10

Gong, Yun-Chao, and Chuanliang Chen. "Locality Spectral Clustering". In AI 2008: Advances in Artificial Intelligence, 348–54. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-89378-3_34.


Conference papers on the topic "Spectral clustering"

1

Chakeri, Alireza, Hamidreza Farhidzadeh, and Lawrence O. Hall. "Spectral sparsification in spectral clustering". In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7899979.

2

Wang, Xiang, and Ian Davidson. "Active Spectral Clustering". In 2010 IEEE 10th International Conference on Data Mining (ICDM). IEEE, 2010. http://dx.doi.org/10.1109/icdm.2010.119.

3

Zhao, Bin, and Changshui Zhang. "Compressed Spectral Clustering". In 2009 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2009. http://dx.doi.org/10.1109/icdmw.2009.22.

4

Yoo, Shinjae, Hao Huang, and Shiva Prasad Kasiviswanathan. "Streaming spectral clustering". In 2016 IEEE 32nd International Conference on Data Engineering (ICDE). IEEE, 2016. http://dx.doi.org/10.1109/icde.2016.7498277.

5

Liu, Hongfu, Tongliang Liu, Junjie Wu, Dacheng Tao, and Yun Fu. "Spectral Ensemble Clustering". In KDD '15: The 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2015. http://dx.doi.org/10.1145/2783258.2783287.

6

Blaschko, Matthew B., and Christoph H. Lampert. "Correlational spectral clustering". In 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2008. http://dx.doi.org/10.1109/cvpr.2008.4587353.

7

Hunter, Blake, Thomas Strohmer, Theodore E. Simos, George Psihoyios, and Ch. Tsitouras. "Compressive Spectral Clustering". In ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010. AIP, 2010. http://dx.doi.org/10.1063/1.3498187.

8

Yu and Shi. "Multiclass spectral clustering". In ICCV 2003: 9th International Conference on Computer Vision. IEEE, 2003. http://dx.doi.org/10.1109/iccv.2003.1238361.

9

Ladikos, Alexander, Slobodan Ilic, and Nassir Navab. "Spectral camera clustering". In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457537.

10

Palit, Biswaroop, Rakesh Nigam, Keren Perlmutter, and Sharon Perlmutter. "Spectral face clustering". In 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops. IEEE, 2009. http://dx.doi.org/10.1109/iccvw.2009.5457700.


Organizational reports on the topic "Spectral clustering"

1

Neville, Jennifer, Micah Adler, and David Jensen. Spectral Clustering with Links and Attributes. Fort Belvoir, VA: Defense Technical Information Center, January 2004. http://dx.doi.org/10.21236/ada472209.

2

Blakely, Logan. Spectral Clustering for Electrical Phase Identification Using Advanced Metering Infrastructure Voltage Time Series. Portland State University Library, January 2000. http://dx.doi.org/10.15760/etd.6567.

3

Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, March 2022. http://dx.doi.org/10.4271/2022-01-0616.

Abstract:
As a critical power source, the diesel engine is widely used in various situations. Diesel engine failure may lead to serious property losses and even accidents. Fault detection can improve the safety of diesel engines and reduce economic loss. Surface vibration signals are often used in non-disassembly fault diagnosis because of their convenient measurement and stability. This paper proposes a novel method for engine fault detection based on vibration signals using variational mode decomposition (VMD), K-means, and a genetic algorithm. The mode number of VMD dramatically affects the accuracy of extracting signal components. Therefore, a method based on the spectral energy distribution is proposed to determine this parameter, and the quadratic penalty term is optimized according to the SNR. The results show that the optimized VMD can adaptively extract the vibration signal components of the diesel engine. In actual fault diagnosis cases, it is difficult to obtain labeled data. A clustering algorithm can complete the classification without labeled data, but is limited by low accuracy. In this paper, the optimized VMD is used to decompose and standardize the vibration signal. Then a correlation-based feature selection method is applied to obtain the feature set after dimensionality reduction. Finally, the results are input into a classifier combining K-means and a genetic algorithm (GA). By introducing and optimizing the genetic algorithm, the number of classes can be selected automatically, and the accuracy is significantly improved. This method can carry out adaptive detection of multiple diesel engine faults without labeled data. Compared with many supervised learning algorithms, the proposed method also achieves high accuracy.
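The paper's automatic selection of the number of fault classes can be approximated, for illustration only, by scanning candidate k values for K-Means and scoring each partition with a clustering validity index instead of a genetic algorithm; the synthetic "feature vectors" below are made-up stand-ins for the VMD-derived features described in the abstract.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Made-up stand-in for VMD-derived fault features: four synthetic fault classes.
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [5, 5], [0, 5], [5, 0]],
                  cluster_std=0.6, random_state=0)

# Scan candidate class counts and keep the best-scoring partition.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)
print("selected number of fault classes:", best_k)
```

A GA, as in the paper, searches the same kind of objective more flexibly; the exhaustive scan here just makes the "select the class count from unlabeled data" idea concrete.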