Journal articles on the topic "Clustering spectral"



Check out the top 50 journal articles on the topic "Clustering spectral".

An "Add to bibliography" button is available next to every work in the list. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever these are available in the metadata.

Browse journal articles from many different disciplines and compile an accurate bibliography.

1

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Abstract:
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SPECTACL: a method combining the advantages of both approaches, while solving the two mentioned drawbacks. Our method is easy to implement, such as Spectral Clustering, and theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
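The abstract above contrasts the minimum-cut paradigm (Spectral Clustering) with the maximum-density paradigm (DBSCAN). As a rough, hedged illustration of those two baselines only, not of the SpectACl method itself, a scikit-learn sketch on a nonconvex two-moons dataset might look as follows; the noise level, RBF bandwidth, and DBSCAN radius are arbitrary assumptions:

```python
# Illustrative baselines only: the two paradigms named in the abstract,
# not the SpectACl algorithm proposed in the paper.
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering, DBSCAN
from sklearn.metrics import adjusted_rand_score

X, y = make_moons(n_samples=500, noise=0.08, random_state=0)  # nonconvex clusters

# Minimum-cut paradigm: spectral clustering on an RBF affinity graph.
labels_sc = SpectralClustering(n_clusters=2, affinity="rbf", gamma=20.0,
                               assign_labels="kmeans", random_state=0).fit_predict(X)

# Maximum-density paradigm: DBSCAN with a hand-picked neighborhood radius.
labels_db = DBSCAN(eps=0.15, min_samples=5).fit_predict(X)

print("Spectral Clustering ARI:", adjusted_rand_score(y, labels_sc))
print("DBSCAN ARI:", adjusted_rand_score(y, labels_db))
```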
2

Li, Hongmin, Xiucai Ye, Akira Imakura, and Tetsuya Sakurai. "LSEC: Large-scale spectral ensemble clustering". Intelligent Data Analysis 27, no. 1 (January 30, 2023): 59–77. http://dx.doi.org/10.3233/ida-216240.

Abstract:
A fundamental problem in machine learning is ensemble clustering, that is, combining multiple base clusterings to obtain improved clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks owing to efficiency bottlenecks. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to balance efficiency and effectiveness. In LSEC, a large-scale spectral clustering-based efficient ensemble generation framework is designed to generate various base clusterings with low computational complexity. Thereafter, all the base clusterings are combined using a bipartite graph partition-based consensus function to obtain improved consensus clustering results. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets demonstrate the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
3

Zhuang, Xinwei, and Sean Hanna. "Space Frame Optimisation with Spectral Clustering". International Journal of Machine Learning and Computing 10, no. 4 (July 2020): 507–12. http://dx.doi.org/10.18178/ijmlc.2020.10.4.965.

4

Sun, Gan, Yang Cong, Qianqian Wang, Jun Li, and Yun Fu. "Lifelong Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5867–74. http://dx.doi.org/10.1609/aaai.v34i04.6045.

Abstract:
In the past decades, spectral clustering (SC) has become one of the most effective clustering algorithms. However, most previous studies focus on spectral clustering tasks with a fixed task set, which cannot incorporate with a new spectral clustering task without accessing to previously learned tasks. In this paper, we aim to explore the problem of spectral clustering in a lifelong machine learning framework, i.e., Lifelong Spectral Clustering (L2SC). Its goal is to efficiently learn a model for a new spectral clustering task by selectively transferring previously accumulated experience from knowledge library. Specifically, the knowledge library of L2SC contains two components: 1) orthogonal basis library: capturing latent cluster centers among the clusters in each pair of tasks; 2) feature embedding library: embedding the feature manifold information shared among multiple related tasks. As a new spectral clustering task arrives, L2SC firstly transfers knowledge from both basis library and feature library to obtain encoding matrix, and further redefines the library base over time to maximize performance across all the clustering tasks. Meanwhile, a general online update formulation is derived to alternatively update the basis library and feature library. Finally, the empirical experiments on several real-world benchmark datasets demonstrate that our L2SC model can effectively improve the clustering performance when comparing with other state-of-the-art spectral clustering algorithms.
5

Ling Ping, Rong Xiangsheng, and Dong Yongquan. "Incremental Spectral Clustering". Journal of Convergence Information Technology 7, no. 15 (August 31, 2012): 286–93. http://dx.doi.org/10.4156/jcit.vol7.issue15.34.

6

Kim, Jaehwan, and Seungjin Choi. "Semidefinite spectral clustering". Pattern Recognition 39, no. 11 (November 2006): 2025–35. http://dx.doi.org/10.1016/j.patcog.2006.05.021.

7

Challa, Aditya, Sravan Danda, B. S. Daya Sagar, and Laurent Najman. "Power Spectral Clustering". Journal of Mathematical Imaging and Vision 62, no. 9 (July 11, 2020): 1195–213. http://dx.doi.org/10.1007/s10851-020-00980-7.

8

Huang, Jin, Feiping Nie, and Heng Huang. "Spectral Rotation versus K-Means in Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 431–37. http://dx.doi.org/10.1609/aaai.v27i1.8683.

Abstract:
Spectral clustering has been a popular data clustering algorithm. This category of approaches often resorts to other clustering methods, such as K-Means, to get the final clusters. The potential flaw of this common practice is that the obtained relaxed continuous spectral solution could severely deviate from the true discrete solution. In this paper, we propose to impose an additional orthonormal constraint to better approximate the optimal continuous solution to the graph cut objective functions. Such a method, called spectral rotation in the literature, optimizes the spectral clustering objective functions better than K-Means and improves the clustering accuracy. We provide an efficient algorithm to solve the new problem rigorously, which is not significantly more costly than K-Means. We also establish the connection between our method and K-Means to provide theoretical motivation for our method. Experimental results show that our algorithm consistently reaches better cuts and at the same time outperforms classic spectral clustering methods on clustering metrics.
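For orientation, the baseline the abstract argues against, running K-Means on the relaxed continuous spectral solution, can be sketched as below. This is a generic normalized-cut pipeline under assumed choices (Gaussian affinity, bandwidth sigma), not the spectral-rotation algorithm of the paper:

```python
# Generic spectral clustering baseline: eigenvectors of the normalized
# Laplacian followed by K-Means on the rows of the embedding.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_kmeans(X, n_clusters, sigma=1.0):
    # Gaussian affinity matrix (the bandwidth sigma is an assumption).
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    # Symmetrically normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    # Relaxed continuous solution: eigenvectors of the smallest eigenvalues.
    _, U = eigh(L, subset_by_index=[0, n_clusters - 1])
    U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)  # row-normalize
    # Discretization step that the paper replaces with spectral rotation.
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(U)
```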
9

JIN, Hui-zhen. "Multilevel spectral clustering with ascertainable clustering number". Journal of Computer Applications 28, no. 5 (October 17, 2008): 1229–31. http://dx.doi.org/10.3724/sp.j.1087.2008.01229.

10

Huang, Dong, Chang-Dong Wang, Jian-Sheng Wu, Jian-Huang Lai, and Chee-Keong Kwoh. "Ultra-Scalable Spectral Clustering and Ensemble Clustering". IEEE Transactions on Knowledge and Data Engineering 32, no. 6 (June 1, 2020): 1212–26. http://dx.doi.org/10.1109/tkde.2019.2903410.

11

Yousefi, Bardia, Clemente Ibarra-Castanedo, Martin Chamberland, Xavier P. V. Maldague, and Georges Beaudoin. "Unsupervised Identification of Targeted Spectra Applying Rank1-NMF and FCC Algorithms in Long-Wave Hyperspectral Infrared Imagery". Remote Sensing 13, no. 11 (May 28, 2021): 2125. http://dx.doi.org/10.3390/rs13112125.

Abstract:
Clustering methods unequivocally show considerable influence on many recent algorithms and play an important role in hyperspectral data analysis. Here, we challenge the clustering for mineral identification using two different strategies in hyperspectral long wave infrared (LWIR, 7.7–11.8 μm). For that, we compare two algorithms to perform the mineral identification in a unique dataset. The first algorithm uses spectral comparison techniques for all the pixel-spectra and creates RGB false color composites (FCC). Then, a color based clustering is used to group the regions (called FCC-clustering). The second algorithm clusters all the pixel-spectra to directly group the spectra. Then, the first rank of non-negative matrix factorization (NMF) extracts the representative of each cluster and compares results with the spectral library of JPL/NASA. These techniques give the comparison values as features which convert into RGB-FCC as the results (called clustering rank1-NMF). We applied K-means as clustering approach, which can be modified in any other similar clustering approach. The results of the clustering-rank1-NMF algorithm indicate significant computational efficiency (more than 20 times faster than the previous approach) and promising performance for mineral identification having up to 75.8% and 84.8% average accuracies for FCC-clustering and clustering-rank1 NMF algorithms (using spectral angle mapper (SAM)), respectively. Furthermore, several spectral comparison techniques are used also such as adaptive matched subspace detector (AMSD), orthogonal subspace projection (OSP) algorithm, principal component analysis (PCA), local matched filter (PLMF), SAM, and normalized cross correlation (NCC) for both algorithms and most of them show a similar range in accuracy. However, SAM and NCC are preferred due to their computational simplicity. Our algorithms strive to identify eleven different mineral grains (biotite, diopside, epidote, goethite, kyanite, scheelite, smithsonite, tourmaline, pyrope, olivine, and quartz).
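The spectral angle mapper (SAM) that the abstract singles out for its computational simplicity compares two spectra by the angle between them; a minimal sketch of that comparison (not the authors' full pipeline) is:

```python
# Spectral angle mapper (SAM): angle between a pixel spectrum and a reference
# spectrum; smaller angles indicate more similar spectra.
import numpy as np

def spectral_angle(x, y, eps=1e-12):
    cos_theta = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))  # in radians
```

In the workflow the abstract describes, such a score would be computed between each cluster representative and the entries of a reference spectral library, with the smallest angle giving the mineral label.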
12

Fu, Li Li, Yong Li Liu, and Li Jing Hao. "Research on Spectral Clustering". Applied Mechanics and Materials 687-691 (November 2014): 1350–53. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.1350.

Abstract:
Spectral clustering algorithm is a kind of clustering algorithm based on spectral graph theory. As spectral clustering has deep theoretical foundation as well as the advantage in dealing with non-convex distribution, it has received much attention in machine learning and data mining areas. The algorithm is easy to implement, and outperforms traditional clustering algorithms such as K-means algorithm. This paper aims to give some intuitions on spectral clustering. We describe different graph partition criteria, the definition of spectral clustering, and clustering steps, etc. Finally, in order to solve the disadvantage of spectral clustering, some improvements are introduced briefly.
13

Wada, Yuichiro, Shugo Miyamoto, Takumi Nakagama, Léo Andéol, Wataru Kumagai, and Takafumi Kanamori. "Spectral Embedded Deep Clustering". Entropy 21, no. 8 (August 15, 2019): 795. http://dx.doi.org/10.3390/e21080795.

Abstract:
We propose a new clustering method based on a deep neural network. Given an unlabeled dataset and the number of clusters, our method directly groups the dataset into the given number of clusters in the original space. We use a conditional discrete probability distribution defined by a deep neural network as a statistical model. Our strategy is first to estimate the cluster labels of unlabeled data points selected from a high-density region, and then to conduct semi-supervised learning to train the model by using the estimated cluster labels and the remaining unlabeled data points. Lastly, by using the trained model, we obtain the estimated cluster labels of all given unlabeled data points. The advantage of our method is that it does not require key conditions. Existing clustering methods with deep neural networks assume that the cluster balance of a given dataset is uniform. Moreover, it also can be applied to various data domains as long as the data is expressed by a feature vector. In addition, it is observed that our method is robust against outliers. Therefore, the proposed method is expected to perform, on average, better than previous methods. We conducted numerical experiments on five commonly used datasets to confirm the effectiveness of the proposed method.
14

Chi, Yun, Xiaodan Song, Dengyong Zhou, Koji Hino, and Belle L. Tseng. "On evolutionary spectral clustering". ACM Transactions on Knowledge Discovery from Data 3, no. 4 (November 2009): 1–30. http://dx.doi.org/10.1145/1631162.1631165.

15

Chen, Jiansheng, Zhengqin Li, and Bo Huang. "Linear Spectral Clustering Superpixel". IEEE Transactions on Image Processing 26, no. 7 (July 2017): 3317–30. http://dx.doi.org/10.1109/tip.2017.2651389.

16

Li, Jianyuan, Yingjie Xia, Zhenyu Shan, and Yuncai Liu. "Scalable Constrained Spectral Clustering". IEEE Transactions on Knowledge and Data Engineering 27, no. 2 (February 1, 2015): 589–93. http://dx.doi.org/10.1109/tkde.2014.2356471.

17

Yang, Yang, Fumin Shen, Zi Huang, Heng Tao Shen, and Xuelong Li. "Discrete Nonnegative Spectral Clustering". IEEE Transactions on Knowledge and Data Engineering 29, no. 9 (September 1, 2017): 1834–45. http://dx.doi.org/10.1109/tkde.2017.2701825.

18

Huang, Shudong, Hongjun Wang, Dingcheng Li, Yan Yang, and Tianrui Li. "Spectral co-clustering ensemble". Knowledge-Based Systems 84 (August 2015): 46–55. http://dx.doi.org/10.1016/j.knosys.2015.03.027.

19

Langone, Rocco, Marc Van Barel, and Johan A. K. Suykens. "Efficient evolutionary spectral clustering". Pattern Recognition Letters 84 (December 2016): 78–84. http://dx.doi.org/10.1016/j.patrec.2016.08.012.

20

Alzate, Carlos, and Johan A. K. Suykens. "Hierarchical kernel spectral clustering". Neural Networks 35 (November 2012): 21–30. http://dx.doi.org/10.1016/j.neunet.2012.06.007.

21

Ozertem, Umut, Deniz Erdogmus, and Robert Jenssen. "Mean shift spectral clustering". Pattern Recognition 41, no. 6 (June 2008): 1924–38. http://dx.doi.org/10.1016/j.patcog.2007.09.009.

22

Binkiewicz, N., J. T. Vogelstein, and K. Rohe. "Covariate-assisted spectral clustering". Biometrika 104, no. 2 (March 19, 2017): 361–77. http://dx.doi.org/10.1093/biomet/asx008.

23

von Luxburg, Ulrike, Mikhail Belkin, and Olivier Bousquet. "Consistency of spectral clustering". Annals of Statistics 36, no. 2 (April 2008): 555–86. http://dx.doi.org/10.1214/009053607000000640.

24

Langone, Rocco, and Johan A. K. Suykens. "Fast kernel spectral clustering". Neurocomputing 268 (December 2017): 27–33. http://dx.doi.org/10.1016/j.neucom.2016.12.085.

25

Chen, Guangliang, and Gilad Lerman. "Spectral Curvature Clustering (SCC)". International Journal of Computer Vision 81, no. 3 (December 10, 2008): 317–30. http://dx.doi.org/10.1007/s11263-008-0178-9.

26

Bolla, Marianna, and Ahmed Elbanna. "Discrepancy minimizing spectral clustering". Discrete Applied Mathematics 243 (July 2018): 286–89. http://dx.doi.org/10.1016/j.dam.2018.02.016.

27

Pang, Yanwei, Jin Xie, Feiping Nie, and Xuelong Li. "Spectral Clustering by Joint Spectral Embedding and Spectral Rotation". IEEE Transactions on Cybernetics 50, no. 1 (January 2020): 247–58. http://dx.doi.org/10.1109/tcyb.2018.2868742.

28

Yan, Yuguang, Zhihao Xu, Canlin Yang, Jie Zhang, Ruichu Cai, and Michael Kwok-Po Ng. "An Optimal Transport View for Subspace Clustering and Spectral Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 15 (March 24, 2024): 16281–89. http://dx.doi.org/10.1609/aaai.v38i15.29563.

Abstract:
Clustering is one of the most fundamental problems in machine learning and data mining, and many algorithms have been proposed in the past decades. Among them, subspace clustering and spectral clustering are the most famous approaches. In this paper, we provide an explanation for subspace clustering and spectral clustering from the perspective of optimal transport. Optimal transport studies how to move samples from one distribution to another distribution with minimal transport cost, and has shown a powerful ability to extract geometric information. By considering a self optimal transport model with only one group of samples, we observe that both subspace clustering and spectral clustering can be explained in the framework of optimal transport, and the optimal transport matrix bridges the spaces of features and spectral embeddings. Inspired by this connection, we propose a spectral optimal transport barycenter model, which learns spectral embeddings by solving a barycenter problem equipped with an optimal transport discrepancy and guidance of data. Based on our proposed model, we take advantage of optimal transport to exploit both feature and metric information involved in data for learning coupled spectral embeddings and affinity matrix in a unified model. We develop an alternating optimization algorithm to solve the resultant problems, and conduct experiments in different settings to evaluate the performance of our proposed methods.
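As background for the optimal transport view described in the abstract, an entropy-regularized transport plan between two discrete distributions can be computed with Sinkhorn iterations. This generic sketch is for orientation only and is not the barycenter model proposed in the paper; the regularization strength and iteration count are assumptions:

```python
# Entropy-regularized optimal transport via Sinkhorn iterations (generic
# background for the OT perspective in the abstract, not the paper's model).
import numpy as np

def sinkhorn_plan(a, b, C, reg=0.05, n_iter=500):
    # a, b: source/target weights summing to 1; C: pairwise cost matrix.
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan P = diag(u) K diag(v)
```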
29

Chen, Ji, Kaiping Zhan, Qingzhou Li, Zhiyang Tang, Chenwei Zhu, Ke Liu, and Xiangyou Li. "Spectral clustering based on histogram of oriented gradient (HOG) of coal using laser-induced breakdown spectroscopy". Journal of Analytical Atomic Spectrometry 36, no. 6 (2021): 1297–305. http://dx.doi.org/10.1039/d1ja00104c.

Abstract:
Histogram of oriented gradients (HOG) was introduced in the unsupervised spectral clustering in LIBS. After clustering, the spectra of different matrices were clearly distinguished, and the accuracy of quantitative analysis of coal was improved.
30

Blanza, Jojo. "Wireless Propagation Multipaths using Spectral Clustering and Three-Constraint Affinity Matrix Spectral Clustering". Baghdad Science Journal 18, no. 2(Suppl.) (June 20, 2021): 1001. http://dx.doi.org/10.21123/bsj.2021.18.2(suppl.).1001.

Abstract:
This study focused on spectral clustering (SC) and three-constraint affinity matrix spectral clustering (3CAM-SC) to determine the number of clusters and the membership of the clusters of the COST 2100 channel model (C2CM) multipath dataset simultaneously. Various multipath clustering approaches solve only the number of clusters without taking into consideration the membership of clusters. The problem of giving only the number of clusters is that there is no assurance that the membership of the multipath clusters is accurate even though the number of clusters is correct. SC and 3CAM-SC aimed to solve this problem by determining the membership of the clusters. The cluster and the cluster count were then computed through the cluster-wise Jaccard index of the membership of the multipaths to their clusters. The multipaths generated by C2CM were transformed using the directional cosine transform (DCT) and the whitening transform (WT). The transformed dataset was clustered using SC and 3CAM-SC. The clustering performance was validated using the Jaccard index by comparing the reference multipath dataset with the calculated multipath clusters. The results show that the effectiveness of SC is similar to the state-of-the-art clustering approaches. However, 3CAM-SC outperforms SC in all channel scenarios. SC can be used in indoor scenarios based on accuracy, while 3CAM-SC is applicable in indoor and semi-urban scenarios. Thus, the clustering approaches can be applied as alternative clustering techniques in the field of channel modeling.
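The cluster-wise Jaccard index used for validation in the abstract measures the overlap between a computed multipath cluster and a reference cluster; a minimal sketch, treating clusters as sets of multipath indices (the matching of computed to reference clusters is left out as a simplifying assumption):

```python
# Jaccard index between a computed cluster and a reference cluster,
# with each cluster given as a collection of multipath indices.
def jaccard_index(computed, reference):
    computed, reference = set(computed), set(reference)
    union = computed | reference
    return len(computed & reference) / len(union) if union else 1.0
```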
31

Chen, Guangchun, Juan Hu, Hong Peng, Jun Wang, and Xiangnian Huang. "A Spectral Clustering Algorithm Improved by P Systems". International Journal of Computers Communications & Control 13, no. 5 (September 29, 2018): 759–71. http://dx.doi.org/10.15837/ijccc.2018.5.3238.

Abstract:
The spectral clustering algorithm has difficulty finding clusters when the dataset has large differences in density, and its clustering effect depends on the selection of initial centers. To overcome these shortcomings, we propose a novel spectral clustering algorithm based on a membrane computing framework, called the MSC algorithm, whose idea is to use a membrane clustering algorithm to realize the clustering component in spectral clustering. A tissue-like P system is used as its computing framework, where each object in the cells denotes a set of cluster centers and a velocity-location model is used as the evolution rules. Under the control of the evolution-communication mechanism, the tissue-like P system can obtain a good clustering partition for each dataset. The proposed spectral clustering algorithm is evaluated on three artificial datasets and ten UCI datasets, and it is further compared with classical spectral clustering algorithms. The comparison results demonstrate the advantage of the proposed spectral clustering algorithm.
32

Li, Qiang. "A Spectrum Clustering Algorithm Based on Weighted Fuzzy Similar Matrix". Advanced Materials Research 482-484 (February 2012): 2109–13. http://dx.doi.org/10.4028/www.scientific.net/amr.482-484.2109.

Abstract:
Unlike traditional clustering algorithms, the spectral clustering algorithm can be applied to non-convex sample spaces and converges to the global optimum. Taking the similarity measure of spectral clustering as the entry point, this paper introduces an improved weighted fuzzy similarity matrix into spectral clustering, which avoids the influence that parameter changes of the fuzzy similarity matrix have on the clustering effect in traditional spectral clustering and improves the effectiveness of clustering. The approach is validated on UCI datasets.
33

Li, Ziyue, Emma L. D'Ambro, Siegfried Schobesberger, Cassandra J. Gaston, Felipe D. Lopez-Hilfiker, Jiumeng Liu, John E. Shilling, Joel A. Thornton, and Christopher D. Cappa. "A robust clustering algorithm for analysis of composition-dependent organic aerosol thermal desorption measurements". Atmospheric Chemistry and Physics 20, no. 4 (March 2, 2020): 2489–512. http://dx.doi.org/10.5194/acp-20-2489-2020.

Abstract:
One of the challenges of understanding atmospheric organic aerosol (OA) particles stems from their complex composition. Mass spectrometry is commonly used to characterize the compositional variability of OA. Clustering of a mass spectral dataset helps identify components that exhibit similar behavior or have similar properties, facilitating understanding of sources and processes that govern compositional variability. Here, we developed an algorithm for clustering mass spectra, the noise-sorted scanning clustering (NSSC), appropriate for application to thermal desorption measurements of collected OA particles from the Filter Inlet for Gases and AEROsols coupled to a chemical ionization mass spectrometer (FIGAERO-CIMS). NSSC, which extends the common density-based spatial clustering of applications with noise (DBSCAN) algorithm, provides a robust, reproducible analysis of the FIGAERO temperature-dependent mass spectral data. The NSSC allows for the determination of thermal profiles for compositionally distinct clusters of mass spectra, increasing the accessibility and enhancing the interpretation of FIGAERO data. Applications of NSSC to several laboratory biogenic secondary organic aerosol (BSOA) systems demonstrate the ability of NSSC to distinguish different types of thermal behaviors for the components comprising the particles along with the relative mass contributions and chemical properties (e.g., average molecular formula) of each mass spectral cluster. For each of the systems examined, more than 80 % of the total mass is clustered into 9–13 mass spectral clusters. Comparison of the average thermograms of the mass spectral clusters between systems indicates some commonality in terms of the thermal properties of different BSOA, although with some system-specific behavior. Application of NSSC to sets of experiments in which one experimental parameter, such as the concentration of NO, is varied demonstrates the potential for mass spectral clustering to elucidate the chemical factors that drive changes in the thermal properties of OA particles. Further quantitative interpretation of the thermograms of the mass spectral clusters will allow for a more comprehensive understanding of the thermochemical properties of OA particles.
34

Mandal, Jyotsna Kumar, and Parthajit Roy. "A Novel Spectral Clustering based on Local Distribution". International Journal of Electrical and Computer Engineering (IJECE) 5, no. 2 (April 1, 2015): 361. http://dx.doi.org/10.11591/ijece.v5i2.pp361-370.

Abstract:
This paper proposed a novel variation of the spectral clustering model based on a novel affinity metric that considers the distribution of the neighboring points to learn the underlying structures in the data set. The proposed affinity metric is calculated using the Mahalanobis distance, which exploits the concept of outlier detection for identifying the neighborhoods of the data points. The random walk Laplacian of the representative graph and its spectrum have been considered for the clustering purpose, and the first k eigenvectors have been considered in the second phase of clustering. The model has been tested with benchmark data, and the quality of the output of the proposed model has been tested on various clustering index scales.
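Two ingredients named in the abstract, the Mahalanobis distance and the random walk Laplacian, have standard forms that can be sketched as follows; the covariance estimate and the affinity construction here are assumptions, not the authors' exact formulation:

```python
# Generic forms of the two ingredients named in the abstract: the Mahalanobis
# distance and the random walk graph Laplacian L_rw = I - D^{-1} W.
import numpy as np

def mahalanobis(x, y, cov_inv):
    d = x - y
    return float(np.sqrt(d @ cov_inv @ d))

def random_walk_laplacian(W):
    # W: symmetric affinity matrix; its row sums form the degree matrix D.
    degrees = W.sum(axis=1)
    return np.eye(W.shape[0]) - W / degrees[:, None]
```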
35

Mizutani, Tomohiko. "Convex programming based spectral clustering". Machine Learning 110, no. 5 (April 14, 2021): 933–64. http://dx.doi.org/10.1007/s10994-020-05940-1.

36

Zhao, Qianli, Linlin Zong, Xianchao Zhang, Xinyue Liu, and Hong Yu. "Incomplete multi-view spectral clustering". Journal of Intelligent & Fuzzy Systems 38, no. 3 (March 4, 2020): 2991–3001. http://dx.doi.org/10.3233/jifs-190380.

37

Hong Yu, He Jiang, Xianchao Zhang, and Yuansheng Yang. "K_Neighbors Path Based Spectral Clustering". International Journal of Advancements in Computing Technology 4, no. 1 (January 31, 2012): 50–58. http://dx.doi.org/10.4156/ijact.vol4.issue1.6.

38

Wang, Hongtao, Ang Li, Bolin Shen, Yuyan Sun, and Hongmei Wang. "Federated Multi-View Spectral Clustering". IEEE Access 8 (2020): 202249–59. http://dx.doi.org/10.1109/access.2020.3036747.

39

Mohamed, Samar S., and Magdy MA Salama. "Spectral clustering for TRUS images". BioMedical Engineering OnLine 6, no. 1 (2007): 10. http://dx.doi.org/10.1186/1475-925x-6-10.

40

Liu, Mingming, Bing Liu, Chen Zhang, and Wei Sun. "Spectral Nonlinearly Embedded Clustering Algorithm". Mathematical Problems in Engineering 2016 (2016): 1–9. http://dx.doi.org/10.1155/2016/9264561.

Abstract:
As is well known, traditional spectral clustering (SC) methods are developed based on the manifold assumption, namely, that two nearby data points in the high-density region of a low-dimensional data manifold have the same cluster label. But, for some high-dimensional and sparse data, such an assumption might be invalid. Consequently, the clustering performance of SC will be degraded sharply in this case. To solve this problem, in this paper, we propose a general spectral embedded framework, which embeds the true cluster assignment matrix for high-dimensional data into a nonlinear space by a predefined embedding function. Based on this framework, several algorithms are presented by using different embedding functions, which aim at learning the final cluster assignment matrix and a transformation into a low dimensionality space simultaneously. More importantly, the proposed method can naturally handle the out-of-sample extension problem. The experimental results on benchmark datasets demonstrate that the proposed method significantly outperforms existing clustering methods.
41

Zhou, Peng, Yi-Dong Shen, Liang Du, Fan Ye, and Xuejun Li. "Incremental multi-view spectral clustering". Knowledge-Based Systems 174 (June 2019): 73–86. http://dx.doi.org/10.1016/j.knosys.2019.02.036.

42

Jiang, Wenhao, Wei Liu, and Fu-lai Chung. "Knowledge transfer for spectral clustering". Pattern Recognition 81 (September 2018): 484–96. http://dx.doi.org/10.1016/j.patcog.2018.04.018.

43

Chen, Weifu, and Guocan Feng. "Spectral clustering with discriminant cuts". Knowledge-Based Systems 28 (April 2012): 27–37. http://dx.doi.org/10.1016/j.knosys.2011.11.010.

44

Cai, Yang, Yuanyuan Jiao, Wenzhang Zhuge, Hong Tao, and Chenping Hou. "Partial multi-view spectral clustering". Neurocomputing 311 (October 2018): 316–24. http://dx.doi.org/10.1016/j.neucom.2018.05.053.

45

Wen, Guoqiu. "Robust self-tuning spectral clustering". Neurocomputing 391 (May 2020): 243–48. http://dx.doi.org/10.1016/j.neucom.2018.11.105.

46

Chang, Hong, and Dit-Yan Yeung. "Robust path-based spectral clustering". Pattern Recognition 41, no. 1 (January 2008): 191–203. http://dx.doi.org/10.1016/j.patcog.2007.04.010.

47

Xiang, Tao, and Shaogang Gong. "Spectral clustering with eigenvector selection". Pattern Recognition 41, no. 3 (March 2008): 1012–29. http://dx.doi.org/10.1016/j.patcog.2007.07.023.

48

von Luxburg, Ulrike. "A tutorial on spectral clustering". Statistics and Computing 17, no. 4 (August 22, 2007): 395–416. http://dx.doi.org/10.1007/s11222-007-9033-z.

49

Yong Wang, Yuan Jiang, Yi Wu, and Zhi-Hua Zhou. "Spectral Clustering on Multiple Manifolds". IEEE Transactions on Neural Networks 22, no. 7 (July 2011): 1149–61. http://dx.doi.org/10.1109/tnn.2011.2147798.

50

Wang, Qi, Zequn Qin, Feiping Nie, and Xuelong Li. "Spectral Embedded Adaptive Neighbors Clustering". IEEE Transactions on Neural Networks and Learning Systems 30, no. 4 (April 2019): 1265–71. http://dx.doi.org/10.1109/tnnls.2018.2861209.
