Selected scientific literature on the topic "Clustering"

Cite a source in APA, MLA, Chicago, Harvard, and many other styles

Choose the source type:

Consult the list of current articles, books, theses, conference proceedings, and other scientific sources relevant to the topic "Clustering".

Next to each source in the reference list there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract (summary) of the work online, if it is present in the metadata.

Journal articles on the topic "Clustering"

1

Qian, Yue, Shixin Yao, Tianjun Wu, You Huang, and Lingbin Zeng. "Improved Selective Deep-Learning-Based Clustering Ensemble". Applied Sciences 14, no. 2 (January 15, 2024): 719. http://dx.doi.org/10.3390/app14020719.

Abstract (summary):
Clustering ensemble integrates multiple base clustering results to improve the stability and robustness of a single clustering method. It consists of two principal steps: a generation step, which creates the base clusterings, and a consensus function, which integrates all clusterings obtained in the generation step. However, most of the existing base clustering algorithms used in the generation step are shallow clustering algorithms such as k-means. These shallow clustering algorithms do not work well, or even fail, when dealing with large-scale, high-dimensional unstructured data. The emergence of deep clustering algorithms provides a solution to this challenge. Deep clustering applies the unsupervised representation power of deep learning to complex high-dimensional data clustering and has achieved excellent performance in many fields. In light of this, we introduce deep clustering into the clustering ensemble and propose an improved selective deep-learning-based clustering ensemble algorithm (ISDCE). ISDCE exploits the deep clustering algorithm with different initialization parameters to generate multiple diverse base clusterings. Next, ISDCE constructs ensemble quality and diversity evaluation metrics of the base clusterings to select higher-quality, richly diverse candidate base clusterings. Finally, a weighted graph partition consensus function is utilized to aggregate the candidate base clusterings to obtain a consensus clustering result. Extensive experimental results on various types of datasets demonstrate that ISDCE performs significantly better than existing clustering ensemble approaches.
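The two-step pipeline this abstract describes (generate base clusterings, then apply a consensus function) can be illustrated with a minimal, stdlib-only sketch. This uses a plain co-association consensus rather than ISDCE's selective, weighted graph partition consensus, so it shows the general idea, not the paper's method; `labelings` is a hypothetical list of base cluster labelings.

```python
from itertools import combinations


def co_association(labelings):
    """Fraction of base clusterings in which each pair of points co-occurs."""
    n, m = len(labelings[0]), len(labelings)
    ca = [[0.0] * n for _ in range(n)]
    for lab in labelings:
        for i, j in combinations(range(n), 2):
            if lab[i] == lab[j]:
                ca[i][j] += 1.0 / m
                ca[j][i] += 1.0 / m
    return ca


def consensus(labelings, threshold=0.5):
    """Merge points that co-cluster in at least `threshold` of the base runs."""
    ca = co_association(labelings)
    n = len(ca)
    parent = list(range(n))  # union-find over strongly co-associated pairs

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in combinations(range(n), 2):
        if ca[i][j] >= threshold:
            parent[find(i)] = find(j)
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]
```

Given three base labelings of four points, the pairs that co-cluster in a majority of runs end up in the same consensus cluster.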
2

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Abstract (summary):
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons. While minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters with varying densities. In this paper, we propose SPECTACL: a method combining the advantages of both approaches, while solving the two mentioned drawbacks. Our method is easy to implement, like Spectral Clustering, and is theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
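For reference, the density paradigm contrasted above is exemplified by DBSCAN; a minimal, stdlib-only version of the textbook algorithm (not SPECTACL itself) looks like this. The epsilon-neighbourhood includes the point itself.

```python
def dbscan(points, eps, min_pts):
    """Textbook DBSCAN: returns one cluster id per point, -1 for noise."""
    def neighbors(i):
        return [j for j in range(len(points))
                if sum((a - b) ** 2
                       for a, b in zip(points[i], points[j])) <= eps ** 2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:        # not a core point: tentatively noise
            labels[i] = -1
            continue
        labels[i] = cid
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid      # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:  # core point: keep expanding
                seeds.extend(nb_j)
        cid += 1
    return labels
```

Two tight groups and one isolated point yield two clusters plus one noise label.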
3

Manjunath, Mohith, Yi Zhang, Yeonsung Kim, Steve H. Yeo, Omar Sobh, Nathan Russell, Christian Followell, Colleen Bushell, Umberto Ravaioli, and Jun S. Song. "ClusterEnG: an interactive educational web resource for clustering and visualizing high-dimensional data". PeerJ Computer Science 4 (May 21, 2018): e155. http://dx.doi.org/10.7717/peerj-cs.155.

Abstract (summary):
Background: Clustering is one of the most common techniques in data analysis and seeks to group together data points that are similar in some measure. Although there are many computer programs available for performing clustering, a single web resource that provides several state-of-the-art clustering methods, interactive visualizations and evaluation of clustering results is lacking. Methods: ClusterEnG (acronym for Clustering Engine for Genomics) provides a web interface for clustering data and interactive visualizations including 3D views, data selection and zoom features. Eighteen clustering validation measures are also presented to aid the user in selecting a suitable algorithm for their dataset. ClusterEnG also aims at educating the user about the similarities and differences between various clustering algorithms and provides tutorials that demonstrate potential pitfalls of each algorithm. Conclusions: The web resource will be particularly useful to scientists who are not conversant with computing but want to understand the structure of their data in an intuitive manner. The validation measures facilitate the process of choosing a suitable clustering algorithm among the available options. ClusterEnG is part of a bigger project called KnowEnG (Knowledge Engine for Genomics) and is available at http://education.knoweng.org/clustereng.
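One widely used internal validation measure of the kind mentioned here is the silhouette coefficient; a minimal stdlib-only sketch (a generic implementation, not necessarily one of ClusterEnG's eighteen measures as implemented there):

```python
def silhouette(points, labels):
    """Mean silhouette: (b - a) / max(a, b) per point, where a is the mean
    intra-cluster distance and b the mean distance to the nearest other
    cluster.  Ranges from -1 (bad) to 1 (well separated)."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    scores = []
    for i, p in enumerate(points):
        same = [dist(p, q) for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        if not same:                 # singleton cluster: defined as 0
            scores.append(0.0)
            continue
        a = sum(same) / len(same)
        b = min(sum(dist(p, q) for j, q in enumerate(points)
                    if labels[j] == c) / labels.count(c)
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

On four points forming two tight pairs, the correct labeling scores near 1 and a deliberately wrong labeling scores below 0.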
4

Wei, Shaowei, Jun Wang, Guoxian Yu, Carlotta Domeniconi, and Xiangliang Zhang. "Multi-View Multiple Clusterings Using Deep Matrix Factorization". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6348–55. http://dx.doi.org/10.1609/aaai.v34i04.6104.

Abstract (summary):
Multi-view clustering aims at integrating complementary information from multiple heterogeneous views to improve clustering results. Existing multi-view clustering solutions can only output a single clustering of the data. Due to their multiplicity, multi-view data can have different groupings that are reasonable and interesting from different perspectives. However, how to find multiple, meaningful, and diverse clustering results from multi-view data is still a rarely studied and challenging topic in multi-view clustering and multiple clusterings. In this paper, we introduce a deep matrix factorization based solution (DMClusts) to discover multiple clusterings. DMClusts gradually factorizes multi-view data matrices into representational subspaces layer-by-layer and generates one clustering in each layer. To enforce the diversity between generated clusterings, it minimizes a new redundancy quantification term derived from the proximity between samples in these subspaces. We further introduce an iterative optimization procedure to simultaneously seek multiple clusterings with quality and diversity. Experimental results on benchmark datasets confirm that DMClusts outperforms state-of-the-art multiple clustering solutions.
5

Miklautz, Lukas, Dominik Mautz, Muzaffer Can Altinigneli, Christian Böhm, and Claudia Plant. "Deep Embedded Non-Redundant Clustering". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5174–81. http://dx.doi.org/10.1609/aaai.v34i04.5961.

Abstract (summary):
Complex data types like images can be clustered in multiple valid ways. Non-redundant clustering aims at extracting those meaningful groupings by discouraging redundancy between clusterings. Unfortunately, clustering images directly in pixel space has been shown to work unsatisfactorily. This has increased interest in combining the high representational power of deep learning with clustering, termed deep clustering. Algorithms of this type combine the non-linear embedding of an autoencoder with a clustering objective and optimize both simultaneously. None of these algorithms try to find multiple non-redundant clusterings. In this paper, we propose the novel Embedded Non-Redundant Clustering algorithm (ENRC). It is the first algorithm that combines neural-network-based representation learning with non-redundant clustering. ENRC can find multiple highly non-redundant clusterings of different dimensionalities within a data set. This is achieved by (softly) assigning each dimension of the embedded space to the different clusterings. For instance, in image data sets it can group the objects by color, material and shape, without the need for explicit feature engineering. We show the viability of ENRC in extensive experiments and empirically demonstrate the advantage of combining non-linear representation learning with non-redundant clustering.
6

Fisher, D. "Iterative Optimization and Simplification of Hierarchical Clusterings". Journal of Artificial Intelligence Research 4 (April 1, 1996): 147–78. http://dx.doi.org/10.1613/jair.276.

Abstract (summary):
Clustering is often used for discovering structure in data. Clustering systems differ in the objective function used to evaluate clustering quality and the control strategy used to search the space of clusterings. Ideally, the search strategy should consistently construct clusterings of high quality, but be computationally inexpensive as well. In general, we cannot have it both ways, but we can partition the search so that a system inexpensively constructs a `tentative' clustering for initial examination, followed by iterative optimization, which continues to search in background for improved clusterings. Given this motivation, we evaluate an inexpensive strategy for creating initial clusterings, coupled with several control strategies for iterative optimization, each of which repeatedly modifies an initial clustering in search of a better one. One of these methods appears novel as an iterative optimization strategy in clustering contexts. Once a clustering has been constructed it is judged by analysts -- often according to task-specific criteria. Several authors have abstracted these criteria and posited a generic performance task akin to pattern completion, where the error rate over completed patterns is used to `externally' judge clustering utility. Given this performance task, we adapt resampling-based pruning strategies used by supervised learning systems to the task of simplifying hierarchical clusterings, thus promising to ease post-clustering analysis. Finally, we propose a number of objective functions, based on attribute-selection measures for decision-tree induction, that might perform well on the error rate and simplicity dimensions.
7

Li, Hongmin, Xiucai Ye, Akira Imakura, and Tetsuya Sakurai. "LSEC: Large-scale spectral ensemble clustering". Intelligent Data Analysis 27, no. 1 (January 30, 2023): 59–77. http://dx.doi.org/10.3233/ida-216240.

Abstract (summary):
A fundamental problem in machine learning is ensemble clustering, that is, combining multiple base clusterings to obtain an improved clustering result. However, most of the existing methods are unsuitable for large-scale ensemble clustering tasks owing to efficiency bottlenecks. In this paper, we propose a large-scale spectral ensemble clustering (LSEC) method to balance efficiency and effectiveness. In LSEC, a large-scale spectral clustering-based efficient ensemble generation framework is designed to generate various base clusterings with low computational complexity. Thereafter, all the base clusterings are combined using a bipartite graph partition-based consensus function to obtain improved consensus clustering results. The LSEC method achieves a lower computational complexity than most existing ensemble clustering methods. Experiments conducted on ten large-scale datasets demonstrate the efficiency and effectiveness of the LSEC method. The MATLAB code of the proposed method and experimental datasets are available at https://github.com/Li-Hongmin/MyPaperWithCode.
8

Sun, Yuqin, Songlei Wang, Dongmei Huang, Yuan Sun, Anduo Hu, and Jinzhong Sun. "A multiple hierarchical clustering ensemble algorithm to recognize clusters arbitrarily shaped". Intelligent Data Analysis 26, no. 5 (September 5, 2022): 1211–28. http://dx.doi.org/10.3233/ida-216112.

Abstract (summary):
As a research hotspot in ensemble learning, clustering ensemble obtains robust and highly accurate algorithms by integrating multiple basic clustering algorithms. Most of the existing clustering ensemble algorithms use linear clustering algorithms as the base clusterings. As a typical unsupervised learning technique, clustering algorithms have difficulty properly defining the accuracy of their findings, which makes it difficult to significantly enhance the performance of the final algorithm. In this article, the AGglomerative NESting method is used to build the base clusterings, and a strategy for integrating multiple AGglomerative NESting clusterings is proposed. The algorithm has three main steps: evaluating the credibility of labels, producing multiple base clusterings, and constructing the relation among clusters. The proposed algorithm builds on the original advantages of AGglomerative NESting and further compensates for its inability to identify arbitrarily shaped clusters. Comparing its clustering performance with that of existing clustering algorithms on different datasets establishes the proposed algorithm's superiority in terms of clustering performance.
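AGglomerative NESting (AGNES) is standard bottom-up hierarchical clustering: start from singleton clusters and repeatedly merge the closest pair. A minimal average-linkage sketch, stdlib only (the paper's ensemble machinery is not reproduced here):

```python
def agnes(points, k, linkage="average"):
    """Bottom-up agglomerative clustering: merge the closest pair of
    clusters (average or single linkage) until only k clusters remain.
    Returns clusters as lists of point indices."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                ds = [dist(points[i], points[j])
                      for i in clusters[a] for j in clusters[b]]
                d = sum(ds) / len(ds) if linkage == "average" else min(ds)
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))  # merge the closest pair
    return clusters
```

Cutting the merge process at k = 2 on two well-separated pairs recovers the pairs.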
9

Rouba, Baroudi, and Safia Nait Bahloul. "A Multicriteria Clustering Approach Based on Similarity Indices and Clustering Ensemble Techniques". International Journal of Information Technology & Decision Making 13, no. 04 (July 2014): 811–37. http://dx.doi.org/10.1142/s0219622014500631.

Abstract (summary):
This paper deals with the problem of multicriteria cluster construction. The aim is to propose a multicriteria clustering procedure that discovers data structures from a multicriteria perspective by defining a dissimilarity measure that takes into account the multicriteria nature of the problem. Comparing two objects in the multicriteria context is based on the preference information that expresses whether these objects are indifferent, incomparable, or one is preferred to the other. The proposed approach uses this preference information with an agreement–disagreement similarity index to compute a dissimilarity measure. The approach generates, according to the preference relations, a set of clusterings. Each clustering expresses a way of grouping objects according to the preference relation used. A good-quality final clustering is obtained by combining the previously generated clusterings using a clustering ensemble technique.
10

Gilpin, Sean, Siegfried Nijssen, and Ian Davidson. "Formalizing Hierarchical Clustering as Integer Linear Programming". Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 372–78. http://dx.doi.org/10.1609/aaai.v27i1.8671.

Abstract (summary):
Hierarchical clustering is typically implemented as a greedy heuristic algorithm with no explicit objective function. In this work we formalize hierarchical clustering as an integer linear programming (ILP) problem with a natural objective function and the dendrogram properties enforced as linear constraints. Though exact solvers exist for ILP, we show that a simple randomized algorithm and a linear programming (LP) relaxation can be used to provide approximate solutions faster. Formalizing hierarchical clustering also has the benefit that relaxing the constraints can produce novel problem variations such as overlapping clusterings. Our experiments show that our formulation is capable of outperforming standard agglomerative clustering algorithms in a variety of settings, including traditional hierarchical clustering as well as learning overlapping clusterings.
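One generic way to write such a formulation (a sketch of the idea, not necessarily the paper's exact model) uses binary variables x_{ijl} indicating that points i and j are in the same cluster at dendrogram level l, with merge costs c_{ijl}:

```latex
\begin{align*}
\min_{x}\quad & \textstyle\sum_{i<j}\sum_{\ell} c_{ij\ell}\, x_{ij\ell} \\
\text{s.t.}\quad
  & x_{ij\ell} \le x_{ij,\ell+1}
    && \text{(merges persist toward the root)} \\
  & x_{ik\ell} \ge x_{ij\ell} + x_{jk\ell} - 1
    && \text{(same-cluster relation is transitive at each level)} \\
  & x_{ij\ell} \in \{0,1\}
\end{align*}
```

Relaxing the variables to the interval [0, 1] gives an LP relaxation of the kind the abstract mentions, and loosening the transitivity constraints is one route to variations such as overlapping clusterings.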

Theses on the topic "Clustering"

1

Yoo, Jaiyul. "From galaxy clustering to dark matter clustering". Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1186586898.

2

Hinz, Joel. "Clustering the Web : Comparing Clustering Methods in Swedish". Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-95228.

Abstract (summary):
Clustering -- automatically sorting -- web search results has been the focus of much attention but is by no means a solved problem, and there is little previous work in Swedish. This thesis studies the performance of three clustering algorithms -- k-means, agglomerative hierarchical clustering, and bisecting k-means -- on a total of 32 corpora, as well as whether clustering web search previews, called snippets, instead of full texts can achieve reasonably decent results. Four internal evaluation metrics are used to assess the data. Results indicate that k-means performs worse than the other two algorithms, and that snippets may be good enough to use in an actual product, although there is ample opportunity for further research on both issues; however, results are inconclusive regarding bisecting k-means vis-à-vis agglomerative hierarchical clustering. Stop word and stemmer usage results are not significant, and appear to not affect the clustering by any considerable magnitude.
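Of the three algorithms compared, bisecting k-means is the least commonly described; a minimal deterministic sketch (seeding each two-way split with the two mutually farthest points, which is one common heuristic; real evaluations would use several random restarts):

```python
def two_means(points, iters=10):
    """One 2-means split, seeded with the two mutually farthest points."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centers = list(max(((p, q) for p in points for q in points),
                       key=lambda pq: d2(*pq)))
    parts = [list(points), []]
    for _ in range(iters):
        parts = [[], []]
        for p in points:
            parts[d2(p, centers[1]) < d2(p, centers[0])].append(p)
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*part))
                   if part else centers[i]
                   for i, part in enumerate(parts)]
    return [part for part in parts if part]


def bisecting_kmeans(points, k):
    """Repeatedly split the largest cluster with 2-means until k clusters."""
    clusters = [list(points)]
    while len(clusters) < k:
        big = max(clusters, key=len)
        clusters.remove(big)
        clusters.extend(two_means(big))
    return clusters
```

With six points in three spread-out pairs, asking for two clusters yields a balanced 3/3 split.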
3

Bacarella, Daniele. "Distributed clustering algorithm for large scale clustering problems". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-212089.

Abstract (summary):
Clustering is a task which has received much attention in data mining. The task of finding subsets of objects sharing some sort of common attributes is applied in various fields such as biology, medicine, business and computer science. A document search engine, for instance, takes advantage of the information obtained by clustering the document database to return results with information relevant to the query. Two main factors that make clustering a challenging task are the size of the dataset and the dimensionality of the objects to cluster. Sometimes the character of the object makes it difficult to identify its attributes. This is the case with image clustering. A common approach is comparing two images using their visual features, like the colors or shapes they contain. However, sometimes images come along with textual information claiming to be sufficiently descriptive of the content (e.g. tags on web images). The purpose of this thesis work is to propose a text-based image clustering algorithm through the combined application of two techniques, namely MinHash Locality Sensitive Hashing (MinHash LSH) and Frequent Itemset Mining.
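The MinHash step can be sketched as follows (generic MinHash over integer item ids with random affine hash functions; the LSH banding and frequent-itemset stages of the thesis are omitted):

```python
import random

PRIME = 2_147_483_647  # large prime modulus for the affine hashes


def make_hashes(n, seed=0):
    """n random affine hash functions x -> (a*x + b) mod PRIME."""
    rng = random.Random(seed)
    return [(rng.randrange(1, PRIME), rng.randrange(PRIME)) for _ in range(n)]


def signature(items, hashes):
    """MinHash signature: componentwise minimum of each hash over the set."""
    return [min((a * x + b) % PRIME for x in items) for a, b in hashes]


def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching components estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)
```

Similar sets get similar signatures, so near-duplicate tag sets can be grouped without comparing the sets themselves.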
4

Zimek, Arthur. "Correlation Clustering". Diss., lmu, 2008. http://nbn-resolving.de/urn:nbn:de:bvb:19-87361.

5

Rutten, Jeroen Hendrik Gerardus Christiaan. "Polyhedral clustering". Maastricht: Universiteit Maastricht; University Library, Maastricht University [Host], 1998. http://arno.unimaas.nl/show.cgi?fid=6061.

6

Leisch, Friedrich. "Bagged clustering". SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/1272/1/document.pdf.

Abstract (summary):
A new ensemble method for cluster analysis is introduced, which can be interpreted in two different ways: As complexity-reducing preprocessing stage for hierarchical clustering and as combination procedure for several partitioning results. The basic idea is to locate and combine structurally stable cluster centers and/or prototypes. Random effects of the training set are reduced by repeatedly training on resampled sets (bootstrap samples). We discuss the algorithm both from a more theoretical and an applied point of view and demonstrate it on several data sets. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
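The two-stage idea in the abstract (repeatedly cluster bootstrap resamples, then combine the resulting centers) can be sketched as follows; `base_clusterer` is a hypothetical pluggable function returning cluster centers, and the second, hierarchical stage over the pooled centers is omitted:

```python
import random


def bagged_centers(points, base_clusterer, runs=10, seed=0):
    """Stage 1 of bagged clustering: cluster bootstrap resamples of the
    training set and pool all resulting centers.  The pooled centers would
    then themselves be clustered (e.g. hierarchically) to locate the
    structurally stable ones."""
    rng = random.Random(seed)
    centers = []
    for _ in range(runs):
        resample = [rng.choice(points) for _ in points]  # bootstrap sample
        centers.extend(base_clusterer(resample))
    return centers
```

Using a trivial one-center base clusterer (the mean of the resample), five runs pool five centers, each lying within the range of the data.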
7

Eldridge, Justin Eldridge. "Clustering Consistently". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1512070374903249.

8

Salamone, Johnny <1990>. "Speaker Clustering". Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12958.

Abstract (summary):
The aim of this thesis project, after a study of the reference papers on Speaker Clustering, is to reimplement the clustering algorithm, aiming at a better-performing implementation that demonstrates the effectiveness and flexibility of a rather new approach. Unlike the usual methods, this alternative approach to Speaker Clustering slightly redefines the notion of a cluster and is called Dominant Set. The notion of Dominant Set revolves around graph theory and the optimization problem of finding maximal subgraphs, aided by game theory. Such subgraphs are analogous to sets with high internal coherence and weak coherence with external elements. The data set used as input was provided by a research group and is known as TIMIT, with the feature vectors already extracted from audio recordings. Although TIMIT was designed for supervised methods and neural-network-based implementations, the goal is precisely to demonstrate the flexibility of dominant sets on feature vectors in recognizing speakers by classifying vocal utterances. Implementations in several programming languages demonstrate the potential of using Dominant Sets for Speaker Clustering, after an initial comparative test against other similar clustering techniques and using both the reduced and the full versions of the TIMIT data set.
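Dominant Set extraction of the kind described above is typically computed with replicator dynamics from evolutionary game theory; a minimal stdlib-only version (a generic sketch under stated assumptions, not the thesis implementation: `A` is a symmetric similarity matrix with a zero diagonal):

```python
def dominant_set(A, iters=200, tol=1e-6):
    """Replicator dynamics on a symmetric similarity matrix A.
    The support of the converged distribution approximates a dominant set:
    a maximally internally coherent group, usable as one cluster."""
    n = len(A)
    x = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        mean_payoff = sum(x[i] * Ax[i] for i in range(n))
        # replicator update: mass shifts toward above-average payoffs
        x = [x[i] * Ax[i] / mean_payoff for i in range(n)]
    return [i for i in range(n) if x[i] > tol]
```

On a toy similarity matrix with one tightly coupled triple and one weakly attached node, the dynamics keep only the triple. Points outside the extracted set would be "peeled off" and the process repeated to obtain further clusters.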
9

Rosell, Magnus. "Text Clustering Exploration : Swedish Text Representation and Clustering Results Unraveled". Doctoral thesis, KTH, Numerisk Analys och Datalogi, NADA, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10129.

Abstract (summary):
Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results. Text clustering, at least such as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than for instance English. We show that it is beneficial to use the lemma form of words rather than the word forms. Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation. The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part of speech information have shown any improvement over using the ordinary model. Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. 
External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is – text sets differ in difficulty to cluster and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition. In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words. Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration, and implemented it in a tool, called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words are changed, by for instance clustering, distributional patterns that indicate similarities between texts and words appear. We have used Infomat to explore a set of free text answers about occupation from a questionnaire given to over 40 000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation. A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we later could verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data that is restricted to a limited number of possible values.
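The vector space model used throughout reduces to a few lines: each text becomes a bag-of-words count vector, and two texts are similar if the cosine of the angle between their vectors is high (plain term counts here; a real system would weight terms, and for Swedish also lemmatize and split solid compounds first, as the abstract recommends):

```python
import math
from collections import Counter


def cosine_similarity(text_a, text_b):
    """Cosine similarity between the bag-of-words vectors of two texts."""
    va, vb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

Identical texts score 1, texts sharing no words score 0, and partial overlap falls strictly in between; a clustering then groups texts whose pairwise similarity is high.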
10

Rossi, Alfred Vincent III. "Temporal Clustering of Finite Metric Spaces and Spectral k-Clustering". The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1500033042082458.


Books on the topic "Clustering"

1

Xu, Rui. Clustering. Hoboken, N.J.: Wiley, 2009.

2

Owsiński, Jan W., Jarosław Stańczak, Karol Opara, Sławomir Zadrożny, and Janusz Kacprzyk. Reverse Clustering. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-69359-6.

3

Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. Genome Clustering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0.

4

Govaert, Gérard, and Mohamed Nadif. Co-Clustering. Hoboken, USA: John Wiley & Sons, Inc., 2013. http://dx.doi.org/10.1002/9781118649480.

5

Abraham, Ajith, and Amit Konar, eds. Metaheuristic clustering. Berlin: Springer, 2009.

6

Greenblatt, Alan. Economic Clustering. Thousand Oaks, CA: CQ Press, 2020. http://dx.doi.org/10.4135/cqresrre20200821.

7

Biehl, Michael, Barbara Hammer, Michel Verleysen, and Thomas Villmann, eds. Similarity-Based Clustering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-01805-3.

8

Pedrycz, Witold. Knowledge-Based Clustering. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2005. http://dx.doi.org/10.1002/0471708607.

9

Celebi, M. Emre, ed. Partitional Clustering Algorithms. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-09259-1.

10

Andreopoulou, Zacharoula, Christiana Koliouska, and Constantin Zopounidis. Multicriteria and Clustering. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-55565-2.


Book chapters on the topic "Clustering"

1

Govaert, Gérard, and Mohamed Nadif. "Cluster Analysis". In Co-Clustering, 1–53. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118649480.ch1.

2

Govaert, Gérard, and Mohamed Nadif. "Model-Based Co-Clustering". In Co-Clustering, 55–77. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118649480.ch2.

3

Govaert, Gérard, and Mohamed Nadif. "Co-Clustering of Binary and Categorical Data". In Co-Clustering, 79–105. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118649480.ch3.

4

Govaert, Gérard, and Mohamed Nadif. "Co-Clustering of Contingency Tables". In Co-Clustering, 107–50. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118649480.ch4.

5

Govaert, Gérard, and Mohamed Nadif. "Co-Clustering of Continuous Data". In Co-Clustering, 151–76. Hoboken, USA: John Wiley & Sons, Inc., 2014. http://dx.doi.org/10.1002/9781118649480.ch5.

6

Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. "Biological Background". In Genome Clustering, 1–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0_1.

7

Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. "Biological Classification". In Genome Clustering, 17–22. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0_2.

8

Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. "Mathematical Models for the Analysis of Natural-Language Documents". In Genome Clustering, 23–42. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0_3.

9

Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. "DNA Texts". In Genome Clustering, 43–60. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0_4.

10. Bolshoy, Alexander, Zeev (Vladimir) Volkovich, Valery Kirzhner, and Zeev Barzily. "N-Gram Spectra of the DNA Text". In Genome Clustering, 61–85. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-12952-0_5.

Conference papers on the topic "Clustering"

1. Yao, Shixin, Guoxian Yu, Jun Wang, Carlotta Domeniconi, and Xiangliang Zhang. "Multi-View Multiple Clustering". In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/572.
Abstract: Multiple clustering aims at exploring alternative clusterings to organize the data into meaningful groups from different perspectives. Existing multiple clustering algorithms are designed for single-view data. We assume that the individuality and commonality of multi-view data can be leveraged to generate high-quality and diverse clusterings. To this end, we propose a novel multi-view multiple clustering (MVMC) algorithm. MVMC first adapts multi-view self-representation learning to explore the individuality encoding matrices and the shared commonality matrix of multi-view data. It additionally reduces the redundancy (i.e., enhances the individuality) among the matrices using the Hilbert-Schmidt Independence Criterion (HSIC), and collects shared information by forcing the shared matrix to be smooth across all views. It then uses matrix factorization on the individual matrices, along with the shared matrix, to generate diverse, high-quality clusterings. We further extend multiple co-clustering to multi-view data and propose a solution called multi-view multiple co-clustering (MVMCC). Our empirical study shows that MVMC (MVMCC) can exploit multi-view data to generate multiple high-quality and diverse clusterings (co-clusterings), with superior performance to the state-of-the-art methods.
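The HSIC redundancy measure named in the abstract above can be illustrated with a minimal empirical estimator. This is an illustrative sketch using linear kernels, not the paper's code; all names here are ours:

```python
# Empirical HSIC with linear kernels: HSIC(X, Y) = tr(Kc Lc) / (n-1)^2,
# where Kc = H K H, Lc = H L H, K = X X^T, L = Y Y^T, and
# H = I - (1/n) 11^T centers the kernel matrices.
# Higher values mean more dependence (redundancy) between the two views.

def gram(X):
    """Linear-kernel Gram matrix K[i][j] = <x_i, x_j>."""
    return [[sum(a * b for a, b in zip(xi, xj)) for xj in X] for xi in X]

def center(K):
    """H K H: subtract row means and column means, add back the grand mean."""
    n = len(K)
    row = [sum(r) / n for r in K]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    g = sum(row) / n
    return [[K[i][j] - row[i] - col[j] + g for j in range(n)] for i in range(n)]

def hsic(X, Y):
    """Empirical HSIC estimate between two views of the same n objects."""
    n = len(X)
    Kc, Lc = center(gram(X)), center(gram(Y))
    return sum(Kc[i][j] * Lc[j][i] for i in range(n) for j in range(n)) / (n - 1) ** 2

# A view compared with itself is maximally redundant; a nearly
# uncorrelated second view scores much lower.
X = [[1.0], [2.0], [3.0], [4.0]]
Z = [[1.0], [-1.0], [1.0], [-1.0]]
print(hsic(X, X), hsic(X, Z))
```

In MVMC the criterion is used as a penalty to push the views' individuality matrices apart; here it simply scores dependence between two raw representations.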
2. Gusmão, Renê, Allan Juan Araújo, and Francisco Carvalho. "A distributed approach to cluster multi-view relational data". In Congresso Brasileiro de Inteligência Computacional. SBIC, 2024. http://dx.doi.org/10.21528/cbic2023-098.
Abstract: Clustering of multi-view data has become an important research field, and doing it efficiently is a challenging problem. This work investigated a distributed approach to cluster multi-view relational data. A PSO-based hybrid method was used to generate clusterings from all views independently. Five different objective functions were explored to induce diversity in the clusterings, since each function looks for different cluster structures. Five different consensus functions were compared to produce the final partition from the ensembles. Three multi-view real-world data sets were considered in this study. The Adjusted Rand Index, F-measure, and Silhouette validity indexes were used to assess the obtained clusterings. The distributed approach found better clusterings for all data sets under at least one consensus function.
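One of the simplest consensus functions of the kind compared in the abstract above is evidence accumulation over a co-association matrix. This is a generic illustrative sketch, not the authors' implementation:

```python
# Consensus by evidence accumulation: count how often each pair of objects
# is co-clustered across the base partitions, link pairs whose
# co-association exceeds a threshold, and read off the connected
# components as the consensus clusters.

def consensus(partitions, threshold=0.5):
    n = len(partitions[0])
    m = len(partitions)
    # co[i][j] = fraction of base partitions placing i and j together
    co = [[sum(p[i] == p[j] for p in partitions) / m for j in range(n)]
          for i in range(n)]
    # union-find over strongly co-associated pairs
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if co[i][j] > threshold:
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]

# Three noisy base clusterings of six objects; the consensus recovers the
# underlying two groups despite disagreements and label switching.
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1],
        [1, 1, 1, 0, 0, 0]]
print(consensus(base))
```

Note that the method is label-switching invariant because it only looks at whether two objects share a label within each base partition, never at the label values themselves.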
3. Van Craenendonck, Toon, Sebastijan Dumancic, and Hendrik Blockeel. "COBRA: A Fast and Simple Method for Active Clustering with Pairwise Constraints". In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/400.
Abstract: Clustering is inherently ill-posed: there often exist multiple valid clusterings of a single dataset, and without any additional information a clustering system has no way of knowing which clustering it should produce. This motivates the use of constraints in clustering, as they allow users to communicate their interests to the clustering system. Active constraint-based clustering algorithms select the most useful constraints to query, aiming to produce a good clustering using as few constraints as possible. We propose COBRA, an active method that first over-clusters the data by running K-means with a K that is intended to be too large, and subsequently merges the resulting small clusters into larger ones based on pairwise constraints. In its merging step, COBRA keeps the number of pairwise queries low by maximally exploiting constraint transitivity and entailment. We show experimentally that COBRA outperforms the state of the art in terms of clustering quality and runtime, without requiring the number of clusters in advance.
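The two-phase procedure described in the abstract above can be sketched as follows. This is an illustrative toy version, not the authors' code: a label oracle stands in for the human answering must-link / cannot-link queries, union-find supplies constraint transitivity, and cannot-link entailment is omitted for brevity:

```python
import random

def kmeans(points, k, iters=25):
    """Plain Lloyd k-means on 2-D points (the over-clustering step)."""
    # deterministic spread init: seeds taken at regular ranks along x
    pts = sorted(points)
    centers = [pts[i * len(pts) // k] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        for idx, p in enumerate(points):
            assign[idx] = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                                    + (p[1] - centers[c][1]) ** 2)
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = (sum(m[0] for m in members) / len(members),
                              sum(m[1] for m in members) / len(members))
    return assign, centers

def cobra_like(points, oracle, k_over):
    """Over-cluster with a deliberately large K, then merge super-instances
    by querying the oracle on representative points."""
    assign, centers = kmeans(points, k_over)
    reps = {}  # one representative (member closest to the centre) per super-instance
    for c in range(k_over):
        members = [points[i] for i in range(len(points)) if assign[i] == c]
        if members:
            reps[c] = min(members, key=lambda p: (p[0] - centers[c][0]) ** 2
                                               + (p[1] - centers[c][1]) ** 2)
    ids = sorted(reps)
    parent = {c: c for c in ids}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    queries = 0
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if find(a) == find(b):
                continue                      # entailed by transitivity: no query
            queries += 1
            if oracle(reps[a]) == oracle(reps[b]):
                parent[find(a)] = find(b)     # must-link answer: merge
    return len({find(c) for c in ids}), queries

# Two well-separated blobs, over-clustered with K = 6; the merge phase
# recovers the two true groups with fewer queries than the 15 naive pairs.
rng = random.Random(0)
pts = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(40)]
pts += [(rng.gauss(5, 0.3), rng.gauss(5, 0.3)) for _ in range(40)]
groups, queries = cobra_like(pts, lambda p: p[0] > 2.5, k_over=6)
print(groups, queries)
```

The real COBRA additionally propagates cannot-link constraints between merged groups, which saves further queries as the number of super-instances grows.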
4. Sublemontier, Jacques-Henri. "Unsupervised collaborative boosting of clustering: An unifying framework for multi-view clustering, multiple consensus clusterings and alternative clustering". In 2013 International Joint Conference on Neural Networks (IJCNN 2013 - Dallas). IEEE, 2013. http://dx.doi.org/10.1109/ijcnn.2013.6706911.
5. Seidel, Thomas E., and Michael R. Stark. "Learning opportunities through the use of cluster tools". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56617.
6. Lally, Kevin. "Equipment improvement methodology". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56618.
7. Seidel, J. P., W. Wachter, William M. Triggs, and Robert P. Hall. "Integrated deposition of TiN barrier layers in cluster tools". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56619.
8. Hauser, John R., and Syed A. Rizvi. "Cluster tool technology". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56620.
9. Wong, Fred, and George E. Zilberman. "Open architecture cluster tool: communication and user interface integration". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56621.
10. Boitnott, Charles A., and David R. Craven. "Single-wafer high-pressure oxidation". In Process Module Metrology, Control and Clustering, edited by Cecil J. Davis, Irving P. Herman, and Terry R. Turner. SPIE, 1992. http://dx.doi.org/10.1117/12.56622.

Organization reports on the topic "Clustering"

1. Zhao, Ying, and George Karypis. Soft Clustering Criterion Functions for Partitional Document Clustering. Fort Belvoir, VA: Defense Technical Information Center, May 2004. http://dx.doi.org/10.21236/ada439425.
2. Graf, N. Clustering Algorithm Studies. Office of Scientific and Technical Information (OSTI), October 2004. http://dx.doi.org/10.2172/839953.
3. Chen, Yudong, Sujay Sanghavi, and Huan Xu. Improved graph clustering. Fort Belvoir, VA: Defense Technical Information Center, January 2013. http://dx.doi.org/10.21236/ada596381.
4. Karypis, George. CLUTO - A Clustering Toolkit. Fort Belvoir, VA: Defense Technical Information Center, April 2002. http://dx.doi.org/10.21236/ada439508.
5. Winlaw, Manda, Hans De Sterck, and Geoffrey Sanders. A Clustering Graph Generator. Office of Scientific and Technical Information (OSTI), October 2015. http://dx.doi.org/10.2172/1239229.
6. Castiglia, Emma, Giani Pezzullo, and Sarah Demers. Mu2e Calorimeter Clustering Studies. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1462082.
7. Bresson, Xavier, David Uminsky, Thomas Laurent, and James H. Von Brecht. Multiclass Total Variation Clustering. Fort Belvoir, VA: Defense Technical Information Center, December 2014. http://dx.doi.org/10.21236/ada612811.
8. Castiglia, Emma. Mu2e Calorimeter Clustering Studies. Office of Scientific and Technical Information (OSTI), June 2018. http://dx.doi.org/10.2172/1579232.
9. Bell, J. W. Temporal clustering of paleoearthquakes. Office of Scientific and Technical Information (OSTI), December 1994. http://dx.doi.org/10.2172/240928.
10. Kung, H. T. Clustering Theory and Applications. Fort Belvoir, VA: Defense Technical Information Center, April 2012. http://dx.doi.org/10.21236/ada560226.