Journal articles on the topic 'Density clustering'

Consult the top 50 journal articles for your research on the topic 'Density clustering.'


1

Hess, Sibylle, Wouter Duivesteijn, Philipp Honysz, and Katharina Morik. "The SpectACl of Nonconvex Clustering: A Spectral Approach to Density-Based Clustering." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3788–95. http://dx.doi.org/10.1609/aaai.v33i01.33013788.

Abstract:
When it comes to clustering nonconvex shapes, two paradigms are used to find the most suitable clustering: minimum cut and maximum density. The most popular algorithms incorporating these paradigms are Spectral Clustering and DBSCAN. Both paradigms have their pros and cons: while minimum cut clusterings are sensitive to noise, density-based clusterings have trouble handling clusters of varying density. In this paper, we propose SpectACl, a method combining the advantages of both approaches while solving the two mentioned drawbacks. Our method is easy to implement, like Spectral Clustering, and is theoretically founded to optimize a proposed density criterion of clusterings. Through experiments on synthetic and real-world data, we demonstrate that our approach provides robust and reliable clusterings.
2

Rinaldo, Alessandro, and Larry Wasserman. "Generalized density clustering." Annals of Statistics 38, no. 5 (October 2010): 2678–722. http://dx.doi.org/10.1214/10-aos797.

3

Kriegel, Hans‐Peter, Peer Kröger, Jörg Sander, and Arthur Zimek. "Density‐based clustering." WIREs Data Mining and Knowledge Discovery 1, no. 3 (April 5, 2011): 231–40. http://dx.doi.org/10.1002/widm.30.

4

Feng, Boqing, Mohan Liu, and Jiuqiang Jin. "Density Space Clustering Algorithm Based on Users Behaviors." 電腦學刊 33, no. 2 (April 2022): 201–9. http://dx.doi.org/10.53106/199115992022043302018.

Abstract:
At present, insider threat detection requires a series of complex projects and has certain limitations in practical applications. To reduce model complexity, most studies ignore the timing of user behavior and fail to identify internal attacks that last for a period of time. In addition, companies usually categorize the behavior data generated by all users and store them in different databases; how to collaboratively process large-scale heterogeneous log files and extract characteristic data that accurately reflects user behavior is a difficult point in current research. To optimize the parameter selection of the DBSCAN algorithm, this paper proposes the Psychometric Data & Attack Threat Density-Based Spatial Clustering of Applications with Noise algorithm (PD&AT-DBSCAN), which improves the accuracy of clustering results. Simulation results show that this algorithm outperforms the traditional DBSCAN algorithm in terms of Rand index and normalized mutual information.
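For reference, the plain DBSCAN baseline whose parameters (eps, min_pts) the proposed PD&AT-DBSCAN sets out to tune can be sketched minimally as follows. This is a generic illustrative implementation, not the algorithm proposed in the paper.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: -1 marks noise, other labels are cluster ids.
    eps and min_pts are the two parameters whose selection the paper optimizes."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if j != i and math.dist(points[i], q) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise; may later join a cluster
            continue
        labels[i] = cid
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid     # border point reached from a core point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:  # j is a core point: keep expanding
                seeds.extend(nb)
        cid += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(dbscan(pts, eps=1.5, min_pts=2))  # → [0, 0, 0, 0, -1]
```

Poorly chosen eps or min_pts would either merge the two groups or mark everything as noise, which is the sensitivity the abstract refers to.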
6

Kaur, Prabhjot, I. M. S. Lamba, and Anjana Gosain. "DOFCM: A Robust Clustering Technique Based upon Density." International Journal of Engineering and Technology 3, no. 3 (2011): 297–303. http://dx.doi.org/10.7763/ijet.2011.v3.241.

7

Hua, Jia-Lin, Jian Yu, and Miin-Shen Yang. "Correlative Density-Based Clustering." Journal of Computational and Theoretical Nanoscience 13, no. 10 (October 1, 2016): 6935–43. http://dx.doi.org/10.1166/jctn.2016.5650.

Abstract:
Mountains, which are heaped up from the densities of a data set, intuitively reflect the structure of the data points, and mountain clustering methods are useful for grouping them. However, previous mountain-based clustering suffers from the choice of the parameters used to compute the density. In this paper, we adopt correlation analysis to determine the density and propose a new clustering algorithm, called Correlative Density-based Clustering (CDC). The new algorithm computes the density in a modified way and determines the parameters from the inherent structure of the data points. Experiments on artificial and real datasets demonstrate the simplicity and effectiveness of the proposed approach.
8

Li, Zejian, and Yongchuan Tang. "Comparative density peaks clustering." Expert Systems with Applications 95 (April 2018): 236–47. http://dx.doi.org/10.1016/j.eswa.2017.11.020.

9

Mauceri, Christian, and Diem Ho. "Clustering by kernel density." Computational Economics 29, no. 2 (March 1, 2007): 199–212. http://dx.doi.org/10.1007/s10614-006-9078-7.

10

Lin, Jun-Lin. "Generalizing Local Density for Density-Based Clustering." Symmetry 13, no. 2 (January 24, 2021): 185. http://dx.doi.org/10.3390/sym13020185.

Abstract:
Discovering densely-populated regions in a dataset of data points is an essential task for density-based clustering. To do so, it is often necessary to calculate each data point’s local density in the dataset. Various definitions for the local density have been proposed in the literature. These definitions can be divided into two categories: Radius-based and k Nearest Neighbors-based. In this study, we find the commonality between these two types of definitions and propose a canonical form for the local density. With the canonical form, the pros and cons of the existing definitions can be better explored, and new definitions for the local density can be derived and investigated.
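As a concrete illustration of the two families of definitions surveyed above, a radius-based and a kNN-based local density can be sketched as follows. This is our own minimal sketch; the function names, the exact normalization of each formula, and the example data are illustrative, and the paper's canonical form is not reproduced here.

```python
import math

def radius_density(points, i, eps):
    """Radius-based local density: how many other points lie within eps of points[i]."""
    return sum(1 for j, q in enumerate(points)
               if j != i and math.dist(points[i], q) <= eps)

def knn_density(points, i, k):
    """kNN-based local density: k divided by the summed distance to the
    k nearest neighbors, so tighter neighborhoods yield higher density."""
    dists = sorted(math.dist(points[i], q)
                   for j, q in enumerate(points) if j != i)
    return k / sum(dists[:k])

pts = [(0, 0), (0, 1), (1, 0), (5, 5)]
print(radius_density(pts, 0, eps=1.5))  # → 2
```

The radius-based variant needs a scale parameter eps, while the kNN-based variant needs a neighbor count k, which is exactly the trade-off the canonical form in the paper is meant to expose.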
11

Zhang, Enli, and Lin Gao. "A Microblock Density-Based Similarity Measure for Graph Clustering." Journal of Computers 10, no. 2 (2015): 90–100. http://dx.doi.org/10.17706/jcp.10.2.90-100.

12

Zhang, Zhiyong, Qingsheng Zhu, Fan Zhu, Junnan Li, Dongdong Cheng, Yi Liu, and Jiangmei Luo. "Density decay graph-based density peak clustering." Knowledge-Based Systems 224 (July 2021): 107075. http://dx.doi.org/10.1016/j.knosys.2021.107075.

13

Hou, Jian, and Aihua Zhang. "Enhancing Density Peak Clustering via Density Normalization." IEEE Transactions on Industrial Informatics 16, no. 4 (April 2020): 2477–85. http://dx.doi.org/10.1109/tii.2019.2929743.

14

Li, Chunzhong, and Yunong Zhang. "Density Peak Clustering Based on Relative Density Optimization." Mathematical Problems in Engineering 2020 (June 11, 2020): 1–8. http://dx.doi.org/10.1155/2020/2816102.

Abstract:
Among numerous clustering algorithms, clustering by fast search and find of density peaks (DPC) is favoured because it is less affected by the shapes and density structures of the data set. However, DPC still shows limitations when clustering data sets with heterogeneous clusters and easily makes mistakes in the assignment of the remaining points. A new algorithm, density peak clustering based on relative density optimization (RDO-DPC), is proposed to settle these problems and obtain better results. Using the neighborhood information of sample points, the proposed algorithm defines the relative density of the sample data and recognizes density peaks of the nonhomogeneous distribution as cluster centers. A new assignment strategy is proposed to solve the problem of assigning the abundant remaining points. Experiments on synthetic and real data sets show the good performance of the proposed algorithm.
15

Yin, Lifeng, Yingfeng Wang, Huayue Chen, and Wu Deng. "An Improved Density Peak Clustering Algorithm for Multi-Density Data." Sensors 22, no. 22 (November 15, 2022): 8814. http://dx.doi.org/10.3390/s22228814.

Abstract:
Density peak clustering (DPC) is a recent classic density-based clustering algorithm that can find cluster centers directly, without iteration. The algorithm depends on a single parameter, so parameter selection is particularly important. However, for multi-density data, when one parameter cannot fit all the data, clustering often cannot achieve good results. Moreover, the subjective selection of cluster centers through decision diagrams is often not very convincing and introduces errors. In view of these problems, and in order to cluster multi-density data better, this paper improves the density peak clustering algorithm. For the selection of the parameter dc, the K-nearest-neighbor idea is used to sort each data point's neighbor distances, a line graph of the K-nearest-neighbor distance is drawn, and the global bifurcation point is found to divide data of different densities. For the selection of cluster centers, the local density and distance of each data point in each data division are found, a γ graph is drawn, the average γ height difference is calculated, and through two screenings the largest discontinuity point is found, automatically determining the cluster centers and their number. The divided datasets are clustered by the DPC algorithm, and the clustering results are then refined and integrated using cluster-fusion rules. Finally, a variety of experiments on artificial simulated datasets and UCI real datasets demonstrate the superiority of the F-DPC algorithm in terms of clustering effect, clustering quality, and number of samples.
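The sorted K-nearest-neighbor distance curve used above to separate data of different densities can be sketched roughly like this. This is an illustrative sketch only; the bifurcation-point detection and the rest of F-DPC are not reproduced.

```python
import math

def kth_neighbor_distances(points, k):
    """For every point, the distance to its k-th nearest neighbor, returned
    sorted ascending. Sharp jumps in this curve suggest boundaries between
    regions of different density."""
    curve = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        curve.append(dists[k - 1])
    return sorted(curve)

# A tight cluster near the origin plus two isolated points:
pts = [(0, 0), (0, 0.1), (0.1, 0), (0.1, 0.1), (5, 5), (9, 9)]
print(kth_neighbor_distances(pts, k=1))  # small values, then a large jump
```

Plotted as a line graph, the four small values form a flat region and the jump marks where the sparse points begin, which is the kind of bifurcation point the paper looks for.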
16

Wang, Shuliang, Qi Li, Chuanfeng Zhao, Xingquan Zhu, Hanning Yuan, and Tianru Dai. "Extreme clustering – A clustering method via density extreme points." Information Sciences 542 (January 2021): 24–39. http://dx.doi.org/10.1016/j.ins.2020.06.069.

17

Wang, Yizhang, Wei Pang, and You Zhou. "Density propagation based adaptive multi-density clustering algorithm." PLOS ONE 13, no. 7 (July 18, 2018): e0198948. http://dx.doi.org/10.1371/journal.pone.0198948.

18

Hou, Jian, Aihua Zhang, and Naiming Qi. "Density peak clustering based on relative density relationship." Pattern Recognition 108 (December 2020): 107554. http://dx.doi.org/10.1016/j.patcog.2020.107554.

19

Gan, Cai Zhao, and Hong Bin Huang. "Relative Density Clustering Algorithm Based on Density Fluctuation." Journal of Physics: Conference Series 1087 (September 2018): 022010. http://dx.doi.org/10.1088/1742-6596/1087/2/022010.

20

Pradeep, Lanka, and A. M. Sowjanya. "Multi-Density based Incremental Clustering." International Journal of Computer Applications 116, no. 17 (April 22, 2015): 6–9. http://dx.doi.org/10.5120/20426-2742.

21

Wu, Zhihao, Tingting Song, and Yanbing Zhang. "Quantum Density Peak Clustering Algorithm." Entropy 24, no. 2 (February 3, 2022): 237. http://dx.doi.org/10.3390/e24020237.

Abstract:
A widely used clustering algorithm, density peak clustering (DPC), assigns different attribute values to data points through the distances between them, and then determines the number and range of clusters from the attribute values. However, DPC is inefficient when dealing with scenes with a large amount of data, and the range of parameters is not easy to determine. To fix these problems, we propose a quantum DPC (QDPC) algorithm based on a quantum DistCalc circuit and a Grover circuit. The time complexity is reduced to O(log(N²) + 6N + √N), whereas that of the traditional algorithm is O(N²). The space complexity is also decreased from O(N·⌈log N⌉) to O(⌈log N⌉).
22

Lin, Jun-Lin. "Accelerating Density Peak Clustering Algorithm." Symmetry 11, no. 7 (July 2, 2019): 859. http://dx.doi.org/10.3390/sym11070859.

Abstract:
The Density Peak Clustering (DPC) algorithm is a new density-based clustering method. It spends most of its execution time on calculating the local density and the separation distance for each data point in a dataset. The purpose of this study is to accelerate its computation. On average, the DPC algorithm scans half of the dataset to calculate the separation distance of each data point. We propose an approach to calculate the separation distance of a data point by scanning only the neighbors of the data point. Additionally, the purpose of the separation distance is to assist in choosing the density peaks, which are the data points with both high local density and high separation distance. We propose an approach to identify non-peak data points at an early stage to avoid calculating their separation distances. Our experimental results show that most of the data points in a dataset can benefit from the proposed approaches to accelerate the DPC algorithm.
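The baseline separation-distance computation that the paper sets out to accelerate can be sketched as follows. This is a naive illustrative version: the densities `rho` are assumed to be precomputed, and the paper's neighbor-only scanning and early non-peak pruning are not reproduced.

```python
import math

def separation_distances(points, rho):
    """Baseline computation of each point's separation distance: processing
    points in decreasing-density order, delta[i] is the distance to the
    nearest already-processed (higher-density) point, so on average about
    half of the dataset is scanned per point. The densest point instead
    gets its distance to the farthest point."""
    order = sorted(range(len(points)), key=lambda i: rho[i], reverse=True)
    delta = [0.0] * len(points)
    for pos, i in enumerate(order):
        if pos == 0:
            delta[i] = max(math.dist(points[i], q) for q in points)
        else:
            delta[i] = min(math.dist(points[i], points[j]) for j in order[:pos])
    return delta

pts = [(0, 0), (0, 1), (0, 2), (5, 5)]
rho = [3, 2, 1, 0]   # local densities, assumed precomputed elsewhere
print([round(d, 3) for d in separation_distances(pts, rho)])  # → [7.071, 1.0, 1.0, 5.831]
```

Points with both high rho and high delta are the density-peak candidates; the paper's contribution is avoiding the inner scan for points that cannot be peaks.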
23

Ren, Yazhou, Ni Wang, Mingxia Li, and Zenglin Xu. "Deep density-based image clustering." Knowledge-Based Systems 197 (June 2020): 105841. http://dx.doi.org/10.1016/j.knosys.2020.105841.

24

Azzalini, Adelchi, and Nicola Torelli. "Clustering via nonparametric density estimation." Statistics and Computing 17, no. 1 (February 3, 2007): 71–80. http://dx.doi.org/10.1007/s11222-006-9010-y.

25

Ruiz, Carlos, Myra Spiliopoulou, and Ernestina Menasalvas. "Density-based semi-supervised clustering." Data Mining and Knowledge Discovery 21, no. 3 (November 21, 2009): 345–70. http://dx.doi.org/10.1007/s10618-009-0157-y.

26

Mai, Son T., Ira Assent, Jon Jacobsen, and Martin Storgaard Dieu. "Anytime parallel density-based clustering." Data Mining and Knowledge Discovery 32, no. 4 (April 10, 2018): 1121–76. http://dx.doi.org/10.1007/s10618-018-0562-1.

27

Wang, Min, Fan Min, Zhi-Heng Zhang, and Yan-Xue Wu. "Active learning through density clustering." Expert Systems with Applications 85 (November 2017): 305–17. http://dx.doi.org/10.1016/j.eswa.2017.05.046.

28

Steinwart, Ingo. "Fully adaptive density-based clustering." Annals of Statistics 43, no. 5 (October 2015): 2132–67. http://dx.doi.org/10.1214/15-aos1331.

29

McInnes, Leland, John Healy, and Steve Astels. "hdbscan: Hierarchical density based clustering." Journal of Open Source Software 2, no. 11 (March 21, 2017): 205. http://dx.doi.org/10.21105/joss.00205.

30

Ahn, Sung Mahn, and Sung Wook Baik. "A Density-based Clustering Method." Communications for Statistical Applications and Methods 9, no. 3 (December 1, 2002): 715–23. http://dx.doi.org/10.5351/ckss.2002.9.3.715.

31

Lasek, Piotr, and Jarek Gryz. "Density-based clustering with constraints." Computer Science and Information Systems 16, no. 2 (2019): 469–89. http://dx.doi.org/10.2298/csis180601007l.

Abstract:
In this paper we present our ic-NBC and ic-DBSCAN algorithms for data clustering with constraints. The algorithms are based on the density-based clustering algorithms NBC and DBSCAN but allow users to incorporate background knowledge into the clustering process by means of instance constraints. Knowledge about anticipated groups can be supplied by specifying so-called must-link and cannot-link relationships between objects or points, and these relationships are then incorporated into the clustering process. In the proposed algorithms this is achieved by properly merging resulting clusters and by introducing a new notion of deferred points, which are temporarily excluded from clustering and assigned to clusters based on their involvement in cannot-link relationships. To examine the algorithms, we carried out a number of experiments on benchmark data sets, testing efficiency and the quality of the results, and also measured the efficiency of the algorithms against their original versions. The experiments prove that the introduction of instance constraints improves the quality of both algorithms, while efficiency is only marginally reduced by the extra computation related to the introduced constraints.
32

Brecheisen, Stefan, Hans-Peter Kriegel, and Martin Pfeifle. "Multi-step density-based clustering." Knowledge and Information Systems 9, no. 3 (September 9, 2005): 284–308. http://dx.doi.org/10.1007/s10115-005-0217-6.

33

Liu, Wei, Jiaxin Wang, Xiaopan Su, and Yimin Mao. "MR-DBIFOA: a parallel Density-based Clustering Algorithm by Using Improve Fruit Fly Optimization." 電腦學刊 33, no. 1 (February 2022): 101–14. http://dx.doi.org/10.53106/199115992022023301010.

Abstract:
Clustering is an important technique for data analysis and knowledge discovery. In the context of big data, density-based clustering algorithms face three challenging problems: unreasonable division of data gridding, poor parameter-optimization ability, and low parallelization efficiency. In this study, a density-based clustering algorithm using improved fruit fly optimization based on MapReduce (MR-DBIFOA) is proposed to tackle these three problems. Firstly, based on a KD-Tree, a division strategy (KDG) is proposed to divide the grid cells adaptively. Secondly, an improved fruit fly optimization algorithm (IFOA), which uses a step strategy based on knowledge learning (KLSS) and a clustering criterion function (CFF), is designed. Based on IFOA, the optimal parameters of local clustering are selected dynamically, which improves the effect of local clustering. Meanwhile, to improve parallel efficiency, the local clusters are computed in parallel. Finally, based on a QR-Tree and MapReduce, a cluster merging algorithm (MR-QRMEC) is proposed to obtain the clustering result more quickly, improving the efficiency with which the core clusters of density-based clustering are merged. The experimental results show that the MR-DBIFOA algorithm achieves better clustering results and better parallelization on big data.
34

Vijendra, Singh. "Efficient Clustering for High Dimensional Data: Subspace Based Clustering and Density Based Clustering." Information Technology Journal 10, no. 6 (May 15, 2011): 1092–105. http://dx.doi.org/10.3923/itj.2011.1092.1105.

35

Liu, Yongli, Congcong Zhao, and Hao Chao. "Density Peak Clustering Based on Relative Density under Progressive Allocation Strategy." Mathematical and Computational Applications 27, no. 5 (October 6, 2022): 84. http://dx.doi.org/10.3390/mca27050084.

Abstract:
In traditional density peak clustering, when the density distribution of samples in a dataset is uneven, the density peak points are often concentrated in regions where samples are densely distributed, which easily harms clustering accuracy. This paper proposes a density peak clustering algorithm based on relative density under a progressive allocation strategy. The algorithm uses the K-nearest-neighbor method to calculate the local density of sample points. In addition, to avoid the domino effect during sample allocation, a new similarity calculation method is defined, and a progressive near-to-far allocation strategy is used for the remaining points. To evaluate the effectiveness of this algorithm, comparative experiments with five algorithms were carried out on classical artificial and real datasets. Experimental results show that the proposed algorithm achieves higher clustering accuracy on datasets with uneven density distribution.
36

Ma, Yong. "Clustering Algorithm Based on Density of Data." E3S Web of Conferences 275 (2021): 03075. http://dx.doi.org/10.1051/e3sconf/202127503075.

Abstract:
The k-means clustering algorithm is very widely applied. This paper presents an out_in clustering algorithm based on density. The algorithm combines distance with data density to adapt to the data distribution and can cluster the data effectively. Out_in clustering based on density reduces distortion by moving points out and in. Simulation results show that the out_in clustering algorithm is more effective than the k-means clustering algorithm.
37

Lin, Jun-Lin, Jen-Chieh Kuo, and Hsing-Wang Chuang. "Improving Density Peak Clustering by Automatic Peak Selection and Single Linkage Clustering." Symmetry 12, no. 7 (July 14, 2020): 1168. http://dx.doi.org/10.3390/sym12071168.

Abstract:
Density peak clustering (DPC) is a density-based clustering method that has attracted much attention in the academic community. DPC works by first searching density peaks in the dataset, and then assigning each data point to the same cluster as its nearest higher-density point. One problem with DPC is the determination of the density peaks, where poor selection of the density peaks could yield poor clustering results. Another problem with DPC is its cluster assignment strategy, which often makes incorrect cluster assignments for data points that are far from their nearest higher-density points. This study modifies DPC and proposes a new clustering algorithm to resolve the above problems. The proposed algorithm uses the radius of the neighborhood to automatically select a set of the likely density peaks, which are far from their nearest higher-density points. Using the potential density peaks as the density peaks, it then applies DPC to yield the preliminary clustering results. Finally, it uses single-linkage clustering on the preliminary clustering results to reduce the number of clusters, if necessary. The proposed algorithm avoids the cluster assignment problem in DPC because the cluster assignments for the potential density peaks are based on single-linkage clustering, not based on DPC. Our performance study shows that the proposed algorithm outperforms DPC for datasets with irregularly shaped clusters.
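The standard peak-then-assign procedure that DPC follows, as described above, can be sketched end to end like this. It is a naive illustrative version with cutoff-kernel densities and peaks picked by the largest ρ·δ products; it does not include the paper's automatic peak selection or single-linkage post-processing.

```python
import math

def dpc_assign(points, dc, n_clusters):
    """Naive DPC sketch: compute cutoff-kernel densities rho and separation
    distances delta, pick the n_clusters points with the largest rho*delta
    as peaks, then give every point the label of its nearest higher-density
    neighbor, processing points in order of decreasing density."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc) for i in range(n)]
    nhd = [None] * n                       # nearest higher-density neighbor
    delta = [0.0] * n
    for i in range(n):
        # ties in rho are broken by index so exactly one global peak exists
        higher = [j for j in range(n) if (rho[j], -j) > (rho[i], -i)]
        if higher:
            nhd[i] = min(higher, key=lambda j: d[i][j])
            delta[i] = d[i][nhd[i]]
        else:
            delta[i] = max(d[i])           # the global density peak
    peaks = sorted(range(n), key=lambda i: rho[i] * delta[i], reverse=True)[:n_clusters]
    labels = [None] * n
    for i in sorted(range(n), key=lambda i: rho[i], reverse=True):
        labels[i] = peaks.index(i) if i in peaks else labels[nhd[i]]
    return labels

# Two small clusters of four points each:
pts = [(0, 0), (0, 1), (1, 0), (0.5, 0.5),
       (10, 10), (10, 11), (11, 10), (10.5, 10.5)]
print(dpc_assign(pts, dc=1.2, n_clusters=2))  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

The last loop is the assignment rule the abstract criticizes: one wrong label high up the density ordering propagates to every point that chains to it.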
38

Yuan, Hanning, Shuliang Wang, Jing Geng, Yang Yu, and Ming Zhong. "Robust Clustering with Distance and Density." International Journal of Data Warehousing and Mining 13, no. 2 (April 2017): 63–74. http://dx.doi.org/10.4018/ijdwm.2017040104.

Abstract:
Clustering is fundamental for using big data. However, AP (affinity propagation) is not good at non-convex datasets, and the input parameter has a marked impact on DBSCAN (density-based spatial clustering of applications with noise). Moreover, new characteristics such as volume, variety, velocity, and veracity make it difficult to group big data. To address these issues, a parameter-free AP (PFAP) is proposed to group big data on the basis of both distance and density. Firstly, it obtains a group of normalized densities from the AP clustering, whose estimated parameters are monotonic. Then, the densities are used for density clustering multiple times. Finally, the multiple-density clustering results undergo a two-stage amalgamation to achieve the final clustering result. Experimental results on several benchmark datasets show that PFAP achieves better clustering quality than DBSCAN, AP, and APSCAN, and that it also outperforms APSCAN and FSDP.
39

Jahirabadkar, Sunita, and Parag Kulkarni. "Clustering for High Dimensional Data: Density based Subspace Clustering Algorithms." International Journal of Computer Applications 63, no. 20 (February 15, 2013): 29–35. http://dx.doi.org/10.5120/10584-5732.

40

Laohakiat, Sirisup, and Vera Sa-ing. "An incremental density-based clustering framework using fuzzy local clustering." Information Sciences 547 (February 2021): 404–26. http://dx.doi.org/10.1016/j.ins.2020.08.052.

41

Kaur, Supreet. "Quality Prediction of Object Oriented Software Using Density Based Clustering Approach." International Journal of Engineering and Technology 3, no. 4 (2011): 440–45. http://dx.doi.org/10.7763/ijet.2011.v3.267.

42

Lu, Shuyi, Yuanjie Zheng, Rong Luo, Weikuan Jia, Jian Lian, and Chengjiang Li. "Density Peak Clustering Algorithm Considering Topological Features." Electronics 9, no. 3 (March 8, 2020): 459. http://dx.doi.org/10.3390/electronics9030459.

Abstract:
Clustering algorithms play an important role in data mining and image processing, and improvements in algorithmic precision and methods directly affect the direction and progress of subsequent research. At present, clustering algorithms are mainly divided into hierarchical, density-based, grid-based, and model-based types. This paper mainly studies the Clustering by Fast Search and Find of Density Peaks (CFSFDP) algorithm, a new density-based clustering method with no iterative process, few parameters, and high precision. However, we found that this clustering algorithm does not consider the original topological characteristics of the data. We also found that clustering data are similar to the social-network nodes mentioned in DeepWalk, which satisfy a power-law distribution. In this study, we tried to incorporate the topological characteristics of the graph into the clustering algorithm. Building on previous studies, we propose a clustering algorithm that adds the topological characteristics of the original data to the CFSFDP algorithm. Our experimental results show that the clustering algorithm with topological features significantly improves the clustering effect, proving that the addition of topological features is effective and feasible.
43

Zou, Yujuan, and Zhijian Wang. "ConDPC: Data Connectivity-Based Density Peak Clustering." Applied Sciences 12, no. 24 (December 13, 2022): 12812. http://dx.doi.org/10.3390/app122412812.

Abstract:
As a relatively novel density-based clustering algorithm, Density peak clustering (DPC) has been widely studied in recent years. DPC sorts all points in descending order of local density and finds neighbors for each point in turn to assign all points to the appropriate clusters. The algorithm is simple and effective but has some limitations in applicable scenarios. If the density difference between clusters is large or the data distribution is in a nested structure, the clustering effect of this algorithm is poor. This study incorporates the idea of connectivity into the original algorithm and proposes an improved density peak clustering algorithm ConDPC. ConDPC modifies the strategy of obtaining clustering center points and assigning neighbors and improves the clustering accuracy of the original density peak clustering algorithm. In this study, clustering comparison experiments were conducted on synthetic data sets and real-world data sets. The compared algorithms include original DPC, DBSCAN, K-means and two improved algorithms over DPC. The comparison results prove the effectiveness of ConDPC.
44

Mohod, Prerna, and Vandana P. Janeja. "Density-Based Spatial Anomalous Window Discovery." International Journal of Data Warehousing and Mining 18, no. 1 (January 2022): 1–23. http://dx.doi.org/10.4018/ijdwm.299015.

Abstract:
The focus of this paper is to identify anomalous spatial windows using clustering-based methods. Spatial anomalous windows are contiguous groupings of spatial nodes that are unusual with respect to the rest of the data. Many scan-statistics-based approaches have been proposed for identifying spatial anomalous windows, and clustering techniques have been proposed to identify similarly behaving groups of points. There are parallels between the two types of approaches, but they have not been used interchangeably. The focus of our work is thus to bridge this gap and identify anomalous spatial windows using clustering-based methods. Specifically, we use a circular scan-statistic-based approach and DBSCAN (density-based spatial clustering of applications with noise) to bridge the gap between clustering and scan-statistics-based approaches. We present experimental results on US crime data. Our results show that our approach is effective in identifying spatial anomalous windows, performs equal to or better than existing techniques, and does better than pure clustering.
45

Leng, Yong Lin, Hua Shen, and Fu Yu Lu. "Outlier Detection Clustering Algorithm Based on Density." Applied Mechanics and Materials 713-715 (January 2015): 1808–12. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.1808.

Abstract:
K-means is a classic clustering-analysis algorithm widely applied in various data mining fields. The traditional K-means algorithm selects the initial centroids randomly, so the clustering result is affected by noise points and is not stable. For this problem, this paper proposes a k-means algorithm based on density outlier detection. The algorithm first detects outliers with a density model and avoids selecting outliers as initial cluster centers. After clustering the non-outliers, the algorithm assigns each outlier to the corresponding cluster according to its distance to each centroid. This effectively reduces the influence of outliers on K-means and improves the accuracy of the clustering result. The experimental results demonstrate that the algorithm can effectively improve the accuracy and stability of clustering.
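The pipeline described above (density-based outlier detection, k-means on the inliers only, then assigning outliers by centroid distance) can be sketched as follows. All function names, the fixed first-k centroid initialization, and the example data are our illustrative assumptions, not the paper's method.

```python
import math
from statistics import mean

def density_outliers(points, eps, min_pts):
    """True for points with fewer than min_pts other points within eps."""
    return [sum(math.dist(p, q) <= eps for q in points) - 1 < min_pts
            for p in points]

def kmeans(points, k, iters=20):
    """Plain k-means; the first k points seed the centroids (illustrative only)."""
    cents = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, cents[c]))].append(p)
        cents = [[mean(x) for x in zip(*g)] if g else c
                 for g, c in zip(groups, cents)]
    return cents

def outlier_aware_kmeans(points, k, eps, min_pts):
    """Cluster the non-outliers only, then attach every point
    (outliers included) to its nearest resulting centroid."""
    out = density_outliers(points, eps, min_pts)
    inliers = [p for p, o in zip(points, out) if not o]
    cents = kmeans(inliers, k)
    return [min(range(k), key=lambda c: math.dist(p, cents[c])) for p in points]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(outlier_aware_kmeans(pts, k=2, eps=2, min_pts=1))  # → [0, 0, 0, 1, 1, 1, 1]
```

Because (50, 50) is flagged as an outlier before clustering, it cannot be chosen as an initial centroid or drag a centroid toward it, which is the stability gain the abstract describes.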
46

Peter, J. Hencil, and A. Antonysamy. "An Optimised Density Based Clustering Algorithm." International Journal of Computer Applications 6, no. 9 (September 10, 2010): 20–25. http://dx.doi.org/10.5120/1102-1445.

Full text
47

Pan, Jiacai, Qingshan Jiang, and Zheping Shao. "Trajectory Clustering by Sampling and Density." Marine Technology Society Journal 48, no. 6 (November 1, 2014): 74–85. http://dx.doi.org/10.4031/mtsj.48.6.8.

Full text
Abstract:
The trajectory data of moving objects contain huge amounts of information about traffic flow, and it is important to extract valuable knowledge from this particular kind of data. Trajectory clustering is one of the most widely used approaches for this extraction. However, current trajectory-clustering practice typically groups similar sub-trajectories partitioned from the full trajectories; such methods lose important information about each trajectory as a whole. To deal with this problem, this paper introduces a new trajectory-clustering algorithm based on sampling and density, which groups similar traffic movement tracks (car, ship, airplane, etc.) for further analysis of the characteristics of traffic flow. In particular, the paper proposes a novel technique for measuring distances between trajectories using point sampling. This distance measure does not divide the trajectory and thus preserves the integrated knowledge of the trajectories. The trajectory-clustering approach is a new adaptation of a density-based clustering algorithm to the trajectories of moving objects. The paper then adopts entropy theory as the heuristic for selecting the algorithm's parameter values and the sum-of-squared-error method for measuring clustering quality. Experiments on real ship trajectory data show that the algorithm outperforms the classical method TRACLUS in run time and works well in discovering traffic-flow patterns.
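The paper's exact sampling scheme and distance definition are not given here; as a rough illustration of the idea — comparing whole trajectories through sampled points instead of partitioned sub-trajectories — one might write something like the following, where the uniform index sampling and the averaged point-to-point distance are assumptions of the sketch:

```python
import math

def resample(traj, n):
    """Pick n points at evenly spaced indices along the whole trajectory."""
    if len(traj) == 1:
        return [traj[0]] * n
    step = (len(traj) - 1) / (n - 1)
    return [traj[round(i * step)] for i in range(n)]

def trajectory_distance(t1, t2, n=10):
    """Whole-trajectory distance via point sampling -- no sub-trajectory partitioning."""
    a, b = resample(t1, n), resample(t2, n)
    return sum(math.dist(p, q) for p, q in zip(a, b)) / n
```

Because both trajectories are reduced to the same number of sample points, trajectories of different lengths become comparable, and a density-based algorithm such as DBSCAN can then cluster them using this distance in place of a pointwise Euclidean metric.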
48

Alamgir, Zareen, and Hina Naveed. "Efficient Density-Based Partitional Clustering Algorithm." Computing and Informatics 40, no. 6 (2021): 1322–44. http://dx.doi.org/10.31577/cai_2021_6_1322.

Full text
49

Chen, Hao, Yu Xia, Yuekai Pan, and Qing Yang. "Time-series data dynamic density clustering." Intelligent Data Analysis 25, no. 6 (October 29, 2021): 1487–506. http://dx.doi.org/10.3233/ida-205459.

Full text
Abstract:
In many clustering problems, the data is not static: over time, parts of it are updated, erased, and so on. Under this effect, the timeline can be divided into multiple time segments, with the data at each time slice being static, so the data along the timeline shows a series of dynamic intermediate states. The union of the data from all time slices is called time-series data. Obviously, the traditional clustering process does not apply directly to time-series data, while repeating the clustering process at every time slice is extremely costly. In this paper, we analyze the transition rules of the data set and the cluster structure when one time slice shifts to the next. We find a distinct correlation between the data sets, and a clear succession of cluster structure, across two adjacent time slices, which can be exploited to reduce the cost of the whole clustering process. Inspired by this, we propose a dynamic density clustering (DDC) method for time-series data. In simulations, we choose six representative problems to construct time-series data for testing DDC. The results show that DDC achieves highly accurate results on all six problems while markedly reducing the overall cost.
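The DDC method itself is not reproduced here, but the core observation — that the cluster structure at one time slice is a good starting point for the next — can be illustrated with a generic warm-start scheme. In this sketch, k-means is used purely as a stand-in clustering step, and `slices` and `k` are illustrative names; the actual paper uses a density-based method and its own transition rules:

```python
import math

def lloyd(points, centroids, iters=20):
    """Standard Lloyd iterations starting from the given centroids."""
    k = len(centroids)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: math.dist(p, centroids[c]))].append(p)
        centroids = [tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def cluster_time_series(slices, k):
    """Cluster each time slice, seeding every slice with the previous slice's result."""
    centroids = list(slices[0][:k])           # naive init on the first slice only
    history = []
    for points in slices:
        centroids = lloyd(points, centroids)  # warm start: reuse the preceding structure
        history.append(centroids)
    return history
```

Since adjacent slices differ only slightly, the warm-started step typically converges in far fewer iterations than clustering each slice from scratch, which is the cost reduction the abstract alludes to.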
50

Guo, Wenjie, Wenhai Wang, Shunping Zhao, Yunlong Niu, Zeyin Zhang, and Xinggao Liu. "Density Peak Clustering with connectivity estimation." Knowledge-Based Systems 243 (May 2022): 108501. http://dx.doi.org/10.1016/j.knosys.2022.108501.

Full text