Dissertations on the topic "Clustering 3D"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 32 dissertations for research on the topic "Clustering 3D."
Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract when the relevant parameters are available in the metadata.
Browse dissertations from a wide range of disciplines and compile an accurate bibliography.
Petrov, Anton Igorevich. „RNA 3D Motifs: Identification, Clustering, and Analysis“. Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1333929629.
Wiberg, Benjamin. „Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail“. Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150534.
Abu, Almakarem Amal S. „Base Triples in RNA 3D Structures: Identifying, Clustering and Classifying“. Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1308783522.
Borke, Lukas. „Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA“. Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.
With its growing popularity, GitHub, the largest host of source code and collaboration platform in the world, has evolved into a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers, promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic-driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened to "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
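The VSM-to-clustering pipeline described above can be illustrated with a minimal pure-Python sketch: term-frequency vectors compared by cosine similarity and greedily grouped. The documents and the similarity threshold are illustrative, and the LSA step itself (a truncated SVD of the term-document matrix) is omitted for brevity.

```python
import math
from collections import Counter

def tf_vector(text, vocab):
    """Term-frequency vector of a document over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["plot chart visualization", "chart visualization library",
        "regression model fitting", "model fitting statistics"]
vocab = sorted({w for d in docs for w in d.split()})
vecs = [tf_vector(d, vocab) for d in docs]

# Greedy grouping: a document joins the first cluster whose seed it resembles.
clusters = []
for i, v in enumerate(vecs):
    for c in clusters:
        if cosine(v, vecs[c[0]]) > 0.4:
            c.append(i)
            break
    else:
        clusters.append([i])
```

A real smart-clusterization run would replace the greedy loop with a calibrated clustering method and apply it to SVD-reduced (LSA) vectors rather than raw term frequencies.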
Hasnat, Md Abul. „Unsupervised 3D image clustering and extension to joint color and depth segmentation“. Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand to enhance the capability of existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for the task of indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We consider the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel Model Based Clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application for 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with the state-of-the-art methods and requires less computation time.
Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering different types of data, such as speech, gene expressions, etc. Moreover, they can be used for complex tasks, such as joint image-speech data analysis.
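Hasnat's clustering of image normals uses von Mises-Fisher and Watson mixtures fitted by Bregman soft clustering; a much simpler stand-in that conveys the idea is spherical k-means on unit normals (hard assignment by maximum dot product, centers updated to the renormalized mean direction). The normals below are synthetic and illustrative.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def spherical_kmeans(normals, centers, iters=20):
    """Hard-assignment clustering of unit vectors by maximum dot product;
    each center becomes the renormalized mean of its member normals."""
    centers = [normalize(c) for c in centers]
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in normals:
            best = max(range(len(centers)),
                       key=lambda k: sum(a * b for a, b in zip(v, centers[k])))
            groups[best].append(v)
        centers = [normalize([sum(x) for x in zip(*g)]) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Synthetic normals from two roughly perpendicular surfaces (floor and wall).
floor = [normalize((0.05, 0.1, 1.0)), normalize((-0.1, 0.02, 1.0)), (0.0, 0.0, 1.0)]
wall = [normalize((1.0, 0.05, 0.1)), normalize((1.0, -0.08, 0.02)), (1.0, 0.0, 0.0)]
centers, groups = spherical_kmeans(floor + wall,
                                   centers=[(0.1, 0.0, 0.9), (0.9, 0.0, 0.1)])
```

The vMF mixture of the thesis replaces the hard assignment with posterior responsibilities and additionally estimates a concentration parameter per cluster.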
Gianfrotta, Coline. „Modélisation, analyse et classification de motifs structuraux d'ARN à partir de leur contexte, par des méthodes d'algorithmique de graphes“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG056.
In this thesis, we study the structural context of RNA structural motifs in order to make progress in their prediction. Indeed, some RNA motifs, which are substructures appearing recurrently in RNA structures, remain difficult to predict because of the presence of non-canonical interactions in these motifs and because of the distance on the primary sequence between the different parts of these motifs. We therefore model the topological structural context of these motifs by graphs and compare the contexts of the different occurrences using several graph algorithms. We then classify the motif occurrences according to their topological context similarities and according to their 3D context similarities, using an overlapping clustering algorithm. First, we show on a dataset of three structural motifs that the observed similarities between the topological contexts are consistent with the similarities between the 3D contexts. This indicates that the topological context may be sufficient to determine the 3D context for these three motifs. In a second step, we study several classifications of occurrences of the A-minor motif, according to 3D context similarities. We observe that 3D context similarities exist between non-homologous occurrences, which could be a sign of an evolutionary convergence phenomenon. Moreover, we observe that some parts of the 3D context seem to be better conserved than others between non-homologous occurrences. In a third step, we study the predictive ability of the common topological context of A-minor motif occurrences sharing similar 3D contexts, as well as the predictive ability of a sequence signal on these same occurrences. To this end, we study the occurrence of this topology and sequence in RNA structures in the absence of A-minor motifs. We conclude that the topology and the sequence represent a good signal for the majority of the studied classes.
Borke, Lukas [Verfasser], Wolfgang Karl [Gutachter] Härdle und Stefan [Gutachter] Lessmann. „Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA / Lukas Borke ; Gutachter: Wolfgang Karl Härdle, Stefan Lessmann“. Berlin : Humboldt-Universität zu Berlin, 2017. http://d-nb.info/1189428857/34.
Yu, En. „Social Network Analysis Applied to Ontology 3D Visualization“. Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1206497854.
Der volle Inhalt der QuelleNawaf, Mohamad Motasem. „3D structure estimation from image stream in urban environment“. Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4024/document.
In computer vision, 3D structure estimation from 2D images remains a fundamental problem. One of the emergent applications is 3D urban modelling and mapping. Here, we are interested in street-level monocular 3D reconstruction from a mobile vehicle. In this particular case, several challenges arise at different stages of the 3D reconstruction pipeline. Mainly, the lack of textured areas in urban scenes produces a low-density reconstructed point cloud. Also, the continuous motion of the vehicle prevents redundant views of the scene and shortens feature-point lifetimes. In this context, we adopt piecewise planar 3D reconstruction, where the planarity assumption overcomes the aforementioned challenges. In this thesis, we introduce several improvements to the 3D structure estimation pipeline, in particular to the piecewise planar scene representation and modelling. First, we propose a novel approach that aims at creating a superpixel segmentation that respects the 3D geometry: a gradient-based boundary probability estimation that fuses colour and flow information using a weighted multi-layered model. A pixel-wise weighting is used in the fusion process which takes into account the uncertainty of the computed flow. This method produces unconstrained superpixels in terms of size and shape. For applications that require constrained-size superpixels, such as 3D reconstruction from an image sequence, we develop a flow-based SLIC method to produce superpixels that are adapted to the density of reconstructed points for better planar structure fitting. This is achieved by means of a new distance measure that takes into account an input density map, in addition to the flow and spatial information. To increase the density of the reconstructed point cloud used to perform the planar structure fitting, we propose a new approach that uses several matching methods and dense optical flow.
A weighting scheme assigns a learned weight to each reconstructed point to control its impact on fitting the structure, relative to the accuracy of the matching method used. Then, a weighted total least squares model uses the reconstructed points and learned weights to fit a planar structure with the help of superpixel segmentation of the input image sequence. Moreover, the model handles the occlusion boundaries between neighbouring scene patches to encourage connectivity and co-planarity, producing more realistic models. The final output is a complete, dense, visually appealing 3D model. The validity of the proposed approaches has been substantiated by comprehensive experiments and comparisons with state-of-the-art methods.
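The weighted plane-fitting step above can be sketched in simplified form. The thesis uses a weighted *total* least squares model; the illustrative pure-Python version below uses the simpler functional form z = ax + by + c with per-point weights, which still shows how small learned weights suppress unreliable points.

```python
def fit_plane_weighted(points, weights):
    """Weighted least-squares fit of z = a*x + b*y + c.
    Builds the 3x3 normal equations A^T W A x = A^T W z and solves them
    by Gaussian elimination with partial pivoting."""
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for (x, y, z), w in zip(points, weights):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += w * row[i] * row[j]
            rhs[i] += w * row[i] * z
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(M[r][j] * sol[j] for j in range(r + 1, 3))) / M[r][r]
    return sol  # a, b, c

# Points on the plane z = 2x - y + 3, plus one heavily down-weighted outlier.
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6), (5, 5, 100)]
wts = [1, 1, 1, 1, 1, 1e-9]
a, b, c = fit_plane_weighted(pts, wts)
```

With the outlier's weight near zero, the recovered coefficients are essentially (2, -1, 3); raising that weight pulls the plane toward the bad point, which is exactly the effect the learned weights control.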
Kéchichian, Razmig. „Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms“. Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.
Der volle Inhalt der QuelleDolet, Aneline. „2D and 3D multispectral photoacoustic imaging - Application to the evaluation of blood oxygen concentration“. Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI070/document.
Photoacoustic imaging is a functional technique based on the creation of acoustic waves from tissues excited by an optical source (laser pulses). The illumination of a region of interest with a range of optical wavelengths allows the discrimination of the imaged media. This modality is promising for various medical applications in which the growth, aging and evolution of tissue vascularization have to be studied. Thereby, photoacoustic imaging provides access to blood oxygenation in biological tissues and also allows the discrimination of benign and malignant tumors and the dating of tissue death (necrosis). The present thesis aims at developing a multispectral photoacoustic image processing chain for the calculation of blood oxygenation in biological tissues. The main steps are, first, data discrimination (clustering), to extract the regions of interest, and second, the quantification of the different media in these regions (unmixing). Several unsupervised clustering and unmixing methods have been developed and their performance compared on experimental multispectral photoacoustic data. These data were acquired on the experimental photoacoustic platform of the laboratory, during collaborations with other laboratories, and also on a commercial system. For the validation of the developed methods, many phantoms containing different optical absorbers were produced. During the co-supervision stay in Italy, specific imaging modes for 2D and 3D real-time photoacoustic imaging were developed on a research scanner. Finally, in vivo acquisitions using a commercial system were conducted on an animal model (mouse) to validate these developments.
Trávníčková, Kateřina. „Interaktivní segmentace 3D CT dat s využitím hlubokého učení“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-432864.
Šalplachta, Jakub. „Analýza 3D CT obrazových dat se zaměřením na detekci a klasifikaci specifických struktur tkání“. Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316836.
Mauss, Benoit. „Réactions élastiques et inélastiques résonantes pour la caractérisation expérimentale de la cible active ACTAR TPC“. Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC226/document.
ACTAR TPC (ACtive TARget and Time Projection Chamber) is a next-generation active target that was designed and built at GANIL (Grand Accélérateur National d'Ions Lourds). Active targets are gaseous targets in which the gas is also used to track charged particles following the principles of time projection chambers (TPC). The TPC of ACTAR has a segmented anode of 16384 square pixels of 2 mm side. The high density of pixels is processed using the GET (General Electronics for TPCs) electronics system. This system also digitizes the signals over a time interval, enabling full 3D event reconstruction. An eight-times-smaller demonstrator was first built to verify the electronics operation and the mechanical design. ACTAR TPC's final design was based on results obtained with the demonstrator, which was tested using 6Li, 24Mg and 58Ni beams. The commissioning of ACTAR TPC was then carried out for the case of resonant scattering on a proton target using 18O and 20Ne beams. A track reconstruction algorithm is used to extract the angles and energies of the ions involved in the reactions. Results are compared to previous data to determine the detection system's performance. By comparing the commissioning data with R-matrix calculations, excitation-function resolutions are obtained for different cases. The use of ACTAR TPC is validated for future experiments. Furthermore, alpha clustering was studied in 10B through the resonant scattering 6Li + 4He, carried out with the demonstrator. Two resonances at 8.58 MeV and 9.52 MeV are observed for the first time in elastic scattering with this reaction channel.
Rouleau, Turcotte Audrey. „Étude du comportement des piles de pont confinées de PRFC par écoute acoustique“. Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9450.
Rezaei, Alireza. „Detection of alterations in historical violins with optical monitoring“. Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG054.
Preventive conservation is the constant monitoring of the state of conservation of an artwork to reduce the risk of damage and minimise the necessity of restorations. Many methods have been proposed to achieve this goal, generally including a mix of different analytical techniques. In this work, we present two probabilistic clustering algorithms for the detection of alterations on varnished surfaces, in particular those of historical musical instruments. Both methods are based on the a-contrario framework and the Number of False Alarms (NFA) criterion. The first tackles the problem of detecting changes between a pair of colour images by analysing their difference map. It considers grey-level and spatial density information simultaneously with a single background model. The second method works with a sequence of images and analyses the evolution of the changed areas between frames. Both methods are robust to noise and avoid parameter tuning as well as any assumption about the shape and size of the changed areas. In both cases, tests have been conducted on UV-induced fluorescence (UVIFL) image sequences included in the “Violins UVIFL imagery” dataset. UVIFL photography is a well-known diagnostic technique used to see details of a surface not perceivable with visible light. The obtained results prove the capability of the algorithms to properly detect the altered regions. Comparisons with other state-of-the-art clustering methods show improvement in both precision and recall.
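The NFA criterion at the core of both algorithms multiplies the number of tests by the probability of the observed event under a background model; a minimal sketch for a binary change map, assuming a Bernoulli background (all parameters illustrative):

```python
from math import comb

def binomial_tail(k, n, p):
    """P[X >= k] for X ~ Binomial(n, p): chance of seeing at least k
    'changed' pixels in a region of n pixels under the background model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(k, n, p, num_tests):
    """Number of False Alarms: expected count of regions at least this
    extreme under the background model. A detection is called
    'meaningful' when NFA < epsilon (commonly epsilon = 1)."""
    return num_tests * binomial_tail(k, n, p)

# A candidate region of 50 pixels, 30 of them marked as changed, with a
# background change probability of 0.1, among 10_000 tested regions.
score = nfa(k=30, n=50, p=0.1, num_tests=10_000)
meaningful = score < 1.0
```

The actual algorithms of the thesis use richer background models (joint grey-level and spatial density, and temporal evolution across frames), but the accept/reject logic is this same NFA threshold.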
Cebecauer, Matej. „Short-Term Traffic Prediction in Large-Scale Urban Networks“. Licentiate thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-250650.
Bandieramonte, Marilena. „Muon Portal project: Tracks reconstruction, automated object recognition and visualization techniques for muon tomography data analysis“. Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/3751.
Li, Yichao. „Algorithmic Methods for Multi-Omics Biomarker Discovery“. Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1541609328071533.
Chaumont, Marc. „Représentation en objets vidéo pour un codage progressif et concurrentiel des séquences d'images“. Phd thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00004146.
Der volle Inhalt der QuelleTsai, Cheng-Lin, und 蔡政霖. „3D Cell Segmentation by Spatial Clustering of Subcellular Organelles“. Thesis, 2014. http://ndltd.ncl.edu.tw/handle/81778083199883054045.
國立陽明大學
生物醫學資訊研究所
102
Automatic segmentation of cell images is an essential task in a variety of biomedical applications. There are six main classes of approaches: intensity thresholding, feature detection, morphological filtering, region accumulation, deformable model fitting, and other approaches. In this thesis, we investigate whether spatial clustering of subcellular organelles is useful for 3D cell segmentation. We used CHO cell 3D images as our dataset. The nuclear channel is segmented by double Otsu methods and the mitochondrial channel is segmented by adaptive local thresholding. We calculated the spatial centroid and weighted centroid of the mitochondria and nuclei, and then used unsupervised clustering to group the mitochondria. We used the spatial extent of mitochondria in the same group as individual cell regions. Because there are several unsupervised clustering methods, we wanted to know which method yields higher accuracy for cell segmentation. We compared the performance of GMM clustering, K-means, hierarchical clustering and normalized cuts. Regions of interest (ROI) for each cell in the 3D images are manually labeled slice-by-slice and used as the gold standard for accuracy calculation. The following are results using methods that include nucleus centroids as data points. K-means clustering (81.43%) and GMM clustering (81.75%) with nucleus centroid initialization have higher accuracy than hierarchical clustering with average linkage (77.18%). We compared K-means with (81.22%) or without (81.43%) using nuclei centroids as initial cluster centers, and their accuracies are similar. Hierarchical clustering with nucleus centroids as data points with average (77.18%) or complete (77.02%) linkage has the same performance. Overall, K-means and GMM clustering have better accuracy on round and short cells than on flat cells. GMM clustering with nucleus centroids as data points has the highest accuracy of 81.75%.
GMM clustering is not suitable for whole field images, because there are many mitochondria from cells truncated by the image boundary, resulting in more mitochondrial clusters than nuclei. We designed a graphical user interface (GUI) system for K-means clustering without using nuclei centroids as initial cluster centers. The GUI was tested on another whole field 3D confocal image with manual cell ROI and achieved accuracy of 66.71%. Users can import a large number of image files for cell segmentation in our GUI. The proposed method can be applied to cell images with different subcellular organelle labels for automatic cell segmentation.
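The nucleus-seeded clustering described in the abstract can be sketched as plain Lloyd's k-means with the nucleus centroids as initial cluster centers; 2D illustrative coordinates stand in for the 3D organelle centroids.

```python
import math

def kmeans(points, centers, iters=20):
    """Lloyd's algorithm; `centers` seeds the clusters (here, nucleus
    centroids), so each resulting cluster corresponds to one cell."""
    centers = [tuple(c) for c in centers]
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            k = min(range(len(centers)),
                    key=lambda i: math.dist(p, centers[i]))
            groups[k].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Mitochondria centroids scattered around two nuclei (illustrative values).
mito = [(1.0, 1.2), (0.8, 0.9), (1.3, 1.1), (5.1, 4.8), (4.9, 5.2), (5.3, 5.0)]
nuclei = [(1.0, 1.0), (5.0, 5.0)]
centers, groups = kmeans(mito, centers=nuclei)
```

Seeding with nucleus centroids fixes the number of clusters to the number of nuclei, which is exactly why the thesis notes that the approach breaks down on whole-field images where boundary-truncated cells contribute mitochondria without a matching nucleus.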
Lu, Yu-Ching, und 呂宥瑾. „A Density-Based Clustering Color Consistency Method for 3D Object Reconstruction“. Thesis, 2007. http://ndltd.ncl.edu.tw/handle/24888197886872449896.
國立交通大學
電機與控制工程系所
96
A voxel-based approach for 3D object reconstruction is used in this thesis, and there are four steps in the process of a voxel-based 3D reconstruction system. In the first step, the camera is calibrated; the purpose of camera calibration is to acquire the intrinsic and extrinsic parameters of the camera. Second, image segmentation is executed to extract the object from the background. Third, a 3D model is built, and the coordinates and color information of a large number of surface points of the object are determined. This third step includes two sub-steps, voxel visibility and color consistency, and color consistency is the main issue of this thesis. Finally, the reconstructed 3D object is displayed in the fourth step, using VC++ with OpenGL libraries. Generally speaking, there are three existing methods for implementing color consistency: the single-threshold method, the histogram method and the adaptive-threshold method. A new color consistency method using density-based clustering is proposed in this thesis, and the proposed method is compared with the other three color consistency methods. According to the experimental results, the proposed method can eliminate unnecessary voxels and determine the true colors of voxels very well.
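The abstract does not spell out the density-based test; a plausible sketch is a minimal DBSCAN over the per-view RGB samples of a voxel, keeping the voxel when one dense colour cluster holds most of the samples. The thresholds and data are illustrative, not the thesis's actual parameters.

```python
import math

def dbscan(samples, eps, min_pts):
    """Minimal DBSCAN over RGB samples; returns a label per sample (-1 = noise)."""
    labels = [None] * len(samples)
    cluster = -1
    for i in range(len(samples)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(samples))
                 if math.dist(samples[i], samples[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins, is not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(len(samples))
                  if math.dist(samples[j], samples[k]) <= eps]
            if len(jn) >= min_pts:
                queue.extend(jn)
    return labels

def is_color_consistent(samples, eps=30.0, min_pts=2, ratio=0.75):
    """A voxel is consistent if one dense colour cluster holds most samples."""
    labels = dbscan(samples, eps, min_pts)
    core = [lab for lab in labels if lab != -1]
    if not core:
        return False
    biggest = max(core.count(lab) for lab in set(core))
    return biggest / len(samples) >= ratio

# RGB samples of one voxel seen from five cameras: four agree, one is occluded.
views = [(200, 30, 30), (205, 28, 35), (198, 33, 31), (202, 29, 29), (20, 160, 40)]
consistent = is_color_consistent(views)
```

Compared with a single fixed threshold on colour variance, the density view tolerates a minority of occluded or specular views while still rejecting voxels whose samples never agree.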
Yang, Tzu-Chieh, und 楊子頡. „Three-Dimensional Possibilistic C-Template Shell Clustering and its Application in 3D Object Segmentation“. Thesis, 2016. http://ndltd.ncl.edu.tw/handle/38716084468398410151.
國立交通大學
多媒體工程研究所
104
The purpose of this thesis is to use a model to match a similar object in three-dimensional space. This research includes four main parts: first, using the Kinect sensor to capture the real-world scene; second, splitting the point cloud into separate items; third, creating a model to match each individual item; lastly, obtaining the final result. The thesis describes using Kinect to establish a point cloud, using the 3D Hough Transform to find and remove the cloud points of planes, and using connected-component analysis to separate individual objects. The focus of this thesis is on matching each individual item against manually created models through Template-Based Shell Clustering, which is the process of detecting clusters of particular geometrical shapes through clustering algorithms. The experimental results show accurate matching.
Tseng, Wen-Hui, und 曾文慧. „Point Cloud Clustering for Surface Sampling and Its Application on 3D-Printing Quality Inspection“. Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4n2m76.
國立臺灣海洋大學
資訊工程學系
107
As 3D printing technology has matured, it has been applied to various fields. The growing popularity of such smart manufacturing makes quality inspection a sticking point; applying quality inspection to 3D printing helps achieve the goal of manufacturing high-quality 3D models. We propose a 3D printing quality inspection method based on 3D point cloud clustering. This work clusters the original 3D point cloud with a proposed principal component analysis that computes three eigenvalues and eigenvectors from the input point cloud. We define the normal vector of a principal plane as the eigenvector corresponding to the largest eigenvalue. The original 3D point cloud model is then divided into two clusters by this principal plane, and the proposed principal component analysis is repeatedly executed on the two clusters for further clustering. We use fast point feature histograms to represent the feature descriptors of the original 3D point cloud model. Collecting all fast point feature histograms of a point cloud, the clustering algorithm generates the shape dictionary of the point cloud. We create the R-table from this dictionary and the center point of each cluster; each entry stores an offset vector, i.e., a vector from the cluster center to the center point of the original 3D point cloud model. In addition, we use a 3D printing simulation system to create 3D printing simulation models. With each cluster center as a landmark type, the printing simulation models are used to label the voxels in a model using the 3D clustering results, which makes a set of training samples for learning a landmark classifier. The learned classifier is finally used to annotate voxels of the input reconstructed 3D model for further object segmentation and inspection. Using a 3D scanner to scan the printed 3D objects converts real-world objects into reconstructed 3D objects and point clouds.
In object segmentation, features are created for each vertex of the reconstructed 3D point cloud and input to the landmark classifier to discover the types of voxels in the input model. Combined with the 3D generalized Hough Transform, the center of the original 3D point cloud is located in the reconstructed point cloud. The 3D generalized Hough inverse transform is then used to verify the correct center position, which segments the 3D model of the target object. Finally, we align the segmented and the original 3D models using the well-known ICP algorithm in order to calculate the 3D printing error in terms of the point correspondences. Experimental results demonstrate that the proposed approach outperforms the compared methods in terms of execution speed and accuracy of 3D printing.
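The repeated principal-plane splitting step can be sketched in pure Python: the dominant eigenvector of the 3x3 covariance matrix is found by power iteration, and the cloud is split about the centroid along that direction (a synthetic elongated cloud is used for illustration).

```python
def principal_direction(points, iters=50):
    """Dominant eigenvector of the 3x3 covariance matrix via power iteration."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    cov = [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return mean, v

def split_by_principal_plane(points):
    """Split the cloud by the plane through the centroid whose normal is
    the direction of maximum variance (the 'principal plane')."""
    mean, v = principal_direction(points)
    side = lambda p: sum((p[i] - mean[i]) * v[i] for i in range(3))
    return ([p for p in points if side(p) >= 0],
            [p for p in points if side(p) < 0])

# An elongated cloud along x: the principal plane splits it into two halves.
cloud = [(x, 0.1 * (x % 2), 0.0) for x in range(10)]
left, right = split_by_principal_plane(cloud)
```

Recursing on `left` and `right` yields the binary cluster hierarchy the paper describes; the stopping rule (cluster size or depth) is a design choice not shown here.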
Li, Zheng-Kuan, und 李政寬. „Applying Regression Coefficients Clustering in Multivariate Time Series Transforming for 3D Convolutional Neural Networks“. Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cpv42m.
國立臺灣科技大學
工業管理系
107
Multivariate time series (MTS) data are very common in real life. Since most problems involve not just a single variable but multiple variables that affect the label, how to effectively solve the multivariate time series classification problem remains a major research question. In recent years, with the rapid development of Artificial Intelligence (AI), deep learning frameworks have been applied to multivariate time series classification problems. This study proposes a method to solve the MTS classification problem. Regression analysis is applied to the multivariate time series data to find a regression equation for each series. We cluster on the regression coefficients and intercepts so that time series with similar trends fall into the same cluster, and we adopt four frameworks from the literature to encode time series data as different types of images. According to the clustering results, time series with similar trends are encoded into images using the same method, and a variety of experiments determine the encoding method for each cluster of time series. After encoding the multivariate time series data as images in this way, each sample is input into a 3D convolutional neural network for feature extraction and image recognition, which effectively solves the multivariate time series classification problem and finds the best classification accuracy.
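The regression-coefficient clustering step can be sketched as follows: fit a simple linear regression to each series against its time index, then run k-means on the (slope, intercept) pairs so that series with similar trends share a cluster. The series and seed centers are illustrative.

```python
def linreg(series):
    """Closed-form simple linear regression of values against time index."""
    n = len(series)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(series) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, series))
    slope = sxy / sxx
    return slope, my - slope * mx  # (slope, intercept)

def cluster_by_trend(all_series, k_centers, iters=20):
    """Group series whose (slope, intercept) pairs are close, so series
    with similar trends land in the same cluster."""
    coeffs = [linreg(s) for s in all_series]
    centers = list(k_centers)
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for idx, c in enumerate(coeffs):
            k = min(range(len(centers)), key=lambda i:
                    (c[0] - centers[i][0]) ** 2 + (c[1] - centers[i][1]) ** 2)
            groups[k].append(idx)
        centers = [tuple(sum(coeffs[i][d] for i in g) / len(g) for d in (0, 1))
                   if g else c for g, c in zip(groups, centers)]
    return groups

rising = [[1, 2, 3, 4, 5], [0, 1.1, 2.0, 3.1, 4.0]]
flat = [[2, 2.1, 1.9, 2.0, 2.05], [5, 5.1, 4.9, 5.0, 5.0]]
groups = cluster_by_trend(rising + flat, k_centers=[(1.0, 0.0), (0.0, 3.0)])
```

In the study, each cluster is then assigned one of the four image-encoding frameworks before the 3D CNN stage; slope and intercept could also be scaled before clustering, since they live on different units.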
Yang, Huanyi. „Performance analysis of EM-MPM and K-means clustering in 3D ultrasound breast image segmentation“. 2014. http://hdl.handle.net/1805/3875.
Mammographic density is an important risk factor for breast cancer; detecting and screening it at an early stage could help save lives. To analyze breast density distribution, a good segmentation algorithm is needed. In this thesis, we compared two popular segmentation algorithms, EM-MPM and K-means clustering. We applied them to twenty cases of synthetic phantom ultrasound tomography (UST) and nine cases of clinical mammogram and UST images. From the synthetic phantom comparison we found that EM-MPM performs better than K-means clustering on segmentation accuracy, because its result fits the ground truth data very well (with superior Tanimoto coefficient and parenchyma percentage). EM-MPM is able to use a Bayesian prior assumption, which takes advantage of the 3D structure and finds a better localized segmentation. It performs significantly better for highly dense tissue scattered within low-density tissue and for volumes with low contrast between high- and low-density tissues. For the clinical mammograms, the segmentation comparison again shows that EM-MPM outperforms K-means clustering, since it identifies dense tissue more clearly and accurately. The superior EM-MPM results shown in this study suggest a promising application to density-proportion and cancer-risk evaluation.
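The Tanimoto coefficient used here as the accuracy measure is the Jaccard index of the binary segmentation masks; a minimal sketch (masks illustrative):

```python
def tanimoto(mask_a, mask_b):
    """Tanimoto (Jaccard) coefficient between two binary segmentation
    masks: |A intersect B| / |A union B|."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

# Flattened binary masks: ground truth vs. a segmentation result.
ground_truth = [1, 1, 1, 0, 0, 1, 0, 0]
segmented    = [1, 1, 0, 0, 0, 1, 1, 0]
score = tanimoto(ground_truth, segmented)
```

A score of 1.0 means the segmentation matches the ground truth exactly; here the masks agree on three of five foreground voxels, giving 3/5.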
Liu, Hsiang-Ping, und 劉享屏. „Interpolation by Spline with GCV and Nonparametric Segmentation by Cell Clustering for 3D Ultrasound Images“. Thesis, 2002. http://ndltd.ncl.edu.tw/handle/18526098884368016464.
國立交通大學
統計所
90
This study aims to segment a tumor in 3D from a volume of 2D ultrasound images. This segmentation can provide the location of the tumor for doctors during an operation and improve the accuracy of the operation. Because the images obtained by 2D ultrasound scans are irregularly spaced most of the time, it is necessary to interpolate them into regularly spaced 3D images so that image processing techniques for 2D images can be generalized directly with fast computation speed. Spline interpolation is used in this study. Generalized cross validation is proposed to decide the size of the control lattice in interpolation. After interpolation, we generalize the watershed transform and cell-based approaches to 3D images. Gaussian smoothing is first applied to denoise the images. Sobel filters are then used to estimate the gradient. Based on the absolute values of the gradients and the regularization term of the image intensities, image cells are obtained by the watershed transform. Finally, cells are merged or split to locate the tumor by a new method with nonparametric testing and divisive clustering, called "nonparametric cell clustering" in this study. Simulation and empirical studies are performed for this new approach, and the results are promising.
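Generalized cross validation chooses the amount of smoothing by penalizing the effective degrees of freedom tr(H) of a linear smoother. The thesis applies it to the spline control-lattice size; the stand-in below only illustrates the criterion itself, using a centered moving average whose smoother-matrix diagonal is known in closed form.

```python
def moving_average(y, w):
    """Centered moving average with the window truncated at the boundaries.
    Also returns the diagonal of the smoother matrix H (1/window size)."""
    h = w // 2
    out, diag = [], []
    for i in range(len(y)):
        lo, hi = max(0, i - h), min(len(y), i + h + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
        diag.append(1.0 / (hi - lo))
    return out, diag

def gcv_score(y, w):
    """GCV(w) = (RSS / n) / (1 - tr(H)/n)^2 for the linear smoother H."""
    n = len(y)
    fit, diag = moving_average(y, w)
    rss = sum((a - b) ** 2 for a, b in zip(y, fit))
    tr = sum(diag)
    return (rss / n) / (1 - tr / n) ** 2

# Noisy samples of a smooth ramp; GCV selects the window size.
y = [0.0, 0.3, 0.1, 0.5, 0.4, 0.8, 0.6, 1.0, 0.9, 1.2, 1.1, 1.5]
best_w = min([3, 5, 7, 9], key=lambda w: gcv_score(y, w))
```

For the spline of the thesis, w is replaced by the control-lattice size and H by the corresponding spline smoother, but the selection rule (minimize GCV over candidate sizes) is the same.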
Pape, Jasmin. „Multicolor 3D MINFLUX nanoscopy for biological imaging“. Doctoral thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-14E6-1.
Der volle Inhalt der Quelle
(9226151), Camilo G. Aguilar Herrera. „NOVEL MODEL-BASED AND DEEP LEARNING APPROACHES TO SEGMENTATION AND OBJECT DETECTION IN 3D MICROSCOPY IMAGES“. Thesis, 2020.
Den vollen Inhalt der Quelle finden
Modeling microscopy images and extracting information from them are important problems in the fields of physics and materials science.
Model-based methods, such as marked point processes (MPPs), and machine learning approaches, such as convolutional neural networks (CNNs), are powerful tools to perform these tasks. Nevertheless, MPPs present limitations when modeling objects with irregular boundaries. Similarly, machine learning techniques show drawbacks when differentiating clustered objects in volumetric datasets.
In this thesis we explore the extension of the MPP framework to detect irregularly shaped objects. In addition, we develop a CNN approach to perform efficient 3D object detection. Finally, we propose a CNN approach together with geometric regularization to provide robustness in object detection across different datasets.
The first part of this thesis explores the addition of boundary energy to the MPP by using active contours energy and level sets energy. Our results show this extension allows the MPP framework to detect material porosity in CT microscopy images and to detect red blood cells in DIC microscopy images.
The second part of this thesis proposes a convolutional neural network approach to perform 3D object detection by regressing object voxels into clusters. Comparisons with leading methods demonstrate a significant speed-up in 3D fiber and porosity detection in composite polymers while preserving detection accuracy.
The third part of this thesis explores an improvement in the 3D object detection approach by regressing pixels into their instance centers and using geometric regularization. This improvement demonstrates robustness when comparing 3D fiber detection in several large volumetric datasets.
These methods can contribute to fast and correct structural characterization of large volumetric datasets, which could potentially lead to the development of novel materials.
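The center-regression idea in the second and third parts can be sketched as post-processing: each voxel predicts an offset to its instance center, and voxels whose predicted centers coincide are grouped into one instance. Here the offsets are assumed given (in the thesis they come from the CNN), and the grid size and toy "fibers" are illustrative assumptions:

```python
import numpy as np

def cluster_by_predicted_centers(coords, offsets, grid=2.0):
    """Group voxels into instances by regressing each voxel to its
    predicted instance center and quantizing the centers.

    coords:  (N, 3) voxel coordinates
    offsets: (N, 3) predicted offset from each voxel to its center
    """
    centers = coords + offsets
    # Snap predicted centers to a coarse grid so voxels belonging to
    # the same instance collapse onto the same integer key.
    keys = np.round(centers / grid).astype(int)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return labels.ravel()

# Two toy "fibers": voxel clouds around centers (0,0,0) and (10,0,0).
rng = np.random.default_rng(0)
a = rng.normal([0, 0, 0], 0.3, size=(50, 3))
b = rng.normal([10, 0, 0], 0.3, size=(50, 3))
coords = np.vstack([a, b])
true_centers = np.vstack([np.zeros((50, 3)), np.tile([10.0, 0, 0], (50, 1))])
offsets = true_centers - coords  # a perfect regressor would predict these
labels = cluster_by_predicted_centers(coords, offsets)
```

The geometric regularization in the third part would additionally penalize predicted centers that stray from plausible fiber geometry, making this grouping robust across datasets.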
Campagnolo, João Henrique Fróis Lameiras. „Unsupervised behavioral classification with 3D pose data from tethered Drosophila melanogaster“. Master's thesis, 2020. http://hdl.handle.net/10451/48345.
Der volle Inhalt der Quelle
Animal behavior is guided by genetically encoded instructions, with contributions from the environment and prior experience. It can be regarded as the ultimate output of neuronal activity, so studying animal behavior is a means of understanding the mechanisms underlying the workings of the animal brain. Unraveling the correspondence between brain and behavior requires tools that can measure behavior precisely, meaningfully and coherently. The scientific field devoted to the study of animal behavior is called Ethology. In the early 20th century, ethologists categorized animal behaviors using their own intuition and experience. Consequently, their assessments were subjective and omitted any behaviors the ethologists had not considered a priori. With the emergence of new techniques for capturing and analyzing behavior, ethologists moved toward more objective, quantitative paradigms of behavior measurement. These analytical tools fostered the construction of behavioral datasets which, in turn, drove the development of software for behavior quantification: trajectory tracking, action classification, and large-scale analysis of behavioral patterns are the most prominent examples. This work falls within the second category (action classification). Action classifiers divide into supervised and unsupervised ones. The first category comprises classifiers trained to recognize specific patterns defined by a human expert.
This category of classifiers is limited by: 1) the strenuous frame-annotation process needed to train the classifier; 2) subjectivity with respect to the expert who labels those frames; 3) low dimensionality, in that classification reduces complex behaviors to a single label; 4) erroneous assumptions; 5) human bias regarding the observed behaviors. Unsupervised classifiers, in turn, tend to follow a common formula: 1) computer vision is used to extract the animal's postural features; 2) the data are pre-processed, including a crucial module that builds a posture-dynamics representation of the animal's actions so as to capture the dynamic elements of behavior; 3) an optional dimensionality-reduction module follows, if the user wishes to visualize the data directly in a low-dimensional space; 4) a label is assigned to each data element by an algorithm operating either directly in the high-dimensional space or in the low-dimensional space produced by the previous step. The goal of this work is an objective, reproducible, unsupervised classification of frames of Drosophila melanogaster tethered over an air-suspended ball, minimizing the number of intuitions required and, where possible, removing the influence of each individual's morphology (thus ensuring a generalized classification of these insects' behaviors). To achieve this, the study uses a recently developed tool that records the three-dimensional pose of tethered Drosophila, DeepFly3D, to build a dataset of the x-, y- and z-coordinates over time of the landmark positions of three Drosophila melanogaster genotypes (aDN>CsChrimson, MDN-GAL4/+ and aDN-GAL4/+ lines).
This is followed by a novel normalization step that computes angles between adjacent landmarks, such as the flies' joints, antennae and dorsal stripes, via trigonometric relations and the definition of the flies' anatomical planes; it aims to attenuate, for the classifier, the weight of the flies' morphological differences and of their orientation relative to the DeepFly3D cameras. The normalization module is followed by a frequency-analysis module that extracts the relevant frequencies in the time series of the computed angles, together with their relative weights. The final product of pre-processing is a matrix of the norms of those weights: the posture-dynamics space expression matrix. Next come the dimensionality-reduction and cluster-assignment modules (points 3) and 4) of the previous paragraph). For these, six candidate algorithm configurations are proposed and compared in order to determine the one best suited to this type of data. The dimensionality-reduction algorithms tested here are t-SNE (t-distributed Stochastic Neighbor Embedding) and PCA (Principal Component Analysis), while the clustering algorithms compared are Watershed, GMM posterior-probability assignment and HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise).
Each candidate pipeline is finally evaluated by inspecting the videos contained in the produced clusters and, given the vast number of such videos and the risk of validation being subjective across observers, with the aid of metrics expressing broad criteria of cluster quality: 1) Fly uncompactness, which assesses the efficiency of the angle-based normalization module; 2) Homogeneity, which seeks to ensure that clusters do not reflect fly identity or genotype; 3) Cluster entropy, which gauges the predictability of transitions between clusters; 4) Mean dwell time, which weighs how long an individual takes, on average, to perform an action. Two auxiliary criteria are also considered: the number of parameters the user must estimate (the more there are, the more limited the pipeline's reproducibility) and the algorithm's execution time (which should likewise be minimized). Although some subjectivity remains in what the user considers a "good" cluster, including these metrics brings the approach closer to an ideal of full autonomy, from conceiving a definition of behavior to validating the results that follow from its assumptions. The performances of the candidate pipelines diverged widely: the spaces resulting from dimensionality reduction proved heterogeneous and anisotropic, with sequences of points forming vermiform shapes instead of the anticipated conglomerate of disassociated points. These vermiform trajectories limit the performance of clustering algorithms operating in the low- (here, two-) dimensional spaces. The absence of an intermediate sampling step over the posture-dynamics space explains the origin of these vermiform trajectories.
Nevertheless, the pipelines that perform dimensionality reduction produced better results than the pipeline applying HDBSCAN clustering directly to the posture-dynamics expression matrix. The most successful combination of dimensionality-reduction and clustering modules was the PCA30-t-SNE2-GMM pipeline. Although not absolutely consistent, the clusters it produces each feature one behavior that stands out against the others (erroneously) placed in the same cluster. Their main shortcomings are the occasional merging of two distinct behaviors into one cluster, and the inopportune presence of behavior sequences in which the fly is immobile (probably the result of small detection errors produced by DeepFly3D). Moreover, the PCA30-t-SNE2-GMM pipeline was able to recognize differences in the flies' behavioral phenotype, validated against their genetic lines. Although the results show visible improvements over those of similar approaches, especially at the level of cluster videos, since only one of those approaches includes cluster success metrics, some aspects of this approach still require correction: adding a sampling step, followed by a new algorithm capable of performing consistent dimensionality reductions so as to gather all points in the same embedded space, is probably the change most capable of adding value. Future approaches should also not neglect the contribution of multiple behavioral representations that could validate one another, replacing the need for user-defined success metrics.
One of the preeminent challenges of Behavioral Neuroscience is understanding how the brain works and how it ultimately commands an animal’s behavior. Solving this brain-behavior linkage requires, on one end, precise, meaningful and coherent techniques for measuring behavior. Rapid technical developments in tools for collecting and analyzing behavioral data, paired with the immaturity of current approaches, motivate an ongoing search for systematic, unbiased behavioral classification techniques. To accomplish such a classification, this study employs a state-of-the-art tool for tracking the 3D pose of tethered Drosophila, DeepFly3D, to collect a dataset of x-, y- and z- landmark positions over time from tethered Drosophila melanogaster moving over an air-suspended ball. This is succeeded by unprecedented normalization across individual flies by computing the angles between adjoining landmarks, followed by standard wavelet analysis. Subsequently, six unsupervised behavior classification techniques are compared - four of which follow proven formulas, while the remaining two are experimental. Lastly, their performances are evaluated via meaningful metric scores along with cluster video assessment, so as to ensure a fully unbiased cycle - from the conjecturing of a definition of behavior to the corroboration of the results that stem from its assumptions. Performances of the different techniques varied significantly. Techniques that perform clustering in embedded low- (two-) dimensional spaces struggled with their heterogeneous and anisotropic nature. High-dimensional clustering techniques revealed that these properties emerged from the original high-dimensional posture-dynamics spaces. Nonetheless, high- and low-dimensional spaces disagree on the arrangement of their elements, with embedded data points showing hierarchical organization, which was lacking prior to their embedding.
Even so, low-dimensional clustering techniques were overall a better match for these spatial features and yielded more suitable results. Their candidate embedding algorithms alone were capable of revealing dissimilarities in preferred behaviors among contrasting genotypes of Drosophila. Lastly, the top-ranking classification technique produced satisfactory behavioral cluster videos (despite the irregular allocation of rest labels) in a consistent and repeatable manner, while requiring a marginal number of hand-tuned parameters.
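The top-ranking PCA30-t-SNE2-GMM configuration described above can be sketched with scikit-learn. The synthetic "angle-wavelet" features, the three stand-in behaviors, and all dimensions below are illustrative assumptions, not the thesis data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for the posture-dynamics expression matrix:
# 60 frames x 40 wavelet-amplitude features from 3 fake "behaviors".
frames = np.vstack([rng.normal(loc=c, scale=0.3, size=(20, 40))
                    for c in (0.0, 2.0, 4.0)])

# PCA to 30 components, t-SNE embedding to 2D, then GMM
# posterior-probability assignment of behavior labels.
pcs = PCA(n_components=30, random_state=0).fit_transform(frames)
embedded = TSNE(n_components=2, perplexity=5, random_state=0,
                init="pca").fit_transform(pcs)
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(embedded)
```

As the abstract notes, clustering in the embedded 2D space is sensitive to its heterogeneous, anisotropic geometry, which is why the choice among Watershed, GMM and HDBSCAN mattered in the comparison.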
Gorricha, Jorge Manuel Lourenço. „Visualization of clusters in geo-referenced data using three-dimensional self-organizing maps“. Master's thesis, 2010. http://hdl.handle.net/10362/2631.
Der volle Inhalt der Quelle
The Self-Organizing Map (SOM) is an artificial neural network that performs vector quantization and vector projection simultaneously. Due to this characteristic, the SOM is an effective method for cluster analysis via visualization. The SOM can be visualized through the output space, generally a regular two-dimensional grid of nodes, and through the input space, emphasizing the vector quantization process. Among all the strategies for visualizing the SOM, we are particularly interested in those that can deal with spatial dependency, linking the SOM to geographic visualization with color. One common approach is the cartographic representation of data with label colors defined from the output space of a two-dimensional SOM. However, in the particular case of geo-referenced data, it is possible to use a three-dimensional SOM for this purpose, thus adding one more dimension to the analysis. This dissertation presents a method for clustering geo-referenced data that integrates the visualization of both perspectives of a three-dimensional SOM: linking its output space to the cartographic representation through an ordered set of colors; and exploring the use of frontiers among geo-referenced elements, computed according to the distances in the input space between their Best Matching Units.
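A minimal sketch of the core idea, assuming nothing from the dissertation beyond what the abstract states: train a SOM whose output space is a 3D grid, then reuse each unit's normalized grid coordinates directly as an RGB label color for the observations it wins. The grid size, learning schedule and toy data are illustrative assumptions:

```python
import numpy as np

def train_som3d(data, shape=(4, 4, 4), epochs=20, lr=0.5, sigma=1.5, seed=0):
    """Train a SOM with a 3D output grid on `data` (N x d)."""
    rng = np.random.default_rng(seed)
    grid = np.argwhere(np.ones(shape))          # (64, 3) unit coordinates
    weights = rng.random((grid.shape[0], data.shape[1]))
    for epoch in range(epochs):
        s = sigma * (1 - epoch / epochs) + 0.1  # shrinking neighborhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            # Gaussian neighborhood in the 3D output grid.
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * s ** 2))
            weights += lr * h[:, None] * (x - weights)
    return grid, weights

def label_colors(data, grid, weights, shape=(4, 4, 4)):
    """Map each observation to the RGB color given by its Best Matching
    Unit's normalized 3D grid coordinates (the ordered color set)."""
    bmus = np.argmin(((data[:, None, :] - weights[None, :, :]) ** 2).sum(-1),
                     axis=1)
    return grid[bmus] / (np.array(shape) - 1)

# Toy geo-referenced attributes: 100 observations, 5 variables.
data = np.random.default_rng(1).random((100, 5))
grid, weights = train_som3d(data)
colors = label_colors(data, grid, weights)  # (100, 3) RGB values in [0, 1]
```

Because topology is preserved, nearby units get similar colors, so painting each geo-referenced element with its BMU color makes spatial clusters visible on the map.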
Gorricha, Jorge Manuel Lourenço. „Exploratory data analysis using self-organising maps defined in up to three dimensions“. Doctoral thesis, 2015. http://hdl.handle.net/10362/17852.
Der volle Inhalt der Quelle