Doctoral dissertations on the topic "3D clustering"

See also other types of publications on this topic: 3D clustering.

Create a correct reference in APA, MLA, Chicago, Harvard, and many other styles.


Browse the top 30 doctoral dissertations on the topic "3D clustering".

An "Add to bibliography" button is available next to each work. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever these are available in the metadata.

Browse doctoral dissertations from a wide variety of disciplines and compile an accurate bibliography.

1

Petrov, Anton Igorevich. "RNA 3D Motifs: Identification, Clustering, and Analysis". Bowling Green State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1333929629.

2

Wiberg, Benjamin. "Automatic Clustering of 3D Objects for Hierarchical Level-of-Detail". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-150534.

Abstract:
This report describes an algorithm for computing 3D object hierarchies fit for HLOD (hierarchical level-of-detail) optimization. The algorithm is used as a pre-processing stage in an HLOD pipeline that automatically optimizes 3D models containing multiple meshes. The hierarchy-generation algorithm groups meshes into a hierarchical tree using operations on the bounding spheres of the meshes. It prioritizes grouping close objects together in the early stages and relaxes its constraints toward the end, resulting in a tree structure with a single root node. The hierarchical tree is then used by computing proxy meshes, i.e., simplified stand-in meshes, for the inner nodes of the hierarchy. Finally, the resulting proxy meshes, together with the generated hierarchy and the original meshes, are used to render the model with a tree-traversing HLOD switching algorithm that renders deeper parts of the tree, containing more detailed meshes, when more detail is needed. In addition, a minor change to the clustering algorithm is proposed: swapping the bounding spheres for AABBs (Axis-Aligned Bounding Boxes) in the clustering stage generates hierarchies with different properties. This change is shown to produce hierarchies with rendering performance similar to those made with bounding spheres, while lowering the space requirements of all proxy meshes. Overall, the proposed automatic HLOD pipeline is shown to increase rendering performance in most frames for all evaluated scenes, while never yielding noticeably worse performance than the original model.
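
To make the grouping operation concrete, here is a minimal Python sketch of greedy bounding-sphere clustering. It illustrates the general idea only, not Wiberg's exact algorithm: the merge criterion (always pair the two nodes with the smallest combined sphere) and all names are assumptions.

    import numpy as np

    def merge_spheres(c1, r1, c2, r2):
        # Smallest sphere enclosing two spheres.
        d = np.linalg.norm(c2 - c1)
        if d + r2 <= r1: return c1, r1           # sphere 2 already inside sphere 1
        if d + r1 <= r2: return c2, r2           # sphere 1 already inside sphere 2
        r = (d + r1 + r2) / 2.0
        return c1 + (c2 - c1) * (r - r1) / d, r  # center slides along the joining line

    def build_hierarchy(spheres):
        # Greedy pairing: repeatedly merge the two nodes whose combined
        # bounding sphere is smallest; returns a binary tree of nested tuples.
        nodes = [(c, r, i) for i, (c, r) in enumerate(spheres)]
        while len(nodes) > 1:
            best = None
            for i in range(len(nodes)):
                for j in range(i + 1, len(nodes)):
                    c, r = merge_spheres(nodes[i][0], nodes[i][1], nodes[j][0], nodes[j][1])
                    if best is None or r < best[0]:
                        best = (r, i, j, c)
            r, i, j, c = best
            nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + \
                    [(c, r, (nodes[i][2], nodes[j][2]))]
        return nodes[0]

    # toy meshes represented only by their bounding spheres (center, radius)
    tree = build_hierarchy([(np.zeros(3), 1.0), (np.array([3.0, 0, 0]), 1.0),
                            (np.array([10.0, 0, 0]), 2.0)])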
3

Abu, Almakarem Amal S. "Base Triples in RNA 3D Structures: Identifying, Clustering and Classifying". Bowling Green State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1308783522.

4

Borke, Lukas. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA". Doctoral thesis, Humboldt-Universität zu Berlin, 2017. http://dx.doi.org/10.18452/18307.

Abstract:
With the growing popularity of GitHub, the largest host of source code and collaboration platform in the world, it has evolved to a Big Data resource offering a variety of Open Source repositories (OSR). At present, there are more than one million organizations on GitHub, among them Google, Facebook, Twitter, Yahoo, CRAN, RStudio, D3, Plotly and many more. GitHub provides an extensive REST API, which enables scientists to retrieve valuable information about the software and research development life cycles. Our research pursues two main objectives: (I) provide an automatic OSR categorization system for data science teams and software developers promoting discoverability, technology transfer and coexistence; (II) establish visual data exploration and topic driven navigation of GitHub organizations for collaborative reproducible research and web deployment. To transform Big Data into value, in other words into Smart Data, storing and processing of the data semantics and metadata is essential. Further, the choice of an adequate text mining (TM) model is important. The dynamic calibration of metadata configurations, TM models (VSM, GVSM, LSA), clustering methods and clustering quality indices will be shortened as "smart clusterization". Data-Driven Documents (D3) and Three.js (3D) are JavaScript libraries for producing dynamic, interactive data visualizations, featuring hardware acceleration for rendering complex 2D or 3D computer animations of large data sets. Both techniques enable visual data mining (VDM) in web browsers, and will be abbreviated as D3-3D. Latent Semantic Analysis (LSA) measures semantic information through co-occurrence analysis in the text corpus. Its properties and applicability for Big Data analytics will be demonstrated. "Smart clusterization" combined with the dynamic VDM capabilities of D3-3D will be summarized under the term "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA".
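
As a concrete illustration of the VSM→LSA→clustering→quality-index core of this pipeline, here is a minimal scikit-learn sketch. It is generic, not Borke's code; the document strings are invented stand-ins for README texts fetched via the GitHub API, and silhouette is just one of many possible quality indices.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    docs = ["R package for time series clustering",     # invented README snippets
            "D3 visualization of financial networks",
            "Three.js demo rendering 3D point clouds"]

    vsm = TfidfVectorizer(stop_words="english").fit_transform(docs)  # VSM term weights
    lsa = TruncatedSVD(n_components=2).fit_transform(vsm)            # LSA = truncated SVD
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(lsa)
    print(labels, silhouette_score(lsa, labels))                     # one clustering quality index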
5

Hasnat, Md Abul. "Unsupervised 3D image clustering and extension to joint color and depth segmentation". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4013/document.

Abstract:
Access to 3D images at a reasonable frame rate is now widespread, thanks to recent advances in low-cost depth sensors as well as efficient methods to compute 3D from 2D images. As a consequence, there is strong demand to enhance existing computer vision applications by incorporating 3D information. Indeed, numerous studies have demonstrated that the accuracy of different tasks increases when 3D information is included as an additional feature. However, for indoor scene analysis and segmentation, several important issues remain, such as: (a) how can the 3D information itself be exploited? and (b) what is the best way to fuse color and 3D in an unsupervised manner? In this thesis, we address these issues and propose novel unsupervised methods for 3D image clustering and joint color and depth image segmentation. To this aim, we consider image normals as the prominent feature from the 3D image and cluster them with methods based on finite statistical mixture models. We adopt the Bregman Soft Clustering method to ensure computationally efficient clustering. Moreover, we exploit several probability distributions from directional statistics, such as the von Mises-Fisher distribution and the Watson distribution. By combining these, we propose novel model-based clustering methods. We empirically validate these methods using synthetic data and then demonstrate their application to 3D/depth image analysis. Afterward, we extend these methods to segment synchronized 3D and color images, also called RGB-D images. To this aim, we first propose a statistical image generation model for RGB-D images. Then, we propose a novel RGB-D segmentation method using joint color-spatial-axial clustering and a statistical planar region merging method. Results show that the proposed method is comparable with state-of-the-art methods and requires less computation time. Moreover, it opens interesting perspectives for fusing color and geometry in an unsupervised manner. We believe that the methods proposed in this thesis are equally applicable and extendable to clustering other types of data, such as speech or gene expression, and can be used for complex tasks such as joint image-speech data analysis.
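
A minimal sketch of the central idea, clustering unit surface normals on the sphere. Spherical k-means, shown below, is the hard-assignment limit of a von Mises-Fisher mixture with shared concentration; the thesis's actual Bregman soft clustering is the probabilistic generalization of this, which the sketch does not reproduce.

    import numpy as np

    def spherical_kmeans(normals, k, iters=50, seed=0):
        # Cluster unit vectors by cosine similarity (hard vMF limit).
        rng = np.random.default_rng(seed)
        centers = normals[rng.choice(len(normals), k, replace=False)]
        for _ in range(iters):
            labels = np.argmax(normals @ centers.T, axis=1)   # nearest mean direction
            for j in range(k):
                m = normals[labels == j].sum(axis=0)          # resultant vector
                if np.linalg.norm(m) > 0:
                    centers[j] = m / np.linalg.norm(m)        # project back onto the sphere
        return labels, centers

    # toy normals: two noisy bundles around +z and +x
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal([0, 0, 1], 0.1, (100, 3)),
                     rng.normal([1, 0, 0], 0.1, (100, 3))])
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    labels, centers = spherical_kmeans(pts, 2)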
6

Borke, Lukas [Verfasser], Wolfgang Karl [Gutachter] Härdle and Stefan [Gutachter] Lessmann. "Dynamic Clustering and Visualization of Smart Data via D3-3D-LSA / Lukas Borke ; Gutachter: Wolfgang Karl Härdle, Stefan Lessmann". Berlin : Humboldt-Universität zu Berlin, 2017. http://d-nb.info/1189428857/34.

7

Yu, En. "Social Network Analysis Applied to Ontology 3D Visualization". Miami University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=miami1206497854.

8

Nawaf, Mohamad Motasem. "3D structure estimation from image stream in urban environment". Thesis, Saint-Etienne, 2014. http://www.theses.fr/2014STET4024/document.

Abstract:
In computer vision, 3D structure estimation from 2D images remains a fundamental problem. One of the emergent applications is 3D urban modelling and mapping. Here, we are interested in street-level monocular 3D reconstruction from a mobile vehicle. In this particular case, several challenges arise at different stages of the 3D reconstruction pipeline. Mainly, the lack of textured areas in urban scenes produces a low-density reconstructed point cloud. Also, the continuous motion of the vehicle prevents having redundant views of the scene, with short feature-point lifetimes. In this context, we adopt piecewise planar 3D reconstruction, where the planarity assumption overcomes the aforementioned challenges. In this thesis, we introduce several improvements to the 3D structure estimation pipeline, in particular to the planar piecewise scene representation and modelling. First, we propose a novel approach that aims at creating a superpixel segmentation respecting 3D geometry: a gradient-based boundary probability estimation that fuses colour and flow information using a weighted multi-layered model. A pixel-wise weighting is used in the fusion process to take into account the uncertainty of the computed flow. This method produces superpixels that are unconstrained in size and shape. For applications that require constrained-size superpixels, such as 3D reconstruction from an image sequence, we develop a flow-based SLIC method to produce superpixels adapted to the reconstructed point density for better planar structure fitting. This is achieved by means of a new distance measure that takes into account an input density map, in addition to the flow and spatial information. To increase the density of the reconstructed point cloud used to perform the planar structure fitting, we propose a new approach that uses several matching methods and dense optical flow. A weighting scheme assigns a learned weight to each reconstructed point to control its impact on fitting the structure, relative to the accuracy of the matching method used. A weighted total least squares model then uses the reconstructed points and learned weights to fit a planar structure, with the help of the superpixel segmentation of the input image sequence. Moreover, the model handles the occlusion boundaries between neighbouring scene patches to encourage connectivity and co-planarity, producing more realistic models. The final output is a complete, dense, visually appealing 3D model. The validity of the proposed approaches has been substantiated by comprehensive experiments and comparisons with state-of-the-art methods.
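
The weighted total least squares plane fit mentioned above can be sketched as weighted PCA on the reconstructed points. This is a generic formulation under variable names of my choosing, not the thesis code; it minimizes the weighted sum of squared orthogonal distances to the plane.

    import numpy as np

    def weighted_tls_plane(points, weights):
        # Fit n·x = d by weighted total least squares (weighted PCA).
        w = weights / weights.sum()
        centroid = (w[:, None] * points).sum(axis=0)   # weighted centroid lies on the plane
        X = (points - centroid) * np.sqrt(w)[:, None]
        normal = np.linalg.svd(X, full_matrices=False)[2][-1]  # least-variance direction
        return normal, normal @ centroid

    pts = np.random.default_rng(0).normal(size=(200, 3))
    pts[:, 2] *= 0.05                                  # noisy points near the z = 0 plane
    n, d = weighted_tls_plane(pts, np.ones(len(pts)))  # weights would come from matching accuracy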
9

Kéchichian, Razmig. "Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms". PhD thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.

Abstract:
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
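
To illustrate the compactness-versus-boundary-adherence trade-off of such a tessellation, here is a SLIC-flavoured sketch (my own illustration, not the thesis algorithm): k-means over intensity plus scaled voxel coordinates, where the spatial scale plays the compactness role.

    import numpy as np
    from sklearn.cluster import KMeans

    def supervoxels(volume, n_clusters=50, compactness=0.1):
        # Larger `compactness` gives rounder, more Voronoi-like cells;
        # smaller values let cells follow intensity boundaries more closely.
        coords = np.indices(volume.shape).reshape(3, -1).T.astype(float)
        feats = np.column_stack([volume.ravel(), compactness * coords])
        labels = KMeans(n_clusters=n_clusters, n_init=3).fit_predict(feats)
        return labels.reshape(volume.shape)

    cells = supervoxels(np.random.default_rng(0).random((20, 20, 20)))  # toy 3D image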
10

Dolet, Aneline. "2D and 3D multispectral photoacoustic imaging - Application to the evaluation of blood oxygen concentration". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI070/document.

Abstract:
Photoacoustic imaging is a functional technique based on the creation of acoustic waves from tissues excited by an optical source (laser pulses). Illuminating a region of interest with a range of optical wavelengths allows the discrimination of the imaged media. This modality is promising for various medical applications in which the growth, aging and evolution of tissue vascularization must be studied. Photoacoustic imaging thereby provides access to blood oxygenation in biological tissues and also allows the discrimination of benign from malignant tumors and the dating of tissue death (necrosis). The present thesis aims at developing a multispectral photoacoustic image processing chain for the calculation of blood oxygenation in biological tissues. The main steps are, first, data discrimination (clustering), to extract the regions of interest, and second, quantification of the different media in these regions (unmixing). Several unsupervised clustering and unmixing methods have been developed and their performance compared on experimental multispectral photoacoustic data. These were acquired on the laboratory's experimental photoacoustic platform, during collaborations with other laboratories, and on a commercial system. For the validation of the developed methods, many phantoms containing different optical absorbers were produced. During the co-supervised stay in Italy, specific imaging modes for 2D and 3D real-time photoacoustic imaging were developed on a research scanner. Finally, in vivo acquisitions using a commercial system were conducted on an animal model (mouse) to validate these developments.
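
The unmixing step can be sketched as a per-pixel non-negative least squares problem over known chromophore spectra. The matrix below uses invented numbers purely for illustration; real studies use tabulated extinction coefficients of oxy- and deoxyhemoglobin.

    import numpy as np
    from scipy.optimize import nnls

    # columns: assumed absorption of HbO2 and Hb at four wavelengths (made-up values)
    E = np.array([[0.30, 1.10],
                  [0.45, 0.90],
                  [0.80, 0.60],
                  [1.20, 0.40]])

    signal = E @ np.array([0.7, 0.3])      # synthetic pixel: 70% oxygenated blood
    conc, _ = nnls(E, signal)              # non-negative abundances per chromophore
    print("sO2 =", conc[0] / conc.sum())   # oxygen saturation estimate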
11

Trávníčková, Kateřina. "Interaktivní segmentace 3D CT dat s využitím hlubokého učení". Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-432864.

Abstract:
This thesis deals with CT data segmentation using convolutional neural networks and describes the problem of training with limited training sets. User interaction is suggested as a means of improving segmentation quality for models trained on small training sets, and the possibility of using transfer learning is also considered. All of the chosen methods help improve segmentation quality in comparison with the baseline, an automatic data-specific segmentation model. Segmentation improved by tens of percentage points in Dice score when trained on very small datasets. These methods can be used, for example, to simplify the creation of a new segmentation dataset.
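
For reference, the Dice score cited above measures overlap between a predicted and a ground-truth mask; a minimal sketch:

    import numpy as np

    def dice(pred, truth):
        # Dice coefficient: 2|A ∩ B| / (|A| + |B|) for boolean masks.
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

    a = np.zeros((10, 10, 10), bool); a[2:7] = True
    b = np.zeros((10, 10, 10), bool); b[3:8] = True
    print(dice(a, b))   # 0.8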
12

Šalplachta, Jakub. "Analýza 3D CT obrazových dat se zaměřením na detekci a klasifikaci specifických struktur tkání". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2017. http://www.nusl.cz/ntk/nusl-316836.

Abstract:
This thesis deals with the segmentation and classification of paraspinal muscle and subcutaneous adipose tissue in 3D CT image data, in order to use them subsequently as internal calibration phantoms for measuring the bone mineral density (BMD) of vertebrae. The chosen methods were tested and then evaluated in terms of classification correctness and overall suitability for subsequent BMD calculation. The algorithms were tested in the Matlab® programming environment on a purpose-built patient database containing the lumbar spines of twelve patients. The thesis also contains a theoretical survey of bone mineral density measurement, of segmentation and classification methods, and a description of the practical part of the work.
13

Mauss, Benoit. "Réactions élastiques et inélastiques résonantes pour la caractérisation expérimentale de la cible active ACTAR TPC". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMC226/document.

Abstract:
ACTAR TPC (ACtive TARget and Time Projection Chamber) is a next-generation active target that was designed and built at GANIL (Grand Accélérateur d'Ions Lourds). Active targets are gaseous targets in which the gas is also used to track charged particles, following the principles of time projection chambers (TPC). The TPC of ACTAR has a segmented anode of 16384 square pixels of 2 mm side. The high density of pixels is processed using the GET (General Electronics for TPCs) electronic system. This system also digitizes the signals over a time interval, enabling full 3D event reconstruction. A demonstrator eight times smaller was first built to verify the electronics operation and the mechanical design. ACTAR TPC's final design was based on results obtained with the demonstrator, which was tested using 6Li, 24Mg and 58Ni beams. The commissioning of ACTAR TPC was then carried out for the case of resonant scattering on a proton target using 18O and 20Ne beams. A track reconstruction algorithm is used to extract the angles and energies of the ions involved in the reactions. Results are compared with previous data to determine the detection system's performance. By comparing the commissioning data with R-matrix calculations, excitation-function resolutions are obtained for different cases, and the use of ACTAR TPC is validated for future experiments. Furthermore, alpha clustering was studied in 10B through the resonant scattering 6Li + 4He, carried out with the demonstrator. Two resonances at 8.58 MeV and 9.52 MeV are observed for the first time in elastic scattering in this reaction channel.
14

Rouleau, Turcotte Audrey. "Étude du comportement des piles de pont confinées de PRFC par écoute acoustique". Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9450.

Abstract:
A reinforced concrete structure is subjected to various types of loading. Earthquakes are among the exceptional events that impose extreme demands on structures. To address this issue, highway bridge design codes use an approach based on performance levels tied to limit states. Currently, the limit states proposed in the literature for a reinforced concrete (RC) bridge pier confined with carbon fiber reinforced polymer (CFRP) do not take into account the confinement provided by the CFRP jacket in combination with that of the steel spirals. This research project was the continuation of the non-destructive testing component of a study carried out in 2012, which included an experimental component [Carvalho, 2012] and a numerical component [Jean, 2012]. The main objective was to complete the study of the behavior of CFRP-strengthened RC columns under cyclic loading using the acoustic data collected by St-Martin [2014]. More specifically, the objectives were to determine the limit states associated with the performance levels and to characterize the acoustic signature of each limit state (e.g., concrete cracking, steel yielding, CFRP rupture). An acoustic analysis methodology based on the state of the art of Behnia et al. [2014] was used to quantify the severity, localize, and characterize the type of damage. First, acoustic data from 550 mm x 150 mm x 150 mm beams were used to characterize the acoustic signature of the limit states. Then, of the five test specimens built in 2012, the acoustic data of three specimens, circular columns 305 mm in diameter and 2000 mm in height, were used to determine the limit states. During these tests, the acoustic data were collected by St-Martin [2014] with 14 resonant sensors connected to a multichannel system and to the AEwin SAMOS 5.23 software from Physical Acoustics Corporation (PAC) [PAC, 2005]. An analysis of the distribution of the acoustic parameters (number of counts and absolute energy), combined with event localization and statistical grouping, commonly called clustering, made it possible to determine the limit states and even precursory signs of reaching them (e.g., crack initiation and propagation, cover spalling, cracking parallel to the fibers and CFRP rupture), which are tied to the performance levels of conventional and CFRP-confined columns. This study characterized the damage sequence of a CFRP-strengthened RC column and demonstrated the usefulness of acoustic emission monitoring for evaluating the internal damage of columns in real time. Better knowledge of the limit states is thus essential for integrating CFRP into the design and rehabilitation of structures.
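
The statistical grouping of acoustic emission hits can be sketched generically as k-means on log-scaled hit features such as counts and absolute energy. This is an illustration of the kind of clustering involved, not the study's exact procedure.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    counts = np.concatenate([rng.lognormal(2, 0.3, 300),   # toy hits: frequent low-energy events
                             rng.lognormal(4, 0.4, 50)])   # plus rarer high-energy events
    energy = np.concatenate([rng.lognormal(1, 0.5, 300), rng.lognormal(5, 0.5, 50)])

    X = StandardScaler().fit_transform(np.log(np.column_stack([counts, energy])))
    damage_group = KMeans(n_clusters=2, n_init=10).fit_predict(X)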
15

Cebecauer, Matej. "Short-Term Traffic Prediction in Large-Scale Urban Networks". Licentiate thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-250650.

Abstract:
City-wide travel time prediction in real time is an important enabler for efficient use of the road network. It can be used in traveler information to enable more efficient routing of individual vehicles, as well as in decision support for traffic management applications such as directed information campaigns or incident management. 3D speed maps have been shown to be a promising methodology for revealing day-to-day regularities of city-level travel times and possibly also for short-term prediction. In this paper, we aim to further evaluate and benchmark the use of 3D speed maps for short-term travel time prediction and, to enable scenario-based evaluation of traffic management actions, we also evaluate the framework for traffic flow prediction. The 3D speed map methodology is adapted to short-term prediction and benchmarked against the historical mean as well as against Probabilistic Principal Component Analysis (PPCA). The benchmarking and analysis use one year of travel time and traffic flow data for the city of Stockholm, Sweden. The case study shows very promising results for the 3D speed map methodology in short-term prediction of both travel times and traffic flows: the modified version of the 3D speed map prediction outperforms both the historical mean prediction and the PPCA method. Further work includes an extended evaluation of the method under different conditions in terms of underlying sensor infrastructure, preprocessing and spatio-temporal aggregation, as well as benchmarking against other prediction methods.


16

Bandieramonte, Marilena. "Muon Portal project: Tracks reconstruction, automated object recognition and visualization techniques for muon tomography data analysis". Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/3751.

Abstract:
This Ph.D. thesis is contextualized within the Muon Portal project, a project dedicated to the creation of a tomograph for the control and scanning of containers at borders, in order to reveal smuggled fissile material by means of cosmic muon scattering. The work aims to extend and consolidate research in the field of muon tomography in the context of applied physics. The main purpose of the thesis is to investigate new techniques for the reconstruction of muon tracks within the detector and new approaches to the analysis of muon tomography data for automatic object recognition and 3D visualization, thus making possible the realization of a tomography of the entire container. The research work was divided into different phases, described in this thesis: from a preliminary study of the state of the art on the tracking issue and on track reconstruction algorithms, to a study of the Muon Portal detector's performance for particle tracking at low and high multiplicity. A substantial part of the work was devoted to the study of different image reconstruction techniques based on the POCA (Point of Closest Approach) algorithm and the iterative EM-LM (Expectation-Maximization) algorithm. In addition, more advanced methods for track reconstruction and visualization, such as data-mining techniques and clustering algorithms, were the subject of research and development activity which culminated in an unsupervised multiphase clustering algorithm (modified Friends-of-Friends) for muon tomography data analysis.
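
For orientation, here is the generic friends-of-friends algorithm in miniature (the thesis develops a modified multiphase variant, which this does not reproduce): points closer than a linking length join the same cluster, i.e., clusters are the connected components of a fixed-radius neighbor graph.

    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import connected_components

    def friends_of_friends(points, linking_length):
        # Connected components of the "closer than linking_length" graph.
        pairs = cKDTree(points).query_pairs(r=linking_length, output_type="ndarray")
        n = len(points)
        graph = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n))
        return connected_components(graph, directed=False)[1]

    pts = np.vstack([np.random.default_rng(0).normal(c, 0.1, (50, 3))
                     for c in ([0, 0, 0], [2, 2, 2])])
    print(np.unique(friends_of_friends(pts, 0.5)).size)   # expect 2 clusters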
17

Li, Yichao. "Algorithmic Methods for Multi-Omics Biomarker Discovery". Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1541609328071533.

18

Chaumont, Marc. "Représentation en objets vidéo pour un codage progressif et concurrentiel des séquences d'images". PhD thesis, Université Rennes 1, 2003. http://tel.archives-ouvertes.fr/tel-00004146.

Abstract:
The objective of this thesis is to validate the hypothesis that video object coding can yield significant gains by using dynamic coding (placing several coders in competition for each video object). To this end, several points were studied. The first concerns automatic segmentation into video objects. We proposed an object model incorporating the notion of long-term tracking through a motion/texture representation of an object (with an active mesh used to represent motion). A 3D clustering algorithm based on this model was developed. Second, we focused on improving object coding techniques through scalability of the video stream. For this, we use a 3D wavelet coding scheme and introduce, in particular, lossy contour coding. The last point studied concerns the dynamic coding of video objects (placing several coders in competition for each video object). The coders used are: the H264/AVC coder, a 3D wavelet coder, a 3D coder and a mosaic coder. Automatic bit-rate allocation achieves results exceeding those produced by each coder taken separately, while offering decomposition of the stream into video objects. Keywords: video object segmentation, long-term segmentation, video object model, energy functional, clustering, active mesh, mosaic, video coding, video object coding, motion-texture-shape decorrelation, scalability, spatiotemporal wavelets, 3D wavelets, contour coding, shape coding, padding, dynamic coding, competitive coding, rate-distortion optimization, bit-rate allocation, antialiasing.
19

Tsai, Cheng-Lin, and 蔡政霖. "3D Cell Segmentation by Spatial Clustering of Subcellular Organelles". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/81778083199883054045.

Abstract:
Master's thesis, National Yang-Ming University, Institute of Biomedical Informatics, academic year 102 (ROC calendar).
Automatic segmentation of cell images is an essential task in a variety of biomedical applications. There are six main classes of approaches: intensity thresholding, feature detection, morphological filtering, region accumulation, deformable model fitting, and other approaches. In this thesis, we investigate whether spatial clustering of subcellular organelles is useful for 3D cell segmentation. We used CHO cell 3D images as our dataset. The nuclear channel is segmented by a double Otsu method and the mitochondrial channel by adaptive local thresholding. We calculated the spatial centroid and weighted centroid of the mitochondria and nuclei, and then used unsupervised clustering to group the mitochondria, taking the spatial extent of mitochondria in the same group as an individual cell region. Because there are several unsupervised clustering methods, we wanted to know which yields the highest accuracy for cell segmentation; we compared GMM clustering, K-means, hierarchical clustering and normalized cuts. Regions of interest (ROI) for each cell in the 3D images were manually labeled slice by slice and used as the gold standard for accuracy calculation. The following results use methods that include nucleus centroids as data points. K-means clustering (81.43%) and GMM clustering (81.75%) with nucleus-centroid initialization have higher accuracy than hierarchical clustering with average linkage (77.18%). K-means with (81.22%) or without (81.43%) nuclei centroids as initial cluster centers gives similar accuracy. Hierarchical clustering with nucleus centroids as data points performs the same with average (77.18%) or complete (77.02%) linkage. Overall, K-means and GMM clustering achieve better accuracy for round and short cells than for flat cells. GMM clustering with nucleus centroids as data points has the highest accuracy, 81.75%. GMM clustering is not suitable for whole-field images, because many mitochondria from cells truncated by the image boundary produce more mitochondrial clusters than nuclei. We designed a graphical user interface (GUI) for K-means clustering without using nuclei centroids as initial cluster centers; tested on another whole-field 3D confocal image with manual cell ROIs, it achieved an accuracy of 66.71%. Users can import a large number of image files for cell segmentation in our GUI. The proposed method can be applied to cell images with different subcellular organelle labels for automatic cell segmentation.
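
The initialization trick described above takes a few lines with scikit-learn (generic usage, not the thesis code): nucleus centroids seed k-means, so each resulting cluster of mitochondria corresponds to one cell.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    nuclei = np.array([[10.0, 10, 5], [30.0, 25, 6]])                 # one centroid per nucleus
    mito = np.vstack([rng.normal(c, 2.0, (80, 3)) for c in nuclei])   # toy mitochondria centroids

    km = KMeans(n_clusters=len(nuclei), init=nuclei, n_init=1)        # nuclei as initial centers
    cell_label = km.fit_predict(mito)                                 # cluster index = cell identity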
20

Lu, Yu-Ching, and 呂宥瑾. "A Density-Based Clustering Color Consistency Method for 3D Object Reconstruction". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/24888197886872449896.

Abstract:
Master's thesis, National Chiao Tung University, Department of Electrical and Control Engineering, academic year 96 (ROC calendar).
A voxel-based approach to 3D object reconstruction is used in this thesis; the process of a voxel-based 3D reconstruction system has four steps. In the first step, the camera is calibrated to acquire its intrinsic and extrinsic parameters. Second, image segmentation is performed to extract the object from the background. Third, a 3D model is built, determining the coordinates and colors of a large number of surface points of the object; this step includes two sub-steps, voxel visibility and color consistency, of which color consistency is the main issue of this thesis. Finally, in the fourth step, the reconstructed 3D object is displayed in VC++ with OpenGL libraries. Generally speaking, three methods have so far been used to implement color consistency: the single-threshold method, the histogram method and the adaptive-threshold method. A new color consistency method using density-based clustering is proposed in this thesis and compared with the other three. According to the experimental results, the proposed method can eliminate unnecessary voxels and determine the true colors of voxels very well.
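
A minimal sketch of the density-based idea for one voxel (my own reading of the approach, not the thesis's exact rule): gather the colors the voxel projects to across the cameras, run DBSCAN on them, carve the voxel if no dense color cluster exists, and otherwise color it by the cluster mean.

    import numpy as np
    from sklearn.cluster import DBSCAN

    def voxel_color_consistency(colors, eps=12.0, min_samples=3):
        # colors: (n_views, 3) RGB samples of one voxel across cameras.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(colors)
        core = labels[labels >= 0]
        if core.size == 0:
            return False, None                       # no dense cluster: carve the voxel
        main = np.bincount(core).argmax()            # largest (densest) color cluster
        return True, colors[labels == main].mean(axis=0)

    views = np.array([[200, 30, 40], [205, 28, 42], [198, 33, 39], [90, 90, 200]], float)
    keep, color = voxel_color_consistency(views)     # the last view is treated as an outlier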
21

Yang, Tzu-Chieh, and 楊子頡. "Three-Dimensional Possibilistic C-Template Shell Clustering and its Application in 3D Object Segmentation". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/38716084468398410151.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Multimedia Engineering, academic year 104 (ROC calendar).
The purpose of this thesis is to use a model to match similar objects in three-dimensional space. This research includes four main parts: first, using the Kinect sensor to capture the real-world scene; second, splitting the point cloud into separate items; third, matching a created model against each individual item; and lastly, obtaining the final result. The thesis describes using Kinect to establish a point cloud, using the 3D Hough Transform to find and remove the cloud points belonging to planes, and using connected components to separate individual objects. The focus of the thesis is matching individual items against manually created models through Template-Based Shell Clustering, i.e., detecting clusters of particular geometrical shapes with clustering algorithms. The experimental results show accurate matching.
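
The plane-removal step can be sketched as a 3D Hough transform: vote over sampled normal directions and binned signed distances, take the accumulator peak, and drop its inlier points. The random direction sampling, bin counts and inlier threshold below are my own choices, not the thesis's discretization.

    import numpy as np

    def hough_plane(points, n_dir=1000, n_rho=60, seed=0):
        # Detect the dominant plane n·x = rho by Hough voting.
        normals = np.random.default_rng(seed).normal(size=(n_dir, 3))
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        rho = points @ normals.T                       # signed distance per (point, direction)
        edges = np.linspace(rho.min(), rho.max(), n_rho + 1)
        idx = np.clip(np.digitize(rho, edges) - 1, 0, n_rho - 1)
        acc = np.zeros((n_dir, n_rho), int)
        for k in range(n_dir):                         # accumulate votes
            acc[k] = np.bincount(idx[:, k], minlength=n_rho)
        k, r = np.unravel_index(acc.argmax(), acc.shape)
        n, d = normals[k], (edges[r] + edges[r + 1]) / 2
        inliers = np.abs(points @ n - d) < (edges[1] - edges[0])
        return n, d, inliers                           # remove inliers, keep the rest

    pts = np.random.default_rng(1).uniform(-1, 1, (500, 3))
    pts[:250, 2] = 0.0                                 # half the points lie on z = 0
    n, d, on_plane = hough_plane(pts)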
22

Tseng, Wen-Hui, and 曾文慧. "Point Cloud Clustering for Surface Sampling and Its Application on 3D-Printing Quality Inspection". Thesis, 2018. http://ndltd.ncl.edu.tw/handle/4n2m76.

Abstract:
Master's thesis, National Taiwan Ocean University, Department of Computer Science and Engineering, academic year 107 (ROC calendar).
As 3D printing technology has matured, it has been applied to various fields. The growing popularity of such smart manufacturing makes quality inspection a sticking point, and applying quality inspection to 3D printing contributes to the goal of manufacturing high-quality 3D models. We propose a 3D printing quality inspection method based on 3D point cloud clustering. The original 3D point cloud is clustered with a proposed principal component analysis step that computes three eigenvalues and eigenvectors from the input point cloud; we define the normal vector of a principal plane as the eigenvector corresponding to the largest eigenvalue. The original 3D point cloud model is divided into two clusters by this principal plane, and the principal component analysis is applied recursively to the two clusters for further clustering. We use fast point feature histograms as the feature descriptors of the original 3D point cloud model; collecting all fast point feature histograms of a point cloud, the clustering algorithm generates the point cloud's shape dictionary. We create an R-table using this dictionary and the center point of each cluster, which yields an offset vector, i.e., a vector from the cluster center to the center point of the original 3D point cloud model. In addition, we use a 3D printing simulation system to create simulated printed models. With each cluster center as a landmark type, the printing simulation models are used to label the voxels in a model using the 3D clustering results, producing a set of training samples for learning a landmark classifier. The learned classifier is then used to annotate the voxels of the input reconstructed 3D model for object segmentation and inspection. A 3D scanner converts the printed real-world objects into reconstructed 3D objects and point clouds. In object segmentation, features are created for each vertex of the reconstructed 3D point cloud and fed to the landmark classifier to discover the voxel types in the input model. Combined with the 3D generalized Hough transform, the center of the original 3D point cloud is located in the reconstructed point cloud, and the 3D generalized Hough inverse transform verifies the correct center position, which segments the 3D model of the target object. Finally, we align the segmented and original 3D models with the well-known ICP algorithm in order to calculate the 3D printing error in terms of point correspondences. Experimental results demonstrate that the proposed approach outperforms comparable methods in execution speed and 3D printing accuracy.
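
The recursive PCA split described above can be sketched as follows (a minimal reading of the procedure, with stopping criteria of my own choosing): the eigenvector of the largest eigenvalue is the normal of a principal plane through the centroid, and points are partitioned by the sign of their projection onto it.

    import numpy as np

    def pca_split(points, min_size=100, depth=0, max_depth=4):
        # Recursively bisect a point cloud by the plane through its centroid
        # whose normal is the first principal component.
        if len(points) < min_size or depth == max_depth:
            return [points]
        centered = points - points.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        normal = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue
        side = centered @ normal > 0                   # half-space test against the principal plane
        return (pca_split(points[side], min_size, depth + 1, max_depth) +
                pca_split(points[~side], min_size, depth + 1, max_depth))

    clusters = pca_split(np.random.default_rng(0).normal(size=(1000, 3)))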
23

Li, Zheng-Kuan, and 李政寬. "Applying Regression Coefficients Clustering in Multivariate Time Series Transforming for 3D Convolutional Neural Networks". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cpv42m.

Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Industrial Management, academic year 107 (ROC calendar).
Multivariate time series data are very common in real life. Since most problems involve not a single variable but multiple variables that affect the label, effectively solving multivariate time series (MTS) classification remains a major research problem. In recent years, with the rapid development of Artificial Intelligence (AI), deep learning frameworks have been applied to multivariate time series classification. This study proposes a method for the MTS classification problem. Regression analysis is applied to the multivariate time series data to find a regression equation for each series, and the regression coefficients and intercepts are clustered so that time series with similar trends fall into the same cluster. Four frameworks from the literature are then used to encode time series data as different types of images: according to the clustering results, time series with similar trends are encoded with the same method, and a variety of experiments determine the encoding method for each cluster. After encoding the multivariate time series data as images in this way, each sample is input into a 3D convolutional neural network for feature extraction and image recognition, which effectively solves the multivariate time series classification problem and finds the best classification accuracy.
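
The trend-clustering step can be sketched as fitting a line to each series and running k-means on the resulting (slope, intercept) pairs, so that series with similar trends share a cluster and hence an image encoding (a generic illustration, not the thesis code):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    t = np.arange(50, dtype=float)
    series = np.vstack([[2.0 * t + rng.normal(0, 3, 50) for _ in range(20)],        # rising trend
                        [-1.5 * t + 40 + rng.normal(0, 3, 50) for _ in range(20)]])  # falling trend

    coef = np.array([np.polyfit(t, y, deg=1) for y in series])   # (slope, intercept) per series
    trend_cluster = KMeans(n_clusters=2, n_init=10).fit_predict(coef)
    # each cluster is then assigned its own time-series-to-image encoding method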
24

Yang, Huanyi. "Performance analysis of EM-MPM and K-means clustering in 3D ultrasound breast image segmentation". 2014. http://hdl.handle.net/1805/3875.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Mammographic density is an important risk factor for breast cancer; detection and screening at an early stage could help save lives. To analyze breast density distribution, a good segmentation algorithm is needed. In this thesis, we compared two widely used segmentation algorithms, EM-MPM and K-means clustering. We applied them to twenty cases of synthetic phantom ultrasound tomography (UST) and nine cases of clinical mammogram and UST images. The synthetic phantom comparison shows that EM-MPM outperforms K-means clustering in segmentation accuracy, as its result fits the ground truth data very well (with a superior Tanimoto coefficient and parenchyma percentage). EM-MPM is able to use a Bayesian prior assumption, which takes advantage of the 3D structure and finds a better localized segmentation. EM-MPM performs significantly better for highly dense tissue scattered within low-density tissue and for volumes with low contrast between high- and low-density tissues. The clinical mammogram comparison again shows that EM-MPM outperforms K-means clustering, since it identifies dense tissue more clearly and accurately. The superior EM-MPM results in this study suggest promising future applications to density-proportion estimation and cancer risk evaluation.
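
For reference, the Tanimoto coefficient mentioned above is the intersection-over-union of a segmented class with the ground truth; a minimal sketch:

    import numpy as np

    def tanimoto(seg, truth):
        # Tanimoto coefficient: |A ∩ B| / |A ∪ B| for boolean masks.
        seg, truth = seg.astype(bool), truth.astype(bool)
        union = np.logical_or(seg, truth).sum()
        return np.logical_and(seg, truth).sum() / union if union else 1.0

    a = np.zeros((8, 8, 8), bool); a[:4] = True
    b = np.zeros((8, 8, 8), bool); b[1:5] = True
    print(tanimoto(a, b))   # 0.6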
25

Liu, Hsiang-Ping, and 劉享屏. "Interpolation by Spline with GCV and Nonparametric Segmentation by Cell Clustering for 3D Ultrasound Images". Thesis, 2002. http://ndltd.ncl.edu.tw/handle/18526098884368016464.

Abstract:
Master's thesis, National Chiao Tung University, Institute of Statistics, academic year 90 (ROC calendar).
This study aims to segment a tumor in 3D from a volume of 2D ultrasound images. This segmentation can provide tumor location information to physicians during an operation and improve its accuracy. Because the images obtained by 2D ultrasound scans are usually irregularly spaced, it is necessary to interpolate them into regularly spaced 3D images so that image processing techniques for 2D images can be generalized directly with fast computation. Spline interpolation is used in this study, and generalized cross-validation (GCV) is proposed to decide the size of the control lattice in the interpolation. After interpolation, we generalize watershed-transform and cell-based approaches to 3D images. Gaussian smoothing is first applied to denoise the images, and Sobel filters are then used to estimate the gradient. Based on the absolute values of the gradients and a regularization term on the image intensities, image cells are obtained by the watershed transform. Finally, cells are merged or split to locate the tumor by a new method combining nonparametric testing and divisive clustering, called "nonparametric cell clustering" in this study. Simulation and empirical studies are performed for this new approach, with promising results.
26

Pape, Jasmin. "Multicolor 3D MINFLUX nanoscopy for biological imaging". Doctoral thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-14E6-1.

27

Aguilar Herrera, Camilo G. "Novel Model-Based and Deep Learning Approaches to Segmentation and Object Detection in 3D Microscopy Images". Thesis, 2020.

Abstract:

Modeling microscopy images and extracting information from them are important problems in the fields of physics and materials science.

Model-based methods, such as marked point processes (MPPs), and machine learning approaches, such as convolutional neural networks (CNNs), are powerful tools to perform these tasks. Nevertheless, MPPs present limitations when modeling objects with irregular boundaries. Similarly, machine learning techniques show drawbacks when differentiating clustered objects in volumetric datasets.

In this thesis we explore the extension of the MPP framework to detect irregularly shaped objects. In addition, we develop a CNN approach to perform efficient 3D object detection. Finally, we propose a CNN approach together with geometric regularization to provide robustness in object detection across different datasets.

The first part of this thesis explores the addition of boundary energy to the MPP by using active contour energy and level set energy. Our results show this extension allows the MPP framework to detect material porosity in CT microscopy images and to detect red blood cells in DIC microscopy images.

The second part of this thesis proposes a convolutional neural network approach to perform 3D object detection by regressing object voxels into clusters. Comparisons with leading methods demonstrate a significant speed-up in 3D fiber and porosity detection in composite polymers while preserving detection accuracy.

The third part of this thesis explores an improvement to the 3D object detection approach by regressing voxels into their instance centers and using geometric regularization. This improvement demonstrates robustness when comparing 3D fiber detection across several large volumetric datasets.

These methods can contribute to fast and correct structural characterization of large volumetric datasets, which could potentially lead to the development of novel materials.
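The voxel-regression idea in the second and third parts can be illustrated with a short sketch: a network predicts, for every foreground voxel, an offset toward its instance center, and the voted centers are then grouped into instances. The thesis's actual grouping procedure is not detailed in the abstract, so scikit-learn's mean-shift stands in for it here; the function name, bandwidth and array layout are our own assumptions.

    import numpy as np
    from sklearn.cluster import MeanShift

    def voxels_to_instances(offsets, foreground, bandwidth=3.0):
        # offsets: (D, H, W, 3) per-voxel center offsets predicted by a CNN
        # foreground: (D, H, W) boolean mask of object voxels
        coords = np.argwhere(foreground)            # (N, 3) voxel coordinates
        votes = coords + offsets[foreground]        # each voxel votes for its center
        labels = MeanShift(bandwidth=bandwidth).fit_predict(votes)
        instance_map = np.zeros(foreground.shape, dtype=np.int32)
        instance_map[tuple(coords.T)] = labels + 1  # label 0 stays background
        return instance_map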

Style APA, Harvard, Vancouver, ISO itp.
28

Campagnolo, João Henrique Fróis Lameiras. "Unsupervised behavioral classification with 3D pose data from tethered Drosophila melanogaster". Master's thesis, 2020. http://hdl.handle.net/10451/48345.

Pełny tekst źródła
Streszczenie:
Integrated Master's thesis in Biomedical Engineering and Biophysics (Medical Biophysics and Systems Physiology), Universidade de Lisboa, Faculdade de Ciências, 2020
Animal behavior is guided by genetically encoded instructions, with contributions from the surrounding environment and previous experiences. It can be regarded as the ultimate output of neuronal activity, so the study of animal behavior is a means of understanding the mechanisms underlying the workings of the animal brain. Unraveling the correspondence between brain and behavior requires tools that can measure behavior in a precise, meaningful and coherent way. The scientific domain responsible for the study of animal behavior is called Ethology. In the early twentieth century, ethologists categorized animal behaviors using their own intuitions and experience. Consequently, their assessments were subjective and missed behaviors the ethologists had not considered a priori. With the emergence of new techniques for capturing and analyzing behavior, ethologists moved toward more objective, quantitative paradigms of behavior measurement. Such analytical tools fostered the construction of behavioral datasets which, in turn, promoted the development of software for quantifying behavior: trajectory tracking, action classification and large-scale analysis of behavioral patterns are the most prominent examples. This work falls into the second category (action classification). Action classifiers are divided into supervised and unsupervised ones. The first category comprises classifiers trained to recognize specific patterns defined by a human expert, and is limited by: 1) the need for a strenuous frame-annotation process to train the classifier; 2) subjectivity with respect to the expert who labels the frames; 3) low dimensionality, in that classification reduces complex behaviors to a single label; 4) erroneous assumptions; 5) human bias regarding the observed behaviors. Unsupervised classifiers, in turn, exhaustively follow a formula: 1) computer vision is employed to extract the animal's postural features; 2) the data are pre-processed, including a vital module in which a posture-dynamics representation of the animal's actions is built, so as to capture the dynamic elements of behavior; 3) an optional dimensionality-reduction module follows, should the user wish to visualize the data directly in a low-dimensional space; 4) each data element is assigned a label by an algorithm that operates either directly on the high-dimensional space or on the low-dimensional one resulting from the previous step. The goal of this work is to achieve an objective, reproducible, unsupervised classification of frames of Drosophila melanogaster tethered over an air-suspended ball, trying to minimize the number of intuitions required and, if possible, to dissipate the influence of each individual's morphological traits (thus guaranteeing a generalized classification of these insects' behaviors).
To achieve such a classification, this study uses a recently developed tool that records the three-dimensional pose of tethered Drosophila, DeepFly3D, to build a dataset with the x-, y- and z-coordinates over time of the reference landmarks of a set of three Drosophila melanogaster genotypes (the aDN>CsChrimson, MDN-GAL4/+ and aDN-GAL4/+ lines). There follows a novel normalization operation that computes the angles between adjacent landmarks, such as the flies' joints, antennae and dorsal stripes, via trigonometric relations and the definition of the flies' anatomical planes; it aims to attenuate, for the classifier, the weight of the flies' morphological differences and of their orientation relative to the DeepFly3D cameras. The normalization module is followed by a frequency-analysis module focused on extracting the relevant frequencies in the time series of the computed angles, as well as their relative weights. The final product of the pre-processing is a matrix with the norm of those weights: the posture-dynamics space expression matrix. Subsequently come the dimensionality-reduction and cluster-assignment modules (points 3) and 4) of the previous paragraph). For these, six possible algorithm configurations are proposed and submitted to a comparative analysis in order to determine the one best suited to classify this kind of data. The dimensionality-reduction algorithms put to the test here are t-SNE (t-distributed Stochastic Neighbor Embedding) and PCA (Principal Component Analysis), while the clustering algorithms compared are Watershed, GMM posterior-probability assignment and HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise). Each candidate pipeline is finally evaluated by watching the videos included in the produced clusters and, given the vast number of these videos and the subjectivity of validation across different observers, with the aid of metrics that express broad criteria of cluster quality: 1) Fly uncompactness, which assesses the efficiency of the normalization module based on the fly's reference angles; 2) Homogeneity, which seeks to guarantee that clusters do not reflect the flies' identity or genotype; 3) Cluster entropy, which gauges the predictability of transitions between clusters; 4) Mean dwell time, which weighs how long an individual takes, on average, to perform an action. Two extra auxiliary criteria are also considered: the number of parameters estimated by the user (the higher it is, the more limited the pipeline's reproducibility) and the algorithm's execution time (which should likewise be minimized). Although some subjectivity remains as to what the user considers a "good" cluster, the inclusion of the metrics brings this approach closer to an ideal scenario of complete autonomy between conceiving a definition of behavior and validating the results that follow from its conjectures. The performances of the candidate pipelines diverged widely: the spaces resulting from the dimensionality-reduction operations proved heterogeneous and anisotropic, with sequences of points taking worm-like shapes instead of the anticipated conglomerate of disassociated points. These worm-like trajectories limit the performance of the clustering algorithms that operate in low- (here, two-) dimensional spaces.
The absence of an intermediate sampling step over the posture-dynamics space explains the genesis of these worm-like trajectories. Nevertheless, the pipelines that perform dimensionality reduction produced better results than the pipeline that applies HDBSCAN clustering directly on the posture-dynamics space expression matrix. The most fortunate combination of dimensionality-reduction and clustering modules came from the PCA30-t-SNE2-GMM pipeline. Although not absolutely consistent, the clusters resulting from this pipeline each include one behavior that stands out from the others (erroneously) placed in the same cluster. Shortcomings of these clusters mostly involve the occasional merging of two distinct behaviors into the same cluster, or the inopportune presence of behavior sequences in which the fly is immobile (probably the result of small detection errors produced by DeepFly3D). Moreover, the PCA30-t-SNE2-GMM pipeline was able to recognize differences in the behavioral phenotype of flies, validated by their genetic lines. Although the results obtained show visible improvements over those produced by similar approaches, especially at the level of cluster videos (since only one of those approaches includes cluster success metrics), some aspects of this approach require corrections: the inclusion of a sampling stage, followed by a new algorithm capable of performing consistent dimensionality reductions, so as to gather all points in the same embedded space, is possibly the feature most capable of adding value to this approach. Future approaches should not neglect the contribution of multiple behavioral representations that can validate one another, replacing the need for user-defined success metrics.
One of the preeminent challenges of Behavioral Neuroscience is understanding how the brain works and how it ultimately commands an animal's behavior. Solving this brain-behavior linkage requires, on one end, precise, meaningful and coherent techniques for measuring behavior. Rapid technical developments in tools for collecting and analyzing behavioral data, paired with the immaturity of current approaches, motivate an ongoing search for systematic, unbiased behavioral classification techniques. To accomplish such a classification, this study employs a state-of-the-art tool for tracking the 3D pose of tethered Drosophila, DeepFly3D, to collect a dataset of x-, y- and z- landmark positions over time from tethered Drosophila melanogaster moving over an air-suspended ball. This is succeeded by an unprecedented normalization across individual flies, computing the angles between adjoining landmarks, followed by standard wavelet analysis. Subsequently, six unsupervised behavior classification techniques are compared - four of which follow proven formulas, while the remaining two are experimental. Lastly, their performances are evaluated via meaningful metric scores along with cluster video assessment, so as to ensure a fully unbiased cycle - from the conjecturing of a definition of behavior to the corroboration of the results that stem from its assumptions. Performances of the different techniques varied significantly. Techniques that perform clustering in embedded low- (two-) dimensional spaces struggled with their heterogeneous and anisotropic nature. High-dimensional clustering techniques revealed that these properties emerged from the original high-dimensional posture-dynamics spaces. Nonetheless, high- and low-dimensional spaces disagree on the arrangement of their elements, with embedded data points showing a hierarchical organization that was lacking prior to their embedding. Low-dimensional clustering techniques were overall a better match for these spatial features and yielded more suitable results. Their candidate embedding algorithms alone were capable of revealing dissimilarities in preferred behaviors among contrasting genotypes of Drosophila. Lastly, the top-ranking classification technique produced satisfactory behavioral cluster videos (despite the irregular allocation of rest labels) in a consistent and repeatable manner, while requiring a marginal number of hand-tuned parameters.
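As a rough orientation, the top-ranking PCA30-t-SNE2-GMM pipeline maps directly onto standard scikit-learn components. The sketch below assumes the pre-processing has already produced a posture-dynamics expression matrix (frames by wavelet-expanded joint-angle features); the number of behavioral clusters and the function name are illustrative, not taken from the thesis. Note that t-SNE over hundreds of thousands of frames is expensive, which is one reason the thesis discusses an intermediate sampling step.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.mixture import GaussianMixture

    def pca_tsne_gmm(features, n_behaviors=30, seed=0):
        # features: (n_frames, n_angles * n_wavelet_scales) expression matrix
        reduced = PCA(n_components=30).fit_transform(features)   # "PCA30"
        embedded = TSNE(n_components=2,
                        random_state=seed).fit_transform(reduced)  # "t-SNE2"
        gmm = GaussianMixture(n_components=n_behaviors, random_state=seed)
        labels = gmm.fit_predict(embedded)                       # "GMM"
        return labels, embedded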
Style APA, Harvard, Vancouver, ISO itp.
29

Gorricha, Jorge Manuel Lourenço. "Visualization of clusters in geo-referenced data using three-dimensional self-organizing maps". Master's thesis, 2010. http://hdl.handle.net/10362/2631.

Pełny tekst źródła
Streszczenie:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management
The Self-Organizing Map (SOM) is an artificial neural network that performs vector quantization and vector projection simultaneously. Due to this characteristic, the SOM is an effective method for cluster analysis via visualization. The SOM can be visualized through the output space, generally a regular two-dimensional grid of nodes, and through the input space, emphasizing the vector quantization process. Among all the strategies for visualizing the SOM, we are particularly interested in those that allow dealing with spatial dependency, linking the SOM to geographic visualization with color. One common approach is the cartographic representation of data with label colors defined from the output space of a two-dimensional SOM. However, in the particular case of geo-referenced data, it is possible to use a three-dimensional SOM for this purpose, thus adding one more dimension to the analysis. This dissertation presents a method for clustering geo-referenced data that integrates the visualization of both perspectives of a three-dimensional SOM: linking its output space to the cartographic representation through an ordered set of colors, and exploring the use of frontiers among geo-referenced elements, computed according to the distances in the input space between their Best Matching Units.
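The colour-linking idea, training a SOM whose output space is a 3D grid and reading each unit's grid coordinates as an RGB colour for the map, can be sketched with a minimal hand-rolled SOM, since common libraries only support 2D grids. This is an illustration under our own assumptions (grid size, linear decay schedules, function names), not the dissertation's implementation.

    import numpy as np

    def train_som3d(data, shape=(5, 5, 5), n_iter=5000,
                    lr0=0.5, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        axes = [np.arange(s) for s in shape]
        # (n_nodes, 3) output-space coordinates of the 3D grid
        grid = np.stack(np.meshgrid(*axes, indexing="ij"),
                        -1).reshape(-1, 3).astype(float)
        weights = rng.random((len(grid), data.shape[1]))  # codebook vectors
        for t in range(n_iter):
            x = data[rng.integers(len(data))]
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best matching unit
            frac = 1.0 - t / n_iter                       # linear decay (assumed)
            sigma = max(sigma0 * frac, 1e-3)
            d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # output-space distances
            h = np.exp(-d2 / (2 * sigma ** 2))            # neighbourhood kernel
            weights += lr0 * frac * h[:, None] * (x - weights)
        return grid, weights

    def rgb_labels(data, grid, weights, shape=(5, 5, 5)):
        # colour of each record = normalised 3D grid position of its BMU,
        # ready to be painted onto the cartographic representation
        bmus = np.argmin(((data[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
        return grid[bmus] / (np.asarray(shape) - 1)       # RGB in [0, 1]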
Style APA, Harvard, Vancouver, ISO itp.
30

Gorricha, Jorge Manuel Lourenço. "Exploratory data analysis using self-organising maps defined in up to three dimensions". Doctoral thesis, 2015. http://hdl.handle.net/10362/17852.

Pełny tekst źródła
Streszczenie:
The SOM is an artificial neural network based on an unsupervised learning process that performs a nonlinear mapping of high-dimensional input data onto an ordered and structured array of nodes, designated as the SOM output space. Being simultaneously a quantization algorithm and a projection algorithm, the SOM is able to summarize and map the data, allowing its visualization. Because with the most common visualization methods it is very difficult or even impossible to visualize a SOM defined with more than two dimensions, the SOM output space is generally a regular two-dimensional grid of nodes. However, there are no theoretical problems in generating SOMs with higher-dimensional output spaces. In this thesis we present evidence that a SOM output space defined in up to three dimensions can be used successfully for the exploratory analysis of spatial data, two-way data and three-way data. Despite the differences between the methods proposed to visualize each group of data, the adopted approach is generally based on the projection of colour codes, obtained from the output space of 3D SOMs, onto a specific two-dimensional surface where the data can be represented according to its own characteristics. This approach is, in some cases, also complemented with the simultaneous use of SOMs defined in one and two dimensions, so that patterns in the data can be properly revealed. The results obtained with this visualization strategy indicate not only the benefits of using SOMs defined in up to three dimensions but also the relevance of the combined and simultaneous use of different SOM models in exploratory data analysis.
Style APA, Harvard, Vancouver, ISO itp.