Theses on the topic "Segmentation multivariée" (multivariate segmentation)

Listed below are the top 21 dissertations (master's and doctoral theses) on the research topic "Segmentation multivariée".

1

Asseraf, Mounir. "Extension et optimisation pour la segmentation de la distance de Kolmogorov-Smirnov". Paris 9, 1998. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1998PA090026.

Abstract:
Segmentation belongs to the framework of multidimensional data analysis; it stands apart from other methods at the descriptive stage of the results, notably through the readability of its decision rules. Segmentation can be viewed, on the one hand, as an exploratory and descriptive method for summarising and structuring a set of multidimensional observations in the form of a binary tree and, on the other hand, as a decision and inference tool aimed at producing a classification rule for objects belonging to a partition known a priori. In the decision phase, segmentation draws on a set of statistical and probabilistic tools (Bayesian theory, sampling techniques, parameter estimation). In practice, recent work on segmentation has produced exploratory and decision-oriented algorithms that are often reliable and efficient, and whose production rules are easily interpretable by non-statisticians. Numerous applications exist in fields such as medicine, biology and pattern recognition. This thesis focuses on the Kolmogorov-Smirnov criterion, one of the segmentation tools for quantitative variables. Several simulation studies have reached positive conclusions about its strong discriminating power as well as its robustness and its asymptotic efficiency in the Bayes sense. The first part of this work is devoted to extending this criterion to qualitative variables and to its asymptotic properties. The second part deals with reducing the exponentially complex search for a globally optimal solution to a polynomial complexity of degree three. The final part addresses the implementation of this criterion and its integration into the Sicla software.
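As a rough illustration of the splitting criterion described in this abstract (a minimal sketch with synthetic data, not the thesis's algorithm or its extension to qualitative variables): the Kolmogorov-Smirnov score of a candidate split on a quantitative variable is the largest gap between the two class-conditional empirical distribution functions, and the best split maximises that gap.

import numpy as np

def ks_split(x, y):
    """Score candidate thresholds on a quantitative variable x for a binary
    target y using the Kolmogorov-Smirnov distance between the two
    class-conditional empirical CDFs; return (best_threshold, ks_value)."""
    x0, x1 = np.sort(x[y == 0]), np.sort(x[y == 1])
    best_t, best_ks = None, -1.0
    for t in np.unique(x):
        f0 = np.searchsorted(x0, t, side="right") / len(x0)  # empirical CDF of class 0 at t
        f1 = np.searchsorted(x1, t, side="right") / len(x1)  # empirical CDF of class 1 at t
        if abs(f0 - f1) > best_ks:
            best_t, best_ks = t, abs(f0 - f1)
    return best_t, best_ks

# toy usage: the chosen split lands between the two class distributions
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])
print(ks_split(x, y))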
2

Ghandi, Sanaa. "Analysis of network delay measurements : Data mining methods for completion and segmentation". Electronic Thesis or Diss., Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2023. http://www.theses.fr/2023IMTA0382.

Abstract:
The exponential growth of the Internet requires regular monitoring of network metrics. This thesis focuses on round-trip delays and on addressing the problems of missing data and multivariate segmentation. The first contribution includes the orchestration of delay measurement campaigns, as well as the development of a simulator that generates end-to-end delay traces. The second contribution is the introduction of two missing-data completion methods: the first is based on non-negative matrix factorization, while the second uses neural collaborative filtering. Tested on synthetic and real data, these methods demonstrate their efficiency and accuracy. The third contribution involves multivariate delay segmentation. This approach is based on hierarchical clustering and is implemented in two stages. First, the delay time series are grouped so that, within the same group, series show similar and synchronous variations and trends. Next, the multivariate segmentation step jointly segments the series within each group, using hierarchical clustering followed by post-processing with the Viterbi algorithm to smooth the segmentation result. This method was tested on real delay traces from two major events affecting two Internet Exchange Points (IXPs). The results show that it approaches the state of the art in segmentation while significantly reducing computation time and cost.
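A minimal sketch of the first, grouping stage described above, with synthetic data and assumed array shapes (the correlation-based distance is an assumed choice, and the joint segmentation and Viterbi smoothing steps are not reproduced): delay series are grouped by hierarchical clustering so that series in the same group vary synchronously.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def group_delay_series(delays, n_groups=3):
    """delays: array of shape (n_series, n_timesteps) of round-trip delays.
    Returns a group label per series, using average-linkage hierarchical
    clustering on a correlation distance (1 - Pearson correlation)."""
    dist = pdist(delays, metric="correlation")      # condensed pairwise distances
    tree = linkage(dist, method="average")          # hierarchical clustering
    return fcluster(tree, t=n_groups, criterion="maxclust")

# toy usage with two synthetic behaviours
rng = np.random.default_rng(1)
base_a = np.sin(np.linspace(0, 6, 500))
base_b = np.cumsum(rng.normal(size=500)) * 0.05
delays = np.vstack([base_a + rng.normal(0, 0.1, 500) for _ in range(5)] +
                   [base_b + rng.normal(0, 0.1, 500) for _ in range(5)])
print(group_delay_series(delays, n_groups=2))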
3

Rzadca, Mark C. "Multivariate granulometry and its application to texture segmentation /". Online version of thesis, 1994. http://hdl.handle.net/1850/12200.

4

Templeton, William James. "Consumer interests as market segmentation variables". Thesis, London Business School (University of London), 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.312926.

5

Rye, Morten Beck. "Image segmentation and multivariate analysis in two-dimensional gel electrophoresis". Doctoral thesis, Norwegian University of Science and Technology, Department of Chemistry, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1744.

Abstract:

The topic of this thesis is data analysis on images from two-dimensional electrophoretic gels. Because of the complexity of these images, there are numerous steps and approaches to such an analysis, and no "golden standard" has yet been established on how to produce the desired output. This thesis focuses on two essential fields of 2D-gel analysis: registration of images by segmentation and protein-spot identification, and data analysis of the output of such a registration by multivariate methods. Image segmentation is mainly concerned with the task of identifying individual protein spots in a gel image. This has generally been the natural starting point of all methods and procedures developed since the introduction of 2D gels in the mid-seventies, simply because it best reproduces the results created by a human analyst, who manually identifies protein-spot entities. The amount of data produced in a 2D-gel experiment can be quite large, especially with multiple gels, where the human analyst depends on additional statistical data-analytical tools to produce results. Because of the correlated nature of most gel data, analysis by multivariate methods is a natural choice and is therefore adopted in this thesis. The goal of this thesis is to introduce the above-mentioned procedures at stages in the analysis pipeline where they are not yet fully exploited, rather than to improve already existing algorithms. In this way, new insight and ideas on how to handle data from 2D-gel experiments are achieved. The thesis starts with a review of segmentation methodology and introduces a selected procedure used to identify protein spots throughout. Output from the segmentation is then used to create a multivariate spot-filtering model, which aims to separate protein spots from the noise and artefacts that often create problems in 2D-gel analysis. Lately, the use of common spot boundaries across multiple gels has been the method of choice when gels are analysed. How such boundaries should be defined is an important subject of discussion, and thus a new method for defining common boundaries, based on the individual segmentation of each gel, is introduced. Segmentation may be a natural starting point when gels are analysed, but it is not necessarily the most correct one. Often the introduction of fixed spot entities imposes restrictions on the data that cause problems at later stages in the analysis. Analysing pixels from multiple gels directly has no such restrictions, and it is shown in this thesis that such an analysis, based on multivariate methods, can produce very useful results. It can also give insight into the data that is difficult to achieve with the spot-boundary approach. Finally, an improved pixel-based approach is introduced, where a less restricted segmentation is used to reduce and concentrate the amount of data analysed, improving the final output.
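As a rough sketch of the pixel-based multivariate analysis mentioned above (illustrative only, with assumed array shapes; not the thesis's pipeline): co-registered gel images can be stacked into a pixels-by-gels matrix and decomposed with PCA, so that score images highlight coherent intensity patterns without prior spot segmentation.

import numpy as np
from sklearn.decomposition import PCA

def pixel_pca(gel_stack, n_components=3):
    """gel_stack: array (n_gels, height, width) of co-registered gel images.
    Treat every pixel as an observation described by its intensities across
    gels, run PCA, and return score images of shape (n_components, height, width)."""
    n_gels, h, w = gel_stack.shape
    pixels = gel_stack.reshape(n_gels, h * w).T          # (n_pixels, n_gels)
    scores = PCA(n_components=n_components).fit_transform(pixels)
    return scores.T.reshape(n_components, h, w)

# toy usage: 6 synthetic "gels" sharing one spot pattern plus noise
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:64, 0:64]
spot = np.exp(-((xx - 40) ** 2 + (yy - 20) ** 2) / 30.0)
gels = np.stack([spot * (1 + 0.3 * g) + rng.normal(0, 0.05, (64, 64)) for g in range(6)])
print(pixel_pca(gels).shape)   # (3, 64, 64)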

6

Lu, Jiang. "Transforms for multivariate classification and application in tissue image segmentation /". free to MU campus, to others for purchase, 2002. http://wwwlib.umi.com/cr/mo/fullcit?p3052195.

7

Hosseini-Chaleshtari, Jamshid. "Segment Congruence Analysis: An Information Theoretic Approach". PDXScholar, 1987. https://pdxscholar.library.pdx.edu/open_access_etds/797.

Abstract:
When there are several possible segmentation variables, marketers must investigate the ramifications of their potential interactions. These include their mutual association, the identification of the best (the distinguished) segmentation variable and its predictability by a set of descriptor variables, and the structure of the multivariate system(s) obtained from the segmentation and descriptor variables. This procedure has been defined as segment congruence analysis (SCA). This study utilizes the information theoretic and the log-linear/logit approaches to address a variety of research questions in segment congruence analysis. It is shown that the information theoretic approach expands the scope of SCA and offers some advantages over traditional methods. Data obtained from a survey conducted by the Bonneville Power Administration (BPA) and Northwest utilities is used to demonstrate the information theoretic and the log-linear/logit approaches and compare these two methods. The survey was designed to obtain information on energy consumption habits, attitudes toward selected energy issues, and the conservation measures utilized by the residents in the Pacific Northwest. The analyses are performed in two distinct phases. Phase I includes assessment of mutual association among segmentation variables and four methods (based on different information theoretic functions) for identifying candidates for the distinguished variable. Phase II addresses the selection and analysis of the distinguished variable. This variable is selected either a priori or by assessment of its predictability from (segmentation or exogenous) descriptor variables. The relations between the distinguished variable and the descriptor variables are further analyzed by examining the predictability issue in greater detail and by evaluating structural models of the multivariate systems. The methodological conclusions of this study are that the information theoretic and log-linear methods have deep similarities. The analyses produced intuitively plausible results. In Phase I, energy related awareness, behavior, perceptions, attitudes, and electricity consumption were identified as candidate segmentation variables. In Phase II, using exogenous descriptor variables, electricity consumption was selected as the distinguished variable. The analysis of this variable indicated that the demographic factors, type of dwelling, and geoclimatic environment are among the most important determinants of electricity consumption.
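One elementary building block of the information-theoretic analysis described above is the mutual information between pairs of categorical segmentation variables; a toy sketch with made-up variables (not the BPA survey data) is:

import numpy as np
from sklearn.metrics import mutual_info_score

# Hypothetical categorical segmentation variables for 1000 respondents.
rng = np.random.default_rng(3)
dwelling = rng.integers(0, 3, 1000)                       # e.g. house / apartment / other
consumption = (dwelling + rng.integers(0, 2, 1000)) % 3   # weakly tied to dwelling type

# Mutual information in nats: 0 means the variables are independent; larger
# values mean the two candidate segmentation variables share more information.
print(mutual_info_score(dwelling, consumption))
print(mutual_info_score(dwelling, rng.integers(0, 3, 1000)))  # ~0 for an unrelated variable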
8

On, Vu Ngoc Minh. "A new minimum barrier distance for multivariate images with applications to salient object detection, shortest path finding, and segmentation". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS454.

Abstract:
Hierarchical image representations are widely used in image processing to model the content of an image in a multi-scale structure. A well-known hierarchical representation is the tree of shapes (ToS), which encodes the inclusion relationship between connected components from different threshold levels. This kind of tree is self-dual, contrast-change invariant and popular in the computer vision community. In our work, we use this representation to compute a new distance belonging to the mathematical morphology domain. Distance transforms and the saliency maps they induce are widely used in image processing, computer vision and pattern recognition. One of the most commonly used distance transforms is the geodesic one. Unfortunately, this distance does not always achieve satisfying results on noisy or blurred images. Recently, a new pseudo-distance, called the minimum barrier distance (MBD), more robust to pixel fluctuation, has been introduced. Some years later, Géraud et al. proposed a good and fast-to-compute approximation of this distance: the Dahu pseudo-distance. Since this distance was initially developed for grayscale images, we propose here an extension of this transform to multivariate images, which we call the vectorial Dahu pseudo-distance. This new distance is easily and efficiently computed thanks to the multivariate tree of shapes (MToS). We propose an efficient way to compute this distance and its deduced saliency map in this thesis. We also investigate the properties of this distance in dealing with noise and blur in the image, and show that it is robust to pixel fluctuations. To validate this new distance, we provide benchmarks demonstrating how the vectorial Dahu pseudo-distance is more robust and competitive compared with other MBD-based distances. This distance is promising for salient object detection, shortest-path finding and object segmentation. Moreover, we apply this distance to detect documents in videos. Our method is a region-based approach which relies on the visual saliency deduced from the Dahu pseudo-distance. We show that the performance of our method is competitive with state-of-the-art methods on the ICDAR Smartdoc 2015 Competition dataset.
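For orientation, the minimum barrier distance underlying the Dahu pseudo-distance can be sketched as follows: the cost of a path is the range (maximum minus minimum) of the grey levels along it, and the distance is the smallest such range over all paths. The sketch below is a greedy Dijkstra-style approximation on a grayscale image, not the tree-of-shapes computation used in the thesis.

import heapq
import numpy as np

def approx_mbd(image, seeds):
    """Greedy Dijkstra-like approximation of the minimum barrier distance on a
    2-D grayscale image: the cost of a path is max(F) - min(F) along it, and
    each pixel keeps the smallest barrier found so far."""
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    lo = image.astype(float).copy()    # running path minimum per pixel
    hi = image.astype(float).copy()    # running path maximum per pixel
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                new_hi = max(hi[r, c], image[rr, cc])
                new_lo = min(lo[r, c], image[rr, cc])
                barrier = new_hi - new_lo
                if barrier < dist[rr, cc]:
                    dist[rr, cc], hi[rr, cc], lo[rr, cc] = barrier, new_hi, new_lo
                    heapq.heappush(heap, (barrier, rr, cc))
    return dist

img = np.array([[0, 0, 5, 0], [0, 1, 5, 0], [0, 0, 5, 0]], dtype=float)
print(approx_mbd(img, [(0, 0)]))   # crossing the bright column costs a barrier of 5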
9

Liggett, Rachel Esther. "Multivariate Approaches for Relating Consumer Preference to Sensory Characteristics". The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282868174.

10

Johansson, David. "Automatic Device Segmentation for Conversion Optimization : A Forecasting Approach to Device Clustering Based on Multivariate Time Series Data from the Food and Beverage Industry". Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-81476.

Abstract:
This thesis investigates a forecasting approach to clustering device behavior based on multivariate time series data. Identifying an equitable selection to use in conversion optimization testing is a difficult task. As devices are able to collect larger amounts of data about their behavior, it becomes increasingly difficult to rely on manual selection of segments in traditional conversion optimization systems. Forecasting the segments can be done automatically to reduce the time spent on testing while increasing test accuracy and relevance. The thesis evaluates the results of using multiple forecasting models, clustering models and data pre-processing techniques. Under optimal conditions, the proposed model achieves an average accuracy of 97.7%.
11

Motta, Sergio Luis Stirbolov. "Estudo sobre segmentação de mercado consumidor por atitude e atributos ecológicos de produtos". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/12/12139/tde-30062009-161308/.

Abstract:
This study intended to verify whether the attitude variable, in conjunction with the ecological attributes of consumer goods, can be used as a basis for market segmentation. To address this proposition, the available theory on the related topics was first reviewed and served as the basis for the field research. The research was quantitative and descriptive, using a field-study method. A non-probabilistic convenience sample of students and teachers at a university in the city of São Paulo expressed their opinions through self-administration of a structured and disguised questionnaire. The data analysis applied three multivariate techniques: factor analysis, cluster analysis and correspondence analysis. The first was successful, as it was possible to reduce the set of variables to two factors; the factor scores served as inputs to the cluster analysis. This technique was also successful, because the majority of simulations combining similarity measures and agglomeration methods produced clusters, which allowed a favourable answer to the research problem; one of the combinations, squared Euclidean distance with within-groups linkage, was considered the most satisfactory and used as the basis for the next technique, correspondence analysis. Correspondence analysis was applied to profile the clusters and give managerial relevance to this work; it was partly successful, as it could not be used for some variables and was replaced by cross tabulation. The final considerations confirmed the researcher's expectation regarding the possibility of obtaining clusters by using the attitude variable and the ecological attributes of products at the same time.
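As a rough sketch of the factor-then-cluster sequence described above (hypothetical survey data; K-means stands in for the hierarchical procedures compared in the thesis, and the correspondence-analysis profiling step is omitted):

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
# hypothetical survey: 300 respondents answering 8 Likert-type items driven by
# two latent dimensions (ecological attitude, attribute importance)
latent = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 8))
items = latent @ loadings + rng.normal(0, 0.5, (300, 8))

scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(items)     # stage 1
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)   # stage 2
print(np.bincount(clusters))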
12

Lung-Yut-Fong, Alexandre. "Détection de ruptures pour les signaux multidimensionnels. Application à la détection d'anomalies dans les réseaux". Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00675543.

Abstract:
The goal of this thesis is to propose non-parametric methods for retrospective change-point detection. The main application of this work is the detection of attacks in computer networks from data collected by several probes spread across the network. We first propose a three-step method for decentralized anomaly detection in which probes that only see part of the network traffic cooperate. One advantage of this approach is its ability to handle massive data streams, thanks to a record-based filtering step. Local processing is performed in each probe, and a synthesis is carried out at a fusion center. Detection relies on a rank test inspired by the Wilcoxon rank test and extended to censored data. In a second part, we propose to exploit the dependence relations between the data collected by the different sensors in order to improve detection performance. We thus propose a non-parametric method for detecting one or several change points in a multivariate signal. This method builds on a homogeneity test that uses a multivariate rank test. We describe the asymptotic properties of this test as well as its performance on various data sets (bioinformatics, econometrics, network data). The proposed method gives very good results, in particular when the data distribution is atypical (for example in the presence of outliers).
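A toy illustration of a marginal-rank change-point scan in the spirit of the multivariate rank test described above (a simplified statistic on synthetic data, not the exact test proposed in the thesis):

import numpy as np
from scipy.stats import rankdata

def rank_scan(X):
    """X: array (n, d). For every candidate change time t, sum over the d
    coordinates a squared, standardised Wilcoxon/Mann-Whitney-type rank
    statistic comparing X[:t] with X[t:]; return (best_t, scan_values)."""
    n, d = X.shape
    ranks = np.apply_along_axis(rankdata, 0, X)          # marginal ranks, shape (n, d)
    scan = np.zeros(n)
    for t in range(1, n):
        # centred rank sum of the first segment, per coordinate
        u = ranks[:t].sum(axis=0) - t * (n + 1) / 2.0
        var = t * (n - t) * (n + 1) / 12.0               # rank-sum variance (no ties)
        scan[t] = np.sum(u ** 2 / var)
    return int(np.argmax(scan)), scan

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal(0.8, 1, (100, 3))])
t_hat, _ = rank_scan(X)
print(t_hat)   # should be close to the true change point at 150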
13

Rosa, Marlise. "SEGMENTAÇÃO DE GRÃOS DE HEMATITA EM AMOSTRAS DE MINÉRIO DE FERRO POR ANÁLISE DE IMAGENS DE LUZ POLARIZADA". Universidade Federal de Santa Maria, 2008. http://repositorio.ufsm.br/handle/1/8064.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The aim of the present work is to classify co-registered pixels of stacks of polarized light images of iron ore into their respective crystalline grains or pores, thus producing grain segmented images that can be analyzed by their size, shape and orientation distributions, as well as their porosity and the size and morphology of the pores. Polished sections of samples of hematite-rich ore are digitally imaged in a rotating polarizer microscope at varying plane-polarization angles. An image stack is produced for every field of view, where each image corresponds to a polarizer position. Any point in the sample is registered to the same pixel coordinates at all images in the stack. The resulting set of intensities for each pixel is directly related to the orientation of the crystal sampled at the corresponding position. Multivariate analysis of the sets of intensities leads to the classification of the pixels into their respective crystalline grains. Individual hematite grains of iron ore, as well as their pores, are segmented. The results are compared to those obtained by visual point counting methods.
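A schematic of the multivariate pixel classification idea described above (illustrative only, with synthetic intensity profiles; the actual work also compares against visual point counting): each co-registered pixel is described by its intensity-versus-angle profile, and the profiles are clustered so that pixels from similarly oriented crystals fall together.

import numpy as np
from sklearn.cluster import KMeans

def cluster_pixel_profiles(stack, n_clusters=4):
    """stack: array (n_angles, height, width), one image per polarizer angle.
    Cluster the per-pixel intensity-versus-angle profiles and return a label
    image of shape (height, width)."""
    n_angles, h, w = stack.shape
    profiles = stack.reshape(n_angles, h * w).T            # (n_pixels, n_angles)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(profiles)
    return labels.reshape(h, w)

# toy usage: two "grains" whose intensity varies differently with the polarizer angle
angles = np.linspace(0, np.pi, 8)
left = np.cos(2 * angles) ** 2                             # grain orientation A
right = np.cos(2 * angles + 0.9) ** 2                      # grain orientation B
stack = np.zeros((8, 32, 32))
stack[:, :, :16] = left[:, None, None]
stack[:, :, 16:] = right[:, None, None]
print(np.unique(cluster_pixel_profiles(stack, n_clusters=2)))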
14

Costa, Wilian França. "Segmentação multiresolução variográfica ótima". Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-15122016-082121/.

Abstract:
Information extraction from data derived from remote sensing and other geotechnologies is important for many activities, e.g., the identification of environmental requirements, the definition of conservation areas, the planning and verification of correct land use, the management of natural resources, the definition of protected ecosystem areas, and the spatial planning of agricultural input application and replenishment. This thesis presents a parameter optimisation method for the Multiresolution segmentation algorithm. The goal of the method is to obtain maximum-sized segments within the established heterogeneity limits. The method makes use of variography, a geostatistical tool that gives a measure of how much two samples will vary in a region depending on the distance between them. The variogram nugget effect is measured for each attribute layer and then averaged to obtain the optimal value for spatial segmentation with the Multiresolution algorithm. The segments thus obtained are superimposed on a regularly spaced sampled grid of georeferenced data to divide the region under study. To show the usefulness of this method, the following three case studies were performed: (i) the delineation of precision-farming management zones; (ii) the selection of regions for environmental degradation estimates in the neighbourhood of species occurrence points; and (iii) the identification of bioclimatic regions that are present in biodiversity conservation units.
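A minimal sketch of the variogram ingredient used above (illustrative, on a synthetic field; the coupling with the Multiresolution segmenter is not reproduced): compute the empirical semivariogram of one attribute layer on a regular grid and read the nugget as a crude extrapolation of the first lags towards distance zero.

import numpy as np

def empirical_variogram(z, max_lag=10):
    """z: 2-D array of an attribute sampled on a regular grid.
    Returns lags 1..max_lag and the semivariance gamma(h) along rows/columns:
    gamma(h) = mean of 0.5 * (z(s) - z(s + h))^2 over all pairs at lag h."""
    lags = np.arange(1, max_lag + 1)
    gamma = []
    for h in lags:
        d_row = z[h:, :] - z[:-h, :]           # pairs separated by h along rows
        d_col = z[:, h:] - z[:, :-h]           # pairs separated by h along columns
        sq = np.concatenate([d_row.ravel() ** 2, d_col.ravel() ** 2])
        gamma.append(0.5 * sq.mean())
    return lags, np.array(gamma)

rng = np.random.default_rng(5)
field = np.cumsum(np.cumsum(rng.normal(size=(60, 60)), axis=0), axis=1) * 0.01
field += rng.normal(0, 0.2, field.shape)       # measurement noise -> nugget effect
lags, gamma = empirical_variogram(field)
# crude nugget estimate: linear extrapolation of the first two lags to h = 0
nugget = gamma[0] - (gamma[1] - gamma[0])
print(lags, np.round(gamma, 3), round(float(nugget), 3))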
15

Barrera, Núñez Víctor Augusto. "Automatic diagnosis of voltage disturbances in power distribution networks". Doctoral thesis, Universitat de Girona, 2012. http://hdl.handle.net/10803/80944.

Abstract:
This thesis proposes a framework for identifying the root-cause of a voltage disturbance, as well as, its source location (upstream/downstream) from the monitoring place. The framework works with three-phase voltage and current waveforms collected in radial distribution networks without distributed generation. Real-world and synthetic waveforms are used to test it. The framework involves features that are conceived based on electrical principles, and assuming some hypothesis on the analyzed phenomena. Features considered are based on waveforms and timestamp information. Multivariate analysis of variance and rule induction algorithms are applied to assess the amount of meaningful information explained by each feature, according to the root-cause of the disturbance and its source location. The obtained classification rates show that the proposed framework could be used for automatic diagnosis of voltage disturbances collected in radial distribution networks. Furthermore, the diagnostic results can be subsequently used for supporting power network operation, maintenance and planning.
16

Saker, Halima. "Segmentation of Heterogeneous Multivariate Genome Annotation Data". 2021. https://ul.qucosa.de/id/qucosa%3A75914.

Abstract:
Due to the potential impact of next-generation sequencing (NGS), we have seen a rapid increase in genomic information and annotation information that can be naturally mapped to genomic locations. In cancer research, for example, there are significant efforts to chart DNA methylation at single-nucleotide resolution. The NIH Roadmap Epigenomics Project, on the other hand, has set out to chart a large number of different histone modifications. Throughout the last few years, a very diverse set of aspects has become the aim of large-scale experiments with a genome-wide readout. The identification of functional units of the genomic DNA is therefore a significant and essential challenge, which has motivated us to implement multi-dimensional segmentation approaches that accommodate gene variety and genome heterogeneity. The segmentation of multivariate genomic, epigenomic, and transcriptomic data from multiple time points, tissues, and cell types, in order to compare changes in genomic organization and identify common elements, forms the headline of our research. Next-generation sequencing offers rich material used in bioinformatics research to explore molecular functions, disease causes, and related questions. Rapid advances in technology have also led to a proliferation of experiment types. Although these experiments share next-generation sequencing as the readout, they produce signals with entirely different inherent resolutions, ranging from precise transcript structures at single-nucleotide resolution, to pull-down and enrichment-based protocols with resolutions on the order of 100 nt, to chromosome conformation data that are only accurate at kilobase resolution. The main goal of the dissertation project is therefore to design, implement, and test novel segmentation algorithms that work on one- and multi-dimensional data and can accommodate data of different types and resolutions. The target data in this project are multivariate genetic, epigenetic, transcriptomic, and proteomic data, because these datasets can change under the effect of several conditions such as chemical, genetic, and epigenetic modifications. A promising approach towards this end is to identify intervals of the genomic DNA that behave coherently across multiple conditions and tissues, which can be defined as intervals on which all measured quantities are constant within each experiment. A naive approach would take each dataset in isolation and estimate intervals on which the signal at hand is constant. Another approach takes all datasets at once as input, without resorting to one-dimensional segmentation. Once implemented, the algorithms should be applied to heterogeneous genomic, transcriptomic, proteomic, and epigenomic data; the aim here is to draw and improve the map of functionally coherent segments of a genome. Current approaches either focus on individual datasets, as in the case of tiling-array transcriptomics data, or on the analysis of comparable experiments, such as ChIP-seq data for various histone modifications. The simplest sub-problem in segmentation is to decide whether two adjacent intervals should form two distinct segments or whether they should be combined into a single one. We have to find out how this should be done in multi-dimensional segmentation; in one dimension, this is relatively well known. This leads to a segmentation of the genome with respect to the particular dataset; the intersection of segmentations for different datasets can then identify the DNA elements.
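The merge decision mentioned at the end of the abstract, whether two adjacent intervals should stay separate or be fused, can be illustrated with a generic cost comparison (a sketch on synthetic data, not the algorithm developed in the thesis): merge whenever the increase in squared error from fitting one constant level instead of two stays below a penalty.

import numpy as np

def should_merge(left, right, penalty):
    """left, right: 1-D signal values of two adjacent intervals.
    Compare fitting one constant per interval against one constant over the
    union, and merge if the extra residual cost is smaller than the penalty
    for keeping an additional segment (a BIC-style trade-off)."""
    both = np.concatenate([left, right])
    cost_split = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
    cost_merged = ((both - both.mean()) ** 2).sum()
    return (cost_merged - cost_split) < penalty

rng = np.random.default_rng(6)
a = rng.normal(0.0, 1.0, 200)
b_same = rng.normal(0.0, 1.0, 200)
b_diff = rng.normal(2.0, 1.0, 200)
pen = 2.0 * np.log(400.0) * 1.0          # penalty scaled by log(n) and the noise variance
print(should_merge(a, b_same, pen))      # True: the intervals behave coherently
print(should_merge(a, b_diff, pen))      # False: a change point separates them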
17

Ungerer, Leona M. "Values as multivariate consumer market segmentation discriminators : a subjective well-being approach". Thesis, 2009. http://hdl.handle.net/10500/3188.

Abstract:
The Living Standards Measure (LSM), a South African marketing segmentation method, is a wealth measure based on standard of living. This research study investigates whether a rationale can be found for the inclusion of value-related variables in this type of multivariate segmentation approach. Schwartz’s (1992; 2006) values model was used to operationalise personal values and individual-level culture – focusing on two of its dimensions, ideocentrism and allocentrism. The current positive psychology research trend manifests in the inclusion of subjective wellbeing (SWB), as measured by satisfaction with life (SWL). The primary objective of this research was to investigate at individual (and not group or societal) level whether values and SWL can be used to discriminate among multivariate consumer segments. Data were collected by means of a survey from a nationally representative sample (n = 2566) of purchase decision-makers (PDMs). The measurement instruments used were the Portrait Values Questionnaire (PVQ) and the Satisfaction with Life Scale (SWLS). A multi-group confirmatory factor analysis (MGCFA) was used to assess the psychometric properties and test the equivalence of the scales across cultural groups. MGCFA was also used to test for differences across LSM groups on the PVQ and SWLS. Centred value scores were used to test for differences between LSM groups in terms of their values and SWL, using MANOVA. The findings supported Schwartz’s theory of basic human values, and small differences were found in the PVQ values between LSM groups using the MGCFA approach. MANOVA analyses showed stronger differences across LSM groups. PDMs in the higher LSM segments were more satisfied with their lives. No significant relationships between values and SWL were found, and the effect of individual-level culture, as a higher-order dimension of four values, showed a small but significant effect on SWL.
Psychology
D. Litt. et Phil. (Psychology)
18

Amornnikun, Patipharn, and Patipharn Amornnikun. "Metaheuristic-Based Possibilistic Multivariate Fuzzy Weighted C-Means Algorithms for Market Segmentation". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/swmwcm.

19

Liu, Hsiu-Wen. "Hierarchical Bayes Conjoint Analysis with Multivariate Mixture of Normal Heterogeneity: Implications for Market Segmentation". 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-2304200709150500.

20

Liu, Hsiu-Wen, and 劉秀雯. "Hierarchical Bayes Conjoint Analysis with Multivariate Mixture of Normal Heterogeneity: Implications for Market Segmentation". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/47042305067659434744.

Abstract:
Doctoral dissertation, National Taiwan University, Graduate Institute of International Business, academic year 95 (2006-2007).
Conjoint analysis, designed to estimate individual preferences and the relative competition among brands, has become one of the most widely used quantitative methods in marketing research. Hierarchical Bayes inference is one of the most favored approaches because of its superiority in recovering individual part-worths. However, the current application of hierarchical Bayes models is not without drawbacks, because consumer heterogeneity is assumed to follow a multivariate normal distribution. The normal distribution is unimodal, symmetric and inverted-U shaped, which might lead to bias or limitations in part-worth density inference. Alternatively, a mixture of normal distributions is a more flexible and general approach to modeling consumer heterogeneity. It is especially suitable for inferring an unknown consumer heterogeneity density, since it can model symmetric or asymmetric distributions with multiple modes or heavy tails, and the normal distribution is just a special case of the mixture of normals. In this study, we develop Bayesian inference for a multivariate mixture of normal distributions and apply it within different hierarchical Bayes models as the assumption describing consumer heterogeneity. Two approaches in recent hierarchical Bayes conjoint analysis are studied: continuous-response conjoint analysis and discrete-choice conjoint analysis. Finally, the mixture-of-normals assumption for consumer heterogeneity is also favored by the marketing community because it corresponds closely to the strategic implications of market segmentation. However, a recent argument regarding segmentation encourages a focus on extreme rather than homogeneous segments. The author therefore further investigates these different arguments and explains why the modeling framework proposed in this study is flexible enough to provide information for targeting the advantages of either the extreme-segment or the cluster-based approach. It is expected to provide new insights and strategic implications for market segmentation.
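The heterogeneity assumption argued for above can be written compactly: each respondent's part-worth vector is drawn from a finite mixture of multivariate normals rather than from a single normal. A small simulation sketch (purely illustrative; it is not the estimation procedure of the dissertation):

import numpy as np

def draw_partworths(n_respondents, weights, means, covs, rng):
    """Draw individual part-worth vectors beta_i from a mixture of multivariate
    normals: component k is chosen with probability weights[k], then
    beta_i ~ N(means[k], covs[k]). Returns (betas, component labels)."""
    k = rng.choice(len(weights), size=n_respondents, p=weights)
    betas = np.array([rng.multivariate_normal(means[j], covs[j]) for j in k])
    return betas, k

rng = np.random.default_rng(7)
weights = [0.6, 0.4]                                  # two latent segments
means = [np.array([1.0, -0.5, 0.2]), np.array([-1.0, 0.8, 1.5])]
covs = [0.2 * np.eye(3), 0.3 * np.eye(3)]
betas, segment = draw_partworths(500, weights, means, covs, rng)
# a single normal would blur these two modes into one unimodal density
print(betas[segment == 0].mean(axis=0), betas[segment == 1].mean(axis=0))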
21

胡承民. "Market Segmentation through Self-Organizing Map and Multivariate Cluster Analysis - A Case Study of 3C Stores". Thesis, 1998. http://ndltd.ncl.edu.tw/handle/20087587313118252913.

Abstract:
Master's thesis, I-Shou University, Graduate Institute of Management Science, academic year 86 (1997-1998).
Market segmentation, which is the foundation for selecting marketing strategies, is an indispensable procedure in analyzing market structure and one of the most important topics in marketing. Conventional research has usually employed statistically oriented cluster-analysis approaches. Recently, owing to their high performance in engineering, artificial neural networks have also been applied in management, but most of this research uses supervised rather than unsupervised networks. This study is therefore dedicated to applying an unsupervised neural network, the Self-Organizing Map, to market segmentation. Since there are still no suitable criteria for selecting a cluster-analysis approach, this study uses simulated data with a known grouping as the basis for comparing three clustering approaches: (1) traditional two-stage cluster analysis, (2) the Self-Organizing Map neural network, and (3) a proposed two-stage cluster analysis that integrates the Self-Organizing Map with K-means. The simulation results showed that the traditional and the proposed two-stage approaches have similar performance and are better than the Self-Organizing Map alone. To further test the proposed approach, a real-life problem, market segmentation for 3C stores, is employed. First, factor analysis (principal component analysis) reduced the data to six factors. Then the two approaches mentioned above clustered the customers into four distinct groups after examining three alternatives of three, four and five groups. The evaluation results showed that the proposed approach performs better according to Wilks' Lambda and discriminant analysis.
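A compact sketch of the proposed two-stage idea (a toy re-implementation with synthetic data, not the thesis's code): a small self-organizing map first compresses the respondents onto prototype vectors, K-means then clusters the prototypes, and each respondent inherits the segment of its best matching unit.

import numpy as np
from sklearn.cluster import KMeans

def train_som(data, grid=(6, 6), n_iter=2000, seed=0):
    """Very small self-organizing map: returns prototype weights of shape
    (grid[0] * grid[1], n_features), trained with a shrinking Gaussian
    neighbourhood and learning rate."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    w = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(n_iter):
        x = data[rng.integers(0, len(data))]
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))              # best matching unit
        lr = 0.5 * (1 - t / n_iter) + 0.01
        sigma = max(grid) / 2.0 * (1 - t / n_iter) + 0.5
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)
    return w

def two_stage_segments(data, n_segments=4):
    """Stage 1: compress the data onto SOM prototypes; stage 2: K-means on the
    prototypes; each observation inherits the cluster of its best matching unit."""
    w = train_som(data)
    proto_labels = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(w)
    bmus = np.argmin(((data[:, None, :] - w[None, :, :]) ** 2).sum(axis=2), axis=1)
    return proto_labels[bmus]

rng = np.random.default_rng(8)
data = np.vstack([rng.normal(m, 0.3, (100, 6)) for m in (0.0, 1.5, 3.0, 4.5)])
print(np.bincount(two_stage_segments(data, n_segments=4)))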
