Dissertations / Theses on the topic 'Image partition'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 47 dissertations / theses for your research on the topic 'Image partition.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Bernat, Andrew. "Which partition scheme for what image?, partitioned iterated function systems for fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65602.pdf.
Full text
Lu, Huihai. "Evolutionary image analysis in binary partition trees." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438156.
Full text
Valero Valbuena, Silvia. "Hyperspectral image representation and processing with binary partition trees." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/130832.
Full text
Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." University of Sydney. School of Electrical and Information Engineering, 2005. http://hdl.handle.net/2123/699.
Full text
Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/699.
Full text
Cutolo, Alfredo. "Image partition and video segmentation using the Mumford-Shah functional." Doctoral thesis, Università degli studi di Salerno, 2012. http://hdl.handle.net/10556/280.
Full textThe aim of this Thesis is to present an image partition and video segmentation procedure, based on the minimization of a modified version of Mumford-Shah functional. The Mumford-Shah functional used for image partition has been then extended to develop a video segmentation procedure. Differently by the image processing, in video analysis besides the usual spatial connectivity of pixels (or regions) on each single frame, we have a natural notion of “temporal” connectivity between pixels (or regions) on consecutive frames given by the optical flow. In this case, it makes sense to extend the tree data structure used to model a single image with a graph data structure that allows to handle a video sequence. The video segmentation procedure is based on minimization of a modified version of a Mumford-Shah functional. In particular the functional used for image partition allows to merge neighboring regions with similar color without considering their movement. Our idea has been to merge neighboring regions with similar color and similar optical flow vector. Also in this case the minimization of Mumford-Shah functional can be very complex if we consider each possible combination of the graph nodes. This computation becomes easy to do if we take into account a hierarchy of partitions constructed starting by the nodes of the graph.[edited by author]
Sudirman. "Colour image coding indexing and retrieval using binary space partition tree." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275171.
Full text
Berry, Dominic William. "Adaptive phase measurements /." [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16247.pdf.
Full text
Kim, Il-Ryeol. "Wavelet domain partition-based signal processing with applications to image denoising and compression." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 2.98 Mb., 119 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3221054.
Full text
Gomila, Cristina. "Mise en correspondance de partitions en vue du suivi d'objets." Phd thesis, École Nationale Supérieure des Mines de Paris, 2001. http://pastel.archives-ouvertes.fr/pastel-00003272.
Full text
Cannon, Paul C. "Extending the information partition function : modeling interaction effects in highly multivariate, discrete data /." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2263.pdf.
Full text
Valero, Silvia. "Arbre de partition binaire : Un nouvel outil pour la représentation hiérarchique et l'analyse des images hyperspectrales." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00796108.
Full text
Golodetz, Stuart Michael. "Zipping and unzipping : the use of image partition forests in the analysis of abdominal CT scans." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558768.
Full text
Joder, Cyril. "Alignement temporel musique-sur-partition par modèles graphiques discriminatifs." Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00664260.
Full text
Zabiba, Mohammed. "Variational approximation of interface energies and applications." Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0419/document.
Full textMinimal partition problems consist in finding a partition of a domain into a given number of components in order to minimize a geometric criterion. In applicative fields such as image processing or continuum mechanics, it is standard to incorporate in this objective an interface energy that accounts for the lengths of the interfaces between components. The present work is focused on thetheoretical and numerical treatment of minimal partition problems with interface energies. The considered approach is based on a Gamma-convergence approximation and duality techniques
Green, Christopher Lee. "IP Algorithm Applied to Proteomics Data." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd618.pdf.
Full text
Kéchichian, Razmig. "Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.
Full text
Buard, Benjamin. "Contribution à la compréhension des signaux de fluxmétrie laser Doppler : traitement des signaux et interprétations physiologiques." Phd thesis, Université d'Angers, 2010. http://tel.archives-ouvertes.fr/tel-00584166.
Full text
Randrianasoa, Tianatahina Jimmy Francky. "Représentation d'images hiérarchique multi-critère." Thesis, Reims, 2017. http://www.theses.fr/2017REIMS040/document.
Full textSegmentation is a crucial task in image analysis. Novel acquisition devices bring new images with higher resolutions, containing more heterogeneous objects. It becomes also easier to get many images of an area from different sources. This phenomenon is encountered in many domains (e.g. remote sensing, medical imaging) making difficult the use of classical image segmentation methods. Hierarchical segmentation approaches provide solutions to such issues. Particularly, the Binary Partition Tree (BPT) is a hierarchical data-structure modeling an image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by merging progressively similar connected regions. However, the metric has to be carefully thought by the user and the handling of several images is generally dealt with by gathering multiple information provided by various spectral bands into a single metric. Our first contribution is a generalized framework for the BPT construction in a multi-feature way. It relies on a strategy setting up a consensus between many metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works were devoted to the evaluation of hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs relying both on intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss about the use of this evaluation framework both for evaluating the quality of a given BPT and for determining which BPT should be built for a given application. Experiments using satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation
Kong, Tian Fook. "Multilevel spectral clustering : graph partitions and image segmentation." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45275.
Full textIncludes bibliographical references (p. 145-146).
While spectral graph partitioning gives high-quality segmentation, segmenting large graphs by the spectral method is computationally expensive. Numerous multilevel graph partitioning algorithms have been proposed to reduce the segmentation time for the spectral partitioning of large graphs. However, the greedy local refinement used in these multilevel schemes tends to trap the partition in poor local minima. In this thesis, I develop a multilevel graph partitioning algorithm that incorporates the inverse powering method with greedy local refinement. This combination ensures that the partition quality of the multilevel method is as good as, if not better than, segmenting the large graph by the spectral method. In addition, I present a scheme to construct the adjacency matrix W and degree matrix D for the coarse graphs. The proposed multilevel graph partitioning algorithm is able to bisect a graph (k = 2) in significantly less time than segmenting the original graph without the multilevel implementation, while achieving the same normalized cut (Ncut) value. The starting eigenvector, obtained by solving a generalized eigenvalue problem on the coarsest graph, is close to the Fiedler vector of the original graph; hence, the inverse iteration needs only a few iterations to converge. In the k-way multilevel graph partition, the larger the graph, the greater the reduction in the time needed to segment it. For multilevel image segmentation, the multilevel scheme is able to give better segmentation than segmenting the original image, and it has a higher success rate in preserving the salient parts of an object. In this work, I also show that the Ncut value is not the ultimate yardstick of segmentation quality for an image: finding a partition with a lower Ncut value does not necessarily mean better segmentation quality. Segmenting large images by the multilevel method offers both speed and quality.
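The basic spectral step underlying the multilevel scheme is normalized-cut bisection. A minimal sketch using a dense eigensolver follows; the thesis instead obtains the starting vector on a coarsened graph and refines it by inverse powering, which this simplification omits:

```python
import numpy as np

def ncut_bisect(W):
    """Spectral bisection via the normalized-cut relaxation (Shi & Malik):
    take the second-smallest generalized eigenvector of (D - W) y = t D y,
    computed through the symmetric form D^{-1/2} (D - W) D^{-1/2}, and
    split the nodes by its sign. W must be a symmetric affinity matrix of
    a connected graph (all row sums positive)."""
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d)
    L_sym = (np.diag(d) - W) * d_isqrt[:, None] * d_isqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)      # eigenvalues in ascending order
    fiedler = d_isqrt * vecs[:, 1]          # map back: y = D^{-1/2} z
    return fiedler > 0
```

On two weakly connected clusters, the sign of the Fiedler vector recovers the two groups.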
by Tian Fook Kong.
S.M.
Valero Valbuena, Silvia. "Arbre de partition binaire : un nouvel outil pour la représentation hiérarchique et l’analyse des images hyperspectrales." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT123/document.
Full text
The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral Image Representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the Binary Partition Tree succeeds in presenting (i) the decomposition of the image in terms of coherent regions and (ii) the inclusion relations of the regions in the scene. Based on region-merging techniques, the construction of the BPT is investigated in this work by studying hyperspectral region models and the associated similarity metrics. As a matter of fact, the very high dimensionality and complexity of the data require the definition of specific region models and similarity measures. Once the BPT is constructed, the fixed tree structure allows efficient and advanced application-dependent techniques to be implemented on it. The application-dependent processing of the BPT is generally implemented through a specific pruning of the tree. Accordingly, some pruning techniques are proposed and discussed according to different applications. This PhD focuses in particular on segmentation, object detection and classification of hyperspectral imagery. Experimental results on various hyperspectral data sets demonstrate the interest and good performance of the BPT representation.
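The application-dependent pruning described above can be sketched as cutting the tree at the highest nodes satisfying a criterion; the `tree`/`keep` representation below is an assumption for illustration, not the thesis's data structure:

```python
def prune_bpt(tree, root, keep):
    """Prune a BPT: walk down from the root and stop at the highest
    nodes where the application-dependent predicate 'keep' holds (or at
    leaves). The returned node set is the extracted partition.
    tree maps each internal node to its (left, right) children."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        if keep(node) or node not in tree:
            out.append(node)      # leaf of the pruned tree: one region
        else:
            stack.extend(tree[node])
    return out
```

Different predicates (homogeneity, detection score, class purity) yield the different prunings the abstract mentions for segmentation, detection and classification.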
Wingate, David. "Solving Large MDPs Quickly with Partitioned Value Iteration." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd437.pdf.
Full text
Zanoguera Tous, Maria Fransisca. "Segmentation interactive d'images fixes et de séquences vidéo basée sur des hiérarchies de partitions." Phd thesis, École Nationale Supérieure des Mines de Paris, 2001. http://pastel.archives-ouvertes.fr/pastel-00003264.
Full text
Le Capitaine, Hoel. "Opérateurs d'agrégation pour la mesure de similarité. Application à l'ambiguïté en reconnaissance de formes." Phd thesis, Université de La Rochelle, 2009. http://tel.archives-ouvertes.fr/tel-00438516.
Full text
Gatica Perez, Daniel. "Extensive operators in lattices of partitions for digital video analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.
Full text
Benammar, Riyadh. "Détection non-supervisée de motifs dans les partitions musicales manuscrites." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI112.
Full text
This thesis falls within the field of data mining applied to ancient handwritten music scores and aims at finding frequent melodic or rhythmic motifs, defined as repetitive note sequences with characteristic properties. There are a large number of possible variations of motifs: transpositions, inversions and so-called "mirror" motifs. These motifs allow musicologists to carry out in-depth analyses of the works of a composer or of a musical style. In a context of exploring large corpora where scores are merely digitized and not transcribed, an automated search for motifs that satisfy targeted constraints becomes an essential tool for their study. To achieve the objective of detecting frequent motifs without prior knowledge, we started from images of digitized scores. After pre-processing steps on the image, we exploited and adapted a model for detecting and recognizing musical primitives (note heads, stems, etc.) from the Region-Proposal CNN (RPN) family of convolutional neural networks. We then developed a primitive-encoding method to generate a sequence of notes without the complex task of transcribing the entire manuscript work. This sequence was then analyzed using the CSMA (Constraint String Mining Algorithm) approach, designed to detect the frequent motifs present in one or more sequences while taking into account constraints on their frequency and length, as well as the size and number of gaps allowed within the motifs. The gap tolerance was studied to absorb recognition errors produced by the RPN network, thus avoiding the implementation of a post-correction system for transcription errors. The work was finally validated by a study of musical motifs for composer identification and classification.
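CSMA itself is not reproduced here; a gap-free simplification of frequent-motif counting over a note sequence looks as follows (the thesis's algorithm additionally handles bounded gaps and further constraints):

```python
from collections import defaultdict

def frequent_motifs(seq, length, min_freq):
    """Count every contiguous motif of a given length in a note sequence
    and keep those occurring at least min_freq times (exact repeats only;
    transpositions, mirrors and gapped occurrences are not handled)."""
    counts = defaultdict(list)
    for i in range(len(seq) - length + 1):
        counts[tuple(seq[i:i + length])].append(i)  # motif -> positions
    return {m: pos for m, pos in counts.items() if len(pos) >= min_freq}
```

On the toy sequence C D E C D E G, the motif (C, D, E) is found twice, at positions 0 and 3.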
Klava, Bruno. "Redução no esforço de interação em segmentação de imagens digitais através de aprendizagem computacional." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-08122014-152731/.
Full text
Segmentation is an important step in nearly all tasks involving digital image processing. Due to the variety of images and segmentation needs, automating segmentation is not trivial. In many situations, interactive approaches, in which the user can intervene to guide the segmentation process, are quite useful. Approaches based on the watershed transform are suitable for interactive image segmentation: the watershed from markers allows the user to mark the regions of interest in the image, while the hierarchical watershed generates a hierarchy of partitions of the image being analyzed, in which the user can easily navigate and select a particular partition (segmentation). In a previous work, we proposed a method that integrates the two approaches so that the user can interchangeably combine the strong points of these two forms of interaction. Despite the versatility obtained by integrating the two approaches, the built hierarchies rarely contain the interesting partitions, and the interaction effort needed to obtain a desired outcome can be very high. In this thesis we propose a method, based on machine learning, that uses previously segmented images to adapt a given hierarchy so that it contains partitions closer to the partition of interest. In the machine-learning formulation, different image features are associated with the possible region contours, and a previously trained support vector machine classifies each contour as one that must or must not be present in the final partition. The given hierarchy is adapted to contain a partition consistent with the obtained classification. This approach is particularly interesting in scenarios where batches of similar images or sequences of images, such as frames in video sequences or cuts produced by imaging diagnosis procedures, need to be segmented.
In such cases, it is expected that for each new image to be segmented, the interaction effort required to achieve the desired segmentation is reduced relative to the effort that would be required with the original hierarchy. In order not to depend on experiments with users to assess the reduction in interaction effort, we propose and use an interaction model that simulates human users in the context of hierarchical segmentation. Simulations of this model were compared with interaction sequences observed in experiments with human users. Experiments with different batches and image sequences show that the method is able to reduce the interaction effort.
Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.
Full text
This Master's thesis proposes an adaptive algorithm that dynamically chooses suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact of such partitioning on coding efficiency. Tiles are novel parallelism-oriented tools in the High Efficiency Video Coding (HEVC) standard that divide the frame into independent rectangular regions that can be processed in parallel. To enable this parallelism, tiles break the coding dependencies across their boundaries, which costs coding efficiency. The cost can be even higher if tile boundaries split highly correlated picture regions, because most coding tools use context information during the encoding process. Hence, the proposed algorithm clusters highly correlated picture regions inside the same tile to reduce the inherent coding-efficiency cost of using tiles. To locate the highly correlated picture regions wisely, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm's input. Based on these maps, the algorithm locates the natural context breaks of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by tile boundaries match the natural context breaks of a picture, minimizing the coding-efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm in some cases provides over 0.4% and over 0.5% BD-rate savings for intra- and inter-predicted frames, respectively, compared to uniform-spaced tiles, an approach that does not consider the picture context when defining tile partitions.
Queiroga, Eduardo Vieira. "Abordagens meta-heurísticas para clusterização de dados e segmentação de imagens." Universidade Federal da Paraíba, 2017. http://tede.biblioteca.ufpb.br:8080/handle/tede/9249.
Full textMade available in DSpace on 2017-08-14T11:28:15Z (GMT). No. of bitstreams: 1 arquivototal.pdf: 7134434 bytes, checksum: a99ec0d172a3be38a844f44b70616b16 (MD5) Previous issue date: 2017-02-17
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Many computational problems are considered hard due to their combinatorial nature. In such cases, the use of exhaustive search techniques for solving medium- and large-size instances becomes unfeasible. Some data clustering and image segmentation problems belong to the NP-hard class and require adequate treatment by heuristic techniques such as metaheuristics. Data clustering is a set of problems in the fields of pattern recognition and unsupervised machine learning which aims at finding groups (or clusters) of similar objects in a dataset, using a predetermined measure of similarity. The partitional clustering problem aims at completely separating the data into disjoint, non-empty clusters. For center-based clustering methods, the minimal intracluster distance criterion is one of the most widely employed. This work proposes an approach based on the Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP) metaheuristic. High-quality results were obtained in comparative experiments between the proposed method and other metaheuristics from the literature. In computer vision, image segmentation is the process of partitioning an image into non-overlapping regions of interest (sets of pixels). Histogram thresholding is one of the simplest types of segmentation for grayscale images. Otsu's method is one of the most popular; it searches for the thresholds that maximize the variance between the segments. For images with many gray levels, exhaustive search techniques demand a high computational cost, since the number of possible solutions grows exponentially with the number of thresholds. Therefore, metaheuristics have been playing an important role in finding good-quality thresholds. In this work, an approach based on Quantum-behaved Particle Swarm Optimization (QPSO) was investigated for multilevel thresholding of images available in the literature.
A local search based on Variable Neighborhood Descent (VND) was proposed to improve the convergence of the threshold search. A specific application of thresholding to electron microscopy images for the microstructural analysis of cementitious materials was investigated, as well as graph algorithms for crack detection and feature extraction.
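For reference, single-threshold Otsu can be computed by exhaustive search over the histogram; it is the combinatorial growth of this search for k thresholds that motivates the QPSO metaheuristic described above. A minimal sketch (not the thesis's multilevel implementation):

```python
import numpy as np

def otsu_threshold(hist):
    """Single-threshold Otsu: pick t maximizing the between-class
    variance w0 * w1 * (mu0 - mu1)^2 over a grayscale histogram.
    Pixels with level < t form class 0, the rest class 1."""
    p = np.asarray(hist, float)
    p = p / p.sum()
    levels = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                      # one class empty: skip
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For k thresholds the loop becomes a search over all C(L, k) threshold combinations, which is where population-based methods such as QPSO take over.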
Muitos problemas computacionais são considerados difíceis devido à sua natureza combinatória. Para esses problemas, o uso de técnicas de busca exaustiva para resolver instâncias de médio e grande porte torna-se impraticável. Quando modelados como problemas de otimização, alguns problemas de clusterização de dados e segmentação de imagens pertencem à classe NP-Difícil e requerem um tratamento adequado por métodos heurísticos. Clusterização de dados é um vasto conjunto de problemas em reconhecimento de padrões e aprendizado de máquina não-supervisionado, cujo objetivo é encontrar grupos (ou clusters) de objetos similares em uma base de dados, utilizando uma medida de similaridade preestabelecida. O problema de clusterização particional consiste em separar completamente os dados em conjuntos disjuntos e não vazios. Para métodos de clusterização baseados em centros de cluster, minimizar a soma das distâncias intracluster é um dos critérios mais utilizados. Para tratar este problema, é proposta uma abordagem baseada na meta-heurística Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP). Resultados de alta qualidade foram obtidos através de experimentos envolvendo o algoritmo proposto e outras meta-heurísticas da literatura. Em visão computacional, segmentação de imagens é o processo de particionar uma imagem em regiões de interesse (conjuntos de pixels) sem que haja sobreposição. Um dos tipos mais simples de segmentação é a limiarização do histograma para imagens em nível de cinza. O método de Otsu é um dos mais populares e propõe a busca pelos limiares que maximizam a variância entre os segmentos. Para imagens com grande profundidade de cinza, técnicas de busca exaustiva possuem alto custo computacional, uma vez que o número de soluções possíveis cresce exponencialmente com o aumento no número de limiares.
Dessa forma, as meta-heurísticas têm desempenhado um papel importante em encontrar limiares de boa qualidade. Neste trabalho, uma abordagem baseada em Quantum-behaved Particle Swarm Optimization (QPSO) foi investigada para limiarização multinível de imagens disponíveis na literatura. Uma busca local baseada em Variable Neighborhood Descent (VND) foi proposta para acelerar a convergência da busca pelos limiares. Além disso, uma aplicação específica de segmentação de imagens de microscopia eletrônica para análise microestrutural de materiais cimentícios foi investigada, bem como a utilização de algoritmos em grafos para detecção de trincas e extração de características de interesse.
Becer, Huseyin Caner. "A Robust Traffic Sign Recognition System." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612912/index.pdf.
Full text
Nordliden, Petter, and Sjöbladh Linda Didrik. "Måste det alltid bråkas med bråk? : En systematisk litteraturstudie om stambråkets betydelse i matematikundervisningen." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-91687.
Full text
Cara, Michel. "Stratégies d'apprentissage de la lecture musicale à court-terme : mémoire de travail et oculométrie cognitive." Thesis, Dijon, 2013. http://www.theses.fr/2013DIJOL013.
Full text
Throughout this thesis, the evaluation of music performance is viewed as a latent object of study in order to provide tools for learning to read music. We have defined variables from eye movements and music performance that account for expert performance and for interactions between skill groups when learning a new piece of music. In more detail, we have observed the use of different strategies for music-information intake, processing and retrieval depending on the musicians' expertise, and we have stressed the importance of learning through interaction. In the process of skill acquisition, as self-confidence is gained, strategies are simultaneously adjusted (Bandura, 1997; McPherson and McCormick, 2006). With reference to the current debate about the nature of music reading, we have compared musical and verbal processing during comprehensive reading of texts and scores. On the whole, considering Baddeley's (1990) model, musicians' cognitive resources during music reading would be mobilized depending on expertise and musical style.
Proença, Patrícia Aparecida. "Recuperação de imagens digitais com base na distribuição de características de baixo nível em partições do domínio utilizando índice invertido." Universidade Federal de Uberlândia, 2010. https://repositorio.ufu.br/handle/123456789/12500.
Full text
The main goal of an image retrieval system is to obtain images from a collection that meet a user's need. To achieve this, retrieval systems generally compute the similarity between the user's need, represented by a query, and representations of the images in the collection. This objective is difficult to achieve due to the subjectivity of the concept of similarity between images: the same image can be interpreted in different ways by different people. To address this problem, content-based image retrieval systems exploit the low-level features of color, shape and texture when computing the similarity between images. One drawback of this approach is that most systems compare the query image against every image in the collection, making processing slow and costly. By indexing low-level features of partitions of digital images in an inverted index, this work seeks better query-processing performance and improved precision of the retrieved image set in large databases. We use an approach based on an inverted index, adapted here to partitioned images, in which the concept of a term from text retrieval, the basic unit of indexing, is played by features of image partitions. Experiments on two collections of digital images show gains in precision.
O principal objetivo de um sistema de recuperação de imagens é obter imagens de uma coleção que atendam a uma necessidade do usuário. Para atingir esse objetivo, em geral, os sistemas de recuperação de imagens calculam a similaridade entre a necessidade do usuário, representada por uma consulta, e representações das imagens da coleção. Tal objetivo é difícil de ser alcançado devido à subjetividade do conceito de similaridade entre imagens, visto que uma mesma imagem pode ser interpretada de formas diferentes por pessoas distintas. Na tentativa de resolver este problema os sistemas de recuperação de imagens por conteúdo exploram as características de baixo nível cor, forma e textura no cálculo da similaridade entre as imagens. Um problema desta abordagem é que na maioria dos sistemas o cálculo da similaridade é realizado comparando-se a imagem de consulta com todas as imagens da coleção, tornando o processamento difícil e lento. Considerando a indexação de características de baixo nível de partições de imagens digitais mapeadas para um índice invertido, este trabalho busca melhorias no desempenho do processamento de consultas e ganho na precisão considerando o conjunto de imagens recuperadas em grandes bases de dados. Utilizamos uma abordagem baseada em índice invertido, que é aqui adaptada para imagens particionadas. Nesta abordagem o conceito de termo da recuperação textual, principal elemento da indexação, é utilizado no trabalho como característica de partições de imagens para a indexação. Experimentos mostram ganho na qualidade da precisão usando duas coleções de imagens digitais.
Mestre em Ciência da Computação
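The inverted-index idea described in this record, with partition features playing the role of textual terms, can be sketched as follows. The quantization scheme (`make_term`, per-partition mean colors, bin step 32) is an assumption for illustration, not the thesis's actual feature encoding:

```python
from collections import defaultdict

def make_term(part_idx, feat, step=32):
    """Quantize a partition's mean color into a discrete "visual term"
    (hypothetical scheme): the partition index plus coarse color bins."""
    return (part_idx,) + tuple(int(c // step) for c in feat)

def build_index(images):
    """images: {image_id: list of per-partition mean-color tuples}.
    Maps each visual term to the set of images containing it."""
    index = defaultdict(set)
    for img_id, parts in images.items():
        for i, feat in enumerate(parts):
            index[make_term(i, feat)].add(img_id)
    return index

def query(index, parts):
    """Score images by the number of query terms they share. Only the
    postings lists of matching terms are touched, so the query never
    scans the whole collection, which is the point of the approach."""
    scores = defaultdict(int)
    for i, feat in enumerate(parts):
        for img_id in index.get(make_term(i, feat), ()):
            scores[img_id] += 1
    return sorted(scores, key=scores.get, reverse=True)
```

A query image whose partitions quantize to the same terms as an indexed image ranks that image first.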
ZHANG, SHENG-CAI, and 張生財. "Band partition methods for subband image coding." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/79357947091909839641.
Full text
Chou, Meng-Ying, and 周孟穎. "Vector Partition Method on Spectral Matting and Image Segmentation." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/90659731843359584291.
Full text
國立交通大學
統計學研究所
100
This study investigates the segmentation of an image foreground from the background. In spectral matting, a segmentation of the image can be obtained by optimizing an objective function containing the matting Laplacian. However, the optimal alpha matte of the objective function is not always the entire foreground object. To obtain a better segmentation of the foreground object, the optimal alpha matte and the sub-optimal alpha mattes are all considered at the same time. Unsupervised clustering can then be applied to combine several foreground components into a complete foreground object. In this study, we examine the matting Laplacian from the perspective of graph theory and use a community detection method based on network modularity to perform the clustering; each detected community corresponds to a foreground component. Optimizing the modularity turns out to be a vector partition problem. We propose an algorithm that finds initial groups from the sign information of the vectors in order to perform vector partition for unsupervised clustering. In empirical studies, the results of vector partition improve the segmentation of test images: the method not only distinguishes the foreground from the background, but also produces fewer component regions in the foreground. This new approach enhances the segmentation of foreground objects that are matted with background image components.
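Newman's modularity, the quantity this study maximizes via vector partition, can be computed directly for a given labeling. This sketch only evaluates Q; the thesis's contribution is the optimization itself:

```python
import numpy as np

def modularity(W, labels):
    """Newman's modularity
    Q = (1/2m) * sum_ij [W_ij - k_i k_j / (2m)] * delta(c_i, c_j)
    for a symmetric affinity matrix W and community labels."""
    W = np.asarray(W, float)
    k = W.sum(axis=1)                      # node degrees
    two_m = k.sum()                        # 2m = total edge weight * 2
    labels = np.asarray(labels)
    delta = labels[:, None] == labels[None, :]
    return ((W - np.outer(k, k) / two_m) * delta).sum() / two_m
```

Putting everything in one community always gives Q = 0, while a labeling that matches the graph's community structure gives Q > 0.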
Ho, Pei Hao, and 何霈豪. "A path partition algorithmand its application to image processing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/30446609659850619073.
Full text樹德科技大學
資訊工程學系
95
Given a path with a positive weight on each vertex, the minimum L2 p-partition problem is to cut the path into p subpaths so that the sum of the squares of the subpath weights is minimized. In this thesis, we propose an O(pn log n) time algorithm for the problem. In addition, we study how to use this algorithm to compress a gray-level image by reducing its number of gray levels. We investigate the running times and effects of four algorithms, named Naive, Greedy, MUP, and L2-norm. The experimental results show that the L2-norm algorithm is more efficient than MUP and more effective than the other two algorithms.
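A straightforward O(p·n²) dynamic program solves the same minimum L2 p-partition problem. It is a baseline sketch, not the thesis's faster O(pn log n) algorithm:

```python
def min_l2_partition(weights, p):
    """Cut a weighted path into p contiguous subpaths, minimizing the sum of
    squared subpath weights. Plain O(p*n^2) dynamic programming."""
    n = len(weights)
    prefix = [0] * (n + 1)
    for i, w in enumerate(weights):
        prefix[i + 1] = prefix[i] + w

    def seg(a, b):
        # squared total weight of the subpath weights[a:b]
        return (prefix[b] - prefix[a]) ** 2

    INF = float("inf")
    # dp[j][i]: best cost of cutting the first i vertices into j subpaths
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for j in range(1, p + 1):
        for i in range(j, n + 1):
            dp[j][i] = min(dp[j - 1][t] + seg(t, i) for t in range(j - 1, i))
    return dp[p][n]
```

For gray-level reduction, the vertex weights would be the histogram counts of the gray levels, and each subpath becomes one merged gray level.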
Chang, Fang-Jung, and 張芳榮. "Application of Frame Partition Scheme to Shot Detection and Image Retrieval." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/34940158648529389320.
Full text朝陽科技大學
資訊工程系碩士班
98
This thesis presents approaches to shot detection and image retrieval based on a frame partitioning scheme. For shot detection, the proposed approach is called SD/PFDS (Shot Detection Based on Partitioned Frame Differencing Scheme). In the SD/PFDS, frames are grouped and partitioned into image blocks. The first frame in each group is taken as the reference frame and the others as compared frames. The differences between each image block of the reference frame and the corresponding block of each compared frame are then calculated, and shot changes are detected from these differences. The proposed SD/PFDS approach is verified by several examples; the results indicate an overall average detection accuracy as high as 0.94 in the F1 measure, which justifies the approach and shows it to be feasible. We also apply the frame partitioning scheme to image retrieval. Using color and texture features, the thesis presents three retrieval approaches: IR/PCF (Image Retrieval with Partitioned Color Features), IR/PTF (Image Retrieval with Partitioned Texture Features), and IR/PCTF (Image Retrieval with Partitioned Color and Texture Features). The IR/PCF, based on partitioned color features, involves several stages. First, images are partitioned. Second, the energies of the R, G and B components of the partitioned query image are calculated, from which weights for the similarity measure are derived. Third, the averages of the R, G and B components in each partitioned image are taken as color features. Finally, the similarity is calculated using the weights obtained in the second stage. In the IR/PTF, texture features are acquired from the GLCM (Gray-Level Co-occurrence Matrix). The IR/PTF consists of the following stages. First, color images are converted into gray-level images. Second, texture features are extracted by the GLCM from the partitioned images. Third, the similarity between the query image and the images in the database is calculated. The IR/PCTF approach uses both partitioned color and texture features.
The following stages are involved in the IR/PCTF. First, the similarity measures are obtained by the IR/PCF and the IR/PTF, respectively. The similarity measures are then normalized and linearly combined, with weights proportional to the performance achieved with a single partitioned feature (color or texture). The resulting similarity is then used in image retrieval. The three proposed approaches are evaluated on image databases. The results show that the IR/PCTF has the highest retrieval performance, followed by the IR/PCF and then the IR/PTF. With an appropriate combination of partitioned color and texture features, the IR/PCTF outperforms both the IR/PCF and the IR/PTF.
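A minimal sketch of the IR/PCF feature and weighting stages, assuming per-block RGB means as the color features and per-channel energy of the query as the weights (the grid size and the exact similarity form are guesses, not taken from the thesis):

```python
import numpy as np

def block_color_features(img, grid=4):
    """Per-block mean R, G, B values of a partitioned image."""
    h, w, _ = img.shape
    bh, bw = h // grid, w // grid
    feats = np.empty((grid, grid, 3))
    for gy in range(grid):
        for gx in range(grid):
            block = img[gy * bh:(gy + 1) * bh, gx * bw:(gx + 1) * bw]
            feats[gy, gx] = block.reshape(-1, 3).mean(axis=0)
    return feats

def channel_weights(img):
    """Weights proportional to the per-channel energy of the query image
    (an assumed form of the thesis's energy-derived weights)."""
    energy = (img.astype(float) ** 2).mean(axis=(0, 1))
    return energy / energy.sum()

def similarity(query, candidate, grid=4):
    """Weighted per-block color distance, mapped to a similarity score."""
    wts = channel_weights(query)
    fq = block_color_features(query, grid)
    fc = block_color_features(candidate, grid)
    dist = np.sqrt((wts * (fq - fc) ** 2).sum(axis=2)).mean()
    return 1.0 / (1.0 + dist)   # higher means more similar
```

An IR/PCTF-style combination would then normalize this color similarity and a texture similarity and sum them with performance-proportional weights.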
蔡秋彥. "A Study on Efficient Partition-Based and Region-Based Image Retrieval Methods." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/52309255275404736553.
Full text國立清華大學
資訊工程學系
87
More and more digital images are available to users on the World Wide Web, so it is very important for users to be able to retrieve desired images through efficient and effective mechanisms. In this thesis, we propose two efficient approaches that facilitate image retrieval by using a simple representation of image content. Each image is partitioned into m×n equal-size sub-images (blocks), and a color that occupies a sufficient number of pixels in a block is extracted to represent the content of that block. In the first approach, the content of an image is represented directly by the extracted colors of its blocks, and spatial information is considered in the retrieval. In the second approach, the block colors are used to extract objects (regions) through a proposed block-level region extraction process, and the spatial information between regions is considered unimportant in the similarity measurement. Our experiments show that the block-based information used in these two approaches speeds up image retrieval. Moreover, the two approaches are effective for different requirements of image similarity, so users can choose the appropriate approach for their queries based on their similarity requirements.
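The dominant-color-per-block idea can be sketched as follows; the 50% coverage threshold is an assumed reading of "a sufficient number of pixels", and blocks without such a color are left unrepresented:

```python
from collections import Counter

def dominant_block_colors(img, grid=2, threshold=0.5):
    """For each block, keep a color only if it covers at least `threshold`
    of the block's pixels; otherwise the block gets None."""
    h, w = len(img), len(img[0])
    bh, bw = h // grid, w // grid
    out = []
    for gy in range(grid):
        for gx in range(grid):
            pixels = [img[y][x]
                      for y in range(gy * bh, (gy + 1) * bh)
                      for x in range(gx * bw, (gx + 1) * bw)]
            color, count = Counter(pixels).most_common(1)[0]
            out.append(color if count >= threshold * len(pixels) else None)
    return out
```

The resulting per-block color list is the compact representation that both approaches in the abstract build on, either compared directly or merged into regions.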
Yu, Chien-Yang, and 余乾揚. "Dynamic Workload Partition on Parallel Medical Image Reconstruction Algorithm in Computational Grid Environments." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/c83fpw.
Full text國立臺灣科技大學
資訊管理系
95
A well-known issue in parallel computing is that all computing nodes start an iteration round at the same time, and the round ends only when every node has finished it. Ideally, in a cluster computing environment the hardware of all nodes is identical; a Grid, however, is formed from heterogeneous resources. Therefore, when users execute a parallel program on a Grid, the distribution of workload among nodes becomes more important. For instance, if a task is divided into several smaller tasks executed on different computational resources, some resources may finish sooner than others because of differences in computing power and/or network bandwidth, and the fastest node has to wait until the others have also finished. The purpose of this study is to find the best workload distribution for each node and to distribute a suitable amount of work during each execution round. In this research, OSEM and COSEM-ML, both medical image reconstruction algorithms, were chosen for the experiments.
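One simple form of the dynamic workload idea: allocate the next round's tasks in proportion to each node's measured speed in the previous round, so that all nodes tend to finish together. This proportional heuristic is an illustration, not the thesis's actual distribution rule:

```python
def distribute_tasks(total_tasks, round_times):
    """Allocate tasks inversely proportional to each node's measured
    time per task in the previous round."""
    speeds = [1.0 / t for t in round_times]          # tasks per second
    total_speed = sum(speeds)
    shares = [int(total_tasks * s / total_speed) for s in speeds]
    # hand leftover tasks (from rounding down) to the fastest nodes
    leftover = total_tasks - sum(shares)
    for i in sorted(range(len(shares)), key=lambda i: -speeds[i])[:leftover]:
        shares[i] += 1
    return shares
```

Re-measuring `round_times` after every iteration round lets the allocation track changes in load or bandwidth during execution.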
Su, Yen-Wei, and 蘇衍維. "Image Retrieval based on Object''s Centroid-Extended Spanning Representation using Triangular Partition Approach." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/21545466942053260142.
Full text國立中興大學
資訊科學系所
94
Content-based image retrieval (CBIR) is the current trend in designing image database systems, as opposed to text-based image retrieval. Spatial relationships between objects are important features for designing a content-based image retrieval system. In this thesis, we propose a new spatial representation based on the centroid-extended spanning concept using a triangular partition approach. Such a representation facilitates spatial reasoning and similarity retrieval, and it provides twelve types of similarity measures to meet users' different requirements. Experimental results demonstrate that image database systems based on the proposed representation achieve high performance in terms of recall and precision.
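The flavor of a centroid-based angular partition can be sketched as below: an object's centroid is assigned to one of several sectors around the image center. This is only a guessed, much-simplified analogue of the thesis's centroid-extended triangular partition, shown to illustrate how such a partition turns spatial relationships into comparable discrete codes:

```python
import math

def sector_of(obj_centroid, image_center, sectors=8):
    """Assign an object's centroid to one of `sectors` angular sectors
    around the image center (a hypothetical simplification)."""
    dx = obj_centroid[0] - image_center[0]
    dy = obj_centroid[1] - image_center[1]
    ang = math.atan2(dy, dx) % (2 * math.pi)
    return int(ang / (2 * math.pi / sectors))
```

Two images can then be compared by matching the sector codes of their corresponding objects, which is the kind of spatial reasoning the representation is meant to support.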
SHIH, CHENG-FU, and 施承甫. "A Reversible Data Hiding Method Based on Partition Variable Block Size and Exclusive-OR Operation with Two Host Images for Binary Image." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/yhm8ny.
Full text玄奘大學
資訊管理學系碩士班
106
In this thesis, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black and white, it is hard to hide data imperceptibly; capacity and imperceptibility are always in a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for additional security. We divide the host images C and R, each of size M by N, into as many non-overlapping (2m+1)×(2n+1) sub-blocks as possible, where m=1,2,3,... and n=1,2,3,..., up to min(M,N). We then partition each sub-block into four overlapping (m+1)×(n+1) sub-blocks, skipping the all-black or all-white (2m+1)×(2n+1) sub-blocks. For each of the four (m+1)×(n+1) sub-blocks, we check the XOR between its non-overlapping part and the center pixel of the (2m+1)×(2n+1) sub-block, embedding m×n bits in each (m+1)×(n+1) sub-block, or 4×m×n bits per sub-block in total. When a candidate pixel of C is changed to embed a secret bit, the corresponding pixel of R is marked 1. The entire host image can embed 4×m×n×M/(2m+1)×N/(2n+1) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back to their original order. The method is adaptive in that the choice of sub-block partition affects the capacity and imperceptibility. The experimental results show that the method provides large embedding capacity, remains imperceptible, and, using the host image R, recovers the original host image completely and losslessly.
WANG, YU-TZU, and 王愉慈. "A Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/28999648088974917843.
Full text玄奘大學
資訊管理學系碩士班
104
In this thesis, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black and white, it is hard to hide data imperceptibly; capacity and imperceptibility are always in a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for additional security. We divide the M by N host image into as many non-overlapping (2k+1)×(2k+1) sub-blocks as possible, where k=1,2,3,..., up to min(M,N). We then partition each sub-block into four overlapping (k+1)×(k+1) sub-blocks, skipping the all-black or all-white (2k+1)×(2k+1) sub-blocks. For each of the four (k+1)×(k+1) sub-blocks, we check the XOR between its non-overlapping part and the center pixel of the (2k+1)×(2k+1) sub-block, embedding k^2 bits in each (k+1)×(k+1) sub-block, or 4×k^2 bits per sub-block in total. The entire host image can embed 4×k^2×M/(2k+1)×N/(2k+1) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back to their original order. The method is adaptive in that the choice of sub-block partition affects the capacity and imperceptibility. The experimental results show that the method provides large embedding capacity, remains imperceptible, and recovers the host image without loss.
CHEN, JI-MING, and 陳紀銘. "An Optimal Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2ta8pc.
Full text玄奘大學
資訊管理學系碩士班
105
In this thesis, we propose a high-capacity data hiding method for binary images. We divide the host image into as many non-overlapping blocks as possible and partition each block into four overlapping sub-blocks, skipping the all-black or all-white blocks. For each of the four sub-blocks, we check the XOR between its non-overlapping part and the center pixel of the block. The entire host image can embed 4×m×n×M/(2m+1)×N/(2n+1) bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back to their original order. The method is optimal in that the sub-block partition, which affects capacity and imperceptibility, is chosen to reach the best result. The experimental results show that the method provides large embedding capacity, remains imperceptible, and recovers the host image without loss.
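The XOR-parity principle shared by the three theses above can be demonstrated with a deliberately simplified scheme: one bit per non-overlapping block, embedded by flipping at most one pixel so that the XOR of the block's pixels equals the secret bit. The actual (2k+1)×(2k+1) overlapping-sub-block construction embeds far more bits per block; block size and function names here are illustrative:

```python
def embed_bits(image, bits, block=2):
    """Embed one bit per block: flip at most one pixel so the XOR of the
    block's pixels equals the bit (simplified stand-in for the thesis's
    overlapping-sub-block scheme)."""
    img = [row[:] for row in image]          # keep the host image intact
    h, w = len(img), len(img[0])
    it = iter(bits)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            try:
                bit = next(it)
            except StopIteration:
                return img
            parity = 0
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    parity ^= img[y][x]
            if parity != bit:
                img[by][bx] ^= 1             # flip one pixel to fix the parity
    return img

def extract_bits(image, n_bits, block=2):
    """Recover the bits by recomputing each block's XOR parity."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            if len(out) == n_bits:
                return out
            parity = 0
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    parity ^= image[y][x]
            out.append(parity)
    return out
```

At most one pixel changes per embedded bit, which is the mechanism behind the capacity/imperceptibility trade-off these abstracts discuss.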
Hedjam, Rachid. "Segmentation non-supervisée d'images couleur par sur-segmentation Markovienne en régions et procédure de regroupement de régions par graphes pondérés." Thèse, 2008. http://hdl.handle.net/1866/7221.
Full textWang, Yu-Ching, and 王有慶. "Implementation of Image Transmission Quality via Applying a Turbo Code Rate for Different Partitions of SPIHT." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/18274003650484952842.
Full text國立中正大學
電機工程研究所
89
With the development of wireless technology, people use wireless products more frequently in their lives, and some multimedia applications on wireless platforms are already on the market. Multimedia transmission is becoming more and more popular, but a problem arises at the same time: multimedia contains a huge amount of data (video and voice), which the present wireless bandwidth is not sufficient to support. Efficient compression that delivers better quality with less bandwidth is therefore an important topic. The purpose of this thesis is to analyze the image quality of SPIHT bitstreams, obtained through the wavelet transform, in a wireless environment. Through the study of joint source coding and channel transmission, we have found that image quality can be seriously affected by error propagation. Based on the simulation results, we propose a modified coding scheme to maintain image quality. We further consider unequal error protection by grouping the bitstream and assigning a different coding rate to each group. The results show that bandwidth can be saved for a given quality and that PSNR performance can be improved. Finally, considering the trade-off among bandwidth requirement, number of iterations, and PSNR, better performance can be obtained with the UEP scheme.
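Unequal error protection can be illustrated with a toy repetition code standing in for the thesis's turbo codes: the perceptually important part of the bitstream gets a lower code rate (more redundancy) than the refinement part. Function names and rates below are illustrative assumptions:

```python
def repeat_encode(bits, r):
    """Rate-1/r repetition code: each bit is sent r times."""
    return [b for b in bits for _ in range(r)]

def repeat_decode(bits, r):
    """Majority-vote decoding of the rate-1/r repetition code."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

def uep_encode(important_bits, refinement_bits, r_strong=3, r_weak=1):
    """Unequal error protection: spend more redundancy on the bits whose
    corruption propagates (e.g. early SPIHT significance bits) than on
    later refinement bits."""
    return (repeat_encode(important_bits, r_strong),
            repeat_encode(refinement_bits, r_weak))
```

A single channel error in the strongly protected group is corrected by the majority vote, while errors in the weakly protected group only degrade refinement detail, which is the intuition behind the PSNR gains reported above.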
Wen-Pin and 鄭文斌. "A system of 3D image retrieval and judgment of partitions of the tumor located in the liver." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35888464744927174314.
Full text中山醫學大學
應用資訊科學學系碩士班
99
Because the amount of pictorial information stored in medical databases is growing, efficient image indexing and retrieval have become very important, and the need to develop medical image retrieval systems for disease diagnosis is urgent. In this thesis, a system for assisting in diagnosing liver tumors and planning the corresponding radiation treatment is proposed. The system provides 3D image retrieval and judges in which partitions of the liver a tumor is located. The emphasis is on the development of an efficient and practical database for recognizing and retrieving similar patterns with known diagnoses in 3D medical images. To retrieve similar images efficiently, we have developed an image representation that captures the shape, size, and location of the tumor and is invariant to image scaling, translation, and rotation; these properties are necessary for an image retrieval system that works to a high degree of accuracy. To satisfy the different requirements of physicians, several similarity measures and a retrieval method based on this representation are also proposed, together with a method, based on the same representation, for judging the partitions of the liver in which the tumor is located. Experimental results show that the system performs well in assisting physicians in diagnosing liver tumors and planning radiation treatment.
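As an illustration of the invariance properties such a representation needs, the first Hu moment of a binary mask is invariant to translation and rotation and approximately invariant to scale under discretization. This is a textbook descriptor, not the thesis's representation:

```python
import numpy as np

def hu1(mask):
    """First Hu moment (eta20 + eta02) of a 2D binary mask: invariant to
    translation, rotation, and (up to discretization) scale."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                         # area of the shape
    cx, cy = xs.mean(), ys.mean()         # centroid: removes translation
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    eta20 = mu20 / m00 ** 2               # normalization: removes scale
    eta02 = mu02 / m00 ** 2
    return eta20 + eta02
```

Descriptors with these invariances let the system match the same tumor shape regardless of where it sits in the volume or how the scan is scaled or oriented.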
Tsai, Tsung-Lin, and 蔡宗霖. "Integration of data, function, pipeline partition schemes on distributed system--real-time implementation of correspondence matching in stereo images." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/94128290188731236059.
Full text國立東華大學
資訊工程學系
92
We use a distributed system and three partition schemes to make a program achieve real-time performance. The three schemes are data partition, function partition, and pipeline partition. In this thesis, we analyze the advantages and disadvantages of each. Data partition incurs little communication cost between processors, but it is suitable only when the algorithm uses local data. Function partition can assign different tasks to different hardware, making utilization more efficient, but it can be used only when there is no input/output dependence between tasks. Pipeline partition is easy to apply to a program and can raise throughput considerably, but it is suitable only for successive inputs and increases the system's response time. Finally, we propose a strategy that integrates the three partition schemes to exploit the highest parallelism and obtain the best throughput. In the field of computer vision, using two images to compute the depth of objects in a scene is a long-studied technique. Before the depth of objects can be computed, the disparity of corresponding points must be computed; because this correspondence matching is computationally massive, the technique could not previously be applied in real time, which limited its applications. To compute the disparity of corresponding points in real time, we employ an efficient algorithm and a distributed system. The algorithm uses two calibrated images and a special data structure to compute the disparity of corresponding points in the images.
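Data partition applies cleanly to correspondence matching because each output row depends only on local data. The sketch below splits the rows of a toy per-pixel SAD disparity computation into bands, the unit of work one processor would receive; the thesis's algorithm and data structure are more elaborate:

```python
import numpy as np

def disparity_rows(left, right, row_lo, row_hi, max_d=4):
    """Per-pixel SAD matching over one horizontal band of rows: the unit of
    work a data partition hands to a single processor."""
    w = left.shape[1]
    out = np.zeros((row_hi - row_lo, w), dtype=int)
    for y in range(row_lo, row_hi):
        for x in range(w):
            best_cost, best_d = None, 0
            for d in range(min(max_d, x) + 1):
                cost = abs(int(left[y, x]) - int(right[y, x - d]))
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            out[y - row_lo, x] = best_d
    return out

def disparity_partitioned(left, right, parts=2, **kw):
    """Data partition: split the rows into bands, process each band
    independently (here sequentially, standing in for separate
    processors), and stack the per-band results."""
    bands = np.array_split(np.arange(left.shape[0]), parts)
    return np.vstack([disparity_rows(left, right, b[0], b[-1] + 1, **kw)
                      for b in bands])
```

Because the bands share no intermediate state, the partitioned result is identical to the unpartitioned one, which is exactly the property that makes data partition cheap in communication cost.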