Dissertations / Theses on the topic 'Image partition'

To see the other types of publications on this topic, follow the link: Image partition.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 47 dissertations / theses for your research on the topic 'Image partition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bernat, Andrew. "Which partition scheme for what image?, partitioned iterated function systems for fractal image compression." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2002. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ65602.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lu, Huihai. "Evolutionary image analysis in binary partition trees." Thesis, University of Essex, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.438156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Valero, Valbuena Silvia. "Hyperspectral image representation and processing with binary partition trees." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/130832.

Full text
Abstract:
The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral Image Representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the Binary Partition Tree succeeds in presenting: (i) the decomposition of the image in terms of coherent regions and (ii) the inclusion relations of the regions in the scene. Based on region-merging techniques, the construction of the BPT is investigated in this work by studying hyperspectral region models and the associated similarity metrics. Indeed, the very high dimensionality and complexity of the data require the definition of specific region models and similarity measures. Once the BPT is constructed, the fixed tree structure allows efficient and advanced application-dependent techniques to be implemented on it. The application-dependent processing of the BPT is generally implemented through a specific pruning of the tree. Accordingly, some pruning techniques are proposed and discussed for different applications. This PhD thesis focuses in particular on the segmentation, object detection and classification of hyperspectral imagery. Experimental results on various hyperspectral data sets demonstrate the interest and good performance of the BPT representation.
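The iterative region-merging construction behind a Binary Partition Tree can be sketched in a few lines. This is a hedged illustration, not the thesis's implementation: the one-band "spectra", the plain Euclidean metric between mean spectra, and the pixel-count-weighted mean region model are our simplifying assumptions (the thesis studies much richer region models and similarity measures).

```python
# Minimal Binary Partition Tree (BPT) sketch: start from single-pixel
# regions and repeatedly merge the two most similar active regions,
# recording each merge as a new internal tree node.

import itertools

def build_bpt(spectra):
    """Build a BPT over initial single-pixel regions.

    spectra: list of tuples, one spectrum per pixel.
    Returns (root_id, nodes) where nodes[id] = (children or None, mean spectrum).
    """
    nodes = {}    # node id -> (children pair or None, mean spectrum)
    counts = {}   # node id -> number of pixels in the region
    active = []   # ids of regions not yet merged
    for i, s in enumerate(spectra):
        nodes[i] = (None, tuple(s))
        counts[i] = 1
        active.append(i)

    next_id = len(spectra)
    while len(active) > 1:
        # Pick the pair of active regions with the most similar means
        # (squared Euclidean distance as the similarity metric).
        a, b = min(itertools.combinations(active, 2),
                   key=lambda p: sum((x - y) ** 2
                                     for x, y in zip(nodes[p[0]][1],
                                                     nodes[p[1]][1])))
        na, nb = counts[a], counts[b]
        # Merged region model: pixel-count-weighted mean spectrum.
        mean = tuple((x * na + y * nb) / (na + nb)
                     for x, y in zip(nodes[a][1], nodes[b][1]))
        nodes[next_id] = ((a, b), mean)
        counts[next_id] = na + nb
        active = [r for r in active if r not in (a, b)] + [next_id]
        next_id += 1
    return active[0], nodes

# Four single-pixel regions with one-band spectra, forming two clusters.
root, tree = build_bpt([(0.0,), (0.1,), (5.0,), (5.2,)])
```

A BPT over n leaves always contains 2n - 1 nodes, and the root region covers the whole image; pruning such a tree at different levels yields the coarser or finer partitions the abstract refers to.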
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." University of Sydney. School of Electrical and Information Engineering, 2005. http://hdl.handle.net/2123/699.

Full text
Abstract:
Thresholding is a commonly used technique in image segmentation because it is fast and easy to apply, and threshold selection is therefore an important issue. There are two general approaches to threshold selection: one is based on the histogram of the image, while the other is based on the gray-scale information in small local areas. The histogram of an image contains statistical data about its grayscale or color components. In this thesis, an adaptive logical thresholding method is first proposed for the binarization of blueprint images. The new method exploits the geometric features of blueprint images. It is implemented with a robust windows operation, based on the assumption that the objects have a "C" shape within a small area. We use multiple window sizes in the windows operation, which not only reduces computation time but also effectively separates thin lines from wide lines. Our method can automatically determine the threshold of images, and experiments show that it is effective for blueprint images and achieves good results over a wide range of images. Second, fuzzy set theory, together with probability partition and maximum entropy theory, is explored to compute the threshold from the histogram of the image. Fuzzy set theory has been widely used in many fields where ambiguous phenomena exist since it was proposed by Zadeh in 1965, and many thresholding methods have been developed using this theory. The concept used here is the fuzzy partition: because our method is based on the histogram of the image, the histogram is divided into several groups by fuzzy sets which represent the fuzzy membership of each group. Probability partition is associated with fuzzy partition: the probability distribution of each group is derived from the fuzzy partition.
Entropy, which originates in thermodynamics, was introduced into communication theory as a commonly used criterion for measuring the information transmitted through a channel. It was adopted in image processing as a measure of the information contained in the processed images, and it is applied in our method as a criterion for selecting the optimal fuzzy sets which partition the histogram. To find the threshold, the histogram of the image is partitioned by fuzzy sets which satisfy a certain entropy restriction, so the search for the best possible fuzzy sets becomes an important issue. There is no efficient method for this search procedure; therefore, expansion to multiple-level thresholding with fuzzy partition becomes extremely time-consuming or even impossible. In this thesis, the relationship between a probability partition (PP) and a fuzzy C-partition (FP) is studied. This relationship and the entropy approach are used to derive a thresholding technique that selects the optimal fuzzy C-partition; the measure of selection quality is the entropy function defined by the PP and the FP. A necessary condition for the entropy function to reach a maximum is derived. Based on this condition, an efficient search procedure for two-level thresholding is derived, which makes the search so efficient that extension to multilevel thresholding becomes possible. A novel fuzzy membership function is proposed for three-level thresholding, which produces a better result because a new relationship among the fuzzy membership functions is presented. This new relationship gives more flexibility in the search for the optimal fuzzy sets, although it also complicates the search in multi-level thresholding. This complication is solved by a new method called the "Onion-Peeling" method: because the relationship between the fuzzy membership functions is so complicated, it is impossible to obtain all the membership functions at once.
The search procedure is therefore decomposed into several layers of three-level partitions, except for the last layer, which may be a two-level one. The larger problem is thus reduced to three-level partitions, so that we can obtain the two outermost membership functions without worrying too much about the complicated intersections among the membership functions. The method is further revised for images with a dominant area of background, or with an object that distorts the appearance of the histogram of the image. The histogram is the basis of our method, as well as of many other methods, and a "bad" histogram shape will result in a badly thresholded image. A quadtree scheme is adopted to decompose the image into homogeneous and heterogeneous areas, and a multi-resolution thresholding method based on the quadtree and fuzzy partition is then devised to deal with these images. Extension of fuzzy partition methods to color images is also examined: an adaptive thresholding method for color images based on fuzzy partition is proposed which can determine the number of thresholding levels automatically. This thesis concludes that the "C"-shape assumption and varying window sizes in the windows operation contribute to a better segmentation of blueprint images. The efficient search procedure for the optimal fuzzy sets in the fuzzy-2 partition of the image histogram accelerates the process so much that it can be extended to multilevel thresholding. In the three-level fuzzy partition, the new relationship among the three fuzzy membership functions makes more sense than the conventional assumption and, as a result, performs better. A novel method, the "Onion-Peeling" method, is devised to deal with the complexity at the intersections among the multiple membership functions in the multilevel fuzzy partition.
It decomposes the multilevel partition into fuzzy-3 and fuzzy-2 partitions by transposing the partition space in the histogram, and is thus efficient in multilevel thresholding. A multi-resolution method, which applies the quadtree scheme to distinguish heterogeneous areas from homogeneous areas, is designed for images with large homogeneous areas, which usually distort the histogram of the image. The new histogram, based only on the heterogeneous areas, is adopted for the partition and outperforms the old one, while validity checks filter out the fragmented points, which are only a small portion of the whole image. The method thus gives good thresholded images for human face images.
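The entropy criterion the abstract builds on can be illustrated in its simplest, crisp (non-fuzzy) two-level form, in the spirit of Kapur's classical method: a candidate threshold splits the histogram in two, and the threshold maximizing the sum of the two sub-distributions' entropies is chosen. The thesis's fuzzy C-partition generalizes this idea; the histogram and function names below are ours.

```python
# Crisp maximum-entropy threshold selection over a gray-level histogram.
# Each candidate threshold t splits the bins into [0, t) and [t, end);
# we pick the t maximizing H(background) + H(object).

import math

def max_entropy_threshold(hist):
    """Return the bin index t maximizing the summed entropies of the two parts."""
    total = sum(hist)
    probs = [h / total for h in hist]

    def entropy(p):
        # Entropy of a sub-distribution renormalized to sum to 1.
        s = sum(p)
        if s == 0:
            return 0.0
        return -sum(q / s * math.log(q / s) for q in p if q > 0)

    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        h = entropy(probs[:t]) + entropy(probs[t:])
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# A bimodal histogram: dark peak in bins 0-3, bright peak in bins 8-11.
hist = [10, 30, 25, 8, 1, 0, 0, 1, 9, 28, 32, 12]
t = max_entropy_threshold(hist)
```

The exhaustive scan above is O(levels) per threshold and becomes combinatorial for multilevel thresholding, which is exactly the cost the thesis's efficient search procedure and "Onion-Peeling" decomposition are designed to avoid.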
APA, Harvard, Vancouver, ISO, and other styles
5

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/699.

Full text
Abstract:
Thresholding is a commonly used technique in image segmentation because it is fast and easy to apply, and threshold selection is therefore an important issue. There are two general approaches to threshold selection: one is based on the histogram of the image, while the other is based on the gray-scale information in small local areas. The histogram of an image contains statistical data about its grayscale or color components. In this thesis, an adaptive logical thresholding method is first proposed for the binarization of blueprint images. The new method exploits the geometric features of blueprint images. It is implemented with a robust windows operation, based on the assumption that the objects have a "C" shape within a small area. We use multiple window sizes in the windows operation, which not only reduces computation time but also effectively separates thin lines from wide lines. Our method can automatically determine the threshold of images, and experiments show that it is effective for blueprint images and achieves good results over a wide range of images. Second, fuzzy set theory, together with probability partition and maximum entropy theory, is explored to compute the threshold from the histogram of the image. Fuzzy set theory has been widely used in many fields where ambiguous phenomena exist since it was proposed by Zadeh in 1965, and many thresholding methods have been developed using this theory. The concept used here is the fuzzy partition: because our method is based on the histogram of the image, the histogram is divided into several groups by fuzzy sets which represent the fuzzy membership of each group. Probability partition is associated with fuzzy partition: the probability distribution of each group is derived from the fuzzy partition.
Entropy, which originates in thermodynamics, was introduced into communication theory as a commonly used criterion for measuring the information transmitted through a channel. It was adopted in image processing as a measure of the information contained in the processed images, and it is applied in our method as a criterion for selecting the optimal fuzzy sets which partition the histogram. To find the threshold, the histogram of the image is partitioned by fuzzy sets which satisfy a certain entropy restriction, so the search for the best possible fuzzy sets becomes an important issue. There is no efficient method for this search procedure; therefore, expansion to multiple-level thresholding with fuzzy partition becomes extremely time-consuming or even impossible. In this thesis, the relationship between a probability partition (PP) and a fuzzy C-partition (FP) is studied. This relationship and the entropy approach are used to derive a thresholding technique that selects the optimal fuzzy C-partition; the measure of selection quality is the entropy function defined by the PP and the FP. A necessary condition for the entropy function to reach a maximum is derived. Based on this condition, an efficient search procedure for two-level thresholding is derived, which makes the search so efficient that extension to multilevel thresholding becomes possible. A novel fuzzy membership function is proposed for three-level thresholding, which produces a better result because a new relationship among the fuzzy membership functions is presented. This new relationship gives more flexibility in the search for the optimal fuzzy sets, although it also complicates the search in multi-level thresholding. This complication is solved by a new method called the "Onion-Peeling" method: because the relationship between the fuzzy membership functions is so complicated, it is impossible to obtain all the membership functions at once.
The search procedure is therefore decomposed into several layers of three-level partitions, except for the last layer, which may be a two-level one. The larger problem is thus reduced to three-level partitions, so that we can obtain the two outermost membership functions without worrying too much about the complicated intersections among the membership functions. The method is further revised for images with a dominant area of background, or with an object that distorts the appearance of the histogram of the image. The histogram is the basis of our method, as well as of many other methods, and a "bad" histogram shape will result in a badly thresholded image. A quadtree scheme is adopted to decompose the image into homogeneous and heterogeneous areas, and a multi-resolution thresholding method based on the quadtree and fuzzy partition is then devised to deal with these images. Extension of fuzzy partition methods to color images is also examined: an adaptive thresholding method for color images based on fuzzy partition is proposed which can determine the number of thresholding levels automatically. This thesis concludes that the "C"-shape assumption and varying window sizes in the windows operation contribute to a better segmentation of blueprint images. The efficient search procedure for the optimal fuzzy sets in the fuzzy-2 partition of the image histogram accelerates the process so much that it can be extended to multilevel thresholding. In the three-level fuzzy partition, the new relationship among the three fuzzy membership functions makes more sense than the conventional assumption and, as a result, performs better. A novel method, the "Onion-Peeling" method, is devised to deal with the complexity at the intersections among the multiple membership functions in the multilevel fuzzy partition.
It decomposes the multilevel partition into fuzzy-3 and fuzzy-2 partitions by transposing the partition space in the histogram, and is thus efficient in multilevel thresholding. A multi-resolution method, which applies the quadtree scheme to distinguish heterogeneous areas from homogeneous areas, is designed for images with large homogeneous areas, which usually distort the histogram of the image. The new histogram, based only on the heterogeneous areas, is adopted for the partition and outperforms the old one, while validity checks filter out the fragmented points, which are only a small portion of the whole image. The method thus gives good thresholded images for human face images.
APA, Harvard, Vancouver, ISO, and other styles
6

Cutolo, Alfredo. "Image partition and video segmentation using the Mumford-Shah functional." Doctoral thesis, Universita degli studi di Salerno, 2012. http://hdl.handle.net/10556/280.

Full text
Abstract:
2010 - 2011
The aim of this thesis is to present an image partition and video segmentation procedure based on the minimization of a modified version of the Mumford-Shah functional. The Mumford-Shah functional used for image partition has then been extended to develop a video segmentation procedure. Unlike image processing, video analysis offers, besides the usual spatial connectivity of pixels (or regions) on each single frame, a natural notion of "temporal" connectivity between pixels (or regions) on consecutive frames, given by the optical flow. In this case, it makes sense to extend the tree data structure used to model a single image to a graph data structure able to handle a video sequence. The video segmentation procedure is based on the minimization of a modified version of the Mumford-Shah functional. In particular, the functional used for image partition merges neighboring regions with similar color without considering their movement. Our idea has been to merge neighboring regions with similar color and a similar optical flow vector. Also in this case, the minimization of the Mumford-Shah functional can be very complex if we consider each possible combination of the graph nodes. This computation becomes easy if we take into account a hierarchy of partitions constructed starting from the nodes of the graph. [edited by author]
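The merging criterion described above, combining neighboring regions only when both their color and their optical flow agree, can be sketched as a simple combined cost. The region model (mean RGB plus mean flow vector), the weights, and the Euclidean distances are our illustrative assumptions, not the functional actually minimized in the thesis.

```python
# Combined merging cost for two neighboring regions, each modeled as
# (mean_rgb, mean_flow): regions that look alike but move differently
# (or vice versa) still receive a high cost and stay separate.

import math

def merge_cost(region_a, region_b, alpha=1.0, beta=1.0):
    """Weighted sum of a color distance and an optical-flow distance."""
    (ca, fa), (cb, fb) = region_a, region_b
    d_color = math.dist(ca, cb)  # appearance dissimilarity
    d_flow = math.dist(fa, fb)   # motion dissimilarity (optical flow)
    return alpha * d_color + beta * d_flow

# Same color but different motion: the flow term keeps the regions apart.
static = ((120.0, 80.0, 40.0), (0.0, 0.0))
moving = ((120.0, 80.0, 40.0), (4.0, 0.0))
```

In a hierarchy-of-partitions setting, a greedy algorithm would repeatedly merge the pair of neighboring graph nodes with the lowest such cost, which is what makes the minimization tractable compared with considering every combination of nodes.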
APA, Harvard, Vancouver, ISO, and other styles
7

Sudirman. "Colour image coding indexing and retrieval using binary space partition tree." Thesis, University of Nottingham, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Berry, Dominic William. "Adaptive phase measurements /." [St. Lucia, Qld.], 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16247.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kim, Il-Ryeol. "Wavelet domain partition-based signal processing with applications to image denoising and compression." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 2.98 Mb., 119 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3221054.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gomila, Cristina. "Mise en correspondance de partitions en vue du suivi d'objets." Phd thesis, École Nationale Supérieure des Mines de Paris, 2001. http://pastel.archives-ouvertes.fr/pastel-00003272.

Full text
Abstract:
In the field of multimedia applications, future standards will open new ways of communicating, accessing and manipulating audiovisual information that go well beyond the simple compression to which previous coding standards were limited. Among the new functionalities, it is expected that the user will be able to access the content of images by editing and manipulating the objects they contain. However, standardization covers only the representation and coding of these objects, leaving a wide field of development open concerning their extraction and their tracking as they evolve through a video sequence. This is precisely the focus of this thesis. First, we studied and developed generic filtering and segmentation algorithms, since these tools are at the basis of any system for analyzing the content of an image or a sequence. More concretely, we studied in detail a new class of morphological filters known as levelings, as well as a variation of segmentation algorithms based on the constrained flooding of a gradient image. Segmentation techniques aim to produce a partition of the image as close as possible to the one made by the human eye, with a view to the subsequent recognition of objects. Nevertheless, in most cases this last task can only be performed through human interaction, and yet, when one wants to retrieve an object in a large collection of images, or follow its evolution through a sequence, supervising each of the partitions becomes impossible. It is therefore necessary to develop matching algorithms capable of propagating information through a series of images, limiting human interaction to a single initialization step.
Moving from still images to sequences, the central part of this thesis is devoted to the study of the partition-matching problem. The method we developed, named the joint segmentation and matching technique (SAC, from the French "Segmentation et Appariement Conjoint"), can be described as hybrid in nature. It combines classical graph-matching algorithms with new editing techniques based on the hierarchies of partitions provided by morphological segmentation. This combination has produced a very robust algorithm, despite the instability typically associated with segmentation processes. The segmentations of two images can differ strongly if considered from the point of view of a single partition; nevertheless, we have shown that they are much more stable if we consider hierarchies of nested partitions, in which all the contours present appear, each with a valuation indicating its strength. The results obtained with the SAC technique make it a very promising approach. Flexible and powerful, it is able to recognize an object when it reappears after occlusion, thanks to the management of a memory graph. Although we were particularly interested in the tracking problem, the developed algorithms have a much wider field of application in indexing, in particular for searching for objects in a database of images or sequences. Finally, within the framework of the European project M4M (MPEG f(o)ur mobiles), we tackled the implementation of a real-time segmentation demonstrator capable of detecting, segmenting and tracking a person in videophone sequences. Within this application, the real-time constraint became the great challenge to overcome, forcing us to simplify and optimize our algorithms.
The main interest in terms of new services is twofold: on the one hand, automatically cutting out the speaker would make it possible to adapt the coding to the object, saving bitrate without loss of quality in the regions of interest; on the other hand, it would make it possible to personalize the editing of sequences by changing the composition of the scene, for example by introducing a new background, or by placing several speakers in a virtual conference room.
APA, Harvard, Vancouver, ISO, and other styles
11

Cannon, Paul C. "Extending the information partition function : modeling interaction effects in highly multivariate, discrete data /." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2263.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Valero, Silvia. "Arbre de partition binaire : Un nouvel outil pour la représentation hiérarchique et l'analyse des images hyperspectrales." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00796108.

Full text
Abstract:
A hyperspectral image is formed by a set of spectral bands containing the information corresponding to an interval of the electromagnetic spectrum. The great advantage of hyperspectral imaging over traditional imaging is its ability to measure electromagnetic radiation in the visible range and at other wavelengths. This characteristic enables the detection of the subtle differences existing among the various objects composing an image. Processing such voluminous images requires the development of advanced algorithms that allow an optimal exploitation of the hyperspectral data. The traditional representation of these images is a set of spectral measurements, or spectra, one for each pixel of the image. The main drawback of this representation is that the pixel is the most elementary unit of digital images: an individual analysis of the spectra forming a hyperspectral image provides information that is not optimal. In this context, it is necessary to establish connections between the pixels of a hyperspectral image in order to distinguish shapes in the image that characterize its content. Region-based representations provide a way to achieve a first level of abstraction, reducing the number of elements to process and obtaining semantic information about the content of the image. This type of representation provides a clear improvement over the classical representation based on individual pixels. Under the title "Hyperspectral image representation and processing with binary partition trees", this thesis proposes the construction of a new region-based hierarchical representation of hyperspectral images: the binary partition tree (BPT).
This new representation can be interpreted as a set of regions of the image organized in a tree structure. The binary partition tree can be used to represent: (i) the decomposition of an image into several regions with semantic content and (ii) the various inclusion relations between the regions in the scene. The binary partition tree is built by an iterative region-merging algorithm. Its construction is investigated in this thesis through the study of different models for representing a hyperspectral region and different similarity measures between two hyperspectral regions. This investigation was necessary because the high dimensionality and complexity of the data require the definition of a specific region model and similarity measure. Thanks to its tree structure, the BPT allows the definition of a large number of techniques for advanced hyperspectral image processing. These techniques are typically based on pruning the tree, by which the regions most interesting for a given application are extracted. This thesis focuses on three particular applications: segmentation, classification and object detection in hyperspectral images. Experimental results obtained on various data sets demonstrate the qualities of the BPT representation.
APA, Harvard, Vancouver, ISO, and other styles
13

Golodetz, Stuart Michael. "Zipping and unzipping : the use of image partition forests in the analysis of abdominal CT scans." Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.558768.

Full text
Abstract:
This thesis focuses on how to allow a computer to identify features (such as major organs) in abdominal computerised tomography (CT) scans in an automatic way, whilst still facilitating user interaction with the results. Identifying such features is important if they are to be visualised in 3D for the purposes of diagnosis or surgical planning, or if their volumes are to be calculated when assessing a patient's response to therapy, but manual identification is time-consuming and error-prone. Some degree of computerised automation is therefore highly desirable, and indeed a small number of existing approaches have even attempted to fully automate the simultaneous identification of multiple abdominal organs. However, no existing method is capable of achieving results that are completely accurate in all cases, and due to the difficulties even of specifying when a result is correct, the development of such a method seems unlikely in the near future. It is thus important that medics retain the ability to correct the results when automated methods fail. My research proposes a way of facilitating both automatic feature identification and intuitive editing of the results by representing CT images as a hierarchy of partitions, or image partition forest (IPF). This data structure has appeared extensively in existing literature, but its potential uses for editing have hitherto received little attention. This thesis shows how it can be used for this purpose, by presenting a systematic set of algorithms that allow the user to modify the IPF, and select and identify features therein, via an intuitive graphical user interface. It further shows how such an IPF can be initially constructed from a set of CT images using morphological techniques, before presenting a series of novel methods for automatic feature identification in both 2D axial CT slices and 3D CT volumes.
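The image partition forest idea, a stack of nested partitions of the same pixel set that supports both automatic construction and interactive editing, can be sketched as follows. The class name, the set-valued layer encoding, and the merge edit are a minimal illustration of the data structure's contract, not Golodetz's actual implementation or algorithms.

```python
# Minimal image partition forest (IPF) sketch: each layer is a partition
# of the same pixel set, each layer coarser than the one below, plus an
# editing operation that merges two regions within a layer.

class IPF:
    def __init__(self, pixels):
        # Layer 0: every pixel is its own region.
        self.layers = [{p: frozenset([p]) for p in pixels}]

    def add_layer(self, grouping):
        """grouping: dict mapping a new region name to region names below."""
        below = self.layers[-1]
        layer = {name: frozenset().union(*(below[c] for c in children))
                 for name, children in grouping.items()}
        # Invariant: every layer must cover exactly the same pixels.
        assert set().union(*layer.values()) == set().union(*below.values())
        self.layers.append(layer)

    def merge(self, level, a, b, name):
        """Editing operation: merge regions a and b of one layer into one."""
        layer = self.layers[level]
        layer[name] = layer.pop(a) | layer.pop(b)

# Leaf layer of four pixels, one coarser layer, then a manual edit,
# e.g. a user joining two automatically produced regions into one organ.
ipf = IPF([0, 1, 2, 3])
ipf.add_layer({"left": [0, 1], "right": [2, 3]})
ipf.merge(1, "left", "right", "whole")
```

The point of keeping the whole hierarchy, rather than a single partition, is that edits like `merge` stay local to one layer while the finer layers below remain available for undoing or refining the result.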
APA, Harvard, Vancouver, ISO, and other styles
14

Joder, Cyril. "Alignement temporel musique-sur-partition par modèles graphiques discriminatifs." Phd thesis, Télécom ParisTech, 2011. http://pastel.archives-ouvertes.fr/pastel-00664260.

Full text
Abstract:
This thesis studies the problem of temporally aligning a music recording with the corresponding score, a task with many applications in the automatic indexing of musical documents. We adopt a probabilistic approach and propose the use of discriminative graphical models, namely conditional random fields (CRFs), casting alignment as a sequence-labeling problem. This class of models is more flexible than the hidden Markov models and hidden semi-Markov models commonly used in this domain. In particular, it allows the use of features (acoustic descriptors) extracted from overlapping audio frame sequences rather than from disjoint observations. We exploit this property to introduce features that implicitly model tempo at the lowest level of the model. We propose three model structures of increasing complexity, corresponding to different levels of precision in modeling the duration of musical events. Three types of acoustic descriptors are used to locally characterize the harmony, note onsets and tempo of the recording. Experiments on a database of classical piano and pop music validate the high accuracy of our models: with the best proposed system, more than 95% of note onsets are detected within 100 ms of their true position. Several classical acoustic features, computed from different audio representations, are used to measure the instantaneous correspondence between a point in the score and a frame of the recording, and these descriptors are compared on the basis of their alignment performance.
We then address the design of new features by learning a linear mapping from the symbolic representation to an arbitrary time-frequency representation of the audio. We explore two strategies, minimum divergence and maximum likelihood, for learning the optimal mapping. Experiments show that this approach can improve alignment accuracy whatever the audio representation used. We then study several adjustments needed to confront the systems with realistic use cases. In particular, complexity is reduced through an original hierarchical pruning strategy that exploits the hierarchical structure of music for approximate multi-pass decoding; our experiments show a larger complexity reduction than with the classical beam search method. We further examine a modification of the proposed models to make them robust to possible structural differences between the score and the recording. Finally, the scalability properties of the models are studied.
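The classical baseline against which such discriminative alignment models are usually compared is dynamic programming on a local cost matrix. As a purely illustrative aside, here is a minimal dynamic time warping (DTW) sketch — not the thesis's CRF models; the toy integer sequences and absolute-difference cost are assumptions for illustration:

```python
# Minimal dynamic time warping (DTW): the classical dynamic-programming
# baseline for score-to-audio alignment. Sequences and cost are toy choices.

def dtw(seq_a, seq_b, cost=lambda a, b: abs(a - b)):
    """Return the optimal alignment cost and path between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # skip a frame of seq_a
                              D[i][j - 1],      # skip a frame of seq_b
                              D[i - 1][j - 1])  # match both
    # Backtrack to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min((D[i - 1][j - 1], (i - 1, j - 1)),
                   (D[i - 1][j], (i - 1, j)),
                   (D[i][j - 1], (i, j - 1)))[1]
    return D[n][m], path[::-1]
```

The quadratic table D is exactly what pruning strategies such as beam search avoid filling exhaustively.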
APA, Harvard, Vancouver, ISO, and other styles
15

Zabiba, Mohammed. "Variational approximation of interface energies and applications." Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0419/document.

Full text
Abstract:
Les problèmes de partition minimale consistent à déterminer une partition d’un domaine en un nombre donné de composantes de manière à minimiser un critère géométrique. Dans les champs d’application tels que le traitement d’images et la mécanique des milieux continus, il est courant d’incorporer dans cet objectif une énergie d’interface qui prend en compte les longueurs des interfaces entre composantes. Ce travail est focalisé sur le traitement théorique et numérique de problèmes de partition minimale avec énergie d’interface. L’approche considérée est basée sur une approximation par Gamma-convergence et des techniques de dualité
Minimal partition problems consist in finding a partition of a domain into a given number of components in order to minimize a geometric criterion. In applicative fields such as image processing or continuum mechanics, it is standard to incorporate in this objective an interface energy that accounts for the lengths of the interfaces between components. The present work is focused on the theoretical and numerical treatment of minimal partition problems with interface energies. The considered approach is based on a Gamma-convergence approximation and duality techniques.
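In a commonly used generic form (a sketch only; the exact functional treated in the thesis may differ in its details), a minimal partition problem with interface energies reads:

```latex
% Partition the domain \Omega into N components, minimizing a weighted
% sum of interface lengths (c_{ij} are surface tensions, \mathcal{H}^{d-1}
% the (d-1)-dimensional Hausdorff measure of each interface):
\min_{\substack{\Omega_1,\dots,\Omega_N \\ \bigcup_i \Omega_i = \Omega,\;
\Omega_i \cap \Omega_j = \emptyset \text{ for } i \neq j}}
\;\sum_{1 \le i < j \le N} c_{ij}\,
\mathcal{H}^{d-1}\!\left(\partial\Omega_i \cap \partial\Omega_j\right)
```

The Gamma-convergence approach replaces this sharp-interface functional by a sequence of smooth functionals whose minimizers converge to a minimizer of the above.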
APA, Harvard, Vancouver, ISO, and other styles
16

Green, Christopher Lee. "IP Algorithm Applied to Proteomics Data." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd618.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kéchichian, Razmig. "Structural priors for multiobject semi-automatic segmentation of three-dimensional medical images via clustering and graph cut algorithms." Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00967381.

Full text
Abstract:
We develop a generic Graph Cut-based semiautomatic multiobject image segmentation method principally for use in routine medical applications ranging from tasks involving few objects in 2D images to fairly complex near whole-body 3D image segmentation. The flexible formulation of the method allows its straightforward adaptation to a given application. In particular, the graph-based vicinity prior model we propose, defined as shortest-path pairwise constraints on the object adjacency graph, can be easily reformulated to account for the spatial relationships between objects in a given problem instance. The segmentation algorithm can be tailored to the runtime requirements of the application and the online storage capacities of the computing platform by an efficient and controllable Voronoi tessellation clustering of the input image which achieves a good balance between cluster compactness and boundary adherence criteria. Qualitative and quantitative comprehensive evaluation and comparison with the standard Potts model confirm that the vicinity prior model brings significant improvements in the correct segmentation of distinct objects of identical intensity, the accurate placement of object boundaries and the robustness of segmentation with respect to clustering resolution. Comparative evaluation of the clustering method with competing ones confirms its benefits in terms of runtime and quality of produced partitions. Importantly, compared to voxel segmentation, the clustering step improves both overall runtime and memory footprint of the segmentation process up to an order of magnitude virtually without compromising the segmentation quality.
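As background for the Graph Cut machinery the abstract relies on, here is a toy sketch of binary labeling via an s-t minimum cut on a 1-D "image". The unary costs, the Potts-like pairwise weight and the Edmonds-Karp solver are illustrative assumptions, not the thesis's multiobject formulation or vicinity prior:

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph."""
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in capacity[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Find the bottleneck capacity along the path, then augment.
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            capacity[u][v] -= bottleneck
            capacity[v][u] = capacity[v].get(u, 0) + bottleneck
            v = u
        flow += bottleneck

def segment(pixels, lam=1):
    """Label each pixel fg(1)/bg(0) by a min-cut with a Potts smoothness term."""
    s, t = "s", "t"
    g = {s: {}, t: {}}
    for i, p in enumerate(pixels):
        g[i] = {}
        g[s][i] = p          # cost of labeling pixel i as background
        g[i][t] = 255 - p    # cost of labeling pixel i as foreground
    for i in range(len(pixels) - 1):   # 1-D 4-connectivity
        g[i][i + 1] = g[i].get(i + 1, 0) + lam
        g[i + 1][i] = g[i + 1].get(i, 0) + lam
    max_flow(g, s, t)
    # Pixels still reachable from s in the residual graph are foreground.
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v, cap in g[u].items():
            if cap > 0 and v not in seen:
                seen.add(v)
                queue.append(v)
    return [1 if i in seen else 0 for i in range(len(pixels))]
```

On a bright region followed by a dark one, e.g. `segment([200, 210, 205, 10, 5])`, the cut falls on the intensity boundary.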
APA, Harvard, Vancouver, ISO, and other styles
18

Buard, Benjamin. "Contribution à la compréhension des signaux de fluxmétrie laser Doppler : traitement des signaux et interprétations physiologiques." Phd thesis, Université d'Angers, 2010. http://tel.archives-ouvertes.fr/tel-00584166.

Full text
Abstract:
La compréhension du système cardiovasculaire périphérique est une clé indispensable pour le diagnostic précoce de nombreuses pathologies. Les signaux de fluxmétrie laser Doppler donnent des informations sur la microcirculation sanguine et permettent ainsi d'avoir une vue périphérique du système cardiovasculaire. Ce travail de thèse s'inscrit dans l'étude des propriétés de ces signaux physiologiques. Dans un premier temps nous présentons la technique de fluxmétrie laser Doppler et son utilité en recherche clinique. Nous détaillons ensuite l'analyse que nous avons menée afin d'obtenir des informations sur l'origine des fluctuations observées sur les signaux. L'implémentation de différents outils de traitement du signal dans les domaines temporel et fréquentiel a permis de montrer que ces fluctuations pourraient provenir, en partie, des propriétés physiologiques et/ou anatomiques de la zone étudiée. Afin d'étudier plus en détails ces fluctuations, nous avons ensuite mis en place une analyse multifractale des signaux de fluxmétrie laser Doppler. Les différents résultats obtenus ont permis de faire ressortir la possible implication des propriétés physiologiques de la zone étudiée dans la complexité des signaux de fluxmétrie laser Doppler.
APA, Harvard, Vancouver, ISO, and other styles
19

Randrianasoa, Tianatahina Jimmy Francky. "Représentation d'images hiérarchique multi-critère." Thesis, Reims, 2017. http://www.theses.fr/2017REIMS040/document.

Full text
Abstract:
La segmentation est une tâche cruciale en analyse d’images. L’évolution des capteurs d’acquisition induit de nouvelles images de résolution élevée, contenant des objets hétérogènes. Il est aussi devenu courant d’obtenir des images d’une même scène à partir de plusieurs sources. Ceci rend difficile l’utilisation des méthodes de segmentation classiques. Les approches de segmentation hiérarchiques fournissent des solutions potentielles à ce problème. Ainsi, l’Arbre Binaire de Partitions (BPT) est une structure de données représentant le contenu d’une image à différentes échelles. Sa construction est généralement mono-critère (i.e. une image, une métrique) et fusionne progressivement des régions connexes similaires. Cependant, la métrique doit être définie a priori par l’utilisateur, et la gestion de plusieurs images se fait en regroupant de multiples informations issues de plusieurs bandes spectrales dans une seule métrique. Notre première contribution est une approche pour la construction multicritère d’un BPT. Elle établit un consensus entre plusieurs métriques, permettant d’obtenir un espace de segmentation hiérarchique unifiée. Par ailleurs, peu de travaux se sont intéressés à l’évaluation de ces structures hiérarchiques. Notre seconde contribution est une approche évaluant la qualité des BPTs en se basant sur l’analyse intrinsèque et extrinsèque, suivant des exemples issus de vérités-terrains. Nous discutons de l’utilité de cette approche pour l’évaluation d’un BPT donné mais aussi de la détermination de la combinaison de paramètres adéquats pour une application précise. Des expérimentations sur des images satellitaires mettent en évidence la pertinence de ces approches en segmentation d’images
Segmentation is a crucial task in image analysis. Novel acquisition devices bring new images with higher resolutions, containing more heterogeneous objects. It has also become easier to get many images of an area from different sources. This phenomenon is encountered in many domains (e.g. remote sensing, medical imaging), making the use of classical image segmentation methods difficult. Hierarchical segmentation approaches provide solutions to such issues. In particular, the Binary Partition Tree (BPT) is a hierarchical data-structure modeling an image content at different scales. It is built in a mono-feature way (i.e. one image, one metric) by progressively merging similar connected regions. However, the metric has to be carefully chosen by the user, and the handling of several images is generally dealt with by gathering the multiple pieces of information provided by various spectral bands into a single metric. Our first contribution is a generalized framework for the BPT construction in a multi-feature way. It relies on a strategy setting up a consensus between many metrics, allowing us to obtain a unified hierarchical segmentation space. Surprisingly, few works were devoted to the evaluation of hierarchical structures. Our second contribution is a framework for evaluating the quality of BPTs, relying both on intrinsic and extrinsic quality analysis based on ground-truth examples. We also discuss the use of this evaluation framework both for evaluating the quality of a given BPT and for determining which BPT should be built for a given application. Experiments using satellite images emphasize the relevance of the proposed frameworks in the context of image segmentation.
APA, Harvard, Vancouver, ISO, and other styles
20

Kong, Tian Fook. "Multilevel spectral clustering : graph partitions and image segmentation." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45275.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008.
Includes bibliographical references (p. 145-146).
While the spectral graph partitioning method gives high quality segmentation, segmenting large graphs by the spectral method is computationally expensive. Numerous multilevel graph partitioning algorithms have been proposed to reduce the segmentation time for the spectral partition of large graphs. However, the greedy local refinement used in these multilevel schemes has a tendency to trap the partition in poor local minima. In this thesis, I develop a multilevel graph partitioning algorithm that incorporates the inverse powering method with greedy local refinement. The combination of the inverse powering method with greedy local refinement ensures that the partition quality of the multilevel method is as good as, if not better than, segmenting the large graph by the spectral method. In addition, I present a scheme to construct the adjacency matrix W and degree matrix D for the coarse graphs. The proposed multilevel graph partitioning algorithm is able to bisect a graph (k = 2) in significantly less time than segmenting the original graph without the multilevel implementation, while achieving the same normalized cut (Ncut) value. The starting eigenvector, obtained by solving a generalized eigenvalue problem on the coarsest graph, is close to the Fiedler vector of the original graph; hence, the inverse iteration needs only a few iterations to converge. In the k-way multilevel graph partition, the larger the graph, the greater the reduction in the time needed to segment it. For multilevel image segmentation, the multilevel scheme is able to give better segmentation than segmenting the original image, and has a higher chance of preserving the salient parts of an object.
In this work, I also show that the Ncut value is not the ultimate yardstick of segmentation quality: finding a partition with a lower Ncut value does not necessarily mean better segmentation quality. Segmenting large images by the multilevel method offers both speed and quality.
by Tian Fook Kong.
S.M.
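The normalized-cut machinery summarized above can be sketched compactly: the sign pattern of the Fiedler vector of the generalized problem (D - W)y = lambda D y splits the graph in two. This single-level illustration (the affinity matrix below is made up) omits the thesis's multilevel coarsening and inverse-powering refinement:

```python
import numpy as np

def spectral_bisect(W):
    """Bisect a graph given its symmetric affinity matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # The symmetric normalized Laplacian shares its spectrum with the
    # generalized eigenproblem (D - W) y = lambda D y used by Ncut.
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)          # eigenvalues ascending
    fiedler = D_inv_sqrt @ vecs[:, 1]           # second-smallest eigenvalue
    return fiedler >= 0                         # boolean partition labels

# Two dense 3-cliques (weight 5) joined by one weak edge (weight 1).
W = np.array([[0, 5, 5, 0, 0, 0],
              [5, 0, 5, 0, 0, 0],
              [5, 5, 0, 1, 0, 0],
              [0, 0, 1, 0, 5, 5],
              [0, 0, 0, 5, 0, 5],
              [0, 0, 0, 5, 5, 0]], dtype=float)
labels = spectral_bisect(W)
```

The cut is expected to fall on the weak edge, separating nodes 0-2 from nodes 3-5 (which side is labeled True is arbitrary, since the eigenvector's sign is).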
APA, Harvard, Vancouver, ISO, and other styles
21

Valero, Valbuena Silvia. "Arbre de partition binaire : un nouvel outil pour la représentation hiérarchique et l’analyse des images hyperspectrales." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENT123/document.

Full text
Abstract:
Abstract not provided by the doctoral candidate
The optimal exploitation of the information provided by hyperspectral images requires the development of advanced image processing tools. Therefore, under the title Hyperspectral image representation and Processing with Binary Partition Trees, this PhD thesis proposes the construction and the processing of a new region-based hierarchical hyperspectral image representation: the Binary Partition Tree (BPT). This hierarchical region-based representation can be interpreted as a set of hierarchical regions stored in a tree structure. Hence, the Binary Partition Tree succeeds in presenting: (i) the decomposition of the image in terms of coherent regions and (ii) the inclusion relations of the regions in the scene. Based on region-merging techniques, the construction of BPT is investigated in this work by studying hyperspectral region models and the associated similarity metrics. As a matter of fact, the very high dimensionality and the complexity of the data require the definition of specific region models and similarity measures. Once the BPT is constructed, the fixed tree structure allows implementing efficient and advanced application-dependent techniques on it. The application-dependent processing of BPT is generally implemented through a specific pruning of the tree. Accordingly, some pruning techniques are proposed and discussed according to different applications. This Ph.D is focused in particular on segmentation, object detection and classification of hyperspectral imagery. Experimental results on various hyperspectral data sets demonstrate the interest and the good performances of the BPT representation.
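The region-merging process that builds a BPT can be shown in miniature: start from single-pixel regions and repeatedly merge the two most similar adjacent ones. The region model (mean value) and similarity metric (absolute mean difference) below are simple stand-ins for the hyperspectral region models studied in the thesis, on a 1-D toy signal:

```python
# Minimal Binary Partition Tree construction sketch (toy 1-D version).

class Region:
    def __init__(self, mean, size, left=None, right=None):
        self.mean, self.size = mean, size
        self.left, self.right = left, right   # children in the BPT

def build_bpt(pixels):
    """Merge adjacent regions greedily; return the BPT root and node count."""
    regions = [Region(float(p), 1) for p in pixels]  # the leaves
    nodes = len(regions)
    while len(regions) > 1:
        # Most similar pair of *adjacent* regions (1-D adjacency).
        i = min(range(len(regions) - 1),
                key=lambda k: abs(regions[k].mean - regions[k + 1].mean))
        a, b = regions[i], regions[i + 1]
        merged = Region((a.mean * a.size + b.mean * b.size) / (a.size + b.size),
                        a.size + b.size, a, b)
        regions[i:i + 2] = [merged]
        nodes += 1
    return regions[0], nodes

root, n_nodes = build_bpt([10, 11, 12, 100, 101])
```

A BPT over n leaves always has 2n - 1 nodes, and the last merge (the root's two children) corresponds to the strongest boundary in the data, here between 12 and 100.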
APA, Harvard, Vancouver, ISO, and other styles
22

Wingate, David. "Solving Large MDPs Quickly with Partitioned Value Iteration." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd437.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Zanoguera, Tous Maria Fransisca. "Segmentation interactive d'images fixes et de séquences vidéo basée sur des hiérarchies de partitions." Phd thesis, École Nationale Supérieure des Mines de Paris, 2001. http://pastel.archives-ouvertes.fr/pastel-00003264.

Full text
Abstract:
The wide variety of images and video sequences encountered in multimedia makes any fully automatic segmentation project extremely complex. Our approach seeks efficient segmentation at the cost of minimal interaction. To allow great flexibility and fast response times, the content of the sequence is represented as nested partitions. All possible contours in the image are detected, each with an index indicating its strength. The segmentation step itself then offers the user various mechanisms for the final selection of the contours of actual interest. Multiple segmentations are thus possible on this hierarchical representation without requiring new computations. We first study different hierarchies associated with morphological floodings, as well as several mechanisms for introducing prior knowledge when it is available. We then extend the notions presented for still images to video sequences using a 3D-recursive approach, so that a single hierarchy associated with a complete video sequence is computed. Interaction tools are proposed that allow the user to manipulate the hierarchy intuitively. Thanks to the tree representations used, manipulating the hierarchy has a very low computational cost and the results of the interaction are perceived by the user as immediate.
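The key property described above — many segmentations can be read off one hierarchy with no recomputation — can be illustrated in miniature: each possible contour carries a strength, and choosing a threshold instantly yields one partition. The saliency values below are made-up numbers, not output of a morphological flooding:

```python
# Reading a partition off a hierarchy of nested partitions (toy 1-D case).

def partition_at(saliencies, level):
    """Split pixel positions into segments, cutting where saliency > level."""
    segments, current = [], [0]
    for pos, s in enumerate(saliencies, start=1):
        if s > level:           # this contour survives at the chosen level
            segments.append(current)
            current = []
        current.append(pos)
    segments.append(current)
    return segments

# Contour saliencies between 5 consecutive pixels.
sal = [1, 8, 2, 9]
```

Raising the level merges regions, lowering it splits them, and every level is available immediately: `partition_at(sal, 5)` gives three segments, `partition_at(sal, 10)` the whole image as one.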
APA, Harvard, Vancouver, ISO, and other styles
24

Le, Capitaine Hoel. "Opérateurs d'agrégation pour la mesure de similarité. Application à l'ambiguïté en reconnaissance de formes." Phd thesis, Université de La Rochelle, 2009. http://tel.archives-ouvertes.fr/tel-00438516.

Full text
Abstract:
In this thesis, we address two pattern recognition problems: the reject option in supervised classification, and the determination of the number of clusters in unsupervised classification. The first problem consists in identifying the regions of feature space where observations do not clearly belong to a single class. The second relies on the analysis of a cloud of observations whose class memberships are unknown; the goal is to uncover structures that distinguish the different classes and, in particular, to find their number. To solve these problems, we base our proposals on aggregation operators, in particular triangular norms. We define new similarity measures allowing the characterization of varied situations. In particular, we propose new types of similarity measures: order-based similarity, block-based similarity, and similarity through a logical approach. These similarity measures are then applied to the problems mentioned above. The generic character of the proposed measures recovers many proposals from the literature and offers great flexibility of use in practice. Experimental results on standard datasets from the considered domains validate our approach.
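Triangular norms (t-norms), the aggregation operators the thesis builds on, are associative, commutative, monotone operations on [0, 1] with neutral element 1. Three classical examples, as a small reference sketch (not the thesis's new measures):

```python
# Three classical t-norms for aggregating similarity degrees in [0, 1].

def t_min(a, b):          # Gödel (minimum) t-norm
    return min(a, b)

def t_product(a, b):      # product t-norm
    return a * b

def t_lukasiewicz(a, b):  # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)
```

All three share the neutral element 1 and satisfy the pointwise ordering Łukasiewicz <= product <= minimum, which is what makes the choice of t-norm a tunable degree of strictness when aggregating per-feature similarities into one measure.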
APA, Harvard, Vancouver, ISO, and other styles
25

Gatica, Perez Daniel. "Extensive operators in lattices of partitions for digital video analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/5874.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Benammar, Riyadh. "Détection non-supervisée de motifs dans les partitions musicales manuscrites." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI112.

Full text
Abstract:
Cette thèse s'inscrit dans le contexte de la fouille de données appliquées aux partitions musicales manuscrites anciennes et vise une recherche de motifs mélodiques ou rythmiques fréquents définis comme des séquences de notes répétitives aux propriétés caractéristiques. On rencontre un grand nombre de déclinaisons possibles de motifs : les transpositions, les inversions et les motifs dits « miroirs ». Ces motifs permettent aux musicologues d'avoir un niveau d'analyse approfondi sur les œuvres d'un compositeur ou d'un style musical. Dans un contexte d'exploration de corpus de grande taille où les partitions sont juste numérisées et non transcrites, une recherche automatisée de motifs vérifiant des contraintes ciblées devient un outil indispensable à leur étude. Pour la réalisation de l'objectif de détection de motifs fréquents sans connaissance a priori, nous sommes partis d'images de partitions numérisées. Après des étapes de prétraitements sur l'image, nous avons exploité et adapté un modèle de détection et de reconnaissance de primitives musicales (tête de notes, hampes...) de la famille de réseaux de neurones à convolutions de type Region-Proposal CNN (RPN). Nous avons ensuite développé une méthode d'encodage de primitives pour générer une séquence de notes en évitant la tâche complexe de transcription complète de l'œuvre manuscrite. Cette séquence a ensuite été analysée à travers l'approche CSMA (Contraint String Mining Algorithm) que nous avons conçue pour détecter les motifs fréquents présents dans une ou plusieurs séquences avec une prise en compte de contraintes sur leur fréquence et leur taille, ainsi que la taille et le nombre de sauts autorisés (gaps) à l'intérieur des motifs. La prise en compte du gap a ensuite été étudiée pour contourner les erreurs de reconnaissance produites par le réseau RPN évitant ainsi la mise en place d'un système de post-correction des erreurs de transcription des partitions. 
Le travail a été finalement validé par l'étude des motifs musicaux pour des applications d'identification et de classification de compositeurs
This thesis falls within the field of data mining applied to ancient handwritten music scores and aims at a search for frequent melodic or rhythmic motifs defined as repetitive note sequences with characteristic properties. There are a large number of possible variations of motifs: transpositions, inversions and so-called "mirror" motifs. These motifs allow musicologists to have a level of in-depth analysis on the works of a composer or a musical style. In a context of exploring large corpora where scores are just digitized and not transcribed, an automated search for motifs that verify targeted constraints becomes an essential tool for their study. To achieve the objective of detecting frequent motifs without prior knowledge, we started from images of digitized scores. After pre-processing steps on the image, we exploited and adapted a model for detecting and recognizing musical primitives (note-heads, stems...) from the family of Region-Proposal CNN (RPN) convolution neural networks. We then developed a primitive encoding method to generate a sequence of notes without the complex task of transcribing the entire manuscript work. This sequence was then analyzed using the CSMA (Constraint String Mining Algorithm) approach designed to detect the frequent motifs present in one or more sequences, taking into account constraints on their frequency and length, as well as the size and number of gaps allowed within the motifs. The gap was then studied to avoid recognition errors produced by the RPN network, thus avoiding the implementation of a post-correction system for transcription errors. The work was finally validated by the study of musical motifs for composer identification and classification
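The core counting idea behind such constrained motif mining can be shown with a drastically simplified stand-in: enumerate every contiguous note subsequence whose length lies in a given range and keep those meeting a minimum frequency. (The thesis's CSMA additionally handles gaps and multi-sequence constraints; the melody below is a made-up example.)

```python
from collections import Counter

def frequent_motifs(notes, min_len=2, max_len=4, min_freq=2):
    """Return contiguous motifs of bounded length occurring >= min_freq times."""
    counts = Counter()
    for length in range(min_len, max_len + 1):
        for start in range(len(notes) - length + 1):
            counts[tuple(notes[start:start + length])] += 1
    return {motif: freq for motif, freq in counts.items() if freq >= min_freq}

melody = ["C", "D", "E", "C", "D", "E", "G"]
motifs = frequent_motifs(melody)
```

On this melody the repeated phrase C-D-E and its sub-motifs are the only sequences that clear the frequency threshold.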
APA, Harvard, Vancouver, ISO, and other styles
27

Klava, Bruno. "Redução no esforço de interação em segmentação de imagens digitais através de aprendizagem computacional." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-08122014-152731/.

Full text
Abstract:
A segmentação é um passo importante em praticamente todas as tarefas que envolvem processamento de imagens digitais. Devido à variedade de imagens e diferentes necessidades da segmentação, a automação da segmentação não é uma tarefa trivial. Em muitas situações, abordagens interativas, nas quais o usuário pode intervir para guiar o processo de segmentação, são bastante úteis. Abordagens baseadas na transformação watershed mostram-se adequadas para a segmentação interativa de imagens: o watershed a partir de marcadores possibilita que o usuário marque as regiões de interesse na imagem; o watershed hierárquico gera uma hierarquia de partições da imagem sendo analisada, hierarquia na qual o usuário pode navegar facilmente e selecionar uma particular partição (segmentação). Em um trabalho prévio, propomos um método que integra as duas abordagens de forma que o usuário possa combinar os pontos fortes dessas duas formas de interação intercaladamente. Apesar da versatilidade obtida ao se integrar as duas abordagens, as hierarquias construídas dificilmente contêm partições interessantes e o esforço de interação necessário para se obter um resultado desejado pode ser muito elevado. Nesta tese propomos um método, baseado em aprendizagem computacional, que utiliza imagens previamente segmentadas para tentar adaptar uma dada hierarquia de forma que esta contenha partições mais próximas de uma partição de interesse. Na formulação de aprendizagem computacional, diferentes características da imagem são associadas a possíveis contornos de regiões, e esses são classificados como contornos que devem ou não estar presentes na partição final por uma máquina de suporte vetorial previamente treinada. A hierarquia dada é adaptada de forma a conter uma partição que seja consistente com a classificação obtida. 
Essa abordagem é particularmente interessante em cenários nos quais lotes de imagens similares ou sequências de imagens, como frames em sequências de vídeo ou cortes produzidas por exames de diagnóstico por imagem, precisam ser segmentadas. Nesses casos, é esperado que, a cada nova imagem a ser segmentada, o esforço de interação necessário para se obter a segmentação desejada seja reduzido em relação ao esforço que seria necessário com o uso da hierarquia original. Para não dependermos de experimentos com usuários na avaliação da redução no esforço de interação, propomos e utilizamos um modelo de interação que simula usuários humanos no contexto de segmentação hierárquica. Simulações deste modelo foram comparadas com sequências de interação observadas em experimentos com usuários humanos. Experimentos com diferentes lotes e sequências de imagens mostram que o método é capaz de reduzir o esforço de interação.
Segmentation is an important step in nearly all tasks involving digital image processing. Due to the variety of images and segmentation needs, automation of segmentation is not a trivial task. In many situations, interactive approaches in which the user can intervene to guide the segmentation process, are quite useful. Watershed transformation based approaches are suitable for interactive image segmentation: the watershed from markers allows the user to mark the regions of interest in the image; the hierarchical watershed generates a hierarchy of partitions of the image being analyzed, hierarchy in which the user can easily navigate and select a particular partition (segmentation). In a previous work, we have proposed a method that integrates the two approaches so that the user can combine the strong points of these two forms of interaction interchangeably. Despite the versatility obtained by integrating the two approaches, the built hierarchies hardly contain interesting partitions and the interaction effort needed to obtain a desired outcome can be very high. In this thesis we propose a method, based on machine learning, that uses images previously segmented to try to adapt a given hierarchy so that it contains partitions closer to the partition of interest. In the machine learning formulation, different image features are associated to the possible region contours, and these are classified as ones that must or must not be present in the final partition by a previously trained support vector machine. The given hierarchy is adapted to contain a partition that is consistent with the obtained classification. This approach is particularly interesting in scenarios where batches of similar images or sequences of images, such as frames in video sequences or cuts produced by imaging diagnosis procedures, need to be segmented. 
In such cases, it is expected that, for each new image to be segmented, the interaction effort required to achieve the desired segmentation is reduced relative to the effort that would be required when using the original hierarchy. In order not to depend on experiments with users to assess the reduction in interaction effort, we propose and use an interaction model that simulates human users in the context of hierarchical segmentation. Simulations of this model were compared with interaction sequences observed in experiments with human users. Experiments with different batches and image sequences show that the method is able to reduce the interaction effort.
APA, Harvard, Vancouver, ISO, and other styles
28

Silva, Cauane Blumenberg. "Adaptive tiling algorithm based on highly correlated picture regions for the HEVC standard." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/96040.

Full text
Abstract:
Esta dissertação de mestrado propõe um algoritmo adaptativo que é capaz de dinamicamente definir partições tile para quadros intra- e inter-preditos com o objetivo de reduzir o impacto na eficiência de codificação. Tiles são novas ferramentas orientadas ao paralelismo que integram o padrão de codificação de vídeos de alta eficiência (HEVC – High Efficiency Video Coding standard), as quais dividem o quadro em regiões retangulares independentes que podem ser processadas paralelamente. Para viabilizar o paralelismo, os tiles quebram as dependências de codificação através de suas bordas, gerando impactos na eficiência de codificação. Este impacto pode ser ainda maior caso os limites dos tiles dividam regiões altamente correlacionadas do quadro, porque a maior parte das ferramentas de codificação usam informações de contexto durante o processo de codificação. Assim, o algoritmo proposto agrupa as regiões do quadro que são altamente correlacionadas dentro de um mesmo tile para reduzir o impacto na eficiência de codificação que é inerente ao uso de tiles. Para localizar as regiões altamente correlacionadas do quadro de uma maneira inteligente, as características da imagem e também as informações de codificação são analisadas, gerando mapas de particionamento que servem como parâmetro de entrada para o algoritmo. Baseado nesses mapas, o algoritmo localiza as quebras naturais de contexto presentes nos quadros do vídeo e define os limites dos tiles nessas regiões. Dessa maneira, as quebras de dependência causadas pelas bordas dos tiles coincidem com as quebras de contexto naturais do quadro, minimizando as perdas na eficiência de codificação causadas pelo uso dos tiles. O algoritmo proposto é capaz de reduzir mais de 0.4% e mais de 0.5% o impacto na eficiência de codificação causado pelos tiles em quadros intra-preditos e inter-preditos, respectivamente, quando comparado com tiles uniformes.
This Master Thesis proposes an adaptive algorithm that is able to dynamically choose suitable tile partitions for intra- and inter-predicted frames in order to reduce the impact on coding efficiency arising from such partitioning. Tiles are novel parallelism-oriented tools that integrate the High Efficiency Video Coding (HEVC) standard, which divide the frame into independent rectangular regions that can be processed in parallel. To enable the parallelism, tiles break the coding dependencies across their boundaries, leading to coding efficiency impacts. These impacts can be even higher if tile boundaries split highly correlated picture regions, because most of the coding tools use context information during the encoding process. Hence, the proposed algorithm clusters the highly correlated picture regions inside the same tile to reduce the inherent coding efficiency impact of using tiles. To wisely locate the highly correlated picture regions, image characteristics and encoding information are analyzed, generating partitioning maps that serve as the algorithm input. Based on these maps, the algorithm locates the natural context breaks of the picture and defines the tile boundaries on these key regions. This way, the dependency breaks caused by the tile boundaries match the natural context breaks of a picture, thus minimizing the coding efficiency losses caused by the use of tiles. The proposed adaptive tiling algorithm, in some cases, provides over 0.4% and over 0.5% of BD-rate savings for intra- and inter-predicted frames respectively, when compared to uniform-spaced tiles, an approach which does not consider the picture context to define the tile partitions.
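The boundary-placement idea above can be caricatured in one dimension: instead of the fixed midpoint a uniform tiling would use, put the tile boundary at the least-active column within an allowed band. The "activity profile" (e.g. summed column gradients), the band width, and the profile values below are assumptions for illustration, not the thesis's partitioning maps:

```python
# Toy content-adaptive tile boundary: cut where the picture context breaks.

def adaptive_boundary(activity, slack=0.25):
    """Pick a column index for a vertical tile boundary near the middle."""
    n = len(activity)
    lo = int(n * (0.5 - slack))
    hi = int(n * (0.5 + slack))
    band = range(max(1, lo), min(n - 1, hi))   # stay near a balanced split
    return min(band, key=lambda c: activity[c])

# Low activity at column 8 marks a natural context break.
profile = [9, 8, 9, 7, 8, 6, 5, 7, 1, 8, 9, 8]
```

Here the boundary lands on column 8 rather than the uniform midpoint (column 6), so the dependency break coincides with the natural context break.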
APA, Harvard, Vancouver, ISO, and other styles
29

Queiroga, Eduardo Vieira. "Abordagens meta-heurísticas para clusterização de dados e segmentação de imagens." Universidade Federal da Paraíba, 2017. http://tede.biblioteca.ufpb.br:8080/handle/tede/9249.

Full text
Abstract:
Made available in DSpace on 2017-08-14. Previous issue date: 2017-02-17.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Many computational problems are considered hard due to their combinatorial nature. In such cases, the use of exhaustive search techniques for solving medium- and large-size instances becomes unfeasible. Some data clustering and image segmentation problems belong to the NP-hard class and require adequate treatment by heuristic techniques such as metaheuristics. Data clustering is a set of problems in the fields of pattern recognition and unsupervised machine learning which aims at finding groups (or clusters) of similar objects in a dataset, using a predetermined measure of similarity. The partitional clustering problem aims at completely separating the data into disjoint, non-empty clusters. For center-based clustering methods, the minimal intracluster distance criterion is one of the most employed. This work proposes an approach based on the Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP) metaheuristic. High-quality results were obtained in comparative experiments between the proposed method and other metaheuristics from the literature. In the computer vision field, image segmentation is the process of partitioning an image into regions of interest (sets of pixels) without overlap. Histogram thresholding is one of the simplest types of segmentation for grayscale images. Otsu's method is one of the most popular; it searches for the thresholds that maximize the variance between the segments. For images with deep gray levels, exhaustive search demands a high computational cost, since the number of possible solutions grows exponentially with the number of thresholds. Therefore, metaheuristics have been playing an important role in finding good-quality thresholds. In this work, an approach based on Quantum-behaved Particle Swarm Optimization (QPSO) was investigated for multilevel thresholding of images available in the literature. A local search based on Variable Neighborhood Descent (VND) was proposed to improve the convergence of the search for the thresholds. A specific application of thresholding to electron microscopy images for microstructural analysis of cementitious materials was investigated, as well as graph algorithms for crack detection and feature extraction.
Muitos problemas computacionais são considerados difíceis devido à sua natureza combinatória. Para esses problemas, o uso de técnicas de busca exaustiva para resolver instâncias de médio e grande porte torna-se impraticável. Quando modelados como problemas de otimização, alguns problemas de clusterização de dados e segmentação de imagens pertencem à classe NP-Difícil e requerem um tratamento adequado por métodos heurísticos. Clusterização de dados é um vasto conjunto de problemas em reconhecimento de padrões e aprendizado de máquina não-supervisionado, cujo objetivo é encontrar grupos (ou clusters) de objetos similares em uma base de dados, utilizando uma medida de similaridade preestabelecida. O problema de clusterização particional consiste em separar completamente os dados em conjuntos disjuntos e não vazios. Para métodos de clusterização baseados em centros de cluster, minimizar a soma das distâncias intracluster é um dos critérios mais utilizados. Para tratar este problema, é proposta uma abordagem baseada na meta-heurística Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP). Resultados de alta qualidade foram obtidos através de experimentos envolvendo o algoritmo proposto e outras meta-heurísticas da literatura. Em visão computacional, segmentação de imagens é o processo de particionar uma imagem em regiões de interesse (conjuntos de pixels) sem que haja sobreposição. Um dos tipos mais simples de segmentação é a limiarização do histograma para imagens em nível de cinza. O método de Otsu é um dos mais populares e propõe a busca pelos limiares que maximizam a variância entre os segmentos. Para imagens com grande profundidade de cinza, técnicas de busca exaustiva possuem alto custo computacional, uma vez que o número de soluções possíveis cresce exponencialmente com o aumento no número de limiares. Dessa forma, as meta-heurísticas têm desempenhado um papel importante em encontrar limiares de boa qualidade. Neste trabalho, uma abordagem baseada em Quantum-behaved Particle Swarm Optimization (QPSO) foi investigada para limiarização multinível de imagens disponíveis na literatura. Uma busca local baseada em Variable Neighborhood Descent (VND) foi proposta para acelerar a convergência da busca pelos limiares. Além disso, uma aplicação específica de segmentação de imagens de microscopia eletrônica para análise microestrutural de materiais cimentícios foi investigada, bem como a utilização de algoritmos em grafos para detecção de trincas e extração de características de interesse.
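The between-class variance criterion that this abstract attributes to Otsu's method can be made concrete with a minimal single-threshold sketch (illustrative only, with a hypothetical histogram; the thesis uses QPSO precisely to avoid this exhaustive scan in the multilevel case):

```python
def otsu_threshold(hist):
    """Exhaustive single-threshold Otsu: return the threshold t that
    maximises the between-class variance w0*w1*(mu0-mu1)^2 of the two
    segments hist[:t] and hist[t:]."""
    total = float(sum(hist))
    p = [h / total for h in hist]  # gray-level probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = sum(p[:t]), sum(p[t:])
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = sum(i * p[i] for i in range(t)) / w0
        mu1 = sum(i * p[i] for i in range(t, len(p))) / w1
        var_b = w0 * w1 * (mu0 - mu1) ** 2
        if var_b > best_var:
            best_t, best_var = t, var_b
    return best_t
```

For L gray levels and T thresholds the exhaustive multilevel version scans O(L^T) candidate tuples, which is the exponential growth the abstract mentions.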
APA, Harvard, Vancouver, ISO, and other styles
30

Becer, Huseyin Caner. "A Robust Traffic Sign Recognition System." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612912/index.pdf.

Full text
Abstract:
The traffic sign detection and recognition system is an essential part of driver warning and assistance systems. In this thesis, a traffic sign recognition system is studied, considering circular, triangular and square Turkish traffic signs. For the detection stage we have two different approaches: in the first, we assume that the detected signs are available; in the second, the region of interest (ROI) of the traffic sign image is given and the traffic sign is extracted from the ROI by a detection algorithm. In the recognition stage, the ring-partitioned method is implemented: the traffic sign is divided into rings and the normalized fuzzy histogram is used as an image descriptor, and the histograms of these rings are compared with reference histograms. Ring partitions provide robustness to rotation, because rotation does not change the histogram of a ring; this is critical for circular signs, where rotation is hard to detect. To overcome the illumination problem, a specified gray-scale image is used. To apply the method to triangular and square signs, the circumscribed circle of these shapes is extracted. The ring-partitioned method is tested both when the detected signs are available and when the region of interest is given. The data sets contain about 500 static and video-captured images, all taken in daytime.
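The rotation-invariance argument in this abstract can be sketched as follows: pixels stay inside their concentric ring under rotation, so per-ring histograms do not change. A toy illustration (plain intensity histograms on a synthetic image; the thesis uses normalized fuzzy histograms, and the ring/bin counts here are made-up parameters):

```python
import math

def ring_histograms(img, n_rings=3, n_bins=4, max_val=256):
    """Per-ring gray-level histograms around the image centre.
    Rotation only permutes pixels within a ring, so the histograms
    are rotation-invariant (exactly so for 90-degree rotations of a
    square image)."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = math.hypot(cy, cx) + 1e-9
    hists = [[0] * n_bins for _ in range(n_rings)]
    for y in range(h):
        for x in range(w):
            ring = min(int(n_rings * math.hypot(y - cy, x - cx) / r_max),
                       n_rings - 1)
            hists[ring][img[y][x] * n_bins // max_val] += 1
    return hists

def rotate90(img):
    # clockwise 90-degree rotation of a square image
    return [list(row) for row in zip(*img[::-1])]
```

Matching a sign against a reference then reduces to comparing these ring histograms, regardless of the sign's in-plane rotation.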
APA, Harvard, Vancouver, ISO, and other styles
31

Nordliden, Petter, and Sjöbladh Linda Didrik. "Måste det alltid bråkas med bråk? : En systematisk litteraturstudie om stambråkets betydelse i matematikundervisningen." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-91687.

Full text
Abstract:
This systematic literature review aims to identify, through research, decisive factors for successful teaching strategies for unit fractions in compulsory-school mathematics. The study is based on eleven scientific articles, processed systematically using content analysis to answer the research questions of which decisive factors the research shows for the teaching of unit fractions and which successful teaching strategies exist. The research shows that the area model dominates fraction teaching as a form of representation, which means that unit fractions receive little attention in teaching. The unit fraction is an important part of acquiring decisive aspects of fractions. The results show that teaching with linear measurement (linear forms of representation) emphasizes the unit fraction's role as an interpretive tool for comparing other fractions, as well as the inverse relationship whereby a larger denominator constitutes a smaller share. The results also show that teaching unit fractions establishes fundamental principles for rational numbers and for more advanced mathematical areas such as proportionality and algebra. Teachers' choices of teaching strategies and forms of representation, and their knowledge in these areas, are therefore vital to what pupils can acquire in fraction teaching.
APA, Harvard, Vancouver, ISO, and other styles
32

Cara, Michel. "Stratégies d'apprentissage de la lecture musicale à court-terme : mémoire de travail et oculométrie cognitive." Thesis, Dijon, 2013. http://www.theses.fr/2013DIJOL013.

Full text
Abstract:
Tout au long de cette thèse, l’évaluation musicale est traitée comme un objet d’étude latent visant à donner des outils pour l’apprentissage de la lecture musicale. Grâce à l’analyse des mouvements oculaires et les variables provenant de la performance, nous avons défini certaines variables qui rendent compte de l’expertise et des interactions entre différents groupes de niveaux d’expertise musicale lors de l’apprentissage d’un nouveau morceau de musique. De façon plus détaillée, nous avons observé la mise en œuvre de différentes stratégies de prise d’information, de traitement et de récupération de l’information musicale en fonction du niveau pianistique et souligné l’importance d’apprendre en interaction avec la classe et le professeur. Les stratégies sont en même temps ajustées par rapport à la confiance acquise au cours du processus d'acquisition de compétences (Bandura, 1997 ; McPherson et McCormick, 2006). En référence au débat actuel concernant la nature de la lecture de partitions, nous avons comparé les traitements musicaux et verbaux pendant une tâche de lecture « compréhensive » de textes et de partitions. Dans l’ensemble et au regard du modèle de Baddeley (1990), les ressources cognitives des musiciens pendant la lecture musicale seraient mobilisées en fonction de l’expertise et du type de style musical
Throughout this thesis, the evaluation of music performance is viewed as a latent object of study, with the aim of providing tools for learning to read music. From eye movements and music performance we defined variables that account for expert performance and for interactions between skill groups when learning a new piece of music. In more detail, we observed the use of different strategies for music information intake, processing and retrieval depending on the musicians' expertise, and we stressed the importance of learning through interaction with the class and the teacher. In the process of skill acquisition, strategies are adjusted as self-confidence is gained (Bandura, 1997; McPherson and McCormick, 2006). With reference to the current debate about the nature of music reading, we compared musical and verbal processing during comprehensive reading of texts and scores. On the whole, considering Baddeley's (1990) model, musicians' cognitive resources during music reading appear to be mobilized as a function of expertise and musical style.
APA, Harvard, Vancouver, ISO, and other styles
33

Proença, Patrícia Aparecida. "Recuperação de imagens digitais com base na distribuição de características de baixo nível em partições do domínio utilizando índice invertido." Universidade Federal de Uberlândia, 2010. https://repositorio.ufu.br/handle/123456789/12500.

Full text
Abstract:
Fundação de Amparo a Pesquisa do Estado de Minas Gerais
The main goal of an image retrieval system is to obtain images from a collection that meet a need of the user. To achieve this, retrieval systems generally compute the similarity between the user's need, represented by a query, and representations of the images in the collection. This goal is difficult to attain due to the subjectivity of the concept of similarity between images, since the same image can be interpreted in different ways by different people. In an attempt to solve this problem, content-based image retrieval systems exploit the low-level characteristics of color, shape and texture when computing the similarity between images. A problem with this approach is that most systems compute similarity by comparing the query image against every image in the collection, making processing slow and costly. By indexing low-level characteristics of partitions of digital images mapped to an inverted index, this work seeks better query-processing performance and higher precision over the set of images retrieved from large databases. We use an approach based on an inverted index, here adapted to partitioned images: the concept of a term from textual retrieval, the main element of indexing, is used as the indexing characteristic of image partitions. Experiments on two collections of digital images show gains in precision.
O principal objetivo de um sistema de recuperação de imagens é obter imagens de uma coleção que atendam a uma necessidade do usuário. Para atingir esse objetivo, em geral, os sistemas de recuperação de imagens calculam a similaridade entre a necessidade do usuário, representada por uma consulta, e representações das imagens da coleção. Tal objetivo é difícil de ser alcançado devido à subjetividade do conceito de similaridade entre imagens, visto que uma mesma imagem pode ser interpretada de formas diferentes por pessoas distintas. Na tentativa de resolver este problema os sistemas de recuperação de imagens por conteúdo exploram as características de baixo nível cor, forma e textura no cálculo da similaridade entre as imagens. Um problema desta abordagem é que na maioria dos sistemas o cálculo da similaridade é realizado comparando-se a imagem de consulta com todas as imagens da coleção, tornando o processamento difícil e lento. Considerando a indexação de características de baixo nível de partições de imagens digitais mapeadas para um índice invertido, este trabalho busca melhorias no desempenho do processamento de consultas e ganho na precisão considerando o conjunto de imagens recuperadas em grandes bases de dados. Utilizamos uma abordagem baseada em índice invertido, que é aqui adaptada para imagens particionadas. Nesta abordagem o conceito de termo da recuperação textual, principal elemento da indexação, é utilizado no trabalho como característica de partições de imagens para a indexação. Experimentos mostram ganho na qualidade da precisão usando duas coleções de imagens digitais.
Mestre em Ciência da Computação
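The "visual term" idea carried over from text retrieval can be sketched as follows (a toy illustration with a hypothetical feature, the quantized mean intensity of each block, rather than the thesis's actual descriptors): an inverted index maps each term to the set of images containing it, so only candidate images sharing a term with the query are ever scored.

```python
from collections import defaultdict

def block_terms(img, block=2, n_bins=4, max_val=256):
    """Quantise each block's mean gray level into a discrete 'visual
    term', the analogue of a word in text retrieval."""
    h, w = len(img), len(img[0])
    terms = set()
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [img[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            terms.add(int(sum(vals) / len(vals) * n_bins / max_val))
    return terms

def build_index(images):
    """Inverted index: visual term -> set of image ids."""
    index = defaultdict(set)
    for img_id, img in images.items():
        for t in block_terms(img):
            index[t].add(img_id)
    return index

def query(index, img):
    # Only images sharing at least one term with the query are
    # scored -- the point of the inverted index is to avoid comparing
    # the query against every image in the collection.
    scores = defaultdict(int)
    for t in block_terms(img):
        for img_id in index.get(t, ()):
            scores[img_id] += 1
    return sorted(scores, key=scores.get, reverse=True)
```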
APA, Harvard, Vancouver, ISO, and other styles
34

ZHANG, SHENG-CAI, and 張生財. "Band partition methods for subband image coding." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/79357947091909839641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Chou, Meng-Ying, and 周孟穎. "Vector Partition Method on Spectral Matting and Image Segmentation." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/90659731843359584291.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Statistics
100
This study investigates the segmentation of an image foreground from the background. In spectral matting, a segmentation can be obtained by optimizing an objective function containing the matting Laplacian. However, the optimal alpha matte of the objective function is not always the entire foreground object. To obtain a better segmentation of the foreground object, the optimal alpha matte and the sub-optimal alpha mattes are considered together, and unsupervised clustering can be applied to combine several foreground components into a complete foreground object. In this study, we examine the matting Laplacian from the perspective of graph theory and use a community detection method based on network modularity to perform the clustering; each detected community corresponds to a foreground component. Optimizing the modularity turns out to be a vector partition problem. We propose an algorithm that finds initial groups from the sign information of the vectors to perform vector partition for unsupervised clustering. Empirical studies show that vector partition can improve the segmentation of test images: it not only distinguishes the foreground from the background, but also forms fewer component regions of the foreground. This approach enhances the segmentation of a foreground object matted with background image components.
APA, Harvard, Vancouver, ISO, and other styles
36

Ho, Pei Hao, and 何霈豪. "A path partition algorithm and its application to image processing." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/30446609659850619073.

Full text
Abstract:
Master's thesis
Shu-Te University
Department of Computer Science and Information Engineering
95
Given a path with a positive weight on each vertex, the minimum L2 p-partition problem is to cut the path into p subpaths such that the sum of squares of the subpath weights is minimized. In this thesis, we propose an O(pn log n)-time algorithm for the problem. In addition, we study how to use this algorithm to compress a gray-level image by reducing the number of gray levels. We investigate the running times and effects of four algorithms, named Naive, Greedy, MUP and L2-norm. The experimental results show that the L2-norm algorithm is more efficient than MUP and more effective than the other two algorithms.
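The objective can be illustrated with a straightforward dynamic program over prefix sums (an O(p·n²) sketch; the thesis achieves O(pn log n) with a more refined algorithm, so this only defines the cost being minimized):

```python
def min_l2_partition(weights, p):
    """Minimum-L2 p-partition of a weighted path: dp[k][j] is the best
    cost of cutting the first j vertices into k contiguous subpaths,
    where a subpath's cost is the square of its total weight."""
    n = len(weights)
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    def seg(i, j):  # squared weight of subpath weights[i:j]
        return (prefix[j] - prefix[i]) ** 2

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for k in range(1, p + 1):
        for j in range(k, n + 1):
            dp[k][j] = min(dp[k - 1][i] + seg(i, j)
                           for i in range(k - 1, j))
    return dp[p][n]
```

For gray-level reduction, the vertices would be the histogram bins in order, so each subpath becomes one merged gray level.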
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, Fang-Jung, and 張芳榮. "Application of Frame Partition Scheme to Shot Detection and Image Retrieval." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/34940158648529389320.

Full text
Abstract:
Master's thesis
Chaoyang University of Technology
Master's Program, Department of Computer Science and Information Engineering
98
This thesis presents approaches to shot detection and image retrieval based on a frame partitioning scheme. For shot detection, the proposed approach is called SD/PFDS (Shot Detection Based on a Partitioned Frame Differencing Scheme). In SD/PFDS, frames are grouped and partitioned into image blocks; the first frame in a group is taken as the reference frame and the others as compared frames. The differences between each image block of the partitioned reference and compared frames are then calculated, and shot changes are detected from these differences. The SD/PFDS approach is verified on several examples; the overall average detection accuracy is as high as 0.94 in F1 measure, showing the approach to be feasible. We also apply the frame partitioning scheme to image retrieval. Using color and texture features, the thesis presents three approaches: IR/PCF (Image Retrieval with Partitioned Color Features), IR/PTF (Image Retrieval with Partitioned Texture Features), and IR/PCTF (Image Retrieval with Partitioned Color and Texture Features). IR/PCF involves several stages based on partitioned color features. First, images are partitioned. Second, the energies of the R, G and B components of the partitioned query image are calculated, from which weights for the similarity measure are found. Third, the averages of the R, G and B components in each partition are taken as color features. Finally, the similarity is computed with the weights obtained in the second stage. In IR/PTF, texture features are acquired via the GLCM (Gray Level Co-occurrence Matrix): color images are first converted to gray-level images, texture features are found by the GLCM in the partitioned images, and the similarity between the query image and the database images is calculated. The IR/PCTF approach uses both partitioned color and texture features: the similarity measures obtained by IR/PCF and IR/PTF are normalized and linearly combined, with weights proportional to the performance achieved with only one partitioned feature (color or texture), and the resulting similarity is used for retrieval. The three proposed approaches are evaluated on image databases. IR/PCTF shows the highest retrieval performance, followed by IR/PCF and then IR/PTF; with an appropriate combination of partitioned color and texture features, IR/PCTF outperforms both IR/PCF and IR/PTF.
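The block-differencing step behind partitioned shot detection can be sketched as follows (a minimal illustration with made-up block size and thresholds, not the SD/PFDS decision rule itself): compute a per-block mean absolute difference between the reference and compared frames, and flag a shot change when enough blocks differ strongly.

```python
def block_diffs(ref, cur, block=4):
    """Mean absolute difference per block between a reference frame
    and a compared frame (both 2-D gray-level lists of equal size)."""
    h, w = len(ref), len(ref[0])
    diffs = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            acc = cnt = 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    acc += abs(ref[y][x] - cur[y][x])
                    cnt += 1
            diffs.append(acc / cnt)
    return diffs

def is_shot_change(ref, cur, block=4, diff_thr=30, frac=0.5):
    # hypothetical rule: a shot change if at least `frac` of the
    # blocks exceed the per-block difference threshold
    d = block_diffs(ref, cur, block)
    return sum(v > diff_thr for v in d) / len(d) >= frac
```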
APA, Harvard, Vancouver, ISO, and other styles
38

蔡秋彥. "A Study on Efficient Partition-Based and Region-Based Image Retrieval Methods." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/52309255275404736553.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Department of Computer Science
87
More and more digital images are available to users on the world-wide web, so it is very important for users to retrieve desired images via efficient and effective mechanisms. In this thesis, we propose two efficient approaches that facilitate image retrieval by representing image content in a simple way. Each image is partitioned into m×n equal-size sub-images (blocks), and a color with enough pixels in a block is extracted to represent that block's content. In the first approach, the content of an image is represented directly by the extracted block colors, and spatial information between images is considered in retrieval. In the second approach, the block colors are used to extract objects (regions) via a proposed block-level region-extraction process; spatial information between regions is considered unimportant in the similarity measurement. Our experiments show that the block-based information used in both approaches speeds up image retrieval. Moreover, the two approaches are effective for different image-similarity requirements, so users can choose the appropriate approach for their queries.
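The block-representation step described above can be sketched as follows (a toy version with a hypothetical "enough pixels" fraction; the thesis's exact extraction rule may differ): partition the image into m×n blocks and keep, per block, a color that covers a sufficient share of its pixels.

```python
from collections import Counter

def dominant_colors(img, m=2, n=2, min_frac=0.5):
    """Partition the image into m x n equal-size blocks and return,
    per block, the colour covering at least min_frac of its pixels,
    or None if no colour is frequent enough."""
    h, w = len(img), len(img[0])
    bh, bw = h // m, w // n
    out = []
    for i in range(m):
        row = []
        for j in range(n):
            pixels = [img[y][x]
                      for y in range(i * bh, (i + 1) * bh)
                      for x in range(j * bw, (j + 1) * bw)]
            color, count = Counter(pixels).most_common(1)[0]
            row.append(color if count >= min_frac * len(pixels) else None)
        out.append(row)
    return out
```

The resulting m×n grid of colors is the compact content representation that both retrieval approaches build on.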
APA, Harvard, Vancouver, ISO, and other styles
39

Yu, Chien-Yang, and 余乾揚. "Dynamic Workload Partition on Parallel Medical Image Reconstruction Algorithm in Computational Grid Environments." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/c83fpw.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Information Management
95
A well-known issue in parallel computing is that all computing nodes start an iteration round at the same time and must wait until every node has finished that round. In a cluster environment the hardware of all nodes is ideally identical, whereas a Grid is formed from heterogeneous resources. Therefore, when users execute parallel programs on a Grid, the distribution of the workload becomes more important. For instance, if a task is divided into several smaller tasks executed on different computational resources, inequality in computing power and/or network bandwidth means that some resources finish their tasks sooner than others, and the fastest node has to wait until the other nodes have also finished. The purpose of this study is to find the best workload distribution for each node, distributing a suitable amount of work during each execution round. OSEM and COSEM-ML, both medical image reconstruction algorithms, were chosen for the experiments.
APA, Harvard, Vancouver, ISO, and other styles
40

Su, Yen-Wei, and 蘇衍維. "Image Retrieval based on Object's Centroid-Extended Spanning Representation using Triangular Partition Approach." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/21545466942053260142.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Computer Science
94
Content-based image retrieval (CBIR), as opposed to text-based image retrieval, is the current trend in the design of image database systems. Spatial relationships between objects are important features for designing a content-based image retrieval system. In this thesis, we propose a new spatial representation based on the centroid-extended spanning concept using a triangular partition approach. The representation facilitates spatial reasoning and similarity retrieval, and provides twelve types of similarity measures to meet users' different requirements. Experimental results demonstrate that image database systems based on the proposed representation achieve high performance in terms of recall and precision.
APA, Harvard, Vancouver, ISO, and other styles
41

SHIH, CHENG-FU, and 施承甫. "A Reversible Data Hiding Method Based on Partition Variable Block Size and Exclusive-OR Operation with Two Host Images for Binary Image." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/yhm8ny.

Full text
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program in Information Management
106
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black or white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for extra security. We divide the M×N host images C and R into as many non-overlapping (2m+1)×(2n+1) sub-blocks as possible, where m, n = 1, 2, 3, …, up to min(M,N). We then partition each sub-block into four overlapping (m+1)×(n+1) sub-blocks, skipping sub-blocks that are all black or all white. Considering all four (m+1)×(n+1) sub-blocks, we check the XOR between the non-overlapping parts and the center pixel of the (2m+1)×(2n+1) sub-block, embedding m×n bits in each (m+1)×(n+1) sub-block, i.e., 4×m×n bits in total. When a candidate pixel of C is changed to embed a secret bit, the corresponding pixel of R is marked 1. The entire host image can embed 4×m×n×⌊M/(2m+1)⌋×⌊N/(2n+1)⌋ bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back into their original order. "Adaptive" means that the choice of sub-block partition may affect the capacity and imperceptibility we want to select. The experimental results show that the method provides large embedding capacity, keeps the hiding imperceptible, reveals the hidden data losslessly, and uses the R host image to recover the original host image completely.
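The core XOR mechanism and the capacity formula can be sketched for the simplest m = n = 1 case (a toy reduction assuming one bit carried by one boundary pixel and the centre of a 3×3 block; the thesis actually embeds 4×m×n bits per block across four overlapping sub-blocks and additionally maintains the R image for reversibility):

```python
def embed_bit(block, bit):
    """Toy m = n = 1 reduction of the XOR scheme: set the centre of a
    3x3 binary block so that XOR(centre, one non-overlapping boundary
    pixel) equals the secret bit."""
    block[1][1] = block[0][0] ^ bit
    return block

def extract_bit(block):
    # extraction mirrors embedding: the XOR reveals the hidden bit
    return block[0][0] ^ block[1][1]

def capacity(M, N, m, n):
    # payload reported in the abstract: 4*m*n bits per block, with
    # floor(M/(2m+1)) * floor(N/(2n+1)) blocks in an M x N host image
    return 4 * m * n * (M // (2 * m + 1)) * (N // (2 * n + 1))
```

For a 512×512 host with m = n = 1 this gives 4·170·170 = 115,600 bits before the all-black/all-white blocks are skipped.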
APA, Harvard, Vancouver, ISO, and other styles
42

WANG, YU-TZU, and 王愉慈. "A Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/28999648088974917843.

Full text
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program in Information Management
104
In this paper, we propose a high-capacity data hiding method for binary images. Since a binary image has only two colors, black or white, it is hard to hide data imperceptibly; capacity and imperceptibility are always a trade-off. Before embedding, we shuffle the secret data with a pseudo-random number generator for extra security. We divide the M×N host image into as many non-overlapping (2k+1)×(2k+1) sub-blocks as possible, where k = 1, 2, 3, …, up to min(M,N), and partition each sub-block into four overlapping (k+1)×(k+1) sub-blocks, skipping sub-blocks that are all black or all white. Considering all four (k+1)×(k+1) sub-blocks, we check the XOR between the non-overlapping parts and the center pixel of the (2k+1)×(2k+1) sub-block, embedding k² bits in each (k+1)×(k+1) sub-block, i.e., 4×k² bits in total. The entire host image can embed 4×k²×⌊M/(2k+1)⌋×⌊N/(2k+1)⌋ bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back into their original order. "Adaptive" means that the choice of sub-block partition may affect the capacity and imperceptibility we want to select. The experimental results show that the method provides large embedding capacity, keeps the hiding imperceptible, and reveals the host image losslessly.
APA, Harvard, Vancouver, ISO, and other styles
43

CHEN, JI-MING, and 陳紀銘. "An Optimal Data Hiding Method Based on Partition Variable Block Size with Exclusive-OR Operation on Binary Image." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/2ta8pc.

Full text
Abstract:
Master's thesis
Hsuan Chuang University
Master's Program in Information Management
105
In this thesis, we propose a high-capacity data hiding method for binary images. We divide the host image into as many non-overlapping blocks as possible and partition each block into four overlapping sub-blocks, skipping blocks that are all black or all white. Considering all four sub-blocks, we check the XOR between the non-overlapping parts and the center pixel of the block. The entire host image can embed 4×m×n×⌊M/(2m+1)⌋×⌊N/(2n+1)⌋ bits. Extraction simply tests the XOR between the center pixel and the non-overlapping part of each sub-block; all embedded bits are collected and shuffled back into their original order. "Optimal" means that the choice of sub-block partition may affect the capacity and imperceptibility, so we can reach the best combination. The experimental results show that the method provides large embedding capacity, keeps the hiding imperceptible, and reveals the host image losslessly.
APA, Harvard, Vancouver, ISO, and other styles
44

Hedjam, Rachid. "Segmentation non-supervisée d'images couleur par sur-segmentation Markovienne en régions et procédure de regroupement de régions par graphes pondérés." Thèse, 2008. http://hdl.handle.net/1866/7221.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wang, Yu-Ching, and 王有慶. "Implementation of Image Transmission Quality via Applying a Turbo Code Rate for Different Partitions of SPIHT." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/18274003650484952842.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Graduate Institute of Electrical Engineering
89
With the development of wireless technology, people use wireless products more frequently in their daily life, and some multimedia applications on wireless platforms are already on the market. Multimedia transmission is becoming more and more popular, but another problem arises concurrently: multimedia contains huge amounts of data (video and voice), which the present wireless bandwidth is not sufficient to support. Efficient compression that transfers data with better quality yet less bandwidth is therefore an important topic. The purpose of this thesis is to analyze the image quality of SPIHT bitstreams, obtained through the wavelet transform, in wireless environments. Through a study of joint source coding and channel transmission, we found that image quality can be seriously affected by error propagation. Based on the simulation results, we propose some modified coding schemes to maintain image quality. We further consider unequal error protection by grouping the bitstream and giving a different coding rate to each group. The results show that bandwidth can be saved for a given quality and PSNR performance can be improved. Finally, considering the trade-off between bandwidth requirement, iteration count and PSNR, we obtain better performance via the UEP scheme.
APA, Harvard, Vancouver, ISO, and other styles
46

Cheng, Wen-Pin, and 鄭文斌. "A system of 3D image retrieval and judgment of partitions of the tumor located in the liver." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/35888464744927174314.

Full text
Abstract:
Master's thesis
Chung Shan Medical University
Master's Program, Department of Applied Information Sciences
99
Because the amount of pictorial information stored in medical databases is growing, efficient image indexing and retrieval are becoming very important, and the need to develop a medical image retrieval system for disease diagnosis is urgent. In this thesis, a system for assisting in diagnosing liver tumors and planning the corresponding radiation treatment is proposed. The system provides 3D image retrieval and judges in which partitions of the liver a tumor is located. The emphasis is on an efficient and practical database for recognizing and retrieving similar patterns with known diagnoses in 3D medical images. To retrieve similar images efficiently, we developed an image representation that captures the shape, size and location of the tumor and is invariant to image scaling, translation and rotation; these properties are necessary for an image retrieval system that works to a high degree of accuracy. To satisfy the different requirements of physicians, several similarity measures and a retrieval method based on our representation are proposed, together with a method, based on the same representation, to judge the partitions of the liver in which the tumor is located. Experimental results show that the system performs well in assisting physicians in diagnosing liver tumors and planning radiation treatment.
APA, Harvard, Vancouver, ISO, and other styles
47

Tsai, Tsung-Lin, and 蔡宗霖. "Integration of data, function, pipeline partition schemes on distributed system--real-time implementation of correspondence matching in stereo images." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/94128290188731236059.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Computer Science and Information Engineering
92
We use a distributed system and three partition schemes to make a program achieve real-time performance. The three schemes are data partition, function partition, and pipeline partition. In this thesis we analyze the advantages and disadvantages of each. For example, data partition keeps the communication cost between processors low, but it is only suitable when the algorithm uses local data. Function partition can assign different tasks to different hardware, making more efficient use of the hardware, but it can only be used when there is no input/output dependence between tasks. Pipeline partition is easy to apply to a program and can raise throughput substantially, but it is only suitable for successive inputs and increases the response time of the system. Finally, we propose a strategy that integrates the three partition schemes to exploit the highest degree of parallelism and obtain the best throughput. In the field of computer vision, computing the depth of objects from two images is a long-studied technique. Before the depth of objects can be computed, the disparity of corresponding points must be computed; because of the massive computation involved in matching corresponding points, this technique could not previously be applied in real time, which limited its applications. To compute the disparity of corresponding points in real time, we employ an efficient algorithm on a distributed system. The algorithm uses two calibrated images and a special data structure to compute the disparity of corresponding points.
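The correspondence-matching step the abstract describes can be sketched in its simplest form. This is a minimal, assumed example (not the thesis's algorithm or data structure): for a rectified stereo pair, each pixel on a left scanline is matched against horizontally shifted candidates on the same right scanline by minimizing the sum of absolute differences (SAD) over a small window; the winning shift is the disparity, from which depth follows.

```python
# Naive block-matching disparity on one scanline of a rectified stereo pair.
def disparity_scanline(left, right, window=1, max_disp=4):
    """left, right: equal-length intensity lists from one rectified row.
    Returns the SAD-minimizing disparity for each left pixel."""
    n = len(left)
    disp = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):  # candidate shift to the left
            cost = 0
            for w in range(-window, window + 1):
                xl, xr = x + w, x - d + w
                if 0 <= xl < n and 0 <= xr < n:
                    cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

The triple loop (pixels × disparities × window) is exactly the massive, independent-per-pixel workload that makes this step a natural candidate for the data-partition scheme discussed above.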
APA, Harvard, Vancouver, ISO, and other styles

To the bibliography