
Theses on the topic "Image Merging"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles


Consult the 37 best theses for your research on the topic "Image Merging".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Ipson, Heather. "T-spline Merging". Diss., 2005. http://contentdm.lib.byu.edu/ETD/image/etd804.pdf.

2

Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis /". Online version of thesis, 1990. http://hdl.handle.net/1850/11366.

3

Tang, Weiran. "Frequency merging for demosaicking /". View abstract or full-text, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20TANGW.

4

Tan, Zhigang, and 譚志剛. "A region merging methodology for color and texture image segmentation". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43224143.

5

Tan, Zhigang. "A region merging methodology for color and texture image segmentation". Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43224143.

6

Cui, Ying. "Image merging in a dynamic visual communication system with multiple cameras". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0030/NQ27126.pdf.

7

USUI, Shin'ichi, Masayuki TANIMOTO, Toshiaki FUJII, Tadahiko KIMOTO and Hiroshi OHYAMA. "Fractal Image Coding Based on Classified Range Regions". Institute of Electronics, Information and Communication Engineers, 1998. http://hdl.handle.net/2237/14996.

8

Zhao, Guang. "Automatic boundary extraction in medical images based on constrained edge merging". Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22030207.

9

Zhao, Guang, and 趙光. "Automatic boundary extraction in medical images based on constrained edge merging". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223904.

10

Ocampo, Blandon Cristian Felipe. "Patch-Based image fusion for computational photography". Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.

Abstract
The most common computational techniques to deal with the limited dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
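As a rough illustration of the exposure-fusion idea summarized above, the sketch below implements a much simpler per-pixel baseline (single-scale, Mertens-style well-exposedness weighting), not the patch-based, motion-robust method of this thesis; the weight function and its sigma are assumptions.

```python
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """Naive per-pixel fusion of a bracketed exposure stack.

    stack: float array of shape (n_images, H, W, 3) with values in [0, 1].
    Each pixel is weighted by its 'well-exposedness' (a Gaussian centred at
    mid-grey), then the weighted images are averaged.  This single-scale
    variant ignores the motion/misalignment handling the thesis addresses.
    """
    stack = np.asarray(stack, dtype=np.float64)
    # Well-exposedness weight per pixel: high for values near 0.5.
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)).prod(axis=-1)  # (n, H, W)
    w = w + 1e-12                              # avoid division by zero
    w = w / w.sum(axis=0, keepdims=True)       # normalise over the stack
    fused = (w[..., None] * stack).sum(axis=0) # weighted average
    return np.clip(fused, 0.0, 1.0)

if __name__ == "__main__":
    dark = np.full((4, 4, 3), 0.1)             # toy under-exposed frame
    bright = np.full((4, 4, 3), 0.9)           # toy over-exposed frame
    print(fuse_exposures(np.stack([dark, bright])).mean())
```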
11

Medeiros, Rafael Sachett. "Detecção de pele humana utilizando modelos estocásticos multi-escala de textura". Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/70193.

Abstract
Gesture detection is an important task in human-computer interaction applications. If the hand of the user is precisely detected, both analysis and recognition of the hand gesture become simpler and more reliable. This work describes a new method for human skin detection, used as a pre-processing stage for hand gesture segmentation in recognition systems. First, we obtain models of the color and texture of human skin (the material to be identified) from a training set consisting of skin images. At this stage, we build a Gaussian mixture model (GMM) for identifying skin color tones and a dictionary of textons for skin texture. Then, we introduce a stochastic region merging strategy to determine all segments of different materials present in the image (each associated with a texture). Once the texture regions are obtained, each segment is classified based on the skin color (GMM) and skin texture (dictionary of textons) models. To verify the performance of the developed algorithm, we perform experiments on the SDC database, specially designed for this kind of evaluation (human skin detection). Compared with other state-of-the-art skin segmentation techniques, the results obtained in our experiments show that the proposed approach is robust to color and illumination variations arising from different skin tones (ethnicity of the user) as well as changes of pose, while keeping its ability to discriminate human skin from other highly textured background materials.
12

Jacob, Alexander. "Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping". Thesis, KTH, Geoinformatik och Geodesi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-45978.

Abstract
The creation and classification of segments for object based urban land cover mapping is the key goal of this master thesis. An algorithm based on region growing and merging was developed, implemented and tested. The synergy effects of a fused data set of SAR and optical imagery were evaluated based on the classification results. The testing was mainly performed with data of the city of Beijing, China. The dataset consists of SAR and optical data, and the classified land cover/use maps were evaluated using standard methods for accuracy assessment like confusion matrices, kappa values and overall accuracy. The classification for the testing consists of 9 classes which are low density buildup, high density buildup, road, park, water, golf course, forest, agricultural crop and airport. The development was performed in JAVA and a suitable graphical interface for user friendly interaction was created parallel to the development of the algorithm. This was really useful during the period of extensive testing of the parameters, which could easily be entered through the dialogs of the interface. The algorithm itself treats the image as a connected graph of pixels, which can always merge with their direct neighbors, meaning pixels sharing an edge. There are three criteria that can be used in the current state of the algorithm: a mean based spectral homogeneity measure, a variance based textural homogeneity measure and a fragmentation test as a shape measure. The algorithm has 3 key parameters which are the minimum and maximum segment size as well as a homogeneity threshold measure which is based on a weighted combination of relative change due to merging two segments. The growing and merging is divided into two phases: the first one is based on mutual best partner merging and the second one on the homogeneity threshold. In both phases it is possible to use all three criteria for merging in arbitrary weighting constellations. A third step is the check for the fulfillment of minimum size, which can be performed prior to or after the other two steps. The segments can then be labeled interactively in a supervised manner, using once again the graphical user interface, to create a training sample set. This training set can be used to derive a support vector machine which is based on a radial basis function kernel. The optimal settings for the required parameters of this SVM training process can be found from a cross-validation grid search process which is implemented within the program as well. The SVM algorithm is based on the LibSVM Java implementation. Once training is completed the SVM can be used to predict the whole dataset to get a classified land-cover map. It can be exported in the form of a vector dataset. The results show that incorporating texture features already in the segmentation is superior to using spectral information alone, especially when working with unfiltered SAR data. The incorporation of the suggested shape feature, however, does not seem to be advantageous, especially considering the much longer processing time this criterion requires. From the classification results it is also evident that the fusion of SAR and optical data is beneficial for urban land cover mapping. In particular, the distinction between urban areas and agricultural crops improved greatly, and the confusion between high and low density built-up areas was also reduced due to the fusion.
Dragon 2 Project
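The classification stage described in the abstract above (an RBF-kernel SVM trained on labelled segments, with a cross-validated grid search over its parameters, on top of LibSVM) can be illustrated with scikit-learn, whose SVC also wraps LibSVM. The feature vectors, class labels and parameter grid below are placeholders, not values from the thesis.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Toy feature vectors for labelled segments (e.g. per-segment spectral means
# and texture measures); in the thesis these come from the SAR/optical
# segmentation, here they are random placeholders.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 3, size=200)          # three hypothetical land-cover classes

# Cross-validated grid search over the RBF-SVM parameters C and gamma,
# mirroring the LibSVM-style parameter search described in the abstract.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```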
13

Lersch, Rodrigo Pereira. "Introdução de dados auxiliares na classificação de imagens digitais de sensoriamento remoto aplicando conceitos da teoria da evidência". Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15276.

Abstract
In this thesis we investigate a new approach to implement concepts developed by the Theory of Evidence in Remote Sensing digital image classification. In the proposed approach, auxiliary variables are structured as layers in a GIS-like format to produce layers of belief and plausibility. Thresholds are applied to the layers of belief and plausibility to detect errors of commission and omission, respectively, on the thematic image. The thresholds are estimated as functions of the user's and producer's accuracy. Preliminary tests were performed over an area covered by natural forest with Araucaria, showing some promising results.
14

Kuperberg, Marcia Clare. "The integrated image : an investigation into the merging of video and computer graphics techniques incorporating the production of a video as a practical element in the investigation". Thesis, Middlesex University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.568498.

15

Mondésir, Jacques Philémon. "Apports de la texture multibande dans la classification orientée-objets d'images multisources (optique et radar)". Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9706.

Abstract
Texture has good discriminating power that complements the radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), makes it possible to extract texture from several bands at a time, thereby taking advantage of extra information not previously considered in traditional textural analysis: the interdependence between bands. However, this new tool has not yet been tested on multi-source images, a use that could add interesting value considering, for example, all the textural richness radar can provide in addition to optics when the data are combined. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optics-radar dataset. The textural analysis of this multisource data produced a "color texture" image. These newly created textural bands are then combined with the initial optical bands before being used in a land-cover classification process in eCognition. The same classification process (but without the CTU) was applied, respectively, to the optical data, then the radar data, and finally the optics-radar combination. In addition, the CTU generated from the optical data alone (monosource) was compared to the CTU derived from the optics-radar pair (multisource). Analyzing the separating power of these different bands (radiometric and textural) with histograms, together with the confusion matrix tool, allows the performance of these different scenarios and classification parameters to be compared. These comparisons show the CTU, and especially the multisource CTU, to be the most discriminating criterion; its presence adds variability to the image, allowing a clearer segmentation (homogeneous and non-redundant) and a classification that is both more detailed and more efficient. Indeed, the accuracy rises from 0.5 with the optical image to 0.74 for the CTU image, while confusion decreases from 0.30 (optics) to 0.02 (CTU).
16

Cheng, Sarah X. "A method of merging VMware disk images through file system unification". Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/62752.

Abstract
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 67).
This thesis describes a method of merging the contents of two VMware disk images by merging the file systems therein. Thus, two initially disparate file systems are joined to appear and behave as a single file system. The problem of file system namespace unification is not a new one, with predecessors dating as far back as 1988 and present-day descendants such as UnionFS and union mounts. All deal with the same major issues: merging directory contents of source branches and handling any naming conflicts (namespace de-duplication), and allowing top-level edits of file system unions in the presence of read-only source branches (copy-on-write). The previous solutions deal exclusively with file systems themselves, and most perform the bulk of the unification logic at runtime. This project is unique in that both the sources and the union are disk images that can be directly run as virtual machines. This lets us exploit various features of the VMware disk image format, eventually prompting us to move the unification logic to an entirely offline process. This decision, however, carries a variety of unique implications and side effects, which we shall also discuss in the paper.
by Sarah X. Cheng.
M.Eng.
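As a rough sketch of the namespace-unification step described in the abstract above (and only that step), the snippet below merges two file-system namespaces given as path-to-content mappings and reports naming conflicts. The VMware disk-image handling and copy-on-write machinery of the thesis are not reproduced, and the function name and example paths are invented.

```python
def unify_namespaces(branch_a, branch_b):
    """Merge two file-system namespaces given as {path: file_id} dicts.

    Returns the unified mapping plus the list of paths that exist in both
    branches with different content (naming conflicts).  A real union file
    system would resolve conflicts by branch priority and add copy-on-write
    for edits; this sketch only performs the namespace de-duplication step.
    """
    union = dict(branch_a)
    conflicts = []
    for path, file_id in branch_b.items():
        if path in union and union[path] != file_id:
            conflicts.append(path)        # same name, different content
        else:
            union.setdefault(path, file_id)
    return union, conflicts

if __name__ == "__main__":
    base = {"/etc/hosts": "h1", "/usr/bin/python": "p1"}
    overlay = {"/etc/hosts": "h2", "/home/user/notes.txt": "n1"}
    union, conflicts = unify_namespaces(base, overlay)
    print(sorted(union), conflicts)
```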
17

Butkienė, Roma. "9 -10 klasių merginų fizinio savivaizdžio formavimo(-si) veiksniai". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060608_184703-78273.

Abstract
Summary: PHYSICAL SELF-IMAGE FORMATION FACTORS OF 9-10 FORMERS. Intense social, economic, political and transformational processes ongoing in Western Europe induce changes in the general culture, influencing changes in body culture as a subject. Physical activity becomes a healthy way of life, a new lifestyle, when self-image is shaped through the body. In the work I describe the psychological, biological and social factors influencing girls' body self-image shaping. The group of social factors, not yet sufficiently examined in Lithuania, consists of a number of smaller components. The conception of body beauty can be predetermined by historical development or sociocultural influence. Mass media broadly informs about body shaping, it is a frequent topic in discussions between friends, and it is actively promoted and influenced by parents. All this is observed by a young person, mismatching or thinking that she is mismatching the given appearance standards and reacting sensitively in the critical adolescence period. It is not so easy to form positive physical improvement motivation, because body culture values are formed slowly and results are observed only after some time. More and more investigations have been performed lately, searching for new body development technologies and seeking to improve the physical activity of teenagers. Aim of the investigation: to find out factors of physical self-image shaping among the girls of senior forms. Tasks of investigation: 1. To analyse... [to full text]
18

Žukauskaitė, Andželika. "Paauglių merginų fizinį savivaizdį formuojantys veiksniai". Bachelor's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120604_125408-87705.

Abstract
Recently, scientists have become increasingly interested in body image. This has been driven by the high standards of the perfect body prevailing in society, the pursuit of which is becoming more and more important to people. Adolescents' attitudes toward themselves and their appearance are a growing concern. It is observed that adolescents tend to evaluate themselves critically and are increasingly dissatisfied with their appearance. Various researchers (Druxman, 2003; Grogan, 2008; Pruskus, 2008) acknowledge that social factors (family, friends, peers, media) clearly influence adolescents' physical self-image. However, there is not always agreement on which factors play the greatest role. Another important issue is which factors contribute most to negative adolescent body image formation.
19

Dievaitytė, Ugnė. "Užsiėmimų, paremtų šokio - judesio terapija, efektyvumas keičiant 18-25 m. merginų kūno vaizdą". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110621_094008-00823.

Abstract
The aim of the research is to evaluate the efficacy of sessions based on dance/movement therapy in altering the body image of 18-25-year-old girls. 105 girls from the Vytautas Magnus University Faculty of Social Sciences participated in the research. The 40 girls of the intervention group attended all five sessions based on dance/movement therapy. The 37 girls of the control group attended a lecture and the pre- and post-measurements. Once a week, intervention group participants attended one-and-a-half-hour sessions based on dance/movement therapy designed to improve body image. The control group participated in a two-hour lecture about eating disorders, distorted body image and the applicability of dance/movement therapy. In order to evaluate the efficacy of the sessions, research participants filled in the Body Shape, Eating Attitude and Evaluation of Session Utility questionnaires, and the Positive and Negative Emotions, Weight Preoccupation, Appearance Evaluation, Appearance Orientation and Body Areas Satisfaction scales. Research participants also had to fill in the Positive and Negative Emotions scales before and after each session. The results of the research indicate that the methods used in the sessions based on dance/movement therapy have a positive effect on improving body image, i.e. positive emotions towards one's body increased, negative ones decreased, preoccupation with weight and appearance was reduced, and appearance evaluation... [to full text]
20

Gui, Shengxi. "A Model-Driven Approach for LoD-2 Modeling Using DSM from Multi-stereo Satellite Images". The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593620776362528.

21

Herrera, Castro D. (Daniel). "From images to point clouds: practical considerations for three-dimensional computer vision". Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208534.

Abstract
Three-dimensional scene reconstruction has been an important area of research for many decades. It has a myriad of applications ranging from entertainment to medicine. This thesis explores the 3D reconstruction pipeline and proposes novel methods to improve many of the steps necessary to achieve a high quality reconstruction. It proposes novel methods in the areas of depth sensor calibration, simultaneous localization and mapping, depth map inpainting, point cloud simplification, and free-viewpoint rendering. Geometric camera calibration is necessary in every 3D reconstruction pipeline. This thesis focuses on the calibration of depth sensors. It presents a review of sensor models and how they can be calibrated. It then examines the case of the well-known Kinect sensor and proposes a novel calibration method using only planar targets. Reconstructing a scene using only color cameras entails different challenges than when using depth sensors. Moreover, online applications require real-time response and must update the model as new frames are received. The thesis looks at these challenges and presents a novel simultaneous localization and mapping system using only color cameras. It adaptively triangulates points based on the detected baseline while still utilizing non-triangulated features for pose estimation. The thesis then addresses extrapolating missing information in depth maps. It presents three novel methods for depth map inpainting. The first utilizes random sampling to fit planes in the missing regions. The second method utilizes a 2nd-order prior aligned with intensity edges. The third method learns natural filters to apply a Markov random field on a joint intensity and depth prior. This thesis also looks at the issue of reducing the quantity of 3D information to a manageable size. It looks at how to merge depth maps from multiple views without storing redundant information. It presents a method to discard this redundant information while still maintaining the naturally variable resolution. Finally, transparency estimation is examined in the context of free-viewpoint rendering. A procedure to estimate transparency maps for the foreground layers of a multi-view scene is presented. The results obtained reinforce the need for a high accuracy 3D reconstruction pipeline including all the previously presented steps.
22

Carvalho, Eduardo Alves de. "Segmentação de imagens de radar de abertura sintética por crescimento e fusão estatística de regiões". Universidade Federal do Ceará, 2005. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2038.

Abstract
Conselho Nacional de Desenvolvimento Científico e Tecnológico
The regular coverage of the planet surface by spaceborne synthetic aperture radar (SAR) and also airborne systems has provided alternative means to gather remote sensing information about various regions of the planet, even inaccessible areas. This work deals with the digital processing of synthetic aperture radar imagery, where segmentation is the main subject. It consists of isolating or partitioning relevant objects in a scene, aiming at improving image interpretation and understanding in subsequent tasks. SAR images are contaminated by coherent noise, known as speckle, which masks small details and transition zones among the objects. Such noise is inherent in the radar image generation process, making tasks like automatic segmentation of the objects, as well as their contour identification, difficult. To segment radar images, one possible way is to apply speckle filtering before segmentation. Another one, applied in this work, is to perform noisy image segmentation using the original SAR pixels as input data, without any preprocessing such as filtering. To provide segmentation, an algorithm based on region growing and statistical region merging has been developed, which requires some parameters to control the process. This approach presents some advantages, since it eliminates preprocessing steps and favors the detection of the image structures, as the original pixel information is exploited. A qualitative and quantitative performance evaluation of the segmented images is also executed, under different situations, by applying the proposed technique to simulated images corrupted with multiplicative noise. This segmentation method is also applied to real SAR images and the produced results are promising.
23

WANG, PIN-WEN, and 王品文. "Superpixel-based Image Segmentation and Region Merging". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/tkqdsb.

Abstract
Master's thesis
國立中正大學
資訊管理系研究所
105
Image segmentation occupies a very important position in computer vision and image processing. New segmentation techniques are constantly being proposed, yet image segmentation still faces many difficulties. Although some methods can be applied to color image segmentation, most of them only extend the image from a one-dimensional intensity space to a three-dimensional color space and do not exploit other relevant information contained in the color data. The choice of color space for image segmentation is therefore a topic worthy of in-depth study. The purpose of image segmentation is to find regions of interest, or meaningful areas, in an image. Superpixels can remove redundant information and reduce the complexity of subsequent processing tasks, and have attracted growing attention from researchers. This study presents a segmentation approach that starts from SLIC superpixels and merges adjacent sub-regions whose H, S, V, R, G and B color features, combined with texture, differ the least, targeting color images with complex backgrounds and low contrast between object and background. According to the experimental results, the proposed approach can successfully segment complex objects in images with complex backgrounds. Finally, the results of this study are discussed and future prospects are outlined.
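A hedged sketch of the superpixel-then-merge idea is given below. It uses scikit-image's SLIC implementation and then greedily joins adjacent superpixels whose mean RGB colours are close; the HSV/RGB-plus-texture criterion of the thesis is replaced by plain RGB means, and the threshold value is an arbitrary assumption.

```python
import numpy as np
from skimage import data
from skimage.segmentation import slic

def merge_similar_superpixels(image, labels, threshold=0.08):
    """Greedily merge adjacent superpixels whose mean RGB colours are close.

    'labels' is an integer label map (e.g. from SLIC).  Adjacent superpixels
    whose mean-colour distance is below 'threshold' are joined with a
    union-find structure.
    """
    img = image.astype(np.float64) / 255.0
    n = labels.max() + 1
    counts = np.bincount(labels.ravel(), minlength=n)
    # Mean colour of each superpixel.
    means = np.zeros((n, 3))
    for c in range(3):
        means[:, c] = np.bincount(labels.ravel(), weights=img[..., c].ravel(),
                                  minlength=n) / counts
    # Union-find over superpixel labels.
    parent = np.arange(n)
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    # Adjacent label pairs (horizontal and vertical neighbours).
    pairs = np.vstack([
        np.c_[labels[:, :-1].ravel(), labels[:, 1:].ravel()],
        np.c_[labels[:-1, :].ravel(), labels[1:, :].ravel()],
    ])
    pairs = np.unique(np.sort(pairs[pairs[:, 0] != pairs[:, 1]], axis=1), axis=0)
    for a, b in pairs:
        if np.linalg.norm(means[a] - means[b]) < threshold and find(a) != find(b):
            parent[find(b)] = find(a)
    return np.array([find(i) for i in range(n)])[labels]

if __name__ == "__main__":
    image = data.astronaut()
    labels = slic(image, n_segments=300, compactness=10, start_label=0)
    merged = merge_similar_superpixels(image, labels)
    print(len(np.unique(labels)), "->", len(np.unique(merged)), "regions")
```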
24

Ko, Hsuan-Yi, and 柯宣亦. "Adaptive Growing and Merging Algorithm for Image Segmentation". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30705048738626014480.

Abstract
Master's thesis
國立臺灣大學
電信工程學研究所
104
In computer vision, image segmentation plays an important role due to its widespread applications such as object tracking and image compression. Image segmentation is a process of clustering pixels into homogeneous and salient regions, and a number of image segmentation algorithms and techniques have been developed for different applications. To segment an image accurately with the number of regions the user gives, we propose an adaptive growing and merging algorithm. Our procedure is described as follows: First, a superpixel segmentation is applied to the original image to reduce the computation time and provide helpful regional information. Second, we exploit the color histogram and textures to measure the similarity between two adjacent superpixels. Then we conduct the superpixel growing based on the similarity under the constraint of the edge's intensity. Finally, we generate a dissimilarity matrix for the entire image according to color, texture, contours, saliency values and region size, and subsequently merge regions in the order of the dissimilarity. The region merging process is adaptive to the number of regions and local image features. After the superpixel growing has been finished, some superpixels expand to larger regions, which contain more accurate edges and regional information such as mean color and texture, to help with the final process of region merging. Simulations show that our proposed method segments most images well and outperforms state-of-the-art methods.
25

Liu, Teng-Lieh, and 劉燈烈. "Point Cloud Adjustment Merging and Image Mapping for Ground-Based Lidar". Thesis, 2004. http://ndltd.ncl.edu.tw/handle/41596582674112226727.

Abstract
Master's thesis
國立成功大學
測量工程學系碩博士班
92
Ground-based laser scanners can quickly obtain high-density point cloud data of a scanned object surface with high accuracy. Multiple scans are frequently required for a complete scan project of a large or complicated object. Because the data set of each scan is defined in a local coordinate system, the data sets from multiple stations must be merged into a unified coordinate system. For many surveying applications, transforming the scanned data coordinates into a previously defined ground coordinate system is also needed. Based on the theory of independent model adjustment developed in the field of photogrammetry, a point cloud data merging adjustment is proposed. Each point cloud data set is treated as a single model. It is assumed that adjacent models overlap. Identification of conjugate points in the overlap areas should be done in advance to serve as tie points. Ground control points are also needed for the transformation of the merged data into the ground coordinate system. The model coordinates of tie points and control points are treated as observations in the adjustment calculation. The unknown parameters of the adjustment include: 1. the transformation parameters of each model coordinate system; 2. the ground coordinates of all tie points. After adjustment, the data sets can be merged using the transformation parameters, and the standard deviation of the observation residuals indicates the quality of the data merging. This thesis also proposes a method to integrate ground-based LiDAR data sets and digital images. An image scene can be projected onto the LiDAR point cloud data as long as the image orientation is solved. The experimental results demonstrate that the proposed method can be successfully applied to merging point cloud data and reconstructing 3D models from ground-based LiDAR.
26

HUANG, CHIA-HORNG, and 黃嘉宏. "Fast Region Merging Methods and Watershed Analysis applied to Image Segmentation". Thesis, 2001. http://ndltd.ncl.edu.tw/handle/79773059322821473536.

Abstract
Master's thesis
國立海洋大學
電機工程學系
89
Over-segmentation is a serious problem in conventional watershed analysis owing to the topographic relief inherent in the input image. To address this problem, existing watershed methods merge regions two at a time, in sequence. However, sequential merging requires a heavy computation load. This thesis presents two novel approaches that incorporate watershed analysis and fuzzy theory, namely synchronous Fuzzy-based Feature Tuning (FFT) and Clustering Merging (CM), to perform image segmentation. Neither FFT nor CM needs to pre-specify the final number of regions. Each region Ri obtained from watershed analysis is first represented by the mean intensity (noted as mi) of the gray pixels in Ri. FFT simultaneously adjusts the mi values of all regions by referencing their adjacent neighboring regions. Due to the use of this synchronous strategy, FFT can achieve fast merging and offers great potential for a fully parallel hardware implementation. The iterative FFT algorithm terminates when the number of merged regions in two successive iterations is identical. In the CM method, region merging is formulated as clustering with a special constraint. Each small region is regarded as a virtual data point, and small regions are clustered if they share great similarity. When two small regions are adjacent and are clustered into the same cluster, they are considered to belong to the same object and can be merged. Finally, empirical results are provided to show that the proposed approaches outperform other methods in terms of computation efficiency and segmentation accuracy.
27

Cui, Ying. "Image merging in a dynamic visual communication system with multiple cameras". Thesis, 1997. http://hdl.handle.net/2429/8473.

Abstract
In tele-operation, visual communication plays an important role as a source of information for control of a remote machine. The main objective of this thesis is to investigate image merging in a dynamic visual communication system (DVCS) that can provide a better visual presentation of the remote machine's working environment to the operator. A conventional VCS such as television cannot provide a wide field of view (WFOV) and high resolution at the same time without significantly increasing the number of pixels and the bandwidth, which is difficult and expensive. One of the proposed alternatives is to have a high-resolution insert at the area of interest (AOI), determined by the observer's current eye orientation, projected into a cutout in the low-resolution wide field of view (WFOV) background. This system is called a dynamic VCS (DVCS) in this thesis because of its active feedback control over the viewing scene. A DVCS requires a multi-channel imaging system, dual-resolution presentation, an eye tracker controlling the location of the AOI insert with pixel-level accuracy, and an image merging system that can register and fuse AOI and WFOV images, all in real time. This thesis discusses some of these issues, mainly focusing on the design and implementation of image merging in such a system. Several possible approaches are analyzed with regard to the free parameters in the implementation, and experiments are carried out on seven sets of AOI and WFOV images. These images are taken by off-the-shelf cameras with different rotational angles, zooms (scale), and optical centres (translational change) (RST). The optical axes for AOI and WFOV imaging are kept parallel. Based on the analysis and experiments, a new multi-process approach was designed and implemented which can trade off performance characteristics for various imaging conditions. This approach requires only a rough estimation of the RST values to start with and presents a registered and fused dual-resolution image to the viewer. This processing is also calibration-free and can relax the specification requirements of the position sensor and camera control devices. A new study of using corner attributes to recover RST values leads to the derivation of an analytical representation of the significance value for detecting scale-consistent corners. There are many other issues to be studied in the future for a better DVCS.
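The final merging step described above (projecting the high-resolution AOI insert into a cutout of the low-resolution WFOV background under a rotation-scale-translation model) can be sketched as follows. The RST parameters are simply given here rather than recovered by registration, and the function and values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from skimage.transform import SimilarityTransform, warp

def paste_aoi(wfov, aoi, scale, rotation, translation):
    """Project a high-resolution AOI insert into a low-resolution WFOV frame.

    The AOI is mapped into WFOV coordinates with a similarity (rotation-
    scale-translation) transform, then composited over the background.
    """
    tform = SimilarityTransform(scale=scale, rotation=rotation,
                                translation=translation)
    # Warp both the AOI and a mask of its footprint into the WFOV frame.
    warped = warp(aoi, tform.inverse, output_shape=wfov.shape, order=1)
    mask = warp(np.ones_like(aoi), tform.inverse, output_shape=wfov.shape,
                order=0) > 0.5
    merged = wfov.copy()
    merged[mask] = warped[mask]           # cut out the background, drop in the AOI
    return merged

if __name__ == "__main__":
    wfov = np.zeros((200, 200))           # toy low-resolution background
    aoi = np.ones((50, 50))               # toy high-resolution insert
    out = paste_aoi(wfov, aoi, scale=1.5, rotation=np.deg2rad(5),
                    translation=(60, 70))
    print(out.shape, out.sum() > 0)
```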
28

Yeh, Hao-Wei, and 葉浩瑋. "Unsupervised Hierarchical Image Segmentation Based on Bayesian Sequential Partitioning and Merging". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/pz2rqy.

Abstract
Master's thesis
國立交通大學
電子研究所
105
In this thesis, we present an unsupervised hierarchical clustering algorithm based on a split-and-merge scheme. Using image segmentation as an example application, we propose an unsupervised image segmentation algorithm which outperforms existing algorithms. In the split phase, we propose an efficient partition algorithm, named Just-Noticeable-Difference Bayesian Sequential Partitioning (JND-BSP), to partition image pixels into a few regions within which the color variations are perceived to change smoothly without apparent color differences. In the merge phase, we propose a Probability Based Sequential Merging algorithm to sequentially construct a hierarchical structure that represents the relative similarity among these partitioned regions. Instead of generating a segmentation result with a fixed number of segments, the new algorithm produces an entire hierarchical representation of the given image in a single run. This hierarchical representation is informative and can be very useful for subsequent processing, such as object recognition and scene analysis. To demonstrate the effectiveness and efficiency of our method, we compare our new segmentation algorithm with several existing algorithms. Experiment results show that our new algorithm not only offers a more flexible way to segment images but also provides segmentation results close to human visual perception. The proposed algorithm can also be widely applied to other types of data and can efficiently analyze high-dimensional big data.
29

Fann, Sheng-En, and 范聖恩. "Image Language Identification Using Shapelet Feature-Application in Merging Broken Chinese Characters". Thesis, 2009. http://ndltd.ncl.edu.tw/handle/np82s8.

Abstract
Master's thesis
國立中央大學
資訊工程研究所
97
In this paper, a novel language identifier using shapelet features with AdaBoost and SVM has been developed. Different from previous works, our proposed mechanism not only identifies the language type, Chinese or English, of each connected component in the document image, but also achieves better robustness and higher efficiency and performance. First, the input connected-component image is logically divided into several sub-windows. Then, the gradient responses of each sub-window in different directions are extracted, and the local average of these responses around each pixel is computed. Next, AdaBoost is applied to select a subset of these low-level features to construct a mid-level shapelet feature. Finally, the shapelet features of all sub-windows are merged together. Through the above process, the information from different parts of the image is combined and treated as the feature for the final language identifier. Broken or partial Chinese-character connected components are then merged with their neighboring connected components. The experimental results demonstrate that our proposed method not only improves the correctness rate of the OCR process, but also offers great benefits for advanced document analysis.
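A hedged sketch of the feature side of this pipeline is shown below: gradient-orientation histograms pooled over sub-windows stand in for the low-level features, and an AdaBoost classifier over decision stumps plays the role of selecting a discriminative subset, as the shapelet construction does in the thesis. The final SVM stage and the merging of broken connected components are omitted, and the synthetic data and labels are invented.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def gradient_orientation_features(img, grid=4, n_bins=8):
    """Pool gradient-orientation histograms over a grid x grid set of sub-windows.

    A crude stand-in for the low-level features behind shapelets: each
    sub-window contributes a histogram of gradient orientations weighted by
    gradient magnitude.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientations in [0, pi)
    h, w = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * h // grid, (i + 1) * h // grid),
                  slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[sl], bins=n_bins, range=(0, np.pi),
                                   weights=mag[sl])
            feats.append(hist / (hist.sum() + 1e-12))
    return np.concatenate(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    thresholds = rng.uniform(0.3, 0.7, size=200)
    # Synthetic binary patches with varying stroke density, as stand-ins for glyphs.
    X = np.array([gradient_orientation_features(
            (rng.random((32, 32)) > t).astype(float)) for t in thresholds])
    y = (thresholds > 0.5).astype(int)                 # two synthetic "script" classes
    clf = AdaBoostClassifier(n_estimators=50)          # default base learner is a stump
    clf.fit(X, y)
    # Features used by at least one stump are the ones boosting "selected".
    print("selected feature count:", int((clf.feature_importances_ > 0).sum()))
```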
30

Tao, Trevor. "An extended Mumford-Shah model and improved region merging algorithm for image segmentation". Thesis, 2005. http://hdl.handle.net/2440/37749.

Abstract
In this thesis we extend the Mumford-Shah model and propose a new region merging algorithm for image segmentation. The segmentation problem is to determine an optimal partition of an image into constituent regions such that individual regions are homogenous within and adjacent regions have contrasting properties. By optimal, we mean one that minimizes a particular energy functional. In region merging, the image is initially divided into a very fine grid, with each pixel being a separate region. Regions are then recursively merged until it is no longer possible to decrease the energy functional. In 1994, Koepfler, Lopez and Morel developed a region merging algorithm for segmenting an image. They consider the piecewise constant Mumford-Shah model, where the energy functional consists of two terms, accuracy versus complexity, with the trade-off controlled by a scale parameter. They show that one can efficiently generate a hierarchy of segmentations from coarse to fine. This algorithm is complemented by a sound theoretical analysis of the piecewise constant model, due to Morel and Solimini. The primary motivation for extending the Mumford-Shah model stems from the fact that this model is only suitable for "cartoon" images, where each region is uncontaminated by any form of noise. Other shortcomings also need to be addressed. In the algorithm of Koepfler et al., it is difficult to determine the order in which the regions are merged and a "schedule" is required in order to determine the number and fineness of segmentations in the hierarchy. Both of these difficulties mitigate the theoretical analysis of Koepfler's algorithm. There is no definite method for selecting the "optimal" value of the scale parameter itself. Furthermore, the mathematical analysis is not well understood for more complex models. None of these issues are convincingly answered in the literature. This thesis aims to provide some answers to the above shortcomings by introducing new techniques for region merging algorithms and a better understanding of the theoretical analysis of both the mathematics and the algorithm's performance. A review of general segmentation techniques is provided early in this thesis. Also discussed is the development of an "extended" model to account for white noise contamination of images, and an improvement of Koepfler's original algorithm which eliminates the need for a schedule. The work of Morel and Solimini is generalized to the extended model. Also considered is an application to textured images and the issue of selecting the value of the scale parameter.
Thesis (Ph.D.)--School of Mathematical Sciences, 2005.
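For reference, the piecewise constant Mumford-Shah energy that the region-merging algorithm of Koepfler, Lopez and Morel minimizes can be written in its standard form as below (the notation is ours, not quoted from the thesis):

```latex
E(u, K) \;=\; \sum_{i} \int_{\Omega_i} \bigl(g(x) - c_i\bigr)^{2}\,\mathrm{d}x \;+\; \lambda\,\ell(K)
```

Here g is the observed image, the Omega_i are the regions of the partition with boundary set K, c_i is the mean of g over Omega_i, ell(K) is the total boundary length, and lambda is the scale parameter controlling the accuracy-complexity trade-off. In region merging, two adjacent regions are joined whenever doing so decreases E.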
31

Tao, Trevor. "An extended Mumford-Shah model and an improved region merging algorithm for image segmentation". 2005. http://hdl.handle.net/2440/37749.

Abstract
In this thesis we extend the Mumford-Shah model and propose a new region merging algorithm for image segmentation. The segmentation problem is to determine an optimal partition of an image into constituent regions such that individual regions are homogenous within and adjacent regions have contrasting properties. By optimal, we mean one that minimizes a particular energy functional. In region merging, the image is initially divided into a very fine grid, with each pixel being a separate region. Regions are then recursively merged until it is no longer possible to decrease the energy functional. In 1994, Koepfler, Lopez and Morel developed a region merging algorithm for segmenting an image. They consider the piecewise constant Mumford-Shah model, where the energy functional consists of two terms, accuracy versus complexity, with the trade-off controlled by a scale parameter. They show that one can efficiently generate a hierarchy of segmentations from coarse to fine. This algorithm is complemented by a sound theoretical analysis of the piecewise constant model, due to Morel and Solimini. The primary motivation for extending the Mumford-Shah model stems from the fact that this model is only suitable for "cartoon" images, where each region is uncontaminated by any form of noise. Other shortcomings also need to be addressed. In the algorithm of Koepfler et al., it is difficult to determine the order in which the regions are merged and a "schedule" is required in order to determine the number and fineness of segmentations in the hierarchy. Both of these difficulties mitigate the theoretical analysis of Koepfler's algorithm. There is no definite method for selecting the "optimal" value of the scale parameter itself. Furthermore, the mathematical analysis is not well understood for more complex models. None of these issues are convincingly answered in the literature. This thesis aims to provide some answers to the above shortcomings by introducing new techniques for region merging algorithms and a better understanding of the theoretical analysis of both the mathematics and the algorithm's performance. A review of general segmentation techniques is provided early in this thesis. Also discussed is the development of an "extended" model to account for white noise contamination of images, and an improvement of Koepfler's original algorithm which eliminates the need for a schedule. The work of Morel and Solimini is generalized to the extended model. Also considered is an application to textured images and the issue of selecting the value of the scale parameter.
Thesis (Ph.D.)--School of Mathematical Sciences, 2005.
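For readers unfamiliar with the scheme this abstract builds on, the following is a minimal Python sketch of greedy region merging under the piecewise constant Mumford-Shah energy (the sum, over regions, of the squared deviations of the image from the region mean, plus lambda times the total boundary length): starting from one region per pixel, two adjacent regions are merged whenever the merge decreases the energy. This is only an illustration of the general idea, not Koepfler, Lopez and Morel's implementation and not the thesis's extended model; the 4-connectivity assumption, the brute-force sweeps, and all names are choices made here for brevity.

```python
import numpy as np

def mumford_shah_merge(u, lam):
    """Greedy merging for the piecewise constant Mumford-Shah energy.

    u   : 2-D grayscale image (numpy array)
    lam : scale parameter weighting boundary length against data fidelity
    Returns an integer label image of the merged regions.
    """
    h, w = u.shape
    n = h * w
    labels = np.arange(n).reshape(h, w)            # one region per pixel to start
    size = [1] * n                                 # pixel count of each region
    total = [float(v) for v in u.ravel()]          # intensity sum of each region
    adj = [dict() for _ in range(n)]               # adj[r][s] = shared boundary length
    for i in range(h):
        for j in range(w):
            r = labels[i, j]
            for s in ([labels[i + 1, j]] if i + 1 < h else []) + \
                     ([labels[i, j + 1]] if j + 1 < w else []):
                adj[r][s] = adj[r].get(s, 0) + 1
                adj[s][r] = adj[s].get(r, 0) + 1
    owner = list(range(n))                         # owner[r] == r while region r is alive

    merged = True
    while merged:                                  # sweep until no merge lowers the energy
        merged = False
        for a in range(n):
            if owner[a] != a:
                continue                           # region a was absorbed earlier
            for b, blen in list(adj[a].items()):
                if owner[a] != a or owner[b] != b:
                    continue
                ma, mb = total[a] / size[a], total[b] / size[b]
                # merging raises the fidelity term by |Ra||Rb|/(|Ra|+|Rb|) * (ma - mb)^2 ...
                fid = size[a] * size[b] / (size[a] + size[b]) * (ma - mb) ** 2
                # ... and removes lam * blen of boundary cost; merge when the energy drops
                if fid < lam * blen:
                    size[a] += size[b]
                    total[a] += total[b]
                    del adj[a][b]
                    for c, l in adj[b].items():    # reconnect b's neighbours to a
                        if c == a:
                            continue
                        adj[a][c] = adj[a].get(c, 0) + l
                        adj[c][a] = adj[c].get(a, 0) + l
                        del adj[c][b]
                    adj[b] = {}
                    owner[b] = a
                    merged = True

    def root(r):                                   # follow ownership chains
        while owner[r] != r:
            r = owner[r]
        return r
    return np.array([root(r) for r in range(n)]).reshape(h, w)
```

In this sketch, larger values of `lam` make boundary length more expensive and therefore produce coarser partitions, which is the role the scale parameter plays in the hierarchy of segmentations the abstract describes.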
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Yang, Shen-I. y 楊紳誼. "The Study on Image Compression Using the Genetic Algorithm and Segmentation Using the K-merging Algorithm". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/60340870293520289545.

Texto completo
Resumen
Master's thesis
Leader University (立德大學)
Graduate Institute of Digital Applications (數位應用研究所)
Academic year 98 (ROC calendar, 2009/2010)
The K-means algorithm has been widely applied to image compression; in recent years it has mainly been used to design the codebook. In our study, a genetic algorithm is proposed for this purpose, with a parameter (w) controlling the clustering result of the algorithm. In our experiments, image compression based on the genetic algorithm outperforms that based on the K-means algorithm. The mean shift method has been applied to image segmentation; since it segments the image pixel by pixel, its computational complexity is relatively high. In this study, we propose a K-merging method that segments the image based on image blocks. In our experiments, image segmentation based on the K-merging method outperforms that based on the mean shift method.
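As background to the first sentence of this abstract, here is a minimal sketch of conventional K-means codebook design for block-based image compression (plain vector quantisation). It does not reproduce the thesis's genetic algorithm or its parameter w; the block size, codebook size, and function names are illustrative choices only.

```python
import numpy as np

def kmeans_codebook(image, block=4, k=64, iters=20, seed=0):
    """Design a codebook of k block patterns and encode each block by its nearest codeword."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # crop to a whole number of blocks
    blocks = (image[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block)
              .astype(float))
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]   # assumes >= k blocks
    for _ in range(iters):                         # Lloyd iterations
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)                  # nearest codeword per block
        for c in range(k):
            members = blocks[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook, assign                        # the indices are the compressed image

def decode(codebook, assign, h, w, block=4):
    """Rebuild an image from the codebook and the per-block indices."""
    blocks = codebook[assign].reshape(h // block, w // block, block, block)
    return blocks.swapaxes(1, 2).reshape(h, w)
```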
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Tsai, Jung-Huo y 蔡鎔壑. "Determining South China Sea bathymetry by the regression model: merging of altimeter-only and optical image-derived results". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/wfm7p7.

Texto completo
Resumen
Master's thesis
National Chiao Tung University (國立交通大學)
Department of Civil Engineering (土木工程系所)
Academic year 102 (ROC calendar, 2013/2014)
In this study, satellite altimeter data from the Geosat/GM and ERS-1/GM missions of the 1990s and 2000s, and from the latest missions Jason-1/GM and Cryosat-2, are used to compute gravity anomaly models and then to construct bathymetry models of the South China Sea (SCS). Sub-waveform threshold retracking is used to improve altimeter range accuracy. The Inverse Vening Meinesz (IVM) method and Least Squares Collocation (LSC) are employed to compute gravity anomalies from the retracked altimeter data. A regression model, based on a priori knowledge of gravity and depth in the SCS, is used to estimate depths from the altimeter-derived gravity; these depths are compared with those from the gravity-geological method (GGM). Comparisons of altimeter-derived gravity anomalies with shipborne gravity anomalies show that the gravity precision is increased by 30% when the altimeter data are improved by sub-waveform retracking, and by 4% when the Jason-1/GM and Cryosat-2 altimeter data are used. Based on assessments using shipborne depths, the regression model outperforms the GGM. We also fuse depths from altimetry and from optical images over atolls; on average, the fusion with optical images improves the definition of coastlines over atolls compared to the altimetry-only depths.
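The abstract does not spell out the regression model, so the sketch below only illustrates the general idea of depth-from-gravity regression under the simplest possible assumption of a linear relation calibrated on shipborne depths; the thesis's actual model, its a priori terms, and the GGM comparison are not reproduced, and all names here are hypothetical.

```python
import numpy as np

def fit_depth_regression(gravity_at_ship, ship_depth):
    """Fit depth = a + b * gravity_anomaly by least squares on shipborne control points."""
    g = np.asarray(gravity_at_ship, dtype=float)
    d = np.asarray(ship_depth, dtype=float)
    A = np.column_stack([np.ones_like(g), g])
    coeff, *_ = np.linalg.lstsq(A, d, rcond=None)  # coeff = [a, b]
    return coeff

def predict_depth(coeff, gravity_grid):
    """Apply the fitted relation wherever only altimeter-derived gravity is available."""
    a, b = coeff
    return a + b * np.asarray(gravity_grid, dtype=float)

# hypothetical usage, gravity anomalies in mGal and depths in metres:
# coeff = fit_depth_regression(grav_along_ship_tracks, shipborne_depths)
# bathymetry = predict_depth(coeff, altimeter_gravity_grid)
```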
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Hedjam, Rachid. "Segmentation non-supervisée d'images couleur par sur-segmentation Markovienne en régions et procédure de regroupement de régions par graphes pondérés". Thèse, 2008. http://hdl.handle.net/1866/7221.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Lin, Yi-fan y 林依梵. "Investigating the y-Band Images of Merging Galaxies". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/wfsxqw.

Texto completo
Resumen
Master's thesis
National Central University (國立中央大學)
Graduate Institute of Astronomy (天文研究所)
Academic year 102 (ROC calendar, 2013/2014)
We study the y′-band images of merging galaxies from observations of the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS). The merging systems were selected from the merging catalog of Hwang & Chang (2009), which were identified by checking images of the Red-sequence Cluster Survey 2 from observations of the Canada-France-Hawaii Telescope (CFHT). Using a homomorphic-aperture method developed by Huang & Hwang (2014), we determine the photometric properties of these merging systems. To obtain accurate photometry, we calibrated the r′-, z′-, and y′-band data to match the results of SDSS DR9. We then used the calibrated y′-band data to investigate the stellar masses of the merging galaxies. Our results show that the stellar masses of the merging galaxies are about 10^10 to 10^12 M⊙. We also created a new catalog recording the y′-band results for the merging galaxies.
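One common way to tie instrumental magnitudes to a reference catalogue such as SDSS DR9 (assumed here purely for illustration; the abstract does not describe the exact calibration procedure) is a sigma-clipped median zero-point offset computed from matched stars, sketched below with hypothetical names.

```python
import numpy as np

def zero_point(instr_mag, catalog_mag, clip_sigma=3.0, iters=3):
    """Median offset between catalogue and instrumental magnitudes, with simple sigma clipping."""
    diff = np.asarray(catalog_mag, dtype=float) - np.asarray(instr_mag, dtype=float)
    for _ in range(iters):
        if len(diff) < 3:
            break                                  # too few matches left to clip further
        med, std = np.median(diff), np.std(diff)
        diff = diff[np.abs(diff - med) < clip_sigma * std]
    return np.median(diff)

# calibrated_mag = instrumental_mag + zero_point(matched_instrumental, matched_sdss)
```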
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Lin, Wen-Cheng y 林文誠. "The Resolution Enhancement of Unchanged Objects by Merging Multiple SPOT Images". Thesis, 1995. http://ndltd.ncl.edu.tw/handle/00860159300585102488.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Cho, Shih-Hsuan y 卓士軒. "Semantic Segmentation of Indoor-Scene RGB-D Images Based on Iterative Contraction and Merging". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/c9a9vg.

Texto completo
Resumen
Master's thesis
National Chiao Tung University (國立交通大學)
Institute of Electronics (電子研究所)
Academic year 105 (ROC calendar, 2016/2017)
For semantic segmentation of indoor-scene images, we propose a method that combines convolutional neural networks (CNNs) with the Iterative Contraction & Merging (ICM) algorithm. We also utilize depth images to efficiently analyze the 3-D structure of indoor scenes. The raw depth image from the depth camera is processed by two bilateral filters to recover a smoother and more complete depth map. The ICM algorithm, on the other hand, is an unsupervised segmentation method that preserves boundary information well. We use the dense prediction from the CNN, the depth image, and the normal-vector map as high-level information to guide the ICM process so that image segments are generated more accurately. In other words, we progressively generate regions from high resolution to low resolution and build a hierarchical segmentation tree. We also propose a decision process that determines the final semantic segmentation from the hierarchical segmentation tree, using the dense prediction map as a reference. The proposed method generates more accurate object boundaries than state-of-the-art methods, and our experiments show that using this high-level information improves semantic segmentation performance compared to using RGB information only.
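The abstract does not detail the decision process, so the following sketch shows one plausible way (an assumption for illustration, not necessarily the thesis's rule) to combine a CNN dense prediction with unsupervised segments: each segment is assigned the class whose per-pixel probabilities, summed over the segment, are largest.

```python
import numpy as np

def label_segments(prob_map, segment_map):
    """prob_map: (H, W, C) per-pixel class probabilities; segment_map: (H, W) integer segment ids."""
    h, w, c = prob_map.shape
    flat_seg = segment_map.ravel()
    flat_prob = prob_map.reshape(-1, c)
    out = np.zeros(h * w, dtype=int)
    for seg_id in np.unique(flat_seg):
        mask = flat_seg == seg_id
        out[mask] = flat_prob[mask].sum(axis=0).argmax()   # class with the largest summed probability
    return out.reshape(h, w)
```

Because every pixel of a segment receives the same label, object boundaries follow the segment boundaries rather than the coarser CNN output, which is the behaviour the abstract attributes to guiding the segmentation with high-level information.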
Los estilos APA, Harvard, Vancouver, ISO, etc.