Dissertations / Theses on the topic 'Image Merging'
Consult the top 37 dissertations / theses for your research on the topic 'Image Merging.'
Ipson, Heather. "T-spline Merging." Diss., Brigham Young University, 2005. http://contentdm.lib.byu.edu/ETD/image/etd804.pdf.
Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis." Online version of thesis, Rochester Institute of Technology, 1990. http://hdl.handle.net/1850/11366.
Tang, Weiran. "Frequency merging for demosaicking." Thesis, Hong Kong University of Science and Technology, 2009. http://library.ust.hk/cgi/db/thesis.pl?ECED%202009%20TANGW.
Tan, Zhigang, and 譚志剛. "A region merging methodology for color and texture image segmentation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43224143.
Tan, Zhigang. "A region merging methodology for color and texture image segmentation." E-thesis, The University of Hong Kong (HKUTO), 2009. http://sunzi.lib.hku.hk/hkuto/record/B43224143.
Cui, Ying. "Image merging in a dynamic visual communication system with multiple cameras." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape17/PQDD_0030/NQ27126.pdf.
USUI, Shin'ichi, Masayuki TANIMOTO, Toshiaki FUJII, Tadahiko KIMOTO, and Hiroshi OHYAMA. "Fractal Image Coding Based on Classified Range Regions." Institute of Electronics, Information and Communication Engineers, 1998. http://hdl.handle.net/2237/14996.
Zhao, Guang. "Automatic boundary extraction in medical images based on constrained edge merging." Hong Kong: University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22030207.
Zhao, Guang, and 趙光. "Automatic boundary extraction in medical images based on constrained edge merging." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223904.
Ocampo Blandon, Cristian Felipe. "Patch-Based image fusion for computational photography." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.
Full textThe most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur.In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non planar scenes, optical deformations, etc.).In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations.Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions and illumination variations or blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions
Medeiros, Rafael Sachett. "Detecção de pele humana utilizando modelos estocásticos multi-escala de textura." Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/70193.
Gesture detection is an important task in human-computer interaction applications. If the hand of the user is precisely detected, both analysis and recognition of hand gestures become simpler and more reliable. This work describes a new method for human skin detection, used as a pre-processing stage for hand gesture segmentation in recognition systems. First, we obtain models of the color and texture of human skin (the material to be identified) from a training set consisting of skin images. At this stage, we build a Gaussian mixture model (GMM) for identifying skin color tones and a dictionary of textons for skin texture. Then, we introduce a stochastic region merging strategy to determine all segments of different materials present in the image (each associated with a texture). Once the texture regions are obtained, each segment is classified based on the skin color (GMM) and skin texture (dictionary of textons) models. To verify the performance of the developed algorithm, we perform experiments on the SDC database, specially designed for this kind of evaluation (human skin detection). Compared with other state-of-the-art skin segmentation techniques, the results obtained in our experiments show that the proposed approach is robust to color and illumination variations arising from different skin tones (ethnicity of the user) as well as changes of pose, while keeping its ability to discriminate human skin from other highly textured background materials.
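A minimal sketch of the color-modeling stage only, assuming scikit-learn is available; the component count and log-likelihood threshold are arbitrary placeholders, and the texton dictionary and stochastic region merging of the thesis are not reproduced.

```python
# Illustrative sketch: fit a Gaussian mixture to training skin pixels and
# threshold per-pixel log-likelihoods to obtain a coarse skin mask.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_skin_gmm(skin_pixels_rgb, n_components=5):
    """skin_pixels_rgb: (N, 3) array of RGB samples taken from skin regions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(skin_pixels_rgb)
    return gmm

def skin_mask(image_rgb, gmm, log_likelihood_threshold=-12.0):
    """image_rgb: (H, W, 3) float array; the threshold is a tunable assumption."""
    h, w, _ = image_rgb.shape
    scores = gmm.score_samples(image_rgb.reshape(-1, 3))
    return (scores > log_likelihood_threshold).reshape(h, w)
```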
Jacob, Alexander. "Radar and Optical Data Fusion for Object Based Urban Land Cover Mapping." Thesis, KTH, Geoinformatik och Geodesi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-45978.
Dragon 2 Project
Lersch, Rodrigo Pereira. "Introdução de dados auxiliares na classificação de imagens digitais de sensoriamento remoto aplicando conceitos da teoria da evidência." Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/15276.
In this thesis we investigate a new approach for applying concepts from the Theory of Evidence to the classification of remote sensing digital images. In the proposed approach, auxiliary variables are structured as layers in a GIS-like format to produce layers of belief and plausibility. Thresholds are applied to the layers of belief and plausibility to detect errors of commission and omission, respectively, on the thematic image. The thresholds are estimated as functions of the user's and producer's accuracy. Preliminary tests were performed over an area covered by natural forest with Araucaria, showing some promising results.
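For readers unfamiliar with the Theory of Evidence, a small sketch of how belief and plausibility follow from a basic probability assignment (mass function) is given below; the land-cover labels and mass values are hypothetical.

```python
# Sketch of Dempster-Shafer belief and plausibility for a mass function whose
# focal elements are sets of class labels (labels and masses are hypothetical).
def belief(mass, hypothesis):
    # Bel(A) = sum of masses of all focal elements contained in A.
    return sum(m for focal, m in mass.items() if focal <= hypothesis)

def plausibility(mass, hypothesis):
    # Pl(A) = sum of masses of all focal elements intersecting A.
    return sum(m for focal, m in mass.items() if focal & hypothesis)

mass = {
    frozenset({"forest"}): 0.55,
    frozenset({"pasture"}): 0.25,
    frozenset({"forest", "pasture"}): 0.20,   # mass assigned to ignorance
}
print(belief(mass, {"forest"}), plausibility(mass, {"forest"}))  # 0.55  0.75
```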
Kuperberg, Marcia Clare. "The integrated image : an investigation into the merging of video and computer graphics techniques incorporating the production of a video as a practical element in the investigation." Thesis, Middlesex University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.568498.
Mondésir, Jacques Philémon. "Apports de la texture multibande dans la classification orientée-objets d'images multisources (optique et radar)." Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/9706.
Abstract: Texture has good discriminating power, which complements the radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), allows texture to be extracted from several bands at a time, thus taking advantage of extra information not previously considered in traditional textural analysis: the interdependence between bands. However, this new tool has not yet been tested on multi-source images, a use that could be an interesting added value considering, for example, all the textural richness the radar can provide in addition to optics when the data are combined. This study completes the validation initiated by Safia (2014) by applying the CTU to an optics-radar dataset. The textural analysis of this multisource data allowed a "color texture" image to be produced. These newly created textural bands were then combined with the initial optical bands before their use in a land-cover classification process in eCognition. The same classification process (but without CTU) was applied respectively to the optical data, then the radar data, and finally the optics-radar combination. In addition, the CTU generated on the optics alone (monosource) was compared to the CTU arising from the optics-radar couple (multisource). The analysis of the separating power of these different bands (radiometric and textural) with histograms, together with the confusion matrix, allows the performance of these different scenarios and classification parameters to be compared. These comparisons show the CTU, including the multisource CTU, to be the most discriminating criterion; its presence adds variability to the image, thus allowing a clearer segmentation (homogeneous and non-redundant) and a classification that is both more detailed and more efficient. Indeed, the accuracy improves from 0.5 with the optical image to 0.74 for the CTU image, while confusion decreases from 0.30 (optics) to 0.02 (CTU).
Cheng, Sarah X. "A method of merging VMware disk images through file system unification." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/62752.
This thesis describes a method of merging the contents of two VMware disk images by merging the file systems therein. Thus, two initially disparate file systems are joined to appear and behave as a single file system. The problem of file system namespace unification is not a new one, with predecessors dating as far back as 1988 to present-day descendants such as UnionFS and union mounts. All deal with the same major issues - merging directory contents of source branches and handling any naming conflicts (namespace de-duplication), and allowing top-level edits of file system unions in presence of read-only source branches (copy-on-write). The previous solutions deal with exclusively with file systems themselves, and most perform the bulk of the unification logic at runtime. This project is unique in that both the sources and union are disk images that can be directly run as virtual machines. This lets us exploit various features of the VMware disk image format, eventually prompting us to move the unification logic to an entirely offline process. This decision, however, carry a variety of unique implications and side effects, which we shall also discuss in the paper.
Butkienė, Roma. "9 -10 klasių merginų fizinio savivaizdžio formavimo(-si) veiksniai." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060608_184703-78273.
Žukauskaitė, Andželika. "Paauglių merginų fizinį savivaizdį formuojantys veiksniai." Bachelor's thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120604_125408-87705.
Recently, scientists have become increasingly interested in body image. This has been driven by the high standards of the perfect body prevailing in society, the pursuit of which is becoming more and more important to people. Adolescents' attitudes toward themselves and their relationship with their appearance are a source of growing concern. It is observed that adolescents tend to evaluate themselves critically and are increasingly dissatisfied with their appearance. Various scientists (Druxman, 2003; Grogan, 2008; Pruskus, 2008) acknowledge that social factors (family, friends, peers, media) definitely influence adolescents' physical self-image. However, there is not always agreement on which factors play the greatest role. Another important issue is which factors most strongly determine negative adolescent body image formation.
Dievaitytė, Ugnė. "Užsiėmimų, paremtų šokio - judesio terapija, efektyvumas keičiant 18-25 m. merginų kūno vaizdą." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110621_094008-00823.
The aim of the research is to evaluate the efficacy of sessions based on dance/movement therapy in altering the body image of 18-25 year old girls. 105 girls from the Vytautas Magnus University Faculty of Social Sciences participated in the research. 40 girls in the intervention group attended all five sessions based on dance/movement therapy; 37 girls in the control group attended a lecture and the pre- and post-measurements. Once a week, intervention group participants attended one-and-a-half-hour sessions based on dance/movement therapy designed to improve body image. The control group participated in a two-hour lecture about eating disorders, distorted body image and the applicability of dance/movement therapy. In order to evaluate the efficacy of the sessions, research participants filled in the Body Shape, Eating Attitude and Evaluation of Sessions Utility questionnaires, and the Positive and Negative Emotions, Weight Preoccupation, Appearance Evaluation, Appearance Orientation and Body Areas Satisfaction scales. Participants also filled in the Positive and Negative Emotions scales before and after each session. The results of the research indicate that the methods used in the sessions based on dance/movement therapy have a positive effect on improving body image, i.e. positive emotions towards one's body increased, negative ones decreased, preoccupation with weight and appearance reduced, and one's appearance evaluation... [to full text]
Gui, Shengxi. "A Model-Driven Approach for LoD-2 Modeling Using DSM from Multi-stereo Satellite Images." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1593620776362528.
Herrera, Castro D. (Daniel). "From images to point clouds: practical considerations for three-dimensional computer vision." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526208534.
Abstract (translated from Finnish): Building a three-dimensional model of an environment has been an important research topic for several decades, with applications ranging from medicine to the entertainment industry. The dissertation examines the process of producing a 3D model of an environment and presents new ways of improving the stages required to produce a high-quality reconstruction. The work presents new methods for range sensor calibration, simultaneous localization and mapping, depth map correction, range point cloud simplification and free-viewpoint imaging. The first part of the dissertation focuses on the calibration of range sensors. It presents different sensor models and their calibration and, in addition to a general treatment, concentrates on the widely used Kinect sensor, proposing a new calibration method that uses only planar targets. Using only color cameras for scene reconstruction poses different challenges compared to using range sensors, and online applications require real-time response. The dissertation examines these challenges and presents a novel approach to simultaneous localization and mapping using only color cameras; the presented method adaptively triangulates points while exploiting non-triangulated features for pose estimation. The work presents three novel approaches to depth map correction: the first uses random points to fit planes in missing areas, the second uses a second-order prior and intensity edges, and the third learns filters which it applies in a Markov random field with joint intensity and depth priors. The dissertation also explores ways of reducing the amount of 3D information to a manageable level, examining how depth maps can be merged without storing redundant information and presenting a way to discard redundant data while preserving a naturally varying resolution. Finally, the research presents a procedure for estimating the transparency (alpha) maps of foreground layers in multi-view scenes for free-viewpoint rendering. The results obtained confirm the need for an accurate 3D scene reconstruction pipeline that includes all of the aforementioned stages.
Carvalho, Eduardo Alves de. "SegmentaÃÃo de imagens de radar de abertura sintÃtica por crescimento e fusÃo estatÃstica de regiÃes." Universidade Federal do CearÃ, 2005. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2038.
The regular coverage of the planet surface by spaceborne synthetic aperture radar (SAR) and also airborne systems has provided alternative means to gather remote sensing information on various regions of the planet, even inaccessible areas. This work deals with the digital processing of synthetic aperture radar imagery, where segmentation is the main subject. It consists of isolating or partitioning relevant objects in a scene, aiming at improving image interpretation and understanding in subsequent tasks. SAR images are contaminated by coherent noise, known as speckle, which masks small details and transition zones among the objects. Such noise is inherent in the radar image generation process, making tasks like automatic segmentation of the objects, as well as their contour identification, difficult. To segment radar images, one possible way is to apply speckle filtering before segmentation. Another, applied in this work, is to perform noisy image segmentation using the original SAR pixels as input data, without any preprocessing such as filtering. To provide segmentation, an algorithm based on region growing and statistical region merging has been developed, which requires some parameters to control the process. This approach presents some advantages, since it eliminates preprocessing steps and favors the detection of the image structures, because original pixel information is exploited. A qualitative and quantitative performance evaluation of the segmented images is also carried out, under different situations, by applying the proposed technique to simulated images corrupted with multiplicative noise. This segmentation method is also applied to real SAR images and the produced results are promising.
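A generic region-growing sketch is shown below to illustrate the basic mechanism; the statistical merging criterion and control parameters of the thesis are not reproduced, and the tolerance used here is an arbitrary assumption.

```python
# Generic region-growing sketch (illustrative only).
import numpy as np
from collections import deque

def grow_region(image, seed, tol=0.1):
    """Grow a region from `seed` while pixels stay within `tol` of the region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask
```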
WANG, PIN-WEN, and 王品文. "Superpixel-based Image Segmentation and Region Merging." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/tkqdsb.
National Chung Cheng University, Graduate Institute of Information Management, ROC academic year 105.
Image segmentation occupies an important position in computer vision and image processing. New segmentation techniques are constantly being proposed, yet color image segmentation remains difficult: many existing methods simply extend from a one-dimensional intensity space to a three-dimensional color space without exploiting other relevant information contained in the color data. The choice of color space for image segmentation is therefore itself a topic worth studying in depth. The purpose of image segmentation is to find regions of interest, or meaningful regions, in an image. Superpixels remove redundant information and reduce the complexity of subsequent processing tasks, and have attracted growing attention from researchers. This study presents a segmentation approach that starts from SLIC superpixels and merges the adjacent sub-regions whose combined H, S, V, R, G and B color and texture feature differences are smallest, targeting color images with complex backgrounds and low contrast between object and background. The experimental results suggest that the proposed approach can successfully segment complex objects in images with complex backgrounds. The thesis concludes with a discussion of the results, their applications and future prospects.
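A minimal sketch of the general superpixel-then-merge idea, assuming scikit-image is available; the mean-RGB merging criterion and threshold below are simplified stand-ins for the HSV/RGB and texture features described in the abstract.

```python
# Sketch: SLIC superpixels, then a greedy merge of adjacent superpixels whose
# mean RGB colors are close (union-find over superpixel labels).
import numpy as np
from skimage.segmentation import slic

def superpixel_merge(image, n_segments=300, color_tol=0.08):
    """image: float RGB array in [0, 1]."""
    labels = slic(image, n_segments=n_segments, compactness=10)
    labels = labels - labels.min()                # ensure labels start at 0
    n = labels.max() + 1
    means = np.array([image[labels == k].mean(axis=0) for k in range(n)])

    parent = list(range(n))                       # union-find over superpixels
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Adjacent label pairs from horizontal and vertical pixel neighbors.
    pairs = set(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))
    pairs |= set(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb and np.linalg.norm(means[ra] - means[rb]) < color_tol:
            parent[rb] = ra                       # merge similar neighbors
    return np.vectorize(find)(labels)
```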
Ko, Hsuan-Yi, and 柯宣亦. "Adaptive Growing and Merging Algorithm for Image Segmentation." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/30705048738626014480.
National Taiwan University, Graduate Institute of Communication Engineering, ROC academic year 104.
In computer vision, image segmentation plays an important role due to its widespread applications such as object tracking and image compression. Image segmentation is a process of clustering pixels into homogeneous and salient regions, and a number of image segmentation algorithms and techniques have been developed for different applications. To segment an image accurately into the number of regions the user specifies, we propose an adaptive growing and merging algorithm. Our procedure is as follows: First, a superpixel segmentation is applied to the original image to reduce the computation time and provide helpful regional information. Second, we exploit color histograms and textures to measure the similarity between two adjacent superpixels. Then we conduct superpixel growing based on this similarity, under a constraint on edge intensity. Finally, we generate a dissimilarity matrix for the entire image according to color, texture, contours, saliency values and region size, and subsequently merge regions in order of dissimilarity. The region merging process is adaptive to the number of regions and local image features. After superpixel growing has finished, some superpixels expand to larger regions, which contain more accurate edges and regional information such as mean color and texture, to help with the final process of region merging. Simulations show that our proposed method segments most images well and outperforms state-of-the-art methods.
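As an illustration of one similarity cue mentioned above, a small sketch of histogram intersection between two regions' color histograms follows; the bin count is an arbitrary choice and the thesis' full dissimilarity matrix is not reproduced.

```python
# Sketch of a color-histogram similarity cue between two regions.
import numpy as np

def color_histogram(pixels, bins=8):
    """pixels: (N, 3) RGB values in [0, 1]; returns a normalized joint histogram."""
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 1)] * 3)
    return hist / max(hist.sum(), 1)

def histogram_intersection(hist_a, hist_b):
    # 1.0 means identical color distributions, 0.0 means disjoint ones.
    return np.minimum(hist_a, hist_b).sum()
```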
Liu, Teng-Lieh, and 劉燈烈. "Point Cloud Adjustment Merging and Image Mapping for Ground-Based Lidar." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/41596582674112226727.
National Cheng Kung University, Department of Surveying Engineering (Master's and Doctoral Program), ROC academic year 92.
Ground-based laser scanners can quickly obtain high-density point cloud data of a scanned object surface with high accuracy. Multiple scans are frequently required for a complete scan project of a large or complicated object. Because the data set of each scan is defined in a local coordinate system, data sets from multiple stations must be merged into a unified coordinate system. For many surveying applications, transforming the scanned data coordinates into a previously defined ground coordinate system is also needed. Based on the theory of independent model adjustment developed in the field of photogrammetry, a point cloud data merging adjustment is proposed. Each point cloud data set is treated as a single model, and adjacent models are assumed to overlap. Conjugate points in the overlap areas must be identified in advance as tie points. Ground control points are also needed for the transformation of the merged data into the ground coordinate system. The model coordinates of tie points and control points are treated as observations in the adjustment calculation. The unknown parameters of the adjustment include: (1) the transformation parameters of each model coordinate system; (2) the ground coordinates of all tie points. After adjustment, the data sets can be merged using the transformation parameters, and the standard deviation of the observation residuals indicates the quality of data merging. This thesis also proposes a method to integrate ground-based LiDAR data sets and digital images: an image scene can be projected onto the LiDAR point cloud data as long as the image orientation is solved. The experimental results demonstrate that the proposed method can be successfully applied to merging point cloud data and reconstructing 3D models from ground-based LiDAR.
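A minimal sketch of estimating a rigid transform from tie points shared by two scans (Kabsch/Procrustes via SVD) is given below; the full independent-model adjustment with control points described in the abstract is more general than this.

```python
# Sketch: rigid-body transform between two scans from corresponding tie points.
import numpy as np

def rigid_transform(src, dst):
    """src, dst: (N, 3) corresponding tie-point coordinates; returns R, t
    such that dst ~= src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```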
HUANG, CHIA-HORNG, and 黃嘉宏. "Fast Region Merging Methods and Watershed Analysis applied to Image Segmentation." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/79773059322821473536.
National Taiwan Ocean University, Department of Electrical Engineering, ROC academic year 89.
Over-segmentation is a serious problem in conventional watershed analysis owing to the topographic relief inherent in the input image. To address this problem, existing watershed methods merge two regions at a time, in sequence; however, sequential merging requires a heavy computational load. This thesis presents two novel approaches that incorporate watershed analysis and fuzzy theory, namely synchronous Fuzzy-based Feature Tuning (FFT) and Clustering Merging (CM), to perform image segmentation. Neither FFT nor CM requires the final number of regions to be pre-specified. Each region Ri obtained from watershed analysis is first represented by the mean intensity (denoted mi) of the gray pixels in Ri. FFT simultaneously adjusts the mi values of all regions by referencing their adjacent neighboring regions. Due to the use of a synchronous strategy, FFT can achieve fast merging and has great potential for a fully parallel hardware implementation. The iterative FFT algorithm terminates when the number of merged regions in two successive iterations is identical. In the CM method, the region merging process is formulated as clustering with a special constraint: each small region is regarded as a virtual data point, and small regions are clustered together if they share great similarity. When two small regions are adjacent and fall into the same cluster, they are considered to belong to the same object and can be merged. Finally, empirical results are provided to show that the proposed approaches outperform other methods in terms of computational efficiency and segmentation accuracy.
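A small sketch of a synchronous neighbor-referencing update is shown below; the adjacency structure and the simple averaging rule are stand-ins for the fuzzy tuning rule of the thesis.

```python
# Sketch: every region's mean intensity mi is updated at once toward the
# average of its neighbors (synchronous update).
import numpy as np

def synchronous_tuning(means, neighbors, alpha=0.5, iterations=10):
    """means: (n,) region mean intensities; neighbors: dict region -> list of regions."""
    m = np.asarray(means, dtype=float)
    for _ in range(iterations):
        neighbor_avg = np.array([m[neighbors[i]].mean() if neighbors[i] else m[i]
                                 for i in range(len(m))])
        m = (1 - alpha) * m + alpha * neighbor_avg   # all regions updated together
    return m
```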
Cui, Ying. "Image merging in a dynamic visual communication system with multiple cameras." Thesis, 1997. http://hdl.handle.net/2429/8473.
Yeh, Hao-Wei, and 葉浩瑋. "Unsupervised Hierarchical Image Segmentation Based on Bayesian Sequential Partitioning and Merging." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/pz2rqy.
National Chiao Tung University, Institute of Electronics, ROC academic year 105.
In this thesis, we present an unsupervised hierarchical clustering algorithm based on a split-and-merge scheme. Using image segmentation as an example application, we propose an unsupervised image segmentation algorithm which outperforms existing algorithms. In the split phase, we propose an efficient partition algorithm, named Just-Noticeable-Difference Bayesian Sequential Partitioning (JND-BSP), to partition image pixels into a few regions within which the color variations are perceived as smoothly changing, without apparent color differences. In the merge phase, we propose a Probability Based Sequential Merging algorithm to sequentially construct a hierarchical structure that represents the relative similarity among these partitioned regions. Instead of generating a segmentation result with a fixed number of segments, the new algorithm produces an entire hierarchical representation of the given image in a single run. This hierarchical representation is informative and can be very useful for subsequent processing, such as object recognition and scene analysis. To demonstrate the effectiveness and efficiency of our method, we compare our new segmentation algorithm with several existing algorithms. Experimental results show that our new algorithm not only offers a more flexible way to segment images but also provides segmented results close to human visual perception. The proposed algorithm can also be widely used in applications analyzing other types of data, and can efficiently analyze high-dimensional big data.
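As a generic illustration of producing an entire merge hierarchy in a single run, a short sketch using SciPy's agglomerative linkage follows; the region features are random placeholders and the linkage criterion stands in for the probability-based merging of the thesis.

```python
# Sketch: build a full merge hierarchy over region descriptors, then cut it.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

region_features = np.random.rand(20, 3)              # e.g. mean color per region
tree = linkage(region_features, method="average")    # full merge hierarchy in one run
labels_at_5 = fcluster(tree, t=5, criterion="maxclust")  # cut the tree into 5 segments
```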
Fann, Sheng-En, and 范聖恩. "Image Language Identification Using Shapelet Feature-Application in Merging Broken Chinese Characters." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/np82s8.
National Central University, Institute of Computer Science and Information Engineering, ROC academic year 97.
In this thesis, a novel language identifier using shapelet features with AdaBoost and SVM has been developed. Different from previous works, the proposed mechanism not only identifies the language type (Chinese or English) of each connected component in a document image, but also achieves better robustness together with high efficiency and performance. First, the input connected-component image is logically divided into several sub-windows. Then, the gradient responses of each sub-image in different directions are extracted and the local average of these responses around each pixel is computed. AdaBoost is then applied to select a subset of the low-level features to construct a mid-level shapelet feature. Finally, the shapelet features from all sub-windows are merged together. Through the above process, the information from different parts of the image is combined and treated as the feature of the final language identifier. Broken or partial Chinese-character connected components are then combined with their neighboring connected components. The experimental results demonstrate that the proposed method not only improves the correctness rate of the OCR process, but also offers benefits for advanced document analysis.
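A sketch of the general pipeline shape (feature ranking with AdaBoost followed by an SVM), assuming scikit-learn; the data arrays, the number of kept features and the kernel are hypothetical, and the shapelet construction itself is not reproduced.

```python
# Sketch: AdaBoost ranks low-level features, a subset is kept, an SVM classifies.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC

def train_identifier(features, labels, keep=100):
    """features: (N, D) low-level descriptors; labels: 0 = Chinese, 1 = English."""
    booster = AdaBoostClassifier(n_estimators=200).fit(features, labels)
    selected = np.argsort(booster.feature_importances_)[::-1][:keep]
    svm = SVC(kernel="rbf").fit(features[:, selected], labels)
    return selected, svm

def predict(features, selected, svm):
    return svm.predict(features[:, selected])
```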
Tao, Trevor. "An extended Mumford-Shah model and improved region merging algorithm for image segmentation." Thesis, 2005. http://hdl.handle.net/2440/37749.
Full textThesis (Ph.D.)--School of Mathematical Sciences, 2005.
Tao, Trevor. "An extended Mumford-Shah model and an improved region merging algorithm for image segmentation." 2005. http://hdl.handle.net/2440/37749.
Full textThesis (Ph.D.)--School of Mathematical Sciences, 2005.
Yang, Shen-I., and 楊紳誼. "The Study on Image Compression Using the Genetic Algorithm and Segmentation Using the K-merging Algorithm." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/60340870293520289545.
Leader University, Graduate Institute of Digital Applications, ROC academic year 98.
The K-means algorithm has been widely applied to image compression; in recent years it has mainly been used to design the codebook. In this study, a genetic algorithm is proposed to accomplish this purpose, with a parameter (w) controlling the clustering result in the algorithm. In our experiments, image compression based on the genetic algorithm outperforms that based on the K-means algorithm. The mean shift method has been applied to image segmentation; since it segments the image based on individual pixels, its computational complexity is relatively high. In this study, we propose a K-merging method to segment the image based on image blocks. In our experiments, image segmentation based on the K-merging method outperforms that based on the mean shift method.
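A minimal sketch of the K-means codebook baseline mentioned above (the genetic-algorithm variant of the thesis is not reproduced), assuming scikit-learn; the block size and codebook size are arbitrary choices.

```python
# Sketch: vector-quantize 4x4 grayscale blocks with a K-means codebook.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(image, block=4, codebook_size=64):
    """image: 2-D grayscale array whose sides are multiples of `block`."""
    h, w = image.shape
    blocks = (image.reshape(h // block, block, w // block, block)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block))
    kmeans = KMeans(n_clusters=codebook_size, n_init=10).fit(blocks)
    indices = kmeans.predict(blocks)               # compressed representation
    reconstructed = (kmeans.cluster_centers_[indices]
                     .reshape(h // block, w // block, block, block)
                     .swapaxes(1, 2).reshape(h, w))
    return kmeans.cluster_centers_, indices, reconstructed
```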
Tsai, Jung-Huo, and 蔡鎔壑. "Determining South China Sea bathymetry by the regression model: merging of altimeter-only and optical image-derived results." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/wfm7p7.
National Chiao Tung University, Department of Civil Engineering, ROC academic year 102.
In this study, satellite altimeter data from missions Geosat/GM and ERS-1/GM in the 1990s and 2000s, and from the latest missions Jason-1/GM and Cryosat-2, are used to compute gravity anomaly models and then to construct bathymetry models of the South China Sea (SCS). Sub-waveform threshold retracking is used to improve altimeter range accuracy. The Inverse Vening Meinesz (IVM) method and Least Squares Collocation (LSC) are employed to compute gravity anomalies from retracked altimeter data. The regression model, based on a priori knowledge of gravity and depth in the SCS, is used to estimate depths from altimeter-derived gravity, which are compared with those from the gravity-geological method (GGM). Comparisons of altimeter-derived gravity anomalies with shipborne gravity anomalies show that the gravity precision is increased by 30% when the altimeter data are improved by sub-waveform retracking, and by 4% when Jason-1/GM and Cryosat-2 altimeter data are used. The regression model outperforms the GGM, based on assessments using shipborne depths. We fuse depths from altimetry and optical images over atolls. On average, the fusion with optical images improves the definition of coastlines over atolls compared to the altimetry-only depths.
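A minimal sketch of the regression idea in its simplest linear form is shown below; the actual regression model of the thesis may include additional terms, and the arrays here are placeholders for gravity anomalies and shipborne depths.

```python
# Sketch: fit depth = a * gravity + b at control points, predict elsewhere.
import numpy as np

def fit_depth_regression(gravity_known, depth_known):
    """Least-squares fit of a linear gravity-to-depth relation at control points."""
    A = np.column_stack([gravity_known, np.ones_like(gravity_known)])
    (a, b), *_ = np.linalg.lstsq(A, depth_known, rcond=None)
    return a, b

def predict_depth(gravity, a, b):
    return a * np.asarray(gravity) + b
```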
Hedjam, Rachid. "Segmentation non-supervisée d'images couleur par sur-segmentation Markovienne en régions et procédure de regroupement de régions par graphes pondérés." Thèse, 2008. http://hdl.handle.net/1866/7221.
Lin, Yi-fan, and 林依梵. "Investigating the y-Band Images of Merging Galaxies." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/wfsxqw.
National Central University, Graduate Institute of Astronomy, ROC academic year 102.
We study the y′-band images of merging galaxies from the observations of the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS). The merging systems were selected from the merging catalog of Hwang & Chang (2009), which were identified by checking images of the Red-sequence Cluster Survey 2 from the observations of the Canada France Hawaii Telescope (CFHT). Using a homomorphic-aperture method developed by Huang & Hwang (2014), we determine the photometric results of these merging systems. To obtain results with accurate photometry, we calibrated the r′-, z′-, and y′-band data to match the results of the SDSS DR9. We used the calibrated y′-band data to investigate the stellar mass of the merging galaxies. Our results show that the stellar masses of the merging galaxies are about 10^10 to 10^12 M⊙. We also created a new catalog to record the y′-band results of the merging galaxies.
Lin, Wen-Cheng, and 林文誠. "The Resolution Enhancement of Unchanged Objects by Merging Multiple SPOT Images." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/00860159300585102488.
Cho, Shih-Hsuan, and 卓士軒. "Semantic Segmentation of Indoor-Scene RGB-D Images Based on Iterative Contraction and Merging." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/c9a9vg.
National Chiao Tung University, Institute of Electronics, ROC academic year 105.
For semantic segmentation of indoor-scene images, we propose a method which combines convolutional neural networks (CNNs) and the Iterative Contraction & Merging (ICM) algorithm. We also utilize depth images to efficiently analyze the 3-D space in indoor scenes. The raw depth image from the depth camera is processed by two bilateral filters to recover a smoother and more complete depth image. The ICM algorithm, on the other hand, is an unsupervised segmentation method that preserves boundary information well. We utilize the dense prediction from the CNN, the depth image and the normal vector map as high-level information to guide the ICM process toward generating image segments more accurately. In other words, we progressively generate regions from high resolution to low resolution and build a hierarchical segmentation tree. We also propose a decision process that determines the final semantic segmentation from the hierarchical segmentation tree, using the dense prediction map as a reference. The proposed method generates more accurate object boundaries than state-of-the-art methods. Our experiments also show that the use of high-level information does improve the performance of semantic segmentation compared to the use of RGB information only.
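A sketch of the depth pre-processing step in spirit, assuming OpenCV; the filter parameters are arbitrary and the hole-filling behaviour of the thesis' two-filter scheme is not reproduced.

```python
# Sketch: two passes of bilateral filtering over a float depth map.
import numpy as np
import cv2

def smooth_depth(depth_raw):
    """depth_raw: (H, W) float32 depth in meters, 0 where the sensor saw nothing."""
    depth = depth_raw.astype(np.float32)
    depth = cv2.bilateralFilter(depth, d=9, sigmaColor=0.1, sigmaSpace=9)
    depth = cv2.bilateralFilter(depth, d=9, sigmaColor=0.05, sigmaSpace=5)
    return depth
```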