
Dissertations / Theses on the topic 'Image thresholding'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Image thresholding.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hertz, Lois. "Robust image thresholding techniques for automated scene analysis." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/15050.

2

Katakam, Nikhil. "Pavement crack detection system through localized thresholding /." Connect to full text in OhioLINK ETD Center, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=toledo1260820344.

Abstract:
Thesis (M.S.)--University of Toledo, 2009.
Typescript. "Submitted as partial fulfillment of the requirements for The Master of Science in Engineering." "A thesis entitled"--at head of title. Bibliography: leaves 65-68.
3

Kieri, Andreas. "Context Dependent Thresholding and Filter Selection for Optical Character Recognition." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-197460.

Abstract:
Thresholding algorithms and filters are of great importance when utilizing OCR to extract information from text documents such as invoices. Invoice documents vary greatly and since the performance of image processing methods when applied to those documents will vary accordingly, selecting appropriate methods is critical if a high recognition rate is to be obtained. This paper aims to determine if a document recognition system that automatically selects optimal processing methods, based on the characteristics of input images, will yield a higher recognition rate than what can be achieved by a manual choice. Such a recognition system, including a learning framework for selecting optimal thresholding algorithms and filters, was developed and evaluated. It was established that an automatic selection will ensure a high recognition rate when applied to a set of arbitrary invoice images by successfully adapting and avoiding the methods that yield poor recognition rates.
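The core idea of the thesis above, choosing the binarization method per image rather than globally, can be sketched in a few lines. The sketch below is illustrative only and assumes a caller-supplied `score_fn` (for example, the mean confidence reported by an OCR engine); the two candidate binarizers and all names are ours, not the thesis's.

```python
import numpy as np

def global_mean_binarize(gray):
    """One global threshold at the image mean."""
    return ((gray > gray.mean()) * 255).astype(np.uint8)

def tiled_mean_binarize(gray, block=32, c=5):
    """Very simple adaptive variant: one threshold per block of the image."""
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(0, gray.shape[0], block):
        for x in range(0, gray.shape[1], block):
            tile = gray[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = ((tile > tile.mean() - c) * 255).astype(np.uint8)
    return out

CANDIDATES = [global_mean_binarize, tiled_mean_binarize]

def select_preprocessing(gray, score_fn):
    """Binarize with every candidate and keep the output the recognizer scores best."""
    return max((f(gray) for f in CANDIDATES), key=score_fn)
```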
4

Granlund, Oskar, and Kai Böhrnsen. "Improving character recognition by thresholding natural images." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208899.

Abstract:
Current state-of-the-art optical character recognition (OCR) algorithms are capable of extracting text from images under predefined conditions. OCR is extremely reliable for interpreting machine-written text with minimal distortions, but images taken in a natural scene are still challenging. In recent years the topic of improving recognition rates in natural images has gained interest because more powerful handheld devices are being used. The main problems faced in natural-image recognition are distortions such as uneven illumination, font textures, and complex backgrounds. Different preprocessing approaches to separate text from its background have been researched lately. In our study, we assess the improvement achieved by two of these preprocessing methods, k-means and Otsu's method, by comparing their results from an OCR algorithm. The study showed that the preprocessing brought some improvement in special cases but overall yielded worse accuracy than the unaltered images.
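For orientation, minimal NumPy versions of the two preprocessing methods compared above are sketched below; both reduce to choosing a single global threshold. This is a generic rendition of Otsu's method and two-cluster k-means, not the authors' exact pipeline.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the grey level that maximizes between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                        # class-0 probability per cut
    mu = np.cumsum(p * np.arange(256))          # cumulative first moment
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.finfo(float).eps     # avoid division by zero at the ends
    sigma_b2 = (mu[-1] * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b2))

def kmeans2_threshold(gray, iters=100):
    """Two-cluster 1-D k-means on intensities; threshold = midpoint of the centroids."""
    x = gray.ravel().astype(float)
    c0, c1 = x.min(), x.max()                   # initial centroids
    for _ in range(iters):
        mid = (c0 + c1) / 2.0
        lo, hi = x[x <= mid], x[x > mid]
        if lo.size == 0 or hi.size == 0:
            break
        n0, n1 = lo.mean(), hi.mean()
        if (n0, n1) == (c0, c1):                # converged
            break
        c0, c1 = n0, n1
    return (c0 + c1) / 2.0

# usage: binary = ((gray > otsu_threshold(gray)) * 255).astype(np.uint8)
```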
5

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." University of Sydney. School of Electrical and Information Engineering, 2005. http://hdl.handle.net/2123/699.

Abstract:
Thresholding is a commonly used technique in image segmentation because of its fast and easy application, and threshold selection is therefore an important issue. There are two general approaches to threshold selection: one is based on the histogram of the image, while the other is based on the grey-scale information located in small local areas. The histogram of an image contains statistical data on its greyscale or colour components. In this thesis, an adaptive logical thresholding method is first proposed for the binarization of blueprint images. The new method exploits the geometric features of blueprint images. This is implemented by a robust windowed operation, based on the assumption that the objects have a "C" shape in a small area. We use multiple window sizes in the windowed operation, which not only reduces computation time but also effectively separates thin lines from wide lines. Our method can automatically determine the threshold of images, and experiments show that it is effective for blueprint images and achieves good results over a wide range of images. Second, fuzzy set theory, along with probability partition and maximum entropy theory, is explored to compute the threshold based on the histogram of the image. Fuzzy set theory has been widely used in many fields where ambiguous phenomena exist since it was proposed by Zadeh in 1965, and many thresholding methods have been developed using this theory. The concept used here is the fuzzy partition: because our method is based on the histogram of the image, the histogram is parted into several groups by fuzzy sets which represent the fuzzy membership of each group. Probability partition is associated with fuzzy partition, and the probability distribution of each group is derived from the fuzzy partition. Entropy, which originates from thermodynamics, was introduced into communications theory as a common criterion for measuring the information transmitted through a channel, and it has been adopted in image processing as a measure of the information contained in the processed images. It is therefore applied in our method as a criterion for selecting the optimal fuzzy sets which partition the histogram. To find the threshold, the histogram of the image is partitioned by fuzzy sets which satisfy a certain entropy restriction. The search for the best possible fuzzy sets becomes an important issue; there was previously no efficient method for this search, so expansion to multilevel thresholding with fuzzy partition becomes extremely time consuming or even impossible. In this thesis, the relationship between a probability partition (PP) and a fuzzy C-partition (FP) is studied. This relationship and the entropy approach are used to derive a thresholding technique that selects the optimal fuzzy C-partition, where the measure of selection quality is the entropy function defined by the PP and FP. A necessary condition for the entropy function to reach a maximum is derived. Based on this condition, an efficient search procedure for two-level thresholding is derived, which makes the search so efficient that extension to multilevel thresholding becomes possible. A novel fuzzy membership function is proposed for three-level thresholding, which produces a better result because a new relationship among the fuzzy membership functions is presented.
This new relationship gives more flexibility in the search for the optimal fuzzy sets, although it also complicates the search in multilevel thresholding. This complication is solved by a new method called the "Onion-Peeling" method. Because the relationship between the fuzzy membership functions is so complicated, it is impossible to obtain all the membership functions at once; the search procedure is therefore decomposed into several layers of three-level partitions, except for the last layer, which may be a two-level one. The large problem is thus simplified to three-level partitions, so that the two outermost membership functions can be obtained without worrying too much about the complicated intersections among the membership functions. The method is further revised for images with a dominant background area or an object which affects the appearance of the histogram. The histogram is the basis of our method, as of many others, and a "bad" histogram shape will result in a badly thresholded image. A quadtree scheme is adopted to decompose the image into homogeneous and heterogeneous areas, and a multiresolution thresholding method based on the quadtree and fuzzy partition is then devised to deal with these images. Extension of fuzzy partition methods to colour images is also examined: an adaptive thresholding method for colour images based on fuzzy partition is proposed which can determine the number of thresholding levels automatically. This thesis concludes that the "C"-shape assumption and the varying window sizes in the windowed operation contribute to a better segmentation of blueprint images. The efficient search procedure for the optimal fuzzy sets in the fuzzy 2-partition of the histogram accelerates the process so much that it enables the extension to multilevel thresholding. In three-level fuzzy partition, the new relationship among the three fuzzy membership functions makes more sense than the conventional assumption and, as a result, performs better. A novel method, the "Onion-Peeling" method, is devised to deal with the complexity at the intersections among the multiple membership functions in multilevel fuzzy partition; it decomposes the multilevel partition into fuzzy 3-partitions and fuzzy 2-partitions by transposing the partition space in the histogram, and is thus efficient in multilevel thresholding. A multiresolution method which applies the quadtree scheme to distinguish heterogeneous from homogeneous areas is designed for images with large homogeneous areas, which usually distort the histogram; the new histogram based only on the heterogeneous areas is adopted for the partition and outperforms the old one, while validity checks filter out fragmented points, which form only a small portion of the whole image. The method thus gives good thresholded images for human face images.
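The fuzzy C-partition search itself is thesis-specific, but the underlying idea of picking the threshold that maximizes an entropy criterion over the histogram can be illustrated with the classical crisp maximum-entropy threshold in the style of Kapur, Sahoo and Wong. The sketch below shows only that simpler criterion, none of the fuzzy-partition machinery above.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Two-level maximum-entropy threshold: choose t maximizing the sum of the
    Shannon entropies of the background and foreground histogram partitions."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    P = np.cumsum(p)
    best_t, best_h = 128, -np.inf
    for t in range(1, 255):
        w0, w1 = P[t], 1.0 - P[t]
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        p0, p1 = p[:t + 1] / w0, p[t + 1:] / w1   # normalized class distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```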
6

Zhao, Mansuo. "Image Thresholding Technique Based On Fuzzy Partition And Entropy Maximization." Thesis, The University of Sydney, 2004. http://hdl.handle.net/2123/699.

7

Bunn, Wendy J. "Sensitivity to distributional assumptions in estimation of the ODP thresholding function." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1918.pdf.

8

Quan, Jin. "Image Denoising of Gaussian and Poisson Noise Based on Wavelet Thresholding." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1380556846.

9

Pakalapati, Himani Raj. "Programming of Microcontroller and/or FPGA for Wafer-Level Applications - Display Control, Simple Stereo Processing, Simple Image Recognition." Thesis, Linköpings universitet, Elektroniksystem, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-89795.

Abstract:
In this work, the use of a wafer-level camera (WLC) for ensuring road safety is presented. A prototype WLC together with the Aptina MT9M114 stereo board was used for this project. The basic idea is to observe the movements of the driver; doing so gives an understanding of whether the driver is concentrating on the road. The required scene is captured with a wafer-level camera pair, and stereo processing is performed on the image pairs to obtain the real depth of the objects in the scene. Image recognition is used to separate the object from the background, which ultimately allows concentrating only on the object of interest, in the present context the driver.
10

Vantaram, Sreenath Rao. "Fast unsupervised multiresolution color image segmentation using adaptive gradient thresholding and progressive region growing /." Online version of thesis, 2009. http://hdl.handle.net/1850/9016.

11

Sorwar, Golam 1969. "A novel distance-dependent thresholding strategy for block-based performance scalability and true object motion estimation." Monash University, Gippsland School of Computing and Information Technology, 2003. http://arrow.monash.edu.au/hdl/1959.1/5510.

12

Kaur, Ravneet. "THRESHOLDING METHODS FOR LESION SEGMENTATION OF BASAL CELL CARCINOMA IN DERMOSCOPY IMAGES." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/dissertations/1367.

Abstract:
Purpose: Automatic border detection is the first and most crucial step for lesion segmentation and can be very challenging, due to several lesion characteristics. There are many melanoma border-detecting algorithms that perform poorly on dermoscopy images of basal cell carcinoma (BCC), which is the most common skin cancer. One of the reasons for poor lesion detection performance is that there are very few algorithms that detect BCC borders, because they are difficult to segment, even for dermatologists. This difficulty is due to low contrast, variation in lesion color and artifacts inside/outside the lesion. Segmentation that has adequate lesion-feature capture, with acceptable tolerance, will facilitate accurate feature segmentation, thereby maximizing classification accuracy. Methods: The main objective of this research was to develop an effective BCC border-detecting algorithm whose accuracy is better than the existing melanoma border detectors that have been applied to BCCs. Fifteen auto-thresholding techniques were implemented for BCC lesion segmentation, but only five were selected for use in algorithm development. A novel technique was developed to automatically expand BCC lesion borders, to completely circumscribe the lesion. Two error metrics were used that better measure Type II (false-negative) errors: relative XOR error and Lesion Capture Ratio (a novel error metric). Results: On training and test sets of 1023 and 119 images, respectively, based on the two error metrics, five thresholding-based algorithms outperformed two state-of-the-art melanoma segmentation techniques in segmenting BCCs. The five algorithms generated borders that appreciably better matched dermatologists' hand-drawn borders, which were used as the "gold standard." Conclusion: The five developed algorithms, which included solutions for image-vignetting correction and border expansion to achieve dermatologist-like borders, provided more inclusive and therefore feature-preserving border detection, favoring better BCC classification accuracy for future work.
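Of the two error metrics mentioned, the relative XOR error is standard and easy to state; the Lesion Capture Ratio is the dissertation's own metric, whose exact definition is not reproduced here, so the second function below is a hypothetical stand-in (overlap fraction of the ground-truth lesion). A rough sketch:

```python
import numpy as np

def relative_xor_error(auto_mask, gt_mask):
    """Area of disagreement (XOR) relative to the ground-truth lesion area."""
    a, g = auto_mask.astype(bool), gt_mask.astype(bool)
    return np.logical_xor(a, g).sum() / g.sum()

def capture_ratio(auto_mask, gt_mask):
    """Hypothetical stand-in for the Lesion Capture Ratio: the fraction of the
    ground-truth lesion covered by the automatic border (penalizes false negatives)."""
    a, g = auto_mask.astype(bool), gt_mask.astype(bool)
    return np.logical_and(a, g).sum() / g.sum()
```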
13

Manay, Siddharth. "Applications of anti-geometric diffusion of computer vision : thresholding, segmentation, and distance functions." Diss., Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/33626.

14

Široký, Vít. "Implementace algoritmů zpracování obrazového rastru v FPGA." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-237206.

Abstract:
This thesis offers a somewhat unusual view of the implementation of graphics algorithms in an FPGA in the context of computer vision. It gives background on raster images and raster image operations, on raster image segmentation using thresholding and adaptive thresholding, and on the FPGA and DSP platforms. It then presents the design of a concrete realization of the project in the Unicam2D camera and describes other possible implementations. Finally, it describes the implemented tests with some demonstrations, followed by a discussion of the results at the end of the work.
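As a point of reference for the adaptive-thresholding stage described above, here is a minimal software model of windowed adaptive thresholding using an integral image; the window size `win` and offset `c` are arbitrary illustrative parameters, and the FPGA implementation in the thesis is of course structured very differently.

```python
import numpy as np

def adaptive_threshold(gray, win=15, c=7):
    """Windowed adaptive thresholding via an integral image: each pixel is
    compared with the mean of its win x win neighbourhood minus a constant c.
    The per-pixel work reduces to a few adds and one compare, which is why
    variants of this scheme map well onto hardware pipelines."""
    g = gray.astype(np.float64)
    ii = np.pad(g.cumsum(0).cumsum(1), ((1, 0), (1, 0)))   # ii[i, j] = sum of g[:i, :j]
    h, w = g.shape
    r = win // 2
    ys = np.clip(np.arange(h) - r, 0, h); ye = np.clip(np.arange(h) + r + 1, 0, h)
    xs = np.clip(np.arange(w) - r, 0, w); xe = np.clip(np.arange(w) + r + 1, 0, w)
    Y0, X0 = np.meshgrid(ys, xs, indexing="ij")
    Y1, X1 = np.meshgrid(ye, xe, indexing="ij")
    area = (Y1 - Y0) * (X1 - X0)
    mean = (ii[Y1, X1] - ii[Y0, X1] - ii[Y1, X0] + ii[Y0, X0]) / area
    return ((g > mean - c) * 255).astype(np.uint8)
```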
15

Yanni, Mamdouh. "The influence of thresholding and spatial resolution variations on the performance of the complex moments descriptor feature extractor." Thesis, University of Kent, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262371.

16

Mirshahi, Nazanin. "A Hierarchical Image Processing Approach for Diagnostic Analysis of Microcirculation Videos." VCU Scholars Compass, 2011. http://scholarscompass.vcu.edu/etd/286.

Abstract:
Knowledge of the microcirculatory system has added significant value to the analysis of tissue oxygenation and perfusion. While developments in videomicroscopy technology have enabled medical researchers and physicians to observe the microvascular system, the available software tools are limited in their capabilities to determine quantitative features of microcirculation, either automatically or accurately. In particular, microvessel density has been a critical diagnostic measure in evaluating disease progression and a prognostic indicator in various clinical conditions. As a result, automated analysis of the microcirculatory system can be substantially beneficial in various real-time and off-line therapeutic medical applications, such as optimization of resuscitation. This study focuses on the development of an algorithm to automatically segment microvessels, calculate the density of capillaries in microcirculatory videos, and determine the distribution of blood circulation. The proposed technique is divided into four major steps: video stabilization, video enhancement, segmentation and post-processing. The stabilization step estimates motion and corrects for the motion artifacts using an appropriate motion model. Video enhancement improves the visual quality of video frames through preprocessing, vessel enhancement and edge enhancement. The resulting frames are combined through an adjusted weighted median filter and the combined frame is then thresholded using an entropic thresholding technique. Finally, a region growing technique is utilized to correct for the discontinuity of blood vessels. Using the final binary results, the most commonly used measure for the assessment of microcirculation, i.e. Functional Capillary Density (FCD), is calculated. The designed technique is applied to video recordings of healthy and diseased human and animal samples obtained by the MicroScan device, based on the Sidestream Dark Field (SDF) imaging modality. To validate the final results, the calculated FCD results are compared with the results obtained by blind detailed inspection of three medical experts, who used AVA (Automated Vascular Analysis) semi-automated microcirculation analysis software. Since there is neither a fully automated accurate microcirculation analysis program, nor a publicly available annotated database of microcirculation videos, the results acquired by the experts are considered the gold standard. Bland-Altman plots show that there is "good agreement" between the results of the algorithm and the gold standard. In summary, the main objective of this study is to eliminate the need for human interaction to edit/correct results, to improve the accuracy of stabilization and segmentation, and to reduce the overall computation time. The proposed methodology impacts the field of computer science through the development of image processing techniques to discover the knowledge in grayscale video frames. The broad impact of this work is to assist physicians, medical researchers and caregivers in making diagnostic and therapeutic decisions for microcirculatory abnormalities and in the study of the human microcirculation.
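One step of the pipeline, correcting vessel discontinuity by region growing, lends itself to a compact sketch. The growing rule used below (accept a pixel while it stays within a tolerance of the running region mean) is a generic choice, not necessarily the one used in this work.

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=10):
    """Grow a region from `seed` over 4-connected pixels whose intensity stays
    within `tol` of the running region mean."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    total, count = float(gray[seed]), 1
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(gray[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(gray[ny, nx]); count += 1
                    q.append((ny, nx))
    return mask
```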
17

Novotný, Stanislav. "Detekce mobilního robotu zpracováním obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230171.

Abstract:
This master's thesis deals with the processing of an image sequence taken by a statically placed camera above the plane of robot movement. First, methods for image segmentation and localization are described. Next, selected methods are implemented and compared on individual images. Finally, the selected methods are incorporated into an algorithm for batch processing of the image sequence.
18

Hutchison, Luke Alexander Daysh. "Fast Registration of Tabular Document Images Using the Fourier-Mellin Transform." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/4.

Abstract:
Image registration, the process of finding the transformation that best maps one image to another, is an important tool in document image processing. Having properly-aligned microfilm images can help in manual and automated content extraction, zoning, and batch compression of images. An image registration algorithm is presented that quickly identifies the global affine transformation (rotation, scale, translation and/or shear) that maps one tabular document image to another, using the Fourier-Mellin transform. Each component of the affine transform is recovered independently from the others, dramatically reducing the parameter space of the problem and improving upon standard Fourier-Mellin image registration (FMIR), which only directly separates translation from the other components. FMIR is also extended to handle shear, as well as different scale factors for each document axis. This registration method deals with all transform components in a uniform way, by working in the frequency domain. Registration is limited to foreground pixels (the document form and printed text) through the introduction of a novel, locally adaptive foreground-background segmentation algorithm based on the median filter. The background removal algorithm is also demonstrated as a useful tool to remove ambient signal noise during correlation. Common problems with FMIR are eliminated by background removal, meaning that apodization (tapering down to zero at the edge of the image) is not needed for accurate recovery of the rotation parameter, allowing the entire image to be used for registration. An effective new optimization to the median filter is presented. Rotation and scale parameter detection is less susceptible to problems arising from the non-commutativity of rotation and "tiling" (periodicity) than standard FMIR, because only the regions of the frequency domain directly corresponding to tabular features are used in registration. An original method is also presented for automatically obtaining blank document templates from a set of registered document images, by computing the "pointwise median" of a set of registered documents. Finally, registration is demonstrated as an effective tool for predictive image compression. The presented registration algorithm is reliable and robust, and handles a wider range of transformation types than most document image registration systems (which typically only perform deskewing).
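The translation-recovery step that Fourier-Mellin registration builds on is plain phase correlation: the normalized cross-power spectrum of two images has an inverse FFT that peaks at their relative shift. A minimal sketch (integer shifts only; the sign convention depends on which image is taken as the reference):

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the relative integer translation between two same-sized images
    from the peak of the normalized cross-power spectrum."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)        # keep phase information only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks in the far half of the array correspond to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```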
19

Sarma, Subramonia P. "Relationship between suspicious coincidence in natural images and contour-salience in oriented filter responses." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/472.

Abstract:
Salient contour detection is an important low-level visual process in the human visual system, and has significance for understanding higher visual and cognitive processes. Salience detection can be investigated by examining the visual cortical response to visual input. Visual response activity in the early stages of visual processing can be approximated by a sequence of convolutions of the input scene with difference-of-Gaussian (DoG) and oriented Gabor filters. The filtered responses are unusually high for prominent edge locations in the image, are uniformly similar across different natural image inputs and, furthermore, follow a power-law distribution. The aim of this thesis is to examine how these response properties can be utilized for the problem of salience detection. First, I identify a method to find the best threshold on the response activity (orientation energy) for the detection of salient contours: compare the response distribution to a Gaussian distribution of equal variance. Second, I justify this comparison by providing an explanation under the framework of suspicious coincidence proposed by Barlow [1]. A connection is provided between the perceived salience of contours and the neuronal goal of detecting suspiciousness, where salient contours are seen as affording suspicious coincidences by the visual system. Finally, the neural plausibility of such a salience detection mechanism is investigated, and its representational efficiency is shown, which could potentially explain why the human visual system can effortlessly detect salience.
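The thesis's threshold rule, comparing the response distribution with a Gaussian of equal variance, admits a simple histogram-based reading: find where the empirical density of orientation energies starts to exceed the matched Gaussian, i.e., where the heavy tail begins. The sketch below is one such reading; the search region (beyond one standard deviation) is our illustrative choice, not the thesis's exact procedure.

```python
import numpy as np

def gaussian_comparison_threshold(energy, bins=256):
    """Threshold where the empirical density of filter responses first exceeds
    a Gaussian of equal mean and variance, searched beyond one standard
    deviation (an illustrative choice). Assumes energy.std() > 0."""
    e = energy.ravel().astype(float)
    m, s = e.mean(), e.std()
    hist, edges = np.histogram(e, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-0.5 * ((centers - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    tail = (centers > m + s) & (hist > gauss)    # where the data dominate the Gaussian
    idx = np.where(tail)[0]
    return centers[idx[0]] if idx.size else centers[-1]
```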
20

Al-Azawi, Mohammad Ali Naji Said. "A new approach to automatic saliency identification in images based on irregularity of regions." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/11122.

Abstract:
This research introduces an image retrieval system which is, in different ways, inspired by the human vision system. The main problems with existing machine vision systems and image understanding are studied and identified, in order to design a system that relies on human image understanding. The main improvement of the developed system is that it uses human attention principles in the process of image content identification. Human attention is represented by saliency extraction algorithms, which extract the salient regions, or in other words the regions of interest. This work presents a new approach to saliency identification which relies on the irregularity of a region. Irregularity is clearly defined and measuring tools are developed; these measures are derived from the formality and variation of the region with respect to the surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering techniques motivated us to study the available clustering techniques and to develop one suitable for clustering salient points. Based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system's development, so a fuzzy thresholding technique was developed. Evaluation methods for saliency-region extraction have been studied and analysed; subsequently, we developed evaluation techniques based on the extracted regions (or points) and compared them with ground truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms. Both quantitative and qualitative benchmarking are presented in this thesis, together with a detailed discussion of the results; the benchmarking showed promising results for the different algorithms. The developed algorithms have been utilised in designing an integrated saliency-based image retrieval system which uses the salient regions to give a description of the scene. The system auto-labels the objects in the image by identifying the salient objects and assigns labels based on the knowledge database contents. In addition, the system identifies the unimportant part of the image (the background) to give a full description of the scene.
21

Ozkilic, Sibel. "Performance Improvement Of A 3-d Configuration Reconstruction Algorithm For An Object Using A Single Camera Image." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/1095793/index.pdf.

Abstract:
This study focuses on performance improvement of a 3-D configuration reconstruction algorithm that uses a passive secondary target. In earlier studies, a theoretical development of the 3-D configuration reconstruction algorithm was achieved, and it was implemented by a computer program on a system consisting of an optical bench and a digital imaging system. The passive secondary target used was a circle with two internal spots. In order to use this reconstruction algorithm in autonomous systems, an automatic target recognition algorithm has been developed. Starting from a pre-captured and stored 8-bit grey-level image, the algorithm automatically detects the elliptical image of a circular target and determines its contour in the scene. It was shown that the algorithm can also be used for partially captured elliptical images. Another improvement achieved in this study is the determination of the internal camera parameters of the vision system.
22

Akyay, Tolga. "Wavelet-based Outlier Detection And Denoising Of Airborne Laser Scanning Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12610164/index.pdf.

Abstract:
The method of airborne laser scanning, also known as LIDAR, has recently turned out to be an efficient way of generating high-quality digital surface and elevation models. In this work, wavelet-based outlier detection and different wavelet thresholding (wavelet shrinkage) methods for denoising airborne laser scanning data are discussed. The task is to investigate the effect of wavelet-based outlier detection and to find out which wavelet thresholding methods provide the best denoising results for post-processing. Data and results are analyzed and visualized using a MATLAB program developed during this work.
23

Novotný, Radek. "Aplikace waveletové transformace v software Mathematica a Sage." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220339.

Abstract:
This thesis focuses on image processing using the wavelet transform, analysed especially for the purposes of image compression and image noise reduction. The analysis describes in detail aspects and applications of the following wavelet transform methods: CWT, DWT, DTWT and 2D DWT. The thesis further explains the meaning of the mother wavelet and studies certain specific kinds of wavelets, kinds of thresholding and their purposes, and also touches on the JPEG2000 standard. The Mathematica and Sage software packages were used to design algorithms for image compression and image noise reduction, utilising the relevant wavelet transform findings. The concluding part of the thesis compares the two software packages and the results obtained using the different algorithms.
24

Kardell, Martin. "Automatic Segmentation of Tissues in CT Images of the Pelvic Region." Thesis, Linköpings universitet, Institutionen för medicinsk teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-112540.

Abstract:
In brachytherapy, radiation therapy is performed by placing the radiation source into or very close to the tumour. When calculating the absorbed dose, water is often used as the radiation transport and dose scoring medium for soft tissues, and this leads to inaccuracies. The iterative reconstruction algorithm DIRA is under development at the Center for Medical Imaging Science and Visualization, Linköping University. DIRA uses dual-energy CT to decompose tissues into different doublets and triplets of base components for a better absorbed-dose estimation. To accurately determine the mass fractions of these base components for different tissues, the tissues need to be identified in the image. The aims of this master thesis are: (i) find an automated segmentation algorithm in CT that best segments the male pelvis; (ii) implement a segmentation algorithm that can be used in DIRA; (iii) implement a fully automatic segmentation algorithm. Seven segmentation methods were tested in MATLAB using images obtained from Linköping University Hospital: active contours, atlas-based registration, graph cuts, level set, region growing, thresholding and watershed. Four segmentation algorithms were selected for further analysis: phase-based atlas registration, region growing, thresholding and active contours without edges. The four algorithms were combined and supplemented with other image analysis methods to form a fully automated segmentation algorithm that was implemented in DIRA. The newly developed algorithm (named MK2014) was sufficiently stable for pelvic image segmentation, with a mean computational time of 45.3 s and a mean Dice similarity coefficient of 0.925 per 512×512 image. The performance of MK2014 tested on a simplified anthropomorphic phantom in DIRA gave promising results. Additional tests with more realistic phantoms are needed to confirm the general applicability of MK2014 in DIRA.
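The evaluation metric quoted above, the Dice similarity coefficient, is worth writing out since it anchors the 0.925 figure: it is twice the overlap of the two masks divided by the sum of their areas. A minimal version:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```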
25

Rathnayaka, Mudiyanselage Kanchana. "3D reconstruction of long bones utilising magnetic resonance imaging (MRI)." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/49779/1/Kanchana_Rathnayaka_Mudiyanselage_Thesis.pdf.

Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient utilises 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers that generally represent the above 60 year old age group. Thus, despite the fact that half of the seriously injured population comes from the 30 year age group and below, virtually no data exists from these younger age groups to inform the design of implants that optimally fit patients from these groups. Hence, relevant bone data from these age groups is required. The current gold standard for acquiring such data–CT–involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in the previous studies conducted using small bones (tarsal bones) and parts of the long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validations using long bones and appropriate reference standards are required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are not trained to perform. Therefore, an accurate but relatively simple segmentation method is required for segmentation of CT and MRI data. Furthermore, some of the limitations of 1.5T MRI such as very long scanning times and poor contrast in articular regions can potentially be reduced by using higher field 3T MRI imaging. However, a quantification of the signal to noise ratio (SNR) gain at the bone - soft tissue interface should be performed; this is not reported in the literature. As MRI scanning of long bones has very long scanning times, the acquired images are more prone to motion artefacts due to random movements of the subject's limbs. One of the artefacts observed is the step artefact that is believed to occur from the random movements of the volunteer during a scan. This needs to be corrected before the models can be used for implant design. As the first aim, this study investigated two segmentation methods: intensity thresholding and Canny edge detection as accurate but simple segmentation methods for segmentation of MRI and CT data. The second aim was to investigate the usability of MRI as a radiation free imaging alternative to CT for reconstruction of 3D models of long bones. The third aim was to use 3T MRI to improve the poor contrast in articular regions and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques. The segmentation methods were investigated using CT scans of five ovine femora. The single level thresholding was performed using a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were used for the proximal, diaphyseal and distal regions of the femur. Canny edge detection was used by delineating the outer and inner contour of 2D images and then combining them to generate the 3D model. Models generated from these methods were compared to the reference standard generated using the mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora and segmenting them using the multilevel threshold method.
A surface geometric comparison was conducted between CT based, MRI based and reference models. To quantitatively compare the 1.5T images to the 3T MRI images, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer. The images obtained using the identical protocols were compared by means of SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. In order to correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner. The step was corrected using the iterative closest point (ICP) algorithm based aligning method. The present study demonstrated that the multi-threshold approach in combination with the threshold selection method can generate 3D models from long bones with an average deviation of 0.18 mm. The same was 0.24 mm of the single threshold method. There was a significant statistical difference between the accuracy of models generated by the two methods. In comparison, the Canny edge detection method generated average deviation of 0.20 mm. MRI based models exhibited 0.23 mm average deviation in comparison to the 0.18 mm average deviation of CT based models. The differences were not statistically significant. 3T MRI improved the contrast in the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially improving the inaccuracies conferred by poor contrast of the articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact that occurred by the volunteer moving the leg was corrected, generating errors of 0.32 ± 0.02 mm when compared with the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, is able to produce 3D models of long bones with accurate geometric representations. The method is, therefore, a potential alternative to the current gold standard CT imaging.
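The step-artefact correction relies on the standard ICP loop: match each point to its nearest neighbour on the reference surface, solve for the best rigid transform in closed form (SVD/Kabsch), apply it, and repeat. The sketch below is a generic point-to-point ICP on small point sets with brute-force matching, not the thesis's implementation.

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Iterative closest point: repeatedly match each point of P to its nearest
    neighbour in Q (brute force) and re-estimate the rigid transform."""
    P = P.copy()
    for _ in range(iters):
        d = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        match = Q[d.argmin(axis=1)]
        R, t = best_rigid_transform(P, match)
        P = P @ R.T + t
    return P
```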
26

Ramamoorthy, Dhyanesh. "Muscle Fatigue Detection using Infrared Thermography: Image Segmentation to Extract the Region of Interest from Thermograms." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543923019568392.

27

Oz, Sinan. "Implement Of Three Segmentation Algorithms For Ct Images Of Torso." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12612866/index.pdf.

Abstract:
Many practical applications in the field of medical image processing require valid and reliable segmentation of images. In this dissertation, we propose three different semi-automatic segmentation frameworks for 2D upper-torso medical images to construct a 3D geometric model of the torso structures. In the first framework, an extended version of Otsu's method for three-level thresholding is combined with a recursive connected component algorithm. The segmentation process is accomplished by first applying the extended Otsu's method and then labeling each consecutive slice. Since there is no information about pixel positions in the outcome of the extended Otsu's method, we perform some processing after labeling to connect pixels belonging to the same tissue. In the second framework, the Chan-Vese (CV) method, an example of an active contour model, is used together with a recursive connected component algorithm; the segmentation is achieved using the CV method without edge information as the stopping criterion. In the third and last framework, the combination of the watershed transformation and K-means is used as the segmentation method. After the segmentation operation, labeling is performed to determine the medical structures. Segmentation and labeling are carried out for each consecutive slice in every framework. The results of each framework are compared quantitatively with manual segmentation results to evaluate their performance.
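The labeling stage common to all three frameworks is connected-component labeling. A small sketch of the idea follows; where the dissertation uses a recursive algorithm, the version below uses an explicit queue (flood fill), which behaves identically but avoids deep recursion.

```python
import numpy as np
from collections import deque

def label_components(binary):
    """4-connected component labeling of a boolean image by iterative flood fill.
    Returns an int32 label image (0 = background) and the component count."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current
```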
28

Ohrádka, Marek. "Stabilizace obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219771.

Abstract:
This thesis deals with digital image stabilization. It contains a brief overview of the problem and of available methods for digital image stabilization. The aim was to design and implement an image stabilization system in Java intended for RapidMiner. Two new stabilization methods have been proposed. The first is based on motion estimation and motion compensation using full-search and three-step search algorithms; the basis of the second method is the detection of object boundaries. The functionality of the proposed methods was tested on video sequences containing visible shaking of the scene, which were created for this purpose. Testing results show that with a proper set of input parameters for the object-boundary detection method, successful stabilization of the scene is achieved; the error between images is reduced by approximately 65 to 85%. The output of the method is a stabilized image sequence and a set of metadata collected during stabilization, which can be further processed in the RapidMiner environment.
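Of the two block-matching strategies named above, the three-step search is the more distinctive: probe the eight neighbours of the current best position at a coarse step, recentre on the best match, halve the step, and repeat. A minimal sketch with the sum of absolute differences (SAD) as the matching cost; the block size and initial step are illustrative defaults.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def three_step_search(ref, cur, y, x, block=16, step=4):
    """Estimate the motion vector of the block of `cur` at (y, x) within `ref`."""
    h, w = ref.shape
    tpl = cur[y:y + block, x:x + block]
    by, bx = y, x
    best = sad(ref[by:by + block, bx:bx + block], tpl)
    while step >= 1:
        cy, cx = by, bx                       # centre of this round's 3x3 probe
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                ny, nx = cy + dy, cx + dx
                if 0 <= ny <= h - block and 0 <= nx <= w - block:
                    cost = sad(ref[ny:ny + block, nx:nx + block], tpl)
                    if cost < best:
                        best, by, bx = cost, ny, nx
        step //= 2                            # refine the search radius
    return by - y, bx - x
```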
29

Svoboda, Ondřej. "Pokročilé metody segmentace cévního řečiště na fotografiích sítnice." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220290.

Abstract:
Segmentation of the vascular tree is an important step in retinal image processing. There are many methods of automatic blood vessel segmentation, based on matched filters, pattern recognition or image classification. Automatic retinal image processing greatly simplifies and accelerates the diagnosis of retinal images. A key step of the automatic segmentation algorithms is thresholding. This work primarily deals with retinal image thresholding: we discuss a few works using local and global image thresholding and supervised image classification for segmentation of the vessel tree from retinal images. Subsequently, the results of two different methods that use image classification are presented, and the effectiveness of the vessel segmentation is discussed. Using image classification instead of global thresholding changed the statistics of the first method on the healthy part of the HRF dataset: sensitivity and accuracy decreased to 62.32% and 94.99%, respectively, while specificity increased to 95.75%. The second method achieved a sensitivity of 69.24%, a specificity of 98.86% and an accuracy of 95.29%. Combining the results of both methods raised the sensitivity up to 72.48%, the specificity to 98.59% and the accuracy to 95.75%. This confirmed the assumption that the classifier would achieve better results, and it was shown that extending the feature vector by combining the results of both methods increased sensitivity, specificity and accuracy.
30

Gregor, Michal. "Zvýraznění biomedicinských obrazových signálů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218292.

Abstract:
When biomedical images are acquired by magnetic resonance or ultrasound, unwanted elements in the form of noise enter the image. With the help of various methods, the noise can be partially removed from the image. There are many noise reduction methods, each working on a different principle; as a result, their outputs differ and need to be assessed objectively. In this work, the wavelet transform and several thresholding techniques are used to adjust the images. The quality of the resulting pictures is tested with objective quality measures. Testing was done in the MATLAB environment on images from magnetic resonance and ultrasound.
31

Pavlicova, Martina. "Thresholding FMRI images." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1097769474.

32

Pavlicová, Martina. "Thresholding FMRI images." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1097769474.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2004.
Title from first page of PDF file. Document formatted into pages; contains xvii, 109 p.; also includes graphics (some col.) Includes bibliographical references (p. 107-109). Available online via OhioLINK's ETD Center
33

Václavek, Martin. "Automatická detekce výpadku ve vrstvě nervových vláken." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218724.

Abstract:
This work focuses on the detection of nerve fibre layer loss in colour images of the retina taken with a fundus camera. It describes the basic structures of the retina: the optic nerve head, the macula lutea and the vascular bed. It detects the optic nerve head and its surrounding area, which is the usual region for detecting the loss. Several image-adjustment methods are used for image processing and object detection (segmentation, thresholding, enhancement, Hough transformation). The detection of loss in the nerve fibre layer is based on comparing statistical parameters (mean, standard deviation, histogram skewness and kurtosis, entropy) in selected areas with and without damage to the nerve layer. The vascular bed adversely affects the results; because of this, the sample areas are selected by hand.
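The comparison of statistical parameters described above is straightforward to express; the sketch below computes the five quantities named (mean, standard deviation, skewness, kurtosis, histogram entropy) for one analysis window, assuming 8-bit grey levels.

```python
import numpy as np

def region_stats(values):
    """Descriptive statistics of the grey levels inside one analysis window."""
    v = values.ravel().astype(float)
    m, s = v.mean(), v.std()
    z = (v - m) / (s if s else 1.0)
    skew = (z ** 3).mean()                   # third standardized moment
    kurt = (z ** 4).mean() - 3.0             # excess kurtosis
    p = np.bincount(values.ravel().astype(np.uint8), minlength=256) / v.size
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"mean": m, "std": s, "skewness": skew, "kurtosis": kurt, "entropy": entropy}
```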
34

Valouch, Lukáš. "Implementace vlnkové transformace v jazyku C++." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219301.

Abstract:
The aim of this thesis is the implementation of a wavelet transform algorithm for noise reduction. The noise reduction is focused on improving the informative value of sonographic (ultrasound) images in medicine. For this purpose, thresholding of the detail coefficients at the individual levels of a multiresolution analysis is used. Common procedures were not used to search for the most suitable thresholds for these levels; instead, the alternative design is based on an empirical approach in which the individual thresholds are optimised by evolutionary algorithms. With this algorithmic procedure, however, problems arise in objectively evaluating the success of the noise reduction. Because of this, the program uses commonly used parameters such as the mean square error of the whole image, linear slope edge approximation, the relative contrast of two differently bright and distinct points, and the standard deviation of a compact surface. The described theoretical knowledge is used in the developed application DTWT, which executes multilevel decomposition and reverse reconstruction by the discrete-time wavelet transform, thresholding of the detail coefficients and final evaluation of the performed noise reduction. The developed tool can be used on its own to reduce noise; for our purposes, it was modified so that it is executed through the component for evolutionary optimization of parameters (Optimize Parameters) in a RapidMiner scenario. In the optimization process, this component used the evaluation received from the DTWT program as the fitness function. Optimal thresholds were sought separately for three wavelet families: Daubechies, Symlets and Coiflets. The evolutionary algorithm chose soft thresholding for all three wavelet families; compared to hard thresholding it is more suitable for noise reduction, but it tends to blur the edges more. In most cases, the devised method, wavelet transform with thresholds found by evolutionary algorithms, achieved a higher evaluated noise-reduction success than commonly used filters. In visual comparison, however, the wavelet transform introduced some minor depreciating artefacts into the image. It is always a compromise between noise reduction and maximal preservation of image information; objectively evaluating this dilemma is not easy and always depends on a subjective viewpoint, which in the case of sonographic images is that of the attending physician.
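The optimisation loop described, an evolutionary search over per-level thresholds scored by an image-quality evaluation, can be sketched independently of the DTWT application. Below, `fitness` is a placeholder for the combined quality criteria the thesis feeds back from its evaluation stage; the (1+1) evolution strategy and its step-size rule are our illustrative choices, not the thesis's exact algorithm.

```python
import numpy as np

def evolve_thresholds(fitness, n_levels, sigma0=5.0, generations=200, seed=0):
    """(1+1) evolution strategy over one soft-threshold value per decomposition
    level; `fitness` maps a threshold vector to a quality score (higher is better)."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(0.0, 50.0, size=n_levels)   # arbitrary initial threshold guess
    best_fit = fitness(best)
    sigma = sigma0
    for _ in range(generations):
        cand = np.clip(best + rng.normal(0.0, sigma, size=n_levels), 0.0, None)
        f = fitness(cand)
        if f >= best_fit:
            best, best_fit = cand, f
            sigma *= 1.1                            # widen the search after a success
        else:
            sigma *= 0.95                           # shrink it after a failure
    return best, best_fit
```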
APA, Harvard, Vancouver, ISO, and other styles
35

Oliveira, Helder Cesar Rodrigues de. "Proposta de redução da dose de radiação na mamografia digital utilizando novos algoritmos de filtragem de ruído Poisson." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-29032016-160603/.

Full text
Abstract:
The aim of this work is to present a novel method for removing Poisson noise from digital mammography images acquired with a reduced radiation dose. X-ray mammography is known to be the most effective exam for the early detection of breast cancer, greatly increasing the chances of curing the disease. However, the radiation absorbed by the patient during the exam is still a problem to be addressed: studies indicate that the exposure can itself induce breast cancer in a small number of women. Although this number is very low compared with the number of women whose lives the exam saves, it is important to develop methods that enable a reduction of the radiation dose used in the exam. Dose reduction, however, degrades image quality by lowering the signal-to-noise ratio, impairing medical diagnosis and the early detection of the disease. The purpose of this study is therefore to propose a method for filtering the Poisson noise added to mammographic images acquired at low dose, so that they reach a quality equivalent to images acquired at the standard dose. The proposed algorithm adapts well-established algorithms from the literature: filtering in the wavelet domain, here using shrink-thresholding (WTST), and Block-matching and 3D Filtering (BM3D). Results on phantom and clinical images showed that the proposed method filters the additional noise without apparent loss of information.
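For readers unfamiliar with Poisson-noise filtering, a common companion step (not claimed to be this thesis's exact pipeline) is the Anscombe variance-stabilising transform, which converts Poisson noise into approximately Gaussian noise so that Gaussian-noise filters such as BM3D can be applied; a minimal sketch:

```python
import numpy as np

def anscombe(x):
    """Map Poisson-distributed counts to data with approximately
    unit-variance Gaussian noise."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse; unbiased exact inverses exist but are
    more involved."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```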
APA, Harvard, Vancouver, ISO, and other styles
36

Jína, Miroslav. "Segmentace ledvin z renální perfúzní MR sekvence obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-219899.

Full text
Abstract:
This master's thesis deals with kidney segmentation in perfusion magnetic resonance image sequences. Kidney segmentation can be carried out by several methods, such as region-based techniques, deformable models, specimen-based methods, edge-oriented methods, etc.; a universal algorithm for patient kidney segmentation still does not exist. The proposed method is an active contour (snake), implemented in the MatLab programming environment. The final contours are compared quantitatively and visually with manual kidney segmentation.
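The thesis implements the snake in MatLab; purely as an illustration of the same idea, a sketch using scikit-image's active contour follows (the initial contour and the alpha/beta/gamma weights are placeholders, not values from the thesis):

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def segment_kidney(image, center=(100, 120), radii=(60, 80)):
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center[0] + radii[0] * np.sin(s),   # rows
                            center[1] + radii[1] * np.cos(s)])  # cols
    smoothed = gaussian(image, sigma=3, preserve_range=True)
    return active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
```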
APA, Harvard, Vancouver, ISO, and other styles
37

Gabriel, Petr. "Klasifikace objektů v obrazech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2009. http://www.nusl.cz/ntk/nusl-217783.

Full text
Abstract:
This master's thesis deals with the classification of objects on the basis of attributes extracted from images, a problem belonging to the field of computer vision. It describes possible classification tools (e.g. neural networks, decision trees, etc.). An essential part is the description of objects by means of attributes, which serve as inputs to the classifier. The practical part deals with the classification of a collection of objects commonly found at home (e.g. scissors, compact disc, glue, etc.). The analysed image is preprocessed and segmented by thresholding in the HSV colour space; defects caused by the segmentation are then repaired by morphological operations. Afterwards, the attribute values that form the inputs to the classifier are computed. The classifier has the form of a decision tree.
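As a sketch of the pipeline this abstract outlines (HSV thresholding, morphological repair, decision-tree classification), assuming OpenCV and scikit-learn; the HSV bounds and the two training attributes are illustrative placeholders, not the thesis's actual feature set:

```python
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def object_mask(bgr, lower=(0, 40, 40), upper=(180, 255, 255)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # morphological closing repairs small defects left by the segmentation
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# attribute vectors (e.g. area, elongation) -> class labels
clf = DecisionTreeClassifier().fit([[900, 4.2], [400, 1.1]],
                                   ["scissors", "compact disc"])
```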
APA, Harvard, Vancouver, ISO, and other styles
38

Čišecký, Roman. "Metody pro odstranění šumu z digitálních obrazů." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219769.

Full text
Abstract:
This master's thesis is concerned with digital image denoising methods. The theoretical part explains elementary terms related to image processing, image noise, the categorisation of noise, and the quality criteria of the denoising process. Particular denoising methods are also described, along with their advantages and disadvantages. The practical part deals with an implementation of the selected denoising methods in Java, in the environment of the RapidMiner application. In conclusion, the results obtained by the different methods are compared.
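Two of the usual quality criteria for comparing denoising results, mean square error and the derived peak signal-to-noise ratio, are simple to state; a minimal sketch (the thesis itself works in Java/RapidMiner, this only illustrates the evaluation idea):

```python
import numpy as np

def mse(reference, denoised):
    """Mean square error between a reference image and its denoised version."""
    return np.mean((reference.astype(float) - denoised.astype(float)) ** 2)

def psnr(reference, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB, derived from the MSE."""
    m = mse(reference, denoised)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```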
APA, Harvard, Vancouver, ISO, and other styles
39

Šejnohová, Marie. "Rentgenová počítačová tomografie embrya myši." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221377.

Full text
Abstract:
The aim of this thesis is to compare the capabilities of available micro-CT systems. The theoretical part deals with the possibilities of staining soft tissues and embryos in order to enhance the contrast of micro-CT images, followed by a description of the X-ray sources and detectors of the available micro-CT systems. In the practical part, the staining of an embryo was carried out in cooperation with the Department of Histology and Embryology in Brno, followed by measurements at FSI in Brno, at ČVUT in Prague and at the Elettra synchrotron in Italy. The micro-CT systems and the results of measuring embryos with them are then compared. The best results were obtained with the micro-CT in Brno, where an X-ray tube and a flat-panel detector were used. These images were used for segmentation of the cartilage of the olfactory system by means of 3D region growing, and 3D models were created from the results for comparison with a manually segmented model. The semi-automatic segmentation gave less accurate results, but the method is not too time-consuming.
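The 3D region growing used for the cartilage segmentation can be sketched as a breadth-first expansion from a seed voxel; the intensity tolerance and 6-connectivity below are illustrative choices, not the thesis's settings:

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, tol=50):
    """Grow a region of voxels whose intensity stays within `tol`
    of the seed intensity, using a 6-connected neighbourhood."""
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    base = float(volume[seed])
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not grown[n] and abs(float(volume[n]) - base) <= tol:
                grown[n] = True
                queue.append(n)
    return grown
```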
APA, Harvard, Vancouver, ISO, and other styles
40

Hedman, Karolina. "Differences in tumor volume for treated glioblastoma patients examined with 18F-fluorothymidine PET and contrast-enhanced MRI." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-173693.

Full text
Abstract:
Background: Glioblastoma (GBM) is the most common and most malignant primary brain tumor. It is a rapidly progressing tumor that infiltrates the adjacent healthy brain tissue and is difficult to treat. Despite modern treatment, including surgical resection followed by radiochemotherapy and adjuvant chemotherapy, the outcome remains poor; the median overall survival is 10-12 months. Neuroimaging is the most important diagnostic tool in the assessment of GBMs, and the current imaging standard is contrast-enhanced magnetic resonance imaging (MRI). Positron emission tomography (PET) has been recommended as a complementary imaging modality, providing information beyond MRI on the biological behavior and aggressiveness of the tumor. This study aims to investigate whether the combination of PET and MRI can improve the diagnostic assessment of these tumors. Patients and methods: 22 patients fulfilled the inclusion criteria, were diagnosed with GBM, and participated in all four 18F-fluorothymidine (FLT)-PET/MR examinations, performed preoperatively (baseline), before the start of oncological therapy, and at two and six weeks into therapy. An adaptive thresholding algorithm was optimized, and a batch processing pipeline and image feature extraction algorithms were developed and implemented in MATLAB and the analysis tool imlook4d. Results: There was a significant difference in radiochemotherapy treatment response between long-term and short-term survivors' tumor volume in MRI (p<0.05), and a marginally significant one (p<0.10) for maximum standardized uptake value (SUVmax), PET tumor volume, and total lesion activity (TLA). Preoperatively, short-term survivors had on average a larger tumor volume, higher SUV, and higher TLA. The overall trend was that long-term survivors had a better treatment response in both MRI and PET than short-term survivors. During radiochemotherapy, long-term survivors displayed shrinking MR tumor volume after two weeks, with almost no tumor volume remaining after six weeks, whereas short-term survivors displayed only marginal tumor volume reduction. In PET, long-term survivors' mean tumor volumes started to decrease two weeks into radiochemotherapy, while short-term survivors showed no PET volume reduction at two or six weeks. For patients with more or less than 200 days of progression-free survival, PET volume and TLA were significantly different and MR volume only marginally significant, suggesting that PET could have added value. Conclusion: The combination of PET and MRI can be used to predict radiochemotherapy response between two and six weeks, and to predict overall survival and progression-free survival using MR and PET volume, SUVmax, and TLA. The study is limited by its small sample size, and further research with a greater number of participants is recommended.
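The PET measures named here follow directly from a thresholded SUV volume. A minimal sketch, using a fixed fraction of SUVmax as a stand-in for the optimised adaptive thresholding algorithm (the 0.41 fraction is a common convention, not the study's optimised value):

```python
import numpy as np

def pet_measures(suv, voxel_volume_ml, fraction=0.41):
    """Compute SUVmax, tumor volume and total lesion activity (TLA)
    from a 3-D array of SUV values."""
    suv_max = float(suv.max())
    tumor = suv >= fraction * suv_max             # threshold relative to SUVmax
    volume_ml = tumor.sum() * voxel_volume_ml     # tumor volume
    tla = float(suv[tumor].mean()) * volume_ml    # total lesion activity
    return suv_max, volume_ml, tla
```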
APA, Harvard, Vancouver, ISO, and other styles
41

Fernando, Gerard Marius Xavier. "Variable thresholding of images with application to ventricular angiograms." Thesis, Imperial College London, 1985. http://hdl.handle.net/10044/1/37690.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ramalingam, Nagarajan. "Non-contact multispectral and thermal sensing techniques for detecting leaf surface wetness." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1104392582.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xxii, 271 p.; also includes graphics (some col.) Includes bibliographical references (p. 206-214).
APA, Harvard, Vancouver, ISO, and other styles
43

Smékalová, Veronika. "Automatická segmentace cévních systémů myších jater v tomografických datech." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377665.

Full text
Abstract:
The visualisation of soft tissue has been a topic in biology and medicine for many years, and over this period many techniques have been developed to achieve an accurate and authentic image of the investigated object or structure. X-ray computed tomography is very helpful toward this goal, but it is necessary to improve both the contrasting techniques and the techniques of image post-processing. This thesis deals with soft-tissue imaging; specifically, it focuses on contrasting mouse liver with the artificial resin Microfil. The thesis also describes the image-processing techniques (thresholding and region growing) applied to the measured data, with the goal of visualising the sample in 3D.
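As a sketch of the thresholding step for such contrast-enhanced data, assuming scikit-image (the threshold choice and ordering are illustrative, not the thesis's exact procedure):

```python
from skimage.filters import threshold_otsu

def vessel_mask(volume):
    """Global Otsu threshold of a micro-CT volume; Microfil-filled
    vessels are assumed to be the bright phase."""
    return volume > threshold_otsu(volume)
```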
APA, Harvard, Vancouver, ISO, and other styles
44

Lassauce, Aurélia. "Visualisation, granulométrie et évaporation de gouttes et de sprays - Etude dans une atmosphère close et pressurisée." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00667898.

Full text
Abstract:
The objective of this thesis is to determine the influence of an ambient pressure between 100 and 600 kPa on the evaporation of a droplet, and then on the evaporation of a spray under the same conditions. The first step studies the influence of ambient pressure on the evolution of the shape, diameter, velocity and evaporation rate of a liquid droplet in free fall. For this purpose, an optical measurement technique was used, and a methodology was developed to calibrate it and thus minimise the measurement errors on particle size. In parallel, an analytical model of the evaporation of free-falling droplets was developed; particular attention was paid to finding a correlation suited to computing the drag coefficient, in order to account for the evolution of droplet shape during the fall. This droplet evaporation model is compared with a spray evaporation model (accounting for air entrainment, the vapour concentration far from the droplet, and the influence of ambient pressure) to show the limits of the droplet model when applied to the evaporation of a spray. The second step applies the measurement and analysis techniques developed earlier to the droplet-size distribution of a spray, in order to characterise the influence of three parameters: the ambient pressure, the liquid injection pressure, and the nature of the liquid. The analysis of the results led to a statistical model for determining the size distribution of these sprays.
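For orientation only: the thesis builds a far fuller model (drag-coefficient correlation, free fall, ambient pressure); the classical d-squared law below is merely the simplest evaporation baseline that such models generalise:

```python
def droplet_diameter(d0, K, t):
    """d(t)^2 = d0^2 - K*t, with d0 the initial diameter [m] and K the
    evaporation constant [m^2/s]; returns 0 once fully evaporated."""
    d2 = d0 ** 2 - K * t
    return d2 ** 0.5 if d2 > 0 else 0.0
```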
APA, Harvard, Vancouver, ISO, and other styles
45

Šebela, Miroslav. "Detekce objektu ve videosekvencích." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218732.

Full text
Abstract:
The thesis consists of three parts: a theoretical description of digital image processing, optical character recognition, and the design of a system for car licence plate recognition (LPR) in an image or video sequence. The theoretical part describes image representation, smoothing, methods used for blob segmentation, and two proposed methods for optical character recognition (OCR). The practical part finds a solution and designs the full LPR procedure, including OCR: image pre-processing, blob segmentation, object detection based on object properties, and OCR. The proposed solution uses grayscale transformation, histogram processing, thresholding, connected components, and region recognition based on the region's pattern and properties. An optical recognition method for the licence plate is also implemented, in which the recognised values are compared with a database used to manage the entry of vehicles into a facility.
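A minimal sketch of the pre-processing chain listed here (grayscale transformation, histogram processing, thresholding, connected components), assuming OpenCV; the blob-area limits are placeholders and the character-recognition step is omitted:

```python
import cv2

def candidate_blobs(bgr, min_area=100, max_area=5000):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                 # histogram processing
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # stats columns: x, y, width, height, area
    return [stats[i] for i in range(1, n)
            if min_area < stats[i][4] < max_area]
```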
APA, Harvard, Vancouver, ISO, and other styles
46

El, Dor Abbas. "Perfectionnement des algorithmes d'optimisation par essaim particulaire : applications en segmentation d'images et en électronique." Phd thesis, Université Paris-Est, 2012. http://tel.archives-ouvertes.fr/tel-00788961.

Full text
Abstract:
Satisfactorily solving a hard optimization problem with a great number of sub-optimal solutions often justifies the use of a powerful metaheuristic, and most algorithms used to solve such problems are population-based metaheuristics. Among these, we focus on Particle Swarm Optimization (PSO), which appeared in 1995. PSO is inspired by the dynamics of animals moving in compact groups (bee swarms, bird flocks, fish schools): the particles of a swarm communicate with each other throughout the search to build a solution to the given problem, relying on their collective experience. The PSO algorithm, which is simple to understand, program and use, proves particularly effective for continuous-variable optimization problems. However, like all metaheuristics, PSO has drawbacks that still deter some users; premature convergence, which can cause this kind of algorithm to stagnate in a local optimum, is one of them. The goal of this thesis is to propose mechanisms that can be incorporated into PSO to remedy this drawback and improve its performance and efficiency. We propose two algorithms, named PSO-2S and DEPSO-2S, to address premature convergence. These algorithms rely on novel ideas and are characterised by new multi-zone initialisation strategies that ensure good coverage of the search space by the particles. Still with the aim of improving PSO, we also designed a new neighbourhood topology, named Dcluster, which organises the communication network between particles. Results obtained on a set of test functions show the effectiveness of the strategies implemented in the proposed algorithms. Finally, PSO-2S is applied to practical problems in image segmentation and electronics.
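For reference, canonical PSO (the baseline the thesis improves on, not the proposed PSO-2S/DEPSO-2S variants with their multi-zone initialisation and Dcluster topology) can be sketched as:

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    x = rng.uniform(bounds[0], bounds[1], (n, dim))   # particle positions
    v = np.zeros((n, dim))                            # particle velocities
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                           # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[pval.argmin()]                  # global best so far
    return gbest, pval.min()
```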
APA, Harvard, Vancouver, ISO, and other styles
47

Pavlišta, Libor. "Programové vybavení pro práci s mikroskopickým obrazem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219633.

Full text
Abstract:
This project covers basic image processing of grayscale and binary images and the analysis of the data obtained. It introduces terms such as morphology and image segmentation. The acquired knowledge is applied to create a program, in the LabView programming environment, for the complete processing of microscopic images, with subsequent analysis and plotting of the results into the image. For a better overview, intermediate images illustrating the operation in progress are displayed continuously.
APA, Harvard, Vancouver, ISO, and other styles
48

Oré, Huacles Gian Carlos, and García Alexis Vásquez. "Desarrollo de un equipo electrónico/computacional orientado a extraer información de interés para el diagnóstico de Mildiu en plantaciones de quinua de la costa peruana basado en procesamiento digital de imágenes." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/654958.

Full text
Abstract:
This thesis proposes a portable, ergonomic device that captures images of quinoa crops and, through an effective processing method, detects the segments where the plant is affected by downy mildew ("Mildiu", which shows as a characteristic yellowing of the leaves), producing a numerical result that represents that effect. The project addresses the main shortcoming of the qualitative analysis on which the client currently relies for diagnosis: it offers a quantitative solution for identifying and measuring crop damage, giving the agronomist the vital information needed to apply the appropriate dose of fungicide to the plantations and obtain a better-quality product. The work is based on two segmentation processes. First, vegetation is segmented from the background of the original captured image using the L*a*b color model, a two-dimensional histogram, filtering and binarization. Second, yellowing is segmented from the vegetation in the image resulting from the first process, using two-dimensional histograms, filtering, binarization and eccentricity properties. For validation, 50 images of a quinoa crop at the Instituto Nacional de Innovación Agraria (INIA) - Lima headquarters were processed with the developed device and verified by the specialist agronomist. Finally, Cohen's kappa index was used to compare the results, yielding a value of 0.789.
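The agreement measure used in the validation is directly available in common libraries; a minimal sketch assuming scikit-learn, with made-up placeholder labels rather than the study's data:

```python
from sklearn.metrics import cohen_kappa_score

# per-image verdicts: device vs. the specialist agronomist (placeholders)
device     = ["mildew", "healthy", "mildew", "mildew", "healthy"]
agronomist = ["mildew", "healthy", "healthy", "mildew", "healthy"]
print(cohen_kappa_score(device, agronomist))   # 1.0 = perfect agreement
```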
APA, Harvard, Vancouver, ISO, and other styles
49

Cheung, Anthony Hing-lam. "Design and implementation of an Arabic optical character recognition system." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36073/1/36073_Cheung_1998.pdf.

Full text
Abstract:
Character recognition is not a difficult task for humans, who repeat the process thousands of times every day as they read papers or books. However, after more than 40 years of intensive investigation, there is still no machine that can recognize alphabetic characters as well as humans. Optical Character Recognition (OCR) is the process of converting a raster image representation of a document into a format that a computer can process. It involves many sub-disciplines of computer science, including digital image processing, pattern recognition, natural language processing, artificial intelligence, and database systems. Applications of OCR systems are broad and include postal code recognition in postal departments, automatic document entry in companies and government departments, cheque sorting in banks, machine translation, etc. The objective of this thesis is to design an optical character recognition system which can recognize Arabic script. This system has to be: 1) accurate, with a recognition accuracy of 95%; 2) robust, able to recognize two different Arabic fonts; and 3) efficient, i.e. a real-time system. The proposed system is composed of five image processing stages: 1) image acquisition; 2) preprocessing; 3) segmentation; 4) feature extraction; and 5) classification. The recognized results are presented to users via a window-based user interface, so they can control the system and recognize and edit documents with a click of the mouse. A thinning algorithm, a word segmentation algorithm and a recognition-based character segmentation algorithm for Arabic script have been proposed to increase the recognition accuracy of the system. The Arabic word segmentation algorithm successfully segments horizontally overlapped Arabic words, whereas the recognition-based character segmentation algorithm replaces the classical character segmentation method and raises the recognition accuracy of the proposed system. These blocks have been integrated, and results testing the requirements of accuracy, robustness and efficiency are presented. Finally, some extensions to the system are also proposed.
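As an illustration of the thinning step (the thesis proposes its own thinning algorithm; this sketch merely shows the operation using scikit-image):

```python
from skimage.morphology import skeletonize

def thin_character(binary_char):
    """Reduce a binarised character (2-D boolean array, True on ink)
    to a one-pixel-wide skeleton."""
    return skeletonize(binary_char)
```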
APA, Harvard, Vancouver, ISO, and other styles
50

Čambalová, Kateřina. "Volné algebraické struktury a jejich využití pro segmentaci digitálního obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231711.

Full text
Abstract:
The thesis covers methods for image segmentation. Fuzzy segmentation is based on the thresholding method, generalized here to accept multiple criteria. The whole process is mathematically grounded in the theory of free algebras: a free distributive lattice is created from a poset of elements based on image properties, and the lattice members are represented by the terms used for thresholding. The possible segmentation results form a distribution into equivalence classes. The thesis also describes the resulting algorithms and methods for their optimization, and introduces a method of area subtracting.
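Operationally, a term of the free distributive lattice over threshold predicates reduces to meets (AND) and joins (OR) of binary masks; a hedged sketch of that evaluation (the term encoding is an illustrative choice, not the thesis's formalism):

```python
import numpy as np

def eval_term(props, term):
    """props: dict mapping a property name to a 2-D array; term: either a
    leaf (name, low, high) threshold predicate or a nested
    ('and' | 'or', term, term, ...) tree."""
    if term[0] in ("and", "or"):
        masks = [eval_term(props, t) for t in term[1:]]
        op = np.logical_and if term[0] == "and" else np.logical_or
        result = masks[0]
        for m in masks[1:]:
            result = op(result, m)
        return result
    name, low, high = term                       # leaf threshold predicate
    return (props[name] >= low) & (props[name] <= high)
```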
APA, Harvard, Vancouver, ISO, and other styles