Dissertations / Theses on the topic 'Image analysis'

To see the other types of publications on this topic, follow the link: Image analysis.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Image analysis.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Moëll, Mattias. "Digital image analysis for wood fiber images /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6309-2.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Feng, Sitao. "Image Analysis on Wood Fiber Cross-Section Images." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156428.

Full text
Abstract:
Lignification of wood fibers has a significant impact on wood properties. To measure the distribution of lignin in compression wood fiber cross-section images, a crisp segmentation method has been developed. It segments the lumen, the normally lignified cell wall and the highly lignified cell wall of each fiber. To refine this segmentation, two fuzzy segmentation methods were evaluated in this thesis: Iterative Relative Multi Objects Fuzzy Connectedness and the Weighted Distance Transform on Curved Space. The crisp segmentation is used for multi-seed selection. The crisp and the two fuzzy segmentations are then evaluated by comparison with a manual segmentation. The evaluation shows that Iterative Relative Multi Objects Fuzzy Connectedness performs best on segmenting the lumen, whereas the Weighted Distance Transform on Curved Space outperforms the other two methods on the normally lignified and the highly lignified cell walls.
APA, Harvard, Vancouver, ISO, and other styles
3

Gavin, John. "Subpixel image analysis." Thesis, University of Bath, 1995. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307131.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, Xianghong. "Automated image analysis for petrographic image assessments." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ62444.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hoxha, Genc. "IMAGE CAPTIONING FOR REMOTE SENSING IMAGE ANALYSIS." Doctoral thesis, Università degli studi di Trento, 2022. http://hdl.handle.net/11572/351752.

Full text
Abstract:
Image Captioning (IC) aims to generate a coherent and comprehensive textual description that summarizes the complex content of an image. It combines computer vision and natural language processing techniques to encode the visual features of an image and translate them into a sentence. In the context of remote sensing (RS) analysis, IC has been emerging as a new research area of high interest, since it not only recognizes the objects within an image but also describes their attributes and relationships. In this thesis, we propose several IC methods for RS image analysis. We focus on the design of different approaches that take into consideration the peculiarities of RS images (e.g. spectral, temporal and spatial properties) and study the benefits of IC in challenging RS applications. In particular, we focus our attention on developing a new decoder based on support vector machines. Compared to traditional decoders based on deep learning, the proposed decoder is particularly interesting in situations in which only a few training samples are available, as it alleviates the problem of overfitting. The peculiarity of the proposed decoder is its simplicity and efficiency: it has only one hyperparameter, does not require expensive power units and is very fast in terms of training and testing time, making it suitable for real-life applications. Despite the efforts made in developing reliable and accurate IC systems, the task is far from being solved. The generated descriptions are affected by several errors related to the attributes and the objects present in an RS scene. Once an error occurs, it is propagated through the recurrent layers of the decoders, leading to inaccurate descriptions. To cope with this issue, we propose two post-processing techniques that aim to improve the generated sentences by detecting and correcting the potential errors. They are based on the Hidden Markov Model and the Viterbi algorithm.
The former generates a set of possible states, while the latter finds the optimal sequence of states. The proposed post-processing techniques can be injected into any IC system at test time to improve the quality of the generated sentences. While all the captioning systems developed in the RS community are devoted to single RGB images, we propose two captioning systems that can be applied to multitemporal and multispectral RS images. The proposed captioning systems are able to describe the changes that have occurred in a given geographical area through time. We refer to this new paradigm of analysing multitemporal and multispectral images as change captioning (CC). To test the proposed CC systems, we construct two novel datasets composed of bitemporal RS images. The first is composed of very high-resolution RGB images, while the second consists of medium-resolution multispectral satellite images. To advance the task of CC, the constructed datasets are publicly available at the following link: https://disi.unitn.it/~melgani/datasets.html. Finally, we analyse the potential of IC for content-based image retrieval (CBIR) and show its applicability and advantages compared to traditional techniques. Specifically, we focus our attention on developing a CBIR system that represents an image with generated descriptions and uses sentence similarity to search for and retrieve relevant RS images. Compared to traditional CBIR systems, the proposed system is able to search for and retrieve images using either an image or a sentence as a query, making it more convenient for end-users. The achieved results show the promising potential of our proposed methods compared to the baselines and state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Asplund, Raquel. "Evaluation of a cloud-based image analysis and image display system for medical images." Thesis, Umeå universitet, Institutionen för tillämpad fysik och elektronik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-105984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis /." Online version of thesis, 1990. http://hdl.handle.net/1850/11366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jenkinson, Mark. "Saliency in image analysis." Thesis, University of Oxford, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302069.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Moore, George G. "Guided aerial image analysis." Thesis, University of Ulster, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hansson, Jonas. "Image analysis, an approach to measure grass roots from images." Thesis, University of Skövde, Department of Computer Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-592.

Full text
Abstract:

In this project a method to analyse images is presented. The images document the development of grass roots in a tilled field in order to study the movement of nitrate in the field. The final aim of the image analysis is to estimate the volume of dead and living roots in the soil. Since the roots and the soil have broad and overlapping ranges of colours, the fundamental problem is to find the roots in the images. Earlier methods for the analysis of root images have used thresholds to extract the roots. To use a threshold, the pixels of the object must have a unique range of colours separating them from the colour of the background; this is not the case for the images in this project. Instead, the method uses a neural network to classify the individual pixels. In this paper a complete method to analyse images is presented and, although the results are far from perfect, the method gives interesting results.
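The pixel-classification idea described above can be sketched as a single logistic "neuron" trained on RGB values. This is only an illustrative toy under invented assumptions (the thesis's actual network architecture, features and training data are not specified here), with all function names and parameters hypothetical:

```python
import numpy as np

def train_pixel_classifier(X, y, lr=0.5, epochs=500):
    """Train a single-neuron (logistic) classifier on RGB pixel values.

    X: (n, 3) array of RGB values in [0, 1]; y: (n,) array of 0/1 labels
    (0 = soil, 1 = root). Returns the learned weights and bias.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))   # sigmoid activation
        grad = p - y                    # dLoss/dz for cross-entropy loss
        w -= lr * X.T @ grad / len(y)   # gradient-descent updates
        b -= lr * grad.mean()
    return w, b

def classify_pixels(image, w, b):
    """Label every pixel of an (h, w, 3) image as root (1) or soil (0)."""
    p = 1.0 / (1.0 + np.exp(-(image.reshape(-1, 3) @ w + b)))
    return (p > 0.5).reshape(image.shape[:2]).astype(np.uint8)
```

Because the classifier works per pixel, it needs no threshold that separates root from soil colours globally; the decision boundary is learned from labelled examples instead.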

APA, Harvard, Vancouver, ISO, and other styles
11

Roman-Gonzalez, Avid. "Compression Based Analysis of Image Artifacts: Application to Satellite Images." Phd thesis, Telecom ParisTech, 2013. http://tel.archives-ouvertes.fr/tel-00935029.

Full text
Abstract:
This thesis aims at an automatic detection of artifacts in optical satellite images such as aliasing, A/D conversion problems, striping, and compression noise; in fact, all blemishes that are unusual in an undistorted image. Artifact detection in Earth observation images becomes increasingly difficult when the resolution of the image improves. For images of low, medium or high resolution, the artifact signatures are sufficiently different from the useful signal, thus allowing their characterization as distortions; however, when the resolution improves, the artifacts have, in terms of signal theory, a similar signature to the interesting objects in an image. Although it is more difficult to detect artifacts in very high resolution images, we need analysis tools that work properly, without impeding the extraction of objects in an image. Furthermore, the detection should be as automatic as possible, given the quantity and ever-increasing volumes of images that make any manual detection illusory. Finally, experience shows that artifacts are not all predictable nor can they be modeled as expected. Thus, any artifact detection shall be as generic as possible, without requiring the modeling of their origin or their impact on an image. Outside the field of Earth observation, similar detection problems have arisen in multimedia image processing. This includes the evaluation of image quality, compression, watermarking, detecting attacks, image tampering, the montage of photographs, steganalysis, etc. In general, the techniques used to address these problems are based on direct or indirect measurement of intrinsic information and mutual information. Therefore, this thesis has the objective to translate these approaches to artifact detection in Earth observation images, based particularly on the theories of Shannon and Kolmogorov, including approaches for measuring rate-distortion and pattern-recognition based compression. 
The results from these theories are then used to detect too low or too high complexities, or redundant patterns. The test images being used are from the satellite instruments SPOT, MERIS, etc. We propose several methods for artifact detection. The first method is using the Rate-Distortion (RD) function obtained by compressing an image with different compression factors and examines how an artifact can result in a high degree of regularity or irregularity affecting the attainable compression rate. The second method is using the Normalized Compression Distance (NCD) and examines whether artifacts have similar patterns. The third method is using different approaches for RD such as the Kolmogorov Structure Function and the Complexity-to-Error Migration (CEM) for examining how artifacts can be observed in compression-decompression error maps. Finally, we compare our proposed methods with an existing method based on image quality metrics. The results show that the artifact detection depends on the artifact intensity and the type of surface cover contained in the satellite image.
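The Normalized Compression Distance used in the second method has a standard definition, NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is the compressed size. A minimal sketch follows, using zlib as a stand-in for whatever compressor the thesis actually employs:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: a computable approximation of the
    normalized information distance, using a real compressor. Values near
    0 mean the inputs share most of their structure; values near 1 mean
    they have essentially nothing in common."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Applied to image patches, patches that share a recurring artifact pattern (e.g. striping) would score low distances to each other and can therefore be grouped.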
APA, Harvard, Vancouver, ISO, and other styles
12

Kim, Tae-Kyun. "Discriminant analysis of patterns in images, image ensembles, and videos." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612084.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Karelid, Mikael. "Image Enhancement over a Sequence of Images." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12523.

Full text
Abstract:

This Master's thesis has been conducted at the National Laboratory of Forensic Science (SKL) in Linköping. When images presenting an interesting object that are to be analyzed at SKL are of poor quality, there may be a need to enhance them. If several images of the object are available, the total amount of information can be used to estimate one single enhanced image. A program to do this has been developed by studying methods for image registration and high-resolution image estimation. Tests of important parts of the procedure have been conducted. The final results are satisfying, and the key to a good high-resolution image seems to be the precision of the image registration. Improvements to this part may lead to even better results. More suggestions for further improvements have been proposed.



APA, Harvard, Vancouver, ISO, and other styles
14

Aksu, Ibrahim. "Performance analysis of image motion analysis algorithms." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/28443.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Toh, Vivian. "Statistical image analysis : length estimation and colour image segmentation." Thesis, University of Strathclyde, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Xu, Dongxiang. "Image segmentation and its application on MR image analysis /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6063.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Nahar, Vikas. "Content based image retrieval for bio-medical images." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Nahar_09007dcc80721e0b.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
APA, Harvard, Vancouver, ISO, and other styles
18

Cheriyadat, Anil Meerasa. "Limitations of principal component analysis for dimensionality-reduction for classification of hyperspectral data." Master's thesis, Mississippi State : Mississippi State University, 2003. http://library.msstate.edu/etd/show.asp?etd=etd-11072003-133109.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Hagen, Reidar Strand. "A Multivariate Image Analysis Toolbox." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9275.

Full text
Abstract:

The toolkit has been implemented as planned: the groundwork for visualisation mappings and relationships between datasets has been finished. Wavelet transforms have been used to compress datasets in order to reduce computational time. Principal Component Analysis and other transforms are working. Examples of use, and several ways of visualizing them, have been provided. Multivariate Image Analysis is viable on regular workstations.

APA, Harvard, Vancouver, ISO, and other styles
20

Marpu, Prashanth Reddy. "Geographic object-based image analysis." Doctoral thesis, Technische Universitaet Bergakademie Freiberg, Universitaetsbibliothek "Georgius Agricola", 2009. http://nbn-resolving.de/urn:nbn:de:bsz:105-5519610.

Full text
Abstract:
The field of earth observation (EO) has seen tremendous development over recent time owing to the increasing quality of the sensor technology and the increasing number of operational satellites launched by several space organizations and companies around the world. Traditionally, the satellite data is analyzed by only considering the spectral characteristics measured at a pixel. The spatial relations and context were often ignored. With the advent of very high resolution satellite sensors providing a spatial resolution of ≤ 5m, the shortfalls of traditional pixel-based image processing techniques became evident. The need to identify new methods then led to focusing on the so-called object-based image analysis (OBIA) methodologies. Unlike the pixel-based methods, the object-based methods, which are based on segmenting the image into homogeneous regions, use the shape, texture and context associated with the patterns, thus providing an improved basis for image analysis. Remote sensing data normally has to be processed differently from other types of images. In the geographic sense, OBIA is referred to as Geographic Object-Based Image Analysis (GEOBIA), where the GEO prefix emphasizes the geographic components. This thesis will provide an overview of the principles of GEOBIA, describe some fundamentally new contributions to OBIA in the geographical context and, finally, summarize the current status with ideas for future developments.
APA, Harvard, Vancouver, ISO, and other styles
21

Desjardins, Steven J. "Image analysis in Fourier space." Thesis, University of Ottawa (Canada), 2002. http://hdl.handle.net/10393/6207.

Full text
Abstract:
General results on Fourier Transforms, tight frame wavelets and pseudodifferential operators are presented to provide a theoretical framework for the applications. Known and new tight frame wavelets that are characteristic and tapered characteristic functions in Fourier Space are constructed in Cartesian and polar Fourier Space with frame bound 1. These wavelets are used to localize singularities in images. A review of the use of the diffusion equation in de-noising images is presented. A new method, which applies a multi-directional diffusion in Fourier Space is given. Properties of this new product filter method are described and the product filter's de-noising ability is evaluated. The product filter algorithm is also compared to other techniques, including MATLAB's built-in filters and two recent wavelet techniques. Finally, a summary of the results of a study of the de-noising abilities of third-order partial differential equations is presented.
APA, Harvard, Vancouver, ISO, and other styles
22

Basnandan, Anneil. "Image analysis of carpet tufting." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/18213.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Shapiro, Larry Saul. "Affine analysis of image sequences." Thesis, University of Oxford, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Petroudi, Styliani. "Texture in mammographic image analysis." Thesis, University of Oxford, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.422668.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, De Quan. "Morphological filters in image analysis." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260779.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Gadkari, Dhanashree. "IMAGE QUALITY ANALYSIS USING GLCM." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3246.

Full text
Abstract:
The gray level co-occurrence matrix has proven to be a powerful basis for use in texture classification. Various textural parameters calculated from the gray level co-occurrence matrix help understand the details of the overall image content. The aim of this research is to investigate the use of the gray level co-occurrence matrix technique as an absolute image quality metric. The underlying hypothesis is that image quality can be determined by a comparative process in which a sequence of images is compared to each other to determine the point of diminishing returns. An attempt is made to study whether the curve of image textural features versus image memory sizes can be used to decide the optimal image size. The approach used digitized images that were stored at several levels of compression. GLCM proves to be a good discriminator in studying different images; however, no such claim can be made for image quality. Hence the search for the best image quality metric continues.
M.S., Arts and Sciences, Modeling and Simulation
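The GLCM and its contrast feature discussed in the abstract can be sketched in a few lines. This toy version (all names hypothetical; a library such as scikit-image would be used in practice) counts co-occurrences for a single pixel offset and computes the standard contrast feature, sum over i,j of P(i,j)·(i−j)²:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray level co-occurrence matrix for offset (dx, dy), normalized so
    its entries sum to 1. img holds integer gray levels in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[img[i, j], img[i + dy, j + dx]] += 1  # count the pair
    return P / P.sum()

def glcm_contrast(P):
    """Contrast feature: sum_ij P[i, j] * (i - j)**2."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())
```

A perfectly uniform image has contrast 0, while a fine checkerboard, whose horizontal neighbours always differ, maximizes it; texture parameters like this one are what the thesis tracks across compression levels.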
APA, Harvard, Vancouver, ISO, and other styles
27

Luan, Jian'an. "Image analysis and prenatal screening." Thesis, University of Plymouth, 1998. http://hdl.handle.net/10026.1/2452.

Full text
Abstract:
Information obtained from ultrasound images of fetal heads is often used to screen for various types of physical abnormality. In particular, at around 16 to 23 weeks' gestation, two-dimensional cross-sections are examined to assess whether a fetus is affected by Neural Tube Defects, a class of disorders that includes Spina Bifida. Unfortunately, ultrasound images are of relatively poor quality and considerable expertise is required to extract meaningful information from them. Developing an ultrasound image recognition method that does not rely upon an experienced sonographer is therefore of interest. In the course of this work we review standard statistical image analysis techniques and explain why they are not appropriate for the ultrasound image data that we have. A new iterative method for edge detection based on a kernel function is developed and discussed. We then consider ways of improving existing techniques that have been applied to ultrasound images. Storvik's (1994) algorithm is based on the minimisation of a certain energy function by simulated annealing. We apply a cascade-type blocking method to speed up this minimisation and to improve the performance of the algorithm when the noise level is high. The method of Kass, Witkin and Terzopoulos (1988) is based on an active contour, or 'snake', which is deformed in such a way as to minimise a certain energy function. We suggest modifications to this energy function and use simulated annealing plus iterated conditional modes to perform the associated minimisation. We demonstrate the effectiveness of the new edge detection method, and of the improvements to the existing techniques, by means of simulation studies.
APA, Harvard, Vancouver, ISO, and other styles
28

Tillett, R. D. "Image analysis for agricultural processes." Thesis, Cardiff University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.361090.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sabaté-Cequier, Anna. "Image analysis for retinal screening." Thesis, St George's, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Karaolani, Persephoni. "Finite elements for image analysis." Thesis, University of Reading, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Alfaer, Nada Mansour. "Dynamic modelling for image analysis." Thesis, University of Leeds, 2018. http://etheses.whiterose.ac.uk/21215/.

Full text
Abstract:
Image segmentation is an important task in many image analysis applications, where it is an essential first stage before further analysis is possible. The level-set method is an implicit approach to image segmentation problems. Its main advantages are that it can handle an unknown number of regions and can deal with complicated topological changes in a simple and natural way. The research presented in this thesis is motivated by the need to develop statistical methodologies for modelling image data through level sets. The fundamental idea is to combine the level-set method with statistical modelling based on the Bayesian framework to produce an attractive approach for tackling a wider range of segmentation problems in image analysis. A complete framework for a Bayesian level-set model is given to allow a wider interpretation of model components. The proposed model is described based on a Gaussian likelihood and exponential prior distributions on object area and boundary length, and an investigation of uncertainty and a sensitivity analysis are carried out. The model is then generalized using a more robust noise model and more flexible prior distributions. A new Bayesian modelling approach to object identification is introduced. The proposed model is based on the level-set method, which assumes an implicit representation of the object outlines as the zero level set contour of a higher-dimensional function. The Markov chain Monte Carlo (MCMC) algorithm is used to estimate the model parameters, by generating approximate samples from the posterior distribution. The proposed method is applied to simulated and real datasets. A new temporal model is proposed in a Bayesian framework for level-set based image sequence segmentation. MCMC methods are used to explore the model and to obtain information about solution behaviour. The proposed method is applied to simulated image sequences.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhao, Nilu. "Haze measurements through image analysis." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92216.

Full text
Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 28).
In recent years, Singapore has been affected by haze caused by slash-and-burn fires in Indonesia. Currently, haze concentration is measured by filtering air samples at various stations in Singapore. In this thesis, optical approaches to haze measurement are explored. Images of haze were taken at fifteen-minute intervals in June 2013. These images were analyzed to obtain image contrast and power spectral density functions. The power spectral density functions were characterized by maximum power, full width at half maximum, second and third moments, and exponential fit. Of these methods, the contrast and exponential-fit results showed a trend matching the Pollutant Standards Index (PSI) values provided by the National Environment Agency (NEA). Further studies on mapping contrast to PSI values are recommended.
by Nilu Zhao.
S.B.
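One common way to quantify the image contrast mentioned in the abstract is RMS contrast; the thesis does not specify its exact contrast measure, so the following is only an illustrative sketch under that assumption:

```python
import numpy as np

def rms_contrast(gray):
    """RMS contrast of a grayscale image: the standard deviation of pixel
    intensities divided by their mean. Haze scatters light into dark
    regions and out of bright ones, compressing the intensity range, so
    hazier scenes score lower."""
    gray = np.asarray(gray, dtype=float)
    return float(gray.std() / gray.mean())
```

Mapping such a score to PSI values would then be a separate calibration step against ground-truth measurements.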
APA, Harvard, Vancouver, ISO, and other styles
33

Yoo, Kyung Hyun. "Image analysis using mathematical morphology." Thesis, Kansas State University, 1989. http://hdl.handle.net/2097/15232.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Chaganti, Shikha. "Image Analysis of Glioblastoma Histopathology." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1406820611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dickason, Gregory John. "Image analysis of Bacillus thuringiensis." Master's thesis, University of Cape Town, 1998. http://hdl.handle.net/11427/21432.

Full text
Abstract:
This thesis concerns the development of a method to quantify the morphology of the bacterium Bacillus thuringiensis, and to automatically count the bacteria. The need to quantify the bacterial morphology arose out of the possibility of controlling a fermentation based on the morphology of the observed bacteria. Automatic counting of bacteria was considered necessary to reduce the inaccuracies that resulted in manual counts performed by different people. Bacillus thuringiensis is a spore-forming, gram-positive bacterium, which produces both intracellular spores and insecticidal protein crystals. The production of the insecticidal protein crystal makes Bacillus thuringiensis important as a producer of biological insecticides. Automatic counting was developed in a Thoma counting chamber (Webber Scientific) at 200x magnification under dark field illumination. It was found that at this magnification the problem of out-of-focus cells was eliminated. The use of a thick coverslip, which reduces variability in slide preparation, was also possible at 200x magnification, as the focal depth of the 20x objective lens was considerably larger than that of the 100x objective lens and thus the 20x objective lens could focus through the thick coverslip (20x objective lens with 10x magnification in the eyepiece = 200x magnification). An automatic algorithm to acquire images was developed and 5 images per sample were acquired. Processing of the images involved automatically thresholding and then counting the number of bright objects in the image. Processing was thus rapid, and the processing of the five images took no more than a few seconds. Results showed that the correlation between the automatic and manual counts was good and that the use of a thick coverslip reduced variability in slide preparation. It was shown that the manual counting procedure, which necessarily used a thin coverslip at 1000x magnification, underestimated the volume of the Thoma counting chamber. This was a result of warping of the thin coverslip.
APA, Harvard, Vancouver, ISO, and other styles
36

Thomson, Robert Clark. "Petrographic image analysis and understanding." Thesis, Aston University, 1991. http://publications.aston.ac.uk/14391/.

Full text
Abstract:
This study considers the application of image analysis in petrography and investigates the possibilities for advancing existing techniques by introducing feature extraction and analysis capabilities of a higher level than those currently employed. The aim is to construct relevant, useful descriptions of crystal form and inter-crystal relations in polycrystalline igneous rock sections. Such descriptions cannot be derived until the 'ownership' of boundaries between adjacent crystals has been established: this is the fundamental problem of crystal boundary assignment. An analysis of this problem establishes key image features which reveal boundary ownership; a set of explicit analysis rules is presented. A petrographic image analysis scheme based on these principles is outlined and the implementation of key components of the scheme considered. An algorithm for the extraction and symbolic representation of image structural information is developed. A new multiscale analysis algorithm which produces a hierarchical description of the linear and near-linear structure on a contour is presented in detail. Novel techniques for symmetry analysis are developed. The analyses considered contribute both to the solution of the boundary assignment problem and to the construction of geologically useful descriptions of crystal form. The analysis scheme which is developed employs grouping principles such as collinearity, parallelism, symmetry and continuity, so providing a link between this study and more general work in perceptual grouping and intermediate level computer vision. Consequently, the techniques developed in this study may be expected to find wider application beyond the petrographic domain.
APA, Harvard, Vancouver, ISO, and other styles
37

Francis, Nicholas David. "Parallel architectures for image analysis." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108844/.

Full text
Abstract:
This thesis is concerned with the problem of designing an architecture specifically for the application of image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved. This makes the task of designing an architecture aimed at efficiently implementing image analysis and recognition algorithms a difficult one. Within this work a massively parallel heterogeneous architecture, the Warwick Pyramid Machine is described. This architecture consists of SIMD, MIMD and MSIMD modes of parallelism each directed at a different part of the problem. The performance of this architecture is analysed with respect to many tasks drawn from very different areas of the image analysis problem. These tasks include an efficient straight line extraction algorithm and a robust and novel geometric model based recognition system. The straight line extraction method is based on the local extraction of line segments using a Hough style algorithm followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent with this class of methods and includes an analytical verification stage. Results and detailed implementations of both of these tasks are given.
APA, Harvard, Vancouver, ISO, and other styles
38

Nilsson, Felix. "Image analysis for smart manufacturing." Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-39856.

Full text
Abstract:
The world of industrial manufacturing has changed a lot during the past decades. It has gone from a labour-intensive process of manual control of machines to a fully connected and automated process. The next big leap in industrial manufacturing is known as Industry 4.0 or smart manufacturing. With Industry 4.0 comes increased integration between IT systems and the factory floor. This change has proven challenging to implement in existing factories, many of which were built with an intended lifespan of several decades. One of the single most important parameters to measure is the operating hours of each machine. This information can help companies better utilize their resources and save large amounts of money. The goal is to develop a solution which can track the operating hours of the machines using image analysis and the signal lights already mounted on the machines. Using methods commonly used for traffic-light recognition in autonomous cars, a system with an accuracy of over 99% under the specified conditions has been developed. It is believed that if more diverse video data becomes available, a system with high reliability that generalizes well could be developed using similar methodology.
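Traffic-light-style recognition of a machine's signal lamp often comes down to classifying the hue of the lit region. The snippet below is a hedged sketch of that idea using only the standard library, with made-up threshold values; the thesis's actual method and thresholds are not reproduced here.

```python
import colorsys

def classify_light(rgb):
    """Classify a signal-lamp colour patch by hue, as commonly done in
    traffic-light recognition (hue/saturation/value thresholds are
    illustrative assumptions, not values from the thesis)."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    if v < 0.2 or s < 0.3:
        return "off"       # too dark or too grey to be a lit lamp
    deg = h * 360
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 75:
        return "yellow"
    if deg < 180:
        return "green"
    return "unknown"

print(classify_light((230, 30, 25)))  # bright red lamp  -> red
print(classify_light((40, 200, 60)))  # bright green lamp -> green
print(classify_light((20, 20, 20)))   # lamp off          -> off
```

Logging these per-frame classifications over time is what turns lamp colour into the operating-hours measurement the abstract describes.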
APA, Harvard, Vancouver, ISO, and other styles
39

Wu, Qian. "Segmentation-based Retinal Image Analysis." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18524.

Full text
Abstract:
Context. Diabetic retinopathy is the most common cause of new cases of legal blindness in people of working age. Early diagnosis is the key to slowing the progression of the disease, thus preventing blindness. The retinal fundus image is an important basis for judging these retinal diseases. With the development of technology, computer-aided diagnosis is widely used. Objectives. The thesis investigates whether there exist specific regions that could assist in better prediction of retinopathy; that is, it aims to find the region of the fundus image that works best for retinopathy classification using computer vision and machine learning techniques. Methods. An experiment was used as the research method. With image segmentation techniques, the fundus image is divided into regions to obtain an optic disc dataset, a blood vessel dataset, and an "other regions" (regions other than blood vessel and optic disc) dataset. These datasets and the original fundus image dataset were tested on Random Forest (RF), Support Vector Machine (SVM) and Convolutional Neural Network (CNN) models, respectively. Results. It was found that the results on different models are inconsistent. Compared to the original fundus image, the blood vessel region exhibits the best performance on the SVM model and the other regions perform best on the RF model, while the original fundus image has higher prediction accuracy on the CNN model. Conclusions. The other-regions dataset has more predictive power than the original fundus image dataset on the RF and SVM models. On the CNN model, extracting features from the fundus image does not significantly improve predictive performance compared to using the entire fundus image.
APA, Harvard, Vancouver, ISO, and other styles
40

McGarry, Gregory John. "Model-based mammographic image analysis." Thesis, Queensland University of Technology, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
41

Monnier, Tom. "Unsupervised image analysis by synthesis." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0037.

Full text
Abstract:
The goal of this thesis is to develop machine learning approaches to analyze collections of images without annotations. Advances in this area hold particular promise for high-impact 3D-related applications (e.g., reconstructing a real-world scene with 3D actionable components for animation movies or video games), where annotating examples to teach the machines is difficult, as well as for more specific applications (e.g., analyzing character evolution in 12th-century documents), where spending significant effort on annotating a large-scale database is debatable. The central idea of this dissertation is to build machines that learn to analyze an image collection by synthesizing the images in the collection. Learning analysis models by synthesis is difficult because it requires the design of a learnable image generation system that explicitly exhibits the desired analysis output. To achieve our goal, we present three key contributions.
The first contribution of this thesis is a new conceptual approach to category modeling. We propose to represent the category of an image, a 2D object or a 3D shape with a prototype that is transformed using deep learning to model the different instances within the category. Specifically, we design meaningful parametric transformations (e.g., geometric deformations or colorimetric variations) and use neural networks to predict the transformation parameters necessary to instantiate the prototype for a given image. We demonstrate the effectiveness of this idea to cluster images and reconstruct 3D objects from single-view images, obtaining performance on par with the best state-of-the-art methods which leverage handcrafted features or annotations. The second contribution is a new way to discover elements in a collection of images. We propose to represent an image collection by a set of learnable elements that are composed together to synthesize the images and optimized by gradient descent. We first demonstrate the effectiveness of this idea by discovering 2D elements related to semantic objects represented in a large image collection. Our approach has performance similar to the best concurrent methods which synthesize images with neural networks, and comes with better interpretability. We also showcase the capability of this idea by discovering 3D elements related to simple primitive shapes, given as input a collection of images depicting a scene from multiple viewpoints. Compared to prior works finding primitives in 3D point clouds, we show much better qualitative and quantitative performance. The third contribution is more technical and consists of a new formulation of differentiable mesh rendering. Specifically, we formulate the differentiable rendering of a 3D mesh as the alpha compositing of the mesh faces in increasing depth order. Compared to prior works, this formulation is key to enabling us to learn 3D meshes without requiring object region annotations. In addition, it allows us to seamlessly introduce the possibility of learning transparent meshes, which we design to model a scene as a composition of a variable number of meshes.
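The alpha-compositing formulation mentioned in this abstract, C = Σᵢ cᵢ·αᵢ·Πⱼ<ᵢ(1−αⱼ) with faces taken in increasing depth order, can be sketched for a single pixel as follows. This is an illustrative NumPy example of the standard compositing rule, not the author's differentiable renderer; the function and test values are invented here.

```python
import numpy as np

def composite(colors, alphas, depths):
    """Alpha-composite per-pixel face samples in increasing depth order:
    C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    order = np.argsort(depths)           # nearest face first
    out, transmittance = 0.0, 1.0
    for i in order:
        out += transmittance * alphas[i] * colors[i]
        transmittance *= 1.0 - alphas[i]  # light remaining behind this face
    return out

# Two faces covering one pixel: a half-transparent near face (colour 1.0)
# in front of an opaque far face (colour 0.0).
print(composite(np.array([0.0, 1.0]),   # colours  (far, near)
                np.array([1.0, 0.5]),   # alphas   (far, near)
                np.array([2.0, 1.0])))  # depths   (far, near)  -> 0.5
```

Because every term is a smooth product of alphas and colours, the whole expression is differentiable in its inputs, which is what makes the formulation usable for learning meshes by gradient descent.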
APA, Harvard, Vancouver, ISO, and other styles
42

Das, Mohammed. "Image analysis techniques for vertebra anomaly detection in X-ray images." Diss., Rolla, Mo. : University of Missouri--Rolla i.e. [Missouri University of Science and Technology], 2008. http://scholarsmine.mst.edu/thesis/MohammedDas_Thesis_09007dcc804c3cf6.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2008.
Degree granted by Missouri University of Science and Technology, formerly known as University of Missouri--Rolla. Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed March 24, 2008). Includes bibliographical references (p. 87-88).
APA, Harvard, Vancouver, ISO, and other styles
43

Louridas, Efstathios. "Image processing and analysis of videofluoroscopy images in cleft palate patients." Thesis, University of Kent, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hillmer, Dirk. "Computer-based analysis of Biological Images Neuronal Networks for Image Processing." Electronic Thesis or Diss., Bordeaux, 2024. https://theses.hal.science/tel-04650911.

Full text
Abstract:
AI in medicine is a rapidly growing field, and its significance in dermatology is increasingly pronounced. Advancements in neural networks, accelerated by powerful GPUs, have catalyzed the development of AI systems for skin disorder analysis. This study presents a novel approach that harnesses computer graphics techniques to create AI networks tailored to skin disorders. The synergy of these techniques not only generates training data but also optimizes image manipulation for enhanced processing. Vitiligo, a common depigmenting skin disorder, serves as a poignant case study. The evolution of targeted therapies underscores the necessity of precise assessment of the affected surface area. However, traditional evaluation methods are time-intensive and prone to inter- and intra-rater variability. In response, this research endeavors to construct an artificial intelligence (AI) system capable of objectively quantifying facial vitiligo severity. The AI model's training and validation leveraged a dataset of one hundred facial vitiligo images. Subsequently, an independent dataset of sixty-nine facial vitiligo images was used for final evaluation. The scores assigned by three expert physicians were compared with both inter- and intra-rater performances, as well as with the AI's assessments. The AI model achieved a remarkable accuracy of 93%, demonstrating its efficacy in quantifying facial vitiligo severity. The outcomes highlighted substantial concordance between AI-generated scores and those provided by human raters. Expanding beyond facial vitiligo, this model's utility in analyzing full-body images and images from various angles emerged as a promising avenue for exploration. Integrating these images into a comprehensive representation could offer insights into vitiligo's progression over time, thereby enhancing clinical diagnosis and research outcomes.
While the journey has been fruitful, certain aspects of the research encountered roadblocks due to insufficient image and data resources. Explorations into the analysis of in vivo mouse models, the analysis of skin cell pigmentation in preclinical embryo models, and retina image recognition were regrettably halted. Nevertheless, these challenges illuminate the dynamic nature of research and underscore the importance of adaptability in navigating unforeseen obstacles. In conclusion, this study showcases the potential of AI to revolutionize dermatological assessment. By providing an objective evaluation of facial vitiligo severity, the proposed AI model offers a valuable adjunct to human assessment in both clinical practice and research settings. The ongoing pursuit of integrating AI into the analysis of diverse image datasets holds promise for broader applications in dermatology and beyond.
APA, Harvard, Vancouver, ISO, and other styles
45

Ramakrishna, Yogendra Jayanth. "Image Analysis Methods For Additive Manufacturing Applications." Thesis, Högskolan Väst, Avdelningen för avverkande och additativa tillverkningsprocesser (AAT), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-15891.

Full text
Abstract:
There is an upsurge of research interest in additively manufactured (AM) Ni-based superalloys in the aerospace sector. However, achieving the required accuracy and quality of an AM part is a challenging task because material is added layer by layer with different process parameters. Hence, defects can be observed, and these defects have a detrimental effect on the mechanical properties of the material. AM materials also commonly exhibit a columnar grain structure, which makes it difficult to determine the average grain size, because with the commonly used intercept method the grain boundaries do not intercept the test line appropriately. It is important to measure the defects and grain size before performing mechanical testing on the material. Defect and grain size measurements are usually performed manually, which results in a long lead time. This work is directed towards testing recipes in automated image analysis software to shorten the lead time while maintaining good accuracy. Haynes 282, a γ'-strengthened superalloy, is used in this work. It was assumed that 1.5 mm of material from the surface would be machined away, so defects had to be measured in this region of interest. The image analysis tools tested were MIPAR and ImageJ. Initially, five images were tested in MIPAR and ImageJ, keeping the manual measurements as a benchmark. From this part, it was concluded that metallography and image quality play an important role in automated measurement. Also, the basic ImageJ software cannot measure lack of fusion in terms of caliper diameter (the longest measurable diameter). Hence, MIPAR was chosen for the application because it was more promising. In the next part, 15 samples were used, with manual measurements from a stitched sample and batch processing with MIPAR. The total caliper diameter results were plotted to compare the manual measurements and MIPAR. It was observed that in a few instances scratches were measured by MIPAR as lack-of-fusion defects; these were further refined using a post-processing function. The defect density results were plotted and compared as well. Due to the difference in calculation of the region of interest, a difference in results was observed. To perform the grain size measurement, Haynes 282 was used in the HIP and heat-treated condition, achieving equiaxed grains. The etchant should be appropriate to reveal the grains; hence four different etchants were used in this study: hydrogen peroxide + HCl, Kallings (electro etch), Kallings (swab) and diluted oxalic acid. This measurement was performed on material cut along the build direction as well as at 90° to the growth direction. Since there is no standard for additively manufactured material yet, the results were checked against the Hall-Petch equation to validate the values obtained. It was observed that the MIPAR recipe produced good results. The results of the manual and MIPAR measurements were plotted and compared. Hydrogen peroxide and Kallings (swab) revealed the grains clearly, but twin boundaries were revealed as well; MIPAR counted the twin boundaries as grains and therefore overestimated relative to the manual measurements. Kallings (electro etch) and diluted oxalic acid did not reveal the grains, so it was difficult for MIPAR to identify them.
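The "caliper diameter" (longest measurable diameter, i.e. the maximum Feret diameter) used above for defect sizing can be computed from a binary defect mask as the greatest distance between any two pixels of the region. The sketch below is purely illustrative, a brute-force version, and is not MIPAR's or ImageJ's actual algorithm.

```python
import numpy as np
from itertools import combinations

def caliper_diameter(mask):
    """Maximum Feret ("caliper") diameter of a binary defect region:
    the largest distance between any two pixels of the region.
    Brute force over all pixel pairs; fine for small defects only."""
    pts = np.argwhere(mask)
    return max(np.hypot(*(p - q)) for p, q in combinations(pts, 2))

# Elongated lack-of-fusion-like defect, 3 x 6 pixels.
defect = np.zeros((10, 10), dtype=bool)
defect[2:5, 2:8] = True

print(round(caliper_diameter(defect), 2))  # -> 5.39  (sqrt(2^2 + 5^2))
```

In practice one would restrict the pair search to the region's convex hull (rotating calipers) for speed, but the measured quantity is the same.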
APA, Harvard, Vancouver, ISO, and other styles
46

Arias, Chipana Fredy Elmer 1979. "Uma proposta para um modelo de exibição de imagens em displays de dispositivos móveis baseado um método de atenção visual." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260047.

Full text
Abstract:
Advisor: Yuzo Iano
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Made available in DSpace on 2018-08-25T18:30:39Z (GMT). No. of bitstreams: 1 AriasChipana_FredyElmer_M.pdf: 3100154 bytes, checksum: f18211712dbcff0de28b7aa8569b96bb (MD5) Previous issue date: 2014
Abstract: The presentation of images on the screens of mobile devices has limitations that depend on the user experience. Image adaptation adjusts the size and resolution of an image for display on the device screen. The use of visual attention mechanisms reduces the effort needed to process the visual stimulus of the human eye, and models of visual attention also help to reduce the computational complexity of image processing applications. We propose an improvement to a visual attention model to be applied to the adaptation of images for mobile device screens.
Master's degree
Telecommunications and Telematics
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
47

Mohammad, Suraya. "Textural measurements for retinal image analysis." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/textural-measurements-for-retinal-image-analysis(4efe6635-edec-4e8e-89a4-d119da3cf7cd).html.

Full text
Abstract:
This thesis presents research work conducted in the field of retinal image analysis. More specifically, the work is directed at the application of texture analysis techniques to the segmentation of common retinal landmarks and to retinal image classification. The main challenge in this research is identifying a suitable texture measurement for retinal images. In this research we propose the use of a texture measurement based on Binary Robust Independent Elementary Features (BRIEF). BRIEF measures texture by performing intensity comparisons in a local image patch; it is thus very fast to compute and tolerant to any monotonic increase or decrease of image intensities, which makes the descriptor invariant to illumination. The performance of BRIEF as a texture measurement is first shown in an experiment involving texture classification and segmentation using common texture datasets, in which BRIEF demonstrates good performance. BRIEF is next used in two applications of retinal image analysis, namely optic disc segmentation and glaucoma classification. In the former, we propose pixel classification using BRIEF as textural features together with circular template matching to segment the optic disc. In addition, an extension of BRIEF called Rotation Invariant BRIEF (OBRIEF) is later proposed to improve the segmentation result. For glaucoma classification, we describe two approaches using BRIEF/OBRIEF features: the first is based on determination of the cup-to-disc ratio (CDR), and the second is classification using image features, i.e., BRIEF features. Overall, our preliminary results on using BRIEF as a texture measurement for retinal image analysis are encouraging and demonstrate its potential in this domain.
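The pairwise intensity test at the heart of BRIEF is compact enough to sketch directly. The snippet below is an illustrative toy (patch size, pair count and sampling pattern are arbitrary choices made here, not parameters from the thesis); it also demonstrates the invariance to monotonic intensity changes claimed in the abstract.

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """BRIEF-style binary string: one bit per intensity comparison between
    a fixed pair of pixel positions inside the patch."""
    (r1, c1), (r2, c2) = pairs[:, 0].T, pairs[:, 1].T
    return (patch[r1, c1] < patch[r2, c2]).astype(np.uint8)

rng = np.random.default_rng(0)
# Sampling pattern: 64 random point pairs in a 16x16 patch, drawn once and
# reused for every patch so descriptors are comparable via Hamming distance.
pairs = rng.integers(0, 16, size=(64, 2, 2))

patch = rng.integers(0, 256, size=(16, 16))
d = brief_descriptor(patch, pairs)
print(d.shape)  # (64,)

# A monotonic intensity change (here x -> 2x + 5) preserves every
# comparison, so the descriptor is unchanged.
print(np.array_equal(d, brief_descriptor(patch * 2 + 5, pairs)))  # True
```

Matching or classifying with such descriptors then reduces to Hamming distances on short bit strings, which is what makes BRIEF attractive as a fast texture measurement.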
APA, Harvard, Vancouver, ISO, and other styles
48

Moltisanti, Marco. "Image Representation using Consensus Vocabulary and Food Images Classification." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3968.

Full text
Abstract:
Digital images are the result of many physical factors, such as illumination, point of view and thermal noise of the sensor. These elements may be irrelevant for a specific Computer Vision task; for instance, in the object detection task, the viewpoint and the color of the object should not be relevant in order to answer the question "Is the object present in the image?". Nevertheless, an image depends crucially on all such parameters and it is simply not possible to ignore them in analysis. Hence, finding a representation that, given a specific task, is able to keep the significant features of the image and discard the less useful ones is the first step in building a robust Computer Vision system. One of the most popular models to represent images is the Bag-of-Visual-Words (BoW) model. Derived from text analysis, this model is based on the generation of a codebook (also called a vocabulary), which is subsequently used to provide the actual image representation. Considering a set of images, the typical pipeline consists of: 1. Select a subset of images to be the training set for the model; 2. Extract the desired features from all the images; 3. Run a clustering algorithm on the features extracted from the training set: each cluster is a codeword, and the set containing all the clusters is the codebook; 4. For each feature point, find the closest codeword according to a distance function or metric; 5. Build a normalized histogram of the occurrences of each word. The choices made in the design phase strongly influence the final representation. In this work we discuss how to aggregate different kinds of features to obtain more powerful representations, presenting some state-of-the-art methods from the Computer Vision community. We focus on Clustering Ensemble techniques, presenting the theoretical framework and a new approach (Section 2.5).
Understanding food in everyday life (e.g., the recognition of dishes and the related ingredients, the estimation of quantity, etc.) is a problem which has been considered in different research areas due to its important medical, social and anthropological impact. For instance, an unhealthy diet can harm a person's general health. Since health is strictly linked to diet, advanced Computer Vision tools to recognize food images (e.g., acquired with mobile/wearable cameras), as well as their properties (e.g., calories, volume), can help diet monitoring by providing useful information to experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). On the other hand, the great diffusion of low-cost image acquisition devices embedded in smartphones allows people to take pictures of food and share them on the Internet (e.g., on social media); the automatic analysis of the posted images could provide information on the relationship between people and their meals, and can be exploited by food retailers to better understand the preferences of a person for further recommendations of food and related products. Image representation plays a key role when trying to infer information about the food items depicted in an image. We propose a deep review of the state of the art and two different novel representation techniques.
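Steps 3-5 of the BoW pipeline enumerated in the abstract (build a codebook by clustering, assign features to their nearest codeword, histogram the word occurrences) can be sketched as follows. This is a generic illustration on random vectors, not the thesis's feature set or clustering-ensemble method; the array shapes and codebook size are arbitrary assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(0)

# Step 3: cluster training descriptors into a codebook of 10 codewords.
train_feats = rng.random((200, 8))          # e.g. 200 local descriptors
codebook, _ = kmeans2(train_feats, 10, minit="++")

def bow_histogram(features, codebook):
    """Steps 4-5: assign each feature to its nearest codeword, then build
    a normalized histogram of word occurrences."""
    words, _ = vq(features, codebook)        # nearest-codeword assignment
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

image_feats = rng.random((50, 8))            # descriptors from one image
h = bow_histogram(image_feats, codebook)
print(h.shape, round(h.sum(), 6))  # (10,) 1.0
```

The resulting fixed-length histogram is the image representation fed to a classifier, regardless of how many local features the image originally produced.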
APA, Harvard, Vancouver, ISO, and other styles
49

Guo, Y. (Yimo). "Image and video analysis by local descriptors and deformable image registration." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526201412.

Full text
Abstract:
Abstract Image description plays an important role in representing inherent properties of entities and scenes in static images. Within the last few decades, it has become a fundamental issue of many practical vision tasks, such as texture classification, face recognition, material categorization, and medical image processing. The study of static image analysis can also be extended to video analysis, such as dynamic texture recognition, classification and synthesis. This thesis contributes to the research and development of image and video analysis from two aspects. In the first part of this work, two image description methods are presented to provide discriminative representations for image classification. They are designed in unsupervised (i.e., class labels of texture images are not available) and supervised (i.e., class labels of texture images are available) manner, respectively. First, a supervised model is developed to learn discriminative local patterns, which formulates the image description as an integrated three-layered model to estimate an optimal pattern subset of interest by simultaneously considering the robustness, discriminative power and representation capability of features. Second, in the case that class labels of training images are unavailable, a linear configuration model is presented to describe microscopic image structures in an unsupervised manner, which is subsequently combined together with a local descriptor: local binary pattern (LBP). This description is theoretically verified to be rotation invariant and is able to provide a discriminative complement to the conventional LBPs. In the second part of the thesis, based on static image description and deformable image registration, video analysis is studied for the applications of dynamic texture description, synthesis and recognition. 
First, a dynamic texture synthesis model is proposed to create a continuous and infinitely varying stream of images from a finite input video; it stitches video clips in the time domain by selecting properly matching frames and organizing them into a logical order. Second, a method for facial expression recognition is proposed which formulates the dynamic facial expression recognition problem as the construction of longitudinal atlases and a groupwise image registration problem.
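The conventional local binary pattern (LBP) operator mentioned in the abstract admits a very compact sketch. The following is a generic 8-neighbour, 3x3 LBP for illustration only, not the thesis's learned or configuration-based descriptors:

```python
import numpy as np

def lbp_8(image):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of every
    interior pixel against the centre pixel and pack the comparison results
    into an 8-bit code (0..255)."""
    img = np.asarray(image, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels
    # neighbour offsets, clockwise starting from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour plane shifted by (dy, dx), same shape as the centre plane
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nb >= c).astype(int) << bit
    return code.astype(np.uint8)
```

The texture descriptor typically fed to a classifier is then the normalized 256-bin histogram of these codes over the image, e.g. `np.bincount(lbp_8(img).ravel(), minlength=256)`.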
Tiivistelmä (translated from Finnish): Image description plays an important role in depicting the inherent entities and scenes appearing in static images. In recent decades it has become a fundamental problem in many practical machine vision tasks, such as texture classification, face recognition, material classification and medical image analysis. The research field of static image analysis can also be extended to video analysis, such as dynamic texture recognition, classification and synthesis. This doctoral research contributes to the research and development of image and video analysis from two perspectives. In the first part of the work, two image description methods are presented for creating discriminative representations for image classification. They are designed to be either unsupervised (i.e., class labels of texture images are not available) or supervised (i.e., class labels are available). First, a supervised model is developed to learn discriminative local patterns, formulating the image description method as an integrated three-layer model whose aim is to estimate an optimal subset of patterns of interest by simultaneously taking into account the robustness, discriminative power and representation capacity of the features. Next, for cases where class labels are not available, a linear configuration model is presented for describing the microscopic structures of an image in an unsupervised manner. This is then used together with a local descriptor, the local binary pattern (LBP) operator. A theoretical examination shows that the developed descriptor is rotation invariant and able to produce discriminative, complementary information for the conventional LBP method. In the second part of the work, video analysis is studied, based on static image description and deformable image registration, with dynamic texture description, synthesis and recognition as application areas.
First, a model for dynamic texture synthesis is proposed which creates a continuous and infinite stream of images from a given video of finite length. The method stitches together video segments in the time domain by selecting mutually compatible frames from the video and arranging them into a logical order. Next, a new method for facial expression recognition is presented which formulates the dynamic facial expression recognition problem as a problem of constructing longitudinal atlases and of groupwise image registration.
APA, Harvard, Vancouver, ISO, and other styles
50

Gahli, Ahmed. "Novel probabilistic image representations for information-based image description and analysis." Thesis, University of Nottingham, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
