Dissertations / Theses on the topic 'Image analysis – Software'

Consult the top 50 dissertations / theses for your research on the topic 'Image analysis – Software.'


1

Teo, Ching Leong. "Bistatic radar system analysis and software development." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FTeo%5FChing.pdf.

Full text
Abstract:
Thesis (M.S. in Engineering Science)--Naval Postgraduate School, December 2003.
Thesis advisor(s): David C. Jenn, D. Curtis Schleher. Includes bibliographical references (p. 95-96). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
2

Francis, Nicholas David. "Parallel architectures for image analysis." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108844/.

Full text
Abstract:
This thesis is concerned with the problem of designing an architecture specifically for the application of image analysis and object recognition. Image analysis is a complex subject area that remains only partially defined and only partially solved. This makes the task of designing an architecture aimed at efficiently implementing image analysis and recognition algorithms a difficult one. Within this work a massively parallel heterogeneous architecture, the Warwick Pyramid Machine is described. This architecture consists of SIMD, MIMD and MSIMD modes of parallelism each directed at a different part of the problem. The performance of this architecture is analysed with respect to many tasks drawn from very different areas of the image analysis problem. These tasks include an efficient straight line extraction algorithm and a robust and novel geometric model based recognition system. The straight line extraction method is based on the local extraction of line segments using a Hough style algorithm followed by careful global matching and merging. The recognition system avoids quantising the pose space, hence overcoming many of the problems inherent with this class of methods and includes an analytical verification stage. Results and detailed implementations of both of these tasks are given.
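As an illustrative aside, the local line-segment extraction stage described above can be approximated with a probabilistic Hough transform. The sketch below (the file name and thresholds are placeholder assumptions, and the global matching and merging stage is omitted) shows the general idea in Python with OpenCV, not the thesis algorithm itself.

```python
import cv2
import numpy as np

def extract_line_segments(image_path, min_length=30, max_gap=5):
    """Edge detection followed by a probabilistic Hough transform: a rough
    stand-in for the local Hough-style segment extraction described above."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                               minLineLength=min_length, maxLineGap=max_gap)
    return [] if segments is None else [tuple(seg[0]) for seg in segments]

if __name__ == "__main__":
    # "scene.png" is a placeholder image path.
    for x1, y1, x2, y2 in extract_line_segments("scene.png"):
        print(f"segment ({x1},{y1}) -> ({x2},{y2})")
```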
APA, Harvard, Vancouver, ISO, and other styles
3

Thomas, Mathew. "Semi-Automated Dental Cast Analysis Software." Wright State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=wright1310404863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Self, Joel. "On-the-fly dynamic dead variable analysis." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1791.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Minch, Stacy Lynn. "Validity of Seven Syntactic Analyses Performed by the Computerized Profiling Software." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd2956.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rungta, Neha Shyam. "Guided Testing for Automatic Error Discovery in Concurrent Software." Diss., Brigham Young University, 2009. http://contentdm.lib.byu.edu/ETD/image/etd3175.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Brown, Christopher A. "Usability analysis of the channel application programming interface." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FBrown.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, He. "Microscopic Hyperspectral Image Analysis via Deep Learning." Thesis, Griffith University, 2020. http://hdl.handle.net/10072/396188.

Full text
Abstract:
Hyperspectral imaging (HSI) is a technique that can obtain more spectral information than that in normal color images. Due to this property and strength in material classification, it is widely used in remote sensing, agriculture, and environmental monitoring. In recent years, with the rapid developments of hardware, hyperspectral cameras have become more portable and affordable. An increasing number of studies are being conducted on HSI systems, and research focuses have expanded from remote sensing to close-range objects. With a proper microscopic kit, a hyperspectral camera can capture images of objects of micrometer size. In this thesis, an HSI system is introduced which consists of a hyperspectral camera, a microscope, control software, and an image processing workstation. The samples are placed under the microscope which has the camera mounted on the top. The parameters of the camera can be tuned by the control software to achieve the best image quality. After the setup, the camera takes the HSI image of the samples. Then, the image is transferred to the workstation and saved as a raw HSI image for further processing. Two datasets of cells and microplastics are collected and introduced as benchmark datasets for this research. These two benchmarks were built in response to the demands of their respective application areas. In the area of cell viability assay, traditional methods use fluorescent dyes to distinguish live and dead cells. Although working very reliably, they require physical contact with the cells, which affects the appearance of the cells and some of the original cell features. As a consequence, there is a demand for the development of non-invasive technology for cell analysis. Our HSI system is capable of using computer vision techniques to classify live and dead cells as a non-invasive and systematic method so that the properties of the cells remain unchanged and the system can be operated without special skills. The microplastics dataset is built to address the needs of environmental protection, which is an important research topic with significant social and economic values. The increasing amount of microplastics in the ocean has attracted enormous concern because of its potential to damage the ecosystem and affect the health of humans and animals. While HSI has shown great potential in analyzing microplastics, studies in this direction are hindered by the lack of publicly available image data. Therefore, there is an urgent demand to build a dataset for microplastics detection. After the datasets have been constructed, we evaluate the support vector machine (SVM) on them as the baseline approach. We apply several feature extraction methods to process the HSI images of the cells before feeding them into the SVM, including extended morphology profile (EMP), tensor morphology profile (TMP), 3D scale-invariant feature transform (SIFT3D), 3D local derivative pattern (3DLDP) and spectral-spatial scale-invariant feature transform (SS-SIFT). Among them, TMP has the best performance for the cell classification task. Regarding the detection of microplastics, the spectral signature is used as the feature and is fed into the SVM for detection. Furthermore, we propose a novel attention-based convolutional neural network (CNN) to classify the cells, taking advantage of developments in deep learning. Inspired by the VGG networks, we first build a classification network for our hyperspectral data.
Then, a band weighting network and a spatial weighting network are integrated into the backbone. The band weighting network assigns a weight to each band in the hyperspectral images. The weights can suppress redundant bands that do not make an important contribution to the classification task and make the classification network focus on the bands that have more important features for classification. The spatial weighting network assigns a weight to each pixel in the hyperspectral images. The weights can help the classification network focus on important parts of the images and ignore the irrelevant parts. These two weighting networks help to improve the final classification accuracy of the cells. In the experiments on hand-crafted features, the SVM with the TMP feature extraction method achieves the best accuracy of 83.72% for the cell classification task. The SVM with the spectral signature produces 99.13% accuracy on the microplastics detection task. In comparison, the attention-based CNN achieves 98.17% for the cell classification task. These results show that our HSI system and classification methods have great potential for these two classification and detection tasks. The richness of spectral information provided by hyperspectral images has great potential in material recognition tasks, helping to classify different materials based on the unique spectral signature of each material. Because of this, our research can contribute to a wider range of biomedical and environmental domains.
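To make the band weighting idea concrete, here is a minimal squeeze-and-excitation style band attention module in PyTorch. It is an illustrative analogue of the mechanism described in the abstract, not the thesis architecture; the class name, layer sizes and reduction factor are assumptions.

```python
import torch
import torch.nn as nn

class BandWeighting(nn.Module):
    """Learns one weight per spectral band and rescales the input cube, so
    later layers can focus on informative bands (illustrative analogue only)."""
    def __init__(self, num_bands: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(num_bands, num_bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_bands // reduction, num_bands),
            nn.Sigmoid(),                            # per-band weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweighted hyperspectral cube

# Example: a batch of cubes with 32 spectral bands and a 64x64 spatial patch.
cube = torch.randn(8, 32, 64, 64)
print(BandWeighting(num_bands=32)(cube).shape)       # torch.Size([8, 32, 64, 64])
```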
Thesis (Masters)
Master of Philosophy (MPhil)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO, and other styles
9

Caffall, Dale Scott. "Conceptual framework approach for system-of-systems software developments." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Mar%5FCaffall.pdf.

Full text
Abstract:
Thesis (M.S. in Software Engineering)--Naval Postgraduate School, March 2003.
Thesis advisor(s): James Bret Michael, Man-Tak Shing. Includes bibliographical references (p. 83-84). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
10

Rexilius, Jan [Verfasser], and Klaus-Dietz [Akademischer Betreuer] Tönnies. "Software phantoms in medical image analysis / Jan Rexilius. Betreuer: Klaus-Dietz Tönnies." Magdeburg : Universitätsbibliothek, 2015. http://d-nb.info/1072685531/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Rexilius, Jan [Verfasser], and Klaus [Akademischer Betreuer] Tönnies. "Software phantoms in medical image analysis / Jan Rexilius. Betreuer: Klaus-Dietz Tönnies." Magdeburg : Universitätsbibliothek, 2015. http://nbn-resolving.de/urn:nbn:de:gbv:ma9:1-6064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Kerbyson, Darren James. "A multiple-SIMD architecture for image and tracking analysis." Thesis, University of Warwick, 1992. http://wrap.warwick.ac.uk/80185/.

Full text
Abstract:
The computational requirements for real-time image-based applications are such as to warrant the use of a parallel architecture. Commonly used parallel architectures conform to the classifications of Single Instruction Multiple Data (SIMD), or Multiple Instruction Multiple Data (MIMD). Each class of architecture has its advantages and disadvantages. For example, SIMD architectures can be used on data-parallel problems, such as the processing of an image, whereas MIMD architectures are more flexible and better suited to general-purpose computing. Both types of processing are typically required for the analysis of the contents of an image. This thesis describes a novel massively parallel heterogeneous architecture, implemented as the Warwick Pyramid Machine. Both SIMD and MIMD processor types are combined within this architecture. Furthermore, the SIMD array is partitioned into smaller SIMD sub-arrays, forming a Multiple-SIMD array. Thus, local data parallel, global data parallel, and control parallel processing are supported. After describing the present options available in the design of massively parallel machines and the nature of the image analysis problem, the architecture of the Warwick Pyramid Machine is described in some detail. The performance of this architecture is then analysed, both in terms of peak available computational power and in terms of representative applications in image analysis and numerical computation. Two tracking applications are also analysed to show the performance of this architecture. In addition, they illustrate the possible partitioning of applications between the SIMD and MIMD processor arrays. Load-balancing techniques are then described which have the potential to increase the utilisation of the Warwick Pyramid Machine at run-time. These include mapping techniques for image regions across the Multiple-SIMD arrays, and for the compression of sparse data. It is envisaged that these techniques may be found useful in other parallel systems.
APA, Harvard, Vancouver, ISO, and other styles
13

Lai, Ka Chon. "Constructing social networks based on image analysis." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2586279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Stevenson, Clint Wesley. "A logistic regression analysis of Utah colleges exit poll response rates using SAS software." Diss., Brigham Young University, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1578.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hockgraver, Valerie Ruth. "Implementation of ImageActionplus software for improved image analysis of solid propellant combustion holograms." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27089.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Masood, Khalid. "Histological image analysis and gland modelling for biopsy classification." Thesis, University of Warwick, 2010. http://wrap.warwick.ac.uk/3918/.

Full text
Abstract:
The area of computer-aided diagnosis (CAD) has undergone tremendous growth in recent years. In CAD, the computer output is used as a second opinion for cancer diagnosis. Development of cancer is a multiphase process and mutation of genes is involved over the years. Cancer grows out of normal cells in the body and it usually occurs when growth of the cells in the body is out of control. This phenomenon changes the shape and structure of the tissue glands. In this thesis, we have developed three algorithms for classification of colon and prostate biopsy samples. First, we computed morphological and shape-based parameters from hyperspectral images of colon samples and used linear and non-linear classifiers for the identification of cancerous regions. To investigate the importance of hyperspectral imagery in histopathology, we selected a single spectral band from its hyperspectral cube and performed an analysis based on texture of the images. Texture refers to an arrangement of the basic constituents of the material and it is represented by the interrelationships between the spatial arrangements of the image pixels. A novel feature selection method based on the quality of clustering is developed to discard redundant information. In the third algorithm, we used Bayesian inference for segmentation of glands in colon and prostate biopsy samples. In this approach, glands in a tissue are represented by polygonal models with various numbers of vertices depending on the size of glands. An appropriate set of proposals for the Metropolis-Hastings-Green algorithm is formulated and a series of Markov chain Monte Carlo (MCMC) simulations are run to extract polygonal models for the glands. We demonstrate the performance of 3D spectral and spatial and 2D spatial analyses with over 90% classification accuracies and less than 10% average segmentation error for the polygonal models.
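For illustration, the sketch below shows a plain random-walk Metropolis sampler that jitters the vertices of a polygonal model against a placeholder energy function. The Metropolis-Hastings-Green (reversible-jump) moves and the actual gland likelihood used in the thesis are not reproduced; the energy function, step size and temperature here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(polygon, image):
    """Placeholder fit score: mean intensity inside the polygon's bounding box
    (lower is better). A real gland model would score boundary contrast and
    shape priors instead."""
    lo = np.clip(polygon.min(axis=0).astype(int), 0, None)
    hi = polygon.max(axis=0).astype(int) + 1
    return float(image[lo[1]:hi[1], lo[0]:hi[0]].mean())

def metropolis_polygon(image, polygon, n_iter=2000, step=1.5, temperature=0.05):
    """Random-walk Metropolis: perturb one vertex at a time, accept moves that
    lower the energy and, occasionally, moves that raise it."""
    h, w = image.shape
    poly, current = polygon.copy(), energy(polygon, image)
    for _ in range(n_iter):
        proposal = poly.copy()
        v = rng.integers(len(poly))
        proposal[v] = np.clip(proposal[v] + rng.normal(scale=step, size=2),
                              0, [w - 1, h - 1])      # keep vertices inside the image
        proposed = energy(proposal, image)
        if rng.random() < np.exp((current - proposed) / temperature):
            poly, current = proposal, proposed        # accept the move
    return poly

image = rng.random((128, 128))
square = np.array([[40.0, 40.0], [80.0, 40.0], [80.0, 80.0], [40.0, 80.0]])
print(metropolis_polygon(image, square).round(1))
```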
APA, Harvard, Vancouver, ISO, and other styles
17

Giuliani, Giulia. "Analysis and improvement of a software framework for solving mathematical puzzles." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Mathematical puzzles form a broad and popular field, requiring different levels of skill and ability to solve the various problems. They are used every day in schools to train students and to encourage them to apply their knowledge. If we think of computer systems as a young person who needs to learn and improve their abilities, we could apply the same kind of training to them. With this idea, in this work we focus on the analysis of the different types of mathematical puzzles offered by competitions for young students, understanding the different categories and their characteristics. We also explore the problems related to image analysis, since correct understanding of the input data is very important for solving a puzzle, a task that becomes more demanding when we have to deal with different types of sources. Continuing along this path, we analyse how the processing of text and diagrams works individually, so that the puzzles can finally be modelled by combining these data. NLP is the field connected to the processing of textual information, while for images we start from an existing work, trying to improve it and extend the functionality it offers. The work is therefore structured in three different areas: • Reorganisation and improvement of the existing framework, making it more user-friendly and filling some gaps in the image-analysis predicates. • Development of the middle layer between the NLP of the text and the definition of the problem model, showing how the model itself is built starting from the initial data. • Development of a web application that combines all the work, making available to users a tool for solving mathematical puzzles, also offering the possibility of customising the selected problem.
APA, Harvard, Vancouver, ISO, and other styles
18

Raza, Shan-e.-Ahmed. "Multi-variate image analysis for detection of biomedical anomalies." Thesis, University of Warwick, 2014. http://wrap.warwick.ac.uk/61713/.

Full text
Abstract:
Multi-modal images are commonly used in the field of medicine for anomaly detection, for example CT/MRI images for tumour detection. Recently, thermal imaging has demonstrated its potential for detection of anomalies (e.g., water stress, disease) in plants. In biology, multi-channel imaging systems are now becoming available which combine information about the level of expression of various molecules of interest (e.g., proteins) which can be employed to investigate molecular signatures of diseases such as cancer or their subtypes. Before combining information from multiple modalities/channels, however, we need to align (register) the images together in a way that the same point in the multiple images obtained from different sources/channels corresponds to the same point on the object (e.g., a particular point on a leaf in a plant or a particular cell in a tissue) under observation. In this thesis, we propose registration methods to align multi-modal/channel images of plants and human tissues. For registration of thermal and visible light images of plants we propose a registration method using silhouette extraction. For silhouette extraction, we propose a novel multi-scale method which can be used to extract highly accurate silhouettes of diseased plants in thermal and visible light images. The extracted silhouettes can be used to register plant regions in thermal and visible light images. After alignment of multi-modal images, we combine thermal and visible light information for classification of water-deficient regions of spinach canopies. We add depth information as another dimension to our set of features for detection of diseased plants. For depth estimation, we use the disparity between a stereo image pair. We then compare different disparity estimation algorithms and propose a method which produces disparity maps that are not only accurate and smooth but also less sensitive to acquisition noise. Our results show that by combining information from multiple modalities, classification accuracy of different classifiers can be increased. In the second part of this thesis, we propose a block-based registration method using mutual information as a similarity measure for registration of multi-channel fluorescence microscopy images. The proposed block-based approach is fast, accurate and robust to local variations in the images. In addition, we propose a method for selection of a reference image with maximal overlap, i.e., a method to choose a reference image, from a stack of dozens of multi-channel images, which when used as reference image causes the minimum amount of information loss during the registration process. Images registered using this method have been used in other studies to investigate techniques for mining molecular patterns of cancer. Both the registration algorithms proposed in this thesis produce highly accurate results, where the block-based registration algorithm is shown to be capable of registering the images up to sub-pixel accuracy. The disparity estimation algorithm produces smooth and accurate disparity maps in the presence of noise where commonly used disparity estimation algorithms fail to perform. Our results show that by combining multi-modal image data, one can easily increase the accuracy of classifiers to detect anomalies in plants, which helps to avoid huge losses due to disease or lack of water at a commercial level.
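As a concrete illustration of the similarity measure named here, the following sketch computes histogram-based mutual information between two image blocks and uses it to recover a small integer shift. It is the textbook estimator under assumed bin counts and synthetic data, not the thesis implementation.

```python
import numpy as np

def mutual_information(block_a: np.ndarray, block_b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information between two equally sized image blocks."""
    joint, _, _ = np.histogram2d(block_a.ravel(), block_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                     # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Shift search: pick the integer offset that maximises MI between a reference
# block and a moving channel (a toy version of block-based registration).
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moving = np.roll(ref, shift=3, axis=1) + 0.05 * rng.random((64, 64))
best = max(range(-5, 6), key=lambda dx: mutual_information(ref, np.roll(moving, -dx, axis=1)))
print("estimated shift:", best)                      # expected: 3
```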
APA, Harvard, Vancouver, ISO, and other styles
19

Amundberg, Joel, and Martin Moberg. "System Agnostic GUI Testing : Analysis of Augmented Image Recognition Testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Van der Westhuizen, Lynette. "Concise analysis and testing of a software model of a satellite remote sensing system used for image generation." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/96029.

Full text
Abstract:
Thesis (MEng) -- Stellenbosch University, 2014.
ENGLISH ABSTRACT: The capability of simulating the output image of earth observation satellite sensors is of great value, as it reduces the dependency on extensive field tests when developing, testing and calibrating satellite sensors. The aim of this study was to develop a software model to simulate the data acquisition process used by passive remote sensing satellites for the purpose of image generation. To design the software model, a comprehensive study was done of a physical real world satellite remote sensing system in order to identify and analyse the different elements of the data acquisition process. The different elements were identified as being the target, the atmosphere, the sensor and satellite, and radiation. These elements and a signature rendering equation are used to model the target-atmosphere-sensor relationship of the data acquisition process. The signature rendering equation is a mathematical model of the different solar and self-emitted thermal radiance paths that contribute to the radiance reaching the sensor. It is proposed that the software model be implemented as an additional space remote sensing application in the Optronics Sensor Simulator (OSSIM) simulation environment. The OSSIM environment provides the infrastructure and key capabilities upon which this specialist work builds. OSSIM includes a staring array sensor model, which was adapted and expanded in this study to operate as a generic satellite sensor. The OSSIM signature rendering equation was found to include all the necessary terms required to model the at-sensor radiance for a satellite sensor with the exception of an adjacency effect term. The equation was expanded in this study to include a term to describe the in-field-of-view adjacency effect due to aerosol scattering. This effect was modelled as a constant value over the sensor field of view. Models were designed to simulate across-track scanning mirrors, the satellite orbit trajectory and basic image processing for geometric discontinuities. Testing of the software model showed that all functions operated correctly within the set operating conditions and that the in-field-of-view adjacency effect can be modelled effectively by a constant value over the sensor field of view. It was concluded that the satellite remote sensing software model designed in this study accurately simulates the key features of the real world system and provides a concise and sound framework on which future functionality can be expanded.
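For readers unfamiliar with the terminology, a generic at-sensor radiance decomposition of the kind the signature rendering equation captures can be written as below. This is a standard textbook form with an added constant adjacency term, given here only as an illustration; it is not the actual OSSIM signature rendering equation.

```latex
L_{\text{sensor}}(\lambda) = L_{\text{path}}(\lambda)
  + \tau(\lambda)\left[\frac{\rho(\lambda)}{\pi}\bigl(E_{\text{dir}}(\lambda)\cos\theta_s + E_{\text{dif}}(\lambda)\bigr)
  + \varepsilon(\lambda)\,L_{\text{BB}}(\lambda, T)\right]
  + L_{\text{adj}}(\lambda)
```

Here tau is the target-to-sensor atmospheric transmittance, rho the target reflectance, E_dir and E_dif the direct and diffuse irradiance, epsilon the emissivity, L_BB the blackbody radiance at target temperature T, L_path the atmospheric path radiance, and L_adj the constant in-field-of-view adjacency contribution.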
APA, Harvard, Vancouver, ISO, and other styles
21

Khim, Chamroeun [Verfasser], and Willi [Akademischer Betreuer] Jäger. "3D Image Processing, Analysis, and Software Development of Khmer Inscriptions / Chamroeun Khim ; Betreuer: Willi Jäger." Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://d-nb.info/1180737024/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Song, Yu. "Modelling and analysis of plant image data for crop growth monitoring in horticulture." Thesis, University of Warwick, 2008. http://wrap.warwick.ac.uk/2032/.

Full text
Abstract:
Plants can be characterised by a range of attributes, and measuring these attributes accurately and reliably is a major challenge for the horticulture industry. The measurement of those plant characteristics that are most relevant to a grower has previously been tackled almost exclusively by a combination of manual measurement and visual inspection. The purpose of this work is to propose an automated image analysis approach in order to provide an objective measure of plant attributes to remove subjective factors from assessment and to reduce labour requirements in the glasshouse. This thesis describes a stereopsis approach for estimating plant height, since height information cannot be easily determined from a single image. The stereopsis algorithm proposed in this thesis is efficient in terms of the running time, and is more accurate when compared with other algorithms. The estimated geometry, together with colour information from the image, are then used to build a statistical plant surface model, which represents all the information from the visible spectrum. A self-organising map approach can be adopted to model plant surface attributes, but the model can be improved by using a probabilistic model such as a mixture model formulated in a Bayesian framework. Details of both methods are discussed in this thesis. A Kalman filter is developed to track the plant model over time, extending the model to the time dimension, which enables smoothing of the noisy measurements to produce a development trend for a crop. The outcome of this work could lead to a number of potentially important applications in horticulture.
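The Kalman-filter smoothing of a growth trend mentioned above can be illustrated with a scalar constant-velocity filter. The state layout and noise variances below are arbitrary assumptions, and the real system tracks a full plant surface model rather than a single height value.

```python
import numpy as np

def kalman_track(measurements, process_var=1e-3, meas_var=4.0):
    """Constant-velocity Kalman filter over a scalar attribute (e.g. plant
    height), smoothing noisy per-day measurements into a growth trend."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])           # state transition (height, growth rate)
    H = np.array([[1.0, 0.0]])                        # we only observe height
    Q = process_var * np.eye(2)
    R = np.array([[meas_var]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)         # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed

noisy = [100 + 2 * t + np.random.default_rng(t).normal(0, 2) for t in range(14)]
print([round(v, 1) for v in kalman_track(noisy)])
```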
APA, Harvard, Vancouver, ISO, and other styles
23

Knights, MS. "Flexible shape models for image analysis in an automated lobster catch assessment system." Honours thesis, University of Tasmania, 2007. https://eprints.utas.edu.au/3013/2/1_front_Knights.pdf.

Full text
Abstract:
Management of fisheries is an evolving science combining multiple techniques and strategies. The involvement of the computer in industry management and research continues to grow. The area of image analysis is currently limited but continues to grow as computing equipment becomes faster and cheaper. Locating a particular object in an image and processing information about that object is a significant task that requires a great deal of processing power and finesse. A functioning automated task that processes data on an object, such as a lobster, simply from an image of that object would greatly enhance the ability to manage a fishery with accurate, up-to-date data. The Tasmanian Aquaculture and Fisheries Institute (TAFI) intend to create a lobster-sorting tray, which can be used on lobster fishing vessels as standard equipment. This tray would include functionality to take an image of the current lobster and estimate its sex and weight from pertinent measurements on the lobster. This research demonstrates that through the use of the Active Shape Modeller (ASM) these details can be identified and processed from an image of the lobster. The ASM is used within an image analysis process, which can be fully automated, to draw out the required salient details of a lobster from an area of interest in the images. A series of experiments showed that the ASM was able to draw out and fully identify 77.3% of the images in a test set of 216 images. These images then had pertinent lengths measured and a sex estimated from those measurements, with 90% of the matched lobsters sexed correctly.
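The statistical backbone of an Active Shape Model is a point distribution model: the mean landmark configuration plus the principal modes of variation. The sketch below builds one with PCA on pre-aligned landmark vectors; the landmark count, training data and mode limit are synthetic assumptions, not details from the thesis.

```python
import numpy as np

def train_point_distribution_model(shapes: np.ndarray, n_modes: int = 5):
    """shapes: (n_examples, 2k) array of aligned landmarks (x1, y1, ..., xk, yk).
    Returns the mean shape, the top eigen-modes and their variances."""
    mean = shapes.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(shapes - mean, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_modes]
    return mean, eigvecs[:, order], eigvals[order]

def plausible_shape(mean, modes, variances, b):
    """Generate a shape from mode weights b, clamped to +/-3 standard deviations
    so the result stays within the learned shape space."""
    b = np.clip(b, -3.0 * np.sqrt(variances), 3.0 * np.sqrt(variances))
    return mean + modes @ b

rng = np.random.default_rng(0)
training = rng.normal(size=(50, 40))       # 50 synthetic shapes, 20 landmarks each
mean, modes, variances = train_point_distribution_model(training)
print(plausible_shape(mean, modes, variances, rng.normal(size=5)).shape)   # (40,)
```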
APA, Harvard, Vancouver, ISO, and other styles
24

Eckerberg, Klas. "Etta eller nolla? : landskapsarkitekter, yrkeskunnande och informationsteknologi /." Uppsala : Dept. of Landscape Planning Ultuna, Swedish Univ. of Agricultural Sciences, 2004. http://epsilon.slu.se/a463.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Westlund, Arvid. "Image analysis tool for geometric variations of the jugular veins in ultrasonic sequences : Development and evaluation." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-348336.

Full text
Abstract:
The aim of this project is to develop and perform a first evaluation of a software tool, based on the active contour, which automatically computes the cross-section area of the internal jugular veins through a sequence of 90 ultrasound images. The software is intended to be useful in future research in the field of intracranial pressure and its associated diseases. The biomechanics of the internal jugular veins and their relationship to the intracranial pressure is studied with ultrasound. It generates data in the form of ultrasound sequences shot in seven different body positions, supine to upright. Vein movements in cross section over the cardiac cycle are recorded for all body positions. From these films, it is interesting to know how the cross-section area varies over the cardiac cycle and between body positions, in order to estimate the pressure. The software created was semi-automatic, where the operator loads each individual sequence and sets the initial contour on the first frame. It was evaluated in a test by comparing its computed areas with manually estimated areas. The test showed that the software was able to track and compute the area with a satisfactory accuracy for a variety of sequences. It is also faster and more consistent than manual measurements. The most difficult sequences to track were small vessels with narrow geometries, fast-moving walls, and blurry edges. Further development is required to correct a few bugs in the algorithm. Also, the improved algorithm should be evaluated on a larger sample of sequences before using it in research.
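A minimal sketch of the semi-automatic workflow described above, using scikit-image's active contour and the shoelace formula for the enclosed area. The frame data, smoothing and snake parameters are toy assumptions, areas are in pixels squared rather than calibrated units, and this is not the thesis software.

```python
import numpy as np
from skimage import filters, segmentation

def vein_area_sequence(frames, init_contour):
    """Propagate an active contour through an image sequence and report the
    enclosed cross-section area per frame; the previous frame's contour seeds
    the next frame, mimicking the semi-automatic workflow described above."""
    snake = init_contour
    areas = []
    for frame in frames:
        smooth = filters.gaussian(frame, sigma=3, preserve_range=True)
        snake = segmentation.active_contour(smooth, snake,
                                            alpha=0.015, beta=10.0, gamma=0.001)
        r, c = snake[:, 0], snake[:, 1]
        areas.append(0.5 * abs(np.dot(r, np.roll(c, 1)) - np.dot(c, np.roll(r, 1))))
    return areas

# Toy example: a bright disc whose radius shrinks over 5 synthetic frames.
yy, xx = np.mgrid[0:128, 0:128]
frames = [(np.hypot(yy - 64, xx - 64) < r).astype(float) for r in (30, 28, 26, 24, 22)]
theta = np.linspace(0, 2 * np.pi, 100)
init = np.column_stack([64 + 40 * np.sin(theta), 64 + 40 * np.cos(theta)])  # (row, col)
print([round(a) for a in vein_area_sequence(frames, init)])
```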
APA, Harvard, Vancouver, ISO, and other styles
26

Wang, Chunliang. "Computer Assisted Coronary CT Angiography Analysis : Disease-centered Software Development." Licentiate thesis, Linköping University, Radiology, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-17783.

Full text
Abstract:

The substantial advances of coronary CTA have resulted in a boost in the use of this new technique in the last several years, which brings a big challenge to radiologists because of the increasing number of exams and the large amount of data for each patient. The main goal of this study was to develop a computer tool to facilitate coronary CTA analysis by combining knowledge of medicine and image processing. Firstly, a competing fuzzy connectedness tree algorithm was developed to segment the coronary arteries and extract centerlines for each branch. The new algorithm, which is an extension of the “virtual contrast injection” method, preserves the low-density soft tissue around the coronary, which reduces the possibility of introducing false-positive stenoses during segmentation. Secondly, this algorithm was implemented in open source software in which multiple visualization techniques were integrated into an intuitive user interface to facilitate user interaction and provide good overviews of the processing results. Considerable effort was put into optimizing the computational speed of the algorithm to meet the clinical requirements. Thirdly, an automatic seeding method, which can automatically remove the rib cage and recognize the aortic root, was introduced into the interactive segmentation workflow to further minimize the requirement of user interactivity during post-processing. The automatic procedure is carried out right after the images are received, which saves users time after they open the data. Vessel enhancement and quantitative 2D vessel contour analysis are also included in this new version of the software. In our preliminary experience, visually accurate segmentation results of major branches have been achieved in 74 cases (42 cases reported in paper II and 32 cases in paper III) using our software with limited user interaction. On 128 branches of 32 patients, the average overlap between the centerline created in our software and the manually created reference standard was 96.0%. The average distance between them was 0.38 mm, lower than the mean voxel size. The automatic procedure ran for 3-5 min as a single-thread application in the background. Interactive processing took 3 min on average with the latest version of the software. In conclusion, the presented software provides fast and automatic coronary artery segmentation and visualization. The accuracy of the centerline tracking was found to be acceptable when compared to manually created centerlines.

APA, Harvard, Vancouver, ISO, and other styles
27

Gonulsen, Aysegul. "Feature Extraction Of Honeybee Forewings And Hindlegs Using Image Processing And Active Contours." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12604738/index.pdf.

Full text
Abstract:
Honeybees have a rich genetic diversity in Anatolia. This is reflected in the presence of numerous subspecies of honeybee in Turkey. In METU, Department of Biology, honeybee populations of different regions in Turkey are investigated in order to characterize population variation in these regions. A total of 23 length and angle features belonging to the honeybee hindlegs and forewings are measured in these studies using a microscope and a monitor. These measurements are carried out by placing rulers on the monitor that shows the honeybee image and getting the length and angle features. However, performing measurements in this way is a time-consuming process and is open to human-dependent errors. In this thesis, a "semi-automated honeybee feature extraction system" is presented. The aim is to increase the efficiency by decreasing the time spent on handling these measurements and by increasing the accuracy of measured hindleg and forewing features. The problem is studied from the acquisition of the microscope images to the extraction of the honeybee features. In this scope, suitable methods are developed for segmentation of honeybee hindleg and forewing images. Within intermediate steps, blob analysis is utilized, and edges of the forewing and hindlegs are thinned using skeletonization. Templates that represent the forewing and hindleg edges are formed by either Bezier curves or polynomial interpolation. In the feature extraction phase, the Active Contour (Snake) algorithm is applied to the images in order to find the critical points using these templates.
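Two of the intermediate steps named above, blob analysis and skeletonization, can be illustrated with scikit-image in a few lines. The synthetic image and threshold-free setup below are assumptions for demonstration only.

```python
import numpy as np
from skimage import measure, morphology

def largest_blob_skeleton(binary_image: np.ndarray) -> np.ndarray:
    """Keep the largest connected blob (blob analysis) and thin it to a
    one-pixel-wide skeleton, as in the intermediate steps described above."""
    labels = measure.label(binary_image)
    if labels.max() == 0:
        return np.zeros_like(binary_image, dtype=bool)
    sizes = np.bincount(labels.ravel())[1:]          # blob sizes, background excluded
    largest = labels == (np.argmax(sizes) + 1)
    return morphology.skeletonize(largest)

# Toy example: a thick diagonal bar plus a small speck of noise.
img = np.zeros((64, 64), dtype=bool)
for i in range(10, 54):
    img[i, i - 3:i + 3] = True
img[5, 60] = True                                    # noise blob that gets discarded
print(largest_blob_skeleton(img).sum(), "skeleton pixels")
```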
APA, Harvard, Vancouver, ISO, and other styles
28

Ferreira, Adriane Pedroso Dias. "MetImage: uma metodologia para desenvolvimento de software para o processamento e análise de imagens." Universidade Federal de Santa Maria, 2006. http://repositorio.ufsm.br/handle/1/8243.

Full text
Abstract:
The development of image processing and analysis software is a complex task: it relies on mathematical methods to solve problems, requires a multidisciplinary team and demands a high degree of quality in the developed software. It is therefore very important to use a methodology that organizes and improves the development process for this type of software. The existence of a methodology is pointed out as one of the first steps towards managing and improving the software development process. Accordingly, this work presents a specific methodology for the development of image processing and analysis software, called MetImage in this work. The goal of this methodology is to address the deficiencies detected in existing methodologies, such as excessive resource requirements, bureaucracy, exaggerated control and, in some specific cases, lack of documentation. The proposed methodology was deployed in the context of a research group. The main results obtained were the specification of the team's activities, the inclusion of a learning stage on the mathematical methods required to implement the functionalities, and the standardization of code. Moreover, the documentation generated can be used to support the understanding between specialists of the different areas that make up the research group.
APA, Harvard, Vancouver, ISO, and other styles
29

Ivarsson, Adam. "Expediting Gathering and Labeling of Data from Zebrafish Models of Tumor Progression and Metastasis Using Bespoke Software." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-148691.

Full text
Abstract:
In this paper I describe a set of algorithms used to partly automate the labeling and preparation of images of zebrafish embryos used as models of tumor progression and metastasis. These algorithms show promise for saving time for researchers using zebrafish in this way.
APA, Harvard, Vancouver, ISO, and other styles
30

Smrt, Richard D., Sara A. Lewis, Robert Kraft, and Linda L. Restifo. "Primary culture of Drosophila larval neurons with morphological analysis using NeuronMetrics." University of Oklahoma, 2015. http://hdl.handle.net/10150/604939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Magula, Filip. "Software pro zpracování retinálních snímků." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218653.

Full text
Abstract:
This thesis deals with a practical software solution for digital processing of retinal images. The theoretical part describes the anatomy of the human eye and retina as well as glaucoma disease. It also focuses on the description of methods for retinal nerve fiber layer enhancement and analysis. These enhancements are then used in the design of automated image processing. One chapter is devoted to the detection and analysis of the retinal nerve fiber layer. The practical part includes the user manual for the Image Blockz application, which was created within this thesis. The practical part further contains the programmer's manual describing the basic structure of the program and its possible extensions.
APA, Harvard, Vancouver, ISO, and other styles
32

Coleti, Thiago Adriano. "Um ambiente de avaliação da usabilidade de software apoiado por técnicas de processamento de imagens e reconhecimento de fala." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-06032014-140810/.

Full text
Abstract:
Filming and verbalization are considered fundamental usability test methods for supporting software usability evaluation, because they allow the evaluator to collect real data about the interaction capacity of a system and its influence on the user. The tests are usually performed by real users of the software, since they can submit the interface to many situations that were not anticipated by the evaluator in the lab. Although effective, filming and verbalization are not efficient, because they require a long time to analyze the collected data and identify usability problems. Research in the area indicates an analysis time of two to ten times the test time. This work aimed to develop a computational environment that uses events such as pronounced key words and facial reactions to support the process of collecting, analyzing and identifying interfaces with potential usability problems quickly and reliably. The environment is composed of an application that monitors, in the background, the use of a given application, registering key words pronounced by the participant and facial images at given time intervals. Besides these data, snapshots of the system screens are also registered in order to indicate which interfaces were in use at the moment of a given event. After collection, these data are organized and made available to the evaluator, highlighting events that could indicate participant dissatisfaction or potential usage problems. It was possible to conclude that the events related to verbalization with key words were effective in supporting the task of analyzing and identifying problematic interfaces, because the words were related to classifiers that indicated satisfaction or dissatisfaction on the part of the user. The verbalization activity proved more efficient when its data were analyzed together with the facial images, allowing a more reliable and comprehensive analysis. In this analysis, the evaluator was able to identify which interfaces of the system were rated poorly by the user and what the user's focus of view/use was at the moment of the event. For analyses carried out using key words, with or without the facial images, the time spent identifying the interfaces and potential problems was reduced to less than twice the test time.
APA, Harvard, Vancouver, ISO, and other styles
33

John, Björn, Daniel Markert, Norbert Englisch, Michael Grimm, Marc Ritter, Wolfram Hardt, and Danny Kowerko. "Quantification of geometric properties of the melting zone in laser-assisted welding." Wissenschaftliche Gesellschaft für Lasertechnik e.V, 2017. https://monarch.qucosa.de/id/qucosa%3A21479.

Full text
Abstract:
By using camera systems suitable for industrial applications in combination with a large number of different measurement sensors, it is possible to monitor laser welding processes and their results in real time. However, a low signal-to-noise ratio at frame rates of up to 2,400 fps allows only limited statements about the process behavior, especially concerning the analysis of new welding parameters and their impact on the melting bath. This article strives towards research of the kinetic and geometric dependencies of the melting zone induced by different laser parameters, through the use of a high-frame-rate camera system (1280x800 at 3,140 fps) in combination with model-driven image and data processing.
APA, Harvard, Vancouver, ISO, and other styles
34

Strand, Mattias. "A Software Framework for Facial Modelling and Tracking." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54563.

Full text
Abstract:

The WinCandide application, a platform for face tracking and model based coding, had become out of date and needed to be upgraded. This report is based on the work of investigating possible open source GUIs and computer vision tool kits that could replace the old ones that are unsupported. Multi platform GUIs are of special interest.

APA, Harvard, Vancouver, ISO, and other styles
35

Karuppuswamy, Jaiganesh. "Detection and Avoidance of Simulated Potholes in Autonomous Vehicles in an Unstructured Environment." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin990731390.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Dave, Palak P. "A Quantitative Analysis of Shape Characteristics of Marine Snow Particles with Interactive Visualization: Validation of Assumptions in Coagulation Models." Scholar Commons, 2018. https://scholarcommons.usf.edu/etd/7279.

Full text
Abstract:
The Deepwater Horizon oil spill that started on April 20, 2010, in the Gulf of Mexico was the largest marine oil spill in the history of the petroleum industry. There was an unexpected and prolonged sedimentation event of oil-associated marine snow to the seafloor due to the oil spill. The sedimentation event occurred because of the coagulation process among oil-associated marine particles. Marine scientists are developing models for the coagulation process of marine particles and oil, in order to estimate the amount of oil that may reach the seafloor along with marine particles. These models used certain assumptions regarding the shape and texture parameters of marine particles. Such assumptions may not be based on accurate information or may vary during and after the oil spill. The work performed here provided a quantitative analysis of the assumptions used in modeling the coagulation process of marine particles. It also investigated the changes in model parameters (shape and texture) during and after the Deepwater Horizon oil spill in different seasons (spring and summer). An Interactive Visualization Application was developed for data exploration and visual analysis of the trends in these parameters. An Interactive Statistical Analysis Application was developed to create a statistical summary of these parameter values.
APA, Harvard, Vancouver, ISO, and other styles
37

MEYER, ANSTETT COLETTE. "Analyse d'images de cellules en culture : description automatique de cellules musculaires lisses vasculaires et suivi de leur comportement." Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13063.

Full text
Abstract:
Image processing software was developed for the recognition of aortic cells; the observation and quantification of changes over time in the shape, surface area and motility of rat aortic myocytes could thereby be visualized. Mathematical morphology is used as the tool for image analysis and description.
APA, Harvard, Vancouver, ISO, and other styles
38

Ozgenel, Caglar Firat. "Developing A Tool For Acoustical Performance Evaluation Throughout The Design." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614066/index.pdf.

Full text
Abstract:
The performance of buildings has always been a concern for architects. With enhancements in technology, it is possible to measure, analyze and evaluate the performance of an architectural design before it is built, via simulation tools. With the evaluation of such analyses, the performance of the space concerned can be improved if simulation tools are employed throughout the design process. However, even though simulation tools have been developed for acoustical simulation and performance analysis, it is not always simple to integrate them into the whole design process, because of both the specific knowledge required to use the tools and the nature of acoustical simulation tools. Within the scope of this thesis, a simulation tool is developed using the method of image sources; it does not require advanced knowledge of acoustics and provides rapid feedback about the performance of the design so that the performance can be enhanced.
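The method of image sources named above can be illustrated for a simple shoebox room: mirror the sound source across each wall and sum the delayed, attenuated arrivals at the receiver. The room dimensions, positions and first-order-only treatment below are toy assumptions, not the thesis tool.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def first_order_image_sources(source, room_dims):
    """Mirror a point source across the six walls of a shoebox room, giving the
    first-order image sources (higher orders would mirror these again)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room_dims[axis]):
            img = np.array(source, dtype=float)
            img[axis] = 2.0 * wall - img[axis]
            images.append(img)
    return images

def early_reflections(source, receiver, room_dims):
    """Arrival time (ms) and 1/r amplitude of the direct path plus the six
    first-order reflections at the receiver position."""
    paths = [np.array(source, dtype=float)] + first_order_image_sources(source, room_dims)
    out = []
    for p in paths:
        d = float(np.linalg.norm(np.array(receiver, dtype=float) - p))
        out.append((1000.0 * d / SPEED_OF_SOUND, 1.0 / d))
    return sorted(out)

for t_ms, amp in early_reflections((2.0, 3.0, 1.5), (6.0, 4.0, 1.5), (8.0, 6.0, 3.0)):
    print(f"{t_ms:6.2f} ms  amplitude {amp:.3f}")
```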
APA, Harvard, Vancouver, ISO, and other styles
39

Poulsen, Andrew Joseph. "Real-time Adaptive Cancellation of Satellite Interference in Radio Astronomy." Diss., Brigham Young University, 2003. http://contentdm.lib.byu.edu/ETD/image/etd238.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Pelcat, Yann S. "Soil landscape characterization of crop stubble covered fields using Ikonos high resolution panchromatic images." Thesis, Winnipeg : University of Manitoba, 2006. http://www.collectionscanada.ca/obj/s4/f2/dsk3/MWU/TC-MWU-224.pdf.

Full text
Abstract:
Thesis (M.Sc.)--University of Manitoba, 2006.
A thesis submitted to the Faculty of Graduate Studies in partial fulfillment of the requirements for the degree of Master of Science, Department of Soil Science. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
41

Kadlec, Jiri. "Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5727.

Full text
Abstract:
This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Using the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables the users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7 – 1.2 %. The output snow probability map data sets are published online using web applications and web services.
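The inverse distance weighting step described above can be sketched in a few lines: each grid cell's snow presence probability is a distance-weighted average of nearby reports. The coordinates, power parameter and 0/1 snow flags below are toy assumptions, not the dissertation's custom method.

```python
import numpy as np

def idw_snow_probability(obs_xy, obs_snow, grid_xy, power=2.0, eps=1e-6):
    """Inverse distance weighting: estimate snow presence probability at each
    grid cell from scattered 0/1 reports (stations, volunteers, ski tracks)."""
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)  # (cells, obs)
    w = 1.0 / (d + eps) ** power
    return (w * obs_snow).sum(axis=1) / w.sum(axis=1)

obs_xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
obs_snow = np.array([1.0, 0.0, 1.0])                 # snow / no snow reports
grid_xy = np.array([[2.0, 1.0], [8.0, 1.0], [5.0, 5.0]])
print(idw_snow_probability(obs_xy, obs_snow, grid_xy))
```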
APA, Harvard, Vancouver, ISO, and other styles
42

Adolphe, Fabienne. "Programme pour l'analyse de la qualité et la stratégie des collecte des données de mono-cristaux avec détecteurs bi-dimensionnels." Grenoble INPG, 1997. http://www.theses.fr/1997INPG0197.

Full text
Abstract:
Crystallography is an interdisciplinary scientific field with many areas of application, such as chemistry, physics, biology, geology and materials science. Today it draws on numerous areas of expertise, including chemistry, physics, mathematics and computer science. With the development of ever more powerful X-ray sources (synchrotron radiation sources) and of new, more sophisticated and efficient detectors (two-dimensional detectors), crystallographers have needed, on the one hand, new techniques and procedures to carry out their experiments and collect their data and, on the other hand, new computer programs to process and analyse them. This thesis was motivated by the request of crystallographers who wanted a fast and easy-to-use program to assist them during diffraction experiments on small-molecule crystals. The program, which brings together various algorithms related to crystallography and to the analysis and correction of images collected with two-dimensional detectors, was designed to give crystallographers as much information as possible about the quality and orientation of the crystal under study and to assist them during data acquisition.
APA, Harvard, Vancouver, ISO, and other styles
43

Bodnarova, Adriana. "Texture analysis for automatic visual inspection and flaw detection in textiles." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
44

Barros, Netto Stelmo Magalhães. "Métodos computacionais para identificação, quantificação e análise de mudanças no tecido da lesão pulmonar através de imagens de tomografia computadorizada." Universidade Federal do Maranhão, 2016. http://tedebc.ufma.br:8080/jspui/handle/tede/1700.

Full text
Abstract:
Lung cancer is one of the most common types of cancer worldwide. Temporal evaluation has become a very useful tool for anyone who needs to analyze a lung lesion, whether a malignant lesion under treatment or an indeterminate but probably benign lesion under follow-up. The objective of this work is to develop computational methods to detect, quantify and analyze local and global density changes of pulmonary lesions over time. Four groups of methods were developed to perform this task. The first identifies local density changes and is denominated voxel-based. The second is composed of the Jensen divergence and the hypothesis test, with global and local approaches. The third group has a single method, principal component analysis. The last group also has a single method, denominated modified quality threshold, which identifies local density changes. To reach these objectives, a methodology composed of five steps was proposed. The first step is the acquisition of images of the lesion at various instants; two image databases were acquired and two lesion models were created to evaluate the methods. The first database has 24 lesions under treatment (public database) and the second has 13 benign nodules under monitoring (private database). The second step is the rigid registration of the lesion images. The next step is the application of the four proposed groups of methods. For the public database, the second group of methods detected more density changes than the fourth group, which in turn detected more regions than the first group, which detected more than the third group. For the private database, the fourth group detected more regions of density change than the first group, the third group detected fewer regions of change than the first group, and the second group had the lowest number of detected regions. In addition to the density changes found, the proposed classification model with texture features reached an accuracy above 98% in predicting the diagnosis. The results show that there are changes in both databases; however, the changes detected by each group of methods differ in intensity and location between the databases. This conclusion is supported by the high accuracy obtained in predicting the lesion diagnosis for both databases.
O câncer de pulmão é um dos tipos de câncer de maior incidência no mundo. A avaliação temporal aparece como ferramenta bastante útil quando se deseja analisar uma lesão. A análise pode ocorrer quando uma lesão maligna está em tratamento ou quando surgem lesões indeterminadas, mas essas são provavelmente benignas. O objetivo deste trabalho é desenvolver métodos computacionais para detectar, quantificar e analisar mudanças de densidade locais e globais das lesões pulmonares ao longo do tempo. Desta forma, foram desenvolvidos quatro conjuntos de métodos para realização da tarefa de detectar mudanças de densidade em lesões pulmonares. O primeiro conjunto identifica mudanças de densidade locais e foi denominado de métodos baseados em voxel. O segundo conjunto é composto da divergência de Jensen e do teste de hipótese com abordagens locais e globais. Com o mesmo propósito de detectar mudanças de densidade locais em lesões pulmonares, o terceiro conjunto possui um único método, a análise de componentes principais. O último conjunto também possui um único método, denominado de quality threshold modificado, e identifica as mudanças locais de densidade. Para cumprir o objetivo deste trabalho, propõe-se uma metodologia composta de cinco etapas. A primeira etapa consiste na aquisição das imagens da lesão em diversos instantes. Duas bases de lesões foram utilizadas e dois modelos de lesões foram propostos para avaliação dos métodos. A primeira base possui 24 lesões em tratamento (base pública) e a segunda possui 13 nódulos benignos (base privada) em acompanhamento. A segunda etapa corresponde ao registro rígido das imagens da lesão. A próxima etapa é a aplicação dos quatro conjuntos de métodos propostos. Como resultado, o segundo conjunto de métodos detectou mais mudanças de densidade que o quarto conjunto, que por sua vez detectou mais regiões que o primeiro conjunto e este mais que o terceiro conjunto, para a base pública de lesões. Em relação à base privada, o quarto conjunto de métodos detectou mais regiões de mudança de densidade que o primeiro conjunto. O terceiro conjunto detectou menos regiões de mudança quando comparado ao primeiro conjunto e o segundo conjunto teve o menor número de regiões detectadas. Em adição às mudanças de densidade encontradas, o modelo de classificação proposto com medidas clássicas de textura para predição do diagnóstico da lesão teve acurácia acima de 98%. Os resultados encontrados indicam que existem mudanças de densidade em ambas as bases de lesões pulmonares. Entretanto, as mudanças detectadas por cada um dos métodos propostos possuem características de intensidade e localização diferentes em ambas as bases. Essa conclusão é motivada pela alta acurácia obtida em seu diagnóstico para as bases utilizadas.
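As an illustration of the voxel-based idea described in the English abstract above, the sketch below (Python, assuming NumPy) flags voxels whose CT density changed by more than a threshold between two rigidly registered volumes of the same lesion. The threshold and the synthetic volumes are arbitrary assumptions; the thesis methods are considerably more elaborate.

    # Sketch only: a basic voxel-based change map between two registered CT volumes.
    import numpy as np

    def voxel_change_map(ct_t0, ct_t1, hu_threshold=50):
        # Flag voxels whose density (in Hounsfield units) changed beyond the threshold.
        diff = ct_t1.astype(np.int32) - ct_t0.astype(np.int32)
        increased = diff > hu_threshold       # densification between exams
        decreased = diff < -hu_threshold      # density loss between exams
        return increased, decreased

    # Example with tiny synthetic volumes
    t0 = np.zeros((4, 4, 4), dtype=np.int16)
    t1 = t0.copy()
    t1[1, 1, 1] = 120
    inc, dec = voxel_change_map(t0, t1)
    print(int(inc.sum()), int(dec.sum()))     # 1 voxel increased, 0 decreased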
APA, Harvard, Vancouver, ISO, and other styles
45

Garrigues, Matthieu. "Accélération Algorithmique et Logicielle de l’Analyse Vidéo du Mouvement." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY018.

Full text
Abstract:
L’analyse du mouvement dans une vidéo consiste à estimer, à partir d’une séquence d’images, le déplacement apparent des objets projetés sur le plan focal d’une caméra, statique ou mobile. Un grand nombre de domaines comme la robotique, la vidéo surveillance, le cinéma ou encore les applications militaires, reposent sur cette analyse pour interpréter le contenu d’une vidéo. Ce problème a été l’un des premiers à être approché par les chercheurs en traitement d’image. De nombreuses solutions ont été proposées et permettent une estimation suffisamment précise et robuste pour un grand nombre d’applications. Cependant, la complexité algorithmique de ces solutions et/ou le manque d’optimisations de leur implantations logicielles rendent leur utilisation dans les applications à forte contraintes de calculs difficile voire impossible.Dans les travaux présentés dans cette thèse, nous avons optimisé trois types d’analyses de mouvement en prenant en compte, non seulement la complexité algorithmique, mais aussi tous les facteurs impactant le temps de calcul sur les processeurs actuels comme la parallélisation, la consommation mémoire, la régularité des accès mémoire ou encore le type des opérations arithmétiques. Cette diversité des problématiques nous a conduits à élaborer notre thèse à l’intersection des domaines du génie logiciel et du traitement d’image. Nos contributions ont permis le développement d’applications temps réel comme la reconnaissance d’actions, la stabilisation vidéo et la segmentation d’objets mobiles
Motion analysis in a video consists in estimating, from a sequence of images, the displacement of the objects projected on the focal plane of a camera, static or mobile. A large number of fields such as robotics, video surveillance, cinema or military applications rely on this analysis to interpret the content of a video. This problem was one of the first to be approached by researchers in image processing. Numerous solutions have been proposed and allow a sufficiently accurate and robust estimate for a large number of applications. However, the algorithmic complexity of these solutions and/or the lack of optimizations of their software implementations make their use in applications with high computational constraints difficult or impossible. In the work presented in this thesis, we optimized three types of motion analysis taking into account not only the algorithmic complexity, but also all the factors affecting computation time on current processors such as parallelization, memory consumption, the regularity of memory accesses, or the type of arithmetic operations. This led us to develop our thesis at the intersection of software engineering and image processing. Our contributions have enabled the development of real-time applications such as action recognition, video stabilization and segmentation of mobile objects.
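As a simple illustration of the kind of dense motion estimation the thesis accelerates, the sketch below computes optical flow between two grayscale frames with OpenCV's Farneback method. The file names are hypothetical and this is not the optimized implementation developed in the thesis.

    # Sketch only: dense optical flow between two consecutive frames with OpenCV.
    import cv2

    prev_gray = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
    next_gray = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean apparent displacement (pixels):", float(magnitude.mean()))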
APA, Harvard, Vancouver, ISO, and other styles
46

Buchanan, Aeron Morgan. "Tracking non-rigid objects in video." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:82efb277-abc9-4725-9506-5d114a83bd96.

Full text
Abstract:
Video is a sequence of 2D images of the 3D world generated by a camera. As the camera moves relative to the real scene and elements of that scene themselves move, correlated frame-to-frame changes in the video images are induced. Humans easily identify such changes as scene motion and can readily assess attempts to quantify it. For a machine, the identification of the 2D frame-to-frame motion is difficult. This problem is addressed by the computer vision process of tracking. Tracking underpins the solution to the problem of augmenting general video sequences with artificial imagery, a staple task in the visual effects industry. The problem is difficult because tracking in general video sequences is complicated by the presence of non-rigid motion, repeated texture and arbitrary occlusions. Existing methods provide solutions that rely on imposing limitations on the scenes that can be processed or that rely on human artistry and hard work. I introduce new paradigms, frameworks and algorithms for overcoming the challenges of processing general video and thus provide solutions that fill the gap between the `automated' and `manual' approaches. The work is easily sectioned into three parts, which can be considered separately or taken together for dealing with video without limitations. The initial focus is on directly addressing practical issues of human interaction in the tracking process: a new solution is developed by explicitly incorporating the user into an interactive algorithm. It is a novel tracking system based on fast full-frame patch searching and high-speed optimal track determination. This approach makes only minimal assumptions about motion and appearance, making it suitable for the widest variety of input video. I detail an implementation of the new system using k-d trees and dynamic programming. The second distinct contribution is an important extension to tracking algorithms in general. It can be noted that existing tracking algorithms occupy a spectrum in their use of global motion information. Local methods are easily confused by occlusions, repeated texture and image noise. Global motion models offer strong predictions to see through these difficulties and have been used in restricted circumstances, but are defeated by scenes containing independently moving objects or modest levels of non-rigid motion. I present a well principled way of combining local and global models to improve tracking, especially in these highly problematic cases. By viewing rank-constrained tracking as a probabilistic model of 2D tracks instead of 3D motion, I show how one can obtain a robust motion prior that can be easily incorporated in any existing tracking algorithm. The development of the global motion prior is based on rank-constrained factorization of measurement matrices. A common difficulty comes from the frequent occurrence of occlusions in video, which means that the relevant matrices are often not complete due to missing data. This defeats standard factorization algorithms. To fully explain and understand the algorithmic complexities of factorization in this practical context, I present a common notation for the direct comparison of existing algorithms and propose a new family of hybrid approaches that combine the superb initial performance of alternation methods with the convergence power of the Newton algorithm. Together, these investigations provide a wide-ranging, yet coherent exploration of tracking non-rigid objects in video.
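A minimal sketch of the alternation idea referred to above: alternating least squares fits a rank-constrained factorization W ≈ AB to a measurement matrix whose observed entries are indicated by a boolean mask. Names and the small regularizer are illustrative assumptions; the hybrid alternation/Newton algorithms proposed in the thesis are not reproduced here.

    # Sketch only: alternating least squares for rank-constrained factorization
    # of a measurement matrix with missing data (mask marks observed entries).
    import numpy as np

    def als_factorize(W, mask, rank=3, n_iter=50, reg=1e-6):
        m, n = W.shape
        rng = np.random.default_rng(0)
        A = rng.standard_normal((m, rank))
        B = rng.standard_normal((rank, n))
        I = reg * np.eye(rank)
        for _ in range(n_iter):
            for j in range(n):                       # update each column of B
                rows = mask[:, j]
                Aj = A[rows]
                B[:, j] = np.linalg.solve(Aj.T @ Aj + I, Aj.T @ W[rows, j])
            for i in range(m):                       # update each row of A
                cols = mask[i, :]
                Bi = B[:, cols]
                A[i, :] = np.linalg.solve(Bi @ Bi.T + I, Bi @ W[i, cols])
        return A, B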
APA, Harvard, Vancouver, ISO, and other styles
47

Silva, Samuel de Sousa. "Left ventricle functional analysis from coronary CT angiography." Doctoral thesis, Universidade de Aveiro, 2012. http://hdl.handle.net/10773/8077.

Full text
Abstract:
Doctorate in Informatics Engineering (Doutoramento em Engenharia Informática)
Coronary CT angiography is widely used in clinical practice for the assessment of coronary artery disease. Several studies have shown that the same exam can also be used to assess left ventricle (LV) function. LV function is usually evaluated using just the data from the end-systolic and end-diastolic phases, even though coronary CT angiography (CTA) provides data for multiple cardiac phases along the cardiac cycle. This unused wealth of data, mostly due to its complexity and the lack of proper tools, has yet to be explored in order to assess whether further insight is possible regarding regional LV functional analysis. Furthermore, different parameters can be computed to characterize LV function and, while some are well known to clinicians, others still need to be evaluated concerning their value in clinical scenarios. The work presented in this thesis covers two steps towards extended use of CTA data: LV segmentation and functional analysis. A new semi-automatic segmentation method is presented to obtain LV data for all cardiac phases available in a CTA exam, and a 3D editing tool was designed to allow users to fine-tune the segmentations. Regarding segmentation evaluation, a methodology is proposed in order to help choose the similarity metrics to be used to compare segmentations. This methodology allows the detection of redundant measures that can be discarded. The evaluation was performed with the help of three experienced radiographers, yielding low intra- and inter-observer variability. In order to allow exploring the segmented data, several parameters characterizing global and regional LV function are computed for the available cardiac phases. The data thus obtained are shown using a set of visualizations allowing synchronized visual exploration. The main purpose is to provide means for clinicians to explore the data and gain insight into their meaning, as well as their correlation with each other and with diagnosis outcomes. Finally, an interactive method is proposed to help clinicians assess myocardial perfusion by providing automatic assignment of lesions, detected by clinicians, to a myocardial segment. This new approach has obtained positive feedback from clinicians and is not only an improvement over their current assessment method but also an important first step towards systematic validation of automatic myocardial perfusion assessment measures.
A angiografia coronária por TC (angio-TC) é prática clínica corrente para a avaliação de doença coronária. Alguns estudos mostram que é também possível utilizar o exame de angio-TC para avaliar a função do ventrículo esquerdo (VE). A função ventricular esquerda (FVE) é normalmente avaliada considerando as fases de fim de sístole e de fim de diástole, apesar de a angio-TC proporcionar dados relativos a diferentes fases distribuídas ao longo do ciclo cardíaco. Estes dados não considerados, devido à sua complexidade e à falta de ferramentas apropriadas para o efeito, têm ainda de ser explorados para que se perceba se possibilitam uma melhor compreensão da FVE. Para além disso, podem ser calculados diferentes parâmetros para caracterizar a FVE e, enquanto alguns são bem conhecidos dos médicos, outros requerem ainda uma avaliação do seu valor clínico. No âmbito de uma utilização alargada dos dados proporcionados pelos angio- TC, este trabalho apresenta contributos ao nível da segmentação do VE e da sua análise funcional. É proposto um método semi-automático para a segmentação do VE de forma a obter dados para as diferentes fases cardíacas presentes no exame de angio- TC. Foi também desenvolvida uma ferramenta de edição 3D que permite aos utilizadores a correcção das segmentações assim obtidas. Para a avaliação do método de segmentação apresentado foi proposta uma metodologia que permite a detecção de medidas de similaridade redundantes, a usar no âmbito da avaliação para comparação entre segmentações, para que tais medidas redundantes possam ser descartadas. A avaliação foi executada com a colaboração de três técnicos de radiologia experientes, tendo-se verificado uma baixa variabilidade intra- e inter-observador. De forma a permitir explorar os dados segmentados, foram calculados vários parâmetros para caracterização global e regional da FVE, para as diversas fases cardíacas disponíveis. Os resultados assim obtidos são apresentados usando um conjunto de visualizações que permitem uma exploração visual sincronizada dos mesmos. O principal objectivo é proporcionar ao médico a exploração dos resultados obtidos para os diferentes parâmetros, de modo a que este tenha uma compreensão acrescida sobre o seu significado clínico, assim como sobre a correlação existente entre diferentes parâmetros e entre estes e o diagnóstico. Finalmente, foi proposto um método interactivo para ajudar os médicos durante a avaliação da perfusão do miocárdio, que atribui automaticamente as lesões detectadas pelo médico ao respectivo segmento do miocárdio. Este novo método obteve uma boa receptividade e constitui não só uma melhoria em relação ao método tradicional mas é também um primeiro passo para a validação sistemática de medidas automáticas da perfusão do miocárdio.
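One of the best-known global LV function parameters obtainable from the segmented end-diastolic and end-systolic volumes is the ejection fraction. The short sketch below shows the standard formula with assumed example volumes; it is only an illustration, not code from the thesis.

    # Sketch only: ejection fraction from end-diastolic and end-systolic volumes (mL).
    def ejection_fraction(edv_ml, esv_ml):
        stroke_volume = edv_ml - esv_ml
        return 100.0 * stroke_volume / edv_ml

    print(ejection_fraction(120.0, 50.0))  # about 58.3 %, within the normal range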
APA, Harvard, Vancouver, ISO, and other styles
48

Edvinsson, Marcus. "Implementing the circularly polarized light method for determining wall thickness of cellulosic fibres." Thesis, Uppsala universitet, Bildanalys och människa-datorinteraktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-174066.

Full text
Abstract:
The wall thickness of pulp fibers plays a major role in the paper industry, but it is currently not possible to measure this property without manual laboratory work. In 2007, researcher Ho Fan Jang patented a technique to automatically measure fiber wall thickness, combining the unique optical properties of pulp fibers with image analysis. In short, the method creates images through the use of an optical system, resulting in color values that represent the retardation at a particular wavelength rather than the intensity. A device based on this patent has since been developed by Eurocon Analyzer. This thesis investigates the software aspects of this technique, using sample images generated by the Eurocon Analyzer prototype. The software developed in this thesis has been subdivided into three groups for independent consideration. The first is the problem of solving for wall thickness from the colors in the images. The second is the image analysis process of identifying fibers and good points at which to measure them. The last investigates how statistical analysis can be applied to improve results and to derive other useful properties such as fiber coarseness. Several problems need to be overcome when using this technique. One such problem is that it may be difficult to disambiguate the colors produced by fibers of different thickness. This complication may be reduced by using image analysis and statistical analysis. Another challenge is that theoretical values often differ greatly from the observed values, which makes the computational aspect of the method problematic. The results of this thesis show that the effects of these problems can be greatly reduced and that the method offers promising results. The results clearly distinguish between and show the expected characteristics of different pulp samples, but more qualitative reference measurements are needed in order to draw conclusions on the correctness of the results.
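As a heavily simplified illustration of the first sub-problem (relating a retardation estimate to a thickness), the sketch below uses the textbook relation retardation = birefringence × optical path length. The birefringence value and the input retardation are placeholder assumptions, and the actual colour-to-thickness computation in the thesis is far more involved.

    # Sketch only: converting an optical retardation estimate (nm) to a path length,
    # using retardation = birefringence * path length; values are placeholders.
    def path_length_um(retardation_nm, birefringence=0.05):
        return (retardation_nm / birefringence) / 1000.0   # nm -> micrometres

    print(path_length_um(400.0))  # an assumed 400 nm retardation gives 8.0 um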
APA, Harvard, Vancouver, ISO, and other styles
49

Bernd, Arthur Barcellos. "Registros dinâmicos de representação e aprendizagem de conceitos de geometria analítica." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/175209.

Full text
Abstract:
A Teoria dos Registros de Representação, de Duval, compreende e analisa a peculiaridade dos objetos matemáticos, acessíveis através de suas diferentes representações. Fischbein e Hershkowitz, entre outros teóricos, desenvolveram as noções de Imagem Mental e Imagem Conceitual como a interpretação de um dado conceito matemático por um sujeito. Esta dissertação estabelece conexões entre estas duas discussões teóricas e, a partir disto, faz uma proposta de ensino para alguns conceitos de Geometria Analítica através do uso dos registros dinâmicos no software GeoGebra. A proposta, na forma de sequência didática, foi implementada em turma do terceiro ano do Ensino Médio de uma escola da rede particular de ensino do município de Porto Alegre. A análise da produção dos estudantes estabelece diálogo constante com os referenciais teóricos escolhidos. É uma pesquisa, sob a forma de estudo de caso, que busca investigar como ocorre o processo de aprendizagem de Geometria Analítica através utilização do software GeoGebra no ensino e aprendizagem de matemática, apresentando e discutindo os resultados obtidos de modo a contribuir para esta área de pesquisa.
The Theory of Registers of Representation, from Duval, understands and analyses the peculiarity of mathematical objects, which are accessible only through their different representations. Fischbein and Hershkowitz, among other researchers, developed the notions of Mental Image and Conceptual Image to explain how mathematical concepts are constructed by the subject. This dissertation establishes connections between these theories and uses this approach to propose a didactic sequence for teaching some concepts of Analytic Geometry using the dynamic representations offered by the GeoGebra software. The proposal was implemented in a third-year high school class at a private school in Porto Alegre. The research takes the form of a case study. The analysis of the students’ production establishes a constant dialogue with the theoretical framework and presents results that contribute to research on dynamic representations and the learning of school mathematics.
APA, Harvard, Vancouver, ISO, and other styles
50

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract:
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate examples of ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data with statistics different from those of the dataset, results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development, I suggest updated architectures, automated hyperparameter search and leveraging the bountiful unlabeled data potentially available from production lines.
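A minimal sketch of the distillation loss typically used to consolidate an ensemble into a single network: the student matches the teacher's softened outputs as well as the true labels. PyTorch is an assumed framework here, and the temperature and weighting values are illustrative defaults, not those used in the thesis.

    # Sketch only: Hinton-style knowledge distillation loss (soft + hard targets).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)                                      # match softened teacher outputs
        hard = F.cross_entropy(student_logits, labels)   # match the true labels
        return alpha * soft + (1.0 - alpha) * hard

    # Example with random logits for a 5-class problem
    s, t = torch.randn(8, 5), torch.randn(8, 5)
    y = torch.randint(0, 5, (8,))
    print(float(distillation_loss(s, t, y)))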
APA, Harvard, Vancouver, ISO, and other styles