Dissertations / Theses on the topic 'Image processing programs'

Consult the top 47 dissertations and theses for research on the topic 'Image processing programs.'


1

Darbhamulla, Lalitha. "A Java image editor and enhancer." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2705.

Abstract:
The purpose of this project is to develop a Java applet that provides all the tools needed for creating image fantasies. It lets the user pick a template and an image and combine them, then apply image processing operations such as rotation, zooming and blurring according to his or her requirements.
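The operations the applet exposes are standard compositing, geometry and filtering transforms; a minimal sketch of the same combine-then-process flow with Pillow (file names are placeholders, and this is not the applet's Java code):

```python
from PIL import Image, ImageFilter

# Combine a template with a user-chosen photo, then apply the basic
# operations the applet offers. File names are placeholders.
photo = Image.open("photo.jpg").convert("RGBA")
template = Image.open("template.png").convert("RGBA").resize(photo.size)
combined = Image.alpha_composite(photo, template)   # template over photo

rotated = combined.rotate(30, expand=True)          # rotate by 30 degrees
w, h = combined.size
zoomed = combined.resize((2 * w, 2 * h))            # 2x zoom via resampling
blurred = combined.filter(ImageFilter.GaussianBlur(radius=2))
blurred.save("fantasy.png")
```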
2

Chen, Dan Chary. "Pathological image processing and geometric modelling for improved management of colorectal cancer." Thesis, University of Oxford, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711813.

3

Pei, Mo Mo. "Modeling the performance of many-core programs on GPUs with advanced features." Thesis, University of Macau, 2012. http://umaclib3.umac.mo/record=b2592954.

4

Teng, Shyh Wei 1973. "Image indexing and retrieval based on vector quantization." Monash University, Gippsland School of Computing and Information Technology, 2003. http://arrow.monash.edu.au/hdl/1959.1/5764.

5

Schaefer, Charles Robert. "Magnification of bit map images with intelligent smoothing of edges." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9950.

6

Kirkpatrick, Michael Gorden. "Optical character recognition: an approach using self-adjusting segmentation of a matrix." Virtual Press, 1997. http://liblink.bsu.edu/uhtbin/catkey/1048390.

Abstract:
The problem of optical pattern recognition is a broad one. It ranges from identifying shapes in aerial photographs to recognizing letters in hand or machine printed words. This thesis examines many of the issues relating to pattern recognition and, specifically, those pertaining to the optical recognition of characters. It discusses several approaches to various parts of the problem as an illustration of the variety of methods of attack. Some of the particular strengths and weaknesses of those approaches are discussed as well. Finally, a new method of approaching OCR is introduced, developed, and studied. At the conclusion, the study is summarized, the results are examined, and suggestions are made for continued research.
7

Hobson, Adrian Surveying & Spatial Information Systems Faculty of Engineering UNSW. "Digital plan lodgement and dissemination." Awarded by: University of New South Wales. School of Surveying and Spatial Information Systems, 2004. http://handle.unsw.edu.au/1959.4/24231.

Abstract:
In Australia in recent years there has been increasing demand for more streamlined lodgement of cadastral plans and for their later dissemination. There are a number of approaches to meeting this demand, one of which is developed in detail in this dissertation. The current status of digital lodgement and Digital Cadastral Databases (DCDB) throughout Australia and New Zealand is reviewed: each of the Australian states and territories, and also New Zealand, is examined, looking at the process involved in the lodgement of survey plans and the state of the DCDB in each jurisdiction. From this examination the key issues in digital lodgement and dissemination are extracted and a needs analysis for an Australia-wide generic system is carried out, directed at technological change allied with sound cadastral principles. Extensible Markup Language (XML) is considered for the storage and transport of all the required data and to facilitate the dissemination of information over the Internet. The benefits of using XML are comprehensive, leading to its selection together with the related technologies LandXML, Extensible Structured Query Language (XSQL) and Extensible Stylesheet Language (XSL). Vector graphics are introduced as the means to display plans and maps on the Internet; a number of vector standards and Web mapping solutions are compared to determine the most suitable for this project, and a standard developed by the World Wide Web Consortium (W3C), Scalable Vector Graphics (SVG), is chosen. A prototype Web interface and the underlying database and Web server were developed using Oracle as the database and Apache as the Web server. Each aspect of the development is described, starting with the installation and configuration of the database, the Web server and the XSQL servlet. Testing was undertaken using LandXML cadastral data and displaying plans using SVG. Both Internet Explorer and Mozilla were trialled as the Web browser, with Mozilla being chosen because of incompatibilities between Internet Explorer, LandXML and SVG. An operational pilot was created; at this stage it requires manual intervention to centre and maximise a plan in the display area. The results indicate that an automated system is feasible, and this dissertation provides a basis for further development by Australian land administration organisations.
8

Sullivan, Kevin Michael. "An image delta compression tool: IDelta." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2543.

Abstract:
The purpose of this thesis is to present a modified version of the algorithm used in the open source differencing tool zdelta, entitled "iDelta". This algorithm will manage file data and will be built specifically to difference images in the Photoshop file format.
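zdelta itself uses hash-based window matching; as a rough illustration of the copy/insert idea behind such delta encoders (not iDelta's actual algorithm), a byte-level delta can be sketched with Python's difflib:

```python
import difflib

def byte_delta(reference: bytes, target: bytes):
    """Encode `target` as copy/insert ops against `reference`.

    Illustrative only: zdelta/iDelta use fast hash-based string
    matching over windows, not difflib's quadratic matcher.
    """
    sm = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
    ops = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2 - i1))        # offset, length in reference
        else:
            ops.append(("insert", target[j1:j2]))    # literal bytes
    return ops

def apply_delta(reference: bytes, ops) -> bytes:
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            _, off, length = op
            out += reference[off:off + length]
        else:
            out += op[1]
    return bytes(out)
```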
9

Kim, Jongwoo. "A robust Hough transform based on validity." Free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842545.

10

King, Kraig. "Linking Moving Object Databases with Ontologies." Fogler Library, University of Maine, 2007. http://www.library.umaine.edu/theses/pdf/KingK2007.pdf.

11

Chenini, Hanen. "A rapid design methodology for generating of parallel image processing applications and parallel architectures for smart camera." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22459.

Abstract:
Due to the complexity of recent image processing algorithms and in order to accelerate the MPSoC design process, rapid prototyping methodologies are needed that give the programmer different options for generating efficient parallel programs. This dissertation proposes a rapid prototyping methodology for designing MPSoC architectures, supporting automatic design space exploration, automatic performance evaluation, and automatic generation of the dedicated hardware/software system on a reconfigurable device (FPGA). To facilitate parallel programming, the proposed MPSoC approach is based on the CubeGen framework, which allows the candidate architectural and algorithmic scenarios to be generated and explored until the desired level of performance is reached, resulting in short development time. The methodology relies on parameterizable algorithmic skeletons, generated according to the application's characteristics, to exploit all the types of parallelism present in real algorithms. Experiments with common image processing algorithms show that the designed multiprocessor architecture can be programmed efficiently, achieves performance equivalent to designs based on hard-core processors, and compares favourably with traditional ASIC solutions, which are slow and expensive to produce.
12

Dannenberg, Matthew. "Pattern Recognition in High-Dimensional Data." Scholarship @ Claremont, 2016. https://scholarship.claremont.edu/hmc_theses/76.

Abstract:
Vast amounts of data are produced all the time. Yet this data does not easily equate to useful information: extracting information from large amounts of high dimensional data is nontrivial. People are simply drowning in data. A recent and growing source of high-dimensional data is hyperspectral imaging. Hyperspectral images allow for massive amounts of spectral information to be contained in a single image. In this thesis, a robust supervised machine learning algorithm is developed to efficiently perform binary object classification on hyperspectral image data by making use of the geometry of Grassmann manifolds. This algorithm can consistently distinguish between a large range of even very similar materials, returning very accurate classification results with very little training data. When distinguishing between dissimilar locations like crop fields and forests, this algorithm consistently classifies more than 95 percent of points correctly. On more similar materials, more than 80 percent of points are classified correctly. This algorithm will allow for very accurate information to be extracted from these large and complicated hyperspectral images.
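The Grassmann-manifold machinery the abstract relies on reduces to principal angles between subspaces, computable from an SVD; a minimal sketch of nearest-subspace classification under that geometry (synthetic data and a common subspace representation, not the thesis pipeline):

```python
import numpy as np

def subspace_basis(X, k=3):
    """Top-k right singular vectors: an orthonormal basis (bands x k)
    summarizing a set of spectra. One common Grassmann representation."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T

def principal_angles(Qa, Qb):
    # Angles between subspaces come from singular values of Qa^T Qb.
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, 0.0, 1.0))

def grassmann_distance(Qa, Qb):
    return np.linalg.norm(principal_angles(Qa, Qb))   # geodesic-style distance

# Binary classification: assign a test set of spectra (pixels x bands)
# to the class whose subspace is nearest.
rng = np.random.default_rng(0)
X0, X1 = rng.normal(size=(200, 10)), 2.0 + rng.normal(size=(200, 10))
X_test = 2.0 + rng.normal(size=(50, 10))
Q0, Q1, Qt = (subspace_basis(X) for X in (X0, X1, X_test))
label = int(grassmann_distance(Qt, Q1) < grassmann_distance(Qt, Q0))
```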
13

Burger, Joseph. "Real-time engagement area development program (READ-Pro)." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FBurger.pdf.

14

DeVaul, Richard W. (Richard Wayne) 1971. "Emergent design and image processing : a case study." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/61107.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999.
The digital revolution which has changed so many other aspects of modern life has yet to profoundly affect the working process of visual artists and designers. High-quality digital design tools exist, but they provide the user with an improved traditional design process, not a radically new way of designing. Conventional digital design tools are useful, but when design software emulates a paintbrush or photo studio many powerful possibilities of the computational medium are overlooked. This thesis explores emergent design, a design methodology based on a new process, enhanced interactive genetic programming. The emergent design methodology and tools allow designers to effectively create procedural design solutions (design solutions that take the form of a procedure or program) in a way that requires little or no programming on the part of the designer. The use of preliminary fitness functions in the interactive genetic programming process allows the designer to specify heuristics to guide the search and manage the complexity of the interactive genetic programming task. This document is structured in the form of a case study, in which the enhanced genetic programming process and emergent design methodology are described through their application to the specific problem of developing procedural image filters for still and moving images. Two interactive genetic programming systems for image filter evolution are described, GPI and evolution++, along with the Sol programming language that was used to create them. Results from the implementation and use of GPI and evolution++ are presented, including a number of filtered images and image sequences. These results suggest that fitness-agent enhanced interactive genetic programming and the emergent design methodology may play a useful role in the visual design process, allowing designers to explore a wider range of options with greater ease than is possible through a traditional, procedural, or conventional genetic programming design process.
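The representation at the heart of such systems is an evolvable expression tree over image primitives; a toy sketch of random filter generation and subtree mutation (illustrative primitives only, far simpler than GPI, evolution++ or the Sol language):

```python
import random
import numpy as np

# A toy genetic-programming image filter: individuals are expression
# trees over per-pixel primitives on images scaled to [0, 1].
PRIMS = {
    "add": lambda a, b: np.clip(a + b, 0.0, 1.0),
    "mul": lambda a, b: a * b,
    "inv": lambda a: 1.0 - a,
}

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return "x"                                   # terminal: the input image
    name = random.choice(list(PRIMS))
    arity = PRIMS[name].__code__.co_argcount
    return (name,) + tuple(random_tree(depth - 1) for _ in range(arity))

def evaluate(tree, x):
    if tree == "x":
        return x
    name, *children = tree
    return PRIMS[name](*(evaluate(c, x) for c in children))

def mutate(tree, depth=2):
    # Replace a random subtree: the basic GP variation operator.
    if tree == "x" or random.random() < 0.3:
        return random_tree(depth)
    name, *children = tree
    i = random.randrange(len(children))
    children[i] = mutate(children[i], depth)
    return (name, *children)

img = np.random.default_rng(1).random((64, 64))
filt = mutate(random_tree())
out = evaluate(filt, img)   # a candidate filter, ready for interactive fitness scoring
```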
15

Avern, Geoffrey J. "High-resolution computer imaging in 2D and 3D for recording and interpreting archaeological excavations =: Le rôle de l'image numérique bidimensionelle et tridimensionelle de haute résolution dans l'enregistrement et l'interprétation des données archéologiques." Doctoral thesis, Université Libre de Bruxelles, 2000. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211692.

16

Voils, Danny. "Scale Invariant Object Recognition Using Cortical Computational Models and a Robotic Platform." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/632.

Abstract:
This paper proposes an end-to-end, scale invariant, visual object recognition system, composed of computational components that mimic the cortex in the brain. The system uses a two stage process. The first stage is a filter that extracts scale invariant features from the visual field. The second stage uses inference based spatio-temporal analysis of these features to identify objects in the visual field. The proposed model combines Numenta's Hierarchical Temporal Memory (HTM) with HMAX, developed by MIT's Brain and Cognitive Science Department. While these two biologically inspired paradigms are based on what is known about the visual cortex, HTM and HMAX tackle the overall object recognition problem from different directions. Image pyramid based methods like HMAX make explicit use of scale, but have no sense of time. HTM, on the other hand, only indirectly tackles scale, but makes explicit use of time. By combining HTM and HMAX, both scale and time are addressed. In this paper, I show that HTM and HMAX can be combined to make a complete cortex inspired object recognition model that explicitly uses both scale and time to recognize objects in temporal sequences of images. Additionally, through experimentation, I examine several variations of HMAX and its…
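The "explicit use of scale" in image-pyramid methods such as HMAX comes from blur-and-subsample pyramids; a generic sketch (not the thesis code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Blur-and-subsample pyramid of the kind HMAX-style models use
    to obtain scale invariance. A generic sketch."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])          # halve the resolution
    return pyramid

levels = gaussian_pyramid(np.random.default_rng(0).random((128, 128)))
print([lvl.shape for lvl in levels])   # (128,128), (64,64), (32,32), (16,16)
```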
17

Bergström, Joel. "Disparity Tool: a disparity estimation program." Thesis, Mittuniversitetet, Institutionen för informationsteknologi och medier, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-12362.

18

Gaddam, Purna Chandra Srinivas Kumar, and Prathik Sunkara. "Advanced Image Processing Using Histogram Equalization and Android Application Implementation." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13735.

Abstract:
Images are often captured under conditions that leave near-zero visibility for the human eye, usually because of a lack of clarity caused by haze, fog and other daylight effects in the atmosphere. The useful information in images taken under such scenarios should be enhanced so that objects and other details can be recognized. Many image processing algorithms have been proposed to deal with such degradations caused by low light or haze in the imaging device, and they also provide a degree of nonlinear contrast enhancement. We took existing algorithms such as SMQT (Successive Mean Quantization Transform), the V-transform and histogram equalization to improve the visual quality of digital pictures with wide-range scenes and irregular lighting conditions. These algorithms were applied in two different ways and tested on different images suffering from low light and colour change, and succeeded in producing enhanced images; they improve colour and contrast and give very accurate results on low-light images. Histogram equalization is implemented by interpreting the histogram of the image as a probability density function: the cumulative distribution function is applied so that accumulated histogram values are obtained, and the pixel values are then remapped based on their probability and spread over the histogram. Of these algorithms we chose histogram equalization; taking MATLAB code as a reference, we ported it to an API (Application Program Interface) in Java and confirmed that the application works properly with reduced execution time.
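The CDF-based remapping the abstract describes can be written in a few lines of NumPy; a minimal sketch (not the thesis's MATLAB or Java code):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization as described: treat the histogram as a
    probability density, accumulate it into a CDF, and remap pixels.
    `gray` is a uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / hist.sum()                # cumulative distribution
    lut = np.round(255 * cdf).astype(np.uint8)      # spread values over [0, 255]
    return lut[gray]

low_light = np.random.default_rng(0).integers(0, 64, (120, 160), dtype=np.uint8)
enhanced = equalize_histogram(low_light)            # contrast stretched to full range
```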
19

Rimmasch, Kathryn. "A Process-Based CALL Assessment: A Comparison of Input Processing and Program Use Behavior by Activity Type." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2220.pdf.

20

Hariharan, Sriram. "Image retrieval by spatial similarity: a Java-based prototype." Ohio : Ohio University, 1998. http://www.ohiolink.edu/etd/view.cgi?ohiou1176496040.

21

Gambarato, Renato Luiz 1980. "Desenvolvimento de um programa computacional para classificação do uso da terra em imagens CBERS 2 /." Botucatu : [s.n.], 2008. http://hdl.handle.net/11449/90643.

Abstract:
Advisor: Célia Regina Lopes Zimback
Committee: Zacarias Xavier de Barros; Osmar Delmanto Junior
Among the various fields of study grouped under the common denominator of digital image processing is the area known as image analysis. This field aims to develop techniques for extracting information from images, giving people and equipment greater analytical power and thus better support for decisions. An important step in this process is segmentation, the division of the image into elementary parts so that each can be analysed in isolation. It is a complex process because it tries to translate to the computer an extremely sophisticated cognitive process of human vision, which groups what it sees by proximity, similarity and continuity; such groupings are then used in the classification and semantic analysis of the perceived objects. Satellite image processing is now an important and effective tool in agricultural planning and environmental monitoring: using satellite imagery and image processing techniques, a professional can analyse an area of interest and carry out preliminary planning without a site visit. Segmentation techniques divide the image into homogeneous parts, identifying cultivated areas, forest, rivers and lakes, and so easing the identification of areas of interest. In this context, this work aimed to facilitate the detection of eucalyptus plantations through the development of the SmartClass program, which composes images from the individual spectral bands collected by imaging satellites and processes them for this purpose, with the processing steps performed automatically. The detection of eucalyptus plantations was successful and the program proved easy to use.
22

Gupta, Davender Nath. "Expressing imaging algorithms using a C++ based image algebra programming environment /." Online version of thesis, 1990. http://hdl.handle.net/1850/11370.

23

Gambarato, Renato Luiz [UNESP]. "Desenvolvimento de um programa computacional para classificação do uso da terra em imagens CBERS 2." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/90643.

24

Oliveira, Victor Matheus de Araujo 1988. "Uma coleção de estudos de caso sobre o uso da linguagem Halide de domínio-específico em processamento de imagens e arquiteturas paralelas." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259825.

Abstract:
Advisor: Roberto de Alencar Lotufo
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
A recent development in the field of Domain-Specific Languages (DSLs) is programming languages that can target both multi-core CPUs and accelerators such as GPUs. This dissertation evaluates Halide, a Domain-Specific Language suited to image processing that claims to be a simpler and more efficient (performance-wise) way of expressing imaging algorithms than the traditional alternatives. To show both the potential and the limitations of the Halide language, we present several case studies with algorithms we believe are representative of key categories in today's image processing, especially image manipulation and editing. We compare the performance and implementation simplicity of Halide against multi-threaded, vectorized C++ (for multi-core architectures) and OpenCL (for CPUs and GPUs). We show that there are problems in the current implementation of the DSL and that some imaging algorithms cannot be expressed efficiently in the language, which limits its practical applicability. Nevertheless, where it is possible, Halide has performance similar to OpenCL implementations while being much simpler to develop for, a real gain in programmer productivity. Halide is therefore appropriate for a large class of image manipulation algorithms and is a step in the right direction towards easier development of high-performance imaging applications.
25

Kiang, Kai-Ming Mechanical & Manufacturing Engineering Faculty of Engineering UNSW. "Natural feature extraction as a front end for simultaneous localization and mapping." Awarded by: University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/26960.

Abstract:
This thesis is concerned with algorithms for finding natural features that are then used for simultaneous localisation and mapping, commonly known as SLAM in navigation theory. The task involves capturing raw sensory inputs, extracting features from these inputs and using the features for mapping and localising during navigation. The ability to extract natural features allows automatons such as robots to be sent to environments where no human beings have previously explored working in a way that is similar to how human beings understand and remember where they have been. In extracting natural features using images, the way that features are represented and matched is a critical issue in that the computation involved could be wasted if the wrong method is chosen. While there are many techniques capable of matching pre-defined objects correctly, few of them can be used for real-time navigation in an unexplored environment, intelligently deciding on what is a relevant feature in the images. Normally, feature analysis that extracts relevant features from an image is a 2-step process, the steps being firstly to select interest points and then to represent these points based on the local region properties. A novel technique is presented in this thesis for extracting a small enough set of natural features robust enough for navigation purposes. The technique involves a 3-step approach. The first step involves an interest point selection method based on extrema of difference of Gaussians (DOG). The second step applies Textural Feature Analysis (TFA) on the local regions of the interest points. The third step selects the distinctive features using Distinctness Analysis (DA) based mainly on the probability of occurrence of the features extracted. The additional step of DA has shown that a significant improvement on the processing speed is attained over previous methods. Moreover, TFA / DA has been applied in a SLAM configuration that is looking at an underwater environment where texture can be rich in natural features. The results demonstrated that an improvement in loop closure ability is attained compared to traditional SLAM methods. This suggests that real-time navigation in unexplored environments using natural features could now be a more plausible option.
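Step one of the three-step pipeline, interest points at extrema of a difference of Gaussians, can be sketched as follows (the TFA and DA stages are not reproduced; parameter values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(image, sigma=1.6, k=1.6, threshold=0.02):
    """Candidate interest points as local maxima of a strong
    difference-of-Gaussians (DoG) response. A sketch of the first
    step only, not the thesis implementation."""
    dog = gaussian_filter(image, k * sigma) - gaussian_filter(image, sigma)
    local_max = maximum_filter(dog, size=3) == dog   # 3x3 non-max suppression
    strong = np.abs(dog) > threshold
    return np.argwhere(local_max & strong)           # (row, col) candidates

pts = dog_interest_points(np.random.default_rng(0).random((100, 100)))
```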
26

Gajjela, Venkata Sarath, and Surya Deepthi Dupati. "Mobile Application Development with Image Applications Using Xamarin." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15838.

Abstract:
Image enhancement improves an image's appearance by increasing the dominance of some features or by decreasing the ambiguity between different regions of the image. Image enhancement techniques are widely used in image processing applications where the subjective quality of images is important for human interpretation. In many cases images lack clarity because of fog, low light and other daylight effects, and they should be enhanced so that objects can be recognized clearly. Histogram-based enhancement is mainly based on equalizing the histogram of the image and increasing its dynamic range; the technique is implemented by treating the normalized histogram values as a probability density function. The histogram equalization algorithm was implemented and tested on different images affected by low light, fog and colour contrast, and succeeded in producing enhanced images. Starting from MATLAB code for histogram equalization, we made the changes needed to implement an Application Program Interface (API) using the Xamarin software. The mobile application developed with Xamarin works efficiently and has a shorter execution time than the equivalent application developed in Android Studio, and it was successfully debugged in both the Android and iOS versions. The focus of this thesis is to develop a mobile application for the enhancement of low-light and foggy images using Xamarin.
27

Kimura, João Paulo Eiti. "Programas para geração de imagens por ultra-som e formação de feixe acústico." [s.n.], 2007. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258948.

Abstract:
Advisor: Eduardo Tavares Costa
Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Ultrasound medical diagnosis has become widespread and is the reference in many clinical procedures. Ultrasound imaging makes it possible to represent the anatomy of organs and tissues non-invasively, in real time and without ionizing radiation. Building ultrasound imaging equipment requires a reliable set of circuits and electronic components to excite the ultrasonic transducers and receive the reflected signals quickly and robustly, together with software capable of efficiently processing the received signals and generating images. The primary objective of this work was the development of open-source software for ultrasound image formation employing real-time imaging techniques. The acoustic beam produced by array transducers can be steered and/or focused by electronic activation of the transducer elements; as a secondary objective, digital circuits were therefore developed to generate the activation sequences of the transducer elements so that the acoustic beam is steered or focused at a given distance or angle from the face of the array. These digital circuits were implemented on FPGAs. The two-dimensional ultrasound imaging software, named ImageB, was developed in C++ with Qt Toolkit 4; it has a modular structure, can be extended through plug-ins, and is cross-platform and free. Besides the classical algorithms for converting the RF signal to a grayscale image, it also incorporates synthetic aperture and synthetic focusing techniques (SAFT and SF). The software and hardware developed in this work were tested with a 12-element linear array transducer with a centre resonance frequency of 1 MHz. The circuits were able to steer and focus the acoustic beam, and ImageB generated dynamic images of a known structure (a laboratory phantom), working in parallel and integrated with the developed hardware.
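The electronic focusing described above amounts to computing per-element firing delays from the focal-point geometry; a sketch of that calculation (array pitch and sound speed are assumed values, not taken from the thesis):

```python
import numpy as np

SPEED_OF_SOUND = 1540.0   # m/s in soft tissue (assumed)

def focusing_delays(element_x, focus):
    """Per-element firing delays that focus a linear array at `focus`.

    Elements farther from the focal point fire first so that all
    wavefronts arrive together, which is what the FPGA sequencer in
    the thesis implements. Geometry-only sketch.
    """
    fx, fz = focus
    dist = np.hypot(element_x - fx, fz)             # element-to-focus distances
    return (dist.max() - dist) / SPEED_OF_SOUND     # seconds; nearest fires last

# A 12-element, 1 MHz array like the one tested (pitch assumed 1 mm).
xs = (np.arange(12) - 5.5) * 1e-3
delays = focusing_delays(xs, focus=(0.0, 30e-3))    # focus 30 mm ahead, on axis
```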
28

Kaeli, Jeffrey W. "Computational strategies for understanding underwater optical image datasets." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85539.

Abstract:
Thesis: Ph. D. in Mechanical and Oceanographic Engineering, Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Mechanical Engineering; and the Woods Hole Oceanographic Institution), 2013.
A fundamental problem in autonomous underwater robotics is the high latency between the capture of image data and the time at which operators are able to gain a visual understanding of the survey environment. Typical missions can generate imagery at rates hundreds of times greater than highly compressed images can be transmitted acoustically, delaying that understanding until after the vehicle has been recovered and the data analyzed. While automated classification algorithms can lessen the burden on human annotators after a mission, most are too computationally expensive or lack the robustness to run in situ on a vehicle. Fast algorithms designed for mission-time performance could lessen the latency of understanding by producing low-bandwidth semantic maps of the survey area that can then be telemetered back to operators during a mission. This thesis presents a lightweight framework for processing imagery in real time aboard a robotic vehicle. We begin with a review of pre-processing techniques for correcting illumination and attenuation artifacts in underwater images, presenting our own approach based on multi-sensor fusion and a strong physical model. Next, we construct a novel image pyramid structure that can reduce the complexity necessary to compute features across multiple scales by an order of magnitude and recommend features which are fast to compute and invariant to underwater artifacts. Finally, we implement our framework on real underwater datasets and demonstrate how it can be used to select summary images for the purpose of creating low-bandwidth semantic maps capable of being transmitted acoustically.
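A simple stand-in for the illumination-correction step reviewed in the thesis is flat-field correction: estimate the slowly varying lighting with a heavy blur and divide it out (a generic sketch, not the multi-sensor physical model the author proposes):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flat_field_correct(image, sigma=50):
    """Estimate the slowly varying illumination pattern with a heavy
    Gaussian blur and divide it out. A generic illumination fix, far
    simpler than the thesis's physics-based correction."""
    illumination = gaussian_filter(image.astype(float), sigma) + 1e-6
    corrected = image / illumination
    return corrected / corrected.max()              # renormalize to [0, 1]
```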
29

Jerbi, Khaled. "Synthèse Matérielle Haut Niveau des Programmes Flot de Donnée RVC." PhD thesis, INSA de Rennes, 2012. http://tel.archives-ouvertes.fr/tel-00827163.

Abstract:
The evolution of video processing algorithms has led to the appearance of several standards, which share many common algorithms. Reusing those algorithms is difficult, however, because of the monolithic nature of the code. To solve this problem, the ISO/IEC MPEG community created the Reconfigurable Video Coding (RVC) standard, based on the principle that algorithms can be defined as a library of separate components following the dataflow model of computation; the components, rather than the whole decoder, are standardized. A dataflow program can be described as a directed graph whose vertices represent the processes (actors) to execute and whose edges represent the communication FIFOs between them; the data exchanged through the FIFOs are called tokens. Under this model the processes are fully independent of one another, and only the presence of tokens in the FIFOs triggers a process. To translate this model of computation into a functional description, this work uses a dedicated language, the CAL Actor Language (CAL), standardized by MPEG-RVC under the name RVC-CAL. The RVC standard is supported by a complete infrastructure for designing and compiling RVC-CAL into hardware and software implementations, but the existing hardware compilers have several limitations, mainly in the validation and compilation of certain high-level constructs of the RVC-CAL language. For validation, we propose a functional methodology that allows algorithms to be validated at every stage of the design flow, and we show its significant impact on reducing design time. Concerning the limitations of hardware compilation, we introduce an automatic transformation integrated into the core of an RVC-CAL compiler called Orcc (Open RVC-CAL Compiler). This transformation detects the constructs unsupported by hardware compilers and makes the necessary changes in Orcc's intermediate representation to obtain synthesizable code while preserving the overall behaviour of the actor. It resolved the most important bottleneck of hardware generation from dataflow programs. To evaluate our methodologies, we applied functional verification to several image and video processing applications, applied automatic hardware generation to the MPEG-4 Part 2 Simple Profile decoder and the LAR still-image codec, and present comparative studies for both application contexts.
30

Matějka, Lukáš. "Obslužný program pro colony-picking robot." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219703.

Abstract:
From an overview of the most commonly used kinematic concepts for robotic manipulators, the Cartesian robot was identified as the most suitable for the given task of colony picking. A control system consisting of two modular parts has been designed for the colony-picking robot. The ColonyCounter module is a set of image processing libraries for identifying microbial colonies in image data and precisely localizing individual colonies; this is achieved by combining multiple methods, most importantly connected-component labelling and the circular Hough transform. The second module, ColonyPicker, uses the output of ColonyCounter to plan the picking and placing of colonies, and subsequently controls the transfer process itself using an innovative task planning and execution system.
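The two image-processing methods the module combines, connected-component labelling and the circular Hough transform, are both available in OpenCV; a sketch of colony localization along those lines (the file name and all parameter values are placeholders):

```python
import cv2

# Colony localization in the spirit of the ColonyCounter module:
# threshold, label connected components, then refine with a circular
# Hough transform. Parameters are illustrative, not the thesis's.
gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)   # dish photo (placeholder)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
blobs = [c for c, s in zip(centroids[1:], stats[1:])   # skip background label 0
         if s[cv2.CC_STAT_AREA] > 20]                  # drop speckle noise

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=15, minRadius=3, maxRadius=30)
```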
31

Wipliez, Matthieu. "Infrastructure de compilation pour des programmes flux de données." PhD thesis, INSA de Rennes, 2010. http://tel.archives-ouvertes.fr/tel-00598914.

Abstract:
Dataflow programs are described as graphs in order to expose a number of properties, such as the available parallelism, data locality, and the guaranteed absence of deadlocks. This thesis presents the issues involved in building a compilation infrastructure for this type of program. The infrastructure aims to compile, analyse, transform and execute a dataflow program on different platforms, from programmable logic devices to multi-core processors with shared memory. We present the theoretical aspects of the compilation, analysis and scheduling of dataflow programs, as well as the practical aspects and the results obtained for code generation and execution.
32

João, Renato Stoffalette. "Projeto de operadores de imagens binárias usando combinação de classificadores." Repositório Institucional da UFABC, 2014.

Abstract:
Advisor: Prof. Dr. Carlos da Silva dos Santos
Master's dissertation, Universidade Federal do ABC, Programa de Pós-Graduação em Ciência da Computação, 2014
A recurring task in digital image processing is the design of an operator that maps one or more input images to a resulting image. Designing a specific operator often requires an expensive trial-and-error process and may yield a sub-optimal operator for the task at hand. Several methods have been proposed to automate operator design using machine learning: in the most common scenario, design starts from a training set of (input, output) image pairs, in which the output images represent the ideal product of the processing, and a supervised learning procedure produces an operator that performs the input-output mapping. This work investigates the automatic design of window operators (W-operators), which act on a local subset of image pixels determined by a window; an issue of major importance for this representation is the efficient choice of the windows. Building on previous work, we adopt operator combination and feature selection to obtain better performance. Our contribution is a new technique for determining the windows of a set of operators that are then combined to create a final operator. The procedure is inspired by the AdaBoost technique for combining classifiers and aims to iteratively determine, at each step, a window that minimizes the error of the final operator. We implement the described method and compare it with prior operator-design techniques from the state of the art, using public data sets on several image processing tasks.
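The AdaBoost-inspired loop can be sketched generically: each round trains a candidate operator per window on the current example weights, keeps the best, and reweights. The `train_fn` callback and window encoding below are hypothetical placeholders, not the thesis's API:

```python
import numpy as np

def adaboost_select(train_fn, candidate_windows, X, y, rounds=5):
    """AdaBoost-style window selection sketch: per round, pick the
    candidate window whose trained operator has the lowest weighted
    error, then reweight the examples. `train_fn(window, X, y, w)`
    must return a predictor; both are placeholders."""
    n = len(y)
    w = np.full(n, 1.0 / n)                          # example weights
    ensemble = []
    for _ in range(rounds):
        win, h = min(((win, train_fn(win, X, y, w)) for win in candidate_windows),
                     key=lambda p: np.sum(w * (p[1](X) != y)))
        err = max(np.sum(w * (h(X) != y)), 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)        # classic AdaBoost weight
        w *= np.exp(alpha * np.where(h(X) != y, 1.0, -1.0))
        w /= w.sum()
        ensemble.append((alpha, win, h))
    return ensemble                                   # combined by weighted vote
```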
33

Bruce, Elizabeth J. (Elizabeth Jane) 1972. "The characterization of particle clouds using optical imaging techniques." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/58860.

Abstract:
Thesis (M. Eng. in Ocean Engineering)--Joint Program in Marine Environmental Systems (Massachusetts Institute of Technology, Dept. of Ocean Engineering; and the Woods Hole Oceanographic Institution), 1998.
Optical imaging techniques can be used to provide a better understanding of the physical properties of particle clouds. The purpose of this thesis is to design, perform and evaluate a set of experiments using optical imaging techniques to characterize parameters such as shape factor and entrainment coefficient, which govern the initial descent phase of particle clouds in water. Several different aspects of optical imaging are considered and evaluated, such as the illumination, camera, and data acquisition components. A description of the experimental layout and procedure is presented along with a description of the image processing techniques used to analyze the data collected. Results are presented from a set of experiments conducted with particle sizes ranging from 250 to 980 μm. A shape factor is used to demonstrate how the cloud's shape changes from approximately spherical to approximately hemispherical over depth. The entrainment coefficient is shown to vary both as a function of depth and of particle size diameter. The experimental cloud velocity is compared to the output of a simplified version of the model STFATE, used to simulate the short-term fate of dredged materials in water. This analysis provides a method of evaluating the experimental results and examining the feasibility of using the experimental data to refine the input parameters to the model.
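One plausible reading of the shape factor is the classic circularity measure 4πA/P² of the cloud silhouette; a sketch with scikit-image (the thesis's exact definition may differ):

```python
import numpy as np
from skimage import measure

def shape_factor(binary_cloud):
    """Circularity-style shape factor 4*pi*A / P^2 of the largest blob
    in a thresholded cloud image: near 1 for a circular (sphere-like)
    silhouette, lower for hemispherical or irregular clouds. One
    plausible definition, not necessarily the thesis's formula."""
    props = max(measure.regionprops(measure.label(binary_cloud)),
                key=lambda r: r.area)
    return 4 * np.pi * props.area / props.perimeter ** 2
```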
34

Rodrigues, Davi Silva. "TAIGA: uma abordagem para geração de dados de teste por meio de algoritmo genético para programas de processamento de imagens." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/100/100131/tde-21122017-180309/.

Abstract:
Software testing activities are increasingly important given the massive presence of information systems in our daily lives. Image processing (IP) programs have a very complex input domain, and for that reason the traditional testing of such programs, conducted mostly by hand, is costly and error-prone: input images are usually built manually by the tester or selected at random from image databases, which often makes it harder to reveal faults in the software under test. A systematic mapping study identified a gap concerning automated test data generation in the image domain. The goal of this research is therefore to propose an approach, named TAIGA (Test imAge generatIon by Genetic Algorithm), for generating test data for IP programs by means of a genetic algorithm. In the proposed approach, traditional genetic operators (mutation and crossover) are adapted to the image domain and the fitness function is replaced by the evaluation of results from mutation testing. TAIGA was validated through experiments with eight distinct IP programs, in which gains of up to 38.61% in mutation score were observed in comparison with traditional testing. By automating test data generation, this work is expected to improve the quality of IP system development and to reduce the cost of software testing activities in this domain.
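TAIGA's image-domain genetic operators can be sketched as row-splice crossover and patch-perturbation mutation, with the fitness callback standing in for the mutation-testing evaluation (all operator and parameter choices here are illustrative, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(img_a, img_b):
    """Image-domain crossover: splice the two parents at a random row."""
    row = rng.integers(1, img_a.shape[0])
    return np.vstack([img_a[:row], img_b[row:]])

def mutate(img, amount=10):
    """Image-domain mutation: perturb a random 8x8 patch."""
    out = img.copy()
    r = rng.integers(0, img.shape[0] - 8)
    c = rng.integers(0, img.shape[1] - 8)
    patch = out[r:r + 8, c:c + 8].astype(int)
    noise = rng.integers(-amount, amount + 1, patch.shape)
    out[r:r + 8, c:c + 8] = np.clip(patch + noise, 0, 255)
    return out.astype(np.uint8)

def evolve(population, fitness, generations=20):
    """fitness(img) -> mutation score of the IP program under test
    (a placeholder for TAIGA's mutation-testing evaluation)."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:len(ranked) // 2]
        children = [mutate(crossover(parents[i], parents[-i - 1]))
                    for i in range(len(parents))]
        population = parents + children
    return max(population, key=fitness)
```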
APA, Harvard, Vancouver, ISO, and other styles
35

末永, 康仁 [Suenaga, Yasuhito]. "「社会情報基盤のための音声・映像の知的統合」の概要 [Overview of 'Intelligent Integration of Speech and Video for the Social Information Infrastructure']." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2003. http://hdl.handle.net/2237/10448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Barbosa, David Pereira. "Classificação supervisionada da cobertura do solo: uma abordagem aplicada em imagens de sensoriamento remoto [Supervised land-cover classification: an approach applied to remote sensing images]." Repositório Institucional da UFABC, 2016.

Find full text
Abstract:
Advisor: Prof. Dr. Alexandre Noma
Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Ciência da Computação, 2016.
Supervised classification consists of using a labeled database to evaluate the performance of a given classifier. By measuring that performance, we can infer whether the classifier is suitable for the problem at hand. Classical classification methods use a single classifier to analyze a problem. One way to improve classification performance is to combine classifiers, either on the basis of their outputs or of the intrinsic characteristics of each classifier. In this work, the Voting and AdaBoost methods were employed to combine classifiers, using labeled databases derived from satellite images of the Legal Amazon region to classify land cover. The results showed that the SVM algorithm alone achieves classification accuracy of around 90% in general cases. For specific cases, applying AdaBoost yielded an increase of approximately 10% in the accuracy rate for one class type compared with the best result of the traditional methods.
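As an illustration only (not the dissertation's code), the sketch below shows the two ensemble strategies named in the abstract applied with scikit-learn; synthetic data stands in for the labeled pixel features extracted from satellite images, and all parameters are assumptions.

```python
# Hypothetical sketch: SVM alone vs. Voting and AdaBoost ensembles,
# on synthetic stand-ins for per-pixel spectral features and labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(probability=True, random_state=0)
voting = VotingClassifier(
    estimators=[("svm", svm), ("tree", DecisionTreeClassifier(max_depth=5))],
    voting="soft")                       # combine predicted probabilities
ada = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, clf in [("SVM", svm), ("Voting", voting), ("AdaBoost", ada)]:
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```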
APA, Harvard, Vancouver, ISO, and other styles
37

Ferreira, Leticia Alves. "Uma aplicação stand-alone multiplataforma para a quantificação semi-automatica da perfusão miocardica em imagens de ecocardiografia com contraste [A multiplatform stand-alone application for semi-automatic quantification of myocardial perfusion in contrast echocardiography images]." [s.n.], 2008. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258946.

Full text
Abstract:
Advisors: Eduardo Tavares Costa, Marden Leonardi Lopes
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Current commercial ultrasound scanners incorporate tools for Myocardial Contrast Echocardiography (MCE) and, despite the proven potential of these techniques for non-invasive quantitative analysis of myocardial perfusion, their use is practically restricted to qualitative (visual) reading of clinical images. The objective of this thesis was to develop a new easy-to-use, multiplatform, stand-alone application for the quantification of myocardial perfusion in MCE image sequences, based on the algorithms developed by Lopes (2005) and their prototype implementation, MCEToolRS. The main goal of the proposed application, called JMCETool, is to execute these algorithms with no loss of precision, accuracy or robustness in the quantification process compared to the original prototype. The main algorithms of the quantification process are: automatic alignment, based on template matching, fast search algorithms and correlation; automatic placement of ROIs over the myocardial wall; and quantification of myocardial perfusion. Compared to the prototype, developed in Matlab®, the Java application JMCETool offers a more user-friendly interface, a proper software architecture, better exception handling, and a new form of manual correction of image alignment. The application was tested with fifteen MCE sequences (288 images), fourteen from animal studies (dogs) and one from a human study. The results were comparable to those of Lopes (2005): quantitative tests showed a mean precision of 1 pixel (translation) and 1 degree (rotation) in the alignment process, with accuracy around ± 1 pixel and ± 1 degree.
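The alignment step described above can be illustrated with a minimal template-matching sketch, assuming OpenCV's normalized cross-correlation; this is a hypothetical example, not the JMCETool code.

```python
# Hypothetical sketch: estimate the translation aligning one frame of a
# sequence to a reference frame via normalized cross-correlation.
import cv2
import numpy as np

def align_translation(reference, frame, margin=20):
    """Estimate the (dx, dy) shift that maps `frame` onto `reference`."""
    h, w = frame.shape
    template = frame[margin:h - margin, margin:w - margin]
    result = cv2.matchTemplate(reference, template, cv2.TM_CCOEFF_NORMED)
    _min_val, _max_val, _min_loc, max_loc = cv2.minMaxLoc(result)
    return max_loc[0] - margin, max_loc[1] - margin

# Example with a synthetic shifted frame:
ref = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
shifted = np.roll(ref, shift=(3, -2), axis=(0, 1))  # down 3 rows, left 2 cols
print(align_translation(ref, shifted))  # expect (dx, dy) == (2, -3)
```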
Master's degree
Biomedical Engineering
Master in Electrical Engineering
APA, Harvard, Vancouver, ISO, and other styles
38

Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Full text
Abstract:
Convolutional artificial neural networks can be applied to image-based object classification to inform automated actions, such as the handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high-resolution image inputs, and to use an optimizer configuration with good convergence properties. The findings also demonstrate ensemble classifiers being effectively consolidated into unified models using the distillation technique. An analogous arrangement with optimization against multiple output targets, incorporating additional information, showed accuracy gains comparable to ensembling. For use of the classifier on test data whose statistics differ from those of the training dataset, the results indicate that augmentation of the input data during classifier creation helps performance, but would, in the current case, likely need to be guided by information about the distribution shift to have a sufficiently positive impact to enable a practical application. For future development, I suggest updated architectures, automated hyperparameter search, and leveraging the bountiful unlabeled data potentially available from production lines.
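The distillation technique mentioned above can be summarized in a short loss function. The sketch below assumes PyTorch and is illustrative rather than the thesis's implementation: a student network is trained against the temperature-softened outputs of an ensemble teacher, blended with the hard labels.

```python
# Illustrative knowledge-distillation loss (assumes PyTorch; not the
# thesis code). The KL term matches the student to the teacher's
# softened distribution; the CE term keeps the hard labels in play.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend soft-target KL (at temperature T) with hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # T^2 rescales gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example shapes: batch of 8, 5 classes.
s = torch.randn(8, 5, requires_grad=True)
t = torch.randn(8, 5)  # stand-in for averaged ensemble logits
y = torch.randint(0, 5, (8,))
print(distillation_loss(s, t, y))
```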
APA, Harvard, Vancouver, ISO, and other styles
39

Mittner, Ondřej. "Určování velikosti plochy a rozměrů vybraných objektů v obraze [Determining the area and dimensions of selected objects in an image]." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2010. http://www.nusl.cz/ntk/nusl-218637.

Full text
Abstract:
The master's thesis describes opto-electronic hardware instruments for the contactless measurement of surfaces. It concentrates on instruments used for the opto-electrical transformation of the captured scene and on the software processing of digital pictures. It presents selected methods for pre-processing, segmenting, and subsequently refining these pictures. It then deals with measuring the surface areas and sizes of selected objects in these pictures and converting the results from pixels to SI units. Possible measurement deviations are described as well. Part of the thesis is a flow chart, commented in detail, of a program for automatic and manual measurement of the surfaces and sizes of objects in a picture. The main product of this thesis is the application Merovo, which measures the area and proportions of the objects it contains. This application is analysed and described in detail in the thesis.
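The measurement chain described above (segmentation, pixel-area counting, conversion to SI units) can be sketched as follows, assuming OpenCV; the code and the calibration factor are hypothetical, not taken from the Merovo application.

```python
# Hypothetical sketch: segment the largest bright object, measure its
# area in pixels, and convert to mm^2 with a calibration scale.
import cv2
import numpy as np

def object_area_mm2(gray, mm_per_pixel):
    """Area of the largest bright object, converted from px^2 to mm^2."""
    _thresh_val, binary = cv2.threshold(gray, 0, 255,
                                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _hierarchy = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                            cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    area_px = cv2.contourArea(largest)
    return area_px * mm_per_pixel ** 2

# Example: a synthetic 60x40 px bright rectangle, 0.1 mm per pixel.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (109, 89), 255, thickness=-1)
print(object_area_mm2(img, mm_per_pixel=0.1))  # about 23 mm^2 (polygon area)
```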
APA, Harvard, Vancouver, ISO, and other styles
40

Ferreira, Breno Mendes. "Modelagem e implementação de um sistema de processamento digital de sinais baseado em FPGA para geração de imagens por ultrassom usando Simulink [Modeling and implementation of an FPGA-based digital signal processing system for ultrasound image generation using Simulink]." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2874.

Full text
Abstract:
Ultrasound (US) is a well-established technique that has been widely used for testing, characterizing and visualizing the internal structures of biological and non-biological materials. The US research group at the Federal University of Technology - Paraná developed the ULTRA-ORS system which, although suitable for research on multichannel excitation and reception, requires very long computation times because the processing runs on a personal computer. This work presents the modeling, implementation and validation of a digital signal processing system based on a high-performance FPGA (Field-Programmable Gate Array) device for US image reconstruction using the beamforming technique. The Simulink software and the DSP Builder tool were used to simulate the following models and translate them into a hardware description language: FIR (Finite Impulse Response) digital filter, CIC (Cascaded Integrator-Comb) interpolation filter, variable delay, apodization, coherent summation, decimation, demodulation with envelope detection, and logarithmic compression. After validation in Simulink, the design was synthesized for a Stratix IV FPGA and implemented on the Terasic DE4-230 board. The SignalTap II tool in the Quartus II software was used to acquire the signals processed by the FPGA. For the graphical and quantitative evaluation of the method's accuracy, real raw US data, acquired from the ULTRA-ORS at a 40 MHz sampling frequency and 12-bit resolution, were compared against the same functions implemented as Matlab scripts, using the normalized root mean squared error (NRMSE) as the cost function. As the main result of the modeling, in addition to the individual responses of each implemented block, comparisons between the images reconstructed by ULTRA-ORS and by the FPGA processing are presented for four apodization windows. The excellent agreement between the simulated and experimental results, with NRMSE values below 6.2% and a total processing latency of 0.83 µs, corroborates the simplicity, modularity and effectiveness of the proposed modeling for use in research on US signal processing for real-time image reconstruction.
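As a floating-point illustration of the receive chain listed above (variable delay, apodization, coherent summation, envelope detection, logarithmic compression), the following delay-and-sum sketch is a simplification and is not the FPGA model; all signal parameters are invented.

```python
# Simplified delay-and-sum beamforming sketch (illustrative only).
import numpy as np
from scipy.signal import hilbert

def das_beamform(rf, delays_samples, apodization):
    """rf: (channels, samples) raw data; returns a log-compressed A-line (dB)."""
    n_channels, n_samples = rf.shape
    summed = np.zeros(n_samples)
    for ch in range(n_channels):
        # Variable delay (whole samples) and apodization, then coherent sum.
        summed += apodization[ch] * np.roll(rf[ch], -int(delays_samples[ch]))
    envelope = np.abs(hilbert(summed))        # demodulation / envelope detection
    envelope /= envelope.max() + 1e-12
    return 20.0 * np.log10(envelope + 1e-6)   # logarithmic compression

# Synthetic example: 8 channels sampled at 40 MHz, a 5 MHz echo arriving
# one sample later on each successive channel.
fs, f0, n = 40e6, 5e6, 2048
t = np.arange(n) / fs
rf = np.array([np.sin(2 * np.pi * f0 * t)
               * np.exp(-((t - 10e-6 - ch / fs) ** 2) / (0.5e-6) ** 2)
               for ch in range(8)])
a_line = das_beamform(rf, delays_samples=range(8), apodization=np.hanning(8))
print(a_line.shape)
```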
APA, Harvard, Vancouver, ISO, and other styles
41

Khan, Muhammad Javed Iqbal. "Attention modulated associative computing and content-associative search in image archive." Thesis, 1995. http://hdl.handle.net/10125/9755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

"GL4D: a GPU-based architecture for interactive 4D visualization." 2011. http://library.cuhk.edu.hk/record=b5896690.

Full text
Abstract:
Chu, Alan.
"October 2010."
Thesis (M.Phil.)--Chinese University of Hong Kong, 2011.
Includes bibliographical references (leaves 74-80).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.2
Chapter 1.1 --- Motivation --- p.3
Chapter 2 --- Background --- p.4
Chapter 2.1 --- OpenGL and OpenGL Shading Language --- p.4
Chapter 2.2 --- 4D Visualization --- p.6
Chapter 2.2.1 --- 3-manifold as Surface for 4D Objects --- p.7
Chapter 2.2.2 --- Visualizing 4D Objects in Euclidean 3-space --- p.8
Chapter 2.2.3 --- The 4D Rendering Pipeline --- p.9
Chapter 3 --- Related Work --- p.11
Chapter 3.1 --- General Purpose Processing on Graphics Processing Units --- p.11
Chapter 3.2 --- Volume Rendering --- p.12
Chapter 3.2.1 --- Indirect Volume Rendering --- p.13
Chapter 3.2.2 --- Direct Volume Rendering on Structured Grid --- p.13
Chapter 3.2.3 --- Direct Volume Rendering on Unstructured Grid --- p.18
Chapter 3.2.4 --- Acceleration of DVR --- p.19
Chapter 3.3 --- 4D Visualization --- p.22
Chapter 4 --- GL4D: Hardware Accelerated Interactive 4D Visualization --- p.26
Chapter 4.1 --- Preprocessing: From Equations to Tetrahedral Mesh --- p.28
Chapter 4.2 --- Core Rendering Pipeline: OpenGL for 4D Rendering --- p.29
Chapter 4.2.1 --- Vertex Data Upload --- p.30
Chapter 4.2.2 --- Slice-based Multi-pass Tetrahedral Mesh Rendering --- p.30
Chapter 4.2.3 --- Back-to-front Composition --- p.38
Chapter 4.3 --- Advanced Visualization Features in GL4D --- p.38
Chapter 4.3.1 --- Stereoscopic Rendering --- p.39
Chapter 4.3.2 --- False Intersection Detection --- p.40
Chapter 4.3.3 --- Transparent 4D Objects Rendering --- p.42
Chapter 4.3.4 --- Optimization --- p.44
Chapter 5 --- Results --- p.48
Chapter 5.1 --- Data Sets --- p.48
Chapter 5.1.1 --- 3-manifolds M3 in E4 --- p.49
Chapter 5.1.2 --- 2-manifolds M2 in E4 --- p.50
Chapter 5.2 --- Performance --- p.69
Chapter 6 --- Conclusion --- p.71
Chapter 7 --- Future Work --- p.72
Bibliography --- p.74
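The catalog entry above outlines GL4D's slice-based rendering pipeline for tetrahedral meshes. As a rough illustration of the geometric step common to 4D visualization (rotating in E4 and projecting into E3), the sketch below is illustrative only and unrelated to GL4D's actual GPU code.

```python
# Illustrative 4D rotation and 4D-to-3D perspective projection.
import numpy as np

def rotation_xw(theta):
    """Rotation in the x-w plane of E4 (one of the six 4D rotation planes)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[0, 0], R[0, 3], R[3, 0], R[3, 3] = c, -s, s, c
    return R

def project_4d_to_3d(points, eye_w=3.0):
    """Perspective-divide each (x, y, z, w) by its distance to a 4D eye on w."""
    scale = eye_w / (eye_w - points[:, 3])
    return points[:, :3] * scale[:, None]

# Example: the 16 vertices of a tesseract, rotated and projected.
verts = np.array([[x, y, z, w] for x in (-1, 1) for y in (-1, 1)
                  for z in (-1, 1) for w in (-1, 1)], dtype=float)
projected = project_4d_to_3d(verts @ rotation_xw(0.3).T)
print(projected.shape)  # (16, 3)
```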
APA, Harvard, Vancouver, ISO, and other styles
43

Schmalzried, Terry Eugene. "Classification of wheat kernels by machine-vision measurement." 1985. http://hdl.handle.net/2097/27530.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

So, Wing Wah Simon. "Content-based image indexing and retrieval for visual information systems." Thesis, 2000. https://vuir.vu.edu.au/15318/.

Full text
Abstract:
The dominance of visual data in recent times has fundamentally changed our everyday life. Five to ten years ago, the Internet and the World Wide Web were not part of the general public's daily vocabulary; now even a young child can use the Internet to search for information. This does not mean, however, that we have a mature technology for visual information search. On the contrary, visual information retrieval is still in its infancy. The problem lies in the semantic richness and complexity of visual information compared with alphanumeric information. In this thesis, we present new paradigms for content-based image indexing and retrieval for Visual Information Systems. The concept of Image Hashing and the development of Composite Bitplane Signatures with Inverted Image Indexing and Compression are the main contributions of this dissertation. These paradigms are analogous to signature-based indexing and inversion-based postings in text information retrieval. We formulate the problem of image retrieval as two-dimensional hashing, as opposed to the one-dimensional hash vector used in conventional hashing techniques. Wavelets are used to generate the bitplane signatures. A natural consequence of our bitplane signature scheme is superimposed bitplane signatures for efficient retrieval. Composite bitplanes can then serve as low-level feature information together with high-level semantic indexing to form a unified, integrated framework in our inverted model for content-based image retrieval.
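To make the wavelet-derived bitplane idea concrete, the sketch below derives a compact bit vector from one level of a 2D Haar transform; it is a hypothetical illustration of the general approach, not the signature scheme defined in the thesis.

```python
# Hypothetical bitplane-style signature from one 2D Haar wavelet level:
# threshold the detail subbands into sign bits usable for hashing.
import numpy as np

def haar_level(img):
    """One 2D Haar step: returns (approx, (horiz, vert, diag)) subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] + img[1::2, 1::2] - img[0::2, 1::2] - img[1::2, 0::2]) / 4
    return a, (h, v, d)

def bitplane_signature(img):
    """Sign bits of the detail subbands, packed into a flat 0/1 array."""
    _approx, details = haar_level(img.astype(float))
    return np.concatenate([(band > 0).ravel().astype(np.uint8)
                           for band in details])

img = np.random.randint(0, 256, (64, 64))
sig = bitplane_signature(img)
print(sig.shape)  # (3072,) = 3 subbands of 32x32 bits
```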
APA, Harvard, Vancouver, ISO, and other styles
45

"Axial deformation with controllable local coordinate frames." 2010. http://library.cuhk.edu.hk/record=b5894293.

Full text
Abstract:
Chow, Yuk Pui.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2010.
Includes bibliographical references (leaves 83-87).
Abstracts in English and Chinese.
Chapter 1. --- Introduction --- p.13-16
Chapter 1.1. --- Motivation --- p.13
Chapter 1.2 --- Objectives --- p.14-15
Chapter 1.3 --- Thesis Organization --- p.16
Chapter 2. --- Related Works --- p.17-24
Chapter 2.1 --- Axial and the Free Form Deformation --- p.17
Chapter 2.1.1 --- The Free-Form Deformation --- p.18
Chapter 2.1.2 --- The Lattice-based Representation --- p.18
Chapter 2.1.3 --- The Axial Deformation --- p.19-20
Chapter 2.1.4 --- Curve Pair-based Representation --- p.21-22
Chapter 2.2 --- Self Intersection Detection --- p.23-24
Chapter 3. --- Axial Deformation with Controllable LCFs --- p.25-46
Chapter 3.1 --- Related Methods --- p.25
Chapter 3.2 --- Axial Space --- p.26-27
Chapter 3.3 --- Definition of Local Coordinate Frame --- p.28-29
Chapter 3.4 --- Constructing Axial Curve with LCFs --- p.30
Chapter 3.5 --- Point Projection Method --- p.31-32
Chapter 3.5.1 --- Optimum Reference Axial Curve Point --- p.33
Chapter 3.6 --- Advantages using LCFs in Axial Deformation --- p.34
Chapter 3.6.1 --- Deformation with Smooth Interpolated LCFs --- p.34-37
Chapter 3.6.2 --- Used in Closed-curve Deformation --- p.38-39
Chapter 3.6.3 --- Hierarchy of Axial Curve --- p.40
Chapter 3.6.4 --- Applications in Soft Object Deformation --- p.41
Chapter 3.7 --- Experiments and Results --- p.42-46
Chapter 4. --- Self Intersection Detection of Axial Curve with LCFs --- p.47-76
Chapter 4.1 --- Related Works --- p.48-49
Chapter 4.2 --- Algorithms for Solving Self-intersection Problem with a set of LCFs --- p.50-51
Chapter 4.2.1 --- The Intersection of Two Planes --- p.52
Chapter 4.2.1.1 --- Constructing the Normal Plane --- p.53-54
Chapter 4.2.1.2 --- A Line Formed by Two Planes Intersection --- p.55-57
Chapter 4.2.1.3 --- Problems --- p.58
Chapter 4.2.1.4 --- Sphere as Constraint --- p.59-60
Chapter 4.2.1.5 --- Intersecting Line between Two Circular Discs --- p.61
Chapter 4.2.2 --- Distance between a Mesh Vertex and a Curve Point --- p.62-63
Chapter 4.2.2.1 --- Possible Cases of a Line and a Circle --- p.64-66
Chapter 4.3 --- Definition Proof --- p.67
Chapter 4.3.1 --- Define the Meaning of Self-intersection --- p.67
Chapter 4.3.2 --- Cross Product of Two Vectors --- p.68
Chapter 4.4 --- Factors Affecting the Accuracy of the Algorithm --- p.69
Chapter 4.4.1 --- High Curvature of the Axial Curve --- p.69-70
Chapter 4.4.2 --- Mesh Density of an Object --- p.71-73
Chapter 4.5 --- Architecture of the Self Intersection Algorithm --- p.74
Chapter 4.6 --- Experimental Results --- p.75-79
Chapter 5. --- Conclusions and Future Development --- p.80-82
Chapter 5.1 --- Contribution and Conclusions --- p.80-81
Chapter 5.2 --- Limitations and Future Developments --- p.82
References --- p.83-87
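The point projection method listed in Chapter 3.5 of the entry above can be illustrated with a simple closest-point query against a polyline approximation of the axial curve; the sketch below reflects my own assumptions, not the thesis algorithm.

```python
# Illustrative point projection onto a polyline axial curve: find the
# closest point, the segment it lies on, and the parameter along it.
import numpy as np

def project_to_polyline(vertex, curve_pts):
    """Return (closest point, segment index, parameter t in [0, 1])."""
    best = (None, -1, 0.0, np.inf)
    for i in range(len(curve_pts) - 1):
        p, q = curve_pts[i], curve_pts[i + 1]
        seg = q - p
        t = np.clip(np.dot(vertex - p, seg) / np.dot(seg, seg), 0.0, 1.0)
        foot = p + t * seg                       # foot of the projection
        dist = np.linalg.norm(vertex - foot)
        if dist < best[3]:
            best = (foot, i, t, dist)
    return best[:3]

curve = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1]], dtype=float)
print(project_to_polyline(np.array([1.5, 0.5, 0.2]), curve))
```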
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Tsai-Wen. "A Systems Level Analysis of Neuronal Network Function in the Olfactory Bulb: Coding, Connectivity, and Modular organization." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-000D-F166-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Heisen, Burkhard Clemens. "New Algorithms for Macromolecular Structure Determination." Doctoral thesis, 2009. http://hdl.handle.net/11858/00-1735-0000-0006-B503-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles