Dissertations / Theses on the topic 'Images'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Arias, Chipana Fredy Elmer 1979. "Uma proposta para um modelo de exibição de imagens em displays de dispositivos móveis baseado um método de atenção visual." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260047.

Full text
Abstract:
Orientador: Yuzo Iano
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Resumo: A apresentação de imagens em telas de dispositivos móveis tem limitações que dependem da experiência do usuário. A adaptação de imagens consiste em ajustar o tamanho e a resolução para a visualização na tela do dispositivo. O uso de mecanismos de atenção visual permite reduzir esforços no processamento do estímulo visual do olho humano. Os modelos de atenção visual também ajudam a reduzir a complexidade computacional em aplicações de processamento de imagens. Propõe-se neste trabalho uma melhoria de um modelo de atenção visual para ser aplicado à adaptação de imagens em telas de dispositivos móveis.
Abstract: The presentation of images on screens of mobile devices has limitations that depend on the user experience. Image adaptation consists of adjusting the size and resolution so that the image can be viewed on the device screen. The use of a visual attention mechanism makes it possible to reduce the effort needed to process the visual stimulus received by the human eye. Models of visual attention also help to reduce the computational complexity of image processing applications. We propose an improvement to a visual attention model to be applied to the adaptation of images for mobile device screens.
Mestrado
Telecomunicações e Telemática
Mestre em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
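The dissertation above drives image adaptation with a visual attention model. As a rough illustration of the general idea only (not the author's model), the sketch below crops a picture around the peak of a crude gradient-magnitude saliency map; the saliency choice, the function name and the target size are assumptions.

```python
import numpy as np
from scipy import ndimage

def attention_crop(image, out_h, out_w):
    """Crop a grayscale image to out_h x out_w around its most salient region.

    Saliency here is just smoothed gradient magnitude, a stand-in for a real
    visual-attention model.
    """
    saliency = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=3)
    # Centre of mass of the saliency map gives the attention focus.
    cy, cx = ndimage.center_of_mass(saliency)
    # Clamp the crop window so it stays inside the image.
    top = int(np.clip(cy - out_h / 2, 0, image.shape[0] - out_h))
    left = int(np.clip(cx - out_w / 2, 0, image.shape[1] - out_w))
    return image[top:top + out_h, left:left + out_w]

# Example: adapt a 1024x768 image for a 320x480 portrait display.
img = np.random.rand(768, 1024)
thumb = attention_crop(img, 480, 320)
print(thumb.shape)  # (480, 320)
```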
2

Dalkvist, Mikael. "Image Completion Using Local Images." Thesis, Linköpings universitet, Informationskodning, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-70940.

Full text
Abstract:
Image completion is the process of removing an area from a photograph and replacing it with suitable data. Earlier methods either search for this relevant data within the image itself, or extend the search to some form of additional data, usually a database. Methods that search for suitable data within the image itself run into problems when no suitable data can be found in the image. Methods that extend the search have, in earlier work, used either a database of labeled images or a massive database of photos from the Internet. For the labels in a database to be useful they typically need to be entered manually, which is a very time-consuming process. Methods that use databases with millions of images from the Internet have issues with copyrighted images, storage of the photographs and computation time. This work shows that a small database of the user's own private, or professional, photos can be used to improve the quality of image completions. A photographer today typically takes many similar photographs of similar scenes during a photo session. Therefore a smaller number of images is needed to find images that are visually and structurally similar than when random images downloaded from the Internet are used. Thus, this approach gains most of the advantages of using additional data for the image completions, while at the same time minimizing the disadvantages. It gains a better ability to find suitable data without having to process millions of irrelevant photos.
APA, Harvard, Vancouver, ISO, and other styles
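A toy sketch of the core idea in this thesis: fill a masked hole by copying the best-matching patch, judged by sum of squared differences over the known pixels around the hole, found in a small library of the user's own photos. The coarse search stride, grayscale input and rectangular context window are simplifying assumptions.

```python
import numpy as np

def complete_from_library(target, mask, library, patch=32):
    """Fill the True region of `mask` in grayscale `target` with the best-matching
    patch taken from `library` (a list of grayscale arrays).

    Matching uses sum of squared differences over the known pixels around the hole.
    """
    ys, xs = np.where(mask)
    top, left = ys.min(), xs.min()
    h, w = ys.max() - top + 1, xs.max() - left + 1

    # Context window: the hole plus a margin of known pixels around it.
    t0, l0 = max(top - patch, 0), max(left - patch, 0)
    ctx = target[t0:top + h + patch, l0:left + w + patch].astype(float)
    ctx_mask = mask[t0:top + h + patch, l0:left + w + patch]

    best, best_cost = None, np.inf
    for img in library:
        img = img.astype(float)
        for y in range(0, img.shape[0] - ctx.shape[0], 8):   # coarse stride for speed
            for x in range(0, img.shape[1] - ctx.shape[1], 8):
                cand = img[y:y + ctx.shape[0], x:x + ctx.shape[1]]
                cost = ((cand - ctx) ** 2)[~ctx_mask].sum()   # compare known pixels only
                if cost < best_cost:
                    best, best_cost = cand, cost

    out = target.copy()
    out[mask] = best[ctx_mask]          # paste the matching pixels into the hole
    return out
```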
3

Murphy, Brian P. "Image processing techniques for acoustic images." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26585.

Full text
Abstract:
Approved for public release; distribution is unlimited
The primary goal of this research is to test the effectiveness of various image processing techniques applied to acoustic images generated in MATLAB. The simulated acoustic images have the same characteristics as those generated by a computer model of a high-resolution imaging sonar. Edge detection and segmentation are the two image processing techniques discussed in this study. The two filtering methods tested are a modified version of Kalman filtering and median filtering.
APA, Harvard, Vancouver, ISO, and other styles
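The two operations named in this abstract, median filtering and edge detection, can be sketched as follows on a simulated noisy image; the synthetic scene and the threshold are assumptions, not the author's MATLAB sonar model.

```python
import numpy as np
from scipy import ndimage

# Simulated noisy "acoustic" image: a bright elliptical target over speckle-like clutter.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
scene = ((x - 128) ** 2 / 40 ** 2 + (y - 128) ** 2 / 20 ** 2 < 1).astype(float)
noisy = scene + rng.exponential(0.3, scene.shape)

# Median filtering suppresses speckle while keeping edges reasonably sharp.
smoothed = ndimage.median_filter(noisy, size=5)

# Simple edge detection: gradient magnitude from Sobel responses.
gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
edges = np.hypot(gx, gy)

# Crude segmentation of the target by thresholding the smoothed image.
segmented = smoothed > smoothed.mean() + smoothed.std()
print(edges.max(), segmented.sum())
```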
4

Zeng, Ziming. "Medical image segmentation on multimodality images." Thesis, Aberystwyth University, 2013. http://hdl.handle.net/2160/17cd13c2-067c-451b-8217-70947f89164e.

Full text
Abstract:
Segmentation is a hot issue in the domain of medical image analysis, with a wide range of applications in medical research. A great many medical image segmentation algorithms have been proposed, and many good segmentation results have been obtained. However, due to noise, density inhomogeneity, partial volume effects, and density overlap between normal and abnormal tissues in medical images, the segmentation accuracy and robustness of some state-of-the-art methods still have room for improvement. This thesis aims to deal with the above segmentation problems and improve segmentation accuracy. The project investigated medical image segmentation methods across a range of modalities and clinical applications, covering magnetic resonance imaging (MRI) brain tissue segmentation, MRI-based multiple sclerosis (MS) lesion segmentation, histology-based cell nuclei segmentation, and positron emission tomography (PET) based tumour detection. For brain MRI tissue segmentation, a method based on mutual information was developed to estimate the number of brain tissue groups, and an unsupervised segmentation method was then proposed to segment the brain tissues. For MS lesion segmentation, 2D/3D joint histogram modelling was proposed to model the grey-level distribution of MS lesions in multimodality MRI. For PET segmentation of head and neck tumours, two hierarchical methods based on improved active contour/surface modelling were proposed to segment the tumours in PET volumes. For histology-based cell nuclei segmentation, a novel unsupervised segmentation based on adaptive active contour modelling driven by morphology initialization was proposed to segment the cell nuclei, and the segmentation results were further processed for subtype classification. Among these segmentation approaches, a number of techniques (such as modified bias field fuzzy c-means clustering, multi-image spatially joint histogram representation, and convex optimisation of deformable models) were developed to deal with the key problems in medical image segmentation. Experiments show that the novel methods in this thesis have great potential for various image segmentation scenarios and can obtain more accurate and robust segmentation results than some state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
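One of the techniques listed above is a modified bias-field fuzzy c-means clustering. The plain, unmodified fuzzy c-means update on voxel intensities looks roughly like this sketch; the cluster count, the fuzzifier and the 1-D intensity feature are assumptions.

```python
import numpy as np

def fuzzy_cmeans_1d(intensities, n_clusters=3, m=2.0, n_iter=50, eps=1e-9):
    """Plain fuzzy c-means on a 1-D array of voxel intensities.

    Returns cluster centres and the fuzzy membership matrix (N x C).
    """
    x = intensities.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(0)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # memberships sum to 1 per voxel

    for _ in range(n_iter):
        um = u ** m
        centres = (um * x).sum(axis=0) / um.sum(axis=0)   # fuzzily weighted means
        dist = np.abs(x - centres[None, :]) + eps         # N x C distances
        inv = dist ** (-2.0 / (m - 1))                    # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u

# Hard labels (e.g. CSF / grey matter / white matter) come from the maximum membership.
voxels = np.concatenate([np.random.normal(mu, 5, 1000) for mu in (30, 80, 150)])
centres, u = fuzzy_cmeans_1d(voxels)
labels = u.argmax(axis=1)
```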
5

Barros, Erna Raisa Lima Rodrigues de. "Os muros também falam : grafite: as ruas como lugares de representação." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/284590.

Full text
Abstract:
Orientador: Etienne Ghislain Samain
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Artes
Resumo: Nossa pesquisa se constitui em uma tentativa de aproximação com o universo do grafite e aborda alguns aspectos de sua linguagem enquanto prática transgressora, arte contestatória, subversiva, mas também enquanto forma de expressão e representação artística que tem a capacidade de informar, carregar idéias e também de ser apresentado como veículo de expressão lúdica e poética presente nas cidades. Propomo-nos a refletir representações e intervenções estéticas que se referem a temáticas e questionamentos singulares com os quais nos munimos para pensar o papel das imagens na contemporaneidade, e o mundo através da arte
Abstract: Our research is an attempt to approach the world of graffiti and some aspects of its language as a transgressive practice and a contestatory, subversive art, but also as a form of artistic expression and representation that has the ability to inform and carry ideas, and that can be presented as a vehicle of ludic and poetic expression present in cities. We propose to reflect on representations and aesthetic interventions related to singular themes and questions, with which we equip ourselves to think about the role of images in contemporary society and about the world through art.
Mestrado
Multimeios
Mestre em Multimeios
APA, Harvard, Vancouver, ISO, and other styles
6

Sousa, Ramayana Lira de. "Violent images and the images of violence." Repositório Institucional da UFSC, 2012. http://repositorio.ufsc.br/xmlui/handle/123456789/93075.

Full text
Abstract:
Tese (doutorado) - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Programa de Pós-Graduação em Letras/Inglês e Literatura Correspondente, Florianópolis, 2009
This dissertation proposes a study of contemporary Brazilian films focusing on the portrayal of violence in urban spaces in a number of films set in different cities, namely Estorvo, Cidade de Deus, Carandiru, O Invasor, Amarelo Manga, Cidade Baixa and Tropa de Elite. The problem to be discussed in these films concerns the possibility of understanding violence as a political force that destabilizes notions such as the unified self, representation, agency, nationality and class. The films analyzed suggest a tension between images of violence (mimetic, coagulated, normalized violence) and violent images (violence as dissemination, irradiation, fragmentation, explosion). Following recent theorizations about biopolitics and community (which include the thought of Giorgio Agamben, Roberto Esposito, Jean-Luc Nancy and Jacques Rancière), this work explores how this tension suggests a (re)configuration of ways of living together.
Esta tese propõe um estudo de filmes brasileiros contemporâneos, com ênfase na apresentação da violência e sua relação com o espaço urbano representado, em uma série de obras que se passam em diferentes cidades, a saber, Estorvo, Cidade de Deus, Carandiru, O Invasor, Amarelo Manga, Cidade Baixa e Tropa de Elite. O problema a ser discutido diz respeito à possibilidade de entender a violência como uma força política capaz de desestabilizar noções como self, representação, agência, nacionalidade e classe. Os filmes analisados sugerem uma tensão entre imagens da violência (violência mimética, coagulada, normalizada) e imagens violentas (violência como disseminação, irradiação, fragmentação, explosão). Tendo como base teorizações recentes sobre biopolítica e comunidade (incluindo o pensamento de Giorgio Agamben, Roberto Esposito, Jean-Luc Nancy e Jacques Rancière), este trabalho explora como essa tensão sugere uma (re)configuração dos modos de viver junto.
APA, Harvard, Vancouver, ISO, and other styles
7

Karelid, Mikael. "Image Enhancement over a Sequence of Images." Thesis, Linköping University, Department of Electrical Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12523.

Full text
Abstract:

This Master's thesis was conducted at the National Laboratory of Forensic Science (SKL) in Linköping. When images to be analyzed at SKL that show an object of interest are of poor quality, there may be a need to enhance them. If several images containing the object are available, the total amount of information can be used to estimate one single enhanced image. A program to do this has been developed by studying methods for image registration and high-resolution image estimation. Tests of important parts of the procedure have been conducted. The final results are satisfying, and the key to a good high-resolution image seems to be the precision of the image registration. Improvements of this part may lead to even better results. More suggestions for further improvements have been proposed.


Detta examensarbete har utförts på uppdrag av Statens Kriminaltekniska Laboratorium (SKL) i Linköping. Då bilder av ett intressant objekt som ska analyseras på SKL ibland är av dålig kvalitet finns det behov av att förbättra dessa. Om ett flertal bilder på objektet finns tillgängliga kan den totala informationen från dessa användas för att skatta en enda förbättrad bild. Ett program för att göra detta har utvecklats genom studier av metoder för bildregistrering och skapande av högupplöst bild. Tester av viktiga delar i proceduren har genomförts. De slutgiltiga resultaten är goda och nyckeln till en bra högupplöst bild verkar ligga i precisionen för bildregistreringen. Genom att förbättra denna del kan troligtvis ännu bättre resultat fås. Även andra förslag till förbättringar har lagts fram.

APA, Harvard, Vancouver, ISO, and other styles
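A minimal sketch of the pipeline described in this thesis: register several frames of the same object to the first one and average them. Translation-only alignment by phase correlation and the use of scikit-image are simplifying assumptions; the high-resolution estimation step developed in the thesis is omitted.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

def fuse_sequence(frames):
    """Align every frame to frames[0] (translation only) and average them."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for frame in frames[1:]:
        # Sub-pixel translation between the reference and this frame.
        shift, _error, _diffphase = phase_cross_correlation(
            ref, frame.astype(float), upsample_factor=10)
        acc += ndimage.shift(frame.astype(float), shift)
    return acc / len(frames)

# Averaging N aligned frames reduces uncorrelated noise by roughly sqrt(N),
# which is the enhancement effect exploited before high-resolution estimation.
```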
8

Moëll, Mattias. "Digital image analysis for wood fiber images /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6309-2.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ahmad, Fauzi Mohammad Faizal. "Content-based image retrieval of museum images." Thesis, University of Southampton, 2004. https://eprints.soton.ac.uk/261546/.

Full text
Abstract:
Content-based image retrieval (CBIR) is becoming more and more important with the advance of multimedia and imaging technology. Among the many retrieval features associated with CBIR, texture retrieval is one of the most difficult. This is mainly because no satisfactory quantitative definition of texture exists at this time, and also because of the complex nature of texture itself. Another difficult problem in CBIR is query by low-quality images, which means attempting to retrieve images using a poor-quality image as the query. Not many content-based retrieval systems have addressed the problem of query by low-quality images. Wavelet analysis is a relatively new and promising tool for signal and image analysis. Its time-scale representation provides both spatial and frequency information, thus giving extra information compared to other image representation schemes. This research aims to address some of the problems of query by texture and query by low-quality images by exploiting the advantages that wavelet analysis has to offer, particularly in the context of museum image collections. A novel query by low-quality images algorithm is presented as a solution to the problem of poor retrieval performance using conventional methods. For the query by texture problem, this thesis provides a comprehensive evaluation of wavelet-based texture methods as well as a comparison with other techniques. A novel automatic texture segmentation algorithm and an improved block-oriented decomposition are proposed for use in query by texture. Finally all the proposed techniques are integrated in a content-based image retrieval application for museum image collections.
APA, Harvard, Vancouver, ISO, and other styles
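A compact sketch of the kind of wavelet texture signature discussed above: subband energies of a 2-D wavelet decomposition used as a feature vector, with retrieval by Euclidean distance. The wavelet, decomposition level and PyWavelets dependency are assumptions.

```python
import numpy as np
import pywt

def wavelet_texture_features(image, wavelet="db2", level=3):
    """Energy of each detail subband of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    feats = []
    for detail_level in coeffs[1:]:          # skip the approximation band
        for band in detail_level:            # (horizontal, vertical, diagonal)
            feats.append(np.mean(band ** 2))
    return np.array(feats)

def retrieve(query, database_features, k=5):
    """Return indices of the k database images closest to the query's features."""
    q = wavelet_texture_features(query)
    dists = np.linalg.norm(database_features - q, axis=1)
    return np.argsort(dists)[:k]
```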
10

Maury, Arnaud. "Qualite image et rectification des images meteosat." Nice, 1993. http://www.theses.fr/1993NICE4654.

Full text
Abstract:
This thesis deals with the rectification of images from the Meteosat satellite, that is, the method of assigning to each pixel of the image its geographic location. Meteosat is a geostationary meteorological satellite operated by the European Space Agency. In its nominal configuration, the Meteosat satellite acquires images whose lines are oriented exactly east-west and in which the Earth is perfectly centred. Small deviations from this nominal situation introduce geometric distortions. All the causes of these distortions are described and defined. In order to rectify the images, a deformation model is built and its parameters are estimated. Once the images have been rectified, the residual rectification error is quantified from the localisation of landmarks by a correlation process. All the results and methods have been validated on operational data by means of computer simulations, for the whole image processing chain: from raw images to rectified and resampled images.
APA, Harvard, Vancouver, ISO, and other styles
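The residual rectification error described above is quantified by locating landmarks through correlation. Below is a small sketch of that step using normalized cross-correlation template matching; the scikit-image call and the idea of comparing the found position against the model-predicted one are illustrative assumptions.

```python
import numpy as np
from skimage.feature import match_template

def locate_landmark(image, template):
    """Find a known landmark chip in a rectified image by normalized
    cross-correlation; returns the (row, col) of the best match window's corner."""
    response = match_template(image.astype(float), template.astype(float))
    row, col = np.unravel_index(np.argmax(response), response.shape)
    return row, col

# The residual rectification error is then the offset between the found position
# and the position predicted by the geometric model for that landmark.
```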
11

Allen, Elizabeth. "Image quality evaluation in lossy compressed images." Thesis, University of Westminster, 2017. https://westminsterresearch.westminster.ac.uk/item/q0vq5/image-quality-evaluation-in-lossy-compressed-images.

Full text
Abstract:
This research focuses on the quantification of image quality in lossy compressed images, exploring the impact of digital artefacts and scene characteristics upon image quality evaluation. A subjective paired comparison test was implemented to assess the perceived quality of JPEG 2000 against baseline JPEG over a range of different scene types. Interval scales were generated for both algorithms, which indicated a subjective preference for JPEG 2000, particularly at low bit rates, and these were confirmed by an objective distortion measure. The subjective results did not follow this trend for some scenes, however, and both algorithms were found to be scene dependent as a result of the artefacts produced at high compression rates. The scene dependencies were explored from the interval scale results, which allowed scenes to be grouped according to their susceptibilities to each of the algorithms. Groupings were correlated with scene measures applied in a linked study. A pilot study was undertaken to explore perceptibility thresholds of JPEG 2000 on the same set of images. This work was developed with a further experiment to investigate the thresholds of perceptibility and acceptability of higher-resolution JPEG 2000 compressed images. A set of images was captured using a professional-level full-frame Digital Single Lens Reflex camera, using a raw workflow and a carefully controlled image-processing pipeline. The scenes were quantified using a set of simple scene metrics to classify them according to whether they were average, higher than, or lower than average for a number of scene properties known to affect image compression and perceived image quality; these were used to make a final selection of test images. Image fidelity was investigated using the method of constant stimuli to quantify perceptibility thresholds and just noticeable differences (JNDs) of perceptibility. Thresholds and JNDs of acceptability were also quantified to explore suprathreshold quality evaluation. The relationships between the two thresholds were examined and correlated with the results from the scene measures, to identify more or less susceptible scenes. It was found that the level of, and differences between, the two thresholds were an indicator of scene dependency and could be predicted by certain types of scene characteristics. A third study implemented the soft-copy quality ruler as an alternative psychophysical method, by matching the quality of compressed images to a set of images varying in a single attribute, separated by known JND increments of quality. The imaging chain and image-processing workflow were evaluated using objective measures of tone reproduction and spatial frequency response. An alternative approach to the creation of ruler images was implemented and tested, and the resulting quality rulers were used to evaluate a subset of the images from the previous study. The quality ruler was found to be successful in identifying scene susceptibilities and observer sensitivity. The fourth investigation explored the implementation of four different image quality metrics: the Modular Image Difference Metric, the Structural Similarity Metric, the Multi-scale Structural Similarity Metric and the Weighted Structural Similarity Metric. The metrics were tested against the subjective results and all were found to have a linear correlation in terms of predictability of image quality.
APA, Harvard, Vancouver, ISO, and other styles
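Among the metrics examined in this thesis is the Structural Similarity Metric. The sketch below compares a grayscale image with its lossy JPEG re-encoding using scikit-image's SSIM implementation; the quality setting and the use of Pillow for encoding are assumptions.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def jpeg_ssim(gray_image, quality=30):
    """Compress a grayscale uint8 image as JPEG at the given quality,
    decode it again, and return the SSIM against the original."""
    buf = io.BytesIO()
    Image.fromarray(gray_image).save(buf, format="JPEG", quality=quality)
    decoded = np.array(Image.open(buf))
    return structural_similarity(gray_image, decoded, data_range=255)

# Lower JPEG quality generally yields a lower SSIM, but the size of the drop is
# scene dependent, which is exactly the effect the thesis investigates.
```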
12

Travisani, Tatiana Giovannone [UNESP]. "Imagem digital em movimento." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/86983.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Este trabalho reúne análises sobre as imagens digitais com questões estruturais e poéticas do movimento, fazendo um levantamento histórico e artístico desde os primeiros experimentos em cronofotografia até as manifestações atuais do universo digital que incorporam o movimento como temática. A imagem digital tratada não é de cunho sintético, mas sim uma que passou por algum processo de captura analógica, por meio de câmeras, e após isso foi digitalizada. A partir daí, a pesquisa reflete as transformações estéticas e sinestésicas que as imagens podem sofrer de acordo com as possibilidades oferecidas pela tecnologia digital. Procuramos discutir os processos e procedimentos artísticos sob a luz desse caráter tecnológico, em que as imagens ganham novas formas dinâmicas e novos padrões de movimento. Buscamos incluir os principais fatores que são singulares ao universo digital e permitem concepções artísticas inovadoras através das novas mídias surgidas. Entre as características estão a convergência, a desmaterialização, a ubiquidade e a replicabilidade dos remixes e samplers. Três obras foram discutidas com maior profundidade por oferecerem suporte conceitual, visual e nova proposta estética: Stop Motion Studies (David Crawford), Soft Cinema (Lev Manovich) e as Live Images (Luiz Duva). Durante a pesquisa, a autora também elaborou três obras construídas em momentos distintos, que acompanhavam os questionamentos feitos ao longo do processo: convergência, gradeativa e passagens. As obras sugerem um estreitamento intenso entre arte e pesquisa, prática e teoria, no intuito de atingir complexidade sobre o tema, para contribuir com estudos contemporâneos sobre as manifestações artísticas nas novas mídias.
This work analyses the digital image under the structural and poetical issues of movement, making a historical and artistic retrospective from the first experiments in chronophotography to the current manifestations of the digital universe that incorporate movement as a theme. The digital image treated here is not of a synthetic nature, but one that passed through some process of analogical capture by cameras and was then digitized. From there, the research reflects on the aesthetic and synesthetic transformations that images can undergo given the possibilities offered by digital technology. We seek to discuss the artistic processes and procedures in the light of this technological character, in which images gain new dynamic forms and new patterns of movement. We try to include the main factors that are singular to the digital universe and allow innovative artistic conceptions through the new media that have emerged. Among these characteristics are convergence, dematerialization, ubiquity and the replicability of remixes and samplers. Three works were discussed in greater depth because of their conceptual and visual support and new aesthetic proposals: Stop Motion Studies (David Crawford), Soft Cinema (Lev Manovich) and the Live Images (Luiz Duva). During the research, the author also elaborated three works built at different moments, which accompanied the questions raised throughout the process: Convergência, Gradeativa and Passagens. These works suggest an intense relationship between art and research, practice and theory, with the intention of reaching complexity on the subject and contributing to contemporary studies on artistic manifestations in the new media.
APA, Harvard, Vancouver, ISO, and other styles
13

Suri, Sahil. "Automatic image to image registration for multimodal remote sensing images." kostenfrei, 2010. https://mediatum2.ub.tum.de/node?id=967187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Mennborg, Alexander. "AI-Driven Image Manipulation : Image Outpainting Applied on Fashion Images." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85148.

Full text
Abstract:
The e-commerce industry frequently has to deal with displaying product images on a website where the images are provided by the selling partners. The images in question can have drastically different aspect ratios and resolutions, which makes it harder to present them while maintaining a coherent user experience. Manipulating images by cropping can sometimes result in parts of the foreground (i.e. the product or person within the image) being cut off. Image outpainting is a technique that allows images to be extended past their boundaries and can be used to alter the aspect ratio of images. Together with object detection for locating the foreground, this makes it possible to manipulate images without sacrificing parts of the foreground. For image outpainting, a deep learning model was trained on product images and can extend images by at least 25%. The model achieves an 8.29 FID score, a 44.29 PSNR score and a 39.95 BRISQUE score. For testing this solution in practice, a simple image manipulation pipeline was created which uses image outpainting when needed, and it shows promising results. Images can be manipulated in under a second running on a ZOTAC GeForce RTX 3060 (12GB) GPU and in a few seconds running on an Intel Core i7-8700K (16GB) CPU. There is also a special case of images where the background has been digitally replaced with a solid color; these can be outpainted even faster without deep learning.
APA, Harvard, Vancouver, ISO, and other styles
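The abstract notes a special case, images whose background has been digitally replaced with a solid colour, which can be extended without deep learning. Below is a small sketch of that case only, padding to a target aspect ratio with a colour sampled from a corner; the corner-sampling heuristic and the function name are assumptions.

```python
import numpy as np

def pad_to_aspect(image, target_ratio):
    """Extend an H x W x 3 image horizontally with its (assumed solid) background
    colour until width / height == target_ratio."""
    h, w = image.shape[:2]
    new_w = int(round(h * target_ratio))
    if new_w <= w:
        return image                          # already wide enough
    bg = image[0, 0]                          # sample the background from a corner
    pad_total = new_w - w
    left = pad_total // 2
    canvas = np.empty((h, new_w, 3), dtype=image.dtype)
    canvas[:] = bg
    canvas[:, left:left + w] = image
    return canvas

# Example: bring a 4:5 product shot to a 1:1 canvas.
img = np.full((500, 400, 3), 255, dtype=np.uint8)
square = pad_to_aspect(img, 1.0)
print(square.shape)  # (500, 500, 3)
```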
15

Martins, Mariana Zaparolli. "Audible Images: síntese de imagens controladas por áudio." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-01042008-010011/.

Full text
Abstract:
Neste trabalho é apresentada a biblioteca AIM, uma biblioteca de objetos Pd que combina ferramentas de análise de áudio e síntese de imagens para a geração de acompanhamentos visuais para uma entrada musical. O usuário estabelece conexões que determinam como os parâmetros musicais afetam a síntese de objetos gráficos, e controla estas conexões em tempo-real durante a performance. A biblioteca combina um protocolo de comunicação para intercâmbio de parâmetros musicais e visuais com uma interface fácil de usar, tornando-a acessível a usuários sem experiência em programação de computadores. Suas áreas possíveis de aplicação incluem a musicalização infantil e a indústria de entretenimento.
This thesis describes the AIM library, a Pd object library which combines audio analysis and image synthesis tools for generating visual accompaniments to musical input data. The user establishes connections that determine how musical parameters affect the synthesis of graphical objects, and controls these connections in real-time during performance. The library combines a straightforward communication protocol for exchanging musical and visual parameters with an easy-to-use interface that makes it accessible for users with no computer programming experience. Its potential application areas include children's musical education and the entertainment industry.
APA, Harvard, Vancouver, ISO, and other styles
16

Matheus, Bruno Roberto Nepomuceno. "Sistema JAVA para gerenciamento de esquema CADx em mamografia." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/18/18152/tde-22102015-093201/.

Full text
Abstract:
Estudos mostram que a maior parte dos erros de diagnóstico mamográfico estão ligados a dificuldades de classificação e não de detecção (MEYER, EBERLEIN, et al., 1990; KARSSEMEIJER, 2011; SCHIABEL, 2014). Uma possível solução a este problema é a estruturação de um esquema CADx (Computer-aided Diagnosis), ou seja, um sistema computacional que analisa as informações disponíveis e tenta apresentar um diagnóstico com base nos dados fornecidos pela imagem processada. Este trabalho tem como intuito apresentar um esquema CADe/Dx completo e funcional para uso por radiologistas. O software final poderá ser usado em qualquer sistema operacional, ou mesmo via Internet, permitindo que qualquer médico interessado acesse e utilize o sistema como segunda opinião sem restrições. A formação da biblioteca JAVA também visa a permitir que outros desenvolvedores possam fazer uso das ferramentas desenvolvidas em projetos futuros, facilitando ampliações e melhorias no esquema CADx. Vários módulos de processamento previamente desenvolvidos para o protótipo do esquema CADx-LAPIMO tiveram que ser reconstruídos, e outros elaborados completamente, produzindo novos resultados que são analisados neste trabalho, assim como suas vantagens e limitações. Estes módulos estão divididos em duas partes: o pré-processamento, que inclui a correção baseada na curva característica do digitalizador (técnica BCC), amplamente testada neste trabalho, e o processamento propriamente, incluindo detecção de microcalcificações e detecção e classificação de nódulos. O programa CADx desenvolvido neste trabalho foi separado em duas versões (cada uma com uma versão online correspondente): um é o esquema CADe/Dx que envolve tanto a detecção como a classificação da estrutura encontrada e o outro é um esquema CADx semiautomático, cujas regiões que devem ser classificadas são previamente demarcadas pelo usuário. Os principais resultados obtidos neste trabalho estão associados ao detector de microcalcificações e ao classificador de nódulos. Para o detector de microcalcificações atingiu-se 89% de sensibilidade com 1,4 falso-positivo por imagem quando usado em imagens digitais de sistemas FFDM e 99% de sensibilidade com 5,4 falsos-positivos quando usado em imagens digitalizadas de mamas densas . Já o classificador de nódulos apresentou 72% de acurácia, usando apenas 4 atributos associados a contorno, densidade e textura, resultando em um sistema robusto e de fácil treinamento.
Studies show that most diagnostic errors are linked to classification difficulties and not detection (MEYER, EBERLEIN, et al., 1990; KARSSEMEIJER, 2011; SCHIABEL, 2014). A possible solution for this problem is the construction of a CAD (Computer-Aided Diagnosis) scheme, a computational system that analyses the available information and tries to present a diagnosis based on the data offered by the processed image. This thesis presents a complete and functional mammographic CADe/Dx scheme for radiologist use. The software is designed to function on any operating system, or even online, allowing any interested radiologist to access the software as a second opinion. The formation of a JAVA library also allows future developers to use all the tools developed for this system, easing future improvements in the CADx scheme. Several modules previously developed for the CADx-LAPIMO prototype had to be rebuilt or completely redeveloped, generating new results that are analyzed here, as are their advantages and limitations. The modules are divided into two parts: the preprocessing, which includes the correction based on the scanner's characteristic curve, tested in detail in this thesis, and the processing itself, including the detection of microcalcifications and the detection and classification of masses. The CADx scheme developed here was separated into two versions (each one with a corresponding online version): one is a CADe/Dx scheme that involves both detection and classification of the found structures, and the other is a semi-automatic CADx scheme, where the regions to be classified are previously marked by the user. The main results obtained in this thesis are associated with the microcalcification detector and the mass classifier. The microcalcification detector obtained an 89% sensitivity with 1.4 false positives per image when used on digital FFDM images and a 99% sensitivity with 5.4 false positives per image on digitized images of dense breasts. The mass classification module presented 72% accuracy, using only 4 attributes associated with contour, density and texture, resulting in a robust system that is easy to train.
APA, Harvard, Vancouver, ISO, and other styles
17

Feng, Sitao. "Image Analysis on Wood Fiber Cross-Section Images." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-156428.

Full text
Abstract:
Lignification of wood fibers has a significant impact on wood properties. To measure the distribution of lignin in compression wood fiber cross-section images, a crisp segmentation method has been developed. It segments the lumen, the normally lignified cell wall and the highly lignified cell wall of each fiber. In order to refine this given segmentation, the following two fuzzy segmentation methods were evaluated in this thesis: Iterative Relative Multi Objects Fuzzy Connectedness and Weighted Distance Transform on Curved Space. The crisp segmentation is used for the multi-seed selection. The crisp and the two fuzzy segmentations are then evaluated by comparison with the manual segmentation. It is shown that Iterative Relative Multi Objects Fuzzy Connectedness has the best performance in segmenting the lumen, whereas Weighted Distance Transform on Curved Space outperforms the two other methods regarding the normally lignified cell wall and the highly lignified cell wall.
APA, Harvard, Vancouver, ISO, and other styles
18

Yallop, Marc Richard. "Image processing techniques for passive millimetre wave images." Thesis, University of Reading, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409545.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Boukouvala, Erisso. "Image restoration techniques and application on astronomical images." Thesis, University of Reading, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414571.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Gieffers, Amy Christina 1975. "Image alignment algorithms for ultrasound images with contrast." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/46193.

Full text
Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (leaves 70-74).
by Amy Christina Gieffers.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
21

Nasir, Haidawati Mohamad. "Super-resolution image reconstruction from low-resolution images." Thesis, University of Strathclyde, 2012. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=17814.

Full text
Abstract:
The thesis addresses the problem of obtaining a high-resolution image from a set of one or more low-resolution images. The thesis focuses on three building blocks of super-resolution algorithms, i.e., image registration for super-resolution, image fusion for super-resolution and super-resolution image reconstruction. These three parts are addressed separately, and singular value decomposition-based fusion is introduced before performing interpolation or single-image super-resolution. Accurate image registration is crucial for super-resolution. An image registration approach for super-resolution based on a combination of the Scale Invariant Feature Transform (SIFT), Belief Propagation (BP) and Random Sample Consensus (RANSAC) is described to automatically register the low-resolution images. The results show that it is effective for the removal of mismatched features in the image. A novel SVD-based image fusion for super-resolution is developed for integrating the significant features from low-resolution images. The SVD-based image fusion is shown to enhance the super-resolution results. The implementation of a novel interpolation method based on a linear combination of bicubic interpolation and its first-order derivatives, and the use of a first-order difference equation to extract features from the low-resolution images, are described and shown to improve the method of single-image super-resolution using sparse representation. The proposed method is shown to reduce the computational time and to enhance the prior estimation of the high-resolution image as well as the final super-resolution results. The performance of the algorithms is evaluated subjectively and objectively using synthetic sequences and also real sequences.
APA, Harvard, Vancouver, ISO, and other styles
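A sketch of the registration building block described above: SIFT feature matching followed by RANSAC to reject mismatches, here with OpenCV. The homography motion model and the Lowe ratio value are assumptions, and the belief-propagation stage used in the thesis is omitted.

```python
import cv2
import numpy as np

def register_pair(ref_gray, mov_gray):
    """Estimate a homography aligning mov_gray to ref_gray via SIFT + RANSAC.
    Both inputs are uint8 grayscale images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(mov_gray, None)

    # Lowe's ratio test removes ambiguous matches before RANSAC.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects the remaining mismatched features.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    aligned = cv2.warpPerspective(mov_gray, H,
                                  (ref_gray.shape[1], ref_gray.shape[0]))
    return H, aligned
```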
22

Richardson, Richard Thomas. "Image Enhancement of Cancerous Tissue in Mammography Images." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/39.

Full text
Abstract:
This research presents a framework for enhancing and analyzing time-sequenced mammographic images for the detection of cancerous tissue, specifically designed to assist radiologists and physicians with the detection of breast cancer. By using computer-aided diagnosis (CAD) systems as a tool to help in the detection of breast cancer in computed tomography (CT) mammography images, previous CT mammography images enhance the interpretation of the next series of images. The first stage of this dissertation applies image subtraction to images from the same patient over time. Image types are defined as temporal subtraction, dual-energy subtraction, and Digital Database for Screening Mammography (DDSM). Image enhancement begins by applying image registration and subtraction, using Matlab 2012a registration for temporal images and dual-energy subtraction for dual-energy images. DDSM images require no registration or subtraction as they are used for baseline analysis. The image data come from three different sources, and all images had been annotated by radiologists for each image type using an image mask to identify malignant and benign regions. The second stage involved the examination of four different thresholding techniques. The amplitude thresholding method manipulates objects and backgrounds in such a way that object and background pixels have grey levels grouped into two dominant and different modes. In these cases, it was possible to extract the objects from the background using a threshold that separates the modes. The local thresholding introduced posed no restrictions on region shape or size, because it maximized edge features by thresholding local regions separately. The overall histogram analysis showed the minima and maxima of the image and provided four feature types: mean, variance, skewness, and kurtosis. K-means clustering provided sequential splitting, initially performing dynamic splits. These dynamic splits were then further split into smaller, more variant regions until the regions of interest were isolated. Region-growing methods used recursive splitting to partition the image top-down by using the average brightness of a region. Each thresholding method was applied to each of the three image types. In the final stage, the training set and test set were derived by applying the four thresholding methods to each of the three image types. This was accomplished by running the Matlab 2012a grey-level co-occurrence matrix (GLCM) functions and utilizing 21 target feature types, which were obtained from the Matlab texture feature functions. An additional four feature types were obtained from the histogram-based feature types. These 25 feature types were applied to each of the two classes, malignant and benign. WEKA 3.6.10 was used along with the J48 classifier and 10-fold cross-validation to find the precision, recall, and f-measure values. The best results were obtained from these two combinations: temporal subtraction with amplitude thresholding, and temporal subtraction with region-growing thresholding. To summarize, the researcher's contribution was to assess the effectiveness of various thresholding methods in the context of a three-stage approach, to help radiologists find cancerous tissue lesions in CT and MRI mammography images.
APA, Harvard, Vancouver, ISO, and other styles
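A brief sketch of the final-stage feature extraction described above: grey-level co-occurrence matrix (GLCM) texture properties plus the four histogram statistics, here with scikit-image and SciPy rather than the Matlab functions used in the dissertation. The distances, angles and chosen GLCM properties are assumptions.

```python
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def region_features(region):
    """Texture (GLCM) and histogram features for a uint8 grayscale region."""
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop).mean()
             for prop in ("contrast", "homogeneity", "energy", "correlation")}
    # Histogram-based features: mean, variance, skewness, and kurtosis.
    pixels = region.ravel().astype(float)
    feats.update(mean=pixels.mean(), variance=pixels.var(),
                 skewness=stats.skew(pixels), kurtosis=stats.kurtosis(pixels))
    return feats

# The resulting feature dictionaries can be exported and fed to a classifier
# such as the J48 decision tree used in the dissertation's WEKA experiments.
```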
23

Hillman, Peter. "Segmentation of motion picture images and image sequences." Thesis, University of Edinburgh, 2002. http://hdl.handle.net/1842/15026.

Full text
Abstract:
For Motion Picture Special Effects, it is often necessary to take a source image of an actor, segment the actor from the unwanted background, and then composite over a new background. The resultant image appears as if the actor was filmed in front of the new background. The standard approach requires the unwanted background to be a blue or green screen. While this technique is capable of handling areas where the foreground (the actor) blends into the background, the physical requirements present many practical problems. This thesis investigates the possibility of segmenting images where the unwanted background is more varied. Standard segmentation techniques tend not to be effective, since motion picture images have extremely high resolution and high accuracy is required to make the result appear convincing. A set of novel algorithms which require minimal human interaction to initialise the processing is presented. These algorithms classify each pixel by comparing its colour to that of known background and foreground areas. They are shown to be effective where there is a sufficient distinction between the colours of the foreground and background. A technique for assessing the quality of an image segmentation in order to compare these algorithms to alternative solutions is presented. Results are included which suggest that in most cases the novel algorithms have the best performance, and that they produce results more quickly than the alternative approaches. Techniques for segmentation of moving images sequences are then presented. Results are included which show that only a few frames of the sequence need to be initialised by hand, as it is often possible to generate automatically the input required to initialise processing for the remaining frames. A novel algorithm which can produce acceptable results on image sequences where more conventional approaches fail or are too slow to be of use is presented.
APA, Harvard, Vancouver, ISO, and other styles
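A minimal sketch of the classification idea described above: label each pixel by comparing its colour to the mean colours of user-marked foreground and background samples. The hard nearest-mean rule is a deliberate simplification; the thesis handles soft blends and far more careful colour models.

```python
import numpy as np

def classify_pixels(image, fg_samples, bg_samples):
    """Return a boolean foreground mask for an H x W x 3 float image.

    fg_samples / bg_samples are N x 3 arrays of colours taken from regions
    the user marked as definitely foreground / background.
    """
    fg_mean = fg_samples.mean(axis=0)
    bg_mean = bg_samples.mean(axis=0)
    d_fg = np.linalg.norm(image - fg_mean, axis=2)
    d_bg = np.linalg.norm(image - bg_mean, axis=2)
    return d_fg < d_bg      # closer to the foreground colour than the background

# A soft alpha matte could instead be estimated as d_bg / (d_fg + d_bg), which
# degrades more gracefully where hair or motion blur mixes the two regions.
```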
24

Baabd, A., M. Y. Tymkovich, and О. Г. Аврунін. "Image Processing of Panoramic Dental X-Ray Images." Thesis, ХГУ, 2018. http://openarchive.nure.ua/handle/document/6204.

Full text
Abstract:
A panoramic image allows one to clearly see the state of the teeth, the dental rudiments located in the jaw, the temporomandibular joints, as well as the maxillary sinuses. It is noted that this type of study involves a small dose of radiation. Indications for this type of study are dental implantation, bite correction, suspicion of bone tissue inflammation, monitoring of the growth and development of the teeth, as well as the diagnosis of other dental problems.
APA, Harvard, Vancouver, ISO, and other styles
25

AbouRayan, Mohamed. "Real-time Image Fusion Processing for Astronomical Images." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1461449811.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Nahar, Vikas. "Content based image retrieval for bio-medical images." Diss., Rolla, Mo. : Missouri University of Science and Technology, 2010. http://scholarsmine.mst.edu/thesis/pdf/Nahar_09007dcc80721e0b.pdf.

Full text
Abstract:
Thesis (M.S.)--Missouri University of Science and Technology, 2010.
Vita. The entire thesis text is included in file. Title from title screen of thesis/dissertation PDF file (viewed Dec. 23, 2009). Includes bibliographical references (p. 82-83).
APA, Harvard, Vancouver, ISO, and other styles
27

Thornström, Johan. "Domain Adaptation of Unreal Images for Image Classification." Thesis, Linköpings universitet, Datorseende, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165758.

Full text
Abstract:
Deep learning has been intensively researched in computer vision tasks like image classification. Collecting and labeling images that these neural networks are trained on is labor-intensive, which is why alternative methods of collecting images are of interest. Virtual environments allow rendering images and automatic labeling, which could speed up the process of generating training data and reduce costs. This thesis studies the problem of transfer learning in image classification when the classifier has been trained on rendered images using a game engine and tested on real images. The goal is to render images using a game engine to create a classifier that can separate images depicting people wearing civilian clothing or camouflage. The thesis also studies how domain adaptation techniques using generative adversarial networks could be used to improve the performance of the classifier. Experiments show that it is possible to generate images that can be used for training a classifier capable of separating the two classes. However, the experiments with domain adaptation were unsuccessful. It is instead recommended to improve the quality of the rendered images in terms of features used in the target domain to achieve better results.
APA, Harvard, Vancouver, ISO, and other styles
28

Bingabr, Mohamed Gabr. "Robust method for the transmission of DCT coded images and image quality evaluation of the received images." Related electronic resource: Current Research at SU : database of SU dissertations, recent titles available full text, 2002. http://wwwlib.umi.com/cr/syr/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Archambeau, Olivier. "Images au sol, images aériennes et images satellites : regard d'une géographie dynamique." Paris 1, 1997. http://www.theses.fr/1997PA010542.

Full text
Abstract:
Le travail exposé cherche à démontrer l'efficacité d'une méthode d'observation géographique mise en place à partir d'une combinaison de techniques de prises de vue réalisées à différentes échelles. La complémentarité des images satellites, images aériennes et images au sol doit fournir, dans le cadre d'une méthode de travail, des éléments d'analyse suffisamment précis pour répondre à des problématiques géographiques. Tout d’abord nous mesurerons l'évolution du regard et de l'iconographie dans la discipline. L'expérience des archives de la planète, réalisée par Albert Kahn et Jean Brunhes entre 1906 et 1930, nous servira de base. Nous reprendrons trois missions effectuées par des opérateurs photographes, l'une au Canada en 1926 et deux autres en Indes en 1913 et 1927. Puis, après avoir fait un bilan des techniques d'observations visuelles les plus récentes, nous étudierons la possibilité d'intégrer différents types d'images au sein d'un système d'observation géographique multiséculaire. Une méthode est alors proposée et des normes précises sont définies pour l'expérience (notamment sur les types d'action à mener sur le terrain et sur l'identification des prises de vue qui devront intégrer les coordonnées G. P. S. , et à terme les caps et les angles de visées des prises de vue). Pour confronter la méthode au terrain, une phase d'expérimentation est nécessaire. Une expédition de 20 mois est lancée autour de la planète avec un camion et un ULM pour les prises de vue aériennes. Les images satellites seront fournies par spot. 32 sites tests sont choisis en fonction de leurs intérêts géographiques. Les conclusions de ce travail seront élaborées à partir de trois exemples présentes dans la thèse, Vancouver (Canada), Calama (Chili) et Sao Luis (Brésil)
This work shows the efficiency of an observational geographic method created with a combination of pictures at several scales. The combination of satellite images, aerial photographs and ground pictures is able to answer geographical questions. We have analysed the evolution of photographic and iconographic observation in geography. The "Archives de la Planète" experiment, carried out by Albert Kahn and Jean Brunhes between 1906 and 1930, constitutes the base of our research. Three of their expeditions (Canada, 1926; India, 1913 and 1927) were revisited during this study. After that we reviewed the different methods of observation by images up to the present day. We propose a new method and, to prove and test its efficiency, we organised an expedition around the world which lasted 20 months, with a truck and an ultra-light aircraft. The satellite images were supplied by Spot Image, and 32 sites were studied. The conclusions of this work are presented through three examples: Vancouver (Canada), Calama (Chile), and São Luís (Brazil).
APA, Harvard, Vancouver, ISO, and other styles
30

Srinivas, Umamahesh Bose N. K. "Thermal image superresolution and higher order whitening of images." [University Park, Pa.] : Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-3979/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Munechika, Curtis K. "Merging panchromatic and multispectral images for enhanced image analysis /." Online version of thesis, 1990. http://hdl.handle.net/1850/11366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sun, Ning. "HDR image construction from multi-exposed stereo LDR images." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/40130.

Full text
Abstract:
The vast majority of cameras on the market nowadays can only capture a limited dynamic range of a scene. To generate high dynamic range (HDR) images, most existing methods use multiple images obtained from a single low dynamic range (LDR) camera at consecutive instants. These methods can obtain good-quality HDR images for still or slow-motion scenes but not for scenes with fast motion. In this thesis, we propose the use of two LDR cameras which have different exposures. To generate an HDR image, the two differently exposed LDR images of the same scene are used. The two LDR images should be captured at the same instant, so as to deal with scenes with fast motion. The most challenging step in this approach is to obtain accurate estimates of the disparity maps of the scenes. This allows us to correctly align the pixels from the two differently exposed pictures when forming the HDR images. Very few state-of-the-art stereo matching algorithms can deal with the problem of obtaining accurate estimates of the disparity map from two differently exposed images. This is because the input LDR images that are used to construct HDR images have large radiometric changes. In addition, the two input LDR images usually have saturation in different areas. To obtain accurate disparity maps, we present a novel algorithm that obtains an initial estimate of the disparity map. A refinement step is then used to minimize the edge effect and to interpolate the values in the saturated regions. Compared to other state-of-the-art methods, our algorithm has a simpler set-up with only two standard commercial LDR cameras. The offline processing of the LDR images has a simpler cost function, especially the cost function we use in the refinement step of the disparity map. This reduces the computational complexity and thus the processing time of the LDR images to form the HDR image. Moreover, the disparity map computed by our algorithm can tolerate greater radiometric changes and saturation. Therefore, the HDR images constructed by our algorithm are smoother and have fewer defects than those constructed by other methods.
APA, Harvard, Vancouver, ISO, and other styles
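Once the disparity map has aligned the pixels of the two LDR images, the merge itself can be sketched as a weighted average in the radiance domain. The hat-shaped weighting, known exposure times and linear (gamma-free) camera response are simplifying assumptions, not the thesis's method.

```python
import numpy as np

def merge_two_exposures(short_exp, long_exp, t_short, t_long):
    """Merge two aligned, differently exposed images (float arrays in [0, 1])
    into an HDR radiance map, downweighting under- and over-exposed pixels."""
    def weight(img):
        return 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-grey, 0 at 0 and 1

    w_s, w_l = weight(short_exp), weight(long_exp)
    radiance_s = short_exp / t_short           # normalise by exposure time
    radiance_l = long_exp / t_long
    return (w_s * radiance_s + w_l * radiance_l) / np.maximum(w_s + w_l, 1e-6)

# Example with a 1/100 s and a 1/10 s exposure of the same (already aligned) scene.
a = np.random.rand(480, 640)
b = np.clip(a * 10, 0, 1)
hdr = merge_two_exposures(a, b, 1 / 100, 1 / 10)
```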
33

Kim, Jin-Seo. "Evaluation of image quality for still and moving images." Thesis, University of Leeds, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.496538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Batista, Leonardo Vidal. "Comparing Automatic Image Classification Techniques of Remote Sensing Images." Pontifícia Universidade Católica do Rio de Janeiro, 1993. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=8870@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Neste trabalho, diversas técnicas de classificação automática de imagens de sensoriamento remoto são investigadas. Na análise, incluem-se um método não-paramétrico, denominado K-Médias Adaptativo Hierárquico (KMAH), e seis paramétricos: o Classificador de Máxima Verossimilhança (MV), o de Máxima Probabilidade a Posteriori (MAP), o MAP Adaptativo (MAPA), o MAP por Subimagens (MAPSI), o Contextual Tilton-Swain (CXTS) e o Contextual por Subimagens (CXSI). O treinamento necessário à implementação das técnicas paramétricas foi realizado de forma não-supervisionada, usando-se para tanto a classificação efetuada pelo KMAH. Considerações a respeito das vantagens e desvantagens dos classificadores, de acordo com a observação das taxas de erros e dos tempos de processamento, apontaram as técnicas MAPA e MAPSI como as mais convenientes.
In this thesis, several techniques for the automatic classification of remote sensing images are investigated. Included in the analysis are one non-parametric method, known as Adaptive Hierarchical K-Means (KMAH), and six parametric ones: the Maximum Likelihood (MV), the Maximum a Posteriori Probability (MAP), the Adaptive MAP (MAPA), the Subimages MAP (MAPSI), the Tilton-Swain Contextual (CXTS) and the Subimages Contextual (CXSI) classifiers. The necessary training for the parametric case was done in an unsupervised manner, by using the KMAH classification. Considerations about the advantages and disadvantages of the classifiers were made and, based on the observation of the error rates and processing times, the MAPA and MAPSI classifiers showed the best performance.
APA, Harvard, Vancouver, ISO, and other styles
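A sketch of the Maximum Likelihood classifier named above for multispectral pixels: fit one Gaussian per class and assign each pixel to the class with the highest log-likelihood. Equal priors are assumed, and the adaptive and contextual variants compared in the thesis are not shown.

```python
import numpy as np

def train_ml(classes):
    """classes: list of (N_i x B) arrays of training pixels, one per class.
    Returns per-class means and covariance matrices."""
    return [(c.mean(axis=0), np.cov(c, rowvar=False)) for c in classes]

def classify_ml(pixels, params):
    """pixels: N x B array. Assign each pixel to the Gaussian class with the
    highest log-likelihood (equal priors, i.e. plain maximum likelihood)."""
    scores = []
    for mean, cov in params:
        diff = pixels - mean
        inv = np.linalg.inv(cov)
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)   # Mahalanobis distances
        logdet = np.linalg.slogdet(cov)[1]
        scores.append(-0.5 * (mahal + logdet))              # constant terms dropped
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Unsupervised training, as in the thesis, would replace labelled samples with
# the clusters produced by the hierarchical adaptive k-means (KMAH) step.
```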
35

Ribeiro, Francisco Taunay Costa. "VJing: The Images Communication and the Interaction Man-Image." Pontifícia Universidade Católica do Rio de Janeiro, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10407@1.

Full text
Abstract:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Este trabalho investiga a ação do Vj, o Vjing e seus desdobramentos e relações com a evolução da produção e recepção das imagens cinéticas. O Vjing, que ocorre geralmente em festas e eventos de música eletrônica, se caracteriza pela manipulação de imagens no ato da apresentação, acompanhando a música do Dj, e serve como paradigma para pensar a interação (influência recíproca) entre homem e imagem, bem como a relação entre música e imagem. A imagem cinética, pensada sob a perspectiva de sua plástica- rítmica, evanescente, que atua no eixo espaço-temporal tal qual a música, é característica do Vjing, com sua narrativa fragmentária e sua desconstrução e manipulação do movimento e do tempo nas imagens projetadas. A partir dessa ação, a forma de comunicar das imagens em movimento pode ser investigada, desviando-se daquela narrativa dramática encontrada no cinema clássico. O aparecimento de novas tecnologias de criação e manipulação de imagens e sons possibilita uma explosão de criatividade e da expressão das subjetividades através do meio audiovisual, com o surgimento e o resgate de diferentes formas narrativas.
This work investigates the VJ's performance, VJing and its developments, in relation to the evolution of the production and reception of moving pictures. VJing, which generally happens at parties and electronic music events, is characterized by the manipulation of images in the act of presentation, matching the DJ's music. It also works as a paradigm to explore the interaction (mutual influence) between image and man, as well as the relation between music and image. The moving picture, seen under the perspective of its evanescent plastic and rhythmic qualities, acting on the space-time axis just like music, is characteristic of VJing, with its fragmentary narrative and its deconstruction and manipulation of movement and time in the projected images. From this action, the way moving images communicate can be researched, deviating from the dramatic narrative of classical cinema. The rise of new technologies of image and sound creation and manipulation enables an explosion of creativity and of the expression of personal subjectivity through audiovisual media, with the emergence and the rescue of different narrative forms.
APA, Harvard, Vancouver, ISO, and other styles
36

Murphy, Sean Daniel. "Medical image segmentation in volumetric CT and MR images." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3816/.

Full text
Abstract:
This portfolio thesis addresses several topics in the field of 3D medical image analysis. Automated methods are used to identify structures and points of interest within the body to aid the radiologist. The automated algorithms presented here incorporate many classical machine learning and imaging techniques, such as image registration, image filtering, supervised classification, unsupervised clustering, morphology and probabilistic modelling. All algorithms are validated against manually collected ground truth. Chapter two presents a novel algorithm for automatically detecting named anatomical landmarks within a CT scan, using a linear registration based atlas framework. The novel scans may contain a wide variety of anatomical regions from throughout the body. Registration is typically posed as a numerical optimisation problem. For this problem the associated search space is shown to be non-convex, so standard registration approaches fail. Specialised numerical optimisation schemes are developed to solve this problem, with an emphasis placed on simplicity. A semi-automated algorithm for finding the centrelines of coronary arterial trees in CT angiography scans given a seed point is presented in chapter three. This is a modified classical region growing algorithm whereby the topology and geometry of the tree are discovered as the region grows. The challenges presented by the presence of large organs and other extraneous material in the vicinity of the coronary trees are mitigated by the use of an efficient modified 3D top-hat transform. Chapter four compares the accuracy of three unsupervised clustering algorithms when applied to automated tissue classification within the brain on 3D multi-spectral MR images. Chapter five presents a generalised supervised probabilistic framework for the segmentation of structures/tissues in medical images called a spatially varying classifier (SVC). This algorithm leverages non-rigid registration techniques and is shown to be a generalisation of atlas based techniques and supervised intensity based classification. This is achieved by constructing a multivariate Gaussian classifier for each voxel in a reference scan. The SVC is applied in the context of tissue classification in multi-spectral MR images in chapter six, by simultaneously extracting the brain and classifying the tissue types within it. A specially designed pre-processing pipeline is presented which involves inter-sequence registration, spatial normalisation and intensity normalisation. The SVC is then applied to the problem of multi-compartment heart segmentation in CT angiography data with minimal modification. The accuracy of this method is shown to be comparable to other state-of-the-art methods in the field.
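The top-hat step mentioned for chapter three, suppressing large bright structures so that thin vessels stand out before region growing, can be sketched with SciPy's standard (unmodified) 3D white top-hat. The synthetic volume and the structuring-element size below are made-up illustrative values, not those used in the thesis.

```python
import numpy as np
from scipy import ndimage

# Synthetic 3D volume: a large bright "organ" plus a thin bright "vessel".
vol = np.zeros((64, 64, 64), dtype=float)
vol[10:40, 10:40, 10:40] = 1.0          # large structure (to be suppressed)
vol[:, 32, 32] += 2.0                   # thin tubular structure (to be kept)

# White top-hat: original minus its morphological opening. Structures larger
# than the structuring element are removed; small bright details remain.
tophat = ndimage.white_tophat(vol, size=(5, 5, 5))

print(tophat[32, 32, 32], tophat[20, 20, 20])  # vessel voxel retained, organ interior ~0
```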
APA, Harvard, Vancouver, ISO, and other styles
37

Cabrera, Gil Blanca. "Deep Learning Based Deformable Image Registration of Pelvic Images." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279155.

Full text
Abstract:
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or using optimization-based algorithms, which are not always optimal for registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. First, a neural network is optimized to register CT pelvic image pairs. Later, the model is trained on MR-CT image pairs to register CT images to match their MR counterparts. To address the unavailability of ground truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point of our model, and random deformations were generated to create a ground truth deformation field. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, obtaining a field with deformations of different scales. Several models were trained on images of different resolutions. Their performance was benchmarked against an analytic algorithm used in an actual registration workflow. The CT-CT models were tested using image pairs created by applying synthetic deformation fields. The MR-CT models were tested using two types of test images. The first contained synthetic CT images and MR images deformed by synthetically generated deformation fields. The second test set contained real MR-CT image pairs. The test performance was measured using the Dice coefficient. The CT-CT models obtained Dice scores higher than 0.82 even for the models trained on lower resolution images. Although all MR-CT models experienced a drop in their performance, the biggest decrease came from the analytic method used as a reference, both for synthetic and real test data. This means that the deep learning models outperformed the state-of-the-art analytic benchmark method. Even though the obtained Dice scores would need further improvement to be used in a clinical setting, the results show great potential for using deep learning-based methods for multi- and mono-modal deformable image registration.
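Since the evaluation above rests on the Dice coefficient, a minimal sketch of that metric on toy binary masks may help; the arrays are stand-ins, not the pelvic segmentations used in the thesis.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

fixed_mask = np.zeros((32, 32), dtype=bool)
fixed_mask[8:24, 8:24] = True
warped_mask = np.roll(fixed_mask, shift=2, axis=0)  # stand-in for a registered segmentation

print(round(dice(fixed_mask, warped_mask), 3))
```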
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Kang. "Image Transfer Between Magnetic Resonance Images and Speech Diagrams." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41533.

Full text
Abstract:
Realtime Magnetic Resonance Imaging (MRI) is a method used for human anatomical study. MRIs give exceptionally detailed information about soft-tissue structures, such as tongues, that other current imaging techniques cannot achieve. However, the process requires special equipment and is expensive, and hence it is not suitable for all patients. Speech diagrams show the side-view positions of organs such as the tongue, throat, and lips of a speaking or singing person. The process of making a speech diagram is similar to the semantic segmentation of an MRI, focusing on selected edge structures, and a clear diagram of the tongue and inner mouth structure is easy to understand. However, it often requires manual annotation on the MRI machine by an expert in the field. By using machine learning methods, we achieved image transfer between MRIs and speech diagrams in both directions. We first matched videos of speech diagrams and tongue MRIs. Then we used various image processing and data augmentation methods to make the paired images easy to train on. We built our network model inspired by different cross-domain image transfer methods and applied reference-based super-resolution methods to generate high-resolution images. Thus, we can do the transfer through our network instead of manually. Also, the generated speech diagram can serve as an intermediary to be transferred to other medical images such as computerized tomography (CT), since it is simpler in structure than an MRI. We conducted experiments using both the data from our database and other MRI video sources. We used multiple evaluation methods, and comparisons with several related methods show the superiority of our approach.
APA, Harvard, Vancouver, ISO, and other styles
39

Montayne, Amanda. "Fitspirational Images, Body Image, Disordered Eating, and Compulsive Exercise." Thesis, Southern Illinois University at Edwardsville, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10275337.

Full text
Abstract:

The purpose of this study was to examine the relationship between viewing fitspirational content and women's body image, exercise attitudes, and eating attitudes. It was hypothesized that viewing fitspirational content would lead to a decline in body image and an increase in eating disorder-related thoughts and in guilt or sadness related to exercising. One significant interaction was found, implying that individuals who had viewed the fitspirational content reported more guilt and depressive feelings related to exercise than individuals in the control group, relative to the pre-test. None of the remaining hypotheses were supported.

APA, Harvard, Vancouver, ISO, and other styles
40

Abdulla, Ghaleb. "An image processing tool for cropping and enhancing images." Master's thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-12232009-020207/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Råhlén, Oskar, and Sacharias Sjöqvist. "Image Classification of Real Estate Images with Transfer Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-259759.

Full text
Abstract:
Each minute, over 2,000 searches are made on Sweden's largest real estate website. The site has over 20,000 apartments for sale in the Stockholm region alone. This makes the search function a vital tool for users to find their dream apartment, and thus the quality of the search function is of significance. As of today, it is only possible to filter and sort by meta-data such as number of rooms, living area, price, and location, but not by more complex attributes, such as balcony or fireplace. To avoid manual categorization of objects on the market, one option could be to use images of the apartments as data points in deep neural networks to automatically add rich attributes. This thesis aims to investigate whether a high rate of success can be achieved when classifying apartment images with deep neural networks, specifically looking at the categories and attributes balcony, fireplace, and type of room. Different types of architectures were compared against each other, and feature extraction was compared against fine-tuning, in order to investigate the question thoroughly. The investigation showed that the balcony model could determine whether a balcony exists in an image with a certainty of 98.1%. For fireplaces, the maximum certainty reached was 85.5%, which is significantly lower. The type-of-room classification reached a certainty of 97.9%. This demonstrates the feasibility of using deep neural networks to classify and attribute real estate images.
Varje minut görs 2000 sökningar på Sveriges största webbplats för bostadsannonser som har 20 000 bostadsrätter till salu bara i Stockholm. Detta ställer höga krav på sökfunktionen för att ge användarna en chans att hitta sin drömbostad. Idag finns det möjlighet att filtrera på attribut såsom antal rum, boarea, pris och område men inte på attribut som balkong och eldstad. För att inte behöva kategorisera objekt manuellt för attribut såsom balkong och eldstad finns det möjlighet att använda sig av mäklarbilder samt djupa neurala nätverk för att klassificera objekten automatiskt. Denna uppsats syftar till att utreda om det med hög sannolikhet går att klassificera mäklarbilder efter attributen balkong, eldstad samt typ av rum, med hjälp av djupa neurala nätverk. För att undersöka detta på ett utförligt sätt jämfördes olika arkitekturer med varandra samt feature extraction mot fine-tuning. Testerna visade att balkongmodellen med 98,1% sannolikhet kan avgöra om det finns en balkong på någon av bilderna eller inte. För eldstäder nåddes ett maximum på 85,5% vilket är väsentligt sämre än för balkonger. Under sista klassificeringen, den för rum, nåddes ett resultat på 97,9%. Sammanfattningsvis påvisar detta att det är fullt möjligt att använda djupa neurala nätverk för att klassificera mäklarbilder.
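A rough sketch of the feature-extraction versus fine-tuning comparison mentioned above: freeze or unfreeze a pretrained backbone before attaching a new classification head. It assumes PyTorch/torchvision with a ResNet-50 backbone purely for illustration; the thesis's exact architectures and training setup are not reproduced.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int, fine_tune: bool) -> nn.Module:
    """ResNet-50 pretrained on ImageNet, adapted to a new classification task.

    fine_tune=False -> feature extraction: only the new head is trained.
    fine_tune=True  -> fine-tuning: the whole backbone is updated as well.
    """
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = fine_tune
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, always trainable
    return model

balcony_model = build_model(num_classes=2, fine_tune=False)   # e.g. balcony / no balcony
trainable = [p for p in balcony_model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
criterion = nn.CrossEntropyLoss()
```

In the feature-extraction setting only the final linear layer receives gradients, which is cheaper and less prone to overfitting on small datasets; fine-tuning updates all weights and usually needs a lower learning rate.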
APA, Harvard, Vancouver, ISO, and other styles
42

Zhao, Bowen. "Tissue preserving deformable image registration for 4DCT pulmonary images." Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/2172.

Full text
Abstract:
This thesis mainly focuses on proposing a 4D (three spatial dimensions plus time) tissue-volume preserving non-rigid image registration algorithm for pulmonary 4D computed tomography (4DCT) data sets to provide relevant information for radiation therapy and to estimate pulmonary ventilation. The sum of squared tissue volume difference (SSTVD) similarity cost takes into account the CT intensity changes of spatially corresponding voxels, which are caused by variations of the fraction of tissue within voxels throughout the respiratory cycle. The proposed 4D SSTVD registration scheme considers the entire dynamic 4D data set simultaneously, using both spatial and temporal information. We employed a uniform 4D cubic B-spline parametrization of the transform and a temporally extended linear elasticity regularization of the deformation field to ensure temporal smoothness and thus biological plausibility of the estimated deformation. A multi-resolution multi-grid registration framework was used with a limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) optimizer for a rapid convergence rate, robustness against local minima and limited memory consumption. The algorithm was prototyped in Matlab and then fully implemented in C++ in the Elastix package based on the Insight Segmentation and Registration Toolkit (ITK). We conducted experiments on 2D+t synthetic images to demonstrate the effectiveness of the proposed method. The 4D SSTVD algorithm was also tested on clinical pulmonary 4DCT data sets in comparison with the existing 3D pairwise SSTVD algorithm and a 4D sum of squared difference (SSD) algorithm. The mean landmark error and mean landmark irregularity were calculated based on manually annotated landmarks on publicly available 4DCT data sets to evaluate the accuracy and temporal smoothness of the registration results. A 4D landmarking software tool was also designed and implemented in Java as an ImageJ plug-in to help facilitate the landmark labeling process in 4DCT data sets.
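To make the optimizer side concrete, the following is a deliberately small sketch of intensity-based registration driven by L-BFGS: a 2D translation minimizing a plain sum-of-squared-differences cost with SciPy. It is an illustration under simplifying assumptions; the thesis's 4D SSTVD cost, B-spline transform and elasticity regularization are far richer.

```python
import numpy as np
from scipy import ndimage, optimize

rng = np.random.default_rng(1)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)
moving = ndimage.shift(fixed, (2.5, -1.5), order=1)  # fixed image displaced by a known offset

def ssd_cost(t):
    """Sum of squared differences between the fixed image and the shifted moving image."""
    warped = ndimage.shift(moving, t, order=1)
    return np.sum((fixed - warped) ** 2)

# L-BFGS-B with finite-difference gradients (no analytic Jacobian supplied).
result = optimize.minimize(ssd_cost, x0=np.zeros(2), method="L-BFGS-B")
print(result.x)  # close to (-2.5, 1.5), undoing the applied displacement
```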
APA, Harvard, Vancouver, ISO, and other styles
43

Moltisanti, Marco. "Image Representation using Consensus Vocabulary and Food Images Classification." Doctoral thesis, Università di Catania, 2016. http://hdl.handle.net/10761/3968.

Full text
Abstract:
Digital images are the result of many physical factors, such as illumination, point of view and thermal noise of the sensor. These elements may be irrelevant for a specific Computer Vision task; for instance, in the object detection task, the viewpoint and the color of the object should not be relevant in order to answer the question "Is the object present in the image?". Nevertheless, an image depends crucially on all such parameters and it is simply not possible to ignore them in analysis. Hence, finding a representation that, given a specific task, is able to keep the significant features of the image and discard the less useful ones is the first step to build a robust system in Computer Vision. One of the most popular models to represent images is the Bag-of-Visual-Words (BoW) model. Derived from text analysis, this model is based on the generation of a codebook (also called vocabulary) which is subsequently used to provide the actual image representation. Considering a set of images, the typical pipeline consists of: 1. Select a subset of images to be the training set for the model; 2. Extract the desired features from all the images; 3. Run a clustering algorithm on the features extracted from the training set: each cluster is a codeword, and the set containing all the clusters is the codebook; 4. For each feature point, find the closest codeword according to a distance function or metric; 5. Build a normalized histogram of the occurrences of each word. The choices made in the design phase strongly influence the final outcome of the representation. In this work we will discuss how to aggregate different kinds of features to obtain more powerful representations, presenting some state-of-the-art methods in the Computer Vision community. We will focus on Clustering Ensemble techniques, presenting the theoretical framework and a new approach (Section 2.5). Understanding food in everyday life (e.g., the recognition of dishes and the related ingredients, the estimation of quantity, etc.) is a problem which has been considered in different research areas due to its important impact on medical, social and anthropological aspects. For instance, an unhealthy diet can cause problems in the general health of people. Since health is strictly linked to diet, advanced Computer Vision tools to recognize food images (e.g., acquired with mobile/wearable cameras), as well as their properties (e.g., calories, volume), can help diet monitoring by providing useful information to experts (e.g., nutritionists) to assess the food intake of patients (e.g., to combat obesity). On the other hand, the great diffusion of low-cost image acquisition devices embedded in smartphones allows people to take pictures of food and share them on the Internet (e.g., on social media); the automatic analysis of the posted images could provide information on the relationship between people and their meals and can be exploited by food retailers to better understand the preferences of a person for further recommendations of food and related products. Image representation plays a key role when trying to infer information about food items depicted in an image. We propose a deep review of the state of the art and two different novel representation techniques.
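The five-step pipeline above maps almost directly onto a few lines of scikit-learn. In the sketch below, random vectors stand in for local descriptors and a 64-word codebook is assumed, both illustrative choices rather than settings from the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for local descriptors (e.g. 128-D SIFT); one array per image.
train_descriptors = [rng.random((300, 128)) for _ in range(20)]   # training set (steps 1-2)
query_descriptors = rng.random((250, 128))                        # a new image to encode

# Step 3: cluster the training descriptors; each cluster centre is a codeword.
codebook_size = 64
kmeans = KMeans(n_clusters=codebook_size, n_init=4, random_state=0)
kmeans.fit(np.vstack(train_descriptors))

def bow_histogram(descriptors: np.ndarray) -> np.ndarray:
    """Steps 4-5: assign each descriptor to its nearest codeword and normalise the counts."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=codebook_size).astype(float)
    return hist / hist.sum()

print(bow_histogram(query_descriptors).shape)  # (64,) image-level representation
```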
APA, Harvard, Vancouver, ISO, and other styles
44

Fabián, Arteaga Junior John 1987. "Searching for people through textual and visual attributes = Busca de pessoas a partir de atributos visuais e textuais." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275492.

Full text
Abstract:
Orientador: Anderson de Rezende Rocha
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Made available in DSpace on 2018-08-24T07:58:15Z (GMT). No. of bitstreams: 1 FabianArteaga_JuniorJohn_M.pdf: 5046344 bytes, checksum: 42a497d56da6118f1f860730ea66f81d (MD5) Previous issue date: 2013
Resumo: Utilizar características pessoais para procurar pessoas é fundamental em diversas áreas de aplicação e nos últimos anos tem atraído uma atenção crescente por parte da comunidade científica com aplicações no campo da forense digital e vigilância tais como: localização de suspeitos ou de pessoas desaparecidas em espaços públicos. Neste trabalho, objetivamos utilizar atributos visuais descritíveis (por exemplo, homens brancos com bochechas em destaque usando óculos e com franja) como rótulos nas imagens para descrever sua aparência e, dessa forma, realizar buscas visuais por conteúdo sem depender de anotações nas imagens durante os testes. Para isso, criamos representações robustas para imagens de faces baseadas em dicionários visuais, vinculando as propriedades visuais das imagens aos atributos descritíveis. Primeiro, propomos duas abordagens de caracterização das imagens, uma de escala única e outra de múltiplas escalas para resolver consultas simples (somente um atributo). Em ambos os métodos, obtemos as características de baixo nível das imagens utilizando amostragens esparsas ou densas. Em seguida, selecionamos as características de maior repetibilidade para a criação de representações de médio nível baseadas em dicionários visuais. Posteriormente, treinamos classificadores binários para cada atributo visual os quais atribuem, para cada imagem, uma pontuação de decisão utilizada para obter sua classificação. Também propomos diferentes formas de fusão para o método de descrição de múltiplas escalas. Para consultas mais complexas (mais de dois atributos), avaliamos três abordagens presentes na literatura para combinar ordens (rankings): produto de probabilidades, rank aggregation e rank position. Além disso, propomos uma extensão do método de combinação baseado em rank aggregation para levar em conta informações complementares produzidas pelos diferentes métodos. Consideramos quinze classificadores de atributos e, consequentemente, seus negativos, permitindo, teoricamente, 32 768 diferentes consultas combinadas. Os experimentos mostram que a abordagem de descrição em múltiplas escalas melhora a precisão de recuperação para a maior parte dos atributos em comparação com outros métodos. Finalmente, para consultas mais complexas, a abordagem de descrição em múltiplas escalas em conjunto com versão estendida do rank aggregation melhoram a precisão em comparação com outros métodos de fusão como o produto de probabilidades e o rank position.
Abstract: Using personal traits for searching people is paramount in several application areas and has attracted an ever-growing attention from the scientific community over the past years. Some practical applications in the realm of digital forensics and surveillance include locating a suspect or finding missing people in a public space. In this work, we aim at assigning describable visual attributes (e.g., white chubby male wearing glasses and with bangs) as labels to images to describe their appearance and performing visual searches without relying on image annotations during testing. For that, we create mid-level image representations for face images based on visual dictionaries linking visual properties in the images to describable attributes. First, we propose one single-level and one multi-level approach to solve simple queries (queries containing only one attribute). For both methods, the first step consists of obtaining image low-level features either using a sparse or a dense-sampling scheme. The characterization is followed by the visual dictionary creation step, in which we assess both a random selection and a clustering algorithm for selecting the most important features collected in the first stage. Such features then feed 2-class classifiers for the describable visual attributes of interest, which assign to each image a decision score used to obtain its ranking. As the multi-level image characterization involves combining the answers of different levels, we also propose some fusion methods in this regard. For more complex queries (2+ attributes), we use three state-of-the-art approaches for combining the rankings: product of probabilities, rank aggregation and rank position. We also extend upon the rank aggregation method in order to take advantage of complementary information produced by the different characterization schemes. We have considered fifteen attribute classifiers and, consequently, their direct counterparts, theoretically allowing 32 768 different combined queries (the actual number is smaller since some attributes are contradictory or mutually exclusive). Experimental results show that the multi-level approach improves retrieval precision for most of the attributes in comparison with other methods. Finally, for combined attributes, the multi-level characterization approach along with the modified rank aggregation scheme boosts the precision performance when compared to other methods such as the product of probabilities and rank position.
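Two of the ranking-fusion strategies named above, product of probabilities and a simple rank aggregation, can be illustrated on toy scores; the attribute names and the score matrix below are hypothetical, and the thesis's extended rank aggregation is not reproduced here.

```python
import numpy as np

# Toy per-image scores from three hypothetical attribute classifiers
# (e.g. "male", "wearing glasses", "bangs"); rows = gallery images.
scores = np.array([
    [0.9, 0.2, 0.8],
    [0.7, 0.9, 0.6],
    [0.1, 0.8, 0.4],
])

# Product of probabilities: treat the scores as independent posteriors.
product_fusion = scores.prod(axis=1)

# Rank aggregation (Borda-style): convert each column to ranks and average them.
ranks = scores.argsort(axis=0).argsort(axis=0)   # per attribute, 0 = lowest score
mean_rank = ranks.mean(axis=1)

print(np.argsort(-product_fusion))  # gallery order for the combined query, best first
print(np.argsort(-mean_rank))
```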
Mestrado
Ciência da Computação
Mestre em Ciência da Computação
APA, Harvard, Vancouver, ISO, and other styles
45

Tu, Guoyun. "Image Captioning On General Data And Fashion Data : An Attribute-Image-Combined Attention-Based Network for Image Captioning on Mutli-Object Images and Single-Object Images." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-282925.

Full text
Abstract:
Image captioning is a crucial field across computer vision and natural language processing. It can be widely applied to high-volume web images, for instance to convey image content to visually impaired users. Many methods have been adopted in this area, such as attention-based methods and semantic-concept-based models. These achieve excellent performance on general image datasets such as the MS COCO dataset. However, the problem is still left unexplored on single-object images. In this paper, we propose a new attribute-information-combined attention-based network (AIC-AB Net). At each time step, attribute information is added as a supplement to the visual information. For sequential word generation, spatial attention determines specific regions of the image to pass to the decoder. The sentinel gate decides whether to attend to the image or to the visual sentinel (what the decoder already knows, including the attribute information). Text attribute information is fed in synchronously to help image recognition and reduce uncertainty. We build a new fashion dataset consisting of fashion images to establish a benchmark for single-object images. This fashion dataset consists of 144,422 images from 24,649 fashion products, with one description sentence for each image. Our method is tested on the MS COCO dataset and the proposed Fashion dataset. The results show the superior performance of the proposed model on both multi-object images and single-object images. Our AIC-AB Net outperforms the state-of-the-art Adaptive Attention Network by 0.017, 0.095, and 0.095 (CIDEr score) on the COCO dataset, Fashion dataset (Bestsellers), and Fashion dataset (all vendors), respectively. The results also reveal the complementarity of the attention architecture and the attribute information.
Bildtextning är ett avgörande fält för datorsyn och behandling av naturligt språk. Det kan tillämpas i stor utsträckning på högvolyms webbbilder, som att överföra bildinnehåll till synskadade användare. Många metoder antas inom detta område såsom uppmärksamhetsbaserade metoder, semantiska konceptbaserade modeller. Dessa uppnår utmärkt prestanda på allmänna bilddatamängder som MS COCO-dataset. Det lämnas dock fortfarande outforskat på bilder med ett objekt. I denna uppsats föreslår vi ett nytt attribut-information-kombinerat uppmärksamhetsbaserat nätverk (AIC-AB Net). I varje tidsteg läggs attributinformation till som ett komplement till visuell information. För sekventiell ordgenerering bestämmer rumslig uppmärksamhet specifika regioner av bilder som ska passera avkodaren. Sentinelgrinden bestämmer om den ska ta hand om bilden eller den visuella vaktposten (vad avkodaren redan vet, inklusive attributinformation). Text attributinformation matas synkront för att hjälpa bildigenkänning och minska osäkerheten. Vi bygger en ny modedataset bestående av modebilder för att skapa ett riktmärke för bilder med en objekt. Denna modedataset består av 144 422 bilder från 24 649 modeprodukter, med en beskrivningsmening för varje bild. Vår metod testas på MS COCO dataset och den föreslagna Fashion dataset. Resultaten visar den överlägsna prestandan hos den föreslagna modellen på både bilder med flera objekt och enbildsbilder. Vårt AIC-AB-nät överträffar det senaste nätverket Adaptive Attention Network med 0,017, 0,095 och 0,095 (CIDEr Score) i COCO-datasetet, modedataset (bästsäljare) respektive modedatasetet (alla leverantörer). Resultaten avslöjar också komplementet till uppmärksamhetsarkitektur och attributinformation.
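A generic soft spatial-attention step, of the kind the abstract builds on, can be sketched as follows; the dimensions and the random projection matrices are purely illustrative, and the sentinel gate and attribute stream of AIC-AB Net are not included.

```python
import torch
import torch.nn.functional as F

# Toy dimensions: k image regions with d-dimensional CNN features, decoder state of size h.
k, d, h = 49, 512, 256
regions = torch.randn(k, d)        # spatial features from a CNN feature map
hidden = torch.randn(h)            # decoder state at the current time step

# Learnable projections (random here, purely illustrative).
W_v = torch.randn(h, d)
W_h = torch.randn(h, h)
w = torch.randn(h)

# Soft spatial attention: score each region against the decoder state,
# normalise with softmax, and pool the regions into a single context vector.
scores = torch.tanh(regions @ W_v.T + hidden @ W_h.T) @ w   # (k,)
alpha = F.softmax(scores, dim=0)
context = alpha @ regions                                    # (d,) fed to the word predictor
print(context.shape)
```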
APA, Harvard, Vancouver, ISO, and other styles
46

Clarkson, Matthew John. "Registration of medical images to 3D medical images." Thesis, King's College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.409018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

HSIAO, WEN-TING, and 蕭文婷. "Image ‧Impression ‧ Imagery Alishan -------- Visual Images in the Sense." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/946ba8.

Full text
Abstract:
碩士
吳鳳科技大學
應用數位媒體研究所
102
In recent years, the government has actively promoted tourism and leisure activities. Taiwanese tourist attractions reach people and tourists through various festivals and through Internet or print-media marketing, using a variety of visual image stimuli in an attempt to influence tourists' travel ideas and impressions. Video is the material reproduction of visual perception. This study, entitled "Image, Impression, Imagery Alishan", explores how Alishan video images are transmitted in the visual sense. Drawing on experience of the well-known five wonderful views of Alishan, sunrise, clouds, sunset, the trees and the railway, together with the spring cherry blossoms and red maples that have recently attracted the attention of tourists, we can see how this beautiful scenery attracts the eye of the photographer and most often becomes the image to be reproduced. The author explored and analyzed the Alishan visual images that are widely used on the Internet. Field visits to Alishan to photograph the scenery were also conducted from 2012 to 2014. Through the viewfinder and the recording of scenes, the author presents her strongly subjective imagery. In addition to conveying the beauty of Alishan through photography, literal description and color interpretation, together with different perspectives and lighting that create a special atmosphere, strengthen and pass on the photographer's feeling. At the same time, the work conveys to viewers the emotion that follows from appreciating the images; it stimulates their sight, vitalizes them, and touches their hearts, so that the intended goal is achieved. Keywords: Tourism Website, Image transmission, Tourism, Marketing, Visual Communication
APA, Harvard, Vancouver, ISO, and other styles
48

Wang, Neng-Chien, and 王能謙. "Image Deblurring Technologies for Large Images and Light Field Images." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/23708713883408629099.

Full text
Abstract:
碩士
國立臺灣大學
電信工程學研究所
104
Image processing has developed over a long period. This thesis can be separated into two parts: we first introduce the proposed image deblurring techniques, and then the proposed light field deblurring algorithm. The literature on image deblurring can be categorized into two classes: blind deconvolution and non-blind deconvolution. First, we try to improve the efficiency of non-blind deconvolution for ultra-high-resolution images, where the complexity of deblurring is much higher, and therefore try to reduce the computation time. We modified the algorithm "Fast Image Deconvolution" proposed by Krishnan in 2009. To reduce complexity, we process the image in blocks and find the division that minimizes the overall complexity. Merging the result of each block directly would cause blocking artifacts, so adjacent sub-images are overlapped and blended with linear weights. The size of the overlap determines the computing time and the performance: less overlap is more efficient but leads to worse results. For balance, we choose an overlap size that takes both efficiency and performance into consideration. The other topic is light field deblurring. A light field camera can capture location and angle information from a single shot, so the depth of the scene can be reconstructed and stereoscopic images can be obtained. A light field camera is built from an array of microlenses, and each lens yields a sub-image. To render the image, we have to obtain the disparity of each microimage pair, from which the depth information can be estimated. First, we obtain the relationship among microlenses by using regression analysis. Then we take the white image into consideration to compensate for the luminance falloff at the edge of every microimage, and use quad-trees to compute disparity more precisely. Moreover, we use an image-based rendering technique to improve the quality of the reconstructed image. After rendering the image, we apply image segmentation so that every object is separated. We estimate the depth of every object from its disparity, and hence we can reconstruct the depth map of the whole image.
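The overlap-and-linear-weight idea described above can be sketched in a few lines: the function below cross-fades two horizontally adjacent blocks. It is a toy stand-in, with the deconvolution itself and the choice of block size and overlap width omitted.

```python
import numpy as np

def blend_blocks(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Merge two horizontally adjacent blocks sharing `overlap` columns,
    cross-fading the shared region with linear weights to avoid a visible seam."""
    w = np.linspace(1.0, 0.0, overlap)              # weight of the left block in the overlap
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])

rng = np.random.default_rng(0)
a = rng.random((32, 40))
b = rng.random((32, 40))
merged = blend_blocks(a, b, overlap=8)
print(merged.shape)  # (32, 72): 40 + 40 - 8 columns
```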
APA, Harvard, Vancouver, ISO, and other styles
49

Maji, Sukumar. "Image Skew Detection and Correction in Regular Images and Document Images." Thesis, 2015. http://ethesis.nitrkl.ac.in/7926/1/634.pdf.

Full text
Abstract:
In document scanning and in the processing of regular images in daily-life activities, image skew is an important issue that should be considered before processing the images. Skew generally refers to the degree of rotation of an image relative to its correct orientation. Before proceeding with any further processing, we therefore need to check whether the skew of an image is acceptable, so skew detection is often the first step applied to regular images and especially to scanned documents when transforming them into an appropriate format. Different algorithms for detecting the skew of an image have been implemented in various works. The basic and very commonly used one is scan-line-based skew detection. In this technique, several lines are passed through the image from left to right, right to left, top to bottom and bottom to top, and the number of black pixels encountered along each projection of the line is counted. The projection encountering the maximum number of black pixels is taken to indicate the skew of the image. Other approaches include the Hough transform, the base-point method, etc. In the Hough transform method, pixel values are accumulated for each value of θ, and the angle producing the maximum variance is considered to be the skew angle of the image. These two algorithms have been implemented and their results are presented to compare the accuracy.
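A minimal projection-profile variant of the idea described above, rotating the page over a range of candidate angles and keeping the angle whose row-wise black-pixel counts vary the most, might look like the following; the synthetic document, angle range and step size are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy import ndimage

def estimate_skew(binary_doc: np.ndarray, angles=np.arange(-15, 15.5, 0.5)) -> float:
    """Projection-profile skew estimate: text lines align with image rows when the
    page is deskewed, which maximizes the variance of the row-wise pixel counts."""
    best_angle, best_var = 0.0, -1.0
    for angle in angles:
        rotated = ndimage.rotate(binary_doc, angle, reshape=False, order=0)
        profile = rotated.sum(axis=1)          # black pixels per row
        if profile.var() > best_var:
            best_angle, best_var = angle, profile.var()
    return best_angle

# Synthetic "document": horizontal text lines, then skewed by 4 degrees.
doc = np.zeros((200, 200))
doc[40:180:20, 20:180] = 1.0
skewed = ndimage.rotate(doc, 4, reshape=False, order=0)
print(estimate_skew(skewed))                   # approximately -4, the correction angle
```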
APA, Harvard, Vancouver, ISO, and other styles
50

Sarangi, Ishan Kumar, and Sudarshan Nayak. "Image mosaicing of panoramic images." Thesis, 2014. http://ethesis.nitrkl.ac.in/6455/1/E-58.pdf.

Full text
Abstract:
Image mosaicing is combining or stitching several images of a scene or object taken from different angles into a single image with a greater angle of view. It is a developing field, and recent years have seen considerable advancement; many algorithms have been developed over the years. Our work is based on a feature-based approach to image mosaicing. The steps in image mosaicing consist of feature point detection, feature point descriptor extraction and feature point matching. The RANSAC algorithm is applied to eliminate a variety of mismatches and to acquire the transformation matrix between the images. The input image is transformed with the right mapping model for image stitching. Therefore, this paper proposes an algorithm for mosaicing two images efficiently using the Harris corner feature detection method, RANSAC-based feature matching, and then image transformation, warping and blending.
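A compact version of this feature-match-and-warp pipeline can be written with OpenCV; the sketch below swaps in ORB for Harris corners so that detection and description stay self-contained in one call, and the file names are placeholders rather than data from the thesis.

```python
import cv2
import numpy as np

# Load two overlapping views (placeholder file names).
img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and descriptors (ORB here; the thesis pairs Harris corners
# with a separate descriptor-extraction step).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# Estimate the homography with RANSAC to reject mismatches.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the first image into the second image's frame and paste the second image in.
h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (w * 2, h))
mosaic[:, :w] = np.maximum(mosaic[:, :w], img2)  # crude overlay; dedicated blending avoids seams
print(mosaic.shape)
```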
APA, Harvard, Vancouver, ISO, and other styles