Dissertations / Theses on the topic 'Digital images'

To see the other types of publications on this topic, follow the link: Digital images.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Digital images.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Travisani, Tatiana Giovannone [UNESP]. "Imagem digital em movimento." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/86983.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
This work brings together analyses of digital images and the structural and poetic questions of movement, offering a historical and artistic survey from the first experiments in chronophotography to current manifestations in the digital universe that take movement as their theme. The digital images considered are not synthetic in nature; they passed through some process of analogue capture, by means of cameras, and were subsequently digitized. From that starting point, the research reflects on the aesthetic and synaesthetic transformations that images can undergo given the possibilities offered by digital technology. We discuss artistic processes and procedures in the light of this technological character, in which images gain new dynamic forms and new patterns of movement. We seek to include the main factors that are unique to the digital universe and that allow innovative artistic conceptions through the newly emerged media; among these characteristics are convergence, dematerialization, ubiquity, and the replicability of remixes and samplers. Three works are discussed in greater depth because of their conceptual and visual grounding and their new aesthetic proposals: Stop Motion Studies (David Crawford), Soft Cinema (Lev Manovich) and Live Images (Luiz Duva). During the research, the author also produced three works, created at different moments, that accompanied the questions raised throughout the process: Convergência, Gradeativa and Passagens. These works suggest a close intertwining of art and research, practice and theory, with the aim of doing justice to the complexity of the subject and contributing to contemporary studies of artistic manifestations in new media.
APA, Harvard, Vancouver, ISO, and other styles
2

Moëll, Mattias. "Digital image analysis for wood fiber images /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6309-2.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Bourne, John D. A. "Processing digital images." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22757.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kailasanathan, Chandrapal. "Securing digital images." Access electronically, 2003. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041026.150935/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Travisani, Tatiana Giovannone. "Imagem digital em movimento /." São Paulo : [s.n.], 2008. http://hdl.handle.net/11449/86983.

Full text
Abstract:
Advisor: Milton Terumitsu Sogabe
Committee: Pelópidas Cypriano de Oliveira
Committee: Silvia Laurentz
This work brings together analyses of digital images and the structural and poetic questions of movement, offering a historical and artistic survey from the first experiments in chronophotography to current manifestations in the digital universe that take movement as their theme. The digital images considered are not synthetic in nature; they passed through some process of analogue capture, by means of cameras, and were subsequently digitized. From that starting point, the research reflects on the aesthetic and synaesthetic transformations that images can undergo given the possibilities offered by digital technology. We discuss artistic processes and procedures in the light of this technological character, in which images gain new dynamic forms and new patterns of movement. We seek to include the main factors that are unique to the digital universe and that allow innovative artistic conceptions through the newly emerged media; among these characteristics are convergence, dematerialization, ubiquity, and the replicability of remixes and samplers. Three works are discussed in greater depth because of their conceptual and visual grounding and their new aesthetic proposals: Stop Motion Studies (David Crawford), Soft Cinema (Lev Manovich) and Live Images (Luiz Duva). During the research, the author also produced three works, created at different moments, that accompanied the questions raised throughout the process: Convergência, Gradeativa and Passagens. These works suggest a close intertwining of art and research, practice and theory, with the aim of doing justice to the complexity of the subject and contributing to contemporary studies of artistic manifestations in new media.
Master's
APA, Harvard, Vancouver, ISO, and other styles
6

Zain, Jasni Mohamad. "Digital watermarking in medical images." Thesis, Brunel University, 2005. http://bura.brunel.ac.uk/handle/2438/4978.

Full text
Abstract:
This thesis addresses the authenticity and integrity of medical images using watermarking. Hospital Information Systems (HIS), Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) now form the information infrastructure for today's healthcare, as they provide new ways to store, access and distribute medical data that also involve some security risk. Watermarking can be seen as an additional tool for security measures. As the medical tradition is very strict about the quality of biomedical images, the watermarking method must be reversible or, if not, a Region of Interest (ROI) needs to be defined and left intact. Watermarking should also serve as an integrity control and should be able to authenticate the medical image. Three watermarking techniques are proposed. First, Strict Authentication Watermarking (SAW) embeds the digital signature of the image in the ROI, and the image can be reverted to its original value bit by bit if required. Second, Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) uses the same principle as SAW, but is able to survive some degree of JPEG compression. Third, Authentication Watermarking with Tamper Detection and Recovery (AW-TDR) is able to localise tampering while simultaneously reconstructing the original image.
APA, Harvard, Vancouver, ISO, and other styles
7

Condell, Joan V. "Motion tracking in digital images." Thesis, University of Ulster, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ahmed, Kamal Ali. "Digital watermarking of still images." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/digital-watermarking-of-still-images(0dc4b146-3d97-458f-9506-8c67bc3a155b).html.

Full text
Abstract:
This thesis presents novel research work on copyright protection of greyscale and colour digital images. New blind frequency-domain watermarking algorithms using one-dimensional and two-dimensional Walsh coding were developed. Handwritten signatures and mobile phone numbers were used in this project as watermarks. In this research, eight algorithms were developed based on the DCT using 1D and 2D Walsh coding. These algorithms used the low-frequency coefficients of the 8 × 8 DCT blocks for embedding. A shuffle process was used in the watermarking algorithms to increase robustness against cropping attacks. All algorithms are blind, since they do not require the original image. All algorithms caused minimum distortion to the host images, and the watermarking is invisible. The watermark is embedded in the green channel of RGB colour images. The Walsh-coded watermark is inserted several times using the shuffling process to improve its robustness. The effects of changing the Walsh lengths and the scaling strength of the watermark on robustness and image quality were studied. All algorithms were examined using several greyscale and colour images of size 512 × 512. The fidelity of the images was assessed using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), normalized correlation (NC) and the StirMark benchmark tools. The new algorithms were also tested on several greyscale and colour images of different sizes, with evaluation techniques using several tools and different scaling factors considered throughout the thesis to assess the algorithms. Comparisons carried out against other methods of embedding without coding have shown the superiority of the algorithms. The results show that the use of 1D and 2D Walsh coding with DCT blocks offers a significant improvement in robustness against JPEG compression and some other image processing operations compared to embedding without coding. The originality of the schemes enables them to achieve significant robustness compared to conventional non-coded watermarking methods. The new algorithms offer an optimal trade-off between the perceptual distortion caused by embedding and robustness against certain attacks. These techniques could offer significant advantages to the digital watermarking field and provide additional benefits to the copyright protection industry.
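As a reference point for the fidelity metric named above, here is a minimal Python sketch (not code from the thesis; the toy data is synthetic) of how PSNR between a host image and its watermarked copy is typically computed:

    import numpy as np

    def psnr(original: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        """Peak signal-to-noise ratio (dB) between two same-sized images."""
        mse = np.mean((original.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical usage: a watermarked copy that deviates slightly from the host.
    rng = np.random.default_rng(0)
    host = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
    marked = np.clip(host.astype(int) + rng.integers(-2, 3, size=host.shape), 0, 255)
    print(f"PSNR: {psnr(host, marked):.2f} dB")  # higher PSNR means lower distortion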
APA, Harvard, Vancouver, ISO, and other styles
9

Stokes, Mike. "Colorimetric tolerances of digital images /." Online version of thesis, 1991. http://hdl.handle.net/1850/10896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cembalo, Maurizio. "Forensic analysis for digital images." Doctoral thesis, Universita degli studi di Salerno, 2011. http://hdl.handle.net/10556/227.

Full text
Abstract:
Nowadays, taking and sharing digital pictures is becoming a very popular activity. This is witnessed by the explosive growth of the digital camera market: more than one billion digital cameras were produced and shipped in 2010. A consequence of this trend is that the number of crimes involving digital pictures also increases, either because pictures are part of the crime (e.g., exchanging pedopornographic pictures) or because their analysis may reveal some important clue about the author of the crime. The highly technical nature of computer crimes facilitated a wholly new branch of forensic science called digital forensics. Digital Forensic Science involves processes such as acquisition of data from an electronic source, analysis of the acquired data, extraction of evidence from the data, and the preservation and presentation of the evidence. Digital Imaging Forensics is a specialization of Digital Forensics which deals with digital images. One of the many issues that Digital Imaging Forensics tries to address is the source camera identification problem, i.e., establishing whether a given image has been taken by a given digital camera. Today this is a practical and important problem, aiming to identify reliably the imaging device that acquired a particular digital image. Techniques to authenticate an electronic image are especially important in court. For example, identifying the source device could establish the origin of images presented as evidence. In a prosecution for child pornography, for instance, it could be desirable to prove that certain imagery was obtained with a specific camera and is thus not an image generated by a computer, given that "virtual images" are not considered an offense. As electronic images and digital video replace their analog counterparts, the importance of reliable, inexpensive and fast identification of the origin of a particular image will increase. The identification of the source camera of an image is a complex issue which requires an understanding of the several steps involved in the creation of the digital photographic representation of a real scene. In particular, it is necessary to understand how digital images are created and which processes create (and therefore affect) the digital data, starting from the real scene. Moreover, it is necessary to point out the factors which can be used to support camera identification and, perhaps even more important, the factors which can tamper with the photos and prevent (maliciously or not) camera identification. Many identification techniques have been proposed so far in the literature. These techniques generally work by using the sensor noise (an unexpected variation of the digital signal) left by a digital sensor when taking a picture as a fingerprint for identifying the sensor. These studies are generally accompanied by tests proving the effectiveness of the techniques, both in terms of False Acceptance Rate (FAR) and False Rejection Rate (FRR). Unfortunately, most of these contributions do not take into consideration that, in practice, the images that are shared and exchanged over the Internet have often been pre-processed. Instead, it is common practice to assume that the images to be examined are unmodified or, at most, to ignore the effects of the pre-processing.
Even without considering the case of malicious users who could intentionally process a picture in order to fool the existing identification techniques, this assumption is unrealistic for at least two reasons. The first is that, as previously mentioned, almost all current photo-managing software offers several functions for adjusting, sometimes in a "magic" way (see the "I'm feeling lucky" function in Google Picasa), different characteristics of a picture. The second reason can be found in the way images are managed by some of the most important online social network (OSN) and online photo sharing (OPS) sites. These services usually make several modifications to the original photos before publishing them, in order to either improve their appearance or reduce their size. In this thesis we first implemented the most prominent source camera identification technique, proposed by Lukas et al. and based on the Photo-Response Non-Uniformity (PRNU). We then present a new identification technique that uses an SVM (Support Vector Machine) classifier to associate photos with the right camera. Both our implementation of the technique of Lukas et al. and our SVM technique have been extensively tested on a sample of nearly 2500 images taken by 8 different cameras. The main purpose of the experiments conducted is to see how these techniques perform in the presence of pre-processed images, either explicitly modified by a user with photo management tools or modified by OSN and OPS services without the user's awareness. The results confirm that, in several cases, the method of Lukas et al. and our SVM technique are resilient to the modifications introduced by the considered image-processing functions. However, in the experiments it was possible to identify several cases where the quality of the identification process deteriorated because of the noise introduced by the image processing. In addition, when dealing with Online Social Networks and Online Photo Sharing services, it was noted that some of them process and modify the uploaded pictures. These modifications make the method of Lukas et al. ineffective in many cases, while the SVM technique performs slightly better. [edited by author]
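To make the sensor-noise idea concrete, here is a rough, hypothetical Python sketch of the residual-correlation pipeline described above. It is not the thesis's code: a Gaussian filter stands in for the wavelet denoiser actually used by Lukas et al., and all names and parameters are illustrative.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(img: np.ndarray) -> np.ndarray:
        # Residual = image minus a denoised version of itself; the residual
        # retains the sensor's PRNU pattern (plus other noise).
        img = img.astype(np.float64)
        return img - gaussian_filter(img, sigma=1.0)

    def camera_fingerprint(images) -> np.ndarray:
        # Averaging residuals over many images from one camera suppresses
        # scene-dependent noise, leaving an estimate of the PRNU fingerprint.
        return np.mean([noise_residual(i) for i in images], axis=0)

    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        # Normalized cross-correlation, used as the attribution score.
        a, b = a - a.mean(), b - b.mean()
        return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical usage with synthetic data: PRNU acts multiplicatively.
    rng = np.random.default_rng(1)
    prnu = rng.normal(0, 0.05, size=(64, 64))
    shots = [rng.normal(128, 20, size=(64, 64)) * (1 + prnu) for _ in range(20)]
    fp = camera_fingerprint(shots)
    same = rng.normal(128, 20, size=(64, 64)) * (1 + prnu)   # same sensor
    other = rng.normal(128, 20, size=(64, 64))               # different sensor
    # Same-sensor correlation should clearly exceed the other's (near zero);
    # in practice the decision threshold is tuned for the desired FAR/FRR.
    print(ncc(noise_residual(same), fp), ncc(noise_residual(other), fp))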
APA, Harvard, Vancouver, ISO, and other styles
11

Wyllie, Michael. "A comparative quantitative approach to digital image compression." Huntington, WV : [Marshall University Libraries], 2006. http://www.marshall.edu/etd/descript.asp?ref=719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pedron, Ilario. "Digital image processing for cancer cell finding using color images." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Ahtaiba, Ahmed Mohamed A. "Restoration of AFM images using digital signal and image processing." Thesis, Liverpool John Moores University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604322.

Full text
Abstract:
All atomic force microscope (AFM) images suffer from distortions, which are principally produced by the interaction between the measured sample and the AFM tip. If the three-dimensional shape of the tip is known, the distorted image can be processed and the original surface form 'restored', typically by deconvolution approaches. This restored image gives a better representation of the real 3D surface of the measured sample than the original distorted image. In this thesis, a quantitative investigation of using morphological deconvolution to restore AFM images has been carried out via computer simulation, using various simulated tips and objects. The thesis also presents a systematic quantitative study of the blind tip estimation algorithm via computer simulation, again using various simulated tips and objects. It proposes a new method for estimating the impulse response of the AFM by measuring a micro-cylinder with a priori known dimensions using contact-mode AFM. The estimated impulse response is then used to restore subsequent AFM images measured with the same tip under similar measurement conditions. Significantly, an approximation to what corresponds to the impulse response of the AFM can be deduced using this method. The suitability of this novel approach for restoring AFM images has been confirmed using both computer simulation and real experimental AFM images. The thesis suggests another new approach (the impulse response technique) to estimate the impulse response of the AFM, this time from a square pillar sample measured using contact-mode AFM. Once the impulse response is known, a deconvolution process is carried out between the estimated impulse response and typical 'distorted' raw AFM images in order to reduce the distortion effects. The experimental results and the computer simulations validate the performance of the proposed approach, showing that the AFM image accuracy is significantly improved. Finally, a new approach has been implemented in this research programme for the restoration of AFM images, enabling a combination of cantilever and feedback signals at different scanning speeds. In this approach, the AFM topographic image is constructed from values obtained by summing the height image that drives the Z-scanner and the deflection image with a weight function α that is close to 3. The value of α was determined experimentally by trial and error. This method has been tested at ten different scanning speeds and consistently gives more faithful topographic images than the original AFM images.
APA, Harvard, Vancouver, ISO, and other styles
14

Vernacotola, Mark J. "Characterization of digital film scanner systems for use with digital scene algorithms /." Online version of thesis, 1995. http://hdl.handle.net/1850/11967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Khire, Sourabh Mohan. "Time-sensitive communication of digital images, with applications in telepathology." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29761.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Jayant, Nikil; Committee Member: Anderson, David; Committee Member: Lee, Chin-Hui. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
16

Keskinarkaus, A. (Anja). "Digital watermarking techniques for printed images." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526200583.

Full text
Abstract:
During the last few decades, digital watermarking techniques have gained a lot of interest. Such techniques enable hiding imperceptible information in images; information which can later be extracted from those images. As a result, digital watermarking techniques have many interesting applications, for example in Internet distribution. Content such as images is today manipulated mainly in digital form; thus, traditionally, the focus of watermarking research has been the digital domain. However, a vast amount of images will still appear in some physical format, such as in books, posters or labels, and there are a number of possible applications of hidden information also in image printouts. In this case an additional level of challenge is introduced, as the watermarking technique should be robust to extraction from printed output. In this thesis, methods are developed where a watermarked image appears in a printout and the invisible information can later be extracted using a scanner or mobile phone camera and watermark extraction software. In these cases the watermarking method has to be carefully designed, because both the printing and capturing processes cause distortions that make watermark extraction challenging. The focus of the study is on developing blind, multibit watermarking techniques, where the robustness of the algorithms is tested in an office environment using standard office equipment. Particular attention is paid to the possible effect of the background of the printed images, as well as to compound attacks, since these are considered important in practical applications. The main objective is thus to provide technical means to achieve high robustness and to develop watermarking methods robust to the printing and scanning process. A secondary objective is to develop methods where extraction is possible with the aid of a mobile phone camera. The main contributions of the thesis are: (1) methods to increase watermark extraction robustness with perceptual weighting; (2) methods to robustly synchronize the extraction of a multibit message from a printout; (3) a method to encode a multibit message utilizing directed periodic patterns, and a method to decode the message after attacks; (4) a demonstrator of an interactive poster application and a key-based robust and secure identification method from a printout.
APA, Harvard, Vancouver, ISO, and other styles
17

Allott, David. "Efficient source coding for digital images." Thesis, Loughborough University, 1985. https://dspace.lboro.ac.uk/2134/27563.

Full text
Abstract:
A requirement exists for interactive view-data systems to incorporate some facility for the transmission of good quality multi-level single-frame images (both monochrome and colour) for both general information services and more commercial applications such as mail order catalogues.
APA, Harvard, Vancouver, ISO, and other styles
18

Mahfoudi, Gaël. "Authentication of Digital Images and Videos." Thesis, Troyes, 2021. http://www.theses.fr/2021TROY0043.

Full text
Abstract:
Digital media are part of our day-to-day lives. After years of photojournalism, we have become used to considering them an objective testimony of the truth. But image and video retouching software is becoming increasingly powerful and easy to use, allowing counterfeiters to produce highly realistic image forgeries. Consequently, the authenticity of digital media can no longer be taken for granted. Recent Anti-Money Laundering (AML) regulations introduced the notion of Know Your Customer (KYC), which requires financial institutions to verify their customers' identity. Many institutions prefer to perform this verification remotely, relying on a Remote Identity Verification (RIV) system. Such a system relies heavily on both digital images and videos, so the authentication of those media is essential. This thesis focuses on the authentication of images and videos in the context of a RIV system. After formally defining a RIV system, we studied the various attacks that a counterfeiter may mount against it. We attempted to understand the challenges posed by each of these threats in order to propose relevant solutions. Our approaches are based on both image processing methods and statistical tests. We also proposed new datasets to encourage research on specific challenges that are not yet well studied.
APA, Harvard, Vancouver, ISO, and other styles
19

Naderi, Ramin. "Quadtree-based processing of digital images." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3590.

Full text
Abstract:
Image representation plays an important role in image processing applications, which usually involve a huge amount of data. An image is a two-dimensional array of points, and each point carries information (e.g., colour). A 1024 by 1024 pixel image occupies one megabyte of space in main memory, and in practice 2 to 3 megabytes of space are needed to facilitate the various image processing tasks. Large amounts of secondary memory are also required to hold various data sets. In this thesis, two different operations on the quadtree are presented. There are, in general, two types of data compression techniques in image processing. One approach is based on the elimination of redundant data from the original picture. Other techniques rely on higher levels of processing such as interpretation, generation, induction and deduction procedures [1, 2]. One popular technique of data representation that has received a considerable amount of attention in recent years is the quadtree data structure. This has led to the development of various techniques for performing conversions and operations on the quadtree. Klinger and Dyer [3] provide a good bibliography of the history of quadtrees. Their paper reports experiments on the degree of compaction of picture representation which may be achieved with tree encoding; their experiments show that tree encoding can produce memory savings. Pavlidis [15] reports on the approximation of pictures by quadtrees. Horowitz and Pavlidis [16] show how to segment a picture using traversal of a quadtree, segmenting the picture by polygonal boundaries. Tanimoto [17] discusses distortions which may occur in quadtrees for pictures, and observes [18, p. 27] that quadtree representation is particularly convenient for scaling a picture by powers of two. Quadtrees are also useful in graphics and animation applications [19, 20], which are oriented toward construction of images from polygons and superpositions of images. Encoded pictures are useful for display, especially if the encoding lends itself to processing.
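As a toy illustration of the compaction described above (this Python sketch is illustrative, not from the thesis, and assumes a square binary image with power-of-two side length): a homogeneous region of any size collapses to a single leaf, so images with large uniform areas encode into far fewer nodes than pixels.

    import numpy as np

    def quadtree(img: np.ndarray):
        # A uniform block becomes one leaf holding its value; a mixed block
        # splits into four sub-quadrants (NW, NE, SW, SE), recursively.
        if img.min() == img.max():
            return int(img[0, 0])
        h = img.shape[0] // 2
        return [quadtree(img[:h, :h]), quadtree(img[:h, h:]),
                quadtree(img[h:, :h]), quadtree(img[h:, h:])]

    img = np.array([[1, 1, 1, 1],
                    [1, 1, 1, 1],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]])
    # The two uniform top quadrants and the uniform SW quadrant each become
    # a single leaf; only the mixed SE quadrant splits down to pixels.
    print(quadtree(img))  # [1, 1, 0, [1, 0, 0, 1]]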
APA, Harvard, Vancouver, ISO, and other styles
20

Phan, Quoc-Tin. "On the provenance of digital images." Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3590/1/PHAN_thesis.pdf.

Full text
Abstract:
Digital images are becoming the most commonly used multimedia data nowadays, thanks to the massive manufacturing of cheap acquisition devices coupled with the unprecedented popularity of Online Social Networks. As two sides of the same coin, the massive use of digital images triggers the development of user-friendly editing tools and intelligent techniques that violate image authenticity. In this respect, digital images are less and less trustworthy, as they are easily modified not only by experts or researchers, but also by inexperienced users. It has also been witnessed that malicious use of images has a tremendous impact on human perception as well as system reliability. These concerns highlight the importance of verifying image authenticity. In practice, digital images are created, manipulated, and diffused worldwide via many channels. Simply answering the question "Is this image authentic?" appears insufficient; further steps aimed at understanding the provenance of images, with respect to the acquisition device, the distribution platforms, and the processing history, must be considered significant. This doctoral study contributes solutions to recover digital image provenance under multiple aspects: i) image acquisition device, ii) social network origin, and iii) source-target disambiguation in image copy-move forgery.
APA, Harvard, Vancouver, ISO, and other styles
21

Bravo-Solorio, Sergio. "Integrity verification of digital images through fragile watermarking and image forensics." Thesis, University of Liverpool, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548781.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Nasir, Ibrahim A. "Digital Watermarking of Images towards Content Protection." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4432.

Full text
Abstract:
With the rapid growth of the internet and digital media techniques over the last decade, multimedia data such as images, video and audio can easily be copied, altered and distributed over the internet without any loss in quality. Therefore, protection of ownership of multimedia data has become a very significant and challenging issue. Three novel image watermarking algorithms have been designed and implemented for copyright protection. The first proposed algorithm is based on embedding multiple watermarks in the blue channel of colour images to achieve more robustness against attacks. The second proposed algorithm aims to achieve better trade-offs between the imperceptibility and robustness requirements of a digital watermarking system. It embeds a watermark in an adaptive manner via classification of DCT blocks into three levels (smooth, edges and texture), implemented in the DCT domain by analyzing the values of the AC coefficients. The third algorithm aims to achieve robustness against geometric attacks, which can desynchronize the location of the watermark and hence cause incorrect watermark detection. It uses geometrically invariant feature points and image normalization to overcome the problem of synchronization errors caused by geometric attacks. Experimental results show that the proposed algorithms are robust and outperform related techniques found in the literature.
APA, Harvard, Vancouver, ISO, and other styles
23

Nasir, Ibrahim Alsonosi. "Digital watermarking of images towards content protection." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4432.

Full text
Abstract:
With the rapid growth of the internet and digital media techniques over the last decade, multimedia data such as images, video and audio can easily be copied, altered and distributed over the internet without any loss in quality. Therefore, protection of ownership of multimedia data has become a very significant and challenging issue. Three novel image watermarking algorithms have been designed and implemented for copyright protection. The first proposed algorithm is based on embedding multiple watermarks in the blue channel of colour images to achieve more robustness against attacks. The second proposed algorithm aims to achieve better trade-offs between the imperceptibility and robustness requirements of a digital watermarking system. It embeds a watermark in an adaptive manner via classification of DCT blocks into three levels (smooth, edges and texture), implemented in the DCT domain by analyzing the values of the AC coefficients. The third algorithm aims to achieve robustness against geometric attacks, which can desynchronize the location of the watermark and hence cause incorrect watermark detection. It uses geometrically invariant feature points and image normalization to overcome the problem of synchronization errors caused by geometric attacks. Experimental results show that the proposed algorithms are robust and outperform related techniques found in the literature.
APA, Harvard, Vancouver, ISO, and other styles
24

Udas, Swati. "Classification algorithms for finding the eye fixation from digital images /." free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Haddad, Nicholas. "Transmission of digital images using data-flow architecture." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184007755.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Brisbane, Gareth Charles Beattie. "On information hiding techniques for digital images." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050221.122028/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Noriega, Leonardo Antonio. "The colorimetric segmentation of textured digital images." Thesis, Southampton Solent University, 1998. http://ssudl.solent.ac.uk/2444/.

Full text
Abstract:
This study approaches the problem of colour image segmentation as a pattern recognition task. This leads to the problem being broken down into two component parts: feature extraction and classification algorithms. Measures to enable the objective assessment of segmentation algorithms are considered. In keeping with this pattern-recognition based philosophy, the issue of texture is approached by a consideration of features, followed by experimentation based on classification; techniques based on Gabor filters and fractal dimension are compared. Colour is also considered in terms of its features, and a systematic exploration of colour features is undertaken. The technique for assessing colour features is also used as the basis for a segmentation algorithm that can combine colour and texture. Several novel techniques are presented and discussed: firstly, a methodology for the judgement of image segmentation algorithms; secondly, a technique for segmenting images using fractal dimension, including a novel application of information dimension; thirdly, an objective assessment of colour spaces using the techniques discussed in the first part of this study. Finally, strategies for combining colour and texture in the segmentation process are discussed and techniques presented.
APA, Harvard, Vancouver, ISO, and other styles
28

Morgado, Ana M. de O. "Automated procedures for orientation of digital images." Thesis, University College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339803.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Al-Farhan, Haya M. "Digital images assessment of posterior capsule opacification." Thesis, City University London, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.397670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Khayeat, Ali. "Copy-move forgery detection in digital images." Thesis, Cardiff University, 2017. http://orca.cf.ac.uk/107043/.

Full text
Abstract:
The ready availability of image-editing software makes it important to ensure the authenticity of images. This thesis concerns the detection and localization of cloning, or Copy-Move Forgery (CMF), which is the most common type of image tampering, in which part(s) of the image are copied and pasted back somewhere else in the same image. Post-processing can be used to produce more realistic doctored images and thus can increase the difficulty of detecting forgery. This thesis presents three novel methods for CMF detection, using feature extraction, surface fitting and segmentation. The Dense Scale Invariant Feature Transform (DSIFT) has been improved by using a different method to estimate the canonical orientation of each circular block. The Fitting Function Rotation Invariant Descriptor (FFRID) has been developed by using the least squares method to fit the parameters of a quadratic function to the curvatures of each block. In the segmentation approach, three different methods were tested: SLIC superpixels, the Bag of Words Image, and the Rolling Guidance filter with the multi-thresholding method. We also developed the Segment Gradient Orientation Histogram (SGOH) to describe the gradient of irregularly shaped blocks (segments). The experimental results illustrate that our proposed algorithms can detect forgery in images containing copy-move objects with different types of transformation (translation, rotation, scaling, distortion and combined transformations). Moreover, the proposed methods are robust to post-processing (i.e. blurring, brightness change, colour reduction, JPEG compression, variations in contrast and added noise) and can detect multiple duplicated objects. In addition, we developed a new method to estimate the similarity threshold for each image by optimizing a cost function based on a probability distribution. This method can detect CMF better than using a fixed threshold for all the test images, because it reduces the false positives and the time required to estimate one threshold for different images in the dataset. Finally, we used hysteresis to decrease the number of false matches and produce the best possible result.
APA, Harvard, Vancouver, ISO, and other styles
31

Gonçalves, Bruno Filipe Pimparel. "Digital imaging processing tools for neuronal images." Master's thesis, Universidade de Aveiro, 2012. http://hdl.handle.net/10773/10584.

Full text
Abstract:
Master's in Molecular Biomedicine
Neurons are specialized cells of the Nervous System; their function is based on the correct formation of three primary subcellular compartments (soma, axons and dendrites) and on the neuritic network they form to contact and pass information to each other. Quantitative analysis of the characteristics of these structures can be used to study the relation between neuronal morphology and function, and to monitor distortions occurring in individual cells or at the network level that may correlate with neurological diseases. In this thesis, a survey of freely available digital tools dedicated to neuronal image processing and analysis was made, with an interest in their applicability to analysing our routinely acquired fluorescent neuronal bioimages. The preprocessing requirements of the selected programs (NeuronJ, NeurphologyJ and NeuriteQuant) were first evaluated, and the programs were subsequently applied to a set of images of rat cortical neuronal primary cultures in order to compare their effectiveness in bioimage processing. Data obtained with the various programs was compared with manual analysis of the images using the ImageJ analysis tool. The results show that the program that seems to work best with our fluorescence images is NeuriteQuant, since it is automatic and gives overall results most similar to the manual analysis, particularly for the evaluation of neurite length per cell. One drawback is that the quantification of neuritic ramification does not give satisfactory results and is better performed manually. We also performed a survey of digital image processing tools dedicated to phase-contrast microphotographs, but very few programs were found. These images are easier to obtain and more affordable in economic terms, but they are harder to analyse due to their intrinsic characteristics. To bridge this gap, we established and optimized a sequence of steps to better extract relevant information from neuronal phase-contrast images using ImageJ. The workflow developed, in the form of an ImageJ macro named NeuroNet, was then used to answer a scientific question by applying it to phase-contrast images of neuronal cultures at different differentiation days, in the presence or absence of a pharmacological inhibitor. The developed macro NeuroNet proved useful for analysing these images, though there is still room for improvement.
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Yiyan. "Multi-stage hybrid coding of digital images." Dissertation, Department of Electrical Engineering, Carleton University, Ottawa, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
33

Yau, Chin-ko, and 游展高. "Super-resolution image restoration from multiple decimated, blurred and noisy images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30292529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kim, Kyu-Heon. "Segmentation of natural texture images using a robust stochastic image model." Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Tsujiguchi, Vitor Hitoshi. "Identificação da correlação entre as características das imagens de documentos e os impactos na fidelidade visual em função da taxa de compressão." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-19032012-112737/.

Full text
Abstract:
Document images are digitized documents with textual content. These documents are composed of characters and their layout, with common characteristics among them, such as the presence of borders and boundaries in the shape of each character. The relationship between the characteristics of document images and the impact of the compression process with respect to visual fidelity is analyzed in this work. Objective metrics are employed to analyze the characteristics of document images, such as the Image Activity Measure (IAM) in the spatial (pixel) domain and the Spectral Activity Measure (SAM) in the spectral domain. The performance of image compression techniques based on the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) is evaluated on document images by applying different compression levels with each technique. The experiments are performed on digital images of printed documents and manuscripts from books and periodicals, covering texts written between the 16th and the 19th century. This material was collected from the Brasiliana Digital Library (www.brasiliana.usp.br) in Brazil. Experimental results show that the activity measures in the spatial and spectral domains directly influence the visual fidelity of compressed images for both the DCT- and DWT-based techniques. For a fixed compression ratio in either technique, higher IAM values and lower SAM levels in the reference image result in lower visual fidelity after compression.
APA, Harvard, Vancouver, ISO, and other styles
36

Mohamed, Aamer S. S. "From content-based to semantic image retrieval. Low level feature extraction, classification using image processing and neural networks, content based image retrieval, hybrid low level and high level based image retrieval in the compressed DCT domain." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4438.

Full text
Abstract:
Digital image archiving urgently requires advanced techniques for more efficient storage and retrieval because of the increasing amount of digital images. Although JPEG supplies systems to compress image data efficiently, the problems of how to organize the image database structure for efficient indexing and retrieval, how to index and retrieve image data from the DCT compressed domain, and how to interpret image data semantically are major obstacles for further development of digital image database systems. In content-based image retrieval, image analysis is the primary step to extract useful information from image databases. The difficulty in content-based image retrieval is how to summarize the low-level features into high-level or semantic descriptors to facilitate the retrieval procedure. Such a shift toward semantic visual data learning, or detection of semantic objects, generates an urgent need to link the low-level features with a semantic understanding of the observed visual information. To solve such a "semantic gap" problem, an efficient way is to develop a number of classifiers to identify the presence of semantic image components that can be connected to semantic descriptors. Among various semantic objects, the human face is a very important example, which is usually also the most significant element in many images and photos. The presence of faces can usually be correlated to specific scenes with semantic inference according to a given ontology. Therefore, face detection can be an efficient tool to annotate images with semantic descriptors. In this thesis, a paradigm to process, analyze and interpret digital images is proposed. In order to speed up access to desired images, after accessing the image data, image features are presented for analysis. This analysis provides not only a structure for content-based image retrieval but also the basic units for high-level semantic image interpretation. Finally, images are interpreted and classified into semantic categories by a semantic object detection and categorization algorithm.
APA, Harvard, Vancouver, ISO, and other styles
37

Brichta, Simone de Fátima. "Youths in dialogue: digital educational practices and tessitura imagery." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15424.

Full text
Abstract:
This research with young people unfolds into an analysis of digital educational practices (DEPs) and the mediating possibilities of art through images. Images are taken as meaningful through the production of audiovisual narratives, shared on social networks by young high-school students. The DEPs are approached through the EMdiálogo Portal, a youth observatory practice operating across the national territory in a network of universities, with support from the Ministry of Education (MEC). In Fortaleza the work was conducted in partnership with the Federal University of Ceará (UFC), with the imagery of the youth cultures of the State School President Castelo Branco in 2013 as its central setting. Taking as the analytical field of the research students from the popular classes of the state public school system, we saw the power of these spaces as socializing agencies, but also their need for new forms of production of subjectivity, in which interaction and dialogue can take place in the mode of art, creating new sensibilities in young people considered as multidimensional subjects. The movement of these young people was analysed through the conceptual prism of Youth, DEPs and Images. In problematizing these imagery creations in their expressive content, the research unveiled a production woven from the representations of bodies, values, daily life and the imaginary of youth cultures. The image, as a practice that puts each individual's subjectivity into play, makes them turn to themselves more deeply in a production of meaning for their experiences, culminating in a look at their living conditions as a means of more critical expression of themselves, the group and the other. The daily life and imaginary of youth cultures, as experiments of self and other, point to an interlaced play of conflicts and new socializing propositions when the mediation of educators occurs in a certain way: stimulating the position of young people as actors of their own lives and expressions. It was thus seen that in-person educational mediation is critical in digital educational practices, and that in these movements of artistic expressiveness the exercise of authorship should take place where the reception of images is combined with an enhancement of artistic image production. In this movement, the subjects exchange places and shift perspectives, constituting fields of meaning. In this qualitative research, configured through observation in virtual ethnography, we can see how critical and creative juvenile reflection takes place in the school context, since a new visual culture can counteract other mass forms of taming the subject inserted in the logic of the merchandise. By evidencing the authorial compositions of the subjects in the weaving of their artistic image experiences, we believe it is possible for an image culture to gestate itself within new parameters of socialization.
APA, Harvard, Vancouver, ISO, and other styles
38

Smith, Jeffrey Statler. "Multi-camera: interactive rendering of abstract digital images." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/341.

Full text
Abstract:
The purpose of this thesis is the development of an interactive computer-generated rendering system that provides artists with the ability to create abstract paintings simply and intuitively. This system allows the user to distort a computer-generated environment using image manipulation techniques that are derived from fundamentals of expressionistic art. The primary method by which these images will be abstracted stems from the idea of several small images assembled into a collage that represents multiple viewing points rendered simultaneously. This idea has its roots in the multiple-perspective and collage techniques used by many cubist and futurist artists of the early twentieth century.
APA, Harvard, Vancouver, ISO, and other styles
39

Kiraci, Ali Coskun. "Direct Georeferencing And Orthorectification Of Airborne Digital Images." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/3/12609972/index.pdf.

Full text
Abstract:
GPS/INS (Global Positioning System / Inertial Navigation System) integration makes it possible to relax the demand for aerial triangulation in stereo model construction and rectification. In this thesis, a differential rectification algorithm for aerial frame camera images is implemented in Matlab. The program is tested using exterior orientation parameters obtained by GPS/INS, and the images are orthorectified. Ground Control Points (GCPs) are measured in the orthorectified images and compared with other rectification methods in terms of RMSE and mean error. In addition, direct georeferencing accuracy is investigated using GPS/INS data: stereo models and ortho-images are constructed using exterior orientation parameters obtained both by aerial triangulation and by GPS/INS integration, and GCPs measured in both are compared with respect to their RMSE and mean error. To determine the effect of the Digital Elevation Model (DEM) on orthorectification, different DEM data are used and the results compared.
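The core of differential rectification is the collinearity projection: each orthoimage pixel is assigned a ground coordinate, its height is taken from the DEM, and the point is projected into the aerial frame image using the GPS/INS exterior orientation, where the grey value is resampled. A minimal sketch, with sign conventions and rotation order that may differ from the thesis's Matlab implementation:

```python
import numpy as np

def rot(omega, phi, kappa):
    """Ground-to-image rotation matrix from omega, phi, kappa (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def ground_to_image(point, eo, f):
    """Collinearity equations: project ground point (X, Y, Z) to photo
    coordinates (x, y), given exterior orientation eo = (Xs, Ys, Zs,
    omega, phi, kappa) from GPS/INS and the focal length f."""
    Xs, Ys, Zs, om, ph, ka = eo
    u, v, w = rot(om, ph, ka) @ (np.asarray(point) - [Xs, Ys, Zs])
    return -f * u / w, -f * v / w
```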
APA, Harvard, Vancouver, ISO, and other styles
40

Heiss, Detlef Guntram. "Calibrating the photographic reproduction of colour digital images." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24680.

Full text
Abstract:
Colour images can be formed by the combination of stimuli in three primary colours. As a result, digital colour images are typically represented as a triplet of values, each value corresponding to the stimulus of a primary colour. The precise stimulus that the eye receives as a result of any particular triplet of values depends on the display device or medium used. Photographic film is one such medium for the display of colour images. This work implements a software system to calibrate the response given to a triplet of values by an arbitrary combination of film recorder and film, in terms of a measurable film property. The implemented system determines the inverse of the film process numerically. It is applied to calibrate the Optronics C-4500 colour film writer of the UBC Laboratory for Computational Vision. Experimental results are described and compared in order to estimate the expected accuracy that can be obtained with this device using commercially available film processing.
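The numerical inversion of the film process can be pictured as inverting a measured, monotonic response curve for each primary channel. Below is a minimal sketch assuming linear interpolation and per-channel independence; the measured values are illustrative, not the calibration actually performed on the Optronics C-4500:

```python
import numpy as np

# hypothetical measured response of one channel: code value -> film density
codes = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
density = np.array([0.05, 0.18, 0.42, 0.71, 0.93])   # must be monotonic

def inverse_response(target_density):
    """Which code value should be written to the film recorder so the
    developed film shows the target density? Numerical inversion of the
    measured curve by linear interpolation."""
    return np.interp(target_density, density, codes)

print(inverse_response(0.5))   # code value that yields density ~0.5
```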
Faculty of Science; Department of Computer Science; Graduate.
APA, Harvard, Vancouver, ISO, and other styles
41

Rey, Claudio Gustavo. "Noise filtering with edge preservation in digital images." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/26322.

Full text
Abstract:
The widespread use of the absolute gradient and the sample variance in present-day local noise filters in digital image processing is pointed out. It is shown that the sample variance and the absolute gradient can be viewed as measures of the modelling error for a simple zeroth-order local image model. This leads to a general formulation of local noise filtering applicable to the great majority of current local noise filters for digital images. This formulation describes local noise filtering as a two-step process. In the first step a robust estimate of every pixel z_0 is obtained. In the second step a better estimate of z_0 is obtained by performing a weighted sum within a neighborhood of z_0. The weights in the second step are related to some measure of modelling error of the above zeroth-order model, namely the absolute gradient or the sample variance. Of these two measures of modelling error, the sample variance is shown to be the less sensitive to noise; it is also more sensitive to faint image edges. The sample variance is therefore the more desirable of the two measures. Unfortunately, its use until now has been hampered by its poor edge localization. To solve this problem a new measure of modelling error is introduced which achieves far superior edge localization to the sample variance (though still lower than the absolute gradient) while maintaining its low sensitivity to noise. A filter is designed based on this new measure of modelling error (of the zeroth-order model described above) which is shown to perform better, in a least-squares sense, than all other local noise filters for non-impulsive additive and multiplicative noise. A practical implementation of this filter is also presented.
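The two-step formulation can be illustrated with a standard variance-weighted local filter (essentially the Lee filter). This uses the sample variance as the measure of modelling error, not the thesis's new measure; the window size and noise variance are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_weighted_filter(img, size=5, noise_var=25.0):
    """Step 1: local mean as a first estimate of each pixel.
    Step 2: weighted sum of mean and raw pixel, weighted by the sample
    variance, so flat areas are smoothed and edges are preserved."""
    f = img.astype(float)
    mean = uniform_filter(f, size)
    var = np.maximum(uniform_filter(f * f, size) - mean**2, 0.0)
    k = var / (var + noise_var)        # ~0 in flat regions, ~1 near edges
    return mean + k * (f - mean)
```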
Faculty of Applied Science; Department of Electrical and Computer Engineering; Graduate.
APA, Harvard, Vancouver, ISO, and other styles
42

Pacheco-Martínez, Ana María. "Extracting cell complexes from 4-dimensional digital images." Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT2262/document.

Full text
Abstract:
A digital image can be defined as a set of n-xels on a grid made up of n-cubes. Segmentation consists in computing a partition of an image into regions: n-xels having similar characteristics (colour, intensity, etc.) are regrouped. Schematically, each n-xel is assigned a label, and each region of the image is made up of n-xels with the same label. Methods of the Marching Cubes and Kenmochi et al. type construct complexes representing the topology of the region of interest of a 3-dimensional binary digital image. In the first method, the algorithm constructs a simplicial complex whose 0-cells are points on the edges of the dual grid. In the second, the authors construct a cell complex on a dual grid, i.e. the 0-cells of the complex are vertices of the dual grid. In order to construct the complex, Kenmochi et al. compute (up to rotations) the different configurations of white and black vertices of a cube, and then construct the convex hulls of the black points of these configurations. These convex hulls define the cells of the complex, up to rotations. The work developed in this thesis extends the method of Kenmochi et al. to dimension 4. The goal is to construct a cell complex from a binary digital image defined on a dual grid. First, we compute the different configurations of white and black vertices of a 4-cube, up to isometries, and then we construct the convex hulls defined by these configurations. These convex hulls are constructed by deforming the original 4-cube, and we distinguish several basic construction operations (deformation, degeneracy of cells, etc.). Finally, we construct the cell complex corresponding to the dual image by assembling the cells so obtained.
The n-xels can be identified either with the n-cubes of the grid (a primal grid) or with the centre points of these n-cubes (a dual grid built from the primal grid). If the only labels allowed for the n-xels are "white" and "black", the segmentation is said to be binary: the black n-xels form the foreground, the region of interest for image analysis, and the white n-xels form the background. Models such as Region Adjacency Graphs (RAGs), Dual Graphs (DGs) and the topological map have been proposed to represent partitions into regions, and in particular the topology of these regions, i.e. the incidence and/or adjacency relations between the different regions. The RAG [27] is a precursor of this type of model and was a source of inspiration for the DGs [18] and the topological map [9, 10]. A RAG represents a labelled primal image by a graph: the vertices of the graph correspond to regions of the image, and the edges represent the adjacency relations between regions. DGs resolve certain drawbacks of RAGs for representing 2-dimensional images. The topological map is an extension of the preceding models, defined to manipulate primal images of dimension 2 and 3, representing not only topological relations but also geometric relations.
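For a feel of the construction, the sketch below enumerates the 16 vertices of a 4-cube and computes the convex hull of a black-vertex configuration with Qhull. It ignores the thesis's deformation operations and isometry classification; it only shows the raw convex-hull step:

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull, QhullError

# the 16 vertices of the unit 4-cube
VERTS = np.array(list(product((0, 1), repeat=4)), dtype=float)

def black_hull(black_mask):
    """Convex hull of the black vertices of one 4-cube configuration.
    black_mask: boolean array of length 16 over VERTS."""
    pts = VERTS[black_mask]
    try:
        return ConvexHull(pts)
    except (QhullError, ValueError):
        return pts   # degenerate (lower-dimensional) cell: keep the points
```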
APA, Harvard, Vancouver, ISO, and other styles
43

McKoen, K. M. H. H. "Digital restoration of low light level video images." Thesis, Imperial College London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343720.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Qadir, Ghulam. "Digital forensic analysis for compressed images and videos." Thesis, University of Surrey, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.604341.

Full text
Abstract:
The advancement of imaging devices and image manipulation software has made the tasks of tracking and protecting digital multimedia content increasingly difficult. In order to protect and verify the integrity of digital content, many active watermarking and passive forensic techniques have been developed for various image and video formats over the past decade or so. In this thesis, we focus on the research and development of digital image forensic techniques, particularly for recovering the processing history of JPEG2000 (J2K) images. J2K is a new and improved format introduced by the Joint Photographic Experts Group (JPEG). Unlike JPEG, it is based on the Discrete Wavelet Transform (DWT) and has a more complex coding system; however, its compression performance is significantly better than JPEG's, and it can be used for storing CCTV data and for digital cinema applications. In this thesis, the novel use of Benford's Law for the analysis of J2K compressed images is investigated. Benford's Law is a statistical law that has previously been used for the detection of financial and accounting frauds. Initial results obtained after testing 1,338 grayscale images show that the first-digit probability distribution of the DWT coefficients follows Benford's Law. However, when images are compressed with J2K compression, the first-digit probability graph starts to deviate from the Benford's Law curve, so compression can be detected via a divergence factor derived from the graph. Furthermore, Benford's Law can be applied to the analysis of an image feature known as glare, by investigating the anomaly in the first-digit probability curve of the DWT coefficients. The results show that, out of the 1,338 images, 122 exhibit an irregular peak at digit 5, and each of these images contains glare. This can potentially be used as a tool to isolate images containing glare in large-scale image databases. This thesis also presents a novel J2K compression strength detection technique. The compression strength is classified into three categories corresponding to high, medium and low subjective image quality (low, medium and high compression strength, respectively), ranging from 0 to 1 bits per pixel (bpp). The proposed technique employs a no-reference (NR) perceptual blur metric and double compression calibration to identify heuristic rules that are then used to design an unsupervised classifier for determining the J2K compression strength of a given image. In our experiments, 100 images are used to identify the heuristic rules, followed by another 100 images for testing the performance of the method; compression strength detection achieves an accuracy of approximately 90%. The thesis also presents a new benchmarking tool for video forensics known as the Surrey University Library for Forensic Analysis (SULFA). The library is considered to be the first of its kind available to the research community and contains 150 untouched original videos obtained from three digital cameras of different makes and models, as well as a number of tampered videos and supporting ground-truth datasets that can be used for video forensic experiments and analysis.
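The first-digit test is easy to reproduce in outline. A minimal sketch using PyWavelets with a Haar wavelet; the thesis's exact wavelet, subbands and divergence measure are not specified here, so those choices are assumptions:

```python
import numpy as np
import pywt

BENFORD = np.log10(1 + 1 / np.arange(1, 10))    # P(d) = log10(1 + 1/d)

def first_digit_distribution(gray):
    """First-digit distribution of the DWT detail coefficients; deviation
    from BENFORD hints at prior (J2K-style) wavelet compression."""
    _, (ch, cv, cd) = pywt.dwt2(gray.astype(float), 'haar')
    c = np.abs(np.concatenate([ch.ravel(), cv.ravel(), cd.ravel()]))
    c = c[c >= 1]                                 # need a leading digit
    digits = (c // 10 ** np.floor(np.log10(c))).astype(int)
    return np.bincount(digits, minlength=11)[1:10] / digits.size

def divergence(p):
    """A simple divergence factor: squared deviation from Benford's curve."""
    return float(np.sum((p - BENFORD) ** 2))
```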
APA, Harvard, Vancouver, ISO, and other styles
45

Goodson, Kelvin J. "Automated interpretation of digital images of hydrographic charts." Thesis, Bournemouth University, 1987. http://eprints.bournemouth.ac.uk/382/.

Full text
Abstract:
Details of research into the automated generation of a digital database of hydrographic charts are presented. Low-level processing of digital images of hydrographic charts provides image line feature segments which serve as input to a semi-automated feature extraction system (SAFE). This system is able to perform a great deal of the building of chart features from the image segments simply on the basis of proximity of the segments, and it solicits user interaction when ambiguities arise. The creation of an intelligent knowledge-based system (IKBS), implemented as a backward-chained production rule system that cooperates with the SAFE system, is described. The IKBS attempts to resolve ambiguities using domain knowledge coded in the form of production rules. The two systems communicate by the passing of goals from SAFE to the IKBS and the return of a certainty factor by the IKBS for each goal submitted. The SAFE system can make additional feature-building decisions on the basis of collected sets of certainty factors, thus reducing the need for user interaction. This thesis establishes that the cooperating IKBS approach to image interpretation offers an effective route to automated image understanding.
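A toy version of the goal-passing protocol between SAFE and the IKBS might look like the following backward-chained rule evaluator. The rules, goal names and min-based certainty combination are hypothetical, made up for illustration:

```python
# hypothetical rule base: goal -> (subgoals, rule certainty factor)
RULES = {
    "is_depth_contour": (["is_smooth_curve", "has_depth_label"], 0.9),
    "is_smooth_curve":  ([], 0.8),   # terminal: evidence from the image
    "has_depth_label":  ([], 0.7),
}

def prove(goal):
    """Backward-chain a goal submitted by SAFE and return a certainty
    factor: the rule's CF scaled by the weakest subgoal's certainty."""
    subgoals, cf = RULES.get(goal, ([], 0.0))
    if not subgoals:
        return cf
    return cf * min(prove(g) for g in subgoals)

print(prove("is_depth_contour"))   # 0.9 * min(0.8, 0.7) = 0.63
```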
APA, Harvard, Vancouver, ISO, and other styles
46

Coleman, Sonya. "Scalable operators for adaptive processing of digital images." Thesis, University of Ulster, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Conradie, Johannes Hendrik. "Characterising failure of structural materials using digital images." Thesis, Stellenbosch : Stellenbosch University, 2015. http://hdl.handle.net/10019.1/96755.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2015.
ENGLISH ABSTRACT: The fracture of ductile materials is currently regarded as a complex and challenging phenomenon to characterise and predict. Recently a bond-based, non-local theory called the peridynamic theory was formulated, which is able to directly solve solid mechanics problems that include fracture. The failure criterion is governed by a critical stretch relation between bonds. It has been found in the literature that the critical stretch relates to the popular fracture mechanics parameter called the critical energy release rate for predicting brittle linear-elastic failure. It has also been proposed that the non-linear critical energy release rate, or J-integral, can be used to model ductile failure using peridynamics. The aim of this thesis was to investigate the validity of using the J-integral to determine the critical stretch for predicting ductile failure. Standard ASTM fracture mechanics tests on Compact Tension specimens of polymethyl methacrylate, stainless steel 304L and aluminium 1200H4 were performed to determine the critical energy release rates and non-linear Resistance-curves. Furthermore, a novel peridynamic-based algorithm was developed that implements a failure criterion based on the critical energy release rate together with Digital Image Correlation (DIC) full-field surface displacement measurements of cracked materials. The algorithm is capable of estimating and mapping both the peridynamic damage caused by brittle cracking and the damage caused by plastic deformation. This approach was used to validate the use of an energy-release-rate-based failure criterion for predicting linear-elastic brittle failure using peridynamics, and it showed good correlation with test results for detecting plastic damage in the alloys when incorporating the respective J-integral-derived critical stretch values. Additionally, Modified Arcan tests were performed to obtain Mode I, Mode II and mixed-Mode fracture load results for brittle materials. Mode I peridynamic models compared closely with test results when using the critical stretch derived from the Mode I critical energy release rate, which served as validation for the approach. Moreover, it was argued that Mode I failure criteria cannot in principle be used to model shear failure; it was therefore proposed to use the appropriate Mode II and mixed-Mode critical energy release rates to predict the respective failures in peridynamics. For predicting ductile failure loads, it was found that using a threshold energy release rate derived from the R-curve yielded considerably more accurate failure load results than using the critical energy release rate, i.e. the J-integral. This thesis shows that there is great potential for detecting and characterising cracking and failure with a peridynamic-based approach, by coupling DIC full-field displacement measurements with the critical energy release rate of a particular structural material.
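The link between the critical energy release rate and the peridynamic failure criterion can be sketched with the commonly cited bond-based (PMB) relation for 3-D; whether the thesis uses this exact form is not stated, so treat it as an assumption:

```python
import math

def critical_stretch(G0, bulk_modulus, horizon):
    """Critical bond stretch s0 for a 3-D bond-based (PMB) peridynamic
    material, s0 = sqrt(5*G0 / (9*k*delta)) (Silling & Askari, 2005).
    G0: critical energy release rate [J/m^2] (or the J-integral for the
    ductile case investigated in the thesis); bulk_modulus: k [Pa];
    horizon: peridynamic horizon delta [m]."""
    return math.sqrt(5.0 * G0 / (9.0 * bulk_modulus * horizon))

# e.g. PMMA-like numbers: G0 ~ 350 J/m^2, k ~ 5.6 GPa, delta ~ 1 mm
print(critical_stretch(350.0, 5.6e9, 1e-3))   # ~5.9e-3
```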
APA, Harvard, Vancouver, ISO, and other styles
48

Rieger, James L., and Sherri Gattis. "Draft Standard for Digital Transmission of Television Images." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615074.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
This paper describes the characteristics of the HORACE digital protocol intended for transmission of black-and-white standard television images and associated data through a digital channel and reconstruction of an NTSC standard television picture at the receiving end, using adaptive transmission to allow maximum picture quality at a selected data rate. Tradeoffs are discussed for transmission rates in the range from near DC to over 40 Mbits/second. The HORACE protocol will be a government test range standard to be issued by the Telecommunications Group [TCG] of the Range Commanders' Council as RCC Document 209.
APA, Harvard, Vancouver, ISO, and other styles
49

Forbes, Keith. "Volume estimation of fruit from digital profile images." Master's thesis, University of Cape Town, 2000. http://hdl.handle.net/11427/5220.

Full text
Abstract:
Includes bibliographical references.
This dissertation investigates the feasibility of using the same digital profile images of fruit that are used in commercial packing houses for colour sorting and blemish detection to estimate the volumes of the corresponding individual pieces of fruit. Data sets of actual fruit volumes and digital images of the fruit, simulating both single- and multiple-camera set-ups, are obtained.
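One way such a volume estimate can work, assuming near-rotational symmetry of the fruit, is to integrate the profile silhouette as a solid of revolution. This is a generic illustration, not necessarily the estimator developed in the dissertation:

```python
import numpy as np

def volume_from_profile(half_widths, pixel_size):
    """Treat the fruit as a solid of revolution about its vertical axis:
    V = pi * sum(r_i^2) * dy, with one silhouette half-width r_i per
    image row, converted from pixels to metres by pixel_size."""
    r = np.asarray(half_widths, dtype=float) * pixel_size
    return np.pi * np.sum(r ** 2) * pixel_size

# sanity check: a sphere of radius 0.05 m sampled at 1 mm/px comes out
# close to (4/3)*pi*0.05**3 ~ 5.24e-4 m^3
rows = np.linspace(-0.05, 0.05, 101)
half_widths_px = np.sqrt(np.maximum(0.05**2 - rows**2, 0)) / 1e-3
print(volume_from_profile(half_widths_px, 1e-3))
```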
APA, Harvard, Vancouver, ISO, and other styles
50

Teo, Chek Koon. "Digital enhancement of night vision and thermal images." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Dec%5FTeo%5FChek.pdf.

Full text
Abstract:
Thesis (M.S. in Combat Systems Technology)--Naval Postgraduate School, December 2003.
Thesis advisor(s): Monique P. Fargues, Alfred W. Cooper. Includes bibliographical references (p. 75-76). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
