Dissertations / Theses on the topic 'Images, Photographic – Digital techniques'

Consult the top 50 dissertations / theses for your research on the topic 'Images, Photographic – Digital techniques.'


1

Heiss, Detlef Guntram. "Calibrating the photographic reproduction of colour digital images." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24680.

Abstract:
Colour images can be formed by the combination of stimuli in three primary colours. As a result, digital colour images are typically represented as a triplet of values, each value corresponding to the stimulus of a primary colour. The precise stimulus that the eye receives as a result of any particular triplet of values depends on the display device or medium used. Photographic film is one such medium for the display of colour images. This work implements a software system to calibrate the response given to a triplet of values by an arbitrary combination of film recorder and film, in terms of a measurable film property. The implemented system determines the inverse of the film process numerically. It is applied to calibrate the Optronics C-4500 colour film writer of the UBC Laboratory for Computational Vision. Experimental results are described and compared in order to estimate the expected accuracy that can be obtained with this device using commercially available film processing.
Science, Faculty of
Computer Science, Department of
Graduate
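The numerical inversion the abstract describes can be sketched as a per-channel lookup: measure the film response at sample code values, then interpolate the inverse so a target density maps back to a code value. A minimal single-channel illustration (function names and step-wedge numbers are hypothetical; a real system must handle three interacting colour channels):

```python
import numpy as np

def invert_response(code_values, measured_density):
    """Numerically invert a measured, monotonic device response."""
    order = np.argsort(measured_density)           # np.interp needs ascending x
    d = np.asarray(measured_density, float)[order]
    c = np.asarray(code_values, float)[order]
    return lambda target_density: np.interp(target_density, d, c)

# Hypothetical step-wedge measurements for one channel (not from the thesis):
codes = np.array([0, 64, 128, 192, 255])
density = np.array([0.10, 0.45, 1.00, 1.60, 2.05])
to_code = invert_response(codes, density)
print(to_code(1.30))   # code value expected to reproduce a density of 1.30
```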
2

Polignano, Sergio. "Do sensivel a significação : uma poetica da fotografia." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/284754.

Abstract:
Advisor: Ernesto Giovanni Boccara
Dissertation (master's degree) - Universidade Estadual de Campinas, Instituto de Artes
This work proposes a conceptual approach to the various contents of the photographic image, its reading and decoding, moving from sensibility to signification. It presents a proposal for analysis that seeks to justify and demonstrate the poetic (artistic) condition of photography. In this sense, it shows what is uncommon in the gaze that eternalises and the gaze that resuscitates, giving their real value to photographic images, whatever they may be and whatever of importance they may show, preserving their information over time. In this way, it seeks to contribute to a better understanding of the period in which photographs were made, of the scenes they record and their contexts, as well as of the implications and relations they have with that distinct form of expression we call Art.
Master's degree
Master of Arts
3

McQuade, Patrick John (Art, College of Fine Arts, UNSW). "Visualising the invisible: articulating the inherent features of the digital image." Awarded by: University of New South Wales, 2007. http://handle.unsw.edu.au/1959.4/43307.

Abstract:
Contemporary digital imaging practice has largely adopted the visual characteristics of its closest mediatic relative, the analogue photograph. In this regard, new media theorist Lev Manovich observes that "Computer software does not produce such images by default. The paradox of digital visual culture is that although all imaging is becoming computer-based, the dominance of photographic and cinematic imagery is becoming even stronger. But rather than being a direct, "natural" result of photo and film technology, these images are constructed on computers" (Manovich 2001: 179). Manovich articulates the disjuncture between the technical processes involved in digital image creation and the visual characteristics of the final digital image, with its replication of the visual qualities of the analogue photograph. This research addresses this notion further by exploring the following questions. What are the defining technical features of these computer-based imaging processes? Could these technical features be used as a basis for developing an alternative aesthetic for the digital image? Why is there a reticence to visually acknowledge these technical features in contemporary digital imaging practice? Are there historic mediated precedents where the inherent technical features of the medium are visually acknowledged in the production of imagery? If these defining technical features of the digital imaging process were visually acknowledged in the image creation process, what would be the outcome? The studio practice component of the research served as a foundation for the author's artistic and aesthetic development, where the intent was to investigate and highlight four technical qualities of the digital image identified through case studies of three digital artists and other secondary sources. These technical qualities are: the composite RGB colour system of the digital image as it appears on screen; the pixellated microstructure of the digital image; the luminosity of the digital image as it appears on a computer monitor; and the underlying numeric and (ASCII-based) alphanumeric codes of the image file, which enable that most defining feature of the image file, its programmability. Based on research in the visualization of these numeric and alphanumeric codes, digital images of bacteria produced with a scanning electron microscope were chosen as image content for an experimental body of work, to draw the conceptual link between the numeric and alphanumeric codes of the image file and the coded genetic sequence of an individual bacterial entity.
4

Carstens, Andries Theunis. "Digitising photographic negatives and prints for preservation." Thesis, Cape Peninsula University of Technology, 2013. http://hdl.handle.net/20.500.11838/1355.

Abstract:
A dissertation presented to the Faculty of Informatics and Design of the Cape Peninsula University of Technology in fulfilment of the requirements for the degree Magister Technologiae: Photography, 2013.
This study deals with the pitfalls and standards associated with the digitisation of photographic artefacts in formal collections. The popularity of the digital medium has caused a rapid increase in the demand for converting images into digital files. The need for equipment capable of executing the task successfully, the pressure on collection managers to display their collections to the world, and the demand for knowledge needed by managers and operators created pressure to perform optimally, often in great haste. As a result of the rush to create digital image files to be displayed and preserved, questionable decisions may be made. The best choice of file formats for longevity, the setting and maintaining of standards to guarantee quality digital files, consultation with experts in the field of digitisation, and attention to best practices are important aspects which must be considered. In order to determine the state of affairs in countries with advanced knowledge and experience in the field of digitisation, a comprehensive literature study was done. It was found that enough information exists to enable collection managers in South Africa to make well-informed decisions to ensure high-quality digital collections. By means of questionnaires, a survey was undertaken amongst selected Western Cape image preservation institutions to determine the level of knowledge of the managers who are required to make informed decisions. The questionnaire was designed to give insight into choices being made regarding the technical quality, workflow and best-practice aspects of digitisation. Comparing the outcome of the questionnaires with best practices and recommended standards in countries with an advanced level of experience, it was found that not enough of this experience and knowledge is used by local collection managers, although it is readily available; in some cases standards are disregarded completely. The study also investigated, by means of questionnaires, the perception of the digital preservation of image files among full-time photographic students and volunteer members of the Photographic Society of South Africa. It was found that uncertainty exists within both groups with regard to file longevity and access to files in five to ten years' time. Digitisation standards are set and maintained by the use of specially designed targets, which enable digitising managers to maintain control over the quality of the digital content as well as to monitor equipment performance. The use of these targets to set standards was investigated and found to be an accurate and easy method of maintaining control over the standard and quality of digital files. Suppliers of digitising equipment very often market their equipment as being of high quality and able to fulfil the required digitisation tasks. Testing selected digitising equipment by means of specially designed targets showed, however, that potential buyers of equipment in the high cost range should be very cautious about suppliers' claims without proof of performance; using targets to verify performance should be a routine check before any purchase. The study concludes with recommendations for implementing standards and points to potential future research.
5

Phillips, Carlos. "Photographic transformations and greyscale pictures." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101163.

Abstract:
We have introduced a geometry which is invariant to certain forms of burning and dodging of photographic prints. We then used this geometry to create invariant measurements, representing information that would not change across different photographic printing processes.
The presented algorithm used properties of best-fit planes to represent a photograph. There are many other possible measurements that would fit this framework. Further, the representation of photographs presented in this thesis could be combined with existing computer vision algorithms for tasks such as object recognition within photographs whose development process is unknown.
6

Gavard, Sandra. "Photo-graft : a critical analysis of image manipulation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0015/MQ54990.pdf.

7

Datodi, Mark. "Digital imaging: Creating new realities." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 1999. https://ro.ecu.edu.au/theses/1253.

Abstract:
It is becoming increasingly difficult to discern photographic reality from digital reality. Digital imagery is revolutionising photography and challenging preconceived notions of this art form. Over the years, photography has been viewed metaphorically as a window on the world and on the past. No longer, however, is the creation of photographic imagery reliant upon its intrinsic relationship with reality. Using computer technology, original photographic material can be altered, manipulated and seamlessly combined with other fictional imagery without obvious detection and with relative ease. The proliferation of digital imaging is producing two apparent crises for photography. The first is the perceived threat to photography, involving the fear that traditional photographic processes, methods and products will be superseded by manipulated digital images passing themselves off as real photographs. Added to these growing concerns for photography's longevity is the prospect that viewers will no longer believe in photography as a deliverer of objective truth and that the medium itself will lose its power as a 'privileged conveyer of information' (Batchen, 1994, p. 47). The second crisis pertains to the ethical concerns that these digital simulations raise: copyright, moral rights and artistic integrity.
8

Pedroso, Anderson Antonio. "Vilém Flusser : de la philosophie de la photographie à l’univers des images techniques." Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUL103.

Abstract:
The thought of Vilém Flusser (1920-1991) has given rise to an abundant critical bibliography on his contributions to the field of communication, where his wager on a "new imagination" brings together art, science and technology under the aegis of his philosophy of photography. The archaeology of his thought proposed here takes account of the disciplinary anchoring of his work in theories of communication and media: it examines the notion of art that Flusser developed over the course of his career, in order to grasp the deployment of this artistic dimension, its status and contours, as well as its reach. Flusser's notion of art is inseparably linked to history: the traumatic experience of exile became central to his critical thinking about all totalitarianism. The practices and knowledge that underpin his relationship to the world and inform the development of his thought are worked through a Kulturgeschichte seen from the perspective of a "post-history". There is, in particular, a playful thinking of the game, where the objective is less to play within the established rules than to thwart them, to "play to change the game". In other words, it is a radically dialogical and polyphonic way of thinking, whose Kommunikologie, shaped as a metatheory, represents the culmination of Flusser's work. His historical trajectory and the knowledge associated with it contribute to the construction of a cybernetic way of thinking that sketches the fundamental lines of a Kunstwissenschaft, in which he proposes a kind of iconoclasm without renouncing images.
9

Van der Westhuizen, Christo Carel. "Efficient registration of limited field of view ocular fundus imagery." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/85633.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2013.
Diabetic and hypertensive retinopathy are two common causes of blindness that can be prevented by managing the underlying conditions. Patients suffering from these conditions are encouraged to undergo regular examinations to monitor the retina for signs of deterioration. For these routine examinations an ophthalmoscope is used: a relatively inexpensive device that allows an examiner to directly observe the ocular fundus (the interior back wall of the eye that contains the retina). These devices are analogue and do not allow the capture of digital imagery. Fundus cameras, on the other hand, are larger devices that offer high-quality digital images; they come at an increased cost, however, and are not practical for use in the field. In this thesis the design and implementation of a system that digitises imagery from an ophthalmoscope is discussed. The main focus is the development of software algorithms to increase the quality of the images, yielding results closer in quality to those of a fundus camera. The aim is not to match the capabilities of a fundus camera, but rather to offer a cost-effective alternative that delivers sufficient quality for routine monitoring of the aforementioned conditions. For the digitisation, the camera of a mobile phone is proposed. The camera is attached to an ophthalmoscope to record a video of an examination. Software algorithms are then developed to parse the video frames and combine those that are of better quality. For the parsing, a method of rapidly selecting valid frames based on colour thresholding and spatial filtering techniques is developed. Registration is the process of determining how the selected frames fit together; spatial cross-correlation is used to register the frames. Only translational transformations are assumed between frames, and the designed algorithms focus on estimating this relative translation across a large set of frames. Methods of optimising these operations are also developed. For the combination of the frames, averaging is used to form a composite image. The results obtained are enhanced greyscale images of the fundus. These images do not match those captured with fundus cameras in terms of quality, but do show a significant improvement over the individual frames from which they are composed. Collectively, a set of video frames can cover a larger region of the fundus than any frame does individually; by combining these frames, an effective increase in the field of view is obtained. Due to low light exposure, the individual frames also contain significant noise; in the results, the noise is reduced through the averaging of several frames that overlap at the same location.
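The translation-only registration lends itself to a compact frequency-domain sketch: cross-correlate each frame against a reference via the FFT, read the shift off the correlation peak, then average the aligned frames to suppress noise. A minimal illustration of the idea (names are illustrative; the thesis develops optimised variants):

```python
import numpy as np

def estimate_shift(ref, frame):
    """Translation between two equal-sized greyscale frames via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h                    # wrap large offsets to negative shifts
    if dx > w // 2:
        dx -= w
    return dy, dx

def register_and_average(frames):
    """Align all frames to the first one and average to suppress noise."""
    ref = frames[0].astype(float)
    acc = ref.copy()
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f.astype(float))
        acc += np.roll(np.roll(f.astype(float), dy, axis=0), dx, axis=1)
    return acc / len(frames)
```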
10

Suen, Tsz-yin Simon, and 孫子彥. "Curvature domain stitching of digital photographs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38800901.

11

Abdulrahman, Hasan. "Oriented filters for feature extraction in digital Images : Application to corners detection, Contours evaluation and color Steganalysis." Thesis, Montpellier, 2017. http://www.theses.fr/2017MONTS077/document.

Abstract:
Interpretation of image content is a very important objective in image processing and computer vision, and it has therefore received much attention from researchers. An image contains a great deal of information that can be studied, such as colour, shape, edges, corners, size and orientation. Moreover, contours include the most important structures in the image, and in order to extract the contour features of an object we must detect its edges. Edge detection remains a key step in a wide range of applications such as image restoration, enhancement, steganography, watermarking, image retrieval, recognition and compression. An efficient boundary detection method should create a contour image containing edges at their correct locations with a minimum of misclassified pixels; the performance evaluation of edge detection results, however, is still a challenging problem. Digital images are also sometimes modified, legitimately or not, in order to carry special or secret data, and such changes slightly modify coefficient values of the image; to remain less visible, most steganographic methods modify pixel values in the edge and texture areas of the image. It is therefore important to detect the presence of hidden data in digital images. This thesis is divided into two main parts. The first part deals with filtering edge detection, contour evaluation and corner detection methods, and presents five contributions: first, a new normalised supervised edge map quality measure, in which the normalisation strategy lets a score close to 0 indicate a good edge map while a score of 1 indicates a poor segmentation; second, a new technique for evaluating filtering edge detection methods involving the minimum score of the considered measures; third, a new ground truth edge map labelled in a semi-automatic way on real images; fourth, a new measure that takes the distances of false positive points into account in order to evaluate an edge detector objectively; finally, a new approach to corner detection based on the combination of directional derivative and homogeneity kernels, which remains more stable and robust to noise than ten well-known corner detection methods. The second part deals with colour image steganalysis based on machine learning classification, and presents three contributions: first, a new colour image steganalysis method based on colour features extracted from the correlations between the gradients of the red, green and blue channels, features that give the cosine of the angles between the gradients; second, a new colour steganalysis method based on geometric measures obtained from the sine and cosine of the gradient angles between all the colour channels; finally, a new approach to colour image steganalysis based on a bank of steerable Gaussian filters. All three proposed methods provide interesting and promising results, outperforming the state of the art in colour image steganalysis.
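The first steganalysis contribution rests on a simple geometric quantity, the cosine of the angle between the gradient fields of two colour channels. A minimal sketch of features of this kind (one mean cosine per channel pair; the actual method derives a richer feature set for a machine-learning classifier):

```python
import numpy as np

def gradient_angle_cosines(img):
    """Mean cosine of the angle between channel gradients, per RGB channel pair."""
    grads = [np.stack(np.gradient(img[..., c].astype(float)), axis=-1)
             for c in range(3)]                    # per-channel (gy, gx) fields
    feats = []
    for a in range(3):
        for b in range(a + 1, 3):
            dot = (grads[a] * grads[b]).sum(-1)
            norms = (np.linalg.norm(grads[a], axis=-1)
                     * np.linalg.norm(grads[b], axis=-1) + 1e-12)
            feats.append((dot / norms).mean())     # R-G, R-B, G-B cosines
    return np.array(feats)
```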
12

Wolin, Martin Michael. "Digital high school photography curriculum." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2414.

Abstract:
The purpose of this thesis is to create a high school digital photography curriculum that is relevant to real-world application and would enable high school students to enter the work force with marketable skills or go on to post-secondary education with advanced knowledge in the field of digital imaging.
13

Meintjes, Anthony Arthur. ""From digital to darkroom"." Thesis, Rhodes University, 2001. http://hdl.handle.net/10962/d1007418.

14

Musoke, David. "Digital image processing with the Motorola 56001 digital signal processor." Scholarly Commons, 1992. https://scholarlycommons.pacific.edu/uop_etds/2236.

Abstract:
This report describes the design and testing of the Image56 system, an IBM-AT based system which consists of an analog video board and a digital board. The former contains all analog and video support circuitry to perform real-time image processing functions. The latter is responsible for performing non real-time, complex image processing tasks using a Motorola DSP56001 digital signal processor. It is supported by eight image data buffers and 512K words of DSP memory (see Appendix A for schematic diagram).
15

Deng, Hao. "Mathematical approaches to digital color image denoising." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31708.

Abstract:
Thesis (Ph.D)--Mathematics, Georgia Institute of Technology, 2010.
Committee Chair: Haomin Zhou; Committee Member: Luca Dieci; Committee Member: Ronghua Pan; Committee Member: Sung Ha Kang; Committee Member: Yang Wang. Part of the SMARTech Electronic Thesis and Dissertation Collection.
16

Huang, Ben. "Removing Textured Artifacts from Digital Photos Using Spatial Frequency Filtering." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/148.

Abstract:
Virtually all image processing is now done with digital images. These images, captured with digital cameras, can be readily processed with various types of editing software to serve a multitude of personal and commercial purposes. But not all images are directly captured, and even of those that are, many are not of sufficiently high quality. Digital images are also acquired by scanning old paper images, and the result is often a digital image of poor quality. Textured artifacts on some old paper pictures were designed to help protect pictures from discoloration. After scanning, however, these textured artifacts exhibit annoying textured noise in the digital image, highly degrading the visual definition of the image on electronic screens. This kind of image noise is academically called global periodic noise: it is a spurious, repetitive pattern that exists consistently throughout the image. There does not appear to be any commercial graphic software with a tool box to directly resolve this global periodic noise; even Photoshop, considered to be the most powerful and authoritative graphic software, does not have an effective function to reduce textured noise. This thesis addresses the problem by proposing an alternative graphic filter to what is currently available. To achieve the best image quality in photographic editing, spatial frequency domain filtering is utilized instead of spatial domain filtering. In the frequency domain, the consistent periodicity of the textured noise leads to well defined spikes in the frequency transform of the noisy image. When the noise spikes are at a sufficient distance from the image spectrum, they can be removed by reducing their frequency amplitudes; the filtered spectrum may then yield a noise-reduced image through inverse frequency transforming. This thesis proposes a method to reduce periodic noise in the spatial frequency domain; summarizes the differences between the DFT and DCT, and the FFT and fast DCT, in image processing applications; uses the fast DCT as the frequency transform in order to improve both computational load and filtered image quality; and develops software that can be implemented as a plug-in for large graphic software to remove textured artifacts from digital images.
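The core operation is a notch filter in the frequency domain: globally periodic noise produces isolated spikes in the spectrum, which can be zeroed when they lie away from the image's own low-frequency content. A minimal sketch using numpy's FFT (the thesis itself argues for the fast DCT; the thresholds below are illustrative):

```python
import numpy as np

def remove_periodic_noise(img, keep_radius=20, z=8.0):
    """Suppress isolated spectral spikes caused by globally periodic texture."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    mag = np.abs(F)
    med = np.median(mag)
    mad = np.median(np.abs(mag - med)) + 1e-12
    spikes = (mag - med) / mad > z                 # robust outlier test per coefficient
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    centre = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= keep_radius ** 2
    F[spikes & ~centre] = 0                        # notch spikes, protect the image spectrum
    return np.fft.ifft2(np.fft.ifftshift(F)).real
```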
17

Keskinarkaus, A. (Anja). "Digital watermarking techniques for printed images." Doctoral thesis, Oulun yliopisto, 2013. http://urn.fi/urn:isbn:9789526200583.

Abstract:
During the last few decades, digital watermarking techniques have gained a lot of interest. Such techniques enable imperceptible information to be hidden in images, information which can later be extracted from them. As a result, digital watermarking has many interesting applications, for example in Internet distribution. Content such as images is today manipulated mainly in digital form, so the focus of watermarking research has traditionally been the digital domain. However, a vast number of images still appear in some physical format, such as in books, posters or labels, and there are a number of possible applications of hidden information in image printouts as well. This introduces an additional level of challenge, as the watermarking technique must be robust to extraction from printed output. In this thesis, methods are developed in which a watermarked image appears in a printout and the invisible information can later be extracted using a scanner or a mobile phone camera together with watermark extraction software. In these cases the watermarking method has to be carefully designed, because both the printing and the capturing process cause distortions that make watermark extraction challenging. The focus of the study is on developing blind, multibit watermarking techniques, where the robustness of the algorithms is tested in an office environment using standard office equipment. Particular attention is paid to the possible effect of the background of the printed images, as well as to compound attacks, since these are important in practical applications. The main objective is thus to provide technical means to achieve high robustness and to develop watermarking methods robust to the printing and scanning process; a secondary objective is to develop methods where extraction is possible with the aid of a mobile phone camera. The main contributions of the thesis are: (1) methods to increase watermark extraction robustness with perceptual weighting; (2) methods to robustly synchronise the extraction of a multibit message from a printout; (3) a method to encode a multibit message utilising directed periodic patterns, and a method to decode the message after attacks; (4) a demonstrator of an interactive poster application and a key-based, robust and secure identification method from a printout.
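For orientation, the simplest blind watermark in this family is generic additive spread spectrum, not the directed-periodic-pattern encoding of the thesis: add a keyed pseudorandom pattern, then detect by correlation. The sketch below also shows why print-and-scan robustness is hard, since any geometric misalignment destroys the correlation unless extraction is first synchronised:

```python
import numpy as np

def embed(img, key, strength=2.0):
    """Add a keyed +/-1 pattern; returns the marked image and the pattern."""
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=img.shape)
    return np.clip(img.astype(float) + strength * w, 0, 255), w

def detect(img, w):
    """Blind correlation detector: a large positive response implies the mark."""
    z = img.astype(float) - img.mean()
    return float((z * w).mean())
```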
18

Santa, Clara Miguel Eduardo. "The application of digital photographic technologies to lighting research." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609406.

19

Vernacotola, Mark J. "Characterization of digital film scanner systems for use with digital scene algorithms /." Online version of thesis, 1995. http://hdl.handle.net/1850/11967.

20

Brisbane, Gareth Charles Beattie. "On information hiding techniques for digital images." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050221.122028/index.html.

21

Morris, Alan. "Digital technologies and photographic archives Birmingham Central Library : a case study." Thesis, University of Wolverhampton, 2001. http://hdl.handle.net/2436/126505.

Abstract:
This thesis considers the use and potential of digital technologies for those responsible for photographic collections in public libraries. Using the Birmingham Central Library as a case study, the research has explored how information communication technologies have impacted on the way in which photographic images are created, stored and disseminated. The study provides an overview of both the British library service and the role of archives within this public provision. Following an examination of the characteristics of digital media and a range of issues relating to the preservation, dissemination and economic exploitation of photographic materials in digital form, the thesis goes on to adopt a variety of research strategies, including a number of empirical projects used to assimilate information relating to the practical application of information communication technologies by those working in public libraries. The major outcome of the research, identified in the later sections of the thesis, has been to make a unique contribution to the field of knowledge relating to the provision of digital resources by those responsible for photographic collections residing in archives within public libraries in the United Kingdom. The conclusions to emerge from the theoretical and empirical research contribute to knowledge by providing current information about the utilisation of digital technologies for the purposes of enhancing access to photographic material held within public library archives, whilst also considering possible future developments relating to the area of investigation.
22

Hu, Guang-hua. "Extending the depth of focus using digital image filtering." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45653.

Abstract:

Two types of image processing methods capable of forming a composite image from a set of image slices which have in-focus as well as out-of-focus segments are discussed. The first type is based on space domain operations and has been discussed in the literature. The second type, to be introduced, is based on the intuitive concept that the spectral energy distribution of a focused object is biased towards lower frequencies after blurring. This approach requires digital image filtering in the spatial frequency domain. A comparison among methods of both types is made using a quantitative fidelity criterion.
Master of Science
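The second, frequency-based idea generalises naturally to per-pixel focus stacking: measure local high-frequency energy in every slice and keep, at each pixel, the slice where that energy is largest. A minimal sketch using a locally averaged squared Laplacian as the energy measure (illustrative, not the thesis's exact filter):

```python
import numpy as np
from scipy import ndimage

def focus_stack(slices, window=9):
    """Composite image: per pixel, take the slice with the most local detail."""
    stack = np.stack([s.astype(float) for s in slices])        # (N, H, W)
    energy = np.stack([ndimage.uniform_filter(ndimage.laplace(s) ** 2, window)
                       for s in stack])                        # local HF energy
    best = energy.argmax(axis=0)                               # sharpest slice index
    return np.take_along_axis(stack, best[None], axis=0)[0]
```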
23

Hicks, Susan J. "Digital archiving and reproduction of black and white photography /." Online version of thesis, 1996. http://hdl.handle.net/1850/11919.

24

Smołka, Bogdan. "Nonlinear techniques of noise reduction in digital color images." Habilitation thesis, Wydawnictwo Politechniki Śląskiej, 2004. https://delibra.bg.polsl.pl/dlibra/docmetadata?showContent=true&id=9132.

25

Robins, Michael John. "Local energy feature tracing in digital images and volumes." University of Western Australia. Dept. of Computer Science, 1999. http://theses.library.uwa.edu.au/adt-WU2003.0010.

Abstract:
Digital image feature detectors often comprise two stages of processing: an initial filtering phase and a secondary search stage. The initial filtering is designed to accentuate specific feature characteristics or suppress spurious components of the image signal. The second stage of processing involves searching the results for various criteria that will identify the locations of the image features. The local energy feature detection scheme combines the squares of the signal convolved with a pair of filters that are in quadrature with each other. The resulting local energy value is proportional to phase congruency, which is a measure of the local alignment of the phases of the signal's constituent Fourier components. Points of local maximum phase alignment have been shown to correspond to visual features in the image. The local energy calculation accentuates the location of many types of image features, such as lines, edges and ramps, and estimates of local energy can be calculated in multidimensional image data by rotating the quadrature filters to several orientations. The second stage search criterion for local energy is to locate the points that lie along the ridges in the energy map that connect the points of local maxima. In three-dimensional data the relatively higher energy values will form films between connecting filaments and tendrils. This thesis examines the use of recursive spatial domain filtering to calculate local energy. A quadrature pair of filters, based on the first derivative of the Gaussian function and its Hilbert transform, are rotated in space using a kernel of basis functions to obtain various orientations of the filters. The kernel is designed to be separable, and each term is implemented using a recursive digital filter. Once local energy has been calculated, the ridges and surfaces of high energy values are determined using a flooding technique. Starting from the points of local minima, we perform an ablative skeletonisation of the higher energy values. The topology of the original set is maintained by examining and preserving the topology of the neighbourhood of each point when considering it for removal. This combination of homotopic skeletonisation and sequential processing of each level of energy values results in a well located, thinned and connected tracing of the ridges. The thesis contains examples of the local energy calculation using steerable recursive filters and the ridge tracing algorithm applied to two- and three-dimensional images. Details of the algorithms are contained in the text, and details of their computer implementation are provided in the appendices.
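The filtering stage can be illustrated in one dimension: convolve the signal with a quadrature pair, here an odd derivative-of-Gaussian filter and a numerically constructed Hilbert-transform partner, and sum the squared responses; peaks of the resulting energy mark lines, edges and ramps alike. A minimal sketch (function names are illustrative; the thesis implements steerable recursive filters in two and three dimensions):

```python
import numpy as np

def hilbert_transform(f):
    """Hilbert transform of a filter via the FFT (sign flip of negative frequencies)."""
    F = np.fft.fft(f)
    H = -1j * np.sign(np.fft.fftfreq(len(f)))
    return np.fft.ifft(F * H).real

def local_energy(signal, sigma=2.0):
    """Sum of squared responses to a quadrature filter pair."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    odd = -x * np.exp(-x**2 / (2 * sigma**2))      # derivative of Gaussian (odd)
    even = hilbert_transform(odd)                  # its quadrature partner (even)
    e_odd = np.convolve(signal, odd, mode="same")
    e_even = np.convolve(signal, even, mode="same")
    return e_odd**2 + e_even**2
```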
26

Lim, Suryani. "Feature extraction, browsing and retrieval of images." Monash University, School of Computing and Information Technology, 2005. http://arrow.monash.edu.au/hdl/1959.1/9677.

27

Albannai, Talal N. "Conversational Use of Photographic Images on Facebook: Modeling Visual Thinking on Social Media." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849631/.

Abstract:
Modeling the "thick description" of photographs began at the intersection of personal and institutional descriptions. Comparing institutional descriptions of particular photos that were also used in personal online conversations was the initial phase. Analyzing conversations that started with a photographic image from the collection of the Library of Congress (LC) or the collection of the Manchester Historic Association (MHA) provided insights into how cultural heritage institutions could enrich the description of photographs by using informal descriptions such as those applied by Facebook users. Taking photos of family members, friends, places, and interesting objects is something people do often in their daily lives. Some photographic images are stored, and some are shared with others in gatherings, occasions, and holidays. Face-to-face conversations about remembering some of the details of photographs and the events they record are themselves rarely recorded. Digital cameras make it easy to share personal photos in Web conversations and to duplicate old photos and share them on the Internet. The World Wide Web even makes it simple to insert images from cultural heritage institutions in order to enhance conversations. Images have been used as tokens within conversations along with the sharing of information and background knowledge about them. The recorded knowledge from conversations using photographic images on Social Media (SM) has resulted in a repository of rich descriptions of photographs that often include information of a type that does not result from standard archival practices. Closed group conversations on Facebook among members of a community of interest/practice often involve the use of photographs to start conversations, convey details, and initiate story-telling about objects, events, and people. Modeling of the conversational use of photographic images on SM developed from the exploratory analyses of the historical photographic images of the Manchester, NH group on Facebook. The model was influenced by the typical model of Representation by Agency from O'Connor in O'Connor, Kearns, and Anderson's Doing Things with Information: Beyond Indexing and Abstracting, by considerations of how people make and use photographs, and by the notion of functionality from Patrick Wilson's Public Knowledge, Private Ignorance: Toward a Library and Information Policy. The model offers paths for thickening the descriptions of photographs in archives and for enriching the use of photographs on social media.
28

Naderi, Ramin. "Quadtree-based processing of digital images." PDXScholar, 1986. https://pdxscholar.library.pdx.edu/open_access_etds/3590.

Abstract:
Image representation plays an important role in image processing applications, which usually contain a huge amount of data. An image is a two-dimensional array of points, and each point contains information (e.g. color). A 1024 by 1024 pixel image occupies 1 megabyte of space in main memory. In actual circumstances, 2 to 3 megabytes of space are needed to facilitate the various image processing tasks. Large amounts of secondary memory are also required to hold various data sets. In this thesis, two different operations on the quadtree are presented. There are, in general, two types of data compression techniques in image processing. One approach is based on the elimination of redundant data from the original picture. Other techniques rely on higher levels of processing such as interpretation, generation, induction and deduction procedures (1, 2). One of the popular techniques of data representation that has received a considerable amount of attention in recent years is the quadtree data structure. This has led to the development of various techniques for performing conversions and operations on the quadtree. Klinger and Dyer (3) provide a good bibliography of the history of quadtrees. Their paper reports experiments on the degree of compaction of picture representation which may be achieved with tree encoding; their experiments show that tree encoding can produce memory savings. Pavlidis [15] reports on the approximation of pictures by quadtrees. Horowitz and Pavlidis [16] show how to segment a picture using traversal of a quadtree; they segment the picture by polygonal boundaries. Tanimoto [17] discusses distortions which may occur in quadtrees for pictures. Tanimoto [18, p. 27] observes that quadtree representation is particularly convenient for scaling a picture by powers of two. Quadtrees are also useful in graphics and animation applications [19, 20], which are oriented toward the construction of images from polygons and superpositions of images. Encoded pictures are useful for display, especially if the encoding lends itself to processing.
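The compaction reported in these studies comes from recursively splitting the image into quadrants and stopping wherever a quadrant is homogeneous, so that large uniform areas collapse into single leaves. A minimal sketch of such an encoding, assuming a square image whose side is a power of two:

```python
import numpy as np

def build_quadtree(img, y=0, x=0, size=None, tol=0):
    """Recursively encode a square region; uniform regions become single leaves."""
    size = img.shape[0] if size is None else size
    block = img[y:y + size, x:x + size]
    if size == 1 or int(block.max()) - int(block.min()) <= tol:
        return {"y": y, "x": x, "size": size, "value": float(block.mean())}
    half = size // 2
    return {"y": y, "x": x, "size": size, "children": [
        build_quadtree(img, y, x, half, tol),                # NW
        build_quadtree(img, y, x + half, half, tol),         # NE
        build_quadtree(img, y + half, x, half, tol),         # SW
        build_quadtree(img, y + half, x + half, half, tol),  # SE
    ]}
```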
29

Su, Qi, and 蘇琦. "Segmentation and reconstruction of medical images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2008. http://hub.hku.hk/bib/B41897067.

30

Finner, Richard Paul. "Curriculum for a course in introductory digital darkroom." CSUSB ScholarWorks, 1996. https://scholarworks.lib.csusb.edu/etd-project/1261.

Abstract:
The objective of this project was to develop a curriculum for a course in Introductory Digital Darkroom. This curriculum will be used to replace existing curriculum in the Camera, Stripping and Platemaking course taught in The Graphics Technology Department of Riverside Community College (RCC), Riverside, California. In order to provide students with technologically advanced, marketable skills, the course must be revised to include computerized electronic prepress techniques.
31

Allan, Christopher. "An analysis of digital photojournalistic practices: a study of the Sowetan's photographic department." Thesis, Rhodes University, 2003. http://hdl.handle.net/10962/d1003071.

Abstract:
Photojournalism in South Africa is in the process of undergoing a shift from an analogue past to a fully digital future. This shift to digital has already been completed by many of the newspapers in the United States of America and Europe, and the new technology is seen to have made fundamental differences in the way that journalists do their job. This thesis attempts to explore the differences brought about, as well as the problems experienced, by the photographic department at the Sowetan newspaper as a result of the shift to digital. How the development of technology has affected the photojournalist is focused upon in a brief history of photojournalism, and examples of how technology has shaped different aspects of journalism in both positive and negative ways are considered. Exactly what digital photography is, how it has been integrated into American photographic departments, and the changes that the new technology has prompted are also explained. The manipulation of images in the past as well as the relative ease of digital manipulation are covered, and concerns are raised about the future implications of digital manipulation. By conducting participant observation and holding interviews, research data was compiled which allowed conclusions to be drawn about the impact that the shift to digital had had on the Sowetan photographic department. Intentional and unintentional consequences were expected and revealed in the research. The job of the photojournalist and photographic editor was found to have changed, but perhaps not as dramatically as expected. Third-world factors such as crime, poverty and lack of education were found to have resulted in problems that differed noticeably from those experienced by American and European photographic departments. Some expected difficulties were not experienced at all, while other major obstacles, specifically the repairs that must constantly be made to the digital cameras, continue to hamper the operations of the new digital department. Some understanding of the problems that might be encountered by future photojournalism departments considering making the shift to digital is arrived at, in the hope that they may be foreseen and overcome.
32

Brink, Anton David. "The selection and evaluation of grey-level thresholds applied to digital images." Thesis, Rhodes University, 1988. http://hdl.handle.net/10962/d1001996.

Abstract:
Many applications of image processing require the initial segmentation of the image by means of grey-level thresholding. In this thesis, the problems of automatic threshold selection and evaluation are addressed in order to find a universally applicable thresholding method. Three previously proposed threshold selection techniques are investigated, and two new methods are introduced. The results of applying these methods to several different images are evaluated using two threshold evaluation techniques, one subjective and one quantitative. It is found that no threshold selection technique is universally acceptable, as different methods work best with different images and applications.
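For orientation, one widely used automatic selection rule of this kind is Otsu's criterion, which chooses the grey level that maximises the between-class variance of the two resulting pixel classes. A minimal sketch (illustrative only; not necessarily one of the methods studied in the thesis):

```python
import numpy as np

def otsu_threshold(img):
    """Pick the grey level maximising between-class variance of the histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0     # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2             # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```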
33

Bishop, Tom E. "Blind image deconvolution : nonstationary Bayesian approaches to restoring blurred photos." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3788.

Abstract:
High quality digital images have become pervasive in modern scientific and everyday life, in areas from photography to astronomy, CCTV, microscopy, and medical imaging. However, there are always limits to the quality of these images due to uncertainty and imprecision in the measurement systems. Modern signal processing methods offer the promise of overcoming some of these problems by postprocessing these blurred and noisy images. In this thesis, novel methods using nonstationary statistical models are developed for the removal of blurs from out-of-focus and other types of degraded photographic images. The work tackles the fundamental problem of blind image deconvolution (BID): its goal is to restore a sharp image from a blurred observation when the blur itself is completely unknown. This is a "doubly ill-posed" problem: an extreme lack of information must be countered by strong prior constraints about sensible types of solution. In this work, the hierarchical Bayesian methodology is used as a robust and versatile framework to impart the required prior knowledge. The thesis is arranged in two parts. In the first part, the BID problem is reviewed, along with techniques and models for its solution. Observation models are developed, with an emphasis on photographic restoration, concluding with a discussion of how these are reduced to the common linear spatially-invariant (LSI) convolutional model. Classical methods for the solution of ill-posed problems are summarised to provide a foundation for the main theoretical ideas that will be used under the Bayesian framework. This is followed by an in-depth review and discussion of the various prior image and blur models appearing in the literature, and then their applications to solving the problem with both Bayesian and non-Bayesian techniques. The second part covers novel restoration methods, making use of the theory presented in Part I. Firstly, two new nonstationary image models are presented. The first models local variance in the image, and the second extends this with locally adaptive noncausal autoregressive (AR) texture estimation and local mean components. These models allow for recovery of image details including edges and texture, whilst preserving smooth regions. Most existing methods do not model the boundary conditions correctly for deblurring of natural photographs, and a chapter is devoted to exploring Bayesian solutions to this topic. Due to the complexity of the models used and the problem itself, there are many challenges which must be overcome for tractable inference. Using the new models, three different inference strategies are investigated: firstly, the Bayesian maximum marginalised a posteriori (MMAP) method with deterministic optimisation; proceeding with the stochastic methods of variational Bayesian (VB) distribution approximation, and simulation of the posterior distribution using the Gibbs sampler. Of these, we find the Gibbs sampler to be the most effective way to deal with a variety of different types of unknown blurs. Along the way, details are given of the numerical strategies developed to give accurate results and to accelerate performance. Finally, the thesis demonstrates state-of-the-art results in blind restoration of synthetic and real degraded images, such as recovering details in out-of-focus photographs.
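For contrast with the blind setting described above, the classical non-blind baseline reviewed in Part I fits in a few lines: when the blur kernel is known, Wiener filtering regularises the frequency-domain inverse. The blind problem tackled in the thesis is harder precisely because the transfer function H below is unknown and must be inferred jointly with the image. A minimal sketch, not the thesis's method:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Non-blind Wiener restoration: requires the blur PSF, unlike the blind case."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # k acts as a noise-to-signal regulariser
    return np.fft.ifft2(W * G).real
```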
APA, Harvard, Vancouver, ISO, and other styles
34

Ferrier, Adrian Jon. "Processing techniques for flow images obtained by planar laser-induced fluorescence." Thesis, Georgia Institute of Technology, 1991. http://hdl.handle.net/1853/24097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kok, R. "An object detection approach for cluttered images." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53281.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2003.
ENGLISH ABSTRACT: We investigate object detection against cluttered backgrounds, based on the MINACE (Minimum Noise and Correlation Energy) filter. Application of the filter is followed by a suitable segmentation algorithm, and the standard techniques of global and local thresholding are compared to watershed-based segmentation. The aim of this approach is to provide a custom region-based object detection algorithm with a concise set of regions of interest. Two industrial case studies are examined: diamond detection in X-ray images, and the reading of a dynamic, ink-stamped 2D barcode on packaging clutter. We demonstrate the robustness of our approach on these two diverse applications, and develop a complete algorithmic prototype for an automatic stamped-code reader.
AFRIKAANSE OPSOMMING (translated): This thesis investigates the recognition of objects against cluttered backgrounds. Our approach relies on the MINACE ("Minimum Noise and Correlation Energy") correlation filter. The filter is applied together with a suitable segmentation algorithm, and the standard techniques of global and local thresholding are compared with a watershed-based segmentation algorithm. The goal of this detection approach is to supply a small set of candidate objects to any classification algorithm that focuses on the objects themselves. Two industrial applications are investigated: the detection of diamonds in X-ray images, and the reading of a dynamic, ink-stamped 2D barcode on packaging material. We demonstrate the robustness of our approach with these two diverse examples, and develop a complete algorithmic prototype for an automatic stamp-code reader.
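For readers unfamiliar with correlation-filter detection, the sketch below shows the rough shape of the pipeline the abstract describes: correlate a template with a cluttered scene, then threshold the correlation plane to obtain candidate regions. A plain matched filter stands in for the MINACE filter, whose synthesis step (minimising noise and correlation-plane energy) is not reproduced; the synthetic scene and threshold are assumptions.

```python
# Correlation-filter detection sketch: matched-filter correlation followed
# by global thresholding of the correlation plane. A stand-in for MINACE.
import numpy as np
from scipy.signal import fftconvolve

def correlation_plane(scene, template):
    t = template - template.mean()
    # correlation is convolution with the flipped template
    return fftconvolve(scene, t[::-1, ::-1], mode='same')

def candidate_regions(plane, rel_thresh=0.8):
    # global threshold relative to the correlation peak
    return np.argwhere(plane >= rel_thresh * plane.max())

rng = np.random.default_rng(1)
scene = rng.random((128, 128))          # cluttered background
obj = np.ones((9, 9))
scene[40:49, 60:69] += obj              # embed a bright square "object"
plane = correlation_plane(scene, obj)
print(candidate_regions(plane)[:5])     # candidate pixel locations
```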
APA, Harvard, Vancouver, ISO, and other styles
36

Schindler, Grant. "Unlocking the urban photographic record through 4D scene modeling." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34719.

Full text
Abstract:
Vast collections of historical photographs are being digitally archived and placed online, providing an objective record of the last two centuries that remains largely untapped. We propose that time-varying 3D models can pull together and index large collections of images while also serving as a tool of historical discovery, revealing new information about the locations, dates, and contents of historical images. In particular, our goal is to use computer vision techniques to tie together a large set of historical photographs of a given city into a consistent 4D model of the city: a 3D model with time as an additional dimension. To extract 4D city models from historical images, we must perform inference about the position of cameras and scene structure in both space and time. Traditional structure from motion techniques can be used to deal with the spatial problem, while here we focus on the problem of inferring temporal information: a date for each image and a time interval for which each structural element in the scene persists. We first formulate this task as a constraint satisfaction problem based on the visibility of structural elements in each image, resulting in a temporal ordering of images. Next, we present methods to incorporate real date information into the temporal inference solution. Finally, we present a general probabilistic framework for estimating all temporal variables in structure from motion problems, including an unknown date for each camera and an unknown time interval for each structural element. Given a collection of images with mostly unknown or uncertain dates, we can use this framework to automatically recover the dates of all images by reasoning probabilistically about the visibility and existence of objects in the scene. We present results for image collections consisting of hundreds of historical images of cities taken over decades of time, including Manhattan and downtown Atlanta.
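The core temporal constraint can be made concrete with a toy example: if each structural element persists over a single contiguous interval, then in any valid temporal ordering of images, the images that see a given element must be consecutive. The brute-force sketch below uses hypothetical image and element names and ignores occlusion and field of view; it illustrates the flavour of the constraint satisfaction formulation, not the thesis's scalable probabilistic inference.

```python
# Toy visibility reasoning: each structural element exists over one
# contiguous interval, so images observing it must be consecutive in time.
# Brute force over permutations is only feasible for toy input.
from itertools import permutations

observations = {                 # image -> structural elements visible in it
    'img_a': {'tower', 'warehouse'},
    'img_b': {'tower', 'office'},
    'img_c': {'warehouse'},
}

def contiguous(order):
    elements = set().union(*observations.values())
    for e in elements:
        idx = [i for i, img in enumerate(order) if e in observations[img]]
        if idx and idx[-1] - idx[0] + 1 != len(idx):
            return False
    return True

valid = [order for order in permutations(observations) if contiguous(order)]
print(valid)                     # temporal orderings consistent with visibility
```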
APA, Harvard, Vancouver, ISO, and other styles
37

Revelant, Ivan L. "Restoration of images degraded by systems of random impulse response." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26731.

Full text
Abstract:
The problem of restoring an image distorted by a system consisting of a stochastic impulse response in conjunction with additive noise is investigated. The method of constrained least squares is extended to this problem, and leads to the development of a new technique based on the minimization of a weighted error function. Results obtained using the new method are compared with those obtained by constrained least squares, and by the Wiener filter and approximations thereof. It is found that the new technique, "Weighted Least Squares", gives superior results if the noise in the impulse response is comparable to or greater than the additive noise.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
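As background for the extension described above, here is a minimal sketch of the classical constrained-least-squares restoration that serves as the thesis's starting point, solved in closed form in the frequency domain with a Laplacian smoothness operator. The box-blur PSF, noise level, and regularisation weight are illustrative assumptions; the thesis's weighted-error treatment of a random impulse response is not reproduced.

```python
# Classical constrained least squares (CLS) restoration sketch:
# X = H* Y / (|H|^2 + lambda |C|^2), with C a Laplacian smoothness operator.
import numpy as np

def freq_response(kernel, shape):
    k = np.zeros(shape)
    k[:kernel.shape[0], :kernel.shape[1]] = kernel
    k = np.roll(k, (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def cls_restore(y, psf, lam=0.05):
    laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    H = freq_response(psf, y.shape)
    C = freq_response(laplacian, y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(X))

# usage: a 5x5 box blur as a stand-in (here deterministic) impulse response
rng = np.random.default_rng(0)
x = rng.random((64, 64))
psf = np.full((5, 5), 1 / 25.0)
y = np.real(np.fft.ifft2(freq_response(psf, x.shape) * np.fft.fft2(x)))
y += 0.01 * rng.standard_normal(y.shape)
print('restored image shape:', cls_restore(y, psf).shape)
```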
APA, Harvard, Vancouver, ISO, and other styles
38

Appia, Vikram V. "A color filter array interpolation method for digital cameras using alias cancellation." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22542.

Full text
Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Russell Mersereau; Committee Member: Anthony J. Yezzi; Committee Member: Yucel Altunbasak.
APA, Harvard, Vancouver, ISO, and other styles
39

Truong, Kwan K. "Vector quantizer design for images and video based on hierarchical structures." Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/13266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Yau, Chin-ko, and 游展高. "Super-resolution image restoration from multiple decimated, blurred and noisy images." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30292529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Jeong, Dong-Seok. "Unified approach for the early understanding of images." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/50028.

Full text
Abstract:
In the quest for computer vision, that is, the automatic understanding of images, a powerful strategy has been to model the image parametrically. Two prominent kinds of approaches have been those based on polynomial models and those based on random-field models. This thesis combines these two methodologies, deciding on the proper model by means of a general decision criterion. The unified approach also admits composite polynomial/random-field models and is applicable to other statistical models as well. This new approach has advantages in many applications, such as image identification and image segmentation. In segmentation, we achieve speed by avoiding iterative pixel-by-pixel calculations. With the general decision criterion as a sophisticated tool, we can deal with images according to a variety of model hypotheses. Our experiments with synthesized images and real images, such as Brodatz textures, illustrate some identification and segmentation uses of the unified approach.
Master of Science
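The decision between model families can be illustrated with a one-dimensional stand-in: fit both a polynomial model and an autoregressive model to the same data and keep whichever a penalised-likelihood criterion prefers. The AIC used below is an illustrative choice rather than the thesis's general decision criterion, and the 1D signal stands in for 2D image data.

```python
# Toy model selection between a polynomial model and an AR model via AIC.
# (Toy comparison: the two fits use slightly different sample counts.)
import numpy as np

def aic(residuals, k):
    n = len(residuals)
    return n * np.log(np.sum(residuals**2) / n) + 2 * k

def fit_polynomial(y, degree=2):
    t = np.arange(len(y))
    coeffs = np.polyfit(t, y, degree)
    return y - np.polyval(coeffs, t), degree + 1

def fit_ar(y, p=2):
    # least-squares AR(p): y[t] ~ sum_i a_i * y[t - i - 1]
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    a, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return y[p:] - X @ a, p

rng = np.random.default_rng(2)
t = np.arange(200)
y = 0.002 * t**2 + rng.standard_normal(200)   # polynomial trend + noise
res_poly, k_poly = fit_polynomial(y)
res_ar, k_ar = fit_ar(y)
best = 'polynomial' if aic(res_poly, k_poly) < aic(res_ar, k_ar) else 'AR'
print('preferred model:', best)
```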
APA, Harvard, Vancouver, ISO, and other styles
42

Rashidi, Abbas. "Evaluating the performance of machine-learning techniques for recognizing construction materials in digital images." Thesis, Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49122.

Full text
Abstract:
Digital images acquired at construction sites contain valuable information useful for various applications including as-built documentation of building elements, effective progress monitoring, structural damage assessment, and quality control of construction material. As a result, there is an increasing need for effective methods to recognize different building materials in digital images and videos. Pattern recognition is a mature field within the area of image processing; however, its application in the area of civil engineering and building construction is only recent. In order to develop any robust image recognition method, it is necessary to choose the optimal machine learning algorithm. To generate a robust color model for building material detection in an outdoor construction environment, a comparative analysis of three generative and discriminative machine learning algorithms, namely multilayer perceptron (MLP), radial basis function (RBF), and support vector machines (SVMs), is conducted. The main focus of this study is on three classes of building materials: concrete, plywood, and brick. For training purposes, a large data set comprising hundreds of images is collected. The comparison study is conducted by implementing the necessary algorithms in MATLAB and testing over hundreds of construction-site images. To evaluate the performance of each technique, the results are compared with a manual classification of building materials. In order to better assess the performance of each technique, experiments are conducted by taking pictures under various realistic jobsite conditions, e.g., different image resolutions, different camera-to-object distances, and different types of cameras.
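As a hedged illustration of one leg of the comparison, the sketch below trains a support vector machine on per-pixel colour features for the three material classes. The thesis's experiments are implemented in MATLAB over hundreds of jobsite images; the synthetic RGB samples and the SVM hyperparameters here are placeholders only.

```python
# SVM colour classification sketch for concrete / plywood / brick.
# The synthetic class-centred RGB data stands in for real jobsite pixels.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# fake RGB samples loosely centred on grey, tan, and red material colours
centres = {'concrete': (128, 128, 128),
           'plywood': (190, 160, 110),
           'brick': (150, 60, 50)}
X = np.vstack([rng.normal(c, 18, size=(300, 3)) for c in centres.values()])
y = np.repeat(list(centres.keys()), 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X_tr, y_tr)
print('held-out accuracy: %.2f' % clf.score(X_te, y_te))
```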
APA, Harvard, Vancouver, ISO, and other styles
43

Schaefer, Charles Robert. "Magnification of bit map images with intelligent smoothing of edges." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Munechika, Stacy Mark 1961. "Applying multiresolution and graph-searching techniques for boundary detection in biomedical images." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277091.

Full text
Abstract:
An edge-based segmentation scheme (i.e. boundary detector) for nuclear medicine images has been developed and consists of a multiresolutional Gaussian-based edge detector working in conjunction with a modified version of Nilsson's A* graph-search algorithm. A multiresolution technique of analyzing the edge-signature plot (edge gradient versus resolution scale) allows the edge detector to match an appropriately sized edge operator to the edge structure in order to measure the full extent of the edge and thus gain the best compromise between noise suppression and edge localization. The graph-search algorithm uses the output from the multiresolution edge detector as the primary component in a cost function which is then minimized to obtain the boundary path. The cost function can be adapted to include global information such as boundary curvature, shape, and similarity to prototype to help guide the boundary detection process in the absence of good edge information.
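A minimal version of the graph-search half of the scheme is sketched below: edge strength from a Gaussian-derivative gradient defines a step cost (strong edges are cheap to traverse), and A* finds the cheapest pixel path between two seed points. The multiresolution edge-signature analysis and the curvature/shape terms of the thesis's cost function are omitted, and the synthetic image is an assumption.

```python
# A* boundary search on a pixel graph with gradient-derived step costs.
import heapq
from itertools import count
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def boundary_astar(image, start, goal, sigma=2.0):
    grad = gaussian_gradient_magnitude(image.astype(float), sigma)
    cost = 1.0 / (1.0 + grad)            # strong edges -> low step cost
    cmin = cost.min()
    # admissible heuristic: Chebyshev distance times the cheapest step cost
    h = lambda p: cmin * max(abs(p[0] - goal[0]), abs(p[1] - goal[1]))
    tie = count()                        # tiebreaker for the priority queue
    frontier = [(h(start), 0.0, next(tie), start, None)]
    came, seen = {}, set()
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in seen:
            continue
        seen.add(node)
        came[node] = parent
        if node == goal:
            break
        r, c = node
        for nr in (r - 1, r, r + 1):
            for nc in (c - 1, c, c + 1):
                if (nr, nc) != node and 0 <= nr < image.shape[0] \
                        and 0 <= nc < image.shape[1]:
                    g2 = g + cost[nr, nc]
                    heapq.heappush(
                        frontier, (g2 + h((nr, nc)), g2, next(tie), (nr, nc), node))
    path, node = [], goal
    while node is not None:              # walk back from goal to start
        path.append(node)
        node = came[node]
    return path[::-1]

# usage: trace along a bright vertical edge in a synthetic image
img = np.zeros((32, 32)); img[:, 16:] = 1.0
print(boundary_astar(img, (0, 16), (31, 16))[:5])
```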
APA, Harvard, Vancouver, ISO, and other styles
45

Mclean, Ivan Hugh. "An adaptive discrete cosine transform coding scheme for digital x-ray images." Thesis, Rhodes University, 1989. http://hdl.handle.net/10962/d1002032.

Full text
Abstract:
The ongoing development of storage devices and technologies for medical image management has led to a growth in the digital archiving of these images. The characteristics of medical x-rays are examined, and a number of digital coding methods are considered. An investigation of several fast cosine transform algorithms is carried out. An adaptive cosine transform coding technique is implemented which produces good-quality images using bit rates lower than 0.38 bits per picture element.
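A skeleton of a block-DCT coder in the spirit of the abstract is sketched below: 8×8 blocks are transformed, and an adaptive coefficient budget is spent per block in proportion to its AC energy. Real bit allocation, quantiser design, and entropy coding are omitted, so this toy does not reproduce the 0.38 bpp figure; the block size and budget parameters are assumptions.

```python
# Adaptive block-DCT coding sketch: keep more coefficients in busy blocks.
# Assumes image dimensions are multiples of 8.
import numpy as np
from scipy.fft import dctn, idctn

def code_block(block, keep):
    coeffs = dctn(block, norm='ortho')
    if keep < coeffs.size:                    # zero all but the largest coeffs
        thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
        coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm='ortho')

def adaptive_dct(image, base_keep=4, extra=12):
    out = np.zeros_like(image, dtype=float)
    blocks = [(r, c) for r in range(0, image.shape[0], 8)
                     for c in range(0, image.shape[1], 8)]
    energy = {b: np.var(image[b[0]:b[0]+8, b[1]:b[1]+8]) for b in blocks}
    emax = max(energy.values()) or 1.0
    for r, c in blocks:
        keep = base_keep + int(extra * energy[(r, c)] / emax)
        out[r:r+8, c:c+8] = code_block(image[r:r+8, c:c+8].astype(float), keep)
    return out

rng = np.random.default_rng(0)
xray = rng.random((64, 64))                   # stand-in x-ray image
print('MSE: %.5f' % np.mean((xray - adaptive_dct(xray)) ** 2))
```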
APA, Harvard, Vancouver, ISO, and other styles
46

Nguyen, Hieu Cuong [author], Stefan Katzenbeisser, and Jana Dittmann [academic supervisors]. "Security of Forensic Techniques for Digital Images / Hieu Cuong Nguyen. Supervisors: Stefan Katzenbeisser ; Jana Dittmann." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2013. http://d-nb.info/1107771013/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Hill, Evelyn June. "Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images." University of Western Australia. School of Mathematics and Statistics, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0070.

Full text
Abstract:
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well the results of experiments corresponded to what a typical human's assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. It is desirable to be able to automate the processing of such data for efficient stock assessment for fisheries management. In this study some well known statistical pattern classification techniques were tested and new syntactical/structural pattern recognition techniques were developed. For testing of statistical pattern classification, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and the covariance matrices for the components of the model were used to indicate the location, size and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm has to be run a number of times with different numbers of components and then the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the numbers of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. starting positions of mixtures and number of mixtures), the Dynamic Cluster Finding algorithm (DCF) was developed (based on the Dog-Rabbit strategy). This algorithm can produce an estimate of the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons. The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology, which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine if the object occurred in the new image. In order to compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; then the curves were smoothed by fitting parametric cubic polynomials. Finally the curves were converted to arrays of numbers which represented the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object. The total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
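The first approach in the abstract can be sketched in a few lines with modern tooling: fit Gaussian mixture models with different numbers of components and keep the one preferred by a description-length-style criterion. scikit-learn's BIC is used below as a stand-in for the MDL criterion the thesis favours, and the synthetic "fish pixel" coordinates are placeholders.

```python
# GMM fitting with model selection over the number of components.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# fake "fish pixel" coordinates: three clusters in image (row, col) space
pts = np.vstack([rng.normal(c, 5.0, size=(150, 2))
                 for c in [(30, 40), (80, 90), (60, 20)]])

best_k, best_bic, best_gmm = None, np.inf, None
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(pts)
    bic = gmm.bic(pts)                   # BIC as a stand-in for MDL
    if bic < best_bic:
        best_k, best_bic, best_gmm = k, bic, gmm

print('selected components:', best_k)    # means/covariances indicate the
print(best_gmm.means_.round(1))          # location, size, shape of clusters
```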
APA, Harvard, Vancouver, ISO, and other styles
48

Poon, Ho-shan, and 潘浩山. "Visual tracking of multiple moving objects in images based on robust estimation of the fundamental matrix." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B4322426X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Bennett, Troy. "Human-IntoFace.net : May 6th, 2003 /." access the artist's thesis portfolio on the Web, 2003. http://human-intoface.net/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Phung, Son Lam. "Automatic human face detection in color images." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1309.

Full text
Abstract:
Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision. An automatic system that possesses a similar capability as the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence exists to suggest face detection can be done more efficiently by taking into account the color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. In this thesis, we present a comprehensive analysis of skin detection using color pixel classification, and the effects of factors such as the color space and the color classification algorithm on segmentation performance. We also propose a novel and efficient face candidate selection technique that is based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model, and investigate three feature extraction schemes, namely intensity, projection on face subspace, and edge-based. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination, and using contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing results have shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces, compared to existing face detectors.
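The first cascade stage, skin segmentation by colour pixel classification, can be illustrated with a naive Bayes skin/nonskin classifier as sketched below. The training colours, colour space (raw RGB), and test image are crude placeholders; the thesis's detailed analysis of colour spaces and classifiers, and the later candidate-selection and verification stages, are not reproduced.

```python
# Skin-pixel segmentation sketch using a Gaussian naive Bayes classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
skin = rng.normal((180, 130, 110), 20, size=(500, 3))    # rough skin RGBs
nonskin = rng.uniform(0, 255, size=(2000, 3))            # everything else
X = np.vstack([skin, nonskin])
y = np.array([1] * len(skin) + [0] * len(nonskin))

clf = GaussianNB().fit(X, y)

image = rng.uniform(0, 255, size=(48, 64, 3))            # stand-in image
mask = clf.predict(image.reshape(-1, 3)).reshape(48, 64) # 1 = skin pixel
print('skin fraction: %.3f' % mask.mean())
```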
APA, Harvard, Vancouver, ISO, and other styles