To view the other types of publications on this topic, follow the link: Synthetic images of curtaining.

Dissertations on the topic "Synthetic images of curtaining"

Create a source citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Synthetic images of curtaining."

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are present in its metadata.

Browse dissertations from a wide variety of disciplines and compile your bibliography correctly.

1

Dvořák, Martin. „Anticurtaining - obrazový filtr pro elektronovou mikroskopii“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445537.

Full text of the source
Annotation:
Tomographic analysis produces 3D images of the examined material at the nanoscale using a focused ion beam (FIB). This thesis presents a new approach to eliminating the curtain effect with machine learning. A convolutional neural network trained by supervised learning is proposed to restore damaged images; it operates on features of the damaged image obtained by wavelet transformation, and its output is a visually clean image. The thesis also designs a synthetic training data set for the network, created by simulating the physical processes behind the real images: the milling of the examined material by the FIB and the imaging of its surface by a scanning electron microscope (SEM). The newly created approach performs accurately on real images. A qualitative evaluation of the results was carried out by both laypeople and experts in the field, who anonymously compared this solution with another method for eliminating the curtaining effect. The solution presents a new and promising approach to eliminating the curtaining effect and contributes to better handling of images produced during material analysis.
APA, Harvard, Vancouver, ISO, and other citation styles
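The thesis above trains on synthetic image pairs produced by simulating the physical FIB-SEM imaging process. As a much simpler illustration of the defect itself (not the author's simulator), curtaining can be mimicked by adding a random per-column intensity offset to a clean image; the function name and parameters below are our own.

```python
import numpy as np

def add_curtaining(image, strength=0.2, seed=0):
    """Corrupt a clean SEM-style image with vertical stripe noise.

    Curtaining appears as column-aligned intensity stripes, so we
    draw one random offset per column and add it to every pixel in
    that column.
    """
    rng = np.random.default_rng(seed)
    stripes = rng.normal(0.0, strength, size=image.shape[1])
    return image + stripes[np.newaxis, :]

clean = np.full((64, 64), 0.5)
corrupted = add_curtaining(clean)
```

A denoising network along the lines of the thesis would then be trained on (corrupted, clean) pairs.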
2

García, Armando. „Efficient rendering of synthetic images“. Thesis, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/15182.

Full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 221-224.
by Armando Garcia.
Ph.D.
APA, Harvard, Vancouver, ISO, and other citation styles
3

Manamasa, Krishna Himaja. „Domain adaptation from 3D synthetic images to real images“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19303.

Full text of the source
Annotation:
Background. Domain adaptation describes a model that learns from a source data distribution and performs well on the target data. Here the concept is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master's thesis is to apply 3D domain adaptation from synthetic images to real images. It attempts to bridge the gap between the two domains (synthetic and real point-cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Various methods for understanding and analyzing the data are examined in order to bridge the gap between the CAD and CAM data and make them similar. Literature review and controlled experiment are the research methodologies followed during implementation. Four different deep learning models are trained on the generated data and their performance is compared to determine which model performs best. Results. The results are reported through two metrics, accuracy and training time, recorded for each deep learning model in the experiment. These metrics are illustrated as graphs for a comparative analysis of the models on which the data is trained and tested. PointDAN showed better results, with higher accuracy than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, a deep learning model that combines local and global feature alignment on single-view point data, shows the best results on our data.
APA, Harvard, Vancouver, ISO, and other citation styles
4

Hagedorn, Michael. „Classification of synthetic aperture radar images“. Thesis, University of Canterbury. Electrical and Computer Engineering, 2004. http://hdl.handle.net/10092/5966.

Full text of the source
Annotation:
In this thesis the maximum a posteriori (MAP) approach to synthetic aperture radar (SAR) analysis is reviewed. The MAP model consists of two probability density functions (PDFs): the likelihood function and the prior model. Contributions related to both models are made. As the first contribution a new likelihood function describing the multilook three-polarisation intensity SAR speckle process, which is equivalent to the averaged squared amplitude samples from a three-dimensional complex zero-mean circular Gaussian density, has been derived. This PDF is a correlated three-dimensional chi-square density in the form of an infinite series of modified Bessel functions with seven independent parameters. Details concerning the PDF such as the estimation of the PDF parameters from sample data and the moments of the PDF are described. The new likelihood function is tested against simulated and measured SAR data. The second contribution is a novel parameter estimation method for discrete Gibbs random field (GRF) prior models. Given a quantity of sample data, the parameters of the GRF model, which comprise the values of the potential functions of individual cliques, are estimated. The method uses an error function describing the difference between the local model PDF and the equivalent estimated from sample data. The concept of "equivalencies" is introduced to simplify the process. The new parameter estimation method is validated and compared to Besag's parameter estimation method (coding method) using GRF realisations and other sample data.
APA, Harvard, Vancouver, ISO, and other citation styles
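The MAP framework reviewed above combines a likelihood with a prior. In the simplest conjugate case, a Gaussian likelihood with a Gaussian prior on the mean (a sketch of the principle only, nothing like the SAR speckle or Gibbs models of the thesis), the MAP estimate has a closed form that interpolates between the sample mean and the prior mean:

```python
import numpy as np

def map_mean(samples, sigma2, mu0, tau2):
    """MAP estimate of the mean of N(mu, sigma2) data under a
    conjugate prior mu ~ N(mu0, tau2):
        mu_hat = (tau2 * sum(x) + sigma2 * mu0) / (n * tau2 + sigma2)
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    return (tau2 * samples.sum() + sigma2 * mu0) / (n * tau2 + sigma2)
```

With a very broad prior (large tau2) the estimate approaches the sample mean; with a very tight prior it collapses to mu0.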
5

Hasegawa, Robert Shigehisa. „Using synthetic images to improve iris biometric performance“. Scholarly Commons, 2012. https://scholarlycommons.pacific.edu/uop_etds/827.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
6

Aubrecht, Tomáš. „Generation of Synthetic Retinal Images with High Resolution“. Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417220.

Full text of the source
Annotation:
Capturing images of the retina, the most important part of the human eye, requires special equipment: a fundus camera. The goal of this thesis is therefore to design and implement a system capable of generating such images without this camera. The proposed system maps an input black-and-white image of the retinal blood vessels to a colour output image of the whole retina. It consists of two neural networks: a generator, which produces retinal images, and a discriminator, which classifies the given images as real or synthetic. The system was trained on 141 images from publicly available databases. A new database of more than 2,800 images of healthy retinas at a resolution of 1024x1024 was then created. This database can serve as a teaching aid for ophthalmologists or as a basis for developing various applications that work with retinas.
APA, Harvard, Vancouver, ISO, and other citation styles
7

Sabel, Johan. „Detecting Synthetic Images of Faces using Deep Learning“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287447.

Full text of the source
Annotation:
Significant progress has been made within human face synthesis due to recent advances in generative adversarial networks. These networks can be used to generate credible high-quality images of faces not belonging to real people, which is something that could be exploited by malicious actors. In this thesis, several state-of-the-art deep learning detection models were evaluated with respect to their robustness and generalization capability, which are two factors that must be taken into consideration for models that are intended to be deployed in the wild. The results show that some classifiers exhibited near-perfect performance when tested on real and synthetic images post-processed heavily using various augmentation techniques. These types of image perturbations further improved robustness when also incorporated in the training data. However, no model generalized well to out-of-distribution images from unseen datasets, although one model showed impressive results after being fine-tuned on a small number of samples from the target distributions. Nevertheless, the limited generalization capability remains a shortcoming that must be overcome before the detection models can become viable in the wild.
APA, Harvard, Vancouver, ISO, and other citation styles
8

Zeid, Baker Mousa. „Generation of Synthetic Images with Generative Adversarial Networks“. Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15866.

Full text of the source
Annotation:
Machine Learning is a fast growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. The data is a key factor for machine learning algorithms, without data the algorithms will not work. Thus, having sufficient and relevant data is crucial for the performance. In this thesis, the researcher tackles the problem of not having a sufficient dataset, in terms of the number of training examples, for an image classification task. The idea is to use Generative Adversarial Networks to generate synthetic images similar to the ground truth, and in this way expand a dataset. Two types of experiments were conducted: the first was used to fine-tune a Deep Convolutional Generative Adversarial Network for a specific dataset, while the second experiment was used to analyze how synthetic data examples affect the accuracy of a Convolutional Neural Network in a classification task. Three well known datasets were used in the first experiment, namely MNIST, Fashion-MNIST and Flower photos, while two datasets were used in the second experiment: MNIST and Fashion-MNIST. The results of the generated images of MNIST and Fashion-MNIST had good overall quality. Some classes had clear visual errors while others were indistinguishable from ground truth examples. When it comes to the Flower photos, the generated images suffered from poor visual quality. One can easily tell the synthetic images from the real ones. One reason for the bad performance is due to the large quantity of noise in the Flower photos dataset. This made it difficult for the model to spot the important features of the flowers. The results from the second experiment show that the accuracy does not increase when the two datasets, MNIST and Fashion-MNIST, are expanded with synthetic images. 
This is not because the generated images had bad visual quality, but because the accuracy turned out to not be highly dependent on the number of training examples. It can be concluded that Deep Convolutional Generative Adversarial Networks are capable of generating synthetic images similar to the ground truth and thus can be used to expand a dataset. However, this approach does not completely solve the initial problem of not having adequate datasets because Deep Convolutional Generative Adversarial Networks may themselves require, depending on the dataset, a large quantity of training examples.
APA, Harvard, Vancouver, ISO, and other citation styles
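The second experiment above expands a real training set with GAN-generated examples and measures the effect on classification accuracy. The bookkeeping of that expansion step can be sketched as follows (array-based, with names of our own choosing; the thesis itself works with image datasets and a CNN classifier):

```python
import numpy as np

def expand_dataset(x_real, y_real, x_synth, y_synth, n_extra, seed=0):
    """Append n_extra synthetic examples to a real training set
    and shuffle, mimicking the dataset-expansion experiment."""
    rng = np.random.default_rng(seed)
    pick = rng.choice(len(x_synth), size=n_extra, replace=False)
    x = np.concatenate([x_real, x_synth[pick]])
    y = np.concatenate([y_real, y_synth[pick]])
    order = rng.permutation(len(x))
    return x[order], y[order]
```

Training the same classifier on the expanded and unexpanded sets then isolates the contribution of the synthetic examples.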
9

Haiderbhai, Mustafa. „Generating Synthetic X-rays Using Generative Adversarial Networks“. Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41092.

Full text of the source
Annotation:
We propose a novel method for generating synthetic X-rays from atypical inputs. This method creates approximate X-rays for use in non-diagnostic visualization problems where only generic cameras and sensors are available. Traditional methods are restricted to 3-D inputs such as meshes or Computed Tomography (CT) scans. We create custom synthetic X-ray datasets using a custom generator capable of creating RGB images, point cloud images, and 2-D pose images. We create a dataset using natural hand poses and train general-purpose Conditional Generative Adversarial Networks (CGANs) as well as our own novel network pix2xray. Our results show the successful plausibility of generating X-rays from point cloud and RGB images. We also demonstrate the superiority of our pix2xray approach, especially in the troublesome cases of occlusion due to overlapping or rotated anatomy. Overall, our work establishes a baseline that synthetic X-rays can be simulated using inputs such as RGB images and point cloud.
APA, Harvard, Vancouver, ISO, and other citation styles
10

Johnson, David L. „Airborne synthetic aperture radar images of an upwelling filament“. Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7036.

Full text of the source
Annotation:
The Cape Mendocino upwelling filament was imaged in 1989 using the NASA/JPL AIRSAR multiband Synthetic Aperture Radar (SAR) and NOAA AVHRR thermal and optical radiometry. To first order, SAR images of the ocean depend solely on the surface wave field, but they ultimately reflect the synergy of a vast number of geophysical processes. The complexity of surface wave processes leaves a large gap between the information contained in SAR images and our ability to describe them without conjecture. Investigated here are features associated with thermal fronts, vortices, geostrophic jets, and internal waves. SAR spectra suggest infragravity waves aligned with the wind swell. Cross-jet SAR profiles were investigated in detail; comparison with results from a simple model suggests that processes not included in the simulation dominate in the physical environment. Band-dependent asymmetry of the profiles is consistent with convergence and accumulation of surfactants; the band-independent location of the peaks suggests that such convergence may be a jet-driven process. The band-independent position of humps in the profiles suggests critical reflection of strongly imaged intermediate (λ > λ_Bragg) waves, or alternately a persistent and complex jet velocity profile. Apparently anomalously high damping of longer Bragg waves at some jet orientations is inconsistent with historical measurements of the modulus of elasticity of ocean surfactants and might indicate the hyperconcentration of surfactants within a zone of strong convergence. Net changes in radar cross-section across some sections of the jet could indicate a number of wave or current processes, which are discussed.
iv, 124 leaves
APA, Harvard, Vancouver, ISO, and other citation styles
11

Fransson, Johan. „Analysis of synthetic aperture radar images for forestry applications /“. Umeå : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 1999. http://www.resgeom.slu.se/fjarr/personal/jf/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
12

Aitchison, Andrew C. „Synthetic images of faces using a generic head model“. Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305878.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
13

Fowler, E. „Interpretation of Synthetic Aperture Radar images using fractal geometry“. Thesis, Cranfield University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.385750.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
14

Kola, Ramya Sree. „Generation of synthetic plant images using deep learning architecture“. Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-18450.

Full text of the source
Annotation:
Background: Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are the current state-of-the-art machine learning systems for data generation. The initial architecture proposal comprises two neural networks, a generator and a discriminator, which compete in a zero-sum game to generate data with realistic properties inseparable from those of the original datasets. GANs have interesting applications in various domains such as image synthesis, 3D object generation in the gaming industry, fake music generation (Dong et al.), text-to-image synthesis, and many more. Despite this wide range of application domains, GANs are most popular for image data synthesis, where architectures have evolved from producing fuzzy images of digits to photorealistic images. Objectives: In this research work, we review the literature on different GAN architectures to understand the significant work done to improve them. The primary objective is the synthesis of plant images using the StyleGAN (Karras, Laine and Aila, 2018) variant of GAN, which uses style transfer. The research also focuses on identifying machine learning performance evaluation metrics that can be used to measure the StyleGAN model on the generated image datasets. Methods: A mixed-method approach is used. We review the literature on GANs and describe in detail how each GAN network is designed and how it evolved from the base architecture, then study the StyleGAN (Karras, Laine and Aila, 2018a) design in detail. We also review related work on GAN performance evaluation and measure the quality of the generated image datasets. We conduct an experiment implementing the style-based GAN on a leaf dataset (Kumar et al., 2012) to generate leaf images similar to the ground truth. We describe the steps of the experiment in detail: data collection, preprocessing, training, and configuration. We also evaluate the performance of the StyleGAN training model on the leaf dataset. Results: We present the results of the literature review and of the experiment to address the research questions. We review various GAN architectures and their key contributions, along with numerous qualitative and quantitative evaluation metrics for measuring the performance of a GAN architecture. We then present synthetic data samples generated by the style-based GAN at various stages of training, ending with the latest samples after roughly 8 GPU-days of training on the Leafsnap dataset (Kumar et al., 2012). For most of the tested samples, the results are of decent quality for expanding the dataset. We visualize the model's performance with TensorBoard graphs and an overall computational graph of the learning model, and we calculate a Fréchet Inception Distance score of 26.4268 for our leaf StyleGAN (the lower the better). Conclusion: We conclude the research work with an overall review of the sections of the paper. The generated fake samples are very similar to the input ground truth and appear convincingly realistic to human visual judgement. However, the FID score of the leaf StyleGAN is large compared with that of the original StyleGAN on the HD celebrity faces dataset; we analyze the reasons for this large score.
APA, Harvard, Vancouver, ISO, and other citation styles
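The FID score reported above (26.4268) compares Gaussian fits to the feature distributions of real and generated images. A minimal NumPy version of the distance itself, applied to precomputed feature vectors rather than Inception activations (helper names are ours), could look like this:

```python
import numpy as np

def _sqrtm(m):
    """Matrix square root via eigendecomposition; adequate for the
    product of two covariance matrices, whose spectrum is real."""
    w, v = np.linalg.eig(m)
    return (v * np.sqrt(w.astype(complex))) @ np.linalg.inv(v)

def fid(feats_a, feats_b):
    """Frechet Inception Distance between two feature sets:
    ||mu_a - mu_b||^2 + Tr(Ca + Cb - 2*sqrt(Ca @ Cb))."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    ca = np.cov(feats_a, rowvar=False)
    cb = np.cov(feats_b, rowvar=False)
    diff = mu_a - mu_b
    covmean = _sqrtm(ca @ cb)
    return float(np.real(diff @ diff + np.trace(ca + cb - 2 * covmean)))
```

Identical distributions give a score near zero; shifting one distribution raises the score by the squared mean shift, which matches the "lower is better" reading in the annotation.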
15

Richards, John A. (John Alfred). „Target model generation from multiple synthetic aperture radar images“. Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/33157.

Full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 215-223).
by John A. Richards.
Ph.D.
APA, Harvard, Vancouver, ISO, and other citation styles
16

Fletcher, Neil David. „Multi-scale texture segmentation of synthetic aperture radar images“. Thesis, University of Bath, 2005. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415766.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
17

Castle, Oliver M. „Synthetic image generation for a multiple-view autostereo display“. Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285412.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
18

Vi, Margareta. „Object Detection Using Convolutional Neural Network Trained on Synthetic Images“. Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-153224.

Full text of the source
Annotation:
Training data is the bottleneck for training convolutional neural networks: a larger dataset gives better accuracy but also needs longer training time. It is shown that fine-tuning neural networks on synthetically rendered images increases the mean average precision. This method was applied to two different datasets, each with five distinctive objects. The first dataset consisted of random objects with different geometric shapes; the second contained objects used to assemble IKEA furniture. The best-performing neural network, trained on 5,400 images, achieved a mean average precision of 0.81 on a test set sampled from a video sequence. The impact of dataset size, batch size, number of training epochs, and different network architectures was analyzed. Using synthetic images to train CNNs is a promising path for object detection where access to large amounts of annotated image data is hard to come by.
APA, Harvard, Vancouver, ISO, and other citation styles
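The headline metric above, mean average precision, averages the per-class average precision (AP). One common formulation of AP ranks detections by confidence and averages the precision measured at each true positive; a toy version follows (our naming, and without the IoU matching of predicted boxes to ground truth that a full detection mAP also requires):

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one class: rank predictions by confidence, then
    average the precision observed at each true positive."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    precisions = hits / np.arange(1, labels.size + 1)
    return float(precisions[labels == 1].mean())
```

Mean average precision is then the mean of this quantity over all object classes.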
19

Stenhagen, Petter. „Improving Realism in Synthetic Barcode Images using Generative Adversarial Networks“. Thesis, Linköpings universitet, Datorseende, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-151959.

Full text of the source
Annotation:
This master's thesis explores the possibility of using Generative Adversarial Networks (GANs) to refine labeled synthetic code images to resemble real code images while preserving label information. The GAN used in this thesis consists of a refiner and a discriminator. The discriminator tries to distinguish between real images and refined synthetic images; the refiner tries to fool the discriminator by producing refined synthetic images that the discriminator classifies as real. By updating these two networks iteratively, the idea is that they push each other to improve, resulting in refined synthetic images with real-image characteristics. The aspiration, if the exploration of GANs turns out successful, is to use refined synthetic images as training data in semantic segmentation (SS) tasks and thereby eliminate the laborious task of gathering and labeling real data. Starting from a foundational GAN model, different network architectures, hyperparameters, and other design choices are explored to find the best-performing GAN model. As is widely acknowledged in the relevant literature, GANs can be difficult to train, and the results in this thesis are varying and sometimes ambiguous. Based on the results from this study, the best-performing models do, however, perform better in SS tasks, with regard to Intersection over Union, than the unrefined synthetic set they are based on and benchmarked against.
APA, Harvard, Vancouver, ISO, and other citation styles
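The benchmark above scores the segmentation results with Intersection over Union. For binary masks this is simply the overlapping area divided by the union of the two masks; a minimal version (the empty-mask convention is our choice):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union for two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, target).sum() / union)
```

For multi-class segmentation the score is usually reported as the mean IoU over classes.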
20

Sundin, Hannes, und Jakob Josefsson. „Evaluating synthetic training data for character recognition in natural images“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280292.

Full text of the source
Annotation:
This thesis is centered around character recognition in natural images. More specifically, evaluating the use of synthetic font images for training a Convolutional Neural Network (CNN), compared to natural training data. Training a CNN to recognize characters in natural images often demands a large amount of labeled data. One alternative is to instead generate synthetic data by using digital fonts. A total of 41,664 font images were generated, which in combination with already existing data yielded around 99,000 images. Using this synthetic dataset, the CNN was trained by incrementally increasing synthetic training data and tested on natural images. At the same time, different preprocessing methods were applied to the synthetic data in order to observe the effect on accuracy. Results show that even when using the best performing pre-processing method and having access to 99,000 synthetic training images, a smaller set of natural training data yielded better results. However, results also show that synthetic data can perform better than natural data, provided that a good preprocessing method is used and if the supply of natural images is limited.
APA, Harvard, Vancouver, ISO, and other citation styles
21

Marshall, Gareth John. „The effectiveness of spaceborne synthetic aperture radar for glacier monitoring“. Thesis, University of Cambridge, 1996. https://www.repository.cam.ac.uk/handle/1810/268042.

Full text of the source
Annotation:
This work examines the effectiveness of spaceborne synthetic aperture radar (SAR) for investigating seasonally variable glaciological parameters, in particular its ability to discriminate glacier surface facies in order to estimate glacier mass balance. A multitemporal C-band SAR dataset of Nordenskiold Land, Spitsbergen, acquired by the ERS-1 satellite, is used for the analysis, which focuses on mountain glaciers rather than ice sheets. Validating field measurements of ice and snowpack parameters were obtained contemporaneously with two SAR images, prior to and during the ablation season. A general model for the annual backscatter cycle from a sub-polar glacier is derived from SAR data of three glacierised areas. This model reveals two seasonal reversals in the relative magnitude of backscatter from the ice and wet-snow facies, principally through a 10 dB change in the latter; these reversals mark the start and end of the ablation season. It is shown that a combination of winter and summer SAR imagery is necessary to estimate the equilibrium-line altitude of a sub-polar glacier. Topographic distortion is the major limiting factor regarding the utilisation of SAR data for studying mountainous glaciers. Existing theoretical models of radar backscatter from snow and ice are validated for three scenarios: glacier ice, dry snow overlying glacier ice, and wet snow, using the in situ measurements. In addition, temporal variations of ice and snowpack parameters observed during the field campaigns are used to predict short-term seasonal changes in backscatter, and to corroborate the model of annual backscatter. ERS-1 SAR data are compared to NIR Landsat TM data in separate analyses of data information content and temporal resolution; the optical data are found to be better for both facies discrimination and obtaining synoptic glaciological information in mountainous regions.
However, the Spitsbergen cloud cover is such that useful TM data may not necessarily be acquired in a given year; consequently SAR is the better sensor for obtaining guaranteed synoptic mass balance data for use in climate change studies, or for studying short-term events like glacier surges. These conclusions are shown to apply to the entire European Arctic sector except East Greenland, where the two sensors have similar temporal resolutions. Data from both sensors were integrated to provide an estimation of the synoptic mass balance of Nordenskiold Land for 1991/92; the results, which indicate an overall slightly negative mass balance, demonstrate that elevation is the principal factor governing glacier net mass balance in the region.
APA, Harvard, Vancouver, ISO, and other citation styles
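The backscatter changes quoted above are in decibels: the reported 10 dB seasonal swing in the wet-snow facies corresponds to a tenfold change in received power, since dB = 10·log10(power ratio). A one-line helper for context:

```python
import numpy as np

def to_db(power_ratio):
    """Convert a linear backscatter power ratio to decibels."""
    return 10.0 * np.log10(power_ratio)
```

So a factor-of-ten power change is 10 dB, and a factor of one hundred is 20 dB.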
22

Piretti, Mattia. „Synthetic DNA as a novel data storage solution for digital images“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22028/.

The full text of the source
Annotation:
During the digital revolution there has been an explosion in the amount of data produced by humanity, and the capacity of conventional storage devices has been struggling to keep up with this aggressive growth. This has highlighted the need for new means to store digital information, especially cold data. In this dissertation we build upon the work done by the I3S MediaCoding team on utilizing DNA as a novel storage medium, thanks to its incredibly high information density and effectively infinite shelf life. We expand on their previous work and adapt it to operate with the Nanopore MinION sequencer, by increasing the noise resistance of the decoding process and adding a post-processing step to repair the damage caused by the sequencing noise.
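As a hedged sketch of the general idea (not the I3S MediaCoding coding scheme, which adds constraints and error correction to survive synthesis and sequencing noise), a byte stream can be mapped to nucleotides by treating each byte as four base-4 digits:

```python
# Naive illustration of DNA data storage: map each byte to four
# nucleotides (base-4 digits -> A/C/G/T). Real coding schemes constrain
# homopolymer runs and GC content and add redundancy; this sketch has none.
NUC = "ACGT"

def encode(data: bytes) -> str:
    return "".join(NUC[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | NUC.index(ch)
        out.append(b)
    return bytes(out)

strand = encode(b"Hi")
print(strand)          # 'CAGACGGC'
print(decode(strand))  # b'Hi'
```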
APA, Harvard, Vancouver, ISO and other citation styles
23

Penaloza, Cabrera Camilo. „Giant molecular clouds : a view through molecular tracers and synthetic images“. Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/116132/.

The full text of the source
Annotation:
Line emission is strongly dependent on the local environmental conditions in which the emitting tracers reside. In this work, we focus on modelling the CO emission from simulated giant molecular clouds (GMCs), and study the variations in the resulting line ratios arising from the emission from the J = 1 − 0, J = 2 − 1 and J = 3 − 2 transitions. We first study the ratio (R2−1/1−0) between CO's first two emission lines and examine what information it provides about the physical properties of the cloud. To study R2−1/1−0 we perform smoothed particle hydrodynamics (SPH) simulations with time-dependent chemistry (using GADGET-2), along with post-process radiative transfer calculations on an adaptive grid (using RADMC-3D), to create synthetic emission maps of a molecular cloud. R2−1/1−0 has a bimodal distribution that is a consequence of the excitation properties of each line, given that J = 1 reaches local thermal equilibrium (LTE) while J = 2 is still sub-thermally excited in the considered clouds. The bimodality of R2−1/1−0 serves as a tracer of the physical properties of different regions of the cloud and helps constrain local temperatures, densities and opacities. Then, to study the dependence of line emission on environment, we perform a set of SPH simulations with time-dependent chemistry in which environmental conditions – including total cloud mass, density, size, velocity dispersion, metallicity, interstellar radiation field (ISRF) and the cosmic ray ionisation rate (CRIR) – were systematically varied. The simulations were then post-processed using radiative transfer to produce synthetic emission maps in the three transitions quoted above. We find that the cloud-averaged values of the line ratios can vary by up to ±0.3 dex, triggered by changes in the environmental conditions.
Changes in the ISRF and/or in the CRIR have the largest impact on line ratios, since they directly affect the abundance, temperature and distribution of CO-rich gas within the clouds. We show that the standard methods used to convert CO emission to H2 column density can underestimate the total H2 molecular gas in GMCs by factors of 2 or 3, depending on the environmental conditions in the clouds. One of the underlying assumptions in star formation is that stars are formed in long-lived, bound molecular clouds. This paradigm comes from examining the virial parameter of molecular clouds. To calculate the virial parameter we rely on three quantities: velocity dispersion, size and mass, each of which has its own underlying assumptions, uncertainties and biases. It should come as no surprise that variations in these quantities can have a significant impact on our assessment of cloud dynamics and hence our overall understanding of star formation. We therefore use CO line emission from synthetic observations to study how the dynamical state of clouds changes as a function of metallicity, and to test how accurately the virial parameter traces these changes. First, we show how the 'observed' velocity dispersion significantly decreases with lower metallicities and how this is reflected in the virial parameter. Second, we highlight the importance of understanding the intrinsic assumptions that go into calculating the virial parameter, such as how the mass and radius are derived. Finally, we show how the virial parameter of a cloud changes with metallicity and how the 'observed' virial parameter compares to the 'true' value in the simulation.
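The virial parameter discussed above is conventionally defined as alpha_vir = 5 sigma^2 R / (G M); a minimal sketch of that calculation, with illustrative cloud values rather than values from the simulations, might look like:

```python
import numpy as np

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30   # solar mass, kg
PC = 3.086e16     # parsec, m

def virial_parameter(sigma_kms, radius_pc, mass_msun):
    """alpha_vir = 5 sigma^2 R / (G M): the standard observational
    definition; values around 1-2 suggest a gravitationally bound cloud."""
    sigma = sigma_kms * 1e3  # km/s -> m/s
    return 5.0 * sigma**2 * (radius_pc * PC) / (G * mass_msun * MSUN)

# An illustrative cloud: sigma = 2 km/s, R = 10 pc, M = 1e5 Msun
print(round(virial_parameter(2.0, 10.0, 1e5), 2))  # ~0.46, i.e. sub-virial
```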
APA, Harvard, Vancouver, ISO and other citation styles
24

Theisen, Erik Bjørge. „Experimental Mueller Matrix Images of Liquid Crystalline Domains in Synthetic Clay Dispersions“. Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for fysikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14329.

The full text of the source
Annotation:
This report is a study of how polarized light can improve our understanding of physical phenomena, such as the local organization of anisometric nanoparticles dispersed in a liquid. The first part of the thesis considers the theoretical aspects of polarized light. Maxwell's equations are considered together with the Stokes formalism and the Mueller matrix. The Mueller matrix is analyzed in depth by looking at different ways it can be decomposed into several matrices, each clearly representing the physical phenomena of depolarization, diattenuation and retardance. The physics behind these phenomena is then briefly addressed. The second part of the thesis describes the Mueller Matrix Imaging (MMI) ellipsometer, developed in the Applied Optics Group at NTNU. The results of Mueller imaging of air are presented and discussed in order to gain a better understanding of the ellipsometer. The third and main part of the thesis focuses on applying the MMI ellipsometer to study complex phenomena in clay dispersions. By looking at the development of samples of aqueous clay dispersions, the creation of different phases is recorded. Some of these phases have crystalline properties, and Mueller matrix imaging can reveal much about their structure. A decomposition of the Mueller matrix can tell even more about the properties of the phases.
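As a small worked example of the Stokes-Mueller formalism the thesis builds on: the Mueller matrix of an ideal linear polarizer (a pure diattenuator, one of the factor types a decomposition recovers) applied to unpolarized light. The matrix below is the textbook form, not taken from the thesis.

```python
import numpy as np

def linear_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer with transmission axis
    at angle theta (radians). Convention: Stokes vector (S0, S1, S2, S3)
    with S0 the intensity and S1 the horizontal/vertical component."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,     c,     s,   0],
        [c,   c*c,   c*s,   0],
        [s,   c*s,   s*s,   0],
        [0,     0,     0,   0],
    ])

unpolarized = np.array([1.0, 0.0, 0.0, 0.0])  # Stokes vector of unpolarized light
out = linear_polarizer(0.0) @ unpolarized
print(out)  # half the intensity, fully horizontally polarized: [0.5 0.5 0. 0.]
```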
APA, Harvard, Vancouver, ISO and other citation styles
25

Parker, Johne' Michelle. „A methodology for generating physically accurate synthetic images for machine vision applications“. Thesis, Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/18384.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
26

Yeang, Chen-Pang. „Target identification theory for synthetic aperture radar images using physics-based signatures“. Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80603.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
27

Kaneva, Biliana K. „Large databases of real and synthetic images for feature evaluation and prediction“. Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71478.

The full text of the source
Annotation:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 157-167).
Image features are widely used in computer vision applications from stereo matching to panorama stitching to object and scene recognition. They exploit image regularities to capture structure in images both locally, using a patch around an interest point, and globally, over the entire image. Image features need to be distinctive and robust toward variations in scene content, camera viewpoint and illumination conditions. Common tasks are matching local features across images and finding semantically meaningful matches amongst a large set of images. If there is enough structure or regularity in the images, we should be able not only to find good matches but also to predict parts of the objects or the scene that were not directly captured by the camera. One of the difficulties in evaluating the performance of image features in both the prediction and matching tasks is the availability of ground truth data. In this dissertation, we take two different approaches. First, we propose using a photorealistic virtual world for evaluating local feature descriptors and learning new feature detectors. Acquiring ground truth data and, in particular, pixel-to-pixel correspondences between images, in complex 3D scenes under different viewpoint and illumination conditions in a controlled way is nearly impossible in a real world setting. Instead, we use a high-resolution 3D model of a city to gain complete and repeatable control of the environment. We calibrate our virtual world evaluations by comparing against feature rankings made from photographic data of the same subject matter (the Statue of Liberty). We then use our virtual world to study the effects on descriptor performance of controlled changes in viewpoint and illumination. We further employ machine learning techniques to train a model that would recognize visually rich interest points and optimize the performance of a given descriptor.
In the latter part of the thesis, we take advantage of the large amounts of image data available on the Internet to explore the regularities in outdoor scenes and, more specifically, the matching and prediction tasks in street level images. Generally, people are very adept at predicting what they might encounter as they navigate through the world. They use all of their prior experience to make such predictions even when placed in unfamiliar environment. We propose a system that can predict what lies just beyond the boundaries of the image using a large photo collection of images of the same class, but not from the same location in the real world. We evaluate the performance of the system using different global or quantized densely extracted local features. We demonstrate how to build seamless transitions between the query and prediction images, thus creating a photorealistic virtual space from real world images.
by Biliana K. Kaneva.
Ph.D.
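One of the matching tasks described above, nearest-neighbour matching of local feature descriptors, can be sketched as follows. The ratio test is standard practice and an assumption for this illustration, not a method the abstract specifies.

```python
import numpy as np

def match_descriptors(d1, d2, ratio=0.8):
    """Nearest-neighbour matching of feature descriptors with a ratio
    test: accept a match only if the best candidate clearly beats the
    runner-up, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(d1):
        dist = np.linalg.norm(d2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
base = rng.normal(size=(5, 128))                         # 128-d descriptors
noisy = base + rng.normal(scale=0.05, size=base.shape)   # same points, perturbed
print(match_descriptors(base, noisy))  # each descriptor finds its counterpart
```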
APA, Harvard, Vancouver, ISO and other citation styles
28

Mattila, Marianne. „Synthetic Image Generation Using GANs : Generating Class Specific Images of Bacterial Growth“. Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176402.

The full text of the source
Annotation:
Mastitis is the most common disease affecting Swedish milk cows. Automatic image classification can be useful for quickly classifying the bacteria causing this inflammation, in turn making it possible to start treatment more quickly. However, training an automatic classifier relies on the availability of data. Data collection can be a slow process, and GANs are a promising way to generate synthetic data to add plausible samples to an existing data set. The purpose of this thesis is to explore the usefulness of GANs for generating images of bacteria. This was done by researching existing literature on the subject, implementing a GAN, and evaluating the generated images. A cGAN capable of generating class-specific bacteria images was implemented, and improvements were made upon it. The images generated by the cGAN were evaluated using visual examination, rapid scene categorization, and an expert interview regarding the generated images. While the cGAN was able to replicate certain features in the real images, it fails in crucial aspects such as symmetry and detail. It is possible that other GAN variants may be better suited to the task. Lastly, the results highlight the challenges of evaluating GANs with current evaluation methods.
APA, Harvard, Vancouver, ISO and other citation styles
29

Ju, Chen. „Edge-enhanced segmentation for SAR images“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ34190.pdf.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
30

Oliveira, Gustavo Henrique. „Analysis of M2 tidal signatures in synthetic aperture radar images of Delaware Bay“. Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 111 p, 2008. http://proquest.umi.com/pqdweb?did=1456288331&sid=3&Fmt=2&clientId=8331&RQT=309&VName=PQD.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
31

Rau, Richard. „Postprocessing tools for ultra-wideband SAR images“. Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/13389.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
32

Singh, Jagmal [Verfasser]. „Spatial content understanding of very high resolution synthetic aperture radar images / Jagmal Singh“. Siegen : Universitätsbibliothek der Universität Siegen, 2014. http://d-nb.info/1054543852/34.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
33

Seck, Bassirou. „Display and Analysis of Tomographic Reconstructions of Multiple Synthetic Aperture LADAR (SAL) images“. Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1547740781773769.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
34

Xue, Jingshuang. „Internal Wave Signature Analyses with Synthetic Aperture Radar Images in the Mid-Atlantic Bight“. Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_theses/51.

The full text of the source
Annotation:
Fifty-seven synthetic aperture radar (SAR) images were collected over the Mid-Atlantic Bight (MAB) during the Shallow Water 2006 experiment (SW06). The dependence of internal wave (IW) signature occurrences and types in SAR images on the wind conditions is studied. A defined signature mode parameter (S_m) quantifies the signature of the IW intensity profile in relation to the mean backscatter in the image background to determine different IW types (single positive, single negative and double sign). The statistical results show that moderate wind speeds of 4-7 m/s are favorable for imaging IWs by SAR, whereas very few IW signatures are observed when the wind speed is higher than 10 m/s or lower than 2 m/s. Many S_m values are larger than 1 (positive signature) even when the angles between the wind direction and the IW propagation direction (theta_Wind-IW) are less than 90 degrees in the MAB, which does not agree with the result of da Silva et al. (2002). An advanced radar imaging model has been run for different wind conditions, radar look directions and IW amplitudes. The model results indicate that the proportion of S_m values larger than 1, when theta_Wind-IW < 90 degrees, increases with IW amplitude. In general, relating IW signature types mainly to the wind direction is an oversimplification that neglects other factors such as look directions and IW amplitudes. An IW interaction pattern has been studied on the basis of two sequential images from ERS2 and ENVISAT with a time lag of 28 minutes, together with temperature and current measurements from moorings. Phase velocities of the pattern can be derived by two-dimensional cross-correlation of the two images or from in-situ measurements. In this pattern, the IW packet with the larger amplitude shifts less while the one with the smaller amplitude shifts more due to the interaction. The strong intensity in the interaction zone implies an amplitude increase. The intensity changes in the same IW packet after the interaction imply energy exchange. All these characteristics agree well with the dynamics of the two-soliton pattern with a negative phase shift, according to Peterson and van Groesen (2000).
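The two-dimensional cross-correlation used above to derive phase velocities from two sequential images can be sketched with synthetic data; dividing the recovered pixel shift by the 28-minute time lag (and multiplying by the pixel spacing) would then give a phase speed.

```python
import numpy as np

def shift_via_xcorr(img1, img2):
    """Estimate the integer displacement between two images from the peak
    of their FFT-based cross-correlation, the same idea used to track an
    internal-wave pattern between two sequential SAR scenes."""
    f = np.fft.fft2(img1) * np.conj(np.fft.fft2(img2))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    # map wrapped indices to signed shifts
    if dy > ny // 2: dy -= ny
    if dx > nx // 2: dx -= nx
    return dy, dx

rng = np.random.default_rng(1)
a = rng.normal(size=(64, 64))                  # stand-in for a SAR scene
b = np.roll(a, shift=(3, -5), axis=(0, 1))     # same pattern, displaced
print(shift_via_xcorr(b, a))                   # recovers (3, -5)
```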
APA, Harvard, Vancouver, ISO and other citation styles
35

Parker, Johne' Michelle. „An analytical and experimental investigation of physically-accurate synthetic images for machine vision design“. Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/19038.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
36

Atapattu, Charith Nisanka. „IMPROVING THE REALISM OF SYNTHETIC IMAGES THROUGH THE MIXTURE OF ADVERSARIAL AND PERCEPTUAL LOSSES“. OpenSIUC, 2018. https://opensiuc.lib.siu.edu/theses/2439.

The full text of the source
Annotation:
This research describes a novel method to generate realism-improved synthetic images while preserving annotation information and the eye-gaze direction. Furthermore, it describes how the perceptual loss can be utilized, while introducing basic features and techniques from adversarial networks, for better results.
APA, Harvard, Vancouver, ISO and other citation styles
37

D'Agostino, Alessandro. „Automatic generation of synthetic datasets for digital pathology image analysis“. Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21722/.

The full text of the source
Annotation:
The project is inspired by an actual problem of timing and accessibility in the analysis of histological samples in the health-care system. In this project, I address the problem of synthetic histological image generation for the purpose of training Neural Networks for the segmentation of real histological images. The collection of real histological human-labeled samples is a very time-consuming and expensive process, and is often not representative of healthy samples, due to the intrinsic nature of medical analysis. The method I propose is based on the replication of the traditional specimen preparation technique in a virtual environment. The first step is the creation of a 3D virtual model of a region of the target human tissue. The model should represent all the key features of the tissue, and the richer it is, the better the resulting image will be. The second step is to perform a sampling of the model through a virtual tomography process, which produces a first, completely labeled image of the section. This image is then processed with different tools to achieve a histological-like aspect. The most significant aesthetic post-processing is given by the action of a style transfer neural network that transfers the typical histological visual texture onto the synthetic image. This procedure is presented in detail for two specific models: one of pancreatic tissue and one of dermal tissue. The two resulting images compose a pair of images suitable for a supervised learning technique. The generation process is completely automated and does not require the intervention of any human operator, hence it can be used to produce arbitrarily large datasets. The synthetic images are inevitably less complex than the real samples and they offer an easier segmentation task for the NN to solve. However, the synthetic images are very abundant, and the training of a NN can take advantage of this feature, following the so-called curriculum learning strategy.
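A toy version of the pipeline's key property, that labels come for free because the image is rendered from a model, might look like this. The disc-shaped "nuclei" are invented for illustration, not the thesis's 3D tissue models.

```python
import numpy as np

def synthetic_nuclei(size=128, n=20, radius=5, seed=0):
    """Generate a toy paired sample: a grayscale 'tissue' image and its
    pixel-perfect label mask, by stamping dark disc-shaped 'nuclei' onto
    a bright textured background. Because the image is rendered from the
    model, the segmentation ground truth requires no human labeling."""
    rng = np.random.default_rng(seed)
    img = rng.normal(0.8, 0.05, (size, size))   # bright background texture
    mask = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[:size, :size]
    for cy, cx in rng.integers(radius, size - radius, (n, 2)):
        disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        img[disc] = rng.normal(0.3, 0.05)       # darker nucleus interior
        mask[disc] = 1
    return img, mask

img, mask = synthetic_nuclei()
print(img.shape, int(mask.sum()))  # perfectly aligned image/label pair
```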
APA, Harvard, Vancouver, ISO and other citation styles
38

He, Wenju [Verfasser], und Olaf [Akademischer Betreuer] Hellwich. „Segmentation-Based Building Analysis from Polarimetric Synthetic Aperture Radar Images / Wenju He. Betreuer: Olaf Hellwich“. Berlin : Universitätsbibliothek der Technischen Universität Berlin, 2011. http://d-nb.info/1014971683/34.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
39

Blagoiev, Aleksander. „Implementation and verification of a quantitative MRI method for creating and evaluating synthetic MR images“. Thesis, Karlstads universitet, Institutionen för ingenjörsvetenskap och fysik (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79068.

The full text of the source
Annotation:
The purpose of this thesis was to implement and quantitatively test a quantitative MRI (qMRI) method, from which synthetic MR images are created and also evaluated. The parameter maps of T1, T2*, and effective proton density (PD*) were tested with reference tubes containing different relaxation times, and concentrations of water (H2O) and heavy water (D2O). Two normal volunteers were also used to test the qMRI method, by performing regional analysis on the parameter maps of the volunteers. The synthetic FLASH MR images were evaluated by: using the relative standard deviation of a region of interest (ROI) as a measure of the signal-to-noise ratio (SNR), implanting artificial multiple sclerosis (MS) lesions in the parameter maps used to create the synthetic images, and obtaining an MRI radiologist's opinion of the images. All MRI measurements were conducted on a 3.0 Tesla scanner (Siemens MAGNETOM Skyrafit). The results from the reference tube testing show that the implementation was reasonably successful, although the T2* maps cannot display values for voxels with T2 exceeding 100 ms. In vivo parameter map ROI values were consistent between volunteers. The SNR and contrast-to-noise ratio of synthetic images are comparable to their measured counterparts, depending on TE. The artificial MS lesions were distinguishable from normal-appearing tissue in a T1-weighted synthetic FLASH. The radiologist considered a synthetic T2*-weighted FLASH somewhat promising for clinical use after further research and development, while a synthetic T1-weighted FLASH already had clinical value.
The purpose of this work was to implement and quantitatively examine a quantitative MRI (qMRI) method, and then to create and evaluate synthetic MR images. The qMRI method's parameter maps (T1, T2* and effective proton density PD*) were examined with different types of reference samples. These samples contained different relaxation times, as well as different concentrations of water (H2O) and heavy water (D2O). In vivo parameter maps from volunteers were examined by comparing the T1, T2* and PD* values in regions of interest (ROIs) between volunteers and against published values. Synthetic FLASH MR images were evaluated by: using the relative standard deviation of a region of interest (ROI) as a measure of the signal-to-noise ratio (SNR), implanting artificial multiple sclerosis (MS) lesions in the volunteers' parameter maps to see whether these could be identified in the synthetic MR images, and finally having an MRI radiologist evaluate the images. The MRI measurements were performed on a 3.0 Tesla MRI scanner (Siemens MAGNETOM Skyrafit). The results from the reference samples show that the implementation was reasonably successful, although computed T2* values are not reliable for voxels with T2 above 100 ms. The volunteers' parameter maps agreed well with each other, though unfortunately not with published values. The SNR and contrast-to-noise ratio (CNR) of the synthetic images are comparable to their measured counterparts, depending on TE. The artificial MS lesions could be clearly distinguished from normal surrounding tissue in a T1-weighted synthetic FLASH. The radiologist considered a synthetic T2*-weighted FLASH somewhat promising for clinical use after further improvements, while a synthetic T1-weighted FLASH already had clinical value.
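The SNR measure mentioned in this abstract, the inverse of the relative standard deviation of an ROI, is simple to sketch:

```python
import numpy as np

def roi_snr(image, roi):
    """SNR of a region of interest estimated as mean/std: the inverse of
    the relative standard deviation used to compare synthetic and
    measured FLASH images."""
    vals = image[roi]
    return vals.mean() / vals.std()

img = np.array([[9.0, 11.0], [9.0, 11.0]])
roi = np.ones_like(img, dtype=bool)  # take the whole image as the ROI
print(roi_snr(img, roi))  # mean 10, std 1 -> SNR 10.0
```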
APA, Harvard, Vancouver, ISO and other citation styles
40

Meyer, Rory George Vincent. „Classification of ocean vessels from low resolution satellite SAR images“. Diss., University of Pretoria, 2005. http://hdl.handle.net/2263/66224.

The full text of the source
Annotation:
In the long term it is beneficial to a country's economy to exploit the maritime environment surrounding it responsibly. It is also beneficial to protect this environment from poaching and pollution. To achieve this the responsible parties of a country must have an awareness of what is transpiring in the maritime domain. Synthetic aperture radar can provide an image, regardless of weather or light conditions, of the ocean showing most vessels therein. To monitor the ocean, using synthetic aperture radar imagery, at the lowest cost would require large swath synthetic aperture radar imagery. There exists a trade-off between large swath imagery and the image's resolution resulting in the largest swath image having the poorest resolution. Existing research has shown that it is possible to use coarse resolution synthetic aperture radar imagery to detect vessels at sea, but little work has been done on classifying those vessels. This research aims to investigate the coarse resolution classification information gap. This is done by using a dataset of matching synthetic aperture radar and ship transponder data to train a statistical classification algorithm in order to classify or estimate the length of vessels based on features extracted from their synthetic aperture radar image. The results of this research show that coarse resolution (approximately 40 m per pixel) synthetic aperture radar imagery is able to estimate vessel size for larger classes and provides insight on which vessel classes would require finer resolutions in order to be detected and classified reliably. The range of smaller vessel classes is usually limited to ports and fishing zones. These zones can be mapped using historical vessel transponder data and so a dedicated surveillance campaign can be optimised to use higher resolution products in these areas. The size estimation from the machine learning algorithm performs better than current techniques.
Dissertation (MEng)--University of Pretoria, 2017.
Electrical, Electronic and Computer Engineering
MEng
Unrestricted
APA, Harvard, Vancouver, ISO and other citation styles
41

Hammond, Patrick Douglas. „Deep Synthetic Noise Generation for RGB-D Data Augmentation“. BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7516.

The full text of the source
Annotation:
Considerable effort has been devoted to finding reliable methods of correcting noisy RGB-D images captured with unreliable depth-sensing technologies. Supervised neural networks have been shown to be capable of RGB-D image correction, but require copious amounts of carefully-corrected ground-truth data to train effectively. Data collection is laborious and time-intensive, especially for large datasets, and generation of ground-truth training data tends to be subject to human error. It might be possible to train an effective method on a relatively smaller dataset using synthetically damaged depth-data as input to the network, but this requires some understanding of the latent noise distribution of the respective camera. It is possible to augment datasets to a certain degree using naive noise generation, such as random dropout or Gaussian noise, but these tend to generalize poorly to real data. A superior method would imitate real camera noise to damage input depth images realistically so that the network is able to learn to correct the appropriate depth-noise distribution.We propose a novel noise-generating CNN capable of producing realistic noise customized to a variety of different depth-noise distributions. In order to demonstrate the effects of synthetic augmentation, we also contribute a large novel RGB-D dataset captured with the Intel RealSense D415 and D435 depth cameras. This dataset pairs many examples of noisy depth images with automatically completed RGB-D images, which we use as proxy for ground-truth data. We further provide an automated depth-denoising pipeline which may be used to produce proxy ground-truth data for novel datasets. We train a modified sparse-to-dense depth-completion network on splits of varying size from our dataset to determine reasonable baselines for improvement. 
We determine through these tests that adding more noisy depth frames to each RGB-D image in the training set has a nearly identical impact on depth-completion training as gathering more ground-truth data. We leverage these findings to produce additional synthetic noisy depth images for each RGB-D image in our baseline training sets using our noise-generating CNN. Through use of our augmentation method, it is possible to achieve greater than 50% error reduction on supervised depth-completion training, even for small datasets.
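The naive augmentation that the abstract argues generalizes poorly (random dropout plus Gaussian noise) is the baseline against which a learned noise model is compared; a minimal sketch, with invented parameter values, might be:

```python
import numpy as np

def naive_depth_noise(depth, dropout=0.1, sigma=10.0, seed=0):
    """Baseline synthetic damage for depth maps: additive Gaussian noise
    plus random dropout (depth -> 0, mimicking invalid sensor readings).
    A learned, camera-specific noise model is meant to replace this."""
    rng = np.random.default_rng(seed)
    noisy = depth + rng.normal(0.0, sigma, depth.shape)
    noisy[rng.random(depth.shape) < dropout] = 0.0  # missing readings
    return noisy

clean = np.full((4, 4), 1000.0)  # flat surface at 1 m (depth in mm)
print(naive_depth_noise(clean))
```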
APA, Harvard, Vancouver, ISO and other citation styles
42

Yanasse, Corina da Costa Freitas. „Statistical analysis of synthetic aperture radar images and its applications to system analysis and change detection“. Thesis, University of Sheffield, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363390.

The full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
43

Hagvall, Hörnstedt Julia. „Synthesis of Thoracic Computer Tomography Images using Generative Adversarial Networks“. Thesis, Linköpings universitet, Avdelningen för medicinsk teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158280.

The full text of the source
Annotation:
The use of machine learning algorithms to enhance and facilitate medical diagnosis and analysis is a promising and important area, which could reduce the workload of clinicians substantially. In order for machine learning algorithms to learn a certain task, a large amount of data needs to be available. Data sets for medical image analysis are rarely public due to restrictions concerning the sharing of patient data. The production of synthetic images could act as an anonymization tool to enable the distribution of medical images and facilitate the training of machine learning algorithms, which could be used in practice. This thesis investigates the use of Generative Adversarial Networks (GAN) for synthesis of new thoracic computer tomography (CT) images, with no connection to real patients. It also examines the usefulness of the images by comparing the quantitative performance of a segmentation network trained with the synthetic images with the quantitative performance of the same segmentation network trained with real thoracic CT images. The synthetic thoracic CT images were generated using CycleGAN for image-to-image translation between label map ground truth images and thoracic CT images. The synthetic images were evaluated using different set-ups of synthetic and real images for training the segmentation network. All set-ups were evaluated according to sensitivity, accuracy, Dice and F2-score and compared to the same parameters evaluated from a segmentation network trained with 344 real images. The thesis shows that it was possible to generate synthetic thoracic CT images using GAN. However, it was not possible to achieve an equal quantitative performance of a segmentation network trained with synthetic data compared to a segmentation network trained with the same amount of real images in the scope of this thesis.
It was possible to achieve equal quantitative performance of a segmentation network, as a segmentation network trained on real images, by training it with a combination of real and synthetic images, where a majority of the images were synthetic images and a minority were real images. By using a combination of 59 real images and 590 synthetic images, equal performance as a segmentation network trained with 344 real images was achieved regarding sensitivity, Dice and F2-score. Equal quantitative performance of a segmentation network could thus be achieved by using fewer real images together with an abundance of synthetic images, created at close to no cost, indicating a usefulness of synthetically generated images.
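The overlap metrics used in this evaluation can be computed from binary masks as follows (standard definitions; the F2-score weights recall four times as heavily as precision):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Sensitivity, Dice and F2-score for binary segmentation masks, as
    used to compare networks trained on real vs. synthetic CT images."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sens = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)      # identical to the F1-score
    f2 = 5 * tp / (5 * tp + 4 * fn + fp)    # recall-weighted F-score
    return sens, dice, f2

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
print(seg_metrics(pred, truth))  # (0.75, 0.75, 0.75)
```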
APA, Harvard, Vancouver, ISO and other citation styles
44

Warrick, Abbie Lynn 1967. „Application of wavelet and radon-based techniques to the internal wake problem in synthetic aperture radar images“. Diss., The University of Arizona, 1996. http://hdl.handle.net/10150/282191.

The full text of the source
Annotation:
One problem of interest to the oceanic engineering community is the detection and enhancement of internal wakes in open water synthetic aperture radar (SAR) images. Internal wakes, which occur when a ship travels in a stratified medium, have a "V" shape extending from the ship, and a chirp-like feature across each arm. The Radon transform has been applied to the detection and the enhancement problems in internal wake images to account for the linear features, while the wavelet transform has been applied to the enhancement problem in internal wake images to account for the chirp-like features. Although the Radon transform accentuates linear features, there have been several difficulties in applying this transform to the wake detection and enhancement problem because the transform is not localized. In a recent article by Copeland et al., a localized Radon transform (LRT) was developed and was shown to reduce the speckle noise. In this dissertation, another derivation of the LRT is obtained which shows that this transform is equivalent to the Radon transform with a rectangular window function. Several properties not considered in the article are derived using the new formulation. Another transform which has been applied to internal wake images is the wavelet transform. In a recent paper by Teti et al., the wavelet transform was applied to slices through internal wakes in SAR images. Although the wavelet transform reduced the speckle noise in SAR wake images, it required extracting a line from the image. In this dissertation, a wavelet localized Radon transform is developed which performs the wavelet transform on all lines in an image without explicitly extracting slices of the image. The fundamental theory for this transform is developed and several examples are considered. This transform is then expanded to include features which occur over a region with a significant length.
The fundamental theory for this new transform, a localized Radon transform with a wavelet filter, is developed and several examples are provided. These new transforms are then incorporated into optimal and sub-optimal detection schemes for images with linear features, including ship wakes, which are contaminated by additive Gaussian noise.
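The abstract above characterizes the localized Radon transform (LRT) as a Radon transform with a rectangular window function. That idea can be sketched in a few lines of numpy; the function name, the parametrization, and the toy image below are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def radon_line_sum(img, theta, rho, window=None):
    """Sum image values along the line x*cos(theta) + y*sin(theta) = rho,
    measured from the image centre. If `window` (a half-length in pixels)
    is given, the sum is restricted to a rectangular window around the
    line's midpoint, i.e. a Radon transform with a rectangular window."""
    n = img.shape[0]
    c = (n - 1) / 2.0                      # image centre
    ts = np.arange(-c, c + 1.0)            # positions along the line
    if window is not None:
        ts = ts[np.abs(ts) <= window]      # rectangular window
    xs = rho * np.cos(theta) - ts * np.sin(theta) + c
    ys = rho * np.sin(theta) + ts * np.cos(theta) + c
    xi, yi = np.rint(xs).astype(int), np.rint(ys).astype(int)
    ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
    return float(img[yi[ok], xi[ok]].sum())

# A vertical bright line in column 40 of a 64x64 image.
img = np.zeros((64, 64))
img[:, 40] = 1.0
rho = 40 - 31.5                            # column 40 relative to the centre
full = radon_line_sum(img, 0.0, rho)              # whole line: 64 pixels
local = radon_line_sum(img, 0.0, rho, window=10)  # windowed: 20 pixels
```

Restricting the integration to the window suppresses contributions (and speckle) from the rest of the line, which is the localization the abstract refers to.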
APA, Harvard, Vancouver, ISO and other citation styles
45

Preiss, Mark. „Detecting scene changes using synthetic aperture radar interferometry /“. Title page, table of contents and abstract only, 2004. http://web4.library.adelaide.edu.au/theses/09PH/09php9242.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
46

Brinkman, Wade H. „Focusing ISAR images using fast adaptive time-frequency and 3D motion detection on simulated and experimental radar data“. Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FBrinkman.pdf.

Full text of the source
Annotation:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, June 2005.
Thesis Advisor(s): Michael A. Morgan, Thayananthan Thayaparan. Includes bibliographical references (p. 119-120). Also available online.
APA, Harvard, Vancouver, ISO and other citation styles
47

Schilling, Lennart. „Generating synthetic brain MR images using a hybrid combination of Noise-to-Image and Image-to-Image GANs“. Thesis, Linköpings universitet, Statistik och maskininlärning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-166034.

Full text of the source
Annotation:
Generative Adversarial Networks (GANs) have attracted much attention because of their ability to learn high-dimensional, realistic data distributions. In the field of medical imaging, they can be used to augment the often small image sets available; in this way, the training of image classification or segmentation models can be improved to support clinical decision making. GANs can be distinguished by their input: while Noise-to-Image GANs synthesize new images from a random noise vector, Image-to-Image GANs translate a given image into another domain. This study investigates whether the performance of a Noise-to-Image GAN, defined by the quality and diversity of its generated output, can be improved by using elements of a previously trained Image-to-Image GAN within its training. The data used consist of paired T1- and T2-weighted MR brain images. With the objective of generating additional T1-weighted images, a hybrid model (Hybrid GAN) is implemented that combines elements of a Deep Convolutional GAN (DCGAN) as the Noise-to-Image GAN and a Pix2Pix as the Image-to-Image GAN. Starting from dependence on an input image, the model is gradually converted into a Noise-to-Image GAN. Performance is evaluated using an independent classifier that estimates the divergence between the generative output distribution and the real data distribution. When comparing the Hybrid GAN with the DCGAN baseline, no improvement in either the quality or the diversity of the generated images was observed. Consequently, it could not be shown that the performance of a Noise-to-Image GAN is improved by using elements of a previously trained Image-to-Image GAN within its training.
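The evaluation described in the abstract, an independent classifier estimating the divergence between generated and real distributions, follows the logic of a classifier two-sample test: if a classifier trained to separate real from generated samples does no better than chance, the distributions are close. The following numpy sketch illustrates that logic on toy 2-D Gaussian data; the function name and the toy "generators" are assumptions for illustration, not the thesis's evaluation code or data:

```python
import numpy as np

def classifier_divergence(real, fake, steps=500, lr=0.1):
    """Train a logistic-regression classifier to separate real samples
    (label 1) from generated samples (label 0); its training accuracy
    above chance (0.5) serves as a simple proxy for the divergence
    between the two distributions."""
    X = np.vstack([real, fake])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(fake))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                        # plain gradient descent
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid probabilities
        g = p - y                                 # per-sample error
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    acc = (((X @ w + b) > 0) == (y == 1)).mean()
    return acc - 0.5

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 2))
good = rng.normal(0.0, 1.0, (500, 2))   # "generator" matching the data
bad = rng.normal(3.0, 1.0, (500, 2))    # "generator" with a shifted mean
low = classifier_divergence(real, good)   # small: hard to tell apart
high = classifier_divergence(real, bad)   # large: easy to tell apart
```

A well-matched generator yields a score near zero (the classifier cannot beat chance), while a poorly matched one yields a score approaching 0.5.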
APA, Harvard, Vancouver, ISO and other citation styles
48

Reppucci, Antonio [Verfasser], and Hartmut [Akademischer Betreuer] Graßl. „Extreme Wind and Wave Conditions in Tropical Cyclones Observed from Synthetic Aperture Radar Images / Antonio Reppucci. Betreuer: Hartmut Graßl“. Hamburg : Staats- und Universitätsbibliothek Hamburg, 2013. http://d-nb.info/103340344X/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
49

Petre, Valentina. „Generating synthetic 3-D images of objects lit by speckle light, providing a test for 3-D reconstruction algorithms“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0002/MQ44033.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
50

Rezende, Djaine Damiati. „Transdução e realidade híbrida em Avatar : uma experiência media assemblage /“. Bauru : [s.n.], 2010. http://hdl.handle.net/11449/89494.

Full text of the source
Annotation:
Advisor: Adenil Alfeu Domingos
Examining board: Heloisa Helou Doca
Examining board: Ana Silvia Lopes Davi Médola
Resumo: O filme Avatar (2009d), dirigido por James Cameron, trouxe inovações tecnológicas capazes de gerar efeitos visuais e sensoriais sem precedentes na história do cinema, além de promover, por meio das estratégias de construção de mundos e aspersão de conteúdos transmídia, efeitos imersivos análogos aos propostos pelas imagens corporificadas que emergem da tela durante a exibição do longametragem e que se caracterizam pelo uso da realidade aumentada. Essa combinação instaura um novo paradigma no âmbito da narrativa audiovisual adentrando o espaço híbrido da percepção, no que diz respeito tanto às fronteiras entre virtual e atual, quando entre real e ficcional, fenômeno a que chamamos aqui de media assemblage. Analisaremos as estratégias de sentido utilizadas dentro e fora do suporte cinematográfico, a fim de estabelecer relações entre a dialógica implícita no uso das tecnologias aplicadas ao processo de potencialização sensória e os efeitos de densidade conseguidos por meio da construção do universo narrativo multiplataformas, tendo como base as ideias de transdução e sinequismo desenvolvidas por Charles Sanders Peirce
Abstract: The movie Avatar, directed by James Cameron (2009d), launched technological innovations able to generate visual and sensory effects unprecedented in the movie industry, and promoted, through strategies of world-building and the sprinkling of transmedia content, immersive effects similar to those proposed by the embodied images that emerge from the screen during its display and that are characterized by the use of expanded realities. This combination establishes a new paradigm in the context of audiovisual narrative, entering the hybrid space of perception with regard both to the borders between the virtual and the actual and to those between the real and the fictional, a phenomenon we call here media assemblage. In this undertaking we analyze the meaning strategies used inside and outside the cinematic medium in order to establish relations between the dialogic implicit in the use of technologies applied to the process of sensory potentialization and the density effects achieved through the construction of a multi-platform narrative universe, based on the ideas of transduction and synechism developed by Charles Sanders Peirce.
Master's
APA, Harvard, Vancouver, ISO and other citation styles