Selection of scholarly literature on the topic "Synthetic images of curtaining"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Synthetic images of curtaining."

Next to every entry in the bibliography you will find an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are provided in the metadata.

Journal articles on the topic "Synthetic images of curtaining"

1. Needham, Rodney. "Synthetic images." HAU: Journal of Ethnographic Theory 4, no. 1 (June 2014): 549–64. http://dx.doi.org/10.14318/hau4.1.039.

2. Silva, Gilberto P., Alejandro C. Frery, Sandra Sandri, Humberto Bustince, Edurne Barrenechea, and Cédric Marco-Detchart. "Optical images-based edge detection in Synthetic Aperture Radar images." Knowledge-Based Systems 87 (October 2015): 38–46. http://dx.doi.org/10.1016/j.knosys.2015.07.030.

3. Montserrat, Daniel Mas, Qian Lin, Jan Allebach, and Edward J. Delp. "Logo detection and recognition with synthetic images." Electronic Imaging 2018, no. 10 (January 28, 2018): 337-1. http://dx.doi.org/10.2352/issn.2470-1173.2018.10.imawm-337.

4. Brasher, J. D., and Mark Woodson. "Composite training images for synthetic discriminant functions." Applied Optics 35, no. 2 (January 10, 1996): 314. http://dx.doi.org/10.1364/ao.35.000314.

5. Sola, Ion, Maria Gonzalez-Audicana, Jesus Alvarez-Mozos, and Jose Luis Torres. "Synthetic Images for Evaluating Topographic Correction Algorithms." IEEE Transactions on Geoscience and Remote Sensing 52, no. 3 (March 2014): 1799–810. http://dx.doi.org/10.1109/tgrs.2013.2255296.

6. Sæbø, Torstein Olsmo, Roy E. Hansen, and Hayden J. Callow. "Multifrequency interferometry on synthetic aperture sonar images." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3898. http://dx.doi.org/10.1121/1.2935867.

7. Denison, Kenneth, G. Neil Holland, and Gordon D. DeMeester. "4881033 Noise-reduced synthetic T2 weighted images." Magnetic Resonance Imaging 9, no. 3 (January 1991): II. http://dx.doi.org/10.1016/0730-725x(91)90442-o.

8. Li, Y., V. L. Newhouse, P. M. Shankar, and P. Karpur. "Speckle reduction in ultrasonic synthetic aperture images." Ultrasonics 30, no. 4 (January 1992): 233–37. http://dx.doi.org/10.1016/0041-624x(92)90082-w.

9. Ivanov, Andrei Yu., and Anna I. Ginzburg. "Oceanic eddies in synthetic aperture radar images." Journal of Earth System Science 111, no. 3 (September 2002): 281–95. http://dx.doi.org/10.1007/bf02701974.

10. Sychra, J. J., P. A. Bandettini, N. Bhattacharya, and Q. Lin. "Synthetic images by subspace transforms I. Principal components images and related filters." Medical Physics 21, no. 2 (February 1994): 193–201. http://dx.doi.org/10.1118/1.597374.

Dissertations on the topic "Synthetic images of curtaining"

1. Dvořák, Martin. "Anticurtaining - obrazový filtr pro elektronovou mikroskopii" [Anticurtaining: an image filter for electron microscopy]. Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445537.

Annotation:
Tomographic analysis with a focused ion beam (FIB) produces 3D images of the examined material at the nanoscale. This thesis presents a new machine-learning approach to eliminating the curtain effect: a convolutional neural network, trained by supervised learning, removes the damage from affected images. The network operates on features of the damaged image obtained by wavelet transformation, and its output is a visually clean image. The thesis also designs a synthetic dataset for training the network, created by simulating the physical process that produces the real images: the milling of the examined material by the FIB and the imaging of the surface by a scanning electron microscope (SEM). The resulting approach performs accurately on real images. A qualitative evaluation was carried out by laypeople and experts in the field, who anonymously compared the solution with another method for removing the curtaining effect. The solution represents a new and promising approach to eliminating the curtaining effect and contributes to better processing of the images produced during material analysis.
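The synthetic-training-pair idea described above (simulating the physical imaging process to obtain curtained and clean versions of the same image) can be illustrated with a much simpler model. Below is a minimal sketch assuming only NumPy; the column-wise gain pattern is a crude, hypothetical stand-in for the FIB-SEM milling simulation the thesis actually uses:

```python
import numpy as np

def add_curtaining(image, strength=0.2, stripe_width=5, seed=None):
    """Overlay vertical stripe artifacts ("curtaining") on a grayscale image.

    Curtaining shows up as column-correlated intensity variation, so we
    draw one random gain per column, smooth across neighbouring columns
    so that stripes have finite width, and apply each gain down its column.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    gains = rng.normal(0.0, strength, size=w)
    kernel = np.ones(stripe_width) / stripe_width
    gains = np.convolve(gains, kernel, mode="same")
    return np.clip(image * (1.0 + gains[np.newaxis, :]), 0.0, 1.0)

# A (clean, curtained) pair for supervised training, from any image in [0, 1].
clean = np.random.default_rng(0).random((256, 256))
curtained = add_curtaining(clean, strength=0.3, seed=1)
```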
2. García, Armando. "Efficient rendering of synthetic images." Thesis, Massachusetts Institute of Technology, 1986. http://hdl.handle.net/1721.1/15182.

Annotation:
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986. Bibliography: leaves 221–224.
3. Manamasa, Krishna Himaja. "Domain adaptation from 3D synthetic images to real images." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19303.

Annotation:
Background. Domain adaptation describes a model learning from a source data distribution and performing well on target data. Here the concept is applied to assembly-line production tasks to perform automatic quality inspection. Objectives. The aim of this master's thesis is to apply 3D domain adaptation from synthetic images to real images: to bridge the gap between the two domains (synthetic and real point-cloud images) by implementing deep learning models that learn from synthetic 3D point clouds (CAD model images) and perform well on actual 3D point clouds (3D camera images). Methods. Various methods for understanding and analyzing the data, and for making the CAD and CAM data more similar, are investigated; literature review and a controlled experiment are the research methodologies followed during implementation. Four different deep learning models are trained on the generated data and their performance compared to determine which performs best. Results. The results are reported through two metrics, accuracy and training time, recorded for each model after the experiment and illustrated as graphs for comparative analysis of the models on which the data was trained and tested. PointDAN showed better results, with higher accuracy than the other three models. Conclusions. The results show that domain adaptation from synthetic images to real images is possible with the generated data. PointDAN, which focuses on local and global feature alignment with single-view point data, shows the best results on our data.
4. Hagedorn, Michael. "Classification of synthetic aperture radar images." Thesis, University of Canterbury, Electrical and Computer Engineering, 2004. http://hdl.handle.net/10092/5966.

Annotation:
In this thesis the maximum a posteriori (MAP) approach to synthetic aperture radar (SAR) analysis is reviewed. The MAP model consists of two probability density functions (PDFs): the likelihood function and the prior model. Contributions related to both models are made. As the first contribution a new likelihood function describing the multilook three-polarisation intensity SAR speckle process, which is equivalent to the averaged squared amplitude samples from a three-dimensional complex zero-mean circular Gaussian density, has been derived. This PDF is a correlated three-dimensional chi-square density in the form of an infinite series of modified Bessel functions with seven independent parameters. Details concerning the PDF such as the estimation of the PDF parameters from sample data and the moments of the PDF are described. The new likelihood function is tested against simulated and measured SAR data. The second contribution is a novel parameter estimation method for discrete Gibbs random field (GRF) prior models. Given a quantity of sample data, the parameters of the GRF model, which comprise the values of the potential functions of individual cliques, are estimated. The method uses an error function describing the difference between the local model PDF and the equivalent estimated from sample data. The concept of "equivalencies" is introduced to simplify the process. The new parameter estimation method is validated and compared to Besag's parameter estimation method (coding method) using GRF realisations and other sample data.
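As background to the likelihood-function contribution: the standard single-channel multilook SAR intensity speckle model, which this thesis generalizes to correlated three-polarisation data, is the Gamma density. A sketch of that baseline form (the thesis's own seven-parameter Bessel-series PDF is considerably more involved):

```latex
% L-look intensity I over a region of mean reflectivity \sigma:
p(I \mid \sigma, L) = \frac{L^{L} I^{L-1}}{\Gamma(L)\,\sigma^{L}}
  \exp\!\left(-\frac{L I}{\sigma}\right), \qquad I \ge 0,
\qquad \mathbb{E}[I] = \sigma, \quad \operatorname{Var}[I] = \sigma^{2}/L .
```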
5. Hasegawa, Robert Shigehisa. "Using synthetic images to improve iris biometric performance." Scholarly Commons, 2012. https://scholarlycommons.pacific.edu/uop_etds/827.

6. Aubrecht, Tomáš. "Generation of Synthetic Retinal Images with High Resolution." Master's thesis, Vysoké učení technické v Brně, Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417220.

Annotation:
Capturing images of the retina, the most important part of the human eye, requires special equipment: a fundus camera. The goal of this thesis is therefore to design and implement a system capable of generating such images without that camera. The proposed system maps an input black-and-white image of the retinal blood vessels to a colour output image of the whole retina. It consists of two neural networks: a generator, which produces retinal images, and a discriminator, which classifies the images as real or synthetic. The system was trained on 141 images from publicly available databases. A new database of more than 2,800 images of healthy retinas at a resolution of 1024x1024 was then created; it can serve as a teaching aid for ophthalmologists or as a basis for the development of various applications that work with retinal images.
7. Sabel, Johan. "Detecting Synthetic Images of Faces using Deep Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-287447.

Annotation:
Significant progress has been made within human face synthesis due to recent advances in generative adversarial networks. These networks can be used to generate credible high-quality images of faces not belonging to real people, which is something that could be exploited by malicious actors. In this thesis, several state-of-the-art deep learning detection models were evaluated with respect to their robustness and generalization capability, which are two factors that must be taken into consideration for models that are intended to be deployed in the wild. The results show that some classifiers exhibited near-perfect performance when tested on real and synthetic images post-processed heavily using various augmentation techniques. These types of image perturbations further improved robustness when also incorporated in the training data. However, no model generalized well to out-of-distribution images from unseen datasets, although one model showed impressive results after being fine-tuned on a small number of samples from the target distributions. Nevertheless, the limited generalization capability remains a shortcoming that must be overcome before the detection models can become viable in the wild.
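The robustness protocol described here, evaluating detectors on heavily post-processed images and reusing the same perturbations as training augmentation, can be sketched with a few standard operations. A minimal example assuming Pillow and NumPy; the specific perturbation strengths are illustrative, not the thesis's settings:

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def perturb(img, jpeg_quality=40, blur_radius=1.5, noise_std=8.0, seed=None):
    """Apply post-processing a detector should survive: JPEG
    re-compression, Gaussian blur, and additive pixel noise."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)  # rewind the buffer before re-reading the compressed image
    img = Image.open(buf).convert("RGB")
    img = img.filter(ImageFilter.GaussianBlur(radius=blur_radius))
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.default_rng(seed).normal(0.0, noise_std, size=arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Applying `perturb` both to evaluation images (to measure robustness) and to training batches (as augmentation) mirrors the two uses described in the abstract.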
8. Zeid Baker, Mousa. "Generation of Synthetic Images with Generative Adversarial Networks." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-15866.

Annotation:
Machine Learning is a fast-growing area that revolutionizes computer programs by providing systems with the ability to automatically learn and improve from experience. In most cases, the training process begins with extracting patterns from data. The data is a key factor for machine learning algorithms; without data the algorithms will not work. Thus, having sufficient and relevant data is crucial for performance. In this thesis, the researcher tackles the problem of not having a sufficient dataset, in terms of the number of training examples, for an image classification task. The idea is to use Generative Adversarial Networks to generate synthetic images similar to the ground truth, and in this way expand a dataset. Two types of experiments were conducted: the first was used to fine-tune a Deep Convolutional Generative Adversarial Network for a specific dataset, while the second was used to analyze how synthetic data examples affect the accuracy of a Convolutional Neural Network in a classification task. Three well-known datasets were used in the first experiment, namely MNIST, Fashion-MNIST, and Flower photos, while two datasets were used in the second experiment: MNIST and Fashion-MNIST. The generated MNIST and Fashion-MNIST images had good overall quality: some classes had clear visual errors while others were indistinguishable from ground-truth examples. The generated Flower photos, however, suffered from poor visual quality; one can easily tell the synthetic images from the real ones. One reason for the poor performance is the large quantity of noise in the Flower photos dataset, which made it difficult for the model to spot the important features of the flowers. The results from the second experiment show that the accuracy does not increase when the two datasets, MNIST and Fashion-MNIST, are expanded with synthetic images. This is not because the generated images had bad visual quality, but because the accuracy turned out not to be highly dependent on the number of training examples. It can be concluded that Deep Convolutional Generative Adversarial Networks are capable of generating synthetic images similar to the ground truth and can thus be used to expand a dataset. However, this approach does not completely solve the initial problem of not having adequate datasets, because Deep Convolutional Generative Adversarial Networks may themselves require, depending on the dataset, a large quantity of training examples.
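The dataset-expansion step, sampling images from a trained generator and appending them to the real training set, can be sketched as follows. A minimal PyTorch sketch for 28x28 grayscale data such as MNIST or Fashion-MNIST; the architecture and the `expand_dataset` helper are illustrative assumptions, not the thesis's exact configuration:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: latent vector -> 28x28 grayscale image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 7, 1, 0),  # -> 128 x 7 x 7
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),     # -> 64 x 14 x 14
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1),       # -> 1 x 28 x 28
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), self.z_dim, 1, 1))

def expand_dataset(real_images, generator, n_new):
    """Append n_new generated samples (values in [-1, 1]) to a real
    image tensor of shape (N, 1, 28, 28)."""
    generator.eval()
    with torch.no_grad():
        fake = generator(torch.randn(n_new, generator.z_dim))
    return torch.cat([real_images, fake], dim=0)
```

Whether the extra samples help is exactly what the second experiment measures; as the abstract notes, for MNIST and Fashion-MNIST the classifier accuracy did not increase.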
9. Haiderbhai, Mustafa. "Generating Synthetic X-rays Using Generative Adversarial Networks." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41092.

Annotation:
We propose a novel method for generating synthetic X-rays from atypical inputs. This method creates approximate X-rays for use in non-diagnostic visualization problems where only generic cameras and sensors are available; traditional methods are restricted to 3-D inputs such as meshes or Computed Tomography (CT) scans. We create custom synthetic X-ray datasets using a custom generator capable of producing RGB images, point-cloud images, and 2-D pose images. We create a dataset from natural hand poses and train general-purpose Conditional Generative Adversarial Networks (CGANs) as well as our own novel network, pix2xray. Our results demonstrate the plausibility of generating X-rays from point-cloud and RGB images, and the superiority of our pix2xray approach, especially in the troublesome cases of occlusion due to overlapping or rotated anatomy. Overall, our work establishes a baseline showing that synthetic X-rays can be simulated from inputs such as RGB images and point clouds.
10. Johnson, David L. "Airborne synthetic aperture radar images of an upwelling filament." Thesis, University of Hawaii at Manoa, 2003. http://hdl.handle.net/10125/7036.

Annotation:
The Cape Mendocino upwelling filament was imaged in 1989 using the NASA/JPL AIRSAR multiband Synthetic Aperture Radar (SAR) and NOAA AVHRR thermal and optical radiometry. To first order, SAR images of the ocean depend solely on the surface wave field, but they ultimately reflect the synergy of a vast number of geophysical processes. The complexity of surface wave processes leaves a large gap between the information contained in SAR images and our ability to describe them without conjecture. Investigated here are features associated with thermal fronts, vortices, geostrophic jets, and internal waves. SAR spectra suggest infragravity waves aligned with the wind swell. Cross-jet SAR profiles were investigated in detail; comparison with results from a simple model suggests that some processes not included in the simulation dominate in the physical environment. Band-dependent asymmetry of the profiles is consistent with convergence and accumulation of surfactants; the band-independent location of the peaks suggests that such convergence may be a jet-driven process. The band-independent position of humps in the profiles suggests critical reflection of strongly imaged intermediate (λ > λ_Bragg) waves, or alternately a persistent and complex jet velocity profile. Apparently anomalously high damping of longer Bragg waves at some jet orientations is inconsistent with historical measurements of the modulus of elasticity of ocean surfactants and might indicate hyperconcentration of surfactants within a zone of strong convergence. Net changes in radar cross-section across some sections of the jet could indicate a number of wave or current processes, which are discussed.
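The "(λ > λ_Bragg)" condition above refers to Bragg resonance, the first-order mechanism by which radar images the ocean surface: backscatter comes preferentially from water waves whose wavelength matches the resonance condition. For reference (standard radar oceanography, not specific to this thesis):

```latex
% Radar wavelength \lambda_r, incidence angle \theta,
% resonant (Bragg) water wavelength \lambda_B:
\lambda_B = \frac{\lambda_r}{2 \sin \theta}
```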

Books on the topic "Synthetic images of curtaining"

1. Quegan, Shaun, ed. Understanding synthetic aperture radar images. Boston: Artech House, 1998.

2. Maître, Henri, ed. Processing of Synthetic Aperture Radar Images. London, UK: ISTE, 2008. http://dx.doi.org/10.1002/9780470611111.

3. Kong, Kim Kok. Focusing of inverse synthetic aperture radar (ISAR) images. Birmingham: University of Birmingham, 1995.

4. Understanding Synthetic Aperture Radar Images. SciTech Publishing, 2004.

5. Maître, Henri, ed. Processing of synthetic aperture radar images. Hoboken, NJ: Wiley, 2008.

6. Maître, Henri, ed. Processing of synthetic aperture radar images. Newport Beach, CA: ISTE, 2007.

7. Maître, Henri. Processing of Synthetic Aperture Radar (SAR) Images. John Wiley & Sons, 2013.

8. Maître, Henri. Processing of Synthetic Aperture Radar (SAR) Images. John Wiley & Sons, 2013.

9. Maître, Henri. Processing of Synthetic Aperture Radar (SAR) Images. John Wiley & Sons, 2010.

10. Wall, Stephen D., United States National Aeronautics and Space Administration Scientific and Technical Information Branch, and Jet Propulsion Laboratory (U.S.), eds. User guide to the Magellan synthetic aperture radar images. [Washington, D.C.]: National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1995.

Book chapters on the topic "Synthetic images of curtaining"

1. Paulus, Dietrich W. R., and Joachim Hornegger. "Synthetic Signals and Images." In Pattern Recognition of Images and Speech in C++, 223–33. Wiesbaden: Vieweg+Teubner Verlag, 1997. http://dx.doi.org/10.1007/978-3-663-13991-1_18.

2. Paulus, Dietrich W. R., and Joachim Hornegger. "Synthetic Signals and Images." In Pattern Recognition and Image Processing in C++, 253–62. Wiesbaden: Vieweg+Teubner Verlag, 1995. http://dx.doi.org/10.1007/978-3-322-87867-0_18.

3. Bourke, Paul. "Synthetic Stereoscopic Panoramic Images." In Interactive Technologies and Sociotechnical Systems, 147–55. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11890881_17.

4. Alexa, Marc. "Synthetic Images on Real Surfaces." In Computational Design Modelling, 79–88. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23435-4_10.

5. Bailly, Kevin, and Maurice Milgram. "Head Pose Determination Using Synthetic Images." In Advanced Concepts for Intelligent Vision Systems, 1071–80. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-88458-3_97.

6. Bielecka, Marzena, Andrzej Bielecki, and Wojciech Wojdanowski. "Compression of Synthetic-Aperture Radar Images." In Computer Vision and Graphics, 92–99. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11331-9_12.

7. Barsky, Brian A. "Synthetic Images of Beta-spline Objects." In Computer Graphics and Geometric Modeling Using Beta-splines, 119–23. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-72292-9_19.

8. Kasper, Mike, Nima Keivan, Gabe Sibley, and Christoffer Heckman. "Light Source Estimation in Synthetic Images." In Lecture Notes in Computer Science, 887–93. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-49409-8_72.

9. Ernst, Ines, and Thomas Jung. "Adaptive Capture of Existing Cityscapes Using Multiple Panoramic Images." In 3D Synthetic Environment Reconstruction, 103–18. Boston, MA: Springer US, 2001. http://dx.doi.org/10.1007/978-1-4419-8756-3_5.

10. Goller, Alois. "Parallel Matching of Synthetic Aperture Radar Images." In Parallel Computation, 408–16. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-49164-3_39.

Conference papers on the topic "Synthetic images of curtaining"

1. Hitz, O., L. Robadey, and R. Ingold. "Analysis of synthetic document images." In Proceedings of the Fifth International Conference on Document Analysis and Recognition, ICDAR '99 (Cat. No.PR00318). IEEE, 1999. http://dx.doi.org/10.1109/icdar.1999.791802.

2. Sanders, William R., and David F. McAllister. "Producing anaglyphs from synthetic images." In Electronic Imaging 2003, edited by Andrew J. Woods, Mark T. Bolas, John O. Merritt, and Stephen A. Benton. SPIE, 2003. http://dx.doi.org/10.1117/12.474130.

3. Imdahl, Christina, Stephan Huckemann, and Carsten Gottschlich. "Towards generating realistic synthetic fingerprint images." In 2015 9th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2015. http://dx.doi.org/10.1109/ispa.2015.7306036.

4. Hanssen, A., J. Kongsli, R. E. Hansen, and S. Chapman. "Statistics of synthetic aperture sonar images." In Oceans 2003. Celebrating the Past ... Teaming Toward the Future (IEEE Cat. No.03CH37492). IEEE, 2003. http://dx.doi.org/10.1109/oceans.2003.178323.

5. Guivens, Norman R., Jr., and Philip D. Henshaw. "Image processor development with synthetic images." In Robotics - DL tentative, edited by Donald J. Svetkoff. SPIE, 1992. http://dx.doi.org/10.1117/12.57978.

6. Norell, K. "Creating synthetic log end face images." In 2009 6th International Symposium on Image and Signal Processing and Analysis. IEEE, 2009. http://dx.doi.org/10.1109/ispa.2009.5297696.

7. Devecchi, Bernadetta, Koen Benoist, Loes Scheers, Henny Veerman, Sven Binsbergen, and Lex van Eijk. "Modelling sea clutter infrared synthetic images." In Target and Background Signatures V, edited by Karin U. Stein and Ric Schleijpen. SPIE, 2019. http://dx.doi.org/10.1117/12.2535820.

8. Biswas, Sangeeta, Johan Rohdin, and Martin Drahansky. "Synthetic Retinal Images from Unconditional GANs." In 2019 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, 2019. http://dx.doi.org/10.1109/embc.2019.8857857.

9. Geigel, Joe, and F. Kenton Musgrave. "Simulated photographic development of synthetic images." In ACM SIGGRAPH 96 Visual Proceedings: The art and interdisciplinary programs of SIGGRAPH '96. New York: ACM Press, 1996. http://dx.doi.org/10.1145/253607.253892.

10. Levesque, Martin P., and Daniel St-Germain. "Generation of synthetic IR sea images." In Orlando '90, 16–20 April, edited by Milton J. Triplett, Wendell R. Watkins, and Ferdinand H. Zegel. SPIE, 1990. http://dx.doi.org/10.1117/12.21849.

Reports of organizations on the topic "Synthetic images of curtaining"

1. Doerry, Armin Walter. Apodized RFI filtering of synthetic aperture radar images. Office of Scientific and Technical Information (OSTI), February 2014. http://dx.doi.org/10.2172/1204095.

2. Goforth, J., T. White, P. Pope, R. Roberts, I. Burns, and L. Gaines. Benchmark Imagery Project, Report on Generation of Synthetic Images. Office of Scientific and Technical Information (OSTI), October 2012. http://dx.doi.org/10.2172/1053985.

3. DeLaurentis, John M., and Armin W. Doerry. Stereoscopic Height Estimation from Multiple Aspect Synthetic Aperture Radar Images. Office of Scientific and Technical Information (OSTI), August 2001. http://dx.doi.org/10.2172/786639.

4. Doerry, Armin Walter. Autofocus correction of excessive migration in synthetic aperture radar images. Office of Scientific and Technical Information (OSTI), September 2004. http://dx.doi.org/10.2172/919639.

5. Doerry, Armin Walter. Basics of Polar-Format algorithm for processing Synthetic Aperture Radar images. Office of Scientific and Technical Information (OSTI), May 2012. http://dx.doi.org/10.2172/1044949.

6. Baumgaertel, Jessica A., Paul A. Bradley, and Ian L. Tregillis. 65036 MMI data matched qualitatively by RAGE (with mix) synthetic MMI images. Office of Scientific and Technical Information (OSTI), February 2014. http://dx.doi.org/10.2172/1122056.

7. Doerry, A. W. A model for forming airborne synthetic aperture radar images of underground targets. Office of Scientific and Technical Information (OSTI), January 1994. http://dx.doi.org/10.2172/10127816.

8. Rohwer, Judd A. Open-Loop Adaptive Filtering for Speckle Reduction in Synthetic Aperture Radar Images. Office of Scientific and Technical Information (OSTI), June 2000. http://dx.doi.org/10.2172/759434.

9. Mourad, Pierre D. Inferring Atmospheric Turbulence Structure Using Synthetic Aperture Radar Images of the Ocean Surface. Fort Belvoir, VA: Defense Technical Information Center, September 1999. http://dx.doi.org/10.21236/ada629697.

10. Mourad, Pierre D. Inferring Atmospheric Turbulence Structure Using Synthetic Aperture Radar Images of the Ocean Surface. Fort Belvoir, VA: Defense Technical Information Center, September 1997. http://dx.doi.org/10.21236/ada627588.