To see the other types of publications on this topic, follow the link: Synthetic images of curtaining.

Journal articles on the topic "Synthetic images of curtaining"

Create a citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "Synthetic images of curtaining".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and a bibliographic reference for the chosen work will be generated automatically in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are present in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1. Needham, Rodney. "Synthetic images". HAU: Journal of Ethnographic Theory 4, no. 1 (June 2014): 549–64. http://dx.doi.org/10.14318/hau4.1.039.

2. Silva, Gilberto P., Alejandro C. Frery, Sandra Sandri, Humberto Bustince, Edurne Barrenechea, and Cédric Marco-Detchart. "Optical images-based edge detection in Synthetic Aperture Radar images". Knowledge-Based Systems 87 (October 2015): 38–46. http://dx.doi.org/10.1016/j.knosys.2015.07.030.

3. Montserrat, Daniel Mas, Qian Lin, Jan Allebach, and Edward J. Delp. "Logo detection and recognition with synthetic images". Electronic Imaging 2018, no. 10 (January 28, 2018): 337–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.10.imawm-337.

4. Brasher, J. D., and Mark Woodson. "Composite training images for synthetic discriminant functions". Applied Optics 35, no. 2 (January 10, 1996): 314. http://dx.doi.org/10.1364/ao.35.000314.

5. Sola, Ion, Maria Gonzalez-Audicana, Jesus Alvarez-Mozos, and Jose Luis Torres. "Synthetic Images for Evaluating Topographic Correction Algorithms". IEEE Transactions on Geoscience and Remote Sensing 52, no. 3 (March 2014): 1799–810. http://dx.doi.org/10.1109/tgrs.2013.2255296.

6. Sæbø, Torstein Olsmo, Roy E. Hansen, and Hayden J. Callow. "Multifrequency interferometry on synthetic aperture sonar images". Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3898. http://dx.doi.org/10.1121/1.2935867.

7. Denison, Kenneth, G. Neil Holland, and Gordon D. DeMeester. "4881033 Noise-reduced synthetic T2 weighted images". Magnetic Resonance Imaging 9, no. 3 (January 1991): II. http://dx.doi.org/10.1016/0730-725x(91)90442-o.

8. Li, Y., V. L. Newhouse, P. M. Shankar, and P. Karpur. "Speckle reduction in ultrasonic synthetic aperture images". Ultrasonics 30, no. 4 (January 1992): 233–37. http://dx.doi.org/10.1016/0041-624x(92)90082-w.

9. Ivanov, Andrei Yu., and Anna I. Ginzburg. "Oceanic eddies in synthetic aperture radar images". Journal of Earth System Science 111, no. 3 (September 2002): 281–95. http://dx.doi.org/10.1007/bf02701974.

10. Sychra, J. J., P. A. Bandettini, N. Bhattacharya, and Q. Lin. "Synthetic images by subspace transforms I. Principal components images and related filters". Medical Physics 21, no. 2 (February 1994): 193–201. http://dx.doi.org/10.1118/1.597374.

11. Ryu, Kyeong Hwa, Hye Jin Baek, Sung-Min Gho, Kanghyun Ryu, Dong-Hyun Kim, Sung Eun Park, Ji Young Ha, Soo Buem Cho, and Joon Sung Lee. "Validation of Deep Learning-Based Artifact Correction on Synthetic FLAIR Images in a Different Scanning Environment". Journal of Clinical Medicine 9, no. 2 (January 29, 2020): 364. http://dx.doi.org/10.3390/jcm9020364.

Annotation:
We investigated the capability of a trained deep learning (DL) model with a convolutional neural network (CNN) in a different scanning environment in terms of ameliorating the quality of synthetic fluid-attenuated inversion recovery (FLAIR) images. The acquired data of 319 patients obtained from the retrospective review were used as test sets for the already trained DL model to correct the synthetic FLAIR images. Quantitative analyses were performed for native synthetic FLAIR and DL-FLAIR images against conventional FLAIR images. Two neuroradiologists assessed the quality and artifact degree of the native synthetic FLAIR and DL-FLAIR images. The quantitative parameters showed significant improvement on DL-FLAIR in all individual tissue segments and total intracranial tissues than on the native synthetic FLAIR (p < 0.0001). DL-FLAIR images showed improved image quality with fewer artifacts than the native synthetic FLAIR images (p < 0.0001). There was no significant difference in the preservation of the periventricular white matter hyperintensities and lesion conspicuity between the two FLAIR image sets (p = 0.217). The quality of synthetic FLAIR images was improved through artifact correction using the trained DL model on a different scan environment. DL-based correction can be a promising solution for ameliorating the quality of synthetic FLAIR images to broaden the clinical use of synthetic magnetic resonance imaging (MRI).
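
The correction step described in this abstract can be pictured with a short PyTorch sketch: a stand-in residual CNN applied to an already-reconstructed synthetic FLAIR slice. The architecture, checkpoint name, and preprocessing below are illustrative assumptions, not the authors' published network.

```python
# Hypothetical sketch of DL-based artifact correction on a synthetic FLAIR
# slice. Model architecture and checkpoint path are illustrative stand-ins.
import torch
import torch.nn as nn

class CorrectionCNN(nn.Module):
    """Stand-in for the trained artifact-correction network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual correction: predict the artifact and subtract it.
        return x - self.net(x)

model = CorrectionCNN()
# model.load_state_dict(torch.load("dl_flair_correction.pt"))  # hypothetical checkpoint
model.eval()

with torch.no_grad():
    synthetic_flair = torch.rand(1, 1, 256, 256)   # placeholder slice
    corrected = model(synthetic_flair)
```
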
12. Łach, Błażej, and Edyta Łukasik. "Faster R-CNN model learning on synthetic images". Journal of Computer Sciences Institute 17 (December 30, 2020): 401–4. http://dx.doi.org/10.35784/jcsi.2285.

Annotation:
Machine learning requires a human description of the data, and manual dataset description is very time-consuming. This article examines how a model learns from artificially created images with as little human involvement in describing the data as possible. The model was trained on artificially produced images with augmentations and a progressively increasing image size. It achieved up to 3.35 points higher mean average precision on the synthetic dataset when trained with increasing image resolution. Augmentations improved the quality of detection on real photographs. Producing artificially generated training data greatly accelerates the preparation of training, because it does not require as many human resources as a normal learning process.
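
The two training ideas the abstract credits, augmentation and progressively increasing image size, can be sketched with torchvision's detection API as below. The schedule, the noise augmentation, the placeholder images, and the class count are our assumptions, not the authors' configuration.

```python
# Minimal sketch: train a detector on synthetic images whose resolution grows
# per stage, with a simple noise augmentation. Data and schedule are placeholders.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

for size in (320, 480, 640):                      # progressive image size
    image = torch.rand(3, size, size)             # synthetic placeholder image
    image = (image + 0.1 * torch.randn_like(image)).clamp(0, 1)  # augmentation
    target = {"boxes": torch.tensor([[16.0, 16.0, size / 2, size / 2]]),
              "labels": torch.tensor([1])}        # one dummy annotation
    loss = sum(model([image], [target]).values()) # train mode returns loss dict
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
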
13. Zhang, Jun, Jimin Liang, and Haihong Hu. "Multi-view texture classification using hierarchical synthetic images". Multimedia Tools and Applications 76, no. 16 (December 9, 2016): 17511–23. http://dx.doi.org/10.1007/s11042-016-4231-3.

14. Scott, J. H., and N. Ritchie. "Measuring Pixel Classification Accuracy Using Synthetic Spectrum Images". Microscopy and Microanalysis 12, S02 (July 31, 2006): 1394–95. http://dx.doi.org/10.1017/s1431927606069480.

15. Stutman, Dan, Maria Pia Valdivia, and Michael Finkenthal. "X-ray Moiré deflectometry using synthetic reference images". Applied Optics 54, no. 19 (June 25, 2015): 5956. http://dx.doi.org/10.1364/ao.54.005956.

16. Bell, J. M., and L. M. Linnett. "Simulation and analysis of synthetic sidescan sonar images". IEE Proceedings - Radar, Sonar and Navigation 144, no. 4 (1997): 219. http://dx.doi.org/10.1049/ip-rsn:19971311.

17. Zuo, Jinyu, Natalia A. Schmid, and Xiaohan Chen. "On Generation and Analysis of Synthetic Iris Images". IEEE Transactions on Information Forensics and Security 2, no. 1 (March 2007): 77–90. http://dx.doi.org/10.1109/tifs.2006.890305.

18. Hayt, D. W., W. Alpers, C. Brüning, R. DeWitt, F. Henyey, D. P. Kasilingam, W. C. Keller et al. "Focusing simulations of synthetic aperture radar ocean images". Journal of Geophysical Research 95, no. C9 (1990): 16245. http://dx.doi.org/10.1029/jc095ic09p16245.

19. Karimi, Koohyar, Ali Sepehr, Zlatko Devcic, and Brian J. Wong. "R007: Morphometric Analysis of Synthetic Lateral Facial Images". Otolaryngology–Head and Neck Surgery 137, no. 2_suppl (August 2007): P150. http://dx.doi.org/10.1016/j.otohns.2007.06.341.

20. Dudley, Christopher, and Philip L. Marston. "Bistatic synthetic aperture sonar images of penetrable cylinders". Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2608. http://dx.doi.org/10.1121/1.4783934.

21. Parker, J. M., and Kok-Meng Lee. "Physically-Accurate Synthetic Images for Machine Vision Design". Journal of Manufacturing Science and Engineering 121, no. 4 (November 1, 1999): 763–70. http://dx.doi.org/10.1115/1.2833139.

Annotation:
In machine vision applications, accuracy of the image far outweighs image appearance. This paper presents physically-accurate image synthesis as a flexible, practical tool for examining a large number of hardware/software configuration combinations for a wide range of parts. Synthetic images can efficiently be used to study the effects of vision system design parameters on image accuracy, providing insight into the accuracy and efficiency of image-processing algorithms in determining part location and orientation for specific applications, as well as reducing the number of hardware prototype configurations to be built and evaluated. We present results illustrating that physically accurate, rather than photo-realistic, synthesis methods are necessary to sufficiently simulate captured image gray-scale values. The usefulness of physically-accurate synthetic images in evaluating the effect of conditions in the manufacturing environment on captured images is also investigated. The prevalent factors investigated in this study are the effects of illumination, sensor non-linearity, and the finite-size pinhole on the captured image in retroreflective vision sensing and, therefore, on camera calibration; if not fully understood, these effects can introduce apparent error in calibration results. While synthetic images cannot fully compensate for the real environment, they can be efficiently used to study the effects of ambient lighting and other important parameters, such as true part and environment reflectance, on image accuracy. We conclude with an evaluation of results and recommendations for improving the accuracy of the synthesis methodology.
22. Totsky, A. V., and B. F. Gorbunenko. "Statistical investigations of the synthetic aperture radar images". International Journal of Remote Sensing 15, no. 9 (June 1994): 1761–74. http://dx.doi.org/10.1080/01431169408954207.

23. Suto, Y., M. Kamba, and Y. Ohta. "Synthetic color images for contrast-enhanced MR imaging". American Journal of Roentgenology 163, no. 6 (December 1994): 1531. http://dx.doi.org/10.2214/ajr.163.6.7992770.

24. Yang, Bowen, Ji Liu, and Xiaosheng Liang. "Object segmentation using FCNs trained on synthetic images". Journal of Intelligent & Fuzzy Systems 35, no. 3 (October 1, 2018): 3233–42. http://dx.doi.org/10.3233/jifs-171675.

25. Song, Liangchen, Yonghao Xu, Lefei Zhang, Bo Du, Qian Zhang, and Xinggang Wang. "Learning From Synthetic Images via Active Pseudo-Labeling". IEEE Transactions on Image Processing 29 (2020): 6452–65. http://dx.doi.org/10.1109/tip.2020.2989100.

26. Suto, Yuji, Biray E. Caner, Yoichi Tamagawa, Tsuyoshi Matsuda, Issyu Kimura, Hirohiko Kimura, Takashi Toyama, and Yasushi Ishii. "Subtracted Synthetic Images in Gd-DTPA Enhanced MR". Journal of Computer Assisted Tomography 13, no. 5 (September 1989): 925–28. http://dx.doi.org/10.1097/00004728-198909000-00038.

27. Gao, Yuan, Mohammad Tayeb Al Qaseer, and Reza Zoughi. "Complex Permittivity Extraction From Synthetic Aperture Radar Images". IEEE Transactions on Instrumentation and Measurement 69, no. 7 (July 2020): 4919–29. http://dx.doi.org/10.1109/tim.2019.2952479.

28. Fante, R. L. "Turbulence-induced distortion of synthetic aperture radar images". IEEE Transactions on Geoscience and Remote Sensing 32, no. 4 (July 1994): 958–61. http://dx.doi.org/10.1109/36.298027.

29. Ulander, L. M. H. "Radiometric slope correction of synthetic-aperture radar images". IEEE Transactions on Geoscience and Remote Sensing 34, no. 5 (1996): 1115–22. http://dx.doi.org/10.1109/36.536527.

30. Sodagar, I., Hung-Ju Lee, P. Hatrack, and Ya-Qin Zhang. "Scalable wavelet coding for synthetic/natural hybrid images". IEEE Transactions on Circuits and Systems for Video Technology 9, no. 2 (March 1999): 244–54. http://dx.doi.org/10.1109/76.752092.

31. Jung, Sukwoo, Seunghyun Song, Minho Chang, and Sangchul Park. "Range image registration based on 2D synthetic images". Computer-Aided Design 94 (January 2018): 16–27. http://dx.doi.org/10.1016/j.cad.2017.08.001.

32. Boulanger, Pierre, André Gagalowicz, and Marc Rioux. "Integration of synthetic surface relief in range images". Computer Vision, Graphics, and Image Processing 47, no. 1 (July 1989): 129. http://dx.doi.org/10.1016/0734-189x(89)90063-7.

33. Boulanger, Pierre, André Gagalowicz, and Marc Rioux. "Integration of synthetic surface relief in range images". Computer Vision, Graphics, and Image Processing 47, no. 3 (September 1989): 361–72. http://dx.doi.org/10.1016/0734-189x(89)90118-7.

34. Shi, Xianglian, Bo Ou, and Zheng Qin. "Tailoring reversible data hiding for 3D synthetic images". Signal Processing: Image Communication 64 (May 2018): 46–58. http://dx.doi.org/10.1016/j.image.2018.02.012.

35. Lee, Seung Hyun, Young Han Lee, Seok Hahn, Jaemoon Yang, Ho-Taek Song, and Jin-Suck Suh. "Optimization of T2-weighted imaging for shoulder magnetic resonance arthrography by synthetic magnetic resonance imaging". Acta Radiologica 59, no. 8 (November 14, 2017): 959–65. http://dx.doi.org/10.1177/0284185117740761.

Annotation:
Background: Synthetic magnetic resonance imaging (MRI) allows reformatting of various synthetic images by adjustment of scanning parameters such as repetition time (TR) and echo time (TE). Optimized MR images can be reformatted from T1, T2, and proton density (PD) values to achieve maximum tissue contrast between joint fluid and adjacent soft tissue. Purpose: To demonstrate the method for optimization of TR and TE by synthetic MRI and to validate the optimized images by comparison with conventional shoulder MR arthrography (MRA) images. Material and Methods: Thirty-seven shoulder MRA images acquired by synthetic MRI were retrospectively evaluated for PD, T1, and T2 values at the joint fluid and glenoid labrum. Differences in signal intensity between the fluid and labrum were observed between TR of 500–6000 ms and TE of 80–300 ms in T2-weighted (T2W) images. Conventional T2W and synthetic images were analyzed for diagnostic agreement of supraspinatus tendon abnormalities (kappa statistics) and image quality scores (one-way analysis of variance with post-hoc analysis). Results: Optimized mean values of TR and TE were 2724.7 ± 1634.7 ms and 80.1 ± 0.4 ms, respectively. Diagnostic agreement for supraspinatus tendon abnormalities between conventional and synthetic MR images was excellent (κ = 0.882). The mean image quality score of the joint space in optimized synthetic images was significantly higher than in conventional and non-optimized synthetic images (2.861 ± 0.351 vs. 2.556 ± 0.607 vs. 2.750 ± 0.439; P < 0.05). Conclusion: Synthetic MRI with optimized TR and TE for shoulder MRA enables optimization of soft-tissue contrast.
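
The reformatting step described here can be illustrated with the standard spin-echo signal model, S = PD · (1 − e^(−TR/T1)) · e^(−TE/T2), evaluated over the TR/TE ranges quoted in the abstract. The relaxation values for fluid and labrum below are rough placeholders, not the study's measurements.

```python
# Sketch of synthetic-MRI contrast optimization under the standard spin-echo
# signal model. Tissue relaxation values are illustrative placeholders.
import numpy as np

def signal(pd, t1, t2, tr, te):
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

fluid = dict(pd=1.0, t1=3000.0, t2=800.0)    # ms, placeholder joint fluid
labrum = dict(pd=0.7, t1=1000.0, t2=30.0)    # ms, placeholder labrum

trs = np.arange(500, 6001, 50)               # search ranges from the abstract
tes = np.arange(80, 301, 5)
TR, TE = np.meshgrid(trs, tes, indexing="ij")

# Grid search for the TR/TE pair maximizing fluid-labrum contrast.
contrast = signal(**fluid, tr=TR, te=TE) - signal(**labrum, tr=TR, te=TE)
i, j = np.unravel_index(np.argmax(contrast), contrast.shape)
print(f"max fluid-labrum contrast at TR={trs[i]} ms, TE={tes[j]} ms")
```
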
36. Liu, Yutong, Jingyuan Yang, Yang Zhou, Weisen Wang, Jianchun Zhao, Weihong Yu, Dingding Zhang, Dayong Ding, Xirong Li, and Youxin Chen. "Prediction of OCT images of short-term response to anti-VEGF treatment for neovascular age-related macular degeneration using generative adversarial network". British Journal of Ophthalmology 104, no. 12 (March 26, 2020): 1735–40. http://dx.doi.org/10.1136/bjophthalmol-2019-315338.

Annotation:
Background/aims: The aim of this study was to generate and evaluate individualised post-therapeutic optical coherence tomography (OCT) images that could predict the short-term response of antivascular endothelial growth factor therapy for typical neovascular age-related macular degeneration (nAMD) based on pretherapeutic images, using a generative adversarial network (GAN). Methods: A total of 476 pairs of pretherapeutic and post-therapeutic OCT images of patients with nAMD were included in the training set, while 50 pretherapeutic OCT images were retrospectively included in the test set; their corresponding post-therapeutic OCT images were used to evaluate the synthetic images. The pix2pixHD method was adopted for image synthesis. Three experiments were performed by retinal specialists to evaluate the quality, authenticity, and predictive power of the synthetic images. Results: We found that 92% of the synthetic OCT images had sufficient quality for further clinical interpretation. Only about 26%–30% of the synthetic post-therapeutic images could be accurately identified as synthetic. The accuracy in predicting a wet or dry macular status was 0.85 (95% CI 0.74 to 0.95). Conclusion: Our results revealed a great potential of GANs to generate post-therapeutic OCT images with both good quality and high accuracy.
37. Esmaeilzade, M., J. Amini, and S. Zakeri. "Georeferencing on Synthetic Aperture Radar Imagery". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (December 11, 2015): 179–84. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-179-2015.

Annotation:
Due to the SAR (synthetic aperture radar) imaging geometry, SAR images contain geometric distortions that corrupt the image information, so the images should be geometrically calibrated. As radar systems are side-looking, geometric distortions such as shadow, foreshortening, and layover occur. To compensate for these geometric distortions, information about the sensor position, the imaging geometry, and the target altitude above the ellipsoid must be available. In this paper, a method for geometric calibration of SAR images is proposed. The method uses the Range-Doppler equations. For the image georeferencing, the SRTM DEM (digital elevation model) with 30 m pixel size is used, and exact ephemeris data of the sensor are required. In the proposed algorithm, the digital elevation model is first transformed into the range and azimuth directions; this removes errors caused by topography, such as foreshortening and layover, in the transferred DEM. Then, the positions of the corners of the original image are found based on the transferred DEM, and the original image is registered to the transferred DEM by an 8-parameter projective transformation. The output is a georeferenced image whose geometric distortions are removed. The advantage of the described method is that it requires neither control points nor the attitude and rotational parameters of the sensor. Since the ground range resolution of the images used is about 30 m, images geocoded with this method have an accuracy of about 20 m (subpixel) in planimetry and about 30 m in altimetry.
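
For reference, the Range-Doppler relations such a method solves can be written compactly as below; the notation (sensor position S, ground target P, sensor velocity V_S, wavelength λ) is our own and is not taken from the paper.

```latex
% Range equation and Doppler equation used in Range-Doppler georeferencing;
% notation and sign convention are our assumptions.
\begin{align}
  R   &= \lVert \mathbf{S} - \mathbf{P} \rVert, \\
  f_D &= \frac{2\,\mathbf{V}_S \cdot (\mathbf{S} - \mathbf{P})}
              {\lambda\,\lVert \mathbf{S} - \mathbf{P} \rVert}.
\end{align}
```
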
38. Li, Yanling, Adams Wai-Kin Kong, and Steven Thng. "Segmenting Vitiligo on Clinical Face Images Using CNN Trained on Synthetic and Internet Images". IEEE Journal of Biomedical and Health Informatics 25, no. 8 (August 2021): 3082–93. http://dx.doi.org/10.1109/jbhi.2021.3055213.

39. Dirvanauskas, Maskeliūnas, Raudonis, Damaševičius, and Scherer. "HEMIGEN: Human Embryo Image Generator Based on Generative Adversarial Networks". Sensors 19, no. 16 (August 16, 2019): 3578. http://dx.doi.org/10.3390/s19163578.

Annotation:
We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, thus resulting in new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic image of a cell, but to make sure that it has all the necessary attributes of a real cell image, providing a fully realistic synthetic version. We use human embryo images obtained during cell development processes to train a deep neural network (DNN). The proposed algorithm uses a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while expert evaluation showed a true recognition rate (TRR) of 80.0% (four-cell images), 86.8% (two-cell images), and 96.2% (one-cell images). Texture-based comparison using the Haralick features showed no statistically significant differences (Student's t-test, p < 0.01) between the real and synthetic embryo images, except for the sum-of-variance feature (one-cell and four-cell images) and the variance and sum-of-average features (two-cell images). The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.
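
The texture comparison named in this abstract, Haralick-style co-occurrence features tested with a Student's t-test, might look like the following sketch; the GLCM settings, the choice of the contrast feature, and the random placeholder images are assumptions.

```python
# Sketch: compare a gray-level co-occurrence (Haralick-style) feature between
# real and synthetic image sets with a Student's t-test. Placeholder images.
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def glcm_feature(img, prop="contrast"):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, prop)[0, 0]

rng = np.random.default_rng(0)
real = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
synth = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]

t, p = stats.ttest_ind([glcm_feature(im) for im in real],
                       [glcm_feature(im) for im in synth])
print(f"contrast feature: t={t:.2f}, p={p:.3f}  (differ if p < 0.01)")
```
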
40. Gunár, S., J. Jurčák, and K. Ichimoto. "The influence of Hinode/SOT NFI instrumental effects on the visibility of simulated prominence fine structures in Hα". Astronomy & Astrophysics 629 (September 2019): A118. http://dx.doi.org/10.1051/0004-6361/201936147.

Annotation:
Context. Models of entire prominences with their numerous fine structures distributed within the prominence magnetic field use approximate radiative transfer techniques to visualize the simulated prominences. However, to accurately compare synthetic images of prominences obtained in this way with observations and to precisely analyze the visibility of even the faintest prominence features, it is important to take into account the influence of instrumental properties on the synthetic spectra and images. Aims. In the present work, we investigate how synthetic Hα images of simulated prominences are impacted by the instrumental effects induced by the Narrowband Filter Imager (NFI) of the Solar Optical Telescope (SOT) onboard the Hinode satellite. Methods. To process the synthetic Hα images provided by 3D Whole-Prominence Fine Structure (WPFS) models into SOT-like synthetic Hα images, we take into account the effects of the integration over the theoretical narrow-band transmission profile of NFI Lyot filter, the influence of the stray-light and point spread function (PSF) of Hinode/SOT, and the observed noise level. This allows us to compare the visibility of the prominence fine structures in the SOT-like synthetic Hα images with the synthetic Hα line-center images used by the 3D models and with a pair of Hinode/SOT NFI observations of quiescent prominences. Results. The comparison between the SOT-like synthetic Hα images and the synthetic Hα line-center images shows that all large and small-scale features are very similar in both visualizations and that the same very faint prominence fine structures can be discerned in both. This demonstrates that the computationally efficient Hα line-center visualization technique can be reliably used for the purpose of visualization of complex 3D prominence models. In addition, the qualitative comparison between the SOT-like synthetic images and prominence observations shows that the 3D WPFS models can reproduce large-scale prominence features rather well. However, the distribution of the prominence fine structures is significantly more diffuse in the observations than in the models and the diffuse intensity areas surrounding the observed prominences are also not present in the synthetic images. We also found that the maximum intensities reached in the models are about twice as high as those present in the observations–an indication that the mass-loading assumed in the present 3D WPFS models might be too large.
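
A minimal sketch of the forward instrumental chain the paper models (PSF blur, stray light, noise) is shown below; the Gaussian PSF width, stray-light fraction, and noise level are crude stand-ins for the actual Hinode/SOT NFI properties.

```python
# Sketch: degrade a synthetic H-alpha image with a PSF blur, stray light, and
# noise. All instrument parameters here are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
synthetic = rng.random((256, 256))             # placeholder H-alpha intensity

blurred = gaussian_filter(synthetic, sigma=2.0)        # Gaussian PSF stand-in
stray = 0.1                                            # stray-light fraction
degraded = (1 - stray) * blurred + stray * blurred.mean()
observed = degraded + rng.normal(0.0, 0.01, degraded.shape)  # detector noise
```
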
41. Bureš, Lukáš, Ivan Gruber, Petr Neduchal, Miroslav Hlaváč, and Marek Hrúz. "Semantic Text Segmentation from Synthetic Images of Full-Text Documents". SPIIRAS Proceedings 18, no. 6 (December 2, 2019): 1381–406. http://dx.doi.org/10.15622/sp.2019.18.6.1381-1406.

Annotation:
An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR). The algorithm is modular; individual parts can be changed and tweaked to generate the desired images. A method for obtaining background images of paper from already digitized documents is described. For this, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model. These backgrounds enable the generation of background images similar to the training ones on the fly. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents). A few types of page layouts are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare the real-world images to the generated images. The recognition rate is very similar, indicating the proper appearance of the synthetic images; moreover, the errors made by the OCR system are very similar in both cases. From the generated images, a fully-convolutional encoder-decoder neural network architecture for semantic segmentation of individual characters was trained, reaching a recognition accuracy of 99.28% on a test set of synthetic documents.
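
A toy version of the text-printing module described above, with per-character positional and brightness noise rendered via Pillow, could look like this; the font, noise magnitudes, and flat background are simplifications of the generator's actual modules.

```python
# Toy sketch: render text character by character with position and brightness
# jitter on a paper-like background. All noise settings are illustrative.
import random
from PIL import Image, ImageDraw

random.seed(0)
page = Image.new("L", (400, 120), color=235)   # stand-in paper background
draw = ImageDraw.Draw(page)

x, y = 10, 50
for ch in "synthetic full-text documents":
    dx, dy = random.randint(-1, 1), random.randint(-1, 1)   # positional noise
    shade = random.randint(20, 80)                          # brightness noise
    draw.text((x + dx, y + dy), ch, fill=shade)
    x += draw.textlength(ch)                                # advance the pen
page.save("synthetic_line.png")
```
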
42. Li, Yafen, Wen Li, Jing Xiong, Jun Xia, and Yaoqin Xie. "Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images". BioMed Research International 2020 (November 5, 2020): 1–9. http://dx.doi.org/10.1155/2020/5193707.

Annotation:
Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart-modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), typical networks of supervised and unsupervised deep learning methods respectively, to transform MR/CT images into their counterpart modality. Experimental results show that synthetic images predicted by the proposed U-Net method achieved lower mean absolute error (MAE) and higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although synthetic images produced by the U-Net method have less contrast information than those produced by the CycleGAN method, the pixel-value profile tendency of the U-Net images is closer to the ground-truth images. This work demonstrates that the supervised deep learning method outperforms the unsupervised one in accuracy for medical MR/CT synthesis tasks.
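
The evaluation protocol named here (MAE, SSIM, and PSNR against ground truth) is easy to reproduce with scikit-image; the arrays below are random placeholders standing in for synthetic and reference images.

```python
# Sketch: compute MAE, SSIM, and PSNR between a synthetic image and its
# ground-truth counterpart. Placeholder arrays stand in for real data.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(2)
ground_truth = rng.random((128, 128))
synthetic = ground_truth + rng.normal(0.0, 0.05, ground_truth.shape)

mae = np.mean(np.abs(ground_truth - synthetic))
ssim = structural_similarity(ground_truth, synthetic, data_range=1.0)
psnr = peak_signal_noise_ratio(ground_truth, synthetic, data_range=1.0)
print(f"MAE={mae:.4f}  SSIM={ssim:.3f}  PSNR={psnr:.1f} dB")
```
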
43. Berenguel-Baeta, Bruno, Jesus Bermudez-Cameo, and Jose J. Guerrero. "OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision". Sensors 20, no. 7 (April 7, 2020): 2066. http://dx.doi.org/10.3390/s20072066.

Annotation:
Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from only an image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for the correct training of computer vision algorithms based on learning. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We gather a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are made virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested as line extractions from dioptric and catadioptric central images, 3D Layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
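
As one example of the projection models the tool gathers, an equirectangular panorama maps each pixel to a ray on the unit sphere; the sketch below assumes a particular image size and angle convention, which may differ from OmniSCV's.

```python
# Sketch of the equirectangular camera model: pixel grid -> ray directions on
# the unit sphere. Image size and angle conventions are our assumptions.
import numpy as np

H, W = 512, 1024
v, u = np.mgrid[0:H, 0:W]
lon = (u + 0.5) / W * 2.0 * np.pi - np.pi          # longitude per column
lat = np.pi / 2.0 - (v + 0.5) / H * np.pi          # latitude per row

rays = np.stack([np.cos(lat) * np.sin(lon),        # x
                 np.sin(lat),                      # y (up)
                 np.cos(lat) * np.cos(lon)], -1)   # z (forward), shape (H, W, 3)
```
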
44. Conti, Luis Américo, and Murilo Baptista. "Synthetic Aperture Sonar Images Segmentation Using Dynamical Modeling Analysis". Revista Brasileira de Geofísica 31, no. 3 (September 1, 2013): 455. http://dx.doi.org/10.22564/rbgf.v31i3.315.

Annotation:
Symbolic models applied to Synthetic Aperture Sonar images are proposed in order to assess the validity and reliability of such models and to evaluate how effective they can be for image classification and segmentation. We developed an approach for the description of sonar images in which the pixel distribution can be transformed into points in a symbolic space, in a similar way as a symbolic space can encode the trajectory of a dynamical system. One of the main characteristics of the approach is that points in the symbolic space are mapped respecting dynamical rules; as a consequence, it is possible to calculate quantities that characterize the dynamical system, such as the fractal dimension (D), the Shannon entropy (H), and the amount of information in the image. The approach also showed potential to classify image sub-patterns based on the textural characteristics of the seabed. The proposed method reached a reasonable degree of success, with results compatible with the classical techniques described in the literature. Keywords: Synthetic Aperture Sonar, image processing, dynamical models, fractal, seabed segmentation.
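
The two dynamical quantities the abstract computes, Shannon entropy and fractal dimension, can be estimated for an image in a few lines of NumPy; the histogram-entropy and box-counting estimators below are generic textbook versions, not the authors' symbolic-space formulation.

```python
# Sketch: Shannon entropy of the gray-level histogram and a box-counting
# fractal-dimension estimate for a binarized image. Random placeholder image.
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, (256, 256))

p = np.bincount(img.ravel(), minlength=256) / img.size
H = -np.sum(p[p > 0] * np.log2(p[p > 0]))          # Shannon entropy (bits)

binary = img > 128
sizes, counts = [], []
for s in (2, 4, 8, 16, 32):                        # box sizes (divide 256)
    view = binary.reshape(256 // s, s, 256 // s, s).any(axis=(1, 3))
    sizes.append(s)
    counts.append(view.sum())                      # occupied boxes at size s
D = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]  # box-counting slope
print(f"entropy={H:.2f} bits, fractal dimension~{D:.2f}")
```
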
45. Pravalika, V. B., S. Nageswararoa, and B. Seetharamulu. "A Survey on Despeckling of Synthetic Aperture Radar Images". International Journal of Computer Sciences and Engineering 7, no. 4 (April 30, 2019): 608–12. http://dx.doi.org/10.26438/ijcse/v7i4.608612.

46. Sekar, M. Raja. "Classification of Synthetic Aperture Radar Images using Fuzzy SVMs". International Journal for Research in Applied Science and Engineering Technology V, no. VIII (August 26, 2017): 289–96. http://dx.doi.org/10.22214/ijraset.2017.8040.

47. Bonaldi, Lorenza, Elisa Menti, Lucia Ballerini, Alfredo Ruggeri, and Emanuele Trucco. "Automatic Generation of Synthetic Retinal Fundus Images: Vascular Network". Procedia Computer Science 90 (2016): 54–60. http://dx.doi.org/10.1016/j.procs.2016.07.010.

48. Rajaram, Satwik, Benjamin Pavie, Nicholas E. F. Hac, Steven J. Altschuler, and Lani F. Wu. "SimuCell: a flexible framework for creating synthetic microscopy images". Nature Methods 9, no. 7 (June 28, 2012): 634–35. http://dx.doi.org/10.1038/nmeth.2096.

49. Liu, Antony K., Yunhe Zhao, and Ming-Kuang Hsu. "Ocean surface drift revealed by synthetic aperture radar images". Eos, Transactions American Geophysical Union 87, no. 24 (2006): 233. http://dx.doi.org/10.1029/2006eo240002.

50. Plotnick, Daniel S., and Timothy M. Marston. "Utilization of Aspect Angle Information in Synthetic Aperture Images". IEEE Transactions on Geoscience and Remote Sensing 56, no. 9 (September 2018): 5424–32. http://dx.doi.org/10.1109/tgrs.2018.2816462.