To see the other types of publications on this topic, follow the link: Synthetic images of curtaining.

Journal articles on the topic 'Synthetic images of curtaining'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Synthetic images of curtaining.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Needham, Rodney. "Synthetic images." HAU: Journal of Ethnographic Theory 4, no. 1 (June 2014): 549–64. http://dx.doi.org/10.14318/hau4.1.039.

2

Silva, Gilberto P., Alejandro C. Frery, Sandra Sandri, Humberto Bustince, Edurne Barrenechea, and Cédric Marco-Detchart. "Optical images-based edge detection in Synthetic Aperture Radar images." Knowledge-Based Systems 87 (October 2015): 38–46. http://dx.doi.org/10.1016/j.knosys.2015.07.030.

3

Montserrat, Daniel Mas, Qian Lin, Jan Allebach, and Edward J. Delp. "Logo detection and recognition with synthetic images." Electronic Imaging 2018, no. 10 (January 28, 2018): 337–1. http://dx.doi.org/10.2352/issn.2470-1173.2018.10.imawm-337.

4

Brasher, J. D., and Mark Woodson. "Composite training images for synthetic discriminant functions." Applied Optics 35, no. 2 (January 10, 1996): 314. http://dx.doi.org/10.1364/ao.35.000314.

5

Sola, Ion, Maria Gonzalez-Audicana, Jesus Alvarez-Mozos, and Jose Luis Torres. "Synthetic Images for Evaluating Topographic Correction Algorithms." IEEE Transactions on Geoscience and Remote Sensing 52, no. 3 (March 2014): 1799–810. http://dx.doi.org/10.1109/tgrs.2013.2255296.

6

Sæbø, Torstein Olsmo, Roy E. Hansen, and Hayden J. Callow. "Multifrequency interferometry on synthetic aperture sonar images." Journal of the Acoustical Society of America 123, no. 5 (May 2008): 3898. http://dx.doi.org/10.1121/1.2935867.

7

Denison, Kenneth, G. Neil Holland, and Gordon D. DeMeester. "4881033 Noise-reduced synthetic T2 weighted images." Magnetic Resonance Imaging 9, no. 3 (January 1991): II. http://dx.doi.org/10.1016/0730-725x(91)90442-o.

8

Li, Y., V. L. Newhouse, P. M. Shankar, and P. Karpur. "Speckle reduction in ultrasonic synthetic aperture images." Ultrasonics 30, no. 4 (January 1992): 233–37. http://dx.doi.org/10.1016/0041-624x(92)90082-w.

9

Ivanov, Andrei Yu, and Anna I. Ginzburg. "Oceanic eddies in synthetic aperture radar images." Journal of Earth System Science 111, no. 3 (September 2002): 281–95. http://dx.doi.org/10.1007/bf02701974.

10

Sychra, J. J., P. A. Bandettini, N. Bhattacharya, and Q. Lin. "Synthetic images by subspace transforms I. Principal components images and related filters." Medical Physics 21, no. 2 (February 1994): 193–201. http://dx.doi.org/10.1118/1.597374.

11

Ryu, Kyeong Hwa, Hye Jin Baek, Sung-Min Gho, Kanghyun Ryu, Dong-Hyun Kim, Sung Eun Park, Ji Young Ha, Soo Buem Cho, and Joon Sung Lee. "Validation of Deep Learning-Based Artifact Correction on Synthetic FLAIR Images in a Different Scanning Environment." Journal of Clinical Medicine 9, no. 2 (January 29, 2020): 364. http://dx.doi.org/10.3390/jcm9020364.

Abstract:
We investigated the capability of a trained deep learning (DL) model with a convolutional neural network (CNN) in a different scanning environment in terms of ameliorating the quality of synthetic fluid-attenuated inversion recovery (FLAIR) images. The acquired data of 319 patients obtained from a retrospective review were used as test sets for the already trained DL model to correct the synthetic FLAIR images. Quantitative analyses were performed for native synthetic FLAIR and DL-FLAIR images against conventional FLAIR images. Two neuroradiologists assessed the quality and artifact degree of the native synthetic FLAIR and DL-FLAIR images. The quantitative parameters showed significant improvement on DL-FLAIR in all individual tissue segments and total intracranial tissues compared with the native synthetic FLAIR (p < 0.0001). DL-FLAIR images showed improved image quality with fewer artifacts than the native synthetic FLAIR images (p < 0.0001). There was no significant difference in the preservation of the periventricular white matter hyperintensities and lesion conspicuity between the two FLAIR image sets (p = 0.217). The quality of synthetic FLAIR images was improved through artifact correction using the trained DL model in a different scanning environment. DL-based correction can be a promising solution for ameliorating the quality of synthetic FLAIR images to broaden the clinical use of synthetic magnetic resonance imaging (MRI).
12

Łach, Błażej, and Edyta Łukasik. "Faster R-CNN model learning on synthetic images." Journal of Computer Sciences Institute 17 (December 30, 2020): 401–4. http://dx.doi.org/10.35784/jcsi.2285.

Abstract:
Machine learning requires a human description of the data, and manually describing a dataset is very time consuming. This article examines how a model learns from artificially created images, with the least possible human participation in describing the data. We checked how the model learned on artificially produced images with augmentations and progressive image sizes. The model achieved up to 3.35 higher mean average precision on the synthetic dataset when trained with increasing image resolution. Augmentations improved the quality of detection on real photos. Producing artificially generated training data greatly accelerates the preparation of training, because it does not require as many human resources as the normal learning process.
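As a rough illustration of the training recipe described above, the sketch below fine-tunes a torchvision Faster R-CNN on stand-in synthetic data with a simple brightness augmentation and a progressive image-size schedule. The dummy dataset, the two-class setup, and the (256, 384, 512) schedule are illustrative assumptions, not the article's actual data or settings.

```python
# Minimal sketch: Faster R-CNN trained on synthetic stand-in data with
# augmentation and progressively increasing image resolution (assumed schedule).
import torch
from torch.utils.data import DataLoader, Dataset
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class DummySyntheticDataset(Dataset):
    """Random images with one random box each, standing in for generated data."""
    def __init__(self, size, n=8):
        self.size, self.n = size, n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        img = torch.rand(3, self.size, self.size)
        img = (img * (0.8 + 0.4 * torch.rand(1))).clamp(0, 1)  # brightness augmentation
        x0 = int(torch.randint(0, self.size - 41, (1,)))
        y0 = int(torch.randint(0, self.size - 41, (1,)))
        target = {"boxes": torch.tensor([[x0, y0, x0 + 40, y0 + 40]], dtype=torch.float32),
                  "labels": torch.tensor([1])}
        return img, target

model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)  # one class + background
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

for size in (256, 384, 512):  # progressive image resolution
    loader = DataLoader(DummySyntheticDataset(size), batch_size=2,
                        collate_fn=lambda b: tuple(zip(*b)))
    for images, targets in loader:
        loss = sum(model(list(images), list(targets)).values())  # summed detection losses
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```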
13

Zhang, Jun, Jimin Liang, and Haihong Hu. "Multi-view texture classification using hierarchical synthetic images." Multimedia Tools and Applications 76, no. 16 (December 9, 2016): 17511–23. http://dx.doi.org/10.1007/s11042-016-4231-3.

14

Scott, J. H., and N. Ritchie. "Measuring Pixel Classification Accuracy Using Synthetic Spectrum Images." Microscopy and Microanalysis 12, S02 (July 31, 2006): 1394–95. http://dx.doi.org/10.1017/s1431927606069480.

15

Stutman, Dan, Maria Pia Valdivia, and Michael Finkenthal. "X-ray Moiré deflectometry using synthetic reference images." Applied Optics 54, no. 19 (June 25, 2015): 5956. http://dx.doi.org/10.1364/ao.54.005956.

16

Bell, J. M., and L. M. Linnett. "Simulation and analysis of synthetic sidescan sonar images." IEE Proceedings - Radar, Sonar and Navigation 144, no. 4 (1997): 219. http://dx.doi.org/10.1049/ip-rsn:19971311.

17

Zuo, Jinyu, Natalia A. Schmid, and Xiaohan Chen. "On Generation and Analysis of Synthetic Iris Images." IEEE Transactions on Information Forensics and Security 2, no. 1 (March 2007): 77–90. http://dx.doi.org/10.1109/tifs.2006.890305.

18

Hayt, D. W., W. Alpers, C. Brüning, R. DeWitt, F. Henyey, D. P. Kasilingam, W. C. Keller, et al. "Focusing simulations of synthetic aperture radar ocean images." Journal of Geophysical Research 95, no. C9 (1990): 16245. http://dx.doi.org/10.1029/jc095ic09p16245.

19

Karimi, Koohyar, Ali Sepehr, Zlatko Devcic, and Brian J. Wong. "R007: Morphometric Analysis of Synthetic Lateral Facial Images." Otolaryngology–Head and Neck Surgery 137, no. 2_suppl (August 2007): P150. http://dx.doi.org/10.1016/j.otohns.2007.06.341.

20

Dudley, Christopher, and Philip L. Marston. "Bistatic synthetic aperture sonar images of penetrable cylinders." Journal of the Acoustical Society of America 125, no. 4 (April 2009): 2608. http://dx.doi.org/10.1121/1.4783934.

21

Parker, J. M., and Kok-Meng Lee. "Physically-Accurate Synthetic Images for Machine Vision Design." Journal of Manufacturing Science and Engineering 121, no. 4 (November 1, 1999): 763–70. http://dx.doi.org/10.1115/1.2833139.

Abstract:
In machine vision applications, accuracy of the image far outweighs image appearance. This paper presents physically-accurate image synthesis as a flexible, practical tool for examining a large number of hardware/software configuration combinations for a wide range of parts. Synthetic images can efficiently be used to study the effects of vision system design parameters on image accuracy, providing insight into the accuracy and efficiency of image-processing algorithms in determining part location and orientation for specific applications, as well as reducing the number of hardware prototype configurations to be built and evaluated. We present results illustrating that physically accurate, rather than photo-realistic, synthesis methods are necessary to sufficiently simulate captured image gray-scale values. The usefulness of physically-accurate synthetic images in evaluating the effect of conditions in the manufacturing environment on captured images is also investigated. The prevalent factors investigated in this study are the effects of illumination, sensor non-linearity, and the finite-size pinhole on the captured image in retroreflective vision sensing and, therefore, on camera calibration; if not fully understood, these effects can introduce apparent error in calibration results. While synthetic images cannot fully compensate for the real environment, they can be efficiently used to study the effects of ambient lighting and other important parameters, such as true part and environment reflectance, on image accuracy. We conclude with an evaluation of results and recommendations for improving the accuracy of the synthesis methodology.
22

Totsky, A. V., and B. F. Gorbunenko. "Statistical investigations of the synthetic aperture radar images." International Journal of Remote Sensing 15, no. 9 (June 1994): 1761–74. http://dx.doi.org/10.1080/01431169408954207.

23

Suto, Y., M. Kamba, and Y. Ohta. "Synthetic color images for contrast-enhanced MR imaging." American Journal of Roentgenology 163, no. 6 (December 1994): 1531. http://dx.doi.org/10.2214/ajr.163.6.7992770.

24

Yang, Bowen, Ji Liu, and Xiaosheng Liang. "Object segmentation using FCNs trained on synthetic images." Journal of Intelligent & Fuzzy Systems 35, no. 3 (October 1, 2018): 3233–42. http://dx.doi.org/10.3233/jifs-171675.

25

Song, Liangchen, Yonghao Xu, Lefei Zhang, Bo Du, Qian Zhang, and Xinggang Wang. "Learning From Synthetic Images via Active Pseudo-Labeling." IEEE Transactions on Image Processing 29 (2020): 6452–65. http://dx.doi.org/10.1109/tip.2020.2989100.

26

Suto, Yuji, Biray E. Caner, Yoichi Tamagawa, Tsuyoshi Matsuda, Issyu Kimura, Hirohiko Kimura, Takashi Toyama, and Yasushi Ishii. "Subtracted Synthetic Images in Gd-DTPA Enhanced MR." Journal of Computer Assisted Tomography 13, no. 5 (September 1989): 925–28. http://dx.doi.org/10.1097/00004728-198909000-00038.

27

Gao, Yuan, Mohammad Tayeb Al Qaseer, and Reza Zoughi. "Complex Permittivity Extraction From Synthetic Aperture Radar Images." IEEE Transactions on Instrumentation and Measurement 69, no. 7 (July 2020): 4919–29. http://dx.doi.org/10.1109/tim.2019.2952479.

28

Fante, R. L. "Turbulence-induced distortion of synthetic aperture radar images." IEEE Transactions on Geoscience and Remote Sensing 32, no. 4 (July 1994): 958–61. http://dx.doi.org/10.1109/36.298027.

29

Ulander, L. M. H. "Radiometric slope correction of synthetic-aperture radar images." IEEE Transactions on Geoscience and Remote Sensing 34, no. 5 (1996): 1115–22. http://dx.doi.org/10.1109/36.536527.

30

Sodagar, I., Hung-Ju Lee, P. Hatrack, and Ya-Qin Zhang. "Scalable wavelet coding for synthetic/natural hybrid images." IEEE Transactions on Circuits and Systems for Video Technology 9, no. 2 (March 1999): 244–54. http://dx.doi.org/10.1109/76.752092.

31

Jung, Sukwoo, Seunghyun Song, Minho Chang, and Sangchul Park. "Range image registration based on 2D synthetic images." Computer-Aided Design 94 (January 2018): 16–27. http://dx.doi.org/10.1016/j.cad.2017.08.001.

32

Boulanger, Pierre, André Gagalowicz, and Marc Rioux. "Integration of synthetic surface relief in range images." Computer Vision, Graphics, and Image Processing 47, no. 1 (July 1989): 129. http://dx.doi.org/10.1016/0734-189x(89)90063-7.

33

Boulanger, Pierre, André Gagalowicz, and Marc Rioux. "Integration of synthetic surface relief in range images." Computer Vision, Graphics, and Image Processing 47, no. 3 (September 1989): 361–72. http://dx.doi.org/10.1016/0734-189x(89)90118-7.

34

Shi, Xianglian, Bo Ou, and Zheng Qin. "Tailoring reversible data hiding for 3D synthetic images." Signal Processing: Image Communication 64 (May 2018): 46–58. http://dx.doi.org/10.1016/j.image.2018.02.012.

35

Lee, Seung Hyun, Young Han Lee, Seok Hahn, Jaemoon Yang, Ho-Taek Song, and Jin-Suck Suh. "Optimization of T2-weighted imaging for shoulder magnetic resonance arthrography by synthetic magnetic resonance imaging." Acta Radiologica 59, no. 8 (November 14, 2017): 959–65. http://dx.doi.org/10.1177/0284185117740761.

Abstract:
Background: Synthetic magnetic resonance imaging (MRI) allows reformatting of various synthetic images by adjustment of scanning parameters such as repetition time (TR) and echo time (TE). Optimized MR images can be reformatted from T1, T2, and proton density (PD) values to achieve maximum tissue contrast between joint fluid and adjacent soft tissue. Purpose: To demonstrate a method for optimization of TR and TE by synthetic MRI and to validate the optimized images by comparison with conventional shoulder MR arthrography (MRA) images. Material and Methods: Thirty-seven shoulder MRA images acquired by synthetic MRI were retrospectively evaluated for PD, T1, and T2 values at the joint fluid and glenoid labrum. Differences in signal intensity between the fluid and labrum were examined over TR values of 500–6000 ms and TE values of 80–300 ms in T2-weighted (T2W) images. Conventional T2W and synthetic images were analyzed for diagnostic agreement on supraspinatus tendon abnormalities (kappa statistics) and image quality scores (one-way analysis of variance with post-hoc analysis). Results: Optimized mean values of TR and TE were 2724.7 ± 1634.7 ms and 80.1 ± 0.4 ms, respectively. Diagnostic agreement for supraspinatus tendon abnormalities between conventional and synthetic MR images was excellent (κ = 0.882). The mean image quality score of the joint space in optimized synthetic images was significantly higher than in conventional and non-optimized synthetic images (2.861 ± 0.351 vs. 2.556 ± 0.607 vs. 2.750 ± 0.439; P < 0.05). Conclusion: Synthetic MRI with optimized TR and TE for shoulder MRA enables optimization of soft-tissue contrast.
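The reformatting described above rests on the standard spin-echo signal model S = PD · (1 − exp(−TR/T1)) · exp(−TE/T2), so a T2W image can be synthesized for any TR/TE pair after a single quantitative acquisition. The sketch below searches the abstract's TR/TE ranges for the pair maximizing fluid-labrum contrast; the tissue PD/T1/T2 values are illustrative assumptions, not the paper's measurements.

```python
# Minimal sketch: synthesize spin-echo signal from PD/T1/T2 and pick the TR/TE
# pair that maximizes contrast between joint fluid and labrum (assumed values).
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Spin-echo model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

fluid = dict(pd=1.0, t1=3000.0, t2=1000.0)   # ms; assumed joint-fluid values
labrum = dict(pd=0.7, t1=1000.0, t2=20.0)    # ms; assumed labrum values

trs = np.arange(500.0, 6001.0, 100.0)        # TR range from the abstract (ms)
tes = np.arange(80.0, 301.0, 10.0)           # TE range from the abstract (ms)
TR, TE = np.meshgrid(trs, tes, indexing="ij")

contrast = (spin_echo_signal(**fluid, tr=TR, te=TE)
            - spin_echo_signal(**labrum, tr=TR, te=TE))
i, j = np.unravel_index(np.argmax(contrast), contrast.shape)
print(f"max fluid-labrum contrast at TR={TR[i, j]:.0f} ms, TE={TE[i, j]:.0f} ms")
```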
36

Liu, Yutong, Jingyuan Yang, Yang Zhou, Weisen Wang, Jianchun Zhao, Weihong Yu, Dingding Zhang, Dayong Ding, Xirong Li, and Youxin Chen. "Prediction of OCT images of short-term response to anti-VEGF treatment for neovascular age-related macular degeneration using generative adversarial network." British Journal of Ophthalmology 104, no. 12 (March 26, 2020): 1735–40. http://dx.doi.org/10.1136/bjophthalmol-2019-315338.

Abstract:
Background/aims: The aim of this study was to generate and evaluate individualised post-therapeutic optical coherence tomography (OCT) images that could predict the short-term response of antivascular endothelial growth factor therapy for typical neovascular age-related macular degeneration (nAMD) based on pretherapeutic images using a generative adversarial network (GAN). Methods: A total of 476 pairs of pretherapeutic and post-therapeutic OCT images of patients with nAMD were included in the training set, while 50 pretherapeutic OCT images were included in the test set retrospectively, and their corresponding post-therapeutic OCT images were used to evaluate the synthetic images. The pix2pixHD method was adopted for image synthesis. Three experiments were performed by retinal specialists to evaluate the quality, authenticity and predictive power of the synthetic images. Results: We found that 92% of the synthetic OCT images had sufficient quality for further clinical interpretation. Only about 26%–30% of synthetic post-therapeutic images could be accurately identified as synthetic images. The accuracy in predicting a macular status of wet or dry was 0.85 (95% CI 0.74 to 0.95). Conclusion: Our results revealed a great potential of GAN to generate post-therapeutic OCT images with both good quality and high accuracy.
37

Esmaeilzade, M., J. Amini, and S. Zakeri. "Georeferencing on Synthetic Aperture Radar Imagery." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XL-1-W5 (December 11, 2015): 179–84. http://dx.doi.org/10.5194/isprsarchives-xl-1-w5-179-2015.

Abstract:
Due to the SAR (Synthetic Aperture Radar) imaging geometry, SAR images contain geometric distortions that corrupt the image information, so the images should be geometrically calibrated. Because radar systems are side-looking, geometric distortions such as shadow, foreshortening, and layover occur. To compensate for these distortions, information about the sensor position, the imaging geometry, and the target altitude above the ellipsoid must be available. In this paper, a method for geometric calibration of SAR images is proposed. The method uses the Range-Doppler equations. For image georeferencing, the SRTM DEM (Digital Elevation Model) with 30 m pixel size is used, and exact ephemeris data of the sensor are also required. In the proposed algorithm, the digital elevation model is first transformed into the range and azimuth directions; this removes errors caused by topography, such as foreshortening and layover, in the transferred DEM. Then, the positions of the corners in the original image are found based on the transferred DEM. Next, the original image is registered to the transferred DEM by an 8-parameter projective transformation. The output is a georeferenced image whose geometric distortions are removed. The advantage of the method described in this article is that it requires neither control points nor the attitude and rotational parameters of the sensor. Since the ground range resolution of the images used is about 30 m, the images geocoded with this method have an accuracy of about 20 m (subpixel) in planimetry and about 30 m in altimetry.
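To make the registration step concrete, the sketch below fits an 8-parameter projective transformation from point correspondences by linear least squares; the correspondences are invented for illustration, and this is not the authors' implementation.

```python
# Minimal sketch: estimate x' = (a*x + b*y + c)/(g*x + h*y + 1),
#                          y' = (d*x + e*y + f)/(g*x + h*y + 1)
# from point correspondences, as in an 8-parameter projective registration.
import numpy as np

def fit_projective(src, dst):
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return params  # a, b, c, d, e, f, g, h

def apply_projective(p, pts):
    a, b, c, d, e, f, g, h = p
    return [((a * x + b * y + c) / (g * x + h * y + 1.0),
             (d * x + e * y + f) / (g * x + h * y + 1.0)) for x, y in pts]

# Invented corner (plus one interior) correspondences between image and DEM grids.
src = [(0, 0), (1000, 0), (1000, 800), (0, 800), (500, 400)]
dst = [(12, 7), (1015, 18), (1022, 812), (5, 805), (513, 409)]
p = fit_projective(src, dst)
print(apply_projective(p, [(500, 400)]))
```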
38

LI, Yanling, Adams Wai-Kin Kong, and Steven Thng. "Segmenting Vitiligo on Clinical Face Images Using CNN Trained on Synthetic and Internet Images." IEEE Journal of Biomedical and Health Informatics 25, no. 8 (August 2021): 3082–93. http://dx.doi.org/10.1109/jbhi.2021.3055213.

39

Dirvanauskas, Maskeliūnas, Raudonis, Damaševičius, and Scherer. "HEMIGEN: Human Embryo Image Generator Based on Generative Adversarial Networks." Sensors 19, no. 16 (August 16, 2019): 3578. http://dx.doi.org/10.3390/s19163578.

Abstract:
We propose a method for generating synthetic images of human embryo cells that could later be used for classification, analysis, and training, thus resulting in the creation of new synthetic image datasets for research areas lacking real-world data. Our focus was not only to generate a generic image of a cell as such, but to make sure that it has all the necessary attributes of a real cell image to provide a fully realistic synthetic version. We use human embryo images obtained during cell development processes for training a deep neural network (DNN). The proposed algorithm used a generative adversarial network (GAN) to generate one-, two-, and four-cell stage images. We achieved a misclassification rate of 12.3% for the generated images, while the expert evaluation showed a true recognition rate (TRR) of 80.00% (for four-cell images), 86.8% (for two-cell images), and 96.2% (for one-cell images). Texture-based comparison using the Haralick features showed that there are no statistically significant (Student's t-test, p < 0.01) differences between the real and synthetic embryo images, except for the sum of variance (for one-cell and four-cell images), and variance and sum of average (for two-cell images) features. The obtained synthetic images can later be adapted to facilitate the development, training, and evaluation of new algorithms for embryo image processing tasks.
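The texture comparison mentioned above can be sketched with scikit-image and SciPy: a GLCM-based (Haralick-style) feature is computed per image, then compared across the real and synthetic sets with Student's t-test. Random arrays stand in for the embryo images, and graycoprops covers only a subset of the Haralick set (it does not include sum of variance, for instance).

```python
# Minimal sketch: compare one GLCM texture feature between two image sets.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.stats import ttest_ind

def glcm_feature(img, prop="contrast"):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return graycoprops(glcm, prop)[0, 0]

rng = np.random.default_rng(0)
real = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
fake = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]

t, p = ttest_ind([glcm_feature(im) for im in real],
                 [glcm_feature(im) for im in fake])
print(f"contrast: t={t:.2f}, p={p:.3f} (significant if p < 0.01)")
```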
40

Gunár, S., J. Jurčák, and K. Ichimoto. "The influence of Hinode/SOT NFI instrumental effects on the visibility of simulated prominence fine structures in Hα." Astronomy & Astrophysics 629 (September 2019): A118. http://dx.doi.org/10.1051/0004-6361/201936147.

Abstract:
Context. Models of entire prominences with their numerous fine structures distributed within the prominence magnetic field use approximate radiative transfer techniques to visualize the simulated prominences. However, to accurately compare synthetic images of prominences obtained in this way with observations and to precisely analyze the visibility of even the faintest prominence features, it is important to take into account the influence of instrumental properties on the synthetic spectra and images. Aims. In the present work, we investigate how synthetic Hα images of simulated prominences are impacted by the instrumental effects induced by the Narrowband Filter Imager (NFI) of the Solar Optical Telescope (SOT) onboard the Hinode satellite. Methods. To process the synthetic Hα images provided by 3D Whole-Prominence Fine Structure (WPFS) models into SOT-like synthetic Hα images, we take into account the effects of the integration over the theoretical narrow-band transmission profile of the NFI Lyot filter, the influence of the stray light and point spread function (PSF) of Hinode/SOT, and the observed noise level. This allows us to compare the visibility of the prominence fine structures in the SOT-like synthetic Hα images with the synthetic Hα line-center images used by the 3D models and with a pair of Hinode/SOT NFI observations of quiescent prominences. Results. The comparison between the SOT-like synthetic Hα images and the synthetic Hα line-center images shows that all large- and small-scale features are very similar in both visualizations and that the same very faint prominence fine structures can be discerned in both. This demonstrates that the computationally efficient Hα line-center visualization technique can be reliably used for the purpose of visualization of complex 3D prominence models. In addition, the qualitative comparison between the SOT-like synthetic images and prominence observations shows that the 3D WPFS models can reproduce large-scale prominence features rather well. However, the distribution of the prominence fine structures is significantly more diffuse in the observations than in the models, and the diffuse intensity areas surrounding the observed prominences are also not present in the synthetic images. We also found that the maximum intensities reached in the models are about twice as high as those present in the observations, an indication that the mass loading assumed in the present 3D WPFS models might be too large.
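A minimal sketch of such an instrument-effects chain is given below, with a Gaussian standing in for the true Hinode/SOT PSF and illustrative stray-light and noise levels; the integration over the NFI transmission profile is omitted.

```python
# Minimal sketch: degrade a synthetic image by PSF blur, stray light, and noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
synthetic = rng.random((256, 256))          # stand-in for a synthetic H-alpha image

blurred = gaussian_filter(synthetic, sigma=2.0)        # Gaussian PSF approximation
stray = 0.9 * blurred + 0.1 * blurred.mean()           # 10% uniform stray-light floor
observed = stray + rng.normal(0.0, 0.01, stray.shape)  # additive detector noise
```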
41

Bureš, Lukáš, Ivan Gruber, Petr Neduchal, Miroslav Hlaváč, and Marek Hrúz. "Semantic Text Segmentation from Synthetic Images of Full-Text Documents." SPIIRAS Proceedings 18, no. 6 (December 2, 2019): 1381–406. http://dx.doi.org/10.15622/sp.2019.18.6.1381-1406.

Abstract:
An algorithm (divided into multiple modules) for generating images of full-text documents is presented. These images can be used to train, test, and evaluate models for Optical Character Recognition (OCR). The algorithm is modular; individual parts can be changed and tweaked to generate the desired images. A method for obtaining background images of paper from already digitized documents is described. For this, a novel approach based on a Variational AutoEncoder (VAE) was used to train a generative model. These backgrounds enable the generation of background images similar to the training ones on the fly. The module for printing the text uses large text corpora, a font, and suitable positional and brightness character noise to obtain believable results (for natural-looking aged documents). A few types of page layouts are supported. The system generates a detailed, structured annotation of the synthesized image. Tesseract OCR is used to compare the real-world images to the generated images. The recognition rate is very similar, indicating the proper appearance of the synthetic images. Moreover, the errors made by the OCR system in both cases are very similar. From the generated images, a fully-convolutional encoder-decoder neural network architecture for semantic segmentation of individual characters was trained. With this architecture, a recognition accuracy of 99.28% is reached on a test set of synthetic documents.
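The positional and brightness character noise described above can be sketched with Pillow; the page size, jitter ranges, and default bitmap font here are illustrative assumptions, not the authors' generator.

```python
# Minimal sketch: print text with per-character position and brightness jitter.
import random
from PIL import Image, ImageDraw

random.seed(0)
page = Image.new("L", (400, 60), color=235)   # stand-in for a generated background
draw = ImageDraw.Draw(page)
x = 10
for ch in "synthetic full-text document":
    dx, dy = random.randint(-1, 1), random.randint(-1, 1)  # positional noise
    shade = random.randint(0, 80)                          # brightness noise
    draw.text((x + dx, 25 + dy), ch, fill=shade)           # default bitmap font
    x += 9
page.save("synthetic_line.png")
```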
42

Li, Yafen, Wen Li, Jing Xiong, Jun Xia, and Yaoqin Xie. "Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images." BioMed Research International 2020 (November 5, 2020): 1–9. http://dx.doi.org/10.1155/2020/5193707.

Abstract:
Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), which are typical networks of supervised and unsupervised deep learning methods, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that synthetic images predicted by the proposed U-Net method achieved a lower mean absolute error (MAE) and a higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Though the synthetic images produced by the U-Net method have less contrast information than those produced by the CycleGAN method, the pixel value profile tendency of the U-Net synthetic images is closer to the ground truth images. This work demonstrates that the supervised deep learning method outperforms the unsupervised one in accuracy for medical tasks of MR/CT synthesis.
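For reference, the three reported metrics can be computed as in this minimal sketch, with random arrays standing in for a pseudo-CT slice and its ground truth:

```python
# Minimal sketch: MAE, SSIM, and PSNR between a synthetic slice and ground truth.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

rng = np.random.default_rng(1)
truth = rng.random((128, 128))                                        # real slice stand-in
pred = np.clip(truth + 0.05 * rng.standard_normal((128, 128)), 0, 1)  # pseudo slice

mae = np.mean(np.abs(pred - truth))
ssim = structural_similarity(truth, pred, data_range=1.0)
psnr = peak_signal_noise_ratio(truth, pred, data_range=1.0)
print(f"MAE={mae:.4f}  SSIM={ssim:.4f}  PSNR={psnr:.2f} dB")
```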
43

Berenguel-Baeta, Bruno, Jesus Bermudez-Cameo, and Jose J. Guerrero. "OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision." Sensors 20, no. 7 (April 7, 2020): 2066. http://dx.doi.org/10.3390/s20072066.

Abstract:
Omnidirectional and 360° images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows the gathering of a great amount of information about the environment from a single image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for the correct training of computer vision algorithms based on learning. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures that are acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We gather a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central-projection systems such as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are made virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of ground-truth information with pixel precision for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested, such as line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
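As a small illustration of the equirectangular model named above (an assumption for illustration, not OmniSCV's code), each pixel maps to a longitude/latitude pair and hence to a unit ray direction, which is why a single 360° image covers the full field of view:

```python
# Minimal sketch: equirectangular pixel -> unit ray direction.
import numpy as np

def equirect_ray(u, v, width, height):
    """Pixel (u, v): u spans longitude [-pi, pi), v spans latitude [pi/2, -pi/2]."""
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

print(equirect_ray(0, 512, 2048, 1024))  # middle row, left edge: looks along -x
```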
44

Conti, Luis Américo, and Murilo Baptista. "Synthetic Aperture Sonar Images Segmentation Using Dynamical Modeling Analysis." Revista Brasileira de Geofísica 31, no. 3 (September 1, 2013): 455. http://dx.doi.org/10.22564/rbgf.v31i3.315.

Abstract:
Symbolic models applied to Synthetic Aperture Sonar images are proposed in order to assess the validity and reliability of such models and to evaluate how effective they can be for image classification and segmentation. We developed an approach to the description of sonar images in which the pixel distribution is transformed into points in a symbolic space, much as a symbolic space can encode a trajectory of a dynamical system. One of the main characteristics of the approach is that points in the symbolic space are mapped according to dynamical rules; as a consequence, it is possible to calculate quantities that characterize the dynamical system, such as the fractal dimension (D), the Shannon entropy (H), and the amount of information in the image. The approach also showed potential to classify image sub-patterns based on the textural characteristics of the seabed, which allowed automatic segmentation of backscatter patterns indicating variations in the geology/geomorphology of the seabed. The proposed method reached a reasonable degree of success, with results compatible with the classical techniques described in the literature. Keywords: Synthetic Aperture Sonar, image processing, dynamical models, fractal, seabed segmentation.
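Of the quantities listed, the Shannon entropy H is the simplest to sketch: below it is estimated from the gray-level histogram, with uniform noise standing in for a sonar image (the paper's symbolic-space construction is more involved than this).

```python
# Minimal sketch: Shannon entropy H of an image from its gray-level histogram.
import numpy as np

def shannon_entropy(img, levels=256):
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0 * log 0 := 0
    return -np.sum(p * np.log2(p))    # bits per pixel

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in for a SAS image
print(f"H = {shannon_entropy(img):.2f} bits")           # ~8 bits for uniform noise
```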
45

Pravalika, V. B., S. Nageswararoa, and B. Seetharamulu. "A Survey on Despeckling of Synthetic Aperture Radar Images." International Journal of Computer Sciences and Engineering 7, no. 4 (April 30, 2019): 608–12. http://dx.doi.org/10.26438/ijcse/v7i4.608612.

46

Sekar, M. Raja. "Classification of Synthetic Aperture Radar Images using Fuzzy SVMs." International Journal for Research in Applied Science and Engineering Technology V, no. VIII (August 26, 2017): 289–96. http://dx.doi.org/10.22214/ijraset.2017.8040.

47

Bonaldi, Lorenza, Elisa Menti, Lucia Ballerini, Alfredo Ruggeri, and Emanuele Trucco. "Automatic Generation of Synthetic Retinal Fundus Images: Vascular Network." Procedia Computer Science 90 (2016): 54–60. http://dx.doi.org/10.1016/j.procs.2016.07.010.

48

Rajaram, Satwik, Benjamin Pavie, Nicholas E. F. Hac, Steven J. Altschuler, and Lani F. Wu. "SimuCell: a flexible framework for creating synthetic microscopy images." Nature Methods 9, no. 7 (June 28, 2012): 634–35. http://dx.doi.org/10.1038/nmeth.2096.

49

Liu, Antony K., Yunhe Zhao, and Ming-Kuang Hsu. "Ocean surface drift revealed by synthetic aperture radar images." Eos, Transactions American Geophysical Union 87, no. 24 (2006): 233. http://dx.doi.org/10.1029/2006eo240002.

50

Plotnick, Daniel S., and Timothy M. Marston. "Utilization of Aspect Angle Information in Synthetic Aperture Images." IEEE Transactions on Geoscience and Remote Sensing 56, no. 9 (September 2018): 5424–32. http://dx.doi.org/10.1109/tgrs.2018.2816462.
