
Journal articles on the topic "CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)"

Create accurate references in APA, MLA, Chicago, Harvard, and many other styles


Consult the top 50 scholarly journal articles on the topic "CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)".

An "Add to bibliography" button appears next to every work in the bibliography. Use it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a ".pdf" file and read its abstract online, whenever the relevant parameters are available in the metadata.

Browse journal articles across a wide range of disciplines and compile accurate bibliographies.

1

Zhou, Guoqiang, Yi Fan, Jiachen Shi, Yuyuan Lu, and Jun Shen. "Conditional Generative Adversarial Networks for Domain Transfer: A Survey". Applied Sciences 12, no. 16 (21.08.2022): 8350. http://dx.doi.org/10.3390/app12168350.

Abstract:
Generative Adversarial Network (GAN), deemed a powerful deep-learning-based silver bullet for intelligent data generation, has been widely used across multiple disciplines. Conditional GAN (CGAN) adds artificial control information on top of GAN, which makes it more practical for many specific fields; it is most often used in domain transfer. Researchers have proposed numerous methods to tackle diverse tasks by employing CGAN, so it is now a timely and critical point to review these achievements. We first give a brief introduction to the principle of CGAN, then focus on how to improve it to achieve better performance and how to evaluate such performance across the variants. Afterward, the main applications of CGAN in domain transfer are presented. Finally, as another major contribution, we also list the current problems and challenges of CGAN.
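To make the CGAN principle reviewed in this survey concrete, here is a minimal PyTorch sketch (all layer sizes and names are illustrative assumptions, not code from the paper): both the generator and the discriminator receive the condition vector concatenated with their usual inputs, which is the "artificial control information" the abstract describes.

```python
import torch
import torch.nn as nn

# Minimal CGAN sketch: both networks are conditioned on a label embedding.
# Sizes (noise_dim, n_classes, img_dim) are illustrative assumptions.
noise_dim, n_classes, img_dim = 64, 10, 784

G = nn.Sequential(  # generator: [noise ++ label] -> image
    nn.Linear(noise_dim + n_classes, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(  # discriminator: [image ++ label] -> real/fake logit
    nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_x, labels):
    y = nn.functional.one_hot(labels, n_classes).float()
    z = torch.randn(real_x.size(0), noise_dim)
    fake_x = G(torch.cat([z, y], dim=1))
    ones = torch.ones(real_x.size(0), 1)
    zeros = torch.zeros(real_x.size(0), 1)

    # Discriminator: real images scored 1, generated images scored 0.
    d_loss = (bce(D(torch.cat([real_x, y], 1)), ones) +
              bce(D(torch.cat([fake_x.detach(), y], 1)), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator under the same condition y.
    g_loss = bce(D(torch.cat([fake_x, y], 1)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```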

2

Lee, Minhyeok, and Junhee Seok. "Estimation with Uncertainty via Conditional Generative Adversarial Networks". Sensors 21, no. 18 (15.09.2021): 6194. http://dx.doi.org/10.3390/s21186194.

Abstract:
Conventional predictive Artificial Neural Networks (ANNs) commonly employ deterministic weight matrices, so their predictions are point estimates. This deterministic nature limits the use of ANNs in medical diagnosis, legal problems, and portfolio management, where not only the prediction itself but also its uncertainty is essential. To address this problem, we propose a predictive probabilistic neural network model that corresponds to a different way of using the generator in the conditional Generative Adversarial Network (cGAN), which has routinely been used for conditional sample generation. By reversing the input and output of an ordinary cGAN, the model can be used as a predictive model; moreover, it is robust against noise since adversarial training is employed. In addition, to measure the uncertainty of predictions, we introduce the entropy and relative entropy for regression problems and classification problems, respectively. The proposed framework is applied to stock market data and an image classification task. As a result, it shows superior estimation performance, especially on noisy data; moreover, it is demonstrated that the framework can properly estimate the uncertainty of its predictions.
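A minimal sketch of the entropy-based uncertainty idea described above (function names and the generator signature are assumptions): because the "reversed" generator maps an input plus fresh noise to a prediction, repeated noise draws yield a distribution of predictions whose entropy quantifies uncertainty.

```python
import torch

# Sketch, assuming generator(x, z) returns class logits for input x and noise z.
def predict_with_uncertainty(generator, x, n_samples=100, noise_dim=64):
    probs = []
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(x.size(0), noise_dim)    # fresh noise per draw
            logits = generator(x, z)                  # assumed signature
            probs.append(torch.softmax(logits, dim=1))
    mean_p = torch.stack(probs).mean(dim=0)           # averaged prediction
    entropy = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)
    return mean_p, entropy                            # high entropy = uncertain
```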

3

Zhang, Hao, and Wenlei Wang. "Imaging Domain Seismic Denoising Based on Conditional Generative Adversarial Networks (CGANs)". Energies 15, no. 18 (8.09.2022): 6569. http://dx.doi.org/10.3390/en15186569.

Abstract:
A high-resolution seismic image is the key factor in helping geophysicists and geologists recognize geological structures in the subsurface. Increasingly complex geology has challenged traditional techniques and created a need for more powerful denoising methodologies. The deep learning technique has shown its effectiveness in many different types of tasks. In this work, we used a conditional generative adversarial network (CGAN), a special type of deep neural network, to conduct the seismic image denoising process. We treated the denoising task as an image-to-image translation problem that transfers a raw seismic image with multiple types of noise into a reflectivity-like image without noise. We used several seismic models with complex geology to train the CGAN. In this experiment, the CGAN's performance was promising: the trained CGAN could keep the structure of the image undistorted while suppressing multiple types of noise.

4

Zand, Jaleh, and Stephen Roberts. "Mixture Density Conditional Generative Adversarial Network Models (MD-CGAN)". Signals 2, no. 3 (1.09.2021): 559–69. http://dx.doi.org/10.3390/signals2030034.

Abstract:
Generative Adversarial Networks (GANs) have gained significant attention in recent years, with impressive applications highlighted in computer vision, in particular. Compared to such examples, however, there have been more limited applications of GANs to time series modeling, including forecasting. In this work, we present the Mixture Density Conditional Generative Adversarial Model (MD-CGAN), with a focus on time series forecasting. We show that our model is capable of estimating a probabilistic posterior distribution over forecasts and that, in comparison to a set of benchmark methods, the MD-CGAN model performs well, particularly in situations where noise is a significant component of the observed time series. Further, by using a Gaussian mixture model as the output distribution, MD-CGAN offers posterior predictions that are non-Gaussian.
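A minimal sketch of a Gaussian-mixture output head in the spirit of MD-CGAN (architecture details and dimensions are assumptions): instead of a point forecast, the generator emits the weights, means, and scales of K Gaussian components, so the predictive distribution can be non-Gaussian.

```python
import torch
import torch.nn as nn

class MixtureDensityHead(nn.Module):
    """Maps a hidden feature vector to GMM parameters (sketch)."""
    def __init__(self, hidden_dim=128, n_components=5):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, n_components)
        self.log_sigma = nn.Linear(hidden_dim, n_components)
        self.logit_pi = nn.Linear(hidden_dim, n_components)

    def forward(self, h):
        pi = torch.softmax(self.logit_pi(h), dim=-1)   # mixture weights
        sigma = torch.exp(self.log_sigma(h))           # positive scales
        return pi, self.mu(h), sigma

# The forecast distribution is then:
#   p(y | x) = sum_k pi_k * N(y; mu_k, sigma_k^2)
```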

5

Zhen, Hao, Yucheng Shi, Jidong J. Yang, and Javad Mohammadpour Velni. "Co-supervised learning paradigm with conditional generative adversarial networks for sample-efficient classification". Applied Computing and Intelligence 3, no. 1 (2022): 13–26. http://dx.doi.org/10.3934/aci.2023002.

Abstract:
Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This has practically limited the scope of applications with supervised learning, in particular deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements semantics-conditioned, confidence-aware synthesized examples to the annotated data during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier; for comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. In particular, for the Street View House Numbers dataset, using 5% of the training data, a test accuracy of 90.26% is achieved by SEC-CGAN, as opposed to 88.59% by EC-GAN and 87.17% by the baseline classifier; for the highway image dataset, using 10% of the training data, a test accuracy of 98.27% is achieved by SEC-CGAN, compared to 97.84% by EC-GAN and 95.52% by the baseline classifier.
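A minimal sketch of the confidence-aware augmentation idea described above (threshold, names, and the generator/classifier signatures are assumptions): label-conditioned synthetic samples are appended to the annotated batch only when the current classifier is sufficiently confident they match their conditioning label.

```python
import torch

# Sketch: keep only synthetic samples the classifier trusts (assumed APIs).
def augment_batch(generator, classifier, labels, threshold=0.9, noise_dim=64):
    z = torch.randn(labels.size(0), noise_dim)
    with torch.no_grad():
        fake_x = generator(z, labels)                     # label-conditioned samples
        probs = torch.softmax(classifier(fake_x), dim=1)
        conf = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    keep = conf > threshold                               # confidence-aware filter
    return fake_x[keep], labels[keep]                     # append to annotated data
```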

6

Huang, Yubo, and Zhong Xiang. "A Metal Character Enhancement Method based on Conditional Generative Adversarial Networks". Journal of Physics: Conference Series 2284, no. 1 (1.06.2022): 012003. http://dx.doi.org/10.1088/1742-6596/2284/1/012003.

Abstract:
To improve the accuracy and stability of automatic metal stamping character (MSC) recognition, a metal stamping character enhancement algorithm based on conditional Generative Adversarial Networks (cGAN) is proposed. We identify character regions manually through region labeling and the Unsharpen Mask (USM) sharpening algorithm, and let the cGAN learn the most effective loss function during adversarial training to guide the generative model and distinguish character features from interference features, thereby enhancing the contrast between character and non-character regions. Qualitative and quantitative analyses show that the generated results have satisfactory image quality and that the maximum character recognition rate of the recognition network ASTER is improved by 11.03%.

7

Kyslytsyna, Anastasiia, Kewen Xia, Artem Kislitsyn, Isselmou Abd El Kader, and Youxi Wu. "Road Surface Crack Detection Method Based on Conditional Generative Adversarial Networks". Sensors 21, no. 21 (8.11.2021): 7405. http://dx.doi.org/10.3390/s21217405.

Abstract:
Constant monitoring of road surfaces helps to reveal how urgently deterioration or construction problems need attention and improves the safety level of the road surface. Conditional generative adversarial networks (cGAN) are a powerful tool for generating or transforming the images used for crack detection; their advantage is highly accurate results in vector-based images, which are convenient for later mathematical analysis of the detected cracks. However, images taken under controlled parameters differ from images in real-world contexts. Another potential problem with cGAN is that the shape of an object is difficult to detect when the resulting accuracy is low, which can seriously affect any further mathematical analysis of the detected crack. To tackle this issue, this paper proposes a method called improved cGAN with attention gate (ICGA) for roadway surface crack detection. To obtain a more accurate shape of the detected target object, ICGA establishes a multi-level model with independent stages. In the first stage, everything except the road is treated as noise and removed from the image; these images are stored in a new dataset. In the second stage, ICGA determines the cracks. ICGA therefore focuses on the distribution of cracks, not the auxiliary elements in the image. ICGA adds two attention gates to a U-net architecture and improves the segmentation capacities of the generator in pix2pix. Extensive experimental results on dashboard camera images of the Unsupervised Llamas dataset show that our method performs better than other state-of-the-art methods.
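A minimal sketch of an additive attention gate of the kind added to the U-net generator (this follows the common Attention U-Net formulation and assumes the skip features and gating signal share spatial dimensions; channel counts are illustrative, not the paper's exact variant):

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate on a U-net skip connection (sketch)."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)   # skip features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)   # gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # alpha in [0,1] highlights crack-like regions, suppresses background;
        # x and g are assumed to have matching spatial size here.
        alpha = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * alpha
```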

8

Link, Patrick, Johannes Bodenstab, Lars Penter, and Steffen Ihlenfeldt. "Metamodeling of a deep drawing process using conditional Generative Adversarial Networks". IOP Conference Series: Materials Science and Engineering 1238, no. 1 (1.05.2022): 012064. http://dx.doi.org/10.1088/1757-899x/1238/1/012064.

Abstract:
Optimization tasks as well as quality predictions for process control require fast-responding process metamodels. A common strategy for sheet metal forming is to build fast data-driven metamodels based on the results of Finite Element (FE) process simulations. However, FE simulations with complex material models and large parts with many elements consume extensive computational time. Hence, one major challenge in developing metamodels is to achieve good prediction precision with limited data, while the predictions still need to be robust against varying input parameters. Therefore, the aim of this study was to evaluate whether conditional Generative Adversarial Networks (cGAN) are applicable for predicting the results of FE deep drawing simulations, since cGANs have achieved high performance in similar tasks in previous work. This involves investigating the amount of data required to achieve a defined precision and to predict, e.g., wrinkling phenomena. Results show that the cGAN used in this study was able to predict forming results with an average absolute deviation of sheet thickness of 0.025 mm, even when using a comparably small amount of data.

9

Falahatraftar, Farnoush, Samuel Pierre, and Steven Chamberland. "A Conditional Generative Adversarial Network Based Approach for Network Slicing in Heterogeneous Vehicular Networks". Telecom 2, no. 1 (18.03.2021): 141–54. http://dx.doi.org/10.3390/telecom2010009.

Abstract:
A Heterogeneous Vehicular Network (HetVNET) is a highly dynamic type of network that changes very quickly. Given this characteristic of HetVNETs and the emerging notion of network slicing in 5G technology, we propose a hybrid intelligent Software-Defined Network (SDN) and Network Functions Virtualization (NFV) based architecture. In this paper, we apply a Conditional Generative Adversarial Network (CGAN) to augment the information of successful network scenarios related to network congestion and dynamicity. The results show that the proposed CGAN can be trained to generate valuable data: the generated data are similar to the real data, and they can be used in blueprints of HetVNET slices.

10

Aida, Saori, Junpei Okugawa, Serena Fujisaka, Tomonari Kasai, Hiroyuki Kameda, and Tomoyasu Sugiyama. "Deep Learning of Cancer Stem Cell Morphology Using Conditional Generative Adversarial Networks". Biomolecules 10, no. 6 (19.06.2020): 931. http://dx.doi.org/10.3390/biom10060931.

Abstract:
Deep-learning workflows for microscopic image analysis can handle contextual variations because they employ biological samples and cover numerous tasks; the use of well-defined annotated images is important for such workflows. Cancer stem cells (CSCs) are identified by specific cell markers and have been extensively characterized by their stem cell (SC)-like gene expression and by proliferation mechanisms in tumor development, whereas their morphological characterization remains elusive. This study investigates the segmentation of CSCs in phase contrast imaging using conditional generative adversarial networks (CGAN). Artificial intelligence (AI) was trained using fluorescence images of the Nanog-Green fluorescent protein, whose expression is maintained in CSCs, together with the phase contrast images. The AI model segmented the CSC region in phase contrast images of the CSC cultures and a tumor model. By selecting images for training, several measures of segmentation quality increased; moreover, overlaying nucleus fluorescence on phase contrast was effective for increasing these measures. We show the possibility of mapping CSC morphology to the condition of undifferentiation using deep-learning CGAN workflows.

11

Choi, Suyeon, and Yeonjoo Kim. "Rad-cGAN v1.0: Radar-based precipitation nowcasting model with conditional generative adversarial networks for multiple dam domains". Geoscientific Model Development 15, no. 15 (1.08.2022): 5967–85. http://dx.doi.org/10.5194/gmd-15-5967-2022.

Abstract:
Numerical weather prediction models and probabilistic extrapolation methods using radar images have been widely used for precipitation nowcasting, and machine-learning-based nowcasting models have recently been actively developed for relatively short-term precipitation predictions. This study aimed to develop a radar-based precipitation nowcasting model using an advanced machine-learning technique, the conditional generative adversarial network (cGAN), which shows high performance in image generation tasks. The cGAN-based model developed in this study, named Rad-cGAN, was trained with radar reflectivity data of the Soyang-gang Dam basin in South Korea, with a spatial domain of 128 × 128 pixels, a spatial resolution of 1 km, and a temporal resolution of 10 min. The model was evaluated against previously developed machine-learning-based precipitation nowcasting models, namely convolutional long short-term memory (ConvLSTM) and U-Net; in addition, the Eulerian persistence model and pySTEPS, a radar-based deterministic nowcasting system, were used as baseline models. We demonstrate that Rad-cGAN outperformed the reference models at 10 min lead time for the Soyang-gang Dam basin based on the verification metrics: Pearson correlation coefficient (R), root mean square error (RMSE), Nash–Sutcliffe efficiency (NSE), critical success index (CSI), and fraction skill score (FSS) at intensity thresholds of 0.1, 1.0, and 5.0 mm h−1. However, unlike at low rainfall intensity, the CSI of Rad-cGAN at high rainfall intensity deteriorated rapidly beyond a lead time of 10 min, whereas ConvLSTM and the baseline models maintained better performance; this observation was consistent with the FSS calculated at high rainfall intensity. These results were qualitatively evaluated using typhoon Soulik as an example, in which ConvLSTM maintained relatively higher precipitation intensities than the other models. For the prediction of the precipitation area, however, Rad-cGAN showed the best results, and the advantage of the cGAN method in reducing the blurring effect was confirmed through the radially averaged power spectral density (PSD). We also demonstrate the successful implementation of transfer learning to efficiently train the model with data from other dam basins in South Korea, such as the Andong Dam and Chungju Dam basins, starting from the pre-trained model fully trained on the Soyang-gang Dam basin, and we analyze the amount of data needed to effectively develop the model for a new domain under these transfer learning strategies. This study confirms that Rad-cGAN can be successfully applied to precipitation nowcasting at longer lead times and, via transfer learning, performs well in dam basins other than the originally trained basin.
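For reference, the critical success index (CSI) used in the verification above has a standard definition, CSI = hits / (hits + misses + false alarms), computed after thresholding both forecast and observation. A minimal sketch (not code from the paper; threshold in mm per hour):

```python
import numpy as np

def csi(pred, obs, threshold=0.1):
    """Critical success index over thresholded rain fields (sketch)."""
    p, o = pred >= threshold, obs >= threshold
    hits = np.logical_and(p, o).sum()
    misses = np.logical_and(~p, o).sum()
    false_alarms = np.logical_and(p, ~o).sum()
    return hits / (hits + misses + false_alarms + 1e-9)  # avoid divide-by-zero
```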

12

Yuan, Hao, Lei Cai, Zhengyang Wang, Xia Hu, Shaoting Zhang, and Shuiwang Ji. "Computational modeling of cellular structures using conditional deep generative networks". Bioinformatics 35, no. 12 (6.11.2018): 2141–49. http://dx.doi.org/10.1093/bioinformatics/bty923.

Abstract:
Motivation: Cellular function is closely related to the localizations of its sub-structures. It is, however, challenging to experimentally label all sub-cellular structures simultaneously in the same cell. This raises the need of building a computational model to learn the relationships among these sub-cellular structures and use reference structures to infer the localizations of other structures.
Results: We formulate such a task as a conditional image generation problem and propose to use conditional generative adversarial networks for tackling it. We employ an encoder–decoder network as the generator and propose to use skip connections between the encoder and decoder to provide spatial information to the decoder. To incorporate the conditional information in a variety of different ways, we develop three different types of skip connections, known as the self-gated connection, encoder-gated connection and label-gated connection. The proposed skip connections are built based on the conditional information using gating mechanisms. By learning a gating function, the network is able to control what information should be passed through the skip connections from the encoder to the decoder. Since the gate parameters are also learned automatically, we expect that only useful spatial information is transmitted to the decoder to help image generation. We perform both qualitative and quantitative evaluations to assess the effectiveness of our proposed approaches. Experimental results show that our cGAN-based approaches have the ability to generate the desired sub-cellular structures correctly. Our results also demonstrate that the proposed approaches outperform the existing approach based on adversarial auto-encoders, and the new skip connections lead to improved performance. In addition, the localizations of generated sub-cellular structures by our approaches are consistent with observations in biological experiments.
Availability and implementation: The source code and more results are available at https://github.com/divelab/cgan/.
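A minimal sketch of a learned, gated skip connection in the spirit of the "self-gated" variant named above (shapes and the 1×1-convolution gate are assumptions, not the paper's exact design): a gate computed from the encoder features decides, per pixel, how much spatial information passes to the decoder.

```python
import torch
import torch.nn as nn

class SelfGatedSkip(nn.Module):
    """Skip connection whose pass-through is controlled by a learned gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, enc_feat):
        g = torch.sigmoid(self.gate(enc_feat))  # learned, per-pixel gate in [0,1]
        return enc_feat * g                     # only useful information passes
```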

13

Thakur, Amey. "Generative Adversarial Networks". International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (31.08.2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.

Abstract:
Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples.
Keywords: Generative Adversarial Networks (GANs), Generator, Discriminator, Supervised and Unsupervised Learning, Discriminative and Generative Modelling, Backpropagation, Loss Functions, Machine Learning, Deep Learning, Neural Networks, Convolutional Neural Network (CNN), Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), Information Maximizing GAN (InfoGAN), Stacked GAN (StackGAN), Pix2Pix, Wasserstein GAN (WGAN), Progressive Growing GAN (ProGAN), BigGAN, StyleGAN, CycleGAN, Super-Resolution GAN (SRGAN), Image Synthesis, Image-to-Image Translation.

14

Cem Birbiri, Ufuk, Azam Hamidinekoo, Amélie Grall, Paul Malcolm, and Reyer Zwiggelaar. "Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation". Journal of Imaging 6, no. 9 (24.08.2020): 83. http://dx.doi.org/10.3390/jimaging6090083.

Abstract:
The manual delineation of region of interest (RoI) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue is helpful to define a precise RoI to be used in CAD systems in clinical practice during diagnostic imaging, radiotherapy and monitoring the progress of disease. Conditional GAN (cGAN), cycleGAN and U-Net models and their performances were studied for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans. These models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the training samples. These models were tested on a clinical dataset annotated for this study and on a public dataset (PROMISE12). The cGAN model outperformed the U-Net and cycleGAN predictions owing to the inclusion of paired image supervision. Based on our quantitative results, cGAN gained a Dice score of 0.78 and 0.75 on the private and the PROMISE12 public datasets, respectively.

15

Green, Adrian J., Martin J. Mohlenkamp, Jhuma Das, Meenal Chaudhari, Lisa Truong, Robyn L. Tanguay, and David M. Reif. "Leveraging high-throughput screening data, deep neural networks, and conditional generative adversarial networks to advance predictive toxicology". PLOS Computational Biology 17, no. 7 (2.07.2021): e1009135. http://dx.doi.org/10.1371/journal.pcbi.1009135.

Abstract:
There are currently 85,000 chemicals registered with the Environmental Protection Agency (EPA) under the Toxic Substances Control Act, but only a small fraction have measured toxicological data. To address this gap, high-throughput screening (HTS) and computational methods are vital. As part of one such HTS effort, embryonic zebrafish were used to examine a suite of morphological and mortality endpoints at six concentrations from over 1,000 unique chemicals found in the ToxCast library (phase 1 and 2). We hypothesized that by using a conditional generative adversarial network (cGAN) or deep neural networks (DNN), and leveraging this large set of toxicity data, we could efficiently predict toxic outcomes of untested chemicals. Utilizing a novel method in this space, we converted the 3D structural information into a weighted set of points while retaining all information about the structure. In vivo toxicity and chemical data were used to train two neural network generators. The first was a DNN (Go-ZT) while the second utilized cGAN architecture (GAN-ZT) to train generators to produce toxicity data. Our results showed that Go-ZT significantly outperformed the cGAN, support vector machine, random forest and multilayer perceptron models in cross-validation, and when tested against an external test dataset. By combining both Go-ZT and GAN-ZT, our consensus model improved the SE, SP, PPV, and Kappa to 71.4%, 95.9%, 71.4% and 0.673, respectively, resulting in an area under the receiver operating characteristic (AUROC) of 0.837. Considering their potential use as prescreening tools, these models could provide in vivo toxicity predictions and insight into the hundreds of thousands of untested chemicals to prioritize compounds for HT testing.

16

Soni, Ayush, Alexander Loui, Scott Brown, and Carl Salvaggio. "High-quality multispectral image generation using Conditional GANs". Electronic Imaging 2020, no. 8 (26.01.2020): 86–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.8.imawm-086.

Abstract:
In this paper, we demonstrate the use of a Conditional Generative Adversarial Network (cGAN) framework for producing high-fidelity, multispectral aerial imagery using low-fidelity imagery of the same kind as input. The motivation behind this is that it is easier, faster, and often less costly to produce low-fidelity images than high-fidelity images using the various available techniques, such as physics-driven synthetic image generation models. Once the cGAN network is trained and tuned in a supervised manner on a dataset of paired low- and high-quality aerial images, it can be used to enhance new, lower-quality baseline images of a similar type to produce more realistic, high-fidelity multispectral image data. This approach can potentially save significant time and effort compared to traditional approaches of producing multispectral images.

17

Huang, Yin-Fu, and Wei-De Liu. "Choreography cGAN: generating dances with music beats using conditional generative adversarial networks". Neural Computing and Applications 33, no. 16 (15.03.2021): 9817–33. http://dx.doi.org/10.1007/s00521-021-05752-x.

18

Eom, Gayeong, and Haewon Byeon. "Searching for Optimal Oversampling to Process Imbalanced Data: Generative Adversarial Networks and Synthetic Minority Over-Sampling Technique". Mathematics 11, no. 16 (21.08.2023): 3605. http://dx.doi.org/10.3390/math11163605.

Abstract:
Classification problems due to data imbalance occur in many fields and have long been studied in machine learning. Many real-world datasets suffer from class imbalance, which occurs when the sizes of classes are not uniform; data belonging to the minority class are then likely to be misclassified. Overcoming this issue is particularly important when dealing with medical data, because class imbalance inevitably arises from incidence rates within medical datasets. This study adjusted the imbalance ratio (IR) within the National Biobank of Korea dataset "Epidemiologic data of Parkinson's disease dementia patients" to values of 6.8 (raw data), 9, and 19 and compared four traditional oversampling methods with techniques using the conditional generative adversarial network (CGAN) and the conditional tabular generative adversarial network (CTGAN). The results showed that balancing the classes with CGAN and CTGAN yielded better classification performance, in terms of AUC and F1-score, than the more traditional oversampling techniques. We were thus able to expand the application scope of GANs, widely used for unstructured data, to structured data; we also offer a better solution for the imbalanced data problem and suggest future research directions.
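A minimal sketch of CGAN-based oversampling for tabular data as described above (names, dimensions, and the generator's input convention are assumptions): after training on the imbalanced table, the generator is asked for synthetic rows of the minority class until the classes are balanced.

```python
import torch

# Sketch, assuming a generator trained on [noise ++ one-hot label] inputs.
def oversample_minority(generator, minority_label, n_needed,
                        noise_dim=64, n_classes=2):
    z = torch.randn(n_needed, noise_dim)
    y = torch.full((n_needed,), minority_label, dtype=torch.long)
    y_onehot = torch.nn.functional.one_hot(y, n_classes).float()
    with torch.no_grad():
        synthetic_rows = generator(torch.cat([z, y_onehot], dim=1))
    return synthetic_rows, y   # append to the training set before fitting
```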

19

Li, Jie, Boyu Zhao, Kai Wu, Zhicheng Dong, Xuerui Zhang, and Zhihao Zheng. "A Representation Generation Approach of Transmission Gear Based on Conditional Generative Adversarial Network". Actuators 10, no. 5 (23.04.2021): 86. http://dx.doi.org/10.3390/act10050086.

Abstract:
Gear reliability assessment for vehicle transmissions, a key determinant of vehicle safety, has been a challenging issue in the transmission industry due to significant classification errors caused by highly coupled gear parameters and insufficient high-density data. As a preprocessing step for gear reliability assessment, this paper presents a representation generation approach based on generative adversarial networks (GAN) to improve the performance of reliability evaluation treated as a classification problem. First, with no need for complex modeling and massive calculations, a conditional generative adversarial network (CGAN) based model is established to generate gear representations by discovering the inherent mapping between features with gear parameters and gear reliability. Instead of producing intact samples like other GAN techniques, the CGAN-based model is designed to learn features of the gear data. In this model, to raise the diversity of the produced features, the discriminator uses a mini-batch strategy of randomly sampling from the combination of raw and generated representations, instead of using all of the data features. Second, because the representations created by the CGAN are unlabeled, a Wasserstein labeling (WL) scheme is proposed to tag them for classification. Lastly, original and produced representations are fused to train classifiers. Experiments on real-world gear data from the industry indicate that the proposed approach outperforms other techniques on operational metrics.

20

Li, Chen, Yuanbo Li, Zhiqiang Weng, Xuemei Lei, and Guangcan Yang. "Face Aging with Feature-Guide Conditional Generative Adversarial Network". Electronics 12, no. 9 (4.05.2023): 2095. http://dx.doi.org/10.3390/electronics12092095.

Abstract:
Face aging is of great importance for the information forensics and security fields, as well as for entertainment-related applications. Although significant progress has been made in this field, the authenticity, age specificity, and identity preservation of generated face images still need further attention. To better address these issues, a Feature-Guide Conditional Generative Adversarial Network (FG-CGAN) is proposed in this paper, which contains an extra feature guide module and an age classifier module. To preserve the identity of the input facial image during generation, the feature guide module introduces a perceptual loss to minimize the identity difference between the input and output face images of the generator, and an L2 loss to constrain the size of the generated feature map. To make the generated image fall into the target age group, the age classifier module constructs an age-estimation loss, in which the L-Softmax loss is incorporated to make the boundaries between different categories more distinct. Extensive experiments are conducted on the widely used face aging datasets CACD and Morph. The results show that target aging face images generated by FG-CGAN have promising validation confidence for identity preservation: the validation confidence levels for age groups 20–30, 30–40, and 40–50 are 95.79%, 95.42%, and 90.77%, respectively, which verifies the effectiveness of our proposed method.

21

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks". Mathematical Problems in Engineering 2021 (13.09.2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution; some adversarial samples can make a machine learning model produce a highly overconfident distribution in the testing stage. We therefore propose a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network's structure. The semisupervised GANs also give us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent features of the semisupervised GAN and the target network's logit information are used as the input of an external support vector machine classifier to detect the adversarial samples. In the unsupervised scenario, we first propose a one-class classifier based on the semisupervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of the normal data, and then use a discriminator network to separate normal data from adversarial samples. In both scenarios, experimental results show that our method outperforms the latest methods.

22

Ezeme, Okwudili M., Qusay H. Mahmoud, and Akramul Azim. "Design and Development of AD-CGAN: Conditional Generative Adversarial Networks for Anomaly Detection". IEEE Access 8 (2020): 177667–81. http://dx.doi.org/10.1109/access.2020.3025530.

23

Kim, Hee-Joung, and Donghoon Lee. "Image denoising with conditional generative adversarial networks (CGAN) in low dose chest images". Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 954 (February 2020): 161914. http://dx.doi.org/10.1016/j.nima.2019.02.041.

24

Chrysos, Grigorios G., Jean Kossaifi, and Stefanos Zafeiriou. "RoCGAN: Robust Conditional GAN". International Journal of Computer Vision 128, no. 10-11 (14.07.2020): 2665–83. http://dx.doi.org/10.1007/s11263-020-01348-5.

Abstract:
Conditional image generation lies at the heart of computer vision, and conditional generative adversarial networks (cGAN) have recently become the method of choice for this task, owing to their superior performance. The focus so far has largely been on performance improvement, with little effort devoted to making cGANs more robust to noise. However, the regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address this issue. Specifically, we augment the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold, even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GAN and establish the merits of our model with both synthetic and real data. We perform a thorough experimental validation on large scale datasets for natural scenes and faces and observe that our model outperforms existing cGAN architectures by a large margin. We also empirically demonstrate the performance of our approach in the face of two types of noise (adversarial and Bernoulli).

25

Majid, Haneen, and Khawla Ali. "Expanding New Covid-19 Data with Conditional Generative Adversarial Networks". Iraqi Journal for Electrical and Electronic Engineering 18, no. 1 (4.04.2022): 103–10. http://dx.doi.org/10.37917/ijeee.18.1.12.

Abstract:
COVID-19 is an infectious viral disease that mostly affects the lungs and that spread quickly across the world. Early detection of the virus boosts the chances of patients recovering. Many radiographic techniques, such as X-rays, are used to diagnose an infected person, and deep learning technology based on large collections of chest X-ray images is used to detect COVID-19. Because of the scarcity of available COVID-19 X-ray images, however, the limited COVID-19 datasets are insufficient for efficient deep learning detection models; another problem with a limited dataset is that trained models suffer from over-fitting and their predictions do not generalize. To address these problems, we developed Conditional Generative Adversarial Networks (CGAN) to produce synthetic images close to real images for the COVID-19 case, together with traditional augmentation, to expand the limited dataset, which was then used to train a customized deep detection model. The customized deep learning model obtained an excellent detection accuracy of 97% with only ten epochs, and the proposed augmentation outperforms other augmentation techniques. The augmented dataset includes 6988 high-quality, high-resolution COVID-19 X-ray images, whereas the original COVID-19 X-ray images number only 587.

26

Ali, Zeeshan, Sheneela Naz, Hira Zaffar, Jaeun Choi, and Yongsung Kim. "An IoMT-Based Melanoma Lesion Segmentation Using Conditional Generative Adversarial Networks". Sensors 23, no. 7 (28.03.2023): 3548. http://dx.doi.org/10.3390/s23073548.

Abstract:
Internet of Medical Things (IoMT)-based technologies currently provide a foundation for remote data collection and medical assistance for various diseases. Along with developments in computer vision, the application of Artificial Intelligence and Deep Learning in IoMT devices aids in the design of effective CAD systems for diseases such as melanoma, even in the absence of experts. However, accurate segmentation of melanoma skin lesions from images by CAD systems is necessary to carry out an effective diagnosis, and the visual similarity between normal and melanoma lesions is very high, which leads to lower accuracy of various traditional, parametric, and deep learning-based methods. Hence, as a solution to the challenge of accurate segmentation, we propose an advanced generative deep learning model, the Conditional Generative Adversarial Network (cGAN), for lesion segmentation. In the suggested technique, the generation of segmented images is conditioned on dermoscopic images of skin lesions to produce accurate segmentations. We assessed the proposed model using three distinct datasets, DermQuest, DermIS, and ISIC2016, and attained segmentation accuracies of 99%, 97%, and 95%, respectively.

27

Zaytar, Mohamed Akram, and Chaker El Amrani. "Satellite image inpainting with deep generative adversarial neural networks". IAES International Journal of Artificial Intelligence (IJ-AI) 10, no. 1 (1.03.2021): 121. http://dx.doi.org/10.11591/ijai.v10.i1.pp121-130.

Abstract:
This work addresses the problem of recovering lost or damaged satellite image pixels (gaps) caused by sensor processing errors or by natural phenomena like cloud presence. Such errors decrease our ability to monitor regions of interest and significantly increase the average revisit time for all satellites. This paper presents a novel neural system based on conditional deep generative adversarial networks (cGAN) optimized to fill satellite imagery gaps using surrounding pixel values and static high-resolution visual priors. Experimental results show that the proposed system outperforms traditional and neural network baselines, achieving a lower normalized least absolute deviations error and a lower mean squared error over the test set than both baselines. The model can be deployed within a remote sensing data pipeline to reconstruct missing pixel measurements for near-real-time monitoring and inference purposes, thus empowering policymakers and users to make environmentally informed decisions.

28

Ku, Hyeeun, and Minhyeok Lee. "TextControlGAN: Text-to-Image Synthesis with Controllable Generative Adversarial Networks". Applied Sciences 13, no. 8 (19.04.2023): 5098. http://dx.doi.org/10.3390/app13085098.

Abstract:
Generative adversarial networks (GANs) have demonstrated remarkable potential in the realm of text-to-image synthesis. Nevertheless, conventional GANs employing conditional latent space interpolation and manifold interpolation (GAN-CLS-INT) encounter challenges in generating images that accurately reflect the given text descriptions. To overcome these limitations, we introduce TextControlGAN, a controllable GAN-based model specifically designed for text-to-image synthesis tasks. In contrast to traditional GANs, TextControlGAN incorporates a neural network structure, known as a regressor, to effectively learn features from conditional texts. To further enhance the learning performance of the regressor, data augmentation techniques are employed. As a result, the generator within TextControlGAN can learn conditional texts more effectively, leading to the production of images that more closely adhere to the textual conditions. Furthermore, by concentrating the discriminator’s training efforts on GAN training exclusively, the overall quality of the generated images is significantly improved. Evaluations conducted on the Caltech-UCSD Birds-200 (CUB) dataset demonstrate that TextControlGAN surpasses the performance of the cGAN-based GAN-INT-CLS model, achieving a 17.6% improvement in Inception Score (IS) and a 36.6% reduction in Fréchet Inception Distance (FID). In supplementary experiments utilizing 128 × 128 resolution images, TextControlGAN exhibits a remarkable ability to manipulate minor features of the generated bird images according to the given text descriptions. These findings highlight the potential of TextControlGAN as a powerful tool for generating high-quality, text-conditioned images, paving the way for future advancements in the field of text-to-image synthesis.

29

Ma, Fei, Fei Gao, Jinping Sun, Huiyu Zhou, and Amir Hussain. "Weakly Supervised Segmentation of SAR Imagery Using Superpixel and Hierarchically Adversarial CRF". Remote Sensing 11, no. 5 (2.03.2019): 512. http://dx.doi.org/10.3390/rs11050512.

Abstract:
Synthetic aperture radar (SAR) image segmentation aims at generating homogeneous regions from a pixel-based image and is the basis of image interpretation. However, most existing segmentation methods neglect appearance and spatial consistency during feature extraction and require a large amount of training data; in addition, pixel-based processing cannot meet real-time requirements. We hereby present a weakly supervised algorithm to perform the task of segmentation for high-resolution SAR images. For effective segmentation, the input image is first over-segmented into a set of primitive superpixels. The algorithm combines hierarchical conditional generative adversarial nets (CGAN) and conditional random fields (CRF). The CGAN-based networks can leverage abundant unlabeled data when learning parameters, reducing their reliance on labeled samples. In order to preserve neighborhood consistency in the feature extraction stage, the hierarchical CGAN is composed of two sub-networks, which are employed to extract the information of the central superpixels and the corresponding background superpixels, respectively. Afterwards, CRF is utilized to perform label optimization using the concatenated features. Quantified experiments on an airborne SAR image dataset prove that the proposed method can effectively learn feature representations and achieve accuracy competitive with state-of-the-art segmentation approaches. More specifically, our algorithm has a higher Cohen's kappa coefficient and overall accuracy, and its computation time is less than that of current mainstream pixel-level semantic segmentation networks.

30

Rodríguez-Suárez, Brais, Pablo Quesada-Barriuso, and Francisco Argüello. "Design of CGAN Models for Multispectral Reconstruction in Remote Sensing". Remote Sensing 14, no. 4 (9.02.2022): 816. http://dx.doi.org/10.3390/rs14040816.

Abstract:
Multispectral imaging methods typically require cameras with dedicated sensors that make them expensive, and in some cases these sensors are not available or existing images are RGB, so the advantages of multispectral processing cannot be exploited. To overcome this drawback, several techniques have been proposed to reconstruct the spectral reflectance of a scene from a single RGB image captured by a camera. Deep learning methods can already solve this problem with good spectral accuracy. Recently, a new type of deep learning network, the Conditional Generative Adversarial Network (CGAN), has been proposed: a deep learning architecture that simultaneously trains two networks (generator and discriminator), with the additional feature that both networks are conditioned on some sort of auxiliary information. This paper focuses on the use of CGANs to reconstruct multispectral images from RGB images. Different regression network models (convolutional neural networks, U-Net, and ResNet) have been adapted and integrated as generators in the CGAN and compared in performance for multispectral reconstruction. Experiments with the BigEarthNet database show that the CGAN with ResNet as a generator provides better results than other deep learning networks, with a root mean square error of 316 measured over a range from 0 to 16,384.

31

Ramazyan, T., O. Kiss, M. Grossi, E. Kajomovitz, and S. Vallecorsa. "Generating muonic force carriers events with classical and quantum neural networks". Journal of Physics: Conference Series 2438, no. 1 (1.02.2023): 012089. http://dx.doi.org/10.1088/1742-6596/2438/1/012089.

Abstract:
Generative models (GM) are promising applications for near-term quantum computers due to the probabilistic nature of quantum mechanics. This work compares a classical conditional generative adversarial network (CGAN) with a quantum circuit Born machine, addressing their strengths and limitations in generating muonic force carrier (MFC) events. The former uses a neural network as a discriminator to train the generator, while the latter takes advantage of the stochastic nature of measurements in quantum mechanics to generate samples. We consider a fixed-target collision between muons produced in the high-energy collisions of the LHC and the detector material of the ForwArd Search ExpeRiment (FASER) or the ATLAS calorimeter. In the ATLAS case, independent muon measurements performed by the inner detector (ID) and muon system (MS) can help observe new force carriers coupled to muons, which are usually not detected. We numerically observed that CGANs could reproduce the complete data set and interpolate to different regimes. Moreover, we show on a simplified problem that Born machines are promising generative models for near-term quantum devices.

32

Jafrasteh, B., I. Manighetti, and J. Zerubia. "GENERATIVE ADVERSARIAL NETWORKS AS A NOVEL APPROACH FOR TECTONIC FAULT AND FRACTURE EXTRACTION IN HIGH-RESOLUTION SATELLITE AND AIRBORNE OPTICAL IMAGES". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (21.08.2020): 1219–27. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1219-2020.

Abstract:
We develop a novel method based on Deep Convolutional Networks (DCN) to automate the identification and mapping of fracture and fault traces in optical images. The method employs two DCNs in a two-player game: a first network, called the Generator, learns to segment images to make them resemble the ground truth; a second network, called the Discriminator, measures the differences between the ground truth image and each segmented image and sends its score feedback to the Generator; based on these scores, the Generator improves its segmentation progressively. As we condition both networks on the ground truth images, the method is called Conditional Generative Adversarial Network (CGAN). We propose a new loss function for both the Generator and the Discriminator networks, to improve their accuracy. Using two criteria and a manually annotated optical image, we compare the generalization performance of the proposed method to that of a classical DCN architecture, U-net. The comparison demonstrates the suitability of the proposed CGAN architecture; further work is however needed to improve its efficiency.

33

Zhang, Zaijun, Hiroaki Ishihata, Ryuto Maruyama, Tomonari Kasai, Hiroyuki Kameda, and Tomoyasu Sugiyama. "Deep Learning of Phase-Contrast Images of Cancer Stem Cells Using a Selected Dataset of High Accuracy Value Using Conditional Generative Adversarial Networks". International Journal of Molecular Sciences 24, no. 6 (10.03.2023): 5323. http://dx.doi.org/10.3390/ijms24065323.

Abstract:
Artificial intelligence (AI) technology for image recognition has the potential to identify cancer stem cells (CSCs) in cultures and tissues. CSCs play an important role in the development and relapse of tumors. Although the characteristics of CSCs have been extensively studied, their morphological features remain elusive. The attempt to obtain an AI model identifying CSCs in culture showed the importance of images from spatially and temporally grown cultures of CSCs for deep learning to improve accuracy, but was insufficient. This study aimed to identify a process that is significantly efficient in increasing the accuracy values of the AI model output for predicting CSCs from phase-contrast images. An AI model of conditional generative adversarial network (CGAN) image translation for CSC identification predicted CSCs with various accuracy levels, and convolutional neural network classification of CSC phase-contrast images showed variation in the images. The accuracy of the AI model of CGAN image translation was increased by the AI model built by deep learning of selected CSC images with high accuracy previously calculated by another AI model. The workflow of building an AI model based on CGAN image translation could be useful for the AI prediction of CSCs.

34

Yadav, Jyoti Deshwal, Vivek K. Dwivedi, and Saurabh Chaturvedi. "ResNet-Enabled cGAN Model for Channel Estimation in Massive MIMO System". Wireless Communications and Mobile Computing 2022 (29.08.2022): 1–9. http://dx.doi.org/10.1155/2022/2697932.

Abstract:
Massive multiple-input multiple-output (MIMO), or large-scale MIMO, is one of the key technologies for future wireless networks, offering a large accessible spectrum and high throughput. The performance of a massive MIMO system depends strongly on the nature of the various channels and on interference during multipath transmission, so accurate channel estimation is important. This paper considers a massive MIMO system with one-bit analog-to-digital converters (ADCs) on each receiver antenna of the base station. A deep learning (DL)-based channel estimation framework is developed to reduce signal processing complexity. This DL framework uses conditional generative adversarial networks (cGANs) with various convolutional neural networks, namely reverse residual network (reverse ResNet), squeeze-and-excitation ResNet (SE ResNet), ResUNet++, and reverse SE ResNet, as the generator model of the cGAN for extracting features from the quantized received signals. The simulation results show that the trained residual block-based generator model of the cGAN has better channel generation performance than the standard generator model in terms of mean square error.

35

Bittner, K., P. d’Angelo, M. Körner, and P. Reinartz. "AUTOMATIC LARGE-SCALE 3D BUILDING SHAPE REFINEMENT USING CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2 (30.05.2018): 103–8. http://dx.doi.org/10.5194/isprs-archives-xlii-2-103-2018.

Abstract:
Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources providing a digital representation of the Earth's surface and of related natural, cultural, and man-made objects of urban areas in remote sensing are digital surface models (DSMs). DSMs can be obtained either by light detection and ranging (LIDAR), by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but at the same time enhances the 3D object shapes, buildings in our case. Mainly, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from the noisy stereo DSM input. The obtained results demonstrate the strong potential for creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.

36

List, Florian, Ishaan Bhat, and Geraint F. Lewis. "A black box for dark sector physics: predicting dark matter annihilation feedback with conditional GANs". Monthly Notices of the Royal Astronomical Society 490, no. 3 (3.10.2019): 3134–43. http://dx.doi.org/10.1093/mnras/stz2759.

Abstract:
Traditionally, incorporating additional physics into existing cosmological simulations requires re-running the cosmological simulation code, which can be computationally expensive. We show that conditional Generative Adversarial Networks (cGANs) can be harnessed to predict how changing the underlying physics alters the simulation results. To illustrate this, we train a cGAN to learn the impact of dark matter annihilation feedback (DMAF) on the gas density distribution. The predicted gas density slices are visually difficult to distinguish from their real brethren, and the peak counts differ by less than 10 per cent for all test samples (the average deviation is <3 per cent). Finally, we invert the problem and show that cGANs are capable of endowing smooth density distributions with realistic substructure. The cGAN does, however, have difficulty generating new knots as well as creating/eliminating bubble-like structures. We conclude that trained cGANs can be an effective approach to providing mock samples of cosmological simulations incorporating DMAF physics from existing samples of standard cosmological simulations of the evolution of cosmic structure.
37

Yoshiura, Shintaro, Hayato Shimabukuro, Kenji Hasegawa and Keitaro Takahashi. "Predicting 21 cm-line map from Lyman-α emitter distribution with generative adversarial networks". Monthly Notices of the Royal Astronomical Society 506, no. 1 (18.06.2021): 357–71. http://dx.doi.org/10.1093/mnras/stab1718.

Abstract:
The radio observation of the 21 cm-line signal from the epoch of reionization (EoR) enables us to explore the evolution of galaxies and the intergalactic medium in the early Universe. However, detecting and imaging the 21 cm-line signal is challenging due to foreground contamination and instrumental systematics. To overcome these obstacles, we propose, as a new approach, taking the cross-correlation between observed 21 cm-line data and 21 cm-line images generated from the distribution of Lyman-α emitters (LAEs) through machine learning. To create 21 cm-line maps from the LAE distribution, we apply a conditional Generative Adversarial Network (cGAN) trained with the results of our numerical simulations. We find that the 21 cm-line brightness temperature maps and the neutral fraction maps can be reproduced with a correlation of 0.5 at large scales (k < 0.1 Mpc⁻¹). Furthermore, we study the detectability of the cross-correlation assuming the LAE deep survey of the Subaru Hyper Suprime-Cam, the 21 cm observation of MWA Phase II, and the presence of foreground residuals. We show that the signal is detectable at k < 0.1 Mpc⁻¹ with 1000 h of MWA observation even if the foreground residuals are 5 times larger than the 21 cm-line power spectrum. Our new approach of cross-correlation with image construction using the cGAN can not only boost the detectability of the EoR 21 cm-line signal but also allow us to estimate the 21 cm-line auto-power spectrum.
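
The scale-dependent cross-correlation coefficient at the core of this approach can be sketched in a few lines of NumPy; the map size and k-binning below are illustrative assumptions, not the paper's exact analysis setup.

```python
# Sketch of the cross-correlation coefficient r(k) = P_AB / sqrt(P_AA * P_BB)
# between an observed map and a cGAN-generated one, azimuthally binned in |k|.
import numpy as np

def cross_corr_coeff(map_a, map_b, n_bins=16):
    fa, fb = np.fft.fftn(map_a), np.fft.fftn(map_b)
    p_ab = (fa * np.conj(fb)).real          # cross power
    p_aa, p_bb = np.abs(fa) ** 2, np.abs(fb) ** 2
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in map_a.shape], indexing="ij")
    k = np.sqrt(sum(f ** 2 for f in freqs)).ravel()
    idx = np.minimum((k / k.max() * n_bins).astype(int), n_bins - 1)
    avg = lambda p: (np.bincount(idx, weights=p.ravel(), minlength=n_bins)
                     / np.maximum(np.bincount(idx, minlength=n_bins), 1))
    return avg(p_ab) / np.sqrt(avg(p_aa) * avg(p_bb))

r_k = cross_corr_coeff(np.random.rand(64, 64), np.random.rand(64, 64))
```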
38

Shao, Changcheng, Xiaolin Li, Fang Li and Yifan Zhou. "Large Mask Image Completion with Conditional GAN". Symmetry 14, no. 10 (14.10.2022): 2148. http://dx.doi.org/10.3390/sym14102148.

Abstract:
Recently, learning-based image completion methods have made encouraging progress on square or irregular masks. Generative adversarial networks (GANs) have been able to produce visually realistic and semantically correct results. However, much texture and structure information is lost in the completion process, and if the missing part is too large to provide useful information, the results suffer from ambiguity, residual shadows, and object confusion. To complete large-mask images, we present a novel conditional GAN model called the coarse-to-fine conditional GAN (CF CGAN). We use a coarse-to-fine generator that is symmetric in structure, together with a new perceptual loss based on VGG-16. For large-mask image completion, our method produces visually realistic and semantically correct results, and the generalization ability of the model is also excellent. We evaluate our model on the CelebA dataset using FID, LPIPS, and SSIM as metrics. Experiments demonstrate superior performance in terms of both quality and realism in free-form image completion.
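
A VGG-16 perceptual loss of the kind mentioned above can be sketched as follows; the choice of feature layer (up to relu3_3) and the L1 feature distance are common conventions assumed here, not necessarily the paper's exact configuration, and a recent torchvision is assumed for the weights API.

```python
# Sketch of a VGG-16 perceptual loss: compare frozen ImageNet-pretrained
# feature maps of the completed image and the ground truth.
import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    def __init__(self, layer_index=16):  # layers up to relu3_3 (assumed choice)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        self.features = nn.Sequential(*list(vgg[:layer_index])).eval()
        for p in self.features.parameters():
            p.requires_grad_(False)      # frozen feature extractor

    def forward(self, fake, real):
        return nn.functional.l1_loss(self.features(fake), self.features(real))

loss = VGGPerceptualLoss()(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```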
39

Rastin, Zahra, Gholamreza Ghodrati Amiri and Ehsan Darvishan. "Generative Adversarial Network for Damage Identification in Civil Structures". Shock and Vibration 2021 (3.09.2021): 1–12. http://dx.doi.org/10.1155/2021/3987835.

Abstract:
In recent years, many efforts have been made to develop efficient deep-learning-based structural health monitoring (SHM) methods. Most of the proposed methods employ supervised algorithms that require data from different damaged states of a structure in order to monitor its health condition. As such data are not usually available for real civil structures, using supervised algorithms for the health monitoring of these structures might be impracticable. This paper presents a novel two-stage technique based on generative adversarial networks (GANs) for unsupervised SHM and damage identification. In the first stage, a deep convolutional GAN (DCGAN) is used to detect and quantify structural damage; the detected damage is then localized in the second stage using a conditional GAN (CGAN). Raw acceleration signals from the monitored structure are used for this purpose, and the networks are trained only on data from the intact state of the structure. The proposed method is validated through applications on the numerical model of a bridge health monitoring (BHM) benchmark structure, an experimental steel structure located at Qatar University, and the full-scale Tianjin Yonghe Bridge.
40

Weng, Yongchun, Yong Ma, Fu Chen, Erping Shang, Wutao Yao, Shuyan Zhang, Jin Yang and Jianbo Liu. "Temporal Co-Attention Guided Conditional Generative Adversarial Network for Optical Image Synthesis". Remote Sensing 15, no. 7 (31.03.2023): 1863. http://dx.doi.org/10.3390/rs15071863.

Abstract:
In the field of SAR-to-optical image synthesis, current methods based on conditional generative adversarial networks (CGANs) perform satisfactorily in simple scenarios, but their performance drops severely in complicated ones. Since SAR's all-weather imaging ability allows SAR images to form a robust time series, we take advantage of this and extract a temporal correlation from bi-temporal SAR images to guide the translation. To achieve this, we introduce a co-attention mechanism into the CGAN that learns the correlation between optically-available and optically-absent time points, selectively enhances the features of the former time point, and eventually guides the model to a better optical image synthesis at the latter time point. Additionally, we adopt a strategy that balances the weights of optical and SAR features to extract better features from the SAR input. With these strategies, the quality of the synthesized images is notably improved in complicated scenarios. The synthesized images can increase the spatial and temporal resolution of optical imagery, greatly improving data availability for crop monitoring, change detection, and visual interpretation.
41

Sharafudeen, Misaj, Andrew J. and Vinod Chandra S. S. "Leveraging Vision Attention Transformers for Detection of Artificially Synthesized Dermoscopic Lesion Deepfakes Using Derm-CGAN". Diagnostics 13, no. 5 (21.02.2023): 825. http://dx.doi.org/10.3390/diagnostics13050825.

Abstract:
Synthesized multimedia is an open concern that has received too little attention in the scientific community. In recent years, generative models have been used to create deepfakes in medical imaging modalities. We investigate the synthesis and detection of dermoscopic skin lesion images by leveraging the conceptual aspects of Conditional Generative Adversarial Networks and state-of-the-art Vision Transformers (ViT). The Derm-CGAN is designed for the realistic generation of six different types of dermoscopic skin lesions. Analysis of the similarity between real and synthesized fakes revealed a high correlation. Furthermore, several ViT variants were investigated for distinguishing between real and fake lesions. The best-performing model achieved an accuracy of 97.18%, a margin of over 7% above the second-best network. The trade-offs of the proposed model relative to other networks, as well as on a benchmark face dataset, were critically analyzed in terms of computational complexity. This technology is capable of harming laypeople through medical misdiagnosis or insurance scams. Further research in this domain could assist physicians and the general public in countering and resisting deepfake threats.
42

Li, Bing, Yong Xian, Juan Su, Da Q. Zhang and Wei L. Guo. "I-GANs for Infrared Image Generation". Complexity 2021 (23.03.2021): 1–11. http://dx.doi.org/10.1155/2021/6635242.

Abstract:
The making of infrared templates is of great significance for improving the accuracy and precision of infrared imaging guidance. However, collecting infrared images in the field is difficult, costly, and time-consuming. To address this problem, an infrared image generation method, infrared generative adversarial networks (I-GANs), based on the conditional generative adversarial network (CGAN) architecture is proposed. In I-GANs, visible images instead of random noise are used as inputs, and the D-LinkNet network is utilized to build the generative model, enabling improved learning of rich image textures and identification of dependencies between images. Moreover, the PatchGAN architecture is employed to build the discriminant model, processing the high-frequency components of the images effectively while reducing the amount of computation required. In addition, batch normalization is used to stabilize the training process, alleviating the instability and mode collapse of generative adversarial network training. Finally, experimental verification is conducted on the produced infrared/visible light dataset (IVFG). The experimental results show that the proposed I-GANs generate high-quality and reliable infrared data.
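
The PatchGAN discriminator employed here can be sketched as below; the channel widths follow the usual pix2pix convention, and the 6-channel input (visible plus infrared image, concatenated) is an assumption for illustration rather than the paper's exact configuration.

```python
# Sketch of a PatchGAN discriminator: instead of one real/fake score per
# image, it outputs a grid of logits, each judging a local patch.
import torch
import torch.nn as nn

def conv_block(cin, cout, norm=True):
    layers = [nn.Conv2d(cin, cout, 4, stride=2, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(cout))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return layers

class PatchDiscriminator(nn.Module):
    def __init__(self, in_ch=6):  # visible + infrared image, concatenated
        super().__init__()
        self.net = nn.Sequential(
            *conv_block(in_ch, 64, norm=False),
            *conv_block(64, 128),
            *conv_block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1),  # one logit per local patch
        )

    def forward(self, visible, infrared):
        return self.net(torch.cat([visible, infrared], dim=1))

# For 256x256 inputs this yields a 31x31 grid of patch logits.
logits = PatchDiscriminator()(torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256))
```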
43

Yuan, X., J. Tian and P. Reinartz. "GENERATING ARTIFICIAL NEAR INFRARED SPECTRAL BAND FROM RGB IMAGE USING CONDITIONAL GENERATIVE ADVERSARIAL NETWORK". ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2020 (3.08.2020): 279–85. http://dx.doi.org/10.5194/isprs-annals-v-3-2020-279-2020.

Abstract:
Near infrared (NIR) bands provide rich information for many remote sensing applications. In addition to deriving useful indices to delineate water and vegetation, near infrared channels can also be used to facilitate image pre-processing. However, synthesizing bands from the RGB spectrum is not an easy task, as the inter-correlations between bands are not clearly identified in physical models. Generative adversarial networks (GANs) have been used in many tasks such as generating photorealistic images, monocular depth estimation, and Digital Surface Model (DSM) refinement. A conditional GAN differs in that it observes some data as a condition. In this paper, we explore a cGAN network structure to generate a NIR spectral band conditioned on the input RGB image. We test different discriminators and loss functions and evaluate the results using various metrics. The best simulated NIR channel has a mean absolute error of around 5 percent on the Sentinel-2 dataset. In addition, the simulated NIR image can correctly distinguish between various land cover classes.
44

Rojas-Campos, Adrian, Michael Langguth, Martin Wittenbrink and Gordon Pipa. "Deep learning models for generation of precipitation maps based on numerical weather prediction". Geoscientific Model Development 16, no. 5 (8.03.2023): 1467–80. http://dx.doi.org/10.5194/gmd-16-1467-2023.

Abstract:
Numerical weather prediction (NWP) models are atmospheric simulations that imitate the dynamics of the atmosphere and provide high-quality forecasts. One of the most significant limitations of NWP is the large amount of computational resources it requires, which limits the spatial and temporal resolution of the outputs. Traditional meteorological techniques for increasing the resolution are based solely on information from a limited group of variables of interest. In this study, we offer an alternative approach in which precipitation maps are generated from the complete set of NWP variables, yielding high-resolution, short-term precipitation predictions. To achieve this, five different deep learning models were trained and evaluated: a baseline, U-Net, two deconvolution networks, and one conditional generative model (Conditional Generative Adversarial Network; CGAN), with 20 independent random initializations per model. The predictions were evaluated using skill scores based on the mean absolute error (MAE) and linear error in probability space (LEPS), the equitable threat score (ETS), the critical success index (CSI), and the frequency bias after applying several thresholds. The models showed a significant improvement in predicting precipitation, demonstrating the benefit of including the complete information from the NWP. The algorithms doubled the resolution of the predictions and corrected an over-forecast bias in the input information. However, some models introduced new types of bias: U-Net tended toward mid-range precipitation events, while the deconvolution models favored low-rain events and introduced some spatial smoothing. The CGAN offered the highest-quality precipitation forecasts, generating realistic outputs and indicating possible future research paths.
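
The thresholded verification scores used in this evaluation (ETS, CSI, and frequency bias) follow standard contingency-table definitions, sketched below; the 1 mm threshold is an illustrative choice, not the paper's.

```python
# Sketch of thresholded verification scores from a 2x2 contingency table:
# hits a, false alarms b, misses c, correct negatives d.
import numpy as np

def contingency_scores(forecast, observed, threshold=1.0):
    f, o = forecast >= threshold, observed >= threshold
    a = np.sum(f & o)       # hits
    b = np.sum(f & ~o)      # false alarms
    c = np.sum(~f & o)      # misses
    d = np.sum(~f & ~o)     # correct negatives
    n = a + b + c + d
    a_random = (a + b) * (a + c) / n       # hits expected by chance
    ets = (a - a_random) / (a + b + c - a_random)  # equitable threat score
    csi = a / (a + b + c)                  # critical success index
    bias = (a + b) / (a + c)               # frequency bias (1 = unbiased)
    return ets, csi, bias

ets, csi, bias = contingency_scores(np.random.rand(100, 100) * 5,
                                    np.random.rand(100, 100) * 5)
```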
45

Lee, JooHwa, and KeeHyun Park. "AE-CGAN Model based High Performance Network Intrusion Detection System". Applied Sciences 9, no. 20 (10.10.2019): 4221. http://dx.doi.org/10.3390/app9204221.

Abstract:
In this paper, a high-performance network intrusion detection system based on deep learning is proposed for situations in which there are significant imbalances between normal and abnormal traffic. Building on two unsupervised deep learning models, the autoencoder (AE) and the generative adversarial network (GAN), the study aims to resolve the data imbalance and achieve high-performance intrusion detection. The AE-CGAN (autoencoder-conditional GAN) model is proposed to improve detection performance: after the autoencoder reduces the data features to a lower-dimensional representation, the model oversamples rare classes with a conditional GAN to counter the performance degradation caused by data imbalance. To measure the performance of the AE-CGAN model, data are classified using random forest (RF), a typical machine learning classification algorithm. The experiments use the Canadian Institute for Cybersecurity Intrusion Detection System (CICIDS2017) dataset, the latest public network intrusion detection system (NIDS) dataset at the time, and compare three models to confirm the efficacy of the proposed one: single-RF, a model using only the classification algorithm; AE-RF, which classifies after processing the data features; and AE-CGAN, which classifies after both feature processing and imbalance correction. Experimental results show that the proposed AE-CGAN model performs best. In particular, on unbalanced data, recall and F1 score, the more informative performance indicators here, reached 93.29% and 95.38%, respectively.
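
The oversampling step of the AE-CGAN pipeline can be sketched as follows; `generator`, the 32-dimensional feature space, and the class counts are hypothetical stand-ins for the trained models described in the paper.

```python
# Sketch of CGAN-based oversampling of a rare class in a compressed
# (autoencoder) feature space, followed by random forest classification.
import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier

def oversample_with_cgan(generator, X, y, rare_label, n_extra, z_dim=32):
    z = torch.randn(n_extra, z_dim)
    labels = torch.full((n_extra,), rare_label)
    with torch.no_grad():
        X_fake = generator(z, labels).numpy()   # synthetic rare-class rows
    X_bal = np.vstack([X, X_fake])
    y_bal = np.concatenate([y, np.full(n_extra, rare_label)])
    return X_bal, y_bal

# Stand-in generator for illustration only (ignores the label input):
generator = lambda z, labels: torch.tanh(z)
X_bal, y_bal = oversample_with_cgan(generator, np.random.rand(500, 32),
                                    np.random.randint(0, 3, 500),
                                    rare_label=2, n_extra=200)
clf = RandomForestClassifier(n_estimators=100).fit(X_bal, y_bal)
```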
46

Hong, Zhiwei, Xiaocheng Fan, Tao Jiang and Jianxing Feng. "End-to-End Unpaired Image Denoising with Conditional Adversarial Networks". Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (3.04.2020): 4140–49. http://dx.doi.org/10.1609/aaai.v34i04.5834.

Abstract:
Image denoising is a classic low-level vision problem that attempts to recover a noise-free image from a noisy observation. Recent advances in deep neural networks have outperformed traditional prior-based methods for image denoising. However, the existing methods either require paired noisy and clean images for training or impose certain assumptions on the noise distribution and data types. In this paper, we present an end-to-end unpaired image denoising framework (UIDNet) that denoises images with only unpaired clean and noisy training images. The critical component of our model is a noise learning module based on a conditional Generative Adversarial Network (cGAN). The model learns the noise distribution from the input noisy images and uses it to transform the input clean images into noisy ones, without any assumption on the noise distribution or data types. This process yields pairs of clean and pseudo-noisy images, which are then used to train another denoising network, similar to existing denoising methods based on paired images. The noise learning and denoising components are integrated so that they can be trained end-to-end. Extensive experimental evaluation has been performed on both synthetic and real data, including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms previous models trained on unpaired images, as well as state-of-the-art methods based on paired training data when proper training pairs are unavailable.
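
The pseudo-pairing idea at the heart of UIDNet can be sketched as below; `noise_generator` and `denoiser` are hypothetical stand-ins for the paper's trained modules, and the L1 reconstruction loss is an assumed choice for the supervised stage.

```python
# Sketch of training a denoiser on (clean, pseudo-noisy) pairs produced by
# a cGAN noise generator that was trained on unpaired noisy images.
import torch
import torch.nn.functional as F

def denoiser_training_step(noise_generator, denoiser, optimizer, clean_batch):
    with torch.no_grad():                       # noise model held fixed here
        pseudo_noisy = noise_generator(clean_batch)
    optimizer.zero_grad()
    loss = F.l1_loss(denoiser(pseudo_noisy), clean_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with stand-in conv modules:
g = torch.nn.Conv2d(1, 1, 3, padding=1)
d = torch.nn.Conv2d(1, 1, 3, padding=1)
opt = torch.optim.Adam(d.parameters(), lr=1e-4)
denoiser_training_step(g, d, opt, torch.rand(4, 1, 64, 64))
```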
47

Al-Shargabi, Amal A., Jowharah F. Alshobaili, Abdulatif Alabdulatif and Naseem Alrobah. "COVID-CGAN: Efficient Deep Learning Approach for COVID-19 Detection Based on CXR Images Using Conditional GANs". Applied Sciences 11, no. 16 (4.08.2021): 7174. http://dx.doi.org/10.3390/app11167174.

Abstract:
COVID-19, a novel coronavirus infectious disease, has spread around the world, resulting in a large number of deaths. Due to a lack of physicians, emergency facilities, and equipment, medical systems in many countries have been unable to treat all patients. Deep learning is a promising approach for providing solutions to COVID-19 based on patients' medical images. As COVID-19 is a new disease, the related datasets are still being collected and published. Small COVID-19 datasets may not be sufficient to build powerful deep learning detection models; such models are often overfitted, and their predictions do not generalize. To fill this gap, we propose a deep learning approach for accurately detecting COVID-19 cases based on chest X-ray (CXR) images. For the proposed approach, named COVID-CGAN, we first generated a larger dataset using generative adversarial networks (GANs). Specifically, a customized conditional GAN (CGAN) was designed to generate the target COVID-19 CXR images. The expanded dataset, which contains 84.8% generated images and 15.2% original images, was then used to train five deep detection models: InceptionResNetV2, Xception, SqueezeNet, VGG16, and AlexNet. The results show that the synthetic CXR images generated by the customized CGAN helped all deep learning models achieve high detection accuracies. The highest accuracy was achieved by the InceptionResNetV2 model, which reached 99.72% with only ten epochs. All five models achieved kappa coefficients between 0.81 and 1, which is interpreted as an almost perfect agreement between the actual and detected labels. Furthermore, the experiments showed that some models were smaller and faster than others while still achieving high accuracy. For instance, SqueezeNet, a small network, required only three minutes and achieved accuracy comparable to larger networks such as InceptionResNetV2, which needed about 143 minutes. Our proposed approach can be applied to other fields with scarce datasets.
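
The class-conditioning mechanism of a customized CGAN generator can be sketched as follows; the label-embedding design, layer sizes, and 64×64 output are illustrative assumptions, not the COVID-CGAN's actual architecture.

```python
# Sketch of a class-conditional generator: the class label (e.g., COVID-19
# vs. normal) is embedded and concatenated with the noise vector.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=2, emb_dim=16, img_size=64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + emb_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, img_size * img_size), nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)  # condition on the label
        return self.net(x).view(-1, 1, self.img_size, self.img_size)

# Generate a batch of 8 label-conditioned synthetic images:
fake_cxr = ConditionalGenerator()(torch.randn(8, 100), torch.randint(0, 2, (8,)))
```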
48

Luo, Qingli, Hong Li, Zhiyuan Chen and Jian Li. "ADD-UNet: An Adjacent Dual-Decoder UNet for SAR-to-Optical Translation". Remote Sensing 15, no. 12 (15.06.2023): 3125. http://dx.doi.org/10.3390/rs15123125.

Abstract:
Synthetic aperture radar (SAR) imagery has the advantage of all-day, all-weather observation. However, due to the microwave imaging mechanism, SAR images are difficult for non-experts to interpret. Translating SAR imagery into optical imagery can improve the interpretability of SAR data and support further research on multi-source remote sensing fusion. Methods based on generative adversarial networks (GANs) have proven effective in SAR-to-optical translation tasks. To further improve the translation results, we propose an adjacent dual-decoder UNet (ADD-UNet) based on a conditional GAN (cGAN) for SAR-to-optical translation. The proposed architecture adds an adjacent-scale decoder to the UNet; the multi-scale feature aggregation of the two decoders improves the structures, details, and edge sharpness of the generated images while introducing fewer parameters than UNet++. In addition, we combine the multi-scale structural similarity (MS-SSIM) loss and the L1 loss with the cGAN loss to help preserve structures and details. The experimental results demonstrate the superiority of our method over several state-of-the-art methods.
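
The combined objective described above can be sketched as below; it assumes the third-party pytorch-msssim package for the MS-SSIM term, and the weights alpha and beta are illustrative conventions rather than the values tuned in the paper.

```python
# Sketch of a generator objective combining cGAN adversarial loss with
# MS-SSIM and L1 terms (assumes: pip install pytorch-msssim).
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim

def generator_objective(disc_logits_fake, fake_opt, real_opt,
                        alpha=0.84, beta=10.0):
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    structure = 1.0 - ms_ssim(fake_opt, real_opt, data_range=1.0)  # in [0, 1]
    pixel = F.l1_loss(fake_opt, real_opt)
    return adv + beta * (alpha * structure + (1 - alpha) * pixel)

# Toy usage with random tensors and stand-in PatchGAN-sized logits:
fake, real = torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256)
loss = generator_objective(torch.randn(2, 1, 30, 30), fake, real)
```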
49

Rizkinia, Mia, Nathaniel Faustine and Masahiro Okuda. "Conditional Generative Adversarial Networks with Total Variation and Color Correction for Generating Indonesian Face Photo from Sketch". Applied Sciences 12, no. 19 (5.10.2022): 10006. http://dx.doi.org/10.3390/app121910006.

Abstract:
Historically, hand-drawn face sketches have been commonly used by Indonesia's police force, especially to quickly describe a person's facial features when searching for fugitives based on eyewitness testimony. Several studies have aimed to increase the effectiveness of the method, such as comparing the facial sketch with the all-points bulletin (DPO in Indonesian terminology) or generating a facial composite. However, making facial composites with an application takes quite a long time, and when these composites are compared directly to the DPO, the accuracy is insufficient; the technique therefore requires further development. This study applies a conditional generative adversarial network (cGAN) to convert a face sketch image into a color face photo, with an additional total variation (TV) term in the loss function to improve the visual quality of the resulting image. Furthermore, we apply a color correction to adjust the resulting skin tone to match that of the ground truth. The face image dataset was collected from various sources matching Indonesian skin tones and facial features. We aim to provide a method for Indonesian face sketch-to-photo generation that visualizes facial features more accurately than the conventional method. The approach produces visually realistic photos from face sketches, with true skin tones.
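
An anisotropic total variation term of the kind added to the loss here can be sketched in a few lines; the weight is illustrative, not the paper's tuned value.

```python
# Sketch of an anisotropic total variation (TV) penalty: it discourages
# large differences between neighbouring pixels, smoothing artifacts in
# the generated face photo.
import torch

def tv_loss(img, weight=1e-5):
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()  # vertical diffs
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()  # horizontal diffs
    return weight * (dh + dw)

penalty = tv_loss(torch.rand(4, 3, 128, 128))
```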
50

Ramdani, Ahmad, Andika Perbawa, Ingrid Puspita and Volker Vahrenkamp. "Acoustic impedance to outcrop: Presenting near-surface seismic data as a virtual outcrop in carbonate analog studies". Leading Edge 41, no. 9 (September 2022): 599–610. http://dx.doi.org/10.1190/tle41090599.1.

Abstract:
Outcrop analogs play a central role in understanding subseismic interwell depositional facies heterogeneity of carbonate reservoirs. Outcrop geologists rarely utilize near-surface seismic data due to the limited vertical resolution and difficulty visualizing seismic signals as “band-limited rocks.” This study proposes a methodology using a combination of forward modeling and conditional generative adversarial network (cGAN) to translate seismic-derived acoustic impedance (AI) into a pseudo-high-resolution virtual outcrop. We tested the methodology on the Hanifa reservoir analog outcropping in Wadi Birk, Saudi Arabia. We interpret a 4 km long outcrop photomosaic from a digital outcrop model (DOM) for its depositional facies, populate the DOM with AI properties, and forward calculate the band-limited AI of the DOM facies using colored inversion. We pair the synthetic band-limited AI with DOM facies and train them using a cGAN. Similarly, we pair the DOM facies with outcrop photos and train them using a cGAN. We chain the two trained networks and apply them to the approximately 600 m long seismic-derived AI data acquired just behind the outcrop. The result translates AI images into a virtual outcrop “behind-the-outcrop” model. This virtual outcrop model is a visual medium that operates at a resolution and format more familiar to outcrop geologists. This model resolves subseismic stratigraphic features such as the intricate downlap-onlap stratal termination at scales of tens of centimeters and the outline of buildup facies, which are otherwise unresolvable in the band-limited AI.