A ready-made bibliography on the topic "CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)"

Create accurate references in APA, MLA, Chicago, Harvard, and many other citation styles


Browse lists of current articles, books, dissertations, abstracts, and other scholarly sources on the topic "CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)".

Next to every work in the list there is an "Add to bibliography" button. Click it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever such details are available in the source's metadata.

Journal articles on the topic "CONDITIONAL GENERATIVE ADVERSARIAL NETWORKS (CGAN)"

1

Zhou, Guoqiang, Yi Fan, Jiachen Shi, Yuyuan Lu, and Jun Shen. "Conditional Generative Adversarial Networks for Domain Transfer: A Survey." Applied Sciences 12, no. 16 (2022): 8350. http://dx.doi.org/10.3390/app12168350.

Abstract:
Generative Adversarial Network (GAN), deemed as a powerful deep-learning-based silver bullet for intelligent data generation, has been widely used in multi-disciplines. Furthermore, conditional GAN (CGAN) introduces artificial control information on the basis of GAN, which is more practical for many specific fields, though it is mostly used in domain transfer. Researchers have proposed numerous methods to tackle diverse tasks by employing CGAN. It is now a timely and also critical point to review these achievements. We first give a brief introduction to the principle of CGAN, then focus on how to improve it to achieve better performance and how to evaluate such performance across the variants. Afterward, the main applications of CGAN in domain transfer are presented. Finally, as another major contribution, we also list the current problems and challenges of CGAN.
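For readers new to the topic, the sketch below illustrates the basic conditioning mechanism this survey refers to: both the generator and the discriminator receive a class label alongside their usual inputs. It is a generic, illustrative PyTorch example; the layer sizes, embedding scheme, and dimensions are assumptions, not taken from the paper.

```python
# Minimal, illustrative conditional GAN: the condition y is concatenated with the
# generator's noise and with the discriminator's input (sizes are placeholders).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(num_classes, num_classes)   # condition embedding
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Concatenate noise with the embedded condition before generating.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class Discriminator(nn.Module):
    def __init__(self, in_dim=784, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(num_classes, num_classes)
        self.net = nn.Sequential(
            nn.Linear(in_dim + num_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        # The discriminator judges "real vs. fake" given the same condition.
        return self.net(torch.cat([x, self.embed(y)], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, 100)
y = torch.randint(0, 10, (8,))
fake = G(z, y)        # samples conditioned on class labels y
score = D(fake, y)    # conditional real/fake score
```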
2

Lee, Minhyeok, and Junhee Seok. "Estimation with Uncertainty via Conditional Generative Adversarial Networks." Sensors 21, no. 18 (2021): 6194. http://dx.doi.org/10.3390/s21186194.

Abstract:
Conventional predictive Artificial Neural Networks (ANNs) commonly employ deterministic weight matrices; therefore, their prediction is a point estimate. Such a deterministic nature in ANNs causes the limitations of using ANNs for medical diagnosis, law problems, and portfolio management in which not only discovering the prediction but also the uncertainty of the prediction is essentially required. In order to address such a problem, we propose a predictive probabilistic neural network model, which corresponds to a different manner of using the generator in the conditional Generative Adversarial Network (cGAN) that has been routinely used for conditional sample generation. By reversing the input and output of ordinary cGAN, the model can be successfully used as a predictive model; moreover, the model is robust against noises since adversarial training is employed. In addition, to measure the uncertainty of predictions, we introduce the entropy and relative entropy for regression problems and classification problems, respectively. The proposed framework is applied to stock market data and an image classification task. As a result, the proposed framework shows superior estimation performance, especially on noisy data; moreover, it is demonstrated that the proposed framework can properly estimate the uncertainty of predictions.
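A hypothetical sketch of the "reversed cGAN" idea described in this abstract: the generator is conditioned on the observed features rather than on a class label, and repeated sampling over the noise input yields both a prediction and an entropy-based uncertainty estimate. The network, layer sizes, and sample count below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative predictive generator: condition on features x, sample over noise z,
# then use the entropy of the averaged class probabilities as an uncertainty measure.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveGenerator(nn.Module):
    def __init__(self, x_dim=16, noise_dim=8, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + noise_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))   # class logits for input x

G = PredictiveGenerator()
x = torch.randn(1, 16)                              # a single observation
samples = []
for _ in range(100):                                # Monte Carlo over the noise input
    z = torch.randn(1, 8)
    samples.append(F.softmax(G(x, z), dim=1))
p = torch.cat(samples).mean(dim=0)                  # averaged class probabilities
entropy = -(p * p.clamp_min(1e-12).log()).sum()     # uncertainty of the prediction
print(p, entropy)
```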
3

Zhang, Hao, and Wenlei Wang. "Imaging Domain Seismic Denoising Based on Conditional Generative Adversarial Networks (CGANs)." Energies 15, no. 18 (2022): 6569. http://dx.doi.org/10.3390/en15186569.

Abstract:
A high-resolution seismic image is the key factor for helping geophysicists and geologists to recognize the geological structures below the subsurface. More and more complex geology has challenged traditional techniques and resulted in a need for more powerful denoising methodologies. The deep learning technique has shown its effectiveness in many different types of tasks. In this work, we used a conditional generative adversarial network (CGAN), which is a special type of deep neural network, to conduct the seismic image denoising process. We considered the denoising task as an image-to-image translation problem, which transfers a raw seismic image with multiple types of noise into a reflectivity-like image without noise. We used several seismic models with complex geology to train the CGAN. In this experiment, the CGAN’s performance was promising. The trained CGAN could maintain the structure of the image undistorted while suppressing multiple types of noise.
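The denoising task described here follows the standard conditional image-to-image translation recipe. The sketch below shows one generator update step that combines an adversarial term with an L1 term discouraging structural distortion; the tiny networks, random stand-in data, and the L1 weight of 100 are placeholders, not the architecture or settings used in the paper.

```python
# Illustrative image-to-image denoising step: the generator maps a noisy section to a
# clean one, the discriminator sees (input, output) pairs, and L1 preserves structure.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))           # noisy -> denoised (placeholder)
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))           # judges (noisy, image) pairs

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

noisy = torch.randn(4, 1, 64, 64)
clean = torch.randn(4, 1, 64, 64)                            # stand-in training pair

fake = G(noisy)
d_fake = D(torch.cat([noisy, fake], dim=1))
g_loss = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clean)
g_loss.backward()                                            # generator update step
```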
4

Zand, Jaleh, and Stephen Roberts. "Mixture Density Conditional Generative Adversarial Network Models (MD-CGAN)." Signals 2, no. 3 (2021): 559–69. http://dx.doi.org/10.3390/signals2030034.

Abstract:
Generative Adversarial Networks (GANs) have gained significant attention in recent years, with impressive applications highlighted in computer vision, in particular. Compared to such examples, however, there have been more limited applications of GANs to time series modeling, including forecasting. In this work, we present the Mixture Density Conditional Generative Adversarial Model (MD-CGAN), with a focus on time series forecasting. We show that our model is capable of estimating a probabilistic posterior distribution over forecasts and that, in comparison to a set of benchmark methods, the MD-CGAN model performs well, particularly in situations where noise is a significant component of the observed time series. Further, by using a Gaussian mixture model as the output distribution, MD-CGAN offers posterior predictions that are non-Gaussian.
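A hedged sketch of the mixture-density output idea: instead of emitting a point forecast, the conditional generator head produces Gaussian-mixture parameters, so the predictive distribution can be non-Gaussian. The component count, window length, and layer sizes below are illustrative assumptions, not the MD-CGAN configuration.

```python
# Illustrative mixture-density head for a conditional generator: given a history window
# and noise, it outputs mixture weights, means, and scales, from which a forecast is drawn.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureDensityHead(nn.Module):
    def __init__(self, cond_dim=24, noise_dim=8, n_components=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(cond_dim + noise_dim, 64), nn.ReLU())
        self.pi = nn.Linear(64, n_components)          # mixture weights (logits)
        self.mu = nn.Linear(64, n_components)          # component means
        self.log_sigma = nn.Linear(64, n_components)   # component scales (log space)

    def forward(self, history, z):
        h = self.body(torch.cat([history, z], dim=1))
        return F.softmax(self.pi(h), dim=1), self.mu(h), self.log_sigma(h).exp()

head = MixtureDensityHead()
history = torch.randn(1, 24)                           # conditioning window of past values
pi, mu, sigma = head(history, torch.randn(1, 8))
k = torch.multinomial(pi, 1)                           # pick a mixture component...
forecast = torch.normal(mu.gather(1, k), sigma.gather(1, k))   # ...and sample a forecast
```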
5

Zhen, Hao, Yucheng Shi, Jidong J. Yang, and Javad Mohammadpour Vehni. "Co-supervised learning paradigm with conditional generative adversarial networks for sample-efficient classification." Applied Computing and Intelligence 3, no. 1 (2022): 13–26. http://dx.doi.org/10.3934/aci.2023002.

Abstract:
Classification using supervised learning requires annotating a large amount of class-balanced data for model training and testing. This has practically limited the scope of applications with supervised learning, in particular deep learning. To address the issues associated with limited and imbalanced data, this paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN), in which a conditional generative adversarial network (CGAN) is trained alongside the classifier and supplements semantics-conditioned, confidence-aware synthesized examples to the annotated data during the training process. In this setting, the CGAN not only serves as a co-supervisor but also provides complementary quality examples to aid the classifier training in an end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier. For the comparison, all classifiers in the above methods adopt the ResNet-18 architecture as the backbone. Particularly, for the Street View House Numbers dataset, using 5% of the training data, a test accuracy of 90.26% is achieved by SEC-CGAN, as opposed to 88.59% by EC-GAN and 87.17% by the baseline classifier; for the highway image dataset, using 10% of the training data, a test accuracy of 98.27% is achieved by SEC-CGAN, compared to 97.84% by EC-GAN and 95.52% by the baseline classifier.
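A simplified sketch of the confidence-aware co-supervision idea: class-conditioned synthetic samples are added to the classifier's loss only when their predicted confidence exceeds a threshold. The stand-in classifier, stand-in generator, 0.9 threshold, and 0.5 weight are assumptions for illustration, not the SEC-CGAN implementation.

```python
# Illustrative co-supervised step: filter CGAN samples by the classifier's own confidence
# and add a weighted loss term for the samples that pass the filter.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))     # stand-in for ResNet-18
generator = lambda z, y: torch.randn(z.size(0), 1, 32, 32)           # stand-in trained CGAN
ce = nn.CrossEntropyLoss()

x, y = torch.randn(16, 1, 32, 32), torch.randint(0, 10, (16,))       # small labeled batch
z, y_fake = torch.randn(16, 100), torch.randint(0, 10, (16,))
x_fake = generator(z, y_fake)                                        # semantics-conditioned samples

with torch.no_grad():
    conf = F.softmax(classifier(x_fake), dim=1).max(dim=1).values
keep = conf > 0.9                                                    # confidence-aware filter

loss = ce(classifier(x), y)
if keep.any():
    loss = loss + 0.5 * ce(classifier(x_fake[keep]), y_fake[keep])   # weighted synthetic term
loss.backward()
```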
6

Huang, Yubo, and Zhong Xiang. "A Metal Character Enhancement Method based on Conditional Generative Adversarial Networks." Journal of Physics: Conference Series 2284, no. 1 (2022): 012003. http://dx.doi.org/10.1088/1742-6596/2284/1/012003.

Abstract:
In order to improve the accuracy and stability of metal stamping character (MSC) automatic recognition technology, a metal stamping character enhancement algorithm based on conditional Generative Adversarial Networks (cGAN) is proposed. We identify character regions manually through region labeling and the Unsharp Mask (USM) sharpening algorithm, and make the cGAN learn the most effective loss function in the adversarial training process to guide the generated model and distinguish character features from interference features, so as to achieve contrast enhancement between character and non-character regions. Qualitative and quantitative analyses show that the generated results have satisfactory image quality, and that the maximum character recognition rate of the recognition network ASTER is improved by 11.03%.
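The labeling pipeline above mentions Unsharp Mask (USM) sharpening; the following is a generic USM implementation with OpenCV, included only to clarify the operation. The blur sigma and sharpening amount are illustrative, not the authors' settings.

```python
# Generic unsharp masking: sharpen by adding back the difference between the image
# and a Gaussian-blurred copy of it.
import cv2
import numpy as np

def unsharp_mask(image, sigma=2.0, amount=1.5):
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    sharpened = cv2.addWeighted(image, 1.0 + amount, blurred, -amount, 0)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in character image
sharp = unsharp_mask(img)
```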
7

Kyslytsyna, Anastasiia, Kewen Xia, Artem Kislitsyn, Isselmou Abd El Kader, and Youxi Wu. "Road Surface Crack Detection Method Based on Conditional Generative Adversarial Networks." Sensors 21, no. 21 (2021): 7405. http://dx.doi.org/10.3390/s21217405.

Abstract:
Constant monitoring of road surfaces helps to show the urgency of deterioration or problems in the road construction and to improve the safety level of the road surface. Conditional generative adversarial networks (cGAN) are a powerful tool to generate or transform the images used for crack detection. The advantage of this method is the highly accurate results in vector-based images, which are convenient for mathematical analysis of the detected cracks at a later time. However, images taken under established parameters are different from images in real-world contexts. Another potential problem of cGAN is that it is difficult to detect the shape of an object when the resulting accuracy is low, which can seriously affect any further mathematical analysis of the detected crack. To tackle this issue, this paper proposes a method called improved cGAN with attention gate (ICGA) for roadway surface crack detection. To obtain a more accurate shape of the detected target object, ICGA establishes a multi-level model with independent stages. In the first stage, everything except the road is treated as noise and removed from the image. These images are stored in a new dataset. In the second stage, ICGA determines the cracks. Therefore, ICGA focuses on the redistribution of cracks, not the auxiliary elements in the image. ICGA adds two attention gates to a U-net architecture and improves the segmentation capacities of the generator in pix2pix. Extensive experimental results on dashboard camera images of the Unsupervised Llamas dataset show that our method has better performance than other state-of-the-art methods.
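The attention gates mentioned here follow the general Attention U-Net pattern: skip-connection features are re-weighted by coefficients computed from the decoder signal, so the generator focuses on crack-like regions. The sketch below is a generic gate with assumed channel sizes and equal feature resolutions; it does not reproduce the ICGA architecture.

```python
# Illustrative attention gate: project skip and gating features, combine them, and use
# a sigmoid map to re-weight the skip connection before it enters the decoder.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)     # project skip (encoder) features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)     # project gating (decoder) features
        self.psi = nn.Conv2d(inter_ch, 1, 1)           # attention coefficients
        self.relu, self.sigmoid = nn.ReLU(), nn.Sigmoid()

    def forward(self, x, g):
        alpha = self.sigmoid(self.psi(self.relu(self.w_x(x) + self.w_g(g))))
        return x * alpha                                # re-weighted skip connection

gate = AttentionGate(skip_ch=64, gate_ch=64, inter_ch=32)
skip = torch.randn(1, 64, 32, 32)                      # encoder skip features
decoder = torch.randn(1, 64, 32, 32)                   # upsampled decoder features
attended = gate(skip, decoder)                         # passed on to the U-net decoder
```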
8

Link, Patrick, Johannes Bodenstab, Lars Penter, and Steffen Ihlenfeldt. "Metamodeling of a deep drawing process using conditional Generative Adversarial Networks." IOP Conference Series: Materials Science and Engineering 1238, no. 1 (2022): 012064. http://dx.doi.org/10.1088/1757-899x/1238/1/012064.

Abstract:
Optimization tasks as well as quality predictions for process control require fast-responding process metamodels. A common strategy for sheet metal forming is building fast data-driven metamodels based on the results of Finite Element (FE) process simulations. However, FE simulations with complex material models and large parts with many elements consume extensive computational time. Hence, one major challenge in developing metamodels is to achieve good prediction precision with limited data, while these predictions still need to be robust against varying input parameters. Therefore, the aim of this study was to evaluate whether conditional Generative Adversarial Networks (cGAN) are applicable for predicting the results of FE deep drawing simulations, since cGANs have achieved high performance in similar tasks in previous work. This involves investigating the influence of the data required to achieve a defined precision and to predict, e.g., wrinkling phenomena. Results show that the cGAN used in this study was able to predict forming results with an average absolute deviation of sheet thickness of 0.025 mm, even when using a comparably small amount of data.
9

Falahatraftar, Farnoush, Samuel Pierre, and Steven Chamberland. "A Conditional Generative Adversarial Network Based Approach for Network Slicing in Heterogeneous Vehicular Networks." Telecom 2, no. 1 (2021): 141–54. http://dx.doi.org/10.3390/telecom2010009.

Abstract:
Heterogeneous Vehicular Network (HetVNET) is a highly dynamic type of network that changes very quickly. Regarding this feature of HetVNETs and the emerging notion of network slicing in 5G technology, we propose a hybrid intelligent Software-Defined Network (SDN) and Network Functions Virtualization (NFV) based architecture. In this paper, we apply Conditional Generative Adversarial Network (CGAN) to augment the information of successful network scenarios that are related to network congestion and dynamicity. The results show that the proposed CGAN can be trained in order to generate valuable data. The generated data are similar to the real data and they can be used in blueprints of HetVNET slices.
10

Aida, Saori, Junpei Okugawa, Serena Fujisaka, Tomonari Kasai, Hiroyuki Kameda, and Tomoyasu Sugiyama. "Deep Learning of Cancer Stem Cell Morphology Using Conditional Generative Adversarial Networks." Biomolecules 10, no. 6 (2020): 931. http://dx.doi.org/10.3390/biom10060931.

Abstract:
Deep-learning workflows of microscopic image analysis are sufficient for handling the contextual variations because they employ biological samples and have numerous tasks. The use of well-defined annotated images is important for the workflow. Cancer stem cells (CSCs) are identified by specific cell markers. These CSCs were extensively characterized by the stem cell (SC)-like gene expression and proliferation mechanisms for the development of tumors. In contrast, the morphological characterization remains elusive. This study aims to investigate the segmentation of CSCs in phase contrast imaging using conditional generative adversarial networks (CGAN). Artificial intelligence (AI) was trained using fluorescence images of the Nanog-Green fluorescence protein, the expression of which was maintained in CSCs, and the phase contrast images. The AI model segmented the CSC region in the phase contrast image of the CSC cultures and tumor model. By selecting images for training, several values for measuring segmentation quality increased. Moreover, nucleus fluorescence overlaid-phase contrast was effective for increasing the values. We show the possibility of mapping CSC morphology to the condition of undifferentiation using deep-learning CGAN workflows.
More sources