Journal articles on the topic "GAN Generative Adversarial Network"

To see other types of publications on this topic, follow the link: GAN Generative Adversarial Network.

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the top 50 journal articles for your research on the topic "GAN Generative Adversarial Network".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen source in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these details are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Thakur, Amey. "Generative Adversarial Networks." International Journal for Research in Applied Science and Engineering Technology 9, no. 8 (August 31, 2021): 2307–25. http://dx.doi.org/10.22214/ijraset.2021.37723.

Abstract: Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network, which has only recently emerged. The goal of GAN is to use unsupervised learning to analyse the distribution of data and create more accurate results. The GAN allows the learning of deep representations in the absence of substantial labelled training information. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to get the reader conversant with the GAN framework as well as to provide the background information on Generative Adversarial Networks, including the structure of both the generator and discriminator, as well as the various GAN variants along with their respective architectures. Applications of GANs are also discussed with examples. Keywords: Generative Adversarial Networks (GANs), Generator, Discriminator, Supervised and Unsupervised Learning, Discriminative and Generative Modelling, Backpropagation, Loss Functions, Machine Learning, Deep Learning, Neural Networks, Convolutional Neural Network (CNN), Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), Information Maximizing GAN (InfoGAN), Stacked GAN (StackGAN), Pix2Pix, Wasserstein GAN (WGAN), Progressive Growing GAN (ProGAN), BigGAN, StyleGAN, CycleGAN, Super-Resolution GAN (SRGAN), Image Synthesis, Image-to-Image Translation.
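The abstract above describes the core GAN setup: a generator tries to fool a discriminator, which in turn learns to tell real samples from generated ones. As a hedged illustration (not code from the cited paper; all names are ours), the two losses that are alternately minimized in standard GAN training can be sketched as:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator loss: -mean[log D(x) + log(1 - D(G(z)))].

    d_real: discriminator probabilities on real samples, d_fake: on fakes."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    """Non-saturating generator loss: -mean[log D(G(z))]."""
    return -np.mean(np.log(d_fake))

# At the theoretical equilibrium the discriminator outputs 0.5 everywhere:
d_real = np.full(4, 0.5)
d_fake = np.full(4, 0.5)
print(round(d_loss(d_real, d_fake), 4))  # 2*ln(2) ≈ 1.3863
print(round(g_loss(d_fake), 4))          # ln(2) ≈ 0.6931
```

In practice the two losses are minimized in alternation with gradient descent on the discriminator's and generator's parameters respectively; the 0.5-everywhere case above is just the fixed point at which neither network can improve.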
2

Iranmanesh, Seyed Mehdi, and Nasser M. Nasrabadi. "HGAN: Hybrid generative adversarial network." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 8927–38. http://dx.doi.org/10.3233/jifs-201202.

Abstract:
In this paper, we present a simple approach to train Generative Adversarial Networks (GANs) in order to avoid the mode collapse issue. Implicit models such as GANs tend to generate better samples than explicit models that are trained on tractable data likelihood. However, GANs overlook the explicit data density characteristics, which leads to undesirable quantitative evaluations and mode collapse. To bridge this gap, we propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model and support both the adversarial and likelihood frameworks in a joint training manner that diversifies the estimated density in order to cover different modes. We propose to use an adversarial network to transfer knowledge from an autoregressive model (teacher) to the generator (student) of a GAN model. A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model information in addition to the simple GAN training approach. We conduct extensive experiments on real-world datasets (i.e., MNIST, CIFAR-10, STL-10) to demonstrate the effectiveness of the proposed HGAN under qualitative and quantitative evaluations. The experimental results show the superiority and competitiveness of our method compared to the baselines.
3

Naman and Sudha Narang, Chaudhary Sarimurrab, Ankita Kesari. "Human Face Generation using Deep Convolution Generative Adversarial Network." January 2021 7, no. 01 (January 29, 2021): 114–20. http://dx.doi.org/10.46501/ijmtst070127.

Abstract:
Generative models have gained considerable attention in the field of unsupervised learning via a new and practical framework called Generative Adversarial Networks (GANs), due to their outstanding data generation capability. Many GAN models have been proposed, and several practical applications have emerged in various domains of computer vision and machine learning. Despite GANs' excellent success, there are still obstacles to stable training. In this work, we aim to generate human faces from unlabelled data with the help of Deep Convolutional Generative Adversarial Networks. The applications for generating faces are vast in the fields of image processing, entertainment, and other such industries. Our resulting model is successfully able to generate human faces from the given unlabelled data and random noise.
4

Chi, Wanle, Yun Huoy Choo, and Ong Sing Goh. "Review of Generative Adversarial Networks in Image Generation." Journal of Advanced Computational Intelligence and Intelligent Informatics 26, no. 1 (January 20, 2022): 3–7. http://dx.doi.org/10.20965/jaciii.2022.p0003.

Abstract:
The generative adversarial network (GAN) model generates and discriminates images using an adversarial, competitive strategy to produce high-quality images. The implementation of GANs in different fields is helpful for generating samples that are not easy to obtain. Image generation can help machine learning to balance data and improve the accuracy of the classifier. This paper introduces the principles of the GAN model and analyzes the advantages and disadvantages of improved GANs. The applications of GANs in image generation are analyzed. Finally, the problems of GANs in image generation are summarized.
5

C, Ms Faseela Kathun. "Generating faces From the Sketch Using GAN." International Journal for Research in Applied Science and Engineering Technology 9, no. 10 (October 31, 2021): 1–3. http://dx.doi.org/10.22214/ijraset.2021.38259.

Abstract: In most cases, sketch images simply show basic profile details and do not include facial detail. As a result, precisely generating facial features is difficult. Using a generative adversarial network and attributes, we propose an image translation network. A feature-extracting network and a down-sampling/up-sampling network make up the generator network. There is a generator and a discriminator in GANs. The Generator creates fake data samples (images, audio, etc.) intended to mislead the Discriminator. On the other hand, the Discriminator attempts to distinguish between real and fake samples. Keywords: Deep Learning, Generative Adversarial Networks, Image Translation, face generation, skip-connection.
7

Cai, Zhipeng, Zuobin Xiong, Honghui Xu, Peng Wang, Wei Li, and Yi Pan. "Generative Adversarial Networks." ACM Computing Surveys 54, no. 6 (July 2021): 1–38. http://dx.doi.org/10.1145/3459992.

Abstract:
Generative Adversarial Networks (GANs) have promoted a variety of applications in computer vision and natural language processing, among others, due to their generative model's compelling ability to generate realistic examples plausibly drawn from an existing distribution of samples. GAN not only provides impressive performance on data generation-based tasks but also stimulates privacy- and security-oriented research because of its game-theoretic optimization strategy. Unfortunately, there are no comprehensive surveys on GAN in privacy and security, which motivates this article to provide a systematic summary. The existing works are classified into proper categories based on privacy and security functions, and this survey conducts a comprehensive analysis of their advantages and drawbacks. Considering that GAN in privacy and security is still at a very early stage and has posed unique challenges that are yet to be well addressed, this article also sheds light on some potential privacy and security applications of GAN and elaborates on some future research directions.
8

Tey, Fu Jie, Tin-Yu Wu, Yueh Wu, and Jiann-Liang Chen. "Generative Adversarial Network for Simulation of Load Balancing Optimization in Mobile Networks." 網際網路技術學刊 23, no. 2 (March 2022): 297–304. http://dx.doi.org/10.53106/160792642022032302010.

Abstract:
The commercial operation of 5G networks is almost ready to be launched, but problems related to the wireless environment, such as load balancing, remain. Many load balancing methods have been proposed, but they were implemented in simulation environments that greatly differ from 5G networks. Current load balancing algorithms, on the other hand, focus on the selection of appropriate Wi-Fi or macro and small cells for Device-to-Device (D2D) communications, but Wi-Fi facilities and small cells are not available all the time. For this reason, we propose to use the macro cells that provide large coverage to achieve load balancing. By combining a Generative Adversarial Network (GAN) with the ns-3 network simulator, this paper uses neural networks in TensorFlow to optimize load balancing of mobile networks, increase the data throughput, and reduce the packet loss rate. In addition, to discuss the load balancing problem, we take the data produced by the ns-3 network simulator as the real data for the GAN.
9

Tu, Jun, Willies Ogola, Dehong Xu, and Wei Xie. "Intrusion Detection Based on Generative Adversarial Network of Reinforcement Learning Strategy for Wireless Sensor Networks." International Journal of Circuits, Systems and Signal Processing 16 (January 13, 2022): 478–82. http://dx.doi.org/10.46300/9106.2022.16.58.

Abstract:
Due to the wireless nature of wireless sensor networks (WSN), the network can be deployed in most unattended environments, which makes the networks more vulnerable to attackers who may listen to the traffic and inject their own nodes into the sensor network. In our work, we investigate a novel machine learning algorithm for intrusion detection based on a reinforcement learning (RL) strategy using a generative adversarial network (GAN) for WSN, which can automatically detect intrusions or malicious attacks on the network. We combine the Actor-Critic algorithm in RL with a GAN in a simulated WSN. The GAN is employed as part of the RL environment to generate fake data with possible attacks, similar to the real data generated by the sensor networks. Its main aim is to confuse the adversarial network in differentiating between the real and fake data with possible attacks. The experimental results, based on the GAN environment and Network Simulator 3 (NS3), illustrate that the Actor-Critic&GAN algorithm enhances the security of the simulated WSN by protecting the network's data against adversaries and improves the accuracy of the detection.
10

Han, Ligong, Ruijiang Gao, Mun Kim, Xin Tao, Bo Liu, and Dimitris Metaxas. "Robust Conditional GAN from Uncertainty-Aware Pairwise Comparisons." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10909–16. http://dx.doi.org/10.1609/aaai.v34i07.6723.

Abstract:
Conditional generative adversarial networks have shown exceptional generation performance over the past few years. However, they require large numbers of annotations. To address this problem, we propose a novel generative adversarial network utilizing weak supervision in the form of pairwise comparisons (PC-GAN) for image attribute editing. In the light of Bayesian uncertainty estimation and noise-tolerant adversarial training, PC-GAN can estimate attribute rating efficiently and demonstrate robust performance in noise resistance. Through extensive experiments, we show both qualitatively and quantitatively that PC-GAN performs comparably with fully-supervised methods and outperforms unsupervised baselines. Code and Supplementary can be found on the project website*.
11

Liu, Aishan, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. "Perceptual-Sensitive GAN for Generating Adversarial Patches." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1028–35. http://dx.doi.org/10.1609/aaai.v33i01.33011028.

Abstract:
Deep neural networks (DNNs) are vulnerable to adversarial examples, where inputs with imperceptible perturbations mislead DNNs to incorrect results. Recently, the adversarial patch, with noise confined to a small and localized patch, emerged for its easy accessibility in the real world. However, existing attack strategies are still far from generating visually natural patches with strong attacking ability, since they often ignore the perceptual sensitivity of the attacked network to the adversarial patch, including both the correlations with the image context and the visual attention. To address this problem, this paper proposes a perceptual-sensitive generative adversarial network (PS-GAN) that can simultaneously enhance the visual fidelity and the attacking ability of the adversarial patch. To improve the visual fidelity, we treat the patch generation as a patch-to-patch translation via an adversarial process, feeding in any type of seed patch and outputting a similar adversarial patch with high perceptual correlation with the attacked image. To further enhance the attacking ability, an attention mechanism coupled with adversarial generation is introduced to predict the critical attacking areas for placing the patches, which can help produce more realistic and aggressive patches. Extensive experiments under semi-whitebox and black-box settings on two large-scale datasets, GTSRB and ImageNet, demonstrate that the proposed PS-GAN outperforms state-of-the-art adversarial patch attack methods.
12

Kumar, Dheeraj, Mayuri A. Mehta, and Indranath Chatterjee. "Empirical Analysis of Deep Convolutional Generative Adversarial Network for Ultrasound Image Synthesis." Open Biomedical Engineering Journal 15, no. 1 (October 18, 2021): 71–77. http://dx.doi.org/10.2174/1874120702115010071.

Abstract:
Introduction: Recent research on Generative Adversarial Networks (GANs) in the biomedical field has proven the effectiveness in generating synthetic images of different modalities. Ultrasound imaging is one of the primary imaging modalities for diagnosis in the medical domain. In this paper, we present an empirical analysis of the state-of-the-art Deep Convolutional Generative Adversarial Network (DCGAN) for generating synthetic ultrasound images. Aims: This work aims to explore the utilization of deep convolutional generative adversarial networks for the synthesis of ultrasound images and to leverage its capabilities. Background: Ultrasound imaging plays a vital role in healthcare for timely diagnosis and treatment. Increasing interest in automated medical image analysis for precise diagnosis has expanded the demand for a large number of ultrasound images. Generative adversarial networks have been proven beneficial for increasing the size of data by generating synthetic images. Objective: Our main purpose in generating synthetic ultrasound images is to produce a sufficient amount of ultrasound images with varying representations of a disease. Methods: DCGAN has been used to generate synthetic ultrasound images. It is trained on two ultrasound image datasets, namely, the common carotid artery dataset and nerve dataset, which are publicly available on Signal Processing Lab and Kaggle, respectively. Results: Results show that good quality synthetic ultrasound images are generated within 100 epochs of training of DCGAN. The quality of synthetic ultrasound images is evaluated using Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM). We have also presented some visual representations of the slices of generated images for qualitative comparison. Conclusion: Our empirical analysis reveals that synthetic ultrasound image generation using DCGAN is an efficient approach. 
Other: In future work, we plan to compare the quality of images generated through other adversarial methods such as conditional GAN, progressive GAN.
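The abstract above evaluates synthetic ultrasound images with MSE, PSNR, and SSIM. As an illustrative sketch (not code from the cited paper), PSNR follows directly from MSE, so the two metrics carry the same information on different scales:

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(max_val^2 / MSE).

    Returns infinity for identical images (MSE = 0)."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Two tiny 8-bit "images" differing by a constant offset of 10: MSE = 100.
a = np.zeros((2, 2))
b = np.full((2, 2), 10.0)
print(round(psnr(a, b), 2))  # 10*log10(255^2 / 100) ≈ 28.13
```

Higher PSNR indicates closer agreement with the reference; SSIM, by contrast, compares local luminance, contrast, and structure and is not derivable from MSE alone.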
13

Yang, Rui, Tian-Jie Cao, Xiu-Qing Chen, Feng-rong Zhang, and Yun-Yan Qi. "An Ensemble Denoiser Based on Generative Adversarial Networks to Eliminate Adversarial Perturbations." 電腦學刊 32, no. 5 (October 2021): 055–75. http://dx.doi.org/10.53106/199115992021103205005.

Abstract:
Deep neural networks (DNNs) have been applied in various machine learning tasks with the success of deep learning technologies. However, they are surprisingly vulnerable to adversarial examples, which can easily fool deep neural networks. Due to this drawback of deep neural networks, numerous methods have been proposed to eliminate the effect of adversarial examples. Although they do play a significant role in protecting deep neural networks, most of them have one flaw in common: they are only effective against certain types of adversarial examples. This paper proposes an ensemble denoiser based on generative adversarial networks (GANs) to protect deep neural networks. The proposed method aims to remove the effect of multiple types of adversarial examples before they are fed into deep neural networks. Therefore, it is model-independent and does not modify the deep neural networks' parameters. We employ a generative adversarial network to learn multiple mappings between adversarial examples and benign examples. Each mapping behaves differently for different types of adversarial examples. Therefore, we integrate these mappings as the ultimate method to defend against multiple types of adversarial examples. Experiments are conducted on the MNIST and CIFAR10 datasets. We compare the proposed method with several existing excellent methods. Results show that the proposed method achieves better performance than other methods when defending against multiple types of adversarial examples. The code is available at https://github.com/Afreadyang/ensemble-ape-gan.
14

Zhu, Lin, Du Baolin, Zhao Xiaomeng, Fang Shaoliang, Che Zhen, Zhou Junjie, and Chen Shumin. "Surface Defect Detection Method Based on Improved Semisupervised Multitask Generative Adversarial Network." Scientific Programming 2022 (January 19, 2022): 1–17. http://dx.doi.org/10.1155/2022/4481495.

Abstract:
Detection methods based on deep learning networks have attracted widespread interest in industrial manufacturing. However, the existing methods mainly depend on a large amount of training data with high-quality labels and also struggle with the simultaneous detection of multiple defects in practical settings. Therefore, in this article, a defect detection method based on an improved semisupervised multitask generative adversarial network (iSSMT-GAN) is proposed for generating better image features and improving classification accuracy. First, the training data are manually labeled according to the types of defects, and the generative adversarial network (GAN) is constructed according to the reliable annotations about defects. Thus, a classification decision surface for the detection of multitype defects is formed in the discriminative network of the GAN in an integrated manner. Moreover, the semisupervised samples generated by the discriminative network give the generative network feedback for enhancing the image features and avoiding gradient disappearance or overfitting. Finally, the experimental results show that the proposed method can generate high-quality image features compared with the classic GAN. Furthermore, the increase in classification accuracy of the RegNet model, MobileNet v3 model, VGG-19 model, and AlexNet-based transfer learning is 3.13%, 2.30%, 2.48%, and 3.12%, respectively.
15

Balint, Adam, and Graham Taylor. "Pal-GAN: Palette-conditioned Generative Adversarial Networks." Journal of Computational Vision and Imaging Systems 6, no. 1 (January 15, 2021): 1–5. http://dx.doi.org/10.15353/jcvis.v6i1.3536.

Abstract:
Recent advances in Generative Adversarial Networks (GANs) have shown great progress on a large variety of tasks. A common technique used to yield greater diversity of samples is conditioning on class labels. Conditioning on high-dimensional structured or unstructured information has also been shown to improve generation results, e.g. Image-to-Image translation. The conditioning information is provided in the form of human annotations, which can be expensive and difficult to obtain in cases where domain knowledge experts are needed. In this paper, we present an alternative: conditioning on low-dimensional structured information that can be automatically extracted from the input without the need for human annotators. Specifically, we propose a Palette-conditioned Generative Adversarial Network (Pal-GAN), an architecture-agnostic model that conditions on both a colour palette and a segmentation mask for high quality image synthesis. We show improvements on conditional consistency, intersection-over-union, and Fréchet inception distance scores. Additionally, we show that sampling colour palettes significantly changes the style of the generated images.
16

Du, Chuan, and Lei Zhang. "Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network." Remote Sensing 13, no. 21 (October 29, 2021): 4358. http://dx.doi.org/10.3390/rs13214358.

Abstract:
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to the attacks of adversarial examples and cause security problems. The adversarial attack can make a deep convolutional neural network (CNN)-based SAR-ATR system output the intended wrong label predictions by adding small adversarial perturbations to the SAR images. The existing optimization-based adversarial attack methods generate adversarial examples by minimizing the mean-squared reconstruction error, causing smooth target edge and blurry weak scattering centers in SAR images. In this paper, we build a UNet-generative adversarial network (GAN) to refine the generation of the SAR-ATR models’ adversarial examples. The UNet learns the separable features of the targets and generates the adversarial examples of SAR images. The GAN makes the generated adversarial examples approximate to real SAR images (with sharp target edge and explicit weak scattering centers) and improves the generation efficiency. We carry out abundant experiments using the proposed adversarial attack algorithm to fool the SAR-ATR models based on several advanced CNNs, which are trained on the measured SAR images of the ground vehicle targets. The quantitative and qualitative results demonstrate the high-quality adversarial example generation and excellent attack effectiveness and efficiency improvement.
17

Taheri, Shayan, Aminollah Khormali, Milad Salem, and Jiann-Shiun Yuan. "Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks." Big Data and Cognitive Computing 4, no. 2 (May 22, 2020): 11. http://dx.doi.org/10.3390/bdcc4020011.

Abstract:
In this work, we propose a novel defense system against adversarial examples, leveraging the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline using a combination of a pre-trained convolutional neural network and an external GAN, that is, the Pix2Pix conditional GAN, to determine the transformations between adversarial examples and clean data, and to automatically synthesize new adversarial examples. These adversarial examples are employed to strengthen the model, attack, and defense in an iterative pipeline. Our simulation results demonstrate the success of the proposed method.
18

Hazra, Debapriya, and Yung-Cheol Byun. "SynSigGAN: Generative Adversarial Networks for Synthetic Biomedical Signal Generation." Biology 9, no. 12 (December 3, 2020): 441. http://dx.doi.org/10.3390/biology9120441.

Abstract:
Automating medical diagnosis and training medical students with real-life situations requires the accumulation of large dataset variants covering all aspects of a patient's condition. To prevent the misuse of patients' private information, datasets are not always publicly available. There is a need to generate synthetic data that can be used for training for the advancement of public healthcare without intruding on patients' confidentiality. Currently, rules for generating synthetic data are predefined and require expert intervention, which limits the types and amount of synthetic data. In this paper, we propose a novel generative adversarial networks (GAN) model, named SynSigGAN, for automating the generation of any kind of synthetic biomedical signal. We have used a bidirectional grid long short-term memory network for the generator and a convolutional neural network for the discriminator of the GAN model. Our model can be applied to create new synthetic biomedical signals from a small original signal dataset. We have experimented with our model on four kinds of biomedical signals (electrocardiogram (ECG), electroencephalogram (EEG), electromyography (EMG), photoplethysmography (PPG)). The performance of our model is superior when compared to other traditional models and GAN models, as depicted by the evaluation metrics. Synthetic biomedical signals generated by our approach have been tested using other models, which could classify each signal with significantly high accuracy.
19

Dharejo, Fayaz Ali, Farah Deeba, Yuanchun Zhou, Bhagwan Das, Munsif Ali Jatoi, Muhammad Zawish, Yi Du, and Xuezhi Wang. "TWIST-GAN: Towards Wavelet Transform and Transferred GAN for Spatio-Temporal Single Image Super Resolution." ACM Transactions on Intelligent Systems and Technology 12, no. 6 (December 31, 2021): 1–20. http://dx.doi.org/10.1145/3456726.

Abstract:
Single Image Super-Resolution (SISR) produces high-resolution images with fine spatial detail from a remotely sensed image with low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs in this challenging task. However, the generated image still suffers from undesirable artifacts such as the absence of texture-feature representation and high-frequency information. We propose a frequency domain-based spatio-temporal remote sensing single image super-resolution technique that reconstructs the HR image with generative adversarial networks on various frequency bands (TWIST-GAN). We have introduced a new method incorporating Wavelet Transform (WT) characteristics and a transferred generative adversarial network. The LR image is split into various frequency bands by using the WT, whereas the transferred generative adversarial network predicts high-frequency components via a proposed architecture. Finally, the inverse wavelet transform produces a reconstructed image with super-resolution. The model is first trained on the external DIV2K dataset and validated with the UC Merced Landsat remote sensing dataset and Set14, with each image of size 256 × 256. Following that, transferred GANs are used to process spatio-temporal remote sensing images in order to minimize computation cost differences and improve texture information. The findings are compared qualitatively and quantitatively with the current state-of-the-art approaches. In addition, we saved about 43% of GPU memory during training and accelerated the execution of our simplified version by eliminating batch normalization layers.
20

Hemberg, Erik, Jamal Toutouh, Abdullah Al-Dujaili, Tom Schmiedlechner, and Una-May O’reilly. "Spatial Coevolution for Generative Adversarial Network Training." ACM Transactions on Evolutionary Learning and Optimization 1, no. 2 (July 23, 2021): 1–28. http://dx.doi.org/10.1145/3458845.

Abstract:
Generative Adversarial Networks (GANs) are difficult to train because of pathologies such as mode and discriminator collapse. Similar pathologies have been studied and addressed in competitive evolutionary computation by increased diversity. We study a system, Lipizzaner, that combines spatial coevolution with gradient-based learning to improve the robustness and scalability of GAN training. We study different features of Lipizzaner’s evolutionary computation methodology. Our ablation experiments determine that communication, selection, parameter optimization, and ensemble optimization each, as well as in combination, play critical roles. Lipizzaner succumbs less frequently to critical collapses and, as a side benefit, demonstrates improved performance. In addition, we show a GAN-training feature of Lipizzaner: the ability to train simultaneously with different loss functions in the gradient descent parameter learning framework of each GAN at each cell. We use an image generation problem to show that different loss function combinations result in models with better accuracy and more diversity in comparison to other existing evolutionary GAN models. Finally, Lipizzaner with multiple loss function options promotes the best model diversity while requiring a large grid size for adequate accuracy.
21

Chakraborty, Sujoy. "Camera Fingerprint estimation with a Generative Adversarial Network (GAN)." Electronic Imaging 2021, no. 4 (January 18, 2021): 336–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.4.mwsf-336.

Abstract:
For forensic analysis of digital images or videos, the PRNU, or camera fingerprint, is the most important characteristic for source attribution and manipulation localization. Typically, a good estimate of the PRNU is obtained by computing its Maximum Likelihood estimate from noise residuals of a large number of flat-field images captured by the camera. In this paper, we propose a novel approach to estimating the fingerprint of a camera with a Generative Adversarial Network (GAN). The idea is to let the generator network learn a distribution from which PRNU samples are drawn after training of the two adversarial networks. Experimental results indicate that the GAN-generated PRNU yields state-of-the-art camera identification and manipulation localization results.
22

Fekri, Mohammad Navid, Ananda Mohon Ghosh, and Katarina Grolinger. "Generating Energy Data for Machine Learning with Recurrent Generative Adversarial Networks." Energies 13, no. 1 (December 26, 2019): 130. http://dx.doi.org/10.3390/en13010130.

Abstract:
The smart grid employs computing and communication technologies to embed intelligence into the power grid and, consequently, make the grid more efficient. Machine learning (ML) has been applied for tasks that are important for smart grid operation, including energy consumption and generation forecasting, anomaly detection, and state estimation. These ML solutions commonly require sufficient historical data; however, this data is often not readily available because of reasons such as data collection costs and concerns regarding security and privacy. This paper introduces a recurrent generative adversarial network (R-GAN) for generating realistic energy consumption data by learning from real data. Generative adversarial networks (GANs) have been mostly used for image tasks (e.g., image generation, super-resolution), but here they are used with time series data. Convolutional neural networks (CNNs) from image GANs are replaced with recurrent neural networks (RNNs) because of RNNs’ ability to capture temporal dependencies. To improve training stability and increase the quality of generated data, Wasserstein GAN (WGAN) and Metropolis-Hastings GAN (MH-GAN) approaches were applied. The accuracy is further improved by adding features created with ARIMA and the Fourier transform. Experiments demonstrate that data generated by R-GAN can be used for training energy forecasting models.
23

Gayadhankar, Kaustubh, Rishi Patel, Hrithik Lodha, and Swapnil Shinde. "Image plagiarism detection using GAN - (Generative Adversarial Network)." ITM Web of Conferences 40 (2021): 03013. http://dx.doi.org/10.1051/itmconf/20214003013.

Abstract:
Today, plagiarism detection is a very important concern because content originality is a client's primary requirement. Many people on the internet use others' images and gain publicity, while the owner of the image or data gets nothing out of it. Many users copy data or image features from other users and modify them slightly, or create an artificial replica. With sufficient computational power and volume of data, GAN models are capable of producing fake images that look very similar to real images. Such images are generally not detected by modern plagiarism systems. GAN stands for generative adversarial network. It has two neural networks working inside: the first is the generator, which generates a random image, and the second is the discriminator, which identifies whether the generated image is real or fake. In this paper, we propose a system that has been trained on both fake images (GAN-generated images) and real images and will help flag whether an image is plagiarised or real.
24

Kumar, Chandan, Amzad Choudhary, Gurpreet Singh, and Ms Deepti Gupta. "Enhanced Super-Resolution Using GAN." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (May 31, 2022): 2077–80. http://dx.doi.org/10.22214/ijraset.2022.42718.

Abstract:
Super-resolution reconstruction is an increasingly important area in computer vision. Super-resolution models based on generative adversarial networks are difficult to train and produce artifacts in their reconstruction results, despite breakthroughs in the accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks. In particular, the hallucinated details are often accompanied by unpleasant artifacts. This paper presents the ESRGAN model, which is also based on generative adversarial networks. To further enhance visual quality, we thoroughly study three key components of SRGAN (network architecture, adversarial loss, and perceptual loss) and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of an absolute value. Finally, we improve the perceptual loss by using features before activation, which provides stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality, with more realistic and natural textures than SRGAN.
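The relativistic average discriminator borrowed by ESRGAN can be illustrated with a small numeric sketch: instead of scoring each image absolutely via sigmoid(C(x)), the discriminator estimates how much more realistic a real image is than the *average* fake one, D_Ra(x_r) = sigmoid(C(x_r) − mean_f C(x_f)). The function name and the toy scores below are illustrative, not from the paper's code.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relativistic_avg(real_scores, fake_scores):
    """D_Ra for each raw (pre-sigmoid) real critic score, relative to the
    mean raw fake score, as in the relativistic average GAN formulation."""
    mean_fake = sum(fake_scores) / len(fake_scores)
    return [sigmoid(c - mean_fake) for c in real_scores]
```

When a real image's raw score equals the mean fake score, D_Ra is exactly 0.5, i.e., the discriminator considers them equally realistic.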
25

Yue, Fei, Chao Zhang, MingYang Yuan, Chen Xu, and YaLin Song. "Survey of Image Augmentation Based on Generative Adversarial Network." Journal of Physics: Conference Series 2203, no. 1 (February 1, 2022): 012052. http://dx.doi.org/10.1088/1742-6596/2203/1/012052.

Abstract:
In computer vision, image data is crucial for training neural network models. Sufficient training data can alleviate over-fitting during training and help the model reach an optimal solution. However, in many computer vision tasks it is neither easy nor cheap to obtain sufficient training samples. Image augmentation has therefore become a common method of increasing the number of training samples. The Generative Adversarial Network (GAN) is a generative machine learning method that can produce realistic images and provides a new solution for image augmentation. This article first introduces image augmentation and its four commonly used types of methods. Secondly, it introduces the basic principles of GANs and their direct and integrated uses in image augmentation, along with the typical methods used to evaluate whether the images produced by the networks meet requirements; it then analyzes the state of research on GANs in image augmentation. Finally, the problems and development trends of GAN models in image augmentation are summarized and discussed.
26

Chen, Shanxiong, Ye Yang, Xuxin Liu, and Shiyu Zhu. "Dual Discriminator GAN: Restoring Ancient Yi Characters." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (July 31, 2022): 1–23. http://dx.doi.org/10.1145/3490031.

Abstract:
In China, the damage to ancient Yi books is serious. Due to the lack of experts in ancient Yi, the restoration of these books is progressing very slowly. Artificial intelligence has been successful in the image and text domains, so automatic restoration of ancient books is feasible. In this article, a generative adversarial network with a dual discriminator (DDGAN) is designed to restore incomplete characters in ancient Yi literature. The DDGAN integrates a deep convolutional generative adversarial network with an ancient Yi comparison discriminator. Through two training stages, it iteratively optimizes the ancient Yi character generation networks to obtain the text generator, and the DDGAN model is optimized according to the loss of the comparison discriminator. The DDGAN model can generate characters to restore missing strokes in ancient Yi. Experiments show that the proposed method achieves a restoration rate of 77.3% when no more than one third of the characters are missing. This work is effective for the protection of ancient Yi books.
27

Zhou, Yingbo, Pengcheng Zhao, Weiqin Tong, and Yongxin Zhu. "CDL-GAN: Contrastive Distance Learning Generative Adversarial Network for Image Generation." Applied Sciences 11, no. 4 (February 3, 2021): 1380. http://dx.doi.org/10.3390/app11041380.

Abstract:
While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach called Contrastive Distance Learning GAN (CDL-GAN) in this paper. Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) into a principled framework to improve GAN performance. The CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors to strengthen image feature learning for the generator. The ChCD measures the sampling distance of the encoded images in Euclidean space to boost feature representations for the discriminator. We model the framework by employing a Siamese network as a module in GANs, without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
28

Yan, Li, Xingfen Tang, and Yi Zhang. "High Accuracy Interpolation of DEM Using Generative Adversarial Network." Remote Sensing 13, no. 4 (February 13, 2021): 676. http://dx.doi.org/10.3390/rs13040676.

Abstract:
Digital elevation model (DEM) interpolation is aimed at predicting the elevation values of unobserved locations, given a series of collected points. Over the years, the traditional interpolation methods have been widely used but can easily lead to accuracy degradation. In recent years, generative adversarial networks (GANs) have been proven to be more efficient than the traditional methods. However, the interpolation accuracy is not guaranteed. In this paper, we propose a GAN-based network named gated and symmetric-dilated U-net GAN (GSUGAN) for improved DEM interpolation, which performs visibly and quantitatively better than the traditional methods and the conditional encoder-decoder GAN (CEDGAN). We also discuss combinations of new techniques in the generator. This shows that the gated convolution and symmetric dilated convolution structure perform slightly better. Furthermore, based on the performance of the different methods, it was concluded that the Convolutional Neural Network (CNN)-based method has an advantage in the quantitative accuracy but the GAN-based method can obtain a better visual quality, especially in complex terrains. In summary, in this paper, we propose a GAN-based network for improved DEM interpolation and we further illustrate the GAN-based method’s performance compared to that of the CNN-based method.
29

Kim, Chung-Il, Meejoung Kim, Seungwon Jung, and Eenjun Hwang. "Simplified Fréchet Distance for Generative Adversarial Nets." Sensors 20, no. 6 (March 11, 2020): 1548. http://dx.doi.org/10.3390/s20061548.

Abstract:
We introduce a distance metric between two distributions and propose a Generative Adversarial Network (GAN) model: the Simplified Fréchet distance (SFD) and the Simplified Fréchet GAN (SFGAN). Although the data generated by GANs are similar to real data, GANs often undergo unstable training due to their adversarial structure. A possible solution to this problem is considering the Fréchet distance (FD). However, FD is infeasible to realize due to its covariance term. SFD overcomes this complexity, enabling its realization in networks. The structure of SFGAN is based on the Boundary Equilibrium GAN (BEGAN) while using SFD in the loss functions. Experiments are conducted with several datasets, including CelebA and CIFAR-10. The losses and generated samples of SFGAN and BEGAN are compared with several distance metrics. Evidence of mode collapse and/or mode drop does not occur until 3000k steps for SFGAN, while it occurs between 457k and 968k steps for BEGAN. Experimental results show that SFD makes GANs more stable than other distance metrics used in GANs, and that SFD compensates for the weakness of models based on the BEGAN network structure. Based on the experimental results, we conclude that SFD is more suitable for GANs than other metrics.
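The abstract does not give the paper's exact SFD formula, so the sketch below only illustrates the quantity being simplified: between two univariate Gaussians, the Fréchet (2-Wasserstein) distance has the closed form d² = (μ₁ − μ₂)² + (σ₁ − σ₂)², with no matrix square root; the troublesome covariance term arises only in the multivariate case.

```python
import math

def frechet_gaussian_1d(mu1, sigma1, mu2, sigma2):
    """Closed-form Fréchet distance between N(mu1, sigma1^2) and N(mu2, sigma2^2).
    In 1D this is exact; the multivariate form adds a Tr(...) covariance term
    with a matrix square root, which is what SFD is designed to avoid."""
    return math.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)
```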
30

Dewi, Christine, Rung-Ching Chen, Yan-Ting Liu, and Hui Yu. "Various Generative Adversarial Networks Model for Synthetic Prohibitory Sign Image Generation." Applied Sciences 11, no. 7 (March 24, 2021): 2913. http://dx.doi.org/10.3390/app11072913.

Abstract:
Synthetic images are a critical issue for computer vision. Traffic sign images synthesized from standard models are commonly used to build computer recognition algorithms for acquiring more knowledge on various and low-cost research issues. A Convolutional Neural Network (CNN) achieves excellent detection and recognition of traffic signs with sufficient annotated training data. The consistency of the entire vision system depends on the neural networks. However, locating traffic sign datasets for most countries in the world is complicated. This work uses various generative adversarial network (GAN) models to construct intricate images, such as Least Squares Generative Adversarial Networks (LSGAN), Deep Convolutional Generative Adversarial Networks (DCGAN), and Wasserstein Generative Adversarial Networks (WGAN). This paper also discusses, in particular, the quality of the images produced by various GANs with different parameters. For processing, we use pictures of a specific number and scale. The Structural Similarity Index (SSIM) and Mean Squared Error (MSE) are used to measure image consistency, comparing SSIM values between each generated image and the corresponding real image. As a result, the generated images display a strong similarity to the real image when more training images are used. LSGAN outperformed the other GAN models in the experiment, with maximum SSIM values achieved using 200 images as inputs, 2000 epochs, and a size of 32 × 32.
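The two consistency metrics named above can be sketched in a few lines. This is a single-window, global version of SSIM over a flat list of pixel values; production SSIM (as in scikit-image) uses local sliding windows, so treat this as an illustration of the formula rather than the evaluation code used in the paper.

```python
def mse(x, y):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def ssim(x, y, dynamic_range=255.0):
    """Global (single-window) SSIM with the standard C1/C2 stabilizers."""
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score SSIM = 1 and MSE = 0; lower SSIM (toward 0) and higher MSE indicate the generated image drifts from the real one.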
31

Zhang, Pengfei, and Xiaoming Ju. "Adversarial Sample Detection with Gaussian Mixture Conditional Generative Adversarial Networks." Mathematical Problems in Engineering 2021 (September 13, 2021): 1–18. http://dx.doi.org/10.1155/2021/8268249.

Abstract:
It is important to detect adversarial samples in the physical world that are far away from the training data distribution. Some adversarial samples can make a machine learning model generate a highly overconfident distribution in the testing stage. Thus, we propose a mechanism for detecting adversarial samples based on semisupervised generative adversarial networks (GANs) with an encoder-decoder structure; this mechanism can be applied to any pretrained neural network without changing the network’s structure. The semisupervised GANs also give us insight into the behavior of adversarial samples and their flow through the layers of a deep neural network. In the supervised scenario, the latent feature of the semisupervised GAN and the target network’s logit information are used as the input of an external support vector machine classifier to detect the adversarial samples. In the unsupervised scenario, we first propose a one-class classifier based on the semisupervised Gaussian mixture conditional generative adversarial network (GM-CGAN) to fit the joint feature information of the normal data, and then we use a discriminator network to detect normal data and adversarial samples. In both the supervised and unsupervised scenarios, experimental results show that our method outperforms the latest methods.
32

Do, Huy, Pascal Bourdon, David Helbert, Mathieu Naudin, and Remy Guillevin. "7T MRI super-resolution with Generative Adversarial Network." Electronic Imaging 2021, no. 18 (January 18, 2021): 106–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.18.3dia-106.

Abstract:
High-resolution magnetic resonance images (MRIs) provide detailed anatomical information critical for clinical diagnosis. However, high-resolution MRI typically comes at the cost of long scan time, small spatial coverage, and low signal-to-noise ratio. The benefits of convolutional neural networks (CNNs) can be applied to the super-resolution task of recovering high-resolution generic images from low-resolution inputs. Additionally, recent studies have shown the potential of using a generative adversarial network (GAN) to generate high-quality super-resolution MRIs using learned image priors. However, existing approaches require paired MRI images as training data, which are difficult to obtain from existing datasets when the alignment between high- and low-resolution images has to be implemented manually. This paper implements two different GAN-based models to handle super-resolution: Enhanced Super-Resolution GAN (ESRGAN) and CycleGAN. Different from the generic model, the architecture of CycleGAN is modified to perform super-resolution on unpaired MRI data, and ESRGAN is implemented as a reference to compare the performance of GAN-based methods. The GAN-based models generate high-resolution images with rich textures compared to the ground truth. Moreover, experiments are performed on both 3T and 7T MRI images, recovering different scales of resolution.
33

Oh, Se Eun, Nate Mathews, Mohammad Saidur Rahman, Matthew Wright, and Nicholas Hopper. "GANDaLF: GAN for Data-Limited Fingerprinting." Proceedings on Privacy Enhancing Technologies 2021, no. 2 (January 29, 2021): 305–22. http://dx.doi.org/10.2478/popets-2021-0029.

Abstract:
We introduce Generative Adversarial Networks for Data-Limited Fingerprinting (GANDaLF), a new deep-learning-based technique to perform Website Fingerprinting (WF) on Tor traffic. In contrast to most earlier work on deep learning for WF, GANDaLF is intended to work with few training samples, and achieves this goal through the use of a Generative Adversarial Network to generate a large set of “fake” data that helps to train a deep neural network to distinguish between classes of actual training data. We evaluate GANDaLF in low-data scenarios including as few as 10 training instances per site, and in multiple settings, including fingerprinting of website index pages and fingerprinting of non-index pages within a site. GANDaLF achieves closed-world accuracy of 87% with just 20 instances per site (and 100 sites) in standard WF settings. In particular, GANDaLF can outperform Var-CNN and Triplet Fingerprinting (TF) across all settings in subpage fingerprinting. For example, GANDaLF outperforms TF by a 29% margin and Var-CNN by 38% for training sets using 20 instances per site.
34

Alibasa, Muhammad Johan, Rizka Widyarini Purwanto, Yudi Priyadi, and Rosa Reska Riskiana. "Towards Generating Unit Test Codes Using Generative Adversarial Networks." Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi) 6, no. 2 (April 29, 2022): 305–14. http://dx.doi.org/10.29207/resti.v6i2.3940.

Abstract:
Unit testing is one of the important software development steps to ensure software quality. Despite its importance, unit testing is often neglected, since it requires a significant amount of time and effort from software developers. Existing automated test generation systems from past research still have shortcomings due to the limitations of the Genetic Algorithm (GA) in generating appropriate unit test code. This study explores the feasibility of using Generative Adversarial Network (GAN) models to generate unit test code, exploiting GAN's ability to cover GA's drawbacks. We perform experiments using four state-of-the-art GAN models to generate basic unit test code and compare the results by analyzing the generated output code using novel metrics proposed in past studies, as well as performing qualitative evaluation of the generated outputs. The results show that the generated code has satisfactory quality scores (BLEU-2 of around 99%) and adequate diversity scores (NLL-Div and NLL-Gen) in most models. Our study shows positive indications of and potential in the use of GANs for automatic unit test code generation and suggests recommendations for future studies of GAN-based unit test code generation systems.
35

Kong, Quan, Bin Tong, Martin Klinkigt, Yuki Watanabe, Naoto Akira, and Tomokazu Murakami. "Active Generative Adversarial Network for Image Classification." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4090–97. http://dx.doi.org/10.1609/aaai.v33i01.33014090.

Abstract:
Sufficient supervised information is crucial for any machine learning models to boost performance. However, labeling data is expensive and sometimes difficult to obtain. Active learning is an approach to acquire annotations for data from a human oracle by selecting informative samples with a high probability to enhance performance. In recent emerging studies, a generative adversarial network (GAN) has been integrated with active learning to generate good candidates to be presented to the oracle. In this paper, we propose a novel model that is able to obtain labels for data in a cheaper manner without the need to query an oracle. In the model, a novel reward for each sample is devised to measure the degree of uncertainty, which is obtained from a classifier trained with existing labeled data. This reward is used to guide a conditional GAN to generate informative samples with a higher probability for a certain label. With extensive evaluations, we have confirmed the effectiveness of the model, showing that the generated samples are capable of improving the classification performance in popular image classification tasks.
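The per-sample uncertainty reward described above can be sketched with a standard choice: the Shannon entropy of the classifier's predictive distribution, which is maximal when the classifier is most unsure. The function name `uncertainty_reward` is illustrative; the paper's exact reward definition is not given in the abstract.

```python
import math

def uncertainty_reward(probs):
    """Shannon entropy of a class-probability vector: high entropy means the
    classifier is uncertain, so such samples are most informative to generate."""
    return -sum(p * math.log(p) for p in probs if p > 0)
```

A uniform prediction over two classes scores ln 2 ≈ 0.693, while a fully confident prediction scores 0, so the conditional GAN is steered toward samples the current classifier cannot yet separate.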
36

Wang, Haobo, Kaiming Li, Xiaofei Lu, Qun Zhang, Ying Luo, and Le Kang. "ISAR Resolution Enhancement Method Exploiting Generative Adversarial Network." Remote Sensing 14, no. 5 (March 6, 2022): 1291. http://dx.doi.org/10.3390/rs14051291.

Abstract:
Deep learning has been used in inverse synthetic aperture radar (ISAR) imaging to improve resolution performance, but some problems remain: the loss of weak scattering points, over-smoothed imaging results, and limited universality and generalization. To address these problems, an ISAR resolution enhancement method exploiting a generative adversarial network (GAN) is proposed in this paper. We adopt a relativistic average discriminator (RaD) to enhance the ability of the network to describe target details. The proposed loss function is composed of feature loss, adversarial loss, and absolute loss. The feature loss is used to capture the main characteristics of the target. The adversarial loss ensures that the proposed GAN recovers more target details. The absolute loss is adopted to keep the imaging results from being over-smoothed. Experiments based on simulated and measured data under different conditions demonstrate that the proposed method has good imaging performance. In addition, the universality and generalization of the proposed GAN are also well verified.
37

Chen, Kaizheng, Yaping Dai, Zhiyang Jia, and Kaoru Hirota. "Single Image De-Raining Using Spinning Detail Perceptual Generative Adversarial Networks." Journal of Advanced Computational Intelligence and Intelligent Informatics 24, no. 7 (December 20, 2020): 811–19. http://dx.doi.org/10.20965/jaciii.2020.p0811.

Abstract:
In this paper, Spinning Detail Perceptual Generative Adversarial Networks (SDP-GAN) are proposed for single-image de-raining. The proposed method adopts the Generative Adversarial Network (GAN) framework and consists of the following two networks: the rain-streak generative network G and the discriminative network D. To reduce background interference, we propose a rain-streak generative network which not only focuses on the high-frequency detail map of the rainy image, but also directly reduces the mapping range from input to output. To further improve the perceptual quality of generated images, we modify the perceptual loss by extracting high-level features from the discriminative network D, rather than from pre-trained networks. Furthermore, we introduce a new training procedure based on the notion of self-spinning to improve the final de-raining performance. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods.
38

Liu, Shuqi, Mingwen Shao, and Xinping Liu. "GAN-based classifier protection against adversarial attacks." Journal of Intelligent & Fuzzy Systems 39, no. 5 (November 19, 2020): 7085–95. http://dx.doi.org/10.3233/jifs-200280.

Abstract:
In recent years, deep neural networks have made significant progress in image classification, object detection, and face recognition. However, they still have the problem of misclassification when facing adversarial examples. In order to address the security issue and improve the robustness of the neural network, we propose a novel defense network based on a generative adversarial network (GAN). The distributions of clean and adversarial examples are matched to solve the mentioned problem. This guides the network to remove invisible noise accurately and restore the adversarial example to a clean example, achieving the effect of defense. In addition, in order to maintain the classification accuracy of clean examples and improve the fidelity of the neural network, we input clean examples into the proposed network for denoising. Our method can effectively remove the noise of adversarial examples, so that the denoised adversarial examples can be correctly classified. In this paper, extensive experiments are conducted on five benchmark datasets, namely MNIST, Fashion-MNIST, CIFAR10, CIFAR100, and ImageNet. Moreover, six mainstream attack methods are adopted to test the robustness of our defense method, including FGSM, PGD, MIM, JSMA, CW, and DeepFool. Results show that our method has strong defensive capabilities against the tested attack methods, which confirms the effectiveness of the proposed method.
39

Kannan, K. Gokul, and T. R. Ganesh Babu. "Semi Supervised Generative Adversarial Network for Automated Glaucoma Diagnosis with Stacked Discriminator Models." Journal of Medical Imaging and Health Informatics 11, no. 5 (May 1, 2021): 1334–40. http://dx.doi.org/10.1166/jmihi.2021.3787.

Abstract:
The Generative Adversarial Network (GAN) is a neural network architecture widely used in many computer vision applications, such as super-resolution image generation, art creation, and image-to-image translation. A conventional GAN model consists of two sub-models: a generative model and a discriminative model. The former generates new samples based on an unsupervised learning task, and the latter classifies them as real or fake. Though GANs are most commonly used for training generative models, they can also be used for developing a classifier model. The main objective is to extend the effectiveness of the GAN into semi-supervised learning, i.e., for the classification of fundus images to diagnose glaucoma. The discriminator model in the conventional GAN is improved via transfer learning to predict n + 1 classes by training the model for both supervised classification (n classes) and unsupervised classification (fake or real). Both models share all feature extraction layers and differ in the output layers, so any update to one model will impact both. Results show that the semi-supervised GAN performs better than a standalone Convolutional Neural Network (CNN) model.
40

Gnanha, Aurele Tohokantche, Wenming Cao, Xudong Mao, Si Wu, Hau-San Wong, and Qing Li. "αβ-GAN: Robust generative adversarial networks". Information Sciences 593 (May 2022): 177–200. http://dx.doi.org/10.1016/j.ins.2022.01.073.

41

Luo, Jingrui, and Jie Wang. "Image Demosaicing Based on Generative Adversarial Network." Mathematical Problems in Engineering 2020 (June 16, 2020): 1–13. http://dx.doi.org/10.1155/2020/7367608.

Abstract:
Digital cameras with a single sensor use a color filter array (CFA) that captures only one color component at each pixel. Noise and artifacts are therefore generated when reconstructing the color image, which reduces its resolution. In this paper, we propose an image demosaicing method based on a generative adversarial network (GAN) to obtain high-quality color images. The proposed network does not need any initial interpolation in the data preparation phase, which greatly reduces the computational complexity. The generator of the GAN is designed using the U-net to directly generate the demosaiced images. A dense residual network is used for the discriminator to improve its discriminative ability. We compared the proposed method with several interpolation-based algorithms and the DnCNN. Results from the comparative experiments show that the proposed method can more effectively eliminate image artifacts and can better recover the color image.
42

Picetti, Francesco, Vincenzo Lipari, Paolo Bestagini, and Stefano Tubaro. "Seismic image processing through the generative adversarial network." Interpretation 7, no. 3 (August 1, 2019): SF15—SF26. http://dx.doi.org/10.1190/int-2018-0232.1.

Abstract:
The advent of new deep-learning and machine-learning paradigms enables the development of new solutions to tackle the challenges posed by new geophysical imaging applications. For this reason, convolutional neural networks (CNNs) have been deeply investigated as novel tools for seismic image processing. In particular, we have studied a specific CNN architecture, the generative adversarial network (GAN), through which we process seismic migrated images to obtain different kinds of output depending on the application target defined during training. We have developed two proof-of-concept applications. In the first application, a GAN is trained to turn a low-quality migrated image into a high-quality one, as if the acquisition geometry was much more dense than in the input. In the second example, the GAN is trained to turn a migrated image into the respective deconvolved reflectivity image. The effectiveness of the investigated approach is validated by means of tests performed on synthetic examples.
APA, Harvard, Vancouver, ISO and other styles
43

Sun, Tuo, Bo Sun, Zehao Jiang, Ruochen Hao, and Jiemin Xie. "Traffic Flow Online Prediction Based on a Generative Adversarial Network with Multi-Source Data." Sustainability 13, no. 21 (November 4, 2021): 12188. http://dx.doi.org/10.3390/su132112188.

Full text of the source
Abstract:
Traffic prediction is essential for advanced traffic planning, design, management, and network sustainability. Current prediction methods are mostly offline, which fail to capture the real-time variation of traffic flows. This paper establishes a sustainable online generative adversarial network (GAN) by combining bidirectional long short-term memory (BiLSTM) and a convolutional neural network (CNN) as the generative model and discriminative model, respectively, to keep learning with continuous feedback. BiLSTM constantly generates temporal candidate flows based on valuable memory units, and CNN screens out the best spatial prediction by returning the feedback gradient to BiLSTM. Multi-dimensional indicators are selected to map the multi-view fusion local trend for accurate prediction. To balance computing efficiency and accuracy, different batch sizes are pre-tested and allocated to different lanes. The models are trained with rectified adaptive moment estimation (RAdam) by dividing the dataset into the training and testing sets with a rolling time-domain scheme. In comparison with the autoregressive integrated moving average (ARIMA), BiLSTM, generating adversarial network for traffic flow (GAN-TF), and generating adversarial network for non-signal traffic (GAN-NST), the proposed improved generating adversarial network for traffic flow (IGAN-TF) successfully generates more accurate and stable flows and performs better.
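The abstract above mentions training with "a rolling time-domain scheme", i.e. train/test windows that advance through time so the model is always evaluated on data later than what it was trained on. A minimal sketch of such a splitter (illustrative, not the paper's code; window sizes are hypothetical parameters):

```python
def rolling_splits(n_samples, train_size, test_size, step):
    """Yield (train_idx, test_idx) index windows that roll forward in
    time. Each test window lies strictly after its training window,
    which avoids leaking future traffic states into the model."""
    splits = []
    start = 0
    while start + train_size + test_size <= n_samples:
        train_idx = list(range(start, start + train_size))
        test_idx = list(range(start + train_size,
                              start + train_size + test_size))
        splits.append((train_idx, test_idx))
        start += step  # slide the whole window forward
    return splits

# 10 time steps, train on 4, test on the next 2, advance by 2
splits = rolling_splits(n_samples=10, train_size=4, test_size=2, step=2)
```

This is the same idea as rolling-origin cross-validation for time series; an online model retrains as each window rolls forward.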
APA, Harvard, Vancouver, ISO and other styles
44

Yang, Xiaogang, Maik Kahnt, Dennis Brückner, Andreas Schropp, Yakub Fam, Johannes Becher, Jan-Dierk Grunwaldt, Thomas L. Sheppard, and Christian G. Schroer. "Tomographic reconstruction with a generative adversarial network." Journal of Synchrotron Radiation 27, no. 2 (February 18, 2020): 486–93. http://dx.doi.org/10.1107/s1600577520000831.

Full text of the source
Abstract:
This paper presents a deep learning algorithm for tomographic reconstruction (GANrec). The algorithm uses a generative adversarial network (GAN) to solve the inverse of the Radon transform directly. It works for independent sinograms without additional training steps. The GAN has been developed to fit the input sinogram with the model sinogram generated from the predicted reconstruction. Good quality reconstructions can be obtained during the minimization of the fitting errors. The reconstruction is a self-training procedure based on the physics model, instead of on training data. The algorithm showed significant improvements in the reconstruction accuracy, especially for missing-wedge tomography acquired at less than 180° rotational range. It was also validated by reconstructing a missing-wedge X-ray ptychographic tomography (PXCT) data set of a macroporous zeolite particle, for which only 51 projections over 70° could be collected. The GANrec recovered the 3D pore structure with reasonable quality for further analysis. This reconstruction concept can work universally for most of the ill-posed inverse problems if the forward model is well defined, such as phase retrieval of in-line phase-contrast imaging.
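The self-training idea above, fitting the measured sinogram with a model sinogram computed from the current reconstruction guess, only needs a forward projector. A toy NumPy sketch of that physics-model fitting error (a real Radon transform covers many angles; this illustrative stand-in projects only at 0° and 90°):

```python
import numpy as np

def project(image, angles=(0, 90)):
    """Toy parallel-beam forward model: projections at 0 and 90 degrees
    are just column sums and row sums of the image."""
    sino = []
    for a in angles:
        if a == 0:
            sino.append(image.sum(axis=0))  # integrate along rows
        elif a == 90:
            sino.append(image.sum(axis=1))  # integrate along columns
        else:
            raise ValueError("toy projector supports 0 and 90 degrees only")
    return np.stack(sino)

truth = np.array([[1.0, 2.0], [3.0, 4.0]])
measured = project(truth)              # the "input sinogram"
guess = np.zeros_like(truth)           # the generator's current estimate
fit_error = float(np.abs(project(guess) - measured).sum())
```

In GANrec's scheme the reconstruction is updated to drive such a fitting error to zero, so no training data set is needed, only the forward model.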
APA, Harvard, Vancouver, ISO and other styles
45

Spooner, James, Vasile Palade, Madeline Cheah, Stratis Kanarachos, and Alireza Daneshkhah. "Generation of Pedestrian Crossing Scenarios Using Ped-Cross Generative Adversarial Network." Applied Sciences 11, no. 2 (January 6, 2021): 471. http://dx.doi.org/10.3390/app11020471.

Full text of the source
Abstract:
The safety of vulnerable road users is of paramount importance as transport moves towards fully automated driving. The richness of real-world data required for testing autonomous vehicles is limited, and furthermore, available data do not present a fair representation of different scenarios and rare events. Before deploying autonomous vehicles publicly, their abilities must reach a safety threshold, not least with regard to vulnerable road users such as pedestrians. In this paper, we present a novel Generative Adversarial Network named the Ped-Cross GAN, which is able to generate crossing sequences of pedestrians in the form of human pose sequences. The Ped-Cross GAN is trained with the novel Pedestrian Scenario dataset, derived from existing datasets, which enables training on richer pedestrian scenarios. We demonstrate an example of its use through training and testing the Ped-Cross GAN. The results show that the Ped-Cross GAN is able to generate new crossing scenarios that follow the same distribution as those contained in the Pedestrian Scenario dataset. Having a method with these capabilities is important for the future of transport, as it will allow adequate testing of Connected and Autonomous Vehicles on how they perceive the intention of pedestrians crossing the street, ultimately leading to fewer pedestrian casualties on our roads.
APA, Harvard, Vancouver, ISO and other styles
46

Cong, Kelun, and Mian Zhou. "Face Dataset Augmentation with Generative Adversarial Network." Journal of Physics: Conference Series 2218, no. 1 (March 1, 2022): 012035. http://dx.doi.org/10.1088/1742-6596/2218/1/012035.

Full text of the source
Abstract:
Face recognition is a widely used application of artificial intelligence technology. However, occlusions can prevent a face from being effectively detected in specific environments. Although many algorithms have been proposed to solve this problem, in essence a large amount of face image data containing occlusions is needed for training to improve the detection ability of an algorithm. In recent years, this problem has been effectively addressed by exploiting the image generation ability of generative adversarial networks. This paper proposes an improved Generative Adversarial Network (GAN) that improves the generation of occluded face images by adding an encoding module. Through expansion of the dataset, the detection accuracy of several classic face detection models on occluded faces is improved by more than 3%. While the epidemic is not yet over, occluded face data are of great significance for improving the performance of face detection systems in particular public places such as customs security inspection and medical centers.
APA, Harvard, Vancouver, ISO and other styles
47

Yuan, C., C. Q. Sun, X. Y. Tang, and R. F. Liu. "FLGC-Fusion GAN: An Enhanced Fusion GAN Model by Importing Fully Learnable Group Convolution." Mathematical Problems in Engineering 2020 (October 22, 2020): 1–13. http://dx.doi.org/10.1155/2020/6384831.

Full text of the source
Abstract:
The purpose of image fusion is to combine source images of the same scene into a single composite image with more useful information and better visual effects. Fusion GAN made a breakthrough in this field by proposing to use a generative adversarial network to fuse images. In some cases, while trying to retain infrared radiation information and gradient information at the same time, existing fusion methods ignore image contrast and other elements. To this end, we propose a new end-to-end network structure based on generative adversarial networks (GANs), termed FLGC-Fusion GAN. In the generator, fully learnable group convolution improves the efficiency of the model and saves computing resources, allowing a better trade-off between the accuracy and speed of the model. Besides, we take the residual dense block as the basic network building unit and use perceptual characteristics as the content loss of the input, achieving the effect of deep network supervision. Experimental results on two public datasets show that the proposed method performs well on subjective visual quality and objective criteria, and has obvious advantages over other current typical methods.
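The efficiency gain claimed above comes from group convolution: channels are split into groups that are mixed only within themselves, cutting the parameter count by the number of groups. A minimal NumPy sketch of a 1×1 grouped convolution (illustrative only; the paper's FLGC additionally learns the grouping itself, which this sketch does not do):

```python
import numpy as np

def grouped_conv1x1(x, weights, groups):
    """1x1 grouped convolution.

    x       : input feature map, shape (C_in, H, W)
    weights : list of `groups` matrices, each (C_out // groups, C_in // groups)
    Each group mixes only its own slice of channels, so the weight count
    is 1/groups of a full 1x1 convolution's."""
    c_in = x.shape[0]
    gs = c_in // groups
    outs = []
    for g in range(groups):
        xg = x[g * gs:(g + 1) * gs]               # this group's channels
        outs.append(np.tensordot(weights[g], xg, axes=([1], [0])))
    return np.concatenate(outs, axis=0)

x = np.ones((4, 2, 2))
w = [np.eye(2), 2 * np.eye(2)]                    # per-group channel mixing
y = grouped_conv1x1(x, w, groups=2)
```

With 4 input and 4 output channels, two groups need 2 × (2×2) = 8 weights instead of the 16 a full 1×1 convolution would use.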
APA, Harvard, Vancouver, ISO and other styles
48

Lee, Je-Yeol, and Sang-Il Choi . "Improvement of Learning Stability of Generative Adversarial Network Using Variational Learning." Applied Sciences 10, no. 13 (June 30, 2020): 4528. http://dx.doi.org/10.3390/app10134528.

Full text of the source
Abstract:
In this paper, we propose a new network model using variational learning to improve the learning stability of generative adversarial networks (GAN). The proposed method can be easily applied to improve the learning stability of GAN-based models that were developed for various purposes, given that the variational autoencoder (VAE) is used as a secondary network while the basic GAN structure is maintained. When the gradient of the generator vanishes in the learning process of GAN, the proposed method receives gradient information from the decoder of the VAE that maintains gradient stably, so that the learning processes of the generator and discriminator are not halted. The experimental results of the MNIST and the CelebA datasets verify that the proposed method improves the learning stability of the networks by overcoming the vanishing gradient problem of the generator, and maintains the excellent data quality of the conventional GAN-based generative models.
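The vanishing-gradient problem that the VAE-assisted scheme above addresses can be seen directly in the generator's loss. A small sketch (illustrative of the standard GAN losses, not of the paper's VAE mechanism) comparing the gradient of the original saturating loss with the common non-saturating variant, both taken at the discriminator's logit:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def saturating_grad(logit):
    """d/dt of log(1 - sigmoid(t)), the generator's gradient signal under
    the original minimax loss. It equals -sigmoid(t), which goes to 0 when
    the discriminator confidently rejects a fake (t << 0): exactly the
    regime in which the paper routes gradients through the VAE decoder."""
    return -sigmoid(logit)

def nonsaturating_grad(logit):
    """d/dt of -log(sigmoid(t)); equals sigmoid(t) - 1, staying near -1
    for t << 0, so learning does not halt."""
    return sigmoid(logit) - 1.0

t = -10.0  # discriminator is almost certain the sample is fake
```

At t = -10 the saturating gradient is about -4.5e-5 while the non-saturating one is about -1, which is why a stable auxiliary gradient source keeps the generator training.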
APA, Harvard, Vancouver, ISO and other styles
49

Deng, Junjie, Gege Luo, and Caidan Zhao. "UCT-GAN: underwater image colour transfer generative adversarial network." IET Image Processing 14, no. 14 (December 1, 2020): 3613–22. http://dx.doi.org/10.1049/iet-ipr.2020.0003.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
50

Shieh, Chin-Shiuh, Thanh-Tuan Nguyen, Wan-Wei Lin, Yong-Lin Huang, Mong-Fong Horng, Tsair-Fwu Lee, and Denis Miu. "Detection of Adversarial DDoS Attacks Using Generative Adversarial Networks with Dual Discriminators." Symmetry 14, no. 1 (January 4, 2022): 66. http://dx.doi.org/10.3390/sym14010066.

Full text of the source
Abstract:
DDoS (Distributed Denial of Service) has emerged as a serious and challenging threat to the security and integrity of computer networks and information systems. Before any remedial measures can be implemented, DDoS assaults must first be detected. DDoS attacks can be identified and characterized with satisfactory achievement employing ML (Machine Learning) and DL (Deep Learning). However, new varieties of aggression arise as the technology for DDoS attacks keeps evolving. This research explores the impact of a new incarnation of DDoS attack: the adversarial DDoS attack. There are established works on ML-based DDoS detection and GAN (Generative Adversarial Network) based adversarial DDoS synthesis, and we confirm these findings in our experiments. Experiments in this study involve the extension and application of the GAN, a machine learning framework with a symmetric form comprising two contending neural networks. We synthesize adversarial DDoS attacks utilizing Wasserstein Generative Adversarial Networks with Gradient Penalty (GP-WGAN). Experiment results indicate that the synthesized traffic can traverse detection systems such as k-Nearest Neighbor (KNN), Multi-Layer Perceptron (MLP) and Random Forest (RF) without being identified. This observation is a sobering and pessimistic wake-up call, implying that countermeasures to adversarial DDoS attacks are urgently needed. For this problem, we propose a novel DDoS detection framework featuring a GAN with Dual Discriminators (GANDD), in which the additional discriminator is designed to identify adversarial DDoS traffic. The proposed GANDD can be an effective solution to adversarial DDoS attacks, as evidenced by the experimental results. We use adversarial DDoS traffic synthesized by GP-WGAN to train GANDD and validate it alongside three other DL technologies: DNN (Deep Neural Network), LSTM (Long Short-Term Memory) and GAN. GANDD outperformed the other DL models, achieving a true positive rate (TPR) of 84.3%. A more sophisticated test was also conducted to examine GANDD's ability to handle unseen adversarial attacks: GANDD was evaluated with adversarial traffic not generated from its training data, and still proved effective with a TPR of around 71.3%, compared to 7.4% for LSTM.
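The GP-WGAN mentioned above adds a gradient penalty to the critic's loss, pushing the norm of the critic's gradient towards 1 at points interpolated between real and synthetic samples. A minimal sketch of that penalty term (illustrative only; a real implementation computes the gradient norms by automatic differentiation):

```python
import numpy as np

def interpolate(real, fake, eps):
    """A point on the line between a real and a synthetic sample; the
    penalty is evaluated at such random interpolates (eps in [0, 1])."""
    return eps * real + (1.0 - eps) * fake

def gradient_penalty(grad_norms, target=1.0):
    """WGAN-GP penalty: mean squared deviation of the critic's gradient
    norms (measured at the interpolated points) from the target norm 1,
    a soft way of enforcing the 1-Lipschitz constraint."""
    g = np.asarray(grad_norms, dtype=float)
    return float(np.mean((g - target) ** 2))

# hypothetical gradient norms at three interpolated points
penalty = gradient_penalty([1.0, 1.5, 0.5])
```

The penalty is zero exactly when every measured gradient norm is 1, and it is added to the critic loss scaled by a coefficient (10 in the original WGAN-GP paper).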
APA, Harvard, Vancouver, ISO and other styles