Journal articles on the topic 'Domain Adversarial Learning'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Domain Adversarial Learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Rosenberg, Ishai, Asaf Shabtai, Yuval Elovici, and Lior Rokach. "Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain." ACM Computing Surveys 54, no. 5 (June 2021): 1–36. http://dx.doi.org/10.1145/3453158.

Full text
Abstract:
In recent years, machine learning algorithms, and more specifically deep learning algorithms, have been widely used in many fields, including cyber security. However, machine learning systems are vulnerable to adversarial attacks, and this limits the application of machine learning, especially in non-stationary, adversarial environments, such as the cyber security domain, where actual adversaries (e.g., malware developers) exist. This article comprehensively summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques and illuminates the risks they pose. First, the adversarial attack methods are characterized based on their stage of occurrence, and the attacker's goals and capabilities. Then, we categorize the applications of adversarial attack and defense methods in the cyber security domain. Finally, we highlight some characteristics identified in recent research and discuss the impact of recent advancements in other adversarial learning domains on future research directions in the cyber security domain. To the best of our knowledge, this work is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain, map them in a unified taxonomy, and use the taxonomy to highlight future research directions.
APA, Harvard, Vancouver, ISO, and other styles
2

Xu, Minghao, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. "Adversarial Domain Adaptation with Domain Mixup." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6502–9. http://dx.doi.org/10.1609/aaai.v34i04.6123.

Full text
Abstract:
Recent works on domain adaptation reveal the effectiveness of adversarial learning in reducing the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain-invariance in most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake with the guidance of hard labels, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to source and target domains. Domain mixup is jointly conducted on pixel and feature level to improve the robustness of models. Extensive experiments prove that the proposed approach can achieve superior performance on tasks with various degrees of domain shift and data complexity.
APA, Harvard, Vancouver, ISO, and other styles
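A minimal sketch, assuming PyTorch, of the pixel-level domain-mixup idea described in the DM-ADA abstract above: a source and a target batch are interpolated with a Beta-sampled ratio, and the domain discriminator is supervised with the same soft ratio instead of a hard real/fake label. The tiny networks, image size, and Beta parameter are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative feature extractor and domain discriminator (not the paper's architectures).
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
domain_discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

def domain_mixup_loss(x_src, x_tgt, alpha=2.0):
    """Mix a source and a target batch at pixel level and supervise the
    discriminator with the soft mixing ratio rather than a hard 0/1 label."""
    lam = torch.distributions.Beta(alpha, alpha).sample()          # mixing ratio in (0, 1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt                       # pixel-level mixup
    soft_domain_label = torch.full((x_src.size(0), 1), float(lam))  # "how source-like" the mix is
    d_out = domain_discriminator(feature_extractor(x_mix))
    return bce(d_out, soft_domain_label)

# Usage with dummy 3x32x32 batches.
loss = domain_mixup_loss(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32))
loss.backward()
```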
3

Wu, Lan, Chongyang Li, Qiliang Chen, and Binquan Li. "Deep adversarial domain adaptation network." International Journal of Advanced Robotic Systems 17, no. 5 (September 1, 2020): 172988142096464. http://dx.doi.org/10.1177/1729881420964648.

Full text
Abstract:
The advantage of adversarial domain adaptation is that it uses the idea of adversarial adaptation to confuse the feature distribution of two domains and solve the problem of domain transfer in transfer learning. However, although the discriminator completely confuses the two domains, adversarial domain adaptation still cannot guarantee the consistent feature distribution of the two domains, which may further deteriorate the recognition accuracy. Therefore, in this article, we propose a deep adversarial domain adaptation network, which optimises the feature distribution of the two confused domains by adding multi-kernel maximum mean discrepancy to the feature layer and designing a new loss function to ensure good recognition accuracy. In the last part, some simulation results based on the Office-31 and Underwater data sets show that the deep adversarial domain adaptation network can optimise the feature distribution and promote positive transfer, thus improving the classification accuracy.
APA, Harvard, Vancouver, ISO, and other styles
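The multi-kernel maximum mean discrepancy penalty mentioned in the abstract above can be written compactly. Below is a minimal sketch, assuming PyTorch, of a simple (biased) MMD² estimate over a sum of RBF kernels; the bandwidth set and feature dimensions are illustrative assumptions, not the paper's settings.

```python
import torch

def multi_kernel_mmd(f_src, f_tgt, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Biased MMD^2 estimate between two equally sized feature batches,
    using a sum of RBF kernels with several bandwidths."""
    feats = torch.cat([f_src, f_tgt], dim=0)
    sq_dists = torch.cdist(feats, feats).pow(2)                      # pairwise squared distances
    k = sum(torch.exp(-sq_dists / (2.0 * bw ** 2)) for bw in bandwidths)
    n = f_src.size(0)
    k_ss, k_tt, k_st = k[:n, :n], k[n:, n:], k[:n, n:]
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Usage: add this penalty on a shared feature layer alongside the adversarial loss.
mmd = multi_kernel_mmd(torch.randn(16, 128), torch.randn(16, 128))
```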
4

Zhou, Kaiyang, Yongxin Yang, Timothy Hospedales, and Tao Xiang. "Deep Domain-Adversarial Image Generation for Domain Generalisation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13025–32. http://dx.doi.org/10.1609/aaai.v34i07.7003.

Full text
Abstract:
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal for DoTNet is to map the source training data to unseen domains. This is achieved by having a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
APA, Harvard, Vancouver, ISO, and other styles
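A minimal sketch, assuming PyTorch, of the kind of objective the abstract above describes for the domain transformation network: the perturbed image should still be classified correctly by the label classifier while fooling the domain classifier. The additive-perturbation form, the tiny networks, and the class/domain counts are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative components; only the transformation network would be updated with this loss.
label_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 7))    # 7 classes (assumed)
domain_classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 3))   # 3 source domains (assumed)
transform_net = nn.Conv2d(3, 3, kernel_size=3, padding=1)                    # stand-in for DoTNet

def transformation_loss(x, y, d, lam=0.3):
    """Perturb the image so it keeps its class label but no longer looks
    like its source domain."""
    x_new = x + lam * transform_net(x)                               # synthesized "unseen domain" view
    keep_label = F.cross_entropy(label_classifier(x_new), y)        # still correctly classifiable
    fool_domain = -F.cross_entropy(domain_classifier(x_new), d)     # maximize domain confusion
    return keep_label + fool_domain

loss = transformation_loss(torch.randn(8, 3, 32, 32), torch.randint(0, 7, (8,)), torch.randint(0, 3, (8,)))
loss.backward()
```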
5

Chen, Minghao, Shuai Zhao, Haifeng Liu, and Deng Cai. "Adversarial-Learned Loss for Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3521–28. http://dx.doi.org/10.1609/aaai.v34i04.5757.

Full text
Abstract:
Recently, remarkable progress has been made in learning transferable representation across domains. Previous works in domain adaptation are majorly based on two techniques: domain-adversarial learning and self-training. However, domain-adversarial learning only aligns feature distributions between domains but does not consider whether the target features are discriminative. On the other hand, self-training utilizes the model predictions to enhance the discrimination of target features, but it is unable to explicitly align domain distributions. In order to combine the strengths of these two methods, we propose a novel method called Adversarial-Learned Loss for Domain Adaptation (ALDA). We first analyze the pseudo-label method, a typical self-training method. However, there is a gap between pseudo-labels and the ground truth, which can cause incorrect training. Thus we introduce the confusion matrix, which is learned in an adversarial manner in ALDA, to reduce the gap and align the feature distributions. Finally, a new loss function is auto-constructed from the learned confusion matrix, which serves as the loss for unlabeled target samples. Our ALDA outperforms state-of-the-art approaches in four standard domain adaptation datasets. Our code is available at https://github.com/ZJULearning/ALDA.
APA, Harvard, Vancouver, ISO, and other styles
6

Wu, Yuan, and Yuhong Guo. "Dual Adversarial Co-Learning for Multi-Domain Text Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6438–45. http://dx.doi.org/10.1609/aaai.v34i04.6115.

Full text
Abstract:
With the advent of deep learning, the performance of text classification models has improved significantly. Nevertheless, the successful training of a good classification model requires a sufficient amount of labeled data, while it is always expensive and time-consuming to annotate data. With the rapid growth of digital data, similar classification tasks can typically occur in multiple domains, while the availability of labeled data can largely vary across domains. Some domains may have abundant labeled data, while in some other domains there may only exist a limited amount (or none) of labeled data. Meanwhile, text classification tasks are highly domain-dependent — a text classifier trained in one domain may not perform well in another domain. In order to address these issues, in this paper we propose a novel dual adversarial co-learning approach for multi-domain text classification (MDTC). The approach learns shared-private networks for feature extraction and deploys dual adversarial regularizations to align features across different domains and between labeled and unlabeled data simultaneously under a discrepancy-based co-learning framework, aiming to improve the classifiers' generalization capacity with the learned features. We conduct experiments on multi-domain sentiment classification datasets. The results show the proposed approach achieves the state-of-the-art MDTC performance.
APA, Harvard, Vancouver, ISO, and other styles
7

Zou, Han, Yuxun Zhou, Jianfei Yang, Huihan Liu, Hari Prasanna Das, and Costas J. Spanos. "Consensus Adversarial Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5997–6004. http://dx.doi.org/10.1609/aaai.v33i01.33015997.

Full text
Abstract:
We propose a novel domain adaptation framework, namely Consensus Adversarial Domain Adaptation (CADA), that gives freedom to both target encoder and source encoder to embed data from both domains into a common domain-invariant feature space until they achieve consensus during adversarial learning. In this manner, the domain discrepancy can be further minimized in the embedded space, yielding more generalizable representations. The framework is also extended to establish a new few-shot domain adaptation scheme (F-CADA), that remarkably enhances the ADA performance by efficiently propagating a few labeled data once available in the target domain. Extensive experiments are conducted on the task of digit recognition across multiple benchmark datasets and a real-world problem involving WiFi-enabled device-free gesture recognition under spatial dynamics. The results show the compelling performance of CADA versus the state-of-the-art unsupervised domain adaptation (UDA) and supervised domain adaptation (SDA) methods. Numerical experiments also demonstrate that F-CADA can significantly improve the adaptation performance even with sparsely labeled data in the target domain.
APA, Harvard, Vancouver, ISO, and other styles
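A minimal sketch, assuming PyTorch, of the symmetric ("consensus") adversarial setup described in the abstract above, in which both the source and the target encoder remain trainable and are pushed toward a shared, domain-invariant feature space. The encoders, discriminator, optimizers, and label-swapping objective are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative symmetric setup: unlike fixing the source encoder, both encoders stay trainable.
source_encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
discriminator = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
e_opt = torch.optim.Adam(list(source_encoder.parameters()) + list(target_encoder.parameters()), lr=1e-4)

def adversarial_step(x_src, x_tgt):
    ones, zeros = torch.ones(x_src.size(0), 1), torch.zeros(x_tgt.size(0), 1)

    # 1) Discriminator: tell source features (label 1) from target features (label 0).
    d_loss = bce(discriminator(source_encoder(x_src).detach()), ones) + \
             bce(discriminator(target_encoder(x_tgt).detach()), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Both encoders: fool the discriminator with swapped labels, pushing the two
    #    feature distributions toward a common, indistinguishable space.
    e_loss = bce(discriminator(source_encoder(x_src)), zeros) + \
             bce(discriminator(target_encoder(x_tgt)), ones)
    e_opt.zero_grad(); e_loss.backward(); e_opt.step()
    return d_loss.item(), e_loss.item()

adversarial_step(torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28))
```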
8

Tang, Hui, and Kui Jia. "Discriminative Adversarial Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5940–47. http://dx.doi.org/10.1609/aaai.v34i04.6054.

Full text
Abstract:
Given labeled instances on a source domain and unlabeled ones on a target domain, unsupervised domain adaptation aims to learn a task classifier that can well classify target instances. Recent advances rely on domain-adversarial training of deep networks to learn domain-invariant features. However, due to an issue of mode collapse induced by the separate design of task and domain classifiers, these methods are limited in aligning the joint distributions of feature and category across domains. To overcome it, we propose a novel adversarial learning method termed Discriminative Adversarial Domain Adaptation (DADA). Based on an integrated category and domain classifier, DADA has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance. We show that under practical conditions, it defines a minimax game that can promote the joint distribution alignment. Except for the traditional closed set domain adaptation, we also extend DADA for extremely challenging problem settings of partial and open set domain adaptation. Experiments show the efficacy of our proposed methods and we achieve the new state of the art for all the three settings on benchmark datasets.
APA, Harvard, Vancouver, ISO, and other styles
9

Li, Wenjing, and Zhongcheng Wu. "OVL: One-View Learning for Human Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11410–17. http://dx.doi.org/10.1609/aaai.v34i07.6804.

Full text
Abstract:
This paper considers a novel problem, named One-View Learning (OVL), in human retrieval a.k.a. person re-identification (re-ID). Unlike fully-supervised learning, OVL only requires pretty cheap annotation cost: labeled training images are only provided from one camera view (source view/domain), while the annotations of training images from other camera views (target views/domains) are not available. OVL is a problem of multi-target open set domain adaptation that is difficult for existing domain adaptation methods to handle. This is because 1) unlabeled samples are drawn from multiple target views in different distributions, and 2) the target views may contain samples of “unknown identity” that are not shared by the source view. To address this problem, this work introduces a novel one-view learning framework for person re-ID. This is achieved by adversarial multi-view learning (AMVL) and adversarial unknown rejection learning (AURL). The former learns a multi-view discriminator by adversarial learning to align the feature distributions between all views. The latter is designed to reject unknown samples from target views through adversarial learning with two unknown identity classifiers. Extensive experiments on three large-scale datasets demonstrate the advantage of the proposed method over state-of-the-art domain adaptation and semi-supervised methods.
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen Duc, Tho, Chanh Minh Tran, Phan Xuan Tan, and Eiji Kamioka. "Domain Adaptation for Imitation Learning Using Generative Adversarial Network." Sensors 21, no. 14 (July 9, 2021): 4718. http://dx.doi.org/10.3390/s21144718.

Full text
Abstract:
Imitation learning is an effective approach for an autonomous agent to learn control policies when an explicit reward function is unavailable, using demonstrations provided from an expert. However, standard imitation learning methods assume that the agents and the demonstrations provided by the expert are in the same domain configuration. Such an assumption has made the learned policies difficult to apply in another distinct domain. The problem is formalized as domain adaptive imitation learning, which is the process of learning how to perform a task optimally in a learner domain, given demonstrations of the task in a distinct expert domain. We address the problem by proposing a model based on a Generative Adversarial Network. The model aims to learn both domain-shared and domain-specific features and utilizes them to find an optimal policy across domains. The experimental results show the effectiveness of our model in a number of tasks ranging from low-dimensional to complex high-dimensional ones.
APA, Harvard, Vancouver, ISO, and other styles
11

WALCZAK, STEVEN. "PATTERN-BASED TACTICAL PLANNING." International Journal of Pattern Recognition and Artificial Intelligence 06, no. 05 (December 1992): 955–88. http://dx.doi.org/10.1142/s0218001492000473.

Full text
Abstract:
Our research demonstrates the utility of incorporating pattern recognition with machine learning in adversarial domains. We use induction to recognize cognitive behavioral patterns of an adversary and use those patterns for tactical planning in the adversarial domain. Results of our methodology for the application domain of chess are presented.
APA, Harvard, Vancouver, ISO, and other styles
12

Jia, Meixia, Jinrui Wang, Zongzhen Zhang, Baokun Han, Zhaoting Shi, Lei Guo, and Weitao Zhao. "A novel method for diagnosing bearing transfer faults based on a maximum mean discrepancies guided domain-adversarial mechanism." Measurement Science and Technology 33, no. 1 (November 26, 2021): 015109. http://dx.doi.org/10.1088/1361-6501/ac346e.

Full text
Abstract:
Transfer learning has been successfully applied in fault diagnosis to solve the difficulty in constructing network models due to the lack of labeled data in practical engineering. The current transfer learning models mainly use the adaptive method to obtain the similarity between source and target domains, but the obtained similarity is incomplete. Inspired by the domain-adversarial mechanism, a novel method called ‘distance guided domain-adversarial network’ (DGDAN) is proposed in this study. DGDAN includes two modules: domain-adversarial network and maximum mean discrepancies (MMD) guided domain adaptation. In this method, a stacked autoencoder (SAE) is used as the feature extractor of the domain-adversarial network to learn domain invariant features, and MMD is used to measure the non-parametric distance of different metric spaces to improve domain alignment. Reduction of the distance of the bottleneck layer of the feature extractor is employed to improve the feature extraction capability of the network. Experimental results show that the classification accuracy rate of DGDAN is more than 98%, and DGDAN has superior robustness and generalization ability.
APA, Harvard, Vancouver, ISO, and other styles
13

Rasheed, Bader, Adil Khan, Muhammad Ahmad, Manuel Mazzara, and S. M. Ahsan Kazmi. "Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects." International Transactions on Electrical Energy Systems 2022 (October 10, 2022): 1–11. http://dx.doi.org/10.1155/2022/2890761.

Full text
Abstract:
Although neural networks are near achieving performance similar to humans in many tasks, they are susceptible to adversarial attacks in the form of a small, intentionally designed perturbation, which could lead to misclassifications. The best defense against these attacks, so far, is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. However, AT usually decreases the model’s accuracy on clean samples and could overfit to a specific attack, inhibiting its ability to generalize to new attacks. In this paper, we investigate the usage of domain adaptation to enhance AT’s performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which looks at this problem as a domain adaptation task to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant between multiple adversarial domains and the clean domain. We evaluated MADA on MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. The results of our experiments show that MADA is superior to AT on adversarial samples by about 4% on average and on clean samples by about 1% on average.
APA, Harvard, Vancouver, ISO, and other styles
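The abstract above treats batches of adversarial examples as additional domains to be aligned with the clean domain. As one concrete, hedged illustration (assuming PyTorch), the sketch below builds such an "adversarial domain" with the standard FGSM attack; the model, perturbation budget, and attack choice are assumptions, not necessarily those used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.03):
    """Generate FGSM adversarial examples; in a MADA-style setup these would be
    treated as one more 'domain' to align with the clean domain."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Illustrative classifier; the adversarial and clean batches can then be fed to a
# domain discriminator so the feature extractor learns domain-invariant features.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_adv = fgsm_examples(model, x, y)
```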
14

Wu, Hanjie, Yongtuo Liu, Hongmin Cai, and Shengfeng He. "Learning Transferable Perturbations for Image Captioning." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 2 (May 31, 2022): 1–18. http://dx.doi.org/10.1145/3478024.

Full text
Abstract:
Present studies have discovered that state-of-the-art deep learning models can be attacked by small but well-designed perturbations. Existing attack algorithms for the image captioning task are time-consuming, and their generated adversarial examples cannot transfer well to other models. To generate adversarial examples faster and stronger, we propose to learn the perturbations by a generative model that is governed by three novel loss functions. Image feature distortion loss is designed to maximize the encoded image feature distance between original images and the corresponding adversarial examples in the image domain, and local-global mismatching loss is introduced to separate, as far as possible, the encoded representations of the adversarial images and the ground-truth captions from a local and global perspective in the common semantic space, across the image and caption domains. Language diversity loss is to make the image captions generated by the adversarial examples as different as possible from the correct image caption in the language domain. Extensive experiments show that our proposed generative model can efficiently generate adversarial examples that successfully generalize to attack image captioning models trained on unseen large-scale datasets or with different architectures, or even the image captioning commercial service.
APA, Harvard, Vancouver, ISO, and other styles
15

Xue, Qianming, Wei Zhang, and Hongyuan Zha. "Improving Domain-Adapted Sentiment Classification by Deep Adversarial Mutual Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9362–69. http://dx.doi.org/10.1609/aaai.v34i05.6477.

Full text
Abstract:
Domain-adapted sentiment classification refers to training on a labeled source domain to well infer document-level sentiment on an unlabeled target domain. Most existing relevant models involve a feature extractor and a sentiment classifier, where the feature extractor works towards learning domain-invariant features from both domains, and the sentiment classifier is trained only on the source domain to guide the feature extractor. As such, they lack a mechanism to use sentiment polarity lying in the target domain. To improve domain-adapted sentiment classification by learning sentiment from the target domain as well, we devise a novel deep adversarial mutual learning approach involving two groups of feature extractors, domain discriminators, sentiment classifiers, and label probers. The domain discriminators enable the feature extractors to obtain domain-invariant features. Meanwhile, the label prober in each group explores document sentiment polarity of the target domain through the sentiment prediction generated by the classifier in the peer group, and guides the learning of the feature extractor in its own group. The proposed approach achieves the mutual learning of the two groups in an end-to-end manner. Experiments on multiple public datasets indicate our method obtains the state-of-the-art performance, validating the effectiveness of mutual learning through label probers.
APA, Harvard, Vancouver, ISO, and other styles
16

Yang, Kaichen, Tzungyu Tsai, Honggang Yu, Tsung-Yi Ho, and Yier Jin. "Beyond Digital Domain: Fooling Deep Learning Based Recognition System in Physical World." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 01 (April 3, 2020): 1088–95. http://dx.doi.org/10.1609/aaai.v34i01.5459.

Full text
Abstract:
Adversarial examples that can fool deep neural network (DNN) models in computer vision present a growing threat. The current methods of launching adversarial attacks concentrate on attacking image classifiers by adding noise to digital inputs. The problems of attacking object detection models and of mounting adversarial attacks in the physical world are rarely touched. Some prior works have proposed physical adversarial attacks against object detection models, but they are limited in certain aspects. In this paper, we propose a novel physical adversarial attack targeting object detection models. Instead of simply printing images, we manufacture real metal objects that could achieve the adversarial effect. In both indoor and outdoor experiments we show that our physical adversarial objects can fool widely applied object detection models including SSD, YOLO and Faster R-CNN in various environments. We also test our attack on a variety of commercial platforms for object detection and demonstrate that our attack is still valid on these platforms. Considering the potential defense mechanisms our adversarial objects may encounter, we conduct a series of experiments to evaluate the effect of existing defense methods on our physical attack.
APA, Harvard, Vancouver, ISO, and other styles
17

Dixit, Akhil. "Few-Shot Learning under Domain Shift using Adversarial Domain Adaptation." International Journal for Research in Applied Science and Engineering Technology 7, no. 9 (September 30, 2019): 969–76. http://dx.doi.org/10.22214/ijraset.2019.9135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Jan, Steve T. K., Joseph Messou, Yen-Chen Lin, Jia-Bin Huang, and Gang Wang. "Connecting the Digital and Physical World: Improving the Robustness of Adversarial Attacks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 962–69. http://dx.doi.org/10.1609/aaai.v33i01.3301962.

Full text
Abstract:
While deep learning models have achieved unprecedented success in various domains, there is also a growing concern of adversarial attacks against related applications. Recent results show that by adding a small amount of perturbations to an image (imperceptible to humans), the resulting adversarial examples can force a classifier to make targeted mistakes. So far, most existing works focus on crafting adversarial examples in the digital domain, while limited efforts have been devoted to understanding the physical domain attacks. In this work, we explore the feasibility of generating robust adversarial examples that remain effective in the physical domain. Our core idea is to use an image-to-image translation network to simulate the digital-to-physical transformation process for generating robust adversarial examples. To validate our method, we conduct a large-scale physical-domain experiment, which involves manually taking more than 3000 physical domain photos. The results show that our method outperforms existing ones by a large margin and demonstrates a high level of robustness and transferability.
APA, Harvard, Vancouver, ISO, and other styles
19

Leece, Michael. "Unsupervised Learning of HTNs in Complex Adversarial Domains." Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment 10, no. 6 (June 29, 2021): 6–9. http://dx.doi.org/10.1609/aiide.v10i6.12697.

Full text
Abstract:
While Hierarchical Task Networks are frequently cited as flexible and powerful planning models, they are often ignored due to the intensive labor cost for experts/programmers, who must create and refine the model by hand. While recent work has begun to address this issue by working towards learning aspects of an HTN model from demonstration, or even the whole framework, the focus so far has been on simple toy domains, which lack many of the challenges faced in the real world such as imperfect information and continuous environments. I plan to extend this work using the domain of real-time strategy (RTS) games, which have gained recent popularity as a challenging and complex domain for AI research.
APA, Harvard, Vancouver, ISO, and other styles
20

Li, Ranran, Shunming Li, Kun Xu, Xianglian Li, Jiantao Lu, Mengjie Zeng, Miaozhen Li, and Jun Du. "Adversarial domain adaptation of asymmetric mapping with CORAL alignment for intelligent fault diagnosis." Measurement Science and Technology 33, no. 5 (February 1, 2022): 055101. http://dx.doi.org/10.1088/1361-6501/ac3d47.

Full text
Abstract:
Rolling bearings play a vital role in the overall operation of rotating machinery. In practice, many learning methods for variable-speed fault diagnosis ignore task-specific decision boundaries, making it very difficult to completely match feature distribution between different domains. Therefore, to overcome this problem, an adversarial domain adaptation of asymmetric mapping with CORAL alignment is presented. The asymmetric mapping feature extractor is able to extract more specific-domain features with obvious distinction. Meanwhile, combining the maximum classifier discrepancy of deep transfer to give an adversarial approach and taking the task-specific decision boundaries into account, class-level alignment between the features of the source domain and target domain can be attempted. To prevent degenerate learning, which is possibly caused by asymmetric mapping and adversarial learning, the model is constrained by deep CORAL alignment to extract more domain-invariant features. Experimental results show that the proposed method can solve the variable-speed (a small span of intermediate vehicle speeds) fault diagnosis problem well, with high transfer accuracy and strong generalization.
APA, Harvard, Vancouver, ISO, and other styles
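A minimal sketch, assuming PyTorch, of the deep CORAL-style alignment term mentioned in the abstract above: it penalizes the distance between the covariance matrices of source and target feature batches. The feature sizes in the usage example are illustrative.

```python
import torch

def coral_loss(f_src, f_tgt):
    """Deep CORAL-style penalty: squared Frobenius distance between the
    second-order statistics (covariances) of source and target features."""
    def covariance(f):
        n = f.size(0)
        f_centered = f - f.mean(dim=0, keepdim=True)
        return f_centered.t() @ f_centered / (n - 1)
    d = f_src.size(1)
    return ((covariance(f_src) - covariance(f_tgt)) ** 2).sum() / (4.0 * d * d)

# Usage: add to the task loss so the extractor keeps features domain-invariant.
loss = coral_loss(torch.randn(32, 256), torch.randn(32, 256))
```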
21

Fan, Feiyan, Jiazhen Hou, and Tanghuai Fan. "Fault Diagnosis under Varying Working Conditions with Domain Adversarial Capsule Networks." 電腦學刊 33, no. 3 (June 2022): 135–46. http://dx.doi.org/10.53106/199115992022063303011.

Full text
Abstract:
Most existing studies that develop fault diagnosis methods focus on performance under steady operation while overlooking adaptability under varying working conditions. This results in the low generalization of the fault diagnosis methods. In this study, a novel deep transfer learning architecture is proposed for fault diagnosis under varying working conditions. A modified capsule network is developed by combining the domain adversarial framework and classical capsule network to simultaneously recognize the machinery fault and working conditions. The novelty of the proposed architecture mainly lies in the integration of the domain adversarial mechanism and capsule network. The idea of the domain adversarial mechanism is exploited in transfer learning, which can achieve a promising performance in cross-condition fault diagnosis tasks. With the novel architecture, learned features exhibit identical or very similar distributions in the source and target domains. Hence, the deep learning architecture trained in one working condition can be applicable to discriminative conditions without being hindered by the shift between the two domains. The proposed method is applied to analyze vibrations of a bearing system acquired under different working conditions, i.e., loads and rolling speed. The experimental results indicate that the proposed method outperforms other state-of-the-art methods in fault diagnosis under varying working conditions.
APA, Harvard, Vancouver, ISO, and other styles
22

Wang, Ximei, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. "Transferable Attention for Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5345–52. http://dx.doi.org/10.1609/aaai.v33i01.33015345.

Full text
Abstract:
Recent work in domain adaptation bridges different domains by adversarially learning a domain-invariant representation that cannot be distinguished by a domain discriminator. Existing methods of adversarial domain adaptation mainly align the global images across the source and target domains. However, it is obvious that not all regions of an image are transferable, while forcefully aligning the untransferable regions may lead to negative transfer. Furthermore, some of the images are significantly dissimilar across domains, resulting in weak image-level transferability. To this end, we present Transferable Attention for Domain Adaptation (TADA), focusing our adaptation model on transferable regions or images. We implement two types of complementary transferable attention: transferable local attention generated by multiple region-level domain discriminators to highlight transferable regions, and transferable global attention generated by single image-level domain discriminator to highlight transferable images. Extensive experiments validate that our proposed models exceed state of the art results on standard domain adaptation datasets.
APA, Harvard, Vancouver, ISO, and other styles
23

Li, Haoliang, Sinno Jialin Pan, Renjie Wan, and Alex C. Kot. "Heterogeneous Transfer Learning via Deep Matrix Completion with Adversarial Kernel Embedding." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8602–9. http://dx.doi.org/10.1609/aaai.v33i01.33018602.

Full text
Abstract:
Heterogeneous Transfer Learning (HTL) aims to solve transfer learning problems where a source domain and a target domain are of heterogeneous types of features. Most existing HTL approaches either explicitly learn feature mappings between the heterogeneous domains or implicitly reconstruct heterogeneous cross-domain features based on matrix completion techniques. In this paper, we propose a new HTL method based on a deep matrix completion framework, where kernel embedding of distributions is trained in an adversarial manner for learning heterogeneous features across domains. We conduct extensive experiments on two different vision tasks to demonstrate the effectiveness of our proposed method compared with a number of baseline methods.
APA, Harvard, Vancouver, ISO, and other styles
24

Chen, Jifa, Guojun Zhai, Gang Chen, Bo Fang, Ping Zhou, and Nan Yu. "Unsupervised Domain Adaption for High-Resolution Coastal Land Cover Mapping with Category-Space Constrained Adversarial Network." Remote Sensing 13, no. 8 (April 13, 2021): 1493. http://dx.doi.org/10.3390/rs13081493.

Full text
Abstract:
Coastal land cover mapping (CLCM) across image domains presents a fundamental and challenging segmentation task. Although adversaries-based domain adaptation methods have been proposed to address this issue, they always implement distribution alignment via a global discriminator while ignoring the data structure. Additionally, the low inter-class variances and intricate spatial details of coastal objects may entail poor presentation. Therefore, this paper proposes a category-space constrained adversarial method to execute category-level adaptive CLCM. Focusing on the underlying category information, we introduce a category-level adversarial framework to align semantic features. We summarize two diverse strategies to extract category-wise domain labels for source and target domains, where the latter is driven by self-supervised learning. Meanwhile, we generalize the lightweight adaptation module to multiple levels across a robust baseline, aiming to fine-tune the features at different spatial scales. Furthermore, the self-supervised learning approach is also leveraged as an improvement strategy to optimize the result within segmented training. We examine our method on two converse adaptation tasks and compare them with other state-of-the-art models. The overall visualization results and evaluation metrics demonstrate that the proposed method achieves excellent performance in the domain adaptation CLCM with high-resolution remotely sensed images.
APA, Harvard, Vancouver, ISO, and other styles
25

Huo, Lin, Huanchao Qi, Simiao Fei, Cong Guan, and Ji Li. "A Generative Adversarial Network Based a Rolling Bearing Data Generation Method Towards Fault Diagnosis." Computational Intelligence and Neuroscience 2022 (July 13, 2022): 1–21. http://dx.doi.org/10.1155/2022/7592258.

Full text
Abstract:
As a new generative model, the generative adversarial network (GAN) has great potential in the accuracy and efficiency of generating pseudo-real data. Nowadays, bearing fault diagnosis based on machine learning usually needs sufficient data. If enough near-real data can be generated in the case of insufficient samples in the actual operating condition, the effect of fault diagnosis will be greatly improved. In this study, a new rolling bearing data generation method based on the generative adversarial network (GAN) is proposed, which can be trained adversarially and jointly via a learned embedding, and applied to solve fault diagnosis problems with insufficient data. By analyzing the time-domain characteristics of rolling bearing life cycle monitoring data in actual working conditions, the operation data are divided into three periods, and the construction and training of the generative adversarial network model are carried out. Data generated adversarially are compared with the real data in the time domain and frequency domain, respectively, and the similarity between the generated data and the real data is verified.
APA, Harvard, Vancouver, ISO, and other styles
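A minimal sketch, assuming PyTorch, of a vanilla GAN training step for generating 1-D vibration-like segments, of the kind the abstract above describes for augmenting scarce bearing data. The network sizes, the segment length of 1024, and the dummy "real" batch are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

# Illustrative 1-D GAN for vibration-like segments of length 1024; sizes are assumptions.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1024), nn.Tanh())
D = nn.Sequential(nn.Linear(1024, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

def gan_step(real_segments):
    b = real_segments.size(0)
    fake = G(torch.randn(b, 64))

    # Discriminator: real segments -> 1, generated segments -> 0.
    d_loss = bce(D(real_segments), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: make the discriminator label generated segments as real.
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

gan_step(torch.randn(16, 1024).tanh())   # placeholder for normalized real vibration data
```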
26

Wang, Jinrui, Shanshan Ji, Baokun Han, Huaiqian Bao, and Xingxing Jiang. "Deep Adaptive Adversarial Network-Based Method for Mechanical Fault Diagnosis under Different Working Conditions." Complexity 2020 (July 23, 2020): 1–11. http://dx.doi.org/10.1155/2020/6946702.

Full text
Abstract:
The demand for transfer learning methods for mechanical fault diagnosis has considerably progressed in recent years. However, the existing methods always depend on the maximum mean discrepancy (MMD) to measure the domain discrepancy, and MMD cannot guarantee that the features from different domains are similar enough. Inspired by generative adversarial networks (GAN) and domain adversarial training of neural networks (DANN), this study presents a novel deep adaptive adversarial network (DAAN). The DAAN comprises a condition recognition module and a domain adversarial learning module. The condition recognition module is constructed with a generator to extract features and classify the health condition of machinery automatically. The domain adversarial learning module is achieved with a discriminator based on Wasserstein distance to learn domain-invariant features. Then spectral normalization (SN) is employed to accelerate convergence. The effectiveness of DAAN is demonstrated through three transfer fault diagnosis experiments, and the results show that the DAAN can converge to zero after approximately 15 training epochs, and all the average testing accuracies in each case can achieve over 92%. It is expected that the proposed DAAN can effectively learn domain-invariant features to bridge the discrepancy between the data from different working conditions.
APA, Harvard, Vancouver, ISO, and other styles
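A minimal sketch, assuming PyTorch, of a domain critic in the spirit of the abstract above: a Wasserstein-style domain loss computed by a critic whose layers are wrapped with spectral normalization to keep it approximately Lipschitz. Layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Illustrative domain critic: spectral normalization keeps it (approximately)
# Lipschitz so the Wasserstein-style objective stays well behaved.
critic = nn.Sequential(
    spectral_norm(nn.Linear(256, 128)), nn.ReLU(),
    spectral_norm(nn.Linear(128, 1)),
)

def wasserstein_domain_loss(f_src, f_tgt):
    """Critic estimate of the distance between source and target feature
    distributions; the critic maximizes it, the feature extractor minimizes it."""
    return critic(f_src).mean() - critic(f_tgt).mean()

w = wasserstein_domain_loss(torch.randn(16, 256), torch.randn(16, 256))
```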
27

Noa, J., P. J. Soto, G. A. O. P. Costa, D. Wittich, R. Q. Feitosa, and F. Rottensteiner. "ADVERSARIAL DISCRIMINATIVE DOMAIN ADAPTATION FOR DEFORESTATION DETECTION." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-3-2021 (June 17, 2021): 151–58. http://dx.doi.org/10.5194/isprs-annals-v-3-2021-151-2021.

Full text
Abstract:
Although very efficient in a number of application fields, deep learning based models are known to demand large amounts of labeled data for training. Particularly for remote sensing applications, responding to that demand is generally expensive and time consuming. Moreover, supervised training methods tend to perform poorly when they are tested with a set of samples that does not match the general characteristics of the training set. Domain adaptation methods can be used to mitigate those problems, especially in applications where labeled data is only available for a particular region or epoch, i.e., for a source domain, but not for a target domain on which the model should be tested. In this work we introduce a domain adaptation approach based on representation matching for the deforestation detection task. The approach follows the Adversarial Discriminative Domain Adaptation (ADDA) framework, and we introduce a margin-based regularization constraint in the learning process that promotes a better convergence of the model parameters during training. The approach is evaluated using three different domains, which represent sites in different forest biomes. The experimental results show that the approach is successful in the adaptation of most of the domain combination scenarios, usually with considerable gains in relation to the baselines.
APA, Harvard, Vancouver, ISO, and other styles
28

Xiang, Shoubing, Jiangquan Zhang, Hongli Gao, Dalei Shi, and Liang Chen. "A Deep Transfer Learning Method for Bearing Fault Diagnosis Based on Domain Separation and Adversarial Learning." Shock and Vibration 2021 (June 18, 2021): 1–9. http://dx.doi.org/10.1155/2021/5540084.

Full text
Abstract:
Current studies on intelligent bearing fault diagnosis based on transfer learning have been fruitful. However, these methods mainly focus on transfer fault diagnosis of bearings under different working conditions. In engineering practice, it is often difficult or even impossible to obtain a large amount of labeled data from some machines, and an intelligent diagnostic method trained by labeled data from one machine may not be able to classify unlabeled data from other machines, strongly hindering the application of these intelligent diagnostic methods in certain industries. In this study, a deep transfer learning method for bearing fault diagnosis, domain separation reconstruction adversarial networks (DSRAN), was proposed for the transfer fault diagnosis between machines. In DSRAN, domain-difference and domain-invariant feature extractors are used to extract and separate domain-difference and domain-invariant features, respectively. Moreover, the idea of generative adversarial networks (GAN) was used to improve the network in learning domain-invariant features. By using domain-invariant features, DSRAN can adapt to the distribution of the data in the source and target domains. Six transfer fault diagnosis experiments were performed to verify the effectiveness of the proposed method, and the average accuracy reached 89.68%. The results showed that the DSRAN method trained by labeled data obtained from one machine can be used to identify the health state of the unlabeled data obtained from other machines.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhang, Yixin, and Zilei Wang. "Joint Adversarial Learning for Domain Adaptation in Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6877–84. http://dx.doi.org/10.1609/aaai.v34i04.6169.

Full text
Abstract:
Unsupervised domain adaptation in semantic segmentation is to exploit the pixel-level annotated samples in the source domain to aid the segmentation of unlabeled samples in the target domain. For such a task, the key point is to learn domain-invariant representations and adversarial learning is usually used, in which the discriminator is to distinguish which domain the input comes from, and the segmentation model targets to deceive the domain discriminator. In this work, we first propose a novel joint adversarial learning (JAL) to boost the domain discriminator in output space by introducing the information of domain discriminator from low-level features. Consequently, the training of the high-level decoder would be enhanced. Then we propose a weight transfer module (WTM) to alleviate the inherent bias of the trained decoder towards source domain. Specifically, WTM changes the original decoder into a new decoder, which is learned only under the supervision of adversarial loss and thus mainly focuses on reducing domain divergence. The extensive experiments on two widely used benchmarks show that our method can bring considerable performance improvement over different baseline methods, which well demonstrates the effectiveness of our method in the output space adaptation.
APA, Harvard, Vancouver, ISO, and other styles
30

Huang, Min, and Jinghan Yin. "Research on Adversarial Domain Adaptation Method and Its Application in Power Load Forecasting." Mathematics 10, no. 18 (September 6, 2022): 3223. http://dx.doi.org/10.3390/math10183223.

Full text
Abstract:
Domain adaptation has been used to transfer the knowledge from the source domain to the target domain where training data is insufficient in the target domain; thus, it can overcome the data shortage problem of power load forecasting effectively. Inspired by Generative Adversarial Networks (GANs), adversarial domain adaptation transfers knowledge in adversarial learning. Existing adversarial domain adaptation faces the problems of adversarial disequilibrium and a lack of transferability quantification, which will eventually decrease the prediction accuracy. To address this issue, a novel adversarial domain adaptation method is proposed. Firstly, by analyzing the causes of the adversarial disequilibrium, an initial state fusion strategy is proposed to improve the reliability of the domain discriminator, thus maintaining the adversarial equilibrium. Secondly, domain similarity is calculated to quantify the transferability of source domain samples based on information entropy; through weighting in the process of domain alignment, the knowledge is transferred selectively and the negative transfer is suppressed. Finally, the Building Data Genome Project 2 (BDGP2) dataset is used to validate the proposed method. The experimental results demonstrate that the proposed method can alleviate the problem of adversarial disequilibrium and reasonably quantify the transferability to improve the accuracy of power load forecasting.
APA, Harvard, Vancouver, ISO, and other styles
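One plausible reading of the entropy-based transferability weighting mentioned in the abstract above is sketched below (assuming PyTorch): source samples on which the domain discriminator is uncertain receive larger weights during domain alignment. The exact formula used in the paper may differ; this is only an illustration.

```python
import torch

def entropy_weights(domain_probs, eps=1e-8):
    """One plausible entropy-based weighting: source samples on which the domain
    discriminator is uncertain (high binary entropy) are treated as more
    transferable and given larger weight during domain alignment."""
    p = domain_probs.clamp(eps, 1.0 - eps)
    entropy = -(p * p.log() + (1.0 - p) * (1.0 - p).log())   # binary entropy per sample
    return (entropy / entropy.sum()).detach()                 # normalize to sum to 1

# Usage: multiply a per-sample alignment loss by these coefficients.
domain_probs = torch.rand(32)            # discriminator's P(source) for 32 source samples
weights = entropy_weights(domain_probs)
```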
31

Kushchuk, Denis, Maxim Ryndin, Alexander Yatskov, and Maksim Varlamov. "Using Domain Adversarial Learning for Text Captchas Recognition." Proceedings of the Institute for System Programming of the RAS 32, no. 4 (2020): 203–16. http://dx.doi.org/10.15514/ispras-2020-32(4)-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Shanshan, Lei Zhang, and Jingru Fu. "Adversarial transfer learning for cross-domain visual recognition." Knowledge-Based Systems 204 (September 2020): 106258. http://dx.doi.org/10.1016/j.knosys.2020.106258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Grießhaber, Daniel, Ngoc Thang Vu, and Johannes Maucher. "Low-resource text classification using domain-adversarial learning." Computer Speech & Language 62 (July 2020): 101056. http://dx.doi.org/10.1016/j.csl.2019.101056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Peng (Edward), and Matthew Russell. "Domain Adversarial Transfer Learning for Generalized Tool Wear Prediction." Annual Conference of the PHM Society 12, no. 1 (November 3, 2020): 8. http://dx.doi.org/10.36001/phmconf.2020.v12i1.1137.

Full text
Abstract:
Given its demonstrated ability in analyzing and revealing patterns underlying data, Deep Learning (DL) has been increasingly investigated to complement physics-based models in various aspects of smart manufacturing, such as machine condition monitoring and fault diagnosis, complex manufacturing process modeling, and quality inspection. However, successful implementation of DL techniques relies greatly on the amount, variety, and veracity of data for robust network training. Also, the distributions of data used for network training and application should be identical to avoid the internal covariance shift problem that reduces the network performance applicability. As a promising solution to address these challenges, Transfer Learning (TL) enables DL networks trained on a source domain and task to be applied to a separate target domain and task. This paper presents a domain adversarial TL approach, based upon the concepts of generative adversarial networks. In this method, the optimizer seeks to minimize the loss (i.e., regression or classification accuracy) across the labeled training examples from the source domain while maximizing the loss of the domain classifier across the source and target data sets (i.e., maximizing the similarity of source and target features). The developed domain adversarial TL method has been implemented on a 1-D CNN backbone network and evaluated for prediction of tool wear propagation, using NASA's milling dataset. Performance has been compared to other TL techniques, and the results indicate that domain adversarial TL can successfully allow DL models trained on certain scenarios to be applied to new target tasks.
APA, Harvard, Vancouver, ISO, and other styles
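A minimal sketch, assuming PyTorch, of the gradient-reversal mechanism commonly used to realize the minimax objective described in the abstract above (minimize the task loss on labeled source data while maximizing the domain classifier's loss with respect to the feature extractor). The flattened 1-D input, layer sizes, and regression head are illustrative assumptions, not the paper's 1-D CNN backbone.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on the
    backward pass, so minimizing the domain loss with respect to the feature
    extractor actually maximizes it (the adversarial part of the objective)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Flatten(), nn.Linear(1024, 128), nn.ReLU())     # stand-in feature extractor
regressor = nn.Linear(128, 1)                                               # e.g. tool-wear prediction head
domain_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

def forward_pass(x, lam=1.0):
    f = features(x)
    wear = regressor(f)                                      # minimized on labeled source data
    domain_logits = domain_head(GradReverse.apply(f, lam))   # domain loss reversed for the extractor
    return wear, domain_logits

wear, dom = forward_pass(torch.randn(8, 1, 1024))
```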
35

Chen, Keyu, Di Zhuang, and J. Morris Chang. "Discriminative adversarial domain generalization with meta-learning based cross-domain validation." Neurocomputing 467 (January 2022): 418–26. http://dx.doi.org/10.1016/j.neucom.2021.09.046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Pan, Boxiao, Zhangjie Cao, Ehsan Adeli, and Juan Carlos Niebles. "Adversarial Cross-Domain Action Recognition with Co-Attention." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11815–22. http://dx.doi.org/10.1609/aaai.v34i07.6854.

Full text
Abstract:
Action recognition has been a widely studied topic with a heavy focus on supervised learning involving sufficient labeled videos. However, the problem of cross-domain action recognition, where training and testing videos are drawn from different underlying distributions, remains largely under-explored. Previous methods directly employ techniques for cross-domain image recognition, which tend to suffer from the severe temporal misalignment problem. This paper proposes a Temporal Co-attention Network (TCoN), which matches the distributions of temporally aligned action features between source and target domains using a novel cross-domain co-attention mechanism. Experimental results on three cross-domain action recognition datasets demonstrate that TCoN improves both previous single-domain and cross-domain methods significantly under the cross-domain setting.
APA, Harvard, Vancouver, ISO, and other styles
37

Vitorino, João, Nuno Oliveira, and Isabel Praça. "Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection." Future Internet 14, no. 4 (March 29, 2022): 108. http://dx.doi.org/10.3390/fi14040108.

Full text
Abstract:
Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Liquan, and Yan Liu. "Spectral Normalization for Domain Adaptation." Information 11, no. 2 (January 27, 2020): 68. http://dx.doi.org/10.3390/info11020068.

Full text
Abstract:
The transfer learning method is used to extend our existing model to more difficult scenarios, thereby accelerating the training process and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning. It uses the domain discriminator to identify which images the extracted features belong to. The features are obtained from the feature extraction network. The stability of the domain discriminator directly affects the classification accuracy. Here, we propose a new algorithm to improve the predictive accuracy. First, we introduce the Lipschitz constraint condition into domain adaptation. If the constraint condition can be satisfied, the method will be stable. Second, we analyze how to make the gradient satisfy the condition, thereby deducing the modified gradient via the spectrum regularization method. The modified gradient is then used to update the parameter matrix. The proposed method is compared to the ResNet-50, deep adaptation network, domain adversarial neural network, joint adaptation network, and conditional domain adversarial network methods using the datasets that are found in Office-31, ImageCLEF-DA, and Office-Home. The simulations demonstrate that the proposed method has a better performance than other methods with respect to accuracy.
APA, Harvard, Vancouver, ISO, and other styles
39

Mao, Ye, Farzaneh Khoshnevisan, Thomas Price, Tiffany Barnes, and Min Chi. "Cross-Lingual Adversarial Domain Adaptation for Novice Programming." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 7 (June 28, 2022): 7682–90. http://dx.doi.org/10.1609/aaai.v36i7.20735.

Full text
Abstract:
Student modeling sits at the epicenter of adaptive learning technology. In contrast to the voluminous work on student modeling for well-defined domains such as algebra, there has been little research on student modeling in programming (SMP) due to data scarcity caused by the unbounded solution spaces of open-ended programming exercises. In this work, we focus on two essential SMP tasks: program classification and early prediction of student success, and propose a Cross-Lingual Adversarial Domain Adaptation (CrossLing) framework that can leverage a large programming dataset to learn features that can improve SMP models built using a much smaller dataset in a different programming language. Our framework maintains one globally invariant latent representation across both datasets via an adversarial learning process, as well as allocating domain-specific models for each dataset to extract local latent representations that cannot and should not be united. By separating globally-shared representations from domain-specific representations, our framework outperforms existing state-of-the-art methods for both SMP tasks.
APA, Harvard, Vancouver, ISO, and other styles
40

Soto, P. J., G. A. O. P. Costa, R. Q. Feitosa, P. N. Happ, M. X. Ortega, J. Noa, C. A. Almeida, and C. Heipke. "DOMAIN ADAPTATION WITH CYCLEGAN FOR CHANGE DETECTION IN THE AMAZON FOREST." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLIII-B3-2020 (August 22, 2020): 1635–43. http://dx.doi.org/10.5194/isprs-archives-xliii-b3-2020-1635-2020.

Full text
Abstract:
Deep learning classification models require large amounts of labeled training data to perform properly, but the production of reference data for most Earth observation applications is a labor intensive, costly process. In that sense, transfer learning is an option to mitigate the demand for labeled data. In many remote sensing applications, however, the accuracy of a deep learning-based classification model trained with a specific dataset drops significantly when it is tested on a different dataset, even after fine-tuning. In general, this behavior can be credited to the domain shift phenomenon. In remote sensing applications, domain shift can be associated with changes in the environmental conditions during the acquisition of new data, variations of objects’ appearances, geographical variability and different sensor properties, among other aspects. In recent years, deep learning-based domain adaptation techniques have been used to alleviate the domain shift problem. Recent improvements in domain adaptation technology rely on techniques based on Generative Adversarial Networks (GANs), such as the Cycle-Consistent Generative Adversarial Network (CycleGAN), which adapts images across different domains by learning nonlinear mapping functions between the domains. In this work, we exploit the CycleGAN approach for domain adaptation in a particular change detection application, namely, deforestation detection in the Amazon forest. Experimental results indicate that the proposed approach is capable of alleviating the effects associated with domain shift in the context of the target application.
APA, Harvard, Vancouver, ISO, and other styles
41

Zhang, Yuan, Regina Barzilay, and Tommi Jaakkola. "Aspect-augmented Adversarial Networks for Domain Adaptation." Transactions of the Association for Computational Linguistics 5 (December 2017): 515–28. http://dx.doi.org/10.1162/tacl_a_00077.

Full text
Abstract:
We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to target encoded documents. We ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.
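The soft sentence selection described above can be approximated by weighting sentence embeddings with a learned relevance score before a shared classifier is applied. The sketch below is an illustrative reconstruction with assumed dimensions, not the authors' implementation; the aspect-adversarial discriminator that enforces aspect-invariance is only indicated in a comment.

```python
# Aspect-dependent soft sentence selection followed by a shared classifier.
# Dimensions and module names are assumptions for illustration.
import torch
import torch.nn as nn

class AspectDocumentEncoder(nn.Module):
    def __init__(self, sent_dim=256, num_classes=2):
        super().__init__()
        self.relevance = nn.Linear(sent_dim, 1)              # scores each sentence for the aspect
        self.classifier = nn.Linear(sent_dim, num_classes)   # shared across source/target aspects

    def forward(self, sentence_embeddings):
        # sentence_embeddings: (num_sentences, sent_dim)
        weights = torch.softmax(self.relevance(sentence_embeddings), dim=0)
        doc_vector = (weights * sentence_embeddings).sum(dim=0)   # soft selection of relevant sentences
        # doc_vector is what an aspect discriminator would receive during
        # adversarial training to make the encoding aspect-invariant.
        return self.classifier(doc_vector), doc_vector
```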
APA, Harvard, Vancouver, ISO, and other styles
42

Hasan, S. M. Kamrul, and Cristian A. Linte. "Learning Deep Representations of Cardiac Structures for 4D Cine MRI Image Segmentation through Semi-Supervised Learning." Applied Sciences 12, no. 23 (November 28, 2022): 12163. http://dx.doi.org/10.3390/app122312163.

Full text
Abstract:
Learning good data representations for medical imaging tasks ensures the preservation of relevant information and the removal of irrelevant information from the data to improve the interpretability of the learned features. In this paper, we propose a semi-supervised model—namely, combine-all in semi-supervised learning (CqSL)—to demonstrate the power of a simple combination of a disentanglement block, variational autoencoder (VAE), generative adversarial network (GAN), and a conditioning layer-based reconstructor for performing two important tasks in medical imaging: segmentation and reconstruction. Our work is motivated by the recent progress in image segmentation using semi-supervised learning (SSL), which has shown good results with limited labeled data and large amounts of unlabeled data. A disentanglement block decomposes an input image into a domain-invariant spatial factor and a domain-specific non-spatial factor. We assume that medical images acquired using multiple scanners (different domain information) share a common spatial space but differ in non-spatial space (intensities, contrast, etc.). Hence, we utilize our spatial information to generate segmentation masks from unlabeled datasets using a generative adversarial network (GAN). Finally, to reconstruct the original image, our conditioning layer-based reconstruction block recombines spatial information with random non-spatial information sampled from the generative models. Our ablation study demonstrates the benefits of disentanglement in holding domain-invariant (spatial) as well as domain-specific (non-spatial) information with high accuracy. We further apply a structured L2 similarity (SL2SIM) loss along with a mutual information minimizer (MIM) to improve the adversarially trained generative models for better reconstruction. Experimental results achieved on the STACOM 2017 ACDC cine cardiac magnetic resonance (MR) dataset suggest that our proposed (CqSL) model outperforms fully supervised and semi-supervised models, achieving an 83.2% performance accuracy even when using only 1% labeled data. We hypothesize that our proposed model has the potential to become an efficient semantic segmentation tool that may be used for domain adaptation in data-limited medical imaging scenarios, where annotations are expensive. Code and experimental configurations will be made available publicly.
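Recombining a domain-invariant spatial factor with a sampled non-spatial code through a conditioning layer, as the abstract describes, resembles FiLM-style feature modulation. The sketch below illustrates that general idea with assumed shapes and layer sizes; it is not the CqSL reconstructor.

```python
# FiLM-like conditioning: a non-spatial (style/intensity) code modulates a spatial factor
# before decoding back to an image. Shapes and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionedReconstructor(nn.Module):
    def __init__(self, spatial_ch=8, z_dim=16, out_ch=1):
        super().__init__()
        self.to_scale = nn.Linear(z_dim, spatial_ch)   # gamma derived from the non-spatial code
        self.to_shift = nn.Linear(z_dim, spatial_ch)   # beta derived from the non-spatial code
        self.decode = nn.Conv2d(spatial_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, spatial, z):
        # spatial: (B, spatial_ch, H, W) anatomy-like factor; z: (B, z_dim) style-like factor
        gamma = self.to_scale(z).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_shift(z).unsqueeze(-1).unsqueeze(-1)
        modulated = gamma * spatial + beta              # feature-wise conditioning
        return torch.sigmoid(self.decode(modulated))    # reconstructed image in [0, 1]
```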
APA, Harvard, Vancouver, ISO, and other styles
43

Cao, Yu, Meng Fang, Baosheng Yu, and Joey Tianyi Zhou. "Unsupervised Domain Adaptation on Reading Comprehension." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 7480–87. http://dx.doi.org/10.1609/aaai.v34i05.6245.

Full text
Abstract:
Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across different domains remains unclear. To alleviate the problem, we investigate unsupervised domain adaptation on RC, wherein a model is trained on a labeled source domain and applied to a target domain with only unlabeled samples. We first show that even with the powerful BERT contextual representation, a model cannot generalize well from one domain to another. To solve this, we provide a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset along with confidence filtering to generate reliable pseudo-labeled samples in the target domain for self-training. On the other hand, it further reduces domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show our approach achieves comparable performance to supervised models on multiple large-scale benchmark datasets.
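The self-training step described above hinges on confidence filtering of pseudo-labels in the target domain. A minimal sketch of that filtering step is shown below; the threshold value and data-loader interface are assumptions for illustration rather than values taken from CASe.

```python
# Confidence-filtered pseudo-labelling for self-training on an unlabeled target domain.
# `model` is any classifier returning logits; the 0.9 threshold is an assumed value.
import torch

@torch.no_grad()
def make_pseudo_labels(model, target_loader, threshold=0.9, device="cpu"):
    model.eval()
    kept_inputs, kept_labels = [], []
    for inputs in target_loader:                      # unlabeled target batches (tensors)
        inputs = inputs.to(device)
        probs = torch.softmax(model(inputs), dim=-1)
        conf, preds = probs.max(dim=-1)
        mask = conf >= threshold                      # keep only confident predictions
        kept_inputs.append(inputs[mask].cpu())
        kept_labels.append(preds[mask].cpu())
    return torch.cat(kept_inputs), torch.cat(kept_labels)

# The resulting (input, pseudo-label) pairs would then be mixed into the next round of
# fine-tuning, alternating with the conditional adversarial alignment step.
```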
APA, Harvard, Vancouver, ISO, and other styles
44

Wang, Xingmei, Boxuan Sun, and Hongbin Dong. "Domain-invariant adversarial learning with conditional distribution alignment for unsupervised domain adaptation." IET Computer Vision 14, no. 8 (December 1, 2020): 642–49. http://dx.doi.org/10.1049/iet-cvi.2019.0514.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Yuan, Yumeng, Yuhua Li, Zhenlong Zhu, Ruixuan Li, and Xiwu Gu. "Joint Domain Adaptation Based on Adversarial Dynamic Parameter Learning." IEEE Transactions on Emerging Topics in Computational Intelligence 5, no. 4 (August 2021): 714–23. http://dx.doi.org/10.1109/tetci.2021.3055873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yu, Hao, and Mengqi Hu. "Epilepsy SEEG Data Classification Based On Domain Adversarial Learning." IEEE Access 9 (2021): 82000–82009. http://dx.doi.org/10.1109/access.2021.3086885.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ding, Xiao, Qiankun Shi, Bibo Cai, Ting Liu, Yanyan Zhao, and Qiang Ye. "Learning Multi-Domain Adversarial Neural Networks for Text Classification." IEEE Access 7 (2019): 40323–32. http://dx.doi.org/10.1109/access.2019.2904858.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Zhao, Xin, and Shengsheng Wang. "Adversarial Learning and Interpolation Consistency for Unsupervised Domain Adaptation." IEEE Access 7 (2019): 170448–56. http://dx.doi.org/10.1109/access.2019.2956103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Qian, Qi, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, and Hao Li. "Robust Optimization over Multiple Domains." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4739–46. http://dx.doi.org/10.1609/aaai.v33i01.33014739.

Full text
Abstract:
In this work, we study the problem of learning a single model for multiple domains. Unlike the conventional machine learning scenario where each domain can have its own model, multiple domains (i.e., applications/users) may share the same machine learning model due to maintenance loads in cloud computing services. For example, a digit-recognition model should be applicable to hand-written digits, house numbers, car plates, etc. Therefore, an ideal model for cloud computing has to perform well on each applicable domain. To address this new challenge from cloud computing, we develop a framework of robust optimization over multiple domains. In lieu of minimizing the empirical risk, we aim to learn a model optimized for the adversarial distribution over multiple domains. Hence, we propose to learn the model and the adversarial distribution simultaneously with a stochastic algorithm for efficiency. Theoretically, we analyze the convergence rate for convex and non-convex models. To the best of our knowledge, this is the first study of the convergence rate of learning a robust non-convex model with a practical algorithm. Furthermore, we demonstrate that the robustness of the framework and the convergence rate can be further enhanced by appropriate regularizers over the adversarial distribution. The empirical study on real-world fine-grained visual categorization and digit recognition tasks verifies the effectiveness and efficiency of the proposed framework.
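The min-max formulation described above alternates a model update on the adversarially weighted mixture of per-domain losses with an update of the adversarial distribution on the probability simplex. The sketch below shows one such alternating step, using exponentiated-gradient (mirror) ascent for the weights; the step size, loss choice, and batch interface are illustrative assumptions, not the authors' algorithm verbatim.

```python
# One alternating min-max step: descend on the weighted loss, ascend on the domain weights.
# eta_p and the (x, y) batch interface are assumed for illustration.
import torch

def robust_step(model, optimizer, domain_batches, weights, eta_p=0.1):
    # domain_batches: list of (x, y) batches, one per domain; weights: tensor on the simplex
    losses = torch.stack([
        torch.nn.functional.cross_entropy(model(x), y) for x, y in domain_batches
    ])

    # Minimize the adversarially weighted loss with respect to the model parameters.
    optimizer.zero_grad()
    (weights * losses).sum().backward()
    optimizer.step()

    # Maximize with respect to the adversarial distribution (exponentiated-gradient ascent).
    with torch.no_grad():
        weights = weights * torch.exp(eta_p * losses.detach())
        weights = weights / weights.sum()
    return weights

# Initialise with weights = torch.full((num_domains,), 1.0 / num_domains) and feed the
# returned weights into the next call.
```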
APA, Harvard, Vancouver, ISO, and other styles
50

Kwon, Hyun, and Sanghyun Lee. "Textual Adversarial Training of Machine Learning Model for Resistance to Adversarial Examples." Security and Communication Networks 2022 (April 7, 2022): 1–12. http://dx.doi.org/10.1155/2022/4511510.

Full text
Abstract:
Deep neural networks provide good performance for image recognition, speech recognition, text recognition, and pattern recognition. However, such networks are vulnerable to attack by adversarial examples. Adversarial examples are created by adding a small amount of noise to an original sample in such a way that the change is imperceptible to humans, yet the sample will be incorrectly recognized by a model. Adversarial examples have been studied mainly in the context of images, but research has expanded to include the text domain. In the textual context, an adversarial example is a sample of text in which certain important words have been changed so that the sample will be misclassified by a model even though, to humans, it is the same as the original text in terms of meaning and grammar. In the text domain, there have been relatively few studies on defenses against adversarial examples compared with the number of studies on adversarial example attacks. In this paper, we propose an adversarial training method to defend against adversarial examples that target the latest text model, bidirectional encoder representations from transformers (BERT). In the proposed method, adversarial examples are generated using various parameters and then applied in additional training of the target model to instill robustness against unknown adversarial examples. Experiments were conducted using five datasets (AG’s News, a movie review dataset, the IMDB Large Movie Review Dataset (IMDB), the Stanford Natural Language Inference (SNLI) corpus, and the Multi-Genre Natural Language Inference (MultiNLI) corpus), with TensorFlow as the machine learning library. According to the experimental results, the baseline model had an accuracy of 88.1% on the original sentences and an accuracy of 9.2% on the adversarial sentences, whereas the model that underwent the proposed training method maintained an average accuracy of 87.2% on the original sentences and had an average accuracy of 22.5% on the adversarial sentences.
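The defence described above augments fine-tuning with adversarial copies of the training sentences. The sketch below shows one way such a mixed objective could look, using a HuggingFace-style model/tokenizer interface for brevity (an assumption; the cited experiments used TensorFlow); make_adversarial is a hypothetical stand-in for any word-substitution attack, not an API from the paper.

```python
# Mixed clean/adversarial fine-tuning loss for a text classifier.
# Assumes a HuggingFace-style `model`/`tokenizer`; `make_adversarial` is hypothetical.
import torch
import torch.nn.functional as F

def adversarial_training_loss(model, tokenizer, sentences, labels, make_adversarial, alpha=0.5):
    clean_batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
    adv_sentences = [make_adversarial(s) for s in sentences]   # perturbed copies of each sentence
    adv_batch = tokenizer(adv_sentences, return_tensors="pt", padding=True, truncation=True)

    labels = torch.as_tensor(labels)
    loss_clean = F.cross_entropy(model(**clean_batch).logits, labels)
    loss_adv = F.cross_entropy(model(**adv_batch).logits, labels)
    # Weighting both terms aims to keep accuracy on clean text while adding robustness.
    return alpha * loss_clean + (1.0 - alpha) * loss_adv
```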
APA, Harvard, Vancouver, ISO, and other styles