
Journal articles on the topic 'Target domain'


Consult the top 50 journal articles for your research on the topic 'Target domain.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Li, Haoliang, Wen Li, and Shiqi Wang. "Discovering and incorporating latent target-domains for domain adaptation." Pattern Recognition 108 (December 2020): 107536. http://dx.doi.org/10.1016/j.patcog.2020.107536.

2

Choi, Jongwon, Youngjoon Choi, Jihoon Kim, Jinyeop Chang, Ilhwan Kwon, Youngjune Gwon, and Seungjai Min. "Visual Domain Adaptation by Consensus-Based Transfer to Intermediate Domain." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10655–62. http://dx.doi.org/10.1609/aaai.v34i07.6692.

Abstract:
We describe an unsupervised domain adaptation framework for images based on a transform to an abstract intermediate domain and ensemble classifiers seeking a consensus. The intermediate domain can be thought of as a latent domain to which both the source and target domains can be transferred easily. The proposed framework aligns both domains to the intermediate domain, which greatly improves the adaptation performance when the source and target domains are notably dissimilar. In addition, we propose an ensemble model trained by confusing multiple classifiers and letting them reach a consensus alternately to enhance the adaptation performance for ambiguous samples. To estimate the hidden intermediate domain and the unknown labels of the target domain simultaneously, we develop a training algorithm using a double-structured architecture. We validate the proposed framework in hard adaptation scenarios with real-world datasets, from simple synthetic domains to complex real-world domains. The proposed algorithm outperforms the previous state-of-the-art algorithms in various environments.
3

Xu, Yifan, Kekai Sheng, Weiming Dong, Baoyuan Wu, Changsheng Xu, and Bao-Gang Hu. "Towards Corruption-Agnostic Robust Domain Adaptation." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3501800.

Abstract:
Great progress has been achieved in domain adaptation over the past decades. Existing works are typically based on the ideal assumption that testing target domains are independent and identically distributed with the training target domains. However, due to unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be corruption-robust on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA), whose goal is to be accurate on original data and robust against corruptions on target domains that are unavailable for training. This task is non-trivial due to the large domain discrepancy and unsupervised target domains. We observe that simple combinations of popular methods of domain adaptation and corruption robustness have suboptimal CRDA results. We propose a new approach based on two technical insights into CRDA: (1) an easy-to-plug module called the domain discrepancy generator (DDG), which generates samples that enlarge the domain discrepancy to mimic unpredictable corruptions; (2) a simple but effective teacher-student scheme with a contrastive loss to enhance the constraints on target domains. Experiments verify that DDG maintains or even improves performance on original data and achieves better corruption robustness than baselines. Our code is available at: https://github.com/YifanXu74/CRDA.
4

GLYNN, Paul. "Neuropathy target esterase." Biochemical Journal 344, no. 3 (December 8, 1999): 625–31. http://dx.doi.org/10.1042/bj3440625.

Abstract:
Neuropathy target esterase (NTE) is an integral membrane protein present in all neurons and in some non-neural-cell types of vertebrates. Recent data indicate that NTE is involved in a cell-signalling pathway controlling interactions between neurons and accessory glial cells in the developing nervous system. NTE has serine esterase activity and efficiently catalyses the hydrolysis of phenyl valerate (PV) in vitro, but its physiological substrate is unknown. By sequence analysis NTE has been found to be related neither to the major serine esterase family, which includes acetylcholinesterase, nor to any other known serine hydrolases. NTE comprises at least two functional domains: an N-terminal putative regulatory domain and a C-terminal effector domain which contains the esterase activity and is, in part, conserved in proteins found in bacteria, yeast, nematodes and insects. NTE's effector domain contains three predicted transmembrane segments, and the active-site serine residue lies at the centre of one of these segments. The isolated recombinant domain shows PV hydrolase activity only when incorporated into phospholipid liposomes. NTE's esterase activity appears to be largely redundant in adult vertebrates, but organophosphates which react with NTE in vivo initiate unknown events which lead, after a delay of 1-3 weeks, to a neuropathy with degeneration of long axons. These neuropathic organophosphates leave a negatively charged group covalently attached to the active-site serine residue, and it is suggested that this may cause a toxic gain of function in NTE.
5

Ye, Fei, and Mingjie Zhang. "Structures and target recognition modes of PDZ domains: recurring themes and emerging pictures." Biochemical Journal 455, no. 1 (September 13, 2013): 1–14. http://dx.doi.org/10.1042/bj20130783.

Abstract:
PDZ domains are highly abundant protein–protein interaction modules and are often found in multidomain scaffold proteins. PDZ-domain-containing scaffold proteins regulate multiple biological processes, including trafficking and clustering receptors and ion channels at defined membrane regions, organizing and targeting signalling complexes at specific cellular compartments, interfacing cytoskeletal structures with membranes, and maintaining various cellular structures. PDZ domains, each with ~90-amino-acid residues folding into a highly similar structure, are best known to bind to short C-terminal tail peptides of their target proteins. A series of recent studies have revealed that, in addition to the canonical target-binding mode, many PDZ–target interactions involve amino acid residues beyond the regular PDZ domain fold, which we refer to as extensions. Such extension sequences often form an integral structural and functional unit with the attached PDZ domain, which is defined as a PDZ supramodule. Correspondingly, PDZ-domain-binding sequences from target proteins are frequently found to require extension sequences beyond canonical short C-terminal tail peptides. Formation of PDZ supramodules not only affords necessary binding specificities and affinities demanded by physiological functions of PDZ domain targets, but also provides regulatory switches to be built in the PDZ–target interactions. At the 20th anniversary of the discovery of PDZ domain proteins, we try to summarize structural features and target-binding properties of such PDZ supramodules emerging from studies in recent years.
6

Yeung, W. K., and S. Evans. "Time-domain microwave target imaging." IEE Proceedings H Microwaves, Antennas and Propagation 132, no. 6 (1985): 345. http://dx.doi.org/10.1049/ip-h-2.1985.0063.

7

Xu, Minghao, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. "Adversarial Domain Adaptation with Domain Mixup." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6502–9. http://dx.doi.org/10.1609/aaai.v34i04.6123.

Abstract:
Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake with the guidance of hard labels, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is jointly conducted at the pixel and feature levels to improve the robustness of models. Extensive experiments prove that the proposed approach can achieve superior performance on tasks with various degrees of domain shift and data complexity.
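
To make the pixel-level mixup operation mentioned in this abstract concrete, the following sketch interpolates source and target images and reuses the mixing ratio as a soft domain label. It illustrates the general mixup idea only, not the authors' DM-ADA implementation; the tensor names, the Beta(2, 2) sampling choice, and the discriminator in the usage comment are assumptions.

```python
# Illustrative sketch of pixel-level domain mixup (generic; not the DM-ADA code).
import torch

def domain_mixup(x_src, x_tgt, alpha=2.0):
    """Mix a batch of source images with a batch of target images.

    Returns the mixed images and soft domain labels lam, where lam = 1
    means 'pure source' and lam = 0 means 'pure target'.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample((x_src.size(0),))
    lam = lam.view(-1, 1, 1, 1)                    # broadcast over C, H, W
    x_mix = lam * x_src + (1.0 - lam) * x_tgt      # pixel-level interpolation
    return x_mix, lam.view(-1)

# Hypothetical usage: the discriminator regresses lam instead of a hard 0/1 label.
# x_mix, lam = domain_mixup(source_batch, target_batch)
# d_loss = torch.nn.functional.mse_loss(discriminator(x_mix).squeeze(1), lam)
```
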
8

Ren, Chuan-Xian, Yong-Hui Liu, Xi-Wen Zhang, and Ke-Kun Huang. "Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain." IEEE Transactions on Image Processing 31 (2022): 2122–35. http://dx.doi.org/10.1109/tip.2022.3152052.

9

Jin, Wei, and Nan Jia. "Learning Transferable Convolutional Proxy by SMI-Based Matching Technique." Shock and Vibration 2020 (October 14, 2020): 1–15. http://dx.doi.org/10.1155/2020/8873137.

Abstract:
Domain-transfer learning is a machine learning task that exploits a source domain data set to help the learning problem in a target domain. Usually, the source domain has sufficient labeled data, while the target domain does not. In this paper, we propose a novel domain-transfer convolutional model that maps a target domain data sample to a proxy in the source domain and applies a source domain model to the proxy for the purpose of prediction. In our framework, we first represent both source and target domain samples as feature vectors with two convolutional neural networks and then construct a proxy for each target domain sample in the source domain space. The proxy is expected to match the convolutional representation vector of the corresponding target domain sample well. To measure the matching quality, we propose to maximize the squared-loss mutual information (SMI) between the proxies and the target domain samples. We further develop a novel neural SMI estimator based on a parametric density ratio estimation function. Moreover, we also propose to minimize the classification error of both source domain samples and target domain proxies. The classification responses are also smoothed by manifolds of both the source domain and the proxy space. By minimizing an objective function of SMI, classification error, and manifold regularization, we learn the convolutional networks of both the source and target domains. In this way, the proxy of a target domain sample can be matched to the source domain data and thus benefit from the rich supervision information of the source domain. We design an iterative algorithm to update the parameters alternately and test it on benchmark data sets for abnormal behavior detection in video, Amazon product review sentiment analysis, and other tasks.
10

Doğan, Tunca, Ece Akhan Güzelcan, Marcus Baumann, Altay Koyas, Heval Atas, Ian R. Baxendale, Maria Martin, and Rengul Cetin-Atalay. "Protein domain-based prediction of drug/compound–target interactions and experimental validation on LIM kinases." PLOS Computational Biology 17, no. 11 (November 29, 2021): e1009171. http://dx.doi.org/10.1371/journal.pcbi.1009171.

Abstract:
Predictive approaches such as virtual screening have been used in drug discovery with the objective of reducing developmental time and costs. Current machine learning and network-based approaches have issues related to generalization, usability, or model interpretability, especially due to the complexity of target proteins’ structure/function, and bias in system training datasets. Here, we propose a new method “DRUIDom” (DRUg Interacting Domain prediction) to identify bio-interactions between drug candidate compounds and targets by utilizing the domain modularity of proteins, to overcome problems associated with current approaches. DRUIDom is composed of two methodological steps. First, ligands/compounds are statistically mapped to structural domains of their target proteins, with the aim of identifying their interactions. As such, other proteins containing the same mapped domain or domain pair become new candidate targets for the corresponding compounds. Next, a million-scale dataset of small molecule compounds, including those mapped to domains in the previous step, are clustered based on their molecular similarities, and their domain associations are propagated to other compounds within the same clusters. Experimentally verified bioactivity data points, obtained from public databases, are meticulously filtered to construct datasets of active/interacting and inactive/non-interacting drug/compound–target pairs (~2.9M data points), and used as training data for calculating parameters of compound–domain mappings, which led to 27,032 high-confidence associations between 250 domains and 8,165 compounds, and a finalized output of ~5 million new compound–protein interactions. DRUIDom is experimentally validated by syntheses and bioactivity analyses of compounds predicted to target LIM-kinase proteins, which play critical roles in the regulation of cell motility, cell cycle progression, and differentiation through actin filament dynamics. We showed that LIMK-inhibitor-2 and its derivatives significantly block the cancer cell migration through inhibition of LIMK phosphorylation and the downstream protein cofilin. One of the derivative compounds (LIMKi-2d) was identified as a promising candidate due to its action on resistant Mahlavu liver cancer cells. The results demonstrated that DRUIDom can be exploited to identify drug candidate compounds for intended targets and to predict new target proteins based on the defined compound–domain relationships. Datasets, results, and the source code of DRUIDom are fully-available at: https://github.com/cansyl/DRUIDom.
11

Zhu, Yongchun, Fuzhen Zhuang, and Deqing Wang. "Aligning Domain-Specific Distribution and Classifier for Cross-Domain Classification from Multiple Sources." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5989–96. http://dx.doi.org/10.1609/aaai.v33i01.33015989.

Abstract:
While Unsupervised Domain Adaptation (UDA) algorithms, in which labeled data are available only from source domains, have been actively studied in recent years, most algorithms and theoretical results focus on Single-source Unsupervised Domain Adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, and they might differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep learning based Multi-source Unsupervised Domain Adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often very hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering domain-specific decision boundaries between classes. To solve these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distributions of each pair of source and target domains in their own specific feature spaces, but also aligns the outputs of the classifiers by utilizing the domain-specific decision boundaries. Extensive experiments demonstrate that our method achieves remarkable results on popular benchmark datasets for image classification.
12

Wang, Jie, Kaibin Tian, Dayong Ding, Gang Yang, and Xirong Li. "Unsupervised Domain Expansion for Visual Categorization." ACM Transactions on Multimedia Computing, Communications, and Applications 17, no. 4 (November 30, 2021): 1–24. http://dx.doi.org/10.1145/3448108.

Abstract:
Expanding visual categorization into a novel domain without the need of extra annotation has been a long-term interest for multimedia intelligence. Previously, this challenge has been approached by unsupervised domain adaptation (UDA). Given labeled data from a source domain and unlabeled data from a target domain, UDA seeks for a deep representation that is both discriminative and domain-invariant. While UDA focuses on the target domain, we argue that the performance on both source and target domains matters, as in practice which domain a test example comes from is unknown. In this article, we extend UDA by proposing a new task called unsupervised domain expansion (UDE), which aims to adapt a deep model for the target domain with its unlabeled data, meanwhile maintaining the model’s performance on the source domain. We propose Knowledge Distillation Domain Expansion (KDDE) as a general method for the UDE task. Its domain-adaptation module can be instantiated with any existing model. We develop a knowledge distillation-based learning mechanism, enabling KDDE to optimize a single objective wherein the source and target domains are equally treated. Extensive experiments on two major benchmarks, i.e., Office-Home and DomainNet, show that KDDE compares favorably against four competitive baselines, i.e., DDC, DANN, DAAN, and CDAN, for both UDA and UDE tasks. Our study also reveals that the current UDA models improve their performance on the target domain at the cost of noticeable performance loss on the source domain.
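
The knowledge-distillation mechanism this abstract refers to can be illustrated with the standard soft-target loss below. This is a generic sketch of distillation, not the KDDE objective itself; the temperature value and the model names in the usage comment are assumptions.

```python
# Generic knowledge-distillation loss: the student matches the teacher's
# temperature-softened predictions, which is one way to retain source-domain
# behaviour while a model is adapted or expanded to a new domain.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Hypothetical usage, applying the same loss to source and target batches so
# that the two domains are treated symmetrically:
# loss = distillation_loss(student(x_src), teacher(x_src)) + \
#        distillation_loss(student(x_tgt), teacher(x_tgt))
```
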
13

Li, Jingmei, Weifei Wu, Di Xue, and Peng Gao. "Multi-Source Deep Transfer Neural Network Algorithm." Sensors 19, no. 18 (September 16, 2019): 3992. http://dx.doi.org/10.3390/s19183992.

Abstract:
Transfer learning can enhance the classification performance in a target domain with insufficient training data by utilizing knowledge relating to the target domain from a source domain. Nowadays, it is common to see two or more source domains available for knowledge transfer, which can improve the performance of learning tasks in the target domain. However, the classification performance in the target domain decreases due to the mismatch of probability distributions. Recent studies have shown that deep learning can build deep structures that extract more effective features to resist this mismatch. In this paper, we propose a new multi-source deep transfer neural network algorithm, MultiDTNN, based on convolutional neural networks and multi-source transfer learning. In MultiDTNN, joint probability distribution adaptation (JPDA) is used to reduce the mismatch between source and target domains and to enhance the feature transferability of the source domain in deep neural networks. Then, the convolutional neural network is trained utilizing the datasets of each source and target domain to obtain a set of classifiers. Finally, the designed selection strategy selects the classifier with the smallest classification error on the target domain from this set to assemble the MultiDTNN framework. The effectiveness of the proposed MultiDTNN is verified by comparing it with other state-of-the-art deep transfer learning methods on three datasets.
14

Cuong, Hoang, Khalil Sima’an, and Ivan Titov. "Adapting to All Domains at Once: Rewarding Domain Invariance in SMT." Transactions of the Association for Computational Linguistics 4 (December 2016): 99–112. http://dx.doi.org/10.1162/tacl_a_00086.

Abstract:
Existing work on domain adaptation for statistical machine translation has consistently assumed access to a small sample from the test distribution (target domain) at training time. In practice, however, the target domain may not be known at training time or it may change to match user needs. In such situations, it is natural to push the system to make safer choices, giving higher preference to domain-invariant translations, which work well across domains, over risky domain-specific alternatives. We encode this intuition by (1) inducing latent subdomains from the training data only; (2) introducing features which measure how specialized phrases are to individual induced sub-domains; (3) estimating feature weights on out-of-domain data (rather than on the target domain). We conduct experiments on three language pairs and a number of different domains. We observe consistent improvements over a baseline which does not explicitly reward domain invariance.
15

Yang, Guanglei, Haifeng Xia, Mingli Ding, and Zhengming Ding. "Bi-Directional Generation for Unsupervised Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6615–22. http://dx.doi.org/10.1609/aaai.v34i04.6137.

Abstract:
Unsupervised domain adaptation facilitates learning in the unlabeled target domain by relying on well-established source domain information. Conventional methods that forcefully reduce the domain discrepancy in the latent space tend to destroy the intrinsic data structure. To balance the mitigation of the domain gap and the preservation of the inherent structure, we propose a Bi-Directional Generation domain adaptation model with consistent classifiers, interpolating two intermediate domains to bridge the source and target domains. Specifically, two cross-domain generators are employed to synthesize one domain conditioned on the other. The performance of our proposed method can be further enhanced by the consistent classifiers and the cross-domain alignment constraints. We also design two classifiers which are jointly optimized to maximize the consistency of target sample predictions. Extensive experiments verify that our proposed model outperforms the state-of-the-art on standard cross-domain visual benchmarks.
16

Schäffner, Christina. "Unknown agents in translated political discourse." Target. International Journal of Translation Studies 24, no. 1 (September 7, 2012): 103–25. http://dx.doi.org/10.1075/target.24.1.07sch.

Abstract:
This article investigates the role of translation and interpreting in political discourse. It illustrates discursive events in the domain of politics and the resulting discourse types, such as jointly produced texts, press conferences and speeches. It shows that methods of Critical Discourse Analysis can be used effectively to reveal translation and interpreting strategies as well as transformations that occur in recontextualisation processes across languages, cultures, and discourse domains, in particular recontextualisation in mass media. It argues that the complexity of translational activities in the field of politics has not yet seen sufficient attention within Translation Studies. The article concludes by outlining a research programme for investigating political discourse in translation.
17

Lin, Yuewei, Jing Chen, Yu Cao, Youjie Zhou, Lingfeng Zhang, Yuan Yan Tang, and Song Wang. "Cross-Domain Recognition by Identifying Joint Subspaces of Source Domain and Target Domain." IEEE Transactions on Cybernetics 47, no. 4 (April 2017): 1090–101. http://dx.doi.org/10.1109/tcyb.2016.2538199.

18

Khatin-Zadeh, Omid, Zahra Eskandari, Yousef Bakhshizadeh-Gashti, Sedigheh Vahdat, and Hassan Banaruee. "An algebraic perspective on abstract and concrete domains." Cognitive Linguistic Studies 6, no. 2 (December 31, 2019): 354–69. http://dx.doi.org/10.1075/cogls.00036.kha.

Abstract:
Looking at isomorphic constructs from an algebraic perspective, this article suggests that every concrete construct is understood by reference to an underlying abstract schema in the mind of the comprehender. The complex form of every abstract schema is created by the gradual development of its elementary form. Throughout the process of cognitive development, new features are added to the elementary form of the abstract schema, which leads to the gradual formation of a fully developed abstract schema. Every developed abstract schema is the underlying source for understanding an infinite number of concrete isomorphic constructs. It is suggested that the process of mapping the base domain (base construct) onto the target domain (target construct) is conducted and mediated by an abstract domain. This abstract domain, which is free from the concrete features of the base and target, is isomorphic to both the base and target domains. To describe the mediatory role of this abstract domain, it might be argued that the chain process of understanding a less familiar domain in terms of a relatively more familiar domain (salience imbalance model) cannot continue infinitely. This chain must stop at some point. This point is the abstract domain, which is isomorphic to the base and target domains.
19

Shu, Yang, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. "Transferable Curriculum for Weakly-Supervised Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4951–58. http://dx.doi.org/10.1609/aaai.v33i01.33014951.

Abstract:
Domain adaptation improves a target task by knowledge transfer from a source domain with rich annotations. It is not uncommon that “source-domain engineering” becomes a cumbersome process in domain adaptation: high-quality source domains highly related to the target domain are hardly available. Thus, weakly-supervised domain adaptation has been introduced to address this difficulty, where we can tolerate a source domain with noise in labels, features, or both. As such, for a particular target task, we simply collect a source domain with coarse labeling or corrupted data. In this paper, we try to address two entangled challenges of weakly-supervised domain adaptation: sample noise in the source domain and distribution shift across domains. To disentangle these challenges, a Transferable Curriculum Learning (TCL) approach is proposed to train the deep networks, guided by a transferable curriculum informing which of the source examples are noiseless and transferable. The approach enhances positive transfer from clean source examples to the target and mitigates negative transfer of noisy source examples. A thorough evaluation shows that our approach significantly outperforms the state-of-the-art on weakly-supervised domain adaptation tasks.
20

Wittich, D. "DEEP DOMAIN ADAPTATION BY WEIGHTED ENTROPY MINIMIZATION FOR THE CLASSIFICATION OF AERIAL IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2020 (August 3, 2020): 591–98. http://dx.doi.org/10.5194/isprs-annals-v-2-2020-591-2020.

Abstract:
Fully convolutional neural networks (FCN) are successfully used for the automated pixel-wise classification of aerial images and possibly additional data. However, they require many labelled training samples to perform well. One approach addressing this issue is semi-supervised domain adaptation (SSDA). Here, labelled training samples from a source domain and unlabelled samples from a target domain are used jointly to obtain a target domain classifier, without requiring any labelled samples from the target domain. In this paper, a two-step approach for SSDA is proposed. The first step corresponds to a supervised training on the source domain, making use of strong data augmentation to increase the initial performance on the target domain. Secondly, the model is adapted by entropy minimization using a novel weighting strategy. The approach is evaluated on the basis of five domains, corresponding to five cities. Several training variants and adaptation scenarios are tested, indicating that proper data augmentation can already improve the initial target domain performance significantly, resulting in an average overall accuracy of 77.5%. The weighted entropy minimization improves the overall accuracy on the target domains in 19 out of 20 scenarios, on average by 1.8%. In all experiments a novel FCN architecture is used that yields results comparable to those of the best-performing models on the ISPRS labelling challenge while having an order of magnitude fewer parameters than commonly used FCNs.
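
The entropy-minimization step described in this abstract can be sketched as a per-pixel entropy loss with optional weights. This is a minimal illustration under assumed tensor shapes and variable names, not the paper's exact weighting strategy.

```python
# Minimal sketch of weighted entropy minimization on unlabeled target predictions.
import torch

def weighted_entropy_loss(logits, weights=None, eps=1e-8):
    """logits: (N, C, H, W) target-domain scores; weights: (N, H, W) or None."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)   # per-pixel entropy
    if weights is not None:
        entropy = entropy * weights          # down-weight unreliable pixels
    return entropy.mean()

# Hypothetical usage during adaptation:
# loss = source_ce_loss + lambda_ent * weighted_entropy_loss(model(x_target), w)
```
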
21

Lee, Seongmin, Hyunsik Jeon, and U. Kang. "Multi-EPL: Accurate multi-source domain adaptation." PLOS ONE 16, no. 8 (August 5, 2021): e0255754. http://dx.doi.org/10.1371/journal.pone.0255754.

Abstract:
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels. MSDA is a crucial problem applicable to many practical cases where labels for the target data are unavailable due to privacy issues. Existing MSDA frameworks are limited since they align data without considering labels of the features of each domain. They also do not fully utilize the target data without labels and rely on limited feature extraction with a single extractor. In this paper, we propose Multi-EPL, a novel method for MSDA. Multi-EPL exploits label-wise moment matching to align the conditional distributions of the features for the labels, uses pseudolabels for the unavailable target labels, and introduces an ensemble of multiple feature extractors for accurate domain adaptation. Extensive experiments show that Multi-EPL provides the state-of-the-art performance for MSDA tasks in both image domains and text domains, improving the accuracy by up to 13.20%.
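
The label-wise moment matching mentioned in this abstract can be illustrated with first-order moments only: class-conditional source means (from true labels) are pulled toward target means (from pseudolabels). This is a simplified sketch with assumed variable names, not the Multi-EPL implementation.

```python
# Sketch of label-wise (class-conditional) mean matching between domains.
import torch

def labelwise_moment_loss(feat_src, y_src, feat_tgt, y_tgt_pseudo, num_classes):
    """Average squared distance between per-class feature means of the domains."""
    loss, matched = 0.0, 0
    for c in range(num_classes):
        src_c = feat_src[y_src == c]
        tgt_c = feat_tgt[y_tgt_pseudo == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue                      # class missing from one of the batches
        loss = loss + (src_c.mean(dim=0) - tgt_c.mean(dim=0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)
```
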
22

Adayel, Reham, Yakoub Bazi, Haikel Alhichri, and Naif Alajlan. "Deep Open-Set Domain Adaptation for Cross-Scene Classification based on Adversarial Learning and Pareto Ranking." Remote Sensing 12, no. 11 (May 27, 2020): 1716. http://dx.doi.org/10.3390/rs12111716.

Abstract:
Most of the existing domain adaptation (DA) methods proposed in the context of remote sensing imagery assume the presence of the same land-cover classes in the source and target domains. Yet, this assumption is not always realistic in practice, as the target domain may contain additional classes unknown to the source, leading to so-called open-set DA. Under this challenging setting, the problem turns into reducing the distribution discrepancy between the shared classes in both domains as well as detecting samples of the unknown class in the target domain. To deal with the open-set problem, we propose an approach based on adversarial learning and Pareto-based ranking. In particular, the method leverages the distribution discrepancy between the source and target domains using min-max entropy optimization. During the alignment process, it identifies candidate samples of the unknown class from the target domain through a Pareto-based ranking scheme that uses ambiguity criteria based on entropy and the distance to the source class prototypes. Promising results on two cross-domain datasets that consist of very high resolution and extremely high resolution images show the effectiveness of the proposed method.
23

Chen, Bing Wen, and Shi Long Liu. "Infrared Target Detection Based on Temporal-Spatial Domain Fusion." Advanced Materials Research 1044-1045 (October 2014): 1186–89. http://dx.doi.org/10.4028/www.scientific.net/amr.1044-1045.1186.

Abstract:
In order to improve the accuracy and stability of infrared target detection, a novel moving target detection approach based on temporal-spatial domain fusion is presented. A multi-level spatial-temporal median filter is utilized to extract the background frame, with which the background clutter is suppressed by using the background subtraction technique. Then a local weighted operator is applied to enhance the targets. Lastly, the Otsu thresholding algorithm is utilized to detect the targets. Experimental results demonstrate that the proposed approach is capable of detecting infrared moving targets effectively, with an F1 measure of up to 92.8%.
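
The processing chain outlined in this abstract (median-filter background estimation, background subtraction, Otsu thresholding) can be sketched roughly as follows; the local weighting step is omitted and all parameter values are assumptions, so this is an illustration rather than the authors' method.

```python
# Rough sketch: temporal-median background, subtraction, and Otsu thresholding.
import numpy as np
import cv2

def detect_targets(frames):
    """frames: sequence of grayscale uint8 frames from a fixed infrared camera."""
    stack = np.stack(frames).astype(np.float32)
    background = np.median(stack, axis=0)              # temporal median background
    diff = cv2.absdiff(stack[-1], background)          # suppress static clutter
    diff = cv2.medianBlur(diff.astype(np.uint8), 3)    # spatial median filtering
    # Otsu's method picks the binarization threshold from the histogram.
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```
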
24

Kimura, Yoko, Mirai Tanigawa, Junko Kawawaki, Kenji Takagi, Tsunehiro Mizushima, Tatsuya Maeda, and Keiji Tanaka. "Conserved Mode of Interaction between Yeast Bro1 Family V Domains and YP(X)nL Motif-Containing Target Proteins." Eukaryotic Cell 14, no. 10 (July 6, 2015): 976–82. http://dx.doi.org/10.1128/ec.00091-15.

Abstract:
Yeast Bro1 and Rim20 belong to a family of proteins which possess a common architecture of Bro1 and V domains. Alix and His domain protein tyrosine phosphatase (HD-PTP), mammalian Bro1 family proteins, bind YP(X)nL (n = 1 to 3) motifs in their target proteins through their V domains. In Alix, the Phe residue, which is located in the hydrophobic groove of the V domain, is critical for binding to the YP(X)nL motif. Although the overall sequences are not highly conserved between mammalian and yeast V domains, we show that the conserved Phe residue in the yeast Bro1 V domain is important for binding to its YP(X)nL-containing target protein, Rfu1. Furthermore, we show that Rim20 binds to its target protein Rim101 through the interaction between the V domain of Rim20 and the YPIKL motif of Rim101. The mutation of either the critical Phe residue in the Rim20 V domain or the YPIKL motif of Rim101 affected the Rim20-mediated processing of Rim101. These results suggest that the interactions between V domains and YP(X)nL motif-containing proteins are conserved from yeast to mammalian cells. Moreover, the specificities of each V domain to their target protein suggest that unidentified elements determine the binding specificity.
25

Li, Jinfeng, Weifeng Liu, Yicong Zhou, Jun Yu, Dapeng Tao, and Changsheng Xu. "Domain-invariant Graph for Adaptive Semi-supervised Domain Adaptation." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 3 (August 31, 2022): 1–18. http://dx.doi.org/10.1145/3487194.

Abstract:
Domain adaptation aims to generalize a model from a source domain to tackle tasks in a related but different target domain. Traditional domain adaptation algorithms assume that enough labeled data, which are treated as prior knowledge, are available in the source domain. However, these algorithms become infeasible when only a few labeled samples exist in the source domain, and their performance decreases significantly. To address this challenge, we propose a Domain-invariant Graph Learning (DGL) approach for domain adaptation with only a few labeled source samples. Firstly, DGL introduces the Nyström method to construct a plastic graph that shares similar geometric properties with the target domain. Then, DGL flexibly employs the Nyström approximation error to measure the divergence between the plastic graph and the source graph, formalizing the distribution mismatch from a geometric perspective. By minimizing the approximation error, DGL learns a domain-invariant geometric graph to bridge the source and target domains. Finally, we integrate the learned domain-invariant graph with semi-supervised learning and further propose an adaptive semi-supervised model to handle cross-domain problems. The results of extensive experiments on popular datasets verify the superiority of DGL, especially when only a few labeled source samples are available.
26

Yang, Baoyao, and Pong C. Yuen. "Cross-Domain Visual Representations via Unsupervised Graph Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5613–20. http://dx.doi.org/10.1609/aaai.v33i01.33015613.

Abstract:
In unsupervised domain adaptation, the distributions of visual representations are mismatched across domains, which leads to a performance drop of a source model in the target domain. Therefore, distribution alignment methods have been proposed to explore cross-domain visual representations. However, most alignment methods have not considered the difference in distribution structures across domains, and the adaptation suffers from insufficiently aligned cross-domain representations. To avoid misclassification/misidentification due to the difference in distribution structures, this paper proposes a novel unsupervised graph alignment method that aligns both data representations and distribution structures across the source and target domains. An adversarial network is developed for unsupervised graph alignment, which maps both source and target data to a feature space where data are distributed with unified structure criteria. Experimental results show that the graph-aligned visual representations achieve good performance on both cross-dataset recognition and cross-modal re-identification.
27

Vagushchenko, L. L., and A. J. Kozachenko. "METHOD OF DISPLAYING THE RESULTS OF TRIAL PLAN FOR COLLISION AVOIDANCE." Shipping & Navigation 32, no. 2 (December 12, 2021): 18–25. http://dx.doi.org/10.31653/2306-5761.32.2021.18-25.

Abstract:
A method of displaying the results of a trial plan for collision avoidance is proposed. It is considered that, in general form, this plan can be represented by segments of own-ship movement with a constant velocity vector and sections where this vector changes. The basic requirements for displaying trial plan results are formulated. It is accepted that the danger domains used in solving collision-avoidance tasks are formed at the target and are convex shapes symmetrical about the target course line. These domains can be smooth closed curves or closed broken lines (polygons). A program for simulating the execution of an anti-collision plan is proposed to obtain data on this process. It is noted that the risk of passing the target is greatest at the moment when the shortest distance between its domain and own ship is small. The vessels' positions, the special point, and the point of the domain closest to own ship at this moment are chosen to be displayed as trial data. Such data along the own-ship path to avoid collision are called informational marks. They represent areas demanding attention and allow a reasoned conclusion about the acceptability of the anti-collision plan. Algorithms for calculating the elements of informational marks, when using elliptical and polygonal danger domains for targets, are determined. To test the procedure of simulating the implementation of the evading plan and the proposed method of displaying information, a special program was compiled in the Borland Delphi language. This program was used to test plans to avoid collision in many collision situations, applying various forms of target danger domains. The display of testing results for two segments of the evading plan in a situation with six targets, using circular danger domains with a center shifted towards the bow from the center of the target mass, is presented. Target and own ship dimensions are included in the size of each domain.
28

Kang, Byung Ok, Hyeong Bae Jeon, and Jeon Gue Park. "Speech Recognition for Task Domains with Sparse Matched Training Data." Applied Sciences 10, no. 18 (September 4, 2020): 6155. http://dx.doi.org/10.3390/app10186155.

Abstract:
We propose two approaches to handle speech recognition for task domains with sparse matched training data. One is an active learning method that selects training data for the target domain from another general domain that already has a significant amount of labeled speech data. This method uses attribute-disentangled latent variables. For the active learning process, we designed an integrated system consisting of a variational autoencoder with an encoder that infers latent variables with disentangled attributes from the input speech, and a classifier that selects training data with attributes matching the target domain. The other method combines data augmentation methods for generating matched target domain speech data and transfer learning methods based on teacher/student learning. To evaluate the proposed method, we experimented with various task domains with sparse matched training data. The experimental results show that the proposed method has qualitative characteristics that are suitable for the desired purpose, it outperforms random selection, and is comparable to using an equal amount of additional target domain data.
29

Shen, Junyi, Hankz Hankui Zhuo, Jin Xu, Bin Zhong, and Sinno Pan. "Transfer Value Iteration Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5676–83. http://dx.doi.org/10.1609/aaai.v34i04.6022.

Abstract:
Value iteration networks (VINs) have been demonstrated to have good generalization ability for reinforcement learning tasks across similar domains. However, based on our experiments, a policy learned by VINs still fails to generalize well to a domain whose action space and feature space are not identical to those of the domain where it was trained. In this paper, we propose a transfer learning approach on top of VINs, termed Transfer VINs (TVINs), such that a policy learned in a source domain can be generalized to a target domain with only limited training data, even if the source domain and the target domain have domain-specific actions and features. We empirically verify that our proposed TVINs outperform VINs when the source and the target domains have similar but not identical action and feature spaces. Furthermore, we show that the performance improvement is consistent across different environments, maze sizes, and dataset sizes, as well as different values of hyperparameters such as the number of iterations and the kernel size.
30

Kuroki, Seiichi, Nontawat Charoenphakdee, Han Bao, Junya Honda, Issei Sato, and Masashi Sugiyama. "Unsupervised Domain Adaptation Based on Source-Guided Discrepancy." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4122–29. http://dx.doi.org/10.1609/aaai.v33i01.33014122.

Abstract:
Unsupervised domain adaptation is the problem setting where data generating distributions in the source and target domains are different and labels in the target domain are unavailable. An important question in unsupervised domain adaptation is how to measure the difference between the source and target domains. Existing discrepancy measures for unsupervised domain adaptation either require high computation costs or have no theoretical guarantee. To mitigate these problems, this paper proposes a novel discrepancy measure called source-guided discrepancy (S-disc), which exploits labels in the source domain unlike the existing ones. As a consequence, S-disc can be computed efficiently with a finite-sample convergence guarantee. In addition, it is shown that S-disc can provide a tighter generalization error bound than the one based on an existing discrepancy measure. Finally, experimental results demonstrate the advantages of S-disc over the existing discrepancy measures.
31

Wu, Hanrui, Qingyao Wu, and Michael K. Ng. "Knowledge Preserving and Distribution Alignment for Heterogeneous Domain Adaptation." ACM Transactions on Information Systems 40, no. 1 (January 31, 2022): 1–29. http://dx.doi.org/10.1145/3469856.

Abstract:
Domain adaptation aims at improving the performance of learning tasks in a target domain by leveraging the knowledge extracted from a source domain. To this end, one can perform knowledge transfer between these two domains. However, this problem becomes extremely challenging when the data of these two domains are characterized by different types of features, i.e., the feature spaces of the source and target domains are different, which is referred to as heterogeneous domain adaptation (HDA). To solve this problem, we propose a novel model called Knowledge Preserving and Distribution Alignment (KPDA), which learns an augmented target space by jointly minimizing information loss and maximizing domain distribution alignment. Specifically, we seek to discover a latent space, where the knowledge is preserved by exploiting the Laplacian graph terms and reconstruction regularizations. Moreover, we adopt the Maximum Mean Discrepancy to align the distributions of the source and target domains in the latent space. Mathematically, KPDA is formulated as a minimization problem with orthogonal constraints, which involves two projection variables. Then, we develop an algorithm based on the Gauss–Seidel iteration scheme and split the problem into two subproblems, which are solved by searching algorithms based on the Barzilai–Borwein (BB) stepsize. Promising results demonstrate the effectiveness of the proposed method.
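
The distribution-alignment term named in this abstract, Maximum Mean Discrepancy, has a standard empirical form that can be sketched directly; the single RBF kernel and its bandwidth below are assumptions (multi-kernel variants are common), and the sketch is not the KPDA optimization itself.

```python
# Biased empirical estimate of squared MMD with a single RBF kernel.
import torch

def rbf_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x_src, x_tgt, sigma=1.0):
    """Squared MMD between source and target feature batches (rows are samples)."""
    k_ss = rbf_kernel(x_src, x_src, sigma).mean()
    k_tt = rbf_kernel(x_tgt, x_tgt, sigma).mean()
    k_st = rbf_kernel(x_src, x_tgt, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st
```
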
32

Zou, Han, Yuxun Zhou, Jianfei Yang, Huihan Liu, Hari Prasanna Das, and Costas J. Spanos. "Consensus Adversarial Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5997–6004. http://dx.doi.org/10.1609/aaai.v33i01.33015997.

Abstract:
We propose a novel domain adaptation framework, namely Consensus Adversarial Domain Adaptation (CADA), that gives freedom to both the target encoder and the source encoder to embed data from both domains into a common domain-invariant feature space until they achieve consensus during adversarial learning. In this manner, the domain discrepancy can be further minimized in the embedded space, yielding more generalizable representations. The framework is also extended to establish a new few-shot domain adaptation scheme (F-CADA), that remarkably enhances the ADA performance by efficiently propagating a few labeled data once available in the target domain. Extensive experiments are conducted on the task of digit recognition across multiple benchmark datasets and a real-world problem involving WiFi-enabled device-free gesture recognition under spatial dynamics. The results show the compelling performance of CADA versus the state-of-the-art unsupervised domain adaptation (UDA) and supervised domain adaptation (SDA) methods. Numerical experiments also demonstrate that F-CADA can significantly improve the adaptation performance even with sparsely labeled data in the target domain.
33

Li-ping, Yu, Tang Huan-ling, and An Zhi-yong. "Domain Adaptation for Pedestrian Detection Based on Prediction Consistency." Scientific World Journal 2014 (2014): 1–7. http://dx.doi.org/10.1155/2014/280382.

Abstract:
Pedestrian detection is an active area of research in computer vision. It remains quite a challenging problem in many applications, where many factors cause a mismatch between the source dataset used to train the pedestrian detector and the samples in the target scene. In this paper, we propose a novel domain adaptation model for merging plentiful source domain samples with scarce target domain samples to create a scene-specific pedestrian detector that performs as well as if rich target domain samples were present. Our approach combines a boosting-based learning algorithm with an entropy-based transferability measure, which is derived from the prediction consistency with the source classifications, to selectively choose the samples in the source domains that show positive transferability to the target domain. Experimental results show that our approach can improve the detection rate, especially when labeled data in the target scene are insufficient.
34

Lin, Chuang, Sicheng Zhao, Lei Meng, and Tat-Seng Chua. "Multi-Source Domain Adaptation for Visual Sentiment Classification." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2661–68. http://dx.doi.org/10.1609/aaai.v34i03.5651.

Abstract:
Existing domain adaptation methods on visual sentiment classification typically are investigated under the single-source scenario, where the knowledge learned from a source domain of sufficient labeled data is transferred to the target domain of loosely labeled or unlabeled data. However, in practice, data from a single source domain usually have a limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, it learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution. This is achieved via cycle consistent adversarial learning in an end-to-end manner. Extensive experiments conducted on four benchmark datasets demonstrate that MSGAN significantly outperforms the state-of-the-art MDA approaches for visual sentiment classification.
35

Hu, Cheng-Jun, Aneesa Sataur, Liyi Wang, Hongqing Chen, and M. Celeste Simon. "The N-Terminal Transactivation Domain Confers Target Gene Specificity of Hypoxia-inducible Factors HIF-1α and HIF-2α." Molecular Biology of the Cell 18, no. 11 (November 2007): 4528–42. http://dx.doi.org/10.1091/mbc.e06-05-0419.

Abstract:
The basic helix-loop-helix-Per-ARNT-Sim–proteins hypoxia-inducible factor (HIF)-1α and HIF-2α are the principal regulators of the hypoxic transcriptional response. Although highly related, they can activate distinct target genes. In this study, the protein domain and molecular mechanism important for HIF target gene specificity are determined. We demonstrate that although HIF-2α is unable to activate multiple endogenous HIF-1α–specific target genes (e.g., glycolytic enzymes), HIF-2α still binds to their promoters in vivo and activates reporter genes derived from such targets. In addition, comparative analysis of the N-terminal DNA binding and dimerization domains of HIF-1α and HIF-2α does not reveal any significant differences between the two proteins. Importantly, replacement of the N-terminal transactivation domain (N-TAD) (but not the DNA binding domain, dimerization domain, or C-terminal transactivation domain [C-TAD]) of HIF-2α with the analogous region of HIF-1α is sufficient to convert HIF-2α into a protein with HIF-1α functional specificity. Nevertheless, both the N-TAD and C-TAD are important for optimal HIF transcriptional activity. Additional experiments indicate that the ETS transcription factor ELK is required for HIF-2α to activate specific target genes such as Cited-2, EPO, and PAI-1. These results demonstrate that the HIF-α TADs, particularly the N-TADs, confer HIF target gene specificity, by interacting with additional transcriptional cofactors.
36

Ma, Chenhui, Dexuan Sha, and Xiaodong Mu. "Unsupervised Adversarial Domain Adaptation with Error-Correcting Boundaries and Feature Adaption Metric for Remote-Sensing Scene Classification." Remote Sensing 13, no. 7 (March 26, 2021): 1270. http://dx.doi.org/10.3390/rs13071270.

Abstract:
Unsupervised domain adaptation (UDA) based on adversarial learning for remote-sensing scene classification has become a research hotspot because of the need to alleviate the lack of annotated training data. Existing methods train classifiers according to their ability to distinguish features from the source or target domains. However, they suffer from the following two limitations: (1) the classifier is trained on source samples and forms a source-domain-specific boundary, which ignores features from the target domain, and (2) semantically meaningful features are merely built from the adversary of a generator and a discriminator, which ignores the selection of domain-invariant features. These issues limit the distribution matching performance between the source and target domains, since each domain has its own distinctive characteristics. To resolve these issues, we propose a framework with error-correcting boundaries and a feature adaptation metric. Specifically, we design an error-correcting boundaries mechanism to build target-domain-specific classifier boundaries via multiple classifiers and an error-correcting discrepancy loss, which significantly distinguishes target samples and reduces their uncertainty. Then, we employ a feature adaptation metric structure to enhance the adaptation of ambiguous features via shallow layers of the backbone convolutional neural network and an alignment loss, which automatically learns domain-invariant features. Experimental results on four public datasets show that the proposed method outperforms other UDA methods for remote-sensing scene classification.
37

Wofford, Haley A., Josh Myers-Dean, Brandon A. Vogel, Kevin Alexander Estrada Alamo, Frederick A. Longshore-Neate, Filip Jagodzinski, and Jeanine F. Amacher. "Domain Analysis and Motif Matcher (DAMM): A Program to Predict Selectivity Determinants in Monosiga brevicollis PDZ Domains Using Human PDZ Data." Molecules 26, no. 19 (October 5, 2021): 6034. http://dx.doi.org/10.3390/molecules26196034.

Abstract:
Choanoflagellates are single-celled eukaryotes with complex signaling pathways. They are considered the closest non-metazoan ancestors to mammals and other metazoans and form multicellular-like states called rosettes. The choanoflagellate Monosiga brevicollis contains over 150 PDZ domains, an important peptide-binding domain in all three domains of life (Archaea, Bacteria, and Eukarya). Therefore, an understanding of PDZ domain signaling pathways in choanoflagellates may provide insight into the origins of multicellularity. PDZ domains recognize the C-terminus of target proteins and regulate signaling and trafficking pathways, as well as cellular adhesion. Here, we developed a computational software suite, Domain Analysis and Motif Matcher (DAMM), that analyzes peptide-binding cleft sequence identity as compared with human PDZ domains and that can be used in combination with literature searches of known human PDZ-interacting sequences to predict target specificity in choanoflagellate PDZ domains. We used this program, protein biochemistry, fluorescence polarization, and structural analyses to characterize the specificity of A9UPE9_MONBE, a M. brevicollis PDZ domain-containing protein with no homology to any metazoan protein, finding that its PDZ domain is most similar to those of the DLG family. We then identified two endogenous sequences that bind A9UPE9 PDZ with <100 μM affinity, a value commonly considered the threshold for cellular PDZ–peptide interactions. Taken together, this approach can be used to predict cellular targets of previously uncharacterized PDZ domains in choanoflagellates and other organisms. Our data contribute to investigations into choanoflagellate signaling and how it informs metazoan evolution.
38

Karman, Zoltan, Zsuzsanna Rethi-Nagy, Edit Abraham, Lilla Fabri-Ordogh, Akos Csonka, Peter Vilmos, Janusz Debski, Michal Dadlez, David M. Glover, and Zoltan Lipinszki. "Novel perspectives of target-binding by the evolutionarily conserved PP4 phosphatase." Open Biology 10, no. 12 (December 2020): 200343. http://dx.doi.org/10.1098/rsob.200343.

Full text
Abstract:
Protein phosphatase 4 (PP4) is an evolutionarily conserved and essential Ser/Thr phosphatase that regulates cell division, development and DNA repair in eukaryotes. The major form of PP4, present from yeast to human, is the PP4c-R2-R3 heterotrimeric complex. The R3 subunit is responsible for substrate-recognition via its EVH1 domain. In typical EVH1 domains, conserved phenylalanine, tyrosine and tryptophan residues form the specific recognition site for their target's proline-rich sequences. Here, we identify novel binding partners of the EVH1 domain of the Drosophila R3 subunit, Falafel, and demonstrate that instead of binding to proline-rich sequences this EVH1 variant specifically recognizes atypical ligands, namely the FxxP and MxPP short linear consensus motifs. This interaction is dependent on an exclusively conserved leucine that replaces the phenylalanine invariant of all canonical EVH1 domains. We propose that the EVH1 domain of PP4 represents a new class of the EVH1 family that can accommodate low proline content sequences, such as the FxxP motif. Finally, our data implicate the conserved Smk-1 domain of Falafel in target-binding. These findings greatly enhance our understanding of the substrate-recognition mechanisms and function of PP4.
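As a toy illustration of the short linear motifs named in this abstract, the following Python snippet scans a protein sequence for FxxP and MxPP matches with regular expressions; the example sequence is made up and the snippet is not part of the authors' analysis.

```python
# Toy motif scan for the FxxP and MxPP short linear consensus motifs.
import re

MOTIFS = {
    "FxxP": re.compile(r"F..P"),
    "MxPP": re.compile(r"M.PP"),
}

def find_motifs(sequence):
    hits = []
    for name, pattern in MOTIFS.items():
        for m in pattern.finditer(sequence):
            hits.append((name, m.start() + 1, m.group()))  # 1-based position
    return hits

print(find_motifs("MASFQKPLLDMAPPTRG"))  # invented example sequence
```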
APA, Harvard, Vancouver, ISO, and other styles
39

Chen, Minghao, Shuai Zhao, Haifeng Liu, and Deng Cai. "Adversarial-Learned Loss for Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 3521–28. http://dx.doi.org/10.1609/aaai.v34i04.5757.

Full text
Abstract:
Recently, remarkable progress has been made in learning transferable representations across domains. Previous works in domain adaptation are mainly based on two techniques: domain-adversarial learning and self-training. However, domain-adversarial learning only aligns feature distributions between domains but does not consider whether the target features are discriminative. On the other hand, self-training utilizes the model predictions to enhance the discrimination of target features, but it is unable to explicitly align domain distributions. In order to combine the strengths of these two methods, we propose a novel method called Adversarial-Learned Loss for Domain Adaptation (ALDA). We first analyze the pseudo-label method, a typical self-training method. However, there is a gap between pseudo-labels and the ground truth, which can cause incorrect training. Thus we introduce the confusion matrix, which is learned in an adversarial manner in ALDA, to reduce this gap and align the feature distributions. Finally, a new loss function is auto-constructed from the learned confusion matrix, which serves as the loss for unlabeled target samples. Our ALDA outperforms state-of-the-art approaches on four standard domain adaptation datasets. Our code is available at https://github.com/ZJULearning/ALDA.
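The following numpy sketch conveys the general idea behind a loss auto-constructed from a confusion matrix: noisy pseudo-labels on target samples are replaced by "corrected" label distributions. In ALDA the matrix is produced adversarially by a discriminator; here it is simply a fixed array, so the sketch is illustrative only (the authors' actual implementation is in the linked repository).

```python
# Illustrative sketch: cross-entropy against pseudo-labels "corrected" by a
# confusion matrix. The matrix here is fixed, not adversarially learned.
import numpy as np

def corrected_target_loss(target_probs, pseudo_labels, confusion):
    """target_probs : (N, C) classifier outputs on target samples
    pseudo_labels  : (N,)  argmax pseudo-labels
    confusion      : (C, C) row-stochastic; confusion[i, j] ~ P(true=j | pseudo=i) (assumed)
    """
    corrected = confusion[pseudo_labels]             # (N, C) corrected label distributions
    return -(corrected * np.log(target_probs + 1e-8)).sum(axis=1).mean()

probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
pseudo = probs.argmax(axis=1)
conf = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
print(corrected_target_loss(probs, pseudo, conf))
```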
APA, Harvard, Vancouver, ISO, and other styles
40

Wiliński, Jarosław. "War Metaphors in Business: A Metaphostructional Analysis." Anglica. An International Journal of English Studies, no. 26/2 (September 11, 2017): 61–78. http://dx.doi.org/10.7311/0860-5734.26.2.05.

Full text
Abstract:
This paper adopts the notion of metaphostruction (Wiliński 2015), the conceptual theory of metaphor (Kövecses 2002), and a corpus-based method geared specifically toward investigating the interaction between target domains and the source-domain lexemes that occur in them. The method, referred to as metaphostructional analysis (Wiliński 2015), is used to determine the degree of association between the target domain of business and source-domain lexemes derived from military terminology. The results of the metaphostructional analysis reveal that there are indeed war terms that demonstrate strong or loose associations with the target domain of business, and that these instantiate different metaphorical mappings.
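To illustrate the kind of association score such a corpus-based analysis relies on, the sketch below computes a log-likelihood ratio (G2) over a 2x2 table counting a war lexeme inside versus outside business-domain texts; the counts are invented and the paper's actual association measure may differ.

```python
# Illustrative association score for a lexeme vs. a target domain; counts are invented.
import numpy as np

def log_likelihood_ratio(table):
    """G2 association score for a 2x2 contingency table of observed counts."""
    observed = np.asarray(table, dtype=float)
    total = observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / total
    mask = observed > 0
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Rows: lexeme occurs / does not occur; columns: business texts / other texts.
table = [[120, 380],       # e.g. hypothetical counts for "campaign"
         [880, 98620]]
print(round(log_likelihood_ratio(table), 2))
```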
APA, Harvard, Vancouver, ISO, and other styles
41

Wang, Ke, Jiayong Liu, and Jing-Yan Wang. "Learning Domain-Independent Deep Representations by Mutual Information Minimization." Computational Intelligence and Neuroscience 2019 (June 16, 2019): 1–14. http://dx.doi.org/10.1155/2019/9414539.

Full text
Abstract:
Domain transfer learning aims to learn common data representations from a source domain and a target domain so that the source domain data can help the classification of the target domain. Conventional transfer representation learning imposes that the distributions of source and target domain representations be similar, which heavily relies on the characterization of the domain distributions and the distribution-matching criteria. In this paper, we propose a novel framework for domain transfer representation learning. Our motivation is to make the learned representations of data points independent of the domains they belong to. In other words, from an optimal cross-domain representation of a data point, it is difficult to tell which domain it is from. In this way, the learned representations can be generalized to different domains. To measure the dependency between the representations and the domains the data points belong to, we propose to use the mutual information between the representations and the domain-belonging indicators. By minimizing this mutual information, we learn representations that are independent of the domains. We build a classwise deep convolutional network model as the representation model and maximize the margin of each data point with respect to its class, defined over the intraclass and interclass neighborhoods. To learn the parameters of the model, we construct a unified minimization problem in which the margins are maximized while the representation-domain mutual information is minimized. In this way, we learn representations that are not only discriminative but also independent of the domains. An iterative algorithm based on the Adam optimization method is proposed to solve the minimization problem, learning the classwise deep model parameters and the cross-domain representations simultaneously. Extensive experiments on benchmark datasets show the effectiveness of the method and its advantage over existing domain transfer learning methods.
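The quantity being minimized can be made concrete with a small numpy sketch that estimates the empirical mutual information between a discretized representation code and a binary domain indicator; the paper minimizes this dependency inside a deep model, whereas the sketch below only measures it on toy data.

```python
# Empirical mutual information between discrete codes and domain labels (toy data).
import numpy as np

def empirical_mi(codes, domains):
    """Mutual information (in nats) between discrete codes and domain labels."""
    codes, domains = np.asarray(codes), np.asarray(domains)
    mi = 0.0
    for c in np.unique(codes):
        for d in np.unique(domains):
            p_cd = np.mean((codes == c) & (domains == d))
            if p_cd > 0:
                p_c, p_d = np.mean(codes == c), np.mean(domains == d)
                mi += p_cd * np.log(p_cd / (p_c * p_d))
    return mi

# Domain-dependent codes give high MI; shuffled codes give MI near zero.
domains = np.array([0] * 50 + [1] * 50)
dependent = domains.copy()
shuffled = np.random.default_rng(1).permutation(dependent)
print(empirical_mi(dependent, domains), empirical_mi(shuffled, domains))
```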
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Yuh-Shyan, Yu-Chi Chang, and Chun-Yu Li. "A Semi-Supervised Transfer Learning with Dynamic Associate Domain Adaptation for Human Activity Recognition Using WiFi Signals." Sensors 21, no. 24 (December 19, 2021): 8475. http://dx.doi.org/10.3390/s21248475.

Full text
Abstract:
Human activity recognition without wearable equipment plays a vital role in smart home applications, freeing humans from the shackles of wearable devices. In this paper, using the channel state information (CSI) of the WiFi signal, semi-supervised transfer learning with dynamic associate domain adaptation is proposed for human activity recognition. To improve CSI quality and denoise the CSI, we carry out missing-packet filling, burst-noise removal, background estimation, feature extraction, feature enhancement, and data augmentation in the data pre-processing stage. This paper considers the problem of environment-independent human activity recognition, also known as domain adaptation. The pre-trained model is trained on the source domain using a complete labeled dataset of CSI for all human activity patterns. The pre-trained model is then transferred to the target environment through the semi-supervised transfer learning stage. Therefore, when humans move to different target domains, only a partially labeled dataset of the target domain is required for fine-tuning. In this paper, we propose a dynamic associate domain adaptation scheme called DADA. By modifying the existing associate domain adaptation algorithm, the target domain can provide a dynamic ratio of labeled to unlabeled data, whereas the existing associate domain adaptation algorithm only allows target domains with unlabeled data. The advantage of DADA is that it provides a dynamic strategy to eliminate the differing effects of different environments. In addition, we design an attention-based DenseNet model, or AD, as our training network, which modifies an existing DenseNet by adding an attention mechanism. The proposed solution is abbreviated as DADA-AD throughout the paper. The experimental results show that, for domain adaptation across different domains, the accuracy of human activity recognition with the DADA-AD scheme is 97.4%. They also show that DADA-AD has advantages over existing semi-supervised learning schemes.
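A minimal, hypothetical sketch of the "dynamic ratio" idea: the target-domain objective mixes a supervised term on the labeled fraction with an unsupervised adaptation term on the rest, weighted by how much target data is labeled. The weighting rule below is an assumption for illustration, not the paper's formula.

```python
# Hypothetical weighting of supervised vs. unsupervised target-domain losses
# by the labeled fraction of target data; not the DADA formulation itself.
def dada_target_loss(sup_loss, unsup_loss, n_labeled, n_total):
    labeled_ratio = n_labeled / max(n_total, 1)
    return labeled_ratio * sup_loss + (1.0 - labeled_ratio) * unsup_loss

# e.g. 20% of the target CSI samples carry activity labels
print(dada_target_loss(sup_loss=0.42, unsup_loss=1.10, n_labeled=200, n_total=1000))
```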
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Zhe, Bo Yan, Chunhua Wu, Bin Wu, Xiujuan Wang, and Kangfeng Zheng. "Graph Adaptation Network with Domain-Specific Word Alignment for Cross-Domain Relation Extraction." Sensors 20, no. 24 (December 15, 2020): 7180. http://dx.doi.org/10.3390/s20247180.

Full text
Abstract:
Cross-domain relation extraction has become an essential approach when the target domain lacks labeled data. Most existing works adapt relation extraction models from the source domain to the target domain by aligning sequential features, but fail to transfer non-local and non-sequential features, such as word co-occurrence, which are also critical for cross-domain relation extraction. To address this issue, in this paper we propose a novel tripartite graph architecture to adapt non-local features when there is no labeled data in the target domain. The graph uses domain words as nodes to model the co-occurrence relation between domain-specific words and domain-independent words. Through graph convolutions on the tripartite graph, the information of domain-specific words is propagated so that the word representations can be fine-tuned to align domain-specific features. In addition, unlike the traditional graph structure, the edge weights combine a fixed weight and a dynamic weight to capture global non-local features and avoid introducing noise into the word representations. Experiments on three domains of the ACE2005 dataset show that our method outperforms state-of-the-art models by a large margin.
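As a loose sketch of a graph-convolution step over a word graph whose edge weights mix a fixed co-occurrence weight with a dynamic, representation-based weight, the following numpy code illustrates the idea; the mixing rule, normalization, and similarity choice are assumptions rather than the authors' exact formulation.

```python
# One graph-convolution step with mixed fixed/dynamic edge weights (assumed form).
import numpy as np

def mixed_weight_gcn_step(X, A_fixed, W, alpha=0.5):
    """X: (N, d) node features, A_fixed: (N, N) co-occurrence weights,
    W: (d, d') layer weights, alpha: balance between fixed and dynamic parts."""
    sim = X @ X.T                                    # dynamic, feature-based weights
    sim = sim / (np.abs(sim).max() + 1e-8)
    A = alpha * A_fixed + (1 - alpha) * sim          # mix fixed and dynamic weights
    D_inv = np.diag(1.0 / (A.sum(axis=1) + 1e-8))    # row-normalise the adjacency
    return np.maximum(D_inv @ A @ X @ W, 0.0)        # ReLU activation

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))          # 6 word nodes with 8-dim embeddings
A_fixed = rng.random((6, 6))         # toy co-occurrence weights
print(mixed_weight_gcn_step(X, A_fixed, rng.normal(size=(8, 4))).shape)
```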
APA, Harvard, Vancouver, ISO, and other styles
44

Yu, Xu, Jun-yu Lin, Feng Jiang, Jun-wei Du, and Ji-zhong Han. "A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression." Computational Intelligence and Neuroscience 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/1425365.

Full text
Abstract:
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in the different domains and use these features to represent the different auxiliary domains. Thus, the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it effectively avoids the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, compared with many state-of-the-art single-domain and cross-domain CF methods.
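Locally weighted linear regression itself is standard, so a short numpy sketch can show the regression model that FCLWLR plugs in; the feature-construction step that would produce the inputs is omitted and the Gaussian bandwidth is arbitrary.

```python
# Standard locally weighted linear regression (LWLR) with a Gaussian kernel.
import numpy as np

def lwlr_predict(x_query, X, y, tau=1.0):
    """Predict y at x_query by fitting a linear model weighted toward nearby points."""
    Xb = np.hstack([np.ones((len(X), 1)), X])            # add bias column
    xq = np.hstack([1.0, x_query])
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ Xb.T @ W @ y  # weighted least squares
    return xq @ theta

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(50, 1))                       # toy constructed features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)           # toy ratings signal
print(lwlr_predict(np.array([2.5]), X, y))
```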
APA, Harvard, Vancouver, ISO, and other styles
45

Li, Li, Hui Li, Tian Li, and Feng Gao. "Infrared small target detection in compressive domain." Electronics Letters 50, no. 7 (March 2014): 510–12. http://dx.doi.org/10.1049/el.2014.0180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Tress, Michael, Chin-Hsien Tai, Guoli Wang, Iakes Ezkurdia, Gonzalo López, Alfonso Valencia, Byungkook Lee, and Roland L. Dunbrack. "Domain definition and target classification for CASP6." Proteins: Structure, Function, and Bioinformatics 61, S7 (2005): 8–18. http://dx.doi.org/10.1002/prot.20717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Clarke, Neil D., Iakes Ezkurdia, Jürgen Kopp, Randy J. Read, Torsten Schwede, and Michael Tress. "Domain definition and target classification for CASP7." Proteins: Structure, Function, and Bioinformatics 69, S8 (2007): 10–18. http://dx.doi.org/10.1002/prot.21686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tress, Michael L., Iakes Ezkurdia, and Jane S. Richardson. "Target domain definition and classification in CASP8." Proteins: Structure, Function, and Bioinformatics 77, S9 (2009): 10–17. http://dx.doi.org/10.1002/prot.22497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Yongmei, Sun, and Huo Hua. "Research on Domain-independent Opinion Target Extraction." International Journal of Hybrid Information Technology 8, no. 1 (January 31, 2015): 237–48. http://dx.doi.org/10.14257/ijhit.2015.8.1.21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Shi, Yu, Lan Du, and Yuchen Guo. "Unsupervised Domain Adaptation for SAR Target Detection." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14 (2021): 6372–85. http://dx.doi.org/10.1109/jstars.2021.3089238.

Full text
APA, Harvard, Vancouver, ISO, and other styles