Journal articles on the topic 'Domain'

To see the other types of publications on this topic, follow the link: Domain.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Domain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Sirait, Timoteus Natanael, and Jimmy BP Simangungsong. "ANALISIS YURIDIS PELAKSANAAN TUGAS POKOK PENGELOLA DOMAIN INTERNET INDONESIA." NOMMENSEN JOURNAL OF LEGAL OPINION 1, no. 01 (June 30, 2020): 52–62. http://dx.doi.org/10.51622/njlo.v1i01.38.

Full text
Abstract:
PANDI (the Indonesian Internet Domain Name Registry) is granted special authority to manage Indonesian domain names on a legal basis, and its authority is stipulated in Law No. 23 of 2013 concerning Domain Name Management. PANDI's shortcomings have been evident since the domain dispute between bmw.co.id and bmw.id: there is no synchronization between the .co.id and .id domains under PANDI's authority. This research uses a descriptive qualitative method, i.e., a research method that provides an explanation through analysis. In practice this method is subjective, in that the research process is more visible and tends to focus on the theoretical foundation, with the aim of explaining events happening now and in the past. A domain is an address on the internet containing electronic data that can be accessed from anywhere with an internet connection via a website; domains are needed to accelerate the development of information and communication as well as material and non-material transactions. Several matters concerning the supervision of Indonesian domains must be addressed: there is a need for firmer supervision by PANDI in synchronizing the .co.id and .id domains; the Communication and Information Regulation No. 23 of 2013 concerning the management of Indonesian domain names needs to be harmonized with Law No. 20 of 2016 on Trademarks and Law No. 19 of 2016 amending Law No. 11 of 2008 on Information and Electronic Transactions; Law No. 23 of 2013 concerning the Management of Indonesian Domain Names needs revision; and new legal policies are needed that can provide a deterrent effect against crime in the cyber domain field. To avoid the many problems that will arise in the future regarding Indonesian domains, the points above need to be executed quickly.
APA, Harvard, Vancouver, ISO, and other styles
2

Sampathirao, Suneetha, et al. "Cross-Domain Aspect Extraction using Adversarial Domain Adaptation." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 31, 2023): 672–82. http://dx.doi.org/10.17762/ijritcc.v11i11s.9658.

Full text
Abstract:
Aspect extraction, the task of identifying and categorizing aspects or features in text, plays a crucial role in sentiment analysis. However, aspect extraction models often struggle to generalize well across different domains due to domain-specific language patterns and variations. In order to tackle this challenge, we propose an approach called "Cross-Domain Aspect Extraction using Adversarial-Based Domain Adaptation". Our model combines the power of pre-trained language models, such as BERT, with adversarial training techniques to enable effective aspect extraction in diverse domains. The model learns to extract domain-invariant aspects by incorporating a domain discriminator, making it adaptable to different domains. We evaluate our model on datasets from multiple domains and demonstrate its effectiveness in achieving cross-domain aspect extraction. The results of our experiments reveal that our model outperforms baseline techniques, resulting in significant gains in aspect extraction across various domains. Our approach opens new possibilities for domain adaptation in aspect extraction tasks, providing valuable insights for sentiment analysis in diverse domains.
APA, Harvard, Vancouver, ISO, and other styles
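Adversarial domain adaptation of the kind summarized above is commonly implemented with a gradient reversal layer between the shared encoder and the domain discriminator. A minimal sketch of that mechanism follows; the class and variable names are hypothetical and this is not the authors' code:

```python
import numpy as np

class GradReverse:
    """Identity in the forward pass; flips (and scales) the gradient in
    the backward pass, so the shared encoder is trained to *fool* the
    domain discriminator, encouraging domain-invariant features."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x                      # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad       # reversed gradient reaches the encoder

grl = GradReverse(lam=0.5)
feats = np.array([1.0, 2.0])
out = grl.forward(feats)                        # identical to the input
enc_grad = grl.backward(np.array([1.0, 1.0]))   # sign-flipped, scaled by lam
```

In a full model this layer would sit between the feature extractor (e.g. BERT) and the domain discriminator, so that minimizing the discriminator loss simultaneously maximizes domain confusion for the encoder.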
3

Xu, Minghao, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. "Adversarial Domain Adaptation with Domain Mixup." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6502–9. http://dx.doi.org/10.1609/aaai.v34i04.6123.

Full text
Abstract:
Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake under the guidance of a hard label, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is jointly conducted at the pixel and feature levels to improve the robustness of models. Extensive experiments prove that the proposed approach can achieve superior performance on tasks with various degrees of domain shift and data complexity.
APA, Harvard, Vancouver, ISO, and other styles
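The pixel-level mixup with a soft domain score described in the abstract above can be illustrated with a toy sketch; the function name, shapes, and Beta prior are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=2.0, rng=None):
    """Convexly mix a source and a target image; the mixing ratio `lam`
    doubles as the soft domain label (1.0 = pure source, 0.0 = pure
    target) that a soft-score discriminator can regress against."""
    rng = rng or np.random.default_rng(0)
    lam = float(rng.beta(alpha, alpha))       # mixing ratio from a Beta prior
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam

x_src = np.ones((3, 8, 8))     # toy source image (C, H, W)
x_tgt = np.zeros((3, 8, 8))    # toy target image
x_mix, lam = domain_mixup(x_src, x_tgt)
# with these toy inputs every pixel of x_mix equals lam
```

Replacing the hard real/fake label with `lam` is what lets the discriminator score intermediate samples continuously.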
4

Bhaskara, Ramachandra M., Alexandre G. de Brevern, and Narayanaswamy Srinivasan. "Understanding the role of domain–domain linkers in the spatial orientation of domains in multi-domain proteins." Journal of Biomolecular Structure and Dynamics 31, no. 12 (December 2013): 1467–80. http://dx.doi.org/10.1080/07391102.2012.743438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cao, Meng, and Songcan Chen. "Mixup-Induced Domain Extrapolation for Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11168–76. http://dx.doi.org/10.1609/aaai.v38i10.28994.

Full text
Abstract:
Domain generalization aims to learn a well-performing classifier on multiple source domains for unseen target domains under domain shift. Domain-invariant representation (DIR) is an intuitive approach and has attracted great attention. In practice, since the targets are variant and agnostic, a few sources alone are not sufficient to reflect the entire domain population, leading to a biased DIR. Derived from the PAC-Bayes framework, we provide a novel generalization bound involving the number of domains sampled from the environment (N) and the radius of the Wasserstein ball centred on the target (r), which have rarely been considered before. Herein, we obtain two natural and significant findings as N increases: 1) the gap between the source and target sampling environments can be gradually mitigated; 2) the target can be better approximated within the Wasserstein ball. These findings prompt us to collect adequate domains against domain shift. For convenience, we design a novel yet simple Extrapolation Domain strategy induced by the Mixup scheme, namely EDM. Through a reverse Mixup scheme we generate extrapolated domains which, combined with the interpolated domains, expand the interpolation space spanned by the sources, providing more abundant domains that increase sampling intersections and shorten r. Moreover, EDM is easy to implement and plug-and-play. In experiments, EDM has been plugged into several methods in both closed- and open-set settings, achieving up to a 5.73% improvement.
APA, Harvard, Vancouver, ISO, and other styles
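The "reverse Mixup" idea above can be illustrated by letting the mixing coefficient leave the unit interval, which places the synthetic domain outside the segment spanned by the sources. This is a sketch over made-up feature vectors, not the paper's implementation:

```python
import numpy as np

def mix_domains(feat_a, feat_b, lam):
    """Ordinary mixup for 0 <= lam <= 1; with lam > 1 (or lam < 0) the
    result lies beyond the two source domains, i.e. an extrapolated
    domain rather than an interpolated one."""
    return lam * feat_a + (1.0 - lam) * feat_b

a = np.array([0.0, 0.0])                # toy mean feature of domain A
b = np.array([1.0, 1.0])                # toy mean feature of domain B
interp = mix_domains(a, b, lam=0.5)     # inside the span of the sources
extrap = mix_domains(a, b, lam=1.5)     # extrapolated beyond domain A
```

Training on both `interp`- and `extrap`-style domains is what enlarges the effective sampling environment around the sources.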
6

Zhou, Hongyi, Bin Xue, and Yaoqi Zhou. "DDOMAIN: Dividing structures into domains using a normalized domain-domain interaction profile." Protein Science 16, no. 5 (May 2007): 947–55. http://dx.doi.org/10.1110/ps.062597307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Chengyang, Ke-Yue Zhang, Taiping Yao, Shice Liu, Shouhong Ding, Xin Tan, and Lizhuang Ma. "Domain-Hallucinated Updating for Multi-Domain Face Anti-spoofing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2193–201. http://dx.doi.org/10.1609/aaai.v38i3.27992.

Full text
Abstract:
Multi-Domain Face Anti-Spoofing (MD-FAS) is a practical setting that aims to update models on new domains using only novel data while ensuring that the knowledge acquired from previous domains is not forgotten. Prior methods utilize the responses from models to represent the previous domain knowledge, or map the different domains into separate feature spaces to prevent forgetting. However, due to domain gaps, the responses on new data are not as accurate as those on previous data. Also, without the supervision of previous data, the separate feature spaces might be destroyed by new domains during updating, leading to catastrophic forgetting. Motivated by the challenges posed by the lack of previous data, we approach this issue from a new standpoint: generating hallucinated previous data for updating the FAS model. To this end, we propose a novel Domain-Hallucinated Updating (DHU) framework to facilitate the hallucination of data. Specifically, a Domain Information Explorer learns representative domain information of the previous domains. Then, a Domain Information Hallucination module transfers the new domain data into pseudo-previous-domain data. Moreover, a Hallucinated Features Joint Learning module is proposed to asymmetrically align the new and pseudo-previous data for real samples at dual levels to learn more generalized features, improving the results on all domains. Our experimental results and visualizations demonstrate that the proposed method outperforms state-of-the-art competitors in terms of effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
8

López-Huertas, María J. "Domain Analysis for Interdisciplinary Knowledge Domains." KNOWLEDGE ORGANIZATION 42, no. 8 (2015): 570–80. http://dx.doi.org/10.5771/0943-7444-2015-8-570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anderson, D. D., J. Coykendall, L. Hill, and M. Zafrullah. "Monoid Domain Constructions of Antimatter Domains." Communications in Algebra 35, no. 10 (September 21, 2007): 3236–41. http://dx.doi.org/10.1080/00914030701410294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Matzen, Sylvia, and Stéphane Fusil. "Domains and domain walls in multiferroics." Comptes Rendus Physique 16, no. 2 (March 2015): 227–40. http://dx.doi.org/10.1016/j.crhy.2015.01.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Choi, Jongwon, Youngjoon Choi, Jihoon Kim, Jinyeop Chang, Ilhwan Kwon, Youngjune Gwon, and Seungjai Min. "Visual Domain Adaptation by Consensus-Based Transfer to Intermediate Domain." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 10655–62. http://dx.doi.org/10.1609/aaai.v34i07.6692.

Full text
Abstract:
We describe an unsupervised domain adaptation framework for images based on a transform to an abstract intermediate domain and ensemble classifiers seeking a consensus. The intermediate domain can be thought of as a latent domain to which both the source and target domains can be transferred easily. The proposed framework aligns both domains to the intermediate domain, which greatly improves the adaptation performance when the source and target domains are notably dissimilar. In addition, we propose an ensemble model trained by confusing multiple classifiers and letting them reach a consensus alternately to enhance the adaptation performance for ambiguous samples. To estimate the hidden intermediate domain and the unknown labels of the target domain simultaneously, we develop a training algorithm using a double-structured architecture. We validate the proposed framework in hard adaptation scenarios with real-world datasets, from simple synthetic domains to complex real-world domains. The proposed algorithm outperforms the previous state-of-the-art algorithms in various environments.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhou, Kaiyang, Yongxin Yang, Timothy Hospedales, and Tao Xiang. "Deep Domain-Adversarial Image Generation for Domain Generalisation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 13025–32. http://dx.doi.org/10.1609/aaai.v34i07.7003.

Full text
Abstract:
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution. To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains. In this paper, we propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG). Specifically, DDAIG consists of three components, namely a label classifier, a domain classifier and a domain transformation network (DoTNet). The goal for DoTNet is to map the source training data to unseen domains. This is achieved by having a learning objective formulated to ensure that the generated data can be correctly classified by the label classifier while fooling the domain classifier. By augmenting the source training data with the generated unseen domain data, we can make the label classifier more robust to unknown domain changes. Extensive experiments on four DG datasets demonstrate the effectiveness of our approach.
APA, Harvard, Vancouver, ISO, and other styles
13

Ju, Hyunjun, SeongKu Kang, Dongha Lee, Junyoung Hwang, Sanghwan Jang, and Hwanjo Yu. "Multi-Domain Recommendation to Attract Users via Domain Preference Modeling." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8582–90. http://dx.doi.org/10.1609/aaai.v38i8.28702.

Full text
Abstract:
Recently, web platforms have been operating various service domains simultaneously. Targeting a platform that operates multiple service domains, we introduce a new task, Multi-Domain Recommendation to Attract Users (MDRAU), which recommends items from multiple "unseen" domains with which each user has not yet interacted, by using knowledge from the user's "seen" domains. In this paper, we point out two challenges of the MDRAU task. First, there are numerous possible combinations of mappings from seen to unseen domains, because users have usually interacted with a different subset of service domains. Second, a user might have a different preference for each of the target unseen domains, which requires recommendations to reflect users' preferences for domains as well as items. To tackle these challenges, we propose the DRIP framework, which models users' preferences at two levels (i.e., domain and item) and learns various seen-unseen domain mappings in a unified way with masked domain modeling. Our extensive experiments demonstrate the effectiveness of DRIP on the MDRAU task and its ability to capture users' domain-level preferences.
APA, Harvard, Vancouver, ISO, and other styles
14

Tan, Zhi, and Zhao-Fei Teng. "Image Domain Generalization Method based on Solving Domain Discrepancy Phenomenon." 電腦學刊 (Journal of Computers) 33, no. 3 (June 2022): 171–85. http://dx.doi.org/10.53106/199115992022063303014.

Full text
Abstract:
In order to solve the problem that recognition performance degrades markedly when a model trained on a known data distribution is transferred to an unknown data distribution, a domain generalization method based on an attention mechanism and adversarial training is proposed. Firstly, a multi-level attention mechanism module is designed to capture the underlying abstract information features of the image. Secondly, the loss bound of the generative adversarial network is increased so that a virtual enhanced domain, which simulates a target domain of unknown data distribution, is generated by adversarial training while ensuring the consistency of data features and semantics. Finally, through a data mixing algorithm, the source domain and the virtual enhanced domain are mixed and fed into the model to improve the performance of the classifier. Experiments are carried out on five classic digit-recognition datasets and the CIFAR-10 series. The results show that the model can learn a better decision boundary, generate a virtual enhanced domain, and significantly improve recognition accuracy after model transfer. Compared with previous methods, our method improves average accuracy by at least 2.5% and 3%, respectively.
APA, Harvard, Vancouver, ISO, and other styles
15

Song, Xiang, Yuhang He, Songlin Dong, and Yihong Gong. "Non-exemplar Domain Incremental Object Detection via Learning Domain Bias." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 13 (March 24, 2024): 15056–65. http://dx.doi.org/10.1609/aaai.v38i13.29427.

Full text
Abstract:
Domain incremental object detection (DIOD) aims to gradually learn a unified object detection model from a dataset stream composed of different domains, achieving good performance in all encountered domains. The most critical obstacle to this goal is the catastrophic forgetting problem, where the performance of the model improves rapidly in new domains but deteriorates sharply in old ones after a few sessions. To address this problem, we propose a non-exemplar DIOD method named learning domain bias (LDB), which learns domain bias independently at each new session, avoiding saving examples from old domains. Concretely, a base model is first obtained through training during session 1. Then, LDB freezes the weights of the base model and trains individual domain bias for each new incoming domain, adapting the base model to the distribution of new domains. At test time, since the domain ID is unknown, we propose a domain selector based on nearest mean classifier (NMC), which selects the most appropriate domain bias for a test image. Extensive experimental evaluations on two series of datasets demonstrate the effectiveness of the proposed LDB method in achieving high accuracy on new and old domain datasets. The code is available at https://github.com/SONGX1997/LDB.
APA, Harvard, Vancouver, ISO, and other styles
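The nearest-mean-classifier (NMC) domain selector mentioned in the abstract above can be sketched as follows; the feature vectors and per-domain means are made up for illustration:

```python
import numpy as np

def select_domain(feature, domain_means):
    """Return the index of the stored per-domain mean feature that is
    closest (in L2 distance) to a test image's feature vector; the
    selected index would then pick the matching domain bias at test
    time, since the true domain ID is unknown."""
    dists = [np.linalg.norm(feature - m) for m in domain_means]
    return int(np.argmin(dists))

domain_means = [np.array([0.0, 0.0]),    # toy mean feature of domain 0
                np.array([5.0, 5.0])]    # toy mean feature of domain 1
domain_id = select_domain(np.array([4.2, 4.8]), domain_means)
```

Because only class/domain means are stored rather than raw samples, this selector is compatible with the non-exemplar constraint described above.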
16

Wei, Yikang, and Yahong Han. "Multi-Source Collaborative Gradient Discrepancy Minimization for Federated Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 14 (March 24, 2024): 15805–13. http://dx.doi.org/10.1609/aaai.v38i14.29510.

Full text
Abstract:
Federated domain generalization aims to learn a domain-invariant model from multiple decentralized source domains for deployment on an unseen target domain. Due to privacy concerns, the data from different source domains are kept isolated, which poses challenges in bridging the domain gap. To address this issue, we propose a Multi-source Collaborative Gradient Discrepancy Minimization (MCGDM) method for federated domain generalization. Specifically, we propose intra-domain gradient matching between original images and augmented images to avoid overfitting the domain-specific information within isolated domains. Additionally, we propose inter-domain gradient matching with the collaboration of other domains, which can further reduce the domain shift across decentralized domains. Combining intra-domain and inter-domain gradient matching, our method enables the learned model to generalize well on unseen domains. Furthermore, our method can be extended to the federated domain adaptation task by fine-tuning the target model on the pseudo-labeled target domain. Extensive experiments on federated domain generalization and adaptation indicate that our method significantly outperforms state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles
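Gradient matching of the kind described above is often realized by penalizing the angle between two gradient vectors; a minimal sketch of such a discrepancy term (the exact loss used in the paper may differ):

```python
import numpy as np

def gradient_discrepancy(g1, g2, eps=1e-12):
    """1 - cosine similarity between two flattened gradient vectors;
    driving this toward 0 makes the two training signals (e.g. from
    original vs. augmented images, or from two domains) agree in
    direction."""
    cos = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2) + eps)
    return 1.0 - cos

g_original  = np.array([1.0, 0.0])   # toy gradient on original images
g_augmented = np.array([1.0, 0.0])   # toy gradient on augmented images
aligned = gradient_discrepancy(g_original, g_augmented)          # ~0.0
opposed = gradient_discrepancy(g_original, np.array([0.0, 1.0])) # ~1.0
```

In the federated setting this penalty is attractive because gradients, not raw images, are what clients can share or compare.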
17

Matsuura, Toshihiko, and Tatsuya Harada. "Domain Generalization Using a Mixture of Multiple Latent Domains." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11749–56. http://dx.doi.org/10.1609/aaai.v34i07.6846.

Full text
Abstract:
When domains, which represent underlying data distributions, vary during training and testing processes, deep neural networks suffer a drop in their performance. Domain generalization allows improvements in the generalization performance for unseen target domains by using multiple source domains. Conventional methods assume that the domain to which each sample belongs is known in training. However, many datasets, such as those collected via web crawling, contain a mixture of multiple latent domains, in which the domain of each sample is unknown. This paper introduces domain generalization using a mixture of multiple latent domains as a novel and more realistic scenario, where we try to train a domain-generalized model without using domain labels. To address this scenario, we propose a method that iteratively divides samples into latent domains via clustering, and which trains the domain-invariant feature extractor shared among the divided latent domains via adversarial learning. We assume that the latent domain of images is reflected in their style, and thus, utilize style features for clustering. By using these features, our proposed method successfully discovers latent domains and achieves domain generalization even if the domain labels are not given. Experiments show that our proposed method can train a domain-generalized model without using domain labels. Moreover, it outperforms conventional domain generalization methods, including those that utilize domain labels.
APA, Harvard, Vancouver, ISO, and other styles
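The style features used above for latent-domain clustering are commonly the channel-wise statistics of convolutional feature maps. A minimal sketch of that extraction step follows (the clustering itself, e.g. k-means over these vectors, is omitted; names and shapes are illustrative):

```python
import numpy as np

def style_vector(feature_map):
    """Channel-wise mean and std of a (C, H, W) feature map. Images
    with a similar 'style' yield nearby vectors, so clustering these
    can recover latent domains even when domain labels are missing."""
    mu = feature_map.mean(axis=(1, 2))
    sigma = feature_map.std(axis=(1, 2))
    return np.concatenate([mu, sigma])

fmap = np.stack([np.full((4, 4), 2.0),     # constant channel: mean 2, std 0
                 np.full((4, 4), -1.0)])   # constant channel: mean -1, std 0
s = style_vector(fmap)                     # -> [mean_0, mean_1, std_0, std_1]
```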
18

Wang, Wenlan, Zhaoping He, Thomas J. O'Shaughnessy, John Rux, and William W. Reenstra. "Domain-domain associations in cystic fibrosis transmembrane conductance regulator." American Journal of Physiology-Cell Physiology 282, no. 5 (May 1, 2002): C1170–C1180. http://dx.doi.org/10.1152/ajpcell.00337.2001.

Full text
Abstract:
Cystic fibrosis is caused by mutations in the cystic fibrosis transmembrane conductance regulator (CFTR) gene. CFTR is a chloride channel whose activity requires protein kinase A-dependent phosphorylation of an intracellular regulatory domain (R-domain) and ATP hydrolysis at the nucleotide-binding domains (NBDs). To identify potential sites of domain-domain interaction within CFTR, we expressed, purified, and refolded histidine (His)- and glutathione-S-transferase (GST)-tagged cytoplasmic domains of CFTR. ATP-binding to his-NBD1 and his-NBD2 was demonstrated by measuring tryptophan fluorescence quenching. Tryptic digestion of in vitro phosphorylated his-NBD1-R and in situ phosphorylated CFTR generated the same phosphopeptides. An interaction between NBD1-R and NBD2 was assayed by tryptophan fluorescence quenching. Binding among all pairwise combinations of R-domain, NBD1, and NBD2 was demonstrated with an overlay assay. To identify specific sites of interaction between domains of CFTR, an overlay assay was used to probe an overlapping peptide library spanning all intracellular regions of CFTR with his-NBD1, his-NBD2, and GST-R-domain. By mapping peptides from NBD1 and NBD2 that bound to other intracellular domains onto crystal structures for HisP, MalK, and Rad50, probable sites of interaction between NBD1 and NBD2 were identified. Our data support a model where NBDs form dimers with the ATP-binding sites at the domain-domain interface.
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Minsu, Sunghun Joung, Seungryong Kim, JungIn Park, Ig-Jae Kim, and Kwanghoon Sohn. "Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 3 (May 18, 2021): 1799–807. http://dx.doi.org/10.1609/aaai.v35i3.16274.

Full text
Abstract:
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) treat all the samples from the two domains in a global or category-aware manner. They do not consider inter-class variation within the target domain itself or within an estimated category, limiting their ability to encode domains having a multi-modal data distribution. To overcome this limitation, we introduce a learnable clustering module and a novel domain adaptation framework called cross-domain grouping and alignment. To cluster the samples across domains with the aim of maximizing domain alignment without forgetting precise segmentation ability on the source domain, we present two loss functions, in particular for encouraging semantic consistency and orthogonality among the clusters. We also present a loss to address the class imbalance problem, another limitation of previous methods. Our experiments show that our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
APA, Harvard, Vancouver, ISO, and other styles
20

Meier, Dennis, Nagarajan Valanoor, Qi Zhang, and Donghwa Lee. "Domains and domain walls in ferroic materials." Journal of Applied Physics 129, no. 23 (June 21, 2021): 230401. http://dx.doi.org/10.1063/5.0057144.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Bonino, Dario, and Fulvio Corno. "DoMAIns: Domain-based modeling for Ambient Intelligence." Pervasive and Mobile Computing 8, no. 4 (August 2012): 614–28. http://dx.doi.org/10.1016/j.pmcj.2011.10.009.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Boysen, H. "Diffuse scattering by domains and domain walls." Phase Transitions 55, no. 1-4 (December 1995): 1–16. http://dx.doi.org/10.1080/01411599508200422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Dentakos, Stella, Wafa Saoud, Rakefet Ackerman, and Maggie E. Toplak. "Does domain matter? Monitoring accuracy across domains." Metacognition and Learning 14, no. 3 (July 17, 2019): 413–36. http://dx.doi.org/10.1007/s11409-019-09198-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Zhu, Yongchun, Fuzhen Zhuang, and Deqing Wang. "Aligning Domain-Specific Distribution and Classifier for Cross-Domain Classification from Multiple Sources." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5989–96. http://dx.doi.org/10.1609/aaai.v33i01.33015989.

Full text
Abstract:
While Unsupervised Domain Adaptation (UDA) algorithms, i.e., those using only labeled data from source domains, have been actively studied in recent years, most algorithms and theoretical results focus on Single-source Unsupervised Domain Adaptation (SUDA). However, in practical scenarios, labeled data can typically be collected from multiple diverse sources, which might differ not only from the target domain but also from each other. Thus, domain adapters from multiple sources should not be modeled in the same way. Recent deep-learning-based Multi-source Unsupervised Domain Adaptation (MUDA) algorithms focus on extracting common domain-invariant representations for all domains by aligning the distributions of all pairs of source and target domains in a common feature space. However, it is often very hard to extract the same domain-invariant representations for all domains in MUDA. In addition, these methods match distributions without considering domain-specific decision boundaries between classes. To solve these problems, we propose a new framework with two alignment stages for MUDA, which not only aligns the distributions of each pair of source and target domains in multiple specific feature spaces, but also aligns the outputs of classifiers by utilizing the domain-specific decision boundaries. Extensive experiments demonstrate that our method achieves remarkable results on popular benchmark datasets for image classification.
APA, Harvard, Vancouver, ISO, and other styles
25

Wang, W., R. Jiang, W. Zhang, and Y. Luan. "Prioritisation of associations between protein domains and complex diseases using domain–domain interaction networks." IET Systems Biology 4, no. 3 (May 1, 2010): 212–22. http://dx.doi.org/10.1049/iet-syb.2009.0037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Akiva, E., Z. Itzhaki, and H. Margalit. "Built-in loops allow versatility in domain-domain interactions: Lessons from self-interacting domains." Proceedings of the National Academy of Sciences 105, no. 36 (August 29, 2008): 13292–97. http://dx.doi.org/10.1073/pnas.0801207105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Wang, Hongyu, and Yong Xia. "Domain-ensemble learning with cross-domain mixup for thoracic disease classification in unseen domains." Biomedical Signal Processing and Control 81 (March 2023): 104488. http://dx.doi.org/10.1016/j.bspc.2022.104488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Yoshida, Yasuhisa, Tsutomu Hirao, Tomoharu Iwata, Masaaki Nagata, and Yuji Matsumoto. "Transfer Learning for Multiple-Domain Sentiment Analysis — Identifying Domain Dependent/Independent Word Polarity." Proceedings of the AAAI Conference on Artificial Intelligence 25, no. 1 (August 4, 2011): 1286–91. http://dx.doi.org/10.1609/aaai.v25i1.8081.

Full text
Abstract:
Sentiment analysis is the task of determining the attitude (positive or negative) of documents. While the polarity of words in the documents is informative for this task, polarity of some words cannot be determined without domain knowledge. Detecting word polarity thus poses a challenge for multiple-domain sentiment analysis. Previous approaches tackle this problem with transfer learning techniques, but they cannot handle multiple source domains and multiple target domains. This paper proposes a novel Bayesian probabilistic model to handle multiple source and multiple target domains. In this model, each word is associated with three factors: Domain label, domain dependence/independence and word polarity. We derive an efficient algorithm using Gibbs sampling for inferring the parameters of the model, from both labeled and unlabeled texts. Using real data, we demonstrate the effectiveness of our model in a document polarity classification task compared with a method not considering the differences between domains. Moreover our method can also tell whether each word's polarity is domain-dependent or domain-independent. This feature allows us to construct a word polarity dictionary for each domain.
APA, Harvard, Vancouver, ISO, and other styles
29

Wang, Xu, Dezhong Peng, Ming Yan, and Peng Hu. "Correspondence-Free Domain Alignment for Unsupervised Cross-Domain Image Retrieval." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 8 (June 26, 2023): 10200–10208. http://dx.doi.org/10.1609/aaai.v37i8.26215.

Full text
Abstract:
Cross-domain image retrieval aims at retrieving images across different domains to excavate cross-domain classificatory or correspondence relationships. This paper studies a less-touched problem of cross-domain image retrieval, i.e., unsupervised cross-domain image retrieval, considering the following practical assumptions: (i) no correspondence relationship, and (ii) no category annotations. It is challenging to align and bridge distinct domains without cross-domain correspondence. To tackle the challenge, we present a novel Correspondence-free Domain Alignment (CoDA) method to effectively eliminate the cross-domain gap through In-domain Self-matching Supervision (ISS) and Cross-domain Classifier Alignment (CCA). To be specific, ISS is presented to encapsulate discriminative information into the latent common space by elaborating a novel self-matching supervision mechanism. To alleviate the cross-domain discrepancy, CCA is proposed to align distinct domain-specific classifiers. Thanks to the ISS and CCA, our method could encode the discrimination into the domain-invariant embedding space for unsupervised cross-domain image retrieval. To verify the effectiveness of the proposed method, extensive experiments are conducted on four benchmark datasets compared with six state-of-the-art methods.
30

Cai, Yiqing, Lianggangxu Chen, Haoyue Guan, Shaohui Lin, Changhong Lu, Changbo Wang, and Gaoqi He. "Explicit Invariant Feature Induced Cross-Domain Crowd Counting." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 1 (June 26, 2023): 259–67. http://dx.doi.org/10.1609/aaai.v37i1.25098.

Full text
Abstract:
Cross-domain crowd counting has shown progressively improved performance. However, most methods fail to explicitly consider the transferability of different features between source and target domains. In this paper, we propose an innovative explicit Invariant Feature induced Cross-domain Knowledge Transformation framework to address the inconsistent domain-invariant features of different domains. The main idea is to explicitly extract domain-invariant features from both source and target domains, which builds a bridge to transfer richer knowledge between the two domains. The framework consists of three parts: global feature decoupling (GFD), relation exploration and alignment (REA), and graph-guided knowledge enhancement (GKE). In the GFD module, domain-invariant features are efficiently decoupled from domain-specific ones in the two domains, which allows the model to distinguish crowd features from backgrounds in complex scenes. In the REA module, both an inter-domain relation graph (Inter-RG) and an intra-domain relation graph (Intra-RG) are built. Specifically, Inter-RG aggregates multi-scale domain-invariant features between the two domains and further aligns local-level invariant features. Intra-RG preserves task-related specific information to assist the domain alignment. Furthermore, the GKE strategy models the confidence of pseudo-labels to further enhance the adaptability of the target domain. Various experiments show our method achieves state-of-the-art performance on the standard benchmarks. Code is available at https://github.com/caiyiqing/IF-CKT.
31

Hu, Xuming, Zhaochen Hong, Yong Jiang, Zhichao Lin, Xiaobin Wang, Pengjun Xie, and Philip S. Yu. "Three Heads Are Better than One: Improving Cross-Domain NER with Progressive Decomposed Network." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 16 (March 24, 2024): 18261–69. http://dx.doi.org/10.1609/aaai.v38i16.29785.

Full text
Abstract:
Cross-domain named entity recognition (NER) tasks encourage NER models to transfer knowledge from data-rich source domains to sparsely labeled target domains. Previous works adopt the paradigm of pre-training on the source domain followed by fine-tuning on the target domain. However, these works ignore that general labeled NER source domain data can be easily retrieved in the real world, and soliciting more source domains could bring more benefits. Unfortunately, previous paradigms cannot efficiently transfer knowledge from multiple source domains. In this work, to transfer multiple source domains' knowledge, we decouple the NER task into the pipeline tasks of mention detection and entity typing, where the mention detection unifies the training objective across domains, thus providing the entity typing with higher-quality entity mentions. Additionally, we request multiple general source domain models to suggest the potential named entities for sentences in the target domain explicitly, and transfer their knowledge to the target domain models through the knowledge progressive networks implicitly. Furthermore, we propose two methods to analyze in which source domain knowledge transfer occurs, thus helping us judge which source domain brings the greatest benefit. In our experiment, we develop a Chinese cross-domain NER dataset. Our model improved the F1 score by an average of 12.50% across 8 Chinese and English datasets compared to models without source domain data.
32

Nandula, Anuradha, and Panuganti Vijayapal Reddy. "Exploring RoBERTa model for cross-domain suggestion detection in online reviews." Indonesian Journal of Electrical Engineering and Computer Science 35, no. 3 (September 1, 2024): 1637. http://dx.doi.org/10.11591/ijeecs.v35.i3.pp1637-1644.

Full text
Abstract:
Detecting suggestions in online reviews requires contextual understanding of the review text, which is an important real-world application of natural language processing. Given the disparate text domains found in product reviews, a common strategy involves fine-tuning bidirectional encoder representations from transformers (BERT) models using reviews from various domains. However, there hasn't been an empirical examination of how BERT models behave across different domains in tasks related to detecting suggestion sentences from online reviews. In this study, we explore BERT models for suggestion classification that have been fine-tuned using single-domain and cross-domain Amazon review datasets. Our results indicate that while single-domain models achieved slightly better performance within their respective domains compared to cross-domain models, the latter outperformed single-domain models when evaluated on cross-domain data. This was also observed for single-domain data not used for fine-tuning the single-domain model, and on average across all tests. Although fine-tuning single-domain models can lead to minor accuracy improvements, employing multi-domain models that perform well across domains can help with cold-start problems and reduce annotation costs.
33

Wang, Yu, Ronghang Zhu, Pengsheng Ji, and Sheng Li. "Open-Set Graph Domain Adaptation via Separate Domain Alignment." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 9142–50. http://dx.doi.org/10.1609/aaai.v38i8.28765.

Full text
Abstract:
Domain adaptation has become an attractive learning paradigm, as it can leverage source domains with rich labels to deal with classification tasks in an unlabeled target domain. A few recent studies develop domain adaptation approaches for graph-structured data. In the case of node classification task, current domain adaptation methods only focus on the closed-set setting, where source and target domains share the same label space. A more practical assumption is that the target domain may contain new classes that are not included in the source domain. Therefore, in this paper, we introduce a novel and challenging problem for graphs, i.e., open-set domain adaptive node classification, and propose a new approach to solve it. Specifically, we develop an algorithm for efficient knowledge transfer from a labeled source graph to an unlabeled target graph under a separate domain alignment (SDA) strategy, in order to learn discriminative feature representations for the target graph. Our goal is to not only correctly classify target nodes into the known classes, but also classify unseen types of nodes into an unknown class. Experimental results on real-world datasets show that our method outperforms existing methods on graph domain adaptation.
34

Ding, Deqiang, Chao Wei, Kunzhe Dong, Jiali Liu, Alexander Stanton, Chao Xu, Jinrong Min, Jian Hu, and Chen Chen. "LOTUS domain is a novel class of G-rich and G-quadruplex RNA binding domain." Nucleic Acids Research 48, no. 16 (August 7, 2020): 9262–72. http://dx.doi.org/10.1093/nar/gkaa652.

Full text
Abstract:
LOTUS domains are helix-turn-helix protein folds identified in essential germline proteins and are conserved in prokaryotes and eukaryotes. Despite being originally predicted as an RNA binding domain, its molecular binding activity towards RNA and protein is controversial. In particular, the most conserved binding property for the LOTUS domain family remains unknown. Here, we uncovered an unexpected specific interaction of LOTUS domains with G-rich RNA sequences. Intriguingly, LOTUS domains exhibit high affinity to RNA G-quadruplex tertiary structures implicated in diverse cellular processes including piRNA biogenesis. This novel LOTUS domain-RNA interaction is conserved in bacteria, plants and animals, comprising the most ancient binding feature of the LOTUS domain family. By contrast, LOTUS domains do not preferentially interact with DNA G-quadruplexes. We further show that a subset of LOTUS domains display both RNA and protein binding activities. These findings identify the LOTUS domain as a specialized RNA binding domain across phyla and underscore the molecular mechanism underlying the function of LOTUS domain-containing proteins in RNA metabolism and regulation.
35

Chu, Yu-Ming. "Arcwise Connected Domains, Quasiconformal Mappings, and Quasidisks." Abstract and Applied Analysis 2014 (2014): 1–5. http://dx.doi.org/10.1155/2014/419850.

Full text
Abstract:
We prove that a homeomorphism f: R² → R² is a quasiconformal mapping if and only if f(D) is an arcwise connected domain for any arcwise connected domain D ⊆ R², and D is a quasidisk if and only if both D and its exterior D* = R² ∖ D̄ are arcwise connected domains.
36

Yadav, Akshay, David Fernández-Baca, and Steven B. Cannon. "Family-Specific Gains and Losses of Protein Domains in the Legume and Grass Plant Families." Evolutionary Bioinformatics 16 (January 2020): 117693432093994. http://dx.doi.org/10.1177/1176934320939943.

Full text
Abstract:
Protein domains can be regarded as sections of protein sequences capable of folding independently and performing specific functions. In addition to amino-acid level changes, protein sequences can also evolve through domain shuffling events such as domain insertion, deletion, or duplication. The evolution of protein domains can be studied by tracking domain changes in a selected set of species with known phylogenetic relationships. Here, we conduct such an analysis by defining domains as “features” or “descriptors,” and considering the species (target + outgroup) as instances or data-points in a data matrix. We then look for features (domains) that are significantly different between the target species and the outgroup species. We study the domain changes in 2 large, distinct groups of plant species: legumes (Fabaceae) and grasses (Poaceae), with respect to selected outgroup species. We evaluate 4 types of domain feature matrices: domain content, domain duplication, domain abundance, and domain versatility. The 4 types of domain feature matrices attempt to capture different aspects of domain changes through which the protein sequences may evolve—that is, via gain or loss of domains, increase or decrease in the copy number of domains along the sequences, expansion or contraction of domains, or through changes in the number of adjacent domain partners. All the feature matrices were analyzed using feature selection techniques and statistical tests to select protein domains that have significant different feature values in legumes and grasses. We report the biological functions of the top selected domains from the analysis of all the feature matrices. In addition, we also perform domain-centric gene ontology (dcGO) enrichment analysis on all selected domains from all 4 feature matrices to study the gene ontology terms associated with the significantly evolving domains in legumes and grasses. 
Domain content analysis revealed a striking loss of protein domains from the Fanconi anemia (FA) pathway, the pathway responsible for the repair of interstrand DNA crosslinks. The abundance analysis of domains found in legumes revealed an increase in the glutathione synthase enzyme, an antioxidant required for nitrogen fixation, and a decrease in xanthine oxidizing enzymes, a phenomenon confirmed by previous studies. In grasses, the abundance analysis showed increases in domains related to gene silencing, which could be due to polyploidy or to an enhanced response to viral infection. We provide a docker container that can be used to perform this analysis workflow on any user-defined sets of species, available at https://cloud.docker.com/u/akshayayadav/repository/docker/akshayayadav/protein-domain-evolution-project .
37

Jia, Pengyue, Yichao Wang, Shanru Lin, Xiaopeng Li, Xiangyu Zhao, Huifeng Guo, and Ruiming Tang. "D3: A Methodological Exploration of Domain Division, Modeling, and Balance in Multi-Domain Recommendations." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 8 (March 24, 2024): 8553–61. http://dx.doi.org/10.1609/aaai.v38i8.28699.

Full text
Abstract:
To enhance the efficacy of multi-scenario services in industrial recommendation systems, the emergence of multi-domain recommendation has become prominent, which entails simultaneous modeling of all domains through a unified model, effectively capturing commonalities and differences among them. However, current methods rely on manual domain partitioning, which overlooks the intricate domain relationships and the heterogeneity of different domains during joint optimization, hindering the integration of domain commonalities and differences. To address these challenges, this paper proposes a universal and flexible framework D3 aimed at optimizing the multi-domain recommendation pipeline from three key aspects. Firstly, an attention-based domain adaptation module is introduced to automatically identify and incorporate domain-sensitive features during training. Secondly, we propose a fusion gate module that enables the seamless integration of commonalities and diversities among domains, allowing for implicit characterization of intricate domain relationships. Lastly, we tackle the issue of joint optimization by deriving loss weights from two complementary viewpoints: domain complexity and domain specificity, alleviating inconsistencies among different domains during the training phase. Experiments on three public datasets demonstrate the effectiveness and superiority of our proposed framework. In addition, D3 has been implemented on a real-life, high-traffic internet platform catering to millions of users daily.
38

Gauthier, P. M., and V. Nestoridis. "Domains of Injective Holomorphy." Canadian Mathematical Bulletin 55, no. 3 (September 1, 2012): 509–22. http://dx.doi.org/10.4153/cmb-2011-099-9.

Full text
Abstract:
A domain Ω is called a domain of injective holomorphy if there exists an injective holomorphic function ƒ: Ω → ℂ that is non-extendable. We give examples of domains that are domains of injective holomorphy and others that are not. In particular, every regular domain is a domain of injective holomorphy, and every simply connected domain is a domain of injective holomorphy as well.
39

Dou, Chenxiao, Xianghui Sun, Yaoshu Wang, Yunjie Ji, Baochang Ma, and Xiangang Li. "Domain-Adapted Dependency Parsing for Cross-Domain Named Entity Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 11 (June 26, 2023): 12737–44. http://dx.doi.org/10.1609/aaai.v37i11.26498.

Full text
Abstract:
In recent years, many researchers have leveraged structural information from dependency trees to improve Named Entity Recognition (NER). Most of their methods take dependency-tree labels as input features for NER model training. However, such dependency information is not inherently provided in most NER corpora, making these methods less usable in practice. To effectively exploit the potential of word-dependency knowledge, motivated by the success of Multi-Task Learning on cross-domain NER, we investigate a novel NER learning method incorporating cross-domain Dependency Parsing (DP) as its auxiliary learning task. Then, considering the high consistency of word-dependency relations across domains, we present an unsupervised domain-adapted method to transfer word-dependency knowledge from high-resource domains to low-resource ones. With the help of cross-domain DP to bridge different domains, both useful cross-domain and cross-task knowledge can be learned by our model to considerably benefit cross-domain NER. To make better use of the cross-task knowledge between NER and DP, we unify both tasks in a shared network architecture for joint learning, using Maximum Mean Discrepancy (MMD). Finally, through extensive experiments, we show our proposed method can not only effectively take advantage of word-dependency knowledge, but also significantly outperform other Multi-Task Learning methods on cross-domain NER. Our code is open-source and available at https://github.com/xianghuisun/DADP.
40

Wang, Ximei, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. "Transferable Attention for Domain Adaptation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5345–52. http://dx.doi.org/10.1609/aaai.v33i01.33015345.

Full text
Abstract:
Recent work in domain adaptation bridges different domains by adversarially learning a domain-invariant representation that cannot be distinguished by a domain discriminator. Existing methods of adversarial domain adaptation mainly align the global images across the source and target domains. However, it is obvious that not all regions of an image are transferable, while forcefully aligning the untransferable regions may lead to negative transfer. Furthermore, some of the images are significantly dissimilar across domains, resulting in weak image-level transferability. To this end, we present Transferable Attention for Domain Adaptation (TADA), focusing our adaptation model on transferable regions or images. We implement two types of complementary transferable attention: transferable local attention generated by multiple region-level domain discriminators to highlight transferable regions, and transferable global attention generated by a single image-level domain discriminator to highlight transferable images. Extensive experiments validate that our proposed models exceed state-of-the-art results on standard domain adaptation datasets.
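The region-weighting idea summarized in this abstract can be illustrated with a generic sketch (an assumption-laden simplification, not the paper's exact TADA formulation): weight each region by the entropy of a domain discriminator's output, so that regions the discriminator cannot tell apart are treated as more transferable.

```python
import numpy as np

def transferability_weights(disc_probs):
    """Weight regions by domain-discriminator uncertainty.

    Regions where the binary discriminator is maximally confused
    (probability near 0.5, entropy near its maximum) are treated as
    more transferable. This is a generic entropy-based attention
    sketch, not the exact weighting used in the TADA paper.
    """
    p = np.clip(disc_probs, 1e-6, 1 - 1e-6)
    # Binary entropy of each region-level domain prediction.
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    # Normalize so a perfectly confused region gets weight 1.
    return entropy / np.log(2)

# Hypothetical region-level domain predictions for three image regions.
probs = np.array([0.5, 0.9, 0.99])
w = transferability_weights(probs)
# w[0] is largest: the discriminator cannot tell which domain that
# region comes from, so it is the most domain-invariant region.
```

In an adversarial pipeline such weights would typically rescale region features before pooling, down-weighting regions that are clearly domain-specific.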
41

Green, M., T. J. Schuetz, E. K. Sullivan, and R. E. Kingston. "A heat shock-responsive domain of human HSF1 that regulates transcription activation domain function." Molecular and Cellular Biology 15, no. 6 (June 1995): 3354–62. http://dx.doi.org/10.1128/mcb.15.6.3354.

Full text
Abstract:
Human heat shock factor 1 (HSF1) stimulates transcription from heat shock protein genes following stress. We have used chimeric proteins containing the GAL4 DNA binding domain to identify the transcriptional activation domains of HSF1 and a separate domain that is capable of regulating activation domain function. This regulatory domain conferred heat shock inducibility to chimeric proteins containing the activation domains. The regulatory domain is located between the transcriptional activation domains and the DNA binding domain of HSF1 and is conserved between mammalian and chicken HSF1 but is not found in HSF2 or HSF3. The regulatory domain was found to be functionally homologous between chicken and human HSF1. This domain does not affect DNA binding by the chimeric proteins and does not contain any of the sequences previously postulated to regulate DNA binding of HSF1. Thus, we suggest that activation of HSF1 by stress in humans is controlled by two regulatory mechanisms that separately confer heat shock-induced DNA binding and transcriptional stimulation.
42

Wu, Lan, Chongyang Li, Qiliang Chen, and Binquan Li. "Deep adversarial domain adaptation network." International Journal of Advanced Robotic Systems 17, no. 5 (September 1, 2020): 172988142096464. http://dx.doi.org/10.1177/1729881420964648.

Full text
Abstract:
The advantage of adversarial domain adaptation is that it uses the idea of adversarial adaptation to confuse the feature distribution of two domains and solve the problem of domain transfer in transfer learning. However, although the discriminator completely confuses the two domains, adversarial domain adaptation still cannot guarantee the consistent feature distribution of the two domains, which may further deteriorate the recognition accuracy. Therefore, in this article, we propose a deep adversarial domain adaptation network, which optimises the feature distribution of the two confused domains by adding multi-kernel maximum mean discrepancy to the feature layer and designing a new loss function to ensure good recognition accuracy. In the last part, some simulation results based on the Office-31 and Underwater data sets show that the deep adversarial domain adaptation network can optimise the feature distribution and promote positive transfer, thus improving the classification accuracy.
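As a rough illustration of the multi-kernel maximum mean discrepancy term mentioned in this abstract, the following sketch estimates MMD² between two feature batches using a mixture of RBF kernels. The bandwidths and kernel choice are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-d2 / (2 * sigma**2))

def mk_mmd(source, target, sigmas=(0.5, 1.0, 2.0)):
    """Biased multi-kernel MMD^2 estimate between two feature batches,
    averaged over several RBF bandwidths (illustrative values)."""
    total = 0.0
    for s in sigmas:
        k_ss = rbf_kernel(source, source, s).mean()
        k_tt = rbf_kernel(target, target, s).mean()
        k_st = rbf_kernel(source, target, s).mean()
        total += k_ss + k_tt - 2 * k_st
    return total / len(sigmas)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (200, 8))
b = rng.normal(0.0, 1.0, (200, 8))   # same distribution as a
c = rng.normal(3.0, 1.0, (200, 8))   # shifted distribution
aligned_gap = mk_mmd(a, b)   # small: feature distributions match
shifted_gap = mk_mmd(a, c)   # large: domain shift is penalized
```

In a deep adversarial domain adaptation network, a term like this would be added to the loss at a feature layer so that minimizing it pulls the two confused domains toward a consistent feature distribution.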
43

Yu, Xu, Jun-yu Lin, Feng Jiang, Jun-wei Du, and Ji-zhong Han. "A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression." Computational Intelligence and Neuroscience 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/1425365.

Full text
Abstract:
Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus, the weight computation across different domains can be converted into the weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems occurring in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring the useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.
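The LWLR component named in this abstract can be sketched in its textbook one-dimensional form; the Gaussian locality kernel and bandwidth below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau=0.3):
    """Locally weighted linear regression at a single query point.

    Fits a weighted least-squares line around x_query, with training
    points weighted by a Gaussian kernel of bandwidth tau, so the fit
    adapts to local structure without a global parametric form.
    """
    Xb = np.column_stack([np.ones(len(X)), X])        # add intercept column
    q = np.array([1.0, x_query])
    w = np.exp(-((X - x_query) ** 2) / (2 * tau**2))  # locality weights
    W = np.diag(w)
    # Weighted normal equations: (Xb' W Xb) theta = Xb' W y
    theta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
    return q @ theta

# Nonlinear data that a single global line would badly underfit.
rng = np.random.default_rng(1)
X = np.linspace(0, 2 * np.pi, 200)
y = np.sin(X) + rng.normal(0, 0.05, 200)
pred = lwlr_predict(np.pi / 2, X, y)  # should land near sin(pi/2) = 1
```

Because each prediction re-solves a small weighted regression around the query, LWLR avoids committing to one global model, which is the nonparametric property the abstract credits for sidestepping underfitting and overfitting.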
44

TELL, Gianluca, Lorena PERRONE, Dora FABBRO, Lucia PELLIZZARI, Carlo PUCILLO, Mario DE FELICE, Renato ACQUAVIVA, Silvestro FORMISANO, and Giuseppe DAMANTE. "Structural and functional properties of the N transcriptional activation domain of thyroid transcription factor-1: similarities with the acidic activation domains." Biochemical Journal 329, no. 2 (January 15, 1998): 395–403. http://dx.doi.org/10.1042/bj3290395.

Full text
Abstract:
The thyroid transcription factor 1 (TTF-1) is a tissue-specific transcription factor involved in the development of thyroid and lung. TTF-1 contains two transcriptional activation domains (N and C domain). The primary amino acid sequence of the N domain does not show any typical characteristic of known transcriptional activation domains. In aqueous solution the N domain exists in a random-coil conformation. The increase of the milieu hydrophobicity, by the addition of trifluoroethanol, induces a considerable gain of α-helical structure. Acidic transcriptional activation domains are largely unstructured in solution, but, under hydrophobic conditions, folding into α-helices or β-strands can be induced. Therefore our data indicate that the inducibility of α-helix by hydrophobic conditions is a property not restricted to acidic domains. Co-transfection experiments indicate that the acidic domain of herpes simplex virus protein VP16 (VP16) and the TTF-1 N domain are interchangeable and that a chimaeric protein, which combines VP16 linked to the DNA-binding domain of TTF-1, undergoes the same regulatory constraints that operate for the wild-type TTF-1. In addition, we demonstrate that the TTF-1 N domain possesses two typical properties of acidic activation domains: TBP (TATA-binding protein) binding and the ability to activate transcription in yeast. Accordingly, the TTF-1 N domain is able to squelch the activity of the p65 acidic domain. Altogether, these structural and functional data suggest that a non-acidic transcriptional activation domain (TTF-1 N domain) activates transcription by using molecular mechanisms similar to those used by acidic domains. The TTF-1 N domain and acidic domains define a family of proteins whose common property is to activate transcription through the use of mechanisms largely conserved during evolutionary development.
45

Chou, Chia-Yeh, Yu-Chih Ou, and Tsuey-Ru Chiang. "Psychometric comparisons of four disease-specific health-related quality of life measures for stroke survivors." Clinical Rehabilitation 29, no. 8 (October 28, 2014): 816–29. http://dx.doi.org/10.1177/0269215514555137.

Full text
Abstract:
Objective: To examine psychometric properties of four stroke-specific health-related quality of life (HRQoL) measures, including the original Stroke-Specific Quality of Life Scale (12-domain SSQoL), modified 8-domain SSQoL, Stroke Impact Scale (SIS 3.0), and modified SIS-16 focused on physical domains. Design and Setting: Prospective repeated measures study conducted in rehabilitation and wards in hospitals. Subjects: Study cohort was recruited with 263 patients in the first administration and 121 in the second administration, an average of two weeks later. To investigate discriminant validity, the same number of patients (i.e., 52) was grouped for each of 3 levels of stroke severity. Main measures: Outcome measures, including National Institutes of Health Stroke Scale, Mini-Mental State Examination, and Barthel Index. Patients completed HRQoL self-reports. Results: Domains of the four measures showed (1) good reliability, except the 12-domain SSQoL family roles (Cronbach's α = 0.68) and personality domains (Cronbach's α = 0.65) and the SIS 3.0 social participation (ICC = 0.67) domain; (2) acceptable precision, except the 12-domain SSQoL family role domain and SIS 3.0 social participation domain; (3) good convergent validity, except the 12-domain SSQoL/8-domain SSQoL vision domain (r = 0.19); (4) good discriminant validity, except the 12-domain SSQoL and 8-domain SSQoL thinking domains ( P = 0.365); and (5) acceptable floor effects and strong ceiling effects. The 12-domain SSQoL and 8-domain SSQoL met scaling assumptions better than SIS 3.0 and SIS-16. Conclusions: The four measures showed acceptable psychometric properties with some domains slightly less satisfactory. Overall, use of the 8-domain SSQoL and SIS 3.0 is feasible for clinical practice to monitor HRQoL of stroke survivors.
46

Xu, Yifan, Kekai Sheng, Weiming Dong, Baoyuan Wu, Changsheng Xu, and Bao-Gang Hu. "Towards Corruption-Agnostic Robust Domain Adaptation." ACM Transactions on Multimedia Computing, Communications, and Applications 18, no. 4 (November 30, 2022): 1–16. http://dx.doi.org/10.1145/3501800.

Full text
Abstract:
Great progress has been achieved in domain adaptation over the past decades. Existing works are always based on an ideal assumption that testing target domains are independent and identically distributed with training target domains. However, due to unpredictable corruptions (e.g., noise and blur) in real data, such as web images and real-world object detection, domain adaptation methods are increasingly required to be corruption robust on target domains. We investigate a new task, corruption-agnostic robust domain adaptation (CRDA), to be accurate on original data and robust against unavailable-for-training corruptions on target domains. This task is non-trivial due to the large domain discrepancy and unsupervised target domains. We observe that simple combinations of popular methods of domain adaptation and corruption robustness have suboptimal CRDA results. We propose a new approach based on two technical insights into CRDA, as follows: (1) an easy-to-plug module called domain discrepancy generator (DDG) that generates samples that enlarge domain discrepancy to mimic unpredictable corruptions; (2) a simple but effective teacher-student scheme with contrastive loss to enhance the constraints on target domains. Experiments verify that DDG maintains or even improves its performance on original data and achieves better corruption robustness than baselines. Our code is available at: https://github.com/YifanXu74/CRDA .
47

Liu, Zihan, Yan Xu, Tiezheng Yu, Wenliang Dai, Ziwei Ji, Samuel Cahyawijaya, Andrea Madotto, and Pascale Fung. "CrossNER: Evaluating Cross-Domain Named Entity Recognition." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 15 (May 18, 2021): 13452–60. http://dx.doi.org/10.1609/aaai.v35i15.17587.

Full text
Abstract:
Cross-domain named entity recognition (NER) models are able to cope with the scarcity issue of NER samples in target domains. However, most of the existing NER benchmarks lack domain-specialized entity types or do not focus on a certain domain, leading to a less effective cross-domain evaluation. To address these obstacles, we introduce a cross-domain NER dataset (CrossNER), a fully-labeled collection of NER data spanning over five diverse domains with specialized entity categories for different domains. Additionally, we also provide a domain-related corpus since using it to continue pre-training language models (domain-adaptive pre-training) is effective for the domain adaptation. We then conduct comprehensive experiments to explore the effectiveness of leveraging different levels of the domain corpus and pre-training strategies to do domain-adaptive pre-training for the cross-domain task. Results show that focusing on the fractional corpus containing domain-specialized entities and utilizing a more challenging pre-training strategy in domain-adaptive pre-training are beneficial for the NER domain adaptation, and our proposed method can consistently outperform existing cross-domain NER baselines. Nevertheless, experiments also illustrate the challenge of this cross-domain NER task. We hope that our dataset and baselines will catalyze research in the NER domain adaptation area. The code and data are available at https://github.com/zliucr/CrossNER.
48

Brot, Nathan, Jean-François Collet, Lynnette C. Johnson, Thomas J. Jönsson, Herbert Weissbach, and W. Todd Lowther. "The Thioredoxin Domain of Neisseria gonorrhoeae PilB Can Use Electrons from DsbD to Reduce Downstream Methionine Sulfoxide Reductases." Journal of Biological Chemistry 281, no. 43 (August 22, 2006): 32668–75. http://dx.doi.org/10.1074/jbc.m604971200.

Full text
Abstract:
The PilB protein from Neisseria gonorrhoeae is located in the periplasm and made up of three domains. The N-terminal, thioredoxin-like domain (NT domain) is fused to tandem methionine sulfoxide reductase A and B domains (MsrA/B). We show that the α domain of Escherichia coli DsbD is able to reduce the oxidized NT domain, which suggests that DsbD in Neisseria can transfer electrons from the cytoplasmic thioredoxin to the periplasm for the reduction of the MsrA/B domains. An analysis of the available complete genomes provides further evidence for this proposition in other bacteria where DsbD/CcdA, Trx, MsrA, and MsrB gene homologs are all located in a gene cluster with a common transcriptional direction. An examination of wild-type PilB and a panel of Cys to Ser mutants of the full-length protein and the individually expressed domains has also shown that the NT domain more efficiently reduces the MsrA/B domains when in the polyprotein context. Within this framework, there does not appear to be a preference for the NT domain to reduce the proximal MsrA domain over the MsrB domain. Finally, we report the 1.6 Å crystal structure of the NT domain. This structure confirms the presence of a surface loop that makes it different from other membrane-tethered, Trx-like molecules, including TlpA, CcmG, and ResA. Subtle differences are observed in this loop when compared with the Neisseria meningitidis NT domain structure. Taken together, the data support the formation of specific NT domain interactions with the MsrA/B domains and its in vivo recycling partner, DsbD.
49

Ferreira, Pedro, Luis Sanchez-Pulido, Anika Marko, Chris P. Ponting, and Dominik Boos. "Refining the domain architecture model of the replication origin firing factor Treslin/TICRR." Life Science Alliance 5, no. 5 (January 28, 2022): e202101088. http://dx.doi.org/10.26508/lsa.202101088.

Full text
Abstract:
Faithful genome duplication requires appropriately controlled replication origin firing. The metazoan origin firing regulation hub Treslin/TICRR and its yeast orthologue Sld3 share the Sld3-Treslin domain and the adjacent TopBP1/Dpb11 interaction domain. We report a revised domain architecture model of Treslin/TICRR. Protein sequence analyses uncovered a conserved Ku70-homologous β-barrel fold in the Treslin/TICRR middle domain (M domain) and in Sld3. Thus, the Sld3-homologous Treslin/TICRR core comprises its three central domains, M domain, Sld3-Treslin domain, and TopBP1/Dpb11 interaction domain, flanked by non-conserved terminal domains, the CIT (conserved in Treslins) and the C terminus. The CIT includes a von Willebrand factor type A domain. Unexpectedly, MTBP, Treslin/TICRR, and Ku70/80 share the same N-terminal domain architecture, von Willebrand factor type A and Ku70-like β-barrels, suggesting a common ancestry. Binding experiments using mutants and the Sld3–Sld7 dimer structure suggest that the Treslin/Sld3 and MTBP/Sld7 β-barrels engage in homotypic interactions, reminiscent of Ku70-Ku80 dimerization. Cells expressing Treslin/TICRR domain mutants indicate that all Sld3-core domains and the non-conserved terminal domains fulfil important functions during origin firing in human cells. Thus, metazoa-specific and widely conserved molecular processes cooperate during metazoan origin firing.
50

Reeck, Crystal, O’Dhaniel A. Mullette-Gillman, R. Edward McLaurin, and Scott A. Huettel. "Beyond money: Risk preferences across both economic and non-economic contexts predict financial decisions." PLOS ONE 17, no. 12 (December 16, 2022): e0279125. http://dx.doi.org/10.1371/journal.pone.0279125.

Full text
Abstract:
Important decisions about risk occur in wide-ranging contexts, from investing to healthcare. While an underlying, domain-general risk attitude has been identified across contexts, it remains unclear what role it plays in shaping behavior relative to more domain-specific risk attitudes. Clarifying the relationship between domain-general and domain-specific risk attitudes would inform decision-making theories and the construction of decision aids. The present research assessed the relative contribution of domain-general and domain-specific risk attitudes to financial risk taking. We examined risk attitudes across different decision domains, as revealed through a well-validated measure, the Domain-Specific Risk-Taking Scale (DOSPERT). Confirmatory factor analysis indicated that a domain-general risk attitude shaped responses across multiple domains, and structural equation modeling showed that this domain-general risk attitude predicted observed behavioral risk premiums in a financial decision-making task better than domain-specific financial risk attitudes. Thus, assessments of risk attitudes that include both economic and non-economic domains improve predictions of financial risk taking due to the enhanced insight they provide into underlying, domain-general risk preferences.
