Academic literature on the topic 'Domain'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Domain.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Domain"

1

Sirait, Timoteus Natanael, and Jimmy BP Simangungsong. "ANALISIS YURIDIS PELAKSANAAN TUGAS POKOK PENGELOLA DOMAIN INTERNET INDONESIA." NOMMENSEN JOURNAL OF LEGAL OPINION 1, no. 01 (June 30, 2020): 52–62. http://dx.doi.org/10.51622/njlo.v1i01.38.

Full text
Abstract:
PANDI (the Indonesian Internet Domain Manager) is granted special authority to manage Indonesian domain names on a statutory basis, with its powers stipulated in Law No. 23 of 2013 concerning Domain Name Management. PANDI's shortcomings have been apparent since the domain dispute between bmw.co.id and bmw.id: there is no synchronization between the .co.id and .id domains under PANDI's authority. This research uses a descriptive qualitative method, that is, a research method that provides explanation through analysis. In practice, this method is subjective in that the research process is more visible and tends to focus on the theoretical foundation, aiming to explain events occurring now and in the past. A domain is an address on the internet containing electronic data that can be accessed anywhere with an internet connection via the web; domains are needed to accelerate the development of information and communication and to support material and non-material transactions. Several issues concerning the supervision of Indonesian domains must be addressed: there is a need for firmer supervision by PANDI regarding synchronization between the .co.id and .id domains; the Government Regulation on Communication and Information No. 23 of 2013 concerning the management of Indonesian domain names needs to be harmonized with Law No. 20 of 2016 on Trademarks and Law No. 19 of 2016 amending Law No. 11 of 2008 on Information and Electronic Transactions; and Law No. 23 of 2013 concerning the Management of Indonesian Domain Names needs revision. New legal policies are needed that can provide a deterrent effect on cybercrime, specifically in the domain field. To avoid the many problems that may arise in the future concerning Indonesian domains, the measures above need to be implemented more quickly.
APA, Harvard, Vancouver, ISO, and other styles
2

Suneetha, Sampathirao, et al. "Cross-Domain Aspect Extraction using Adversarial Domain Adaptation." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11s (October 31, 2023): 672–82. http://dx.doi.org/10.17762/ijritcc.v11i11s.9658.

Full text
Abstract:
Aspect extraction, the task of identifying and categorizing aspects or features in text, plays a crucial role in sentiment analysis. However, aspect extraction models often struggle to generalize well across different domains due to domain-specific language patterns and variations. In order to tackle this challenge, we propose an approach called "Cross-Domain Aspect Extraction using Adversarial-Based Domain Adaptation". Our model combines the power of pre-trained language models, such as BERT, with adversarial training techniques to enable effective aspect extraction in diverse domains. The model learns to extract domain-invariant aspects by incorporating a domain discriminator, making it adaptable to different domains. We evaluate our model on datasets from multiple domains and demonstrate its effectiveness in achieving cross-domain aspect extraction. The results of our experiments reveal that our model outperforms baseline techniques, resulting in significant gains in aspect extraction across various domains. Our approach opens new possibilities for domain adaptation in aspect extraction tasks, providing valuable insights for sentiment analysis in diverse domains.
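The adversarial scheme this abstract describes (a domain discriminator whose gradient pushes the encoder toward domain-invariant features) is usually realized with a gradient reversal layer. The sketch below is purely illustrative, not the authors' code; the class and parameter names are invented, and a full model would sit between real forward/backward passes of an autograd framework.

```python
import numpy as np

class GradientReversal:
    """Illustrative gradient reversal layer: identity in the forward pass,
    gradient multiplied by -lam in the backward pass, so the feature
    extractor is trained to *confuse* the domain discriminator."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Features pass through unchanged on the way to the discriminator.
        return x

    def backward(self, grad_output):
        # The encoder receives the negated, scaled discriminator gradient,
        # which encourages domain-invariant representations.
        return -self.lam * grad_output
```

In a real training loop, `lam` is often annealed from 0 to 1 so the discriminator stabilizes before the adversarial signal is applied at full strength.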
APA, Harvard, Vancouver, ISO, and other styles
3

Xu, Minghao, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. "Adversarial Domain Adaptation with Domain Mixup." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6502–9. http://dx.doi.org/10.1609/aaai.v34i04.6123.

Full text
Abstract:
Recent works on domain adaptation reveal the effectiveness of adversarial learning in bridging the discrepancy between source and target domains. However, two common limitations exist in current adversarial-learning-based methods. First, samples from the two domains alone are not sufficient to ensure domain-invariance over most of the latent space. Second, the domain discriminator involved in these methods can only judge real or fake under the guidance of a hard label, while it is more reasonable to use soft scores to evaluate the generated images or features, i.e., to fully utilize the inter-domain information. In this paper, we present adversarial domain adaptation with domain mixup (DM-ADA), which guarantees domain-invariance in a more continuous latent space and guides the domain discriminator in judging samples' difference relative to the source and target domains. Domain mixup is jointly conducted at the pixel and feature levels to improve the robustness of the models. Extensive experiments prove that the proposed approach can achieve superior performance on tasks with various degrees of domain shift and data complexity.
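The core mixup operation in the abstract above, linearly blending a source and a target sample and reusing the mixing ratio as a soft domain label for the discriminator, can be sketched in a few lines. This is a minimal NumPy illustration under assumed conventions (function name, Beta-distribution parameterization), not the DM-ADA implementation:

```python
import numpy as np

def domain_mixup(x_src, x_tgt, alpha=2.0, rng=None):
    """Blend one source and one target sample.

    Returns the mixed sample and the mixing ratio lam, which doubles as a
    soft domain label for the discriminator (1.0 = pure source, 0.0 = pure
    target), replacing the hard real/fake label the abstract criticizes."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # lam in [0, 1]
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    return x_mix, lam
```

In DM-ADA this blending is applied at both the pixel level (raw images) and the feature level (encoder outputs); the sketch shows only the shared arithmetic.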
APA, Harvard, Vancouver, ISO, and other styles
4

Bhaskara, Ramachandra M., Alexandre G. de Brevern, and Narayanaswamy Srinivasan. "Understanding the role of domain–domain linkers in the spatial orientation of domains in multi-domain proteins." Journal of Biomolecular Structure and Dynamics 31, no. 12 (December 2013): 1467–80. http://dx.doi.org/10.1080/07391102.2012.743438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cao, Meng, and Songcan Chen. "Mixup-Induced Domain Extrapolation for Domain Generalization." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 10 (March 24, 2024): 11168–76. http://dx.doi.org/10.1609/aaai.v38i10.28994.

Full text
Abstract:
Domain generalization aims to learn a well-performing classifier on multiple source domains for unseen target domains under domain shift. Domain-invariant representation (DIR) is an intuitive approach and has been of great concern. In practice, since the targets are variant and agnostic, a few sources alone are not sufficient to reflect the entire domain population, leading to biased DIR. Derived from the PAC-Bayes framework, we provide a novel generalization bound involving the number of domains sampled from the environment (N) and the radius of the Wasserstein ball centred on the target (r), which have rarely been considered before. Herein, we obtain two natural and significant findings as N increases: 1) the gap between the source and target sampling environments can be gradually mitigated; 2) the target can be better approximated within the Wasserstein ball. These findings prompt us to collect adequate domains against domain shift. For convenience, we design a novel yet simple Extrapolation Domain strategy induced by the Mixup scheme, namely EDM. Through a reverse Mixup scheme that generates extrapolated domains, combined with the interpolated domains, we expand the interpolation space spanned by the sources, providing more abundant domains that increase sampling intersections and shorten r. Moreover, EDM is easy to implement and can be used in a plug-and-play fashion. In experiments, EDM has been plugged into several methods in both closed- and open-set settings, achieving up to 5.73% improvement.
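The "reverse Mixup" idea above amounts to choosing a mixing coefficient outside [0, 1], so the blend lies beyond the segment between two source domains rather than on it. A minimal NumPy sketch under illustrative assumptions (the function name and the specific coefficient choice are invented, not taken from EDM):

```python
import numpy as np

def extrapolate_domains(x_a, x_b, lam=1.5):
    """Reverse Mixup: with lam > 1 (or lam < 0), the combination
    lam * x_a + (1 - lam) * x_b lies *outside* the segment between the two
    source samples, yielding a pseudo-domain beyond the sources' span."""
    assert lam > 1.0 or lam < 0.0, "0 <= lam <= 1 would be plain interpolation"
    return lam * x_a + (1.0 - lam) * x_b
```

Combining such extrapolated pseudo-domains with ordinary interpolated ones enlarges the region the sources cover, which is how the method argues it shortens the Wasserstein radius r to an unseen target.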
APA, Harvard, Vancouver, ISO, and other styles
6

Zhou, Hongyi, Bin Xue, and Yaoqi Zhou. "DDOMAIN: Dividing structures into domains using a normalized domain-domain interaction profile." Protein Science 16, no. 5 (May 2007): 947–55. http://dx.doi.org/10.1110/ps.062597307.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hu, Chengyang, Ke-Yue Zhang, Taiping Yao, Shice Liu, Shouhong Ding, Xin Tan, and Lizhuang Ma. "Domain-Hallucinated Updating for Multi-Domain Face Anti-spoofing." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 3 (March 24, 2024): 2193–201. http://dx.doi.org/10.1609/aaai.v38i3.27992.

Full text
Abstract:
Multi-Domain Face Anti-Spoofing (MD-FAS) is a practical setting that aims to update models on new domains using only novel data while ensuring that the knowledge acquired from previous domains is not forgotten. Prior methods utilize the responses of models to represent previous domain knowledge, or map the different domains into separate feature spaces to prevent forgetting. However, due to domain gaps, the responses on new data are not as accurate as those on previous data. Also, without the supervision of previous data, the separate feature spaces might be destroyed by new domains during updating, leading to catastrophic forgetting. Motivated by the challenges posed by the lack of previous data, we address this issue from a new standpoint: generating hallucinated previous data for updating the FAS model. To this end, we propose a novel Domain-Hallucinated Updating (DHU) framework to facilitate the hallucination of data. Specifically, a Domain Information Explorer learns representative domain information of the previous domains. Then, a Domain Information Hallucination module transfers the new domain data into pseudo-previous domain data. Moreover, a Hallucinated Features Joint Learning module is proposed to asymmetrically align the new and pseudo-previous data for real samples at dual levels to learn more generalized features, improving the results on all domains. Our experimental results and visualizations demonstrate that the proposed method outperforms state-of-the-art competitors in terms of effectiveness.
APA, Harvard, Vancouver, ISO, and other styles
8

López-Huertas, María J. "Domain Analysis for Interdisciplinary Knowledge Domains." KNOWLEDGE ORGANIZATION 42, no. 8 (2015): 570–80. http://dx.doi.org/10.5771/0943-7444-2015-8-570.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Anderson, D. D., J. Coykendall, L. Hill, and M. Zafrullah. "Monoid Domain Constructions of Antimatter Domains." Communications in Algebra 35, no. 10 (September 21, 2007): 3236–41. http://dx.doi.org/10.1080/00914030701410294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Matzen, Sylvia, and Stéphane Fusil. "Domains and domain walls in multiferroics." Comptes Rendus Physique 16, no. 2 (March 2015): 227–40. http://dx.doi.org/10.1016/j.crhy.2015.01.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Domain"

1

Hamrin, Göran. "Effective Domains and Admissible Domain Representations." Doctoral thesis, Uppsala University, Department of Mathematics, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5883.

Full text
Abstract:

This thesis consists of four papers in domain theory and a summary. The first two papers deal with the problem of defining effectivity for continuous cpos. The third and fourth papers present the new notion of an admissible domain representation, where a domain representation D of a space X is λ-admissible if, in principle, all other λ-based domain representations E of X can be reduced to D via a continuous function from E to D.

In Paper I we define a cartesian closed category of effective bifinite domains. We also investigate the method of inducing effectivity onto continuous cpos via projection pairs, resulting in a cartesian closed category of projections of effective bifinite domains.

In Paper II we introduce the notion of an almost algebraic basis for a continuous cpo, showing that there is a natural cartesian closed category of effective consistently complete continuous cpos with almost algebraic bases. We also generalise the notion of a complete set, used in Paper I to define the bifinite domains, and investigate what closure results can be obtained.

In Paper III we consider admissible domain representations of topological spaces. We present a characterisation theorem of exactly when a topological space has a λ-admissible and κ-based domain representation. We also show that there is a natural cartesian closed category of countably based and countably admissible domain representations.

In Paper IV we consider admissible domain representations of convergence spaces, where a convergence space is a set X together with a convergence relation between nets on X and elements of X. We study in particular the new notion of weak κ-convergence spaces, which roughly means that the convergence relation satisfies a generalisation of the Kuratowski limit space axioms to cardinality κ. We show that the category of weak κ-convergence spaces is cartesian closed. We also show that the category of weak κ-convergence spaces that have a dense, λ-admissible, κ-continuous and α-based consistently complete domain representation is cartesian closed when α ≤ λ and κ ≤ λ. As natural corollaries we obtain corresponding results for the associated category of weak convergence spaces.

APA, Harvard, Vancouver, ISO, and other styles
2

Hamrin, Göran. "Effective domains and admissible domain representations /." Uppsala : Department of Mathematics, Uppsala University [distributör], 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-5883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kucheruk, Liliya. "Modern English Legal Terminology : linguistic and cognitive aspects." Thesis, Bordeaux 3, 2013. http://www.theses.fr/2013BOR30016/document.

Full text
Abstract:
The present doctoral dissertation entitled “Modern English Legal Terminology: linguistic and cognitive aspects” investigates the contemporary legal idiom from a cognitive linguistics perspective. The aim of this study is to map out the peculiarities of English legal terminology and develop principles of systematization, within the framework of conceptual metaphor theory. This means 1) determining the basic concepts used metaphorically in English legal language, and 2) establishing the main cross-domain mappings and correlations between separate items within concrete domains. The Corpus of Legal English (COLE) was set up and a quantitative analysis performed, in which metaphorical expressions related to legal terminology were searched for and classified on the basis of meanings, conceptual domains and mappings. Thus, the conceptual metaphors of WAR, MEDICINE, SPORT and CONSTRUCTION were found to be the most numerous and valuable in Legal English. The main cross-domain mappings between these source domains and the target domain of LAW were established. In order to carry out this data-driven study, 156 legal texts were selected and compiled into the Corpus of Legal English (COLE). The source texts represent various thematic categories. The COLE was systematically used to interpret frequency counts from the point of view of conceptual metaphor theory.
APA, Harvard, Vancouver, ISO, and other styles
4

Comitz, Paul H. "A Domain-Specific Language for Aviation Domain Interoperability." NSUWorks, 2013. http://nsuworks.nova.edu/gscis_etd/122.

Full text
Abstract:
Modern information systems require a flexible, scalable, and upgradeable infrastructure that allows communication and collaboration between heterogeneous information processing and computing environments. Aviation systems from different organizations often use differing representations and distribution policies for the same data and messages, limiting interoperability and collaboration. Although this problem is conceptually straightforward, information exchange is error prone, often dramatically underestimated, and unexpectedly complex. In the air traffic domain, complexity is often the result of the many different uncoordinated information processing environments that are used. The complexity and variation in information processing environments results in a barrier between domain practitioners and the engineers that build the information systems. These divisions have contributed to development challenges on high profile systems such as the FAA's Advanced Automation System and the FBI's Virtual Case File. Operationally, difficulties in sharing information have contributed to significant coordination challenges between organizations. These coordination problems are evident in events such as the response to Hurricane Katrina, the October 2009 Northwest Airlines flight that overflew its scheduled destination by more than 100 miles, and other incidents requiring coordination between multiple organizations. To address interoperability in the aviation domain, a prototype Domain-Specific Language (DSL) for aviation data, an aviation metadata repository, and a data generation capability was designed and implemented. These elements provide the capability to specify and generate data for use in the aviation domain. The DSL was designed to allow the domain practitioner to participate in dynamic information exchange without being burdened by the complexities of information technology and organizational policy. 
The DSL provides the capability to specify and generate information system usable representations of aviation data. Data is generated according to the representational details stored in the aviation metadata repository. The combination of DSL, aviation metadata repository, and data generation provide the capability for aviation systems to interoperate, enabling collaboration, information sharing, and coordination.
APA, Harvard, Vancouver, ISO, and other styles
5

Sankaran, Krishnaswamy. "Accurate domain truncation techniques for time-domain conformal methods /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17447.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Ding, Ziwei. "Domain functions and domain interactions of CTP, phosphocholine cytidylyltransferase." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0023/MQ51332.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

El, Boukkouri Hicham. "Domain adaptation of word embeddings through the exploitation of in-domain corpora and knowledge bases." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG086.

Full text
Abstract:
There are, at the basis of most NLP systems, numerical representations that enable the machine to process, interact with and—to some extent—understand human language. These “word embeddings” come in different flavours but can be generally categorised into two distinct groups: on one hand, static embeddings that learn and assign a single definitive representation to each word; and on the other, contextual embeddings that instead learn to generate word representations on the fly, according to a current context. In both cases, training these models requires a large amount of texts. This often leads NLP practitioners to compile and merge texts from multiple sources, often mixing different styles and domains (e.g. encyclopaedias, news articles, scientific articles, etc.) in order to produce corpora that are sufficiently large for training good representations. These so-called “general domain” corpora are today the basis on which most word embeddings are trained, greatly limiting their use in more specific areas. In fact, “specialized domains” like the medical domain usually manifest enough lexical, semantic and stylistic idiosyncrasies (e.g. use of acronyms and technical terms) that general-purpose word embeddings are unable to effectively encode out-of-the-box. In this thesis, we explore how different kinds of resources may be leveraged to train domain-specific representations or further specialise preexisting ones. Specifically, we first investigate how in-domain corpora can be used for this purpose. In particular, we show that both corpus size and domain similarity play an important role in this process and propose a way to leverage a small corpus from the target domain to achieve improved results in low-resource settings. Then, we address the case of BERT-like models and observe that the general-domain vocabularies of these models may not be suited for specialized domains. 
However, we show evidence that models trained using such vocabularies can be on par with fully specialized systems using in-domain vocabularies—which leads us to accept re-training general domain models as an effective approach for constructing domain-specific systems. We also propose CharacterBERT, a variant of BERT that is able to produce word-level open-vocabulary representations by consulting a word's characters. We show evidence that this architecture leads to improved performance in the medical domain while being more robust to misspellings. Finally, we investigate how external resources in the form of knowledge bases may be leveraged to specialise existing representations. In this context, we propose a simple approach that consists in constructing dense representations of these knowledge bases then combining these knowledge vectors with the target word embeddings. We generalise this approach and propose Knowledge Injection Modules, small neural layers that incorporate external representations into the hidden states of a Transformer-based model. Overall, we show that these approaches can lead to improved results, however, we intuit that this final performance ultimately depends on whether the knowledge that is relevant to the target task is available in the input resource. All in all, our work shows evidence that both in-domain corpora and knowledge may be used to construct better word embeddings for specialized domains. In order to facilitate future research on similar topics, we open-source our code and share pre-trained models whenever appropriate
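The final step the abstract describes, combining dense knowledge-base vectors with target word embeddings, admits many instantiations; the thesis does not specify one here. As a deliberately simple, hypothetical illustration (function name, normalization, and convex-combination scheme are all assumptions, not the thesis's method):

```python
import numpy as np

def inject_knowledge(word_vecs, knowledge_vecs, weight=0.5):
    """Illustrative combination of word embeddings with 'knowledge vectors'.

    Both matrices are (n_words, dim). Each space is L2-normalised so neither
    dominates by scale, then the rows are blended with a fixed weight."""
    def l2norm(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    return weight * l2norm(word_vecs) + (1.0 - weight) * l2norm(knowledge_vecs)
```

The Knowledge Injection Modules mentioned in the abstract go further, learning small neural layers that merge external representations into a Transformer's hidden states rather than using a fixed blend like this one.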
APA, Harvard, Vancouver, ISO, and other styles
8

Hitchins, Matthew G. "Domain Disparity: Informing the Debate between Domain-General and Domain-Specific Information Processing in Working Memory." Thesis, The George Washington University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10607221.

Full text
Abstract:

Working memory is a collection of cognitive resources that allow for the temporary maintenance and manipulation of information. This information can then be used to accomplish task goals in a variety of different contexts. To do this, the working memory system is able to process many different kinds of information using resources dedicated to the processing of those specific types of information. This processing is modulated by a control component which is responsible for guiding actions in the face of interference. Recently, the way in which working memory handles the processing of this information has been the subject of debate. Specifically, current models of working memory differ in their conceptualization of its functional architecture and the interaction between domain-specific storage structures and domain-general control processes. Here, domain-specific processing is when certain components of a model are dedicated to processing certain kinds of information, whether spatial or verbal. Domain-general processing is when a component of a model can process multiple kinds of information. One approach conceptualizes working memory as consisting of various discrete components that are dedicated to processing specific kinds of information. These multiple component models attempt to explain how domain-specific storage structures are coordinated by a domain-general control mechanism. They also predict that capacity variations in those domain-specific storage structures can directly affect the performance of the domain-general control mechanism. Another approach focuses primarily on the contributions of a domain-general control mechanism to behavior. These controlled attention approaches collapse working memory and attention and propose that a domain-general control mechanism is the primary source of individual differences.
This means that variations in domain-specific storage structures are not predicted to affect the functioning of the domain-general control mechanism. This dissertation will make the argument that conceptualizing working memory as either domain-specific or domain-general creates a false dichotomy. To do this, different ways of measuring working memory capacity will first be discussed. That discussion will serve as a basis for understanding the differences, and similarities between both models. A more detailed exposition of both the multiple component model and controlled attention account will follow. Behavioral and physiological evidence will accompany the descriptions of both models. The emphasis of the evidence presented here will be on load effects: observed changes in task performance when information is maintained in working memory. Load effects can be specific to the type of information being maintained (domain-specific), or occur regardless of information type (domain-general). This dissertation will demonstrate how the two models fail to address evidence for both domain-specific and domain-general load effects. Given these inadequacies, a new set of experiments will be proposed that will seek to demonstrate both domain-specific and domain-general effects within the same paradigm. Being able to demonstrate both these effects will go some way towards accounting for the differing evidence presented in the literature. A brief conceptualization of a possible account to explain these effects will then be discussed. Finally, future directions for research will be described.

APA, Harvard, Vancouver, ISO, and other styles
9

Scheuffgen, Kristina. "Domain-general and domain-specific deficits in autism and dyslexia." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gale, Andrew J. (Andrew John). "Protein-RNA domain-domain interactions in a tRNA synthetase system." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/39369.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Domain"

1

McBryde, Ian. Domain. [Wollongong, N.S.W.]: Five Islands Press, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Herbert, James. Domain. London: Book Club Associates, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Herbert, James. Domain. New York: New American Library, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kangueane, Pandjassarame, and Christina Nilofer. Protein-Protein and Domain-Domain Interactions. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7347-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Iljoong, Hojun Lee, and Ilya Somin, eds. Eminent Domain. Cambridge: Cambridge University Press, 2015. http://dx.doi.org/10.1017/9781316822685.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Reinhartz-Berger, Iris, Arnon Sturm, Tony Clark, Sholom Cohen, and Jorn Bettin, eds. Domain Engineering. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Books, Ace, and Copyright Paperback Collection (Library of Congress), eds. Dragon's domain. New York: Ace Books, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Andrade, Eugenio de. Dark domain. Toronto: Guernica, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shipley, Krista, translator, and Ludwig Sacramento, eds. Species domain. Los Angeles, CA: Seven Seas Entertainment, LLC, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Matthews, Alex. Death's domain. Toronto: Worldwide, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Domain"

1

Kangueane, Pandjassarame, and Christina Nilofer. "Domain-Domain Interactions." In Protein-Protein and Domain-Domain Interactions, 143–46. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-7347-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gooch, Jan W. "Domain." In Encyclopedic Dictionary of Polymers, 239. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_3927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gooch, Jan W. "Domain." In Encyclopedic Dictionary of Polymers, 888. New York, NY: Springer New York, 2011. http://dx.doi.org/10.1007/978-1-4419-6247-8_13593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Weik, Martin H. "domain." In Computer Science and Communications Dictionary, 453. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/1-4020-0613-6_5500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hubaux, Arnaud, Mathieu Acher, Thein Than Tun, Patrick Heymans, Philippe Collet, and Philippe Lahire. "Separating Concerns in Feature Models: Retrospective and Support for Multi-Views." In Domain Engineering, 3–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Koshima, Amanuel Alemayehu, Vincent Englebert, and Philippe Thiran. "A Reconciliation Framework to Support Cooperative Work with DSM." In Domain Engineering, 239–59. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bettin, Jorn. "Model Oriented Domain Analysis and Engineering." In Domain Engineering, 263–90. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Henderson-Sellers, Brian, and Cesar Gonzalez-Perez. "Multi-Level Meta-Modelling to Underpin the Abstract and Concrete Syntax for Domain-Specific Modelling Languages." In Domain Engineering, 291–316. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guizzardi, Giancarlo. "Ontology-Based Evaluation and Design of Visual Conceptual Modeling Languages." In Domain Engineering, 317–47. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Pastor, Oscar, Giovanni Giachetti, Beatriz Marín, and Francisco Valverde. "Automating the Interoperability of Conceptual Models in Specific Development Domains." In Domain Engineering, 349–73. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-36654-3_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Domain"

1

Ćiprijanović, Aleksandra, Diana Kafkes, Sydney Jenkins, K. Downey, Gabriel Perdue, S. Madireddy, T. Johnston, and Brian Nord. "Domain Adaptation for Cross-Domain Studies of Merging Galaxies." In Domain Adaptation for Cross-Domain Studies of Merging Galaxies. US DOE, 2021. http://dx.doi.org/10.2172/1825309.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ćiprijanović, Aleksandra. "Domain Adaptation for Cross-Domain Studies in Astronomy: Merging Galaxies Identification." In Domain Adaptation for Cross-Domain Studies in Astronomy: Merging Galaxies Identification. US DOE, 2021. http://dx.doi.org/10.2172/1827857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sun, Zhishu, Zhifeng Shen, Luojun Lin, Yuanlong Yu, Zhifeng Yang, Shicai Yang, and Weijie Chen. "Dynamic Domain Generalization." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/187.

Full text
Abstract:
Domain generalization (DG) is a fundamental yet very challenging research topic in machine learning. Existing approaches mainly focus on learning domain-invariant features with limited source domains in a static model. Unfortunately, there is no training-free mechanism to adjust the model when it is generalized to agnostic target domains. To tackle this problem, we develop a brand-new DG variant, namely Dynamic Domain Generalization (DDG), in which the model learns to twist the network parameters to adapt to the data from different domains. Specifically, we leverage a meta-adjuster to twist the network parameters based on the static model with respect to different data from different domains. In this way, the static model is optimized to learn domain-shared features, while the meta-adjuster is designed to learn domain-specific features. To enable this process, DomainMix is exploited to simulate data from diverse domains while teaching the meta-adjuster to adapt to the agnostic target domains. This learning mechanism urges the model to generalize to different agnostic target domains by adjusting the model without training. Extensive experiments demonstrate the effectiveness of our proposed method. Code is available: https://github.com/MetaVisionLab/DDG
APA, Harvard, Vancouver, ISO, and other styles
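The meta-adjuster mechanism described in the abstract above can be sketched roughly as follows. This is a minimal toy illustration, assuming a 1-D linear model and a mean-based domain descriptor; the function names and the tanh-bounded offset are illustrative assumptions, not the paper's implementation.

```python
import math

# A frozen "static" model captures domain-shared behaviour; the meta-adjuster
# derives a domain-specific weight offset from statistics of the incoming
# batch at inference time -- no gradient step, hence "training-free".

STATIC_W, STATIC_B = 0.8, 0.1   # domain-shared parameters (frozen)

def meta_adjuster(batch):
    """Map a batch-level domain descriptor (here, the mean input) to a
    small, bounded tweak of the static weight."""
    descriptor = sum(batch) / len(batch)
    return math.tanh(descriptor) * 0.1

def predict(batch):
    delta_w = meta_adjuster(batch)   # twist parameters per domain
    w = STATIC_W + delta_w
    return [w * x + STATIC_B for x in batch]
```

The same input value receives a different prediction depending on the batch it arrives with, which is the sense in which the network "adapts" to an agnostic domain without any further training.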
4

Liu, Yingnan, Yingtian Zou, Rui Qiao, Fusheng Liu, Mong Li Lee, and Wynne Hsu. "Cross-Domain Feature Augmentation for Domain Generalization." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. California: International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/127.

Full text
Abstract:
Domain generalization aims to develop models that are robust to distribution shifts. Existing methods focus on learning invariance across domains to enhance model robustness, and data augmentation has been widely used to learn invariant predictors, with most methods performing augmentation in the input space. However, augmentation in the input space has limited diversity, whereas augmentation in the feature space is more versatile and has shown promising results. Nonetheless, feature semantics is seldom considered and existing feature augmentation methods suffer from a limited variety of augmented features. We decompose features into class-generic, class-specific, domain-generic, and domain-specific components. We propose a cross-domain feature augmentation method named XDomainMix that enables us to increase sample diversity while emphasizing the learning of invariant representations to achieve domain generalization. Experiments on widely used benchmark datasets demonstrate that our proposed method is able to achieve state-of-the-art performance. Quantitative analysis indicates that our feature augmentation approach facilitates the learning of effective models that are invariant across different domains.
APA, Harvard, Vancouver, ISO, and other styles
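The four-way feature decomposition in the abstract above lends itself to a simple sketch: an augmented feature keeps the class semantics of one sample while borrowing the domain components of another. The dict-based decomposition below is an assumption for exposition only; the paper learns these components, it does not receive them pre-split.

```python
# Cross-domain mix: class content from feat_a, domain style from feat_b.
def augment(feat_a, feat_b):
    return {
        "class_generic":   feat_a["class_generic"],
        "class_specific":  feat_a["class_specific"],
        "domain_generic":  feat_b["domain_generic"],
        "domain_specific": feat_b["domain_specific"],
    }

def to_vector(feat):
    """Recompose the four parts into one feature value per dimension."""
    return [sum(parts) for parts in zip(feat["class_generic"],
                                        feat["class_specific"],
                                        feat["domain_generic"],
                                        feat["domain_specific"])]
```

Training on such mixed features pushes the predictor to rely on the class components, since the domain components vary freely across augmented samples.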
5

Chen, Xiang, Lei Li, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, and Huajun Chen. "One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER." In Thirty-Second International Joint Conference on Artificial Intelligence {IJCAI-23}. California: International Joint Conferences on Artificial Intelligence Organization, 2023. http://dx.doi.org/10.24963/ijcai.2023/559.

Full text
Abstract:
Cross-domain NER is a challenging task to address the low-resource problem in practical scenarios. Previous typical solutions mainly obtain a NER model by pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain. Owing to the mismatch issue among entity types in different domains, previous approaches normally tune all parameters of PLMs, ending up with an entirely new NER model for each domain. Moreover, current models only focus on leveraging knowledge in one general source domain while failing to successfully transfer knowledge from multiple sources to the target. To address these issues, we introduce Collaborative Domain-Prefix Tuning for cross-domain NER (CP-NER) based on text-to-text generative PLMs. Specifically, we present text-to-text generation grounding domain-related instructors to transfer knowledge to new domain NER tasks without structural modifications. We utilize frozen PLMs and conduct collaborative domain-prefix tuning to stimulate the potential of PLMs to handle NER tasks across various domains. Experimental results on the Cross-NER benchmark show that the proposed approach has flexible transfer ability and performs better on both one-source and multiple-source cross-domain NER tasks.
APA, Harvard, Vancouver, ISO, and other styles
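The frozen-PLM-plus-prefix idea in the abstract above can be caricatured in a few lines: only a small per-domain prefix is prepended to the input, and prefixes from several source domains can be combined for a new target domain. Everything below is a stand-in toy (the "PLM" is a deterministic scorer, the prefixes are literal tokens), assumed purely for illustration.

```python
# Stand-in for a frozen PLM: deterministic scores over the token sequence.
def frozen_plm(tokens):
    return [len(t) % 3 for t in tokens]

# One learned prefix per source domain (here, literal placeholder tokens).
PREFIXES = {"news": ["<news>"], "music": ["<music>"]}

def tag_with_prefix(domain_weights, tokens):
    """Prepend a collaborative prefix built from the active source domains,
    run the frozen model, and keep only the scores for the real tokens."""
    prefix = [p for d, w in domain_weights.items() if w > 0 for p in PREFIXES[d]]
    scores = frozen_plm(prefix + tokens)
    return scores[len(prefix):]
```

The point of the design is that the backbone never changes: adapting to a new domain means learning (or combining) prefixes, not fine-tuning all PLM parameters.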
6

Cai, Yitao, and Xiaojun Wan. "Multi-Domain Sentiment Classification Based on Domain-Aware Embedding and Attention." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/681.

Full text
Abstract:
Sentiment classification is a fundamental task in NLP. However, as many studies have revealed, sentiment classification models are highly domain-dependent. It is worth investigating how to leverage data from different domains to improve the classification performance in each domain. In this work, we propose a novel completely-shared multi-domain neural sentiment classification model to learn domain-aware word embeddings and make use of a domain-aware attention mechanism. Our model first utilizes BiLSTM for domain classification and extracts domain-specific features for words, which are then combined with general word embeddings to form domain-aware word embeddings. Domain-aware word embeddings are fed into another BiLSTM to extract sentence features. The domain-aware attention mechanism is used for selecting significant features, by using the domain-aware sentence representation as the query vector. Evaluation results on public datasets with 16 different domains demonstrate the efficacy of our proposed model. Further experiments show the generalization ability and the transferability of our model.
APA, Harvard, Vancouver, ISO, and other styles
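The domain-aware attention step described in the abstract above is standard dot-product attention with the domain-aware sentence representation as the query; a minimal sketch follows. The dimensions and the plain dot-product scoring rule are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def domain_aware_attention(word_feats, domain_query):
    """Pool word features using attention scores against the domain query."""
    scores = softmax([sum(w * q for w, q in zip(f, domain_query))
                      for f in word_feats])
    dim = len(word_feats[0])
    return [sum(a * f[i] for a, f in zip(scores, word_feats))
            for i in range(dim)]
```

Words whose features align with the domain query dominate the pooled sentence representation, which is how the model "selects significant features" per domain.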
7

Mancini, Massimiliano, Lorenzo Porzi, Samuel Rota Bulo, Barbara Caputo, and Elisa Ricci. "Boosting Domain Adaptation by Discovering Latent Domains." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2018. http://dx.doi.org/10.1109/cvpr.2018.00397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Utzmann, Jens, and Claus-Dieter Munz. "Domain Decompositions for CAA in Complex Domains." In 13th AIAA/CEAS Aeroacoustics Conference (28th AIAA Aeroacoustics Conference). Reston, Virginia: American Institute of Aeronautics and Astronautics, 2007. http://dx.doi.org/10.2514/6.2007-3488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Ayub, Md Ahsan, Steven Smith, Ambareen Siraj, and Paul Tinker. "Domain Generating Algorithm based Malicious Domains Detection." In 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). IEEE, 2021. http://dx.doi.org/10.1109/cscloud-edgecom52276.2021.00024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhe, Yu, Kazuto Fukuchi, Youhei Akimoto, and Jun Sakuma. "Domain Generalization Via Adversarially Learned Novel Domains." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9860025.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Domain"

1

Wodicka, N., H. M. Steenkamp, T. Peterson, I. Therriault, J. B. Whalen, V. Tschirhart, C. J. M. Lawley, et al. An overview of Archean and Proterozoic history of the Tehery Lake-Wager Bay area, central Rae Craton, Nunavut. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/332501.

Full text
Abstract:
This short contribution describes the Archean and Proterozoic history of the central Rae Craton in the Tehery Lake-Wager Bay area, Nunavut. The study area comprises six lithotectonic domains separated by large-scale structures: the Gordon Domain, Lunan Domain, Daly Bay complex, Douglas Harbour Domain, Kummel Lake Domain, and Ukkusiksalik Domain. These domains can be differentiated on the basis of metamorphic assemblages, Nd model and U-Pb ages, absence or presence of specific lithologies, and/or geophysical characteristics. Links between these domains and neighbouring areas of the central Rae Craton, the timing of assembly of domains and terranes, and the effects of the Snowbird and Trans-Hudson orogenies are briefly described.
APA, Harvard, Vancouver, ISO, and other styles
2

Klenk, Matthew, and Ken Forbus. Cross Domain Analogies for Learning Domain Theories. Fort Belvoir, VA: Defense Technical Information Center, January 2007. http://dx.doi.org/10.21236/ada471251.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Lutz, Carsten. Interval-based Temporal Reasoning with General TBoxes. Aachen University of Technology, 2000. http://dx.doi.org/10.25368/2022.109.

Full text
Abstract:
From the motivation: Description Logics (DLs) are a family of formalisms well-suited for the representation of and reasoning about knowledge. Whereas most Description Logics represent only static aspects of the application domain, recent research resulted in the exploration of various Description Logics that additionally allow the representation of temporal information; see [4] for an overview. The approaches to integrating time differ in at least two important aspects: First, the basic temporal entity may be a time point or a time interval. Second, the temporal structure may be part of the semantics (yielding a multi-dimensional semantics) or it may be integrated as a so-called concrete domain. Examples of multi-dimensional point-based logics can be found in, e.g., [21;29], while multi-dimensional interval-based logics are used in, e.g., [23;2]. The concrete domain approach needs some more explanation. Concrete domains have been proposed by Baader and Hanschke as an extension of Description Logics that allows reasoning about 'concrete qualities' of the entities of the application domain, such as sizes, lengths, or weights of real-world objects [5]. Description Logics with concrete domains do usually not use a fixed concrete domain; instead the concrete domain can be thought of as a parameter to the logic. As was first described in [16], if a 'temporal' concrete domain is employed, then concrete domains may be point-based, interval-based, or both.
APA, Harvard, Vancouver, ISO, and other styles
4

Christon, Mark Allen. The Rendezvous Algorithm for Domain-to-Domain Data Transfers. Office of Scientific and Technical Information (OSTI), February 2015. http://dx.doi.org/10.2172/1169154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stahl, M. K. Domain administrators guide. RFC Editor, November 1987. http://dx.doi.org/10.17487/rfc1032.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cooper, A., and J. Postel. The US Domain. RFC Editor, December 1992. http://dx.doi.org/10.17487/rfc1386.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cooper, A., and J. Postel. The US Domain. RFC Editor, June 1993. http://dx.doi.org/10.17487/rfc1480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cross, L. E. Ferroelectric Domain Studies. Fort Belvoir, VA: Defense Technical Information Center, December 2001. http://dx.doi.org/10.21236/ada413114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Black, Alan W., and Kevin A. Lenzo. Limited Domain Synthesis. Fort Belvoir, VA: Defense Technical Information Center, January 2000. http://dx.doi.org/10.21236/ada461150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Friedlander, Benjamin, and J. O. Smith. Time Domain Algorithms. Fort Belvoir, VA: Defense Technical Information Center, September 1985. http://dx.doi.org/10.21236/ada163054.

Full text
APA, Harvard, Vancouver, ISO, and other styles